Sample records for "sun computers running"

  1. Program For Generating Interactive Displays

    NASA Technical Reports Server (NTRS)

    Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl

    1991-01-01

    Sun/Unix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. TAE Plus viewed as productivity tool for application developers and application end users, who benefit from resultant consistent and well-designed user interface sheltering them from intricacies of computer. Available in form suitable for following six different groups of computers: DEC VAXstation and other VMS VAX computers, Macintosh II computers running A/UX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running SunOS and IBM RT/PC and PS/2 computers running AIX, and HP 9000 series computers running HP-UX.

  2. Another Program For Generating Interactive Graphics

    NASA Technical Reports Server (NTRS)

    Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl

    1991-01-01

    VAX/Ultrix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. When used throughout company for wide range of applications, makes both application program and computer seem transparent, with noticeable improvements in learning curve. Available in form suitable for following six different groups of computers: DEC VAXstation and other VMS VAX computers, Macintosh II computers running A/UX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running SunOS and IBM RT/PCs and PS/2 computers running AIX, and HP 9000 series computers running HP-UX.

  3. SSL - THE SIMPLE SOCKETS LIBRARY

    NASA Technical Reports Server (NTRS)

    Campbell, C. E.

    1994-01-01

    The Simple Sockets Library (SSL) allows C programmers to develop systems of cooperating programs using Berkeley streaming Sockets running under the TCP/IP protocol over Ethernet. The SSL provides a simple way to move information between programs running on the same or different machines and does so with little overhead. The SSL can create three types of Sockets: namely a server, a client, and an accept Socket. The SSL's Sockets are designed to be used in a fashion reminiscent of the use of FILE pointers so that a C programmer who is familiar with reading and writing files will immediately feel comfortable with reading and writing with Sockets. The SSL consists of three parts: the library, PortMaster, and utilities. The user of the SSL accesses it by linking programs to the SSL library. The PortMaster initializes connections between clients and servers. The PortMaster also supports a "firewall" facility to keep out socket requests from unapproved machines. The "firewall" is a file which contains Internet addresses for all approved machines. There are three utilities provided with the SSL. SKTDBG can be used to debug programs that make use of the SSL. SPMTABLE lists the servers and port numbers on requested machine(s). SRMSRVR tells the PortMaster to forcibly remove a server name from its list. The package also includes two example programs: multiskt.c, which makes multiple accepts on one server, and sktpoll.c, which repeatedly attempts to connect a client to some server at one-second intervals. SSL is a machine independent library written in the C-language for computers connected via Ethernet using the TCP/IP protocol. It has been successfully compiled and implemented on a variety of platforms, including Sun series computers running SunOS, DEC VAX series computers running VMS, SGI computers running IRIX, DECstations running ULTRIX, DEC Alpha AXPs running OSF/1, IBM RS/6000 computers running AIX, IBM PC and compatibles running BSD/386 UNIX and HP Apollo 3000/4000/9000/400T computers running HP-UX. SSL requires 45K of RAM to run under SunOS and 80K of RAM to run under VMS. For use on IBM PC series computers and compatibles running DOS, SSL requires Microsoft C 6.0 and the Wollongong TCP/IP package. Source code for sample programs and debugging tools is provided. The documentation is available on the distribution medium in TeX and PostScript formats. The standard distribution medium for SSL is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format and a 5.25 inch 360K MS-DOS format diskette. The SSL was developed in 1992 and was updated in 1993.
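
    Since the abstract describes using Sockets in a fashion reminiscent of FILE pointers, the following minimal C sketch shows that style with the plain Berkeley sockets API. The SSL's own function names are not given in the abstract, so none are used here; the port and address are placeholders.

    /* FILE-pointer style of socket use: connect with Berkeley sockets,
     * then wrap the descriptor so reads look like ordinary file I/O. */
    #include <stdio.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);   /* TCP stream socket */
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(7000);              /* placeholder port */
        addr.sin_addr.s_addr = inet_addr("127.0.0.1");

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            return 1;
        }

        /* Wrap the descriptor in a FILE* and read lines as from a file */
        FILE *in = fdopen(fd, "r");
        char line[256];
        while (fgets(line, sizeof line, in) != NULL)
            fputs(line, stdout);

        fclose(in);                                 /* also closes fd */
        return 0;
    }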

  4. ELM - A SIMPLE TOOL FOR THERMAL-HYDRAULIC ANALYSIS OF SOLID-CORE NUCLEAR ROCKET FUEL ELEMENTS

    NASA Technical Reports Server (NTRS)

    Walton, J. T.

    1994-01-01

    ELM is a simple computational tool for modeling the steady-state thermal-hydraulics of propellant flow through fuel element coolant channels in nuclear thermal rockets. Written for the nuclear propulsion project of the Space Exploration Initiative, ELM evaluates the various heat transfer coefficient and friction factor correlations available for turbulent pipe flow with heat addition. In the past, these correlations were found in different reactor analysis codes, but now comparisons are possible within one program. The logic of ELM is based on the one-dimensional conservation of energy in combination with Newton's Law of Cooling to determine the bulk flow temperature and the wall temperature across a control volume. Since the control volume is an incremental length of tube, the corresponding pressure drop is determined by application of the Law of Conservation of Momentum. The size, speed, and accuracy of ELM make it a simple tool for use in fuel element parametric studies. ELM is a machine independent program written in FORTRAN 77. It has been successfully compiled on an IBM PC compatible running MS-DOS using Lahey FORTRAN 77, a DEC VAX series computer running VMS, and a Sun4 series computer running SunOS UNIX. ELM requires 565K of RAM under SunOS 4.1, 360K of RAM under VMS 5.4, and 406K of RAM under MS-DOS. Because this program is machine independent, no executable is provided on the distribution media. The standard distribution medium for ELM is one 5.25 inch 360K MS-DOS format diskette. ELM was developed in 1991. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. Sun4 and SunOS are trademarks of Sun Microsystems, Inc. IBM PC is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation.
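
    As a rough illustration of the marching scheme the abstract describes (energy balance for bulk temperature, Newton's Law of Cooling for wall temperature, momentum balance for pressure drop), here is a minimal C sketch. All correlations and numerical values below are placeholder assumptions, not ELM's actual models.

    /* Toy control-volume march along one coolant channel. */
    #include <stdio.h>

    int main(void)
    {
        const double PI    = 3.14159265358979;
        const int    n     = 100;      /* control volumes along channel */
        const double L     = 1.0;      /* channel length, m (assumed) */
        const double D     = 0.003;    /* channel diameter, m (assumed) */
        const double mdot  = 0.001;    /* mass flow rate, kg/s (assumed) */
        const double cp    = 15000.0;  /* cp of hydrogen, J/(kg K) (approx.) */
        const double qflux = 5.0e6;    /* wall heat flux, W/m^2 (assumed) */
        const double h     = 2.0e4;    /* heat transfer coeff., W/(m^2 K) (assumed) */
        const double f     = 0.02;     /* Darcy friction factor (assumed) */
        const double rho   = 1.0;      /* propellant density, kg/m^3 (assumed) */

        double dx   = L / n;
        double wall = PI * D * dx;                  /* heated area per CV */
        double v    = mdot / (rho * PI * D * D / 4.0);
        double Tb = 100.0, P = 5.0e6;               /* inlet state (assumed) */

        for (int i = 0; i < n; i++) {
            double q = qflux * wall;                /* heat added in this CV, W */
            Tb += q / (mdot * cp);                  /* energy: dT = Q/(mdot*cp) */
            double Tw = Tb + qflux / h;             /* Newton's Law of Cooling */
            P -= f * (dx / D) * 0.5 * rho * v * v;  /* momentum: Darcy drop */
            if (i % 20 == 19)
                printf("x=%5.3f m  Tb=%7.1f K  Tw=%7.1f K  P=%10.0f Pa\n",
                       (i + 1) * dx, Tb, Tw, P);
        }
        return 0;
    }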

  5. Programs To Optimize Spacecraft And Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Petersen, F. M.; Cornick, D.E.; Stevenson, R.; Olson, D. W.

    1994-01-01

    POST/6D POST is set of two computer programs providing ability to target and optimize trajectories of powered or unpowered spacecraft or aircraft operating at or near rotating planet. POST treats point-mass, three-degree-of-freedom case. 6D POST treats more-general rigid-body, six-degree-of-freedom (with point masses) case. Used to solve variety of performance, guidance, and flight-control problems for atmospheric and orbital vehicles. Applications include computation of performance or capability of vehicle in ascent, in orbit, and during entry into atmosphere; simulation and analysis of guidance and flight-control systems; dispersion-type analyses and analyses of loads; general-purpose six-degree-of-freedom simulation of controlled and uncontrolled vehicles; and validation of performance in six degrees of freedom. Written in FORTRAN 77 and C language. Two machine versions available: one for SUN-series computers running SunOS(TM) (LAR-14871) and one for Silicon Graphics IRIS computers running IRIX(TM) operating system (LAR-14869).

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudman, D.L.

    For 17 years, the sensor-based IBM 1800 computer successfully fulfilled Sun's requirements for data acquisition and process control at its petroleum refinery in Toledo, Ohio. However, faltering reliability due to deterioration, coupled with IBM's announced withdrawal of contractual hardware maintenance, prompted Sun to approach IBM regarding potential solutions to the problem of economically maintaining the IBM 1800 as a viable system in the Toledo Refinery. In concert, IBM and Sun identified several options, but an IBM proposal which held the most promise for long-term success was the direct replacement of the IBM 1800 processor and software systems with an IBM 4300 running IBM's licensed program product "Advanced Control System," i.e., ACS. Sun chose this solution. The intent of this paper is to chronicle the highlights of the project which successfully revitalized the process computer facilities in Sun's Toledo Refinery in only 10 months, under financial constraints, and using limited human resources.

  7. TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (SUN3 VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. Data-driven graphical objects such as dials, thermometers, and strip charts are also included. TAE Plus updates the strip chart as the data values change. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. The Silicon Graphics version of TAE Plus now has a font caching scheme and a color caching scheme to make color allocation more efficient. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides an extremely powerful means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif Toolkit 1.1 or 1.1.1. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus comes with InterViews and idraw, two software packages developed by Stanford University and integrated in TAE Plus. TAE Plus was developed in 1989 and version 5.1 was released in 1991. TAE Plus is currently available on media suitable for eight different machine platforms: 1) DEC VAX computers running VMS 5.3 or higher (TK50 cartridge in VAX BACKUP format), 2) DEC VAXstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX 8.0 (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX 8.05 (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun3 series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), 7) Sun4 (SPARC) series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), and 8) SGI Indigo computers running IRIX 4.0.1 and IRIX/Motif 1.0.1 (.25 inch IRIS tape cartridge in UNIX tar format). An optional Motif Object Code License is available for either Sun version. TAE is a trademark of the National Aeronautics and Space Administration. X Window System is a trademark of the Massachusetts Institute of Technology. Motif is a trademark of the Open Software Foundation. DEC, VAX, VMS, TK50 and ULTRIX are trademarks of Digital Equipment Corporation. HP9000 and HP-UX are trademarks of Hewlett-Packard Co. Sun3, Sun4, SunOS, and SPARC are trademarks of Sun Microsystems, Inc. SGI and IRIS are registered trademarks of Silicon Graphics, Inc.
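
    The calling pattern the abstract describes, resource files carrying the layout while WPT subroutines drive the interface at run time, can be illustrated with a short C sketch. Everything below is hypothetical: the abstract does not give actual WPT signatures, so the Wpt_* names, types, and resource file name are invented stand-ins, with stubs supplied so the fragment compiles; a real TAE Plus application would link against the WPT library instead.

    #include <stdio.h>
    #include <string.h>

    /* Invented stand-in types and stubs (hypothetical, not the WPT API) */
    typedef struct { const char *name; } Panel;
    typedef struct { const char *object; } Event;

    static Panel *Wpt_NewPanel(const char *resfile)   /* hypothetical name */
    {
        static Panel p = { "main" };
        printf("displaying panel from %s\n", resfile);
        return &p;
    }

    static int Wpt_NextEvent(Event *ev)               /* hypothetical name */
    {
        ev->object = "QUIT";                          /* pretend the user quit */
        return 1;
    }

    int main(void)
    {
        /* Layout, colors, and fonts live in the WorkBench-generated
         * resource file, so none of them appear here; per the abstract,
         * the UI can change without recompiling or relinking. */
        Panel *panel = Wpt_NewPanel("main_panel.res"); /* hypothetical file */
        (void)panel;

        Event ev;
        while (Wpt_NextEvent(&ev)) {                   /* event dispatch loop */
            if (strcmp(ev.object, "QUIT") == 0)
                break;
        }
        return 0;
    }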

  8. TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (SUN3 VERSION WITH MOTIF)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. Data-driven graphical objects such as dials, thermometers, and strip charts are also included. TAE Plus updates the strip chart as the data values change. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. The Silicon Graphics version of TAE Plus now has a font caching scheme and a color caching scheme to make color allocation more efficient. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides an extremely powerful means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif Toolkit 1.1 or 1.1.1. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus comes with InterViews and idraw, two software packages developed by Stanford University and integrated in TAE Plus. TAE Plus was developed in 1989 and version 5.1 was released in 1991. TAE Plus is currently available on media suitable for eight different machine platforms: 1) DEC VAX computers running VMS 5.3 or higher (TK50 cartridge in VAX BACKUP format), 2) DEC VAXstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX 8.0 (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX 8.05 (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun3 series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), 7) Sun4 (SPARC) series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), and 8) SGI Indigo computers running IRIX 4.0.1 and IRIX/Motif 1.0.1 (.25 inch IRIS tape cartridge in UNIX tar format). An optional Motif Object Code License is available for either Sun version. TAE is a trademark of the National Aeronautics and Space Administration. X Window System is a trademark of the Massachusetts Institute of Technology. Motif is a trademark of the Open Software Foundation. DEC, VAX, VMS, TK50 and ULTRIX are trademarks of Digital Equipment Corporation. HP9000 and HP-UX are trademarks of Hewlett-Packard Co. Sun3, Sun4, SunOS, and SPARC are trademarks of Sun Microsystems, Inc. SGI and IRIS are registered trademarks of Silicon Graphics, Inc.

  9. Sun Series program for the REEDA System. [predicting orbital lifetime using sunspot values]

    NASA Technical Reports Server (NTRS)

    Shankle, R. W.

    1980-01-01

    Modifications made to data bases and to four programs in a series of computer programs (Sun Series) which run on the REEDA HP minicomputer system to aid NASA's solar activity predictions used in orbital lifetime predictions are described. These programs utilize various mathematical smoothing techniques and perform statistical and graphical analysis of various solar activity data bases residing on the REEDA System.
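
    As an illustration of the kind of smoothing such programs apply, the C sketch below computes the conventional 13-month running mean of monthly sunspot numbers (full weight on the central 11 months, half weight on the two end months). The REEDA programs' actual algorithms are not described in the abstract, and the input values here are synthetic.

    #include <stdio.h>

    /* 13-month smoothed value centered on month i; needs 6 months of
     * data on each side of i. */
    static double smooth13(const double r[], int i)
    {
        double s = 0.5 * r[i - 6] + 0.5 * r[i + 6];
        for (int k = i - 5; k <= i + 5; k++)
            s += r[k];
        return s / 12.0;
    }

    int main(void)
    {
        double r[25];
        for (int i = 0; i < 25; i++)       /* synthetic monthly values */
            r[i] = 50.0 + 30.0 * (i % 7);
        for (int i = 6; i < 19; i++)
            printf("month %2d: smoothed R = %6.2f\n", i, smooth13(r, i));
        return 0;
    }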

  10. Pyrolaser Operating System

    NASA Technical Reports Server (NTRS)

    Roberts, Floyd E., III

    1994-01-01

    Software provides for control and acquisition of data from optical pyrometer. There are six individual programs in PYROLASER package. Provides quick and easy way to set up, control, and program standard Pyrolaser. Temperature and emissivity measurements either collected as if Pyrolaser in manual operating mode or displayed on real-time strip charts and stored in standard spreadsheet format for posttest analysis. Shell supplied to allow test-specific macros to be added to system easily. Written using LabVIEW software for use on Macintosh-series computers running System 6.0.3 or later, Sun Sparc-series computers running OpenWindows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatible computers running Microsoft Windows 3.1 or later.

  11. Comparison of scientific computing platforms for MCNP4A Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, J.S.; Brockhoff, R.C.

    1994-04-01

    The performance of seven computer platforms is evaluated with the widely used and internationally available MCNP4A Monte Carlo radiation transport code. All results are reproducible and are presented in such a way as to enable comparison with computer platforms not in the study. The authors observed that the HP/9000-735 workstation runs MCNP 50% faster than the Cray YMP 8/64. Compared with the Cray YMP 8/64, the IBM RS/6000-560 is 68% as fast, the Sun Sparc10 is 66% as fast, the Silicon Graphics ONYX is 90% as fast, the Gateway 2000 model 4DX2-66V personal computer is 27% as fast, and the Sun Sparc2 is 24% as fast. In addition to comparing the timing performance of the seven platforms, the authors observe that changes in compilers and software over the past 2 yr have resulted in only modest performance improvements, that hardware improvements have enhanced performance by less than a factor of approximately 3, that timing studies are very problem dependent, and that MCNP4A runs about as fast as MCNP4.

  12. TFSSRA - THICK FREQUENCY SELECTIVE SURFACE WITH RECTANGULAR APERTURES

    NASA Technical Reports Server (NTRS)

    Chen, J. C.

    1994-01-01

    Thick Frequency Selective Surface with Rectangular Apertures (TFSSRA) was developed to calculate the scattering parameters for a thick frequency selective surface with rectangular apertures on a skew grid at oblique angle of incidence. The method of moments is used to transform the integral equation into a matrix equation suitable for evaluation on a digital computer. TFSSRA predicts the reflection and transmission characteristics of a thick frequency selective surface for both TE and TM orthogonal linearly polarized plane waves. A model of a half-space infinite array is used in the analysis. A complete set of basis functions with unknown coefficients is developed for the waveguide region (waveguide modes) and for the free space region (Floquet modes) in order to represent the electromagnetic fields. To ensure the convergence of the solutions, the number of waveguide modes is adjustable. The method of moments is used to compute the unknown mode coefficients. Then, the scattering matrix of the half-space infinite array is calculated. Next, the reference plane of the scattering matrix is moved half a plate thickness in the negative z-direction, and a frequency selective surface of finite thickness is synthesized by positioning two plates of half-thickness back-to-back. The total scattering matrix is obtained by cascading the scattering matrices of the two half-space infinite arrays. TFSSRA is written in FORTRAN 77 with single precision. It has been successfully implemented on a Sun4 series computer running SunOS, an IBM PC compatible running MS-DOS, and a CRAY series computer running UNICOS, and should run on other systems with slight modifications. Double precision is recommended for running on a PC if many modes are used or if high accuracy is required. This package requires the LINPACK math library, which is included. TFSSRA requires 1Mb of RAM for execution. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. It is also available on a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. This program was developed in 1992 and is a copyrighted work with all copyright vested in NASA.
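
    The final cascading step can be written down explicitly. The abstract does not spell out the formulas, but the standard composition of two junction scattering matrices A and B (here the two half-thickness arrays) has the form below; this is a hedged reconstruction of the usual textbook result, not necessarily TFSSRA's internal convention.

    $$ S_{11} = S_{11}^{A} + S_{12}^{A}\left(I - S_{11}^{B} S_{22}^{A}\right)^{-1} S_{11}^{B} S_{21}^{A} $$
    $$ S_{21} = S_{21}^{B}\left(I - S_{22}^{A} S_{11}^{B}\right)^{-1} S_{21}^{A} $$

    with S12 and S22 given by the same expressions with the roles of A and B exchanged. The inverse term accounts for the multiple reflections trapped between the two half-thickness junctions.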

  13. TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (DEC VAX ULTRIX VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. Data-driven graphical objects such as dials, thermometers, and strip charts are also included. TAE Plus updates the strip chart as the data values change. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. The Silicon Graphics version of TAE Plus now has a font caching scheme and a color caching scheme to make color allocation more efficient. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides an extremely powerful means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif Toolkit 1.1 or 1.1.1. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus comes with InterViews and idraw, two software packages developed by Stanford University and integrated in TAE Plus. TAE Plus was developed in 1989 and version 5.1 was released in 1991. TAE Plus is currently available on media suitable for eight different machine platforms: 1) DEC VAX computers running VMS 5.3 or higher (TK50 cartridge in VAX BACKUP format), 2) DEC VAXstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX 8.0 (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX 8.05 (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun3 series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), 7) Sun4 (SPARC) series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), and 8) SGI Indigo computers running IRIX 4.0.1 and IRIX/Motif 1.0.1 (.25 inch IRIS tape cartridge in UNIX tar format). An optional Motif Object Code License is available for either Sun version. TAE is a trademark of the National Aeronautics and Space Administration. X Window System is a trademark of the Massachusetts Institute of Technology. Motif is a trademark of the Open Software Foundation. DEC, VAX, VMS, TK50 and ULTRIX are trademarks of Digital Equipment Corporation. HP9000 and HP-UX are trademarks of Hewlett-Packard Co. Sun3, Sun4, SunOS, and SPARC are trademarks of Sun Microsystems, Inc. SGI and IRIS are registered trademarks of Silicon Graphics, Inc.

  14. KNET - DISTRIBUTED COMPUTING AND/OR DATA TRANSFER PROGRAM

    NASA Technical Reports Server (NTRS)

    Hui, J.

    1994-01-01

    KNET facilitates distributed computing between a UNIX compatible local host and a remote host which may or may not be UNIX compatible. It is capable of automatic remote login. That is, it performs on the user's behalf the chore of handling host selection, user name, and password to the designated host. Once the login has been successfully completed, the user may interactively communicate with the remote host. Data output from the remote host may be directed to the local screen, to a local file, and/or to a local process. Conversely, data input from the keyboard, a local file, or a local process may be directed to the remote host. KNET takes advantage of the multitasking and terminal mode control features of the UNIX operating system. A parent process is used as the upper layer for interfacing with the local user. A child process is used as the lower layer for interfacing with the remote host computer, and optionally one or more child processes can be used for the remote data output. Output may be directed to the screen and/or to the local processes under the control of a data pipe switch. In order for KNET to operate, the local and remote hosts must observe a common communications protocol. KNET is written in ANSI standard C-language for computers running UNIX. It has been successfully implemented on several Sun series computers and a DECstation 3100 and used to run programs remotely on VAX VMS and UNIX based computers. It requires 100K of RAM under SunOS and 120K of RAM under DEC RISC ULTRIX. An electronic copy of the documentation is provided on the distribution medium. The standard distribution medium for KNET is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. KNET was developed in 1991 and is a copyrighted work with all copyright vested in NASA. UNIX is a registered trademark of AT&T Bell Laboratories. Sun and SunOS are trademarks of Sun Microsystems, Inc. DECstation, VAX, VMS, and ULTRIX are trademarks of Digital Equipment Corporation.
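
    The parent/child layering with a pipe carrying remote output back to the parent can be sketched in a few lines of C. The remote connection below is faked with a fixed string, since KNET's protocol is not described in the abstract; only the fork/pipe structure is the point.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int pipefd[2];
        if (pipe(pipefd) < 0) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }

        if (pid == 0) {                       /* child: lower layer */
            close(pipefd[0]);
            const char *msg = "output from the remote host\n"; /* stand-in */
            write(pipefd[1], msg, strlen(msg));
            close(pipefd[1]);
            _exit(0);
        }

        /* parent: upper layer; a "data pipe switch" would choose among
         * the screen, a local file, or a local process as destinations */
        close(pipefd[1]);
        char buf[256];
        ssize_t n;
        while ((n = read(pipefd[0], buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout); /* here: screen only */
        close(pipefd[0]);
        waitpid(pid, NULL, 0);
        return 0;
    }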

  15. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.

  16. Modernization of the NASA IRTF Telescope Control System

    NASA Astrophysics Data System (ADS)

    Pilger, Eric J.; Harwood, James V.; Onaka, Peter M.

    1994-06-01

    We describe the ongoing modernization of the NASA IR Telescope Facility Telescope Control System. A major mandate of this project is to keep the telescope available for observations throughout. Therefore, we have developed an incremental plan that will allow us to replace components of the software and hardware without shutting down the system. The current system, running under FORTH on a DEC LSI 11/23 minicomputer interfaced to a bus and boards developed in-house, will be replaced with a combination of a Sun SPARCstation running SunOS, a MicroSPARC-based Single Board Computer running LynxOS, and various intelligent VME-based peripheral cards. The software is based on a design philosophy originally developed by Pat Wallace for use on the Anglo Australian Telescope. This philosophy has gained wide acceptance, and is currently used in a number of observatories around the world. A key element of this philosophy is the division of the TCS into 'Virtual' and 'Real' parts. This will allow us to replace the higher level functions of the TCS with software running on the Sun, while still relying on the LSI 11/23 for performance of the lower level functions. Eventual transfer of lower level functions to the MicroSPARC system will then proceed incrementally through use of a Q-Bus to VME-Bus converter.

  17. NASADIG - NASA DEVICE INDEPENDENT GRAPHICS LIBRARY (AMDAHL VERSION)

    NASA Technical Reports Server (NTRS)

    Rogers, J. E.

    1994-01-01

    The NASA Device Independent Graphics Library, NASADIG, can be used with many computer-based engineering and management applications. The library gives the user the opportunity to translate data into effective graphic displays for presentation. The software offers many features which allow the user flexibility in creating graphics. These include two-dimensional plots, subplot projections in 3D-space, surface contour line plots, and surface contour color-shaded plots. Routines for three-dimensional plotting, wireframe surface plots, surface plots with hidden line removal, and surface contour line plots are provided. Other features include polar and spherical coordinate plotting, world map plotting utilizing either cylindrical equidistant or Lambert equal area projection, plot translation, plot rotation, plot blowup, splines and polynomial interpolation, area blanking control, multiple log/linear axes, legends and text control, curve thickness control, and multiple text fonts (18 regular, 4 bold). NASADIG contains several groups of subroutines. Included are subroutines for plot area and axis definition; text set-up and display; area blanking; line style set-up, interpolation, and plotting; color shading and pattern control; legend, text block, and character control; device initialization; mixed alphabets setting; and other useful functions. The usefulness of many routines is dependent on the prior definition of basic parameters. The program's control structure uses a serial-level construct with each routine restricted for activation at some prescribed level(s) of problem definition. NASADIG provides the following output device drivers: Selanar 100XL, VECTOR Move/Draw ASCII and PostScript files, Tektronix 40xx, 41xx, and 4510 Rasterizer, DEC VT-240 (4014 mode), IBM AT/PC compatible with SmartTerm 240 emulator, HP Lasergrafix Film Recorder, QMS 800/1200, DEC LN03+ Laserprinters, and HP LaserJet (Series III). NASADIG is written in FORTRAN and is available for several platforms. NASADIG 5.7 is available for DEC VAX series computers running VMS 5.0 or later (MSC-21801), Cray X-MP and Y-MP series computers running UNICOS (COS-10049), and Amdahl 5990 mainframe computers running UTS (COS-10050). NASADIG 5.1 is available for UNIX-based operating systems (MSC-22001). The UNIX version has been successfully implemented on Sun4 series computers running SunOS, SGI IRIS computers running IRIX, Hewlett Packard 9000 computers running HP-UX, and Convex computers running Convex OS (MSC-22001). The standard distribution medium for MSC-21801 is a set of two 6250 BPI 9-track magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. The standard distribution medium for COS-10049 and COS-10050 is a 6250 BPI 9-track magnetic tape in UNIX tar format. Other distribution media and formats may be available upon request. The standard distribution medium for MSC-22001 is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. With minor modification, the UNIX source code can be ported to other platforms including IBM PC/AT series computers and compatibles. NASADIG is also available bundled with TRASYS, the Thermal Radiation Analysis System (COS-10026, DEC VAX version; COS-10040, CRAY version).

  18. NASADIG - NASA DEVICE INDEPENDENT GRAPHICS LIBRARY (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Rogers, J. E.

    1994-01-01

    The NASA Device Independent Graphics Library, NASADIG, can be used with many computer-based engineering and management applications. The library gives the user the opportunity to translate data into effective graphic displays for presentation. The software offers many features which allow the user flexibility in creating graphics. These include two-dimensional plots, subplot projections in 3D-space, surface contour line plots, and surface contour color-shaded plots. Routines for three-dimensional plotting, wireframe surface plots, surface plots with hidden line removal, and surface contour line plots are provided. Other features include polar and spherical coordinate plotting, world map plotting utilizing either cylindrical equidistant or Lambert equal area projection, plot translation, plot rotation, plot blowup, splines and polynomial interpolation, area blanking control, multiple log/linear axes, legends and text control, curve thickness control, and multiple text fonts (18 regular, 4 bold). NASADIG contains several groups of subroutines. Included are subroutines for plot area and axis definition; text set-up and display; area blanking; line style set-up, interpolation, and plotting; color shading and pattern control; legend, text block, and character control; device initialization; mixed alphabets setting; and other useful functions. The usefulness of many routines is dependent on the prior definition of basic parameters. The program's control structure uses a serial-level construct with each routine restricted for activation at some prescribed level(s) of problem definition. NASADIG provides the following output device drivers: Selanar 100XL, VECTOR Move/Draw ASCII and PostScript files, Tektronix 40xx, 41xx, and 4510 Rasterizer, DEC VT-240 (4014 mode), IBM AT/PC compatible with SmartTerm 240 emulator, HP Lasergrafix Film Recorder, QMS 800/1200, DEC LN03+ Laserprinters, and HP LaserJet (Series III). NASADIG is written in FORTRAN and is available for several platforms. NASADIG 5.7 is available for DEC VAX series computers running VMS 5.0 or later (MSC-21801), Cray X-MP and Y-MP series computers running UNICOS (COS-10049), and Amdahl 5990 mainframe computers running UTS (COS-10050). NASADIG 5.1 is available for UNIX-based operating systems (MSC-22001). The UNIX version has been successfully implemented on Sun4 series computers running SunOS, SGI IRIS computers running IRIX, Hewlett Packard 9000 computers running HP-UX, and Convex computers running Convex OS (MSC-22001). The standard distribution medium for MSC-21801 is a set of two 6250 BPI 9-track magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. The standard distribution medium for COS-10049 and COS-10050 is a 6250 BPI 9-track magnetic tape in UNIX tar format. Other distribution media and formats may be available upon request. The standard distribution medium for MSC-22001 is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. With minor modification, the UNIX source code can be ported to other platforms including IBM PC/AT series computers and compatibles. NASADIG is also available bundled with TRASYS, the Thermal Radiation Analysis System (COS-10026, DEC VAX version; COS-10040, CRAY version).

  19. POMESH - DIFFRACTION ANALYSIS OF REFLECTOR ANTENNAS

    NASA Technical Reports Server (NTRS)

    Hodges, R. E.

    1994-01-01

    POMESH is a computer program capable of predicting the performance of reflector antennas. Both far field pattern and gain calculations are performed using the Physical Optics (PO) approximation of the equivalent surface currents. POMESH is primarily intended for relatively small reflectors. It is useful in situations where the surface is described by irregular data that must be interpolated and for cases where the surface derivatives are not known. This method is flexible and robust and also supports near field calculations. Because of the near field computation ability, this computational engine is quite useful for subreflector computations. The program is constructed in a highly modular form so that it may be readily adapted to perform tasks other than the one that is explicitly described here. Since the computationally intensive portions of the algorithm are simple loops, the program can be easily adapted to take advantage of vector processor and parallel architectures. In POMESH the reflector is represented as a piecewise planar surface comprised of triangular regions known as facets. A uniform physical optics (PO) current is assumed to exist on each triangular facet. Then, the PO integral on a facet is approximated by the product of the PO current value at the center and the area of the triangle. In this way, the PO integral over the reflector surface is reduced to a summation of the contribution from each triangular facet. The source horn, or feed, that illuminates the subreflector is approximated by a linear combination of plane patterns. POMESH contains three polarization pattern definitions for the feed: a linear x-polarized element, a linear y-polarized element, and a circularly polarized element. If a more general feed pattern is required, it is a simple matter to replace the subroutine that implements the pattern definitions. POMESH obtains information necessary to specify the coordinate systems, location of other data files, and parameters of the desired calculation from a user-provided data file. A numerical description of the principal plane patterns of the source horn must also be provided. The program is supplied with an analytically defined parabolic reflector surface. However, it is a simple matter to replace it with a user-defined reflector surface. Output is given in the form of a data stream to the terminal; a summary of the parameters used in the computation and some sample results in a file; and a data file of the results of the pattern calculations suitable for plotting. POMESH is written in FORTRAN 77 for execution on CRAY series computers running UNICOS. With minor modifications, it has also been successfully implemented on a Sun4 series computer running SunOS, a DEC VAX series computer running VMS, and an IBM PC series computer running OS/2. It requires 2.5Mb of RAM under SunOS 4.1.1, 2.5Mb of RAM under VMS 5-4.3, and 2.5Mb of RAM under OS/2. The OS/2 version requires the Lahey F77L compiler. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. It is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format and a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. POMESH was developed in 1989 and is a copyrighted work with all copyright vested in NASA. CRAY and UNICOS are registered trademarks of Cray Research, Inc. SunOS and Sun4 are trademarks of Sun Microsystems, Inc. DEC, DEC FILES-11, VAX and VMS are trademarks of Digital Equipment Corporation.
IBM PC and OS/2 are registered trademarks of International Business Machines, Inc. UNIX is a registered trademark of Bell Laboratories.
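
    The facet summation reduces to a very small loop. The C sketch below (POMESH itself is FORTRAN 77) shows the midpoint-rule idea the abstract describes: current at the facet center times facet area times a phase term, summed over facets. The geometry, current value, and frequency are placeholder assumptions, and polarization and constants are omitted.

    #include <stdio.h>
    #include <complex.h>
    #include <math.h>

    int main(void)
    {
        const double PI = 3.14159265358979;
        const double k  = 2.0 * PI / 0.03;       /* wavenumber near 10 GHz (assumed) */
        const int nfacets = 4;
        /* facet centers (x,y,z), areas, and a uniform PO current magnitude */
        double c[4][3] = {{0,0,0},{0.1,0,0.001},{0,0.1,0.001},{0.1,0.1,0.002}};
        double area[4] = {1e-4, 1e-4, 1e-4, 1e-4};
        double J = 1.0;                          /* placeholder current */

        /* one far-field observation direction (unit vector) */
        double u[3] = {0.0, 0.0, 1.0};

        double complex E = 0.0;
        for (int i = 0; i < nfacets; i++) {
            double phase = k * (u[0]*c[i][0] + u[1]*c[i][1] + u[2]*c[i][2]);
            E += J * area[i] * cexp(I * phase);  /* midpoint-rule facet term */
        }
        printf("|E| (unnormalized) = %g\n", cabs(E));
        return 0;
    }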

  20. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Walters, D.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely-sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS has the subsequent capability to geographically reference this data to dozens of standard, as well as user created projections. As an integrated image processing system, ELAS offers the user of remotely-sensed data a wide range of capabilities in the areas of land cover analysis and general purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulation. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display; true color display; and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. ELAS has as a standard the flexibility to process data elements exceeding 8 bits in length, including floating point (noninteger) elements and 16 or 32 bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-LANGUAGE for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15 bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X-Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor. 
The standard distribution medium for the VAX version (ERL10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Masscomp version, (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch streaming magnetic IRIS tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and Open Windows are trademarks of Sun Microsystems, Inc. MIT X Window System is licensed by Massachusetts Institute of Technology.

  21. TWOS - TIME WARP OPERATING SYSTEM, VERSION 2.5.1

    NASA Technical Reports Server (NTRS)

    Bellenot, S. F.

    1994-01-01

    The Time Warp Operating System (TWOS) is a special-purpose operating system designed to support parallel discrete-event simulation. TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual time synchronization based on process rollback and message annihilation. Version 2.5.1 supports simulations and other computations using both virtual time and dynamic load balancing; it does not support general time-sharing or multi-process jobs using conventional message synchronization and communication. The program utilizes the underlying operating system's resources. TWOS runs a single simulation at a time, executing it concurrently on as many processors of a distributed system as are allocated. The simulation needs only to be decomposed into objects (logical processes) that interact through time-stamped messages. TWOS provides transparent synchronization. The user does not have to add any more special logic to aid in synchronization, nor give any synchronization advice, nor even understand much about how the Time Warp mechanism works. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface compatible with TWOS. This means that an application designer and programmer who wish to use TWOS can prototype code on TWSIM on a single processor and/or workstation before having to deal with the complexity of working on a distributed system. TWSIM also provides statistics about the application which may be helpful for determining the correctness of an application and for achieving good performance on TWOS. Version 2.5.1 has an updated interface that is not compatible with 2.0. The program's user manual assists the simulation programmer in the design, coding, and implementation of discrete-event simulations running on TWOS. The manual also includes a practical user's guide to the TWOS application benchmark, Colliding Pucks. TWOS supports simulations written in the C programming language. It is designed to run on the Sun3/Sun4 series computers and the BBN "Butterfly" GP-1000 computer. The standard distribution medium for this package is a .25 inch tape cartridge in TAR format. TWOS was developed in 1989 and updated in 1991. This program is a copyrighted work with all copyright vested in NASA. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
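
    The rollback rule at the heart of the Time Warp mechanism can be illustrated compactly. The toy C sketch below checkpoints state before each event and, on a straggler (a message whose timestamp is earlier than local virtual time), restores the last consistent checkpoint. Real TWOS would also send antimessages to annihilate the messages sent by the undone events and then re-execute them, which this toy omits; the "state" here is a single integer.

    #include <stdio.h>

    #define MAXEVENTS 64

    /* one checkpoint per processed event: the event's timestamp plus
     * the state and virtual time that existed just before it ran */
    static int saved_evt[MAXEVENTS], saved_lvt[MAXEVENTS], saved_state[MAXEVENTS];
    static int nsaved = 0;
    static int lvt = 0, state = 0;   /* local virtual time and state */

    static void process(int ts)
    {
        saved_evt[nsaved]   = ts;    /* checkpoint before advancing */
        saved_lvt[nsaved]   = lvt;
        saved_state[nsaved] = state;
        nsaved++;
        lvt = ts;
        state += ts;                 /* stand-in for real event handling */
        printf("processed t=%d, state=%d\n", ts, state);
    }

    static void receive(int ts)
    {
        if (ts < lvt) {              /* straggler: undo every later event */
            while (nsaved > 0 && saved_evt[nsaved - 1] > ts) {
                nsaved--;
                lvt   = saved_lvt[nsaved];
                state = saved_state[nsaved];
            }
            printf("rolled back to virtual time %d\n", lvt);
        }
        process(ts);
    }

    int main(void)
    {
        receive(10); receive(20); receive(30);
        receive(15);                 /* straggler: rolls back t=20 and t=30 */
        return 0;
    }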

  22. HONTIOR - HIGHER-ORDER NEURAL NETWORK FOR TRANSFORMATION INVARIANT OBJECT RECOGNITION

    NASA Technical Reports Server (NTRS)

    Spirkovska, L.

    1994-01-01

    Neural networks have been applied in numerous fields, including transformation invariant object recognition, wherein an object is recognized despite changes in the object's position in the input field, size, or rotation. One of the more successful neural network methods used in invariant object recognition is the higher-order neural network (HONN) method. With a HONN, known relationships are exploited and the desired invariances are built directly into the architecture of the network, eliminating the need for the network to learn invariance to transformations. This results in a significant reduction in the training time required, since the network needs to be trained on only one view of each object, not on numerous transformed views. Moreover, one hundred percent accuracy is guaranteed for images characterized by the built-in distortions, provided noise is not introduced through pixelation. The program HONTIOR implements a third-order neural network having invariance to translation, scale, and in-plane rotation built directly into the architecture. Thus, for 2-D transformation invariance, the network needs to be trained on only one view of each object. HONTIOR can also be used for 3-D transformation invariant object recognition by training the network only on a set of out-of-plane rotated views. Historically, the major drawback of HONNs has been that the size of the input field was limited by the memory required for the large number of interconnections in a fully connected network. HONTIOR solves this problem by coarse coding the input images (coding an image as a set of overlapping but offset coarser images). Using this scheme, large input fields (4096 x 4096 pixels) can easily be represented using very little virtual memory (30Mb). The HONTIOR distribution consists of three main programs. The first program contains the training and testing routines for a third-order neural network. The second program contains the same training and testing procedures as the first, but it also contains a number of functions to display and edit training and test images. Finally, the third program is an auxiliary program which calculates the included angles for a given input field size. HONTIOR is written in C language, and was originally developed for Sun3 and Sun4 series computers. Both graphic and command line versions of the program are provided. The command line version has been successfully compiled and executed both on computers running the UNIX operating system and on DEC VAX series computers running VMS. The graphic version requires the SunTools windowing environment, and therefore runs only on Sun series computers. The executable for the graphics version of HONTIOR requires 1Mb of RAM. The standard distribution medium for HONTIOR is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The package includes sample input and output data. HONTIOR was developed in 1991. Sun, Sun3 and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation.
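
    Coarse coding is easy to demonstrate. The toy C sketch below represents one fine input field as several coarser fields, each offset before down-scaling, so a single fine pixel lands in a slightly different coarse cell in each field. The sizes are tiny and the offset scheme is a simplified stand-in; the abstract does not detail HONTIOR's actual coding.

    #include <stdio.h>

    #define FINE   16              /* fine field is FINE x FINE */
    #define FIELDS 4               /* number of offset coarse fields */
    #define COARSE (FINE / FIELDS) /* each coarse field is COARSE x COARSE */

    int main(void)
    {
        int fine[FINE][FINE] = {{0}};
        fine[5][9] = 1;                       /* one "on" pixel */

        /* field f is offset by f pixels in each axis, then scaled down
         * by FIELDS; together the fields cover the fine resolution with
         * far fewer cells per field */
        int coarse[FIELDS][COARSE][COARSE] = {{{0}}};
        for (int f = 0; f < FIELDS; f++)
            for (int y = 0; y < FINE; y++)
                for (int x = 0; x < FINE; x++)
                    if (fine[y][x])
                        coarse[f][((y + f) % FINE) / FIELDS]
                                 [((x + f) % FINE) / FIELDS] = 1;

        for (int f = 0; f < FIELDS; f++)
            for (int y = 0; y < COARSE; y++)
                for (int x = 0; x < COARSE; x++)
                    if (coarse[f][y][x])
                        printf("field %d: coarse pixel (%d,%d)\n", f, y, x);
        return 0;
    }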

  3. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (SILICON GRAPHICS VERSION)

    NASA Technical Reports Server (NTRS)

    Walters, D.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS can then geographically reference these data to dozens of standard projections, as well as to user-created projections. As an integrated image processing system, ELAS offers the user of remotely sensed data a wide range of capabilities in the areas of land cover analysis and general-purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulations. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files, etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display, true color display, and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. As a standard feature, ELAS can process data elements exceeding 8 bits in length, including floating-point (noninteger) elements and 16- or 32-bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-language for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15-bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X Window System Version 11 Release 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor.
The standard distribution medium for the VAX version (ERL-10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Masscomp version (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch IRIS streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. The MIT X Window System is licensed by the Massachusetts Institute of Technology.

  4. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (CONCURRENT VERSION)

    NASA Technical Reports Server (NTRS)

    Pearson, R. W.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS can then geographically reference these data to dozens of standard projections, as well as to user-created projections. As an integrated image processing system, ELAS offers the user of remotely sensed data a wide range of capabilities in the areas of land cover analysis and general-purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulations. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files, etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display, true color display, and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. As a standard feature, ELAS can process data elements exceeding 8 bits in length, including floating-point (noninteger) elements and 16- or 32-bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-language for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15-bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X Window System Version 11 Release 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor.
The standard distribution medium for the VAX version (ERL-10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Masscomp version (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch IRIS streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. The MIT X Window System is licensed by the Massachusetts Institute of Technology.

  5. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (MASSCOMP VERSION)

    NASA Technical Reports Server (NTRS)

    Walters, D.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS can then geographically reference these data to dozens of standard projections, as well as to user-created projections. As an integrated image processing system, ELAS offers the user of remotely sensed data a wide range of capabilities in the areas of land cover analysis and general-purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulations. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files, etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display, true color display, and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. As a standard feature, ELAS can process data elements exceeding 8 bits in length, including floating-point (noninteger) elements and 16- or 32-bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-language for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15-bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X Window System Version 11 Release 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor.
The standard distribution medium for the VAX version (ERL-10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Masscomp version (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch IRIS streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. The MIT X Window System is licensed by the Massachusetts Institute of Technology.

  6. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (DEC VAX VERSION)

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS can then geographically reference these data to dozens of standard projections, as well as to user-created projections. As an integrated image processing system, ELAS offers the user of remotely sensed data a wide range of capabilities in the areas of land cover analysis and general-purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulations. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files, etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display, true color display, and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. As a standard feature, ELAS can process data elements exceeding 8 bits in length, including floating-point (noninteger) elements and 16- or 32-bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-language for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15-bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X Window System Version 11 Release 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor.
The standard distribution medium for the VAX version (ERL-10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Masscomp version (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch IRIS streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. The MIT X Window System is licensed by the Massachusetts Institute of Technology.

  7. Improving Reliability in a Stochastic Communication Network

    DTIC Science & Technology

    1990-12-01

    and GINO. In addition, the following computers were used: a Sun 386i workstation, a Digital Equipment Corporation (DEC) 11/785 miniframe, and a DEC...operating system. The DEC 11/785 miniframe used in the experiment was running Unix Version 4.3 (Berkeley System Domain). Maxflo was run on the DEC 11/785...the file was still called Modifyl.for). 4. The Maxflo program was started on the DEC 11/785 miniframe. 5. At this time the Convert.max file, created

  8. MATH77 - A LIBRARY OF MATHEMATICAL SUBPROGRAMS FOR FORTRAN 77, RELEASE 4.0

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1994-01-01

    MATH77 is a high quality library of ANSI FORTRAN 77 subprograms implementing contemporary algorithms for the basic computational processes of science and engineering. The portability of MATH77 meets the needs of present-day scientists and engineers who typically use a variety of computing environments. Release 4.0 of MATH77 contains 454 user-callable and 136 lower-level subprograms. Usage of the user-callable subprograms is described in 69 sections of the 416-page users' manual. The topics covered by MATH77 are indicated by the following list of chapter titles in the users' manual: Mathematical Functions, Pseudo-random Number Generation, Linear Systems of Equations and Linear Least Squares, Matrix Eigenvalues and Eigenvectors, Matrix Vector Utilities, Nonlinear Equation Solving, Curve Fitting, Table Look-Up and Interpolation, Definite Integrals (Quadrature), Ordinary Differential Equations, Minimization, Polynomial Rootfinding, Finite Fourier Transforms, Special Arithmetic, Sorting, Library Utilities, Character-based Graphics, and Statistics. Besides subprograms that are adaptations of public domain software, MATH77 contains a number of unique packages developed by the authors of MATH77. Instances of the latter type include (1) adaptive quadrature, allowing for exceptional generality in multidimensional cases, (2) the ordinary differential equations solver used in spacecraft trajectory computation for JPL missions, (3) univariate and multivariate table look-up and interpolation, allowing for "ragged" tables and providing error estimates, and (4) univariate and multivariate derivative-propagation arithmetic. MATH77 release 4.0 is a subroutine library which has been carefully designed to be usable on any computer system that supports the full ANSI standard FORTRAN 77 language. It has been successfully implemented on a CRAY Y/MP computer running UNICOS, a UNISYS 1100 computer running EXEC 8, a DEC VAX series computer running VMS, a Sun4 series computer running SunOS, a Hewlett-Packard 720 computer running HP-UX, a Macintosh computer running MacOS, and an IBM PC compatible computer running MS-DOS. Accompanying the library is a set of 196 "demo" drivers that exercise all of the user-callable subprograms. The FORTRAN source code for MATH77 comprises 109K lines of code in 375 files with a total size of 4.5Mb. The demo drivers comprise 11K lines of code totaling 418K. Forty-four percent of the lines of the library code and 29% of those in the demo code are comment lines. The standard distribution medium for MATH77 is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 9-track 1600 BPI magnetic tape in VAX BACKUP format and a TK50 tape cartridge in VAX BACKUP format. An electronic copy of the documentation is included on the distribution media. Previous releases of MATH77 have been used over a number of years in a variety of JPL applications. MATH77 Release 4.0 was completed in 1992. MATH77 is a copyrighted work with all copyright vested in NASA.
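
    As an illustration of the table look-up and interpolation facilities listed above, the following C fragment performs a generic univariate look-up with linear interpolation. The MATH77 routines themselves are FORTRAN 77 subprograms with different names and calling sequences (and they also return error estimates); this sketch only shows the underlying operation.

      #include <stdio.h>

      /* Linear interpolation in a table with strictly increasing x[];
       * queries outside the table are clamped to the end points. */
      double interp1(const double *x, const double *y, int n, double xq)
      {
          if (xq <= x[0])     return y[0];
          if (xq >= x[n - 1]) return y[n - 1];
          int lo = 0, hi = n - 1;
          while (hi - lo > 1) {                 /* binary search for interval */
              int mid = (lo + hi) / 2;
              if (x[mid] <= xq) lo = mid; else hi = mid;
          }
          double t = (xq - x[lo]) / (x[lo + 1] - x[lo]);
          return y[lo] + t * (y[lo + 1] - y[lo]);
      }

      int main(void)
      {
          double x[] = { 0.0, 1.0, 2.0, 4.0 };
          double y[] = { 0.0, 1.0, 4.0, 16.0 };            /* samples of x*x */
          printf("f(3.0) ~= %g\n", interp1(x, y, 4, 3.0)); /* 10; exact is 9 */
          return 0;
      }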

  9. Platform-independent software for medical image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Mancuso, Michael E.; Pathak, Sayan D.; Kim, Yongmin

    1997-05-01

    We have developed a software tool for image processing over the Internet. The tool is a general purpose, easy to use, flexible, platform-independent image processing software package with functions most commonly used in medical image processing. It provides for processing of medical images located either remotely on the Internet or locally. The software was written in Java, the new programming language developed by Sun Microsystems. It was compiled and tested using Microsoft's Visual Java 1.0 and Microsoft's Just in Time Compiler 1.00.6211. The software is simple and easy to use. In order to use the tool, the user needs to download the software from our site before running it using any Java interpreter, such as those supplied by Sun, Symantec, Borland, or Microsoft. Future versions of the operating systems supplied by Sun, Microsoft, Apple, IBM, and others will include Java interpreters. The software is then able to access and process any image on the Internet or on the local computer. Using a 512 X 512 X 8-bit image, a 3 X 3 convolution took 0.88 seconds on an Intel Pentium Pro PC running at 200 MHz with 64 Mbytes of memory. A window/level operation took 0.38 seconds, while a 3 X 3 median filter took 0.71 seconds. These performance numbers demonstrate the feasibility of using this software interactively on desktop computers. Our software tool supports various image processing techniques commonly used in medical image processing and can run without the need of any specialized hardware. It can become an easily accessible resource over the Internet to promote the learning and understanding of image processing algorithms. Also, it could facilitate sharing of medical image databases and collaboration amongst researchers and clinicians, regardless of location.
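
    For reference, the 3 X 3 convolution that was timed above looks roughly like this in C (the package itself is written in Java; the kernel, scaling, and border policy here are illustrative assumptions):

      #include <stdio.h>
      #include <string.h>

      /* 3 x 3 convolution of an 8-bit image; the 1-pixel border is copied
       * through unchanged and the result is clamped to 0..255. */
      void conv3x3(const unsigned char *in, unsigned char *out,
                   int w, int h, const int k[9], int divisor)
      {
          memcpy(out, in, (size_t)w * h);
          for (int y = 1; y < h - 1; y++)
              for (int x = 1; x < w - 1; x++) {
                  int s = 0;
                  for (int dy = -1; dy <= 1; dy++)
                      for (int dx = -1; dx <= 1; dx++)
                          s += k[(dy + 1) * 3 + dx + 1] * in[(y + dy) * w + x + dx];
                  s /= divisor;
                  out[y * w + x] = (unsigned char)(s < 0 ? 0 : s > 255 ? 255 : s);
              }
      }

      int main(void)
      {
          enum { W = 8, H = 8 };
          static unsigned char in[W * H], out[W * H];
          int k[9] = { 1, 1, 1, 1, 1, 1, 1, 1, 1 };  /* 3 x 3 smoothing */
          in[3 * W + 3] = 90;                        /* one bright pixel */
          conv3x3(in, out, W, H, k, 9);
          printf("%d\n", out[3 * W + 4]);            /* prints 10 */
          return 0;
      }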

  10. SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized by different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs is: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR-13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR-14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
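
    The attraction of algebraic bounds for slow exponential transitions can be seen with a one-line computation: for a fault-arrival rate lambda and mission time T, P(arrival <= T) = 1 - exp(-lambda*T), which is bounded above by lambda*T, and the two agree closely whenever lambda*T is small. The numbers below are a made-up illustration of this flavor of bound, not a statement of the actual SURE bounding theorems.

      #include <math.h>
      #include <stdio.h>

      int main(void)
      {
          double lambda = 1e-4;                 /* faults per hour */
          double T      = 10.0;                 /* mission time, hours */
          double exact  = 1.0 - exp(-lambda * T);
          double bound  = lambda * T;           /* simple upper bound */
          printf("exact %.10f, upper bound %.10f\n", exact, bound);
          return 0;
      }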

  11. Visually guided locomotion and computation of time-to-collision in the mongolian gerbil (Meriones unguiculatus): the effects of frontal and visual cortical lesions.

    PubMed

    Shankar, S; Ellard, C

    2000-02-01

    Past research has indicated that many species use the time-to-collision variable, but little is known about its neural underpinnings in rodents. In a set of three experiments we set out to replicate and extend the findings of Sun et al. (Sun H-J, Carey DP, Goodale MA. Exp Brain Res 1992;91:171-175) in a visually guided task in Mongolian gerbils, and then investigated the effects of lesions to different cortical areas. We trained Mongolian gerbils to run in the dark toward a target on a computer screen. In some trials the target changed in size as the animal ran toward it, in such a way as to produce 'virtual targets' if the animals were using time-to-collision or contact information. In experiment 1 we confirmed that gerbils use time-to-contact information to modulate their speed of running toward a target. In experiment 2 we established that visual cortex lesions attenuate the ability of lesioned animals to use information from the visual target to guide their run, while animals with frontal cortex lesions are not as severely affected. In experiment 3 we found that small radio-frequency lesions of either area V1 or the lateral extrastriate regions of the visual cortex also affected the use of information from the target to modulate locomotion.
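
    The time-to-collision variable referred to above is conventionally estimated as Lee's tau: the target's angular size theta divided by its rate of expansion, tau ~= theta / (d theta/dt). A minimal sketch with made-up numbers (not data from the study):

      #include <stdio.h>

      int main(void)
      {
          double theta1 = 0.10, theta2 = 0.11;   /* angular size, radians */
          double dt = 0.05;                      /* seconds between samples */
          double rate = (theta2 - theta1) / dt;  /* expansion rate, rad/s */
          double tau = theta2 / rate;
          printf("estimated time-to-collision: %.2f s\n", tau);  /* 0.55 s */
          return 0;
      }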

  12. PACER -- A fast running computer code for the calculation of short-term containment/confinement loads following coolant boundary failure. Volume 2: User information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sienicki, J.J.

    A fast running and simple computer code has been developed to calculate pressure loadings inside light water reactor containments/confinements under loss-of-coolant accident conditions. PACER was originally developed to calculate containment/confinement pressure and temperature time histories for loss-of-coolant accidents in Soviet-designed VVER reactors and is relevant to the activities of the US International Nuclear Safety Center. The code employs a multicompartment representation of the containment volume and is focused upon application to early-time containment phenomena during and immediately following blowdown. PACER has been developed for FORTRAN 77 and earlier versions of FORTRAN. The code has been successfully compiled and executed on SUN SPARC and Hewlett-Packard HP-735 workstations provided that appropriate compiler options are specified. The code incorporates both capabilities built around a hardwired default generic VVER-440 Model V230 design and fairly general user-defined input. However, array dimensions are hardwired and must be changed by modifying the source code if the number of compartments/cells differs from the default number of nine. Detailed input instructions are provided as well as a description of outputs. Input files and selected output are presented for two sample problems run on both HP-735 and SUN SPARC workstations.
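
    The multicompartment approach can be sketched as volumes exchanging gas through junctions under explicit time stepping. The fragment below (isothermal ideal gas, a square-root junction law, made-up coefficients) is only a schematic of that idea, not PACER's actual correlations, units, or input format.

      #include <math.h>
      #include <stdio.h>

      int main(void)
      {
          double V[2] = { 50.0, 500.0 };    /* compartment volumes, m^3 */
          double m[2] = { 500.0, 600.0 };   /* gas inventories, kg */
          double RT   = 287.0 * 400.0;      /* gas constant x temperature */
          double C    = 0.01;               /* junction flow coefficient */
          double dt   = 0.001;              /* time step, s */

          for (int step = 0; step <= 2000; step++) {
              double P0 = m[0] * RT / V[0], P1 = m[1] * RT / V[1];
              if (step % 500 == 0)
                  printf("t=%5.2f s  P0=%8.0f Pa  P1=%8.0f Pa\n",
                         step * dt, P0, P1);
              double mdot = C * (P0 > P1 ?  sqrt(P0 - P1)    /* kg/s, from  */
                                         : -sqrt(P1 - P0));  /* high to low */
              m[0] -= mdot * dt;
              m[1] += mdot * dt;
          }
          return 0;
      }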

  13. SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized by different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs is: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR-13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR-14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.

  14. PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluating small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized by different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs is: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
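
    The scaled Taylor idea behind STEM can be sketched briefly: scale the matrix by 2^-s so the Taylor series converges quickly, sum a truncated series, then square the result s times. The fixed 2 x 2 size, scaling exponent, and term count below are illustrative; STEM's actual precision management (and PAWS's Pade approximation) are more elaborate.

      #include <math.h>
      #include <stdio.h>

      #define N 2
      typedef double Mat[N][N];

      static void matmul(Mat a, Mat b, Mat c)   /* c = a * b (aliasing-safe) */
      {
          Mat t = {{0}};
          for (int i = 0; i < N; i++)
              for (int k = 0; k < N; k++)
                  for (int j = 0; j < N; j++)
                      t[i][j] += a[i][k] * b[k][j];
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++) c[i][j] = t[i][j];
      }

      /* e = exp(a): scale a by 2^-s, sum `terms` Taylor terms, square s times */
      void expm(Mat a, Mat e, int s, int terms)
      {
          Mat as, term;
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++) {
                  as[i][j]   = a[i][j] / pow(2.0, s);
                  e[i][j]    = (i == j);        /* running sum, starts at I */
                  term[i][j] = (i == j);        /* holds as^k / k!          */
              }
          for (int k = 1; k <= terms; k++) {
              matmul(term, as, term);
              for (int i = 0; i < N; i++)
                  for (int j = 0; j < N; j++) {
                      term[i][j] /= k;
                      e[i][j] += term[i][j];
                  }
          }
          while (s-- > 0) matmul(e, e, e);      /* undo the scaling */
      }

      int main(void)
      {
          Mat a = {{ 0.0, 1.0 }, { 0.0, 0.0 }}, e;   /* exp(a) = I + a here */
          expm(a, e, 4, 12);
          printf("%g %g / %g %g\n", e[0][0], e[0][1], e[1][0], e[1][1]);
          return 0;
      }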

  15. Space station operating system study

    NASA Technical Reports Server (NTRS)

    Horn, Albert E.; Harwell, Morris C.

    1988-01-01

    The current phase of the Space Station Operating System study is based on the analysis, evaluation, and comparison of the operating systems implemented on the computer systems and workstations in the software development laboratory. Primary emphasis has been placed on the DEC MicroVMS operating system as implemented on the MicroVax II computer, with comparative analysis of the SUN UNIX system on the SUN 3/260 workstation computer, and to a limited extent, the IBM PC/AT microcomputer running PC-DOS. Some benchmark development and testing was also done for the Motorola MC68010 (VM03 system) before the system was taken from the laboratory. These systems were studied with the objective of determining their capability to support Space Station software development requirements, specifically for multi-tasking and real-time applications. The methodology utilized consisted of development, execution, and analysis of benchmark programs and test software, and the experimentation and analysis of specific features of the system or compilers in the study.
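
    A typical multi-tasking benchmark of the kind described measures message-passing and task-switch overhead by bouncing a byte between two processes. The sketch below (standard UNIX pipes, an arbitrary iteration count) is merely in the spirit of such tests, not one of the study's actual benchmark programs.

      #include <stdio.h>
      #include <sys/time.h>
      #include <unistd.h>

      int main(void)
      {
          int p2c[2], c2p[2], iters = 10000;
          char b = 0;
          if (pipe(p2c) || pipe(c2p)) return 1;
          if (fork() == 0) {                    /* child: echo bytes back */
              for (int i = 0; i < iters; i++) {
                  if (read(p2c[0], &b, 1) != 1) _exit(1);
                  if (write(c2p[1], &b, 1) != 1) _exit(1);
              }
              _exit(0);
          }
          struct timeval t0, t1;
          gettimeofday(&t0, NULL);
          for (int i = 0; i < iters; i++) {     /* parent: ping-pong */
              if (write(p2c[1], &b, 1) != 1) return 1;
              if (read(c2p[0], &b, 1) != 1) return 1;
          }
          gettimeofday(&t1, NULL);
          double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
          printf("%.1f us per round trip\n", us / iters);
          return 0;
      }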

  16. Satellite Data Processing System (SDPS) users manual V1.0

    NASA Technical Reports Server (NTRS)

    Caruso, Michael; Dunn, Chris

    1989-01-01

    SDPS is a menu-driven interactive program designed to facilitate the display and output of image and line-based data sets common to telemetry, modeling, and remote sensing. This program can be used to display up to four separate raster images and overlay line-based data such as coastlines, ship tracks, and velocity vectors. The program uses multiple windows to communicate information with the user. At any given time, the program may have up to four image display windows as well as auxiliary windows containing information about each image displayed. SDPS is not a commercial program. It does not contain complete type checking or error diagnostics, so the program may crash. Known anomalies are mentioned in the appropriate sections as notes or cautions. SDPS was designed to be used on Sun Microsystems workstations running SunView1 (Sun Visual/Integrated Environment for Workstations). It was primarily designed to be used on workstations equipped with color monitors, but most of the line-based functions and several of the raster-based functions can be used with monochrome monitors. The program currently runs on Sun 3 series workstations running SunOS 4.0 and should port easily to Sun 4 and Sun 386 series workstations with SunView1. Users should also be familiar with UNIX, Sun workstations, and the SunView window system.

  17. RATIO_TOOL - SOFTWARE FOR COMPUTING IMAGE RATIOS

    NASA Technical Reports Server (NTRS)

    Yates, G. L.

    1994-01-01

    Geological studies analyze spectral data in order to gain information on surface materials. RATIO_TOOL is an interactive program for viewing and analyzing large multispectral image data sets that have been created by an imaging spectrometer. While the standard approach to classification of multispectral data is to match the spectrum for each input pixel against a library of known mineral spectra, RATIO_TOOL uses ratios of spectral bands in order to spot significant areas of interest within a multispectral image. Each image band can be viewed iteratively, or a selected image band of the data set can be requested and displayed. When the image ratios are computed, the result is displayed as a gray scale image. At this point a histogram option helps in viewing the distribution of values. A thresholding option can then be used to segment the ratio image result into two to four classes. The segmented image is then color coded to indicate threshold classes and displayed alongside the gray scale image. RATIO_TOOL is written in C language for Sun series computers running SunOS 4.0 and later. It requires the XView toolkit and the OpenWindows window manager (version 2.0 or 3.0). The XView toolkit is distributed with OpenWindows. A color monitor is also required. The standard distribution medium for RATIO_TOOL is a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation is included on the program media. RATIO_TOOL was developed in 1992 and is a copyrighted work with all copyright vested in NASA. Sun, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
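
    The core operation, ratioing two bands and thresholding the result into classes, can be sketched in a few lines of C. The band values and class boundaries below are illustrative assumptions, not RATIO_TOOL's own defaults.

      #include <stdio.h>

      #define NPIX 6

      int main(void)
      {
          float band1[NPIX] = { 10, 40, 80, 120, 200, 250 };
          float band2[NPIX] = { 20, 40, 40,  60,  50,  50 };
          float thresh[3]   = { 0.75f, 1.5f, 3.0f };        /* class boundaries */

          for (int i = 0; i < NPIX; i++) {
              float r = band2[i] > 0 ? band1[i] / band2[i] : 0;  /* band ratio */
              int cls = 0;
              while (cls < 3 && r >= thresh[cls]) cls++;  /* segment into classes */
              printf("pixel %d: ratio %.2f -> class %d\n", i, r, cls);
          }
          return 0;
      }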

  18. PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluating small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized by different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs is: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.

  19. COMPPAP - COMPOSITE PLATE BUCKLING ANALYSIS PROGRAM (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Smith, J. P.

    1994-01-01

    The Composite Plate Buckling Analysis Program (COMPPAP) was written to help engineers determine buckling loads of orthotropic (or isotropic) irregularly shaped plates without requiring hand calculations from design curves or extensive finite element modeling. COMPPAP is a one-element finite element program that utilizes high-order displacement functions. The high order of the displacement functions enables the user to produce results more accurate than those of traditional h-version finite elements. This program uses these high-order displacement functions to perform a plane stress analysis of a general plate followed by a buckling calculation based on the stresses found in the plane stress solution. The current version assumes a flat plate (constant thickness) subject to a constant edge load (normal or shear) on one or more edges. COMPPAP uses the power method to find the eigenvalues of the buckling problem. The power method provides an efficient solution when only one eigenvalue is desired. Once the eigenvalue is found, the eigenvector, which corresponds to the plate buckling mode shape, results as a by-product. A positive feature of the power method is that the dominant eigenvalue is the first found, which in this case is the plate buckling load. The reported eigenvalue expresses a load factor to induce plate buckling. COMPPAP is written in ANSI FORTRAN 77. Two machine versions are available from COSMIC: a PC version (MSC-22428), which is for IBM PC 386 series and higher computers and compatibles running MS-DOS; and a UNIX version (MSC-22286). The distribution medium for both machine versions includes source code for both single and double precision versions of COMPPAP. The PC version includes source code which has been optimized for implementation within DOS memory constraints as well as sample executables for both the single and double precision versions of COMPPAP. The double precision versions of COMPPAP have been successfully implemented on an IBM PC 386 compatible running MS-DOS, a Sun4 series computer running SunOS, an HP-9000 series computer running HP-UX, and a CRAY X-MP series computer running UNICOS. COMPPAP requires 1Mb of RAM and the BLAS and LINPACK math libraries, which are included on the distribution medium. The COMPPAP documentation provides instructions for using the commercial post-processing package PATRAN for graphical interpretation of COMPPAP output. The UNIX version includes two electronic versions of the documentation: one in LaTeX format and one in PostScript format. The standard distribution medium for the PC version (MSC-22428) is a 5.25 inch 1.2Mb MS-DOS format diskette. The standard distribution medium for the UNIX version (MSC-22286) is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. COMPPAP was developed in 1992.
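
    The power method mentioned above is simple to sketch: repeatedly multiply a trial vector by the matrix and renormalize; the norm converges to the dominant eigenvalue and the vector to the corresponding mode shape. The small symmetric matrix below is an arbitrary illustration, not a COMPPAP stiffness matrix.

      #include <math.h>
      #include <stdio.h>

      #define N 3

      int main(void)
      {
          double a[N][N] = { { 4, 1, 0 }, { 1, 3, 1 }, { 0, 1, 2 } };
          double v[N] = { 1, 1, 1 }, w[N], lambda = 0;

          for (int it = 0; it < 100; it++) {
              double norm = 0, prev = lambda;
              for (int i = 0; i < N; i++) {           /* w = A v */
                  w[i] = 0;
                  for (int j = 0; j < N; j++) w[i] += a[i][j] * v[j];
                  norm += w[i] * w[i];
              }
              norm = sqrt(norm);
              lambda = norm;                          /* -> dominant eigenvalue */
              for (int i = 0; i < N; i++) v[i] = w[i] / norm;
              if (fabs(lambda - prev) < 1e-12) break; /* converged */
          }
          printf("dominant eigenvalue %.6f\n", lambda);  /* 3+sqrt(3) ~ 4.732051 */
          printf("mode shape %.4f %.4f %.4f\n", v[0], v[1], v[2]);
          return 0;
      }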

  20. COMPPAP - COMPOSITE PLATE BUCKLING ANALYSIS PROGRAM (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Smith, J. P.

    1994-01-01

    The Composite Plate Buckling Analysis Program (COMPPAP) was written to help engineers determine buckling loads of orthotropic (or isotropic) irregularly shaped plates without requiring hand calculations from design curves or extensive finite element modeling. COMPPAP is a one-element finite element program that utilizes high-order displacement functions. The high order of the displacement functions enables the user to produce results more accurate than those of traditional h-version finite elements. This program uses these high-order displacement functions to perform a plane stress analysis of a general plate followed by a buckling calculation based on the stresses found in the plane stress solution. The current version assumes a flat plate (constant thickness) subject to a constant edge load (normal or shear) on one or more edges. COMPPAP uses the power method to find the eigenvalues of the buckling problem. The power method provides an efficient solution when only one eigenvalue is desired. Once the eigenvalue is found, the eigenvector, which corresponds to the plate buckling mode shape, results as a by-product. A positive feature of the power method is that the dominant eigenvalue is the first found, which in this case is the plate buckling load. The reported eigenvalue expresses a load factor to induce plate buckling. COMPPAP is written in ANSI FORTRAN 77. Two machine versions are available from COSMIC: a PC version (MSC-22428), which is for IBM PC 386 series and higher computers and compatibles running MS-DOS; and a UNIX version (MSC-22286). The distribution medium for both machine versions includes source code for both single and double precision versions of COMPPAP. The PC version includes source code which has been optimized for implementation within DOS memory constraints as well as sample executables for both the single and double precision versions of COMPPAP. The double precision versions of COMPPAP have been successfully implemented on an IBM PC 386 compatible running MS-DOS, a Sun4 series computer running SunOS, an HP-9000 series computer running HP-UX, and a CRAY X-MP series computer running UNICOS. COMPPAP requires 1Mb of RAM and the BLAS and LINPACK math libraries, which are included on the distribution medium. The COMPPAP documentation provides instructions for using the commercial post-processing package PATRAN for graphical interpretation of COMPPAP output. The UNIX version includes two electronic versions of the documentation: one in LaTeX format and one in PostScript format. The standard distribution medium for the PC version (MSC-22428) is a 5.25 inch 1.2Mb MS-DOS format diskette. The standard distribution medium for the UNIX version (MSC-22286) is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. COMPPAP was developed in 1992.

  1. Characteristics of Operational Space Weather Forecasting: Observations and Models

    NASA Astrophysics Data System (ADS)

    Berger, Thomas; Viereck, Rodney; Singer, Howard; Onsager, Terry; Biesecker, Doug; Rutledge, Robert; Hill, Steven; Akmaev, Rashid; Milward, George; Fuller-Rowell, Tim

    2015-04-01

    In contrast to research observations, models, and ground support systems, operational systems are characterized by real-time data streams and run schedules, with redundant backup systems for most elements of the system. We review the characteristics of operational space weather forecasting, concentrating on the key aspects of ground- and space-based observations that feed models of the coupled Sun-Earth system at the NOAA/Space Weather Prediction Center (SWPC). Building on the infrastructure of the National Weather Service, SWPC is working toward a fully operational system based on the GOES weather satellite system (constant real-time operation with back-up satellites), the newly launched DSCOVR satellite at L1 (constant real-time data network with AFSCN backup), and operational models of the heliosphere, magnetosphere, and ionosphere/thermosphere/mesosphere systems run on the Weather and Climate Operational Supercomputing System (WCOSS), one of the world's largest and fastest operational computer systems, which will be upgraded to a dual 2.5 Pflop system in 2016. We review plans for further operational space weather observing platforms being developed in the context of the Space Weather Operations Research and Mitigation (SWORM) task force in the Office of Science and Technology Policy (OSTP) at the White House. We also review the current operational model developments at SWPC, concentrating on the differences between the research codes and the modified real-time versions that must run with zero fault tolerance on the WCOSS systems. Understanding the characteristics and needs of the operational forecasting community is key to producing research into the coupled Sun-Earth system with maximal societal benefit.

  2. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (SUN4 VERSION WITH MOTIF)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.
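
    The WPT call pattern described above can be illustrated with a short C sketch. All names and signatures below (Wpt_Init, Wpt_NewPanel, Wpt_NextEvent) are hypothetical stand-ins, stubbed so the fragment compiles on its own; the real routines are defined by the licensed TAE Plus libraries and documentation.

        /* Hypothetical stand-ins for WPT routines, stubbed so this sketch
         * compiles by itself; in a real application they would come from
         * the TAE Plus libraries. */
        #include <stdio.h>

        typedef struct { const char *object; const char *event; } WptEvent;

        static int Wpt_Init(void) { return 0; }                          /* stub */
        static int Wpt_NewPanel(const char *res) { return res != NULL; } /* stub */
        static int Wpt_NextEvent(WptEvent *ev) { (void)ev; return 0; }   /* stub */

        int main(void)
        {
            WptEvent ev;

            if (Wpt_Init() != 0)
                return 1;

            /* Layout, fonts, and colors come from the WorkBench-generated
             * resource file, so the interface can change without
             * recompiling or relinking the application. */
            if (!Wpt_NewPanel("main_screen.res"))
                return 1;

            while (Wpt_NextEvent(&ev))    /* event-dispatch loop */
                printf("object %s raised event %s\n", ev.object, ev.event);

            return 0;
        }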

  3. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (SUN4 VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.

  4. Improvement of program to calculate electronic properties of narrow band gap materials

    NASA Technical Reports Server (NTRS)

    Patterson, James D.; Abdelhakiem, Wafaa

    1991-01-01

The program was improved by reprogramming it so that it runs on both a SUN and a VAX. It is also easily transportable, as it is supplied on a disk for use on a SUN. A computer literature search resulted in some improved parameters for Hg(1-x)Cd(x)Te and a table of parameters for Hg(1-x)Zn(x)Te. The effects of neutral defects were added to the program, and it was found, as expected, that they contribute very little to the mobility at temperatures of interest. The effects of varying the following parameters were also added: dielectric constants, screening parameters, disorder energies, donor and acceptor concentrations, momentum matrix element, different expressions for energy gap, and transverse effective charge.

  5. NQS - NETWORK QUEUING SYSTEM, VERSION 2.0 (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Walter, H.

    1994-01-01

The Network Queuing System, NQS, is a versatile batch and device queuing facility for a single Unix computer or a group of networked computers. With the Unix operating system as a common interface, the user can invoke the NQS collection of user-space programs to move batch and device jobs freely around the different computer hardware tied into the network. NQS provides facilities for remote queuing, request routing, remote status, queue status controls, batch request resource quota limits, and remote output return. This program was developed as part of an effort aimed at tying together diverse UNIX-based machines into NASA's Numerical Aerodynamic Simulator Processing System Network. This revision of NQS allows for creating, deleting, adding, and setting complexes that aid in limiting the number of requests to be handled at one time. It also has improved device-oriented queues along with some revision of the displays. NQS was designed to meet the following goals: 1) Provide for the full support of both batch and device requests. 2) Support all of the resource quotas enforceable by the underlying UNIX kernel implementation that are relevant to any particular batch request and its corresponding batch queue. 3) Support remote queuing and routing of batch and device requests throughout the NQS network. 4) Support queue access restrictions through user and group access lists for all queues. 5) Enable networked output return of both output and error files to possibly remote machines. 6) Allow mapping of accounts across machine boundaries. 7) Provide friendly configuration and modification mechanisms for each installation. 8) Support status operations across the network, without requiring a user to log in on remote target machines. 9) Provide for file staging or copying of files for movement to the actual execution machine. To support batch and device requests, NQS v.2 implements three queue types--batch, device, and pipe. Batch queues hold and prioritize batch requests; device queues hold and prioritize device requests; pipe queues transport both batch and device requests to other batch, device, or pipe queues at local or remote machines. Unique to batch queues are resource quota limits that restrict the amounts of different resources that a batch request can consume during execution. Unique to each device queue is a set of one or more devices, such as a line printer, to which requests can be sent for execution. Pipe queues have associated destinations to which they route and deliver requests. If the proper destination machine is down or unreachable, pipe queues are able to requeue the request and deliver it later when the destination is available. All NQS network conversations are performed using the Berkeley socket mechanism as ported into the respective vendor kernels. NQS is written in the C language. The generic UNIX version (ARC-13179) has been successfully implemented on a variety of UNIX platforms, including Sun3 and Sun4 series computers, SGI IRIS computers running IRIX 3.3, DEC computers running ULTRIX 4.1, AMDAHL computers running UTS 1.3 and 2.1, and platforms running BSD 4.3 UNIX. The IBM RS/6000 AIX version (COS-10042) is a vendor port. NQS 2.0 will also communicate with the Cray Research, Inc. and Convex, Inc. versions of NQS. The standard distribution medium for either machine version of NQS 2.0 is a 60Mb, QIC-24, .25 inch streaming magnetic tape cartridge in UNIX tar format. Upon request the generic UNIX version (ARC-13179) can be provided in UNIX tar format on alternate media.
Please contact COSMIC to discuss the availability and cost of media to meet your specific needs. An electronic copy of the NQS 2.0 documentation is included on the program media. NQS 2.0 was released in 1991. The IBM RS/6000 port of NQS was developed in 1992. IRIX is a trademark of Silicon Graphics Inc. IRIS is a registered trademark of Silicon Graphics Inc. UNIX is a registered trademark of UNIX System Laboratories Inc. Sun3 and Sun4 are trademarks of Sun Microsystems Inc. DEC and ULTRIX are trademarks of Digital Equipment Corporation.
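
    The abstract notes that all NQS network conversations use the Berkeley socket mechanism. The minimal C client below shows that mechanism in isolation; the host name, port number, and request text are placeholders for illustration, not part of the actual NQS protocol.

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <netdb.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>

        int main(void)
        {
            struct hostent    *host;
            struct sockaddr_in addr;
            int fd;

            /* placeholder host name; NQS daemons use their own conventions */
            if ((host = gethostbyname("queue-host")) == NULL)
                return 1;

            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_port   = htons(6000);            /* placeholder port */
            memcpy(&addr.sin_addr, host->h_addr_list[0], host->h_length);

            if ((fd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
                return 1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0)
                return 1;

            write(fd, "status\n", 7);                 /* illustrative request */
            close(fd);
            return 0;
        }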

  6. SPECIES - EVALUATING THERMODYNAMIC PROPERTIES, TRANSPORT PROPERTIES & EQUILIBRIUM CONSTANTS OF AN 11-SPECIES AIR MODEL

    NASA Technical Reports Server (NTRS)

    Thompson, R. A.

    1994-01-01

Accurate numerical prediction of high-temperature, chemically reacting flowfields requires a knowledge of the physical properties and reaction kinetics for the species involved in the reacting gas mixture. Assuming an 11-species air model at temperatures below 30,000 degrees Kelvin, SPECIES (Computer Codes for the Evaluation of Thermodynamic Properties, Transport Properties, and Equilibrium Constants of an 11-Species Air Model) computes values for the species thermodynamic and transport properties, diffusion coefficients and collision cross sections for any combination of the eleven species, and reaction rates for the twenty reactions normally occurring. The species represented in the model are diatomic nitrogen, diatomic oxygen, atomic nitrogen, atomic oxygen, nitric oxide, ionized nitric oxide, the free electron, ionized atomic nitrogen, ionized atomic oxygen, ionized diatomic nitrogen, and ionized diatomic oxygen. Sixteen subroutines compute the following properties for either a single species, interaction pair, or reaction, or for an array of all species, pairs, or reactions: species specific heat and static enthalpy, species viscosity, species frozen thermal conductivity, diffusion coefficient, collision cross section (OMEGA 1,1), collision cross section (OMEGA 2,2), collision cross section ratio, and equilibrium constant. The program uses least squares polynomial curve-fits of the most accurate data believed available to provide the requested values more quickly than is possible with table look-up methods. The subroutines for computing transport coefficients and collision cross sections use additional code to correct for any electron pressure when working with ionic species. SPECIES was developed on a SUN 3/280 computer running the SunOS 3.5 operating system. It is written in standard FORTRAN 77 for use on any machine, and requires roughly 92K of memory. The standard distribution medium for SPECIES is a 5.25 inch 360K MS-DOS format diskette. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. This program was last updated in 1991. SUN and SunOS are registered trademarks of Sun Microsystems, Inc.
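
    The curve-fit approach mentioned above, polynomial fits evaluated in place of table look-ups, can be sketched in a few lines of C. The coefficients and the property are invented for the example; SPECIES carries its own least-squares fits in FORTRAN 77.

        #include <stdio.h>

        /* Evaluate a fitted polynomial property(T) by Horner's rule,
         * avoiding any table look-up or interpolation. */
        static double poly_eval(const double c[], int n, double T)
        {
            double y = 0.0;
            int i;
            for (i = n - 1; i >= 0; --i)
                y = y * T + c[i];
            return y;
        }

        int main(void)
        {
            /* invented fit: property(T) = c0 + c1*T + c2*T^2 */
            const double c[] = { 1.0e3, 2.5e-1, -3.0e-6 };
            printf("property at 5000 K: %g\n", poly_eval(c, 3, 5000.0));
            return 0;
        }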

  7. The effect of surface boundary conditions on the climate generated by a coarse-mesh general circulation model

    NASA Technical Reports Server (NTRS)

    Cohen, C.

    1981-01-01

A hierarchy of experiments was run, starting with an all-water planet with zonally symmetric sea surface temperatures, then adding, one at a time, flat continents, mountains, surface physics, and realistic sea surface temperatures. The model was run with the sun fixed at a perpetual January. Ensemble means and standard deviations were computed and the t-test was used to determine the statistical significance of the results. The addition of realistic surface physics does not affect the model climatology to as large an extent as does the addition of mountains. Departures from zonal symmetry of the SST field result in a better simulation of the real atmosphere.
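
    The significance testing mentioned above amounts to a two-sample t statistic on ensemble means. A minimal C sketch, with invented ensemble values, shows the computation (Welch's form, with unbiased sample variances).

        #include <stdio.h>
        #include <math.h>

        static double t_statistic(const double *a, int na,
                                  const double *b, int nb)
        {
            double ma = 0, mb = 0, va = 0, vb = 0;
            int i;

            for (i = 0; i < na; i++) ma += a[i];
            ma /= na;
            for (i = 0; i < nb; i++) mb += b[i];
            mb /= nb;
            for (i = 0; i < na; i++) va += (a[i] - ma) * (a[i] - ma);
            for (i = 0; i < nb; i++) vb += (b[i] - mb) * (b[i] - mb);
            va /= na - 1;                   /* unbiased sample variances */
            vb /= nb - 1;

            return (ma - mb) / sqrt(va / na + vb / nb);
        }

        int main(void)
        {
            /* invented ensemble means from two model configurations */
            const double run1[] = { 288.1, 288.4, 287.9, 288.3 };
            const double run2[] = { 289.0, 288.8, 289.2, 288.9 };
            printf("t = %.3f\n", t_statistic(run1, 4, run2, 4));
            return 0;
        }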

  8. SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
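
    As a simplified illustration of the STEM approach named above, the sketch below computes exp(Q*t) for an invented two-state Markov generator by a scaled, truncated Taylor series followed by repeated squaring. The real STEM program adds precision tracking and handles large, stiff models.

        #include <stdio.h>
        #include <string.h>

        #define N 2   /* two-state model: 0 = working, 1 = failed */

        static void matmul(double c[N][N], double a[N][N], double b[N][N])
        {
            double t[N][N] = {{0}};
            int i, j, k;
            for (i = 0; i < N; i++)
                for (j = 0; j < N; j++)
                    for (k = 0; k < N; k++)
                        t[i][j] += a[i][k] * b[k][j];
            memcpy(c, t, sizeof t);        /* safe even when c aliases a,b */
        }

        int main(void)
        {
            /* invented generator: failure rate 1e-4 per hour */
            double Q[N][N] = { { -1.0e-4, 1.0e-4 }, { 0.0, 0.0 } };
            double t = 10.0, P[N][N], term[N][N], A[N][N];
            int i, j, k, s = 10;           /* scale by 2^10 */

            for (i = 0; i < N; i++)        /* A = Q*t / 2^s */
                for (j = 0; j < N; j++)
                    A[i][j] = Q[i][j] * t / (1 << s);

            for (i = 0; i < N; i++)        /* P = I + A; term = A */
                for (j = 0; j < N; j++) {
                    P[i][j] = (i == j) + A[i][j];
                    term[i][j] = A[i][j];
                }
            for (k = 2; k <= 8; k++) {     /* + A^2/2! + ... + A^8/8! */
                matmul(term, term, A);
                for (i = 0; i < N; i++)
                    for (j = 0; j < N; j++) {
                        term[i][j] /= k;
                        P[i][j] += term[i][j];
                    }
            }
            for (k = 0; k < s; k++)        /* undo scaling: square s times */
                matmul(P, P, P);

            printf("P(still working after %g hr) = %.10f\n", t, P[0][0]);
            return 0;
        }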

  9. SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.

  10. Theoretical and experimental studies in support of the geophysical fluid flow experiment

    NASA Technical Reports Server (NTRS)

    Hart, J.; Toomre, J.; Gilman, P.

    1984-01-01

    Computer programming was completed for digital acquisition of temperature and velocity data generated by the Geophysical Fluid Flow Cell (GFFC) during the upcoming Spacelab 3 mission. A set of scenarios was developed which covers basic electro-hydrodynamic instability, highly supercritical convection with isothermal boundaries, convection with imposed thermal forcing, and some stably stratified runs to look at large-scale thermohaline ocean circulations. The extent to which the GFFC experimental results apply to more complicated circumstances within the Sun or giant planets was assessed.

  11. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  12. Interactions of the polarization and the sun compass in path integration of desert ants.

    PubMed

    Lebhardt, Fleur; Ronacher, Bernhard

    2014-08-01

Desert ants, Cataglyphis fortis, perform large-scale foraging trips in their featureless habitat using path integration as their main navigation tool. To determine their walking direction they use primarily celestial cues, the sky's polarization pattern and the sun position. To examine the relative importance of these two celestial cues, we performed cue conflict experiments. We manipulated the polarization pattern experienced by the ants during their outbound foraging excursions, reducing it to a single electric field (e-)vector direction with a linear polarization filter. The simultaneous view of the sun created situations in which the directional information of the sun and the polarization compass disagreed. The heading directions of the homebound runs recorded on a test field with full view of the natural sky demonstrate that neither compass completely dominated the other. Rather, the ants seemed to compute an intermediate homing direction to which both compass systems contributed roughly equally. Direct sunlight and polarized light are detected in different regions of the ant's compound eye, suggesting two separate pathways for obtaining directional information. In the experimental paradigm applied here, these two pathways seem to feed into the path integrator with similar weights.
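
    The intermediate homing direction reported above is what a roughly equal-weight integration of two compass bearings would produce. The C sketch below, with invented cue directions, computes such a weighted circular mean.

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            const double PI = 3.14159265358979323846;
            double pol_deg = 0.0, sun_deg = 60.0;  /* invented conflicting cues */
            double w_pol   = 0.5, w_sun   = 0.5;   /* roughly equal weights */
            double rad = PI / 180.0;

            /* average the unit vectors, then take the resultant direction */
            double x = w_pol * cos(pol_deg * rad) + w_sun * cos(sun_deg * rad);
            double y = w_pol * sin(pol_deg * rad) + w_sun * sin(sun_deg * rad);

            printf("intermediate heading: %.1f deg\n", atan2(y, x) / rad);
            return 0;
        }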

  13. FTC - THE FAULT-TREE COMPILER (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault-tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) at a user-specified number of digits accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree such as a component failure rate or a specific event probability by allowing the user to vary one failure rate or the failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
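
    A toy version of the computation FTC performs: assuming independent basic events, an AND gate multiplies probabilities and an OR gate complements the product of complements. The tree and the event probabilities below are invented; FTC itself adds EXCLUSIVE OR, INVERT, and M OF N gates, rigorous error bounds, and sensitivity analysis.

        #include <stdio.h>

        /* gate formulas for independent events */
        static double and2(double a, double b) { return a * b; }
        static double or2 (double a, double b) { return 1.0 - (1.0 - a) * (1.0 - b); }

        int main(void)
        {
            double pump   = 1.0e-4;   /* invented basic-event probabilities */
            double valve  = 2.0e-4;
            double backup = 5.0e-3;

            /* TOP = (pump OR valve) AND backup */
            double top = and2(or2(pump, valve), backup);
            printf("top-event probability = %.3e\n", top);
            return 0;
        }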

  14. Tracking Positions and Attitudes of Mars Rovers

    NASA Technical Reports Server (NTRS)

    Ali, Khaled; vanelli, Charles; Biesiadecki, Jeffrey; Martin, Alejandro San; Maimone, Mark; Cheng, Yang; Alexander, James

    2006-01-01

    The Surface Attitude Position and Pointing (SAPP) software, which runs on computers aboard the Mars Exploration Rovers, tracks the positions and attitudes of the rovers on the surface of Mars. Each rover acquires data on attitude from a combination of accelerometer readings and images of the Sun acquired autonomously, using a pointable camera to search the sky for the Sun. Depending on the nature of movement commanded remotely by operators on Earth, the software propagates attitude and position by use of either (1) accelerometer and gyroscope readings or (2) gyroscope readings and wheel odometry. Where necessary, visual odometry is performed on images to fine-tune the position updates, particularly on high-wheel-slip terrain. The attitude data are used by other software and ground-based personnel for pointing a high-gain antenna, planning and execution of driving, and positioning and aiming scientific instruments.
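
    The gyroscope-plus-odometry propagation mode described above reduces, in the planar case, to dead reckoning. The C sketch below, with invented yaw and distance increments, shows the position update; the flight software works in three dimensions and folds in accelerometer data and visual-odometry corrections.

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            double x = 0.0, y = 0.0;    /* rover position, metres */
            double heading = 0.0;       /* radians, from gyro integration */
            int step;

            /* ten small drive steps, each reporting a yaw change and a
             * wheel-odometry distance (values invented) */
            for (step = 0; step < 10; step++) {
                double dyaw = 0.02;     /* gyro yaw increment, rad */
                double dist = 0.15;     /* odometry distance, m */

                heading += dyaw;
                x += dist * cos(heading);
                y += dist * sin(heading);
            }
            printf("position estimate: (%.3f, %.3f) m, heading %.3f rad\n",
                   x, y, heading);
            return 0;
        }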

  15. Developments in REDES: The rocket engine design expert system

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth O.

    1990-01-01

The Rocket Engine Design Expert System (REDES) is being developed at NASA-Lewis to collect, automate, and perpetuate the existing expertise of performing a comprehensive rocket engine analysis and design. Currently, REDES uses the rigorous JANNAF methodology to analyze the performance of the thrust chamber and perform computational studies of liquid rocket engine problems. The following computer codes were included in REDES: a gas properties program named GASP, a nozzle design program named RAO, a regenerative cooling channel performance evaluation code named RTE, and the JANNAF standard liquid rocket engine performance prediction code TDK (including performance evaluation modules ODE, ODK, TDE, TDK, and BLM). Computational analyses are being conducted by REDES to provide solutions to liquid rocket engine thrust chamber problems. REDES is built in the Knowledge Engineering Environment (KEE) expert system shell and runs on a Sun 4/110 computer.

  16. Developments in REDES: The Rocket Engine Design Expert System

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth O.

    1990-01-01

    The Rocket Engine Design Expert System (REDES) was developed at NASA-Lewis to collect, automate, and perpetuate the existing expertise of performing a comprehensive rocket engine analysis and design. Currently, REDES uses the rigorous JANNAF methodology to analyze the performance of the thrust chamber and perform computational studies of liquid rocket engine problems. The following computer codes were included in REDES: a gas properties program named GASP; a nozzle design program named RAO; a regenerative cooling channel performance evaluation code named RTE; and the JANNAF standard liquid rocket engine performance prediction code TDK (including performance evaluation modules ODE, ODK, TDE, TDK, and BLM). Computational analyses are being conducted by REDES to provide solutions to liquid rocket engine thrust chamber problems. REDES was built in the Knowledge Engineering Environment (KEE) expert system shell and runs on a Sun 4/110 computer.

  17. Value-Range Analysis of C Programs

    NASA Astrophysics Data System (ADS)

    Simon, Axel

    In 1988, Robert T. Morris exploited a so-called buffer-overflow bug in finger (a dæmon whose job it is to return information on local users) to mount a denial-of-service attack on hundreds of VAX and Sun-3 computers [159]. He created what is nowadays called a worm; that is, a crafted stream of bytes that, when sent to a computer over the network, utilises a buffer-overflow bug in the software of that computer to execute code encoded in the byte stream. In the case of a worm, this code will send the very same byte stream to other computers on the network, thereby creating an avalanche of network traffic that ultimately renders the network and all computers involved in replicating the worm inaccessible. Besides duplicating themselves, worms can alter data on the host that they are running on. The most famous example in recent years was the MSBlaster32 worm, which altered the configuration database on many Microsoft Windows machines, thereby forcing the computers to reboot incessantly. Although this worm was rather benign, it caused huge damage to businesses who were unable to use their IT infrastructure for hours or even days after the appearance of the worm. A more malicious worm is certainly conceivable [187] due to the fact that worms are executed as part of a dæmon (also known as "service" on Windows machines) and thereby run at a privileged level, allowing access to any data stored on the remote computer. While the deletion of data presents a looming threat to valuable information, even more serious uses are espionage and theft, in particular because worms do not have to affect the running system and hence may be impossible to detect.
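
    The vulnerable pattern behind such exploits is easy to show in C: a fixed-size stack buffer filled from untrusted input with no length check. The fragment below is illustrative only; input longer than the buffer overwrites adjacent stack memory, including the return address, which is how a crafted byte stream gets its payload executed.

        #include <stdio.h>
        #include <string.h>

        static void handle_request(const char *network_input)
        {
            char buffer[64];

            /* BUG: no bounds check; input longer than 64 bytes overflows
             * the buffer and corrupts adjacent stack memory. A safe
             * version would bound the copy, e.g.
             * strncpy(buffer, network_input, sizeof buffer - 1) followed
             * by explicit NUL termination. */
            strcpy(buffer, network_input);

            printf("looking up user: %s\n", buffer);
        }

        int main(void)
        {
            handle_request("alice");   /* benign input; a worm's would not be */
            return 0;
        }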

  18. Mobile Transactional Modelling: From Concepts to Incremental Knowledge

    NASA Astrophysics Data System (ADS)

    Launders, Ivan; Polovina, Simon; Hill, Richard


  19. UNIX-based operating systems robustness evaluation

    NASA Technical Reports Server (NTRS)

    Chang, Yu-Ming

    1996-01-01

Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems, which include Digital Equipment's OSF/1, Hewlett Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The methodology for evaluation tested (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems. Results included exit status of the exception generator and the system state. Resource management techniques used by individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of the workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradations were observed on the systems under stress.
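
    The resource-usurpation experiments described above can be sketched as a simple C probe that requests memory until the system refuses. The block size and safety cap below are invented; a real harness would also record exit status and system state, as the thesis methodology does.

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            const size_t block = 1 << 20;            /* request 1 Mb at a time */
            size_t total = 0;
            void *p;

            while (total < (size_t)512 * (1 << 20)) { /* safety cap: 512 Mb */
                if ((p = malloc(block)) == NULL) {
                    printf("allocation refused after %lu Mb\n",
                           (unsigned long)(total >> 20));
                    return 0;
                }
                /* touch the block so the memory is really committed;
                 * leaking it is deliberate, since usurpation is the test */
                *(volatile char *)p = 1;
                total += block;
            }
            printf("cap reached at %lu Mb without refusal\n",
                   (unsigned long)(total >> 20));
            return 0;
        }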

  20. 78 FR 14697 - Final Flood Elevation Determinations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-07

[Tabular matter from the Federal Register notice: final (modified) flood elevations, given in feet above ground and in meters (MSL), for flooding sources in Cecil County, Maryland. Entries include Tributary 1 to Stone Run (+271 feet at the Stone Run confluence; +359 feet approximately 460 feet downstream of Pierce Road) and Tributary 2 to Stone Run at the Stone Run confluence; communities affected are the Town of Rising Sun and the unincorporated areas of Cecil County.]

  1. HZEFRG1 - SEMIEMPIRICAL NUCLEAR FRAGMENTATION MODEL

    NASA Technical Reports Server (NTRS)

    Townsend, L. W.

    1994-01-01

The high charge and energy (HZE) Semiempirical Nuclear Fragmentation Model, HZEFRG1, was developed to provide a computationally efficient, user-friendly, physics-based program package for generating nuclear fragmentation databases. These databases can then be used in radiation transport applications such as space radiation shielding and dosimetry, cancer therapy with laboratory heavy ion beams, and simulation studies of detector design in nuclear physics experiments. The program provides individual element and isotope production cross sections for the breakup of high energy heavy ions by the combined nuclear and Coulomb fields of the interacting nuclei. The nuclear breakup contributions are estimated using an energy-dependent abrasion-ablation model of heavy ion fragmentation. The abrasion step involves removal of nucleons by direct knockout in the overlap region of the colliding nuclei. The abrasions are treated on a geometric basis and uniform spherical nuclear density distributions are assumed. Actual experimental nuclear radii obtained from tabulations of electron scattering data are incorporated. Nuclear transparency effects are included by using an energy-dependent, impact-parameter-dependent average transmission factor for the projectile and target nuclei, which accounts for the finite mean free path of nucleons in nuclear matter. The ablation step, as implemented by Bowman, Swiatecki, and Tsang (LBL report no. LBL-2908, July 1973), was treated as a single-nucleon emission for every 10 MeV of excitation energy. Fragmentation contributions from electromagnetic dissociation (EMD) processes, arising from the interacting Coulomb fields, are estimated by using the Weizsäcker-Williams theory, extended to include electric dipole and electric quadrupole contributions to one-nucleon removal cross sections. HZEFRG1 consists of a main program, seven function subprograms, and thirteen subroutines. Each is fully commented and begins with a brief description of its functionality. The inputs, which are provided interactively by the user in response to on-screen questions, consist of the projectile kinetic energy in units of MeV/nucleon and the masses and charges of the projectile and target nuclei. With proper inputs, HZEFRG1 first calculates the EMD cross sections and then begins the calculations for nuclear fragmentation by searching through a specified number of isotopes for each charge number (Z) from Z=1 (hydrogen) to the charge of the incident fragmenting nucleus (Zp). After completing the nuclear fragmentation cross sections, HZEFRG1 sorts through the results and writes the sorted output to a file in descending order, based on the charge number of the fragmented nucleus. Details of the theory, extensive comparisons of its predictions with available experimental cross section data, and a complete description of the code implementing it are given in the program documentation. HZEFRG1 is written in ANSI FORTRAN 77 to be machine independent. It was originally developed on a DEC VAX series computer, and has been successfully implemented on a DECstation running RISC ULTRIX 4.3, a Sun4 series computer running SunOS 4.1, an HP 9000 series computer running HP-UX 8.0.1, a Cray Y-MP series computer running UNICOS, and IBM PC series computers running MS-DOS 3.3 and higher. HZEFRG1 requires 1Mb of RAM for execution. In addition, a FORTRAN 77 compiler is required to create an executable. A sample output run is included on the distribution medium for numerical comparison.
The standard distribution medium for this program is a 3.5 inch 1.44Mb MS-DOS format diskette. Alternate distribution media and formats are available upon request. HZEFRG1 was completed in 1992.
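
    The ablation rule quoted above (one nucleon emitted per 10 MeV of excitation energy) is direct to compute. In the C sketch below the excitation energies are invented; in HZEFRG1 they come from the geometric abrasion step.

        #include <stdio.h>

        int main(void)
        {
            const double mev_per_nucleon = 10.0;   /* rule from the abstract */
            const double excitation[] = { 23.0, 87.5, 142.0 }; /* invented, MeV */
            int i;

            for (i = 0; i < 3; i++) {
                int emitted = (int)(excitation[i] / mev_per_nucleon);
                printf("E* = %6.1f MeV  ->  %d nucleon(s) ablated\n",
                       excitation[i], emitted);
            }
            return 0;
        }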

  2. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    NASA Astrophysics Data System (ADS)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model's computational domain covers Europe and neighbouring parts of the Atlantic Ocean, Asia, and Africa. If the DEM is applied on fine grids, its discretization leads to a huge computational problem, which implies that such a model must be run on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, comparison results are presented from running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.), and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.). The main idea in the parallel version of DEM is a domain partitioning approach. The effective use of the caches and hierarchical memories of modern computers, as well as the performance, speed-ups, and efficiency achieved, are discussed. The parallel code of DEM, created by using the MPI standard library, appears to be highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the computer model output are briefly presented.
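
    The domain-partitioning idea behind the parallel DEM can be sketched with the MPI standard library the abstract names: the grid's rows are divided into contiguous blocks, one per process. The grid size below is invented, and the real code adds halo exchanges, chemistry, advection, and I/O.

        #include <stdio.h>
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            int rank, size, first, last;
            const int nrows = 480;      /* invented grid dimension */

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* contiguous block of rows assigned to this process */
            first = rank * nrows / size;
            last  = (rank + 1) * nrows / size - 1;

            printf("process %d of %d advances rows %d..%d\n",
                   rank, size, first, last);

            MPI_Finalize();
            return 0;
        }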

  3. FTC - THE FAULT-TREE COMPILER (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault-tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) at a user-specified number of digits accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree such as a component failure rate or a specific event probability by allowing the user to vary one failure rate or the failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.

  4. Concentrating Solar Power Basics | NREL

    Science.gov Websites

Concentrating solar power systems use the sun as a heat source. The three main types of concentrating solar power system are linear concentrators, dish/engine systems, and power towers. Linear concentrators tilt mirrors toward the sun, focusing sunlight on tubes (or receivers) that run the length of the mirrors; some designs mount the mirrors to allow them greater mobility in tracking the sun. A dish/engine system uses a mirrored dish.

  5. TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (HP9000 SERIES 300/400 VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is expected to be available on media suitable for seven different machine platforms: 1) DEC VAX computers running VMS (TK50 cartridge in VAX BACKUP format), 2) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and 7) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2.

  6. TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is expected to be available on media suitable for seven different machine platforms: 1) DEC VAX computers running VMS (TK50 cartridge in VAX BACKUP format), 2) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and 7) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2.
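
    As a rough illustration of the WPT calling pattern described above (resource files drive the display while the application reacts to events), the following C sketch uses invented function names such as Wpt_NewPanel and Wpt_NextEvent; they are stand-ins for the idea, not the actual TAE Plus WPT entry points:

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct { const char *name; } Panel;
        typedef struct { const char *object; } WptEvent;

        /* Stand-in for a runtime service that would parse the WorkBench-
           generated resource file and build the X/Motif display. */
        static Panel *Wpt_NewPanel(const char *resfile) {
            Panel *p = malloc(sizeof *p);
            if (p) p->name = resfile;
            return p;
        }

        /* Stand-in event loop: delivers two fake events, then reports quit. */
        static int Wpt_NextEvent(WptEvent *ev) {
            static int n = 0;
            static const char *objs[] = { "APPLY_button", "QUIT_button" };
            if (n >= 2) return 0;
            ev->object = objs[n++];
            return 1;
        }

        int main(void) {
            /* Colors, fonts, and layout live in the resource file, so the
               application code never hard-codes them. */
            Panel *p = Wpt_NewPanel("mainmenu.res");
            WptEvent ev;
            while (Wpt_NextEvent(&ev))
                printf("user acted on %s\n", ev.object);
            free(p);
            return 0;
        }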

  7. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (HP9000 SERIES 700/800 VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.

  8. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (IBM RS/6000 VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.

  9. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (SILICON GRAPHICS VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.

  10. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (DEC RISC ULTRIX VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.

  11. Software to model AXAF-I image quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Feng, Chen

    1995-01-01

    A modular, user-friendly computer program for the modeling of grazing-incidence type x-ray optical systems has been developed. This comprehensive software package, GRAZTRACE, covers the manipulation of input data, ray tracing with reflectivity and surface deformation effects, convolution with the x-ray source shape, and x-ray scattering. The program also includes capabilities for image analysis, detector scan modeling, and graphical presentation of the results. A number of utilities have been developed to interface the predicted Advanced X-ray Astrophysics Facility-Imaging (AXAF-I) mirror structural and thermal distortions with the ray-trace code. This software is written in FORTRAN 77 and runs on a Sun SPARCstation. An interactive command-mode version and a batch-mode version of the software have been developed.
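
    For readers unfamiliar with grazing-incidence ray tracing, the following generic C sketch shows the specular-reflection step, r = d - 2(d.n)n, that such a tracer repeats at every mirror surface; it is illustrative only and is not taken from GRAZTRACE:

        #include <stdio.h>
        #include <math.h>

        typedef struct { double x, y, z; } Vec3;

        static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        /* Specular reflection of direction d about unit surface normal n:
           r = d - 2 (d . n) n. */
        static Vec3 reflect(Vec3 d, Vec3 n) {
            double k = 2.0 * dot(d, n);
            Vec3 r = { d.x - k*n.x, d.y - k*n.y, d.z - k*n.z };
            return r;
        }

        int main(void) {
            Vec3 d = { 0.9995, 0.0, -0.0316 };   /* ray nearly parallel to surface */
            Vec3 n = { 0.0, 0.0, 1.0 };          /* unit surface normal */
            double graze = asin(fabs(dot(d, n)));        /* grazing angle */
            Vec3 r = reflect(d, n);
            printf("grazing angle = %.3f deg, reflected z = %+.4f\n",
                   graze * 180.0 / acos(-1.0), r.z);
            return 0;
        }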

  12. IDG - INTERACTIVE DIF GENERATOR

    NASA Technical Reports Server (NTRS)

    Preheim, L. E.

    1994-01-01

    The Interactive DIF Generator (IDG) utility is a tool used to generate and manipulate Directory Interchange Format files (DIF). Its purpose as a specialized text editor is to create and update DIF files which can be sent to NASA's Master Directory, also referred to as the International Global Change Directory at Goddard. Many government and university data systems use the Master Directory to advertise the availability of research data. The IDG interface consists of a set of four windows: (1) the IDG main window; (2) a text editing window; (3) a text formatting and validation window; and (4) a file viewing window. The IDG main window starts up the other windows and contains a list of valid keywords. The keywords are loaded from a user-designated file and selected keywords can be copied into any active editing window. Once activated, the editing window designates the file to be edited. Upon switching from the editing window to the formatting and validation window, the user has options for making simple changes to one or more files such as inserting tabs, aligning fields, and indenting groups. The viewing window is a scrollable read-only window that allows fast viewing of any text file. IDG is an interactive tool and requires a mouse or a trackball to operate. IDG uses the X Window System to build and manage its interactive forms, and also uses the Motif widget set and runs under Sun UNIX. IDG is written in C-language for Sun computers running SunOS. This package requires the X Window System, Version 11 Revision 4, with OSF/Motif 1.1. IDG requires 1.8Mb of hard disk space. The standard distribution medium for IDG is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The program was developed in 1991 and is a copyrighted work with all copyright vested in NASA. SunOS is a trademark of Sun Microsystems, Inc. X Window System is a trademark of Massachusetts Institute of Technology. OSF/Motif is a trademark of the Open Software Foundation, Inc. UNIX is a trademark of Bell Laboratories.
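
    A DIF file is essentially a structured keyword/value text record. The C sketch below writes a minimal record of that shape; the field names are illustrative assumptions rather than an authoritative DIF template:

        #include <stdio.h>

        /* Emit one keyword: value line in the flavor of a DIF entry.
           Consult the DIF specification for the authoritative field list. */
        static void dif_field(FILE *f, const char *key, const char *val) {
            fprintf(f, "%s: %s\n", key, val);
        }

        int main(void) {
            FILE *f = fopen("sample.dif", "w");
            if (!f) { perror("sample.dif"); return 1; }
            dif_field(f, "Entry_ID",    "EXAMPLE-DATASET-001");   /* illustrative */
            dif_field(f, "Entry_Title", "Example Research Data Set");
            dif_field(f, "Keyword",     "HYDROLOGY");  /* as if copied from the
                                                          IDG keyword window */
            fclose(f);
            return 0;
        }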

  13. A generic archive protocol and an implementation

    NASA Technical Reports Server (NTRS)

    Jordan, J. M.; Jennings, D. G.; Mcglynn, T. A.; Ruggiero, N. G.; Serlemitsos, T. A.

    1992-01-01

    Archiving vast amounts of data has become a major part of every scientific space mission today. The Generic Archive/Retrieval Services Protocol (GRASP) addresses the question of how to archive the data collected in an environment where the underlying hardware archives may be rapidly changing. GRASP is a device independent specification defining a set of functions for storing and retrieving data from an archive, as well as other support functions. GRASP is divided into two levels: the Transfer Interface and the Action Interface. The Transfer Interface is computer/archive independent code while the Action Interface contains code which is dedicated to each archive/computer addressed. Implementations of the GRASP specification are currently available for DECstations running Ultrix, Sparcstations running SunOS, and microVAX/VAXstation 3100's. The underlying archive is assumed to function as a standard Unix or VMS file system. The code, written in C, is a single suite of files. Preprocessing commands define the machine unique code sections in the device interface. The implementation was written, to the greatest extent possible, using only ANSI standard C functions.
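
    The following C sketch illustrates the two-level split described above, with a device-independent Transfer Interface calling a compile-time-selected Action Interface; the function names are invented for illustration and are not the actual GRASP entry points:

        #include <stdio.h>

        /* Action Interface: per-archive/per-machine code, selected by
           preprocessor commands as the abstract describes. */
        static int action_store(const char *name, const void *buf, long len) {
        #if defined(ARCHIVE_VMS)
            /* VMS-specific archive code would go here */
            (void)buf; printf("VMS store of %s (%ld bytes)\n", name, len);
        #else
            /* default: treat the archive as a standard Unix file system */
            FILE *f = fopen(name, "wb");
            if (!f) return -1;
            fwrite(buf, 1, (size_t)len, f);
            fclose(f);
        #endif
            return 0;
        }

        /* Transfer Interface: identical, archive-independent code on
           every platform. */
        static int grasp_store(const char *name, const void *buf, long len) {
            return action_store(name, buf, len);
        }

        int main(void) {
            const char msg[] = "observation data";
            return grasp_store("obs001.dat", msg, (long)sizeof msg - 1) ? 1 : 0;
        }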

  14. GAP: yet another image processing system for solar observations.

    NASA Astrophysics Data System (ADS)

    Keller, C. U.

    GAP is a versatile, interactive image processing system for analyzing solar observations, in particular extended time sequences, and for preparing publication quality figures. It consists of an interpreter that is based on a language with a control flow similar to PASCAL and C. The interpreter may be accessed from a command line editor and from user-supplied functions, procedures, and command scripts. GAP is easily expandable via external FORTRAN programs that are linked to the GAP interface routines. The current version of GAP runs on VAX, DECstation, Sun, and Apollo computers. Versions for MS-DOS and OS/2 are in preparation.

  15. ESDAPT - APT PROGRAMMING EDITOR AND INTERPRETER

    NASA Technical Reports Server (NTRS)

    Premack, T.

    1994-01-01

    ESDAPT is a graphical programming environment for developing APT (Automatically Programmed Tool) programs for controlling numerically controlled machine tools. ESDAPT has a graphical user interface that provides the user with an APT syntax sensitive text editor and windows for displaying geometry and tool paths. APT geometry statement can also be created using menus and screen picks. ESDAPT interprets APT geometry statements and displays the results in its view windows. Tool paths are generated by batching the APT source to an APT processor (COSMIC P-APT recommended). The tool paths are then displayed in the view windows. Hardcopy output of the view windows is in color PostScript format. ESDAPT is written in C-language, yacc, lex, and XView for use on Sun4 series computers running SunOS. ESDAPT requires 4Mb of disk space, 7Mb of RAM, and MIT's X Window System, Version 11 Release 4, or OpenWindows version 3 for execution. Program documentation in PostScript format and an executable for OpenWindows version 3 are provided on the distribution media. The standard distribution medium for ESDAPT is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. This program was developed in 1992.

  16. Web interfaces to relational databases

    NASA Technical Reports Server (NTRS)

    Carlisle, W. H.

    1996-01-01

    This report describes a project to extend the capabilities of a Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1995 Summer Faculty Fellowship program and involved the development of a prototype component of the VRC - a database system that provides data creation and access services within a room of the VRC. In support of VRC development, NASA has assembled a laboratory containing the variety of equipment expected to be used by scientists within the VRC. This laboratory consists of the major hardware platforms (SUN, Intel, and Motorola processors) and their most common operating systems (UNIX, Windows NT, Windows for Workgroups, and MacOS). The SPARC 20 runs SUN Solaris 2.4; an Intel Pentium runs Windows NT and is installed on a different network from the other machines in the laboratory; a Pentium PC runs Windows for Workgroups; two Intel 386 machines run Windows 3.1; and finally, a PowerMacintosh and a Macintosh IIsi run MacOS.

  17. MERCATOR: Methods and Realization for Control of the Attitude and the Orbit of spacecraft

    NASA Technical Reports Server (NTRS)

    Tavernier, Gilles; Campan, Genevieve

    1993-01-01

    Since 1974, CNES has been involved in geostationary positioning. Among different entities participating in operations and their preparation, the Flight Dynamics Center (FDC) is in charge of performing the following tasks: orbit determination; attitude determination; computation, monitoring, and calibration of orbit maneuvers; computation, monitoring, and calibration of attitude maneuvers; and operational predictions. In order to fulfill this mission, the FDC receives telemetry from the satellite and localization measurements from ground stations (e.g., CNES, NASA, INTELSAT). These data are processed by space dynamics programs integrated in the MERCATOR system, which runs on SUN workstations (UNIX O.S.). The main features of MERCATOR are redundancy, modularity, and flexibility: an efficient, flexible, and user-friendly man-machine interface, and four identical SUN workstations redundantly linked in an Ethernet network. Each workstation can perform all the tasks from data acquisition to the dissemination of computation results through a video network. A team of four engineers can handle the space mechanics aspects of a complete geostationary positioning from the injection into a transfer orbit to the final maneuvers in the station-keeping window. MERCATOR has been or is to be used for operations related to more than ten geostationary positionings. Initially developed for geostationary satellites, MERCATOR's methodology was also used for satellite control centers and can be applied to a wide range of satellites and to future manned missions.

  18. 76 FR 27921 - Raisins Produced From Grapes Grown in California; Increase in Desirable Carryout Used To Compute...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-13

    ... rule would increase the desirable carryout used to compute the yearly trade demand for Natural (sun... raisins are available. Currently, the desirable carryout for Natural (sun-dried) Seedless (NS) raisins is... carryout level to be used in computing and announcing a crop year's marketing policy for Natural (sun-dried...

  19. FDTD-ANT User Manual

    NASA Technical Reports Server (NTRS)

    Zimmerman, Martin L.

    1995-01-01

    This manual explains the theory and operation of the finite-difference time domain code FDTD-ANT developed by Analex Corporation at the NASA Lewis Research Center in Cleveland, Ohio. This code can be used for solving electromagnetic problems that are electrically small or medium (on the order of 1 to 50 cubic wavelengths). Calculated parameters include transmission line impedance, relative effective permittivity, antenna input impedance, and far-field patterns in both the time and frequency domains. The maximum problem size may be adjusted according to the computer used. This code has been run on the DEC VAX and 486 PC's and on workstations such as the Sun Sparc and the IBM RS/6000.
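
    As background, the following self-contained C sketch performs a generic one-dimensional FDTD (Yee) leapfrog update in normalized units; it shows the kind of time stepping such a code performs and is not the FDTD-ANT source:

        #include <math.h>
        #include <stdio.h>

        #define NX 200
        #define NSTEPS 300

        int main(void) {
            static double ez[NX], hy[NX];
            const double c = 0.5;                        /* Courant number */
            for (int n = 0; n < NSTEPS; n++) {
                for (int i = 0; i < NX - 1; i++)         /* magnetic-field update */
                    hy[i] += c * (ez[i + 1] - ez[i]);
                /* soft Gaussian pulse injected at the grid center */
                ez[NX / 2] += exp(-(n - 30.0) * (n - 30.0) / 100.0);
                for (int i = 1; i < NX; i++)             /* electric-field update */
                    ez[i] += c * (hy[i] - hy[i - 1]);
            }
            printf("Ez at probe point: %g\n", ez[3 * NX / 4]);
            return 0;
        }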

  20. ARCGRAPH SYSTEM - AMES RESEARCH GRAPHICS SYSTEM

    NASA Technical Reports Server (NTRS)

    Hibbard, E. A.

    1994-01-01

    Ames Research Graphics System, ARCGRAPH, is a collection of libraries and utilities which assist researchers in generating, manipulating, and visualizing graphical data. In addition, ARCGRAPH defines a metafile format that contains device independent graphical data. This file format is used with various computer graphics manipulation and animation packages at Ames, including SURF (COSMIC Program ARC-12381) and GAS (COSMIC Program ARC-12379). In its full configuration, the ARCGRAPH system consists of a two stage pipeline which may be used to output graphical primitives. Stage one is associated with the graphical primitives (i.e. moves, draws, color, etc.) along with the creation and manipulation of the metafiles. Five distinct data filters make up stage one. They are: 1) PLO which handles all 2D vector primitives, 2) POL which handles all 3D polygonal primitives, 3) RAS which handles all 2D raster primitives, 4) VEC which handles all 3D vector primitives, and 5) PO2 which handles all 2D polygonal primitives. Stage two is associated with the process of displaying graphical primitives on a device. To generate the various graphical primitives, create and reprocess ARCGRAPH metafiles, and access the device drivers in the VDI (Video Device Interface) library, users link their applications to ARCGRAPH's GRAFIX library routines. Both FORTRAN and C language versions of the GRAFIX and VDI libraries exist for enhanced portability within these respective programming environments. The ARCGRAPH libraries were developed on a VAX running VMS. Minor documented modification of various routines, however, allows the system to run on the following computers: Cray X-MP running COS (no C version); Cray 2 running UNICOS; DEC VAX running BSD 4.3 UNIX, or Ultrix; SGI IRIS Turbo running GL2-W3.5 and GL2-W3.6; Convex C1 running UNIX; Amdahl 5840 running UTS; Alliant FX8 running UNIX; Sun 3/160 running UNIX (no native device driver); Stellar GS1000 running Stellex (no native device driver); and an SGI IRIS 4D running IRIX (no native device driver). Currently with version 7.0 of ARCGRAPH, the VDI library supports the following output devices: A VT100 terminal with a RETRO-GRAPHICS board installed, a VT240 using the Tektronix 4010 emulation capability, an SGI IRIS turbo using the native GL2 library, a Tektronix 4010, a Tektronix 4105, and the Tektronix 4014. ARCGRAPH version 7.0 was developed in 1988.
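
    The stage-one idea, recording device-independent primitives into a metafile for later replay by a stage-two device driver, can be sketched in C as follows; the opcodes and function names are invented for illustration and differ from the real GRAFIX library:

        #include <stdio.h>

        static FILE *meta;

        /* Record device-independent move/draw primitives as simple
           opcode lines that a later playback stage could interpret. */
        static void gr_move(double x, double y) { fprintf(meta, "M %g %g\n", x, y); }
        static void gr_draw(double x, double y) { fprintf(meta, "D %g %g\n", x, y); }

        int main(void) {
            meta = fopen("box.meta", "w");
            if (!meta) return 1;
            gr_move(0, 0);                 /* trace a unit square */
            gr_draw(1, 0); gr_draw(1, 1); gr_draw(0, 1); gr_draw(0, 0);
            fclose(meta);
            return 0;
        }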

  1. LARCRIM user's guide, version 1.0

    NASA Technical Reports Server (NTRS)

    Davis, John S.; Heaphy, William J.

    1993-01-01

    LARCRIM is a relational database management system (RDBMS) which performs the conventional duties of an RDBMS with the added feature that it can store attributes which consist of arrays or matrices. This makes it particularly valuable for scientific data management. It is accessible as a stand-alone system and through an application program interface. The stand-alone system may be executed in two modes: menu or command. The menu mode prompts the user for the input required to create, update, and/or query the database. The command mode requires the direct input of LARCRIM commands. Although LARCRIM is an update of an old database family, its performance on modern computers is quite satisfactory. LARCRIM is written in FORTRAN 77 and runs under the UNIX operating system. Versions have been released for the following computers: SUN (3 & 4), Convex, IRIS, Hewlett-Packard, CRAY 2 & Y-MP.

  2. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, the authors present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.

  3. SUSTAINABILITY LOGISTICS BASING SCIENCE AND TECHNOLOGY OBJECTIVE DEMONSTRATION; SELECTED TECHNOLOGY ASSESSMENT

    DTIC Science & Technology

    2018-03-22

    generators by not running them as often and reducing wet-stacking. Force Projection: If the IPDs of the microgrid replace, but don’t add to, the number...decrease generator run time, reduce fuel consumption, enable silent operation, and provide power redundancy for military applications. Important...it requires some failsafe features – run out of water, drive out of the sun. Integration was a challenge; series of valves to run this experiment

  4. ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Johnson, S. C.

    1994-01-01

    ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, is an interface that will enable reliability engineers to accurately design large semi-Markov models. The user describes the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. The abstract language allows efficient description of large, complex systems; a one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. Instead of listing the individual states of the Markov model, reliability engineers can specify the rules governing the behavior of a system, and these are used to automatically generate the model. ASSIST reads an input file describing the failure behavior of a system in an abstract language and generates a Markov model in the format needed for input to SURE, the semi-Markov Unreliability Range Evaluator program, and PAWS/STEM, the Pade Approximation with Scaling program and Scaled Taylor Exponential Matrix. A Markov model consists of a number of system states and transitions between them. Each state in the model represents a possible state of the system in terms of which components have failed, which ones have been removed, etc. Within ASSIST, each state is defined by a state vector, where each element of the vector takes on an integer value within a defined range. An element can represent any meaningful characteristic, such as the number of working components of one type in the system, or the number of faulty components of another type in use. Statements representing transitions between states in the model have three parts: a condition expression, a destination expression, and a rate expression. The first expression is a Boolean expression describing the state space variable values of states for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. 
The VMS version (LAR14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR14923) is written in ANSI C-language. An ANSI compliant C compiler is required in order to compile this package. The standard distribution medium for the Sun version of ASSIST is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the documentation in PostScript, TeX, and DVI formats are provided on the distribution medium. (The VMS distribution lacks the .DVI format files, however.) ASSIST was developed in 1986 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
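
    The rule-based generation described above (condition, destination, and rate expressions expanding into explicit states and transitions) can be illustrated with a toy C program; this mimics the concept only and uses none of ASSIST's actual input language or output format:

        #include <stdio.h>

        /* States are integer vectors; here a one-element vector, the number
           of working processors NW in 0..3.  One rule, "IF NW > 0, go to
           NW-1 at rate NW*lambda", expands into an explicit transition list
           of the kind a semi-Markov solver would consume. */
        int main(void) {
            const double lambda = 1e-4;          /* per-processor failure rate */
            for (int nw = 3; nw >= 0; nw--) {    /* enumerate the state space */
                if (nw > 0)                      /* condition expression */
                    printf("state(%d) -> state(%d) at rate %g\n",
                           nw, nw - 1,           /* destination expression */
                           nw * lambda);         /* rate expression */
                else
                    printf("state(%d) is absorbing (system failed)\n", nw);
            }
            return 0;
        }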

  5. Computer Generated Snapshot of Our Sun's Magnetic Field

    NASA Technical Reports Server (NTRS)

    2003-01-01

    These banana-shaped loops are part of a computer-generated snapshot of our sun's magnetic field. The solar magnetic-field lines loop through the sun's corona, break through the sun's surface, and connect regions of magnetic activity, such as sunspots. This image -- part of a magnetic-field study of the sun by NASA's Allen Gary -- shows the outer portion (skins) of interconnecting systems of hot (2 million degrees Kelvin) coronal loops within and between two active magnetic regions on opposite sides of the sun's equator. The diameter of these coronal loops at their foot points is approximately the same size as the Earth's radius (about 6,000 kilometers).

  6. MOSAIC: Software for creating mosaics from collections of images

    NASA Technical Reports Server (NTRS)

    Varosi, F.; Gezari, D. Y.

    1992-01-01

    We have developed a powerful, versatile image processing and analysis software package called MOSAIC, designed specifically for the manipulation of digital astronomical image data obtained with (but not limited to) two-dimensional array detectors. The software package is implemented using the Interactive Data Language (IDL), and incorporates new methods for processing, calibration, analysis, and visualization of astronomical image data, stressing effective methods for the creation of mosaic images from collections of individual exposures, while at the same time preserving the photometric integrity of the original data. Since IDL is available on many computers, the MOSAIC software runs on most UNIX and VAX workstations with the X-Windows or SunView graphics interface.

  7. Hydrologic Observatory Data Telemetry Network in an Extreme Environment

    NASA Astrophysics Data System (ADS)

    Irving, K.; Kane, D.

    2007-12-01

    A network of hydrological research data stations on the North Slope of Alaska using radio telemetry to gather data in "near real time" will be described. The network consists of approximately 25 research stations, 10 repeater stations, and 3 Internet-connected base stations (though data is also collected at repeater stations and research stations may also function as repeaters). With this operational network, radio link redundancy is sufficient to reach any research station from any base station. The data network is driven in "pull" mode using software running on computers in Fairbanks, and emphasis is placed on reliably collecting and storing data as found on the remote data loggers. Work is underway to deploy dynamic routing software on the controlling computers, at which point the network will be capable of automatically working around problems which may include icing on antennas, satellite sun outages, animal damage, and many others.

  8. Computer-aided boundary delineation of agricultural lands

    NASA Technical Reports Server (NTRS)

    Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt

    1989-01-01

    The National Agricultural Statistics Service of the United States Department of Agriculture (USDA) presently uses labor-intensive aerial photographic interpretation techniques to divide large geographical areas into manageable-sized units for estimating domestic crop and livestock production. Prototype software, the computer-aided stratification (CAS) system, was developed to automate the procedure, and currently runs on a Sun-based image processing system. With a background display of LANDSAT Thematic Mapper and United States Geological Survey Digital Line Graph data, the operator uses a cursor to delineate agricultural areas, called sampling units, which are assigned to strata of land-use and land-cover types. The resultant stratified sampling units are used as input into subsequent USDA sampling procedures. As a test, three counties in Missouri were chosen for application of the CAS procedures. Subsequent analysis indicates that CAS was five times faster in creating sampling units than the manual techniques were.

  9. 75 FR 63724 - Raisins Produced From Grapes Grown in California; Use of Estimated Trade Demand To Compute Volume...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-18

    ... figure to compute volume regulation percentages for 2010- 11 crop Natural (sun-dried) Seedless (NS... compute volume regulation percentages for 2010-11 crop Natural (sun-dried) Seedless (NS) raisins covered...

  10. Relationship Between Rainfall in the Northern Hemisphere and Impulses of the Torque in the Sun's Motion

    NASA Technical Reports Server (NTRS)

    Landscheidt, T.

    1990-01-01

    The analysis of major change in the angular momentum of the sun's irregular motion about the barycenter of the solar system, represented by extrema in the running variance of impulses of the torque (IOT), discloses a connection with both extrema in the Gleissberg cycle of secular sunspot activity and maxima in the thickness of varves from Lake Saki, Crimea. This significant relationship can be traced back to the 7th century. Further inquiries link the running variance in IOT to rainfall over central Europe, England, Wales, eastern United States, and India, as well as to temperature in Europe. This significant correlation covers more than 130 years.
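
    For reference, a running (sliding-window) variance of the kind applied to the IOT series can be computed as in the following C sketch; the data values are made up:

        #include <stdio.h>

        #define N 12
        #define W 5    /* window length */

        int main(void) {
            double x[N] = { 2, 3, 5, 4, 6, 9, 7, 8, 6, 5, 4, 3 };
            for (int i = 0; i + W <= N; i++) {
                double mean = 0.0, var = 0.0;
                for (int j = 0; j < W; j++) mean += x[i + j];
                mean /= W;
                for (int j = 0; j < W; j++)
                    var += (x[i + j] - mean) * (x[i + j] - mean);
                var /= (W - 1);                  /* sample variance */
                printf("window starting at %2d: variance = %.3f\n", i, var);
            }
            return 0;
        }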

  11. DAMT - DISTRIBUTED APPLICATION MONITOR TOOL (HP9000 VERSION)

    NASA Technical Reports Server (NTRS)

    Keith, B.

    1994-01-01

    Typical network monitors measure status of host computers and data traffic among hosts. A monitor to collect statistics about individual processes must be unobtrusive and possess the ability to locate and monitor processes, locate and monitor circuits between processes, and report traffic back to the user through a single application program interface (API). DAMT, Distributed Application Monitor Tool, is a distributed application program that will collect network statistics and make them available to the user. This distributed application has one component (i.e., process) on each host the user wishes to monitor as well as a set of components at a centralized location. DAMT provides the first known implementation of a network monitor at the application layer of abstraction. Potential users only need to know the process names of the distributed application they wish to monitor. The tool locates the processes and the circuit between them, and reports any traffic between them at a user-defined rate. The tool operates without the cooperation of the processes it monitors. Application processes require no changes to be monitored by this tool. Neither does DAMT require the UNIX kernel to be recompiled. The tool obtains process and circuit information by accessing the operating system's existing process database. This database contains all information available about currently executing processes. Expanding the information monitored by the tool can be done by utilizing more information from the process database. Traffic on a circuit between processes is monitored by a low-level LAN analyzer that has access to the raw network data. The tool also provides features such as dynamic event reporting and virtual path routing. A reusable object approach was used in the design of DAMT. The tool has four main components: the Virtual Path Switcher, the Central Monitor Complex, the Remote Monitor, and the LAN Analyzer. All of DAMT's components are independent, asynchronously executing processes. The independent processes communicate with each other via UNIX sockets through a Virtual Path router, or Switcher. The Switcher maintains a routing table showing the host of each component process of the tool, eliminating the need for each process to do so. The Central Monitor Complex provides the single application program interface (API) to the user and coordinates the activities of DAMT. The Central Monitor Complex is itself divided into independent objects that perform its functions. The component objects are the Central Monitor, the Process Locator, the Circuit Locator, and the Traffic Reporter. Each of these objects is an independent, asynchronously executing process. User requests to the tool are interpreted by the Central Monitor. The Process Locator identifies whether a named process is running on a monitored host and which host that is. The circuit between any two processes in the distributed application is identified using the Circuit Locator. The Traffic Reporter handles communication with the LAN Analyzer and accumulates traffic updates until it must send a traffic report to the user. The Remote Monitor process is replicated on each monitored host. It serves the Central Monitor Complex processes with application process information. The Remote Monitor process provides access to operating system information about currently executing processes. It allows the Process Locator to find processes and the Circuit Locator to identify circuits between processes.
It also provides lifetime information about currently monitored processes. The LAN Analyzer consists of two processes. Low-level monitoring is handled by the Sniffer. The Sniffer analyzes the raw data on a single, physical LAN. It responds to commands from the Analyzer process, which maintains the interface to the Traffic Reporter and keeps track of which circuits to monitor. DAMT is written in C-language for HP-9000 series computers running HP-UX and Sun 3 and 4 series computers running SunOS. DAMT requires 1Mb of disk space and 4Mb of RAM for execution. This package requires MIT's X Window System, Version 11 Revision 4, with OSF/Motif 1.1. The HP-9000 version (GSC-13589) includes sample HP-9000/375 and HP-9000/730 executables which were compiled under HP-UX, and the Sun version (GSC-13559) includes sample Sun3 and Sun4 executables compiled under SunOS. The standard distribution medium for the HP version of DAMT is a .25 inch HP pre-formatted streaming magnetic tape cartridge in UNIX tar format. It is also available on a 4mm magnetic tape in UNIX tar format. The standard distribution medium for the Sun version of DAMT is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. DAMT was developed in 1992.
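
    The Switcher's routing-table idea, mapping each component process to its host so components need not track one another's locations, can be sketched in C as follows; the component names and hosts are illustrative:

        #include <stdio.h>
        #include <string.h>

        struct route { const char *component; const char *host; };

        /* Routing table: one entry per component process of the tool. */
        static const struct route table[] = {
            { "CentralMonitor", "hostA" },
            { "RemoteMonitor",  "hostB" },
            { "LanAnalyzer",    "hostC" },
        };

        static const char *lookup(const char *component) {
            for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
                if (strcmp(table[i].component, component) == 0)
                    return table[i].host;
            return NULL;
        }

        int main(void) {
            const char *dst = lookup("RemoteMonitor");
            printf("forward message to %s\n", dst ? dst : "(unknown)");
            return 0;
        }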

  12. 76 FR 42006 - Raisins Produced From Grapes Grown In California; Increase in Desirable Carryout Used To Compute...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-18

    ... increases the desirable carryout used to compute the yearly trade demand for Natural (sun-dried) Seedless..., the desirable carryout for Natural (sun-dried) Seedless (NS) raisins is defined as: The total... announcing a crop year's marketing policy for Natural (sun-dried) Seedless raisins shall be 85,000 natural...

  13. ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Johnson, S. C.

    1994-01-01

    ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, is an interface that will enable reliability engineers to accurately design large semi-Markov models. The user describes the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. The abstract language allows efficient description of large, complex systems; a one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. Instead of listing the individual states of the Markov model, reliability engineers can specify the rules governing the behavior of a system, and these are used to automatically generate the model. ASSIST reads an input file describing the failure behavior of a system in an abstract language and generates a Markov model in the format needed for input to SURE, the semi-Markov Unreliability Range Evaluator program, and PAWS/STEM, the Pade Approximation with Scaling program and Scaled Taylor Exponential Matrix. A Markov model consists of a number of system states and transitions between them. Each state in the model represents a possible state of the system in terms of which components have failed, which ones have been removed, etc. Within ASSIST, each state is defined by a state vector, where each element of the vector takes on an integer value within a defined range. An element can represent any meaningful characteristic, such as the number of working components of one type in the system, or the number of faulty components of another type in use. Statements representing transitions between states in the model have three parts: a condition expression, a destination expression, and a rate expression. The first expression is a Boolean expression describing the state space variable values of states for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. 
The VMS version (LAR14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR14923) is written in ANSI C-language. An ANSI compliant C compiler is required in order to compile this package. The standard distribution medium for the Sun version of ASSIST is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the documentation in PostScript, TeX, and DVI formats are provided on the distribution medium. (The VMS distribution lacks the .DVI format files, however.) ASSIST was developed in 1986 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
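
    The three-part transition statements described above map naturally onto a tiny data model. The following C sketch is illustrative only; the struct and function names are invented and do not reflect ASSIST's internals or its input syntax. It shows one failure rule, with its condition, destination, and rate expressions, expanding into a chain of states:

        /* Hypothetical sketch of an ASSIST-style transition rule over a
         * state-vector model: a condition predicate, a destination mapping,
         * and a rate. Names and structure are illustrative only. */
        #include <stdio.h>

        #define NP_MAX 5          /* processors when all are working */

        typedef struct {
            int working;          /* state-space variable: working units */
            int faulty;           /* state-space variable: faulty units  */
        } State;

        /* Condition: the transition applies while working units remain. */
        static int can_fail(State s) { return s.working > 0; }

        /* Destination: one working unit becomes faulty. */
        static State fail_one(State s) { s.working--; s.faulty++; return s; }

        /* Rate: total failure rate is (number working) * (per-unit rate). */
        static double fail_rate(State s, double lambda) { return s.working * lambda; }

        int main(void) {
            const double lambda = 1e-4;          /* failures/hour (assumed) */
            State s = { NP_MAX, 0 };
            /* Enumerate the chain of states this single rule generates. */
            while (can_fail(s)) {
                State d = fail_one(s);
                printf("(%d,%d) -> (%d,%d) at rate %.1e/h\n",
                       s.working, s.faulty, d.working, d.faulty,
                       fail_rate(s, lambda));
                s = d;
            }
            return 0;
        }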

  14. CAESY - COMPUTER AIDED ENGINEERING SYSTEM

    NASA Technical Reports Server (NTRS)

    Wette, M. R.

    1994-01-01

    Many developers of software and algorithms for control system design have recognized that current tools have limits in both flexibility and efficiency. Many forces drive the development of new tools, including the desire to make modeling, design, and analysis of complex systems easier, and the need for quicker turnaround time in analysis and design. Other considerations include the desire to make use of advanced computer architectures to help in control system design, to adopt new methodologies in control, and to integrate design processes (e.g., structure, control, optics). CAESY was developed to provide a means to evaluate methods for dealing with user needs in computer-aided control system design. It is an interpreter for performing engineering calculations and incorporates features of both Ada and MATLAB. It is designed to be reasonably flexible and powerful. CAESY includes internally defined functions and procedures, as well as user-defined ones. Support for matrix calculations is provided in the same manner as in MATLAB. However, the development of CAESY is a research project, and while it provides some features which are not found in commercially sold tools, it does not exhibit the robustness that many commercially developed tools provide. CAESY is written in C-language for use on Sun4 series computers running SunOS 4.1.1 and later. The program is designed to optionally use the LAPACK math library. The LAPACK math routines are available through anonymous ftp from research.att.com. CAESY requires 4Mb of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. CAESY was developed in 1993 and is a copyrighted work with all copyright vested in NASA.

  15. Using USNO's API to Obtain Data

    NASA Astrophysics Data System (ADS)

    Lesniak, Michael V.; Pozniak, Daniel; Punnoose, Tarun

    2015-01-01

    The U.S. Naval Observatory (USNO) is in the process of modernizing its publicly available web services into APIs (Application Programming Interfaces). Services configured as APIs offer greater flexibility to the user and support broader use. Depending on the particular service, users who implement our APIs will receive either a PNG (Portable Network Graphics) image or data in JSON (JavaScript Object Notation) format. This raw data can then be embedded in third-party web sites or in apps. Part of the USNO's mission is to provide astronomical and timing data to government agencies and the general public. To this end, the USNO provides accurate computations of astronomical phenomena such as dates of lunar phases, rise and set times of the Moon and Sun, and lunar and solar eclipse times. Users who navigate to our web site and select one of our 18 services are prompted to complete a web form, specifying parameters such as date, time, location, and object. Many of our services work for years between 1700 and 2100, meaning that past, present, and future events can be computed. Upon form submission, our web server processes the request, computes the data, and outputs it to the user. Over recent years, the use of the web by the general public has vastly changed. In response to this, the USNO is modernizing its web-based data services. This includes making our computed data easier to embed within third-party web sites and easier to query from apps running on tablets and smart phones. To facilitate this, the USNO has begun converting its services into APIs. In addition to the existing web forms for the various services, users are able to make direct URL requests that return either an image or numerical data. To date, four of our web services have been configured to run with APIs. Two are image-producing services: "Apparent Disk of a Solar System Object" and "Day and Night Across the Earth." Two API data services are "Complete Sun and Moon Data for One Day" and "Dates of Primary Phases of the Moon." Instructions for how to use our API services, as well as examples of their use, can be found on one of our explanatory web pages and will be discussed here.
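
    Since the API services return raw JSON over a direct URL request, they can be queried from almost any environment. The C sketch below uses libcurl to issue such a request and print the response; the endpoint and query parameters are hypothetical placeholders, not the USNO's documented URL scheme:

        /* Minimal sketch of a direct URL request to a JSON API service,
         * using libcurl. The URL below is a hypothetical placeholder.
         * Build with: cc demo.c -lcurl */
        #include <stdio.h>
        #include <curl/curl.h>

        static size_t on_data(char *buf, size_t size, size_t nmemb, void *userdata) {
            (void)userdata;
            return fwrite(buf, size, nmemb, stdout);  /* print JSON as it arrives */
        }

        int main(void) {
            CURL *curl = curl_easy_init();
            if (!curl) return 1;
            /* Hypothetical query: one day of Sun/Moon data for a date/place. */
            curl_easy_setopt(curl, CURLOPT_URL,
                "https://example.usno.api/oneday?date=2015-01-01&lat=38.9&lon=-77.0");
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_data);
            CURLcode rc = curl_easy_perform(curl);
            if (rc != CURLE_OK)
                fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));
            curl_easy_cleanup(curl);
            return rc == CURLE_OK ? 0 : 1;
        }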

  16. Thermal and orbital analysis of Earth monitoring Sun-synchronous space experiments

    NASA Technical Reports Server (NTRS)

    Killough, Brian D.

    1990-01-01

    The fundamentals of an Earth monitoring Sun-synchronous orbit are presented. A Sun-synchronous Orbit Analysis Program (SOAP) was developed to calculate orbital parameters for an entire year. The output from this program provides the required input data for the TRASYS thermal radiation computer code, which in turn computes the infrared, solar, and Earth albedo heat fluxes incident on a space experiment. Direct incident heat fluxes can be used as input to a generalized thermal analyzer program to size radiators and predict instrument operating temperatures. The SOAP computer code, and its application in the thermal analysis methodology presented here, should prove useful to the thermal engineer during the design phases of Earth monitoring Sun-synchronous space experiments.
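
    For reference, the defining constraint of a Sun-synchronous orbit is that the J2-driven nodal regression match the Sun's mean apparent motion, about 0.9856 degrees per day. The C sketch below (standard astrodynamics, not SOAP's source code) solves the resulting relation cos(i) = -OMEGA_dot_ss / (1.5 n J2 (Re/a)^2) for a circular orbit:

        /* Required inclination for a circular Sun-synchronous orbit from
         * the J2 nodal-regression formula. The altitude is an example. */
        #include <stdio.h>
        #include <math.h>

        int main(void) {
            const double mu = 3.986004418e14;    /* Earth GM, m^3/s^2            */
            const double Re = 6378137.0;         /* Earth equatorial radius, m   */
            const double J2 = 1.08263e-3;        /* Earth oblateness coefficient */
            const double omega_ss = 2.0 * M_PI / (365.2422 * 86400.0); /* rad/s  */

            double alt_km = 705.0;               /* example altitude             */
            double a = Re + alt_km * 1000.0;     /* semi-major axis, m           */
            double n = sqrt(mu / (a * a * a));   /* mean motion, rad/s           */
            double cosi = -omega_ss / (1.5 * n * J2 * (Re / a) * (Re / a));
            printf("altitude %.0f km -> inclination %.2f deg\n",
                   alt_km, acos(cosi) * 180.0 / M_PI);   /* ~98.2 deg */
            return 0;
        }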

  17. Searching for possible hidden chambers in the Pyramid of the Sun

    NASA Astrophysics Data System (ADS)

    Alfaro, R.; Belmont, E.; Grabski, V.; Manzanilla, L.; Martinez-Davalos, A.; Menchaca-Rocha, A.; Moreno, M.; Sandoval, A.

    The Pyramid of the Sun, at Teotihuacan, Mexico, is being searched for possible hidden chambers using a muon tracking technique inspired by the experiment carried out by Luis Alvarez over 30 years ago at the Chephren Pyramid in Giza. A fortunate similarity between that monument and the Pyramid of the Sun is a tunnel, running 8 m below the base and ending close to the symmetry axis, which permits the use of muon attenuation measurements. A brief account of the project, including planning, detector design, construction, and simulations, as well as its current status, is presented.

  18. FASTRAN II - FATIGUE CRACK GROWTH STRUCTURAL ANALYSIS (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Newman, J. C.

    1994-01-01

    Predictions of fatigue crack growth behavior can be made with the Fatigue Crack Growth Structural Analysis (FASTRAN II) computer program. As cyclic loads are applied to a selected crack configuration with an initial crack size, FASTRAN II predicts crack growth as a function of cyclic load history until either a desired crack size is reached or failure occurs. FASTRAN II is based on plasticity-induced crack-closure behavior of cracks in metallic materials and accounts for load-interaction effects, such as retardation and acceleration, under variable-amplitude loading. The closure model is based on the Dugdale model with modifications to allow plastically deformed material to be left along the crack surfaces as the crack grows. Plane stress and plane strain conditions, as well as conditions between these two, can be simulated in FASTRAN II by using a constraint factor on tensile yielding at the crack front to approximately account for three-dimensional stress states. FASTRAN II contains seventeen predefined crack configurations (standard laboratory fatigue crack growth rate specimens and many common crack configurations found in structures), and the user can define one additional crack configuration. The baseline crack growth rate properties (effective stress-intensity factor against crack growth rate) may be given in either equation or tabular form. For three-dimensional crack configurations, such as surface cracks or corner cracks at holes or notches, the fatigue crack growth rate properties may be different in the crack depth and crack length directions. Final failure of the cracked structure can be modelled with fracture toughness properties using either linear-elastic fracture mechanics (brittle materials), a two-parameter fracture criterion (brittle to ductile materials), or plastic collapse (extremely ductile materials). The crack configurations in FASTRAN II can be subjected to either constant-amplitude, variable-amplitude, or spectrum loading. The applied loads may be either tensile or compressive. Several standardized aircraft flight-load histories, such as TWIST, Mini-TWIST, FALSTAFF, Inverted FALSTAFF, Felix, and Gaussian, are included as options. FASTRAN II also includes two other methods that will help the user input spectrum load histories. The two methods are: (1) a list of stress points, and (2) a flight-by-flight history of stress points. Examples are provided in the user manual. Developed as a research program, FASTRAN II has successfully predicted crack growth in many metallic materials under various aircraft spectrum loadings. A computer program, DKEFF, which is part of the FASTRAN II package, was also developed to analyze crack growth rate data from laboratory specimens to obtain the effective stress-intensity factor against crack growth rate relations used in FASTRAN II. FASTRAN II is written in standard FORTRAN 77. It has been successfully compiled and implemented on Sun4 series computers running SunOS and on IBM PC compatibles running MS-DOS using the Lahey F77L FORTRAN compiler. Sample input and output data are included with the FASTRAN II package. The UNIX version requires 660K of RAM for execution. The standard distribution medium for the UNIX version (LAR-14865) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The standard distribution medium for the MS-DOS version (LAR-14944) is a 5.25 inch 360K MS-DOS format diskette. 
The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The program was developed in 1984 and revised in 1992. Sun4 and SunOS are trademarks of Sun Microsystems, Inc. IBM PC is a trademark of International Business Machines Corp. MS-DOS is a trademark of Microsoft, Inc. F77L is a trademark of Lahey Computer Systems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. PKWARE and PKUNZIP are trademarks of PKWare, Inc.
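
    To illustrate the general shape of such a cycle-by-cycle analysis, the C sketch below integrates a simple Paris law, da/dN = C (Delta K)^m, for a center crack under constant-amplitude loading. This is a deliberately simplified stand-in; FASTRAN II's closure-based model and its material inputs are far more elaborate, and the constants here are placeholders, not material data:

        /* Illustrative only: cycle-by-cycle Paris-law integration with
         * dK = dS * sqrt(pi*a) for a center crack in a wide plate. */
        #include <stdio.h>
        #include <math.h>

        int main(void) {
            double a   = 0.001;       /* initial half crack length, m        */
            const double a_f = 0.02;  /* final crack size of interest, m     */
            const double dS  = 100e6; /* constant-amplitude stress range, Pa */
            const double C = 1e-26;   /* Paris coefficient (placeholder)     */
            const double m = 3.0;     /* Paris exponent (placeholder)        */
            long N = 0;

            while (a < a_f) {
                double dK = dS * sqrt(M_PI * a); /* stress-intensity range */
                a += C * pow(dK, m);             /* crack growth this cycle */
                N++;
            }
            printf("reached a = %.3f m after %ld cycles\n", a, N);
            return 0;
        }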

  19. FASTRAN II - FATIGUE CRACK GROWTH STRUCTURAL ANALYSIS (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Newman, J. C.

    1994-01-01

    Predictions of fatigue crack growth behavior can be made with the Fatigue Crack Growth Structural Analysis (FASTRAN II) computer program. As cyclic loads are applied to a selected crack configuration with an initial crack size, FASTRAN II predicts crack growth as a function of cyclic load history until either a desired crack size is reached or failure occurs. FASTRAN II is based on plasticity-induced crack-closure behavior of cracks in metallic materials and accounts for load-interaction effects, such as retardation and acceleration, under variable-amplitude loading. The closure model is based on the Dugdale model with modifications to allow plastically deformed material to be left along the crack surfaces as the crack grows. Plane stress and plane strain conditions, as well as conditions between these two, can be simulated in FASTRAN II by using a constraint factor on tensile yielding at the crack front to approximately account for three-dimensional stress states. FASTRAN II contains seventeen predefined crack configurations (standard laboratory fatigue crack growth rate specimens and many common crack configurations found in structures), and the user can define one additional crack configuration. The baseline crack growth rate properties (effective stress-intensity factor against crack growth rate) may be given in either equation or tabular form. For three-dimensional crack configurations, such as surface cracks or corner cracks at holes or notches, the fatigue crack growth rate properties may be different in the crack depth and crack length directions. Final failure of the cracked structure can be modelled with fracture toughness properties using either linear-elastic fracture mechanics (brittle materials), a two-parameter fracture criterion (brittle to ductile materials), or plastic collapse (extremely ductile materials). The crack configurations in FASTRAN II can be subjected to either constant-amplitude, variable-amplitude, or spectrum loading. The applied loads may be either tensile or compressive. Several standardized aircraft flight-load histories, such as TWIST, Mini-TWIST, FALSTAFF, Inverted FALSTAFF, Felix, and Gaussian, are included as options. FASTRAN II also includes two other methods that will help the user input spectrum load histories. The two methods are: (1) a list of stress points, and (2) a flight-by-flight history of stress points. Examples are provided in the user manual. Developed as a research program, FASTRAN II has successfully predicted crack growth in many metallic materials under various aircraft spectrum loadings. A computer program, DKEFF, which is part of the FASTRAN II package, was also developed to analyze crack growth rate data from laboratory specimens to obtain the effective stress-intensity factor against crack growth rate relations used in FASTRAN II. FASTRAN II is written in standard FORTRAN 77. It has been successfully compiled and implemented on Sun4 series computers running SunOS and on IBM PC compatibles running MS-DOS using the Lahey F77L FORTRAN compiler. Sample input and output data are included with the FASTRAN II package. The UNIX version requires 660K of RAM for execution. The standard distribution medium for the UNIX version (LAR-14865) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The standard distribution medium for the MS-DOS version (LAR-14944) is a 5.25 inch 360K MS-DOS format diskette. 
The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The program was developed in 1984 and revised in 1992. Sun4 and SunOS are trademarks of Sun Microsystems, Inc. IBM PC is a trademark of International Business Machines Corp. MS-DOS is a trademark of Microsoft, Inc. F77L is a trademark of Lahey Computer Systems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. PKWARE and PKUNZIP are trademarks of PKWare, Inc.

  20. The Automated Instrumentation and Monitoring System (AIMS) reference manual

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Hontalas, Philip; Listgarten, Sherry

    1993-01-01

    Whether a researcher is designing the 'next parallel programming paradigm,' another 'scalable multiprocessor,' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of execution traces can help computer designers and software architects to uncover system behavior and to take advantage of specific application characteristics and hardware features. A software tool kit that facilitates performance evaluation of parallel applications on multiprocessors is described. The Automated Instrumentation and Monitoring System (AIMS) has four major software components: a source code instrumentor, which automatically inserts active event recorders into the program's source code before compilation; a run-time performance-monitoring library, which collects performance data; a trace file animation and analysis tool kit, which reconstructs program execution from the trace file; and a trace post-processor, which compensates for data collection overhead. Besides being used as a prototype for developing new techniques for instrumenting, monitoring, and visualizing parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware test beds to evaluate their impact on user productivity. Currently, AIMS instrumentors accept FORTRAN and C parallel programs written for Intel's NX operating system on the iPSC family of multicomputers. A run-time performance-monitoring library for the iPSC/860 is included in this release. We plan to release monitors for other platforms (such as PVM and TMC's CM-5) in the near future. Performance data collected can be graphically displayed on workstations (e.g., Sun SPARC and SGI) supporting X Windows (in particular, X11R5, Motif 1.1.3).
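
    As a rough picture of what an inserted "active event recorder" might look like, the C sketch below logs fixed-size, timestamped event records around a communication call. It is a generic tracing idiom, not AIMS' actual monitor library or trace format:

        /* Hypothetical sketch of instrumentor-inserted event probes; the
         * record layout and event codes are invented for illustration. */
        #include <stdio.h>
        #include <sys/time.h>

        typedef struct { long usec; int node; int event; } TraceRec;

        static FILE *trace;

        static void record_event(int node, int event) {
            struct timeval tv;
            gettimeofday(&tv, NULL);
            TraceRec r = { tv.tv_sec * 1000000L + tv.tv_usec, node, event };
            fwrite(&r, sizeof r, 1, trace);   /* append binary trace record */
        }

        #define EV_SEND_BEGIN 1
        #define EV_SEND_END   2

        int main(void) {
            trace = fopen("node0.trc", "wb");
            if (!trace) return 1;
            record_event(0, EV_SEND_BEGIN);   /* probe before the call  */
            /* ... original message-send call would run here ... */
            record_event(0, EV_SEND_END);     /* probe after the call   */
            fclose(trace);
            return 0;
        }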

  1. Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1994-01-01

    Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on mainframe computers, then minicomputers, and more recently, on microcomputers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer, since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We describe the Cray's shared-memory vector architecture and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.

  2. LEOrbit: A program to calculate parameters relevant to modeling Low Earth Orbit spacecraft-plasma interaction

    NASA Astrophysics Data System (ADS)

    Marchand, R.; Purschke, D.; Samson, J.

    2013-03-01

    Understanding the physics of interaction between satellites and the space environment is essential in planning and exploiting space missions. Several computer models have been developed over the years to study this interaction. In all cases, simulations are carried out in the reference frame of the spacecraft, and effects such as charging and the formation of electrostatic sheaths and wakes are calculated for given conditions of the space environment. In this paper we present a program used to compute magnetic fields and a number of space plasma and space environment parameters relevant to Low Earth Orbit (LEO) spacecraft-plasma interaction modeling. Magnetic fields are obtained from the International Geomagnetic Reference Field (IGRF) and plasma parameters are obtained from the International Reference Ionosphere (IRI) model. All parameters are computed in the spacecraft frame of reference as a function of its six Keplerian elements. They are presented in a format that can be used directly in most spacecraft-plasma interaction models. Catalogue identifier: AENY_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 270308 No. of bytes in distributed program, including test data, etc.: 2323222 Distribution format: tar.gz Programming language: FORTRAN 90. Computer: Non-specific. Operating system: Non-specific. RAM: 7.1 MB Classification: 19, 4.14. External routines: IRI, IGRF (included in the package). Nature of problem: Compute magnetic field components, direction of the sun, sun visibility factor and approximate plasma parameters in the reference frame of a Low Earth Orbit satellite. Solution method: Orbit integration, calls to IGRF and IRI libraries, and transformation of coordinates from the geocentric frame to the spacecraft reference frame. Restrictions: Low Earth orbits, altitudes between 150 and 2000 km. Running time: Approximately two seconds to parameterize a full orbit with 1000 points.
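
    One of the quantities listed above, the sun visibility factor, can be illustrated with a simple cylindrical-shadow test: a satellite is sunlit unless it lies on the night side of Earth and within the shadow cylinder. The C sketch below implements that test under those assumptions; LEOrbit's own algorithm may differ:

        /* Cylindrical-shadow sun-visibility test. Vectors are geocentric,
         * in metres; s is a unit vector toward the Sun. Illustrative only. */
        #include <stdio.h>

        static double dot(const double a[3], const double b[3]) {
            return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
        }

        static int sunlit(const double r[3], const double s[3]) {
            const double Re = 6371000.0;      /* mean Earth radius, m     */
            double along = dot(r, s);         /* projection onto Sun line */
            if (along >= 0.0) return 1;       /* on the day side          */
            double perp2 = dot(r, r) - along * along;
            return perp2 > Re * Re;           /* outside shadow cylinder? */
        }

        int main(void) {
            double r[3] = { -7000000.0, 0.0, 0.0 };  /* directly behind Earth */
            double s[3] = { 1.0, 0.0, 0.0 };         /* Sun along +x          */
            printf("visibility factor: %d\n", sunlit(r, s));  /* prints 0 */
            return 0;
        }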

  3. Birthdate psychology: a new look at some old data.

    PubMed

    Pellegrini, R J

    1975-03-01

    This brief note deals with the development of alternative perspectives on the provocative and as yet unexplained result of an earlier study, in which groups of people born under different astrological zodiac signs were found to differ markedly in their scores on the California Psychological Inventory (CPI) scale described as a measure of "Femininity." Attention is focused on (a) discrepancies between the observed pattern of high and low scores on the CPI Femininity scale and the classification of sun signs as "masculine" or "feminine" by astrologers; (b) the trend in the data indicating that the six sun sign categories for which the highest scores were obtained on the Femininity scale correspond to birthdates running continuously from July 24 to January 20, while the six sun sign categories for which the lowest scores were obtained on that scale correspond to birthdates running continuously from January 21 to July 23; and (c) speculative consideration of the kinds of climatic, dietary, and/or cyclical geomagnetic events that might affect reproduction and prenatal and/or neonatal development in such a way as to influence adult personality.

  4. Assessment of performances of sun zenith angle and altitude parameterisations of atmospheric radiative transfer for spectral surface downwelling solar irradiance

    NASA Astrophysics Data System (ADS)

    Wald, L.; Blanc, Ph.

    2010-09-01

    Satellite-derived assessments of surface downwelling solar irradiance are increasingly used by engineering companies in solar energy. Performance is judged satisfactory for the time being. Nevertheless, requests for more accuracy are increasing, in particular in the spectral definition and in the decomposition of the global radiation into direct and diffuse radiation. One approach to reach this goal is to improve both the modelling of the radiative transfer and the quality of the inputs describing the optical state of the atmosphere. Within their joint project Heliosat-4, DLR and MINES ParisTech have adopted this approach to create advanced databases of solar irradiance succeeding the current HelioClim and SolEMi databases. Regarding the model, we have opted for libRadtran, a well-known model of proven quality. As with many similar models, running libRadtran is very time-consuming when it comes to processing millions or more pixels or grid cells. This is incompatible with real-time operational processing. One may adopt the abacus approach, or look-up tables, to overcome the problem. The model is run for a limited number of cases, covering the whole range of values taken by the various inputs of the model, and the abaci are thus constructed. For each real case, the irradiance value is computed by interpolating within the abaci. In this way, real-time operation can be envisioned. Nevertheless, the computation of the abaci themselves requires large computing capabilities. In addition, searching the abaci to find the values to interpolate can be time-consuming, as the abaci are very large: several millions of values in total. Moreover, it raises the problem of extrapolation when a parameter falls outside the tabulated range during use of the abaci. Parameterisation, when possible, is a means to reduce the amount of computation to be made and, subsequently, the effort to create the abaci, the size of the abaci, the extrapolation problem, and the searching time. It describes in an analytical manner, with a few parameters, the change in irradiance with a specific variable. This communication discusses two parameterisations found in the literature. One deals with the solar zenith angle, the other with the altitude. We assess their performance in retrieving solar irradiance for 32 spectral bands, from 240 nm to 4606 nm. The model libRadtran is run to create data sets for all sun zenith angles (every 5 degrees) and all altitudes (every km). These data sets are considered as a reference. Then, for each parameterisation, we compute the parameters using two irradiance values for specific values of angle (e.g., 0 and 60 degrees) or altitude (e.g., 0 and 3 km). The parameterisations are then applied to other values of angle and altitude. Differences between these assessments and the reference values of irradiance are computed and analysed. We conclude on the performance of each parameterisation for each spectral band as well as for the total irradiance. We discuss the possible use of these parameterisations in the future Heliosat-4 method and possible improvements. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement no. 218793 (MACC project).
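
    To make the two-point idea concrete, suppose the zenith-angle dependence is approximated by I(theta) = I0 cos(theta)^a, a simple assumed form (the paper's actual parameterisations may differ). Two libRadtran runs, at 0 and 60 degrees, then fix I0 and a, and the analytical curve replaces abacus entries at intermediate angles, as in this C sketch with placeholder numbers:

        /* Two-point fit of an assumed form I(theta) = I0 * cos(theta)^a.
         * The two reference irradiances are placeholders. */
        #include <stdio.h>
        #include <math.h>

        int main(void) {
            double I0  = 500.0;                /* run at theta = 0 deg,  W/m^2 */
            double I60 = 230.0;                /* run at theta = 60 deg, W/m^2 */
            double a = log(I60 / I0) / log(cos(60.0 * M_PI / 180.0));

            for (int deg = 0; deg <= 80; deg += 20) {
                double th = deg * M_PI / 180.0;
                printf("theta = %2d deg -> I = %.1f W/m^2\n",
                       deg, I0 * pow(cos(th), a));
            }
            return 0;
        }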

  5. TRL - A FORMAL TEST REPRESENTATION LANGUAGE AND TOOL FOR FUNCTIONAL TEST DESIGNS

    NASA Technical Reports Server (NTRS)

    Hops, J. M.

    1994-01-01

    A Formal Test Representation Language and Tool for Functional Test Designs (TRL) is an automatic tool and a formal language that is used to implement the Category-Partition Method and produce the specification of test cases in the testing phase of software development. The Category-Partition Method is particularly useful in defining the inputs, outputs, and purpose of the test design phase and combines the benefits of choosing normal cases with error-exposing properties. Traceability can be maintained quite easily by creating a test design for each objective in the test plan. The effort to transform the test cases into procedures is simplified by using an automatic tool to create the cases based on the test design. The method allows the rapid elimination of undesired test cases from consideration, and easy review of test designs by peer groups. The first step in the category-partition method is functional decomposition, in which the specification and/or requirements are decomposed into functional units that can be tested independently. A secondary purpose of this step is to identify the parameters that affect the behavior of the system for each functional unit. The second step, category analysis, carries the work done in the previous step further by determining the properties or sub-properties of the parameters that would make the system behave in different ways. The designer should analyze the requirements to determine the features or categories of each parameter and how the system may behave if the category were to vary its value. If the parameter undergoing refinement is a data-item, then categories of this data-item may be any of its attributes, such as type, size, value, units, frequency of change, or source. After all the categories for the parameters of the functional unit have been determined, the next step is to partition each category's range space into mutually exclusive values that the category can assume. In choosing partition values, all possible kinds of values should be included, especially the ones that will maximize error detection. The purpose of the final step, partition constraint analysis, is to refine the test design specification so that only the technically effective and economically feasible test cases are implied. TRL is written in C-language to be machine independent. It has been successfully implemented on an IBM PC compatible running MS-DOS, a Sun4 series computer running SunOS, an HP 9000/700 series workstation running HP-UX, a DECstation running DEC RISC ULTRIX, and a DEC VAX series computer running VMS. TRL requires 1Mb of disk space and a minimum of 84K of RAM. The documentation is available in electronic form in WordPerfect format. The standard distribution medium for TRL is a 5.25 inch 360K MS-DOS format diskette. Alternate distribution media and formats are available upon request. TRL was developed in 1993 and is a copyrighted work with all copyright vested in NASA.
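
    The mechanical core of the method, once categories and partitions are chosen, is the constrained cross-product that yields candidate test frames. The C sketch below (with invented categories; TRL's input syntax is not shown) enumerates frames and prunes infeasible combinations during constraint analysis:

        /* Category-partition enumeration: categories x partitions, with a
         * constraint predicate pruning infeasible frames. Illustrative only. */
        #include <stdio.h>

        static const char *size[] = { "empty", "one", "many" };
        static const char *type[] = { "ascii", "binary" };
        static const char *perm[] = { "readable", "unreadable" };

        /* Constraint analysis: drop frames that are not technically useful. */
        static int feasible(int s, int t, int p) {
            (void)p;                      /* permission unconstrained here  */
            return !(s == 0 && t == 1);   /* e.g., "empty binary" adds little */
        }

        int main(void) {
            int n = 0;
            for (int s = 0; s < 3; s++)
                for (int t = 0; t < 2; t++)
                    for (int p = 0; p < 2; p++)
                        if (feasible(s, t, p))
                            printf("test %2d: %s file, %s, %s\n",
                                   ++n, size[s], type[t], perm[p]);
            return 0;
        }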

  6. Gadopentetate dimeglumine is potentially an alternative contrast agent for three-dimensional computed tomography angiography with multidetector-row helical scanning.

    PubMed

    Gupta, Atul K; Alberico, Ronald A; Litwin, Alan; Kanter, Peter; Grossman, Zachary D

    2002-01-01

    To demonstrate that gadopentetate dimeglumine is potentially an alternative contrast medium for computed tomographic angiography (CTA). One 12.2-kg Beagle dog was studied as proof of principle; the cervical vessels of three adult human patients were imaged for presurgical planning of the neck. Gadopentetate dimeglumine, 0.5 mol/l (Berlex Laboratories, Wayne, NJ, U.S.A.), a LightSpeed QX/i CT (General Electric Medical Systems, Milwaukee, WI, U.S.A.), and an UltraSPARC II (Sun Microsystems, Santa Clara, CA, U.S.A.) running Advantage Windows 3.1 (General Electric Medical Systems) were used. Sufficient enhancement for CTA of the thoracic aorta, cervical vessels, and abdominal vessels was produced in the experimental dog, and the cervical vessels were clearly defined in all three patients. In that subset of patients with contraindications to iodinated contrast medium and for whom magnetic resonance angiography is inappropriate, gadopentetate dimeglumine may be an alternative contrast medium for CTA.

  7. Program For Tracking The Sun From The Moon

    NASA Technical Reports Server (NTRS)

    Woods, Warren K.; Spires, Dustin S.

    1995-01-01

    SUNTRACKER program computes azimuth and elevation angles of Sun, as viewed from given position on Moon, during time defined by user. Program gets selenographic (moon-centered) position of Sun at given Julian date, then converts selenographic position of Sun into azimuth and elevation at given position on Moon. Written in FORTRAN 77.
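
    The conversion in the second step is standard spherical astronomy: for a distant body, the elevation and azimuth at the observer follow from the observer's coordinates and the sub-solar point (the selenographic latitude and longitude where the Sun is overhead). A C sketch of that transform is below (not SUNTRACKER's FORTRAN source; azimuth is measured east of north, and sign conventions vary):

        /* Azimuth/elevation of a distant body from observer coordinates
         * and the sub-solar point. Angles in radians. Illustrative only. */
        #include <stdio.h>
        #include <math.h>

        static void az_el(double lat, double lon,     /* observer        */
                          double slat, double slon,   /* sub-solar point */
                          double *az, double *el) {
            double h = slon - lon;                    /* hour-angle analogue */
            *el = asin(sin(lat) * sin(slat) + cos(lat) * cos(slat) * cos(h));
            *az = atan2(sin(h) * cos(slat),
                        cos(lat) * sin(slat) - sin(lat) * cos(slat) * cos(h));
        }

        int main(void) {
            double d2r = M_PI / 180.0, az, el;
            az_el(20.0 * d2r, 30.0 * d2r, 0.0, 45.0 * d2r, &az, &el);
            printf("azimuth %.1f deg, elevation %.1f deg\n", az / d2r, el / d2r);
            return 0;
        }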

  8. Integration of an Apple II Plus Computer into an Existing Dual Axis Sun Tracker System.

    DTIC Science & Technology

    1984-06-01

    Keywords: Sun Tracker System; Solar Energy; Apple II Plus Computer. List-of-figures fragment: 4. Dual Axis Sun Tracker (Side View); 5. Solar Tracker System Block Diagram; 6. Plug Wiring Diagram for Top... Abstract fragment: ...sources will be competitive. Already many homes have solar collectors and other devices designed to decrease the consumption of gas, oil, and

  9. Ada Compiler Validation Summary Report: Certificate Number: 910920W1.11211, Verdix Corporation, VADS Sun4 SunOS => 68020/30 ARTX, VAda-110-40120, Version 6.0, SPARCstation 2 (Host) to Motorola MVME147 (Target)

    DTIC Science & Technology

    1991-09-20

    Host Computer System: SPARCstation 2 (SunOS Release 4.1.1). Target Computer System: Motorola MVME147 (Motorola 68030 Bare Board). Customer Agreement Number: 91-07-16-VRX. See section 3.1 for... AVF-VSR-504.0292, 18 February 1992. Ada COMPILER VALIDATION SUMMARY REPORT: Certificate Number: 910920W1.11211, VERDIX Corporation, VADS...

  10. TIERRAS: A package to simulate high energy cosmic ray showers underground, underwater and under-ice

    NASA Astrophysics Data System (ADS)

    Tueros, Matías; Sciutto, Sergio

    2010-02-01

    In this paper we present TIERRAS, a Monte Carlo simulation program based on the well-known AIRES air shower simulation system that enables the propagation of particle cascades underground, providing a tool to study particles arriving underground from a primary cosmic ray in the atmosphere, or to initiate cascades directly underground and propagate them, exiting into the atmosphere if necessary. We show several cross-checks of its results against CORSIKA, FLUKA, GEANT and ZHS simulations and we make some considerations regarding its possible use and limitations. The first results of full underground shower simulations are presented, as an example of the package capabilities. Program summary. Program title: TIERRAS for AIRES Catalogue identifier: AEFO_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 36 489 No. of bytes in distributed program, including test data, etc.: 3 261 669 Distribution format: tar.gz Programming language: Fortran 77 and C Computer: PC, Alpha, IBM, HP, Silicon Graphics and Sun workstations Operating system: Linux, DEC Unix, AIX, SunOS, Unix System V RAM: 22 Mbytes Classification: 1.1 External routines: TIERRAS requires AIRES 2.8.4 to be installed on the system. AIRES 2.8.4 can be downloaded from http://www.fisica.unlp.edu.ar/auger/aires/eg_AiresDownload.html. Nature of problem: Simulation of high and ultra-high energy underground particle showers. Solution method: Modification of the AIRES 2.8.4 code to accommodate underground conditions. Restrictions: In AIRES, some processes that are not statistically significant in the atmosphere are not simulated. In particular, it does not include muon photonuclear processes. This imposes a limitation on the application of this package to a depth of 1 km of standard rock (or 2.5 km of water equivalent). Neutrinos are not tracked in the simulation, but their energy is taken into account in decays. Running time: A TIERRAS for AIRES run of a 10 eV shower with statistical sampling (thinning) below 10 eV and 0.2 weight factor (see [1]) uses approximately 1 h of CPU time on an Intel Core 2 Quad Q6600 at 2.4 GHz. It uses only one core, so 4 simultaneous simulations can be run on this computer. AIRES includes a spooling system to run several simultaneous jobs of any type. References: S. Sciutto, AIRES 2.6 User Manual, http://www.fisica.unlp.edu.ar/auger/aires/.

  11. PYROLASER - PYROLASER OPTICAL PYROMETER OPERATING SYSTEM

    NASA Technical Reports Server (NTRS)

    Roberts, F. E.

    1994-01-01

    The PYROLASER package is an operating system for the Pyrometer Instrument Company's Pyrolaser. There are 6 individual programs in the PYROLASER package: two main programs, two lower level subprograms, and two programs which, although independent, function predominantly as macros. The package provides a quick and easy way to setup, control, and program a standard Pyrolaser. Temperature and emissivity measurements may be either collected as if the Pyrolaser were in the manual operations mode, or displayed on real time strip charts and stored in standard spreadsheet format for post-test analysis. A shell is supplied to allow macros, which are test-specific, to be easily added to the system. The Pyrolaser Simple Operation program provides full on-screen remote operation capabilities, thus allowing the user to operate the Pyrolaser from the computer just as it would be operated manually. The Pyrolaser Simple Operation program also allows the use of "quick starts". Quick starts provide an easy way to permit routines to be used as setup macros for specific applications or tests. The specific procedures required for a test may be ordered in a sequence structure and then the sequence structure can be started with a simple button in the cluster structure provided. One quick start macro is provided for continuous Pyrolaser operation. A subprogram, Display Continuous Pyr Data, is used to display and store the resulting data output. Using this macro, the system is set up for continuous operation and the subprogram is called to display the data in real time on strip charts. The data is simultaneously stored in a spreadsheet format. The resulting spreadsheet file can be opened in any one of a number of commercially available spreadsheet programs. The Read Continuous Pyrometer program is provided as a continuously run subprogram for incorporation of the Pyrolaser software into a process control or feedback control scheme in a multi-component system. The program requires the Pyrolaser to be set up using the Pyrometer String Transfer macro. It requires no inputs and provides temperature and emissivity as outputs. The Read Continuous Pyrometer program can be run continuously and the data can be sampled as often or as seldom as updates of temperature and emissivity are required. PYROLASER is written using the Labview software for use on Macintosh series computers running System 6.0.3 or later, Sun Sparc series computers running OpenWindows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatibles running Microsoft Windows 3.1 or later. Labview requires a minimum of 5Mb of RAM on a Macintosh, 24Mb of RAM on a Sun, and 8Mb of RAM on an IBM PC or compatible. The Labview software is a product of National Instruments (Austin,TX; 800-433-3488), and is not included with this program. The standard distribution medium for PYROLASER is a 3.5 inch 800K Macintosh format diskette. It is also available on a 3.5 inch 720K MS-DOS format diskette, a 3.5 inch diskette in UNIX tar format, and a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation in Macintosh WordPerfect version 2.0.4 format is included on the distribution medium. Printed documentation is included in the price of the program. PYROLASER was developed in 1992.

  12. Satellites, scientists track storm from Sun to surface

    NASA Astrophysics Data System (ADS)

    Carlowicz, Michael

    1997-02-01

    On January 6, the Sun spat a coronal mass ejection (CME) into the solar wind and toward Earth; by January 10, a cloud of charged particles buffeted the face of the planet. It was, by several accounts, a run-of-the-mill space weather event. But the scientific work surrounding the storm was anything but run-of-the-mill. For the first time, space physicists observed and recorded a space weather event from start to finish, from solar surface to earthly impact. Researchers are calling it the first true success story of the four-year-old International Solar Terrestrial Physics program (ISTP), which includes NASA's WIND and POLAR spacecraft; the joint Solar and Heliospheric Observatory (SOHO) mission of NASA and the European Space Agency; the joint Geotail mission of NASA and Japan's Institute of Space and Aeronautical Science; and Russia's Interball satellites.

  13. UTDallas Offline Computing System for B Physics with the Babar Experiment at SLAC

    NASA Astrophysics Data System (ADS)

    Benninger, Tracy L.

    1998-10-01

    The University of Texas at Dallas High Energy Physics group is building a high performance, large storage computing system for B physics research with the BaBar experiment ("factory") at the Stanford Linear Accelerator Center. The goal of this system is to analyze one terabyte of complex Event Store data from BaBar in one to two days. The foundation of the computing system is a Sun E6000 Enterprise multiprocessor system, with additions of a Sun StorEdge L1800 Tape Library, a Sun workstation for processing batch jobs, staging disks, and interface cards. The design considerations, current status, projects underway, and possible upgrade paths will be discussed.

  14. Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS

    NASA Technical Reports Server (NTRS)

    Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Tucker, Deanne (Technical Monitor)

    1994-01-01

    We present views and analysis of the execution of several PVM codes for Computational Fluid Dynamics on a network of SPARCstations, including (a) NAS Parallel Benchmarks CG and MG (White, Alund and Sunderam 1993); (b) a multi-partitioning algorithm for NAS Parallel Benchmark SP (Wijngaart 1993); and (c) an overset grid flowsolver (Smith 1993). These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We will describe the architecture, operation and application of AIMS. The AIMS toolkit contains (a) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (b) Monitor, a library of run-time trace-collection routines; (c) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (d) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran 77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (a) the impact of long message latencies; (b) the impact of multiprogramming overheads and associated load imbalance; (c) cache and virtual-memory effects; and (d) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs. Some of the features planned for the near future include: (a) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (b) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.
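
    The skew-compensation idea can be sketched in a few lines: estimate a child's constant clock offset from a round-trip exchange, then shift its timestamps into the parent's timescale so that no message appears to arrive before it was sent. The C fragment below is a simplified illustration under a symmetric-latency assumption, not AIMS' actual calibration algorithm:

        /* Constant-skew compensation sketch: parent sends at t0, child
         * echoes its clock tc, parent receives at t1 (parent clock). */
        #include <stdio.h>

        static double estimate_offset(double t0, double t1, double tc) {
            return tc - (t0 + t1) / 2.0;   /* child clock minus parent clock */
        }

        int main(void) {
            double off = estimate_offset(10.000, 10.004, 12.500);
            /* A receive the child logged at 12.510 (child clock) maps to
             * parent time by subtracting the estimated offset; the result
             * now falls after the 10.000 s send, as physics requires. */
            double recv_parent = 12.510 - off;
            printf("offset %.3f s; receive at %.3f s parent time\n",
                   off, recv_parent);
            return 0;
        }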

  15. Weekend sun protection and sunburn in Australia trends (1987-2002) and association with SunSmart television advertising.

    PubMed

    Dobbinson, Suzanne J; Wakefield, Melanie A; Jamsen, Kris M; Herd, Natalie L; Spittal, Matthew J; Lipscomb, John E; Hill, David J

    2008-02-01

    The Australian state of Victoria has run a population-based skin cancer prevention program called SunSmart since 1988, incorporating substantial public education efforts and environmental change strategies. Trends over 15 years in behavioral risk factors for skin cancer were examined in a population exposed to the SunSmart program. Whether outcomes were associated with the extent of SunSmart television advertising was then assessed. In nine cross-sectional surveys from 1987 to 2002, 11,589 adults were interviewed by telephone about their sun exposure and sun protection during outdoor activities on summer weekends. Analyses completed in 2007 adjusted for ambient temperature and ultraviolet radiation. Sun protection and sunburn show substantial general improvement over time, but have stalled in recent years. Use of hats and sunscreens significantly increased over time and peaked during the mid to late 1990s, compared with the pre-SunSmart baseline. The mean proportion of unprotected skin was reduced and was lowest in the summer of 1997-1998. Summer sunburn incidence declined over time and was 9.1% in 2002, almost half the baseline rate (OR=0.53; 95% CI=0.39-0.73). Higher exposure to SunSmart advertising in the 4 weeks before the interview increased: (1) preference for no tan, (2) hat and sunscreen use, and (3) the proportion of body surface protected from the sun. The general improvement in sun-protective behaviors over time highlights that a population's sun-protective behaviors are amenable to change. Population-based prevention programs incorporating substantial television advertising campaigns into the mix of strategies may be highly effective in improving a population's sun-protective behaviors.

  16. QUICK - AN INTERACTIVE SOFTWARE ENVIRONMENT FOR ENGINEERING DESIGN

    NASA Technical Reports Server (NTRS)

    Schlaifer, R. S.

    1994-01-01

    QUICK provides the computer user with the facilities of a sophisticated desk calculator which can perform scalar, vector, and matrix arithmetic, propagate conic orbits, determine planetary and satellite coordinates, and perform other related astrodynamic calculations within a Fortran-like environment. QUICK is an interpreter, therefore eliminating the need to use a compiler or a linker to run QUICK code. QUICK capabilities include options for automated printing of results, the ability to submit operating system commands on some systems, and access to a plotting package (MASL) and a text editor without leaving QUICK. Mathematical and programming features of QUICK include the ability to handle arbitrary algebraic expressions, the capability to define user functions in terms of other functions, built-in constants such as pi, direct access to useful COMMON areas, matrix capabilities, extensive use of double precision calculations, and the ability to automatically load user functions from a standard library. The MASL (Multi-mission Analysis Software Library) plotting package, included in the QUICK package, is a set of FORTRAN 77 compatible subroutines designed to facilitate the plotting of engineering data by allowing programmers to write plotting-device-independent applications. Its universality lies in the number of plotting devices it puts at the user's disposal. The MASL package of routines has proved very useful and easy to work with, yielding good plots for most new users on the first or second try. The functions provided include routines for creating histograms, "wire mesh" surface plots, and contour plots, as well as normal graphs with a large variety of axis types. The library has routines for plotting on cartesian, polar, log, mercator, cyclic, calendar, and stereographic axes, and for performing automatic or explicit scaling. The lengths of the axes of a plot are completely under the control of the program using the library. Programs written to use the MASL subroutines can be made to output to the Calcomp 1055 plotter, the Hewlett-Packard 2648 graphics terminal, the HP 7221, 7475 and 7550 pen plotters, the Tektronix 40xx and 41xx series graphics terminals, the DEC VT125/VT240 graphics terminals, the QMS 800 laser printer, the Sun Microsystems monochrome display, the Ridge Computers monochrome display, the IBM/PC color display, or a "dumb" terminal or printer. Programs using this library can be written so that they always use the same type of plotter, or they can allow the choice of plotter type to be deferred until after program execution. QUICK is written in RATFOR for use on Sun4 series computers running SunOS. No source code is provided. The standard distribution medium for this program is a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation in ASCII format is included on the distribution medium. QUICK was developed in 1991 and is a copyrighted work with all copyright vested in NASA.

  17. NAVO MSRC Navigator. Fall 2008

    DTIC Science & Technology

    2008-01-01

    arrival of our two new HPC systems, DAVINCI (IBM P6) and EINSTEIN (Cray XT5), and our new mass storage server, NEWTON (Sun M5000). The most...will run on both DAVINCI and EINSTEIN, providing researchers with the capability of running jobs of up to 4,256 and 12,736 cores in size...are expected to double as EINSTEIN and DAVINCI are brought online. We have also strengthened the backbone of our Disaster Recovery infrastructure, as

  18. Video movie making using remote procedure calls and 4BSD Unix sockets on Unix, UNICOS, and MS-DOS systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, D.W.; Johnston, W.E.; Hall, D.E.

    1990-03-01

    We describe the use of the Sun Remote Procedure Call and Unix socket interprocess communication mechanisms to provide the network transport for a distributed, client-server based, image handling system. Clients run under Unix or UNICOS and servers run under Unix or MS-DOS. The use of remote procedure calls across local or wide-area networks to make video movies is addressed.

  19. Seeing the Difference

    NASA Image and Video Library

    2014-01-03

    With its C2 coronagraph instrument, NASA's satellite SOHO captured a blossoming coronal mass ejection (CME) as it roared into space from the right side of the Sun (Dec. 28, 2013). SOHO also produces running difference images and movies of the Sun's corona, in which the difference between one image and the next (taken about 10 minutes apart) is highlighted. This technique strongly emphasizes the changes that occurred. Here we have taken a single white light frame and shifted it back and forth with a running difference image taken at the same time to illustrate the effect. Credit: NASA/GSFC/SOHO

  20. 75 FR 43555 - Hewlett Packard; Hewlett Packard-Enterprise Business Services Formerly Known as Electronic Data...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-26

    ... Workers From Sun Microsystems, Inc., Dell Computer Corp., EMC Corp., EMC Corp. Total, Cisco Systems Capital Corporation, Microsoft Corp., Symantec Corp., Xerox Corp., Vmware, Inc., Sun Microsystems Federal... known as Electronic Data Systems, including on- site leased workers from Sun Microsystems, Inc., Dell...

  1. 75 FR 63509 - Hewlett Packard, Hewlett Packard-Enterprise Business Services, Formerly Known as Electronic Data...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-15

    ... Workers From Sun Microsystems, Inc., Dell Computer Corp., EMC Corp., EMC Corp. Total, Cisco Systems Capital Corporation, Microsoft Corp., Symantec Corp., Xerox Corp., VMWare, Inc., Sun Microsystems Federal...-- Services, formerly known as Electronic Data Systems, including on- site leased workers from Sun...

  2. Project SUN (Students Understanding Nature)

    NASA Technical Reports Server (NTRS)

    Curley, T.; Yanow, G.

    1995-01-01

    Project SUN is part of NASA's 'Mission to Planet Earth' education outreach effort. It is based on development of low cost, scientifically accurate instrumentation and computer interfacing, coupled with Apple II computers as dedicated data loggers. The project comprises: instruments, interfacing, software, curriculum, a detailed operating manual, and a system of training at the school sites.

  3. Mobile computing device configured to compute irradiance, glint, and glare of the sun

    DOEpatents

    Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib

    2014-03-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.

  4. Confidence range estimate of extended source imagery acquisition algorithms via computer simulations [in optical communication systems]

    NASA Technical Reports Server (NTRS)

    Chen, Chien-C.; Hui, Elliot; Okamoto, Garret

    1992-01-01

    Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large Earth albedo fluctuation. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of the algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.
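
    A common pointing estimate for an extended, unevenly bright source is the intensity centroid of the detector frame. The C sketch below computes such a centroid over a toy image; it is a generic centroid tracker, not necessarily one of the algorithms whose confidence ranges the study simulates:

        /* Intensity-centroid pointing estimate over a toy detector frame;
         * pixel values stand in for a blurred Earth image with albedo
         * fluctuation. Illustrative only. */
        #include <stdio.h>

        #define W 5
        #define H 5

        int main(void) {
            double img[H][W] = {
                { 0, 1, 2, 1, 0 },
                { 1, 4, 7, 4, 1 },
                { 2, 7, 9, 8, 2 },
                { 1, 4, 8, 5, 1 },
                { 0, 1, 2, 1, 0 },
            };
            double sum = 0.0, cx = 0.0, cy = 0.0;
            for (int y = 0; y < H; y++)
                for (int x = 0; x < W; x++) {
                    sum += img[y][x];          /* total intensity          */
                    cx  += x * img[y][x];      /* intensity-weighted x sum */
                    cy  += y * img[y][x];      /* intensity-weighted y sum */
                }
            printf("centroid: (%.3f, %.3f) pixels\n", cx / sum, cy / sum);
            return 0;
        }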

  5. Ada Compiler Validation Summary Report: Certificate Number: 940325S1.11341, DDC-I, DACS Sun SPARC/SunOS to 80186 Bare Ada Cross Compiler System, Version 4.6.4, Sun SPARCstation IPX => Intel iSBC 186/100 (Bare Machine)

    DTIC Science & Technology

    1994-03-25

    declarations requiring more digits than SYSTEM.MAX_DIGITS: C24113L..Y (14 tests), C35705L..Y (14 tests), C35706L..Y (14 tests), C35707L..Y (14 tests)... before the DACS Run-Time System (RTS) library normally searches for run-time routines; in this way one can replace the standard DACS RTS routines with... type SHORT_INTEGER is range -128 .. 127; type INTEGER is range -32_768 .. 32_767; type LONG_INTEGER is range -2_147_483_648 .. 2_147_483_647; type FLOAT is digits 6

  6. Optimum Vehicle Component Integration with InVeST (Integrated Vehicle Simulation Testbed)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, W; Paddack, E; Aceves, S

    2001-12-27

    We have developed an Integrated Vehicle Simulation Testbed (InVeST). InVeST is based on the concept of Co-simulation, and it allows the development of virtual vehicles that can be analyzed and optimized as an overall integrated system. The virtual vehicle is defined by selecting different vehicle components from a component library. Vehicle component models can be written in multiple programming languages running on different computer platforms. At the same time, InVeST provides full protection for proprietary models. Co-simulation is a cost-effective alternative to competing methodologies, such as developing a translator or selecting a single programming language for all vehicle components. InVeST has recently been demonstrated using a transmission model and a transmission controller model. The transmission model was written in SABER and ran on a Sun/Solaris workstation, while the transmission controller was written in MATRIXx and ran on a PC running Windows NT. The demonstration was successfully performed. Future plans include the applicability of Co-simulation and InVeST to the analysis and optimization of multiple complex systems, including those of Intelligent Transportation Systems.

  7. A semi-automated process for the production of custom-made shoes

    NASA Technical Reports Server (NTRS)

    Farmer, Franklin H.

    1991-01-01

    A more efficient, cost-effective, and timely way of designing and manufacturing custom footwear is needed. A potential solution to this problem lies in the use of computer-aided design and manufacturing (CAD/CAM) techniques in the production of custom shoes. A prototype computer-based system was developed; the system is primarily a software entity which directs and controls a 3-D scanner, a lathe or milling machine, and a pattern-cutting machine to produce the shoe last and the components to be assembled into a shoe. The steps in this process are: (1) scan the surface of the foot to obtain a 3-D image; (2) thin the foot surface data and create a tiled wire model of the foot; (3) interactively modify the wire model of the foot to produce a model of the shoe last; (4) machine the last; (5) scan the surface of the last and verify that it correctly represents the last model; (6) design cutting patterns for shoe uppers; (7) cut uppers; (8) machine an inverse mold for the shoe innersole/sole combination; (9) mold the innersole/sole; and (10) assemble the shoe. For all its capabilities, this system still requires the direction and assistance of skilled operators and shoemakers to assemble the shoes. Currently, the system is running on a SUN3/260 workstation with a TAAC application accelerator. The software elements of the system are written in either Fortran or C and run under a UNIX operating system.

  8. A Distributed Prognostic Health Management Architecture

    NASA Technical Reports Server (NTRS)

    Bhaskar, Saha; Saha, Sankalita; Goebel, Kai

    2009-01-01

    This paper introduces a generic distributed prognostic health management (PHM) architecture with specific application to the electrical power systems domain. Current state-of-the-art PHM systems are mostly centralized in nature, where all the processing is reliant on a single processor. This can lead to loss of functionality in case of a crash of the central processor or monitor. Furthermore, with increases in the volume of sensor data as well as the complexity of algorithms, traditional centralized systems become unsuitable for successful deployment, and efficient distributed architectures are required. A distributed architecture, though, is not effective unless there is an algorithmic framework to take advantage of its unique abilities. The health management paradigm envisaged here incorporates a heterogeneous set of system components monitored by a varied suite of sensors and a particle filtering (PF) framework that has the power and the flexibility to adapt to the different diagnostic and prognostic needs. Both the diagnostic and prognostic tasks are formulated as a particle filtering problem in order to explicitly represent and manage uncertainties; however, typically the complexity of the prognostic routine is higher than the computational power of one computational element (CE). Individual CEs run diagnostic routines until the system variable being monitored crosses a nominal threshold, upon which they coordinate with other networked CEs to run the prognostic routine in a distributed fashion. Implementation results from a network of distributed embedded devices monitoring a prototypical aircraft electrical power system are presented, where the CEs are Sun Microsystems Small Programmable Object Technology (SPOT) devices.
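
    As a concrete illustration of the particle filtering formulation described above (a generic sketch, not the paper's implementation; the damage-growth model, threshold, and noise levels are invented), diagnosis reweights a particle cloud by the measurement likelihood, and prognosis propagates the cloud to a failure threshold to obtain a remaining-useful-life estimate:

      import numpy as np

      rng = np.random.default_rng(0)
      N = 1000
      state = rng.normal(0.10, 0.02, N)          # damage-state hypotheses
      weights = np.full(N, 1.0 / N)

      def diagnose(weights, measurement, sigma=0.05):
          """Bayes update: reweight particles by the sensor likelihood."""
          lik = np.exp(-0.5 * ((measurement - state) / sigma) ** 2)
          w = weights * lik
          return w / w.sum()

      def prognose(weights, threshold=1.0, growth=0.03, dt=1.0, horizon=200):
          """Propagate each particle until it crosses the failure threshold;
          the weighted crossing times give the remaining-useful-life estimate."""
          rul = np.full(N, np.inf)
          s = state.copy()
          for k in range(1, horizon + 1):
              s = s + growth * dt + rng.normal(0.0, 0.005, N)  # damage growth
              rul[(s >= threshold) & np.isinf(rul)] = k * dt
          ok = np.isfinite(rul)
          return np.average(rul[ok], weights=weights[ok])

      weights = diagnose(weights, measurement=0.12)
      print("expected remaining useful life:", prognose(weights))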

  9. Sun-Direction Estimation Using a Partially Underdetermined Set of Coarse Sun Sensors

    NASA Astrophysics Data System (ADS)

    O'Keefe, Stephen A.; Schaub, Hanspeter

    2015-09-01

    A comparison of different methods to estimate the sun-direction vector using a partially underdetermined set of cosine-type coarse sun sensors (CSS), while simultaneously controlling the attitude towards a power-positive orientation, is presented. CSS are commonly used in performing power-positive sun-pointing and are attractive due to their low cost, small size, and low power consumption. For this study only CSS and rate gyro measurements are available, and the sensor configuration does not provide the global triple coverage required for a unique sun-direction calculation. The methods investigated include a vector average method, a combination of least squares and minimum norm criteria, and an extended Kalman filter approach. All cases are formulated such that precise ground calibration of the CSS is not required. Despite significant biases in the state dynamics and measurement models, and despite the underdetermined sensor coverage, Monte Carlo simulations show that an extended Kalman filter approach can provide degree-level accuracy of the sun-direction vector both with and without a control algorithm running simultaneously. If no rate gyro measurements are available, and rates are partially estimated from CSS, the EKF performance degrades as expected, but is still able to achieve better than 10° accuracy using only CSS measurements.
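
    A minimal sketch of the least-squares/minimum-norm criterion named above (illustrative geometry; the paper's flight configuration and calibration handling are not reproduced). Each cosine-type CSS with unit normal n_i ideally reads y_i = max(0, n_i . s), so stacking the illuminated sensors gives H s ~ y, and the pseudoinverse returns the minimum-norm least-squares sun direction even when fewer than three sensors are lit:

      import numpy as np

      def sun_direction(normals, counts):
          lit = counts > 0                    # only lit sensors constrain s
          H, y = normals[lit], counts[lit]
          s, *_ = np.linalg.lstsq(H, y, rcond=None)   # min-norm LS solution
          n = np.linalg.norm(s)
          return s / n if n > 0 else s

      # Four canted sensors (hypothetical layout) observing a known sun vector.
      normals = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, 0, 1]], float)
      true_sun = np.array([0.6, 0.8, 0.0])
      counts = np.clip(normals @ true_sun, 0, None)   # ideal cosine response
      print(sun_direction(normals, counts))           # ~ [0.6, 0.8, 0.0]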

  10. Long period nodal motion of sun synchronous orbits

    NASA Technical Reports Server (NTRS)

    Duck, K. I.

    1975-01-01

    An approximate model is formulated for assessing the perturbations that significantly affect long-term nodal motion of sun synchronous orbits. Computer simulations with several independent computer programs consider zonal and tesseral gravitational harmonics, third-body gravitational disturbances induced by the sun and the moon, and atmospheric drag. A pendulum model consisting of even zonal harmonics through order 4 and solar gravity dominated the nodal motion approximation. This pendulum motion results from solar gravity inducing an inclination oscillation which couples into the nodal precession induced by the earth's oblateness. The pendulum model correlated well with simulations and observed flight data.
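
    The coupling described above starts from the oblateness-driven regression of the node. A back-of-the-envelope sketch (standard Earth constants; the higher zonals, lunisolar gravity, and drag considered in the study are omitted) computes the dominant J2 nodal rate, dOmega/dt = -(3/2) J2 n (Re/p)^2 cos(i), and the inclination that makes an orbit sun synchronous:

      import math

      MU = 398600.4418        # km^3/s^2, Earth gravitational parameter
      RE = 6378.137           # km, Earth equatorial radius
      J2 = 1.08263e-3

      def node_rate_deg_per_day(a_km, e, i_deg):
          n = math.sqrt(MU / a_km**3)                 # mean motion, rad/s
          p = a_km * (1 - e**2)
          rate = -1.5 * J2 * n * (RE / p)**2 * math.cos(math.radians(i_deg))
          return math.degrees(rate) * 86400.0

      def sun_sync_inclination(a_km, e=0.0, target=360.0 / 365.2422):
          # choose cos(i) so the node regresses at the Sun's mean rate (deg/day)
          n = math.sqrt(MU / a_km**3)
          p = a_km * (1 - e**2)
          cos_i = -math.radians(target / 86400.0) / (1.5 * J2 * n * (RE / p)**2)
          return math.degrees(math.acos(cos_i))

      a = RE + 800.0                                  # 800 km circular orbit
      print(sun_sync_inclination(a))                  # ~98.6 deg
      print(node_rate_deg_per_day(a, 0.0, 98.6))      # ~0.9856 deg/day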

  11. Children's Independent Exploration of a Natural Phenomenon by Using a Pictorial Computer-Based Simulation.

    ERIC Educational Resources Information Center

    Kangassalo, Marjatta

    Using a pictorial computer simulation of a natural phenomenon, children's exploration processes and their construction of conceptual models were examined. The selected natural phenomenon was the variation of sunlight and heat of the sun experienced on the earth in relation to the positions of the earth and sun in space, and the subjects were…

  12. DG TO FT - AUTOMATIC TRANSLATION OF DIGRAPH TO FAULT TREE MODELS

    NASA Technical Reports Server (NTRS)

    Iverson, D. L.

    1994-01-01

    Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Each model has its advantages. While digraphs can be derived in a fairly straightforward manner from system schematics and knowledge about component failure modes and system design, fault tree structure allows for fast processing using efficient techniques developed for tree data structures. The similarities between digraphs and fault trees permit the information encoded in the digraph to be translated into a logically equivalent fault tree. The DG TO FT translation tool will automatically translate digraph models, including those with loops or cycles, into fault tree models that have the same minimum cut set solutions as the input digraph. This tool could be useful, for example, if some parts of a system have been modeled using digraphs and others using fault trees. The digraphs could be translated and incorporated into the fault trees, allowing them to be analyzed using a number of powerful fault tree processing codes, such as cut set and quantitative solution codes. A cut set for a given node is a group of failure events that will cause the failure of the node. A minimum cut set for a node is a cut set such that, if any of the failures in the set were removed, the remaining failures would not cause the failure of the event represented by the node. Cut set calculations can be used to find dependencies, weak links, and vital system components whose failures would cause serious system failure. The DG TO FT translation system reads in a digraph with each node listed as a separate object in the input file. The user specifies a terminal node for the digraph that will be used as the top node of the resulting fault tree. A fault tree basic event node representing the failure of that digraph node is created and becomes a child of the terminal root node. A subtree is created for each of the inputs to the digraph terminal node, and the roots of those subtrees are added as children of the top node of the fault tree. Every node in the digraph upstream of the terminal node will be visited and converted. During the conversion process, the algorithm keeps track of the path from the digraph terminal node to the current digraph node. If a node is visited twice, then the program has found a cycle in the digraph. This cycle is broken by finding the minimal cut sets of the twice-visited digraph node and forming those cut sets into subtrees. Another implementation of the algorithm resolves loops by building a subtree based on the digraph minimal cut sets calculation. It does not reduce the subtree to minimal cut set form. This second implementation produces larger fault trees, but runs much faster than the version using minimal cut sets since it does not spend time reducing the subtrees to minimal cut sets. The fault trees produced by DG TO FT will contain OR gates, AND gates, Basic Event nodes, and NOP gates. The results of a translation can be output as a text object description of the fault tree similar to the text digraph input format. The translator can also output a LISP language formatted file and an augmented LISP file which can be used by the FTDS (ARC-13019) diagnosis system, available from COSMIC, which performs diagnostic reasoning using the fault tree as a knowledge base. DG TO FT is written in the C language to be machine independent.
It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. DG TO FT is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is provided on the distribution medium. DG TO FT was developed in 1992. Sun and SunOS are trademarks of Sun Microsystems, Inc. DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc. System 7 is a trademark of Apple Computer, Inc. Microsoft Word is a trademark of Microsoft Corporation.
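
    A toy illustration of the traversal described above (an instructional simplification in Python, not DG TO FT itself): each digraph node fails if its own basic event occurs or if any input node fails, so every node becomes an OR gate over a basic event plus the subtrees of its inputs, and the path check detects cycles. Here a revisited node is simply cut back to its basic event, whereas DG TO FT resolves loops with minimal-cut-set subtrees; the example digraph is invented.

      import pprint

      digraph = {                      # node -> list of input (upstream) nodes
          "pump_fail": ["power_loss", "controller_fail"],
          "controller_fail": ["power_loss"],
          "power_loss": [],
      }

      def to_fault_tree(node, path=()):
          basic = ("BASIC", node)
          if node in path:             # cycle found: stop the recursion here
              return basic
          children = [to_fault_tree(i, path + (node,)) for i in digraph[node]]
          return ("OR", node, [basic] + children) if children else basic

      # "pump_fail" plays the role of the user-specified terminal/top node.
      pprint.pprint(to_fault_tree("pump_fail"))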

  13. Attitude Control System Design for the Solar Dynamics Observatory

    NASA Technical Reports Server (NTRS)

    Starin, Scott R.; Bourkland, Kristin L.; Liu, Kuo-Chia; Mason, Paul A. C.; Vess, Melissa F.; Andrews, Stephen F.; Morgenstern, Wendy M.

    2005-01-01

    The Solar Dynamics Observatory mission, part of the Living With a Star program, will place a geosynchronous satellite in orbit to observe the Sun and relay data to a dedicated ground station at all times. SDO remains Sun-pointing throughout most of its mission for the instruments to take measurements of the Sun. The SDO attitude control system is a single-fault tolerant design. Its fully redundant attitude sensor complement includes 16 coarse Sun sensors, a digital Sun sensor, 3 two-axis inertial reference units, 2 star trackers, and 4 guide telescopes. Attitude actuation is performed using 4 reaction wheels and 8 thrusters, and a single main engine nominally provides velocity-change thrust. The attitude control software has five nominal control modes: 3 wheel-based modes and 2 thruster-based modes. A wheel-based Safehold running in the attitude control electronics box improves the robustness of the system as a whole. All six modes are designed on the same basic proportional-integral-derivative attitude error structure, with more robust modes setting their integral gains to zero. The paper details the mode designs and their uses.

  14. The Effect of Sea-Surface Sun Glitter on Microwave Radiometer Measurements

    NASA Technical Reports Server (NTRS)

    Wentz, F. J.

    1981-01-01

    A relatively simple model for the microwave brightness temperature of sea surface Sun glitter is presented. The model is an accurate closed-form approximation for the fourfold Sun glitter integral. The model computations indicate that Sun glitter contamination of on-orbit radiometer measurements is appreciable over a large swath area. For winds near 20 m/s, Sun glitter affects the retrieval of environmental parameters for Sun angles as large as 20 to 25 deg. The model-predicted biases in retrieved wind speed and sea surface temperature due to neglecting Sun glitter are consistent with those experimentally observed in SEASAT SMMR retrievals. A least squares retrieval algorithm that uses a combined sea and Sun model function shows the potential of retrieving accurate environmental parameters in the presence of Sun glitter so long as the Sun angles and wind speed are above 5 deg and 2 m/s, respectively.

  15. A Comparison of Priority-based and Incremental Real-Time Garbage Collectors in the Implementation of the Shadow Design Pattern

    DTIC Science & Technology

    2008-08-15

    running the real-time application we used in our previous study on IBM WebSphere Real Time. IBM WebSphere Real Time automatically sets Metronome, its...the experiment shows that the modified code for the Shadow Design Pattern runs well under Metronome...includes the real-time garbage collector called the Metronome. Unlike the Sun RTGC, we cannot change the priority of the Metronome RTGC. Metronome is

  16. Nuclear shell model code CRUNCHER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resler, D.A.; Grimes, S.M.

    1988-05-01

    A new nuclear shell model code CRUNCHER, patterned after the code VLADIMIR, has been developed. While CRUNCHER and VLADIMIR employ the techniques of an uncoupled basis and the Lanczos process, improvements in the new code allow it to handle much larger problems than the previous code and to perform them more efficiently. Tests involving a moderately sized calculation indicate that CRUNCHER running on a SUN 3/260 workstation requires approximately one-half the central processing unit (CPU) time required by VLADIMIR running on a CRAY-1 supercomputer.
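
    The Lanczos process mentioned above reduces a huge symmetric matrix to a small tridiagonal one whose extreme eigenvalues converge on those of the original, so only matrix-vector products with the full matrix are ever needed. A generic sketch (illustrated on a random symmetric matrix rather than a shell-model Hamiltonian, and without the reorthogonalization a production code would need):

      import numpy as np

      def lanczos(H, k, seed=1):
          """k-step Lanczos: build the tridiagonal T, return its eigenvalues."""
          rng = np.random.default_rng(seed)
          n = H.shape[0]
          alpha, beta = np.zeros(k), np.zeros(k - 1)
          v = rng.normal(size=n)
          v /= np.linalg.norm(v)
          v_prev = np.zeros(n)
          for j in range(k):
              w = H @ v                              # the only large operation
              alpha[j] = v @ w
              w -= alpha[j] * v
              if j > 0:
                  w -= beta[j - 1] * v_prev          # three-term recurrence
              if j < k - 1:
                  beta[j] = np.linalg.norm(w)
                  v_prev, v = v, w / beta[j]
          T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
          return np.linalg.eigvalsh(T)

      A = np.random.default_rng(0).normal(size=(500, 500))
      H = (A + A.T) / 2                              # stand-in "Hamiltonian"
      print("Lanczos lowest:", lanczos(H, 60)[0])
      print("exact lowest:  ", np.linalg.eigvalsh(H)[0])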

  17. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Donnell, B.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb," which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm, including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs, including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. 
COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform-specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or Microsoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5 or higher and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a .25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request. 
The CLIPS 6.0 documentation includes a User's Guide and a three-volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in Microsoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and Microsoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.

  18. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Donnell, B.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb," which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm, including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs, including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. 
COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform-specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or Microsoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5 or higher and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a .25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request. 
The CLIPS 6.0 documentation includes a User's Guide and a three-volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in Microsoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and Microsoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.

  19. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (MACINTOSH VERSION)

    NASA Technical Reports Server (NTRS)

    Riley, G.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb," which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm, including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs, including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. 
COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform-specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or Microsoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5 or higher and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a .25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request. 
The CLIPS 6.0 documentation includes a User's Guide and a three-volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in Microsoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and Microsoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.

  20. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (DEC VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Donnell, B.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb," which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm, including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs, including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. 
COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform-specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or Microsoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5 or higher and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a .25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request. 
The CLIPS 6.0 documentation includes a User's Guide and a three-volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in Microsoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and Microsoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.

  1. PPPC 4 DMν: a Poor Particle Physicist Cookbook for Neutrinos from Dark Matter annihilations in the Sun

    NASA Astrophysics Data System (ADS)

    Baratella, Pietro; Cirelli, Marco; Hektor, Andi; Pata, Joosep; Piibeleht, Morten; Strumia, Alessandro

    2014-03-01

    We provide ingredients and recipes for computing neutrino signals of TeV-scale Dark Matter (DM) annihilations in the Sun. For each annihilation channel and DM mass we present the energy spectra of neutrinos at production, including: state-of-the-art energy losses of primary particles in solar matter, secondary neutrinos, electroweak radiation. We then present the spectra after propagation to the Earth, including (vacuum and matter) flavor oscillations and interactions in solar matter. We also provide a numerical computation of the capture rate of DM particles in the Sun. These results are available in numerical form.

  2. Raster-Based Approach to Solar Pressure Modeling

    NASA Technical Reports Server (NTRS)

    Wright, Theodore W. II

    2013-01-01

    An algorithm has been developed to take advantage of the graphics processing hardware in modern computers to efficiently compute high-fidelity solar pressure forces and torques on spacecraft, taking into account the possibility of self-shading due to the articulation of spacecraft components such as solar arrays. The process is easily extended to compute other results that depend on three-dimensional attitude analysis, such as solar array power generation or free molecular flow drag. The impact of photons upon a spacecraft introduces small forces and moments. The magnitude and direction of the forces depend on the material properties of the spacecraft components being illuminated. The parts of the components being lit depend on the orientation of the craft with respect to the Sun, as well as the gimbal angles for any significant moving external parts (solar arrays, typically). Some components may shield others from the Sun. The purpose of this innovation is to enable high-fidelity computation of solar pressure and power generation effects of illuminated portions of spacecraft, taking self-shading from spacecraft attitude and movable components into account. The key idea in this innovation is to compute results dependent upon complicated geometry by using an image to break the problem into thousands or millions of sub-problems with simple geometry; the results from the simpler problems are then combined to give high-fidelity results for the full geometry. This process is performed by constructing a 3D model of a spacecraft using an appropriate computer language (OpenGL), and running that model on a modern computer's 3D accelerated video processor. This quickly and accurately generates a view of the model (as shown on a computer screen) that takes rotation and articulation of spacecraft components into account. When this view is interpreted as the spacecraft as seen by the Sun, then only the portions of the craft visible in the view are illuminated. The view as shown on the computer screen is composed of up to millions of pixels. Each of those pixels is associated with a small illuminated area of the spacecraft. For each pixel, it is possible to compute its position, angle (surface normal) from the view direction, and the spacecraft material (and therefore, optical coefficients) associated with that area. With this information, the area associated with each pixel can be modeled as a simple flat plate for calculating solar pressure. The vector sum of these individual flat plate models is a high-fidelity approximation of the solar pressure forces and torques on the whole vehicle. In addition to using optical coefficients associated with each spacecraft material to calculate solar pressure, a power generation coefficient is added for computing solar array power generation from the sum of the illuminated areas. Similarly, other area-based calculations, such as free molecular flow drag, are also enabled. Because the model rendering is separated from other calculations, it is relatively easy to add a new model to explore a new vehicle or mission configuration. Adding a new model is performed by adding OpenGL code, but a future version might read a mesh file exported from a computer-aided design (CAD) system to enable very rapid turnaround for new designs.
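
    Once the render has produced positions, normals, and material coefficients for the illuminated pixels, the per-pixel flat-plate accumulation reduces to a short loop. A sketch using a standard flat-plate SRP model, dF = -P dA cos(theta) [(1 - Cs) s + 2 (Cs cos(theta) + Cd/3) n], with s the unit sun vector, n the pixel normal, and Cs/Cd the specular and diffuse coefficients (the rendering step is replaced here by two hand-made "pixels", and the coefficient values are invented):

      import numpy as np

      P = 4.56e-6                                # N/m^2, solar pressure at 1 AU
      s_hat = np.array([0.0, 0.0, 1.0])          # sun direction in body frame

      def srp_totals(positions, normals, areas, Cs, Cd):
          cos_t = normals @ s_hat
          lit = cos_t > 0                        # back-facing pixels are shaded
          F = np.zeros(3)
          T = np.zeros(3)
          for r, n, dA, cs, cd, c in zip(positions[lit], normals[lit],
                                         areas[lit], Cs[lit], Cd[lit],
                                         cos_t[lit]):
              dF = -P * dA * c * ((1 - cs) * s_hat + 2 * (cs * c + cd / 3) * n)
              F += dF                            # force sum over pixels
              T += np.cross(r, dF)               # torque about body origin
          return F, T

      # Two hypothetical pixels of a solar array panel, 1 cm^2 each:
      pos = np.array([[1.0, 0.0, 0.0], [1.01, 0.0, 0.0]])
      nrm = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
      F, T = srp_totals(pos, nrm, np.full(2, 1e-4),
                        np.full(2, 0.2), np.full(2, 0.1))
      print(F, T)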

  3. Data-driven Applications for the Sun-Earth System

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.

    2016-12-01

    Advances in observational and data mining techniques allow extracting information from the large volume of Sun-Earth observational data that can be assimilated into first-principles physical models. However, equations governing Sun-Earth phenomena are typically nonlinear, complex, and high-dimensional. The high computational demand of solving the full governing equations over a large range of scales precludes the use of a variety of useful assimilative tools that rely on applied mathematical and statistical techniques for quantifying uncertainty and predictability. Effective use of such tools requires the development of computationally efficient methods to facilitate fusion of data with models. This presentation will provide an overview of various existing as well as newly developed data-driven techniques adopted from the atmospheric and oceanic sciences that have proved useful for space physics applications, such as a computationally efficient implementation of the Kalman filter in radiation belt modeling, solar wind gap-filling by Singular Spectrum Analysis, and a low-rank procedure for assimilation of low-altitude ionospheric magnetic perturbations into the Lyon-Fedder-Mobarry (LFM) global magnetospheric model. Reduced-order non-Markovian inverse modeling and novel data-adaptive decompositions of Sun-Earth datasets will also be demonstrated.
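
    For reference, the assimilation step behind several of the techniques listed above is the textbook Kalman filter predict/update cycle (a generic linear sketch; the radiation-belt implementation referenced in the abstract is not reproduced here):

      import numpy as np

      def kf_step(x, P, z, F, Q, H, R):
          """One linear Kalman filter cycle: predict with dynamics F/Q,
          then update the state x and covariance P with measurement z."""
          x = F @ x                                    # predict state
          P = F @ P @ F.T + Q                          # predict covariance
          S = H @ P @ H.T + R                          # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
          x = x + K @ (z - H @ x)                      # measurement update
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P

      # 1-D constant-velocity toy: state [position, velocity], position measured.
      F = np.array([[1.0, 1.0], [0.0, 1.0]])
      H = np.array([[1.0, 0.0]])
      x, P = np.zeros(2), np.eye(2)
      x, P = kf_step(x, P, np.array([1.2]), F,
                     1e-3 * np.eye(2), H, np.array([[0.5]]))
      print(x)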

  4. Space Science

    NASA Image and Video Library

    2003-01-01

    These banana-shaped loops are part of a computer-generated snapshot of our sun's magnetic field. The solar magnetic-field lines loop through the sun's corona, break through the sun's surface, and connect regions of magnetic activity, such as sunspots. This image -- part of a magnetic-field study of the sun by NASA's Allen Gary -- shows the outer portion (skins) of interconnecting systems of hot (2 million degrees Kelvin) coronal loops within and between two active magnetic regions on opposite sides of the sun's equator. The diameter of these coronal loops at their foot points is approximately the same size as the Earth's radius (about 6,000 kilometers).

  5. The MITy micro-rover: Sensing, control, and operation

    NASA Technical Reports Server (NTRS)

    Malafeew, Eric; Kaliardos, William

    1994-01-01

    The sensory, control, and operation systems of the 'MITy' Mars micro-rover are discussed. It is shown that the customized sun tracker and laser rangefinder provide internal, autonomous dead reckoning and hazard detection in unstructured environments. The micro-rover consists of three articulated platforms with sensing, processing and payload subsystems connected by a dual spring suspension system. A reactive obstacle avoidance routine makes intelligent use of robot-centered laser information to maneuver through cluttered environments. The hazard sensors include a rangefinder, inclinometers, proximity sensors and collision sensors. A 486/66 laptop computer runs the graphical user interface and programming environment. A graphical window displays robot telemetry in real time and a small TV/VCR is used for real time supervisory control. Guidance, navigation, and control routines work in conjunction with the mapping and obstacle avoidance functions to provide heading and speed commands that maneuver the robot around obstacles and towards the target.

  6. Thin film concentrator panel development

    NASA Technical Reports Server (NTRS)

    Zimmerman, D. K.

    1982-01-01

    The development and testing of a rigid panel concept that utilizes a thin film reflective surface for application to a low-cost point-focusing solar concentrator is discussed. It is shown that a thin film reflective surface is acceptable for use on solar concentrators, including 1500°F applications. Additionally, it is shown that a formed steel sheet substrate is a good choice for concentrator panels. The panel has good optical properties, acceptable forming tolerances, an environmentally resistant substrate and stiffeners, and adaptability to production rates ranging from low-volume to mass production. Computer simulations of the concentrator optics were run using the selected reflector panel design. Experimentally determined values for reflector surface specularity and reflectivity, along with dimensional data, were used in the analysis. The simulations provided intercept factor and net energy into the aperture as a function of aperture size for different surface errors and pointing errors. Point-source and Sun-source optical tests were also performed.
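
    The kind of optics run described above can be sketched as a small Monte Carlo: sample rays over the dish, perturb each reflected ray by Gaussian slope and pointing errors, and count the fraction landing inside the focal-plane aperture (the intercept factor). All geometry and error magnitudes below are invented, and the one-dimensional angular-error model is a simplification of a real ray trace:

      import numpy as np

      rng = np.random.default_rng(2)

      def intercept_factor(f=3.0, rim_radius=2.5, aperture_r=0.05,
                           slope_err_mrad=2.0, point_err_mrad=1.0, n=200_000):
          r = rim_radius * np.sqrt(rng.uniform(size=n))   # uniform over dish area
          # An ideal paraboloid focuses every axial ray to the focus; lump the
          # slope and pointing errors into one Gaussian angular spread of the
          # reflected ray (slope errors count twice on reflection).
          sigma = np.hypot(2 * slope_err_mrad, point_err_mrad) * 1e-3  # radians
          theta = rng.normal(0.0, sigma, n)
          path = np.hypot(f - r**2 / (4 * f), r)          # mirror point to focus
          miss = path * np.tan(theta)                     # lateral miss at focus
          return np.mean(np.abs(miss) <= aperture_r)

      for ap in (0.02, 0.05, 0.10):
          print(f"aperture radius {ap:.2f} m: "
                f"intercept {intercept_factor(aperture_r=ap):.3f}")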

  7. Exploratory research for the development of a computer aided software design environment with the software technology program

    NASA Technical Reports Server (NTRS)

    Hardwick, Charles

    1991-01-01

    Field studies were conducted by MCC to determine areas of research of mutual interest to MCC and JSC. NASA personnel from the Information Systems Directorate and research faculty from UHCL/RICIS visited MCC in Austin, Texas to examine tools and applications under development in the MCC Software Technology Program. MCC personnel presented workshops in hypermedia, design knowledge capture, and design recovery on site at JSC for ISD personnel. The following programs were installed on workstations in the Software Technology Lab, NASA/JSC: (1) GERM (Graphic Entity Relations Modeler); (2) gIBIS (Graphic Issues Based Information System); and (3) DESIRE (Design Recovery tool). These applications were made available to NASA for inspection and evaluation. Programs developed in the MCC Software Technology Program run on the SUN workstation. The programs do not require special configuration, but they will require larger than usual amounts of disk space and RAM to operate properly.

  8. A gamma ray observatory ground attitude error analysis study using the generalized calibration system

    NASA Technical Reports Server (NTRS)

    Ketchum, E.

    1988-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) will be responsible for performing ground attitude determination for Gamma Ray Observatory (GRO) support. The study reported in this paper provides the FDD and the GRO project with ground attitude determination error information and illustrates several uses of the Generalized Calibration System (GCS). GCS, an institutional software tool in the FDD, automates the computation of the expected attitude determination uncertainty that a spacecraft will encounter during its mission. The GRO project is particularly interested in the uncertainty in the attitude determination using Sun sensors and a magnetometer when both star trackers are inoperable. In order to examine the expected attitude errors for GRO, a systematic approach was developed including various parametric studies. The approach identifies pertinent parameters and combines them to form a matrix of test runs in GCS. This matrix formed the basis for this study.

  9. AMP: a science-driven web-based application for the TeraGrid

    NASA Astrophysics Data System (ADS)

    Woitaszek, M.; Metcalfe, T.; Shorrock, I.

    The Asteroseismic Modeling Portal (AMP) provides a web-based interface for astronomers to run and view simulations that derive the properties of Sun-like stars from observations of their pulsation frequencies. In this paper, we describe the architecture and implementation of AMP, highlighting the lightweight design principles and tools used to produce a functional fully-custom web-based science application in less than a year. Targeted as a TeraGrid science gateway, AMP's architecture and implementation are intended to simplify its orchestration of TeraGrid computational resources. AMP's web-based interface was developed as a traditional standalone database-backed web application using the Python-based Django web development framework, allowing us to leverage the Django framework's capabilities while cleanly separating the user interface development from the grid interface development. We have found this combination of tools flexible and effective for rapid gateway development and deployment.
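
    The "standalone database-backed web application" pattern described above is easy to see in miniature. The sketch below uses generic modern Django idioms (settings.configure plus a URL route in a single file); the view name, route, and JSON fields are invented and are not AMP's actual interface:

      import django
      from django.conf import settings
      from django.http import JsonResponse
      from django.urls import path

      settings.configure(ROOT_URLCONF=__name__, DEBUG=True, ALLOWED_HOSTS=["*"])
      django.setup()

      def simulation_status(request, job_id):
          # In a gateway, this handler would query the job table and report
          # the state of the remote computation; here it returns a canned record.
          return JsonResponse({"job": job_id, "state": "RUNNING",
                               "progress": 0.42})

      urlpatterns = [path("simulations/<int:job_id>/", simulation_status)]

      if __name__ == "__main__":
          from django.core.management import execute_from_command_line
          execute_from_command_line(["manage.py", "runserver", "--noreload"])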

  10. Alignments of Three Roman Basilicas with the Sun

    NASA Astrophysics Data System (ADS)

    Sigismondi, Costantino

    2016-06-01

    The astronomical azimuth of a wall can be measured by timing the grazing Sun and computing the ephemerides of the Sun for that place; similarly, windows can cast sunbeams into the building, allowing the alignments to be timed. The azimuths of Saint Peter's Basilica, Saint Paul Outside the Walls, and Saint Pancratius have been measured by timing the Sun at opportune positions.
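    Comparing a timed grazing against computed ephemerides takes only a few lines of Python. The sketch below uses astropy (an assumption; the author's computation method is not specified) with approximate coordinates for Saint Peter's Basilica.

      import astropy.units as u
      from astropy.time import Time
      from astropy.coordinates import EarthLocation, AltAz, get_sun

      # Approximate location of Saint Peter's Basilica (illustrative values).
      rome = EarthLocation(lat=41.902 * u.deg, lon=12.454 * u.deg, height=50 * u.m)
      t = Time("2016-06-21 06:00:00")  # a UTC instant at which the Sun grazes a wall

      # Solar azimuth and altitude at that instant; the wall's astronomical
      # azimuth equals the solar azimuth at the timed grazing moment.
      sun = get_sun(t).transform_to(AltAz(obstime=t, location=rome))
      print(f"azimuth = {sun.az.deg:.3f} deg, altitude = {sun.alt.deg:.3f} deg")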

  11. 7 CFR 989.154 - Marketing policy computations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... announcing a crop year's marketing policy for Natural (sun-dried) Seedless raisins shall be 85,000 natural... 2007-08 crop Natural (sun-dried) Seedless (NS) raisins if the crop estimate is equal to, less than, or...

  12. UNIVERS Product. Phase 1.

    DTIC Science & Technology

    1987-04-27

    foundation for MCAD, ECAD, and CIM applications. The existing product runs under 4.2 BSD UNIX on Sun 3 workstations, and will soon be available...on Digital Equipment's VMS operating system. Potential UNIVERS applications include Government-sponsored ECAD design applications (for example, the

  13. Full-scale transmission testing to evaluate advanced lubricants

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; Decker, Harry J.; Shimski, John T.

    1992-01-01

    Experimental tests were performed on the OH-58A helicopter main rotor transmission in the NASA Lewis 500 hp helicopter transmission test stand. The testing was part of a lubrication program whose objective was to develop, and demonstrate the performance of, a separate lubricant for gearboxes with improved life and load-carrying capacity. The goal was to develop a testing procedure that failed certain transmission components using a MIL-L-23699 based reference oil, and then to run identical tests with improved lubricants to demonstrate improved performance. The tests were directed at parts that failed due to marginal lubrication in Navy field experience. These failures included mast shaft bearing micropitting, sun gear and planet bearing fatigue, and spiral bevel gear scoring. A variety of tests were performed, and over 900 hours of total run time were accumulated. Some success was achieved in developing a testing procedure to produce sun gear and planet bearing fatigue failures. Only marginal success was achieved in producing mast shaft bearing micropitting and spiral bevel gear scoring.

  14. CADNA: a library for estimating round-off error propagation

    NASA Astrophysics Data System (ADS)

    Jézéquel, Fabienne; Chesneaux, Jean-Marie

    2008-06-01

    The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.
    Program summary:
    Program title: CADNA
    Catalogue identifier: AEAT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 53 420
    No. of bytes in distributed program, including test data, etc.: 566 495
    Distribution format: tar.gz
    Programming language: Fortran
    Computer: PC running LINUX with an i686 or an ia64 processor; UNIX workstations including SUN, IBM
    Operating system: LINUX, UNIX
    Classification: 4.14, 6.5, 20
    Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
    Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
    Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
    Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
    References:
    [1] The CADNA library, http://www.lip6.fr/cadna.
    [2] J.-M. Chesneaux, L'Arithmétique Stochastique et le Logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995.
    [3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261.
    [4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
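    A toy illustration of the Discrete Stochastic Arithmetic idea (not CADNA itself, which instruments every operation through its own numerical types): run the computation several times under random last-bit perturbation and estimate the common significant digits from the spread of the results.

      import math
      import random

      def rnd(x, ulp=2.0**-52):
          # Crude stand-in for a random rounding mode: perturb a result
          # by plus or minus roughly one unit in the last place.
          return x * (1.0 + random.choice((-1.0, 1.0)) * ulp)

      def computation():
          # An ill-conditioned cancellation: (1 + h) - 1 with tiny h.
          h = 1.0e-12
          return rnd(rnd(1.0 + h) - 1.0)

      runs = [computation() for _ in range(3)]
      mean = sum(runs) / len(runs)
      sigma = math.sqrt(sum((r - mean) ** 2 for r in runs) / (len(runs) - 1))
      digits = math.log10(abs(mean) / sigma) if sigma else 15.0
      print(f"mean = {mean:.3e}, about {digits:.1f} exact significant digits")

    The cancellation leaves only three to four exact digits even though every individual operation is accurate to machine precision, which is exactly the kind of loss the library is designed to expose.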

  15. Direct computation of orbital sunrise or sunset event parameters

    NASA Technical Reports Server (NTRS)

    Buglia, J. J.

    1986-01-01

    An analytical method is developed for determining the geometrical parameters which are needed to describe the viewing angles of the Sun relative to an orbiting spacecraft when the Sun rises or sets with respect to the spacecraft. These equations are rigorous and are frequently used for parametric studies relative to mission planning and for determining instrument parameters. The text is wholly self-contained in that no external reference to ephemerides or other astronomical tables is needed. Equations are presented which allow the computation of Greenwich sidereal time and right ascension and declination of the Sun generally to within a few seconds of arc, or a few tenths of a second in time.
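    For readers who want a quick stand-in for the paper's equations, the low-precision Astronomical Almanac formulas below give the Sun's right ascension and declination to roughly 0.01 degree and Greenwich mean sidereal time to similar accuracy; the paper's own equations are more precise (a few seconds of arc) and are not reproduced here.

      import math

      def sun_ra_dec_gmst(jd):
          # Low-precision solar RA/dec (degrees) and GMST (hours) for a Julian date.
          n = jd - 2451545.0                                    # days since J2000.0
          L = (280.460 + 0.9856474 * n) % 360.0                 # mean longitude, deg
          g = math.radians((357.528 + 0.9856003 * n) % 360.0)   # mean anomaly
          lam = math.radians(L + 1.915 * math.sin(g) + 0.020 * math.sin(2 * g))
          eps = math.radians(23.439 - 4.0e-7 * n)               # obliquity of the ecliptic
          ra = math.degrees(math.atan2(math.cos(eps) * math.sin(lam),
                                       math.cos(lam))) % 360.0
          dec = math.degrees(math.asin(math.sin(eps) * math.sin(lam)))
          gmst = (18.697374558 + 24.06570982441908 * n) % 24.0
          return ra, dec, gmst

      print(sun_ra_dec_gmst(2451545.0))  # J2000.0 epoch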

  16. Atmospheric neutral points outside of the principal plane. [points of vanished skylight polarization

    NASA Technical Reports Server (NTRS)

    Fraser, R. S.

    1981-01-01

    It is noted that the positions in the sky where the skylight is unpolarized, that is, the neutral points, are in most cases located in the vertical plane through the sun (the principal plane). Points have been observed outside the principal plane (Soret, 1888) when the plane intersected a lake or sea. Here, the neutral points were located at an azimuth of about 15 deg from the sun and near the almucantar through the sun. In order to investigate the effects of water surface and aerosols on the neutral point positions, the positions are computed for models of the earth-atmosphere system that simulate the observational conditions. The computed and measured positions are found to agree well. While previous observations provided only qualitative information on the degree of polarization, it is noted that the computations provide details concerning the polarization parameters.

  17. Results of the Magnetometer Navigation (MAGNAV) Inflight Experiment

    NASA Technical Reports Server (NTRS)

    Thienel, Julie K.; Harman, Richard R.; Bar-Itzhack, Itzhack Y.; Lambertson, Mike

    2004-01-01

    The Magnetometer Navigation (MAGNAV) algorithm is currently running as a flight experiment as part of the Wide Field Infrared Explorer (WIRE) Post-Science Engineering Testbed. Initialization of MAGNAV occurred on September 4, 2003. MAGNAV is designed to autonomously estimate the spacecraft orbit, attitude, and rate using magnetometer and sun sensor data. Since the Earth's magnetic field is a function of time and position, and since time is known quite precisely, the differences between the computed magnetic field and measured magnetic field components, as measured by the magnetometer throughout the entire spacecraft orbit, are a function of the spacecraft trajectory and attitude errors. Therefore, these errors are used to estimate both trajectory and attitude. In addition, the time rate of change of the magnetic field vector is used to estimate the spacecraft rotation rate. The estimation of the attitude and trajectory is augmented with the rate estimation into an Extended Kalman filter blended with a pseudo-linear Kalman filter. Sun sensor data is also used to improve the accuracy and observability of the attitude and rate estimates. This test serves to validate MAGNAV as a single low cost navigation system which utilizes reliable, flight qualified sensors. MAGNAV is intended as a backup algorithm, an initialization algorithm, or possibly a prime navigation algorithm for a mission with coarse requirements. Results from the first six months of operation are presented.
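    The core of the measurement model described above is the residual between the magnetometer reading and the field predicted from the estimated orbit and attitude. The sketch below is an illustrative reconstruction, not the WIRE flight code; the quaternion convention, function names, and placeholder field model are assumptions.

      import numpy as np

      def dcm_from_quaternion(q):
          # Rotation matrix for a unit quaternion q = (w, x, y, z),
          # Hamilton convention, scalar first (an assumed convention).
          w, x, y, z = q
          return np.array([
              [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
              [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
              [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
          ])

      def magnetometer_residual(q_est, r_est, t, b_meas, field_model):
          # Measured minus predicted body-frame field. Nonzero residuals
          # reflect both trajectory errors (through field_model, e.g. a
          # geomagnetic model evaluated at the estimated position) and
          # attitude errors, which is what the filter exploits.
          b_inertial = field_model(r_est, t)
          b_body = dcm_from_quaternion(q_est).T @ b_inertial  # inertial -> body
          return b_meas - b_body

      # Toy usage with an identity attitude and a constant placeholder field.
      q = np.array([1.0, 0.0, 0.0, 0.0])
      model = lambda r, t: np.array([0.0, 0.0, 30000.0])  # nT, placeholder
      print(magnetometer_residual(q, None, 0.0, np.array([0.0, 0.0, 30500.0]), model))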

  18. SUN: A fully automated interferometric test bench aimed at measuring photolithographic-grade lenses with sub-nanometer accuracy

    NASA Astrophysics Data System (ADS)

    Bourgois, R.; Hamy, A. L.; Pourcelot, P.

    2017-10-01

    SUN is a test bench developed by Safran Reosc to measure spherical or aspherical surface errors of litho-grade lenses with sub-nanometer accuracy. SUN provides full-aperture, high-resolution interferometric measurements. Measurements are performed at the center of curvature using a high-precision transmission sphere (TS) and, for aspheres, Computer Generated Holograms (CGH), in order to illuminate the surface at normal incidence. SUN can measure lenses with diameters up to 350 mm and radii of curvature varying from 60 to 3000 mm.

  19. Autonomous Sun-Direction Estimation Using Partially Underdetermined Coarse Sun Sensor Configurations

    NASA Astrophysics Data System (ADS)

    O'Keefe, Stephen A.

    In recent years there has been a significant increase in interest in smaller satellites as lower-cost alternatives to traditional satellites, particularly with the rise in popularity of the CubeSat. Due to stringent mass, size, and often budget constraints, these small satellites rely on making the most of inexpensive hardware components and sensors, such as coarse sun sensors (CSS) and magnetometers. More expensive high-accuracy sun sensors often combine multiple measurements, and use specialized electronics, to deterministically solve for the direction of the Sun. Alternatively, cosine-type CSS output a voltage relative to the input light and are attractive due to their very low cost, simplicity to manufacture, small size, and minimal power consumption. This research investigates using coarse sun sensors for performing robust attitude estimation in order to point a spacecraft at the Sun after deployment from a launch vehicle, or following a system fault. As an alternative to using a large number of sensors, this thesis explores sun-direction estimation techniques with low computational costs that function well with underdetermined sets of CSS. Single-point estimators are coupled with simultaneous nonlinear control to achieve sun-pointing within a small percentage of a single orbit despite the partially underdetermined nature of the sensor suite. Leveraging an extensive analysis of the sensor models involved, sequential filtering techniques are shown to be capable of estimating the sun direction to within a few degrees, with no a priori attitude information and using only CSS, despite the significant noise and biases present in the system. Detailed numerical simulations are used to compare and contrast the performance of the five different estimation techniques, with and without rate gyro measurements, their sensitivity to rate gyro accuracy, and their computation time. One of the key concerns with reducing the number of CSS is sensor degradation and failure. In this thesis, a Modified Rodrigues Parameter based CSS calibration filter suitable for autonomous on-board operation is developed. The sensitivity of this method's accuracy to the available Earth albedo data is evaluated and compared to the required computational effort. The calibration filter is expanded to perform sensor fault detection, and promising results are shown for reduced-resolution albedo models. All of the methods discussed provide alternative attitude determination and control system algorithms for small satellite missions looking to use inexpensive, small sensors due to size, power, or budget limitations.
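    For a cosine-type sensor, each illuminated CSS with unit normal n_i ideally outputs v_i = C max(0, n_i . s) for sun direction s. A minimal single-point estimator along the lines discussed above (reconstructed here as an illustration, not taken from the thesis) stacks the lit sensors and solves a least-squares problem.

      import numpy as np

      def estimate_sun_direction(normals, voltages, lit_threshold=0.05):
          # Least-squares sun direction from coarse sun sensor voltages.
          # normals  : (N, 3) array of unit sensor normals (body frame)
          # voltages : (N,) array of CSS outputs, scaled so that a sensor
          #            looking straight at the Sun reads about 1.0
          lit = voltages > lit_threshold   # only lit sensors obey the cosine model
          if np.count_nonzero(lit) < 3:
              raise ValueError("fewer than 3 lit sensors: geometry is underdetermined")
          s, *_ = np.linalg.lstsq(normals[lit], voltages[lit], rcond=None)
          return s / np.linalg.norm(s)

      # Example with a hypothetical four-sensor layout and noiseless readings.
      n = np.vstack([np.eye(3), -np.ones(3) / np.sqrt(3)])
      true_s = np.array([0.6, 0.48, 0.64])          # unit sun direction
      v = np.clip(n @ true_s, 0.0, None)            # cosine response, clipped at zero
      print(estimate_sun_direction(n, v))           # recovers true_s

    This simple estimator ignores Earth albedo and sensor biases, which is precisely why the thesis pairs it with calibration and sequential filtering.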

  20. Bending of light in quantum gravity.

    PubMed

    Bjerrum-Bohr, N E J; Donoghue, John F; Holstein, Barry R; Planté, Ludovic; Vanhove, Pierre

    2015-02-13

    We consider the scattering of lightlike matter in the presence of a heavy scalar object (such as the Sun or a Schwarzschild black hole). By treating general relativity as an effective field theory we directly compute the nonanalytic components of the one-loop gravitational amplitude for the scattering of massless scalars or photons from an external massive scalar field. These results allow a semiclassical computation of the bending angle for light rays grazing the Sun, including long-range ℏ contributions. We discuss implications of this computation, in particular, the violation of some classical formulations of the equivalence principle.
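    For reference, the classical leading-order deflection that the paper's quantum corrections modify is the standard general-relativity result (a well-known formula, not taken from the paper's one-loop expressions):

      \theta = \frac{4 G M}{c^{2} b} \approx 1.75'' \quad \text{for } M = M_\odot,\; b = R_\odot ,

    and the nonanalytic one-loop terms add small long-range corrections, proportional to powers of G\hbar, to this angle.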

  1. The Solar Swan Dive.

    ERIC Educational Resources Information Center

    Dilsaver, John S.; Siler, Joseph R.

    1991-01-01

    Solutions are presented for the problem of finding the time necessary for an object to fall into the Sun from the average Earth-Sun distance. Both calculus- and noncalculus-based solutions are given. A sample computer solution is included. (CW)
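    The calculus-free solution treats the fall as half of a degenerate orbit with semi-major axis a/2 and applies Kepler's third law. A few lines of Python (an illustration, not the article's sample program) reproduce the classic answer of about 65 days.

      # Kepler's third law: T' = T * (a'/a)**1.5 with a' = a/2 for the
      # degenerate "orbit" that falls straight into the Sun; the fall
      # itself is half of that orbit.
      T_earth = 365.25                         # days in one orbit at 1 AU
      t_fall = 0.5 * T_earth * 0.5 ** 1.5
      print(f"fall time ~ {t_fall:.1f} days")  # about 64.6 days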

  2. CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS

    NASA Technical Reports Server (NTRS)

    Iverson, D. L.

    1994-01-01

    Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection, including cycles or loops, between nodes in the graph. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets will be output as a text file. CUTSETS includes a utility program that will convert the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860), available from COSMIC, are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium. Sun and SunOS are trademarks of Sun Microsystems, Inc. 
DEC, DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc.
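    The recursive top-down parse described above can be sketched compactly. This toy illustrates the general technique (it is not the CUTSETS code): child cut sets are combined with a union over the cross product at AND gates, concatenated at OR gates, and only minimal sets are kept.

      from itertools import product

      def minimal_cut_sets(node, tree):
          # tree maps a gate name to ("AND"|"OR", [children]);
          # any name not in tree is a basic failure event.
          if node not in tree:
              return {frozenset([node])}
          kind, children = tree[node]
          child_sets = [minimal_cut_sets(c, tree) for c in children]
          if kind == "OR":
              sets = set().union(*child_sets)
          else:  # AND: one cut set from each child must occur together
              sets = {frozenset().union(*combo) for combo in product(*child_sets)}
          # minimality: drop any set that strictly contains another
          return {s for s in sets if not any(o < s for o in sets)}

      tree = {"TOP": ("AND", ["G1", "G2"]),
              "G1": ("OR", ["A", "B"]),
              "G2": ("OR", ["B", "C"])}
      print(minimal_cut_sets("TOP", tree))   # {B} and {A, C}

    Here the single failure B is a potential single point failure of the modeled system, which is exactly what such codes are used to flag.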

  3. PMARC_12 - PANEL METHOD AMES RESEARCH CENTER, VERSION 12

    NASA Technical Reports Server (NTRS)

    Ashby, D. L.

    1994-01-01

    Panel method computer programs are software tools of moderate cost used for solving a wide range of engineering problems. The panel code PMARC_12 (Panel Method Ames Research Center, version 12) can compute the potential flow field around complex three-dimensional bodies such as complete aircraft models. PMARC_12 is a well-documented, highly structured code with an open architecture that facilitates modifications and the addition of new features. Adjustable arrays are used throughout the code, with dimensioning controlled by a set of parameter statements contained in an include file; thus, the size of the code (i.e. the number of panels that it can handle) can be changed very quickly. This allows the user to tailor PMARC_12 to specific problems and computer hardware constraints. In addition, PMARC_12 can be configured (through one of the parameter statements in the include file) so that the code's iterative matrix solver is run entirely in RAM, rather than reading a large matrix from disk at each iteration. This significantly increases the execution speed of the code, but it requires a large amount of RAM memory. PMARC_12 contains several advanced features, including internal flow modeling, a time-stepping wake model for simulating either steady or unsteady (including oscillatory) motions, a Trefftz plane induced drag computation, off-body and on-body streamline computations, and computation of boundary layer parameters using a two-dimensional integral boundary layer method along surface streamlines. In a panel method, the surface of the body over which the flow field is to be computed is represented by a set of panels. Singularities are distributed on the panels to perturb the flow field around the body surfaces. PMARC_12 uses constant strength source and doublet distributions over each panel, thus making it a low order panel method. Higher order panel methods allow the singularity strength to vary linearly or quadratically across each panel. Experience has shown that low order panel methods can provide nearly the same accuracy as higher order methods over a wide range of cases with significantly reduced computation times; hence, the low order formulation was adopted for PMARC_12. The flow problem is solved by modeling the body as a closed surface dividing space into two regions: the region external to the surface in which an unknown velocity potential exists representing the flow field of interest, and the region internal to the surface in which a known velocity potential (representing a fictitious flow) is prescribed as a boundary condition. Both velocity potentials are required to satisfy Laplace's equation. A surface integral equation for the unknown potential external to the surface can be written by applying Green's Theorem to the external region. Using the internal potential and zero flow through the surface as boundary conditions, the unknown potential external to the surface can be solved for. When the internal flow option, which allows the analysis of closed ducts, wind tunnels, and similar internal flow problems, is selected, the geometry is modeled such that the flow field of interest is inside the geometry and the fictitious flow is outside the geometry. Items such as wings, struts, or aircraft models can be included in the internal flow problem. The time-stepping wake model gives PMARC_12 the ability to model both steady and unsteady flow problems. The wake is convected downstream from the wake-separation line by the local velocity field. 
With each time step, a new row of wake panels is added to the wake at the wake-separation line. Time stepping can start from time t=0 (no initial wake) or from time t=t0 (an initial wake is specified). A wide range of motions can be prescribed, including constant rates of translation, constant rate of rotation about an arbitrary axis, oscillatory translation, and oscillatory rotation about any of the three coordinate axes. Investigators interested in a visual representation of the phenomenon they are studying with PMARC_12 may want to consider obtaining the program GVS (ARC-13361), the General Visualization System. GVS is a Silicon Graphics IRIS program which was created for the purpose of supporting the scientific visualization needs of PMARC_12. GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input. This makes the code fairly machine independent. A compiler which supports the NAMELIST extension is required. The amount of free disk space and RAM memory required for PMARC_12 will vary depending on how the code is dimensioned using the parameter statements in the include file. The recommended minimum requirements are 20Mb of free disk space and 4Mb of RAM. PMARC_12 has been successfully implemented on a Macintosh II running System 6.0.7 or 7.0 (using MPW/Language Systems Fortran 3.0), a Sun SLC running SunOS 4.1.1, an HP 720 running HP-UX 8.07, an SGI IRIS running IRIX 4.0 (it will not run under IRIX 3.x.x without modifications), an IBM RS/6000 running AIX, a DECstation 3100 running ULTRIX, and a CRAY-YMP running UNICOS 6.0 or later. Due to its memory requirements, this program does not readily lend itself to implementation on MS-DOS based machines. The standard distribution medium for PMARC_12 is a set of three 3.5 inch 800K Macintosh format diskettes and one 3.5 inch 1.44Mb Macintosh format diskette which contains an electronic copy of the documentation in MS Word 5.0 format for the Macintosh. Alternate distribution media and formats are available upon request, but these will not include the electronic version of the document. No executables are included on the distribution media. This program is an update to PMARC version 11, which was released in 1989. PMARC_12 was released in 1993. It is available only for use by United States citizens.
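    Schematically, the surface integral equation referred to above, obtained by applying Green's theorem with constant source strength sigma and doublet strength mu on each panel, has the familiar low-order panel-method form; sign conventions differ between references, so this is a sketch rather than PMARC_12's exact statement:

      \phi(P) = \phi_\infty(P)
              + \frac{1}{4\pi} \oint_S \left[ \sigma \, \frac{1}{r}
              - \mu \, \mathbf{n} \cdot \nabla\!\left(\frac{1}{r}\right) \right] dS ,

    where r is the distance from the surface element to the field point P. Imposing the prescribed internal potential and the zero-normal-flow boundary condition turns this into a linear system for the unknown panel strengths, which the code's iterative matrix solver handles.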

  4. NETS - A NEURAL NETWORK DEVELOPMENT TOOL, VERSION 3.0 (MACINTOSH VERSION)

    NASA Technical Reports Server (NTRS)

    Phillips, T. A.

    1994-01-01

    NETS, A Tool for the Development and Evaluation of Neural Networks, provides a simulation of Neural Network algorithms plus an environment for developing such algorithms. Neural Networks are a class of systems modeled after the human brain. Artificial Neural Networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to brain neurons. Problems which involve pattern matching readily fit the class of problems which NETS is designed to solve. NETS uses the back propagation learning method for all of the networks which it creates. The nodes of a network are usually grouped together into clumps called layers. Generally, a network will have an input layer through which the various environment stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to some features of the problem being solved. Other layers, which form intermediate stops between the input and output layers, are called hidden layers. NETS allows the user to customize the patterns of connections between layers of a network. NETS also provides features for saving the weight values of a network during the learning process, which allows for more precise control over the learning process. NETS is an interpreter. Its method of execution is the familiar "read-evaluate-print" loop found in interpreted languages such as BASIC and LISP. The user is presented with a prompt which is the simulator's way of asking for input. After a command is issued, NETS will attempt to evaluate the command, which may produce more prompts requesting specific information or an error if the command is not understood. The typical process involved when using NETS consists of translating the problem into a format which uses input/output pairs, designing a network configuration for the problem, and finally training the network with input/output pairs until an acceptable error is reached. NETS allows the user to generate C code to implement the network loaded into the system. This permits the placement of networks as components, or subroutines, in other systems. In short, once a network performs satisfactorily, the Generate C Code option provides the means for creating a program separate from NETS to run the network. Other features: files may be stored in binary or ASCII format; multiple input propagation is permitted; bias values may be included; capability to scale data without writing scaling code; quick interactive testing of network from the main menu; and several options that allow the user to manipulate learning efficiency. NETS is written in ANSI standard C language to be machine independent. The Macintosh version (MSC-22108) includes code for both a graphical user interface version and a command line interface version. The machine independent version (MSC-21588) only includes code for the command line interface version of NETS 3.0. The Macintosh version requires a Macintosh II series computer and has been successfully implemented under System 7. Four executables are included on these diskettes, two for floating point operations and two for integer arithmetic. It requires Think C 5.0 to compile. A minimum of 1Mb of RAM is required for execution. Sample input files and executables for both the command line version and the Macintosh user interface version are provided on the distribution medium. The Macintosh version is available on a set of three 3.5 inch 800K Macintosh format diskettes. 
The machine independent version has been successfully implemented on an IBM PC series compatible running MS-DOS, a DEC VAX running VMS, a SunIPC running SunOS, and a CRAY Y-MP running UNICOS. Two executables for the IBM PC version are included on the MS-DOS distribution media, one compiled for floating point operations and one for integer arithmetic. The machine independent version is available on a set of three 5.25 inch 360K MS-DOS format diskettes (standard distribution medium) or a .25 inch streaming magnetic tape cartridge in UNIX tar format. NETS was developed in 1989 and updated in 1992. IBM PC is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. SunIPC and SunOS are trademarks of Sun Microsystems, Inc. CRAY Y-MP and UNICOS are trademarks of Cray Research, Inc.
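    As a minimal illustration of the back propagation learning method that NETS implements (a generic NumPy sketch, unrelated to NETS's own C implementation), the following trains a one-hidden-layer network on the XOR pattern-matching problem:

      import numpy as np

      rng = np.random.default_rng(0)
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([[0], [1], [1], [0]], dtype=float)

      W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden layer
      W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output layer
      sig = lambda z: 1.0 / (1.0 + np.exp(-z))

      for _ in range(10000):
          h = sig(X @ W1 + b1)                  # forward pass
          out = sig(h @ W2 + b2)
          d_out = (out - y) * out * (1 - out)   # backward pass (squared error)
          d_h = (d_out @ W2.T) * h * (1 - h)
          W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
          W1 -= 0.5 * X.T @ d_h;  b1 -= 0.5 * d_h.sum(0)

      print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]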

  5. NETS - A NEURAL NETWORK DEVELOPMENT TOOL, VERSION 3.0 (MACHINE INDEPENDENT VERSION)

    NASA Technical Reports Server (NTRS)

    Baffes, P. T.

    1994-01-01

    NETS, A Tool for the Development and Evaluation of Neural Networks, provides a simulation of Neural Network algorithms plus an environment for developing such algorithms. Neural Networks are a class of systems modeled after the human brain. Artificial Neural Networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to brain neurons. Problems which involve pattern matching readily fit the class of problems which NETS is designed to solve. NETS uses the back propagation learning method for all of the networks which it creates. The nodes of a network are usually grouped together into clumps called layers. Generally, a network will have an input layer through which the various environment stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to some features of the problem being solved. Other layers, which form intermediate stops between the input and output layers, are called hidden layers. NETS allows the user to customize the patterns of connections between layers of a network. NETS also provides features for saving the weight values of a network during the learning process, which allows for more precise control over the learning process. NETS is an interpreter. Its method of execution is the familiar "read-evaluate-print" loop found in interpreted languages such as BASIC and LISP. The user is presented with a prompt which is the simulator's way of asking for input. After a command is issued, NETS will attempt to evaluate the command, which may produce more prompts requesting specific information or an error if the command is not understood. The typical process involved when using NETS consists of translating the problem into a format which uses input/output pairs, designing a network configuration for the problem, and finally training the network with input/output pairs until an acceptable error is reached. NETS allows the user to generate C code to implement the network loaded into the system. This permits the placement of networks as components, or subroutines, in other systems. In short, once a network performs satisfactorily, the Generate C Code option provides the means for creating a program separate from NETS to run the network. Other features: files may be stored in binary or ASCII format; multiple input propagation is permitted; bias values may be included; capability to scale data without writing scaling code; quick interactive testing of network from the main menu; and several options that allow the user to manipulate learning efficiency. NETS is written in ANSI standard C language to be machine independent. The Macintosh version (MSC-22108) includes code for both a graphical user interface version and a command line interface version. The machine independent version (MSC-21588) only includes code for the command line interface version of NETS 3.0. The Macintosh version requires a Macintosh II series computer and has been successfully implemented under System 7. Four executables are included on these diskettes, two for floating point operations and two for integer arithmetic. It requires Think C 5.0 to compile. A minimum of 1Mb of RAM is required for execution. Sample input files and executables for both the command line version and the Macintosh user interface version are provided on the distribution medium. The Macintosh version is available on a set of three 3.5 inch 800K Macintosh format diskettes. 
The machine independent version has been successfully implemented on an IBM PC series compatible running MS-DOS, a DEC VAX running VMS, a SunIPC running SunOS, and a CRAY Y-MP running UNICOS. Two executables for the IBM PC version are included on the MS-DOS distribution media, one compiled for floating point operations and one for integer arithmetic. The machine independent version is available on a set of three 5.25 inch 360K MS-DOS format diskettes (standard distribution medium) or a .25 inch streaming magnetic tape cartridge in UNIX tar format. NETS was developed in 1989 and updated in 1992. IBM PC is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. SunIPC and SunOS are trademarks of Sun Microsystems, Inc. CRAY Y-MP and UNICOS are trademarks of Cray Research, Inc.

  6. Analysis of the flight dynamics of the Solar Maximum Mission (SMM) off-sun scientific pointing

    NASA Technical Reports Server (NTRS)

    Pitone, D. S.; Klein, J. R.

    1989-01-01

    Algorithms are presented which were created and implemented by the Goddard Space Flight Center's (GSFC's) Solar Maximum Mission (SMM) attitude operations team to support large-angle spacecraft pointing at scientific objectives. The mission objective of the post-repair SMM satellite was to study solar phenomena. However, because the scientific instruments, such as the Coronagraph/Polarimeter (CP) and the Hard X-ray Burst Spectrometer (HXRBS), were able to view objects other than the Sun, attitude operations support for attitude pointing at large angles from the nominal solar-pointing attitudes was required. Subsequently, attitude support for SMM was provided for scientific objectives such as Comet Halley, Supernova 1987A, Cygnus X-1, and the Crab Nebula. In addition, the analysis was extended to include the reverse problem, computing the right ascension and declination of a body given the off-Sun angles. This analysis led to the computation of the orbits of seven new solar comets seen in the field-of-view (FOV) of the CP. The activities necessary to meet these large-angle attitude-pointing sequences, such as slew sequence planning, viewing-period prediction, and tracking-bias computation are described. Analysis is presented for the computation of maneuvers and pointing parameters relative to the SMM-unique, Sun-centered reference frame. Finally, science data and independent attitude solutions are used to evaluate the large-angle pointing performance.

  7. Analysis of the flight dynamics of the Solar Maximum Mission (SMM) off-sun scientific pointing

    NASA Technical Reports Server (NTRS)

    Pitone, D. S.; Klein, J. R.; Twambly, B. J.

    1990-01-01

    Algorithms are presented which were created and implemented by the Goddard Space Flight Center's (GSFC's) Solar Maximum Mission (SMM) attitude operations team to support large-angle spacecraft pointing at scientific objectives. The mission objective of the post-repair SMM satellite was to study solar phenomena. However, because the scientific instruments, such as the Coronagraph/Polarimeter (CP) and the Hard X-ray Burst Spectrometer (HXRBS), were able to view objects other than the Sun, attitude operations support for attitude pointing at large angles from the nominal solar-pointing attitudes was required. Subsequently, attitude support for SMM was provided for scientific objectives such as Comet Halley, Supernova 1987A, Cygnus X-1, and the Crab Nebula. In addition, the analysis was extended to include the reverse problem, computing the right ascension and declination of a body given the off-Sun angles. This analysis led to the computation of the orbits of seven new solar comets seen in the field-of-view (FOV) of the CP. The activities necessary to meet these large-angle attitude-pointing sequences, such as slew sequence planning, viewing-period prediction, and tracking-bias computation are described. Analysis is presented for the computation of maneuvers and pointing parameters relative to the SMM-unique, Sun-centered reference frame. Finally, science data and independent attitude solutions are used to evaluate the large-angle pointing performance.

  8. Calipso's Mission Design: Sun-Glint Avoidance Strategies

    NASA Technical Reports Server (NTRS)

    Mailhe, Laurie M.; Schiff, Conrad; Stadler, John H.

    2004-01-01

    CALIPSO will fly in formation with the Aqua spacecraft to obtain a coincident image of a portion of the Aqua/MODIS swath. Since MODIS pixels suffering sun-glint degradation are not processed, it is essential that CALIPSO co-image only the glint-free portion of the MODIS instrument swath. This paper presents sun-glint avoidance strategies for the CALIPSO mission. First, we introduce the Aqua sun-glint geometry and its relation to the CALIPSO-Aqua formation flying parameters. Then, we detail our implementation of the computation and perform a cross-track trade-space analysis. Finally, we analyze the impact of the sun-glint avoidance strategy on the spacecraft power and delta-V budget over the mission lifetime.
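    The geometry at the heart of such a computation can be sketched with unit vectors: reflect the Sun vector about the local vertical and measure its angle to the view vector. This is a generic formulation for a flat water surface, not CALIPSO's flight implementation.

      import numpy as np

      def glint_angle(sun_zen, sun_az, view_zen, view_az):
          # Angle between the view ray and the specular reflection of the
          # sun off a flat water surface (0 deg = perfect glint); degrees in.
          def unit(zen, az):
              z, a = np.radians(zen), np.radians(az)
              return np.array([np.sin(z)*np.cos(a), np.sin(z)*np.sin(a), np.cos(z)])
          s = unit(sun_zen, sun_az)           # direction toward the sun
          v = unit(view_zen, view_az)         # direction toward the sensor
          r = np.array([-s[0], -s[1], s[2]])  # sun ray reflected about the vertical
          return np.degrees(np.arccos(np.clip(np.dot(r, v), -1.0, 1.0)))

      print(glint_angle(30, 180, 30, 0))  # 0.0: the sensor sits in perfect glint

    Pixels whose glint angle falls below some threshold would be the ones a mission plans to avoid co-imaging.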

  9. Overview of the land analysis system (LAS)

    USGS Publications Warehouse

    Quirk, Bruce K.; Olseson, Lyndon R.

    1987-01-01

    The Land Analysis System (LAS) is a fully integrated digital analysis system designed to support remote sensing, image processing, and geographic information systems research. LAS is being developed through a cooperative effort between the National Aeronautics and Space Administration Goddard Space Flight Center and the U.S. Geological Survey Earth Resources Observation Systems (EROS) Data Center. LAS has over 275 analysis modules capable of performing input and output, radiometric correction, geometric registration, signal processing, logical operations, data transformation, classification, spatial analysis, nominal filtering, conversion between raster and vector data types, and display manipulation of image and ancillary data. LAS is currently implemented using the Transportable Applications Executive (TAE). While TAE was designed primarily to be transportable, it still provides the necessary components for a standard user interface, terminal handling, input and output services, display management, and intersystem communications. With TAE the analyst uses the same interface to the processing modules regardless of the host computer or operating system. LAS was originally implemented at EROS on a Digital Equipment Corporation computer system under the Virtual Memory System (VMS) operating system with DeAnza displays and is presently being converted to run on a Gould PowerNode and Sun workstation under the Berkeley Software Distribution (BSD) UNIX operating system.

  10. Comparison of Monte Carlo simulated and measured performance parameters of miniPET scanner

    NASA Astrophysics Data System (ADS)

    Kis, S. A.; Emri, M.; Opposits, G.; Bükki, T.; Valastyán, I.; Hegyesi, Gy.; Imrek, J.; Kalinka, G.; Molnár, J.; Novák, D.; Végh, J.; Kerek, A.; Trón, L.; Balkay, L.

    2007-02-01

    In vivo imaging of small laboratory animals is a valuable tool in the development of new drugs. For this purpose, miniPET, an easily scalable, modular small-animal PET camera, has been developed at our institutes. The system has four modules, which makes it possible to rotate the whole detector system around the axis of the field of view. Data collection and image reconstruction are performed using a data acquisition (DAQ) module with Ethernet communication facility and a computer cluster of commercial PCs. Performance tests were carried out to determine system parameters, such as energy resolution, sensitivity and noise equivalent count rate. A modified GEANT4-based GATE Monte Carlo software package was used to simulate PET data analogous to those of the performance measurements. GATE was run on a Linux cluster of 10 processors (64-bit, 3.0 GHz Xeon) controlled by the Sun Grid Engine. The application of this special computer cluster reduced the time necessary for the simulations by an order of magnitude. The simulated energy spectra, maximum rate of true coincidences and sensitivity of the camera were in good agreement with the measured parameters.

  11. A synoptic study of Sudden Phase Anomalies (SPA's) affecting VLF navigation and timing

    NASA Technical Reports Server (NTRS)

    Swanson, E. R.; Kugel, C. P.

    1973-01-01

    Sudden phase anomalies (SPA's) observed on VLF recordings are related to sudden ionospheric disturbances due to solar flares. Results are presented for SPA statistics on 500 events observed in New York during the ten-year period 1961 to 1970. Signals were at 10.2 kHz and 13.6 kHz, emitted from the OMEGA transmitters in Hawaii and Trinidad. A relationship between SPA frequency and sunspot number was observed. For a sunspot number near 85, about one SPA per day will be observed somewhere in the world. SPA activity nearly vanishes during periods of low sunspot number. During years of high solar activity, phase perturbations observed near noon are dominated by SPA effects beyond the 95th percentile. The SPA's can be represented by a rapid phase run-off which is approximately linear in time, peaking in about 6 minutes, and followed by a linear recovery. Typical duration is 49 minutes.
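    The piecewise-linear SPA signature described above (a roughly 6-minute linear run-off followed by a linear recovery, about 49 minutes in total) is simple to model. The sketch below is an illustration of that description, not the authors' fitting code.

      def spa_phase(t, peak=1.0, t_peak=6.0, t_end=49.0):
          # Idealized SPA phase perturbation at time t (minutes after onset).
          if t <= 0.0 or t >= t_end:
              return 0.0
          if t <= t_peak:                       # rapid, approximately linear run-off
              return peak * t / t_peak
          return peak * (t_end - t) / (t_end - t_peak)   # linear recovery

      for t in (0, 3, 6, 20, 49):
          print(t, round(spa_phase(t), 2))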

  12. AUTOCLASS III - AUTOMATIC CLASS DISCOVERY FROM DATA

    NASA Technical Reports Server (NTRS)

    Cheeseman, P. C.

    1994-01-01

    The program AUTOCLASS III, Automatic Class Discovery from Data, uses Bayesian probability theory to provide a simple and extensible approach to problems such as classification and general mixture separation. Its theoretical basis is free from ad hoc quantities, and in particular free of any measures which alter the data to suit the needs of the program. As a result, the elementary classification model used lends itself easily to extensions. The standard approach to classification in much of artificial intelligence and statistical pattern recognition research involves partitioning of the data into separate subsets, known as classes. AUTOCLASS III uses the Bayesian approach in which classes are described by probability distributions over the attributes of the objects, specified by a model function and its parameters. The calculation of the probability of each object's membership in each class provides a more intuitive classification than absolute partitioning techniques. AUTOCLASS III is applicable to most data sets consisting of independent instances, each described by a fixed length vector of attribute values. An attribute value may be a number, one of a set of attribute specific symbols, or omitted. The user specifies a class probability distribution function by associating attribute sets with supplied likelihood function terms. AUTOCLASS then searches in the space of class numbers and parameters for the maximally probable combination. It returns the set of class probability function parameters, and the class membership probabilities for each data instance. AUTOCLASS III is written in Common Lisp, and is designed to be platform independent. This program has been successfully run on Symbolics and Explorer Lisp machines. It has been successfully used with the following implementations of Common LISP on the Sun: Franz Allegro CL, Lucid Common Lisp, and Austin Kyoto Common Lisp and similar UNIX platforms; under the Lucid Common Lisp implementations on VAX/VMS v5.4, VAX/Ultrix v4.1, and MIPS/Ultrix v4, rev. 179; and on the Macintosh personal computer. The minimum Macintosh required is the IIci. This program will not run under CMU Common Lisp or VAX/VMS DEC Common Lisp. A minimum of 8Mb of RAM is required for Macintosh platforms and 16Mb for workstations. The standard distribution medium for this program is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format and a 3.5 inch diskette in Macintosh format. An electronic copy of the documentation is included on the distribution medium. AUTOCLASS was developed between March 1988 and March 1992. It was initially released in May 1991. Sun is a trademark of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. DEC, VAX, VMS, and ULTRIX are trademarks of Digital Equipment Corporation. Macintosh is a trademark of Apple Computer, Inc. Allegro CL is a registered trademark of Franz, Inc.
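    The membership calculation described above has the usual Bayesian mixture form, p(class c | x) proportional to pi_c p(x | theta_c). Below is a minimal numeric sketch for one-dimensional Gaussian classes; it illustrates the idea only and is not AUTOCLASS's model functions or Lisp code.

      import math

      def memberships(x, classes):
          # classes: list of (weight, mean, sigma); returns normalized posteriors.
          def gauss(x, mu, s):
              return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
          joint = [w * gauss(x, mu, s) for (w, mu, s) in classes]
          total = sum(joint)
          return [j / total for j in joint]

      # An observation between two hypothetical classes gets soft membership
      # in both, rather than being absolutely assigned to one partition.
      print(memberships(1.2, [(0.6, 0.0, 1.0), (0.4, 3.0, 1.0)]))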

  13. The Weekly Fab Five: Things You Should Do Every Week To Keep Your Computer Running in Tip-Top Shape.

    ERIC Educational Resources Information Center

    Crispen, Patrick

    2001-01-01

    Describes five steps that school librarians should follow every week to keep their computers running at top efficiency. Explains how to update virus definitions; run Windows update; run ScanDisk to repair errors on the hard drive; run a disk defragmenter; and backup all data. (LRW)

  14. Calculation of the twilight visibility function of near-sun objects

    NASA Technical Reports Server (NTRS)

    Kastner, S. O.

    1976-01-01

    The visibility function of near-sun objects during twilight, defined here as the magnitude difference between the excess brightness of a given object and that of the background sky, is obtained from a general calculation which considers the twilight sky background, atmospheric extinction, and night glow. Visibility curves are computed for a number of cases in which observations have been recorded, particularly that of comet Kohoutek. For this object, the computed visibility maxima agree well in time with the reported times of observation.
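    In magnitude terms, the definition quoted above amounts to (a restatement of the abstract's definition, not an equation reproduced from the paper):

      V = -2.5 \log_{10}\!\left( \frac{B_{\text{excess}}}{B_{\text{sky}}} \right) ,

    where B_excess is the excess brightness of the object and B_sky the brightness of the twilight background sky along the same line of sight.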

  15. Solar Position Model for use in DIORAMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werley, Kenneth Alan

    2016-03-01

    The DIORAMA code requires the solar position relative to earth in order to compute GPS satellite orientation. The present document describes two functions that compute the unit vector from either the center of the Earth to the Sun or from any observer’s position to the Sun at some specified time. Another function determines if a satellite lies within the Earth’s shadow umbra. Similarly, functions determine the position of the moon and whether a satellite lies within the Moon’s shadow umbra.
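    A cylindrical-shadow simplification of the umbra test described above takes a single dot product. The document's functions model a true umbra cone, so this sketch, with its names and the cylinder approximation, is an illustration only.

      import numpy as np

      R_EARTH = 6378.137  # km, equatorial radius

      def in_earth_shadow(r_sat, sun_hat):
          # r_sat: satellite position from Earth's center, km;
          # sun_hat: unit vector from Earth's center toward the Sun.
          along = np.dot(r_sat, sun_hat)
          if along >= 0.0:
              return False   # on the sunlit side of the Earth
          perp = np.linalg.norm(r_sat - along * sun_hat)
          return perp < R_EARTH   # inside the shadow cylinder

      # A satellite directly behind the Earth relative to the Sun is shadowed.
      print(in_earth_shadow(np.array([-7000.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])))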

  16. Recovering from "amnesia" brought about by radiation. Verification of the "Over the air" (OTA) application software update mechanism On-Board Solar Orbiter's Energetic Particle Detector

    NASA Astrophysics Data System (ADS)

    Da Silva, Antonio; Sánchez Prieto, Sebastián; Rodriguez Polo, Oscar; Parra Espada, Pablo

    Computer memories are not supposed to forget, but they do. Because of the proximity of the Sun, from the Solar Orbiter boot software perspective it is mandatory to watch for permanent memory errors resulting from single-event latch-up (SEL) failures in application binaries stored in EEPROM and in their SDRAM deployment areas. In this situation, the last line of defense established by FDIR mechanisms is the capability of the boot software to provide an accurate report of the memory damage and to perform an application software update that avoids the harmed locations by flashing the EEPROM with a new binary. This paper describes the verification of the OTA EEPROM firmware update procedure of the boot software that will run in the Instrument Control Unit (ICU) of the Energetic Particle Detector (EPD) on board Solar Orbiter. Since the maximum number of rewrites on the real EEPROM is limited, and permanent memory faults cannot be conveniently emulated in real hardware, the verification was accomplished by the use of a LEON2 virtual platform (Leon2ViP) with fault-injection capabilities and real SpaceWire interfaces, developed by the Space Research Group (SRG) of the University of Alcalá. This way it is possible to run exactly the same target binary as would run on the real ICU platform. Furthermore, the use of this virtual hardware-in-the-loop (VHIL) approach makes it possible to communicate with Electrical Ground Support Equipment (EGSE) through real SpaceWire interfaces in an agile, controlled and deterministic environment.

  17. Diurnal Motion of the Sun as Seen From Mercury

    ERIC Educational Resources Information Center

    Turner, Lawrence E., Jr.

    1978-01-01

    Two methods are described for the quantitative description of the motion of the sun as observed from Mercury. A listing of a computer subroutine is included. The combination of slow rotation and high eccentricity of Mercury's orbit makes this problem an interesting one. (BB)

  18. Castelli, Benedetto (1578-1643)

    NASA Astrophysics Data System (ADS)

    Murdin, P.

    2000-11-01

    Mathematician, born in Brescia, Italy; Benedictine monk; professor at Padua. GALILEO's closest scientific collaborator, he defended and edited Galileo and helped his sunspot research, inventing the method of projection so as to view the Sun's image safely with a telescope. His book on hydraulics, Della Misura dell'Acque Correnti, or On the Measurement of Running Waters, founded modern hydrodynam...

  19. Development of a full-scale transmission testing procedure to evaluate advanced lubricants

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; Decker, Harry J.; Shimski, John T.

    1992-01-01

    Experimental tests were performed on the OH-58A helicopter main rotor transmission in the NASA Lewis 500-hp Helicopter Transmission Test Stand. The testing was part of a joint Navy/NASA/Army lubrication program. The objective of the program was to develop a separate lubricant for gearboxes and demonstrate an improved performance in life and load-carrying capacity. The goal of the experiments was to develop a testing procedure to fail certain transmission components using a MIL-L-23699 base reference oil, then run identical tests with improved lubricants and demonstrate improved performance. The tests were directed at failing components that the Navy has had problems with due to marginal lubrication. These failures included mast shaft bearing micropitting, sun gear and planet bearing fatigue, and spiral bevel gear scoring. A variety of tests were performed and over 900 hours of total run time were accumulated for these tests. Some success was achieved in developing a testing procedure to produce sun gear and planet bearing fatigue failures. Only marginal success was achieved in producing mast shaft bearing micropitting and spiral bevel gear scoring.

  20. Transient dynamics capability at Sandia National Laboratories

    NASA Technical Reports Server (NTRS)

    Attaway, Steven W.; Biffle, Johnny H.; Sjaardema, G. D.; Heinstein, M. W.; Schoof, L. A.

    1993-01-01

    A brief overview of the transient dynamics capabilities at Sandia National Laboratories, with an emphasis on recent new developments and current research, is presented. In addition, the Sandia National Laboratories (SNL) Engineering Analysis Code Access System (SEACAS), which is a collection of structural and thermal codes and utilities used by analysts at SNL, is described. The SEACAS system includes pre- and post-processing codes, analysis codes, database translation codes, support libraries, Unix shell scripts for execution, and an installation system. SEACAS is used at SNL on a daily basis as a production, research, and development system for the engineering analysts and code developers. Over the past year, approximately 190 days of CPU time were used by SEACAS codes on jobs running from a few seconds up to two and one-half days of CPU time. SEACAS is running on several different systems at SNL, including Cray UNICOS, Hewlett Packard HP-UX, Digital Equipment ULTRIX, and Sun SunOS. An overview of SEACAS, including a short description of the codes in the system, is presented. Abstracts and references for the codes are listed at the end of the report.

  1. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of the millions of computers on the Internet and apply it to large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Because the system is web based, users can start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation across thousands of nodes in small spatial and computational units. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
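    The relational queue the authors mention can be approximated in a few lines. The sketch below uses SQLite (an assumption; the platform's actual schema and database engine are not specified) to hand out small simulation chunks to volunteer nodes.

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("""CREATE TABLE tasks (
          id INTEGER PRIMARY KEY, region TEXT, status TEXT DEFAULT 'pending')""")
      db.executemany("INSERT INTO tasks (region) VALUES (?)",
                     [(f"subbasin-{i}",) for i in range(6)])

      def claim_task(worker_id):
          # Hand the next pending chunk to a volunteer node inside one
          # transaction, so two nodes do not claim the same chunk.
          with db:
              row = db.execute(
                  "SELECT id, region FROM tasks WHERE status = 'pending' LIMIT 1"
              ).fetchone()
              if row is None:
                  return None
              db.execute("UPDATE tasks SET status = ? WHERE id = ?",
                         (f"running:{worker_id}", row[0]))
          return row

      print(claim_task("browser-42"))   # e.g. (1, 'subbasin-0')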

  2. Evaluation of Immediate and 12-Week Effects of a Smartphone Sun-Safety Mobile Application: A Randomized Trial

    PubMed Central

    Buller, David B.; Berwick, Marianne; Lantz, Kathy; Buller, Mary Klein; Shane, James; Kane, Ilima; Liu, Xia

    2015-01-01

    Importance: Mobile apps on smartphones can communicate a large amount of personalized, real-time health information, including advice on skin cancer prevention, but their effectiveness may be affected by whether Americans can be convinced to use them.
    Objective: A smartphone mobile application delivering real-time sun protection advice was evaluated for a second time in a randomized trial.
    Design: The trial, conducted in 2013, utilized a randomized pretest-posttest controlled design. Screening procedures and a 3-week run-in period were added to increase use of the mobile app. Follow-ups at 3 and 8 weeks after randomization were conducted to examine immediate and longer-term effects.
    Setting: Data were collected from participants recruited nationwide through online promotions.
    Participants: A volunteer sample of adults aged 18 or older who owned an Android or iPhone smartphone.
    Intervention: The mobile application gave feedback on sun protection (i.e., sun safety practices and sunburn risk) and alerted users to apply/reapply sunscreen and get out of the sun. It also displayed the hourly UV Index and vitamin D production based on the forecast UV Index, time, and location.
    Main Outcomes and Measures: Percent of days using sun protection, days and minutes outdoors in the midday sun, and number of sunburns in the past 3 months were assessed.
    Results: Treatment group participants used wide-brimmed hats more at 7 weeks than controls. Women who used Solar Cell reported using all sun protection combined more than men, but men and older individuals used sunscreen and hats less.
    Conclusions and Relevance: The mobile application appeared to weakly improve sun protection initially. Use of the mobile application was higher than in a previous trial and associated with greater sun protection, especially by women. Strategies to increase use are needed if the mobile app is to be effectively deployed to the general adult population. PMID:25629819

  3. The Missing Mantle Paradox, and the Statistical Argument for Repeated Hit and Run Collisions

    NASA Astrophysics Data System (ADS)

    Asphaug, E.; Reufer, A.

    2015-10-01

    Mercury's formation can be explained by a giant impact. However, a direct hit blasting off the mantle [1] leaves debris stranded in orbit about the Sun, to be re-accumulated back onto Mercury. A hit-and-run collision [2] provides a cleaner solution and, in most cases, much lower levels of shock and potentially greater retention of volatiles. However, hit and run is usually followed by subsequent re-collision and ultimate accretion; an embryo's survival after being a hit-and-run projectile is unlikely in any single instance. Most of the original planetary embryos have been accreted by Earth and Venus; unaccreted planets are lucky. Here we show that the surviving terrestrial planet population is likely to have about as many hit-and-run survivors as untouched survivors. That is, the differences between Mercury and Mars can be explained in a statistical manner as a consequence of accretionary attrition. We consider applications to asteroids, meteorites and exoplanets.

  4. Java: An Explosion on the Internet.

    ERIC Educational Resources Information Center

    Read, Tim; Hall, Hazel

    Summer 1995 saw the release, with considerable media attention, of draft versions of Sun Microsystems' Java computer programming language and the HotJava browser. Java has been heralded as the latest "killer" technology in the Internet explosion. Sun Microsystems and numerous companies including Microsoft, IBM, and Netscape have agreed…

  5. Higher Flux from the Young Sun as an Explanation for Warm Temperatures for Early Earth and Mars

    NASA Technical Reports Server (NTRS)

    Sackmann, I.-Juliana

    2001-01-01

    Observations indicate that the Earth was at least warm enough for liquid water to exist as far back as 4 Gyr ago, namely, as early as half a billion years after the formation of the Earth; in fact, there is evidence suggesting that Earth may have been even warmer then than it is now. These relatively warm temperatures required on early Earth are in apparent contradiction to the dimness of the early Sun predicted by the standard solar models. This problem has generally been explained by assuming that Earth's early atmosphere contained huge amounts of carbon dioxide (CO2), resulting in a large enough greenhouse effect to counteract the effect of a dimmer Sun. However, recent work places an upper limit of 0.04 bar on the partial pressure of CO2 in the period from 2.75 to 2.2 Gyr ago, based on the absence of siderite in paleosols; this casts doubt on the viability of a strong CO2 greenhouse effect on early Earth. The existence of liquid water on early Mars has been even more of a puzzle; even the maximum possible CO2 greenhouse effect cannot yield warm enough Martian surface temperatures. These problems can be resolved simultaneously for both Earth and Mars, if the early Sun was brighter than predicted by the standard solar models. This could be accomplished if the early Sun was slightly more massive than it is now, i.e., if the solar wind was considerably stronger in the past than at present. A slightly more massive young Sun would have left fingerprints on the internal structure of the present Sun. Today, helioseismic observations exist that can measure the internal structure of the Sun with very high precision. The task undertaken here was to compute solar models with the highest precision possible at this time, starting with slightly greater initial masses. These were evolved to the present solar age, where comparisons with the helioseismic observations could be made. Our computations also yielded the time evolution of the solar flux at the planets - a key input to the climates of early Earth and Mars. Early solar mass loss is not the only influence that can alter the internal structure of the present Sun. There are minor uncertainties in the physics of the solar models and in the key observed solar parameters that also affect the present Sun's internal structure. It was therefore imperative to obtain an understanding of the effects of these other uncertainties, in order to disentangle them from the fingerprints that might be left by early solar mass loss. From these considerations, our work was divided into two parts: (1) We first computed the evolution of standard solar models with input parameters varied within their uncertainties, to determine their effect on the observable helioseismic quantities; (2) We then computed non-standard solar models with higher initial masses to test against the helioseismological observations.
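
    To make the premise concrete, the sketch below combines a commonly used approximation for standard solar luminosity evolution (often attributed to Gough, 1981) with rough textbook mass scalings for a slightly more massive young Sun; it is illustrative only, not the high-precision solar models computed in this work.

      # Illustrative faint-young-Sun arithmetic; the exponents below are
      # rough assumptions, not results from the paper's solar models.

      def luminosity_standard(t_gyr, t_sun=4.57):
          """Relative luminosity L/L_sun at age t_gyr, standard solar model
          (approximation: L(t) = 1 / (1 + 0.4 * (1 - t/t_sun)))."""
          return 1.0 / (1.0 + 0.4 * (1.0 - t_gyr / t_sun))

      def flux_at_planet(t_gyr, m_rel=1.0):
          """Relative flux at a planet for an early Sun of mass m_rel * M_sun.
          Rough scalings (assumptions): main-sequence L ~ M^4, and orbits
          expand as r ~ 1/M while the Sun sheds mass, so flux ~ L * M^2."""
          return luminosity_standard(t_gyr) * m_rel ** 6

      # Flux 0.6 Gyr after formation: standard Sun vs. a Sun then 5% more massive.
      print(flux_at_planet(0.6))              # ~0.74 of today's flux
      print(flux_at_planet(0.6, m_rel=1.05))  # ~0.99, near today's flux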

  6. Dynamics of Nuclear Regions of Galaxies

    NASA Technical Reports Server (NTRS)

    Miller, Richard H.

    1996-01-01

    Current research carried out with the help of the ASEE-NASA Summer Faculty Program, at NASA-Ames, is concentrated on the dynamics of nuclear regions of galaxies. From a dynamical point of view a galaxy is a collection of around 10(sup 11) stars like our Sun, each of which moves in the summed gravitational field of all the remaining stars. Thus galaxy dynamics becomes a self-consistent n-body problem with forces given by Newtonian gravitation. Strong nonlinearity in the gravitational force and the inherent nonlinearity of self-consistent problems both argue for a numerical approach. The technique of numerical experiments consists of constructing an environment in the computer that is as close as possible to the physical conditions in a real galaxy, and then carrying out experiments in this environment, much like laboratory experiments in physics or engineering. Computationally, an experiment is an initial value problem, and a good deal of thought and effort goes into the design of the starting conditions that serve as initial values. Experiments are run at Ames because all the 'equipment' is in place: the programs, the necessary computational power, and good facilities for post-run analysis. Our goal for this research program is to study the nuclear regions in detail, and this means replacing most of the galaxy by a suitable boundary condition to allow the full capability of numerical experiments to be brought to bear on a small region, perhaps 1/1000 of the linear dimensions of an entire galaxy. This is an extremely delicate numerical problem, one in which some small feature, if overlooked, can easily lead to a collapse or blow-up of the entire system. All particles attract each other in gravitational problems, and the 1/r(sup 2) force is: (1) nonlinear; (2) strong at short range; (3) long-range; and (4) unscreened at any distance.
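
    As a minimal sketch of the self-consistent n-body force evaluation described above (illustrative direct summation with softening; production galaxy codes use far more sophisticated solvers and boundary conditions):

      # Direct-summation Newtonian accelerations with a softening length
      # that tames the strong short-range 1/r^2 force.
      import numpy as np

      def accelerations(pos, mass, G=1.0, eps=1e-3):
          """Acceleration on each star from all the remaining stars."""
          n = len(mass)
          acc = np.zeros_like(pos)
          for i in range(n):
              d = pos - pos[i]                      # vectors to all other stars
              r2 = (d * d).sum(axis=1) + eps ** 2   # softened squared distances
              r2[i] = np.inf                        # exclude self-force
              acc[i] = G * (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
          return acc

      # Toy initial values for an experiment: 1000 equal-mass stars.
      pos = np.random.randn(1000, 3)
      mass = np.full(1000, 1.0 / 1000)
      a = accelerations(pos, mass)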

  7. Automatic Hotspot and Sun Glint Detection in UAV Multispectral Images

    PubMed Central

    Ortega-Terol, Damian; Ballesteros, Rocio

    2017-01-01

    Recent advances in sensors, photogrammetry and computer vision have led to high levels of automation in 3D reconstruction processes for generating dense models and multispectral orthoimages from Unmanned Aerial Vehicle (UAV) images. However, these cartographic products are sometimes blurred and degraded due to sun reflection effects which reduce the image contrast and colour fidelity in photogrammetry and the quality of radiometric values in remote sensing applications. This paper proposes an automatic approach for detecting sun reflection problems (hotspot and sun glint) in multispectral images acquired with an Unmanned Aerial Vehicle (UAV), based on a photogrammetric strategy included in a flight planning and control software developed by the authors. In particular, two main consequences are derived from the approach developed: (i) different areas of the images can be excluded since they contain sun reflection problems; (ii) the cartographic products obtained (e.g., digital terrain model, orthoimages) and the agronomical parameters computed (e.g., normalized difference vegetation index, NDVI) are improved since radiometric defects in pixels are not considered. Finally, an accuracy assessment was performed in order to analyse the error in the detection process, yielding errors of around 10 pixels for a ground sample distance (GSD) of 5 cm, which is perfectly valid for agricultural applications. This error confirms that the precision in the detection of sun reflections can be guaranteed using this approach and the current low-cost UAV technology. PMID:29036930
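
    The abstract does not reproduce the detection geometry; as a hedged sketch of the underlying principle (a hotspot appears where the viewing direction approaches the anti-solar direction, and sun glint where it approaches the specular reflection of the sun about the surface normal), assuming unit vectors in a local east-north-up frame and flat terrain:

      # Illustrative hotspot / sun glint geometry check; the authors' actual
      # photogrammetric strategy is more involved than this.
      import numpy as np

      def sun_vector(azimuth_deg, elevation_deg):
          az, el = np.radians([azimuth_deg, elevation_deg])
          return np.array([np.cos(el) * np.sin(az),   # east
                           np.cos(el) * np.cos(az),   # north
                           np.sin(el)])               # up

      def angle_deg(u, v):
          return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

      def flag_reflections(view_dir, sun_dir, tol_deg=5.0):
          """view_dir: unit vector from the camera toward the ground point."""
          hotspot = angle_deg(-view_dir, sun_dir) < tol_deg       # anti-solar
          n = np.array([0.0, 0.0, 1.0])                           # flat-terrain normal
          specular = sun_dir - 2 * np.dot(sun_dir, n) * n         # mirrored sun direction
          glint = angle_deg(view_dir, specular) < tol_deg
          return hotspot, glint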

  8. Automatic Hotspot and Sun Glint Detection in UAV Multispectral Images.

    PubMed

    Ortega-Terol, Damian; Hernandez-Lopez, David; Ballesteros, Rocio; Gonzalez-Aguilera, Diego

    2017-10-15

    Recent advances in sensors, photogrammetry and computer vision have led to high levels of automation in 3D reconstruction processes for generating dense models and multispectral orthoimages from Unmanned Aerial Vehicle (UAV) images. However, these cartographic products are sometimes blurred and degraded due to sun reflection effects which reduce the image contrast and colour fidelity in photogrammetry and the quality of radiometric values in remote sensing applications. This paper proposes an automatic approach for detecting sun reflection problems (hotspot and sun glint) in multispectral images acquired with an Unmanned Aerial Vehicle (UAV), based on a photogrammetric strategy included in a flight planning and control software developed by the authors. In particular, two main consequences are derived from the approach developed: (i) different areas of the images can be excluded since they contain sun reflection problems; (ii) the cartographic products obtained (e.g., digital terrain model, orthoimages) and the agronomical parameters computed (e.g., normalized difference vegetation index, NDVI) are improved since radiometric defects in pixels are not considered. Finally, an accuracy assessment was performed in order to analyse the error in the detection process, yielding errors of around 10 pixels for a ground sample distance (GSD) of 5 cm, which is perfectly valid for agricultural applications. This error confirms that the precision in the detection of sun reflections can be guaranteed using this approach and the current low-cost UAV technology.

  9. YAMM - YET ANOTHER MENU MANAGER

    NASA Technical Reports Server (NTRS)

    Mazer, A. S.

    1994-01-01

    One of the most time-consuming yet necessary tasks of writing any piece of interactive software is the development of a user interface. Yet Another Menu Manager, YAMM, is an application-independent menuing package, designed to remove much of the difficulty and save much of the time inherent in the implementation of the front ends for large packages. Written in C for UNIX-based operating systems, YAMM provides a complete menuing front end for a wide variety of applications, with provisions for terminal independence, user-specific configurations, and dynamic creation of menu trees. Applications running under the menu package consist of two parts: a description of the menu configuration and the body of application code. The menu configuration is used at runtime to define the menu structure and any non-standard keyboard mappings and terminal capabilities. Menu definitions define specific menus within the menu tree. The names used in a definition may be either a reference to an application function or the name of another menu defined within the menu configuration. Application parameters are entered using data entry screens which allow for required and optional parameters, tables, and legal-value lists. Both automatic and application-specific error checking are available. Help is available for both menu operation and specific applications. The YAMM program was written in C for execution on a Sun Microsystems workstation running SunOS, based on the Berkeley (4.2bsd) version of UNIX. During development, YAMM has been used on both 68020 and SPARC architectures, running SunOS versions 3.5 and 4.0. YAMM should be portable to most other UNIX-based systems. It has a central memory requirement of approximately 232K bytes. The standard distribution medium for this program is one .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. YAMM was developed in 1988 and last updated in 1990. YAMM is a copyrighted work with all copyright vested in NASA.

  10. Sun Valley Elementary School Reading and Writing Assessment Project: Final Report.

    ERIC Educational Resources Information Center

    Zakaluk, Beverley L.

    A study investigated the effectiveness of integrating computer technology (multimedia learning resources in a "virtual" classroom) with content area and reading and writing curriculum. All students in grades 2 through 5 at Sun Valley Elementary School, Canada, had their reading and writing assessed. In addition, the writing performance…

  11. Report on the solar physics-plasma physics workshop

    NASA Technical Reports Server (NTRS)

    Sturrock, P. A.; Baum, P. J.; Beckers, J. M.; Newman, C. E.; Priest, E. R.; Rosenberg, H.; Smith, D. F.; Wentzel, D. G.

    1976-01-01

    The paper summarizes discussions held between solar physicists and plasma physicists on the interface between solar and plasma physics, with emphasis placed on the question of what laboratory experiments, or computer experiments, could be pursued to test proposed mechanisms involved in solar phenomena. Major areas discussed include nonthermal plasma on the sun, spectroscopic data needed in solar plasma diagnostics, types of magnetic field structures in the sun's atmosphere, the possibility of MHD phenomena involved in solar eruptive phenomena, the role of non-MHD instabilities in energy release in solar flares, particle acceleration in solar flares, shock waves in the sun's atmosphere, and mechanisms of radio emission from the sun.

  12. The World Optical Depth Research and Calibration Center (WORCC) quality assurance and quality control of GAW-PFR AOD measurements

    NASA Astrophysics Data System (ADS)

    Kazadzis, Stelios; Kouremeti, Natalia; Nyeki, Stephan; Gröbner, Julian; Wehrli, Christoph

    2018-02-01

    The World Optical Depth Research Calibration Center (WORCC) is a section within the World Radiation Center at Physikalisches-Meteorologisches Observatorium (PMOD/WRC), Davos, Switzerland, established after the recommendations of the World Meteorological Organization for calibration of aerosol optical depth (AOD)-related Sun photometers. WORCC is mandated to develop new methods for instrument calibration, to initiate homogenization activities among different AOD networks and to run a network (GAW-PFR) of Sun photometers. In this work we describe the calibration hierarchy and methods used under WORCC and the basic procedures, tests and processing techniques in order to ensure the quality assurance and quality control of the AOD-retrieved data.
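
    As a hedged sketch of the standard Langley calibration and Beer-Lambert retrieval on which such sun photometer AOD processing is based (illustrative only; the operational WORCC processing applies many further corrections, e.g. for Rayleigh scattering and trace-gas absorption):

      # Langley calibration and total optical depth from sun photometer
      # voltages V at air masses m, via Beer-Lambert: V = V0 * exp(-m * tau).
      import numpy as np

      def langley_v0(airmass, voltage):
          """Extrapolate ln(V) vs. air mass to m = 0 to estimate the
          extraterrestrial calibration constant V0."""
          slope, intercept = np.polyfit(airmass, np.log(voltage), 1)
          return np.exp(intercept)

      def total_optical_depth(v0, voltage, airmass):
          return (np.log(v0) - np.log(voltage)) / airmass

      # Synthetic morning Langley plot on a stable day (true tau = 0.12):
      m = np.linspace(2.0, 5.0, 20)
      v = 1.35 * np.exp(-m * 0.12)
      v0 = langley_v0(m, v)                      # recovers ~1.35
      tau = total_optical_depth(v0, v[0], m[0])  # recovers ~0.12
      # AOD is tau minus the Rayleigh and gas contributions (omitted here).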

  13. The Sun was Not Born in M67

    NASA Astrophysics Data System (ADS)

    Pichardo, Bárbara; Moreno, Edmundo; Allen, Christine; Bedin, Luigi R.; Bellini, Andrea; Pasquini, Luca

    2012-03-01

    Using the most recent proper-motion determination of the old, solar-metallicity, Galactic open cluster M67 in orbital computations in a non-axisymmetric model of the Milky Way, including a bar and three-dimensional spiral arms, we explore the possibility that the Sun once belonged to this cluster. We have performed Monte Carlo numerical simulations to generate the present-day orbital conditions of the Sun and M67, and all the parameters in the Galactic model. We compute 3.5 × 10^5 pairs of Sun-M67 orbits, looking for close encounters in the past with a minimum distance approach within the tidal radius of M67. In these encounters we find that the relative velocity between the Sun and M67 is larger than 20 km s^-1. If the Sun had been ejected from M67 with this high velocity by means of a three-body encounter, this interaction would have either destroyed an initial circumstellar disk around the Sun or dispersed its already formed planets. We also find a very low probability, much lower than 10^-7, that the Sun was ejected from M67 by an encounter of this cluster with a giant molecular cloud. This study also excludes the possibility that the Sun and M67 were born in the same molecular cloud. Our dynamical results convincingly demonstrate that M67 could not have been the birth cluster of our solar system. This work relies partly on observations of the Large Binocular Telescope (LBT). The LBT is an international collaboration among institutions in the United States, Italy, and Germany. LBT Corporation partners are The Ohio State University; The University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max-Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University; and The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota, and University of Virginia.

  14. Electronic Engineering Notebook: A software environment for research execution, documentation and dissemination

    NASA Technical Reports Server (NTRS)

    Moerder, Dan

    1994-01-01

    The electronic engineering notebook (EEN) consists of a free-form research notebook, implemented in a commercial package for distributed hypermedia, which includes utilities for graphics capture, formatting and display of LaTeX constructs, and interfaces to the host operating system. The latter capability consists of an information computer-aided software engineering (CASE) tool and a means to associate executable scripts with source objects. The EEN runs on Sun and HP workstations. In day-to-day use, the EEN can be used in much the same manner as the research notes most researchers keep during the development of projects. Graphics can be pasted in, equations can be entered via LaTeX, etc. In addition, the fact that the EEN is hypermedia permits easy management of 'context', e.g., derivations and data can contain easily formed links to other supporting derivations and data. The CASE tool also permits development and maintenance of source code directly in the notebook, with access to its derivations and data.

  15. Processing and Managing the Kepler Mission's Treasure Trove of Stellar and Exoplanet Data

    NASA Technical Reports Server (NTRS)

    Jenkins, Jon M.

    2016-01-01

    The Kepler telescope launched into orbit in March 2009, initiating NASA's first mission to discover Earth-size planets orbiting Sun-like stars. Kepler simultaneously collected data for 160,000 target stars at a time over its four-year mission, identifying over 4700 planet candidates, 2300 confirmed or validated planets, and over 2100 eclipsing binaries. While Kepler was designed to discover exoplanets, the long-term, ultra-high photometric precision measurements it achieved made it a premier observational facility for stellar astrophysics, especially in the field of asteroseismology, and for variable stars, such as RR Lyraes. The Kepler Science Operations Center (SOC) was developed at NASA Ames Research Center to process the data acquired by Kepler, from pixel-level calibrations all the way to identifying transiting planet signatures and subjecting them to a suite of diagnostic tests to establish or break confidence in their planetary nature. Detecting small, rocky planets transiting Sun-like stars presents a variety of daunting challenges, from achieving an unprecedented photometric precision of 20 parts per million (ppm) on 6.5-hour timescales to supporting the science operations, management, processing, and repeated reprocessing of the accumulating data stream. This paper describes how the design of the SOC meets these varied challenges, discusses the architecture of the SOC and how the SOC pipeline is operated and run on the NAS Pleiades supercomputer, and summarizes the most important pipeline features addressing the multiple computational, image and signal processing challenges posed by Kepler.

  16. Two demonstrators and a simulator for a sparse, distributed memory

    NASA Technical Reports Server (NTRS)

    Brown, Robert L.

    1987-01-01

    Described are two programs demonstrating different aspects of Kanerva's Sparse, Distributed Memory (SDM). These programs run on Sun 3 workstations, one using color, and have straightforward graphically oriented user interfaces and graphical output. Presented are descriptions of the programs, how to use them, and what they show. Additionally, this paper describes the software simulator behind each program.
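
    As a hedged sketch of the read/write mechanism such demonstrators visualize (a toy SDM, not the programs described here): random hard locations are activated within a Hamming radius of the address, writes add bitwise up/down votes to counters, and reads take a majority vote.

      # Toy Sparse Distributed Memory (illustrative parameters).
      import numpy as np

      rng = np.random.default_rng(0)
      N_BITS, N_LOCS, RADIUS = 256, 2000, 112

      hard_locs = rng.integers(0, 2, (N_LOCS, N_BITS))
      counters = np.zeros((N_LOCS, N_BITS), dtype=int)

      def active(address):
          """Hard locations within Hamming distance RADIUS of the address."""
          return (hard_locs != address).sum(axis=1) <= RADIUS

      def write(address, data):
          counters[active(address)] += np.where(data == 1, 1, -1)

      def read(address):
          return (counters[active(address)].sum(axis=0) > 0).astype(int)

      pattern = rng.integers(0, 2, N_BITS)
      write(pattern, pattern)                   # autoassociative store
      noisy = pattern.copy(); noisy[:20] ^= 1   # corrupt 20 bits
      recalled = read(noisy)                    # close to the stored pattern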

  17. Environmental and Water Quality Operational Studies. Bibliography of Effects of Commercial Navigation Traffic in Large Waterways

    DTIC Science & Technology

    1987-04-01

    Largemouth Bass and Green Sunfish," Technical Paper 20 of the U. S. Bureau of Sport Fisheries and Wildlife. Washington, DC. Helvig, P. C. 1966. "An...as a Chronobiological Phenomena in Running Water Ecosystems," Annual Review of Ecology and Systematics, Vol 5. pp 309-323. North Star Research

  18. More than a Dream-a Renewable Electricity Future - Continuum Magazine |

    Science.gov Websites

    when the sun is highest. Unfortunately, when people return home from work, their air-conditioning realizing a clean-energy-dominated grid. Unique Challenges with Renewable Electricity Production When effectively when they run continually and at a consistent level of output, they can ramp their outputs, but

  19. Spotless

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Everything runs in cycles and what goes up must come down. We hear that a lot these days. The topic of conversation is of course the sun. The solar cycle takes 11 years to go from sunspot minimum to maximum and back to minimum. The cycle is driven by changes in the Sun's magnetic field, and is actually a 22-year cycle: during the second 11 years the magnetic polarity of the solar field is reversed. The Solar and Heliospheric Observatory satellite (or SOHO for short), a joint ESA and NASA mission, has been watching the sun since 1995. Rarely is the sun as quiet as it was on September 27, 2008 - as shown in the visible-light image above left, there were absolutely no sunspots to be seen. If the activity stays this low, this might be the most inactive the Sun has been since the dawn of the space age. This still pales in comparison to the 17th century when for a period of 70 years (called the Maunder Minimum) there were no reported sunspots. Some scientists believe the Maunder Minimum was responsible for a 'Little Ice Age' and the sound of some violins. The image on the right, taken 3 days later in extreme UV light, shows the formation of two active regions (in the circles), but both faded away before becoming full-fledged spots. So how low will it go? Only time will tell.

  20. Skin and antioxidants.

    PubMed

    Poljsak, Borut; Dahmane, Raja; Godic, Aleksandar

    2013-04-01

    It is estimated that total sun exposure occurs non-intentionally in three quarters of our lifetimes. Our skin is exposed to the majority of UV radiation during outdoor activities, e.g. walking, practicing sports, running, hiking, etc., and not when we are intentionally exposed to the sun on the beach. We rarely use sunscreens during those activities, or at least not as much and as regularly as we should, and we are commonly prone to acute and chronic sun damage of the skin. The only protection of our skin is endogenous (synthesis of melanin and enzymatic antioxidants) and exogenous (antioxidants which we consume in food, like vitamins A, C, E, etc.). UV-induced photoaging of the skin becomes clinically evident with age, when endogenous antioxidative mechanisms and repair processes are no longer effective and actinic damage to the skin prevails. At this point it would be reasonable to ingest additional antioxidants and/or to apply them on the skin in topical preparations. We review endogenous and exogenous skin protection with antioxidants.

  1. System and method for controlling power consumption in a computer system based on user satisfaction

    DOEpatents

    Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok

    2014-04-22

    Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
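
    As a hedged sketch of the selection step the patent claims (the data structures and numbers below are hypothetical; the patent does not specify an implementation):

      # Hypothetical stored relationships: (user, application) ->
      # {frequency in MHz: observed satisfaction in [0, 1]}.
      PROFILES = {
          ("alice", "browser"): {800: 0.55, 1600: 0.90, 2400: 0.93},
          ("alice", "editor"):  {800: 0.88, 1600: 0.95, 2400: 0.96},
      }

      def select_frequency(user, app, threshold=0.85):
          """Pick the lowest discrete frequency whose recorded satisfaction
          for this user and application meets the threshold, saving power."""
          profile = PROFILES[(user, app)]
          ok = [f for f, s in sorted(profile.items()) if s >= threshold]
          return ok[0] if ok else max(profile)   # fall back to the fastest

      print(select_frequency("alice", "editor"))   # -> 800
      print(select_frequency("alice", "browser"))  # -> 1600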

  2. Old flying ice-rock body in space allows a glance at its inner working.

    NASA Astrophysics Data System (ADS)

    Bieler, A. M.

    2015-12-01

    I am studying old, cold bodies of rock and ice flying through space, usually far, far away from the Sun. They are even behind the last of the big 8 balls we call our home worlds. (There were 9 balls a few years ago, but then one of the balls was not considered a ball anymore by some people and he/she had to leave the group.) Because they are so far away from the Sun, they remain dark and very cold for the most part of their life. That is why even most of the very nervous stuff sticks on them ever since. With stuff I mean the little things that the Sun, the big 8 balls, we humans and everything else that is flying around the Sun is made of. The nervous ones quickly change into something wind like and can get lost. But the cold on the ice-rock bodies slows this down and they stick around. This makes those ice-rock bodies interesting to study, they did not change too much since they were made. I study news sent back from a computer controlled box flying around one of those rock-ice things that is now closer to the Sun. When the space between such a body and the Sun gets smaller, it warms up and some of the ice changes into wind like things. We find out how much of what stuff is flying away from that body and at what time. Then I and my friends put those numbers into a big ass computer to find out more on how those rock-ice bodies work. Where does the wind come from? Do they all come from the same place or only some? Is it really the Sun's fault? How many cups of ice change into wind each day? Many questions.

  3. Mysterious Roving Rocks of Racetrack Playa

    NASA Image and Video Library

    2017-12-08

    The trails can be straight, or they can curve. Sometimes, two trails run alongside each other. Those two lines running from left to right in the back look like they were made by a car; but they were made by rocks. Photo credit: NASA/GSFC/Maggie McAdam To read a feature story on the Racetrack Playa go to: www.nasa.gov/topics/earth/features/roving-rocks.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  4. On-orbit checkout study. [for the synchronous meteorological satellite and the technology demonstration satellite

    NASA Technical Reports Server (NTRS)

    Pritchard, E. I.

    1977-01-01

    The spaceborne testing equipment carried by the orbiter and the measuring equipment onboard the satellite (telemetry) are tested to verify that each is operating satisfactorily. The satellite command system is also checked. Thermal stabilization with the satellite in the orbiter shadow is achieved in six to eight hours. Satellite subsystem tests are run, and thermal control by heaters is checked. Thermal stabilization with the satellite exposed to the sun (when the orbiter is in sunlight) is again achieved in an estimated six to eight hours. Subsystem tests are again run in the hot condition, and heat rejection tests are made.

  5. Response Across the Health-Literacy Spectrum of Kidney Transplant Recipients to a Sun-Protection Education Program Delivered on Tablet Computers: Randomized Controlled Trial.

    PubMed

    Robinson, June K; Friedewald, John J; Desai, Amishi; Gordon, Elisa J

    2015-08-18

    Sun protection can reduce skin cancer development in kidney transplant recipients, who have a greater risk of developing squamous cell carcinoma than the general population. A culturally sensitive sun-protection program (SunProtect) was created in English and Spanish with the option of choosing audio narration provided by the tablet computer (Samsung Galaxy Tab 2 10.1). The intervention, which showed skin cancer on patients with various skin tones, explained skin cancer risk and the ability of sun protection to reduce this risk, and offered sun-protection choices. The length of the intervention was limited to the time usually spent waiting during a visit to the nephrologist. The development of this culturally sensitive, electronic, interactive sun-protection educational program, SunProtect, was guided by the "transtheoretical model," which focuses on decision making influenced by perceptions of personal risk or vulnerability to a health threat, importance (severity) of the disease, and benefit of sun-protection behavior. Transportation theory, which holds that narratives can have uniquely persuasive effects in overcoming preconceived beliefs and cognitive biases because people transported into a narrative world will alter their beliefs based on information, claims, or events depicted, guided the use of testimonials. Participant tablet use was self-directed. Self-reported responses to surveys were entered into the database through the tablet. Usability was tested through interviews. A randomized controlled pilot trial with 170 kidney transplant recipients was conducted, in which the educational program (SunProtect) was delivered through a touch-screen tablet to 84 participants. The study involved 62 non-Hispanic white, 60 non-Hispanic black, and 48 Hispanic/Latino kidney transplant recipients. The demographic survey data showed no significant mean differences between the intervention and control groups in age, sex, income, or time since transplantation. The mean duration of program use varied by ethnic/racial group, with non-Hispanic whites having the shortest use (23 minutes) and Hispanic/Latinos having the longest use (42 minutes). Knowledge, awareness of skin cancer risk, willingness to change sun protection, and use of sun protection increased from baseline to 2 weeks after the program in participants from all ethnic/racial groups in comparison with controls (P<.05). Kidney transplant recipients with inadequate (47/170, 28%) and marginal functional health literacy (59/170, 35%) listened to either Spanish or English audio narration accompanying the text and graphics. After completion of the program, Hispanic/Latino patients with initially inadequate health literacy increased their knowledge more than non-Hispanic white and black patients with adequate health literacy (P<.05). Sun protection implemented 2 weeks after education varied by ethnic/racial group. Outdoor activities were reduced by Hispanics/Latinos, non-Hispanic blacks sought shade, Hispanic/Latinos and non-Hispanic blacks wore clothing, and non-Hispanic whites wore sunscreen (P<.05). An educational program delivered on a tablet computer during kidney transplant recipients' 6- or 12-month follow-up visits to the transplant nephrologist improved sun protection in all racial/ethnic groups. Tablets may be used to provide patient education and reduce the physician's burden of educating and training patients. ClinicalTrials.gov NCT01646099; https://clinicaltrials.gov/ct2/show/NCT01646099.
©June K Robinson, John J. Friedewald, Amishi Desai, Elisa J Gordon. Originally published in JMIR Cancer (http://cancer.jmir.org), 18.08.2015.

  6. Response Across the Health-Literacy Spectrum of Kidney Transplant Recipients to a Sun-Protection Education Program Delivered on Tablet Computers: Randomized Controlled Trial

    PubMed Central

    Friedewald, John J; Desai, Amishi; Gordon, Elisa J

    2015-01-01

    Background Sun protection can reduce skin cancer development in kidney transplant recipients, who have a greater risk of developing squamous cell carcinoma than the general population. Objective A culturally sensitive sun-protection program (SunProtect) was created in English and Spanish with the option of choosing audio narration provided by the tablet computer (Samsung Galaxy Tab 2 10.1). The intervention, which showed skin cancer on patients with various skin tones, explained skin cancer risk and the ability of sun protection to reduce this risk, and offered sun-protection choices. The length of the intervention was limited to the time usually spent waiting during a visit to the nephrologist. Methods The development of this culturally sensitive, electronic, interactive sun-protection educational program, SunProtect, was guided by the “transtheoretical model,” which focuses on decision making influenced by perceptions of personal risk or vulnerability to a health threat, importance (severity) of the disease, and benefit of sun-protection behavior. Transportation theory, which holds that narratives can have uniquely persuasive effects in overcoming preconceived beliefs and cognitive biases because people transported into a narrative world will alter their beliefs based on information, claims, or events depicted, guided the use of testimonials. Participant tablet use was self-directed. Self-reported responses to surveys were entered into the database through the tablet. Usability was tested through interviews. A randomized controlled pilot trial with 170 kidney transplant recipients was conducted, in which the educational program (SunProtect) was delivered through a touch-screen tablet to 84 participants. Results The study involved 62 non-Hispanic white, 60 non-Hispanic black, and 48 Hispanic/Latino kidney transplant recipients. The demographic survey data showed no significant mean differences between the intervention and control groups in age, sex, income, or time since transplantation. The mean duration of program use varied by ethnic/racial group, with non-Hispanic whites having the shortest use (23 minutes) and Hispanic/Latinos having the longest use (42 minutes). Knowledge, awareness of skin cancer risk, willingness to change sun protection, and use of sun protection increased from baseline to 2 weeks after the program in participants from all ethnic/racial groups in comparison with controls (P<.05). Kidney transplant recipients with inadequate (47/170, 28%) and marginal functional health literacy (59/170, 35%) listened to either Spanish or English audio narration accompanying the text and graphics. After completion of the program, Hispanic/Latino patients with initially inadequate health literacy increased their knowledge more than non-Hispanic white and black patients with adequate health literacy (P<.05). Sun protection implemented 2 weeks after education varied by ethnic/racial group. Outdoor activities were reduced by Hispanics/Latinos, non-Hispanic blacks sought shade, Hispanic/Latinos and non-Hispanic blacks wore clothing, and non-Hispanic whites wore sunscreen (P<.05). Conclusion An educational program delivered on a tablet computer during kidney transplant recipients’ 6- or 12-month follow-up visits to the transplant nephrologist improved sun protection in all racial/ethnic groups. Tablets may be used to provide patient education and reduce the physician’s burden of educating and training patients. 
Trial Registration ClinicalTrials.gov NCT01646099; https://clinicaltrials.gov/ct2/show/NCT01646099 PMID:28410176

  7. Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS

    NASA Technical Reports Server (NTRS)

    Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    We present views and analysis of the execution of several PVM (Parallel Virtual Machine) codes for Computational Fluid Dynamics on a network of Sparcstations, including: (1) NAS Parallel Benchmarks CG and MG; (2) a multi-partitioning algorithm for NAS Parallel Benchmark SP; and (3) an overset grid flowsolver. These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We describe the architecture, operation and application of AIMS. The AIMS toolkit contains: (1) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (2) Monitor, a library of runtime trace-collection routines; (3) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (4) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran 77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (1) the impact of long message latencies; (2) the impact of multiprogramming overheads and associated load imbalance; (3) cache and virtual-memory effects; and (4) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs. Some of the features planned for the near future include: (1) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (2) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.
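
    As a hedged sketch of the timestamp-adjustment idea mentioned above (illustrative; AIMS' actual skew-compensation algorithm is not reproduced in the abstract): subtract each host's calibrated constant skew, then nudge any receive time that still precedes its matching send time.

      # Illustrative clock-skew compensation for message-trace events.
      def compensate(events, skew_by_host, min_latency=1e-6):
          """events: dicts with 'host', 'time', and, for receive events, a
          'send_time' already expressed on the reference clock."""
          fixed = []
          for ev in events:
              t = ev["time"] - skew_by_host.get(ev["host"], 0.0)
              if "send_time" in ev and t < ev["send_time"] + min_latency:
                  # a message cannot arrive before it was sent
                  t = ev["send_time"] + min_latency
              fixed.append({**ev, "time": t})
          return fixed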

  8. The dance of the honeybee: how do honeybees dance to transfer food information effectively?

    PubMed

    Okada, R; Ikeno, H; Sasayama, Noriko; Aonuma, H; Kurabayashi, D; Ito, E

    2008-01-01

    A honeybee informs her nestmates of the location of a flower she has visited by a unique behavior called a "waggle dance." On a vertical comb, the direction of the waggle run relative to gravity indicates the direction to the food source relative to the sun in the field, and the duration of the waggle run indicates the distance to the food source. To determine the detailed biological features of the waggle dance, we observed worker honeybee behavior in the field. Video analysis showed that the bee does not dance in a single or random place in the hive but waggled several times in one place and then several times in another. It also showed that the information of the waggle dance contains a substantial margin of error. Angle and duration of waggle runs varied from run to run, within a range of +/-15 degrees and +/-15%, respectively, even in a series of waggle dances of a single individual. We also found that most dance followers that listen to the waggle dance left the dancer after one or two sessions of listening.
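
    As a hedged sketch of the classic decoding rule the abstract describes (the distance calibration below is a made-up constant; real calibrations vary between colonies and landscapes):

      # Illustrative waggle-dance decoding: the angle from vertical on the
      # comb maps to a bearing relative to the sun's azimuth, and waggle-run
      # duration maps roughly linearly to distance.
      def decode_waggle(angle_from_vertical_deg, run_duration_s,
                        sun_azimuth_deg, metres_per_second=1000.0):
          """metres_per_second is a hypothetical calibration constant."""
          bearing = (sun_azimuth_deg + angle_from_vertical_deg) % 360.0
          distance_m = metres_per_second * run_duration_s
          return bearing, distance_m

      # A run 20 degrees right of vertical lasting 1.5 s, sun at azimuth 180:
      print(decode_waggle(20.0, 1.5, 180.0))   # -> (200.0, 1500.0)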

  9. The role of global cloud climatologies in validating numerical models

    NASA Technical Reports Server (NTRS)

    HARSHVARDHAN

    1993-01-01

    The purpose of this work is to estimate sampling errors of area-time averaged rain rate due to temporal sampling by satellites. In particular, the sampling errors of the proposed low-inclination-orbit satellite of the Tropical Rainfall Measuring Mission (TRMM) (35 deg inclination and 350 km altitude), one of the sun-synchronous polar orbiting satellites of the NOAA series (98.89 deg inclination and 833 km altitude), and two simultaneous sun-synchronous polar orbiting satellites--assumed to carry a perfect passive microwave sensor for direct rainfall measurements--will be estimated. This estimate is done by performing a study of the satellite orbits and the autocovariance function of the area-averaged rain rate time series. A model based on an exponential fit of the autocovariance function is used for the actual calculations. Varying visiting intervals and the total coverage of the averaging area on each visit by the satellites are taken into account in the model. The data are generated by a General Circulation Model (GCM). The model has a diurnal cycle and parameterized convective processes. A special run of the GCM was made at NASA/GSFC in which the rainfall and precipitable water fields were retained globally for every hour of the run for the whole year.
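
    As a hedged illustration of the idea (a Monte Carlo stand-in, not the authors' analytic model): an AR(1) series has exactly the exponential autocovariance assumed above, so subsampling it at a satellite's visiting interval and comparing the sampled mean with the true mean mimics the sampling-error estimate.

      # Monte Carlo stand-in for the temporal sampling error of a mean
      # rain-rate proxy with exponential autocovariance (AR(1) process).
      import numpy as np

      def sampling_error(visit_hours, decorr_hours=12.0, n_hours=24 * 30,
                         n_trials=500, seed=0):
          rng = np.random.default_rng(seed)
          phi = np.exp(-1.0 / decorr_hours)   # lag-1 autocorrelation
          errs = []
          for _ in range(n_trials):
              x = np.empty(n_hours)
              x[0] = rng.normal()
              for t in range(1, n_hours):
                  x[t] = phi * x[t - 1] + np.sqrt(1 - phi**2) * rng.normal()
              errs.append(x[::visit_hours].mean() - x.mean())
          return np.sqrt(np.mean(np.square(errs)))

      # Denser vs. sparser revisit (illustrative intervals, not real orbits):
      print(sampling_error(6), sampling_error(24))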

  10. FluorWPS: A Monte Carlo ray-tracing model to compute sun-induced chlorophyll fluorescence of three-dimensional canopy

    USDA-ARS?s Scientific Manuscript database

    A model to simulate radiative transfer (RT) of sun-induced chlorophyll fluorescence (SIF) of three-dimensional (3-D) canopy, FluorWPS, was proposed and evaluated. The inclusion of fluorescence excitation was implemented with the ‘weight reduction’ and ‘photon spread’ concepts based on Monte Carlo ra...

  11. "Earth, Sun and Moon": Computer Assisted Instruction in Secondary School Science--Achievement and Attitudes

    ERIC Educational Resources Information Center

    Ercan, Orhan; Bilen, Kadir; Ural, Evrim

    2016-01-01

    This study investigated the impact of a web-based teaching method on students' academic achievement and attitudes in the elementary education fifth grade Science and Technology unit, "System of Earth, Sun and Moon". The study was a quasi-experimental study with experimental and control groups comprising 54 fifth grade students attending…

  12. RAMONA-4B a computer code with three-dimensional neutron kinetics for BWR and SBWR system transient - user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.

    This document is the User's Manual for the Boiling Water Reactor (BWR) and Simplified Boiling Water Reactor (SBWR) systems transient code RAMONA-4B. The code uses a three-dimensional neutron-kinetics model coupled with a multichannel, nonequilibrium, drift-flux, phase-flow model of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients. Chapter 1 gives an overview of the code's capabilities and limitations; Chapter 2 describes the code's structure, lists major subroutines, and discusses the computer requirements. Chapter 3 covers the code, auxiliary codes, and instructions for running RAMONA-4B on Sun SPARC and IBM workstations. Chapter 4 contains component descriptions and detailed card-by-card input instructions. Chapter 5 provides samples of the tabulated output for the steady-state and transient calculations and discusses the plotting procedures for the steady-state and transient calculations. Three appendices contain important user and programmer information: lists of plot variables (Appendix A), listings of the input deck for the sample problem (Appendix B), and a description of the plotting program PAD (Appendix C). 24 refs., 18 figs., 11 tabs.

  13. A Vectorial Model to Compute Terrain Parameters, Local and Remote Sheltering, Scattering and Albedo using TIN Domains for Hydrologic Modeling.

    NASA Astrophysics Data System (ADS)

    Moreno, H. A.; Ogden, F. L.; Steinke, R. C.; Alvarez, L. V.

    2015-12-01

    Triangulated Irregular Networks (TINs) are increasingly popular for terrain representation in high-performance surface and hydrologic modeling because of their ability to capture significant changes in surface forms such as topographical summits, slope breaks, ridges, valley floors, pits and cols. This work presents a methodology for estimating slope, aspect and the components of the incoming solar radiation using a vectorial approach within a topocentric coordinate system, establishing geometric relations between groups of TIN elements and the sun position. A normal vector to the surface of each TIN element describes slope and aspect, while spherical trigonometry allows computing a unit vector defining the position of the sun at each hour and day of year (DOY). Thus, a dot product determines the radiation flux at each TIN element. Remote shading is computed by scanning the projection of groups of TIN elements in the direction of the closest perpendicular plane to the sun vector. Sky view fractions are computed by a simplified scanning algorithm in prescribed directions and are useful for determining diffuse radiation. Finally, remote radiation scattering is computed from the sky view factor complementary functions for prescribed albedo values of the surrounding terrain, only for significant angles above the horizon. This methodology represents an improvement on current algorithms to compute terrain and radiation parameters on TINs in an efficient manner. All terrain features (e.g. slope, aspect, sky view factors and remote sheltering) can be pre-computed and stored for easy access in a subsequent ground surface or hydrologic simulation.
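
    As a hedged sketch of the normal-vector and dot-product relations described above (one facet, topocentric frame with x east, y north, z up; the beam irradiance value is an assumption, not a computed quantity):

      # Slope, aspect and direct-beam flux for a single TIN facet.
      import numpy as np

      def facet_normal(p1, p2, p3):
          n = np.cross(p2 - p1, p3 - p1)
          n = n if n[2] >= 0 else -n            # orient the normal upward
          return n / np.linalg.norm(n)

      def slope_aspect(n):
          slope = np.degrees(np.arccos(n[2]))                 # tilt from horizontal
          aspect = np.degrees(np.arctan2(n[0], n[1])) % 360   # clockwise from north
          return slope, aspect

      def direct_flux(n, sun_dir, s0=1000.0):
          """Direct-beam flux on the facet; s0 (W/m^2) is an assumed surface
          beam irradiance. Zero when the facet faces away (self-shaded)."""
          return s0 * max(0.0, np.dot(n, sun_dir))

      tri = [np.array(p, float) for p in ((0, 0, 100), (30, 0, 110), (0, 30, 95))]
      n = facet_normal(*tri)
      sun = np.array([0.3, 0.3, 0.905])         # unit sun vector (illustrative)
      print(slope_aspect(n), direct_flux(n, sun))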

  14. Ground-to-air flow visualization using Solar Calcium-K line Background-Oriented Schlieren

    NASA Astrophysics Data System (ADS)

    Hill, Michael A.; Haering, Edward A.

    2017-01-01

    The Calcium-K Eclipse Background-Oriented Schlieren experiment was performed as a proof of concept test to evaluate the effectiveness of using the solar disk as a background to perform the Background-Oriented Schlieren (BOS) method of flow visualization. A ground-based imaging system was equipped with a Calcium-K line optical etalon filter to enable the use of the chromosphere of the sun as the irregular background to be used for BOS. A US Air Force T-38 aircraft performed three supersonic runs which eclipsed the sun as viewed from the imaging system. The images were successfully post-processed using optical flow methods to qualitatively reveal the density gradients in the flow around the aircraft.

  15. Modeling the Atmosphere of Solar and Other Stars: Radiative Transfer with PHOENIX/3D

    NASA Astrophysics Data System (ADS)

    Baron, Edward

    The chemical composition of stars is an important ingredient in our understanding of the formation, structure, and evolution of both the Galaxy and the Solar System. The composition of the sun itself is an essential reference standard against which the elemental contents of other astronomical objects are compared. Recently, redetermination of the elemental abundances using three-dimensional, time-dependent hydrodynamical models of the solar atmosphere has led to a reduction in the inferred metal abundances, particularly C, N, O, and Ne. However, this reduction in metals reduces the opacity such that models of the Sun no longer agree with the observed results obtained using helioseismology. Three-dimensional (3-D) radiative transfer is an important problem in physics, astrophysics, and meteorology. Radiative transfer is extremely computationally complex and it is a natural problem that requires computation on the exascale. We intend to calculate the detailed compositional structure of the Sun and other stars at high resolution with full NLTE, treating the turbulent velocity flows in full detail in order to compare results from hydrodynamics and helioseismology, and understand the nature of the discrepancies found between the two approaches. We propose to perform 3-D high-resolution radiative transfer calculations of the Sun and other stars with the PHOENIX/3D suite, using 3-D hydrodynamic models from different groups. While NLTE radiative transfer has been treated by the groups doing hydrodynamics, they are necessarily limited in their resolution to the consideration of only a few (4-20) frequency bins, whereas we can calculate full NLTE including thousands of wavelength points, resolving the line profiles, and solving the scattering problem with extremely high angular resolution. The code has been used for the analysis of supernova spectra, stellar and planetary spectra, and for time-dependent modeling of transient objects. PHOENIX/3D runs and scales very well on Cray XC-30 and XC-40 machines (tested up to 100,800 CPU cores) and should scale up to several million cores for large simulations. Non-local problems, particularly radiation hydrodynamics problems, are at the forefront of computational astrophysics and we will share our work with the community. Our research program brings a unified modeling strategy to the results of several disparate groups and thus will provide a unifying framework with which to assess the metal abundance of the stars and the chemical evolution of the galaxy. We will bring together 3-D hydrodynamical models, detailed radiative transfer, and astronomical abundance studies. We will also provide results of interest to the atomic physics and plasma physics communities. Our work will use data from NASA telescopes including the Hubble Space Telescope and the James Webb Space Telescope. The ability to work with data from the UV to the far IR is crucial for validating our results. Our work will also extend the exascale computational capabilities, which is a national goal.

  16. A novel adaptive sun tracker for spacecraft solar panel based on hybrid unsymmetric composite laminates

    NASA Astrophysics Data System (ADS)

    Wu, Zhangming; Li, Hao

    2017-11-01

    This paper proposes a novel adaptive sun tracker constructed from hybrid unsymmetric composite laminates. The adaptive sun tracker could be applied to spacecraft solar panels to increase their energy efficiency by decreasing the inclined angle between the sunlight and the solar panel normal. The sun tracker possesses a large rotation freedom and its rotation angle depends on the laminate temperature, which is affected by the light condition in orbit. Both an analytical model and a finite element model (FEM) are developed for the sun tracker to predict its rotation angle in different light conditions. In this work, the light condition of the geosynchronous orbit on the winter solstice is considered in the numerical prediction of the temperatures of the hybrid laminates. The final inclined angle between the sunlight and the solar panel normal during a solar day is computed using the finite element model. A parametric study of the adaptive sun tracker is conducted to improve its capacity and effectiveness of sun tracking. The improved adaptive sun tracker is lightweight and has a state-of-the-art design. In addition, the adaptive sun tracker does not consume any power from the solar panel, since it has no electrical driving devices. The proposed adaptive sun tracker provides a potential alternative to the traditional sophisticated electrical driving mechanisms for spacecraft solar panels.

  17. Simulating three dimensional wave run-up over breakwaters covered by antifer units

    NASA Astrophysics Data System (ADS)

    Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader

    2014-06-01

    The paper presents a numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units, using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within the armour blocks is used to provide a more reliable approach to simulating wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for the CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three-dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble-mound breakwaters. The results showed that the placement pattern of the antifer units had a great impact on wave run-up: changing the placement pattern from regular to double pyramid reduced wave run-up by approximately 30%. Analysis was done to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer and reduced wave run-up due to inflow into the armour and stone layers.

  18. WinSCP for Windows File Transfers | High-Performance Computing | NREL

    Science.gov Websites

    WinSCP can be used to securely transfer files between your local computer running Microsoft Windows and a remote computer running Linux.

  19. Implementing Simulation Design of Experiments and Remote Execution on a High Performance Computing Cluster

    DTIC Science & Technology

    2007-09-01

    For example, an application developed in Sun's Netbeans [2007] integrated development environment (IDE) uses Swing class objects for graphical user... Netbeans Version 5.5.1 [Computer Software]. Santa Clara, CA: Sun Microsystems. Process Modeler Version 7.0 [Computer Software]. Santa Clara, CA

  20. NATURAL AND ANTHROPOGENIC FACTORS AFFECTING GLOBAL AND REGIONAL CLIMATE. A REPORT OF THE NEW ENGLAND REGIONAL ASSESSMENT GROUP FOR THE US GLOBAL CHANGE RESEARCH PROGRAM

    EPA Science Inventory

    With the advent of Earth-orbiting satellites to monitor our planet and spacecraft that study the sun, an active international joint project to monitor the Sun-Earth (Solar Terrestrial) environment has evolved. Coupled with an ever increasing computational capability, we are now a...

  1. A numerical algorithm for the explicit calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alex, Arne; Delft, Jan von; Kalus, Matthias

    2011-02-15

    We present an algorithm for the explicit numerical calculation of SU(N) and SL(N,C) Clebsch-Gordan coefficients, based on the Gelfand-Tsetlin pattern calculus. Our algorithm is well suited for numerical implementation; we include a computer code in an appendix. Our exposition presumes only familiarity with the representation theory of SU(2).
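
    For the SU(2) case the abstract presumes, a compact cross-check is the standard Racah closed form; the sketch below implements it directly (illustrative; the paper's algorithm instead works with Gelfand-Tsetlin patterns and generalizes to SU(N) and SL(N,C)).

      # SU(2) Clebsch-Gordan coefficient <j1 m1; j2 m2 | j3 m3> via the
      # Racah formula; the j's and m's may be half-integers.
      from math import factorial, sqrt

      def fact(x):
          assert abs(x - round(x)) < 1e-9   # valid inputs are integers here
          return factorial(round(x))

      def cg(j1, m1, j2, m2, j3, m3):
          if m1 + m2 != m3 or not (abs(j1 - j2) <= j3 <= j1 + j2):
              return 0.0
          pre = sqrt((2 * j3 + 1)
                     * fact(j1 + j2 - j3) * fact(j1 - j2 + j3)
                     * fact(-j1 + j2 + j3) / fact(j1 + j2 + j3 + 1))
          pre *= sqrt(fact(j1 + m1) * fact(j1 - m1) * fact(j2 + m2)
                      * fact(j2 - m2) * fact(j3 + m3) * fact(j3 - m3))
          s, k = 0.0, 0
          while True:
              d = (k, j1 + j2 - j3 - k, j1 - m1 - k, j2 + m2 - k,
                   j3 - j2 + m1 + k, j3 - j1 - m2 + k)
              if min(d[:4]) < 0:        # upper summation limit reached
                  break
              if min(d) >= 0:           # skip k below the lower limit
                  term = 1.0
                  for x in d:
                      term *= fact(x)
                  s += (-1) ** k / term
              k += 1
          return pre * s

      print(cg(0.5, 0.5, 0.5, -0.5, 1, 0))   # 1/sqrt(2) ~ 0.7071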

  2. Practical Sun Power: 5 Projects to Help Free You from Depending on Any Fuel Other Than the Sun.

    ERIC Educational Resources Information Center

    Rankins, William H., III; Wilson, David A.

    This publication describes in detail projects for using solar energy; five major projects and five mini-projects. The major projects are: (1) Parabolic reflectors, both cylindrical and spherical; (2) Solar oven; (3) Hot water heater; (4) House heating; and (5) Conversion to electricity. Mini-projects investigate: (1) Solar computers; (2) Fresnel…

  3. Modeling Studies of the MODIS Solar Diffuser Attenuation Screen and Comparison with On-Orbit Measurements

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Xiong, Xiao-Xiong; Guenther, Bruce; Barnes, William; VanSalomonson, Vincent V.

    2004-01-01

    The MODIS instrument relies on solar calibration to achieve the required radiometric accuracy. This solar calibration occurs as the TERRA spacecraft comes up over the North Pole. The earth underneath the spacecraft is still dark for approximately one minute and the sun is just rising over the earth's north polar regions. During this time the sun moves through about 4 degrees, the scan mirror rotates about 19 times, and about 50 frames (exposures) are made of the white solar diffuser. For some of MODIS's bands the brightness of the diffuser is reduced, to prevent detector saturation, by means of a pinhole screen, which produces approximately 600 pinhole images of the sun within the field of view of any one detector. Previous attempts at creating a detailed radiometric model of this calibration scenario produced intensity variations on the focal planes with insufficient detail to be useful. The current computational approach produces results that take into account the motion of the sun and the scan mirror, with variations that strongly resemble the observed focal-plane intensity variations. The computational approach, results, and a comparison with actual observational data are presented.

  4. Sun damage in ultraviolet photographs correlates with phenotypic melanoma risk factors in 12-year-old children.

    PubMed

    Gamble, Ryan G; Asdigian, Nancy L; Aalborg, Jenny; Gonzalez, Victoria; Box, Neil F; Huff, Laura S; Barón, Anna E; Morelli, Joseph G; Mokrohisky, Stefan T; Crane, Lori A; Dellavalle, Robert P

    2012-10-01

    Ultraviolet (UV) photography has been used to motivate sun safety in behavioral interventions. The relationship between sun damage shown in UV photographs and melanoma risk has not been systematically investigated. To examine the relationship between severity of sun damage in UV photographs and phenotypic melanoma risk factors in children. UV, standard visible and cross-polarized photographs were recorded for 585 children. Computer software quantified sun damage. Full-body nevus counts, skin color by colorimetry, facial freckling, hair and eye color were collected in skin examinations. Demographic data were collected in telephone interviews of parents. Among 12-year-old children, sun damage shown in UV photographs correlated with phenotypic melanoma risk factors. Sun damage was greatest for children who were non-Hispanic white and those who had red hair, blue eyes, increased facial freckling, light skin and greater number of nevi (all P values < .001). Results were similar for standard visible and cross-polarized photographs. Freckling was the strongest predictor of sun damage in visible and UV photographs. All other phenotypic melanoma risk factors were also predictors for the UV photographs. Differences in software algorithms used to score the photographs could produce different results. UV photographs portray more sun damage in children with higher risk for melanoma based on phenotype. Therefore sun protection interventions targeting those with greater sun damage on UV photographs will target those at higher melanoma risk. This study establishes reference ranges dermatologists can use to assess sun damage in their pediatric patients. Copyright © 2011 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.

  5. A Randomized Controlled Trial of a Mobile Medical App for Kidney Transplant Recipients: Effect on Use of Sun Protection.

    PubMed

    Robinson, June K; Friedewald, John J; Desai, Amishi; Gordon, Elisa J

    2016-01-01

    Perception of skin cancer risk, belief that sun protection prevents skin cancer, and having sun protection choices enhance sun protection behaviors by kidney transplant recipients, who are at greater risk of developing skin cancer than the general population. A randomized controlled trial used stratified recruitment of non-Hispanic White, non-Hispanic Black, and Hispanic/Latino kidney transplant recipients, who received a transplant 2-24 months prior to the study. The same culturally sensitive SunProtect™ program was delivered to all recipients with tablet personal computers in two urban ambulatory offices. Text message reminders were provided at two-week intervals. Self-reported surveys and skin pigmentation measured prior to the intervention and six weeks later were analyzed. Among 552 eligible participants, 170 participated (62 non-Hispanic Whites, 60 Blacks, and 48 Hispanics). Among participants receiving the intervention with skin that burns after sun exposure and becomes tan or becomes irritated and gets darker, there was a statistically significant increase in self-reported knowledge, recognition of personal skin cancer risk, confidence in sun protection preventing skin cancer, and sun protection behaviors compared to those receiving usual education (p<0.05). At the six-week follow-up, participants in the intervention group with skin that burns or becomes irritated had significantly less darkening of the sun-exposed forearm than control participants (p<0.05). Providing sun protection education with SunProtect™ in the spring, with reminders during the summer, facilitated adoption of sun protection behaviors among kidney transplant recipients with skin that burns or becomes irritated.

  6. A Randomized Controlled Trial of a Mobile Medical App for Kidney Transplant Recipients: Effect on Use of Sun Protection

    PubMed Central

    Robinson, June K.; Friedewald, John J.; Desai, Amishi; Gordon, Elisa J.

    2016-01-01

    Background Perception of skin cancer risk, belief that sun protection prevents skin cancer, and having sun protection choices enhance sun protection behaviors by kidney transplant recipients, who are at greater risk of developing skin cancer than the general population. Methods A randomized controlled trial used stratified recruitment of non-Hispanic white, non-Hispanic black, and Hispanic/Latino kidney transplant recipients, who received a transplant 2 to 24 months before the study. The same culturally sensitive SunProtect program was delivered to all recipients with tablet personal computers in 2 urban ambulatory offices. Text message reminders were provided at 2-week intervals. Self-reported surveys and skin pigmentation measured before the intervention and 6 weeks later were analyzed. Results Among 552 eligible participants, 170 participated (62 non-Hispanic whites, 60 blacks, and 48 Hispanics). Among participants receiving the intervention with skin that burns after sun exposure and becomes tan or becomes irritated and gets darker, there was a statistically significant increase in self-reported knowledge, recognition of personal skin cancer risk, confidence in sun protection preventing skin cancer, and sun protection behaviors in participants compared with those receiving usual education (P < 0.05). At the 6-week follow-up, participants in the intervention group with skin that burns or becomes irritated had significantly less darkening of the sun-exposed forearm than control participants (P < 0.05). Conclusions Providing sun protection education with SunProtect in the spring with reminders during the summer facilitated adoption of sun protection behaviors among kidney transplant recipients with skin that burns or becomes irritated. PMID:26900599

  7. RAPPORT: running scientific high-performance computing applications on the cloud.

    PubMed

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  8. GEANT4 distributed computing for compact clusters

    NASA Astrophysics Data System (ADS)

    Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.

    2014-11-01

    A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User-designed 'work tickets' are distributed to clients using a client-server work flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 on large discrete data sets, such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions, or simply speeding the throughput of a single model.
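    The g4DistributedRunManager interface itself is not given in the abstract, so the following is only a rough Python sketch of the client-server work-ticket pattern described above: each ticket parameterizes one run, and a pool of workers stands in for the cluster nodes. All names and ticket fields here are invented for illustration.

      # Minimal sketch of a "work ticket" distribution scheme (illustrative only).
      from multiprocessing import Pool

      def run_simulation(ticket):
          """Stand-in for one GEANT4 run configured by a work ticket."""
          angle, n_events, seed = ticket["angle"], ticket["events"], ticket["seed"]
          # A real client would configure and execute a Geant4 run here.
          return {"angle": angle, "events": n_events, "status": "done"}

      if __name__ == "__main__":
          # One ticket per projection angle, e.g. for a computed-tomography sweep.
          tickets = [{"angle": a, "events": 100000, "seed": 1234 + a}
                     for a in range(0, 180, 6)]
          with Pool(processes=8) as pool:   # the pool stands in for cluster nodes
              results = pool.map(run_simulation, tickets)
          print(len(results), "runs completed")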

  9. The 3D Visualization of Slope Terrain in Sun Moon Lake.

    NASA Astrophysics Data System (ADS)

    Deng, F.; Gwo-shyn, S.; Pei-Kun, L.

    2015-12-01

    By repeating topographical surveys of a reservoir, the siltation volume accumulated between two measurements can be calculated. Providing precise siltation values has become a basic requirement, especially now that differential GPS (DGPS) positioning and multi-beam echo sounders are widely used; however, two problems make siltation surveys in reservoirs challenging. Both relate to the difficulty of maintaining survey accuracy on the side slopes around the reservoir boundary. First, the efficiency and accuracy of horizontal positioning with DGPS may decrease near the bank because of satellite blocking, especially in canyon-type reservoirs. Second, the echo sounder can only measure areas covered by water, so data for the side-slope area above the water surface are missing, which reduces accuracy and can seriously affect the calculation of reservoir water volume. This research aims to maintain terrain accuracy when measuring reservoir side slopes, with the Sun Moon Lake Reservoir in central Taiwan chosen as the experimental site. Sun Moon Lake is the most popular tourist destination in Taiwan and the most important reservoir for electricity generation; it also hosts the largest pumped-storage hydroelectric facility in Asia. The lake is self-contained, with its water supplied through two underground tunnels, so a depositional fan forms as mud settles out of the silty water of the Cho-Shui Shi. Three kinds of survey were conducted in this experiment. First, close-range photogrammetry was carried out around the border of Sun Moon Lake, taking shots along the bank with a camera linked to a computer running the software Pix4D; the result provides DTM data for the side slope above the water level. Second, bathymetric data were obtained by sweeping the side slope with a multi-beam sounder below the water surface. Finally, side-scan sonar imagery was merged with contour lines produced from the underwater topographic DTM data. Combining these data, our purpose is to create different 3D visualizations to check whether the side-slope DTM surveys are well controlled in quality.

  10. Science Activities in Energy: Solar Energy II.

    ERIC Educational Resources Information Center

    Oak Ridge Associated Universities, TN.

    Included in this science activities energy package are 14 activities related to solar energy for secondary students. Each activity is outlined on a single card and is introduced by a question such as: (1) how much solar heat comes from the sun? or (2) how many times do you have to run water through a flat-plate collector to get a 10 degree rise in…

  11. Computational Methods for Feedback Controllers for Aerodynamics Flow Applications

    DTIC Science & Technology

    2007-08-15

    Iteration #, and y-translation by: >> Fy=[unf(:,8);runA(:,8);runB(:,8);runC(:,8);runD(:,8);runE(:,8)]; >> Oy=[unf(:,23);runA(:,23);runB(:,23);runC(:,23);runD(:,23);runE(:,23)]; >> Iter=[unf(:,1);runA(:,1);runB(:,1);runC(:,1);runD(:,1);runE(:,1)]; >> plot(Fy) (Cobalt version 4.0)

  12. Proposal for grid computing for nuclear applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.

    2014-02-12

    The use of computer clusters for the computational sciences, including computational physics, is vital as it provides the computing power needed to crunch large amounts of data at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to run the application and speed up the computing process.

  13. Reconciling species-level vs plastic responses of evergreen leaf structure to light gradients: shade leaves punch above their weight.

    PubMed

    Lusk, Christopher H; Onoda, Yusuke; Kooyman, Robert; Gutiérrez-Girón, Alba

    2010-04-01

    *When grown in a common light environment, the leaves of shade-tolerant evergreen trees have a larger leaf mass per unit area (LMA) than their light-demanding counterparts, associated with differences in lifespan. Yet plastic responses of LMA run counter to this pattern: shade leaves have smaller LMA than sun leaves, despite often living longer. *We measured LMA and cell wall content, and conducted punch and shear tests, on sun and shade leaves of 13 rainforest evergreens of differing shade tolerance, in order to understand adaptation vs plastic responses of leaf structure and biomechanics to shade. *Species shade tolerance and leaf mechanical properties correlated better with cell wall mass per unit area than with LMA. Growth light environment had less effect on leaf mechanics than on LMA: shade leaves had, on average, 40% lower LMA than sun leaves, but differences in work-to-shear, and especially force-to-punch, were smaller. This was associated with a slightly larger cell wall fraction in shade leaves. *The persistence of shade leaves might reflect unattractiveness to herbivores because they yield smaller benefits (cell contents per area) per unit fracture force than sun leaves. In forest trees, cell wall fraction and force-to-punch are more robust correlates of species light requirements than LMA.

  14. Tracing the journey of the Sun and the Solar siblings through the Milky Way

    NASA Astrophysics Data System (ADS)

    Martínez-Barbosa, Carmen Adriana

    2016-04-01

    This thesis is focused on studying the motion of the Sun and the Solar siblings through the Galaxy. The Solar siblings are stars that were born with the Sun in the same molecular cloud 4.6 Gyr ago. In the first part of the thesis, we present an efficient method to calculate the evolution of small systems embedded in larger systems. Generalizations of this method are used to calculate the motion of the Sun and the Solar siblings in an analytical potential containing a central bar and spiral arms. By integrating the orbit of the Sun backwards in time, we determine its birth radius and the amount of radial migration experienced by our star. The birth radius of the Sun is used to investigate the evolution and disruption of the Sun's birth cluster. Depending on the Galaxy model parameters, the present-day phase-space distribution of the Solar siblings might be quite different. We used these data to predict the regions in the Galaxy where it will be more likely to search for Solar siblings in the future. Finally, we compute the stellar encounters experienced by the Sun along its orbit and their role in the stability of the outer Solar System.

  15. What encourages sun protection among outdoor workers from four industries?

    PubMed

    Janda, Monika; Stoneham, Melissa; Youl, Philippa; Crane, Phil; Sendall, Marguerite C; Tenkate, Thomas; Kimlin, Michael

    2014-01-01

    We aimed to identify current practices of sun protection, and factors associated with effective use, in four outdoor worker industries in Queensland, Australia. Workplaces in four industries with a high proportion of outdoor workers (building/construction, rural/farming, local government, and public sector industries) were identified using an online telephone directory, screened for eligibility, and invited to participate via mail (n=15, recruitment rate 37%). A convenience sample of workers was recruited within each workplace (n=162). Workplaces' sun protective policies and procedures were identified using interviews and policy analysis with workplace representatives, and discussion groups and computer-assisted telephone interviews with workers. Personal characteristics and sun protection knowledge, attitudes, and behaviors were collated and analyzed. Just over half the workplaces had an existing policy which referred to sun protection (58%), and most provided at least some personal protective equipment (PPE), but few scheduled work outside peak sun hours (43%) or provided skin checks (21%). Several worker and workplace characteristics were associated with greater sun protection behavior among workers, including having received education on the use of PPE (p<0.001), being concerned about being in the sun (p=0.002), and working in a smaller workplace (p=0.035). Uptake of sun protection by outdoor workers is affected by a complex interplay of both workplace and personal factors, and there is a need for effective strategies targeting both the workplace environment and workers' knowledge, attitudes, and behaviors to further decrease harmful sun exposure.

  16. Autonomous navigation accuracy using simulated horizon sensor and sun sensor observations

    NASA Technical Reports Server (NTRS)

    Pease, G. E.; Hendrickson, H. T.

    1980-01-01

    A relatively simple autonomous system which would use horizon crossing indicators, a sun sensor, a quartz oscillator, and a microprogrammed computer is discussed. The sensor combination is required only to effectively measure the angle between the centers of the Earth and the Sun. Simulations for a particular orbit indicate that 2 km r.m.s. orbit determination uncertainties may be expected from a system with 0.06 deg measurement uncertainty. A key finding is that knowledge of the satellite orbit plane orientation can be maintained to this level because of the annual motion of the Sun and the predictable effects of Earth oblateness. The basic system described can be updated periodically by transits of the Moon through the IR horizon crossing indicator fields of view.

  17. Software Reviews.

    ERIC Educational Resources Information Center

    McGrath, Diane, Ed.

    1989-01-01

    Reviewed are two computer software programs for Apple II computers on weather for upper elementary and middle school grades. "Weather" introduces the major factors (temperature, humidity, wind, and air pressure) affecting weather. "How Weather Works" uses simulation and auto-tutorial formats on sun, wind, fronts, clouds, and…

  18. LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1994-01-01

    LSENS has been developed for solving complex, homogeneous, gas-phase, chemical kinetics problems. The motivation for the development of this program is the continuing interest in developing detailed chemical reaction mechanisms for complex reactions such as the combustion of fuels and pollutant formation and destruction. A reaction mechanism is the set of all elementary chemical reactions that are required to describe the process of interest. Mathematical descriptions of chemical kinetics problems constitute sets of coupled, nonlinear, first-order ordinary differential equations (ODEs). The number of ODEs can be very large because of the numerous chemical species involved in the reaction mechanism. Further complicating the situation are the many simultaneous reactions needed to describe the chemical kinetics of practical fuels. For example, the mechanism describing the oxidation of the simplest hydrocarbon fuel, methane, involves over 25 species participating in nearly 100 elementary reaction steps. Validating a chemical reaction mechanism requires repetitive solutions of the governing ODEs for a variety of reaction conditions. Analytical solutions to the systems of ODEs describing chemistry are not possible, except for the simplest cases, which are of little or no practical value. Consequently, there is a need for fast and reliable numerical solution techniques for chemical kinetics problems. In addition to solving the ODEs describing chemical kinetics, it is often necessary to know what effects variations in either initial condition values or chemical reaction mechanism parameters have on the solution. Such a need arises in the development of reaction mechanisms from experimental data. The rate coefficients are often not known with great precision, and in general the experimental data are not sufficiently detailed to accurately estimate the rate coefficient parameters. The development of a reaction mechanism is facilitated by a systematic sensitivity analysis which provides the relationships between the predictions of a kinetics model and the input parameters of the problem. LSENS provides for efficient and accurate chemical kinetics computations and includes sensitivity analysis for a variety of problems, including nonisothermal conditions. LSENS replaces the previous NASA general chemical kinetics codes GCKP and GCKP84. LSENS is designed for flexibility, convenience and computational efficiency. A variety of chemical reaction models can be considered. The models include static system, steady one-dimensional inviscid flow, reaction behind an incident shock wave including boundary layer correction, and the perfectly stirred (highly backmixed) reactor. In addition, computations of equilibrium properties can be performed for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static problems LSENS computes sensitivity coefficients with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of each chemical reaction. To integrate the ODEs describing chemical kinetics problems, LSENS uses the packaged code LSODE, the Livermore Solver for Ordinary Differential Equations, because it has been shown to be the most efficient and accurate code for solving such problems. The sensitivity analysis computations use the decoupled direct method, as implemented by Dunker and modified by Radhakrishnan.
This method has shown greater efficiency and stability with equal or better accuracy than other methods of sensitivity analysis. LSENS is written in FORTRAN 77 with the exception of the NAMELIST extensions used for input. While this makes the code fairly machine independent, execution times on IBM PC compatibles would be unacceptable to most users. LSENS has been successfully implemented on a Sun4 running SunOS and a DEC VAX running VMS. With minor modifications, it should also be easily implemented on other platforms with FORTRAN compilers which support NAMELIST input. LSENS required 4Mb of RAM under SunOS 4.1.1 and 3.4Mb of RAM under VMS 5.5.1. The standard distribution medium for LSENS is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. It is also available on a 1600 BPI 9-track magnetic tape or a TK50 tape cartridge in DEC VAX BACKUP format. Alternate distribution media and formats are available upon request. LSENS was developed in 1992.
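    As a toy illustration of the kind of stiff kinetics problem LSENS addresses (not LSENS or LSODE themselves), the following Python sketch integrates two consecutive reactions A -> B -> C with widely separated, made-up rate coefficients, using SciPy's LSODA solver:

      # Stiff kinetics toy problem: A -> B -> C with k1 >> k2 (illustrative values).
      from scipy.integrate import solve_ivp

      k1, k2 = 1.0e4, 1.0   # rate coefficients, 1/s (made up for illustration)

      def rhs(t, y):
          a, b, c = y
          return [-k1 * a, k1 * a - k2 * b, k2 * b]

      sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0], method="LSODA",
                      rtol=1e-8, atol=1e-12)
      print("final concentrations:", sol.y[:, -1])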

  19. Attitude estimation from magnetometer and earth-albedo-corrected coarse sun sensor measurements

    NASA Astrophysics Data System (ADS)

    Appel, Pontus

    2005-01-01

    For full three-axis attitude determination the magnetic field vector and the Sun vector can be used. A Coarse Sun Sensor consisting of six solar cells, one placed on each of the six outer surfaces of the satellite, is used for Sun vector determination. This robust and low-cost setup is sensitive to surrounding light sources, as it sees the whole sky. To compensate for the largest error source, the Earth, an albedo model is developed. The total albedo light vector has contributions from the part of the Earth's surface that is illuminated by the Sun and visible from the satellite. The albedo light changes depending on the reflectivity of the Earth's surface, the satellite's position, and the Sun's position. This cannot be calculated analytically, and hence a numerical model is developed. For on-board computer use, the Earth albedo model, which consists of data tables, is converted into polynomial functions in order to save memory. For an absolute worst case the attitude determination error can be held below 2∘. In a nominal case it is better than 1∘.
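    As an illustration of how such a six-cell sensor yields a Sun vector, the sketch below differences opposing cosine-response cells on each body axis and normalizes the result; ideal cells and an already-subtracted albedo contribution are assumed, and the function name is invented:

      # Sun vector from six ideal cosine-response cells, one per face (illustrative).
      import numpy as np

      def sun_vector(currents, i0=1.0):
          """currents: dict face -> current; i0: current at normal incidence."""
          s = np.array([currents["+x"] - currents["-x"],
                        currents["+y"] - currents["-y"],
                        currents["+z"] - currents["-z"]]) / i0
          norm = np.linalg.norm(s)
          return s / norm if norm > 0 else s

      # Sun along the body x-axis illuminates only the +x cell:
      print(sun_vector({"+x": 1.0, "-x": 0.0, "+y": 0.0,
                        "-y": 0.0, "+z": 0.0, "-z": 0.0}))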

  20. TX Cnc AS A MEMBER OF THE PRAESEPE OPEN CLUSTER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, X. B.; Deng, L.; Lu, P.

    2009-08-15

    We present B-, V-, and I-band CCD photometry of the W UMa-type binary system TX Cnc, which is a member star of the Praesepe open cluster. Based on the observations, a new ephemeris and a revised photometric solution of the binary system were derived. Combined with the results of the radial velocity solution contributed by Pribulla et al., the absolute parameters of the system were determined. The mass, radius, and luminosity of the primary component are derived to be 1.35 ± 0.02 M☉, 1.27 ± 0.04 R☉, and 2.13 ± 0.11 L☉. Those for the secondary star are computed as 0.61 ± 0.01 M☉, 0.89 ± 0.03 R☉, and 1.26 ± 0.07 L☉, respectively. Based on these results, a distance modulus of (m - M)_V = 6.34 ± 0.05 is determined for the star. It confirms the membership of TX Cnc in the Praesepe open cluster. The evolutionary status and the physical nature of the binary system are discussed in comparison with the theoretical model.

  1. IRISpy: Analyzing IRIS Data in Python

    NASA Astrophysics Data System (ADS)

    Ryan, Daniel; Christe, Steven; Mumford, Stuart; Baruah, Ankit; Timothy, Shelbe; Pereira, Tiago; De Pontieu, Bart

    2017-08-01

    IRISpy is a new community-developed open-source software library for analyzing IRIS level 2 data. It is written in Python, a free, cross-platform, general-purpose, high-level programming language. A wide array of scientific computing software packages have already been developed in Python, from numerical computation (NumPy, SciPy, etc.), to visualization and plotting (matplotlib), to solar-physics-specific data analysis (SunPy). IRISpy is currently under development as a SunPy-affiliated package, which means it depends on the SunPy library, follows similar standards and conventions, and is developed with the support of the SunPy development team. IRISpy has two primary data objects, one for analyzing slit-jaw imager data and another for analyzing spectrograph data. Both objects contain basic slicing, indexing, plotting, and animating functionality to allow users to easily inspect, reduce and analyze the data. As part of this functionality the objects can output SunPy Maps, TimeSeries, Spectra, etc. of relevant data slices for easier inspection and analysis. Work is also ongoing to provide additional data analysis functionality including derivation of systematic measurement errors (e.g. readout noise), exposure time correction, residual wavelength calibration, radiometric calibration, and fine-scale pointing corrections. IRISpy's code base is publicly available through github.com and can be contributed to by anyone. In this poster we demonstrate IRISpy's functionality and future goals of the project. We also encourage interested users to become involved in further developing IRISpy.

  2. Form factors for dark matter capture by the Sun in effective theories

    NASA Astrophysics Data System (ADS)

    Catena, Riccardo; Schwabe, Bodo

    2015-04-01

    In the effective theory of isoscalar and isovector dark matter-nucleon interactions mediated by a heavy spin-1 or spin-0 particle, 8 isotope-dependent nuclear response functions can be generated in the dark matter scattering by nuclei. We compute the 8 nuclear response functions for the 16 most abundant elements in the Sun, i.e. H, 3He, 4He, 12C, 14N, 16O, 20Ne, 23Na, 24Mg, 27Al, 28Si, 32S, 40Ar, 40Ca, 56Fe, and 59Ni, through numerical shell model calculations. We use our response functions to compute the rate of dark matter capture by the Sun for all isoscalar and isovector dark matter-nucleon effective interactions, including several operators previously considered for dark matter direct detection only. We study in detail the dependence of the capture rate on specific dark matter-nucleon interaction operators, and on the different elements in the Sun. We find that a so far neglected momentum dependent dark matter coupling to the nuclear vector charge gives a larger contribution to the capture rate than the constant spin-dependent interaction commonly included in dark matter searches at neutrino telescopes. Our investigation lays the foundations for model independent analyses of dark matter induced neutrino signals from the Sun. The nuclear response functions obtained in this study are listed in analytic form in an appendix, ready to be used in other projects.

  3. High aerosol load over the Pearl River Delta, China, observed with Raman lidar and Sun photometer

    NASA Astrophysics Data System (ADS)

    Ansmann, Albert; Engelmann, Ronny; Althausen, Dietrich; Wandinger, Ulla; Hu, Min; Zhang, Yuanghang; He, Qianshan

    2005-07-01

    Height-resolved data of the particle optical properties, the vertical extent of the haze layer, aerosol stratification, and the diurnal cycle of vertical mixing over the Pearl River Delta in southern China are presented. The observations were performed with Raman lidar and Sun photometer at Xinken (22.6°N, 113.6°E) near the south coast of China throughout October 2004. The lidar ran almost full time on 21 days. Sun photometer data were taken on 23 days, from about 0800 to 1700 local time. The particle optical depth (at about 533-nm wavelength) ranged from 0.3 to 1.7 and was, on average, 0.92. Ångström exponents varied from 0.65 to 1.35 (for wavelengths 380 to 502 nm) and from 0.75 to 1.6 (for 502 to 1044 nm); mean values were 0.97 and 1.22. The haze-layer mean extinction-to-backscatter ratio ranged from 35 to 59 sr and was, on average, 46.7 sr. The top of the haze layer reached to heights of 1.5-3 km in most cases.
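    The Ångström exponents quoted above follow from optical depths at two wavelengths via alpha = -ln(tau1/tau2) / ln(lambda1/lambda2). A small helper, with illustrative inputs rather than values from the Xinken data set:

      # Angstrom exponent from two spectral optical depths.
      import math

      def angstrom_exponent(tau1, tau2, lam1_nm, lam2_nm):
          return -math.log(tau1 / tau2) / math.log(lam1_nm / lam2_nm)

      # e.g. tau = 1.05 at 380 nm and 0.80 at 502 nm gives alpha of about 0.98
      print(angstrom_exponent(1.05, 0.80, 380.0, 502.0))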

  4. Extending the Fermi-LAT Data Processing Pipeline to the Grid

    NASA Astrophysics Data System (ADS)

    Zimmer, S.; Arrabito, L.; Glanzman, T.; Johnson, T.; Lavalley, C.; Tsaregorodtsev, A.

    2012-12-01

    The Data Handling Pipeline (“Pipeline”) has been developed for the Fermi Gamma-Ray Space Telescope (Fermi) Large Area Telescope (LAT), which launched in June 2008. Since then it has been in use to completely automate the production of data quality monitoring quantities, the reconstruction and routine analysis of all data received from the satellite, and the delivery of science products to the collaboration and the Fermi Science Support Center. Aside from the reconstruction of raw data from the satellite (Level 1), data reprocessing and various event-level analyses are also reasonably heavy loads on the pipeline and computing resources. These other loads, unlike Level 1, can run continuously for weeks or months at a time. In addition it receives heavy use in performing production Monte Carlo tasks. In daily use it receives a new data download every 3 hours and launches about 2000 jobs to process each download, typically completing the processing of the data before the next download arrives. The need for manual intervention has been reduced to less than 0.01% of submitted jobs. The Pipeline software is written almost entirely in Java and comprises several modules. It provides web services that allow online monitoring and charts summarizing workflow aspects and performance information. The server supports communication with several batch systems such as LSF and BQS and, recently, also Sun Grid Engine and Condor. This is accomplished through dedicated job control services that for Fermi are running at SLAC and at the other computing site involved in this large-scale framework, the Lyon computing center of IN2P3. Although different in task logic, a separate interface to the Dirac system is being evaluated in order to communicate with EGI sites and utilize Grid resources, using dedicated Grid-optimized systems rather than developing our own. More recently the Pipeline and its associated data catalog have been generalized for use by other experiments, and are currently being used by the Enriched Xenon Observatory (EXO) and Cryogenic Dark Matter Search (CDMS) experiments, as well as for Monte Carlo simulations for the future Cherenkov Telescope Array (CTA).

  5. Using video-oriented instructions to speed up sequence comparison.

    PubMed

    Wozniak, A

    1997-04-01

    This document presents an implementation of the well-known Smith-Waterman algorithm for comparison of protein and nucleic acid sequences, using specialized video instructions. These instructions, SIMD-like in their design, make possible parallelization of the algorithm at the instruction level. Benchmarks on an ULTRA SPARC running at 167 MHz show a speed-up factor of two compared to the same algorithm implemented with integer instructions on the same machine. Performance reaches over 18 million matrix cells per second on a single processor, giving, to our knowledge, the fastest implementation of the Smith-Waterman algorithm on a workstation. The accelerated procedure was introduced in LASSAP, a LArge Scale Sequence compArison Package developed at INRIA, which handles parallelism at a higher level. On a SUN Enterprise 6000 server with 12 processors, a speed of nearly 200 million matrix cells per second has been obtained. A sequence of length 300 amino acids is scanned against SWISSPROT R33 (18,531,385 residues) in 29 s. This procedure is not restricted to databank scanning. It applies to all cases handled by LASSAP (intra- and inter-bank comparisons, Z-score computation, etc.).
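    For reference, the local-alignment recurrence that the paper accelerates can be written in a few lines of plain Python; the +2/-1/-2 match/mismatch/gap scores below are illustrative choices, not the paper's parameters:

      # Smith-Waterman local alignment score with a linear gap penalty.
      def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
          prev = [0] * (len(b) + 1)
          best = 0
          for i in range(1, len(a) + 1):
              curr = [0] * (len(b) + 1)
              for j in range(1, len(b) + 1):
                  sub = match if a[i - 1] == b[j - 1] else mismatch
                  curr[j] = max(0,
                                prev[j - 1] + sub,   # diagonal: (mis)match
                                prev[j] + gap,       # gap in sequence b
                                curr[j - 1] + gap)   # gap in sequence a
                  best = max(best, curr[j])
              prev = curr
          return best

      print(smith_waterman_score("HEAGAWGHEE", "PAWHEAE"))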

  6. MATISSE-v1.5 and MATISSE-v2.0: new developments and comparison with MIRAMER measurements

    NASA Astrophysics Data System (ADS)

    Simoneau, Pierre; Caillault, Karine; Fauqueux, Sandrine; Huet, Thierry; Labarre, Luc; Malherbe, Claire; Rosier, Bernard

    2009-05-01

    MATISSE is a background scene generator developed for the computation of natural background spectral radiance images and useful atmospheric radiative quantities (radiance and transmission along a line of sight, local illumination, solar irradiance ...). The spectral bandwidth ranges from 0.4 to 14 μm. Natural backgrounds include the atmosphere (taking into account spatial variability), low- and high-altitude clouds, sea and land. The current version MATISSE-v1.5 can be run on SUN and IBM workstations as well as on PCs under Windows and Linux. A graphical user interface developed in Java is also provided. MATISSE-v2.0 retains all MATISSE-v1.5 functionality and includes a new sea surface radiance model depending on wind speed, wind direction and the fetch value. The release of this new version is planned for April 2009. This paper gives a description of MATISSE-v1.5 and MATISSE-v2.0 and shows preliminary comparisons between generated images and images measured during the MIRAMER campaign, which was held in May 2008 in the Mediterranean Sea.

  7. Simulation of LHC events on a million threads

    NASA Astrophysics Data System (ADS)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.

    2015-12-01

    Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.

  8. JPRS Report, China

    DTIC Science & Technology

    1988-05-19

    of the young members so that they can take up leading posts at various levels in the association. These qualities include the abilities to run the...by the young." Finally, Sun Qimeng said: "To meet the new situation, we must further clarify the guiding principle for organizing the members...tenacious struggle against aggression. In his "Self-Accounts," written after he was taken prisoner following the fall of Tianjing [referring to

  9. Evaluation of an oil-debris monitoring device for use in helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; Blanchette, Donald M.; Biron, Gilles

    1992-01-01

    Experimental tests were performed on an OH-58A helicopter main-rotor transmission to evaluate an oil-debris monitoring device (ODMD). The tests were performed in the NASA 500-hp Helicopter Transmission Test Stand. Five endurance tests were run as part of a U.S. Navy/NASA/Army advanced lubricants program. The tests were run at 100 percent design speed, 117-percent design torque, and 121 C (250 F) oil inlet temperature. Each test lasted between 29 and 122 hr. The oils that were used conformed to MIL-L-23699 and DOD-L-85734 specifications. One test produced a massive sun-gear fatigue failure; another test produced a small spall on one sun-gear tooth; and a third test produced a catastrophic planet-bearing cage failure. The ODMD results were compared with oil spectroscopy results. The capability of the ODMD to detect transmission component failures was not demonstrated. Two of the five tests produced large amounts of debris. For these two tests, two separate ODMD sensors failed, possibly because of prolonged exposure to relatively high oil temperatures. One test produced a small amount of debris and was not detected by the ODMD or by oil spectroscopy. In general, the ODMD results matched the oil spectroscopy results. The ODMD results were extremely sensitive to oil temperature and flow rate.

  10. Running Jobs on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    Documentation for running jobs on the Peregrine high-performance computing (HPC) system, covering how to run different types of jobs, batch job scheduling policies (queue names, limits, etc.), how to request different node types, and sample batch scripts.

  11. ms2: A molecular simulation tool for thermodynamic properties

    NASA Astrophysics Data System (ADS)

    Deublein, Stephan; Eckl, Bernhard; Stoll, Jürgen; Lishchuk, Sergey V.; Guevara-Carrion, Gabriela; Glass, Colin W.; Merker, Thorsten; Bernreuther, Martin; Hasse, Hans; Vrabec, Jadran

    2011-11-01

    This work presents the molecular simulation program ms2 that is designed for the calculation of thermodynamic properties of bulk fluids in equilibrium consisting of small electro-neutral molecules. ms2 features the two main molecular simulation techniques, molecular dynamics (MD) and Monte-Carlo. It supports the calculation of vapor-liquid equilibria of pure fluids and multi-component mixtures described by rigid molecular models on the basis of the grand equilibrium method. Furthermore, it is capable of sampling various classical ensembles and yields numerous thermodynamic properties. To evaluate the chemical potential, Widom's test molecule method and gradual insertion are implemented. Transport properties are determined by equilibrium MD simulations following the Green-Kubo formalism. ms2 is designed to meet the requirements of academia and industry, particularly achieving short response times and straightforward handling. It is written in Fortran90 and optimized for a fast execution on a broad range of computer architectures, spanning from single processor PCs over PC-clusters and vector computers to high-end parallel machines. The standard Message Passing Interface (MPI) is used for parallelization and ms2 is therefore easily portable to different computing platforms. Feature tools facilitate the interaction with the code and the interpretation of input and output files. The accuracy and reliability of ms2 has been shown for a large variety of fluids in preceding work. Program summary: Program title: ms2 Catalogue identifier: AEJF_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Special Licence supplied by the authors No. of lines in distributed program, including test data, etc.: 82 794 No. of bytes in distributed program, including test data, etc.: 793 705 Distribution format: tar.gz Programming language: Fortran90 Computer: The simulation tool ms2 is usable on a wide variety of platforms, from single processor machines over PC-clusters and vector computers to vector-parallel architectures. (Tested with Fortran compilers: gfortran, Intel, PathScale, Portland Group and Sun Studio.) Operating system: Unix/Linux, Windows Has the code been vectorized or parallelized?: Yes, using the Message Passing Interface (MPI) protocol. Scalability: Excellent scalability up to 16 processors for molecular dynamics and >512 processors for Monte-Carlo simulations. RAM: ms2 runs on single processors with 512 MB RAM. The memory demand rises with increasing number of processors used per node and increasing number of molecules. Classification: 7.7, 7.9, 12 External routines: Message Passing Interface (MPI) Nature of problem: Calculation of application oriented thermodynamic properties for rigid electro-neutral molecules: vapor-liquid equilibria, thermal and caloric data as well as transport properties of pure fluids and multi-component mixtures. Solution method: Molecular dynamics, Monte-Carlo, various classical ensembles, grand equilibrium method, Green-Kubo formalism. Restrictions: No. The system size is user-defined. Typical problems addressed by ms2 can be solved by simulating systems containing typically 2000 molecules or less. Unusual features: Feature tools are available for creating input files, analyzing simulation results and visualizing molecular trajectories. Additional comments: Sample makefiles for multiple operation platforms are provided.
Documentation is provided with the installation package and is available at http://www.ms-2.de. Running time: The running time of ms2 depends on the problem set, the system size and the number of processes used in the simulation. Running four processes on a "Nehalem" processor, simulations calculating VLE data take between two and twelve hours, calculating transport properties between six and 24 hours.
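    As a sketch of the Green-Kubo route to transport properties mentioned above (not ms2 code), the fragment below estimates a self-diffusion coefficient from the velocity autocorrelation function, D = (1/3) * integral of <v(0) . v(t)> dt, using a synthetic trajectory with arbitrary units:

      # Green-Kubo self-diffusion estimate from a (placeholder) velocity trajectory.
      import numpy as np

      def velocity_autocorrelation(v, max_lag):
          """v: (n_steps, 3) velocities of one particle; returns VACF per lag."""
          n = len(v)
          return np.array([np.mean(np.sum(v[:n - lag] * v[lag:], axis=1))
                           for lag in range(max_lag)])

      rng = np.random.default_rng(0)
      v = rng.normal(size=(10000, 3))   # stand-in for real MD output
      dt = 0.002                        # time step, arbitrary units
      vacf = velocity_autocorrelation(v, 200)
      D = np.trapz(vacf, dx=dt) / 3.0
      print("self-diffusion estimate:", D)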

  12. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs, ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential enhancement from parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm to further reduce computational time through algorithm modifications, and integrates it with FACET so that the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field, can be used with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing computational efficiencies, based on the potential application of optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  13. WinHPC System | High-Performance Computing | NREL

    Science.gov Websites

    NREL's WinHPC system is a computing cluster running the Microsoft Windows operating system. It allows users to run jobs requiring a Windows environment, such as ANSYS and MATLAB.

  14. Integrated ray tracing simulation of annual variation of spectral bio-signatures from cloud free 3D optical Earth model

    NASA Astrophysics Data System (ADS)

    Ryu, Dongok; Kim, Sug-Whan; Kim, Dae Wook; Lee, Jae-Min; Lee, Hanshin; Park, Won Hyun; Seong, Sehyun; Ham, Sun-Jeong

    2010-09-01

    Understanding the Earth spectral bio-signatures provides an important reference datum for accurate de-convolution of collapsed spectral signals from potential earth-like planets of other star systems. This study presents a new ray tracing computation method including an improved 3D optical earth model constructed with the coastal line and vegetation distribution data from the Global Ecological Zone (GEZ) map. Using non-Lambertian bidirectional scattering distribution function (BSDF) models, the input earth surface model is characterized with three different scattering properties and their annual variations depending on monthly changes in vegetation distribution, sea ice coverage and illumination angle. The input atmosphere model consists of one layer with a Rayleigh scattering model from sea level to 100 km in altitude, and its radiative transfer characteristics are computed for four seasons using the SMART codes. The ocean scattering model is a combination of sun-glint scattering and Lambertian scattering models. The land surface scattering is defined with the semi-empirical parametric kernel method used for the MODIS and POLDER missions. These three component models were integrated into the final Earth model, which was then incorporated into the in-house built integrated ray tracing (IRT) model capable of computing both the spectral imaging and radiative transfer performance of a hypothetical space instrument as it observes the Earth from its designated orbit. The IRT model simulation inputs include variation in earth orientation, illuminated phases, and seasonal sea ice and vegetation distribution. The trial simulation runs result in the annual variations in phase-dependent disk-averaged spectra (DAS) and their associated bio-signatures such as NDVI. The full computational details are presented together with the resulting annual variation in DAS and their associated bio-signatures.
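    NDVI, one of the bio-signatures mentioned above, has the standard definition NDVI = (NIR - Red) / (NIR + Red); a minimal per-pixel helper:

      # Normalized Difference Vegetation Index from NIR and red reflectances.
      import numpy as np

      def ndvi(nir, red, eps=1e-12):
          nir = np.asarray(nir, dtype=float)
          red = np.asarray(red, dtype=float)
          return (nir - red) / (nir + red + eps)   # eps avoids division by zero

      # Healthy vegetation reflects strongly in the NIR, giving NDVI near +0.7:
      print(ndvi([0.5], [0.08]))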

  15. New design conception and development of the synchronizer/data buffer system in CDA station for China's GMS

    NASA Astrophysics Data System (ADS)

    Tong, Kai; Fan, Shiming; Gong, Derong; Lu, Zuming; Liu, Jian

    The synchronizer/data buffer (SDB) in the command and data acquisition station for China's future Geostationary Meteorological Satellite is described. Several computers and special microprocessors are used in tandem with minimized hardware to fulfill all of the functions. The high-accuracy digital phase-locked loop is operated by computer, controlling the count value of the 20-MHz clock to acquire and track signals such as the sun pulse, scan synchronization detection pulse, and earth pulse. Sun pulse and VISSR data are recorded precisely and economically by digitizing their time relation. The VISSR scan timing and equiangular control timing, and equal-time sampling on the satellite, are also discussed.

  16. Implementation of the Sun Position Calculation in the PDC-1 Control Microprocessor

    NASA Technical Reports Server (NTRS)

    Stallkamp, J. A.

    1984-01-01

    Several computational approaches to providing the local azimuth and elevation angles of the Sun as a function of local time are presented, along with the utilization of the most appropriate method in the PDC-1 control microprocessor. The full algorithm, in its FORTRAN form, should be useful on computers of any kind or size. It was used in the PDC-1 unit to generate efficient code for the microprocessor with its floating point arithmetic chip. The balance of the presentation consists of a brief discussion of the tracking requirements for PDC-1, the planetary motion equations from the first to the final version, and the local azimuth-elevation geometry.
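    The PDC-1 code itself is not reproduced in the abstract, but a generic low-accuracy calculation of the same flavor (solar declination, hour angle, then spherical trigonometry) can be sketched as follows; this textbook formulation ignores the equation of time and atmospheric refraction:

      # Approximate solar azimuth/elevation for local solar time (textbook formulas).
      import math

      def sun_az_el(day_of_year, solar_hour, lat_deg):
          decl = math.radians(-23.44) * math.cos(2 * math.pi * (day_of_year + 10) / 365.0)
          hour_angle = math.radians(15.0 * (solar_hour - 12.0))
          lat = math.radians(lat_deg)
          sin_el = (math.sin(lat) * math.sin(decl) +
                    math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
          az = math.atan2(-math.sin(hour_angle) * math.cos(decl),
                          math.cos(lat) * math.sin(decl) -
                          math.sin(lat) * math.cos(decl) * math.cos(hour_angle))
          return math.degrees(az) % 360.0, math.degrees(math.asin(sin_el))

      # Solar noon at 34 N near the June solstice: Sun due south, about 79 deg high.
      print(sun_az_el(172, 12.0, 34.0))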

  17. Seismic imaging of the Sun's far hemisphere and its applications in space weather forecasting

    NASA Astrophysics Data System (ADS)

    Lindsey, Charles; Braun, Douglas

    2017-06-01

    The interior of the Sun is filled with acoustic waves with periods of about 5 min. These waves, called "p modes," are understood to be excited by convection in a thin layer beneath the Sun's surface. The p modes cause seismic ripples, which we call "the solar oscillations." Helioseismic observatories use Doppler observations to map these oscillations, both spatially and temporally. The p modes propagate freely throughout the solar interior, reverberating between the near and far hemispheres. They also interact strongly with active regions at the surfaces of both hemispheres, carrying the signatures of said interactions with them. Computational analysis of the solar oscillations mapped in the Sun's near hemisphere, applying basic principles of wave optics to model the implied p modes propagating through the solar interior, gives us seismic maps of large active regions in the Sun's far hemisphere. These seismic maps are useful for space weather forecasting. For the past decade, NASA's twin STEREO spacecraft have given us full coverage of the Sun's far hemisphere in electromagnetic (EUV) radiation from the far side of Earth's orbit about the Sun. We are now approaching a decade during which the STEREO spacecraft will lose their farside vantage. There will be significant periods during which electromagnetic coverage of the Sun's far hemisphere will be incomplete or nil. Solar seismology will make it possible to continue our monitoring of large active regions in the Sun's far hemisphere for the needs of space weather forecasters during these otherwise blind periods.

  18. Seismic imaging of the Sun's far hemisphere and its applications in space weather forecasting.

    PubMed

    Lindsey, Charles; Braun, Douglas

    2017-06-01

    The interior of the Sun is filled with acoustic waves with periods of about 5 min. These waves, called "p modes," are understood to be excited by convection in a thin layer beneath the Sun's surface. The p modes cause seismic ripples, which we call "the solar oscillations." Helioseismic observatories use Doppler observations to map these oscillations, both spatially and temporally. The p modes propagate freely throughout the solar interior, reverberating between the near and far hemispheres. They also interact strongly with active regions at the surfaces of both hemispheres, carrying the signatures of said interactions with them. Computational analysis of the solar oscillations mapped in the Sun's near hemisphere, applying basic principles of wave optics to model the implied p modes propagating through the solar interior, gives us seismic maps of large active regions in the Sun's far hemisphere. These seismic maps are useful for space weather forecasting. For the past decade, NASA's twin STEREO spacecraft have given us full coverage of the Sun's far hemisphere in electromagnetic (EUV) radiation from the far side of Earth's orbit about the Sun. We are now approaching a decade during which the STEREO spacecraft will lose their farside vantage. There will be significant periods during which electromagnetic coverage of the Sun's far hemisphere will be incomplete or nil. Solar seismology will make it possible to continue our monitoring of large active regions in the Sun's far hemisphere for the needs of space weather forecasters during these otherwise blind periods.

  19. Analyzing Spacecraft Telecommunication Systems

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric

    2004-01-01

    Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
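    MMTAT's internal models are not given in the abstract, so the following is only a generic sketch of the arithmetic at the heart of any link analysis: free-space path loss and a simple received-power budget. All numbers are illustrative.

      # Free-space path loss and a basic link budget (illustrative values only).
      import math

      C = 299792458.0  # speed of light, m/s

      def free_space_path_loss_db(distance_m, freq_hz):
          return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

      def received_power_dbm(tx_power_dbm, tx_gain_dbi, rx_gain_dbi,
                             distance_m, freq_hz, misc_losses_db=0.0):
          return (tx_power_dbm + tx_gain_dbi + rx_gain_dbi
                  - free_space_path_loss_db(distance_m, freq_hz) - misc_losses_db)

      # e.g. an X-band (8.4 GHz) link over 2e11 m, roughly Mars range:
      print(received_power_dbm(43.0, 48.0, 68.0, 2.0e11, 8.4e9, misc_losses_db=3.0))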

  20. Earth observations taken from orbiter Discovery during STS-91 mission

    NASA Image and Video Library

    2016-08-24

    STS091-713-061 (2-12 June 1998) --- The vertical stabilizer of the Space Shuttle Discovery runs through this Atlantic Ocean image made from its crew cabin. Many sets of internal waves are seen in the 70mm frame traveling through an area off the Atlantic coast of Nova Scotia, Canada. There are seven sets that run perpendicular to each other. Internal waves are tidally induced and travel below the surface of the ocean along a density change, which often occurs at a depth of around 150 feet. According to NASA scientists studying the STS-91 collection, the waves are visible because, as the wave action smooths out the smaller waves on the surface, the manner in which the sun is reflected is changed.

  1. Development for SSV on a parallel processing system (PARAGON)

    NASA Astrophysics Data System (ADS)

    Gothard, Benny M.; Allmen, Mark; Carroll, Michael J.; Rich, Dan

    1995-12-01

    A goal of the surrogate semi-autonomous vehicle (SSV) program is to have multiple vehicles navigate autonomously and cooperatively with other vehicles. This paper describes the process and tools used in porting UGV/SSV (unmanned ground vehicle) autonomous mobility and target recognition algorithms from a SISD (single instruction single data) processor architecture (i.e., a Sun SPARC workstation running C/UNIX) to a MIMD (multiple instruction multiple data) parallel processor architecture (i.e., PARAGON, a parallel set of i860 processors running C/UNIX). It discusses the gains in performance and the pitfalls of such a venture. It also examines the merits of this processor architecture (based on this conceptual prototyping effort) and programming paradigm to meet the final SSV demonstration requirements.

  2. High Performance Computing (HPC) Innovation Service Portal Pilots Cloud Computing (HPC-ISP Pilot Cloud Computing)

    DTIC Science & Technology

    2011-08-01

    [Figure residue removed. Recoverable captions: "Figure 4: Architectural diagram of running Blender on Amazon EC2 through Nimbis"; example input images and digit prototypes (cluster centers) from classification of streaming data, with size proportional to frequency.]

  3. MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER

    NASA Technical Reports Server (NTRS)

    Barton, R. S.

    1994-01-01

    The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the values of amplitude and phase for the k whose metric was largest, as well as consistency checks, are reported. A finer search can be done in the neighborhood of the optimal k if desired. The filter finally selected is written to disk in terms of drive values, not in terms of the filter's complex transmittance. Optionally, the impulse response of the filter may be created to permit users to examine the response for the features the algorithm deems important to the recognition process under the selected metric, limitations of the filter SLM, etc. MEDOF uses the filter SLM to its greatest potential; therefore filter competence is not compromised for simplicity of computation. MEDOF is written in C-language for Sun series computers running SunOS. With slight modifications, it has been implemented on DEC VAX series computers using the DEC-C v3.30 compiler, although the documentation does not currently support this platform. MEDOF can also be compiled using Borland International Inc.'s Turbo C++ v1.0, but IBM PC memory restrictions greatly reduce the maximum size of the reference images from which the filters can be calculated. MEDOF requires a two dimensional Fast Fourier Transform (2DFFT). One 2DFFT routine which has been used successfully with MEDOF is a routine found in "Numerical Recipes in C: The Art of Scientific Computing," which is available from Cambridge University Press, New Rochelle, NY 10801. The standard distribution medium for MEDOF is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format.
MEDOF was developed in 1992-1993.
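
    The coarse-then-fine mesh search over the complex constant k described above can be illustrated with a minimal sketch. This is not MEDOF itself: the metric function, mesh bounds, and the names search_k and toy_metric are invented placeholders standing in for the user-supplied reference signal, SLM encodings, noise model, and metric.

      import numpy as np

      def search_k(metric, mags, phases):
          """Return the (magnitude, phase) mesh point whose metric is largest."""
          best_point, best_score = None, -np.inf
          for m in mags:
              for p in phases:
                  score = metric(m * np.exp(1j * p))
                  if score > best_score:
                      best_point, best_score = (m, p), score
          return best_point, best_score

      def toy_metric(k):
          # Placeholder metric for demonstration only; peaks at k = 0.8 + 0.6j.
          return -abs(k - (0.8 + 0.6j))

      # Coarse pass over the mesh, then a finer pass near the winner,
      # mirroring the two-stage search the abstract describes.
      coarse_mags = np.linspace(0.1, 2.0, 20)
      coarse_phases = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
      (m0, p0), _ = search_k(toy_metric, coarse_mags, coarse_phases)

      fine_mags = np.linspace(0.8 * m0, 1.2 * m0, 20)
      fine_phases = np.linspace(p0 - 0.2, p0 + 0.2, 20)
      print(search_k(toy_metric, fine_mags, fine_phases))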

  4. The influence of regional Arctic sea-ice decline on stratospheric and tropospheric circulation

    NASA Astrophysics Data System (ADS)

    McKenna, Christine; Bracegirdle, Thomas; Shuckburgh, Emily; Haynes, Peter

    2016-04-01

    Arctic sea-ice extent has rapidly declined over the past few decades, and most climate models project a continuation of this trend during the 21st century in response to greenhouse gas forcing. A number of recent studies have shown that this sea-ice loss induces vertically propagating Rossby waves, which weaken the stratospheric polar vortex and increase the frequency of sudden stratospheric warmings (SSWs). SSWs have been shown to increase the probability of a negative NAO in the following weeks, thereby driving anomalous weather conditions over Europe and other mid-latitude regions. In contrast, other studies have shown that Arctic sea-ice loss strengthens the polar vortex, increasing the probability of a positive NAO. Sun et al. (2015) suggest these conflicting results may be due to the region of sea-ice loss considered. They find that if only regions within the Arctic Circle are considered in sea-ice projections, the polar vortex weakens; if only regions outside the Arctic Circle are considered, the polar vortex strengthens. This is because the anomalous Rossby waves forced in the former/latter scenario constructively/destructively interfere with climatological Rossby waves, thus enhancing/suppressing upward wave propagation. In this study, we investigate whether Sun et al.'s results are robust to a different model. We also divide the regions of sea-ice loss they considered into further sub-regions, in order to examine the regional differences in more detail. We do this using the intermediate-complexity climate model IGCM4, which has a well-resolved stratosphere and represents stratospheric processes well. Several simulations are run in atmosphere-only mode: one is a control experiment and the others are perturbation experiments. In the control run, annually repeating historical mean surface conditions are imposed at the lower boundary, whereas in each perturbation run the model is forced by SST perturbations imposed in a specific region (one perturbation experiment combines all regions). These regions correspond to sea-ice loss hotspots such as the Barents-Kara Seas and the Bering Sea. The difference between the control and a perturbation run yields the effects of the imposed sea-ice loss on the polar vortex. To detect and count SSWs in each run, we use the World Meteorological Organisation's definition of an SSW (a reversal in zonal mean zonal wind at 10 hPa and 60° N, and a reversal in the zonal mean meridional temperature gradient at 10 hPa between 60° N and 90° N). The poster will present and discuss the initial results of this study. Implications of the results for future change in the lower latitude mid-troposphere will be discussed. References Sun, L., C. Deser, and R. A. Tomas, 2015: Mechanisms of Stratospheric and Tropospheric Circulation Response to Projected Arctic Sea Ice Loss. J. Climate, 28, 7824-7845, doi: http://dx.doi.org/10.1175/JCLI-D-15-0169.1.
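
    The wind-reversal half of that WMO criterion is simple to state in code. The sketch below is an assumption-laden illustration, not the authors' analysis: it flags a reversal from westerly to easterly in a daily zonal-mean zonal wind series at 10 hPa, 60° N, and omits the temperature-gradient test and the exact event-separation rules.

      import numpy as np

      def count_ssw(u_10hpa_60n, min_separation=20):
          """u_10hpa_60n: daily zonal-mean zonal wind (m/s) at 10 hPa, 60 N.
          Returns the indices of days on which the wind turns easterly,
          requiring min_separation days between counted events."""
          events, last = [], -min_separation
          for day in range(1, len(u_10hpa_60n)):
              reversal = u_10hpa_60n[day] < 0 and u_10hpa_60n[day - 1] >= 0
              if reversal and day - last >= min_separation:
                  events.append(day)
                  last = day
          return events

      # Synthetic example: steady westerlies with one mid-winter reversal.
      u = 20.0 * np.ones(120)
      u[60:75] = -5.0
      print(count_ssw(u))   # -> [60]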

  5. Gene expression cross-profiling in genetically modified industrial Saccharomyces cerevisiae strains during high-temperature ethanol production from xylose.

    PubMed

    Ismail, Ku Syahidah Ku; Sakamoto, Takatoshi; Hatanaka, Haruyo; Hasunuma, Tomohisa; Kondo, Akihiko

    2013-01-10

    Production of ethanol from xylose at high temperature would be an economical approach, since it reduces the risk of contamination and allows both the saccharification and fermentation steps of simultaneous saccharification and fermentation (SSF) to run at elevated temperature. Eight recombinant xylose-utilizing Saccharomyces cerevisiae strains were constructed from industrial strains and subjected to high-temperature fermentation at 38 °C. The best-performing strain was sun049T, which produced up to 15.2 g/L ethanol (63% of the theoretical production), followed by sun048T and sun588T, both with 14.1 g/L ethanol produced. Transcriptomic expression profiling of the three best ethanol-producing strains against a negative-control strain, sun473T, led to the discovery of genes in common that were regulated in the same direction. Identification of the 20 most highly up-regulated and the 20 most highly down-regulated genes indicated that the cells regulate their central metabolism and maintain the integrity of their cell walls in response to high temperature. We also speculate that cross-protection occurs in the cells, allowing them to maintain ethanol production at higher concentration under heat stress than the negative controls. This report provides further transcriptomics information in the interest of producing a robust microorganism for high-temperature ethanol production utilizing xylose. Copyright © 2012 Elsevier B.V. All rights reserved.
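
    The "top 20 up- and down-regulated genes" step is a straightforward ranking by fold change. The sketch below is illustrative only; the column names, gene symbols, and values are invented, not the paper's data or pipeline.

      import pandas as pd

      # Hypothetical expression table: log2 fold change of a producing strain
      # relative to the negative-control strain (column names are assumptions).
      df = pd.DataFrame({
          "gene": ["HSP104", "PGK1", "XKS1", "FLO11"],
          "log2_fc": [3.2, -0.1, 1.8, -2.7],
      })

      ranked = df.sort_values("log2_fc", ascending=False)
      top_up = ranked.head(20)                 # most up-regulated
      top_down = ranked.tail(20).iloc[::-1]    # most down-regulated first
      print(top_up, top_down, sep="\n")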

  6. Design for Run-Time Monitor on Cloud Computing

    NASA Astrophysics Data System (ADS)

    Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, as well as the infrastructure on which they run. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design the Run-Time Monitor (RTM), a system software component that monitors application behavior at run-time, analyzes the collected information, and optimizes resources on cloud computing. RTM monitors application software through library instrumentation, and underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
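
    The monitor-analyze-adapt loop described above can be sketched in a few lines. This is a toy illustration under stated assumptions, not RTM's interface: sample_metrics, the threshold, and the replica-count "adaptation" are invented placeholders for the paper's instrumentation and reconfiguration policy.

      import random
      import time

      def sample_metrics():
          # Stand-in for library instrumentation / performance counters.
          return {"cpu": random.uniform(0.0, 1.0), "mem": random.uniform(0.0, 1.0)}

      def monitor(period_s=1.0, cpu_high=0.9, rounds=5):
          replicas = 1
          for _ in range(rounds):
              m = sample_metrics()              # monitor
              overloaded = m["cpu"] > cpu_high  # analyze
              if overloaded:                    # adapt the configuration
                  replicas += 1
              time.sleep(period_s)
          return replicas

      print("replicas after run:", monitor(period_s=0.01))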

  7. Cloud Computing for Complex Performance Codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  8. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    NASA Astrophysics Data System (ADS)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems; 2) geophysical inversion routines which can be used to characterize physical systems; and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems that maximize the performance of scientific and engineering codes. Using three case studies (a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method), it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
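
    At its core, run-time autotuning of the kind described is an empirical search: time the kernel under each candidate configuration and keep the fastest. The sketch below is a minimal illustration under that assumption; the kernel and its (block_size, unroll) parameter space are stand-ins, not CUSH's actual interface.

      import time
      from itertools import product

      def kernel(block_size, unroll, n=200_000):
          # Toy workload whose loop structure depends on the configuration.
          s = 0
          for i in range(0, n, block_size):
              for j in range(unroll):
                  s += i + j
          return s

      def autotune(configs, runner):
          """Time the runner under each configuration; return the fastest."""
          timings = {}
          for cfg in configs:
              t0 = time.perf_counter()
              runner(*cfg)
              timings[cfg] = time.perf_counter() - t0
          return min(timings, key=timings.get), timings

      best, _ = autotune(list(product([64, 256, 1024], [1, 2, 4])), kernel)
      print("best (block_size, unroll):", best)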

  9. Planetary Nebula NGC 7293 also Known as the Helix Nebula

    NASA Image and Video Library

    2005-05-05

    This ultraviolet image from NASA's Galaxy Evolution Explorer shows the planetary nebula NGC 7293, also known as the Helix Nebula. It is the nearest example of what happens to a star like our own Sun as it approaches the end of its life: it runs out of fuel, expels gas outward, and evolves into a much hotter, smaller, and denser white dwarf star. http://photojournal.jpl.nasa.gov/catalog/PIA07902

  10. Giga-Hertz Electromagnetic Wave Science and Devices for Advanced Battlefield Communications

    DTIC Science & Technology

    2010-12-15

    Yeal Song, Lei Lu, Zihui Wang, Yiyan Sun, and Joshua Bevivino, Seminar in the Department of Electrical and Computer Engineering, the University of... Celinski, "Spin wave resonance excitation in ferromagnetic films using planar waveguide structures", J. Appl. Phys. 108, 023907 (2010) 6. Zihui... Young-Yeal Song, Yiyan Sun, Lei Lu, Joshua Bevivino, and Mingzhong Wu, Appl. Phys. Lett. 97, 173502 (2010). 12. "Electric-field control of ferromagnetic

  11. Combining Offline and Online Computation for Solving Partially Observable Markov Decision Process

    DTIC Science & Technology

    2015-03-06

    David Hsu and Wee Sun Lee, Monte Carlo Bayesian Reinforcement Learning, International Conference on Machine Learning (ICML), 2012. • Haoyu Bai, David...and Automation (ICRA), 2015. • Zhan Wei Lim, David Hsu, and Wee Sun Lee, Adaptive Informative Path Planning in Metric Spaces. Submitted to Int. J... Automation (ICRA), 2015. 2. Bai, H., Hsu, D., Kochenderfer, M. J., and Lee, W. S., Unmanned aircraft collision avoidance using continuous state POMDPs

  12. A Journey from the Sun to the Earth

    ERIC Educational Resources Information Center

    Psycharis, Sarantos; Daflos, Athanasios

    2005-01-01

    Computer-aided modelling and investigations can bring the real world into classrooms and facilitate its exploration, in contrast to acquiring factual knowledge from textbooks. Computer modelling puts a whole new "spin" on science education, redefining and reshaping the classroom learning experience. The authors used information and…

  13. Form factors for dark matter capture by the Sun in effective theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Catena, Riccardo; Schwabe, Bodo

    2015-04-24

    In the effective theory of isoscalar and isovector dark matter-nucleon interactions mediated by a heavy spin-1 or spin-0 particle, 8 isotope-dependent nuclear response functions can be generated in the dark matter scattering by nuclei. We compute the 8 nuclear response functions for the 16 most abundant elements in the Sun, i.e. H, ³He, ⁴He, ¹²C, ¹⁴N, ¹⁶O, ²⁰Ne, ²³Na, ²⁴Mg, ²⁷Al, ²⁸Si, ³²S, ⁴⁰Ar, ⁴⁰Ca, ⁵⁶Fe, and ⁵⁹Ni, through numerical shell model calculations. We use our response functions to compute the rate of dark matter capture by the Sun for all isoscalar and isovector dark matter-nucleon effective interactions, including several operators previously considered for dark matter direct detection only. We study in detail the dependence of the capture rate on specific dark matter-nucleon interaction operators, and on the different elements in the Sun. We find that a so far neglected momentum dependent dark matter coupling to the nuclear vector charge gives a larger contribution to the capture rate than the constant spin-dependent interaction commonly included in dark matter searches at neutrino telescopes. Our investigation lays the foundations for model independent analyses of dark matter induced neutrino signals from the Sun. The nuclear response functions obtained in this study are listed in analytic form in an appendix, ready to be used in other projects.

  14. Nonlinear Analysis of a Bolted Marine Riser Connector Using NASTRAN Substructuring

    NASA Technical Reports Server (NTRS)

    Fox, G. L.

    1984-01-01

    Results of an investigation of the behavior of a bolted, flange-type marine riser connector are reported. The method used to account for the nonlinear effect of connector separation due to bolt preload and axial tension load is described. The automated multilevel substructuring capability of COSMIC/NASTRAN was employed at considerable savings in computer run time. Simplified formulas for computer resources, i.e., computer run times for modules SDCOMP, FBS, and MPYAD, as well as disk storage space, are presented. Actual run time data on a VAX-11/780 are compared with the formulas presented.

  15. Scalable computing for evolutionary genomics.

    PubMed

    Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert

    2012-01-01

    Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel by running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster and pipeline in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages of interest to evolutionary biology are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. In addition to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives on creating and building such images.
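
    The "poor man's parallelization" idea, running whole unmodified programs as independent processes, can be sketched in a few lines. The echo commands below are stand-ins for legacy tools (for instance, an aligner invoked once per input chunk); a real deployment would hand these command lines to a cluster job scheduler instead.

      import shlex
      import subprocess

      # Launch every job as its own OS process, then wait for them all.
      cmds = [f"echo processing chunk {i}" for i in range(8)]
      procs = [subprocess.Popen(shlex.split(c)) for c in cmds]  # all start at once
      codes = [p.wait() for p in procs]                         # crude "scheduler"
      print("exit codes:", codes)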

  16. Fingerprinting Communication and Computation on HPC Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean

    2010-06-02

    How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight and resolution on the code being run. In this paper, we present a method for "fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine if the code is consistent with certain other types of codes, what a user usually runs, or what the user requested an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.
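
    One simple way to picture such a fingerprint is a vector of MPI call counts compared by cosine similarity. The sketch below is an invented illustration of that general idea, not the paper's method, which is richer than raw call counts.

      import math

      def cosine(a, b):
          """Cosine similarity between two sparse count vectors (dicts)."""
          keys = set(a) | set(b)
          dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
          na = math.sqrt(sum(v * v for v in a.values()))
          nb = math.sqrt(sum(v * v for v in b.values()))
          return dot / (na * nb)

      # Hypothetical per-run MPI call counts for two jobs.
      run_a = {"MPI_Send": 900, "MPI_Recv": 900, "MPI_Allreduce": 50}
      run_b = {"MPI_Send": 850, "MPI_Recv": 860, "MPI_Allreduce": 40}
      print(f"similarity: {cosine(run_a, run_b):.3f}")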

  17. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical phases of developing a robot computer problem solving system are outlined. Robot intelligence, conversion of the programming language SAIL to run under the TENEX monitor, and the use of the network to run several cooperating jobs at different sites are discussed.

  18. Solar Physics at Evergreen: Solar Dynamo and Chromospheric MHD

    NASA Astrophysics Data System (ADS)

    Zita, E. J.; Maxwell, J.; Song, N.; Dikpati, M.

    2006-12-01

    We describe our five-year-old solar physics research program at The Evergreen State College. Famed for its cloudy skies, the Pacific Northwest is an ideal location for theoretical and remote solar physics research activities. Why does the Sun's magnetic field flip polarity every 11 years or so? How does this contribute to the magnetic storms Earth experiences when the Sun's field reverses? Why is the temperature in the Sun's upper atmosphere millions of degrees higher than the Sun's surface temperature? How do magnetic waves transport energy in the Sun's chromosphere and the Earth's atmosphere? How does solar variability affect climate change? Faculty and undergraduates investigate questions such as these in collaboration with the High Altitude Observatory (HAO) at the National Center for Atmospheric Research (NCAR) in Boulder. We will describe successful student research projects, the logistics of remote computing, and our current physics investigations into (1) the solar dynamo and (2) chromospheric magnetohydrodynamics.

  19. Active Nodal Task Seeking for High-Performance, Ultra-Dependable Computing

    DTIC Science & Technology

    1994-07-01

    implementation. Figure 1 shows a hardware organization of ANTS: stand-alone computing nodes inter-connected by buses. 2.1 Run Time Partitioning The... nodes in 14 respond to changing loads [27] or system reconfiguration [26]. Existing techniques are all source-initiated or server-initiated [27]. 5.1... short-running task segments. The task segments must be short-running in order that processors will become available often enough to satisfy changing

  20. SOHO starts a revolution in the science of the Sun

    NASA Astrophysics Data System (ADS)

    1996-07-01

    In addition, SOHO has found clues to the forces that accelerate the solar wind of atomic particles blowing unceasingly through the Solar System. By relating the huge outbursts called coronal mass ejections to preceding magnetic changes in the Sun, SOHO scientists hope to predict such events which, in the Earth's vicinity, endanger power supplies and satellites. SOHO sees differences in the strength of the solar wind in various directions, by mapping a cavity in the cloud of interstellar hydrogen surrounding the Sun. As a bonus, SOHO secured remarkable images of Comet Hyakutake, by ultraviolet and visible light. The revolution in solar science will seem more complete when all the pieces and actions of the Sun, detected by twelve different instruments, are brought together in observations and concepts. Fundamental questions will then be open to re-examination, about the origin of the Sun's magnetism, the cause of its variations in the 11-year cycle of sunspot activity, and the consequences for the Solar System at large. SOHO is greater than the sum of its parts. "SOHO takes solar science by storm," says Roger Bonnet, the European Space Agency's Director of Science, "thanks to its combination of instruments. Unprecedented results from individual telescopes and spectrometers are impressive, of course, but what is breathtaking is SOHO's ability to explore the Sun all the way from its nuclear core to the Earth's vicinity and beyond. We can expect a completely new picture of how agitation inside the Sun, transmitted through the solar atmosphere, directly affects us on the Earth." SOHO is a project of international cooperation between the European Space Agency and NASA. The spacecraft was built in Europe and instrumented by scientists on both sides of the Atlantic. NASA launched SOHO and provides the ground stations and an operations centre at the Goddard Space Flight Center near Washington. SOHO has an uninterrupted view of the Sun from a halo orbit around Lagrangian Point No. 1 where the gravity of the Sun and the Earth are in balance. The spacecraft's engineering has proved to be excellent and no difficulty is anticipated in keeping it operational for at least six years. Early SOHO results were summarized in ESA's Information Note Nr 07-96, 2 May 1996. Here follow notes and comments on some further conclusions by SOHO's scientists. Fast action in the Sun's atmosphere The ultraviolet spectrometers aboard SOHO, called SUMER and CDS, were designed to analyse events in the solar atmosphere and discover temperatures, densities and speeds of motion in the gas. Their detailed results come in the spectra, which analyse the intensities at different wavelengths with high sensitivity, but the spectrometers also generate images by scanning selected regions of the Sun. When the SUMER instrument scans the whole Sun by the ultraviolet light of strongly ionized sulphur atoms (S VI at 933 angstroms) it picks out gas at 200,000 degrees C and reveals a vast number of bright regions created by magnetic field lines looping through the atmosphere. The brightness can change by a factor of ten in a distance of a few thousand kilometres or in a few seconds of time. SUMER has also shown that thick streaks called polar plumes, which climb far into space from the Sun's polar regions, are anchored in bright regions near the Sun's visible surface. The spectrometer CDS has observed fast action in the Sun's atmosphere. 
It can measure velocities along the line of sight by shifts in the wavelength of emissions from selected atoms, and contrary motions (turbulence) appear in a spreading of the wavelengths. In one high-velocity event, corresponding with a small streak of brightness in the scanned image, CDS detected vertical motions differing by 450 kilometres per second, and an overall motion of 65 kilometres per second downwards. "By taking the Sun's atmosphere to pieces we begin to understand how it influences our lives," says Richard Harrison of the UK's Rutherford Appleton Laboratory, principal investigator for the CDS spectrometer. "Surprises here on Earth don't come from the steady light and heat, which we take for granted, but from atmospheric storms that send shock waves through the Solar System. By making temperature and density maps of the Sun's atmosphere we expect to find out how these storms develop." Accelerator of the solar wind All of the common chemical elements are present in the Sun's atmosphere, though they are not always detectable. They are represented more plainly in the solar wind. SOHO's solar-wind analyser CELIAS has demonstrated an unprecedented ability to recognize and quantify many different elements and isotopes. There is a puzzle about how the heavy atoms are accelerated, so that they can keep up with the commonplace lightweight hydrogen of the solar wind. If the speeds of atomic particles were due only to heat, heavy atoms would travel much more slowly than the hydrogen atoms. That is not the case. Instead, a natural electromagnetic accelerator, akin to man-made particle accelerators, operates in the Sun's atmosphere and treats all elements similarly. Measurements of the speeds of oxygen atoms leaving the Sun's atmosphere to join the solar wind catch them in the process of acceleration. As the stop light changes to green, the oxygen atoms go from less than 100 kilometres per second at 250,000 kilometres above the solar surface, to about 225 kilometres per second a million kilometres farther out. This result comes from SOHO's ultraviolet coronagraph UVCS, observing conditions above a polar coronal hole, where the atmosphere is relatively cool and magnetic lines run freely into space. Here originates a fast solar wind at around 700 kilometres per second, with about twice the speed of the solar wind coming from magnetically constrained regions near the Sun's equator. One of SOHO's main tasks is to explain the solar wind, and further investigations by UVCS may settle arguments about how the natural accelerator works. "Some of the big rewards from SOHO will come from better and more continuous observation" comments Vicente Domingo, ESA's project scientist for SOHO. "In other cases wholly new results will help to decide between conflicting theories. UVCS's high-speed oxygen atoms at the source of the fast solar wind are one case in point. Sub-surface motions revealed by MDI are another." Sub-surface flows show pancake-like features MDI is SOHO's oscillations imager and it is the most elaborate of the instruments that probe inside the Sun by helioseismology, using oscillations at the visible surface due to sound waves reverberating through the interior. MDI divides the Sun's surface into a million points and measures vertical motions once a minute by small changes of the wavelength of light. Deducing flows just below the visible surface requires prolonged calculations with a supercomputer. 
These detect small changes in the travel-time of sound waves according to whether they are heading into, or travelling with, the flow of material inside the Sun. After mapping sub-surface flows across a wide area, the MDI team has analysed a vertical slice. Along a 300,000-kilometre line at the Sun's equator, the computation cuts 8000 kilometres deep into the turbulent convection zone, where the outer part of the Sun boils like a kettle. The main convection cells that link ascending and descending flows turn out to be surprisingly shallow and pancake-like. They reach down about 1500 kilometres, compared with about 4000 kilometres expected by some theorists. Further results from an intensive observing campaign will enable the MDI scientists to confirm that their first results are typical, and to make a movie to see how structures change with time. Stormy weather ahead The oscillation imager MDI also charts magnetic fields running in and out of the Sun's surface. The speckled pattern that it sees will change dramatically in the years ahead, when the Sun is due to swap its north and south magnetic poles around and sunspots will become much more numerous. Among SOHO's earliest results, the daily observations by the extreme ultraviolet imager EIT revealed many bright and active spots. They tell of remarkable activity in many parts of the Sun's atmosphere, even at a time when the surface observed by visible light looks very calm. The extent of atmospheric storms becomes more apparent in a new processing of EIT images which compares the intensities at different wavelengths. In one case a huge and complex magnetic disturbance in the Sun's equatorial atmosphere was almost half as wide as the visible disk of the Sun. The extent and violence of such events can only tend to increase as the Sun becomes more active. "EIT is beginning a career similar to the meteorological satellites that monitor the weather on the Earth every day," says its principal investigator, Jean-Pierre Delaboudinière of the Institut d'Astrophysique Spatiale at Orsay in France. "Just as those have revolutionized meteorology, so our observations give us vivid new impressions of the Sun's weather. SOHO is due to operate for at least six years, into the next maximum of sunspot activity, so we shall see more precisely than ever before the changes in solar weather with the magnetic seasons, which also affect conditions at the Earth."

  1. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed computing cross-platform environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
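
    Because each calibration run needs no inter-process communication, the loop parallelizes trivially. The sketch below illustrates that pattern under stated assumptions: model_error is a toy misfit function standing in for a real model run, and random search stands in for the "smart" parameter generation mentioned above.

      import random
      from concurrent.futures import ProcessPoolExecutor

      def model_error(params):
          # Toy stand-in for one model run returning a summary misfit statistic.
          a, b = params
          return (a - 1.3) ** 2 + (b - 0.7) ** 2

      def main():
          trials = [(random.uniform(0, 2), random.uniform(0, 2))
                    for _ in range(10_000)]
          # Each worker evaluates one parameter set; no communication needed.
          with ProcessPoolExecutor() as pool:
              errors = list(pool.map(model_error, trials, chunksize=500))
          best = min(zip(errors, trials))
          print("best misfit, params:", best)

      if __name__ == "__main__":   # guard required for process pools
          main()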

  2. P-KIMMO: A Prolog Implementation of the Two Level Model.

    ERIC Educational Resources Information Center

    Lee, Kang-Hyuk

    Implementation of a computer-based model for morphological analysis and synthesis of language, entitled P-KIMMO, is discussed. The model was implemented in Quintus Prolog on a Sun Workstation and exported to a Macintosh computer. This model has two levels of morphophonological representation, lexical and surface levels, associated by…

  3. The Impact and Promise of Open-Source Computational Material for Physics Teaching

    NASA Astrophysics Data System (ADS)

    Christian, Wolfgang

    2017-01-01

    A computer-based modeling approach to teaching must be flexible because students and teachers have different skills and varying levels of preparation. Learning how to run the "software du jour" is not the objective for integrating computational physics material into the curriculum. Learning computational thinking, how to use computation and computer-based visualization to communicate ideas, how to design and build models, and how to use ready-to-run models to foster critical thinking is the objective. Our computational modeling approach to teaching is a research-proven pedagogy that predates computers. It attempts to enhance student achievement through the Modeling Cycle. This approach was pioneered by Robert Karplus and the SCIS Project in the 1960s and 70s and later extended by the Modeling Instruction Program led by Jane Jackson and David Hestenes at Arizona State University. This talk describes a no-cost open-source computational approach aligned with a Modeling Cycle pedagogy. Our tools, curricular material, and ready-to-run examples are freely available from the Open Source Physics Collection hosted on the AAPT-ComPADRE digital library. Examples will be presented.

  4. Colt: an experiment in wormhole run-time reconfiguration

    NASA Astrophysics Data System (ADS)

    Bittner, Ray; Athanas, Peter M.; Musgrove, Mark

    1996-10-01

    Wormhole run-time reconfiguration (RTR) is an attempt to create a refined computing paradigm for high performance computational tasks. By combining concepts from field programmable gate array (FPGA) technologies with data flow computing, the Colt/Stallion architecture achieves high utilization of hardware resources, and facilitates rapid run-time reconfiguration. Targeted mainly at DSP-type operations, the Colt integrated circuit -- a prototype wormhole RTR device -- compares favorably to contemporary DSP alternatives in terms of silicon area consumed per unit computation and in computing performance. Although emphasis has been placed on signal processing applications, general purpose computation has not been overlooked. Colt is a prototype that defines an architecture not only at the chip level but also in terms of an overall system design. As this system is realized, the concept of wormhole RTR will be applied to numerical computation and DSP applications including those common to image processing, communications systems, digital filters, acoustic processing, real-time control systems and simulation acceleration.

  5. Search For Gravitational-wave Bursts Associated with Gamma-ray Bursts using Data from LIGO Science Run 5 and Virgo Science Run 1

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Acernese, F.; Adhikari, R.; Ajith, P.; Allen, B.; Allen, G.; Alshourbagy, M.; Amin, R. S.; Anderson, S. B.; Anderson, W. G.; Antonucci, F.; Aoudia, S.; Arain, M. A.; Araya, M.; Armandula, H.; Armor, P.; Arun, K. G.; Aso, Y.; Aston, S.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Barker, C.; Barker, D.; Barone, F.; Barr, B.; Barriga, P.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Bauer, Th. S.; Behnke, B.; Beker, M.; Benacquista, M.; Betzwieser, J.; Beyersdorf, P. T.; Bigotta, S.; Bilenko, I. A.; Billingsley, G.; Birindelli, S.; Biswas, R.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Boccara, C.; Bodiya, T. P.; Bogue, L.; Bondu, F.; Bonelli, L.; Bork, R.; Boschi, V.; Bose, S.; Bosi, L.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Brau, J. E.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brooks, A. F.; Brown, D. A.; Brummit, A.; Brunet, G.; Budzyński, R.; Bulik, T.; Bullington, A.; Bulten, H. J.; Buonanno, A.; Burmeister, O.; Buskulic, D.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campagna, E.; Cannizzo, J.; Cannon, K. C.; Canuel, B.; Cao, J.; Carbognani, F.; Cardenas, L.; Caride, S.; Castaldi, G.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Chalkley, E.; Charlton, P.; Chassande-Mottin, E.; Chatterji, S.; Chelkowski, S.; Chen, Y.; Chincarini, A.; Christensen, N.; Chung, C. T. Y.; Clark, D.; Clark, J.; Clayton, J. H.; Cleva, F.; Coccia, E.; Cokelaer, T.; Colacino, C. N.; Colas, J.; Colla, A.; Colombini, M.; Conte, R.; Cook, D.; Corbitt, T. R. C.; Corda, C.; Cornish, N.; Corsi, A.; Coulon, J.-P.; Coward, D.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Culter, R. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dari, A.; Dattilo, V.; Daudert, B.; Davier, M.; Davies, G.; Daw, E. J.; Day, R.; De Rosa, R.; DeBra, D.; Degallaix, J.; del Prete, M.; Dergachev, V.; Desai, S.; DeSalvo, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Emilio, M. Di Paolo; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doomes, E. E.; Drago, M.; Drever, R. W. P.; Dueck, J.; Duke, I.; Dumas, J.-C.; Dwyer, J. G.; Echols, C.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Espinoza, E.; Etzel, T.; Evans, M.; Evans, T.; Fafone, V.; Fairhurst, S.; Faltas, Y.; Fan, Y.; Fazi, D.; Fehrmann, H.; Ferrante, I.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Flaminio, R.; Flasch, K.; Foley, S.; Forrest, C.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Franzen, A.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T.; Fritschel, P.; Frolov, V. V.; Fyffe, M.; Galdi, V.; Gammaitoni, L.; Garofoli, J. A.; Garufi, F.; Gemme, G.; Genin, E.; Gennai, A.; Gholami, I.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Goda, K.; Goetz, E.; Goggin, L. M.; González, G.; Gorodetsky, M. L.; Goeßzetler, S.; Goßler, S.; Gouaty, R.; Granata, M.; Granata, V.; Grant, A.; Gras, S.; Gray, C.; Gray, M.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Greverie, C.; Grimaldi, F.; Grosso, R.; Grote, H.; Grunewald, S.; Guenther, M.; Guidi, G.; Gustafson, E. K.; Gustafson, R.; Hage, B.; Hallam, J. M.; Hammer, D.; Hammond, G. D.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. 
D.; Haughian, K.; Hayama, K.; Heefner, J.; Heitmann, H.; Hello, P.; Heng, I. S.; Heptonstall, A.; Hewitson, M.; Hild, S.; Hirose, E.; Hoak, D.; Hodge, K. A.; Holt, K.; Hosken, D. J.; Hough, J.; Hoyland, D.; Huet, D.; Hughey, B.; Huttner, S. H.; Ingram, D. R.; Isogai, T.; Ito, M.; Ivanov, A.; Jaranowski, P.; Johnson, B.; Johnson, W. W.; Jones, D. I.; Jones, G.; Jones, R.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kanner, J.; Kasprzyk, D.; Katsavounidis, E.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kells, W.; Keppel, D. G.; Khalaidovski, A.; Khalili, F. Y.; Khan, R.; Khazanov, E.; King, P.; Kissel, J. S.; Klimenko, S.; Kokeyama, K.; Kondrashov, V.; Kopparapu, R.; Koranda, S.; Kowalska, I.; Kozak, D.; Krishnan, B.; Królak, A.; Kumar, R.; Kwee, P.; La Penna, P.; Lam, P. K.; Landry, M.; Lantz, B.; Lazzarini, A.; Lei, H.; Lei, M.; Leindecker, N.; Leonor, I.; Leroy, N.; Letendre, N.; Li, C.; Lin, H.; Lindquist, P. E.; Littenberg, T. B.; Lockerbie, N. A.; Lodhia, D.; Longo, M.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lu, P.; Lubiński, M.; Lucianetti, A.; Lück, H.; Machenschalk, B.; MacInnis, M.; Mackowski, J.-M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Markowitz, J.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McHugh, M.; McIntyre, G.; McKechan, D. J. A.; McKenzie, K.; Mehmet, M.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Menzinger, F.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Michel, C.; Milano, L.; Miller, J.; Minelli, J.; Minenkov, Y.; Mino, Y.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Miyakawa, O.; Moe, B.; Mohan, M.; Mohanty, S. D.; Mohapatra, S. R. P.; Moreau, J.; Moreno, G.; Morgado, N.; Morgia, A.; Morioka, T.; Mors, K.; Mosca, S.; Moscatelli, V.; Mossavi, K.; Mours, B.; MowLowry, C.; Mueller, G.; Muhammad, D.; Mukherjee, S.; Mukhopadhyay, H.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murray, P. G.; Myers, E.; Myers, J.; Nash, T.; Nelson, J.; Neri, I.; Newton, G.; Nishizawa, A.; Nocera, F.; Numata, K.; Ochsner, E.; O'Dell, J.; Ogin, G. H.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pagliaroli, G.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Papa, M. A.; Parameshwaraiah, V.; Pardi, S.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patel, P.; Pedraza, M.; Penn, S.; Perreca, A.; Persichetti, G.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pietka, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Postiglione, F.; Prato, M.; Principe, M.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Raab, F. J.; Rabaste, O.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raics, Z.; Rainer, N.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Rehbein, H.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Rivera, B.; Roberts, P.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Rolland, L.; Rollins, J.; Romano, J. D.; Romano, R.; Romie, J. 
H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Russell, P.; Ryan, K.; Sakata, S.; Salemi, F.; Sancho de la Jordana, L.; Sandberg, V.; Sannibale, V.; Santamaría, L.; Saraf, S.; Sarin, P.; Sassolas, B.; Sathyaprakash, B. S.; Sato, S.; Satterthwaite, M.; Saulson, P. R.; Savage, R.; Savov, P.; Scanlan, M.; Schilling, R.; Schnabel, R.; Schofield, R.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Searle, A. C.; Sears, B.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sergeev, A.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sibley, A.; Siemens, X.; Sigg, D.; Sinha, S.; Sintes, A. M.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, N. D.; Somiya, K.; Sorazu, B.; Stein, A.; Stein, L. C.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S.; Stroeer, A.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, K.-X.; Sung, M.; Sutton, P. J.; Swinkels, B.; Szokoly, G. P.; Talukder, D.; Tang, L.; Tanner, D. B.; Tarabrin, S. P.; Taylor, J. R.; Taylor, R.; Terenzi, R.; Thacker, J.; Thorne, K. A.; Thorne, K. S.; Thüring, A.; Tokmakov, K. V.; Toncelli, A.; Tonelli, M.; Torres, C.; Torrie, C.; Tournefier, E.; Travasso, F.; Traylor, G.; Trias, M.; Trummer, J.; Ugolini, D.; Ulmen, J.; Urbanek, K.; Vahlbruch, H.; Vajente, G.; Vallisneri, M.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van der Sluys, M. V.; van Veggel, A. A.; Vass, S.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P.; Veltkamp, C.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A.; Vinet, J.-Y.; Vocca, H.; Vorvick, C.; Vyachanin, S. P.; Waldman, S. J.; Wallace, L.; Ward, R. L.; Was, M.; Weidner, A.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wen, S.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; Whiting, B. F.; Wilkinson, C.; Willems, P. A.; Williams, H. R.; Williams, L.; Willke, B.; Wilmut, I.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Woan, G.; Wooley, R.; Worden, J.; Wu, W.; Yakushin, I.; Yamamoto, H.; Yan, Z.; Yoshida, S.; Yvert, M.; Zanolin, M.; Zhang, J.; Zhang, L.; Zhao, C.; Zotov, N.; Zucker, M. E.; zur Mühlen, H.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2010-06-01

    We present the results of a search for gravitational-wave bursts (GWBs) associated with 137 gamma-ray bursts (GRBs) that were detected by satellite-based gamma-ray experiments during the fifth LIGO science run and first Virgo science run. The data used in this analysis were collected from 2005 November 4 to 2007 October 1, and most of the GRB triggers were from the Swift satellite. The search uses a coherent network analysis method that takes into account the different locations and orientations of the interferometers at the three LIGO-Virgo sites. We find no evidence for GWB signals associated with this sample of GRBs. Using simulated short-duration (<1 s) waveforms, we set upper limits on the amplitude of gravitational waves associated with each GRB. We also place lower bounds on the distance to each GRB under the assumption of a fixed energy emission in gravitational waves, with a median limit of D ~ 12 Mpc (E_GW^iso / 0.01 M_sun c^2)^(1/2) for emission at frequencies around 150 Hz, where the LIGO-Virgo detector network has best sensitivity. We present astrophysical interpretations and implications of these results, and prospects for corresponding searches during future LIGO-Virgo runs.

  6. ARECIBO PALFA SURVEY AND EINSTEIN@HOME: BINARY PULSAR DISCOVERY BY VOLUNTEER COMPUTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knispel, B.; Allen, B.; Aulbert, C.

    2011-05-01

    We report the discovery of the 20.7 ms binary pulsar J1952+2630, made using the distributed computing project Einstein@Home in Pulsar ALFA survey observations with the Arecibo telescope. Follow-up observations with the Arecibo telescope confirm the binary nature of the system. We obtain a circular orbital solution with an orbital period of 9.4 hr, a projected orbital radius of 2.8 lt-s, and a mass function of f = 0.15 M_sun by analysis of spin period measurements. No evidence of orbital eccentricity is apparent; we set a 2σ upper limit e ≲ 1.7 × 10⁻³. The orbital parameters suggest a massive white dwarf companion with a minimum mass of 0.95 M_sun, assuming a pulsar mass of 1.4 M_sun. Most likely, this pulsar belongs to the rare class of intermediate-mass binary pulsars. Future timing observations will aim to determine the parameters of this system further, measure relativistic effects, and elucidate the nature of the companion star.
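
    The quoted minimum companion mass follows from the standard binary mass function, f = (Mc sin i)³ / (Mp + Mc)², with the minimum Mc corresponding to an edge-on orbit (i = 90°). The sketch below solves this by bisection using the abstract's values; the root comes out near 0.94 M_sun, consistent with the quoted 0.95 M_sun given that f is rounded to two digits.

      f_obs, Mp = 0.15, 1.4   # mass function and assumed pulsar mass (solar masses)

      def g(Mc):
          # Zero of g(Mc) gives the minimum companion mass for i = 90 degrees.
          return Mc**3 / (Mp + Mc)**2 - f_obs

      lo, hi = 0.01, 10.0     # bracket: g(lo) < 0 < g(hi)
      for _ in range(60):     # bisection to high precision
          mid = 0.5 * (lo + hi)
          if g(mid) < 0.0:
              lo = mid
          else:
              hi = mid
      print(f"minimum companion mass ~ {0.5 * (lo + hi):.3f} Msun")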

  7. Dynamical Evolution of the Inner Heliosphere Approaching Solar Activity Maximum: Interpreting Ulysses Observations Using a Global MHD Model. Appendix 1

    NASA Technical Reports Server (NTRS)

    Riley, Pete; Mikic, Z.; Linker, J. A.

    2003-01-01

    In this study we describe a series of MHD simulations covering the time period from 12 January 1999 to 19 September 2001 (Carrington Rotation 1945 to 1980). This interval coincided with: (1) the Sun's approach toward solar maximum; and (2) Ulysses' second descent to the southern polar regions, rapid latitude scan, and arrival into the northern polar regions. We focus on the evolution of several key parameters during this time, including the photospheric magnetic field, the computed coronal hole boundaries, the computed velocity profile near the Sun, and the plasma and magnetic field parameters at the location of Ulysses. The model results provide a global context for interpreting the often complex in situ measurements. We also present a heuristic explanation of stream dynamics to describe the morphology of interaction regions at solar maximum and contrast it with the picture that resulted from Ulysses' first orbit, which occurred during more quiescent solar conditions. The simulation results described here are available at: http://sun.saic.com.

  8. DYNACLIPS (DYNAmic CLIPS): A dynamic knowledge exchange tool for intelligent agents

    NASA Technical Reports Server (NTRS)

    Cengeloglu, Yilmaz; Khajenoori, Soheil; Linton, Darrell

    1994-01-01

    In a dynamic environment, intelligent agents must be responsive to unanticipated conditions. When such conditions occur, an intelligent agent may have to stop a previously planned and scheduled course of actions and replan, reschedule, start new activities, and initiate a new problem solving process to successfully respond to the new conditions. Problems occur when an intelligent agent does not have enough knowledge to properly respond to the new situation. DYNACLIPS is an implementation of a framework for dynamic knowledge exchange among intelligent agents. Each intelligent agent is a CLIPS shell and runs as a separate process under the SunOS operating system. Intelligent agents can exchange facts, rules, and CLIPS commands at run time. Knowledge exchange among intelligent agents at run time does not affect execution of either the sender or the receiver agent. Intelligent agents can keep the knowledge temporarily or permanently. In other words, knowledge exchange among intelligent agents allows a form of learning to be accomplished.

  9. Virtualization and cloud computing in dentistry.

    PubMed

    Chow, Frank; Muftu, Ali; Shorter, Richard

    2014-01-01

    The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer as a virtual machine (i.e., a virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management), since only one physical computer is needed and running. This virtualization platform is the basis for cloud computing, and it has expanded into areas of server and storage virtualization. One commonly used dental storage system is cloud storage: patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article provides some useful information on current uses of cloud computing.

  10. CRANS - CONFIGURABLE REAL-TIME ANALYSIS SYSTEM

    NASA Technical Reports Server (NTRS)

    Mccluney, K.

    1994-01-01

    In a real-time environment, the results of changes or failures in a complex, interconnected system need evaluation quickly. Tabulations showing the effects of changes and/or failures of a given item in the system are generally only useful for a single input, and only with regard to that item. Subsequent changes become harder to evaluate as combinations of failures produce a cascade effect. When confronted by multiple indicated failures in the system, it becomes necessary to determine a single cause. In this case, failure tables are not very helpful. CRANS, the Configurable Real-time ANalysis System, can interpret a logic tree, constructed by the user, describing a complex system, and determine the effects of changes and failures in it. Items in the tree are related to each other by Boolean operators. The user is then able to change the state of these items (ON/OFF, FAILED/UNFAILED). The program then evaluates the logic tree based on these changes and determines any resultant changes to other items in the tree. CRANS can also search for a common cause for multiple item failures, and allows the user to explore the logic tree from within the program. A "help" mode and a reference check provide the user with a means of exploring an item's underlying logic from within the program. A commonality check determines single-point failures for an item or group of items. Output is in the form of a user-defined matrix or matrices of colored boxes, each box representing an item or set of items from the logic tree. Input is via mouse selection of the matrix boxes, using the mouse buttons to toggle the state of the item. CRANS is written in C-language and requires the MIT X Window System, Version 11 Revision 4 or Revision 5. It requires 78K of RAM for execution and a three-button mouse. It has been successfully implemented on Sun4 workstations running SunOS, HP9000 workstations running HP-UX, and DECstations running ULTRIX. No executable is provided on the distribution medium; however, a sample makefile is included. Sample input files are also included. The standard distribution medium is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. This program was developed in 1992.
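
    A logic tree of items related by Boolean operators, re-evaluated after state toggles, is easy to picture in miniature. The sketch below is an invented toy, not CRANS itself: the item names and rules are hypothetical, and rules are assumed to be listed with dependencies before dependents.

      # Each derived item maps to a Boolean rule over other items' states.
      tree = {
          "bus_A": lambda s: s["gen_1"] or s["gen_2"],            # OR of two sources
          "pump":  lambda s: s["bus_A"] and not s["valve_failed"],
      }

      def evaluate(leaves):
          """Propagate leaf states up the tree and return all item states."""
          state = dict(leaves)
          for item, rule in tree.items():   # assumes dependency order
              state[item] = rule(state)
          return state

      # Toggling a leaf and re-evaluating shows the cascade effect.
      print(evaluate({"gen_1": True,  "gen_2": False, "valve_failed": False}))
      print(evaluate({"gen_1": False, "gen_2": False, "valve_failed": False}))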

  11. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, while other TCPs running in parallel provide high bandwidth service to a single application); and (3) coarse-grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism), also with near linear speed-ups.

  12. Particle-in-cell simulations of Earth-like magnetosphere during a magnetic field reversal

    NASA Astrophysics Data System (ADS)

    Barbosa, M. V. G.; Alves, M. V.; Vieira, L. E. A.; Schmitz, R. G.

    2017-12-01

    The geologic record shows that hundreds of pole reversals have occurred throughout Earth's history. The mean interval between pole reversals is roughly 200 to 300 thousand years, and the last reversal occurred around 780 thousand years ago. A pole reversal is a slow process, during which the strength of the magnetic field decreases and the field becomes more complex, with more than two poles appearing for some time; the field strength then increases with the opposite polarity. Throughout the process, the magnetic field configuration changes, leaving the Earth-like planet vulnerable to the harmful effects of the Sun. Understanding what happens to the magnetosphere during these pole reversals is an open topic of investigation. Only recently have PIC codes been used to model magnetospheres. Here we use the particle code iPIC3D [Markidis et al, Mathematics and Computers in Simulation, 2010] to simulate an Earth-like magnetosphere at three different times along the pole reversal process. The code was modified so that the Earth-like magnetic field is generated using an expansion in spherical harmonics with the Gauss coefficients given by an MHD simulation of the Earth's core [Glatzmaier et al, Nature, 1995; 1999; private communication to L.E.A.V.]. Simulations show the qualitative behavior of the magnetosphere, such as the current structures. Only the planetary magnetic field was changed between the runs; the solar wind is the same for all runs. Preliminary results show the formation of the Chapman-Ferraro current at the front of the magnetosphere in all cases. In the run for the middle of the reversal process, with its low-intensity, asymmetric field configuration, the current structure changes and the presence of multiple poles can be observed. In all simulations, a structure similar to the radiation belts was found. Simulations of more severe solar wind conditions are necessary to determine the real impact of the reversal on the magnetosphere.
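
    For intuition, the field built from Gauss coefficients can be sketched by keeping only the axial dipole term g10 of the spherical-harmonic expansion; the simulation described uses the full expansion with coefficients from the core model. The formula below is the standard axial-dipole expression; the numerical values are illustrative assumptions, not the study's coefficients.

      import numpy as np

      def dipole_field(r, theta, g10, a=1.0):
          """B_r and B_theta of an axial dipole; r in units of the planetary
          radius a, theta the colatitude, g10 the axial Gauss coefficient."""
          br = 2.0 * (a / r) ** 3 * g10 * np.cos(theta)
          bt = (a / r) ** 3 * g10 * np.sin(theta)
          return br, bt

      # Weakening g10 toward mid-reversal shrinks the dipole part everywhere;
      # higher harmonics (not shown) then dominate, giving multipolar structure.
      for g10 in (30_000.0, 5_000.0):   # nT, illustrative values only
          print(g10, dipole_field(2.0, np.pi / 4, g10))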

  13. High throughput profile-profile based fold recognition for the entire human proteome.

    PubMed

    McGuffin, Liam J; Smith, Richard T; Bryson, Kevin; Sørensen, Søren-Aksel; Jones, David T

    2006-06-07

    In order to maintain the most comprehensive structural annotation databases we must carry out regular updates for each proteome using the latest profile-profile fold recognition methods. The ability to carry out these updates on demand is necessary to keep pace with the regular updates of sequence and structure databases. Providing the highest quality structural models requires the most intensive profile-profile fold recognition methods running with the very latest available sequence databases and fold libraries. However, running these methods on such a regular basis for every sequenced proteome requires large amounts of processing power. In this paper we describe and benchmark the JYDE (Job Yield Distribution Environment) system, which is a meta-scheduler designed to work above cluster schedulers, such as Sun Grid Engine (SGE) or Condor. We demonstrate the ability of JYDE to distribute the load of genomic-scale fold recognition across multiple independent Grid domains. We use the most recent profile-profile version of our mGenTHREADER software in order to annotate the latest version of the Human proteome against the latest sequence and structure databases in as short a time as possible. We show that our JYDE system is able to scale to large numbers of intensive fold recognition jobs running across several independent computer clusters. Using our JYDE system we have been able to annotate 99.9% of the protein sequences within the Human proteome in less than 24 hours, by harnessing over 500 CPUs from 3 independent Grid domains. This study clearly demonstrates the feasibility of carrying out on demand high quality structural annotations for the proteomes of major eukaryotic organisms. Specifically, we have shown that it is now possible to provide complete regular updates of profile-profile based fold recognition models for entire eukaryotic proteomes, through the use of Grid middleware such as JYDE.

  14. Framework for architecture-independent run-time reconfigurable applications

    NASA Astrophysics Data System (ADS)

    Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.

    2000-10-01

    Configurable Computing Machines (CCMs) have emerged as a technology with the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing and is known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designer's effort to port applications between different platforms as the architecture, hardware, and software evolve. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.

  15. Evaluation of the Rational Environment

    DTIC Science & Technology

    1988-07-01

    Computer, Inc. Rational, R1000, and Rational Environment are trademarks of Rational. Smalltalk-80 is a trademark of Xerox. Sun is a trademark of Sun... Introduction; 1.1. Background; 1.2. The Rational Environment as Evaluated; 1.3. Scope of Evaluation; 1.4. Road Map for the Reader; 2. ... CMVC Implementation; 2.3.2. Workorder Management; ... 3. Capabilities of the Rational Environment; 3.1. ...

  16. Irregular-Mesh Terrain Analysis and Incident Solar Radiation for Continuous Hydrologic Modeling in Mountain Watersheds

    NASA Astrophysics Data System (ADS)

    Moreno, H. A.; Ogden, F. L.; Alvarez, L. V.

    2016-12-01

    This research work presents a methodology for estimating terrain slope, aspect (slope orientation), and total incoming solar radiation from Triangular Irregular Network (TIN) terrain models. The algorithm accounts for self-shading and cast shadows, sky view fractions for diffuse radiation, remote albedo, and atmospheric backscattering by using a vectorial approach within a topocentric coordinate system and establishing geometric relations between groups of TIN elements and the sun position. A normal vector to the surface of each TIN element describes slope and aspect, while spherical trigonometry allows computing a unit vector defining the position of the sun at each hour and day of the year. Thus, a dot product determines the radiation flux at each TIN element. Cast shadows are computed by scanning the projection of groups of TIN elements in the direction of the closest perpendicular plane to the sun vector, only within the visible horizon range. Sky view fractions, used to determine diffuse radiation, are computed by a simplified scanning algorithm from the highest to the lowest triangles along prescribed directions and visible distances. Finally, remote albedo is computed from the sky-view-fraction complementary functions for prescribed albedo values of the surrounding terrain, only for significant angles above the horizon. The sensitivity of the different radiative components to seasonal changes in weather and surrounding albedo (snow) is tested in a mountainous watershed in Wyoming. This methodology improves on current algorithms for computing terrain and radiation values on triangular-based models in an accurate and efficient manner. All terrain-related features (e.g., slope, aspect, sky view fraction) can be pre-computed and stored for easy access in a subsequent, progressive-in-time numerical simulation.
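
    The per-facet computation reduces to a cross product for the normal and a dot product with the unit sun vector. A minimal sketch follows, with illustrative geometry and a nominal beam irradiance; both are our assumptions, not the paper's data.

    import numpy as np

    def facet_radiation(v0, v1, v2, sun_dir, S0=1000.0):
        # sun_dir: unit vector toward the sun in a local east-north-up frame;
        # S0: nominal direct-beam irradiance [W/m^2] (assumed).
        n = np.cross(v1 - v0, v2 - v0)
        n = n / np.linalg.norm(n)
        if n[2] < 0:
            n = -n                                   # orient the normal upward
        slope = np.degrees(np.arccos(n[2]))          # angle from horizontal
        aspect = np.degrees(np.arctan2(n[0], n[1])) % 360.0  # clockwise from north
        flux = S0 * max(np.dot(n, sun_dir), 0.0)     # zero when the facet faces away
        return slope, aspect, flux

    v0, v1, v2 = np.array([0, 0, 0.0]), np.array([1, 0, 0.2]), np.array([0, 1, 0.1])
    sun = np.array([0.3, 0.3, 0.9])
    print(facet_radiation(v0, v1, v2, sun / np.linalg.norm(sun)))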

  17. Progress in Machine Learning Studies for the CMS Computing Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo

    Here, computing systems for the LHC experiments were developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for the LHC experiments and their recent evolution can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of the LHC experiments in Run-1 and Run-2 so far.

  18. Progress in Machine Learning Studies for the CMS Computing Infrastructure

    DOE PAGES

    Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo; ...

    2017-12-06

    Here, computing systems for the LHC experiments were developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for the LHC experiments and their recent evolution can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of the LHC experiments in Run-1 and Run-2 so far.

  19. CADNA_C: A version of CADNA for use with C or C++ programs

    NASA Astrophysics Data System (ADS)

    Lamotte, Jean-Luc; Chesneaux, Jean-Marie; Jézéquel, Fabienne

    2010-11-01

    The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. The CADNA_C version enables this estimation in C or C++ programs, while the previous version had been developed for Fortran programs. The CADNA_C version has the same features as the previous one: with CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. New version program summary. Program title: CADNA_C Catalogue identifier: AEGQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 60 075 No. of bytes in distributed program, including test data, etc.: 710 781 Distribution format: tar.gz Programming language: C++ Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM Operating system: LINUX, UNIX Classification: 6.5 Catalogue identifier of previous version: AEAT_v1_0 Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 933 Does the new version supersede the previous version?: No Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: The previous version (AEAT_v1_0) enables the estimation of round-off error propagation in Fortran programs [2]. The new version has been developed to enable this estimation in C or C++ programs. Summary of revisions: The CADNA_C source code consists of one assembly language file (cadna_rounding.s) and twenty-three C++ language files (including three header files). cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the C++ compiler used. This assembly file contains routines which are frequently called in the CADNA_C C++ files to change the rounding mode. The C++ language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA_C specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments.
As a remark, on 64-bit processors, the mathematical library associated with the GNU C++ compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore, if CADNA_C is used on a 64-bit processor with the GNU C++ compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the argument of a mathematical function is never lost. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf and a reference guide named ref_cadna.pdf. The user guide shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The reference guide briefly describes each function of the library. The source code (which consists of C++ and assembly files) is located in the src directory. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
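
    The Discrete Stochastic Arithmetic idea can be illustrated without CADNA itself: run the computation several times, randomly perturbing each intermediate result, and count the digits the samples share. In the sketch below, a one-ulp multiplicative perturbation stands in for CADNA's actual switching of the hardware rounding mode, so it is an emulation of the principle, not the library.

    import math, random

    def r(x):
        # Emulated random rounding: perturb each intermediate result by one ulp.
        # (CADNA switches the hardware rounding mode; this is only a stand-in.)
        return x * (1.0 + random.choice((-1.0, 1.0)) * 2.0**-52)

    def f(x):
        # The computation under test, instrumented operation by operation;
        # 1 - cos(x) suffers catastrophic cancellation for small x.
        return r(r(1.0 - r(math.cos(x))) / r(x * x))

    def exact_digits(vals):
        # Decimal digits shared by the samples, in the spirit of CADNA's estimate.
        mean = sum(vals) / len(vals)
        spread = max(abs(v - mean) for v in vals)
        return 15 if spread == 0.0 else max(0, min(15, int(-math.log10(spread / abs(mean)))))

    for x in (1.0, 1e-6):
        print(x, exact_digits([f(x) for _ in range(5)]))   # far fewer digits at 1e-6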

  20. Inlet Spillage Drag Predictions Using the AIRPLANE Code

    NASA Technical Reports Server (NTRS)

    Thomas, Scott D.; Won, Mark A.; Cliff, Susan E.

    1999-01-01

    AIRPLANE (Jameson/Baker) is a steady inviscid unstructured Euler flow solver. It has been validated on many HSR geometries. It is implemented as MESHPLANE, an unstructured mesh generator, and FLOPLANE, an iterative flow solver. The surface description from an Intergraph CAD system goes into MESHPLANE as collections of polygonal curves to generate the 3D mesh. The flow solver uses a multistage time-stepping scheme with residual averaging to approach steady state, but it is not time accurate. The flow solver was ported from Cray to IBM SP2 by Wu-Sun Cheng (IBM); it could only be run on 4 CPUs at a time because of memory limitations. Meshes for the four cases had about 655,000 points in the flow field, about 3.9 million tetrahedra, and about 77,500 points on the surface. The flow solver took about 23 wall-clock seconds per iteration when using 4 CPUs. It took about eight and a half wall-clock hours to run 1,300 iterations at a time (the queue limit is 10 hours). A revised version of FLOPLANE (Thomas) was used on up to 64 CPUs to finish some calculations at the end. We had to turn on more communication when using more processors to eliminate noise that was contaminating the flow field; this added about 50% to the elapsed wall time per iteration when using 64 CPUs. This study involved computing lift and drag for a wing/body/nacelle configuration at Mach 0.9 and 4 degrees pitch. Four cases were considered, corresponding to four nacelle mass-flow conditions.

  1. Preliminary Data Pipeline for SunRISE: Assessing the Performance of Space Based Radio Arrays

    NASA Astrophysics Data System (ADS)

    Hegedus, A. M.; Kasper, J. C.; Lazio, J.; Amiri, N.; Stuart, J.

    2017-12-01

    The Sun Radio Interferometer Space Experiment (SunRISE) is a NASA Heliophysics Explorer Mission of Opportunity that was recently awarded phase A funding. SunRISE's main science goals are to localize the source of particle acceleration in coronal mass ejections to 1/4 of their width, and trace the path of electron beams along magnetic field lines out to 20 solar radii. These processes generate cascading Type II and III bursts that have only ever been detected at low frequencies with single-spacecraft antennas. These bursts emit below the ionospheric cutoff of 10 MHz past 2 solar radii, so a synthetic aperture made from multiple space antennae is needed to pinpoint the origin of these bursts. In this work, we create an end-to-end simulation of the data processing pipeline of SunRISE, which uses 6 small satellites to do this localization. One of the main inputs of the simulation is a ground truth of what we want the array to image. We idealized this as an elliptical Gaussian offset from the sun, which previous modeling suggests is a good approximation of what SunRISE would see in space. Another input is an orbit file describing the positions of all the spacecraft. The simulated orbit determinations are made with GPS sidelobes and have an error associated with the recovered positions. From there we compute the Fourier coefficients every antenna will see, then apply the correct phase lags and multiply each pair of coefficients to simulate the process of correlation. We compute the projected UVW coordinates and put these along with the correlated visibilities into a CASA MS file. The correlated visibilities are compared to CASA's simulated visibilities at the same UVW coordinates, verifying the accuracy of our method. The visibilities are then subjected to realistic thermal noise, as well as phase noise from uncertainties in the spacecraft position. We employ CASA's CLEAN algorithm to image the data, and CASA's imfit algorithm to estimate the parameters of the imaged elliptical Gaussian, which we can compare directly to the input. We find that at the upper frequencies the phase noise can negatively affect the performance of the array, but for the large majority of the tracking range of interest, SunRISE can sufficiently resolve the radio bursts to fulfill its science requirements and constrain Solar Energetic Particle acceleration and transport.
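
    As a toy version of the forward step of such a pipeline, the visibilities of an axis-aligned elliptical Gaussian offset from phase center have a closed form that can be sampled at arbitrary (u, v) points and perturbed with thermal noise. The offsets, widths, baseline distribution, and noise level below are placeholder assumptions, not SunRISE values.

    import numpy as np

    def gaussian_vis(u, v, amp=1.0, l0=0.02, m0=0.0, sig_l=0.01, sig_m=0.005):
        # V(u,v) = amp * exp(-2 pi^2 (sig_l^2 u^2 + sig_m^2 v^2))
        #              * exp(-2 pi i (u l0 + v m0))
        # (l0, m0): source offset in direction cosines; sig_l, sig_m: widths.
        taper = np.exp(-2.0 * np.pi**2 * (sig_l**2 * u**2 + sig_m**2 * v**2))
        return amp * taper * np.exp(-2j * np.pi * (u * l0 + v * m0))

    rng = np.random.default_rng(0)
    u, v = rng.normal(0, 30, 15), rng.normal(0, 30, 15)  # 6 craft -> 15 baselines
    vis = gaussian_vis(u, v)
    vis = vis + rng.normal(0, 0.01, 15) + 1j * rng.normal(0, 0.01, 15)  # thermal noise
    print(np.round(np.abs(vis[:3]), 3))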

  2. Tube dynamics and low energy Earth-Moon transfers in the 4-body system

    NASA Astrophysics Data System (ADS)

    Onozaki, Kaori; Yoshimura, Hiroaki; Ross, Shane D.

    2017-11-01

    In this paper, we show a low energy Earth-Moon transfer in the context of the Sun-Earth-Moon-spacecraft 4-body system. We consider the 4-body system as the coupled system of the Sun-Earth-spacecraft 3-body system perturbed by the Moon (which we call the Moon-perturbed system) and the Earth-Moon-spacecraft 3-body system perturbed by the Sun (which we call the Sun-perturbed system). In both perturbed systems, analogs of the stable and unstable manifolds are computed numerically by using the notion of Lagrangian coherent structures, wherein the stable and unstable manifolds play the role of separating orbits into transit and non-transit orbits. We obtain a family of non-transit orbits departing from a low Earth orbit in the Moon-perturbed system, and a family of transit orbits arriving into a low lunar orbit in the Sun-perturbed system. Finally, we show that we can construct a low energy transfer from the Earth to the Moon by choosing appropriate trajectories from both families and patching these trajectories with a maneuver.

  3. Multitasking the code ARC3D. [for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Barton, John T.; Hsiung, Christopher C.

    1986-01-01

    The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.

  4. Study of the mapping of Navier-Stokes algorithms onto multiple-instruction/multiple-data-stream computers

    NASA Technical Reports Server (NTRS)

    Eberhardt, D. S.; Baganoff, D.; Stevens, K.

    1984-01-01

    Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.

  5. Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds

    NASA Technical Reports Server (NTRS)

    Jardin, Matthew R.

    2004-01-01

    A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum-time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real-time. By traveling minimum-time routes instead of great-circle (direct) routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real-time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts. For air-traffic-control automation, thousands of wind-optimal routes may need to be computed and checked for conflicts in just a few minutes. These factors motivated the need for a more efficient wind-optimal routing algorithm.

  6. Brief questions highlight the need for melanoma information campaigns.

    PubMed

    Foote, Janet A; Poole, Catherine M

    2013-12-01

    Melanoma awareness was briefly assessed at walk/runs held simultaneously in Philadelphia PA, Phoenix AZ, and Seattle WA. Of the participants, 75% (1,521) answered short questions during event registration. Among 1,036 respondents aged 14 years and older, 66% reported knowing melanoma warning signs. Significantly more respondents with melanoma family history reported having a physician-administered skin exam and knowing warning signs. More than one third of walk/run participants reported no definitive melanoma warning sign knowledge. Self-reported melanoma awareness and detection indices were lowest among participants in Phoenix, the event city with the greatest annual sun exposure. Educational efforts for melanoma awareness are critically needed. Selected results of this project were presented in a poster forum at the 2006 Congress for Epidemiology meeting held in Seattle, WA (June 2006).

  7. Analysis of model output and science data in the Virtual Model Repository (VMR).

    NASA Astrophysics Data System (ADS)

    De Zeeuw, D.; Ridley, A. J.

    2014-12-01

    Big scientific data includes not only large repositories of data from scientific platforms such as satellites and ground observatories, but also the vast output of numerical models. The Virtual Model Repository (VMR) provides scientific analysis and visualization tools for many numerical models of the Earth-Sun system. Individual runs can be analyzed in the VMR and compared to relevant data through associated metadata, but larger collections of runs can also now be studied and statistics generated on the accuracy and tendencies of model output. The vast model repository at the CCMC, with over 1000 simulations of the Earth's magnetosphere, was used to look at overall trends in accuracy when compared to satellites such as GOES, Geotail, and Cluster. Methodology for this analysis as well as case studies will be presented.

  8. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    PubMed Central

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data. PMID:22163811

  9. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    PubMed

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data.

  10. EnergyPlus Run Time Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.

  11. Solar-B E/PO Program at Chabot Space and Science Center, Oakland, California

    NASA Astrophysics Data System (ADS)

    Burress, B. S.

    2005-05-01

    Chabot Space and Science Center in Oakland, California, conducts the Education/Public Outreach program for the Lockheed-Martin Solar and Astrophysics Lab Solar-B Focal Plane Package project. Since opening its doors in August 2000, Chabot has carried out this program in activities and educational products in the public outreach, informal education, and formal education spheres. We propose a poster presentation that illustrates the spectrum of our Solar-B E/PO program. Solar-B, scheduled to launch in September 2006, is another step in an increasingly sophisticated investigation and understanding of our Sun, its behavior, and its effects on the Earth and our technological civilization. A mission of the Japan Aerospace Exploration Agency (JAXA), Solar-B is an international collaboration between Japan, the US/NASA, and the UK/PPARC. Solar-B's main optical telescope, extreme ultraviolet imaging spectrometer, and x-ray telescope will collect data on the Sun's magnetic dynamics from the photosphere through the corona at higher spatial and time resolution than on current and previous solar satellite missions, furthering our understanding of the Sun's behavior and, ultimately, its effects on the Earth. Chabot's E/PO program for the Lockheed-Martin Solar-B Focal Plane Package is multi-faceted, including elements focused on technology/engineering, solar physics, and Sun-Earth Connection themes. In the Public Outreach arena, we conduct events surrounding NASA Sun-Earth Day themes and programs, host other live and/or interactive events, facilitate live solar viewing, and present a series of exhibits focused on Solar-B and other space-based missions, the dynamic Sun, and light and optics. In the Informal Education sector we run a solar day camp for kids and produce educational products, including a poster on the Solar-B mission and CDROM multimedia packages. In Formal Education, we develop classroom curriculum guides and conduct workshops training teachers in their implementation. Our poster presentation will address the highlights of our program in all three of these areas.

  12. Solar wind acceleration in the solar corona

    NASA Technical Reports Server (NTRS)

    Giordano, S.; Antonucci, E.; Benna, C.; Kohl, J. L.; Noci, G.; Michels, J.; Fineschi, S.

    1997-01-01

    The intensity ratio of the O VI doublet in the extended corona is analyzed. The O VI intensity data were obtained with the ultraviolet coronagraph spectrometer (UVCS) during the SOHO campaign 'whole sun month'. The long term observations above the north pole of the sun were used for the polar coronal data. Using these measurements, the solar wind outflow velocity in the extended corona was determined. The 100 km/s level runs along the streamer borders. The acceleration of the solar wind is found to be high in regions between streamers. In the central part of streamers, the outflow velocity of the coronal plasma remains below 100 km/s at least within 3.8 solar radii. The regions at the north and south poles, characterized by a more rapid acceleration of the solar wind, correspond to regions where the UVCS observes enhanced O VI line broadenings.

  13. Themis - A solar power station

    NASA Astrophysics Data System (ADS)

    Hillairet, J.

    The organization, goals, equipment, costs, and performance of the French Themis (Thermo-helio-electric-MW) project are outlined. The program was begun for both the domestic energy market and for export. The installation comprises a molten eutectic salt loop which receives heat from radiators situated in a central tower. The salt transfers the heat to water for steam generation of electricity. A storage tank holds enough molten salt to supply one day's reserve of power, 40 MWh. A field of heliostats directs the sun's rays for an estimated 2400 hr/yr onto the central receiver aperture, while 11 additional parabolic concentrators provide sufficient heat to keep the salt reservoir at temperatures exceeding 200 C. In a test run of several months during the spring of 1982 the heliostats directed the sun's rays with an average efficiency of 75 percent, yielding 2.3 MW of power at a system efficiency of 20.5 percent in completely automatic operation.

  14. Compressed quantum computation using a remote five-qubit quantum computer

    NASA Astrophysics Data System (ADS)

    Hebenstreit, M.; Alsina, D.; Latorre, J. I.; Kraus, B.

    2017-05-01

    The notion of compressed quantum computation is employed to simulate the Ising interaction of a one-dimensional chain consisting of n qubits using the universal IBM cloud quantum computer running on log2(n) qubits. The external field parameter that controls the quantum phase transition of this model translates into particular settings of the quantum gates that generate the circuit. We measure the magnetization, which displays the quantum phase transition, on a two-qubit system, which simulates a four-qubit Ising chain, and show its agreement with the theoretical prediction within a certain error. We also discuss the relevant point of how to assess errors when using a cloud quantum computer with a limited amount of runs. As a solution, we propose to use validating circuits, that is, to run independent controlled quantum circuits of similar complexity to the circuit of interest.
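
    The quantity being simulated can be cross-checked classically for such a small chain. The sketch below uses exact diagonalization with numpy, not the IBM circuit itself; the open boundary conditions and the choice of transverse magnetization are our assumptions.

    import numpy as np

    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])
    I2 = np.eye(2)

    def op(single, site, n):
        # Embed a single-site operator at `site` in an n-spin Hilbert space.
        out = np.array([[1.0]])
        for k in range(n):
            out = np.kron(out, single if k == site else I2)
        return out

    def ising(n, J, h):
        # H = -J sum_i sz_i sz_{i+1} - h sum_i sx_i (open chain)
        H = sum(-J * op(sz, i, n) @ op(sz, i + 1, n) for i in range(n - 1))
        return H + sum(-h * op(sx, i, n) for i in range(n))

    n, J = 4, 1.0
    for h in (0.1, 1.0, 2.0):           # below, near, and above the critical field
        w, vecs = np.linalg.eigh(ising(n, J, h))
        g = vecs[:, 0]                  # ground state
        mx = sum(g @ op(sx, i, n) @ g for i in range(n)) / n
        print(f"h={h}: <sx> = {mx:.3f}")  # transverse magnetization rises with h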

  15. Machine characterization and benchmark performance prediction

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.

    1988-01-01

    From runs of standard benchmarks or benchmark suites, it is not possible to characterize a machine or to predict the run time of other benchmarks which have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone and Sieve of Eratosthenes.
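
    The prediction step itself is just a weighted sum: the run time estimate is the dot product of operation counts (the program characterization) with per-operation times (the machine characterization). A sketch with made-up numbers:

    # Seconds per operation (machine characterization) and operation counts
    # (program characterization); all numbers are illustrative, not measured.
    machine = {"fadd": 6.0e-9, "fmul": 7.5e-9, "fdiv": 3.0e-8, "branch": 2.0e-9}
    program = {"fadd": 4.2e8, "fmul": 3.9e8, "fdiv": 1.0e7, "branch": 2.5e8}

    predicted = sum(program[op] * machine[op] for op in program)
    print(f"predicted run time: {predicted:.2f} s")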

  16. Probabilistic Solar Wind Forecasting Using Large Ensembles of Near-Sun Conditions With a Simple One-Dimensional "Upwind" Scheme

    NASA Astrophysics Data System (ADS)

    Owens, Mathew J.; Riley, Pete

    2017-11-01

    Long lead-time space-weather forecasting requires accurate prediction of the near-Earth solar wind. The current state of the art uses a coronal model to extrapolate the observed photospheric magnetic field to the upper corona, where it is related to solar wind speed through empirical relations. These near-Sun solar wind and magnetic field conditions provide the inner boundary condition to three-dimensional numerical magnetohydrodynamic (MHD) models of the heliosphere out to 1 AU. This physics-based approach can capture dynamic processes within the solar wind, which affect the resulting conditions in near-Earth space. However, this deterministic approach lacks a quantification of forecast uncertainty. Here we describe a complementary method to exploit the near-Sun solar wind information produced by coronal models and provide a quantitative estimate of forecast uncertainty. By sampling the near-Sun solar wind speed at a range of latitudes about the sub-Earth point, we produce a large ensemble (N = 576) of time series at the base of the Sun-Earth line. Propagating these conditions to Earth by a three-dimensional MHD model would be computationally prohibitive; thus, a computationally efficient one-dimensional "upwind" scheme is used. The variance in the resulting near-Earth solar wind speed ensemble is shown to provide an accurate measure of the forecast uncertainty. Applying this technique over 1996-2016, the upwind ensemble is found to provide a more "actionable" forecast than a single deterministic forecast; potential economic value is increased for all operational scenarios, but particularly when false alarms are important (i.e., where the cost of taking mitigating action is relatively large).
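
    A one-dimensional upwind scheme of this kind is short enough to sketch. The toy below is our reconstruction, not the authors' code: it advects an inner-boundary speed series outward by upwind-differencing dv/dt + v dv/dr = 0, the cheap solve that makes propagating all 576 ensemble members affordable. Grid sizes, the boundary series, and the inner-boundary radius are illustrative assumptions.

    import numpy as np

    AU = 1.496e11                       # Sun-Earth distance [m]
    R0 = 30 * 6.96e8                    # inner boundary at 30 solar radii [m] (assumed)
    nr = 300
    r = np.linspace(R0, AU, nr)
    dr = r[1] - r[0]
    dt = 3600.0                         # 1-hour steps; v*dt/dr << 1, so stable

    hours = np.arange(24 * 27)          # one 27-day rotation of boundary data
    v_in = 4e5 + 3e5 * (np.sin(2 * np.pi * hours / hours.size) > 0)  # slow/fast [m/s]

    v = np.full(nr, 4e5)                # start with uniform slow wind
    for vb in v_in:
        v[0] = vb                       # time-varying inner boundary (one ensemble member)
        v[1:] = v[1:] - dt * v[1:] * (v[1:] - v[:-1]) / dr   # first-order upwind update
    print(f"speed at 1 AU after {hours.size} h: {v[-1] / 1e3:.0f} km/s")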

  17. Probabilistic Solar Wind Forecasting Using Large Ensembles of Near-Sun Conditions With a Simple One-Dimensional "Upwind" Scheme.

    PubMed

    Owens, Mathew J; Riley, Pete

    2017-11-01

    Long lead-time space-weather forecasting requires accurate prediction of the near-Earth solar wind. The current state of the art uses a coronal model to extrapolate the observed photospheric magnetic field to the upper corona, where it is related to solar wind speed through empirical relations. These near-Sun solar wind and magnetic field conditions provide the inner boundary condition to three-dimensional numerical magnetohydrodynamic (MHD) models of the heliosphere out to 1 AU. This physics-based approach can capture dynamic processes within the solar wind, which affect the resulting conditions in near-Earth space. However, this deterministic approach lacks a quantification of forecast uncertainty. Here we describe a complementary method to exploit the near-Sun solar wind information produced by coronal models and provide a quantitative estimate of forecast uncertainty. By sampling the near-Sun solar wind speed at a range of latitudes about the sub-Earth point, we produce a large ensemble (N = 576) of time series at the base of the Sun-Earth line. Propagating these conditions to Earth by a three-dimensional MHD model would be computationally prohibitive; thus, a computationally efficient one-dimensional "upwind" scheme is used. The variance in the resulting near-Earth solar wind speed ensemble is shown to provide an accurate measure of the forecast uncertainty. Applying this technique over 1996-2016, the upwind ensemble is found to provide a more "actionable" forecast than a single deterministic forecast; potential economic value is increased for all operational scenarios, but particularly when false alarms are important (i.e., where the cost of taking mitigating action is relatively large).

  18. Aerosol Optical Depth Determinations for BOREAS

    NASA Technical Reports Server (NTRS)

    Wrigley, R. C.; Livingston, J. M.; Russell, P. B.; Guzman, R. P.; Ried, D.; Lobitz, B.; Peterson, David L. (Technical Monitor)

    1994-01-01

    Automated tracking sun photometers were deployed by NASA/Ames Research Center aboard the NASA C-130 aircraft and at a ground site for all three Intensive Field Campaigns (IFCs) of the Boreal Ecosystem-Atmosphere Study (BOREAS) in central Saskatchewan, Canada during the summer of 1994. The sun photometer data were used to derive aerosol optical depths for the total atmospheric column above each instrument. The airborne tracking sun photometer obtained data in both the southern and northern study areas at the surface prior to takeoff, along low altitude runs near the ground tracking sun photometer, during ascents to 6-8 km msl, along remote sensing flightlines at altitude, during descents to the surface, and at the surface after landing. The ground sun photometer obtained data from the shore of Candle Lake in the southern area for all cloud-free times. During the first IFC in May-June, ascents and descents of the airborne tracking sun photometer indicated the aerosol optical depths decreased steadily from the surface to 3.5 km, where they leveled out at approximately 0.05 (at 525 nm), well below levels caused by the eruption of Mt. Pinatubo. On a very clear day, May 31st, surface optical depths measured by either the airborne or ground sun photometers approached those levels (0.06-0.08 at 525 nm), but surface optical depths were often several times higher. On June 4th they increased from 0.12 in the morning to 0.20 in the afternoon, with some evidence of brief episodes of pollen bursts. During the second IFC, surface aerosol optical depths were variable in the extreme due to smoke from western forest fires. On July 20th the aerosol optical depth at 525 nm decreased from 0.5 in the morning to 0.2 in the afternoon; it decreased still further the next day to 0.05 and remained consistently low throughout the day, providing excellent conditions for several remote sensing missions flown that day. Smoke was heavy for the early morning of July 24th but cleared partially by 10:30 local time and cleared fully by 11:30. Heavy smoke characterized the rest of the IFC in both study areas.

  19. Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP

    NASA Technical Reports Server (NTRS)

    Long, Lyle N.; Brentner, Kenneth S.

    2000-01-01

    This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
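
    The self-scheduling pattern is easy to sketch with a worker pool: the next pending serial case goes to whichever worker frees up first. The stand-in case below (one independent linear solve per angle of attack, echoing the panel-method example) is our illustration, not the paper's WOPWOP driver.

    import multiprocessing as mp
    import numpy as np

    def serial_case(alpha):
        # Stand-in for one quick serial run, e.g. a panel-method solve at one
        # angle of attack: a single independent linear system.
        rng = np.random.default_rng(int(alpha * 10))
        A = rng.random((200, 200)) + 200.0 * np.eye(200)
        x = np.linalg.solve(A, np.ones(200))
        return alpha, float(x.sum())

    if __name__ == "__main__":
        cases = [a / 2 for a in range(41)]        # angles of attack 0..20 deg
        with mp.Pool() as pool:
            # imap_unordered is self-scheduling: each worker pulls the next
            # case as soon as it finishes its current one.
            for alpha, result in pool.imap_unordered(serial_case, cases):
                print(f"alpha={alpha:5.1f} deg -> {result:.4f}")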

  20. A computer simulation model to compute the radiation transfer of mountainous regions

    NASA Astrophysics Data System (ADS)

    Li, Yuguang; Zhao, Feng; Song, Rui

    2011-11-01

    In mountainous regions, the radiometric signal recorded at the sensor depends on a number of factors such as sun angle, atmospheric conditions, surface cover type, and topography. In this paper, a computer simulation model of radiation transfer is designed and evaluated. This model implements the Monte Carlo ray-tracing techniques and is specifically dedicated to the study of light propagation in mountainous regions. The radiative processes between sunlight and the objects within the mountainous region are realized by using forward Monte Carlo ray-tracing methods. The performance of the model is evaluated through detailed comparisons with the well-established 3D computer simulation model: RGM (Radiosity-Graphics combined Model) based on the same scenes and identical spectral parameters, which shows good agreement between these two models' results. By using the newly developed computer model, series of typical mountainous scenes are generated to analyze the physical mechanism of mountainous radiation transfer. The results show that the effects of the adjacent slopes are important for deep valleys and they particularly affect shadowed pixels, and the topographic effect needs to be considered in mountainous terrain before accurate inferences from remotely sensed data can be made.
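
    One ingredient of such a model, the sky-view factor under a terrain horizon, makes a compact forward Monte Carlo illustration: sample cosine-weighted sky directions and count those that clear the horizon. The horizon profile below is an invented stand-in for real terrain, not the paper's scenes.

    import numpy as np

    rng = np.random.default_rng(1)

    def horizon_elev(azimuth):
        # Assumed terrain: ~25-degree ridges to the east and west.
        return np.radians(25.0) * np.abs(np.cos(azimuth))

    N = 100_000
    az = rng.uniform(0.0, 2.0 * np.pi, N)
    elev = np.arcsin(np.sqrt(rng.uniform(0.0, 1.0, N)))   # cosine-weighted sky samples
    visible = elev > horizon_elev(az)
    print(f"sky-view factor ~ {visible.mean():.3f}")      # 1.0 for a flat, open site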

  1. A Genetic Algorithm for UAV Routing Integrated with a Parallel Swarm Simulation

    DTIC Science & Technology

    2005-03-01

    Metrics. 2.3.5.1 Amdahl’s, Gustafson-Barsis’s, and Sun-Ni’s Laws. At the heart of parallel computing is the ratio of communication time to... parallel execution. Three ‘laws’ in particular are of interest with regard to this ratio: Amdahl’s Law, Gustafson-Barsis’s Law, and Sun-Ni’s Law... Amdahl’s Law makes the case for fixed-size speedup. This conjecture states that speedup saturates and efficiency drops as a consequence of holding the
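
    For reference, the fixed-size speedup bound the excerpt refers to is Amdahl's Law, S(n) = 1 / ((1 - p) + p/n) for parallel fraction p, while Gustafson-Barsis gives the scaled speedup S(n) = (1 - p) + p n. A small worked comparison:

    def amdahl(p, n):
        # Fixed-size speedup: the serial fraction (1 - p) caps speedup at 1/(1 - p).
        return 1.0 / ((1.0 - p) + p / n)

    def gustafson(p, n):
        # Scaled speedup: the problem grows with n, so speedup keeps growing.
        return (1.0 - p) + p * n

    p = 0.95
    for n in (2, 8, 64, 1024):
        print(n, round(amdahl(p, n), 1), round(gustafson(p, n), 1))
    # Amdahl saturates near 1/(1 - p) = 20; Gustafson's scaled speedup does not.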

  2. Airborne tracking sunphotometer apparatus and system

    NASA Technical Reports Server (NTRS)

    Matsumoto, Yutaka (Inventor); Mina, Cesar (Inventor); Russell, Philip B. (Inventor); Vanark, William B. (Inventor)

    1987-01-01

    An airborne tracking Sun photometer apparatus has a rotatable dome. An azimuth drive motor is connected to rotate the dome. The dome has an equatorial slot. A cylindrical housing is pivotally mounted inside the dome at the equatorial slot. A photometer is mounted in the housing to move in the equatorial slot as the housing pivots. The photometer has an end facing from the slot with an optical flat transparent window. An elevation drive motor is connected to pivot the cylindrical housing. The rotatable dome is mounted in the bulkhead of an aircraft to extend from the interior of the aircraft. A Sun sensor causes the photometer to track the Sun automatically. Alternatively, the photometer may be oriented manually or by computer.

  3. Summer weekend sun exposure and sunburn among a New Zealand urban population, 1994-2006.

    PubMed

    McLeod, Geraldine Geri F H; Reeder, Anthony I; Gray, Andrew R; McGee, Rob

    2013-08-30

    To describe summer weekend sun exposure and sunburn experience, 1994-2006, among urban New Zealanders (15-69 years) by sex, age group, skin type and outdoor activity type. A series of five telephone surveys undertaken in the summers of 1994, 1997, 1999-2000, 2002-3 and 2005-6 provided a sample of 6,195 respondents with usable data from five major cities (Auckland, Hamilton, Wellington, Christchurch and Dunedin). Respondents were administered a Computer Assisted Telephone Interview (CATI) questionnaire which sought sociodemographic information, sun exposure, and sunburn experience during the most recent weekend. Overall, 69% of the sample had spent at least 15 minutes outdoors between 11am and 4pm. Weekend sunburn was reported by 21%, and was more common among males, young adults and those with highly sun-sensitive skin than females, older adults and those with less sensitive skin. The head/face/neck was the body area most frequently and severely sunburned. Sunburn was associated with greater time spent outdoors and occurred most frequently during water-based (29%) and passive recreational activities (25%) and paid work (23%). Sun protection strategies could usefully be targeted not only towards at-risk population groups, but also towards those activities and contexts most strongly associated with potentially harmful sun exposure.

  4. Can we use the ozone response in a CCM to say which solar spectral irradiance is most likely correct?

    NASA Astrophysics Data System (ADS)

    Ball, William; Rozanov, Eugene; Shapiro, Anna

    2015-04-01

    Ozone plays a key role in the temperature structure of the Earth's atmosphere and absorbs damaging ultraviolet (UV) solar radiation. Evidence suggests that variations in stratospheric ozone resulting from changes in solar UV output may have an important role to play in weather over the North Atlantic and Europe on decadal timescales through a "top-down" coupling with the troposphere. However, the magnitude of the stratospheric response to the Sun over the 11-year solar cycle (SC) depends primarily on how much the UV changes. SC UV changes differ significantly between observational instruments, and between observations and models. The substantial disagreements between existing SSI datasets lead to different atmospheric responses when they are used in climate models and, therefore, we still cannot fully understand and simulate the ozone variability. We use the SOCOL chemistry-climate model, in specified dynamics mode, to calculate the atmospheric response from using different spectral irradiance from the SATIRE-S and NRLSSI models and with SORCE observations and a constant Sun. We compare the ozone and hydroxyl results from these runs with observations to try to determine which SSI dataset is most likely to be correct. This is important to get a better understanding of which SSI dataset should be used in climate modelling and what magnitude of UV variability the Sun has. This will lead to a better understanding of the Sun's influence upon our climate and weather.

  5. Counterfactual quantum computation through quantum interrogation

    NASA Astrophysics Data System (ADS)

    Hosten, Onur; Rakher, Matthew T.; Barreiro, Julio T.; Peters, Nicholas A.; Kwiat, Paul G.

    2006-02-01

    The logic underlying the coherent nature of quantum information processing often deviates from intuitive reasoning, leading to surprising effects. Counterfactual computation constitutes a striking example: the potential outcome of a quantum computation can be inferred, even if the computer is not run. Relying on similar arguments to interaction-free measurements (or quantum interrogation), counterfactual computation is accomplished by putting the computer in a superposition of `running' and `not running' states, and then interfering the two histories. Conditional on the as-yet-unknown outcome of the computation, it is sometimes possible to counterfactually infer information about the solution. Here we demonstrate counterfactual computation, implementing Grover's search algorithm with an all-optical approach. It was believed that the overall probability of such counterfactual inference is intrinsically limited, so that it could not perform better on average than random guesses. However, using a novel `chained' version of the quantum Zeno effect, we show how to boost the counterfactual inference probability to unity, thereby beating the random guessing limit. Our methods are general and apply to any physical system, as illustrated by a discussion of trapped-ion systems. Finally, we briefly show that, in certain circumstances, counterfactual computation can eliminate errors induced by decoherence.

  6. Normalization Of Thermal-Radiation Form-Factor Matrix

    NASA Technical Reports Server (NTRS)

    Tsuyuki, Glenn T.

    1994-01-01

    Report describes algorithm that adjusts form-factor matrix in TRASYS computer program, which calculates intraspacecraft radiative interchange among various surfaces and environmental heat loading from sources such as the sun.
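
    The core of such a normalization can be sketched simply: in a closed enclosure each row of the form-factor matrix should sum to 1, so rows carrying numerical error are rescaled before the radiative-exchange solve. This is our illustration of the idea, not the TRASYS algorithm, and it ignores the reciprocity condition A_i F_ij = A_j F_ji that a fuller adjustment would also enforce.

    import numpy as np

    # Raw form factors for a 3-surface enclosure (illustrative values whose
    # rows fall short of 1 because of numerical error in their computation).
    F = np.array([[0.00, 0.52, 0.46],
                  [0.26, 0.00, 0.71],
                  [0.23, 0.72, 0.00]])

    F_norm = F / F.sum(axis=1, keepdims=True)   # rescale so each row sums to 1
    print(F_norm.sum(axis=1))                   # -> [1. 1. 1.]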

  7. 4273π: Bioinformatics education on low cost ARM hardware

    PubMed Central

    2013-01-01

    Background Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. Results We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012–2013. Conclusions 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost. PMID:23937194

  8. 4273π: bioinformatics education on low cost ARM hardware.

    PubMed

    Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D

    2013-08-12

    Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.

  9. Lunar ephemeris and selenographic coordinates of the earth and sun for 1971 and 1972

    NASA Technical Reports Server (NTRS)

    Hartung, A. D.

    1972-01-01

    Ephemeris data are presented for each month of 1971 and 1972 to provide a time history of lunar coordinates and related geometric information. A NASA Manned Spacecraft Center modification of the Jet Propulsion Laboratory ephemeris tape was used to calculate and plot coordinates of the earth, moon, and sun. The ephemeris is referenced to the mean vernal equinox at the nearest beginning of a Besselian year. Therefore, the reference equinox changes from one year to the next between 30 June and 1 July. The apparent discontinuity in the data is not noticeable in the graphical presentation, but can be observed in the digital output. The mean equator of epoch is used in all cases. The computer program used to compute and plot the ephemeris data is described.

  10. Lunar ephemeris and selenographic coordinates of the earth and sun for 1979 and 1980

    NASA Technical Reports Server (NTRS)

    Hartung, A. D.

    1972-01-01

    Ephemeris data are presented in sections for each month for 1979 and 1980 to provide a time history of lunar coordinates and related geometric information. A NASA Manned Spacecraft Center modification of an ephemeris tape was used to calculate and plot coordinates of the earth, moon, and sun. The ephemeris is referenced to the mean vernal equinox at the nearest beginning of a Besselian year. Therefore, the reference equinox changes from one year to the next between 30 June and 1 July. The apparent discontinuity in the data is not noticeable in the graphical presentation, but can be observed in the digital output. The mean equator of epoch is used in all cases. The computer program used to compute and plot the ephemeris data is described in the appendix.

  11. Lunar ephemeris and selenographic coordinates of the earth and sun for 1983 and 1984

    NASA Technical Reports Server (NTRS)

    Hartung, A. D.

    1972-01-01

    Ephemeris data are presented in sections for each month for 1983 and 1984 to provide a time history of lunar coordinates and related geometric information. A NASA Manned Spacecraft Center modification of an ephemeris tape was used to calculate and plot coordinates of the earth, moon, and sun. The ephemeris is referenced to the mean vernal equinox at the nearest beginning of a Besselian year. Therefore, the reference equinox changes from one year to the next between 30 June and 1 July. The apparent discontinuity in the data is not noticeable in the graphical presentation, but can be observed in the digital output. The mean equator of epoch is used in all cases. The computer program used to compute and plot the ephemeris data is described in the appendix.

  12. Lunar ephemeris and selenographic coordinates of the earth and sun for 1977 and 1978

    NASA Technical Reports Server (NTRS)

    Hartung, A. D.

    1972-01-01

    Ephemeris data are presented in sections for each month for 1977 and 1978 to provide a time history of lunar coordinates and related geometric information. A NASA Manned Spacecraft Center modification of an ephemeris tape was used to calculate and plot coordinates of the earth, moon, and sun. The ephemeris is referenced to the mean vernal equinox at the nearest beginning of a Besselian year. Therefore, the reference equinox changes from one year to the next between June 30 and July 1. The apparent discontinuity in the data is not noticeable in the graphical presentation, but can be observed in the digital output. The mean equator of epoch is used in all cases. The computer program used to compute and plot the ephemeris data is described in the appendix.

  13. Lunar ephemeris and selenographic coordinates of the earth and sun for 1973 and 1974

    NASA Technical Reports Server (NTRS)

    Hartung, A. D.

    1972-01-01

    Ephemeris data are presented for each month of 1973 and 1974 to provide a time history of lunar coordinates and related geometric information. A NASA Manned Spacecraft Center modification of the Jet Propulsion Laboratory ephemeris tape was used to calculate and plot coordinates of the earth, moon, and sun. The ephemeris is referenced to the mean vernal equinox at the nearest beginning of a Besselian year. Therefore, the reference equinox changes from one year to the next between 30 June and 1 July. The apparent discontinuity in the data is not noticeable in the graphical presentation, but can be observed in the digital output. The mean equator of epoch is used in all cases. The computer program used to compute and plot the ephemeris data is described.

  14. Statistical fingerprinting for malware detection and classification

    DOEpatents

    Prowell, Stacy J.; Rathgeb, Christopher T.

    2015-09-15

    A system detects malware in a computing architecture with an unknown pedigree. The system includes a first computing device having a known pedigree and operating free of malware. The first computing device executes a series of instrumented functions that, when executed, provide a statistical baseline that is representative of the time it takes a known software application to run on a computing device having a known pedigree. A second computing device executes a second series of instrumented functions that, when executed, provides an actual time that is representative of the time the known software application takes to run on the second computing device. The system detects malware when there is a difference in execution times between the first and the second computing devices.
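
    The timing-fingerprint idea above can be sketched in a few lines. This is a minimal illustration, not the patented system: build a baseline of execution times on a trusted machine, then flag a machine whose times deviate beyond a chosen threshold (the function names, trial count, and the 4-sigma cutoff are all illustrative assumptions).

        import statistics
        import time

        def timed_run(func, *args):
            """Wall-clock time of one instrumented call."""
            start = time.perf_counter()
            func(*args)
            return time.perf_counter() - start

        def baseline(func, args, trials=50):
            """Timing baseline collected on a machine with a known pedigree."""
            samples = [timed_run(func, *args) for _ in range(trials)]
            return statistics.mean(samples), statistics.stdev(samples)

        def looks_anomalous(func, args, mean, stdev, k=4.0):
            """Flag a run whose time deviates more than k standard deviations."""
            return abs(timed_run(func, *args) - mean) > k * stdev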

  15. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA, the Production and Distributed Analysis workload management system, was developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects to use PanDA beyond HEP and Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month, even running on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline, we split the input files into chunks, which are processed separately on different nodes as independent PALEOMIX inputs, and finally merge the output files; this is very similar to how ATLAS processes and simulates data. We dramatically decreased the total walltime through automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and Grid can reduce payload execution time for mammoth DNA samples from weeks to days.
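
    The split-run-merge pattern described in the abstract can be sketched as follows; this is a generic scatter-gather illustration, not the actual PanDA/PALEOMIX interface, and run_pipeline is a placeholder for submitting one chunk to a worker node.

        from concurrent.futures import ProcessPoolExecutor

        def split(records, n_chunks):
            """Partition the input into contiguous chunks, one per worker job."""
            size = -(-len(records) // n_chunks)      # ceiling division
            return [records[i:i + size] for i in range(0, len(records), size)]

        def run_pipeline(chunk):
            """Placeholder for running PALEOMIX on one chunk on a remote node."""
            return [r.upper() for r in chunk]

        def scatter_gather(records, n_chunks=8):
            """Run chunks in parallel and merge the partial outputs in order."""
            with ProcessPoolExecutor() as pool:
                partial = pool.map(run_pipeline, split(records, n_chunks))
            return [r for part in partial for r in part]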

  16. Superplastic Behavior of Ti-6Al-4V-0.1B Alloy (Preprint)

    DTIC Science & Technology

    2011-10-01

    Scott (UES, Inc.) for help with running the high temperature tension tests. The Ti-6Al-4V-0.1B sheets used in this study were fabricated in...collaboration with Scott Reed (Flowserve) and Oscar Yu (RTI) under EMTEC Project CT-86. 6 Approved for public release; distribution unlimited. References...Sun, M. Bennett, and J.M. Scott , “Production of Plates and Sheets from As-Cast Ti-6Al-4V via Boron Modification”, in: Ti-2007 Science and Technology

  17. A Visual Editor in Java for View

    NASA Technical Reports Server (NTRS)

    Stansifer, Ryan

    2000-01-01

    In this project we continued the development of a visual editor in the Java programming language to create screens on which to display real-time data. The data comes from the numerous systems monitoring the operation of the space shuttle while on the ground and in space, and from the many tests of subsystems. The data can be displayed on any computer platform running a Java-enabled World Wide Web (WWW) browser and connected to the Internet. Previously, a special-purpose program had been written to display data on emulations of the character-based display screens used for many years at NASA. The goal now is to display bit-mapped screens created by a visual editor. We report here on the visual editor that creates the display screens. This project continues the work we had done previously. Previously we had followed the design of the 'beanbox,' a prototype visual editor created by Sun Microsystems. We abandoned this approach and implemented a prototype using a more direct approach. In addition, our prototype is based on the newly released Java 2 graphical user interface (GUI) libraries. The result has been a visually more appealing appearance and a more robust application.

  18. Prestack depth migration for complex 2D structure using phase-screen propagators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, P.; Huang, Lian-Jie; Burch, C.

    1997-11-01

    We present results for the phase-screen propagator method applied to prestack depth migration of the Marmousi synthetic data set. The data were migrated as individual common-shot records and the resulting partial images were superposed to obtain the final complete image. Tests were performed to determine the minimum number of frequency components required to achieve the best quality image, and this in turn provided estimates of the minimum computing time. Running on a single-processor SUN SPARC Ultra 1, high quality images were obtained in as little as 8.7 CPU hours and adequate images were obtained in as little as 4.4 CPU hours. Different methods were tested for choosing the reference velocity used for the background phase-shift operation and for defining the slowness perturbation screens. Although the depths of some of the steeply dipping, high-contrast features were shifted slightly, the overall image quality was fairly insensitive to the choice of the reference velocity. Our tests show the phase-screen method to be a reliable and fast algorithm for imaging complex geologic structures, at least for complex 2D synthetic data where the velocity model is known.
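
    For orientation, the core of a phase-screen propagator is a split-step update per depth level: a background phase shift in the wavenumber domain using a reference velocity, followed by a spatial "screen" correction for the slowness perturbation. Below is a minimal one-frequency 2D sketch (assumed grids and velocities; not the authors' code).

        import numpy as np

        def phase_screen_step(u, omega, dx, dz, v_ref, v):
            """Extrapolate one monochromatic wavefield u(x) down one depth step dz."""
            kx = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)
            kz = np.sqrt(np.maximum((omega / v_ref) ** 2 - kx ** 2, 0.0))
            # background phase shift for the reference velocity (wavenumber domain);
            # evanescent components receive no phase advance in this sketch
            u = np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * dz))
            # slowness-perturbation screen applied in the space domain
            return u * np.exp(1j * omega * (1.0 / v - 1.0 / v_ref) * dz)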

  19. IFDOTMETER: A New Software Application for Automated Immunofluorescence Analysis.

    PubMed

    Rodríguez-Arribas, Mario; Pizarro-Estrella, Elisa; Gómez-Sánchez, Rubén; Yakhine-Diop, S M S; Gragera-Hidalgo, Antonio; Cristo, Alejandro; Bravo-San Pedro, Jose M; González-Polo, Rosa A; Fuentes, José M

    2016-04-01

    Most laboratories interested in autophagy use different imaging software for managing and analyzing heterogeneous parameters in immunofluorescence experiments (e.g., LC3-puncta quantification and determination of the number and size of lysosomes). One solution would be software that works on a user's laptop or workstation that can access all image settings and provide quick and easy-to-use analysis of data. Thus, we have designed and implemented an application called IFDOTMETER, which can run on all major operating systems because it has been programmed in Java (Sun Microsystems). Briefly, IFDOTMETER software has been created to quantify a variety of biological hallmarks, including mitochondrial morphology and nuclear condensation. The program interface is intuitive and user-friendly, making it useful for users not familiar with computers. By setting previously defined parameters, the software can automatically analyze a large number of images without the supervision of the researcher. Once analysis is complete, the results are stored in a spreadsheet. Using software for high-throughput cell image analysis offers researchers the possibility of performing comprehensive and precise analysis of a high number of images in an automated manner, making this routine task easier. © 2015 Society for Laboratory Automation and Screening.
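
    As a generic illustration of the kind of dot quantification such tools automate (this sketch is not the IFDOTMETER code): threshold a fluorescence image and count connected components above a size cutoff.

        import numpy as np
        from scipy import ndimage

        def count_puncta(image, threshold, min_pixels=4):
            """Count bright puncta in a 2D fluorescence image array."""
            mask = image > threshold                  # binarize
            labels, n = ndimage.label(mask)           # connected components
            sizes = ndimage.sum(mask, labels, range(1, n + 1))
            keep = sizes >= min_pixels                # drop single-pixel noise
            return int(keep.sum()), sizes[keep]       # count and puncta areas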

  20. XOP: a multiplatform graphical user interface for synchrotron radiation spectral and optics calculations

    NASA Astrophysics Data System (ADS)

    Sanchez del Rio, Manuel; Dejus, Roger J.

    1997-11-01

    XOP (X-ray OPtics utilities) is a graphical user interface (GUI) created to execute several computer programs that calculate the basic information needed by a synchrotron beamline scientist (designer or experimentalist). Typical examples of such calculations are: insertion device (undulator or wiggler) spectral and angular distributions, mirror and multilayer reflectivities, and crystal diffraction profiles. All programs are provided to the user under a unified GUI, which greatly simplifies their execution. The XOP optics applications (especially mirror calculations) take their basic input (optical constants, compound and mixture tables) from a flexible file-oriented database, which allows the user to select data from a large number of choices and also to customize their own data sets. XOP includes many mathematical and visualization capabilities. It also permits the combination of reflectivities from several mirrors and filters, and their effect, onto a source spectrum. This feature is very useful when calculating the thermal load on a series of optical elements. The XOP interface is written in IDL (the Interactive Data Language). An embedded version of XOP, which runs freely under most Unix platforms (HP, Sun, DEC, Linux, etc.) and under Windows 95 and NT, is available upon request.

  1. Numerical Analysis of Ginzburg-Landau Models for Superconductivity.

    NASA Astrophysics Data System (ADS)

    Coskun, Erhan

    Thin-film conventional, as well as high-T_c, superconductors of various geometric shapes placed under both uniform and variable-strength magnetic fields are studied using the universally accepted macroscopic Ginzburg-Landau model. A series of new theoretical results concerning the properties of the solution is presented using the semi-discrete time-dependent Ginzburg-Landau equations, a staggered grid setup, and natural boundary conditions. Efficient serial algorithms, including a novel adaptive algorithm, are developed and successfully implemented for solving the governing highly nonlinear parabolic system of equations. The refinement technique used in the adaptive algorithm is based on a modified forward Euler method, which was also developed by us to ease the restriction on time step size for stability considerations. Stability and convergence properties of the forward and modified forward Euler schemes are studied. Numerical simulations of various recent physical experiments of technological importance, such as vortex motion and pinning, are performed. The numerical code for solving the time-dependent Ginzburg-Landau equations is parallelized using BlockComm-Chameleon and PCN. The parallel code was run on the distributed-memory multiprocessors Intel iPSC/860 and IBM SP1 and on a cluster of Sun SPARC workstations, all located at the Mathematics and Computer Science Division, Argonne National Laboratory.

  2. Running R Statistical Computing Environment Software on the Peregrine

    Science.gov Websites

    R is a collaborative project that supports the development of new statistical methodologies and enjoys a large user base; it offers natural language support but runs in an English locale on Peregrine. The CRAN task view for High Performance Computing describes programming paradigms to better leverage modern HPC systems; please consult the distribution for further details.

  3. Dish layouts analysis method for concentrative solar power plant.

    PubMed

    Xu, Jinshan; Gan, Shaocong; Li, Song; Ruan, Zhongyuan; Chen, Shengyong; Wang, Yong; Gui, Changgui; Wan, Bin

    2016-01-01

    Designs that maximize the use of solar radiation for a given reflective area, without increasing the investment expense, are important in solar power plant construction. Here we provide a method that allows one to compute the shaded area at any given time, as well as the total shading effect over a day. By establishing a local coordinate system with the origin at the apex of a parabolic dish and the z-axis pointing to the sun, only neighboring dishes with [Formula: see text] would shade the dish when in tracking mode. This procedure reduces the required computational resources, simplifies the calculation and allows a quick search for the optimum layout by considering all aspects leading to an optimized arrangement: aspect ratio, shifting and rotation. Computer simulations, using information on a dish Stirling system as well as DNI data released by NREL, show that regular spacing is not an optimal layout; shifting and rotating columns by certain amounts can bring further benefits.
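
    A minimal sketch of the shading test described above, under simplifying assumptions (each dish aperture modeled as a circle of radius R facing the sun): project the dish-to-neighbor vector onto the plane normal to the sun direction; the neighbor shades the dish when it lies sun-side and the projected separation is below 2R.

        import numpy as np

        def shaded_area(p_dish, p_neighbor, sun_dir, R):
            """Overlap area of two radius-R circles in the plane normal to the sun."""
            s = np.asarray(sun_dir, float)
            s /= np.linalg.norm(s)
            d_vec = np.asarray(p_neighbor, float) - np.asarray(p_dish, float)
            if d_vec @ s <= 0:                 # neighbor is not on the sun side
                return 0.0
            d = np.linalg.norm(d_vec - (d_vec @ s) * s)   # projected separation
            if d >= 2 * R:
                return 0.0
            # standard circle-circle intersection area for equal radii
            return 2 * R**2 * np.arccos(d / (2 * R)) - 0.5 * d * np.sqrt(4 * R**2 - d**2)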

  4. High Resolution Nature Runs and the Big Data Challenge

    NASA Technical Reports Server (NTRS)

    Webster, W. Phillip; Duffy, Daniel Q.

    2015-01-01

    NASA's Global Modeling and Assimilation Office at Goddard Space Flight Center is undertaking a series of very computationally intensive Nature Runs and a downscaled reanalysis. The nature runs use the GEOS-5 as an Atmospheric General Circulation Model (AGCM), while the reanalysis uses the GEOS-5 in data assimilation mode. This paper presents computational challenges from three runs, two of which are AGCM runs and one a downscaled reanalysis using the full DAS. The nature runs will be completed at two surface grid resolutions, 7 and 3 kilometers, with 72 vertical levels. The 7 km run spanned 2 years (2005-2006) and produced 4 PB of data, while the 3 km run will span one year and generate 4 PB of data. The downscaled reanalysis (MERRA-2, the Modern-Era Reanalysis for Research and Applications) will cover 15 years and generate 1 PB of data. In our efforts to address the big data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS), a specialization of the concept of business process-as-a-service that is an evolving extension of IaaS, PaaS, and SaaS enabled by cloud computing. In this presentation, we describe two projects that demonstrate this shift. MERRA Analytic Services (MERRA/AS) is an example of cloud-enabled CAaaS. MERRA/AS enables MapReduce analytics over the MERRA reanalysis data collection by bringing together high-performance computing, scalable data management, and a domain-specific climate data services API. NASA's High-Performance Science Cloud (HPSC) is an example of the type of compute-storage fabric required to support CAaaS. The HPSC comprises a high-speed InfiniBand network, high-performance file systems and object storage, and virtual system environments specific to data-intensive science applications. These technologies provide a new tier in the data and analytic services stack that helps connect earthbound, enterprise-level data and computational resources to new customers and new mobility-driven applications and modes of work. In our experience, CAaaS lowers the barriers and risk to organizational change, fosters innovation and experimentation, and provides the agility required to meet our customers' increasing and changing needs.

  5. Improved programs for DNA and protein sequence analysis on the IBM personal computer and other standard computer systems.

    PubMed Central

    Mount, D W; Conrad, B

    1986-01-01

    We have previously described programs for a variety of types of sequence analysis (1-4). These programs have now been integrated into a single package. They are written in the standard C programming language and run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The programs are widely distributed and may be obtained from the authors as described below. PMID:3753780

  6. Why not make a PC cluster of your own? 5. AppleSeed: A Parallel Macintosh Cluster for Scientific Computing

    NASA Astrophysics Data System (ADS)

    Decyk, Viktor K.; Dauger, Dean E.

    We have constructed a parallel cluster consisting of Apple Macintosh G4 computers running both Classic Mac OS as well as the Unix-based Mac OS X, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. Unlike other Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.

  7. Solar Anomalous and Magnetospheric Particle Explorer attitude control electronics box design and performance

    NASA Technical Reports Server (NTRS)

    Chamberlin, K.; Clagett, C.; Correll, T.; Gruner, T.; Quinn, T.; Shiflett, L.; Schnurr, R.; Wennersten, M.; Frederick, M.; Fox, S. M.

    1993-01-01

    The Attitude Control Electronics (ACE) Box is the center of the Attitude Control Subsystem (ACS) for the Solar Anomalous and Magnetospheric Particle Explorer (SAMPEX) satellite. This unit is the single point interface for all of the ACS-related sensors and actuators. Commands and telemetry between the SAMPEX flight computer and the ACE Box are routed via a MIL-STD-1773 bus interface, through the use of an 80C85 processor. The ACE Box consists of the following electronic elements: power supply, momentum wheel driver, electromagnet driver, coarse sun sensor interface, digital sun sensor interface, magnetometer interface, and satellite computer interface. In addition, the ACE Box also contains an independent Safehold electronics package capable of keeping the satellite pitch axis pointing towards the sun. The ACE Box has dimensions of 24 x 31 x 8 cm, a mass of 4.3 kg, and an average power consumption of 10.5 W. This set of electronics was completely designed, developed, integrated, and tested by personnel at NASA GSFC. SAMPEX was launched on July 3, 1992, and the initial attitude acquisition was successfully accomplished via the analog Safehold electronics in the ACE Box. This acquisition scenario removed the excess body rates via magnetic control and precessed the satellite pitch axis to within 10 deg of the sun line. The performance of the SAMPEX ACS in general, and the ACE Box in particular, has been quite satisfactory.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Catena, Riccardo; Schwabe, Bodo, E-mail: riccardo.catena@theorie.physik.uni-goettingen.de, E-mail: bodo.schwabe@theorie.physik.uni-goettingen.de

    In the effective theory of isoscalar and isovector dark matter-nucleon interactions mediated by a heavy spin-1 or spin-0 particle, 8 isotope-dependent nuclear response functions can be generated in the dark matter scattering by nuclei. We compute the 8 nuclear response functions for the 16 most abundant elements in the Sun, i.e. H, ³He, ⁴He, ¹²C, ¹⁴N, ¹⁶O, ²⁰Ne, ²³Na, ²⁴Mg, ²⁷Al, ²⁸Si, ³²S, ⁴⁰Ar, ⁴⁰Ca, ⁵⁶Fe, and ⁵⁹Ni, through numerical shell model calculations. We use our response functions to compute the rate of dark matter capture by the Sun for all isoscalar and isovector dark matter-nucleon effective interactions, including several operators previously considered for dark matter direct detection only. We study in detail the dependence of the capture rate on specific dark matter-nucleon interaction operators, and on the different elements in the Sun. We find that a so far neglected momentum-dependent dark matter coupling to the nuclear vector charge gives a larger contribution to the capture rate than the constant spin-dependent interaction commonly included in dark matter searches at neutrino telescopes. Our investigation lays the foundations for model-independent analyses of dark matter induced neutrino signals from the Sun. The nuclear response functions obtained in this study are listed in analytic form in an appendix, ready to be used in other projects.

  9. I Workshop on Science and Astronomy at the DAM of the UB

    NASA Astrophysics Data System (ADS)

    Masana, E.; Ribas, S. J.; Jordi, C.; Gómez, V.

    The Department of Astronomy and Meteorology (DAM) of the University of Barcelona organized the I Workshop on Science and Astronomy for Youth in November 2007, with the title The Sun: Radiation and Gravitation, as one of its outreach activities for high school students. About 350 participants took part in four different activities during the Workshop. On the one hand, some days before the beginning of the activities, some DAM members went to the different high schools to present the sessions and introduce some key concepts needed to follow them. On the other hand, during their visit to the facilities of the Physics Faculty and the Astronomy Department of the University of Barcelona, the students took part in: an observation of the Sun, looking at sunspots, with a short lecture on safety rules for solar observation and on the Sun's structure and activity; a lecture titled Why do stars shine?; and a computer activity named Gravitation: Kepler's 3rd Law.

  10. Research on techniques for computer three-dimensional simulation of satellites and night sky

    NASA Astrophysics Data System (ADS)

    Yan, Guangwei; Hu, Haitao

    2007-11-01

    To study space attack-defense technology, a simulation of satellites is needed. We design and implement a 3D satellite simulation system in which the satellites are rendered against a night-sky background. The system structure is as follows: one computer is used to simulate the orbits of the satellites, and the other computers are used to render the 3D simulation scene. To get a realistic effect, a three-channel multi-projector display system is constructed. We use MultiGen Creator to construct the satellite and star models and MultiGen Distributed Vega to render the three-channel scene. One master controls three slaves, which render the three channels separately. To get the satellites' positions and attitudes, the master communicates with the satellite orbit simulator over the TCP/IP protocol. It then calculates the observer's position and the positions of the satellites, the moon, and the sun, and transmits the data to the slaves. To get a smooth orbit for the target satellites, an orbit prediction method is used. Because the target satellite data packets and the attack satellite data packets cannot stay synchronized in the network, the target satellite would appear to dither when the scene is rendered. To resolve this problem, an anti-dithering algorithm is designed (a simple form is sketched below). To render the night-sky background, a file storing the stars' position and brightness data is used. According to its brightness, each star is assigned a magnitude, and the star model is scaled accordingly. All the stars are distributed on a celestial sphere. Experiments show that the whole system runs correctly and the frame rate reaches 30 Hz. The system can be used in a space attack-defense simulation field.
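
    One simple form of such an anti-dithering scheme (an illustrative sketch, not the authors' algorithm) is to dead-reckon each target from its last received packet and to blend toward new packets instead of snapping to them:

        def extrapolate_position(last_pos, last_vel, last_time, now):
            """Dead-reckon a satellite position from the most recent packet."""
            dt = now - last_time
            return [p + v * dt for p, v in zip(last_pos, last_vel)]

        def on_new_packet(render_pos, packet_pos, alpha=0.2):
            """Blend toward the new packet to avoid visible position jumps."""
            return [r + alpha * (p - r) for r, p in zip(render_pos, packet_pos)]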

  11. NASA Tech Briefs, August 1995. Volume 19, No. 8

    NASA Technical Reports Server (NTRS)

    1995-01-01

    There is a special focus on computer graphics and simulation in this issue. Topics covered include: Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; and Mathematics and Information Sciences. There is a section on Laser Technology, which includes a feature on moving closer to the Sun's power.

  12. Optimal Jet Finder (v1.0 C++)

    NASA Astrophysics Data System (ADS)

    Chumakov, S.; Jankowski, E.; Tkachov, F. V.

    2006-10-01

    We describe a C++ implementation of the Optimal Jet Definition for the identification of jets in hadronic final states of particle collisions. We explain the interface subroutines and provide a usage example. The source code is available from http://www.inr.ac.ru/~ftkachov/projects/jets/. Program summary: Title of program: Optimal Jet Finder (v1.0 C++) Catalogue identifier: ADSB_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSB_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: any computer with a standard C++ compiler Tested with: GNU gcc 3.4.2, Linux Fedora Core 3, Intel i686; Forte Developer 7 C++ 5.4, SunOS 5.9, UltraSPARC III+; Microsoft Visual C++ Toolkit 2003 (compiler 13.10.3077, linker 7.10.3077, option /EHsc), Windows XP, Intel i686. Programming language used: C++ Memory required: ~1 MB (or more, depending on the settings) No. of lines in distributed program, including test data, etc.: 3047 No. of bytes in distributed program, including test data, etc.: 17 884 Distribution format: tar.gz Nature of physical problem: Analysis of hadronic final states in high energy particle collision experiments often involves identification of hadronic jets. A large number of hadrons detected in the calorimeter is reduced to a few jets by means of a jet finding algorithm. The jets are used in further analysis which would be difficult or impossible when applied directly to the hadrons. Grigoriev et al. [D.Yu. Grigoriev, E. Jankowski, F.V. Tkachov, Phys. Rev. Lett. 91 (2003) 061801] provide a brief introduction to the subject of jet finding algorithms, and a general review of the physics of jets can be found in [R. Barlow, Rep. Prog. Phys. 36 (1993) 1067]. Method of solution: The software we provide is an implementation of the so-called Optimal Jet Definition (OJD). The theory of OJD was developed in [F.V. Tkachov, Phys. Rev. Lett. 73 (1994) 2405; Erratum, Phys. Rev. Lett. 74 (1995) 2618; F.V. Tkachov, Int. J. Modern Phys. A 12 (1997) 5411; F.V. Tkachov, Int. J. Modern Phys. A 17 (2002) 2783]. The desired jet configuration is obtained as the one that minimizes Ω, a certain function of the input particles and the jet configuration. A FORTRAN 77 implementation of OJD is described in [D.Yu. Grigoriev, E. Jankowski, F.V. Tkachov, Comput. Phys. Comm. 155 (2003) 42]. Restrictions on the complexity of the program: Memory required by the program is proportional to the number of particles in the input × the number of jets in the output. For example, for 650 particles and 20 jets ~300 KB of memory is required. Typical running time: The running time (in the running mode with a fixed number of jets) is proportional to the number of particles in the input × the number of jets in the output × the number of different random initial configurations tried (ntries). For example, for 65 particles in the input and 4 jets in the output, the running time is ~4·10 s per try (Pentium 4, 2.8 GHz).

  13. A programmable optimization environment using the GAMESS-US and MERLIN/MCL packages. Applications on intermolecular interaction energies

    NASA Astrophysics Data System (ADS)

    Kalatzis, Fanis G.; Papageorgiou, Dimitrios G.; Demetropoulos, Ioannis N.

    2006-09-01

    The Merlin/MCL optimization environment and the GAMESS-US package were combined so as to offer an extended and efficient quantum chemistry optimization system, capable of implementing complex optimization strategies for generic molecular modeling problems. A communication and data exchange interface was established between the two packages, exploiting all Merlin features such as multiple optimizers, box constraints, user extensions and a high-level programming language. An important feature of the interface is its ability to perform dimer computations by eliminating the basis set superposition error using the counterpoise (CP) method of Boys and Bernardi. Furthermore it offers CP-corrected geometry optimizations using analytic derivatives. The unified optimization environment was applied to construct portions of the intermolecular potential energy surface of the weakly bound H-bonded complex C6H6-H2O by utilizing the high-level Merlin Control Language. The H-bonded dimer HF-H2O was also studied by CP-corrected geometry optimization. The ab initio electronic structure energies were calculated using the 6-31G** basis set at the restricted Hartree-Fock and second-order Møller-Plesset levels, while all geometry optimizations were carried out using a quasi-Newton algorithm provided by Merlin. Program summary: Title of program: MERGAM Catalogue identifier: ADYB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYB_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer for which the program is designed and others on which it has been tested: The program is designed for machines running the UNIX operating system. It has been tested on the following architectures: IA32 (Linux with gcc/g77 v.3.2.3), AMD64 (Linux with the Portland group compilers v.6.0), SUN64 (SunOS 5.8 with the Sun Workshop compilers v.5.2) and SGI64 (IRIX 6.5 with the MIPSpro compilers v.7.4) Installations: University of Ioannina, Greece Operating systems or monitors under which the program has been tested: UNIX Programming language used: ANSI C, ANSI Fortran-77 No. of lines in distributed program, including test data, etc.: 11 282 No. of bytes in distributed program, including test data, etc.: 49 458 Distribution format: tar.gz Memory required to execute with typical data: Memory requirements mainly depend on the selection of a GAMESS-US basis set and the number of atoms No. of bits in a word: 32 No. of processors used: 1 Has the code been vectorized or parallelized?: no Nature of physical problem: Multidimensional geometry optimization is of great importance in any ab initio calculation since it is usually one of the most CPU-intensive tasks, especially on large molecular systems. For example, the geometric and energetic description of van der Waals and weakly bound H-bonded complexes requires the construction of the important portions of the multidimensional intermolecular potential energy surface (IPES), so that the various widely held views about the nature of these bonds can be quantitatively tested. Method of solution: The Merlin/MCL optimization environment was interconnected with the GAMESS-US package to facilitate geometry optimization in quantum chemistry problems. Constructing the important portions of the IPES requires the capability to program optimization strategies; the Merlin/MCL environment was used to implement such strategies.
In this work, a CP-corrected geometry optimization was performed on the HF-H2O complex and an MCL program was developed to study portions of the potential energy surface of the C6H6-H2O complex. Restrictions on the complexity of the problem: The Merlin optimization environment and the GAMESS-US package must be installed. The MERGAM interface requires GAMESS-US input files that have been constructed in Cartesian coordinates. This restriction stems from a design-time requirement not to allow reorientation of atomic coordinates; this rule always holds when applying the COORD = UNIQUE keyword in a GAMESS-US input file. Typical running time: It depends on the size of the molecular system, the size of the basis set and the method of electron correlation. Execution of the test run took approximately 5 min on a 2.8 GHz Intel Pentium CPU.

  14. The Solar Ultraviolet Environment at the Ocean.

    PubMed

    Mobley, Curtis D; Diffey, Brian L

    2018-05-01

    Atmospheric and oceanic radiative transfer models were used to compute spectral radiances between 285 and 400 nm onto horizontal and vertical plane surfaces over water. The calculations kept track of the contributions by the sun's direct beam, by diffuse-sky radiance, by radiance reflected from the sea surface and by water-leaving radiance. Clear, hazy and cloudy sky conditions were simulated for a range of solar zenith angles, wind speeds and atmospheric ozone concentrations. The radiances were used to estimate erythemal exposures due to the sun and sky, as well as from radiation reflected by the sea surface and backscattered from the water column. Diffuse-sky irradiance is usually greater than direct-sun irradiance at wavelengths below 330 nm, and reflected and water-leaving irradiance accounts for <20% of the UV exposure on a vertical surface. Total exposure depends strongly on solar zenith angle and azimuth angle relative to the sun. Sea surface roughness affects the UV exposures by only a few percent. For very clear waters and the sun high in the sky, the UV index within the water can be >10 at depths down to 2 m and >6 down to 5 m. © 2018 The American Society of Photobiology.

  15. GOES Satellite Data Validation Via Hand-held 4 LED Sun Photometer at Norfolk State University

    NASA Technical Reports Server (NTRS)

    Reynolds, Arthur, Jr.; Jackson, Tyrone; Reynolds, Kevin; Davidson, Cassy; Coope-Pabis, Barbara

    2005-01-01

    Sun photometry is a passive means of measuring a quantity of light radiation. The GIFTS-IOMI/GLOBE Water Vapor/Haze sun photometer contains four light-emitting diodes (LEDs), which are used to convert photocurrent to voltage. The intensity of the incoming and outgoing radiation as detected on the Earth's surface can be affected by aerosols and gases in the atmosphere. The focus of this research is primarily on aerosol and water vapor particles that absorb and re-emit energy. Two LEDs in the photometer correspond to light scattered at 530 nm (green spectrum) and 620 nm (red spectrum); they collect data pertaining to aerosols that scatter light. The other two LEDs detect the light scattered by water vapor at wavelengths of 820 nm and 920 nm. The water vapor measurements will be compared to data collected by the Geostationary Operational Environmental Satellite (GOES). Before a comparison can be made, the extraterrestrial (ET) constant, which is intrinsic to each sun photometer, must be measured. This paper presents the determination of the ET constant, from which the aerosol optical thickness (AOT) can be computed for comparison to GOES satellite data to ascertain the reliability of the sun photometer.
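
    The underlying relation is the Beer-Lambert law, V = V0 exp(-tau*m), where V0 is the ET constant, tau the optical thickness and m the relative air mass. A hedged sketch of the standard Langley approach (generic, not the GLOBE calibration procedure): fit ln V against m to recover V0, then invert single measurements for tau.

        import numpy as np

        def langley_et_constant(airmass, voltage):
            """Fit ln V = ln V0 - tau*m; the intercept at m = 0 gives V0."""
            slope, intercept = np.polyfit(airmass, np.log(voltage), 1)
            return np.exp(intercept)

        def optical_thickness(v, v0, airmass):
            """Total optical thickness from one measurement (AOT follows after
            subtracting Rayleigh and ozone terms, omitted in this sketch)."""
            return np.log(v0 / v) / airmass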

  16. User's instructions for the cardiovascular Walters model

    NASA Technical Reports Server (NTRS)

    Croston, R. C.

    1973-01-01

    The model is a combined, steady-state cardiovascular and thermal model. It was originally developed for interactive use, but was converted to batch-mode simulation for the Sigma 3 computer. The purpose of the model is to compute steady-state circulatory and thermal variables in response to exercise work loads and environmental factors. During a computer simulation run, several selected variables are printed at each time step. End conditions are also printed at the completion of the run.

  17. A Quantum Computing Approach to Model Checking for Advanced Manufacturing Problems

    DTIC Science & Technology

    2014-07-01

    amount of time. In summary, the tool we developed succeeded in allowing us to produce good solutions for optimization problems that did not fit ...We compared the value of the objective obtained in each run with the known optimal value, and used this information to compute the probability of ...success for each given instance. Then we used this information to compute the expected number of repetitions (or runs) needed to obtain the optimal

  18. Running climate model on a commercial cloud computing environment: A case study using Community Earth System Model (CESM) on Amazon AWS

    NASA Astrophysics Data System (ADS)

    Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock

    2017-01-01

    The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e. carrying out climate model simulations on a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment by Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
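
    The reported scaling can be summarized with the usual speedup and parallel-efficiency definitions; the numbers below are illustrative only, chosen to match a >50% wall-clock reduction from 16 to 64 cores.

        def scaling_efficiency(t_base, n_base, t, n):
            """Parallel efficiency relative to a baseline core count."""
            speedup = t_base / t
            return speedup / (n / n_base)

        # a >50% reduction going from 16 to 64 cores implies efficiency > 0.5:
        print(scaling_efficiency(t_base=10.0, n_base=16, t=4.8, n=64))  # ~0.52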

  19. Collaborative observations of the Sun during IHY

    NASA Astrophysics Data System (ADS)

    Strong, K. T.

    2003-04-01

    Many of the major solar physics space missions (Solar Max, Yohkoh, SOHO, and TRACE) have featured extensive collaborative observations with ground-based observers, sounding rocket flights and other space missions. These joint observations have produced some significant results. In preparation for IHY, this poster presents some of the lessons learned from these collaborations. The more successful ones have a clear scientific goal and have been planned, coordinated and advertised well in advance, with at least one dry run. They have generally not relied on a particular type of solar activity being present at the time of the observations, or have been very flexible in the timing of the investigation. Most importantly, they have had a plan, with a set schedule, to follow up the observation run with data processing, analysis and modeling workshops, whether for a large group or just individual scientists.

  20. On the density and field sensitivities of dielectronic recombination. [rates in coronal plasmas of late stars and sun]

    NASA Technical Reports Server (NTRS)

    Reisenfeld, Daniel B.; Raymond, John C.; Young, Albert R.; Kohl, John L.

    1992-01-01

    Dielectronic recombination dominates the recombination rates of most ions in coronal plasmas at their temperatures of peak concentration. Because dielectronic recombination goes by way of high nl doubly excited levels, it is susceptible to collisional excitation and ionization, leading to a decreased rate. On the other hand, theoretical studies show that Stark mixing of the nl levels by a modest electric field enhances the dielectronic recombination rate severalfold. The ionization balance is computed here as a function of density, and it is found that the new results require increased emission measures to match the C IV emission line intensities observed in the sun and in late-type stars. They also make it more difficult to interpret the overall EUV emission line spectrum of the sun.

  1. Running Batch Jobs on Peregrine | High-Performance Computing | NREL

    Science.gov Websites

    Peregrine has several types of compute nodes; the resource feature can be used to request different node types, avoid incompatibility, and get the job running. More information about requesting different node types on Peregrine is available. Queues: in order to meet the needs of different types of jobs, nodes on Peregrine are available through several queues.

  2. Host-Nation Operations: Soldier Training on Governance (HOST-G) Training Support Package

    DTIC Science & Technology

    2011-07-01

    restricted this webpage from running scripts or ActiveX controls that could access your computer. Click here for options…” • If this occurs, select that...scripts and ActiveX controls can be useful, but active content might also harm your computer. Are you sure you want to let this file run active

  3. 24 CFR 15.110 - What fees will HUD charge?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... duplicating machinery. The computer run time includes the cost of operating a central processing unit for that... Applies. (6) Computer run time (includes only mainframe search time not printing) The direct cost of... estimated fee is more than $250.00 or you have a history of failing to pay FOIA fees to HUD in a timely...

  4. Analyses of requirements for computer control and data processing experiment subsystems. Volume 2: ATM experiment S-056 image data processing system software development

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The IDAPS (Image Data Processing System) is a user-oriented, computer-based, language and control system, which provides a framework or standard for implementing image data processing applications, simplifies set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.

  5. Identification of Program Signatures from Cloud Computing System Telemetry Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Nicole M.; Greaves, Mark T.; Smith, William P.

    Malicious cloud computing activity can take many forms, including running unauthorized programs in a virtual environment. Detection of these malicious activities while preserving the privacy of the user is an important research challenge. Prior work has shown the potential viability of using cloud service billing metrics as a mechanism for proxy identification of malicious programs. Previously this novel detection method has been evaluated in a synthetic and isolated computational environment. In this paper we demonstrate the ability of billing metrics to identify programs in an active cloud computing environment, including multiple virtual machines running on the same hypervisor. The open-source cloud computing platform OpenStack is used for private cloud management at Pacific Northwest National Laboratory. OpenStack provides a billing tool (Ceilometer) to collect system telemetry measurements. We identify four different programs running on four virtual machines under the same cloud user account. Programs were identified with up to 95% accuracy. This accuracy is dependent on the distinctiveness of telemetry measurements for the specific programs we tested. Future work will examine the scalability of this approach for a larger selection of programs to better understand the uniqueness needed to identify a program. Additionally, future work should address the separation of signatures when multiple programs are running on the same virtual machine.
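
    A hedged sketch of the identification idea (not the PNNL model): represent each run as a vector of billing/telemetry features and classify by the nearest centroid among known program signatures.

        import numpy as np

        def train_signatures(runs_by_program):
            """runs_by_program maps a program name to a list of telemetry
            feature vectors (e.g. CPU time, disk I/O, network bytes)."""
            return {name: np.mean(np.asarray(runs), axis=0)
                    for name, runs in runs_by_program.items()}

        def identify(signatures, features):
            """Return the program whose telemetry centroid is closest."""
            x = np.asarray(features, float)
            return min(signatures, key=lambda n: np.linalg.norm(signatures[n] - x))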

  6. Providing Assistive Technology Applications as a Service Through Cloud Computing.

    PubMed

    Mulfari, Davide; Celesti, Antonio; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Users with disabilities interact with Personal Computers (PCs) using Assistive Technology (AT) software solutions. Such applications run on a PC that a person with a disability commonly uses. However, the configuration of AT applications is not trivial at all, especially whenever the user needs to work on a PC that does not allow him/her to rely on his/her AT tools (e.g., at work, at university, in an Internet point). In this paper, we discuss how cloud computing provides a valid technological solution to enhance such a scenario. With the emergence of cloud computing, many applications are executed on top of virtual machines (VMs). Virtualization allows us to achieve a software implementation of a real computer able to execute a standard operating system and any kind of application. In this paper we propose to build personalized VMs running AT programs and settings. By using remote desktop technology, our solution enables users to control their customized virtual desktop environment by means of an HTML5-based web interface running on any computer equipped with a browser, wherever they are.

  7. Computer Simulation of Great Lakes-St. Lawrence Seaway Icebreaker Requirements.

    DTIC Science & Technology

    1980-01-01

    [Table-of-contents fragment: results of Runs No. 1-3 for the Taconite and Oil Can Task Commands, and predicted icebreaker fleet by home port and period (Sections 6.22-6.24).]

  8. New vibration-rotation code for tetraatomic molecules exhibiting wide-amplitude motion: WAVR4

    NASA Astrophysics Data System (ADS)

    Kozin, Igor N.; Law, Mark M.; Tennyson, Jonathan; Hutson, Jeremy M.

    2004-11-01

    A general computational method for the accurate calculation of rotationally and vibrationally excited states of tetraatomic molecules is developed. The resulting program is particularly appropriate for molecules executing wide-amplitude motions and isomerizations. The program offers a choice of coordinate systems based on Radau, Jacobi, diatom-diatom and orthogonal satellite vectors. The method includes all six vibrational dimensions plus three rotational dimensions. Vibration-rotation calculations with reduced dimensionality in the radial degrees of freedom are easily tackled through constraints imposed on the radial coordinates via the input file. Program summary: Title of program: WAVR4 Catalogue number: ADUN Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUN Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: Persons requesting the program must sign the standard CPC nonprofit use license Computer: Developed under Tru64 UNIX, ported to Microsoft Windows and Sun Unix Operating systems under which the program has been tested: Tru64 Unix, Microsoft Windows, Sun Unix Programming language used: Fortran 90 Memory required to execute with typical data: case dependent No. of lines in distributed program, including test data, etc.: 11 937 No. of bytes in distributed program, including test data, etc.: 84 770 Distribution format: tar.gz Nature of physical problem: WAVR4 calculates the bound ro-vibrational levels and wavefunctions of a tetraatomic system using body-fixed coordinates based on generalised orthogonal vectors. Method of solution: The angular coordinates are treated using a finite basis representation (FBR) based on products of spherical harmonics. A discrete variable representation (DVR) [1] based on either Morse-oscillator-like or spherical-oscillator functions [2] is used for the radial coordinates. Matrix elements are computed using an efficient Gaussian quadrature in the angular coordinates and the DVR approximation in the radial coordinates. The solution of the secular problem is carried out through a series of intermediate diagonalisations and truncations. Restrictions on the complexity of the problem: (1) The size of the final Hamiltonian matrix that can be practically diagonalised; (2) The DVR approximation for a radial coordinate fails for values of the coordinate near zero; this is remedied only for one radial coordinate by using analytical integration. Typical running time: problem-dependent Unusual features of the program: A user-supplied subroutine to evaluate the potential energy is a program requirement. External routines: BLAS and LAPACK are required. References: [1] J.C. Light, I.P. Hamilton, J.V. Lill, J. Chem. Phys. 82 (1985) 1400. [2] J.R. Henderson, C.R. Le Sueur, J. Tennyson, Comput. Phys. Comm. 75 (1993) 379.

  9. Loss of the integral nuclear envelope protein SUN1 induces alteration of nucleoli

    PubMed Central

    Matsumoto, Ayaka; Sakamoto, Chiyomi; Matsumori, Haruka; Katahira, Jun; Yasuda, Yoko; Yoshidome, Katsuhide; Tsujimoto, Masahiko; Goldberg, Ilya G; Matsuura, Nariaki; Nakao, Mitsuyoshi; Saitoh, Noriko; Hieda, Miki

    2016-01-01

    A supervised machine learning algorithm suited to image classification and similarity analysis is based on multiple discriminative morphological features that are automatically assembled during the learning process. The algorithm is suitable for population-based analysis of images of biological materials that are generally complex and heterogeneous. Here we used the algorithm wndchrm to quantify the effects on nucleolar morphology of the loss of components of the nuclear envelope in a human mammary epithelial cell line. The linker of nucleoskeleton and cytoskeleton (LINC) complex, an assembly of nuclear envelope proteins comprising mainly members of the SUN and nesprin families, connects the nuclear lamina and cytoskeletal filaments. The components of the LINC complex are markedly deficient in breast cancer tissues. We found that a reduction in the levels of SUN1, SUN2, and lamin A/C led to significant changes in morphologies that were computationally classified using wndchrm with approximately 100% accuracy. In particular, depletion of SUN1 caused nucleolar hypertrophy and reduced rRNA synthesis. Further, wndchrm revealed a consistent negative correlation between SUN1 expression and the size of nucleoli in human breast cancer tissues. Our unbiased morphological quantitation strategies using wndchrm revealed an unexpected link between the components of the LINC complex and the morphologies of nucleoli that serves as an indicator of the malignant phenotype of breast cancer cells. PMID:26962703

  10. Loss of the integral nuclear envelope protein SUN1 induces alteration of nucleoli.

    PubMed

    Matsumoto, Ayaka; Sakamoto, Chiyomi; Matsumori, Haruka; Katahira, Jun; Yasuda, Yoko; Yoshidome, Katsuhide; Tsujimoto, Masahiko; Goldberg, Ilya G; Matsuura, Nariaki; Nakao, Mitsuyoshi; Saitoh, Noriko; Hieda, Miki

    2016-01-01

    A supervised machine learning algorithm suited to image classification and similarity analysis is based on multiple discriminative morphological features that are automatically assembled during the learning process. The algorithm is suitable for population-based analysis of images of biological materials that are generally complex and heterogeneous. Here we used the algorithm wndchrm to quantify the effects on nucleolar morphology of the loss of components of the nuclear envelope in a human mammary epithelial cell line. The linker of nucleoskeleton and cytoskeleton (LINC) complex, an assembly of nuclear envelope proteins comprising mainly members of the SUN and nesprin families, connects the nuclear lamina and cytoskeletal filaments. The components of the LINC complex are markedly deficient in breast cancer tissues. We found that a reduction in the levels of SUN1, SUN2, and lamin A/C led to significant changes in morphologies that were computationally classified using wndchrm with approximately 100% accuracy. In particular, depletion of SUN1 caused nucleolar hypertrophy and reduced rRNA synthesis. Further, wndchrm revealed a consistent negative correlation between SUN1 expression and the size of nucleoli in human breast cancer tissues. Our unbiased morphological quantitation strategies using wndchrm revealed an unexpected link between the components of the LINC complex and the morphologies of nucleoli that serves as an indicator of the malignant phenotype of breast cancer cells.

  11. The Effects of the Uncertainty of Thermodynamic and Kinetic Properties on Nucleation and Evolution Kinetics of Cr-Rich Phase in Fe-Cr Alloys

    DTIC Science & Technology

    2012-12-01

    [Report documentation fragment: authors Mark Tschopp (U.S. Army Research Laboratory, ATTN: RDRL-WMM-F, Aberdeen Proving Ground), Fei Gao, and Xin Sun; cited references include Horstemeyer, Gao, Sun, and Khaleel, Scripta Materialia 2011, 64, 908, and Plimpton, S., Journal of Computational Physics.]

  12. A new version of the CADNA library for estimating round-off error propagation in Fortran programs

    NASA Astrophysics Data System (ADS)

    Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc

    2010-11-01

    The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors. New version program summaryProgram title: CADNA Catalogue identifier: AEAT_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 28 488 No. of bytes in distributed program, including test data, etc.: 463 778 Distribution format: tar.gz Programming language: Fortran NOTE: A C++ version of this program is available in the Library as AEGQ_v1_0 Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM Operating system: LINUX, UNIX Classification: 6.5 Catalogue identifier of previous version: AEAT_v1_0 Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933 Does the new version supersede the previous version?: Yes Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: On 64-bit processors, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore a particular definition of mathematical functions for stochastic arguments has been included in the CADNA library to enable its use with the GNU Fortran compiler on 64-bit processors. Summary of revisions: If CADNA is used on a 64-bit processor with the GNU Fortran compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the stochastic argument of a mathematical function is never lost. Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. 
Furthermore array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf which shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The source code, which is located in the src directory, consists of one assembly language file (cadna_rounding.s) and eighteen Fortran language files. cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the Fortran compiler used. This assembly file contains routines which are frequently called in the CADNA Fortran files to change the rounding mode. The Fortran language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
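
    The Discrete Stochastic Arithmetic idea can be illustrated very loosely in Python (CADNA itself overloads Fortran operators and switches the hardware rounding mode; this sketch merely perturbs the input instead): run the computation several times with tiny random perturbations and count the decimal digits on which the samples agree.

        import math
        import random

        def common_digits(samples):
            """Estimate the number of decimal digits shared by the samples."""
            mean = sum(samples) / len(samples)
            spread = max(abs(s - mean) for s in samples)
            if spread == 0.0:
                return 15            # agreement to full double precision
            if mean == 0.0:
                return 0
            return max(0, int(-math.log10(spread / abs(mean))))

        def estimate(f, x, n=3, eps=1e-15):
            """Evaluate f at randomly perturbed inputs, mimicking random rounding."""
            samples = [f(x * (1 + random.uniform(-eps, eps))) for _ in range(n)]
            return sum(samples) / len(samples), common_digits(samples)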

  13. The rid-redundant procedure in C-Prolog

    NASA Technical Reports Server (NTRS)

    Chen, Huo-Yan; Wah, Benjamin W.

    1987-01-01

    C-Prolog can conveniently be used for logical inference on knowledge bases. However, as with many search methods based on backward chaining, a large amount of redundant computation may be produced in recursive calls. To overcome this problem, the 'rid-redundant' procedure was designed to eliminate all redundant computations when running multi-recursive procedures. Experimental results obtained for C-Prolog on the VAX 11/780 computer show an order-of-magnitude improvement in both running time and solvable problem size.
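
    The following C sketch shows the classic form of this redundancy and the standard cure: a doubly recursive definition recomputes the same subproblems exponentially often until a memo table is added. This illustrates the problem class only; it is not the authors' rid-redundant procedure.

        /* Doubly recursive Fibonacci: without the memo table, subproblems
         * are recomputed an exponential number of times; with it, each
         * value is computed once and reused. */
        #include <stdio.h>

        #define N 64
        static long long memo[N];       /* 0 means "not yet computed" */

        long long fib(int n)
        {
            if (n < 2) return n;
            if (memo[n] != 0) return memo[n];
            return memo[n] = fib(n - 1) + fib(n - 2);
        }

        int main(void)
        {
            printf("fib(50) = %lld\n", fib(50));  /* immediate; the naive
                                                     version takes minutes */
            return 0;
        }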

  14. An Upgrade of the Aeroheating Software "MINIVER"

    NASA Technical Reports Server (NTRS)

    Louderback, Pierce

    2013-01-01

    Detailed computational modeling: CFD is often used to create and execute computational domains. Complexity increases when moving from 2D to 3D geometries, and computational time increases as finer grids are used (for accuracy). It is a strong tool, but takes time to set up and run. MINIVER: uses theoretical and empirical correlations; orders of magnitude faster to set up and run; not as accurate as CFD, but gives reasonable estimations. MINIVER's drawbacks: a rigid command-line interface; lackluster, unorganized documentation; and no central control, so multiple versions exist and have diverged.

  15. A Functional Description of the Geophysical Data Acquisition System

    DTIC Science & Technology

    1990-08-10

    less than 50 SPS nor greater than 250 SPS 3.0 SENSORS/TRANSDUCERS 3.1 CHAPTER OVERVIEW Most of the research supported by GDAS has primarily involved two...signal for the computer. The SRUN signal from the computer is fed to a retriggerable oneshot multivibrator on the board. SRUN consists of a pulse train...that is present when the computer is running. The oneshot output drives the RUN lamp on the front panel. Finally, one pin on the board edge connector is

  16. Network support for system initiated checkpoints

    DOEpatents

    Chen, Dong; Heidelberger, Philip

    2013-01-29

    A system, method and computer program product for supporting system initiated checkpoints in parallel computing systems. The system and method generate selective control signals to perform checkpointing of system-related data in the presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity.

  17. Convergence properties of simple genetic algorithms

    NASA Technical Reports Server (NTRS)

    Bethke, A. D.; Zeigler, B. P.; Strauss, D. M.

    1974-01-01

    The essential parameters determining the behaviour of genetic algorithms were investigated. Computer runs were made while systematically varying the parameter values. Results based on the progress curves obtained from these runs are presented along with results based on the variability of the population as the run progresses.

  18. Modeling Subsurface Reactive Flows Using Leadership-Class Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Richard T; Hammond, Glenn; Lichtner, Peter

    2009-01-01

    We describe our experiences running PFLOTRAN - a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media - on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.

  19. Non-exchangeability of running vs. other exercise in their association with adiposity, and its implications for public health recommendations.

    PubMed

    Williams, Paul T

    2012-01-01

    Current physical activity recommendations assume that different activities can be exchanged to produce the same weight-control benefits so long as total energy expended remains the same (exchangeability premise). To this end, they recommend calculating energy expenditure as the product of the time spent performing each activity and the activity's metabolic equivalents (MET), which may be summed to achieve target levels. The validity of the exchangeability premise was assessed using data from the National Runners' Health Study. Physical activity dose was compared to body mass index (BMI) and body circumferences in 33,374 runners who reported usual distance run and pace, and usual times spent running and other exercises per week. MET hours per day (METhr/d) from running was computed from: a) time and intensity, and b) reported distance run (1.02 MET·hours per km). When computed from time and intensity, the declines (slope ± SE) per METhr/d were significantly greater (P < 10^-15) for running than non-running exercise for BMI (slopes ± SE, male: -0.12 ± 0.00 vs. 0.00 ± 0.00; female: -0.12 ± 0.00 vs. -0.01 ± 0.01 kg/m^2 per METhr/d) and waist circumference (male: -0.28 ± 0.01 vs. -0.07 ± 0.01; female: -0.31 ± 0.01 vs. -0.05 ± 0.01 cm per METhr/d). Reported METhr/d of running was 38% to 43% greater when calculated from time and intensity than from distance. Moreover, the declines per METhr/d run were significantly greater when estimated from reported distance for BMI (males: -0.29 ± 0.01; females: -0.27 ± 0.01 kg/m^2 per METhr/d) and waist circumference (males: -0.67 ± 0.02; females: -0.69 ± 0.02 cm per METhr/d) than when computed from time and intensity (cited above). The exchangeability premise was not supported for running vs. non-running exercise. Moreover, distance-based running prescriptions may provide better weight control than time-based prescriptions for running or other activities. Additional longitudinal studies and randomized clinical trials are required to verify these results prospectively.
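
    To make the contrast above concrete, this small C sketch computes a runner's METhr/d both ways: from distance (using the 1.02 MET·h/km factor quoted above) and from time and intensity. The weekly distance, hours, and the 10-MET intensity are invented illustrative numbers, not study data.

        /* Two estimates of running dose in MET-hours per day. */
        #include <stdio.h>

        int main(void)
        {
            double km_per_week    = 40.0;   /* assumed weekly distance  */
            double hours_per_week = 5.0;    /* assumed weekly run time  */
            double met_per_km     = 1.02;   /* distance-based factor    */
            double met_intensity  = 10.0;   /* assumed MET of the pace  */

            double from_distance = km_per_week * met_per_km / 7.0;
            double from_time     = hours_per_week * met_intensity / 7.0;

            printf("distance-based: %.2f METhr/d\n", from_distance); /* ~5.83 */
            printf("time-based:     %.2f METhr/d\n", from_time);     /* ~7.14 */
            return 0;   /* with these inputs the time-based figure is ~22%
                           higher; the study observed 38% to 43% */
        }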

  20. NESSUS/NASTRAN Interface

    NASA Technical Reports Server (NTRS)

    Millwater, Harry; Riha, David

    1996-01-01

    The NESSUS probabilistic analysis computer program has been developed with a built-in finite element analysis program, NESSUS/FEM. However, the NESSUS/FEM program is specialized for engine structures and may not contain sufficient features for other applications. In addition, users often become well acquainted with a particular finite element code and want to use that code for probabilistic structural analysis. For these reasons, this work was undertaken to develop an interface between NESSUS and NASTRAN such that NASTRAN can be used for the finite element analysis and NESSUS for the probabilistic analysis. In addition, NESSUS was restructured such that other finite element codes can be more easily coupled with NESSUS. NESSUS has been enhanced such that it will modify the NASTRAN input deck for a given set of random variables, run NASTRAN, and read the NASTRAN results. The coordination between the two codes is handled automatically. The work described here was implemented within NESSUS 6.2, which was delivered to NASA in September 1995. The code runs on Unix machines: Cray, HP, Sun, SGI and IBM. The new capabilities have been implemented such that a user familiar with NESSUS using NESSUS/FEM and NASTRAN can immediately use NESSUS with NASTRAN. In other words, the interface with NASTRAN has been implemented in a manner analogous to the interface with NESSUS/FEM; only finite-element-specific input has been changed. This manual is written as an addendum to the existing NESSUS 6.2 manuals. We assume users have access to the NESSUS manuals and are familiar with the operation of NESSUS, including probabilistic finite element analysis. Update pages to the NESSUS PFEM manual are contained in Appendix E. The finite element features of the code and the probabilistic analysis capabilities are summarized.
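
    The modify-run-read loop described above follows a generic wrapper pattern, sketched below in C. Everything here is hypothetical scaffolding: the template placeholder %E_MOD%, the file names, the solver command, and the output format are invented for illustration and are not NESSUS's or NASTRAN's actual interfaces.

        /* One probabilistic sample: substitute a sampled value into the
         * solver input deck, run the solver, read one response back. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        double run_fem_once(double youngs_modulus)
        {
            FILE *in  = fopen("model.template", "r");   /* deck with placeholder */
            FILE *out = fopen("model.inp", "w");
            if (!in || !out) { perror("fopen"); exit(1); }

            char line[256];
            while (fgets(line, sizeof line, in)) {
                if (strstr(line, "%E_MOD%"))            /* hypothetical token */
                    fprintf(out, "MAT1,1,%e\n", youngs_modulus);
                else
                    fputs(line, out);
            }
            fclose(in); fclose(out);

            system("nastran model.inp");                /* hypothetical invocation */

            double response = 0.0;
            FILE *res = fopen("model.out", "r");
            if (res) { fscanf(res, "%lf", &response); fclose(res); }
            return response;                            /* assumed output format */
        }

        int main(void)
        {
            printf("response = %g\n", run_fem_once(2.1e11));
            return 0;
        }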

  1. A PICKSC Science Gateway for enabling the common plasma physicist to run kinetic software

    NASA Astrophysics Data System (ADS)

    Hu, Q.; Winjum, B. J.; Zonca, A.; Youn, C.; Tsung, F. S.; Mori, W. B.

    2017-10-01

    Computer simulations offer tremendous opportunities for studying plasmas, ranging from simulations for students that illuminate fundamental educational concepts to research-level simulations that advance scientific knowledge. Nevertheless, there is a significant hurdle to using simulation tools. Users must navigate codes and software libraries, determine how to wrangle output into meaningful plots, and oftentimes confront a significant cyberinfrastructure with powerful computational resources. Science gateways offer a Web-based environment to run simulations without needing to learn or manage the underlying software and computing cyberinfrastructure. We discuss our progress on creating a Science Gateway for the Particle-in-Cell and Kinetic Simulation Software Center that enables users to easily run and analyze kinetic simulations with our software. We envision that this technology could benefit a wide range of plasma physicists, both in the use of our simulation tools as well as in its adaptation for running other plasma simulation software. Supported by NSF under Grant ACI-1339893 and by the UCLA Institute for Digital Research and Education.

  2. Creating a Parallel Version of VisIt for Microsoft Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitlock, B J; Biagas, K S; Rawson, P L

    2011-12-07

    VisIt is a popular, free interactive parallel visualization and analysis tool for scientific data. Users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images or movies for presentations. VisIt was designed from the ground up to work on many scales of computers, from modest desktops up to massively parallel clusters. VisIt comprises a set of cooperating programs. All programs can be run locally or in client/server mode, in which some run locally and some run remotely on compute clusters. The VisIt program most able to harness today's computing power is the VisIt compute engine. The compute engine is responsible for reading simulation data from disk, processing it, and sending results or images back to the VisIt viewer program. In a parallel environment, the compute engine runs several processes, coordinating using the Message Passing Interface (MPI) library. Each MPI process reads some subset of the scientific data and filters the data in various ways to create useful visualizations. By using MPI, VisIt has been able to scale well into the thousands of processors on large computers such as Dawn and Graph at LLNL. The advent of multicore CPUs has made parallelism the 'new' way to achieve increasing performance. With today's computers having at least 2 cores and in many cases up to 8 and beyond, it is more important than ever to deploy parallel software that can use that computing power not only on clusters but also on the desktop. We have created a parallel version of VisIt for Windows that uses Microsoft's MPI implementation (MSMPI) to process data in parallel on the Windows desktop as well as on a Windows HPC cluster running Microsoft Windows Server 2008. Initial desktop parallel support for Windows was deployed in VisIt 2.4.0. Windows HPC cluster support has been completed and will appear in the VisIt 2.5.0 release. We plan to continue supporting parallel VisIt on Windows so our users will be able to take full advantage of their multicore resources.

  3. Centralized Monitoring of the Microsoft Windows-based computers of the LHC Experiment Control Systems

    NASA Astrophysics Data System (ADS)

    Varela Rodriguez, F.

    2011-12-01

    The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors in, and troubleshoot such a large system. Although monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, only recently has the software package been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and has already proven to be very effective in optimizing the running systems and in detecting misbehaving processes or nodes.

  4. Computational steering of GEM based detector simulations

    NASA Astrophysics Data System (ADS)

    Sheharyar, Ali; Bouhali, Othmane

    2017-10-01

    Gas-based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. Such long-running simulations usually run on high-performance computers in batch mode. If the results lead to unexpected behavior, the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This can result in inefficient resource utilization and an increase in the turnaround time of the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (live) is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable exploration of the live data as it is produced by the simulation.

  5. CERN openlab: Engaging industry for innovation in the LHC Run 3-4 R&D programme

    NASA Astrophysics Data System (ADS)

    Girone, M.; Purcell, A.; Di Meglio, A.; Rademakers, F.; Gunne, K.; Pachou, M.; Pavlou, S.

    2017-10-01

    LHC Run 3 and Run 4 represent an unprecedented challenge for HEP computing in terms of both data volume and complexity. New approaches are needed for how data are collected and filtered, processed, moved, stored and analysed if these challenges are to be met within a realistic budget. To develop innovative techniques we are fostering relationships with industry leaders. CERN openlab is a unique resource for public-private partnership between CERN and leading Information and Communication Technology (ICT) companies. Its mission is to accelerate the development of cutting-edge solutions to be used by the worldwide HEP community. In 2015, CERN openlab started its phase V with a strong focus on tackling the upcoming LHC challenges. Several R&D programmes are ongoing in the areas of data acquisition, networks and connectivity, data storage architectures, computing provisioning, computing platforms and code optimisation, and data analytics. This paper gives an overview of the various innovative technologies that are currently being explored by CERN openlab V and discusses the long-term strategies pursued by the LHC communities, with the help of industry, to close the technological gap in processing and storage needs expected in Run 3 and Run 4.

  6. Memoized Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Yang, Guowei; Pasareanu, Corina S.; Khurshid, Sarfraz

    2012-01-01

    This paper introduces memoized symbolic execution (Memoise), a novel approach for more efficient application of forward symbolic execution, which is a well-studied technique for systematic exploration of program behaviors based on bounded execution paths. Our key insight is that application of symbolic execution often requires several successive runs of the technique on largely similar underlying problems, e.g., running it once to check a program to find a bug, fixing the bug, and running it again to check the modified program. Memoise introduces a trie-based data structure that stores the key elements of a run of symbolic execution. Maintenance of the trie during successive runs allows re-use of previously computed results of symbolic execution without the need for re-computing them as is traditionally done. Experiments using our prototype embodiment of Memoise show the benefits it holds in various standard scenarios of using symbolic execution, e.g., with iterative deepening of exploration depth, to perform regression analysis, or to enhance coverage.
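
    The trie described above can be reduced to a few lines: each node has one child per branch outcome, and a path of branch decisions leads to a node whose cached result can be reused on the next run. The C sketch below (two-way branches, a single cached flag per node) is a simplification for illustration, not Memoise's actual structure.

        /* Trie over branch decisions: re-inserting a known path reports
         * that its result can be reused instead of re-executed. */
        #include <stdio.h>
        #include <stdlib.h>

        typedef struct Node {
            struct Node *child[2];  /* 0 = false branch, 1 = true branch */
            int explored;           /* cached: path fully explored before */
        } Node;

        static Node *node_new(void) { return calloc(1, sizeof(Node)); }

        static int trie_insert(Node *root, const int *choices, int n)
        {
            Node *cur = root;
            for (int i = 0; i < n; i++) {
                int c = choices[i];
                if (!cur->child[c]) cur->child[c] = node_new();
                cur = cur->child[c];
            }
            int seen = cur->explored;
            cur->explored = 1;
            return seen;
        }

        int main(void)
        {
            Node *root = node_new();
            int path[] = { 1, 0, 1 };
            printf("%d\n", trie_insert(root, path, 3));  /* 0: new path   */
            printf("%d\n", trie_insert(root, path, 3));  /* 1: cached hit */
            return 0;
        }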

  7. Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI

    USGS Publications Warehouse

    Donato, David I.

    2017-01-01

    In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
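
    The worker-initiated pull scheme evaluated above has a compact MPI expression. The paper provides a C reference implementation; the sketch below is an independent minimal rendering of the same idea, with illustrative tags, task count, and a stand-in for the modelling run, not the paper's code.

        /* Bag-of-tasks with on-demand pulling: rank 0 hands out task IDs
         * as workers ask for them, so faster workers simply pull more. */
        #include <mpi.h>
        #include <stdio.h>

        enum { TAG_REQUEST = 1, TAG_TASK = 2, TAG_STOP = 3 };

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const int ntasks = 100;
            if (rank == 0) {                    /* bag-of-tasks server */
                int next = 0, stopped = 0, dummy;
                MPI_Status st;
                while (stopped < size - 1) {
                    MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, TAG_REQUEST,
                             MPI_COMM_WORLD, &st);
                    if (next < ntasks) {
                        MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_TASK,
                                 MPI_COMM_WORLD);
                        next++;
                    } else {
                        MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_STOP,
                                 MPI_COMM_WORLD);
                        stopped++;
                    }
                }
            } else {                            /* worker: pull until stopped */
                int req = 0, task;
                MPI_Status st;
                for (;;) {
                    MPI_Send(&req, 1, MPI_INT, 0, TAG_REQUEST, MPI_COMM_WORLD);
                    MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
                    if (st.MPI_TAG == TAG_STOP) break;
                    /* run_model(task): stand-in for one modelling run */
                }
            }
            MPI_Finalize();
            return 0;
        }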

  8. Evaluating the Efficacy of the Cloud for Cluster Computation

    NASA Technical Reports Server (NTRS)

    Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom

    2012-01-01

    Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Cloud Compute (EC2), which promise improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 µs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29-instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.

  9. Pegasus Workflow Management System: Helping Applications From Earth and Space

    NASA Astrophysics Data System (ADS)

    Mehta, G.; Deelman, E.; Vahi, K.; Silva, F.

    2010-12-01

    Pegasus WMS is a Workflow Management System that can manage large-scale scientific workflows across Grid, local and Cloud resources simultaneously. Pegasus WMS provides a means for representing the workflow of an application in an abstract XML form, agnostic of the resources available to run it and the location of data and executables. It then compiles these workflows into concrete plans by querying catalogs and farming computations across local and distributed computing resources, as well as emerging commercial and community cloud environments, in an easy and reliable manner. Pegasus WMS optimizes the execution as well as data movement by leveraging existing Grid and cloud technologies via a flexible pluggable interface, and provides advanced features like reusing existing data, automatic cleanup of generated data, and recursive workflows with deferred planning. It also captures all the provenance of the workflow from the planning stage to the execution of the generated data, helping scientists to accurately measure performance metrics of their workflow as well as data reproducibility issues. Pegasus WMS was initially developed as part of the GriPhyN project to support large-scale high-energy physics and astrophysics experiments. Direct funding from the NSF enabled support for a wide variety of applications from diverse domains, including earthquake simulation, bacterial RNA studies, helioseismology and ocean modeling. Earthquake simulation: Pegasus WMS was recently used in a large-scale production run in 2009 by the Southern California Earthquake Center to run 192 million loosely coupled tasks and about 2000 tightly coupled MPI-style tasks on national cyberinfrastructure for generating a probabilistic seismic hazard map of the Southern California region. SCEC ran 223 workflows over a period of eight weeks, using on average 4,420 cores, with a peak of 14,540 cores. A total of 192 million files were produced, totaling about 165 TB, of which 11 TB of data was saved. Astrophysics: The Laser Interferometer Gravitational-Wave Observatory (LIGO) uses Pegasus WMS to search for binary inspiral gravitational waves. A month of LIGO data requires many thousands of jobs, running for days on hundreds of CPUs on the LIGO Data Grid (LDG) and Open Science Grid (OSG). Ocean temperature forecast: Researchers at the Jet Propulsion Laboratory are exploring Pegasus WMS to run ocean forecast ensembles of the California coastal region. These models produce a number of daily forecasts for water temperature, salinity, and other measures. Helioseismology: The Solar Dynamics Observatory (SDO) is NASA's most important solar physics mission of this coming decade. Pegasus WMS is being used to analyze the data from SDO, which will be predominantly used to learn about solar magnetic activity and to probe the internal structure and dynamics of the Sun with helioseismology. Bacterial RNA studies: SIPHT is an application in bacterial genomics which predicts sRNA (small non-coding RNA)-encoding genes in bacteria. This project currently provides a web-based interface using Pegasus WMS at the backend to facilitate large-scale execution of the workflows on varied resources and to provide better notifications of task/workflow completion.

  10. Controlling Laboratory Processes From A Personal Computer

    NASA Technical Reports Server (NTRS)

    Will, H.; Mackin, M. A.

    1991-01-01

    Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without an operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.

  11. WinHPC System Programming | High-Performance Computing | NREL

    Science.gov Websites

    WinHPC System Programming: learn how to build and run an MPI (Message Passing Interface) program, including where the include file (mpi.h) and library (msmpi.lib) are. To build from the command line, run... Start > Intel Software Development Tools > Intel C++ Compiler Professional... > C++ Build Environment for applications running...

  12. Accessing files in an Internet: The Jade file system

    NASA Technical Reports Server (NTRS)

    Peterson, Larry L.; Rao, Herman C.

    1991-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file systems may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.
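
    The private name space described above behaves like a per-user mount table in which several file systems may hang under one directory. The C sketch below is a toy rendering of that lookup; the table entries, backend names and first-match policy are invented for illustration, and Jade's real resolution is richer (it can, for example, retry alternative mounts).

        /* Resolve a path against a tiny private name space: several
         * backends may be mounted under the same prefix, tried in order. */
        #include <stdio.h>
        #include <string.h>

        typedef struct { const char *prefix; const char *backend; } Mount;

        static const Mount table[] = {
            { "/home", "ufs" },     /* local Unix file system        */
            { "/home", "nfs" },     /* also under /home: tried next  */
            { "/ftp",  "ftp" },
        };

        const char *resolve(const char *path)
        {
            /* first backend whose prefix matches; a fuller system would
             * fall through to the next match if this lookup failed */
            for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
                if (strncmp(path, table[i].prefix, strlen(table[i].prefix)) == 0)
                    return table[i].backend;
            return NULL;
        }

        int main(void)
        {
            printf("/home/alice/notes.txt -> %s\n", resolve("/home/alice/notes.txt"));
            printf("/ftp/pub/data.tar     -> %s\n", resolve("/ftp/pub/data.tar"));
            return 0;
        }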

  13. Accessing files in an internet - The Jade file system

    NASA Technical Reports Server (NTRS)

    Rao, Herman C.; Peterson, Larry L.

    1993-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file systems may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  14. Greece and Turkey

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Summer is in full swing in this stunning true-color image of the southeastern European countries and Turkey captured by MODIS on June 29, 2002. Clockwise from left, the mountains of Greece, Albania, Macedonia, Yugoslavia, Bulgaria, and Turkey are swathed in brilliant greens and shades of golden brown; meanwhile (counterclockwise from left) the Ionian, Mediterranean, Aegean, and Black Seas are beautifully blue and green. Running diagonally across the image from the bottom middle to the top right is a gray streak that is caused by the angle of reflection of the sun on the water (called sun glint). The darker areas within this gray swath denote calmer water, and make visible currents that would not otherwise be noticeable. Surprisingly few fires were burning hot enough to be detectable by MODIS when this image was acquired during the height of the summer dry season. A single fire is visible burning in mainland Greece, six are visible in northwestern Turkey, and one burns on the western coast (marked with red outlines). Credit: Jacques Descloitres, MODIS Land Rapid Response Team, NASA/GSFC

  15. New Web Server - the Java Version of Tempest - Produced

    NASA Technical Reports Server (NTRS)

    York, David W.; Ponyik, Joseph G.

    2000-01-01

    A new software design and development effort has produced a Java (Sun Microsystems, Inc.) version of the award-winning Tempest software (refs. 1 and 2). In 1999, the Embedded Web Technology (EWT) team received a prestigious R&D 100 Award for Tempest, Java Version. In this article, "Tempest" will refer to the Java version of Tempest, a World Wide Web server for desktop or embedded systems. Tempest was designed at the NASA Glenn Research Center at Lewis Field to run on any platform for which a Java Virtual Machine (JVM, Sun Microsystems, Inc.) exists. The JVM acts as a translator between the native code of the platform and the byte code of Tempest, which is compiled in Java. These byte code files are Java executables with a ".class" extension. Multiple byte code files can be zipped together as a "*.jar" file for more efficient transmission over the Internet. Today's popular browsers, such as Netscape (Netscape Communications Corporation) and Internet Explorer (Microsoft Corporation) have built-in Virtual Machines to display Java applets.

  16. Computer-based testing of the modified essay question: the Singapore experience.

    PubMed

    Lim, Erle Chuen-Hian; Seet, Raymond Chee-Seong; Oh, Vernon M S; Chia, Boon-Lock; Aw, Marion; Quak, Seng-Hock; Ong, Benjamin K C

    2007-11-01

    The modified essay question (MEQ), featuring an evolving case scenario, tests a candidate's problem-solving and reasoning ability, rather than mere factual recall. Although it is traditionally conducted as a pen-and-paper examination, our university has run the MEQ using computer-based testing (CBT) since 2003. We describe our experience with running the MEQ examination using the IVLE, or integrated virtual learning environment (https://ivle.nus.edu.sg), provide a blueprint for universities intending to conduct computer-based testing of the MEQ, and detail how our MEQ examination has evolved since its inception. An MEQ committee, comprising specialists in key disciplines from the departments of Medicine and Paediatrics, was formed. We utilized the IVLE, developed for our university in 1998, as the online platform on which we ran the MEQ. We calculated the number of man-hours (academic and support staff) required to run the MEQ examination, using either a computer-based or pen-and-paper format. With the support of our university's information technology (IT) specialists, we have successfully run the MEQ examination online, twice a year, since 2003. Initially, we conducted the examination with short-answer questions only, but have since expanded the MEQ examination to include multiple-choice and extended matching questions. A total of 1268 man-hours was spent in preparing for, and running, the MEQ examination using CBT, compared to 236.5 man-hours to run it using a pen-and-paper format. Despite being more labour-intensive, our students and staff prefer CBT to the pen-and-paper format. The MEQ can be conducted using a computer-based testing scenario, which offers several advantages over a pen-and-paper format. We hope to increase the number of questions and incorporate audio and video files, featuring clinical vignettes, to the MEQ examination in the near future.

  17. Portability studies of modular data base managers. Interim reports. [Running CDC's DATATRAN 2 on IBM 360/370 and IBM's JOSHUA on CDC computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kopp, H.J.; Mortensen, G.A.

    1978-04-01

    Approximately 60% of the full CDC 6600/7600 Datatran 2.0 capability was made operational on IBM 360/370 equipment. Sufficient capability was made operational to demonstrate adequate performance for modular program linking applications. Also demonstrated were the basic capabilities and performance required to support moderate-sized data base applications and moderately active scratch input/output applications. Approximately one to two calendar years are required to develop DATATRAN 2.0 capabilities fully for the entire spectrum of applications proposed. Included in the next stage of conversion should be syntax checking and syntax conversion features that would foster greater FORTRAN compatibility between IBM- and CDC-developed modules. The batch portion of the JOSHUA Modular System, which was developed by Savannah River Laboratory to run on an IBM computer, was examined for the feasibility of conversion to run on a Control Data Corporation (CDC) computer. Portions of the JOSHUA Precompiler were changed so as to be operable on the CDC computer. The Data Manager and Batch Monitor were also examined for conversion feasibility, but no changes were made in them. It appears to be feasible to convert the batch portion of the JOSHUA Modular System to run on a CDC computer with an estimated additional two to three man-years of effort. 9 tables.

  18. Joint Polar Satellite System

    NASA Technical Reports Server (NTRS)

    Trenkle, Timothy; Driggers, Phillip

    2011-01-01

    The Joint Polar Satellite System (JPSS) is a joint NOAA/NASA mission comprising a series of polar-orbiting weather and climate monitoring satellites which will fly in a sun-synchronous orbit with a 1330 equatorial crossing time. JPSS resulted from the decision to reconstitute the National Polar-orbiting Operational Environmental Satellite System (NPOESS) into two separate programs, one to be run by the Department of Defense (DOD) and the other by NOAA. This decision was reached in early 2010, after numerous development issues caused a series of unacceptable delays in launching the NPOESS system.

  19. Deriving Tools from Real-Time Runs: A New CCMC Support for SEC and AFWA

    NASA Technical Reports Server (NTRS)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2007-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aimed at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides researchers with the use of space science models, even if they are not model owners themselves. In particular, the CCMC provides to the research community the execution of "runs-on-request" for specific events of interest to space science researchers. Through this activity and the concurrent development of advanced visualization tools, CCMC provides the general science community unprecedented access to a large number of state-of-the-art research models. CCMC houses models that cover the entire domain from the Sun to the Earth. In this presentation, we will provide an overview of CCMC modeling services that are available to support activities at the Space Environment Center or at the Air Force Weather Agency.

  20. The application of connectionism to query planning/scheduling in intelligent user interfaces

    NASA Technical Reports Server (NTRS)

    Short, Nicholas, Jr.; Shastri, Lokendra

    1990-01-01

    In the mid-1990s, the Earth Observing System (EOS) will generate an estimated 10 terabytes of data per day. This enormous amount of data will require the use of sophisticated technologies from real-time distributed Artificial Intelligence (AI) and data management. Without addressing the overall problems in distributed AI, efficient models were developed for query planning and/or scheduling in intelligent user interfaces that reside in a network environment. Before intelligent query planning can be done, a model for real-time AI planning and/or scheduling must be developed. As Connectionist Models (CM) have shown promise in improving run times, a connectionist approach to AI planning and/or scheduling is proposed. The solution involves merging a CM rule-based system with a general spreading-activation model for the generation and selection of plans. The system was implemented in the Rochester Connectionist Simulator and runs on a Sun 3/260.

  1. Solar Power Tower Integrated Layout and Optimization Tool | Concentrating

    Science.gov Websites

    methods to reduce the overall computational burden while generating accurate and precise results. These methods have been developed as part of the U.S. Department of Energy (DOE) SunShot Initiative research

  2. Compilation of methods in orbital mechanics and solar geometry

    NASA Technical Reports Server (NTRS)

    Buglia, James J.

    1988-01-01

    This paper contains a collection of computational algorithms for determining geocentric ephemerides of Earth satellites, useful for both mission planning and data reduction applications. Special emphasis is placed on the computation of sidereal time and on the determination of the geocentric coordinates of the center of the Sun, all to the accuracy found in the Astronomical Almanac. The report is completely self-contained in that no reliance is placed on any external source of information; hence, these methods are ideal for computer application.
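
    As a flavor of the kind of algorithm such a compilation contains, the C sketch below evaluates Greenwich Mean Sidereal Time using the familiar linear approximation in days elapsed since epoch J2000.0. This simple form is an illustration only; the report's algorithms target full Astronomical Almanac accuracy, which requires additional terms.

        /* Low-precision GMST (degrees) from days elapsed since J2000.0. */
        #include <math.h>
        #include <stdio.h>

        double gmst_degrees(double days_since_j2000)
        {
            double g = 280.46061837 + 360.98564736629 * days_since_j2000;
            g = fmod(g, 360.0);
            return g < 0.0 ? g + 360.0 : g;
        }

        int main(void)
        {
            printf("GMST at J2000.0:    %.4f deg\n", gmst_degrees(0.0));
            printf("GMST one day later: %.4f deg\n", gmst_degrees(1.0));
            return 0;
        }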

  3. Identifying the impact of G-quadruplexes on Affymetrix 3' arrays using cloud computing.

    PubMed

    Memon, Farhat N; Owen, Anne M; Sanchez-Graillet, Olivia; Upton, Graham J G; Harrison, Andrew P

    2010-01-15

    A tetramer quadruplex structure is formed by four parallel strands of DNA/RNA containing runs of guanine. These quadruplexes are able to form because guanine can Hoogsteen hydrogen bond to other guanines, and a tetrad of guanines can form a stable arrangement. Recently we have discovered that probes on Affymetrix GeneChips that contain runs of guanine do not measure gene expression reliably. We associate this finding with the likelihood that quadruplexes are forming on the surface of GeneChips. In order to cope with the rapidly expanding size of GeneChip array datasets in the public domain, we are exploring the use of cloud computing to replicate our experiments on 3' arrays to look at the effect of the location of G-spots (runs of guanines). Cloud computing is a recently introduced high-performance solution that takes advantage of the computational infrastructure of large organisations such as Amazon and Google. We expect that cloud computing will become widely adopted because it enables bioinformaticians to avoid capital expenditure on expensive computing resources and to only pay a cloud computing provider for what is used. Moreover, as well as financial efficiency, cloud computing is an ecologically-friendly technology, it enables efficient data-sharing and we expect it to be faster for development purposes. Here we propose the advantageous use of cloud computing to perform a large data-mining analysis of public domain 3' arrays.
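
    Since the finding above hinges on probes containing runs of guanine, a trivial scan makes the criterion concrete. The threshold of four and the example 25-mer below are illustrative assumptions, not the paper's exact definition of a G-spot.

        /* Does a probe sequence contain a run of >= minrun guanines? */
        #include <stdio.h>

        int has_g_run(const char *probe, int minrun)
        {
            int run = 0;
            for (const char *p = probe; *p; p++) {
                run = (*p == 'G') ? run + 1 : 0;
                if (run >= minrun) return 1;
            }
            return 0;
        }

        int main(void)
        {
            const char *probe = "ACGTGGGGTACCTGATCGGATCAAC";  /* made-up 25-mer */
            printf("G-run of 4+: %s\n", has_g_run(probe, 4) ? "yes" : "no");
            return 0;
        }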

  4. Virtualization of Legacy Instrumentation Control Computers for Improved Reliability, Operational Life, and Management.

    PubMed

    Katz, Jonathan E

    2017-01-01

    Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, it is a burdensome process to reinstall and is fraught with "gotchas" that can derail the process: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have running legacy instrumentation, the computer is the ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer with all the software installed, ready to control your hardware should your original computer ever be decommissioned.

  5. Halo orbit transfer trajectory design using invariant manifold in the Sun-Earth system accounting radiation pressure and oblateness

    NASA Astrophysics Data System (ADS)

    Srivastava, Vineet K.; Kumar, Jai; Kushvah, Badam Singh

    2018-01-01

    In this paper, we study the invariant manifold and its application in transfer trajectory problem from a low Earth parking orbit to the Sun-Earth L1 and L2-halo orbits with the inclusion of radiation pressure and oblateness. Invariant manifold of the halo orbit provides a natural entrance to travel the spacecraft in the solar system along some specific paths due to its strong hyperbolic character. In this regard, the halo orbits near both collinear Lagrangian points are computed first. The manifold's approximation near the nominal halo orbit is computed using the eigenvectors of the monodromy matrix. The obtained local approximation provides globalization of the manifold by applying backward time propagation to the governing equations of motion. The desired transfer trajectory well suited for the transfer is explored by looking at a possible intersection between the Earth's parking orbit of the spacecraft and the manifold.

  6. Numerical modeling of the thin shallow solar dynamo

    NASA Astrophysics Data System (ADS)

    O'Bryan, J. B.; Jarboe, T. R.

    2017-10-01

    Nonlinear, numerical computation with the NIMROD code is used to explore and validate the thin shallow solar dynamo model [T.R. Jarboe et al. 2017], which explains the observed global temporal evolution (e.g. magnetic field reversal) and local surface structures (e.g. sunspots) of the sun. The key feature of this model is the presence and magnetic self-organization of global magnetic structures (GMS) lying just below the surface of the sun, which resemble 1D radial Taylor states of size comparable to the supergranule convection cells. First, we seek to validate the thin shallow solar dynamo model by reproducing the 11-year timescale for reversal of the solar magnetic field. Then, we seek to model the formation of GMS from convection-zone turbulence. Our computations simulate a slab covering a radial depth of 3 Mm and include differential rotation and gravity. Density, temperature, and resistivity profiles are taken from the Christensen-Dalsgaard model.

  7. Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters

    PubMed Central

    Torres-Huitzil, Cesar

    2013-01-01

    Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k^2 − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computation can be achieved by kernel decomposition and by using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters on 1024 × 1024 images with up to 255 × 255 kernels in around 8.4 milliseconds (120 frames per second) at a clock frequency of 250 MHz. The implementation is highly scalable in the kernel size, with a good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456
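
    The HGW trick referenced above computes a length-k running max with roughly three comparisons per sample, independent of k, by combining per-block prefix and suffix maxima. Below is a minimal 1-D software sketch in C; the fixed buffer size and the requirement that the signal length be a multiple of k are simplifications for illustration, whereas the paper implements the algorithm in hardware.

        /* out[i] = max of f[i..i+k-1], via per-block prefix (g) and
         * suffix (h) maxima: each window spans two adjacent blocks, so
         * its max is max(h[i], g[i+k-1]). */
        #include <stdio.h>

        #define NMAX 1024

        void hgw_max(const float *f, float *out, int n, int k)
        {
            static float g[NMAX], h[NMAX];
            for (int b = 0; b < n; b += k) {          /* blocks of length k */
                g[b] = f[b];
                for (int i = b + 1; i < b + k; i++)
                    g[i] = (f[i] > g[i - 1]) ? f[i] : g[i - 1];
                h[b + k - 1] = f[b + k - 1];
                for (int i = b + k - 2; i >= b; i--)
                    h[i] = (f[i] > h[i + 1]) ? f[i] : h[i + 1];
            }
            for (int i = 0; i + k - 1 < n; i++)
                out[i] = (h[i] > g[i + k - 1]) ? h[i] : g[i + k - 1];
        }

        int main(void)
        {
            float f[9] = { 3, 1, 4, 1, 5, 9, 2, 6, 5 }, out[9];
            hgw_max(f, out, 9, 3);
            for (int i = 0; i < 7; i++) printf("%.0f ", out[i]);
            printf("\n");                             /* 4 4 5 9 9 9 6 */
            return 0;
        }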

  8. Energy Frontier Research With ATLAS: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, John; Black, Kevin; Ahlen, Steve

    2016-06-14

    The Boston University (BU) group is playing key roles across the ATLAS experiment: in detector operations, the online trigger, the upgrade, computing, and physics analysis. Our team has been critical to the maintenance and operation of the muon system since its installation. During Run 1 we led the muon trigger group, and that responsibility continues into Run 2. BU maintains and operates the ATLAS Northeast Tier 2 computing center. We are actively engaged in the analysis of ATLAS data from Run 1 and Run 2. Physics analyses we have contributed to include Standard Model measurements (W and Z cross sections, ttbar differential cross sections, WWW* production), evidence for the Higgs decaying to τ+τ-, and searches for new phenomena (technicolor, Z' and W', vector-like quarks, dark matter).

  9. Automatic Data Filter Customization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as the Fischer discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can be simply filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire dataset and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds outside of which all data are rejected was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values, to save needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
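
    The genome described above is just a [lo, hi] interval per input dimension; any sounding falling outside any interval is filtered out before the expensive retrieval runs. The C sketch below shrinks this to two dimensions, synthetic samples, and a (1+1)-style mutate-and-select loop, standing in for the paper's 50-dimensional, population-based GA on a cluster; all numbers are invented.

        /* Evolve per-dimension [lo, hi] thresholds that reject "bad"
         * samples while keeping "good" ones. */
        #include <stdio.h>
        #include <stdlib.h>

        #define D 2
        #define NSAMP 8

        typedef struct { double lo[D], hi[D]; } Genome;

        /* synthetic soundings plus labels (1 = retrieval would succeed) */
        static const double x[NSAMP][D] = {
            { 0.2, 0.3 }, { 0.4, 0.5 }, { 0.5, 0.4 }, { 0.3, 0.6 },   /* good */
            { 0.9, 0.9 }, { 0.8, 0.1 }, { 0.05, 0.95 }, { 0.95, 0.05 } /* bad */
        };
        static const int label[NSAMP] = { 1, 1, 1, 1, 0, 0, 0, 0 };

        int accepted(const Genome *g, const double *s)
        {
            for (int d = 0; d < D; d++)
                if (s[d] < g->lo[d] || s[d] > g->hi[d]) return 0;
            return 1;
        }

        /* fitness: +1 per bad run filtered, -1 per good run wrongly filtered */
        int fitness(const Genome *g)
        {
            int score = 0;
            for (int i = 0; i < NSAMP; i++)
                if (!accepted(g, x[i])) score += label[i] ? -1 : +1;
            return score;
        }

        int main(void)
        {
            Genome best = { { 0.0, 0.0 }, { 1.0, 1.0 } };  /* reject nothing */
            srand(42);
            for (int gen = 0; gen < 2000; gen++) {
                Genome trial = best;
                int d = rand() % D;                        /* mutate one dim */
                trial.lo[d] += (rand() / (double)RAND_MAX - 0.5) * 0.2;
                trial.hi[d] += (rand() / (double)RAND_MAX - 0.5) * 0.2;
                if (fitness(&trial) >= fitness(&best)) best = trial;
            }
            printf("fitness %d, dim0 [%.2f, %.2f], dim1 [%.2f, %.2f]\n",
                   fitness(&best), best.lo[0], best.hi[0], best.lo[1], best.hi[1]);
            return 0;
        }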

  10. Open-source meteor detection software for low-cost single-board computers

    NASA Astrophysics Data System (ADS)

    Vida, D.; Zubović, D.; Šegon, D.; Gural, P.; Cupec, R.

    2016-01-01

    This work aims to overcome the current price threshold of meteor stations which can sometimes deter meteor enthusiasts from owning one. In recent years small card-sized computers became widely available and are used for numerous applications. To utilize such computers for meteor work, software which can run on them is needed. In this paper we present a detailed description of newly-developed open-source software for fireball and meteor detection optimized for running on low-cost single board computers. Furthermore, an update on the development of automated open-source software which will handle video capture, fireball and meteor detection, astrometry and photometry is given.

  11. How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing

    NASA Astrophysics Data System (ADS)

    Decyk, V. K.; Dauger, D. E.

    We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran 77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts into the mainstream of computing.

  12. SharP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkata, Manjunath Gorentla; Aderholdt, William F

    The pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend in system architecture is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, such a system typically has a high-performing network and a compute accelerator. This system architecture is effective not only for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or the convergence. SharP provides a programming abstraction to address this problem. The abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.

  13. Sports-related dermatoses among road runners in Southern Brazil

    PubMed Central

    Purim, Kátia Sheylla Malta; Leite, Neiva

    2014-01-01

    BACKGROUND: Road running is a growing sport. OBJECTIVES: To determine the prevalence of sports-related dermatoses among road runners. METHODS: Cross-sectional study of 76 road runners. Assessment was performed by means of a questionnaire, interview, and clinical examination. The chi-square and linear trend tests were used for analysis. RESULTS: Most athletes were men (61%), aged 38±11 years, who ran mid- or long-distance courses (60.5%) for 45 to 60 minutes (79%), covering a total of 25-64 km (42.1%) or more than 65 km (18.4%) per week. The most prevalent injuries were blisters (50%), chafing (42.1%), calluses (34.2%), onychomadesis (31.5%), tinea pedis (18.4%), onychocryptosis (14.5%), and cheilitis simplex (14.5%). Among athletes running >64 km weekly, several conditions were significantly more frequent: calluses (p<0.04), jogger's nipple (p<0.004), cheilitis simplex (p<0.05), and tinea pedis (p<0.004). There was a significant association between the weekly running distance and the probability of skin lesions. Of the athletes in our sample, 57% trained before 10 a.m., 86% wore clothing and accessories for sun protection, 62% wore sunscreen, and 19.7% experienced sunburn. Traumatic and environmental dermatoses are common in practitioners of this outdoor sport, and are influenced by the weekly running distance. CONCLUSION: In this group of athletes, rashes, blisters, sunburn, and nail disorders were recurrent complaints regardless of running distance. Calluses, athlete's foot, chapped lips, and jogger's nipple predominated in individuals who ran longer routes. PMID:25054745

  14. User's guide to the Octopus computer network (the SHOC manual)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, C.; Thompson, D.; Whitten, G.

    1977-07-18

    This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; ''quick'' methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers, two CDC STAR computers, and a broad array of peripheral equipment, from any of 800 or so remote terminals. 16 figures, 7 tables.

  15. User's guide to the Octopus computer network (the SHOC manual)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, C.; Thompson, D.; Whitten, G.

    1976-10-07

    This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; ''quick'' methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers, two CDC STAR computers, and a broad array of peripheral equipment, from any of 800 or so remote terminals. 8 figures, 4 tables.

  16. User's guide to the Octopus computer network (the SHOC manual)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, C.; Thompson, D.; Whitten, G.

    1975-06-02

    This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; ''quick'' methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers and a broad array of peripheral equipment, from any of 800 remote terminals. Octopus will soon include the Laboratory's STAR-100 computers. 9 figures, 5 tables. (auth)

  17. Massively parallel quantum computer simulator

    NASA Astrophysics Data System (ADS)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
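
    The memory figures quoted above follow directly from the state-vector representation: n qubits require 2^n complex amplitudes, so 36 qubits with double-precision amplitudes need on the order of a terabyte. The single-node C sketch below applies a Hadamard gate to one qubit of such a state vector; it illustrates the data layout only and is not the paper's parallel simulator.

        /* Apply a Hadamard gate to qubit q of an n-qubit state vector
         * stored as separate real and imaginary arrays of length 2^n. */
        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>

        void hadamard(double *re, double *im, int n, int q)
        {
            long dim = 1L << n, stride = 1L << q;
            double s = 1.0 / sqrt(2.0);
            for (long i = 0; i < dim; i++) {
                if (i & stride) continue;        /* visit each amplitude pair once */
                long j = i | stride;
                double ar = re[i], ai = im[i], br = re[j], bi = im[j];
                re[i] = s * (ar + br); im[i] = s * (ai + bi);
                re[j] = s * (ar - br); im[j] = s * (ai - bi);
            }
        }

        int main(void)
        {
            int n = 3;                           /* 2^3 = 8 amplitudes */
            long dim = 1L << n;
            double *re = calloc(dim, sizeof *re), *im = calloc(dim, sizeof *im);
            re[0] = 1.0;                         /* start in |000> */
            for (int q = 0; q < n; q++) hadamard(re, im, n, q);
            printf("amplitude: %.4f (expect 1/sqrt(8) = 0.3536)\n", re[0]);
            free(re); free(im);
            return 0;
        }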

  18. JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.

    PubMed

    Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J

    2010-04-01

    The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.

  19. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.

    PubMed

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general-purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it were part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
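
    A schematic sketch of the dry-run idea described above (illustrative, not NEST's implementation): a single process executes every simulation phase, but the communicator is a stub that keeps the per-rank buffer bookkeeping while exchanging nothing, so memory use and runtime behave as in an M-process job.

      class DryRunComm:
          # Stand-in communicator: one process pretends to be rank 0 of M ranks.
          def __init__(self, fake_num_ranks):
              self.size = fake_num_ranks  # what the code *thinks* the job size is
              self.rank = 0

          def alltoall(self, send_buffers):
              # A real MPI_Alltoall would exchange spike buffers between ranks;
              # the dry run skips the exchange but keeps the per-rank buffer
              # bookkeeping, so memory scales as in a true parallel run.
              assert len(send_buffers) == self.size
              return [[] for _ in range(self.size)]  # pretend nothing arrives

      def step(comm):
          outgoing = [[] for _ in range(comm.size)]  # spikes per target rank
          # ... local neuron and synapse updates would happen here ...
          comm.alltoall(outgoing)  # communication is the only step that is faked

      comm = DryRunComm(fake_num_ranks=1024)
      for _ in range(100):
          step(comm)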

  20. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    PubMed Central

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general-purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it were part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946

  1. ATLAS@Home: Harnessing Volunteer Computing for HEP

    NASA Astrophysics Data System (ADS)

    Adam-Bourdarios, C.; Cameron, D.; Filipčič, A.; Lancon, E.; Wu, W.; ATLAS Collaboration

    2015-12-01

    A recent common theme in HEP computing is the exploitation of opportunistic resources in order to provide the maximum statistics possible for Monte Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields, and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far many thousands of members of the public have signed up to contribute their spare CPU cycles for ATLAS, and there is potential for volunteer computing to provide a significant fraction of ATLAS computing resources. Here we describe the design of the project, the lessons learned so far, and the future plans.

  2. Can skin cancer prevention and early detection be improved via mobile phone text messaging? A randomised, attention control trial.

    PubMed

    Youl, Philippa H; Soyer, H Peter; Baade, Peter D; Marshall, Alison L; Finch, Linda; Janda, Monika

    2015-02-01

    To test the impact of a theory-based, SMS (text message)-delivered behavioural intervention (Healthy Text) targeting sun protection or skin self-examination behaviours compared to attention control. Overall, 546 participants aged 18-42 years were randomised using a computer-generated number list to the skin self-examination (N=176), sun protection (N=187), or attention control (N=183) text messages group. Each group received 21 text messages about their assigned topic over 12 months (12 weekly messages for 3 months, then monthly messages for the next 9 months). Data were collected via telephone survey at baseline, 3, and 12 months across Queensland from January 2012 to August 2013. One year after baseline, the sun protection (mean change 0.12; P=0.030) and skin self-examination groups (mean change 0.12; P=0.035) had significantly greater improvement in their sun protection habits (SPH) index compared to the attention control group (reference mean change 0.02). The increase in the proportion of participants who reported any skin self-examination from baseline to 12 months was significantly greater in the skin self-examination intervention group (103/163; 63%; P<0.001) than the sun protection (83/173; 48%) or attention control (65/165; 36%) groups. There was no significant effect of the intervention for participants' self-reported whole-body skin self-examination, sun tanning, or sunburn behaviours. The Healthy Text intervention was effective in inducing significant improvements in sun protection and any type of skin self-examination behaviours. The Australian and New Zealand Clinical Trials register (ACTRN12612000577819). Cancer Australia 1011999. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Understanding the Performance and Potential of Cloud Computing for Scientific Applications

    DOE PAGES

    Sadooghi, Iman; Martin, Jesus Hernandez; Li, Tonglin; ...

    2015-02-19

    Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources; however, not all scientists have access to sufficient high-end computing systems, many of which can be found in the Top500 list. Cloud computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with a reasonable performance per money spent. This work studies the performance of public clouds and places this performance in context to price. We evaluate the raw performance of different services of the AWS cloud in terms of the basic resources, such as compute, memory, network, and I/O. We also evaluate the performance of scientific applications running in the cloud. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud running scientific applications. We developed a full set of metrics and conducted a comprehensive performance evaluation over the Amazon cloud. We evaluated EC2, S3, EBS and DynamoDB among the many Amazon AWS services. We evaluated the memory sub-system performance with CacheBench, the network performance with iperf, processor and network performance with the HPL benchmark application, and shared storage with NFS and PVFS in addition to S3. We also evaluated a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications between public clouds, private clouds, or hybrid clouds.
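
    The performance-per-money framing reduces to a simple metric once a raw benchmark number and an hourly price are in hand. A toy illustration of that reduction (the instance names, GFLOPS figures, and prices below are made-up placeholders, not AWS data):

      # Combine a raw benchmark result with hourly price into a
      # cost-efficiency metric, in the spirit of the paper's analysis.
      instances = {
          "type.a": {"hpl_gflops": 220.0, "usd_per_hour": 0.50},
          "type.b": {"hpl_gflops": 900.0, "usd_per_hour": 2.40},
      }

      for name, d in instances.items():
          gflops_per_dollar = d["hpl_gflops"] / d["usd_per_hour"]
          print(f"{name}: {gflops_per_dollar:.0f} GFLOPS-hours per dollar")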

  4. Understanding the Performance and Potential of Cloud Computing for Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadooghi, Iman; Martin, Jesus Hernandez; Li, Tonglin

    Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources; however, not all scientists have access to sufficient high-end computing systems, many of which can be found in the Top500 list. Cloud computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with a reasonable performance per money spent. This work studies the performance of public clouds and places this performance in context to price. We evaluate the raw performance of different services of the AWS cloud in terms of the basic resources, such as compute, memory, network, and I/O. We also evaluate the performance of scientific applications running in the cloud. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud running scientific applications. We developed a full set of metrics and conducted a comprehensive performance evaluation over the Amazon cloud. We evaluated EC2, S3, EBS and DynamoDB among the many Amazon AWS services. We evaluated the memory sub-system performance with CacheBench, the network performance with iperf, processor and network performance with the HPL benchmark application, and shared storage with NFS and PVFS in addition to S3. We also evaluated a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications between public clouds, private clouds, or hybrid clouds.

  5. POLCAL - POLARIMETRIC RADAR CALIBRATION

    NASA Technical Reports Server (NTRS)

    Vanzyl, J.

    1994-01-01

    Calibration of polarimetric radar systems is a field of research in which great progress has been made over the last few years. POLCAL (Polarimetric Radar Calibration) is a software tool intended to assist in the calibration of Synthetic Aperture Radar (SAR) systems. In particular, POLCAL calibrates Stokes matrix format data produced as the standard product by the NASA/Jet Propulsion Laboratory (JPL) airborne imaging synthetic aperture radar (AIRSAR). POLCAL was designed to be used in conjunction with data collected by the NASA/JPL AIRSAR system. AIRSAR is a multifrequency (6 cm, 24 cm, and 68 cm wavelength), fully polarimetric SAR system which produces 12 x 12 km imagery at 10 m resolution. AIRSAR was designed as a testbed for NASA's Spaceborne Imaging Radar program. While the images produced after 1991 are thought to be calibrated (phase calibrated, cross-talk removed, channel imbalance removed, and absolutely calibrated), POLCAL can and should still be used to check the accuracy of the calibration and to correct it if necessary. Version 4.0 of POLCAL is an upgrade of POLCAL version 2.0 released to AIRSAR investigators in June, 1990. New options in version 4.0 include automatic absolute calibration of 89/90 data, distributed target analysis, calibration of nearby scenes with calibration parameters from a scene with corner reflectors, altitude or roll angle corrections, and calibration of errors introduced by known topography. Many sources of error can lead to false conclusions about the nature of scatterers on the surface. Errors in the phase relationship between polarization channels result in incorrect synthesis of polarization states. Cross-talk, caused by imperfections in the radar antenna itself, can also lead to error. POLCAL reduces cross-talk and corrects phase calibration without the use of ground calibration equipment. Removing the antenna patterns during SAR processing also forms a very important part of the calibration of SAR data. Errors in the processing altitude or in the aircraft roll angle are possible causes of error in computing the antenna patterns inside the processor. POLCAL uses an altitude error correction algorithm to correctly remove the antenna pattern from the SAR images. POLCAL also uses a topographic calibration algorithm to reduce calibration errors resulting from ground topography. By utilizing the backscatter measurements from either the corner reflectors or a well-known distributed target, POLCAL can correct the residual amplitude offsets in the various polarization channels and correct for the absolute gain of the radar system. POLCAL also gives the user the option of calibrating a scene using the calibration data from a nearby site. This allows precise calibration of all the scenes acquired on a flight line where corner reflectors were present. Construction and positioning of corner reflectors is covered extensively in the program documentation. In an effort to keep the POLCAL code as transportable as possible, the authors eliminated all interactions with a graphics display system. For this reason, it is assumed that users will have their own software for doing the following: (1) synthesize an image using HH or VV polarization, (2) display the synthesized image on any display device, and (3) read the pixel locations of the corner reflectors from the image. The only input used by the software (in addition to the input Stokes matrix data file) is a small data file with the corner reflector information. 
POLCAL is written in FORTRAN 77 for use on Sun series computers running SunOS and DEC VAX computers running VMS. It requires 4Mb of RAM under SunOS and 3.7Mb of RAM under VMS for execution. The standard distribution medium for POLCAL is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format or on a TK50 tape cartridge in DEC VAX FILES-11 format. Other distribution media may be available upon request. Documentation is included in the price of the program. POLCAL 4.0 was released in 1992 and is a copyrighted work with all copyright vested in NASA.
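
    For readers unfamiliar with polarization synthesis, the operation that POLCAL's calibration protects is a quadratic form over the per-pixel Stokes (Kennaugh) matrix. A schematic sketch of the standard textbook formulation, not POLCAL code (the identity matrix stands in for real pixel data):

      import numpy as np

      def stokes_vector(psi, chi):
          # Unit-power Stokes vector for orientation psi and ellipticity chi.
          return np.array([1.0,
                           np.cos(2 * psi) * np.cos(2 * chi),
                           np.sin(2 * psi) * np.cos(2 * chi),
                           np.sin(2 * chi)])

      def synthesized_power(M, psi_t, chi_t, psi_r, chi_r):
          # Received power (up to normalization) for chosen transmit and
          # receive polarization states; M is the 4x4 Stokes matrix of a pixel.
          return 0.5 * stokes_vector(psi_r, chi_r) @ M @ stokes_vector(psi_t, chi_t)

      M = np.eye(4)  # placeholder pixel; real data comes from the AIRSAR product
      hh = synthesized_power(M, 0.0, 0.0, 0.0, 0.0)           # both horizontal
      vv = synthesized_power(M, np.pi/2, 0.0, np.pi/2, 0.0)   # both vertical
      print(hh, vv)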

  6. RSTensorFlow: GPU Enabled TensorFlow for Deep Learning on Commodity Android Devices

    PubMed Central

    Alzantot, Moustafa; Wang, Yingnan; Ren, Zhengshuang; Srivastava, Mani B.

    2018-01-01

    Mobile devices have become an essential part of our daily lives. By virtue of both their increasing computing power and the recent progress made in AI, mobile devices have evolved to act as intelligent assistants in many tasks rather than a mere way of making phone calls. However, popular and commonly used tools and frameworks for machine intelligence still lack the ability to make proper use of the available heterogeneous computing resources on mobile devices. In this paper, we study the benefits of utilizing the heterogeneous (CPU and GPU) computing resources available on commodity Android devices while running deep learning models. We leveraged the heterogeneous computing framework RenderScript to accelerate the execution of deep learning models on commodity Android devices. Our system is implemented as an extension to the popular open-source framework TensorFlow. By integrating our acceleration framework tightly into TensorFlow, machine learning engineers can now easily benefit from the heterogeneous computing resources on mobile devices without the need for any extra tools. We evaluate our system on different Android phone models to study the trade-offs of running different neural network operations on the GPU. We also compare the performance of running different model architectures, such as convolutional and recurrent neural networks, on the CPU only versus using heterogeneous computing resources. Our results show that GPUs on the phones are capable of offering substantial performance gains in matrix multiplication on mobile devices. Therefore, models that involve multiplication of large matrices can run much faster (approx. 3 times faster in our experiments) due to GPU support. PMID:29629431
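
    The operator-level trade-off the authors measure can be reproduced in miniature: time matrix multiplication at growing sizes to see where dispatching to a faster back end would pay off. A hypothetical micro-benchmark (plain NumPy stands in for both back ends here; in RSTensorFlow the large-matrix case is what gets routed to the RenderScript GPU path):

      import time
      import numpy as np

      # Time square matmuls of growing size; the large sizes are where a GPU
      # back end wins (the paper reports ~3x for large matrices).
      for n in (64, 256, 1024):
          a = np.random.rand(n, n).astype(np.float32)
          b = np.random.rand(n, n).astype(np.float32)
          t0 = time.perf_counter()
          for _ in range(10):
              _ = a @ b
          dt = (time.perf_counter() - t0) / 10
          print(f"{n}x{n} matmul: {dt * 1e3:.2f} ms")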

  7. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  8. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    NASA Astrophysics Data System (ADS)

    Adam, C.; Barberis, D.; Crépé-Renaudin, S.; De, K.; Fassi, F.; Stradling, A.; Svatos, M.; Vartapetian, A.; Wolters, H.

    2017-10-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run 2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts’ workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run 1, this task was accomplished by a member of the expert team called the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run 2. The CRC position was proposed to cover some of the AMOD's former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help with the training of future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates communication between the ADC expert team and the other ADC shifters. These include the Distributed Analysis Support Team (DAST), which is the first point of contact for addressing all distributed analysis questions, and the ATLAS Distributed Computing Shifters (ADCoS), which check and report problems in central services, sites, Tier-0 export, data transfers and production tasks. Finally, the CRC looks at the level of ADC activities on a weekly or monthly timescale to ensure that ADC resources are used efficiently.

  9. RSTensorFlow: GPU Enabled TensorFlow for Deep Learning on Commodity Android Devices.

    PubMed

    Alzantot, Moustafa; Wang, Yingnan; Ren, Zhengshuang; Srivastava, Mani B

    2017-06-01

    Mobile devices have become an essential part of our daily lives. By virtue of both their increasing computing power and the recent progress made in AI, mobile devices have evolved to act as intelligent assistants in many tasks rather than a mere way of making phone calls. However, popular and commonly used tools and frameworks for machine intelligence still lack the ability to make proper use of the available heterogeneous computing resources on mobile devices. In this paper, we study the benefits of utilizing the heterogeneous (CPU and GPU) computing resources available on commodity Android devices while running deep learning models. We leveraged the heterogeneous computing framework RenderScript to accelerate the execution of deep learning models on commodity Android devices. Our system is implemented as an extension to the popular open-source framework TensorFlow. By integrating our acceleration framework tightly into TensorFlow, machine learning engineers can now easily benefit from the heterogeneous computing resources on mobile devices without the need for any extra tools. We evaluate our system on different Android phone models to study the trade-offs of running different neural network operations on the GPU. We also compare the performance of running different model architectures, such as convolutional and recurrent neural networks, on the CPU only versus using heterogeneous computing resources. Our results show that GPUs on the phones are capable of offering substantial performance gains in matrix multiplication on mobile devices. Therefore, models that involve multiplication of large matrices can run much faster (approx. 3 times faster in our experiments) due to GPU support.

  10. The Polarization Signature of Photospheric Magnetic Fields in 3D MHD Simulations and Observations at Disk Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beck, C.; Fabbian, D.; Rezaei, R.

    2017-06-10

    Before using three-dimensional (3D) magnetohydrodynamical (MHD) simulations of the solar photosphere in the determination of elemental abundances, one has to ensure that the correct amount of magnetic flux is present in the simulations. The presence of magnetic flux modifies the thermal structure of the solar photosphere, which affects abundance determinations and the solar spectral irradiance. The amount of magnetic flux in the solar photosphere also constrains any possible heating in the outer solar atmosphere through magnetic reconnection. We compare the polarization signals in disk-center observations of the solar photosphere in quiet-Sun regions with those in Stokes spectra computed on the basis of 3D MHD simulations having average magnetic flux densities of about 20, 56, 112, and 224 G. This approach allows us to find the simulation run that best matches the observations. The observations were taken with the Hinode SpectroPolarimeter (SP), the Tenerife Infrared Polarimeter (TIP), the Polarimetric Littrow Spectrograph (POLIS), and the GREGOR Fabry–Pérot Interferometer (GFPI), respectively. We determine characteristic quantities of full Stokes profiles in a few photospheric spectral lines in the visible (630 nm) and near-infrared (1083 and 1565 nm). We find that the appearance of abnormal granulation in intensity maps of degraded simulations can be traced back to an initially regular granulation pattern with numerous bright points in the intergranular lanes before the spatial degradation. The linear polarization signals in the simulations are almost exclusively related to canopies of strong magnetic flux concentrations and not to transient events of magnetic flux emergence. We find that the average vertical magnetic flux density in the simulation should be less than 50 G to reproduce the observed polarization signals in the quiet-Sun internetwork. A value of about 35 G gives the best match across the SP, TIP, POLIS, and GFPI observations.

  11. Designing and Implementing an OVERFLOW Reader for ParaView and Comparing Performance Between Central Processing Units and Graphical Processing Units

    NASA Technical Reports Server (NTRS)

    Chawner, David M.; Gomez, Ray J.

    2010-01-01

    In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. There are many different tools that are used for running these simulations, and each one has its own pros and cons. Once these simulations are run, there needs to be software capable of visualizing the results in an appealing manner. Some of this software is called open source, meaning that anyone can edit the source code to make modifications and distribute it to all other users in a future release. This is very useful, especially in this branch where many different tools are being used. File readers can be written to load any file format into a program, to ease the bridging from one tool to another. Programming such a reader requires knowledge of the file format that is being read as well as the equations necessary to obtain the derived values after loading. When running these CFD simulations, extremely large files are loaded and derived values are calculated. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) are usually used to render the graphics for computers; however, in recent years, GPUs have been used for more general-purpose applications because of the speed of these processors. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs are extended to run on GPUs, the amount of time they would require to complete would be much less. This would allow more simulations to be run in the same amount of time and possibly perform more complex computations.
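
    A small illustration of the "derived values" step such a reader performs after loading raw arrays: recovering pressure and Mach number from conserved flow variables under a perfect-gas assumption (gamma = 1.4). The array names and values are illustrative, not the OVERFLOW file layout.

      import numpy as np

      gamma = 1.4
      rho = np.array([1.0, 1.2])           # density at each point
      mom = np.array([[0.5, 0.0, 0.0],     # momentum vector per point
                      [0.8, 0.1, 0.0]])
      e_total = np.array([2.5, 2.9])       # total energy per unit volume

      vel = mom / rho[:, None]                               # velocity u
      kinetic = 0.5 * rho * np.sum(vel**2, axis=1)           # 0.5*rho*|u|^2
      pressure = (gamma - 1.0) * (e_total - kinetic)         # perfect gas
      speed_of_sound = np.sqrt(gamma * pressure / rho)
      mach = np.linalg.norm(vel, axis=1) / speed_of_sound
      print(pressure, mach)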

  12. SANs and Large Scale Data Migration at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen M.

    2004-01-01

    Evolution and migration are a way of life for provisioners of high-performance mass storage systems that serve high-end computers used by climate and Earth and space science researchers: the compute engines come and go, but the data remains. At the NASA Center for Computational Sciences (NCCS), disk and tape SANs are deployed to provide high-speed I/O for the compute engines and the hierarchical storage management systems. Along with gigabit Ethernet, they also enable the NCCS's latest significant migration: the transparent transfer of 300 TB of legacy HSM data into the new Sun SAM-QFS cluster.

  13. Four-body trajectory optimization

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1973-01-01

    A collection of typical three-body trajectories from the L1 libration point on the sun-earth line to the earth is presented. These trajectories in the sun-earth system are grouped into four distinct families which differ in transfer time and delta V requirements. Curves showing the variations of delta V with respect to transfer time, and typical two and three-impulse primer vector histories, are included. The development of a four-body trajectory optimization program to compute fuel optimal trajectories between the earth and a point in the sun-earth-moon system is also discussed. Methods for generating fuel optimal two-impulse trajectories which originate at the earth or a point in space, and fuel optimal three-impulse trajectories between two points in space, are presented. A brief qualitative comparison of these methods is given. An example of a four-body two-impulse transfer from the L1 libration point to the earth is included.

  14. 28 percent efficient GaAs concentrator solar cells

    NASA Technical Reports Server (NTRS)

    Macmillan, H. F.; Hamaker, H. C.; Kaminar, N. R.; Kuryla, M. S.; Ladle Ristow, M.

    1988-01-01

    AlGaAs/GaAs heteroface solar concentrator cells which exhibit efficiencies in excess of 27 percent at high solar concentrations (over 400 suns, AM1.5D, 100 mW/sq cm) have been fabricated with both n/p and p/n configurations. The best n/p cell achieved an efficiency of 28.1 percent around 400 suns, and the best p/n cell achieved an efficiency of 27.5 percent around 1000 suns. The high performance of these GaAs concentrator cells compared to earlier high-efficiency cells was due to improved control of the metal-organic chemical vapor deposition growth conditions and improved cell fabrication procedures (gridline definition and edge passivation). The design parameters of the solar cell structures and optimized grid pattern were determined with a realistic computer modeling program. An evaluation of the device characteristics and a discussion of future GaAs concentrator cell development are presented.

  15. Flood-plain delineation for Occoquan River, Wolf Run, Sandy Run, Elk Horn Run, Giles Run, Kanes Creek, Racoon Creek, and Thompson Creek, Fairfax County, Virginia

    USGS Publications Warehouse

    Soule, Pat LeRoy

    1978-01-01

    Water-surface profiles of the 25-, 50-, and 100-year recurrence interval discharges have been computed for all streams and reaches of channels in Fairfax County, Virginia, having a drainage area greater than 1 square mile except for Dogue Creek, Little Hunting Creek, and that portion of Cameron Run above Lake Barcroft. Maps having a 2-foot contour interval and a horizontal scale of 1 inch equals 100 feet were used as the base on which flood boundaries were delineated for 25-, 50-, and 100-year floods to be expected in each basin under ultimate development conditions. This report is one of a series and presents a discussion of techniques employed in computing discharges and profiles as well as the flood profiles and maps on which flood boundaries have been delineated for the Occoquan River and its tributaries within Fairfax County and those streams on Mason Neck within Fairfax County tributary to the Potomac River. (Woodard-USGS)

  16. Behaviour of Talitrus saltator (Crustacea: Amphipoda) on a rehabilitated sandy beach on the European Atlantic Coast (Portugal)

    NASA Astrophysics Data System (ADS)

    Bessa, Filipa; Rossano, Claudia; Nourisson, Delphine; Gambineri, Simone; Marques, João Carlos; Scapini, Felicita

    2013-01-01

    Environmental and human controls are widely accepted as the main structuring forces of the macrofauna communities on sandy beaches. A population of the talitrid amphipod Talitrus saltator (Montagu, 1808) was investigated on an exposed sandy beach on the Atlantic coast of Portugal (Leirosa beach) to estimate orientation capabilities and endogenous rhythms in conditions of recent changes in the landscape (artificial reconstruction of the foredune) and beach morphodynamics (stabilization against erosion from the sea). We tested sun orientation of talitrids on the beach and recorded their locomotor activity rhythms under constant conditions in the laboratory. The orientation data were analysed with circular statistics and multiple regression models adapted to angular distributions, to highlight the main factors and variables influencing the variation of orientation. The talitrids used the sun compass, visual cues (landscape and sun visibility) to orient and the precision of orientation varied according to the tidal regime (rising or ebbing tides). A well-defined free-running rhythm (circadian with in addition a bimodal rhythmicity, likely tidal) was highlighted in this population. This showed a stable behavioural adaptation on a beach that has experienced a process of artificial stabilization of the dune through nourishment actions over a decade. Monitoring the conditions of such dynamic environments and the resilience capacity of the inhabiting macroinfauna is a main challenge for sandy beach ecologists.

  17. Stability and evolution of orbits around the binary asteroid 175706 (1996 FG3): Implications for the MarcoPolo-R mission

    NASA Astrophysics Data System (ADS)

    Hussmann, Hauke; Oberst, Jürgen; Wickhusen, Kai; Shi, Xian; Damme, Friedrich; Lüdicke, Fabian; Lupovka, Valery; Bauer, Sven

    2012-09-01

    In support of the MarcoPolo-R mission, we have carried out numerical simulations of spacecraft trajectories about the binary asteroid 175706 (1996 FG3) under the influence of solar radiation pressure. We study the effects of (1) the asteroid's mass, shape, and rotational parameters, (2) the secondary's mass, shape, and orbit parameters, (3) the spacecraft's mass, surface area, and reflectivity, and (4) the time of arrival, and therefore the relative position to the sun and planets. We have considered distance regimes between 5 and 20 km, the typical range for a detailed characterization of the asteroids - primary and secondary - with imaging systems, spectrometers and by laser altimetry. With solar radiation pressure and gravity forces of the small asteroid competing, orbits are found to be unstable, in general. However, limited orbital stability can be found in the so-called Self-Stabilized Terminator Orbits (SSTO), where initial orbits are circular, orbital planes are oriented approximately perpendicular to the solar radiation pressure, and where the orbital plane of the spacecraft is shifted slightly (between 0.2 and 1 km) from the asteroid in the direction away from the sun. Under the effect of radiation pressure, the vector perpendicular to the orbit plane is observed to follow the sun direction. Shape and rotation parameters of the asteroid as well as gravitational perturbations by the secondary (not to mention sun and planets) were found not to affect the results. Such stable orbits may be suited for long radio tracking runs, which will allow for studying the gravity field. As the effect of the solar radiation pressure depends on the spacecraft mass, shape, and albedo, good knowledge of the spacecraft model and persistent monitoring of the spacecraft orientation are required.

  18. Assessment of errors and biases in retrievals of XCO2, XCH4, XCO, and XN2O from a 0.5 cm⁻¹ resolution solar-viewing spectrometer

    DOE PAGES

    Hedelius, Jacob K.; Viatte, Camille; Wunch, Debra; ...

    2016-08-03

    Bruker™ EM27/SUN instruments are commercial mobile solar-viewing near-IR spectrometers. They show promise for expanding the global density of atmospheric column measurements of greenhouse gases and are being marketed for such applications. They have been shown to measure the same variations of atmospheric gases within a day as the high-resolution spectrometers of the Total Carbon Column Observing Network (TCCON). However, little is known about the long-term precision and uncertainty budgets of EM27/SUN measurements. In this study, which includes a comparison of 186 measurement days spanning 11 months, we note that atmospheric variations of Xgas within a single day are well captured by these low-resolution instruments, but over several months, the measurements drift noticeably. We present comparisons between EM27/SUN instruments and the TCCON using GGG as the retrieval algorithm. In addition, we perform several tests to evaluate the robustness of the performance and determine the largest sources of errors from these spectrometers. We include comparisons of XCO2, XCH4, XCO, and XN2O. Specifically, we note EM27/SUN biases for January 2015 of 0.03, 0.75, –0.12, and 2.43 % for XCO2, XCH4, XCO, and XN2O respectively, with 1σ running precisions of 0.08 and 0.06 % for XCO2 and XCH4 from measurements in Pasadena. We also identify significant error caused by nonlinear sensitivity when using an extended spectral range detector used to measure CO and N2O.

  19. ACON: a multipurpose production controller for plasma physics codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snell, C.

    1983-01-01

    ACON is a BCON controller designed to run large production codes on the CTSS Cray-1 or the LTSS 7600 computers. ACON can also be operated interactively, with input from the user's terminal. The controller can run one code or a sequence of up to ten codes during the same job. Options are available to get and save Mass storage files, to perform Historian file updating operations, to compile and load source files, and to send out print and film files. Special features include the ability to retry after Mass failures, backup options for saving files, startup messages for the various codes, and the ability to reserve specified amounts of computer time after successive code runs. ACON's flexibility and power make it useful for running a number of different production codes.

  20. Proceedings of the Annual Meeting of the Association for Education in Journalism and Mass Communication (78th, Washington, DC, August 9-12, 1995). Communication Technology and Policy Division.

    ERIC Educational Resources Information Center

    Association for Education in Journalism and Mass Communication.

    The Communication Technology and Policy section of the proceedings contains the following eight papers: "Effects of Home Computer Use on Adolescents' Family Lives: Time Use and Relationships with Family Members in Taiwan and America" (Mine-Ping Sun); "Computers, Ambivalence and the Transformation of Journalistic Work" (John T.…

  1. ARTSN: An Automated Real-Time Spacecraft Navigation System

    NASA Technical Reports Server (NTRS)

    Burkhart, P. Daniel; Pollmeier, Vincent M.

    1996-01-01

    As part of the Deep Space Network (DSN) advanced technology program, an effort is underway to design a filter to automate the deep space navigation process. The automated real-time spacecraft navigation (ARTSN) filter task is based on a prototype consisting of a FORTRAN77 package operating on an HP-9000/700 workstation running HP-UX 9.05. This will be converted to C, and maintained as the operational version. The processing tasks required are: (1) read a measurement, (2) integrate the spacecraft state to the current measurement time, (3) compute the observable based on the integrated state, and (4) incorporate the measurement information into the state using an extended Kalman filter. This filter processes radiometric data collected by the DSN. The dynamic (force) models currently include point-mass gravitational terms for all planets, the Sun and Moon, solar radiation pressure, finite maneuvers, and attitude maintenance activity modeled quadratically. In addition, observable errors due to the troposphere are included. Further data types, force models, and observable models will be included to enhance the accuracy of the models and the capability of the package. The heart of the ARTSN is a currently available continuous-discrete extended Kalman filter. Simulated data used to test the implementation at various stages of development and the results from processing actual mission data are presented.
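
    Steps (3) and (4) above are the measurement update of a textbook extended Kalman filter. A schematic sketch of that update in generic EKF algebra, not the ARTSN code (the toy range observable and all numbers are hypothetical):

      import numpy as np

      def ekf_update(x, P, z, h, H, R):
          # x, P: state estimate and covariance after integrating to the
          # measurement time; z: observed value; h(x): predicted observable;
          # H: Jacobian of h at x; R: measurement noise covariance.
          y = z - h(x)                      # innovation
          S = H @ P @ H.T + R               # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
          x_new = x + K @ y
          P_new = (np.eye(len(x)) - K @ H) @ P
          return x_new, P_new

      # Toy 1D range measurement of a 2-state (position, velocity) craft:
      x = np.array([1.0e6, 10.0])
      P = np.diag([1e4, 1.0])
      h = lambda x: np.array([x[0]])        # observable: position only
      H = np.array([[1.0, 0.0]])
      x, P = ekf_update(x, P, np.array([1.00001e6]), h, H, np.array([[25.0]]))
      print(x)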

  2. Automated Feature and Event Detection with SDO AIA and HMI Data

    NASA Astrophysics Data System (ADS)

    Davey, Alisdair; Martens, P. C. H.; Attrill, G. D. R.; Engell, A.; Farid, S.; Grigis, P. C.; Kasper, J.; Korreck, K.; Saar, S. H.; Su, Y.; Testa, P.; Wills-Davey, M.; Savcheva, A.; Bernasconi, P. N.; Raouafi, N.-E.; Delouille, V. A.; Hochedez, J. F.; Cirtain, J. W.; Deforest, C. E.; Angryk, R. A.; de Moortel, I.; Wiegelmann, T.; Georgouli, M. K.; McAteer, R. T. J.; Hurlburt, N.; Timmons, R.

    The Solar Dynamics Observatory (SDO) represents a new frontier in quantity and quality of solar data. At about 1.5 TB/day, the data will not be easily digestible by solar physicists using the same methods that have been employed for images from previous missions. In order for solar scientists to use the SDO data effectively they need meta-data that will allow them to identify and retrieve data sets that address their particular science questions. We are building a comprehensive computer vision pipeline for SDO, abstracting complete metadata on many of the features and events detectable on the Sun without human intervention. Our project unites more than a dozen individual, existing codes into a systematic tool that can be used by the entire solar community. The feature finding codes will run as part of the SDO Event Detection System (EDS) at the Joint Science Operations Center (JSOC; joint between Stanford and LMSAL). The metadata produced will be stored in the Heliophysics Event Knowledgebase (HEK), which will be accessible on-line for the rest of the world directly or via the Virtual Solar Observatory (VSO). Solar scientists will be able to use the HEK to select event and feature data to download for science studies.

  3. Numerical Study of Solar Storms from the Sun to Earth

    NASA Astrophysics Data System (ADS)

    Feng, Xueshang; Jiang, Chaowei; Zhou, Yufen

    2017-04-01

    As solar storms sweep past the Earth, adverse changes occur in the geospace environment. How humans can mitigate and avoid the destructive damage caused by solar storms has become an important frontier issue of the high-tech age. It is of scientific significance to understand the dynamic processes of solar storms propagating through interplanetary space, and of practical value to conduct physics-based numerical research on the three-dimensional evolution of solar storms in interplanetary space, with the aid of powerful computing capacity, to predict the arrival times, intensities, and probable geoeffectiveness of solar storms at the Earth. So far, numerical studies based on magnetohydrodynamics (MHD) have gone through the transition from initial qualitative studies of principle to systematic quantitative studies of concrete events and numerical predictions. The numerical modeling community has a common goal: to develop an end-to-end physics-based modeling system for forecasting the Sun-Earth relationship. The transition of these models to operational use depends on the availability of computational resources at reasonable cost, and it is hoped that the models' prediction capabilities may be improved by incorporating observational findings and constraints into the physics-based models, combining observations, empirical models, and MHD simulations in organic ways. In this talk, we briefly focus on our recent progress in using solar observations to produce realistic magnetic configurations of CMEs as they leave the Sun, and in coupling data-driven simulations of CMEs to heliospheric simulations that then propagate the CME configuration to 1 AU; we also look ahead to the important numerical issues, and their possible solutions, in numerical space weather modeling from the Sun to Earth.

  4. Large-field high-resolution mosaic movies

    NASA Astrophysics Data System (ADS)

    Hammerschlag, Robert H.; Sliepen, Guus; Bettonvil, Felix C. M.; Jägers, Aswin P. L.; Sütterlin, Peter; Martin, Sara F.

    2012-09-01

    Movies with fields-of-view larger than normal for high-resolution telescopes will give a better understanding of processes on the Sun, such as filament and active region developments and their possible interactions. New active regions can influence, by their emergence, their environment to the extent of possibly serving as an igniter of the eruption of a nearby filament. A method to create a large field-of-view is to join several fields-of-view into a mosaic. Fields are imaged quickly one after another using fast telescope-pointing. Such a pointing cycle has been automated at the Dutch Open Telescope (DOT), a high-resolution solar telescope located on the Canary Island La Palma. The observer can draw with the computer mouse the desired total field in the guider-telescope image of the whole Sun. The guider telescope is equipped with an H-alpha filter and electronic enhancement of contrast in the image for good visibility of filaments and prominences. The number and positions of the subfields are calculated automatically and represented by an array of bright points indicating the subfield centers inside the drawn rectangle of the total field on the computer screen with the whole-sun image. When the exposures start, the telescope automatically repeats the sequence of subfields. Automatic production of flats is also programmed, including defocusing and fast motion of the image field over the solar disk. For the first time, mosaic movies were programmed from stored information on automated telescope motions from one field to the next. The mosaic movies fill the gap between the limited-resolution whole-sun images of synoptic telescopes, including space instruments, and the small-field high-cadence movies of high-resolution solar telescopes.

  5. Probabilistic Solar Wind Forecasting Using Large Ensembles of Near‐Sun Conditions With a Simple One‐Dimensional “Upwind” Scheme

    PubMed Central

    Riley, Pete

    2017-01-01

    Long lead‐time space‐weather forecasting requires accurate prediction of the near‐Earth solar wind. The current state of the art uses a coronal model to extrapolate the observed photospheric magnetic field to the upper corona, where it is related to solar wind speed through empirical relations. These near‐Sun solar wind and magnetic field conditions provide the inner boundary condition to three‐dimensional numerical magnetohydrodynamic (MHD) models of the heliosphere out to 1 AU. This physics‐based approach can capture dynamic processes within the solar wind, which affect the resulting conditions in near‐Earth space. However, this deterministic approach lacks a quantification of forecast uncertainty. Here we describe a complementary method to exploit the near‐Sun solar wind information produced by coronal models and provide a quantitative estimate of forecast uncertainty. By sampling the near‐Sun solar wind speed at a range of latitudes about the sub‐Earth point, we produce a large ensemble (N = 576) of time series at the base of the Sun‐Earth line. Propagating these conditions to Earth by a three‐dimensional MHD model would be computationally prohibitive; thus, a computationally efficient one‐dimensional “upwind” scheme is used. The variance in the resulting near‐Earth solar wind speed ensemble is shown to provide an accurate measure of the forecast uncertainty. Applying this technique over 1996–2016, the upwind ensemble is found to provide a more “actionable” forecast than a single deterministic forecast; potential economic value is increased for all operational scenarios, but particularly when false alarms are important (i.e., where the cost of taking mitigating action is relatively large). PMID:29398982
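
    A hedged sketch of the scheme type named in the abstract: a first-order upwind discretization of dv/dt + v dv/dr = 0, marching an inner-boundary speed series out to 1 AU. One hypothetical ensemble member is shown; repeating this over N perturbed members and taking the spread across them gives the uncertainty estimate described above. The grid sizes and speed profile are illustrative, not the paper's configuration.

      import numpy as np

      def upwind_propagate(v_inner_series, n_r, dr, dt):
          # March dv/dt + v dv/dr = 0 outward on a radial grid (v > 0 always),
          # feeding the inner boundary with a new speed sample each time step.
          v = np.full(n_r, v_inner_series[0])
          for v_in in v_inner_series:
              v[0] = v_in
              v[1:] = v[1:] - (dt / dr) * v[1:] * (v[1:] - v[:-1])  # upwind
              yield v[-1]  # speed arriving at the outer boundary (1 AU)

      # One hypothetical member: slow wind plus a recurring fast stream.
      t = np.arange(4000)
      series = 400.0 + 300.0 * np.exp(-(((t % 1000) - 500) / 120.0) ** 2)  # km/s
      n_r = 150
      dr = 1.5e8 / n_r              # Sun-Earth distance in km over the grid
      dt = 0.5 * dr / series.max()  # CFL-stable explicit time step (s)
      v_1au = np.fromiter(upwind_propagate(series, n_r, dr, dt), dtype=float)
      print(v_1au.min(), v_1au.max())  # spread would come from N such members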

  6. Review of Collaborative Tools for Planning and Engineering

    DTIC Science & Technology

    2007-10-01

    including PDAs) and operating systems...In general, should support laptops, desktops, Windows OS, Mac OS, Palm OS, Windows CE, Blackberry, Sun...better), voting (to establish operating parameters), reactor design, wind tunnel simulation...display same material on every computer, synchronisation

  7. Experimental Realization of High-Efficiency Counterfactual Computation.

    PubMed

    Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng

    2015-08-21

    Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency was limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. A counterfactual efficiency of up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.

  8. Experimental Realization of High-Efficiency Counterfactual Computation

    NASA Astrophysics Data System (ADS)

    Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng

    2015-08-01

    Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency was limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. A counterfactual efficiency of up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.

  9. Running of scalar spectral index in multi-field inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Jinn-Ouk, E-mail: jinn-ouk.gong@apctp.org

    We compute the running of the scalar spectral index in general multi-field slow-roll inflation. By incorporating explicit momentum dependence at the moment of horizon crossing, we can find the running straightforwardly. At the same time, we can distinguish the contributions from the quasi de Sitter background and the super-horizon evolution of the field fluctuations.
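
    For reference, the quantities involved have standard definitions (textbook conventions, not notation taken from this paper): the spectral index and its running are successive logarithmic derivatives of the curvature power spectrum, evaluated at horizon crossing k = aH.

      \begin{align}
        n_s - 1 &= \frac{\mathrm{d}\ln \mathcal{P}_{\zeta}}{\mathrm{d}\ln k},
        &
        \alpha_s &\equiv \frac{\mathrm{d} n_s}{\mathrm{d}\ln k} .
      \end{align}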

  10. Beauty and the beast: Some perspectives on efficient model analysis, surrogate models, and the future of modeling

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.

    2015-12-01

    For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and finer time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by number of model runs divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal the consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenge include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit related to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical of (2) compare with that from the globally averaged methods typical of (3) for typical systems? The discussion will use examples of the response of the Greenland glacier to global warming and surface-water and groundwater modeling.
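
    The cost statement above, written out explicitly (the symbol names are ours, introduced only for this transcription):

      \begin{equation}
        \text{computational demand}
          = \frac{t_{\text{run}} \times N_{\text{runs}}}{P},
      \end{equation}
      % where t_run is the execution time of one model run, N_runs the number
      % of model runs the analysis requires, and P the available parallelism.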

  11. Program Processes Thermocouple Readings

    NASA Technical Reports Server (NTRS)

    Quave, Christine A.; Nail, William, III

    1995-01-01

    Digital Signal Processor for Thermocouples (DART) computer program implements precise and fast method of converting voltage to temperature for large-temperature-range thermocouple applications. Written using LabVIEW software. DART available only as object code for use on Macintosh II FX or higher-series computers running System 7.0 or later and IBM PC-series and compatible computers running Microsoft Windows 3.1. Macintosh version of DART (SSC-00032) requires LabVIEW 2.2.1 or 3.0 for execution. IBM PC version (SSC-00031) requires LabVIEW 3.0 for Windows 3.1. LabVIEW software product of National Instruments and not included with program.
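
    DART's internals are not documented here, but voltage-to-temperature conversion over a large range is conventionally done with piecewise inverse polynomials (NIST-style) evaluated by Horner's rule; a minimal sketch under that assumption follows. The coefficients and ranges are placeholders, not a real NIST table.

      def emf_to_temperature(emf_mv, coeffs):
          # Horner evaluation: ((c_n * x + c_{n-1}) * x + ...) * x + c_0
          t = 0.0
          for c in reversed(coeffs):
              t = t * emf_mv + c
          return t

      RANGE_COEFFS = [
          # (emf_min_mV, emf_max_mV, [c0, c1, c2, ...]) -- hypothetical values
          (0.0, 20.0, [0.0, 25.0, -0.5, 0.01]),
          (20.0, 55.0, [10.0, 24.0, -0.2, 0.005]),
      ]

      def convert(emf_mv):
          # Select the polynomial for the sub-range the reading falls in.
          for lo, hi, coeffs in RANGE_COEFFS:
              if lo <= emf_mv < hi:
                  return emf_to_temperature(emf_mv, coeffs)
          raise ValueError("EMF outside calibrated range")

      print(convert(4.1))  # temperature (deg C) for a 4.1 mV reading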

  12. Generalized environmental control and life support system computer program (G189A) configuration control, phase 2

    NASA Technical Reports Server (NTRS)

    Mcenulty, R. E.

    1977-01-01

    The G189A simulation of the Shuttle Orbiter ECLSS was upgraded. All simulation library versions and simulation models were converted from the EXEC2 to the EXEC8 computer system and a new program, G189PL, was added to the combination master program library. The program permits the post-plotting of up to 100 frames of plot data over any time interval of a G189 simulation run. The overlay structure of the G189A simulations were restructured for the purpose of conserving computer core requirements and minimizing run time requirements.

  13. INHYD: Computer code for intraply hybrid composite design. A users manual

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Sinclair, J. H.

    1983-01-01

    A computer program (INHYD) was developed for intraply hybrid composite design. A users manual for INHYD is presented. INHYD embodies several composite micromechanics theories, intraply hybrid composite theories, and an integrated hygrothermomechanical theory. INHYD can be run in both interactive and batch modes. It has considerable flexibility and capability, which the user can exercise through several options. These options are demonstrated through appropriate INHYD runs in the manual.

  14. Topology Optimization for Reducing Additive Manufacturing Processing Distortions

    DTIC Science & Technology

    2017-12-01

    features that curl or warp under thermal load and are subsequently struck by the recoater blade/roller. Support structures act to wick heat away and...was run for 150 iterations. The material properties for all examples were Young's modulus E = 1 GPa, Poisson's ratio ν = 0.25, and thermal expansion...the element-birth model is significantly more computationally expensive for a full optimization run. Consider the computational complexity of a

  15. MindModeling@Home . . . and Anywhere Else You Have Idle Processors

    DTIC Science & Technology

    2009-12-01

    was SETI@Home. It was established in 1999 for the purpose of demonstrating the utility of “distributed grid computing” by providing a mechanism for...the public imagination, and SETI@Home remains the longest running and one of the most popular volunteer computing projects in the world. This...pursuits. Most of them, including SETI@Home, run on a software architecture called the Berkeley Open Infrastructure for Network Computing (BOINC). Some of

  16. Robust Optimization Design for Turbine Blade-Tip Radial Running Clearance using Hierarchically Response Surface Method

    NASA Astrophysics Data System (ADS)

    Zhiying, Chen; Ping, Zhou

    2017-11-01

    Considering the computational precision and efficiency of robust optimization for a complex mechanical assembly relationship like turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate an assembly-system-level approximation model of the overall parameters and blade-tip clearance, and then a set of samples of design parameters and objective response mean and/or standard deviation is generated by using the system approximation model and the design-of-experiment method. Finally, a new response surface approximation model is constructed by using those samples, and this approximation model is used for the robust optimization process. The analysis results demonstrate that the proposed method can dramatically reduce the computational cost and ensure computational precision. The presented research offers an effective way for the robust optimization design of turbine blade-tip radial running clearance.
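
    A toy sketch of the two-level idea (illustrative only, not the authors' algorithm): fit a cheap response surface to a few expensive samples, estimate the response mean and scatter under input uncertainty by sampling the surrogate, and choose the design minimizing a mean-plus-k-sigma objective. The "clearance" function here is a made-up stand-in for the real assembly analysis.

      import numpy as np

      rng = np.random.default_rng(1)
      true_clearance = lambda x: (x - 0.3) ** 2 + 0.05 * np.sin(8 * x)

      # 1) Fit a quadratic surrogate from a handful of expensive evaluations.
      x_s = np.linspace(0.0, 1.0, 12)
      surrogate = np.poly1d(np.polyfit(x_s, true_clearance(x_s), deg=2))

      # 2) Robust objective: mean + 3*std of the surrogate under input scatter.
      def robust_objective(x, sigma=0.02, n=2000):
          y = surrogate(x + rng.normal(0.0, sigma, n))
          return y.mean() + 3.0 * y.std()

      # 3) Cheap search over candidate designs using only the surrogate.
      candidates = np.linspace(0.0, 1.0, 201)
      best = min(candidates, key=robust_objective)
      print(f"robust design point: {best:.3f}")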

  17. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

    A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevancy of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations they impose vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897, with additional support from the Louisiana Board of Regents.

  18. DualSPHysics: A numerical tool to simulate real breakwaters

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Crespo, Alejandro; Altomare, Corrado; Domínguez, José; Marzeddu, Andrea; Shang, Shao-ping; Gómez-Gesteira, Moncho

    2018-02-01

    The open-source code DualSPHysics is used in this work to compute the wave run-up on an existing dike on the Chinese coast using realistic dimensions, bathymetry and wave conditions. The GPU computing power of DualSPHysics allows simulating real engineering problems that involve complex geometries with high resolution in a reasonable computational time. The code is first validated by comparing the numerical free-surface elevation, the wave orbital velocities and the time series of the run-up with physical data from a wave flume. Those experiments include a smooth dike and an armored dike with two layers of cubic blocks. After validation, the code is applied to a real case to obtain the wave run-up under different incident wave conditions. In order to simulate the real open sea, the spurious reflections from the wavemaker are removed by using an active wave absorption technique.

  19. Prediction of sound radiated from different practical jet engine inlets

    NASA Technical Reports Server (NTRS)

    Zinn, B. T.; Meyer, W. L.

    1980-01-01

    Existing computer codes for calculating the far-field radiation patterns surrounding various practical jet engine inlet configurations under different excitation conditions were upgraded. The computer codes were refined and expanded so that they are now more computationally efficient by a factor of about three and are capable of producing accurate results up to nondimensional wave numbers of twenty. Computer programs were also developed to help generate accurate geometrical representations of the inlets to be investigated; these data are required as input for the programs which calculate the sound fields. The new geometry-generating program considerably reduces the time required to generate the input data, which was one of the most time-consuming steps in the process. The results of sample runs using the NASA-Lewis QCSEE inlet are presented, and comparisons of run time and accuracy are made between the old and upgraded computer codes. The overall accuracy of the computations is determined by comparison of the computed results with simple source solutions.

  20. Shor's factoring algorithm and modern cryptography. An illustration of the capabilities inherent in quantum computers

    NASA Astrophysics Data System (ADS)

    Gerjuoy, Edward

    2005-06-01

    The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.
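
    The classical half of the algorithm is easy to illustrate. In the sketch below the order r of a modulo N is found by brute force, standing in for the quantum period-finding step (feasible only for a toy modulus); the post-processing, checking that r is even and that a^(r/2) is a nontrivial square root of 1 mod N, is exactly where a run can fail and must be repeated with another a.

      from math import gcd

      def order(a, N):
          # Brute-force order finding; a quantum computer does this step
          # efficiently via period finding.
          r, x = 1, a % N
          while x != 1:
              x = (x * a) % N
              r += 1
          return r

      def shor_factor_attempt(a, N):
          if gcd(a, N) != 1:
              return gcd(a, N)          # lucky guess already shares a factor
          r = order(a, N)
          if r % 2 == 1:
              return None               # odd order: rerun with another a
          y = pow(a, r // 2, N)
          if y == N - 1:
              return None               # trivial square root: rerun
          return gcd(y - 1, N)          # nontrivial factor of N

      print(shor_factor_attempt(7, 15))   # 7 has order 4 mod 15 -> factor 3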

  1. Programming the social computer.

    PubMed

    Robertson, David; Giunchiglia, Fausto

    2013-03-28

    The aim of 'programming the global computer' was identified by Milner and others as one of the grand challenges of computing research. At the time this phrase was coined, it was natural to assume that this objective might be achieved primarily through extending programming and specification languages. The Internet, however, has brought with it a different style of computation that (although harnessing variants of traditional programming languages) operates in a style different to those with which we are familiar. The 'computer' on which we are running these computations is a social computer in the sense that many of the elementary functions of the computations it runs are performed by humans, and successful execution of a program often depends on properties of the human society over which the program operates. These sorts of programs are not programmed in a traditional way and may have to be understood in a way that is different from the traditional view of programming. This shift in perspective raises new challenges for the science of the Web and for computing in general.

  2. On Stellar Winds as a Source of Mass: Applying Bondi-Hoyle-Lyttleton Accretion

    NASA Astrophysics Data System (ADS)

    Detweiler, L. G.; Yates, K.; Siem, E.

    2017-12-01

    The interaction between planets orbiting stars and the stellar wind that stars emit is investigated. The main goal of this research is to devise a method for calculating the amount of mass accumulated by an arbitrary planet from the stellar wind of its parent star via accretion processes. To achieve this goal, the Bondi-Hoyle-Lyttleton (BHL) mass accretion rate equation and model is employed. In order to use the BHL equation, various parameters of the stellar wind are required to be known, including the velocity, density, and speed of sound of the wind. In order to create a method that is applicable to arbitrary planets orbiting arbitrary stars, Eugene Parker's isothermal stellar wind model is used to calculate these stellar wind parameters. In an isothermal wind, the speed of sound is simple to compute; however, the velocity and density equations are transcendental, so the solutions must be approximated numerically. By combining Parker's isothermal stellar wind model with the BHL accretion equation, a method for computing planetary accretion rates inside a star's stellar wind is realized. This method is then applied to a variety of scenarios. First, it is used to calculate the amount of mass that our solar system's planets will accrete from the solar wind throughout our Sun's lifetime. Then, some theoretical situations are considered. We consider the amount of mass various brown dwarfs would accrete from the solar wind of our Sun throughout its lifetime if they were orbiting the Sun at Jupiter's distance. For very high mass brown dwarfs, a significant amount of mass is accreted; in the case of the brown dwarf 15 Sagittae B, it actually accretes enough mass to surpass the mass limit for hydrogen fusion. Since 15 Sagittae B orbits a star that is very similar to our Sun, this encouraged calculations for 15 Sagittae B orbiting at its true distance from its star, 15 Sagittae. It was found that at this distance it does not accrete enough mass to surpass the mass limit for hydrogen fusion. Finally, we apply the method to brown dwarfs orbiting a 15 solar mass star at Jupiter's distance. A significantly smaller amount of mass is found to be accreted when compared with the same brown dwarfs orbiting our Sun at the same distance.
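
    A minimal numerical sketch of the method as described, assuming an illustrative coronal temperature and a solar-like mass-loss rate: Parker's transcendental wind equation is solved with a root finder, the density follows from mass conservation, and the BHL rate Mdot = 4*pi*G^2*M^2*rho/(v^2 + cs^2)^(3/2) is evaluated at Jupiter's orbit.

      import numpy as np
      from scipy.optimize import brentq

      G, k_B, m_p, M_sun = 6.674e-11, 1.381e-23, 1.673e-27, 1.989e30  # SI

      T = 1.5e6                                # assumed corona temperature [K]
      c_s = np.sqrt(k_B * T / (0.6 * m_p))     # isothermal sound speed
      r_c = G * M_sun / (2 * c_s**2)           # critical (sonic) radius

      def parker_speed(r):
          # Parker (1958): (v/cs)^2 - ln[(v/cs)^2] = 4 ln(r/rc) + 4 rc/r - 3
          rhs = 4 * np.log(r / r_c) + 4 * r_c / r - 3
          f = lambda w: w**2 - np.log(w**2) - rhs
          # accelerating branch: subsonic inside r_c, supersonic outside
          w = brentq(f, 1.0, 50.0) if r > r_c else brentq(f, 1e-6, 1.0)
          return w * c_s

      mdot_wind = 2e9                          # solar-like mass loss [kg/s]
      def density(r):
          # mass conservation: mdot = 4 pi r^2 rho v
          return mdot_wind / (4 * np.pi * r**2 * parker_speed(r))

      def bhl_rate(M_planet, r):
          v, rho = parker_speed(r), density(r)
          return 4 * np.pi * G**2 * M_planet**2 * rho / (v**2 + c_s**2) ** 1.5

      r_jup = 7.78e11                          # Jupiter's orbital radius [m]
      print(f"accretion rate: {bhl_rate(1.9e27, r_jup):.3e} kg/s")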

  3. Computer program for the IBM personal computer which searches for approximate matches to short oligonucleotide sequences in long target DNA sequences.

    PubMed Central

    Myers, E W; Mount, D W

    1986-01-01

    We describe a program which may be used to find approximate matches to a short predefined DNA sequence in a larger target DNA sequence. The program predicts the usefulness of specific DNA probes and sequencing primers and finds nearly identical sequences that might represent the same regulatory signal. The program is written in the C programming language and will run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The program has been integrated into an existing software package for the IBM personal computer (see article by Mount and Conrad, this volume). Some examples of its use are given. PMID:3753785
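
    The kind of search such a program performs can be sketched with the classic dynamic-programming recurrence for approximate matching; the abstract does not specify the program's exact algorithm, so this is an illustration, not the published code.

      def approximate_matches(probe, target, k):
          # col[i] = best edit distance between probe[:i] and some substring
          # of the target ending at the current position (Sellers' algorithm)
          m = len(probe)
          col = list(range(m + 1))              # before any target character
          hits = []
          for j, base in enumerate(target):
              prev_diag, col[0] = col[0], 0     # a match may start anywhere
              for i in range(1, m + 1):
                  cur = min(col[i] + 1,         # skip this target base
                            col[i - 1] + 1,     # skip probe base i
                            prev_diag + (probe[i - 1] != base))  # (mis)match
                  prev_diag, col[i] = col[i], cur
              if col[m] <= k:
                  hits.append((j, col[m]))      # approximate match ends at j
          return hits

      # e.g. find the BamHI site GGATCC allowing one difference
      print(approximate_matches("GGATCC", "ACGGGTCCTTAGGATCCAA", 1))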

  4. AGIS: Evolution of Distributed Computing information system for ATLAS

    NASA Astrophysics Data System (ADS)

    Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.

    2015-12-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has been evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  5. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    PubMed

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
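
    The client-driven pattern the abstract describes can be sketched as a polling worker; the server URL, routes, and JSON fields below are hypothetical stand-ins, not JobCenter's actual wire protocol.

      import json
      import time
      import urllib.request

      SERVER = "https://jobs.example.org/api"   # hypothetical endpoint
      SUPPORTED = {"echo": lambda args: args}   # job types this worker runs

      def poll_once():
          # Outbound-only request, so the worker can sit behind a firewall.
          req = urllib.request.Request(
              f"{SERVER}/next-job?types={','.join(SUPPORTED)}")
          with urllib.request.urlopen(req) as resp:
              job = json.load(resp)             # hypothetical job document
          if not job:
              return                            # nothing queued for us
          result = SUPPORTED[job["type"]](job.get("args"))
          body = json.dumps({"id": job["id"], "result": result}).encode()
          urllib.request.urlopen(urllib.request.Request(
              f"{SERVER}/complete", data=body,
              headers={"Content-Type": "application/json"}))

      while True:                               # poll, run, report, repeat
          poll_once()
          time.sleep(10)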

  6. CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.

    ERIC Educational Resources Information Center

    Skrein, Dale

    1994-01-01

    CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)

  7. Flame-Vortex Studies to Quantify Markstein Numbers Needed to Model Flame Extinction Limits

    NASA Technical Reports Server (NTRS)

    Driscoll, James F.; Feikema, Douglas A.

    2003-01-01

    This work has quantified a database of Markstein numbers for unsteady flames; future work will quantify a database of flame extinction limits for unsteady conditions. Unsteady extinction limits have not been documented previously; both a stretch rate and a residence time must be measured, since extinction requires that the stretch rate be sufficiently large for a sufficiently long residence time. Ma was measured for an inwardly-propagating flame (IPF) that is negatively stretched under microgravity conditions. Computations also were performed using RUN-1DL to explain the measurements. The Markstein number of an inwardly-propagating flame, for both the microgravity experiment and the computations, is significantly larger than that of an outwardly-propagating flame (OPF). The computed profiles of the various species within the flame suggest reasons: computed hydrogen concentrations build up ahead of the IPF but not the OPF. Understanding was gained by running the computations for both simplified and full-chemistry conditions. To explain the experimental findings, numerical simulations of both inwardly and outwardly propagating spherical flames (with complex chemistry) were generated using the RUN-1DL code, which includes 16 species and 46 reactions.

  8. Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm

    NASA Astrophysics Data System (ADS)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.

    2014-06-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. OpenStack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.

  9. Dark matter in the Sun: scattering off electrons vs nucleons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garani, Raghuveer; Palomares-Ruiz, Sergio, E-mail: garani@th.physik.uni-bonn.de, E-mail: sergiopr@ific.uv.es

    The annihilation of dark matter (DM) particles accumulated in the Sun could produce a flux of neutrinos, which is potentially detectable with neutrino detectors/telescopes, and the DM elastic scattering cross section can be constrained. Although the process of DM capture in astrophysical objects like the Sun is commonly assumed to be due to interactions only with nucleons, there are scenarios in which tree-level DM couplings to quarks are absent, and even if loop-induced interactions with nucleons are allowed, scatterings off electrons could be the dominant capture mechanism. We consider this possibility and study in detail all the ingredients necessary to compute the neutrino production rates from DM annihilations in the Sun (capture, annihilation and evaporation rates) for velocity-independent and isotropic, velocity-dependent and isotropic, and momentum-dependent scattering cross sections for DM interactions with electrons, and compare them with the results obtained for the case of interactions with nucleons. Moreover, we improve the usual calculations in a number of ways and provide analytical expressions in three appendices. Interestingly, we find that the evaporation mass in the case of interactions with electrons could be below the GeV range, depending on the high-velocity tail of the DM distribution in the Sun, which would open a new mass window for searching for this type of scenario.

  10. Computation of glint, glare, and solar irradiance distribution

    DOEpatents

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2017-08-01

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.
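
    A heavily simplified sketch of the normalization idea: pixels imaging the Sun give a counts-to-irradiance reference, so receiver-pixel intensities can be scaled by the known direct normal irradiance (DNI). Real systems must also account for filters, exposure and geometry; everything below is illustrative, not the patented method.

      import numpy as np

      def irradiance_map(receiver_img, sun_img, dni_w_per_m2):
          # mean counts per pixel over the solar disk (pixels above threshold)
          disk = sun_img[sun_img > 0.5 * sun_img.max()]
          counts_per_sun = disk.mean()
          # receiver intensity in "suns", scaled by DNI, gives W/m^2
          return receiver_img / counts_per_sun * dni_w_per_m2

      rng = np.random.default_rng(1)
      sun = rng.uniform(900, 1000, (32, 32))    # stand-in solar-disk image
      recv = rng.uniform(0, 5e5, (64, 64))      # stand-in receiver image
      flux = irradiance_map(recv, sun, dni_w_per_m2=950.0)
      print(f"peak flux ~ {flux.max():.0f} W/m^2")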

  11. Computation of glint, glare, and solar irradiance distribution

    DOEpatents

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2015-08-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.

  12. Unified algorithm of cone optics to compute solar flux on central receiver

    NASA Astrophysics Data System (ADS)

    Grigoriev, Victor; Corsi, Clotilde

    2017-06-01

    Analytical algorithms to compute the flux distribution on a central receiver are considered as a faster alternative to ray tracing. Many variants exist, with HFLCAL and UNIZAR being the most recognized and verified. In this work, a generalized algorithm is presented which is valid for an arbitrary radially symmetric sun shape. Heliostat mirrors can have a nonrectangular profile, and the effects of shading and blocking, strong defocusing and astigmatism can be taken into account. The algorithm is suitable for parallel computing and can benefit from hardware acceleration of polygon texturing.
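
    For orientation, the HFLCAL-style model such algorithms generalize treats each heliostat image on the receiver as a circular Gaussian whose effective width combines sunshape, beam quality, tracking and astigmatism errors. A minimal sketch with illustrative numbers:

      import numpy as np

      def hflcal_flux(x, y, power_w, sigma_sun, sigma_bq, sigma_trk, sigma_ast):
          # effective image spread [m]: error sources combine in quadrature
          sigma = np.sqrt(sigma_sun**2 + sigma_bq**2
                          + sigma_trk**2 + sigma_ast**2)
          # circular-Gaussian flux [W/m^2] at (x, y) from the aim point
          return power_w / (2 * np.pi * sigma**2) * np.exp(
              -(x**2 + y**2) / (2 * sigma**2))

      # flux at the aim point and 0.5 m off-axis for a 50 kW heliostat image
      for r in (0.0, 0.5):
          f = hflcal_flux(r, 0.0, 5e4, 0.15, 0.10, 0.08, 0.05)
          print(f"r = {r} m: {f/1e3:.1f} kW/m^2")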

  13. Anatomical basis of sun compass navigation II: the neuronal composition of the central complex of the monarch butterfly.

    PubMed

    Heinze, Stanley; Florman, Jeremy; Asokaraj, Surainder; El Jundi, Basil; Reppert, Steven M

    2013-02-01

    Each fall, eastern North American monarch butterflies in their northern range undergo a long-distance migration south to their overwintering grounds in Mexico. Migrants use a time-compensated sun compass to determine directionality during the migration. This compass system uses information extracted from sun-derived skylight cues that is compensated for time of day and ultimately transformed into the appropriate motor commands. The central complex (CX) is likely the site of the actual sun compass, because neurons in this brain region are tuned to specific skylight cues. To help illuminate the neural basis of sun compass navigation, we examined the neuronal composition of the CX and its associated brain regions. We generated a standardized version of the sun compass neuropils, providing reference volumes, as well as a common frame of reference for the registration of neuron morphologies. Volumetric comparisons between migratory and nonmigratory monarchs substantiated the proposed involvement of the CX and related brain areas in migratory behavior. Through registration of more than 55 neurons of 34 cell types, we were able to delineate the major input pathways to the CX, output pathways, and intrinsic neurons. Comparison of these neural elements with those of other species, especially the desert locust, revealed a surprising degree of conservation. From these interspecies data, we have established key components of a conserved core network of the CX, likely complemented by species-specific neurons, which together may comprise the neural substrates underlying the computations performed by the CX. Copyright © 2012 Wiley Periodicals, Inc.

  14. SunPy 0.8 - Python for Solar Physics

    NASA Astrophysics Data System (ADS)

    Inglis, Andrew; Bobra, Monica; Christe, Steven; Hewett, Russell; Ireland, Jack; Mumford, Stuart; Martinez Oliveros, Juan Carlos; Perez-Suarez, David; Reardon, Kevin P.; Savage, Sabrina; Shih, Albert Y.; Ryan, Daniel; Sipocz, Brigitta; Freij, Nabil

    2017-08-01

    SunPy is a community-developed open-source software library for solar physics. It is written in Python, a free, cross-platform, general-purpose, high-level programming language that is being increasingly adopted throughout the scientific community. Python is one of the ten most used programming languages; as such, it provides a wide array of software packages, ranging from numerical computation (NumPy, SciPy), machine learning (scikit-learn), and signal processing (scikit-image, statsmodels) to visualization and plotting (matplotlib, mayavi). SunPy aims to provide the software for obtaining and analyzing solar and heliospheric data. This poster introduces a new major release of SunPy (0.8). This release includes two major new functionalities, as well as a number of bug fixes, and is based on 1120 contributions from 34 unique contributors. Fido is the new primary interface for downloading data. It provides a consistent and powerful search interface to all major data sources, including VSO and JSOC, as well as individual data sources such as GOES XRS time series, and is fully pluggable so that new data sources, e.g. DKIST, can be added. In anticipation of Solar Orbiter and the Parker Solar Probe, SunPy now provides a powerful way of representing coordinates, allowing conversion between coordinate systems and the viewpoints of different instruments, including preliminary reprojection capabilities. Other new features include new timeseries capabilities with better support for concatenation and metadata, updated documentation, and an example gallery. SunPy is distributed through pip and conda, and all of its code is publicly available (sunpy.org).
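
    A minimal usage sketch of the Fido interface, assuming SunPy 0.8-era conventions; the time range and instrument are illustrative, and the query requires a network connection.

      import astropy.units as u
      from sunpy.net import Fido, attrs as a

      # one consistent search expression across all registered providers
      result = Fido.search(a.Time("2017-09-06 11:50", "2017-09-06 12:00"),
                           a.Instrument("AIA"),
                           a.Wavelength(171 * u.angstrom))
      print(result)                 # summary table of matching records
      files = Fido.fetch(result)    # download; returns local file paths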

  15. Scalable load balancing for massively parallel distributed Monte Carlo particle transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, M. J.; Brantley, P. S.; Joy, K. I.

    2013-07-01

    In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log(N)), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory. (authors)
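
    The structure of an iterated pairwise scheme can be illustrated with a hypercube pairing, where in round k processor i balances with partner i XOR 2^k. The toy version below just averages scalar workloads (the actual algorithm moves Monte Carlo particles between pairs), reaching the global mean in log2(N) rounds.

      import math

      def balance(workloads):
          n = len(workloads)                   # assume n is a power of two
          w = list(workloads)
          for k in range(int(math.log2(n))):
              for i in range(n):
                  j = i ^ (1 << k)             # hypercube partner in round k
                  if i < j:                    # each pair balances once
                      w[i] = w[j] = (w[i] + w[j]) / 2
              print(f"round {k}: {w}")
          return w

      balance([8, 0, 4, 2, 6, 0, 2, 2])        # -> all 3.0 after 3 rounds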

  16. Performance of a supercharged direct-injection stratified-charge rotary combustion engine

    NASA Technical Reports Server (NTRS)

    Bartrand, Timothy A.; Willis, Edward A.

    1990-01-01

    A zero-dimensional thermodynamic performance computer model for direct-injection stratified-charge rotary combustion engines was modified and run for a single rotor supercharged engine. Operating conditions for the computer runs were a single boost pressure and a matrix of speeds, loads and engine materials. A representative engine map is presented showing the predicted range of efficient operation. After discussion of the engine map, a number of engine features are analyzed individually. These features are: heat transfer and the influence insulating materials have on engine performance and exhaust energy; intake manifold pressure oscillations and interactions with the combustion chamber; and performance losses and seal friction. Finally, code running times and convergence data are presented.

  17. Decrease in Ground-Run Distance of Small Airplanes by Applying Electrically-Driven Wheels

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroshi; Nishizawa, Akira

    A new takeoff method for small airplanes was proposed. The ground-roll performance of an airplane driven by electrically-powered wheels was studied experimentally and computationally. The experiments verified that the ground-run distance was halved by combining the powered wheels with the propeller, without an increase in energy consumption during the ground roll. The computational analysis showed that the ground-run distance of the wheel-driven aircraft was independent of motor power whenever the motor's capability exceeded the friction between the tires and the ground. Furthermore, the distance was minimized when the angle of attack was set so that the wing generated negative lift.
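
    An illustrative ground-roll integration of this idea, with the wheel force taken as motor power over speed capped by tire-ground friction; all vehicle numbers are made up, not the paper's data.

      # simple forward-Euler ground-roll integration for a wheel-driven run
      m, g, mu = 750.0, 9.81, 0.7          # mass [kg], gravity, friction coeff.
      rho, S, CL, CD = 1.225, 15.0, 0.3, 0.05
      T_prop, P_motor, v_lof = 1500.0, 20e3, 28.0  # thrust [N], power [W], liftoff [m/s]

      v, s, dt = 0.1, 0.0, 0.01            # start just above zero speed
      while v < v_lof:
          q = 0.5 * rho * v**2
          normal = max(m * g - q * S * CL, 0.0)    # wheels unload as lift builds
          f_wheel = min(P_motor / v, mu * normal)  # traction-limited motor force
          a = (T_prop + f_wheel - q * S * CD) / m
          v += a * dt
          s += v * dt
      print(f"ground run ~ {s:.0f} m")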

  18. Java: A New Brew for Educators, Administrators and Students.

    ERIC Educational Resources Information Center

    Gordon, Barbara

    1996-01-01

    Java is an object-oriented programming language developed by Sun Microsystems; its benefits include platform independence, security, and interactivity. Within the college community, Java is being used in programming courses, collaborative technology research projects, computer graphics instruction, and distance education. (AEF)

  19. Computational experiments in the optimal slewing of flexible structures

    NASA Technical Reports Server (NTRS)

    Baker, T. E.; Polak, Lucian Elijah

    1989-01-01

    Numerical experiments on the problem of moving a flexible beam are discussed. An optimal control problem is formulated and transcribed into a form which can be solved using semi-infinite optimization techniques. All experiments were carried out on a SUN 3 microcomputer.

  20. Vitamin D

    MedlinePlus

    ... body that produces vitamin D. As many of us spend more and more time on computers and game consoles, we're not outdoors as much as we once were. And, when we do spend time in the sun, more of us are making the wise decision to use sunscreen ...

  1. Have Observatory, Will Travel.

    ERIC Educational Resources Information Center

    White, James C., II

    1996-01-01

    Describes several of the labs developed by Project CLEA (Contemporary Laboratory Experiences in Astronomy). The computer labs cover simulated spectrometer use, investigating the moons of Jupiter, radar measurements, energy flow out of the sun, classifying stellar spectra, photoelectric photometry, Doppler effect, eclipsing binary stars, and lunar…

  2. Multiple running speed signals in medial entorhinal cortex

    PubMed Central

    Hinman, James R.; Brandon, Mark P.; Climer, Jason R.; Chapman, G. William; Hasselmo, Michael E.

    2016-01-01

    Grid cells in medial entorhinal cortex (MEC) can be modeled using oscillatory interference or attractor dynamic mechanisms that perform path integration, a computation requiring information about running direction and speed. The two classes of computational models often use either an oscillatory frequency or a firing rate that increases as a function of running speed. Yet it is currently not known whether these are two manifestations of the same speed signal or dissociable signals with potentially different anatomical substrates. We examined coding of running speed in MEC and identified these two speed signals to be independent of each other within individual neurons. The medial septum (MS) is strongly linked to locomotor behavior and removal of MS input resulted in strengthening of the firing rate speed signal, while decreasing the strength of the oscillatory speed signal. Thus two speed signals are present in MEC that are differentially affected by disrupted MS input. PMID:27427460

  3. Running SINDA '85/FLUINT interactive on the VAX

    NASA Technical Reports Server (NTRS)

    Simmonds, Boris

    1992-01-01

    Computer software tools for engineering are typically run in three modes: Batch, Demand, and Interactive. The first two are the most popular in the SINDA world. The third is not so popular, probably due to users' lack of access to the command procedure files for running SINDA '85, or lack of familiarity with the SINDA '85 execution process (pre-processor, processor, compilation, linking, execution, and all of the file assignments, creations, deletions, and de-assignments). Interactive is the mode that makes thermal analysis with SINDA '85 a real-time design tool. This paper explains a command procedure sufficient (the minimum modifications required in an existing demand command procedure) to run SINDA '85 on the VAX in interactive mode. To exercise the procedure, a sample problem is presented exemplifying the mode, plus additional programming capabilities available in SINDA '85. Following the same guidelines, the process can be extended to other computer platforms that host SINDA '85.

  4. Multi-GPGPU Tsunami simulation at Toyama-bay

    NASA Astrophysics Data System (ADS)

    Furuyama, Shoichi; Ueda, Yuki

    2017-07-01

    Accelerated multi-General Purpose Graphics Processing Unit (GPGPU) calculation of Tsunami run-up was achieved over a wide area (the whole of Toyama-bay in Japan). Toyama-bay has active faults at the sea bed, so there is a high possibility of earthquakes and Tsunami waves in the event of a huge earthquake; predicting the area of Tsunami run-up is therefore important for reducing the damage a disaster inflicts on residents. However, the simulation is a very hard task because of limited computer resources. Resolution on the order of several meters is required for run-up simulation, because artificial structures on the ground such as roads, buildings, and houses are very small, while at the same time a huge simulation area is required. In the Toyama-bay case the area is 42 [km] × 15 [km]; when 5 [m] × 5 [m] computational cells are used, over 26,000,000 cells are generated, and a normal desktop CPU computer took about 10 hours for the calculation. Reducing this calculation time is an important problem for an immediate Tsunami run-up prediction system, which would help protect residents of the coastal region. This study reduced the calculation time by using a multi-GPGPU system equipped with six NVIDIA TESLA K20Xs, with InfiniBand connections between the computer nodes handled by the MVAPICH library. As a result, the calculation ran 5.16 times faster on six GPUs than on one GPU, an 86% parallel efficiency relative to linear speed-up.
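
    The decomposition such a run relies on can be sketched with one strip of the domain per rank (one GPU each in the paper) and halo rows exchanged over MPI each step; the sketch below uses mpi4py and a placeholder stencil rather than the actual shallow-water solver.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      local = np.zeros((102, 300))           # 100 interior rows + 2 halo rows
      up = rank - 1 if rank > 0 else MPI.PROC_NULL
      dn = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      for step in range(100):
          # exchange halos: send first/last interior rows, receive into halos
          comm.Sendrecv(local[1], dest=up, recvbuf=local[-1], source=dn)
          comm.Sendrecv(local[-2], dest=dn, recvbuf=local[0], source=up)
          # placeholder interior update (a diffusion-like stencil), standing
          # in for the per-GPU shallow-water step
          local[1:-1] = 0.25 * (local[:-2] + local[2:]
                                + np.roll(local[1:-1], 1, 1)
                                + np.roll(local[1:-1], -1, 1))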

  5. Reducing Sun Exposure for Prevention of Skin Cancers: Factorial Invariance and Reliability of the Self-Efficacy Scale for Sun Protection

    PubMed Central

    Babbin, Steven F.; Yin, Hui-Qing; Rossi, Joseph S.; Redding, Colleen A.; Paiva, Andrea L.; Velicer, Wayne F.

    2015-01-01

    The Self-Efficacy Scale for Sun Protection consists of two correlated factors with three items each for Sunscreen Use and Avoidance. This study evaluated two crucial psychometric assumptions, factorial invariance and scale reliability, with a sample of adults (N = 1356) participating in a computer-tailored, population-based intervention study. A measure has factorial invariance when the model is the same across subgroups. Three levels of invariance were tested, from least to most restrictive: (1) Configural Invariance (nonzero factor loadings unconstrained); (2) Pattern Identity Invariance (equal factor loadings); and (3) Strong Factorial Invariance (equal factor loadings and measurement errors). Strong Factorial Invariance was a good fit for the model across seven grouping variables: age, education, ethnicity, gender, race, skin tone, and Stage of Change for Sun Protection. Internal consistency coefficient Alpha and factor rho scale reliability, respectively, were .84 and .86 for Sunscreen Use, .68 and .70 for Avoidance, and .78 and .78 for the global (total) scale. The psychometric evidence demonstrates strong empirical support that the scale is consistent, has internal validity, and can be used to assess population-based adult samples. PMID:26457203

  6. Effects of solar radiation on the orbits of small particles

    NASA Technical Reports Server (NTRS)

    Lyttleton, R. A.

    1976-01-01

    A modification of the Robertson (1937) equations of particle motion in the presence of solar radiation is developed which allows for partial reflection of sunlight as a result of rapid and varying particle rotations caused by interaction with the solar wind. The coefficients and forces in earlier forms of the equations are compared with those in the present equations, and secular rates of change of particle orbital elements are determined. Orbital dimensions are calculated in terms of time, probable sizes and densities of meteoric and cometary particles are estimated, and times of infall to the sun are computed for a particle moving in an almost circular orbit and a particle moving in an elliptical orbit of high eccentricity. Changes in orbital elements are also determined for particles from a long-period sun-grazing comet. The results show that the time of infall to the sun from a highly eccentric orbit is substantially shorter than from a circular orbit with a radius equal to the mean distance in the eccentric orbit. The possibility is considered that the free orbital kinetic energy of particles drawn into the sun may be the energy source for the solar corona.

  7. Solar Eclipse Computer API: Planning Ahead for August 2017

    NASA Astrophysics Data System (ADS)

    Bartlett, Jennifer L.; Chizek Frouard, Malynda; Lesniak, Michael V.; Bell, Steve

    2016-01-01

    With the total solar eclipse of 2017 August 21 over the continental United States approaching, the U.S. Naval Observatory (USNO) on-line Solar Eclipse Computer can now be accessed via an application programming interface (API). This flexible interface returns local circumstances for any solar eclipse in JavaScript Object Notation (JSON) that can be incorporated into third-party Web sites or applications. For a given year, it can also return a list of solar eclipses that can be used to build a more specific request for local circumstances. Over the course of a particular eclipse as viewed from a specific site, several events may be visible: the beginning and ending of the eclipse (first and fourth contacts), the beginning and ending of totality (second and third contacts), the moment of maximum eclipse, sunrise, or sunset. For each of these events, the USNO Solar Eclipse Computer reports the time, Sun's altitude and azimuth, and the event's position and vertex angles. The computer also reports the duration of the total phase, the duration of the eclipse, the magnitude of the eclipse, and the percent of the Sun obscured for a particular eclipse site. On-line documentation for using the API-enabled Solar Eclipse Computer, including sample calls, is available (http://aa.usno.navy.mil/data/docs/api.php). The same Web page also describes how to reach the Complete Sun and Moon Data for One Day, Phases of the Moon, Day and Night Across the Earth, and Apparent Disk of a Solar System Object services using API calls. For those who prefer using a traditional data input form, local circumstances can still be requested that way at http://aa.usno.navy.mil/data/docs/SolarEclipses.php. In addition, the 2017 August 21 Solar Eclipse Resource page (http://aa.usno.navy.mil/data/docs/Eclipse2017.php) consolidates all of the USNO resources for this event, including a Google Map view of the eclipse track designed by Her Majesty's Nautical Almanac Office (HMNAO). Looking further ahead, a 2024 April 8 Solar Eclipse Resource page (http://aa.usno.navy.mil/data/docs/Eclipse2024.php) is also available.
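
    A hedged example of calling a JSON eclipse service of this kind from Python; the route and parameter names below are placeholders written in the documented style, not verified endpoints, so the real call format should be taken from http://aa.usno.navy.mil/data/docs/api.php.

      import json
      import urllib.request

      # hypothetical query string and route, for illustration only
      params = "date=8/21/2017&coords=36.17,-86.78&height=180"
      url = f"https://aa.usno.navy.mil/api/eclipses/solar?{params}"

      with urllib.request.urlopen(url) as resp:
          data = json.load(resp)               # parsed JSON response

      # list each local event with its time and solar altitude
      for event in data.get("local_data", []):  # hypothetical key names
          print(event.get("phenomenon"), event.get("time"),
                event.get("altitude"))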

  8. An analysis of running skyline load path.

    Treesearch

    Ward W. Carson; Charles N. Mann

    1971-01-01

    This paper is intended for those who wish to prepare an algorithm to determine the load path of a running skyline. The mathematics of a simplified approach to this running skyline design problem are presented. The approach employs assumptions which reduce the complexity of the problem to the point where it can be solved on desk-top computers of limited capacities. The...

  9. Job Priorities on Peregrine | High-Performance Computing | NREL

    Science.gov Websites

    allocation when run with qos=high. Requesting a Node Reservation: If you are doing work that requires real...scheduler more efficiently plan resources for larger jobs. When projects reach their allocation limit, jobs associated with those projects will run at very low priority, which will ensure that these jobs run only when

  10. Running High-Throughput Jobs on Peregrine | High-Performance Computing |

    Science.gov Websites

    unique name (using "name=") and use the task name to create a unique output file name. For runs on...and how many tasks to give to each worker at a time using the NITRO_COORD_OPTIONS environment variable. Finally, you start Nitro by executing launch_nitro.sh. Sample Nitro job script: To run a job using the

  11. AlgoRun: a Docker-based packaging system for platform-agnostic implemented algorithms.

    PubMed

    Hosny, Abdelrahman; Vera-Licona, Paola; Laubenbacher, Reinhard; Favre, Thibauld

    2016-08-01

    There is a growing need in bioinformatics for easy-to-use software implementations of algorithms that are usable across platforms. At the same time, reproducibility of computational results is critical and often a challenge due to source code changes over time and dependencies. The approach introduced in this paper addresses both of these needs with AlgoRun, a dedicated packaging system for implemented algorithms, using Docker technology. Implemented algorithms, packaged with AlgoRun, can be executed through a user-friendly interface directly from a web browser or via a standardized RESTful web API to allow easy integration into more complex workflows. The packaged algorithm includes the entire software execution environment, thereby eliminating the common problem of software dependencies and the irreproducibility of computations over time. AlgoRun-packaged algorithms can be published on http://algorun.org, a centralized searchable directory to find existing AlgoRun-packaged algorithms. AlgoRun is available at http://algorun.org and the source code under GPL license is available at https://github.com/algorun. Contact: laubenbacher@uchc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
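
    A sketch of driving an AlgoRun-packaged algorithm through its RESTful API; the port, route ("/v1/run"), and payload schema below are assumptions to be checked against the AlgoRun documentation.

      import json
      import urllib.request

      # assumed: the AlgoRun container is running locally on port 8765
      payload = json.dumps({"input": "ACGTACGT"}).encode()
      req = urllib.request.Request("http://localhost:8765/v1/run",
                                   data=payload,
                                   headers={"Content-Type": "application/json"})
      with urllib.request.urlopen(req) as resp:
          print(json.load(resp))       # algorithm output as JSON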

  12. Performance of a Block Structured, Hierarchical Adaptive MeshRefinement Code on the 64k Node IBM BlueGene/L Computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenough, Jeffrey A.; de Supinski, Bronis R.; Yates, Robert K.

    2005-04-25

    We describe the performance of the block-structured Adaptive Mesh Refinement (AMR) code Raptor on the 32k node IBM BlueGene/L computer. This machine represents a significant step forward towards petascale computing. As such, it presents Raptor with many challenges for utilizing the hardware efficiently. In terms of performance, Raptor shows excellent weak and strong scaling when running in single level mode (no adaptivity). Hardware performance monitors show Raptor achieves an aggregate performance of 3.0 Tflops in the main integration kernel on the 32k system. Results from preliminary AMR runs on a prototype astrophysical problem demonstrate the efficiency of the current software when running at large scale. The BG/L system is enabling a physics problem to be considered that represents a factor of 64 increase in overall size compared to the largest ones of this type computed to date. Finally, we provide a description of the development work currently underway to address our inefficiencies.

  13. Some tests of flat plate photovoltaic module cell temperatures in simulated field conditions

    NASA Technical Reports Server (NTRS)

    Griffith, J. S.; Rathod, M. S.; Paslaski, J.

    1981-01-01

    The nominal operating cell temperature (NOCT) of solar photovoltaic (PV) modules is an important characteristic. Typically, the power output of a PV module decreases 0.5% per deg C rise in cell temperature. Several tests were run with artificial sun and wind to study the parametric dependencies of cell temperature on wind speed and direction and ambient temperature. It was found that the cell temperature is extremely sensitive to wind speed, moderately so to wind direction and rather insensitive to ambient temperature. Several suggestions are made to obtain data more typical of field conditions.
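
    A worked example of the quoted rule of thumb, using the standard linear derating P = P_ref*(1 - gamma*(Tc - T_ref)) with illustrative numbers:

      # 0.5%-per-degree power derating above the 25 C reference temperature
      P_ref, T_ref, gamma = 100.0, 25.0, 0.005   # W, deg C, 1/deg C

      for t_cell in (25.0, 48.0, 60.0):          # 48 C is a typical NOCT
          p = P_ref * (1 - gamma * (t_cell - T_ref))
          print(f"Tc = {t_cell:4.1f} C -> {p:5.1f} W")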

  14. The Redox Flow System for solar photovoltaic energy storage

    NASA Technical Reports Server (NTRS)

    Odonnell, P.; Gahn, R. F.; Pfeiffer, W.

    1976-01-01

    The interfacing of a Solar Photovoltaic System and a Redox Flow System for storage was workable. The Redox Flow System, which utilizes the oxidation-reduction capability of two redox couples, in this case iron and titanium, for its storage capacity, gave a relatively constant output regardless of solar activity so that a load could be run continually day and night utilizing the sun's energy. One portion of the system was connected to a bank of solar cells to electrochemically charge the solutions, while a separate part of the system was used to electrochemically discharge the stored energy.

  15. A UNIX-based real-time data acquisition system for microprobe analysis using an advanced X11 window toolkit

    NASA Astrophysics Data System (ADS)

    Kramer, J. L. A. M.; Ullings, A. H.; Vis, R. D.

    1993-05-01

    A real-time data acquisition system for microprobe analysis has been developed at the Free University of Amsterdam. The system is composed of two parts: a front-end real-time and a back-end monitoring system. The front-end consists of a VMEbus based system which reads out a CAMAC crate. The back-end is implemented on a Sun work station running the UNIX operating system. This separation allows the integration of a minimal, and consequently very fast, real-time executive within the sophisticated possibilities of advanced UNIX work stations.

  16. First real-time detection of solar pp neutrinos by Borexino

    NASA Astrophysics Data System (ADS)

    Pallavicini, M.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Caminata, A.; Cavalcante, P.; Chavarria, A.; Chepurnov, A.; D'Angelo, D.; Davini, S.; Derbin, A.; Empl, A.; Etenko, A.; Fomenko, K.; Franco, D.; Gabriele, F.; Galbiati, C.; Gazzana, S.; Ghiano, C.; Giammarchi, M.; Göger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Hungerford, E.; Ianni, Al.; Ianni, An.; Kayser, M.; Kobychev, V.; Korablëv, D.; Korga, G.; Kryn, D.; Laubenstein, M.; Lehnert, B.; Lewke, T.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Manecki, S.; Maneschg, W.; Marcocci, S.; Meindl, Q.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Montuschi, M.; Mosteiro, P.; Muratova, V.; Oberauer, L.; Obolensky, M.; Ortica, F.; Otis, K.; Papp, L.; Perasso, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Rossi, N.; Saldanha, R.; Salvo, C.; Schönert, S.; Simgen, H.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Sukhotin, S.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Vignaud, D.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Winter, J.; Wojcik, M.; Wurm, M.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.

    2016-07-01

    Solar neutrinos have been pivotal to the discovery of neutrino flavour oscillations and are a unique tool to probe the reactions that keep the Sun shining. Although most of the solar neutrino components have been directly measured, the neutrinos emitted by the keystone pp reaction, in which two protons fuse to make a deuteron, had so far eluded direct detection. The Borexino experiment, an ultra-pure liquid scintillator detector running at the Laboratori Nazionali del Gran Sasso in Italy, has now filled the gap, providing the first direct real-time measurement of pp neutrinos and of the solar neutrino luminosity.

  17. Passive Attenuating Communication Earphone (PACE): Noise Attenuation and Speech Intelligibility Performance When Worn in Conjunction with the HGU-56/P and HGU-55/P Flight Helmets

    DTIC Science & Technology

    2013-10-16

    right) eartips The purpose of this study was to integrate the HGU-56/P and HGU-55/P flight helmets with PACE to measure the noise attenuation and...55/P flight helmet integrated with PACE 2.0 METHODS 2.1 Subjects Twenty paid volunteer subjects (9 male, 11 female) participated in the study...Pan Pad Pat Path Pack Pass Buff Bus But Bug Buck Bun Sat Sag Sass Sack Sad Sap Run Sun Bun Gun Fun Nun

  18. ASDIR-II. Volume I. User Manual

    DTIC Science & Technology

    1975-12-01

    normally the most significant part of the overall aircraft IR signature. The radiance is directly dependent upon the geometric view factors, a set...factors as punched card output in a view factor computer run. For the view factor computer run IB49 through 53 and all IDS input A, from IDS-2 to IDS-6...may be excluded from the input string if the program execution is requested to stop after punching the view factors. Inputs required for punching

  19. Feasibility of Virtual Machine and Cloud Computing Technologies for High Performance Computing

    DTIC Science & Technology

    2014-05-01

    Hat Enterprise Linux; SaaS, software as a service; VM, virtual machine; vNUMA, virtual non-uniform memory access; WRF, weather research and forecasting...previously mentioned in Chapter I Section B1 of this paper, which is used to run the weather research and forecasting (WRF) model in their experiments...against a VMware virtualization solution of WRF. The experiment consisted of running WRF in a standard configuration between the D-VTM and VMware while

  20. The Air Force Geophysics Laboratory Standalone Data Acquisition System: A Functional Description.

    DTIC Science & Technology

    1980-10-09

    the board are a buffer for the RUN/HALT front panel switch and a retriggerable oneshot multivibrator. This latter circuit senses the SRUN pulse train...recording on the data tapes, and providing the master timing source for data acquisition. An Electronic Research Company (ERC) model 2446 digital...the computer is fed to a retriggerable oneshot multivibrator on the board. (SRUN consists of a pulse train that is present when the computer is running
