Sample records for distribution versions include:

  1. A multiplet table for Mn I (Adelman, Svatek, Van Winkler, Warren 1989): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.; Adelman, Saul J.

    1989-01-01

    The machine-readable version of the multiplet table, as it is currently being distributed from the Astronomical Data Center, is described. The computerized version of the table contains data on excitation potentials, J values, multiplet terms, intensities of the transitions, and multiplet numbers. Files ordered by multiplet and by wavelength are included in the distributed version.

  2. Integrated Farm System Model Version 4.1 and Dairy Gas Emissions Model Version 3.1 software release and distribution

    USDA-ARS's Scientific Manuscript database

    Animal facilities are significant contributors of gaseous emissions including ammonia (NH3) and nitrous oxide (N2O). Previous versions of the Integrated Farm System Model (IFSM version 4.0) and Dairy Gas Emissions Model (DairyGEM version 3.0), two whole-farm simulation models developed by USDA-ARS, ...

  3. COMPPAP - COMPOSITE PLATE BUCKLING ANALYSIS PROGRAM (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Smith, J. P.

    1994-01-01

    The Composite Plate Buckling Analysis Program (COMPPAP) was written to help engineers determine buckling loads of orthotropic (or isotropic) irregularly shaped plates without requiring hand calculations from design curves or extensive finite element modeling. COMPPAP is a one-element finite element program that utilizes high-order displacement functions. The high order of the displacement functions enables the user to produce results more accurate than traditional h-finite elements. This program uses these high-order displacement functions to perform a plane stress analysis of a general plate, followed by a buckling calculation based on the stresses found in the plane stress solution. The current version assumes a flat plate (constant thickness) subject to a constant edge load (normal or shear) on one or more edges. COMPPAP uses the power method to find the eigenvalues of the buckling problem. The power method provides an efficient solution when only one eigenvalue is desired. Once the eigenvalue is found, the eigenvector, which corresponds to the plate buckling mode shape, results as a by-product. A positive feature of the power method is that the dominant eigenvalue is found first, which in this case is the plate buckling load. The reported eigenvalue expresses a load factor to induce plate buckling.
COMPPAP is written in ANSI FORTRAN 77. Two machine versions are available from COSMIC: a PC version (MSC-22428), which is for IBM PC 386 series and higher computers and compatibles running MS-DOS; and a UNIX version (MSC-22286). The distribution medium for both machine versions includes source code for both single and double precision versions of COMPPAP. The PC version includes source code which has been optimized for implementation within DOS memory constraints, as well as sample executables for both the single and double precision versions of COMPPAP. The double precision versions of COMPPAP have been successfully implemented on an IBM PC 386 compatible running MS-DOS, a Sun4 series computer running SunOS, an HP-9000 series computer running HP-UX, and a CRAY X-MP series computer running UNICOS. COMPPAP requires 1Mb of RAM and the BLAS and LINPACK math libraries, which are included on the distribution medium. The COMPPAP documentation provides instructions for using the commercial post-processing package PATRAN for graphical interpretation of COMPPAP output. The UNIX version includes two electronic versions of the documentation: one in LaTeX format and one in PostScript format. The standard distribution medium for the PC version (MSC-22428) is a 5.25 inch 1.2Mb MS-DOS format diskette. The standard distribution medium for the UNIX version (MSC-22286) is a 0.25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. COMPPAP was developed in 1992.
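
    The power method loop described above is easy to sketch outside of FORTRAN. The following minimal Python/NumPy illustration (an editorial sketch, not COMPPAP code) iterates toward the dominant eigenvalue and its eigenvector, which in the buckling problem play the roles of the load factor and the mode shape:

        import numpy as np

        def power_method(A, tol=1e-10, max_iter=1000):
            """Dominant eigenvalue/eigenvector of A by power iteration."""
            x = np.random.default_rng(0).standard_normal(A.shape[0])
            x /= np.linalg.norm(x)
            eigval = 0.0
            for _ in range(max_iter):
                y = A @ x                   # apply the operator
                eigval_new = x @ y          # Rayleigh quotient estimate
                x = y / np.linalg.norm(y)   # renormalize
                if abs(eigval_new - eigval) < tol:
                    break
                eigval = eigval_new
            return eigval, x

        # Toy example: the dominant eigenvalue of a small symmetric matrix.
        lam, vec = power_method(np.array([[4.0, 1.0], [1.0, 3.0]]))
        print(lam)  # about 4.618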

  4. COMPPAP - COMPOSITE PLATE BUCKLING ANALYSIS PROGRAM (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Smith, J. P.

    1994-01-01

    The Composite Plate Buckling Analysis Program (COMPPAP) was written to help engineers determine buckling loads of orthotropic (or isotropic) irregularly shaped plates without requiring hand calculations from design curves or extensive finite element modeling. COMPPAP is a one-element finite element program that utilizes high-order displacement functions. The high order of the displacement functions enables the user to produce results more accurate than traditional h-finite elements. This program uses these high-order displacement functions to perform a plane stress analysis of a general plate, followed by a buckling calculation based on the stresses found in the plane stress solution. The current version assumes a flat plate (constant thickness) subject to a constant edge load (normal or shear) on one or more edges. COMPPAP uses the power method to find the eigenvalues of the buckling problem. The power method provides an efficient solution when only one eigenvalue is desired. Once the eigenvalue is found, the eigenvector, which corresponds to the plate buckling mode shape, results as a by-product. A positive feature of the power method is that the dominant eigenvalue is found first, which in this case is the plate buckling load. The reported eigenvalue expresses a load factor to induce plate buckling.
COMPPAP is written in ANSI FORTRAN 77. Two machine versions are available from COSMIC: a PC version (MSC-22428), which is for IBM PC 386 series and higher computers and compatibles running MS-DOS; and a UNIX version (MSC-22286). The distribution medium for both machine versions includes source code for both single and double precision versions of COMPPAP. The PC version includes source code which has been optimized for implementation within DOS memory constraints, as well as sample executables for both the single and double precision versions of COMPPAP. The double precision versions of COMPPAP have been successfully implemented on an IBM PC 386 compatible running MS-DOS, a Sun4 series computer running SunOS, an HP-9000 series computer running HP-UX, and a CRAY X-MP series computer running UNICOS. COMPPAP requires 1Mb of RAM and the BLAS and LINPACK math libraries, which are included on the distribution medium. The COMPPAP documentation provides instructions for using the commercial post-processing package PATRAN for graphical interpretation of COMPPAP output. The UNIX version includes two electronic versions of the documentation: one in LaTeX format and one in PostScript format. The standard distribution medium for the PC version (MSC-22428) is a 5.25 inch 1.2Mb MS-DOS format diskette. The standard distribution medium for the UNIX version (MSC-22286) is a 0.25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. COMPPAP was developed in 1992.

  5. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Donnell, B.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb", which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm, including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN, and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs, including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases.
COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform-specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or Microsoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5 or higher and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a 0.25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request.
The CLIPS 6.0 documentation includes a User's Guide and a three-volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in Microsoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and Microsoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.
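
    As an illustration of the rule-based paradigm described above (a rule pairs a situation with actions to perform), here is a minimal forward-chaining sketch in Python. It is a toy stand-in for exposition, not CLIPS itself, and the fact and rule contents are invented for the example:

        # Each rule is (condition, action): if every fact in the condition
        # is present, the action's facts are added. Iterate until no rule
        # fires anything new (a crude analogue of a production system).
        facts = {"engine-hot", "coolant-low"}
        rules = [
            ({"engine-hot", "coolant-low"}, {"possible-leak"}),
            ({"possible-leak"}, {"schedule-inspection"}),
        ]

        changed = True
        while changed:
            changed = False
            for condition, action in rules:
                if condition <= facts and not action <= facts:
                    facts |= action  # fire the rule
                    changed = True

        print(sorted(facts))
        # ['coolant-low', 'engine-hot', 'possible-leak', 'schedule-inspection']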

  6. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Donnell, B.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb", which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm, including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN, and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs, including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases.
COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform-specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or Microsoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5 or higher and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a 0.25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request.
The CLIPS 6.0 documentation includes a User's Guide and a three-volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in Microsoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and Microsoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.

  7. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (MACINTOSH VERSION)

    NASA Technical Reports Server (NTRS)

    Riley, G.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb", which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm, including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN, and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs, including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases.
COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform-specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or Microsoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5 or higher and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a 0.25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request.
The CLIPS 6.0 documentation includes a User's Guide and a three-volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in Microsoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and Microsoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.

  8. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (DEC VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Donnell, B.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb", which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm, including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN, and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs, including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases.
COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform-specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or Microsoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5 or higher and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a 0.25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request.
The CLIPS 6.0 documentation includes a User's Guide and a three-volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in Microsoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and Microsoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.

  9. HYPERDIRE-HYPERgeometric functions DIfferential REduction: Mathematica-based packages for the differential reduction of generalized hypergeometric functions: Lauricella function FC of three variables

    NASA Astrophysics Data System (ADS)

    Bytev, Vladimir V.; Kniehl, Bernd A.

    2016-09-01

    We present a further extension of the HYPERDIRE project, which is devoted to the creation of a set of Mathematica-based program packages for the manipulation of Horn-type hypergeometric functions on the basis of differential equations. Specifically, we present the implementation of the differential reduction for the Lauricella function FC of three variables. Catalogue identifier: AEPP_v4_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEPP_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 243461 No. of bytes in distributed program, including test data, etc.: 61610782 Distribution format: tar.gz Programming language: Mathematica. Computer: All computers running Mathematica. Operating system: Operating systems running Mathematica. Classification: 4.4. Does the new version supersede the previous version?: No, it significantly extends the previous version. Nature of problem: Reduction of the hypergeometric function FC of three variables to a set of basis functions. Solution method: Differential reduction. Reasons for new version: The extension package allows the user to handle the Lauricella function FC of three variables. Summary of revisions: The previous version remains unchanged. Running time: Depends on the complexity of the problem.
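
    The "differential reduction" named here rests on contiguous relations realized by differential operators. A standard one-variable example (for the Gauss function, far simpler than the three-variable Lauricella FC the package handles, and given only to fix ideas) is, in LaTeX notation,

        (\theta + a)\, {}_2F_1(a, b; c; z) = a\, {}_2F_1(a+1, b; c; z),
        \qquad \theta \equiv z \frac{d}{dz},

    so shifting a parameter by one is equivalent to applying a first-order differential operator; HYPERDIRE automates multivariable relations of this kind to reduce a given function to a small basis set.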

  10. MISR UAE2 Aerosol Versioning

    Atmospheric Science Data Center

    2013-03-21

    ... The "Beta" designation means particle microphysical property validation is in progress, uncertainty envelopes on particle size distribution, ... UAE-2 campaign activities are part of the validation process, so two versions of the MISR aerosol products are included in this ...

  11. THE U.S. ENVIRONMENTAL PROTECTION AGENCY VERSION OF POSITIVE MATRIX FACTORIZATION

    EPA Science Inventory

    The abstract describes some of the special features of the EPA's version of Positive Matrix Factorization that is freely distributed. Features include descriptions of the Graphical User Interface, an approach for estimating errors in the modeled solutions, and future development...
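
    For reference, Positive Matrix Factorization approximates the data matrix by two nonnegative factor matrices while weighting the residuals by the measurement uncertainties; in the standard formulation (Paatero and Tapper), in LaTeX notation,

        X \approx G F, \quad g_{ik} \ge 0, \; f_{kj} \ge 0, \qquad
        \min_{G, F} \; Q = \sum_{i,j} \left( \frac{x_{ij} - \sum_k g_{ik} f_{kj}}{s_{ij}} \right)^2,

    where the s_{ij} are per-observation uncertainty estimates; the error-estimation approach mentioned in the abstract concerns the uncertainty of the fitted solutions G and F.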

  12. A distributed version of the NASA Engine Performance Program

    NASA Technical Reports Server (NTRS)

    Cours, Jeffrey T.; Curlett, Brian P.

    1993-01-01

    Distributed NEPP, a version of the NASA Engine Performance Program, uses the original NEPP code but executes it in a distributed computing environment. Multiple workstations connected by a network increase the program's speed and, more importantly, the complexity of the cases it can handle in a reasonable time. Distributed NEPP uses the public-domain software package Parallel Virtual Machine (PVM), allowing it to execute on clusters of machines containing many different architectures. It includes the capability to link with other computers, allowing them to process NEPP jobs in parallel. This paper discusses the design issues and granularity considerations that entered into programming Distributed NEPP and presents the results of timing runs.
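
    The master/worker pattern that Distributed NEPP implements over Parallel Virtual Machine can be sketched with Python's standard multiprocessing module (a hedged analogy, not the original FORTRAN/PVM code; run_case is a hypothetical stand-in for one engine-cycle evaluation):

        from multiprocessing import Pool

        def run_case(params):
            # Placeholder for one independent engine performance case.
            throttle, altitude = params
            return throttle * 0.9 + altitude * 1.0e-4  # dummy result

        if __name__ == "__main__":
            cases = [(t, alt) for t in (0.6, 0.8, 1.0)
                              for alt in (0, 5000, 10000)]
            with Pool(processes=4) as pool:
                # One case per task; how much work to put in each task is
                # the kind of granularity consideration the paper discusses.
                results = pool.map(run_case, cases)
            print(results)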

  13. Dr TIM: Ray-tracer TIM, with additional specialist scientific capabilities

    NASA Astrophysics Data System (ADS)

    Oxburgh, Stephen; Tyc, Tomáš; Courtial, Johannes

    2014-03-01

    We describe several extensions to TIM, a raytracing program for ray-optics research. These include relativistic raytracing; simulation of the external appearance of Eaton lenses, Luneburg lenses and generalised focusing gradient-index (GGRIN) lenses, which are types of perfect imaging devices; raytracing through interfaces between spaces with different optical metrics; and refraction with generalised confocal lenslet arrays, which are particularly versatile METATOYs. Catalogue identifier: AEKY_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKY_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License No. of lines in distributed program, including test data, etc.: 106905 No. of bytes in distributed program, including test data, etc.: 6327715 Distribution format: tar.gz Programming language: Java. Computer: Any computer capable of running the Java Virtual Machine (JVM) 1.6. Operating system: Any, developed under Mac OS X Version 10.6 and 10.8.3. RAM: Typically 130 MB (interactive version running under Mac OS X Version 10.8.3) Classification: 14, 18. Catalogue identifier of previous version: AEKY_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 711 External routines: JAMA [1] (source code included) Does the new version supersede the previous version?: Yes Nature of problem: Visualisation of scenes that include scene objects that create wave-optically forbidden light-ray fields. Solution method: Ray tracing. Reasons for new version: Significant extension of the capabilities (see Summary of revisions), as demanded by our research. Summary of revisions: Added capabilities include the simulation of different types of camera moving at relativistic speeds relative to the scene; visualisation of the external appearance of generalised focusing gradient-index (GGRIN) lenses, including Maxwell fisheye, Eaton and Luneburg lenses; calculation of refraction at the interface between spaces with different optical metrics; and handling of generalised confocal lenslet arrays (gCLAs), a new type of METATOY. Unusual features: Specifically designed to visualise wave-optically forbidden light-ray fields; can visualise ray trajectories and geometric optic transformations; can simulate photos taken with different types of camera moving at relativistic speeds, interfaces between spaces with different optical metrics, the view through METATOYs and generalised focusing gradient-index lenses; can create anaglyphs (for viewing with coloured "3D glasses"), HDMI-1.4a standard 3D images, and random-dot autostereograms of the scene; integrable into web pages. Running time: Problem-dependent; typically seconds for a simple scene. References: [1] JAMA: A Java Matrix Package, http://math.nist.gov/javanumerics/jama/

  14. The Invar tensor package: Differential invariants of Riemann

    NASA Astrophysics Data System (ADS)

    Martín-García, J. M.; Yllanes, D.; Portugal, R.

    2008-10-01

    The long-standing problem of the relations among the scalar invariants of the Riemann tensor is computationally solved for all 6·10^23 objects with up to 12 derivatives of the metric. This covers cases ranging from products of up to 6 undifferentiated Riemann tensors to cases with up to 10 covariant derivatives of a single Riemann. We extend our computer algebra system Invar to produce within seconds a canonical form for any of those objects in terms of a basis. The process is as follows: (1) an invariant is converted in real time into a canonical form with respect to the permutation symmetries of the Riemann tensor; (2) Invar reads a database of more than 6·10^5 relations and applies those coming from the cyclic symmetry of the Riemann tensor; (3) then applies the relations coming from the Bianchi identity, (4) the relations coming from commutations of covariant derivatives, (5) the dimensionally-dependent identities for dimension 4, and finally (6) simplifies invariants that can be expressed as products of dual invariants. Invar runs on top of the tensor computer algebra systems xTensor (for Mathematica) and Canon (for Maple). Program summary Program title: Invar Tensor Package v2.0 Catalogue identifier: ADZK_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZK_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3 243 249 No. of bytes in distributed program, including test data, etc.: 939 Distribution format: tar.gz Programming language: Mathematica and Maple. Computer: Any computer running Mathematica versions 5.0 to 6.0 or Maple versions 9 and 11. Operating system: Linux, Unix, Windows XP, MacOS. RAM: 100 Mb Word size: 64 or 32 bits Supplementary material: The new database of relations is much larger than that for the previous version and therefore has not been included in the distribution. To obtain the Mathematica and Maple database files click on this link. Classification: 1.5, 5 Does the new version supersede the previous version?: Yes. The previous version (1.0) only handled algebraic invariants. The current version (2.0) has been extended to cover differential invariants as well. Nature of problem: Manipulation and simplification of scalar polynomial expressions formed from the Riemann tensor and its covariant derivatives. Solution method: Algorithms of computational group theory to simplify expressions with tensors that obey permutation symmetries. Tables of syzygies of the scalar invariants of the Riemann tensor. Reasons for new version: With this new version, the user can manipulate differential invariants of the Riemann tensor. Differential invariants are required in many physical problems in classical and quantum gravity. Summary of revisions: The database of syzygies has been expanded by a factor of 30. New commands were added in order to deal with the enlarged database and to manipulate the covariant derivative. Restrictions: The present version only handles scalars, and not expressions with free indices. Additional comments: The distribution file for this program is over 53 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: One second to fully reduce any monomial of the Riemann tensor up to degree 7 or order 10 in terms of independent invariants.
The Mathematica notebook included in the distribution takes approximately 5 minutes to run.
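
    The identities applied in steps (2)-(4) above are the standard Riemann tensor identities; in index notation (signs and index placement depend on curvature conventions) they read, in LaTeX,

        R_{a[bcd]} = 0 \quad \text{(cyclic symmetry, first Bianchi identity)}, \qquad
        \nabla_{[e} R_{ab]cd} = 0 \quad \text{(second Bianchi identity)}, \qquad
        [\nabla_a, \nabla_b]\, v_c = -R_{abc}{}^{d}\, v_d \quad \text{(commutation of covariant derivatives)}.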

  15. Size-biased distributions in the generalized beta distribution family, with applications to forestry

    Treesearch

    Mark J. Ducey; Jeffrey H. Gove

    2015-01-01

    Size-biased distributions arise in many forestry applications, as well as other environmental, econometric, and biomedical sampling problems. We examine the size-biased versions of the generalized beta of the first kind, generalized beta of the second kind and generalized gamma distributions. These distributions include, as special cases, the Dagum (Burr Type III),...
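
    For a density f with mean mu, the size-biased version reweights each value in proportion to its size; in LaTeX notation,

        f^{*}(x) = \frac{x\, f(x)}{\mu}, \qquad \mu = \int x\, f(x)\, dx,

    and more generally the size-biased distribution of order alpha weights by x^alpha and normalizes by E[X^alpha]. The paper examines these weighted forms for the generalized beta and generalized gamma families.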

  16. QuTiP 2: A Python framework for the dynamics of open quantum systems

    NASA Astrophysics Data System (ADS)

    Johansson, J. R.; Nation, P. D.; Nori, Franco

    2013-04-01

    We present version 2 of QuTiP, the Quantum Toolbox in Python. Compared to the preceding version [J.R. Johansson, P.D. Nation, F. Nori, Comput. Phys. Commun. 183 (2012) 1760.], we have introduced numerous new features, enhanced performance, and made changes in the Application Programming Interface (API) for improved functionality and consistency within the package, as well as increased compatibility with existing conventions used in other scientific software packages for Python. The most significant new features include efficient solvers for arbitrary time-dependent Hamiltonians and collapse operators, support for the Floquet formalism, and new solvers for Bloch-Redfield and Floquet-Markov master equations. Here we introduce these new features, demonstrate their use, and give a summary of the important backward-incompatible API changes introduced in this version. Catalog identifier: AEMB_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMB_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 33625 No. of bytes in distributed program, including test data, etc.: 410064 Distribution format: tar.gz Programming language: Python. Computer: i386, x86-64. Operating system: Linux, Mac OSX. RAM: 2+ Gigabytes Classification: 7. External routines: NumPy, SciPy, Matplotlib, Cython Catalog identifier of previous version: AEMB_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 1760 Does the new version supersede the previous version?: Yes Nature of problem: Dynamics of open quantum systems Solution method: Numerical solutions to Lindblad, Floquet-Markov, and Bloch-Redfield master equations, as well as the Monte Carlo wave function method. Reasons for new version: Compared to the preceding version we have introduced numerous new features, enhanced performance, and made changes in the Application Programming Interface (API) for improved functionality and consistency within the package, as well as increased compatibility with existing conventions used in other scientific software packages for Python. The most significant new features include efficient solvers for arbitrary time-dependent Hamiltonians and collapse operators, support for the Floquet formalism, and new solvers for Bloch-Redfield and Floquet-Markov master equations. Restrictions: Problems must meet the criteria for using the master equation in Lindblad, Floquet-Markov, or Bloch-Redfield form. Running time: A few seconds up to several tens of hours, depending on size of the underlying Hilbert space.
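
    A minimal usage sketch (assuming QuTiP 2 and its NumPy/SciPy dependencies are installed): Lindblad master-equation evolution of a driven, decaying qubit with the mesolve solver, in the spirit of the package's introductory examples.

        import numpy as np
        from qutip import basis, sigmax, sigmam, sigmaz, mesolve

        H = 2 * np.pi * 0.1 * sigmax()        # qubit drive Hamiltonian
        psi0 = basis(2, 0)                    # initial state |0>
        times = np.linspace(0.0, 10.0, 100)
        c_ops = [np.sqrt(0.05) * sigmam()]    # collapse operator: relaxation
        result = mesolve(H, psi0, times, c_ops, [sigmaz()])
        print(result.expect[0][:5])           # <sigma_z> at the first times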

  17. CWG - MUTUAL COUPLING PROGRAM FOR CIRCULAR WAVEGUIDE-FED APERTURE ARRAY (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Bailey, M. C.

    1994-01-01

    Mutual Coupling Program for Circular Waveguide-fed Aperture Array (CWG) was developed to calculate the electromagnetic interaction between elements of an antenna array of circular apertures with specified aperture field distributions. The field distributions were assumed to be a superposition of the modes which could exist in a circular waveguide. Various external media were included to provide flexibility of use, for example, to determine the effects of dielectric covers (i.e., thermal protection system tiles) upon the impedance of aperture-type antennas. The impedance and radiation characteristics of planar array antennas depend upon the mutual interaction between all the elements of the array. These interactions are influenced by several parameters (e.g., the array grid geometry, the geometry and excitation of each array element, the medium outside the array, and the internal network feeding the array). For the class of array antenna whose radiating elements consist of small holes in a flat conducting plate, the electromagnetic problem can be divided into two parts, the internal and the external. In solving the external problem for an array of circular apertures, CWG will compute the mutual interaction between various combinations of circular modal distributions and apertures. CWG computes the mutual coupling between various modes assumed to exist in circular apertures that are located in a flat conducting plane of infinite dimensions. The apertures can radiate into free space, a homogeneous medium, a multilayered region, or a reflecting surface. These apertures are assumed to be excited by one or more modes corresponding to the modal distributions in circular waveguides of the same cross sections as the apertures. The apertures may be of different sizes and also of different polarizations. However, the program assumes that each aperture field contains the same modal distributions, and calculates the complex scattering matrix between all mode and aperture combinations. The scattering matrix can then be used to determine the complex modal field amplitudes for each aperture with a specified array excitation.
CWG is written in VAX FORTRAN for DEC VAX series computers running VMS (LAR-15236) and IBM PC series and compatible computers running MS-DOS (LAR-15226). It requires 360K of RAM for execution. To compile the source code for the PC version, the NDP Fortran compiler and linker are required; however, the distribution medium for the PC version of CWG includes a sample MS-DOS executable which was created using NDP Fortran with the -vms compiler option. The standard distribution medium for the PC version of CWG is a 3.5 inch 1.44Mb MS-DOS format diskette. The standard distribution medium for the VAX version of CWG is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. The VAX version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. Both machine versions of CWG include an electronic version of the documentation in Microsoft Word for Windows format. CWG was developed in 1993 and is a copyrighted work with all copyright vested in NASA.
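
    The final scattering-matrix step is a plain linear relation: given the complex scattering matrix S over all mode/aperture combinations and a vector a of incident modal amplitudes (the specified array excitation), the coupled modal amplitudes follow as b = S a. A small NumPy illustration with invented placeholder numbers (not CWG output):

        import numpy as np

        # Hypothetical 3 mode/aperture combinations.
        S = np.array([[0.20 + 0.10j, 0.05j,         0.01        ],
                      [0.05j,        0.30 - 0.20j,  0.02 + 0.01j],
                      [0.01,         0.02 + 0.01j,  0.25        ]])
        a = np.array([1.0, 0.0, 0.0])  # excite the first mode only
        b = S @ a                      # coupled modal amplitudes
        print(b)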

  18. CWG - MUTUAL COUPLING PROGRAM FOR CIRCULAR WAVEGUIDE-FED APERTURE ARRAY (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Bailey, M. C.

    1994-01-01

    Mutual Coupling Program for Circular Waveguide-fed Aperture Array (CWG) was developed to calculate the electromagnetic interaction between elements of an antenna array of circular apertures with specified aperture field distributions. The field distributions were assumed to be a superposition of the modes which could exist in a circular waveguide. Various external media were included to provide flexibility of use, for example, to determine the effects of dielectric covers (i.e., thermal protection system tiles) upon the impedance of aperture-type antennas. The impedance and radiation characteristics of planar array antennas depend upon the mutual interaction between all the elements of the array. These interactions are influenced by several parameters (e.g., the array grid geometry, the geometry and excitation of each array element, the medium outside the array, and the internal network feeding the array). For the class of array antenna whose radiating elements consist of small holes in a flat conducting plate, the electromagnetic problem can be divided into two parts, the internal and the external. In solving the external problem for an array of circular apertures, CWG will compute the mutual interaction between various combinations of circular modal distributions and apertures. CWG computes the mutual coupling between various modes assumed to exist in circular apertures that are located in a flat conducting plane of infinite dimensions. The apertures can radiate into free space, a homogeneous medium, a multilayered region, or a reflecting surface. These apertures are assumed to be excited by one or more modes corresponding to the modal distributions in circular waveguides of the same cross sections as the apertures. The apertures may be of different sizes and also of different polarizations. However, the program assumes that each aperture field contains the same modal distributions, and calculates the complex scattering matrix between all mode and aperture combinations. The scattering matrix can then be used to determine the complex modal field amplitudes for each aperture with a specified array excitation.
CWG is written in VAX FORTRAN for DEC VAX series computers running VMS (LAR-15236) and IBM PC series and compatible computers running MS-DOS (LAR-15226). It requires 360K of RAM for execution. To compile the source code for the PC version, the NDP Fortran compiler and linker are required; however, the distribution medium for the PC version of CWG includes a sample MS-DOS executable which was created using NDP Fortran with the -vms compiler option. The standard distribution medium for the PC version of CWG is a 3.5 inch 1.44Mb MS-DOS format diskette. The standard distribution medium for the VAX version of CWG is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. The VAX version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. Both machine versions of CWG include an electronic version of the documentation in Microsoft Word for Windows format. CWG was developed in 1993 and is a copyrighted work with all copyright vested in NASA.

  19. Loci-STREAM Version 0.9

    NASA Technical Reports Server (NTRS)

    Wright, Jeffrey; Thakur, Siddharth

    2006-01-01

    Loci-STREAM is an evolving computational fluid dynamics (CFD) software tool for simulating possibly chemically reacting, possibly unsteady flows in diverse settings, including rocket engines, turbomachines, and oil refineries. Loci-STREAM implements a pressure-based flow-solving algorithm that utilizes unstructured grids. (The benefit of low memory usage by pressure-based algorithms is well recognized by experts in the field.) The algorithm is robust for flows at all speeds from zero to hypersonic. The flexibility of arbitrary polyhedral grids enables accurate, efficient simulation of flows in complex geometries, including those of plume-impingement problems. The present version - Loci-STREAM version 0.9 - includes an interface with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library for access to enhanced linear-equation-solving programs therein that accelerate convergence toward a solution. The name "Loci" reflects the creation of this software within the Loci computational framework, which was developed at Mississippi State University for the primary purpose of simplifying the writing of complex multidisciplinary application programs to run in distributed-memory computing environments, including clusters of personal computers. Loci has been designed to relieve application programmers of the details of programming for distributed-memory computers.
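
    To illustrate the role of the PETSc interface, a minimal sketch using scipy in place of PETSc; the matrix is a toy 1D Poisson stand-in for a pressure-correction system, not Loci-STREAM's own discretization:

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # Pressure-based algorithms repeatedly solve large sparse linear systems.
      # A preconditioned Krylov solver (here GMRES with ILU) accelerates
      # convergence, which is the service PETSc supplies to Loci-STREAM.
      n = 100
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
      b = np.ones(n)

      ilu = spla.spilu(A.tocsc())                        # incomplete LU factorization
      M = spla.LinearOperator((n, n), matvec=ilu.solve)  # used as a preconditioner

      x, info = spla.gmres(A, b, M=M)
      print("converged" if info == 0 else f"gmres info={info}")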

  20. GR@PPA 2.8: Initial-state jet matching for weak-boson production processes at hadron collisions

    NASA Astrophysics Data System (ADS)

    Odaka, Shigeru; Kurihara, Yoshimasa

    2012-04-01

    The initial-state jet matching method introduced in our previous studies has been applied to the event generation of single W and Z production processes and diboson (WW, WZ and ZZ) production processes at hadron collisions in the framework of the GR@PPA event generator. The generated events reproduce the transverse momentum spectra of weak bosons continuously in the entire kinematical region. The matrix elements (ME) for hard interactions are still at the tree level. As in previous versions, the decays of weak bosons are included in the matrix elements. Therefore, spin correlations and phase-space effects in the decay of weak bosons are exact at the tree level. The program package includes custom-made parton shower programs as well as ME-based hard interaction generators in order to achieve self-consistent jet matching. The generated events can be passed to general-purpose event generators to make the simulation proceed down to the hadron level. Catalogue identifier: ADRH_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADRH_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 112 146 No. of bytes in distributed program, including test data, etc.: 596 667 Distribution format: tar.gz Programming language: Fortran; with some included libraries coded in C and C++ Computer: All Operating system: Any UNIX-like system RAM: 1.6 Mbytes at minimum Classification: 11.2 Catalogue identifier of previous version: ADRH_v2_0 Journal reference of previous version: Comput. Phys. Comm. 175 (2006) 665 External routines: Bash and Perl for the setup, and CERNLIB, ROOT, LHAPDF, PYTHIA according to the user's choice. Does the new version supersede the previous version?: No, this version supports only a part of the processes included in the previous versions. Nature of problem: Processes including 0 jets and 1 jet in the matrix elements need to be combined using an appropriate matching method in order to simulate weak-boson production processes in the entire kinematical region. Solution method: The leading logarithmic components to be included in parton distribution functions and parton showers are subtracted from 1-jet matrix elements. Custom-made parton shower programs are provided to ensure satisfactory performance of the matching method. Reasons for new version: An initial-state jet matching method has been implemented. Summary of revisions: Weak-boson production processes associated with 0 jets and 1 jet can be consistently merged using the matching method. Restrictions: The built-in parton showers are not compatible with the PYTHIA new PS and the HERWIG PS. Unusual features: A large number of particles may be produced by the parton showers and passed to general-purpose event generators. Running time: About 10 min for initialization plus 25 s for every 1k-event generation for W production under LHC conditions, on a 3.0-GHz Intel Xeon processor with the default setting.

  1. The Effect of Different Textual Narrations on Students' Explanations at the Submicroscopic Level in Chemistry

    ERIC Educational Resources Information Center

    Al-Balushi, Sulaiman M.

    2013-01-01

    The effect of different textual versions (macroscopic (control), submicroscopic, and guided imagery) of the explanation of a chemical phenomenon on students' submicroscopic explanation of a related phenomenon was examined. The sample included 152 pre-service science teachers. The three textual versions of the explanation were distributed randomly…

  2. GENASIS Basics: Object-oriented utilitarian functionality for large-scale physics simulations (Version 2)

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2017-05-01

    GenASiS Basics provides Fortran 2003 classes furnishing extensible object-oriented utilitarian functionality for large-scale physics simulations on distributed memory supercomputers. This functionality includes physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. This revision - Version 2 of Basics - makes mostly minor additions to functionality and includes some simplifying name changes.

  3. JavaGenes Molecular Evolution

    NASA Technical Reports Server (NTRS)

    Lohn, Jason; Smith, David; Frank, Jeremy; Globus, Al; Crawford, James

    2007-01-01

    JavaGenes is a general-purpose, evolutionary software system written in Java. It implements several versions of a genetic algorithm, simulated annealing, stochastic hill climbing, and other search techniques. This software has been used to evolve molecules, atomic force field parameters, digital circuits, Earth Observing Satellite schedules, and antennas. This version differs from version 0.7.28 in that it includes the molecule evolution code and other improvements. Except for the antenna code, JavaGenes is available for NASA Open Source distribution.
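
    For orientation, a toy genetic algorithm in Python (JavaGenes itself is written in Java; the bit-counting fitness function and all parameters below are illustrative only):

      import random

      # Toy GA: evolve 32-bit strings toward all ones via tournament
      # selection, one-point crossover, and per-bit mutation.
      GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 32, 40, 60, 0.02

      def fitness(genome):
          return sum(genome)                     # count of 1-bits

      def mutate(genome):
          return [bit ^ (random.random() < MUT_RATE) for bit in genome]

      def crossover(a, b):
          cut = random.randrange(1, GENOME_LEN)  # one-point crossover
          return a[:cut] + b[cut:]

      pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
             for _ in range(POP_SIZE)]
      for _ in range(GENERATIONS):
          def select():                          # binary tournament
              return max(random.sample(pop, 2), key=fitness)
          pop = [mutate(crossover(select(), select())) for _ in range(POP_SIZE)]

      print(max(fitness(g) for g in pop))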

  4. Modular reweighting software for statistical mechanical analysis of biased equilibrium data

    NASA Astrophysics Data System (ADS)

    Sindhikara, Daniel J.

    2012-07-01

    Here a simple, useful, modular approach and software suite designed for statistical reweighting and analysis of equilibrium ensembles is presented. Statistical reweighting is useful and sometimes necessary for analysis of equilibrium enhanced sampling methods, such as umbrella sampling or replica exchange, and also in experimental cases where biasing factors are explicitly known. Essentially, statistical reweighting allows extrapolation of data from one or more equilibrium ensembles to another. Here, the fundamental separable steps of statistical reweighting are broken up into modules - allowing for application to the general case and avoiding the black-box nature of some “all-inclusive” reweighting programs. Additionally, the included programs are, by design, written with few dependencies. The compilers required are either pre-installed on most systems or freely available for download with minimal trouble. Examples of the use of this suite applied to umbrella sampling and replica exchange molecular dynamics simulations will be shown along with advice on how to apply it in the general case. New version program summaryProgram title: Modular reweighting version 2 Catalogue identifier: AEJH_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 179 118 No. of bytes in distributed program, including test data, etc.: 8 518 178 Distribution format: tar.gz Programming language: C++, Python 2.6+, Perl 5+ Computer: Any Operating system: Any RAM: 50-500 MB Supplementary material: An updated version of the original manuscript (Comput. Phys. Commun. 182 (2011) 2227) is available Classification: 4.13 Catalogue identifier of previous version: AEJH_v1_0 Journal reference of previous version: Comput. Phys. Commun. 182 (2011) 2227 Does the new version supersede the previous version?: Yes Nature of problem: While equilibrium reweighting is ubiquitous, there are no public programs available to perform the reweighting in the general case. Further, specific programs often suffer from many library dependencies and numerical instability. Solution method: This package is written in a modular format that allows for easy applicability of reweighting in the general case. Modules are small, numerically stable, and require minimal libraries. Reasons for new version: Some minor bugs fixed, some upgrades needed, error analysis added. analyzeweight.py/analyzeweight.py2 has been replaced by “multihist.py”. This new program performs all the functions of its predecessor while being versatile enough to handle other types of histograms and probability analysis. “bootstrap.py” was added. This script performs basic bootstrap resampling allowing for error analysis of data. “avg_dev_distribution.py” was added. This program computes the averages and standard deviations of multiple distributions, making error analysis (e.g. from bootstrap resampling) easier to visualize. WRE.cpp was slightly modified purely for cosmetic reasons. The manual was updated for clarity and to reflect version updates. Examples were removed from the manual in favor of online tutorials (packaged examples remain). Examples were updated to reflect the new format. An additional example is included to demonstrate error analysis. Running time: Preprocessing scripts 1-5 minutes, WHAM engine <1 minute, postprocess script ∼1-5 minutes.
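
    The core idea reduces to a few lines of numpy; the sketch below reweights samples from a single harmonic umbrella back to the unbiased ensemble (a toy model with a known analytic biased distribution, not the suite's own modules):

      import numpy as np

      # Toy model: true potential U0(x) = x^2, umbrella bias
      # U_b(x) = (k/2)(x - x0)^2, energies in units of kT.
      kT, x0, k_spring = 1.0, 0.5, 2.0
      rng = np.random.default_rng(1)

      # The biased ensemble exp(-(U0 + U_b)/kT) is Gaussian here, so we can
      # draw "simulation" samples directly.
      prec = (2.0 + k_spring) / kT
      mean = k_spring * x0 / (2.0 + k_spring)
      x = rng.normal(mean, np.sqrt(1.0 / prec), size=200_000)

      # Reweight each sample by exp(+U_b/kT) to remove the known bias.
      u_bias = 0.5 * k_spring * (x - x0) ** 2
      w = np.exp((u_bias - u_bias.max()) / kT)   # constant shift cancels below
      w /= w.sum()

      # Unbiased estimate of <x> under U0 alone; the exact answer is 0.
      print((w * x).sum())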

  5. Space station automation of common module power management and distribution, volume 2

    NASA Technical Reports Server (NTRS)

    Ashworth, B.; Riedesel, J.; Myers, C.; Jakstas, L.; Smith, D.

    1990-01-01

    The new Space Station Module Power Management and Distribution System (SSM/PMAD) testbed automation system is described. The subjects discussed include testbed 120 volt dc star bus configuration and operation, SSM/PMAD automation system architecture, fault recovery and management expert system (FRAMES) rules English representation, the SSM/PMAD user interface, and the SSM/PMAD future direction. Several appendices are presented and include the following: SSM/PMAD interface user manual version 1.0, SSM/PMAD lowest level processor (LLP) reference, SSM/PMAD technical reference version 1.0, SSM/PMAD LLP visual control logic representations (VCLRs), SSM/PMAD LLP/FRAMES interface control document (ICD), and SSM/PMAD LLP switchgear interface controller (SIC) ICD.

  6. Modeled and Observed Altitude Distributions of the Micrometeoroid Influx in Radar Detection

    NASA Astrophysics Data System (ADS)

    Swarnalingam, N.; Janches, D.; Plane, J. M. C.; Carrillo-Sánchez, J. D.; Sternovsky, Z.; Pokorny, P.; Nesvorny, D.

    2017-12-01

    The altitude distributions of the micrometeoroids are a representation of the radar response function of the incoming flux and thus can be utilized to calibrate radar measurements. These, in turn, can be used to determine the rate of ablation and ionization of the meteoroids and ultimately the input flux. During the ablation process, electrons are created, and subsequently these electrons produce backscatter signals when they encounter the transmitted signals from the radar. In this work, we investigate the altitude distribution by exploring different sizes as well as the aspect sensitivity of the meteor head echoes. We apply an updated version of the Chemical Ablation Model (CABMOD), which includes results from laboratory simulation of meteor ablation for different metallic constituents. In particular, the updated version simulates the ablation of Na. In the updated version, electrons are produced over a wider altitude range, with peak production occurring at lower altitudes compared to the previous version. The results are compared to head echo meteor observations utilizing the Arecibo 430 MHz radar.

  7. GEMPAK 5.1 - A GENERAL METEOROLOGICAL PACKAGE (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Desjardins, M. L.

    1994-01-01

    GEMPAK is a general meteorological software package developed at NASA/Goddard Space Flight Center. It includes programs to analyze and display surface, upper-air, and gridded data, including model output. There are very general programs to list, edit, and plot data on maps, to display profiles and time series, to draw and fill contours, to draw streamlines, to plot symbols for clouds, sky cover, and pressure tendency, and to draw cross sections in the case of gridded and sounding data. In addition, there are Barnes objective analysis programs to grid surface and upper-air data. The programs include the capabilities to derive meteorological parameters from those found in the dataset, to perform vertical interpolations of sounding data to different coordinate systems, and to compute an extensive set of gridded diagnostic quantities by specifying various nested combinations of scalar and vector arithmetic, algebraic, and differential operators. The GEMPAK 5.1 graphics/transformation subsystem, GEMPLT, provides device-independent graphics. GEMPLT also has the capability to display output in a variety of map projections or overlaid on satellite imagery. GEMPAK 5.1 is written in FORTRAN 77 and C-language and has been implemented on VAX computers under VMS and on computers running the UNIX operating system. During installation and normal use, this package occupies approximately 100Mb of hard disk space. The UNIX version of GEMPAK includes drivers for several graphic output systems including MIT's X Window System (X11,R4), Sun GKS, PostScript (color and monochrome), Silicon Graphics, and others. The VMS version of GEMPAK also includes drivers for several graphic output systems including PostScript (color and monochrome). The VMS version is delivered with the object code for the Transportable Applications Environment (TAE) program, version 4.1, which serves as a user interface. A color monitor is recommended for displaying maps on video display devices. Data for rendering regional maps are included with this package. The standard distribution medium for the UNIX version of GEMPAK 5.1 is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the VMS version of GEMPAK 5.1 is a 6250 BPI 9-track magnetic tape in DEC VAX BACKUP format. The VMS version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. This program was developed in 1985. The current version, GEMPAK 5.1, was released in 1992. The package is delivered with source code. An extensive collection of subroutine libraries allows users to format data for use by GEMPAK, to develop new programs, and to enhance existing ones.
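
    For orientation, a single-pass Barnes analysis with Gaussian distance weights takes only a few lines of Python (GEMPAK's scheme adds refinement passes and tuned scale parameters not modeled here; the observations are synthetic):

      import numpy as np

      # Single-pass Barnes objective analysis: spread scattered observations
      # to a regular grid with weights w = exp(-r^2 / kappa).
      def barnes(x_obs, y_obs, v_obs, x_grid, y_grid, kappa):
          grid = np.zeros((len(y_grid), len(x_grid)))
          for j, yg in enumerate(y_grid):
              for i, xg in enumerate(x_grid):
                  r2 = (x_obs - xg) ** 2 + (y_obs - yg) ** 2
                  w = np.exp(-r2 / kappa)
                  grid[j, i] = np.sum(w * v_obs) / np.sum(w)
          return grid

      rng = np.random.default_rng(2)
      x_obs, y_obs = rng.uniform(0, 10, 50), rng.uniform(0, 10, 50)
      v_obs = np.sin(x_obs) + np.cos(y_obs)      # synthetic "station" values
      field = barnes(x_obs, y_obs, v_obs,
                     np.linspace(0, 10, 21), np.linspace(0, 10, 21), kappa=2.0)
      print(field.shape)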

  8. DFMSPH14: A C-code for the double folding interaction potential of two spherical nuclei

    NASA Astrophysics Data System (ADS)

    Gontchar, I. I.; Chushnyakova, M. V.

    2016-09-01

    This is a new version of the DFMSPH code designed to obtain the nucleus-nucleus potential by using the double folding model (DFM) and in particular to find the Coulomb barrier. The new version uses the charge, proton, and neutron density distributions provided by the user. We also added an option for fitting the DFM potential by the Gross-Kalinowski profile. The main functionalities of the original code (e.g. the nucleus-nucleus potential as a function of the distance between the centers of mass of colliding nuclei, the Coulomb barrier characteristics, etc.) have not been modified. Catalog identifier: AEFH_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 7211 No. of bytes in distributed program, including test data, etc.: 114404 Distribution format: tar.gz Programming language: C Computer: PC and Mac Operating system: Windows XP and higher, MacOS, Unix/Linux Memory required to execute with typical data: below 10 Mbyte Classification: 17.9 Catalog identifier of previous version: AEFH_v1_0 Journal reference of previous version: Comp. Phys. Comm. 181 (2010) 168 Does the new version supersede the previous version?: Yes Nature of physical problem: The code calculates in a semimicroscopic way the bare interaction potential between two colliding spherical nuclei as a function of the center of mass distance. The height and the position of the Coulomb barrier are found. The calculated potential is approximated by an analytical profile (Woods-Saxon or Gross-Kalinowski) near the barrier. Dependence of the barrier parameters upon the characteristics of the effective NN forces (e.g., the range of the exchange part of the nuclear term) can be investigated. Method of solution: The nucleus-nucleus potential is calculated using the double folding model with the Coulomb and the effective M3Y NN interactions. For the direct parts of the Coulomb and the nuclear terms, the Fourier transform method is used. In order to calculate the exchange parts, the density matrix expansion method is applied. Typical running time: less than 1 minute. Reason for new version: Many users asked us how to implement their own density distributions in the DFMSPH. Now this option has been added. We also found that the calculated Double-Folding Potential (DFP) is approximated more accurately by the Gross-Kalinowski (GK) profile. This option has also been added.
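
    The profile-fitting step is easy to sketch with scipy (synthetic data stand in for the double-folding potential; the Gross-Kalinowski alternative is analogous):

      import numpy as np
      from scipy.optimize import curve_fit

      # Fit a Woods-Saxon profile V(r) = -V0 / (1 + exp((r - R0)/a)) to a
      # potential sampled near the barrier.
      def woods_saxon(r, V0, R0, a):
          return -V0 / (1.0 + np.exp((r - R0) / a))

      r = np.linspace(8.0, 14.0, 60)                # fm, near-barrier range
      v_dfm = woods_saxon(r, 60.0, 10.5, 0.65)      # pretend DFM output
      v_dfm += np.random.default_rng(3).normal(0.0, 0.05, r.size)

      popt, _ = curve_fit(woods_saxon, r, v_dfm, p0=(50.0, 10.0, 0.5))
      print("V0, R0, a =", popt)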

  9. USSAERO computer program development, versions B and C

    NASA Technical Reports Server (NTRS)

    Woodward, F. A.

    1980-01-01

    Versions B and C of the unified subsonic and supersonic aerodynamic analysis program, USSAERO, are described. Version B incorporates a new symmetrical singularity method to provide improved surface pressure distributions on wings in subsonic flow. Version C extends the range of application of the program to include the analysis of multiple engine nacelles or finned external stores. In addition, nonlinear compressibility effects in high subsonic and supersonic flows are approximated using a correction based on the local Mach number at panel control points. Several examples are presented comparing the results of these programs with other panel methods and experimental data.
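
    As a rough illustration of this kind of correction (the textbook Karman-Tsien rule below is a stand-in; USSAERO's correction is based on the local Mach number at panel control points and is not reproduced here):

      import numpy as np

      # Karman-Tsien compressibility correction of an incompressible
      # pressure coefficient: Cp = Cp0 / (beta + (M^2/(1+beta)) * Cp0/2),
      # with beta = sqrt(1 - M^2).
      def karman_tsien(cp_inc, mach):
          beta = np.sqrt(1.0 - mach ** 2)
          return cp_inc / (beta + (mach ** 2 / (1.0 + beta)) * cp_inc / 2.0)

      cp_incompressible = -0.5      # hypothetical value from a panel solution
      for m in (0.3, 0.5, 0.7):
          print(m, karman_tsien(cp_incompressible, m))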

  10. Multithreaded transactions in scientific computing: New versions of a computer program for kinematical calculations of RHEED intensity oscillations

    NASA Astrophysics Data System (ADS)

    Brzuszek, Marcin; Daniluk, Andrzej

    2006-11-01

    Writing a concurrent program can be more difficult than writing a sequential program. A programmer needs to think about synchronisation, race conditions and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction, which allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents multithreaded versions of the GROWTH program, which allow the calculation of the layer coverages during the growth of thin epitaxial films and the corresponding RHEED intensities according to the kinematical approximation. The presented programs also contain graphical user interfaces, which enable displaying program data at run-time. New version program summaryTitles of programs: GROWTHGr, GROWTH06 Catalogue identifier: ADVL_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Catalogue identifier of previous version: ADVL Does the new version supersede the original program?: No Computer for which the new version is designed and others on which it has been tested: Pentium-based PC Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT Programming language used: Object Pascal Memory required to execute with typical data: More than 1 MB Number of bits in a word: 64 Number of processors used: 1 No. of lines in distributed program, including test data, etc.: 20 931 Number of bytes in distributed program, including test data, etc.: 1 311 268 Distribution format: tar.gz Nature of physical problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the use of kinematical diffraction theory [1]. Reference: [1] P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222.
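
    For orientation, the kinematical intensity from layer coverages reduces to a short calculation (a standard textbook form with synthetic coverages; the GROWTH programs' actual equations follow Cohen et al. [1] and are not reproduced here):

      import numpy as np

      # Kinematical RHEED intensity: the exposed area of layer n is
      # theta_n - theta_{n+1}, each layer adds a phase phi, and
      # I = |sum_n (theta_n - theta_{n+1}) exp(i n phi)|^2.
      def rheed_intensity(theta, phi):
          theta = np.concatenate(([1.0], theta, [0.0]))  # substrate ... vacuum
          exposed = theta[:-1] - theta[1:]
          n = np.arange(exposed.size)
          return np.abs(np.sum(exposed * np.exp(1j * phi * n))) ** 2

      # At the anti-Bragg condition (phi = pi), a half-filled top layer
      # interferes destructively; a completed layer restores the intensity.
      print(rheed_intensity(np.array([0.5]), np.pi))   # ~0
      print(rheed_intensity(np.array([1.0]), np.pi))   # ~1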

  11. Second catalog of interferometric measurements of binary stars (McAlister and Hartkopf 1988): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1989-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The catalog is a compilation of measurements of binary- and multiple-star systems obtained by speckle interferometric techniques; this version supersedes a previous edition of the catalog published in 1985. Stars that have been examined for multiplicity with negative results are included, in which case upper limits for the separation are given. The second version is expanded from the first in that a file of newly resolved systems and six cross-index files of alternate designations are included. The data file contains alternate identifications for the observed systems, epochs of observation, reported errors in position angles and separation, and bibliographical references.

  12. Multithreaded transactions in scientific computing. The Growth06_v2 program

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2009-07-01

    Writing a concurrent program can be more difficult than writing a sequential program. A programmer needs to think about synchronization, race conditions and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction, which allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents a new version of the GROWTHGr and GROWTH06 programs. New version program summaryProgram title: GROWTH06_v2 Catalogue identifier: ADVL_v2_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 65 255 No. of bytes in distributed program, including test data, etc.: 865 985 Distribution format: tar.gz Programming language: Object Pascal Computer: Pentium-based PC Operating system: Windows 9x, XP, NT, Vista RAM: more than 1 MB Classification: 4.3, 7.2, 6.2, 8, 14 Catalogue identifier of previous version: ADVL_v2_0 Journal reference of previous version: Comput. Phys. Comm. 175 (2006) 678 Does the new version supersede the previous version?: Yes Nature of problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the use of kinematical diffraction theory. Solution method: Epitaxial growth of thin films is modelled by a set of non-linear differential equations [1]. The Runge-Kutta method with adaptive stepsize control was used for solving the initial value problem for non-linear differential equations [2]. Reasons for new version: According to the users' suggestions, the functionality of the program has been improved. Moreover, new use cases have been added which make the handling of the program easier and more efficient than before [3]. Summary of revisions: The design pattern (see Fig. 2 of Ref. [3]) has been modified according to the scheme shown in Fig. 1. A graphical user interface (GUI) for the program has been reconstructed. Fig. 2 presents a hybrid diagram of a GUI that shows how onscreen objects connect to use cases. The program has been compiled with English/USA regional and language options. Note: The figures mentioned above are contained in the program distribution file. Unusual features: The program is distributed in the form of a source project GROWTH06_v2.dpr with associated files, and should be compiled using Borland Delphi compilers version 6 or later (including Borland Developer Studio 2006 and CodeGear compilers for Delphi). Additional comments: Two figures are included in the program distribution file. These are captioned "Static classes model for Transaction design pattern" and "A model of a window that shows how onscreen objects connect to use cases". Running time: The typical running time is machine and user-parameters dependent. References: [1] A. Daniluk, Comput. Phys. Comm. 170 (2005) 265. [2] W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes in Pascal: The Art of Scientific Computing, first ed., Cambridge University Press, 1989. [3] M. Brzuszek, A. Daniluk, Comput. Phys. Comm. 175 (2006) 678.
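
    A minimal sketch of the numerical core, integrating a simple rate-equation growth model with an adaptive Runge-Kutta method (scipy's RK45 here; the model and parameters are illustrative, not GROWTH06_v2's equations):

      import numpy as np
      from scipy.integrate import solve_ivp

      # Simple layer-by-layer growth model:
      # d(theta_n)/dt = F * (theta_{n-1} - theta_n), substrate theta_0 = 1.
      F, n_layers = 1.0, 8

      def rhs(t, theta):
          below = np.concatenate(([1.0], theta[:-1]))
          return F * (below - theta)

      sol = solve_ivp(rhs, (0.0, 5.0), np.zeros(n_layers),
                      method="RK45", rtol=1e-8)   # adaptive stepsize control
      print(sol.y[:, -1])   # layer coverages after 5 monolayers' worth of flux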

  13. FLY MPI-2: a parallel tree code for LSS

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Comparato, M.; Antonuccio-Delogu, V.

    2006-04-01

    New version program summaryProgram title: FLY 3.1 Catalogue identifier: ADSC_v2_0 Licensing provisions: yes Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland No. of lines in distributed program, including test data, etc.: 158 172 No. of bytes in distributed program, including test data, etc.: 4 719 953 Distribution format: tar.gz Programming language: Fortran 90, C Computer: Beowulf cluster, PC, MPP systems Operating system: Linux, Aix RAM: 100M words Catalogue identifier of previous version: ADSC_v1_0 Journal reference of previous version: Comput. Phys. Comm. 155 (2003) 159 Does the new version supersede the previous version?: yes Nature of problem: FLY is a parallel collisionless N-body code for the calculation of the gravitational force. Solution method: FLY is based on the hierarchical oct-tree domain decomposition introduced by Barnes and Hut (1986). Reasons for the new version: The new version of FLY is implemented by using the MPI-2 standard: the distributed version 3.1 was developed by using the MPICH2 library on a PC Linux cluster. Today the FLY performance allows us to consider the FLY code among the most powerful parallel codes for tree N-body simulations. Another important new feature is the availability of an interface with hydrodynamical Paramesh-based codes. Simulations must follow a box large enough to accurately represent the power spectrum of fluctuations on very large scales so that we may hope to compare them meaningfully with real data. The number of particles then sets the mass resolution of the simulation, which we would like to make as fine as possible. The idea of building an interface between two codes that have different and complementary cosmological tasks allows us to execute complex cosmological simulations with FLY, specialized for DM evolution, and a code specialized for hydrodynamical components that uses a Paramesh block structure. Summary of revisions: The parallel communication schema was totally changed. The new version adopts the MPICH2 library. Now FLY can be executed on all Unix systems having an MPI-2 standard library. The main data structure is declared in a module procedure of FLY (the fly_h.F90 routine). FLY creates the MPI window object for one-sided communication for all the shared arrays, with a call like the following: CALL MPI_WIN_CREATE(POS, SIZE, REAL8, MPI_INFO_NULL, MPI_COMM_WORLD, WIN_POS, IERR). The following main window objects are created: win_pos, win_vel, win_acc (particle positions, velocities, and accelerations); win_pos_cell, win_mass_cell, win_quad, win_subp, win_grouping (cell positions, masses, quadrupole momenta, tree structure, and grouping cells). Other windows are created for dynamic load balance and global counters. Restrictions: The program uses the leapfrog integrator schema, but this could be changed by the user. Unusual features: FLY uses the MPI-2 standard: the MPICH2 library on Linux systems was adopted. To run this version of FLY, the working directory must be shared among all the processors that execute FLY. Additional comments: Full documentation for the program is included in the distribution in the form of a README file, a User Guide and a Reference manuscript. Running time: An IBM Linux Cluster 1350 at Cineca (512 nodes with 2 processors per node and 2 GB RAM per processor) was used for performance tests. Processor type: Intel Xeon Pentium IV 3.0 GHz with 512 KB cache (128 nodes have Nocona processors). Internal network: Myricom LAN Card "C" Version and "D" Version. Operating system: Linux SuSE SLES 8. The code was compiled using the mpif90 compiler version 8.1 with basic optimization options, in order to obtain performance that can be meaningfully compared with other generic clusters.
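
    The one-sided window pattern above translates directly to Python's mpi4py; the sketch below is illustrative only (not FLY's code) and assumes a working MPI installation with mpi4py available:

      from mpi4py import MPI
      import numpy as np

      # Each rank exposes an array through an RMA window (cf. MPI_WIN_CREATE
      # and win_pos in FLY) and fetches a neighbor's data one-sidedly.
      # Run with: mpiexec -n 2 python this_script.py
      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      pos = np.full(4, float(rank))          # stand-in for particle positions
      win = MPI.Win.Create(pos, comm=comm)

      buf = np.empty(4, dtype=np.float64)
      win.Fence()                            # open the access epoch
      target = (rank + 1) % comm.Get_size()
      win.Get(buf, target)                   # one-sided read of the remote array
      win.Fence()                            # close the epoch

      print(f"rank {rank} read {buf} from rank {target}")
      win.Free()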

  14. Stratigraphic framework of Cambrian and Ordovician rocks in the central Appalachian basin from Campbell County, Kentucky, to Tazewell County, Virginia: Chapter E.2.4 in Coal and petroleum resources in the Appalachian basin: distribution, geologic framework, and geochemical character

    USGS Publications Warehouse

    Ryder, Robert T.; Repetski, John E.; Harris, Anita G.; Lentz, Erika E.; Ruppert, Leslie F.; Ryder, Robert T.

    2014-01-01

    This chapter is a re-release of U.S. Geological Survey Miscellaneous Investigations Series Map I-2530, of the same title, by Ryder and others (1997; online version 2.0 revised and digitized by Erika E. Lentz, 2004). Version 2.0 is a digital version of the original and also includes the gamma-ray well log traces.

  15. User's Manual for BEST-Dairy: Benchmarking and Energy/water-Saving Tool (BEST) for the Dairy Processing Industry (Version 1.2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, T.; Ke, J.; Sathaye, J.

    2011-04-20

    This User's Manual summarizes the background information of the Benchmarking and Energy/water-Saving Tool (BEST) for the Dairy Processing Industry (Version 1.2, 2011), including the 'Read Me' portion of the tool, the Introduction section, and instructions for the BEST-Dairy tool, which is developed and distributed by Lawrence Berkeley National Laboratory (LBNL).

  16. QDENSITY—A Mathematica quantum computer simulation

    NASA Astrophysics Data System (ADS)

    Juliá-Díaz, Bruno; Burdis, Joseph M.; Tabakin, Frank

    2009-03-01

    This Mathematica 6.0 package is a simulation of a quantum computer. The program provides a modular, instructive approach for generating the basic elements that make up a quantum circuit. The main emphasis is on using the density matrix, although an approach using state vectors is also implemented in the package. The package commands are defined in Qdensity.m, which contains the tools needed in quantum circuits, e.g., multiqubit kets, projectors, gates, etc. New version program summaryProgram title: QDENSITY 2.0 Catalogue identifier: ADXH_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 26 055 No. of bytes in distributed program, including test data, etc.: 227 540 Distribution format: tar.gz Programming language: Mathematica 6.0 Operating system: Any which supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux FC4 Catalogue identifier of previous version: ADXH_v1_0 Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 914 Classification: 4.15 Does the new version supersede the previous version?: It offers an alternative, more up-to-date implementation Nature of problem: Analysis and design of quantum circuits, quantum algorithms and quantum clusters. Solution method: A Mathematica package is provided which contains commands to create and analyze quantum circuits. Several Mathematica notebooks containing relevant examples (Teleportation, Shor's Algorithm, and Grover's search) are explained in detail. A tutorial, Tutorial.nb, is also enclosed. Reasons for new version: The package has been updated to make it fully compatible with Mathematica 6.0. Summary of revisions: The package has been updated to make it fully compatible with Mathematica 6.0. Running time: Most examples included in the package, e.g., the tutorial, Shor's examples, Teleportation examples and Grover's search, run in less than a minute on a Pentium 4 processor (2.6 GHz). The running time for a quantum computation depends crucially on the number of qubits employed.
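
    For orientation, the density-matrix picture the package works in is easy to sketch in Python (QDENSITY itself is Mathematica; the two-qubit circuit below is a standard textbook example):

      import numpy as np

      # A gate U maps a density matrix as rho -> U rho U†. Here a Hadamard
      # on the first qubit followed by CNOT turns |00><00| into a Bell state.
      H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
      I2 = np.eye(2)
      CNOT = np.array([[1, 0, 0, 0],
                       [0, 1, 0, 0],
                       [0, 0, 0, 1],
                       [0, 0, 1, 0]], dtype=float)

      ket00 = np.zeros(4)
      ket00[0] = 1.0
      rho = np.outer(ket00, ket00)          # pure state |00><00|

      U = CNOT @ np.kron(H, I2)             # whole circuit as one unitary
      rho = U @ rho @ U.conj().T
      print(np.round(rho, 3))               # Bell-state density matrix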

  17. GEMPAK 5.1 - A GENERAL METEOROLOGICAL PACKAGE (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Desjardins, M. L.

    1994-01-01

    GEMPAK is a general meteorological software package developed at NASA/Goddard Space Flight Center. It includes programs to analyze and display surface, upper-air, and gridded data, including model output. There are very general programs to list, edit, and plot data on maps, to display profiles and time series, to draw and fill contours, to draw streamlines, to plot symbols for clouds, sky cover, and pressure tendency, and to draw cross sections in the case of gridded and sounding data. In addition, there are Barnes objective analysis programs to grid surface and upper-air data. The programs include the capabilities to derive meteorological parameters from those found in the dataset, to perform vertical interpolations of sounding data to different coordinate systems, and to compute an extensive set of gridded diagnostic quantities by specifying various nested combinations of scalar and vector arithmetic, algebraic, and differential operators. The GEMPAK 5.1 graphics/transformation subsystem, GEMPLT, provides device-independent graphics. GEMPLT also has the capability to display output in a variety of map projections or overlaid on satellite imagery. GEMPAK 5.1 is written in FORTRAN 77 and C-language and has been implemented on VAX computers under VMS and on computers running the UNIX operating system. During installation and normal use, this package occupies approximately 100Mb of hard disk space. The UNIX version of GEMPAK includes drivers for several graphic output systems including MIT's X Window System (X11,R4), Sun GKS, PostScript (color and monochrome), Silicon Graphics, and others. The VMS version of GEMPAK also includes drivers for several graphic output systems including PostScript (color and monochrome). The VMS version is delivered with the object code for the Transportable Applications Environment (TAE) program, version 4.1, which serves as a user interface. A color monitor is recommended for displaying maps on video display devices. Data for rendering regional maps are included with this package. The standard distribution medium for the UNIX version of GEMPAK 5.1 is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the VMS version of GEMPAK 5.1 is a 6250 BPI 9-track magnetic tape in DEC VAX BACKUP format. The VMS version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. This program was developed in 1985. The current version, GEMPAK 5.1, was released in 1992. The package is delivered with source code. An extensive collection of subroutine libraries allows users to format data for use by GEMPAK, to develop new programs, and to enhance existing ones.

  18. A new version of Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.

    2010-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to bi-dimensional sets of points, stored in bitmap files. The application was extended to work also with comma-separated-values files and three-dimensional images. New version program summaryProgram title: Fractal Analysis v02 Catalogue identifier: AEEG_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9999 No. of bytes in distributed program, including test data, etc.: 4 366 783 Distribution format: tar.gz Programming language: MS Visual Basic 6.0 Computer: PC Operating system: MS Windows 98 or later RAM: 30 MB Classification: 14 Catalogue identifier of previous version: AEEG_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999 Does the new version supersede the previous version?: Yes Nature of problem: Estimating the fractal dimension of 2D and 3D images. Solution method: Optimized implementation of the box-counting algorithm. Reasons for new version: The previous version was limited to bitmap image files. The new application was extended in order to work with objects stored in comma separated values (csv) files. The main advantages are: Easier integration with other applications (csv is a widely used, simple text file format); Fewer resources consumed and improved performance (only the information of interest, the "black points", is stored); Higher resolution (the point coordinates are loaded into Visual Basic double variables [2]); Possibility of storing three-dimensional objects (e.g. the 3D Sierpinski gasket). In this version the optimized box-counting algorithm [1] was extended to the three-dimensional case. Summary of revisions: The application interface was changed from SDI (single document interface) to MDI (multi-document interface). One form was added in order to provide a graphical user interface for the new functionalities (fractal analysis of 2D and 3D images stored in csv files). Additional comments: User friendly graphical interface; Easy deployment mechanism. Running time: To a first approximation, the algorithm is linear. References: [1] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001. [2] F. Balena, Programming Microsoft Visual Basic 6.0, Microsoft Press, US, 1999.
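
    The box-counting estimate itself fits in a few lines of Python (a sketch on a chaos-game Sierpinski gasket; the application's optimized algorithm and file handling are not reproduced):

      import numpy as np

      # Build a 2D Sierpinski gasket by the chaos game, then estimate its
      # fractal dimension from the slope of log N(s) vs log(1/s), where
      # N(s) is the number of occupied boxes of side s.
      rng = np.random.default_rng(4)
      verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3.0) / 2.0]])
      p = np.zeros(2)
      pts = np.empty((50_000, 2))
      for i in range(pts.shape[0]):
          p = (p + verts[rng.integers(3)]) / 2.0
          pts[i] = p

      sizes = 2.0 ** -np.arange(2, 8)
      counts = [len(np.unique(np.floor(pts / s), axis=0)) for s in sizes]

      slope = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]
      print(f"estimated dimension: {slope:.3f}")   # ~1.585 = log 3 / log 2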

  19. SMMP v. 3.0—Simulating proteins and protein interactions in Python and Fortran

    NASA Astrophysics Data System (ADS)

    Meinke, Jan H.; Mohanty, Sandipan; Eisenmenger, Frank; Hansmann, Ulrich H. E.

    2008-03-01

    We describe a revised and updated version of the program package SMMP. SMMP is an open-source FORTRAN package for molecular simulation of proteins within the standard geometry model. It is designed as a simple and inexpensive tool for researchers and students to become familiar with protein simulation techniques. SMMP 3.0 sports a revised API increasing its flexibility, an implementation of the Lund force field, multi-molecule simulations, a parallel implementation of the energy function, Python bindings, and more. Program summaryTitle of program:SMMP Catalogue identifier:ADOJ_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADOJ_v3_0.html Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions:Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html Programming language used:FORTRAN, Python No. of lines in distributed program, including test data, etc.:52 105 No. of bytes in distributed program, including test data, etc.:599 150 Distribution format:tar.gz Computer:Platform independent Operating system:OS independent RAM:2 Mbytes Classification:3 Does the new version supersede the previous version?:Yes Nature of problem:Molecular mechanics computations and Monte Carlo simulation of proteins. Solution method:Utilizes ECEPP2/3, FLEX, and Lund potentials. Includes Monte Carlo simulation algorithms for canonical, as well as for generalized ensembles. Reasons for new version:API changes and increased functionality. Summary of revisions:Added Lund potential; parameters used in subroutines are now passed as arguments; multi-molecule simulations; parallelized energy calculation for ECEPP; Python bindings. Restrictions:The consumed CPU time increases with the size of protein molecule. Running time:Depends on the size of the simulated molecule.
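
    The canonical Monte Carlo step at the heart of such packages is compact; a toy sketch (a 1D double-well stand-in for a force field, not SMMP's ECEPP/FLEX/Lund energies):

      import numpy as np

      # Metropolis sampling of exp(-E/kT) for a double-well toy energy.
      def energy(x):
          return x ** 4 - 2.0 * x ** 2

      rng = np.random.default_rng(5)
      kT, x = 0.5, 0.0
      samples = []
      for _ in range(100_000):
          x_new = x + rng.normal(0.0, 0.3)            # trial move
          dE = energy(x_new) - energy(x)
          if dE <= 0.0 or rng.random() < np.exp(-dE / kT):
              x = x_new                               # accept
          samples.append(x)

      print("mean |x| =", np.mean(np.abs(samples)))   # wells sit near |x| = 1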

  20. Learning Probabilities From Random Observables in High Dimensions: The Maximum Entropy Distribution and Others

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi

    2015-11-01

    We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse `temperature' Γ . The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ =0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞ ). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α =(log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N≤ 10).
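
    A small numerical sketch of picking out the maximum entropy member of a version space (synthetic observables and targets; exhaustive enumeration, feasible only for small N, as in the paper's Monte Carlo checks for N ≤ 10):

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import logsumexp

      # N binary variables have S = 2^N configurations; draw M random
      # observables O_mu(s) and target expectations g from a hidden
      # random distribution.
      N, M = 8, 10
      S = 2 ** N
      rng = np.random.default_rng(6)
      O = rng.choice([-1.0, 1.0], size=(M, S))
      g = O @ rng.dirichlet(np.ones(S))

      # The maximum entropy distribution matching g has the exponential form
      # p(s) ~ exp(sum_mu lam_mu O_mu(s)); find lam by minimizing the dual
      # ln Z(lam) - lam . g, whose gradient is <O>_p - g.
      def dual(lam):
          return logsumexp(lam @ O) - lam @ g

      lam = minimize(dual, np.zeros(M), method="BFGS").x
      logp = lam @ O
      p_maxent = np.exp(logp - logsumexp(logp))
      print("max constraint residual:", np.abs(O @ p_maxent - g).max())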

  1. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (SILICON GRAPHICS VERSION)

    NASA Technical Reports Server (NTRS)

    Walters, D.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely-sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS has the subsequent capability to geographically reference this data to dozens of standard, as well as user-created, projections. As an integrated image processing system, ELAS offers the user of remotely-sensed data a wide range of capabilities in the areas of land cover analysis and general purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulations. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files, etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display; true color display; and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. ELAS provides as standard the flexibility to process data elements exceeding 8 bits in length, including floating point (noninteger) elements and 16- or 32-bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-LANGUAGE for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15-bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X-Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor. 
    The standard distribution medium for the VAX version (ERL-10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Masscomp version (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch IRIS streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and Open Windows are trademarks of Sun Microsystems, Inc. MIT X Window System is licensed by Massachusetts Institute of Technology.

  2. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (CONCURRENT VERSION)

    NASA Technical Reports Server (NTRS)

    Pearson, R. W.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely-sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS has the subsequent capability to geographically reference this data to dozens of standard, as well as user-created, projections. As an integrated image processing system, ELAS offers the user of remotely-sensed data a wide range of capabilities in the areas of land cover analysis and general purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulations. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files, etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display; true color display; and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. ELAS provides as standard the flexibility to process data elements exceeding 8 bits in length, including floating point (noninteger) elements and 16- or 32-bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-LANGUAGE for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15-bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X-Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor. 
    The standard distribution medium for the VAX version (ERL-10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Masscomp version (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch IRIS streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and Open Windows are trademarks of Sun Microsystems, Inc. MIT X Window System is licensed by Massachusetts Institute of Technology.

  3. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Walters, D.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely-sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS has the subsequent capability to geographically reference this data to dozens of standard, as well as user-created, projections. As an integrated image processing system, ELAS offers the user of remotely-sensed data a wide range of capabilities in the areas of land cover analysis and general purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulations. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files, etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display; true color display; and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. ELAS provides as standard the flexibility to process data elements exceeding 8 bits in length, including floating point (noninteger) elements and 16- or 32-bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-LANGUAGE for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15-bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X-Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor. 
    The standard distribution medium for the VAX version (ERL-10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Masscomp version (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch IRIS streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and Open Windows are trademarks of Sun Microsystems, Inc. MIT X Window System is licensed by Massachusetts Institute of Technology.

  4. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (MASSCOMP VERSION)

    NASA Technical Reports Server (NTRS)

    Walters, D.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely-sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS can then geographically reference these data to dozens of standard, as well as user-created, projections. As an integrated image processing system, ELAS offers the user of remotely-sensed data a wide range of capabilities in the areas of land cover analysis and general-purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulations. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files, etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options that aid in displaying image data, such as magnification/reduction of the display, true color display, and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules that allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new application modules to be easily integrated in the future. ELAS provides as standard the flexibility to process data elements exceeding 8 bits in length, including floating-point (noninteger) elements and 16- or 32-bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-LANGUAGE for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15 bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor. 
The standard distribution medium for the VAX version (ERL-10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Masscomp version (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. The MIT X Window System is licensed by the Massachusetts Institute of Technology.

  5. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (DEC VAX VERSION)

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely-sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS can then geographically reference these data to dozens of standard, as well as user-created, projections. As an integrated image processing system, ELAS offers the user of remotely-sensed data a wide range of capabilities in the areas of land cover analysis and general-purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulations. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files, etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options that aid in displaying image data, such as magnification/reduction of the display, true color display, and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules that allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new application modules to be easily integrated in the future. ELAS provides as standard the flexibility to process data elements exceeding 8 bits in length, including floating-point (noninteger) elements and 16- or 32-bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-LANGUAGE for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15 bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor. 
The standard distribution medium for the VAX version (ERL-10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Masscomp version (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. The MIT X Window System is licensed by the Massachusetts Institute of Technology.

  6. An Improved Version of the NASA-Lockheed Multielement Airfoil Analysis Computer Program

    NASA Technical Reports Server (NTRS)

    Brune, G. W.; Manke, J. W.

    1978-01-01

    An improved version of the NASA-Lockheed computer program for the analysis of multielement airfoils is described. The predictions of the program are evaluated by comparison with recent experimental high lift data including lift, pitching moment, profile drag, and detailed distributions of surface pressures and boundary layer parameters. The results of the evaluation show that the contract objectives of improving program reliability and accuracy have been met.

  7. Software Design Description for the Polar Ice Prediction System (PIPS) Version 3.0

    DTIC Science & Technology

    2008-11-05

    Naval Research Laboratory, Stennis Space Center, MS 39529-5004. Report NRL/MR/7320--08-9150. Approved for public release; distribution is unlimited.

  8. A numerical differentiation library exploiting parallel architectures

    NASA Astrophysics Data System (ADS)

    Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.

    2009-08-01

    We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered, resulting in corresponding formulas that are accurate to order O(h), O(h^2), and O(h^4), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters as well as modern multi-core systems; due to the independent character of the derivative computations, the speedup scales almost linearly with the number of available processors/cores. Program summary. Program title: NDL (Numerical Differentiation Library) Catalogue identifier: AEDG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 73 030 No. of bytes in distributed program, including test data, etc.: 630 876 Distribution format: tar.gz Programming language: ANSI FORTRAN-77, ANSI C, MPI, OPENMP Computer: Distributed systems (clusters), shared memory systems Operating system: Linux, Solaris Has the code been vectorised or parallelized?: Yes RAM: The library uses O(N) internal storage, N being the dimension of the problem Classification: 4.9, 4.14, 6.5 Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. The parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Restrictions: The library uses only double precision arithmetic. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP and 4.2 s for the MPI parallel distribution on 2 processors.
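
    As a rough illustration of the record's central idea (finite differences with a step that balances truncation against round-off), here is a minimal central-difference gradient in Python. It is not the NDL interface; the step-size rule and function names are assumptions. Note that the per-component evaluations are mutually independent, which is exactly what the OpenMP/MPI versions parallelize.

      import numpy as np

      def gradient_central(f, x, h=None):
          """O(h^2)-accurate gradient of f at x via central differences."""
          x = np.asarray(x, dtype=float)
          if h is None:
              # Error-balancing step for central differences scales as
              # eps**(1/3); max(1, |x|) keeps the step relative (a common
              # heuristic, assumed here rather than taken from NDL).
              h = np.cbrt(np.finfo(float).eps) * np.maximum(1.0, np.abs(x))
          h = np.broadcast_to(np.asarray(h, dtype=float), x.shape)
          g = np.empty_like(x)
          for i in range(x.size):
              e = np.zeros_like(x)
              e[i] = h[i]
              # two function calls per component; components are independent
              g[i] = (f(x + e) - f(x - e)) / (2.0 * h[i])
          return g

      print(gradient_central(lambda v: v[0]**2 + 3.0*v[1], [1.0, 2.0]))
      # expected: approximately [2. 3.]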

  9. Contents of the JPL Distributed Active Archive Center (DAAC) archive, version 2-91

    NASA Technical Reports Server (NTRS)

    Smith, Elizabeth A. (Editor); Lassanyi, Ruby A. (Editor)

    1991-01-01

    The Distributed Active Archive Center (DAAC) archive at the Jet Propulsion Laboratory (JPL) includes satellite data sets for the ocean sciences and global change research to facilitate multidisciplinary use of satellite ocean data. Parameters include sea surface height, surface wind vector, sea surface temperature, atmospheric liquid water, and surface pigment concentration. The Jet Propulsion Laboratory DAAC is an element of the Earth Observing System Data and Information System (EOSDIS) and will be the United States distribution site for the Ocean Topography Experiment (TOPEX)/POSEIDON data and metadata.

  10. JPL Physical Oceanography Distributed Active Archive Center (PO.DAAC) data availability, version 1-94

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The Physical Oceanography Distributed Active Archive Center (PO.DAAC) archive at the Jet Propulsion Laboratory (JPL) includes satellite data sets for the ocean sciences and global-change research to facilitate multidisciplinary use of satellite ocean data. Parameters include sea-surface height, surface-wind vector, sea-surface temperature, atmospheric liquid water, and integrated water vapor. The JPL PO.DAAC is an element of the Earth Observing System Data and Information System (EOSDIS) and is the United States distribution site for Ocean Topography Experiment (TOPEX)/POSEIDON data and metadata.

  11. National Hydropower Plant Dataset, Version 2 (FY18Q3)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samu, Nicole; Kao, Shih-Chieh; O'Connor, Patrick

    The National Hydropower Plant Dataset, Version 2 (FY18Q3) is a geospatially comprehensive point-level dataset containing locations and key characteristics of U.S. hydropower plants that are currently either in the hydropower development pipeline (pre-operational), operational, withdrawn, or retired. These data are provided in GIS and tabular formats with corresponding metadata for each. In addition, we include access to download 2 versions of the National Hydropower Map, which was produced with these data (i.e. Map 1 displays the geospatial distribution and characteristics of all operational hydropower plants; Map 2 displays the geospatial distribution and characteristics of operational hydropower plants with pumped storage and mixed capabilities only). This dataset is a subset of ORNL's Existing Hydropower Assets data series, updated quarterly as part of ORNL's National Hydropower Asset Assessment Program.

  12. Fatigue-life distributions for reaction time data.

    PubMed

    Tejo, Mauricio; Niklitschek-Soto, Sebastián; Marmolejo-Ramos, Fernando

    2018-06-01

    The family of fatigue-life distributions is introduced as an alternative model of reaction time data. This family includes the shifted Wald distribution and a shifted version of the Birnbaum-Saunders distribution. Although the former has been proposed as a way to model reaction time data, the latter has not. Hence, we provide theoretical, mathematical and practical arguments in support of the shifted Birnbaum-Saunders as a suitable model of simple reaction times and associated cognitive mechanisms.
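
    SciPy happens to ship the Birnbaum-Saunders family under the name fatiguelife, with a location parameter that supplies the shift, so the paper's model can be sketched as below. The data here are synthetic, and the parameter values are arbitrary assumptions, not estimates from the paper.

      import numpy as np
      from scipy import stats

      # Synthetic "reaction times": fatiguelife is SciPy's name for the
      # Birnbaum-Saunders family; loc acts as the shift parameter.
      rng = np.random.default_rng(1)
      rt = stats.fatiguelife.rvs(0.3, loc=0.25, scale=0.4, size=500,
                                 random_state=rng)

      c, loc, scale = stats.fatiguelife.fit(rt)   # ML fit: shape, shift, scale
      print(f"shape={c:.3f}  shift={loc:.3f} s  scale={scale:.3f}")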

  13. Biodetection grinder

    NASA Technical Reports Server (NTRS)

    Shaia, C. D.; Jones, G. H.

    1971-01-01

    Work on a biodetection grinder is summarized. It includes development of the prototype grinder, second generation grinder, and the production version of the grinder. Tests showed the particle size distribution was satisfactory and biological evaluation confirmed the tests.

  14. A Distributed Data Base Version of INGRES.

    ERIC Educational Resources Information Center

    Stonebraker, Michael; Neuhold, Eric

    Extensions are required to the currently operational INGRES data base system for it to manage a data base distributed over multiple machines in a computer network running the UNIX operating system. Three possible user views include: (1) each relation in a unique machine, (2) a user interaction with the data base which can only span relations at a…

  15. ISICS2008: An expanded version of ISICS for calculating K-, L-, and M-shell cross sections from PWBA and ECPSSR theory

    NASA Astrophysics Data System (ADS)

    Cipolla, Sam J.

    2009-09-01

    New version program summary. Program title: ISICS2008 Catalogue identifier: ADDS_v4_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADDS_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5420 No. of bytes in distributed program, including test data, etc.: 107 669 Distribution format: tar.gz Programming language: C Computer: 80486 or higher PCs Operating system: Windows XP and all earlier operating systems Classification: 16.7 Catalogue identifier of previous version: ADDS_v3_0 Journal reference of previous version: Comput. Phys. Comm. 179 (2008) 616 Does the new version supersede the previous version?: Yes Nature of problem: Ionization and X-ray production cross section calculations for ion-atom collisions. Solution method: Numerical integration of form factor using a logarithmic transform and Gaussian quadrature, plus exact integration limits. Reasons for new version: Addition of relativistic treatment of both projectile and K-shell electrons. Summary of revisions: A new addition to ISICS is the option (R) to calculate ECPSSR cross sections that account for the relativistic treatment of both the projectile and the K-shell electron, as proposed recently by Lapicki [1]; in this treatment the K-shell cross section is obtained by evaluating the standard ECPSSR expression at the relativistic projectile velocity υ1R, together with a small multiplicative correction factor. The option can also be invoked in calculating ECPSShsR, where hsR stands for the Hartree-Slater description of the K-shell electron, which was already incorporated into ISICS2006 [2,3]; the hsR correction function of ISICS2006 is then likewise evaluated with the relativistic projectile velocity. It should be noted that these expressions are corrected versions [4] of the ones published in Ref. [1]. In this new version, ISICS2008, the option line in the main menu that read "Use Relativistic Proj. velocity" has been replaced by "R option for K-shell … Uses Rel. Proj. vel.". As before, various combinations of options can be utilized and each is denoted in the output. Restrictions: The consumed CPU time increases with the atomic shell (K, L, M), but execution is still very fast. Additional comments: A revised User Manual is included in the distribution file. Running time: This depends on which shell and the number of different energies to be used in the calculation. The running time is not significantly changed from the previous version. As before, to calculate K-shell cross sections for protons striking carbon for 19 different proton energies it took less than 10 s; to calculate M-shell cross sections for protons on gold for 21 proton energies it took 4.2 min. References: [1] G. Lapicki, J. Phys. B: At. Mol. Opt. Phys. 41 (2008) 115201. [2] S. Cipolla, Comput. Phys. Comm. 176 (2007) 157. [3] S. Cipolla, Nucl. Instrum. Methods Phys. Res. B 261 (2007) 142. [4] G. Lapicki, private communication.
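
    The solution method quoted above (a logarithmic transform plus Gaussian quadrature) can be illustrated generically: substitute q = e^t and apply Gauss-Legendre nodes in t, which concentrates nodes where form-factor-like integrands vary fastest. The integrand below is a stand-in, not the ISICS form factor.

      import numpy as np

      def gauss_log_integral(g, a, b, n=48):
          """Integrate g over [a, b] using Gauss-Legendre nodes in t = ln q."""
          t_nodes, weights = np.polynomial.legendre.leggauss(n)
          lo, hi = np.log(a), np.log(b)
          t = 0.5 * (hi - lo) * t_nodes + 0.5 * (hi + lo)  # map [-1, 1] -> [lo, hi]
          q = np.exp(t)
          # with q = exp(t), dq = q dt, hence the extra factor of q
          return 0.5 * (hi - lo) * np.sum(weights * g(q) * q)

      # sanity check: integral of 1/q from 1e-6 to 1 is ln(1e6) ~ 13.8155
      print(gauss_log_integral(lambda q: 1.0 / q, 1e-6, 1.0))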

  16. NASA AVOSS Fast-Time Models for Aircraft Wake Prediction: User's Guide (APA3.8 and TDP2.1)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew J.; Limon Duparcmeur, Fanny M.

    2016-01-01

    NASA's current distribution of fast-time wake vortex decay and transport models includes APA (Version 3.8) and TDP (Version 2.1). This User's Guide provides detailed information on the model inputs, file formats, and model outputs. A brief description of the Memphis 1995, Dallas/Fort Worth 1997, and Denver 2003 wake vortex datasets is given, along with an evaluation of the models. A detailed bibliography is provided which includes publications on model development, wake field experiment descriptions, and applications of the fast-time wake vortex models.

  17. Documentation for the machine-readable version of the Stellar Spectrophotometric Atlas, 3130 Å ≤ λ ≤ 10800 Å, of Gunn and Stryker (1983)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    The machine-readable version of the Atlas as it is currently being distributed from the Astronomical Data Center is described. The data were obtained with the Oke multichannel scanner on the 5-meter Hale reflector for purposes of synthesizing galaxy spectra, and the digitized Atlas contains normalized spectral energy distributions, computed colors, scan line and continuum indices for 175 selected stars covering the complete ranges of spectral type and luminosity class. The documentation includes a byte-by-byte format description, a table of the indigenous characteristics of the magnetic tape file, and a sample listing of logical records exactly as they are recorded on the tape.

  18. Simulating electron energy loss spectroscopy with the MNPBEM toolbox

    NASA Astrophysics Data System (ADS)

    Hohenester, Ulrich

    2014-03-01

    Within the MNPBEM toolbox, we show how to simulate electron energy loss spectroscopy (EELS) of plasmonic nanoparticles using a boundary element method approach. The methodology underlying our approach closely follows the concepts developed by García de Abajo and coworkers (García de Abajo, 2010). We introduce two classes, eelsret and eelsstat, which, in combination with our recently developed MNPBEM toolbox, allow for a simple, robust, and efficient computation of EEL spectra and maps. The classes are accompanied by a number of demo programs for EELS simulation of metallic nanospheres, nanodisks, and nanotriangles, and for electron trajectories passing by or penetrating through the metallic nanoparticles. We also discuss how to compute electric fields induced by the electron beam and cathodoluminescence. Catalogue identifier: AEKJ_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 38886 No. of bytes in distributed program, including test data, etc.: 1222650 Distribution format: tar.gz Programming language: Matlab 7.11.0 (R2010b). Computer: Any which supports Matlab 7.11.0 (R2010b). Operating system: Any which supports Matlab 7.11.0 (R2010b). RAM: ≥1 GB Classification: 18. Catalogue identifier of previous version: AEKJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 370 External routines: MESH2D available at www.mathworks.com Does the new version supersede the previous version?: Yes Nature of problem: Simulation of electron energy loss spectroscopy (EELS) for plasmonic nanoparticles. Solution method: Boundary element method using electromagnetic potentials. Reasons for new version: The new version of the toolbox includes two additional classes for the simulation of electron energy loss spectroscopy (EELS) of plasmonic nanoparticles, and corrects a few minor bugs and inconsistencies. Summary of revisions: New classes "eelsstat" and "eelsret" for the simulation of electron energy loss spectroscopy (EELS) of plasmonic nanoparticles have been added. A few minor errors in the implementation of dipole excitation have been corrected. Running time: Depending on surface discretization, between seconds and hours.

  19. AIRS Version 6 Products and Data Services at NASA GES DISC

    NASA Astrophysics Data System (ADS)

    Ding, F.; Savtchenko, A. K.; Hearty, T. J.; Theobald, M. L.; Vollmer, B.; Esfandiari, E.

    2013-12-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is the home of processing, archiving, and distribution services for data from the Atmospheric Infrared Sounder (AIRS) mission. The AIRS mission is entering its 11th year of global observations of the atmospheric state, including temperature and humidity profiles, outgoing longwave radiation, cloud properties, and trace gases. The GES DISC, in collaboration with the AIRS Project, released data from the Version 6 algorithm in early 2013. The new algorithm represents a significant improvement over previous versions in terms of greater stability, yield, and quality of products. Among the most substantial advances are: improved soundings of Tropospheric and Sea Surface Temperatures; larger improvements with increasing cloud cover; improved retrievals of surface spectral emissivity; near-complete removal of spurious temperature bias trends seen in earlier versions; substantially improved retrieval yield (i.e., number of soundings accepted for output) for climate studies; AIRS-Only retrievals with comparable accuracy to AIRS+AMSU (Advanced Microwave Sounding Unit) retrievals; and more realistic hemispheric seasonal variability and global distribution of carbon monoxide. The GES DISC is working to bring the distribution services up-to-date with these new developments. Our focus is on popular services, like variable subsetting and quality screening, which are impacted by the new elements in Version 6. Other developments in visualization services, such as Giovanni, Near-Real Time imagery, and a granule-map viewer, are progressing along with the introduction of the new data; each service presents its own challenge. This presentation will demonstrate the most significant improvements in Version 6 AIRS products, such as newly added variables (higher resolution outgoing longwave radiation, new cloud property products, etc.), the new quality control schema, and improved retrieval yields. We will also demonstrate the various distribution and visualization services for AIRS data products. The cloud properties, model physics, and water and energy cycles research communities are invited to take advantage of the improvements in Version 6 AIRS products and the various services at GES DISC which provide them.

  20. A general spectral method for the numerical simulation of one-dimensional interacting fermions

    NASA Astrophysics Data System (ADS)

    Clason, Christian; von Winckel, Gregory

    2012-08-01

    This software implements a general framework for the direct numerical simulation of systems of interacting fermions in one spatial dimension. The approach is based on a specially adapted nodal spectral Galerkin method, where the basis functions are constructed to obey the antisymmetry relations of fermionic wave functions. An efficient Matlab program for the assembly of the stiffness and potential matrices is presented, which exploits the combinatorial structure of the sparsity pattern arising from this discretization to achieve optimal run-time complexity. This program allows the accurate discretization of systems with multiple fermions subject to arbitrary potentials, e.g., for verifying the accuracy of multi-particle approximations such as Hartree-Fock in the few-particle limit. It can be used for eigenvalue computations or numerical solutions of the time-dependent Schrödinger equation. The new version includes a Python implementation of the presented approach. New version program summary. Program title: assembleFermiMatrix Catalogue identifier: AEKO_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKO_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 332 No. of bytes in distributed program, including test data, etc.: 5418 Distribution format: tar.gz Programming language: MATLAB/GNU Octave, Python Computer: Any architecture supported by MATLAB, GNU Octave or Python Operating system: Any supported by MATLAB, GNU Octave or Python RAM: Depends on the data Classification: 4.3, 2.2. External routines: Python 2.7+, NumPy 1.3+, SciPy 0.10+ Catalogue identifier of previous version: AEKO_v1_0 Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 405 Does the new version supersede the previous version?: Yes Nature of problem: The direct numerical solution of the multi-particle one-dimensional Schrödinger equation in a quantum well is challenging due to the exponential growth in the number of degrees of freedom with increasing particles. Solution method: A nodal spectral Galerkin scheme is used where the basis functions are constructed to obey the antisymmetry relations of the fermionic wave function. The assembly of these matrices is performed efficiently by exploiting the combinatorial structure of the sparsity patterns. Reasons for new version: A Python implementation is now included. Summary of revisions: Added a Python implementation; small documentation fixes in Matlab implementation. No change in features of the package. Restrictions: Only one-dimensional computational domains with homogeneous Dirichlet or periodic boundary conditions are supported. Running time: Seconds to minutes.
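
    The antisymmetry construction at the heart of the method can be shown in a few lines: a two-fermion basis function is the normalized determinant of one-particle modes, so it changes sign under particle exchange and vanishes at coincident coordinates. The sine modes below are stand-ins for the package's nodal spectral basis, chosen only for this sketch.

      import numpy as np

      def phi(n, x):
          """One-particle sine modes on [0, 1] (stand-ins for the nodal basis)."""
          return np.sqrt(2.0) * np.sin(n * np.pi * x)

      def slater2(i, j, x1, x2):
          """Antisymmetrized two-fermion basis function, i != j."""
          return (phi(i, x1) * phi(j, x2) - phi(j, x1) * phi(i, x2)) / np.sqrt(2.0)

      x1, x2 = 0.3, 0.7
      print(slater2(1, 2, x1, x2), slater2(1, 2, x2, x1))  # opposite signs
      print(slater2(1, 2, x1, x1))                         # Pauli: exactly zero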

  1. The HEAO A-1 X Ray Source Catalog (Wood Et Al. 1984): Documentation for the Machine-Readable Version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1990-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The catalog is a compilation of data for 842 sources detected with the U.S. Naval Research Laboratory Large Area Sky Survey Experiment flown aboard the HEAO 1 satellite. The data include source identifications, positions, error boxes, mean X-ray intensities, and cross identifications to other source designations.

  2. A catalog of selected compact radio sources for the construction of an extragalactic radio/optical reference frame (Argue et al. 1984): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This document describes the machine-readable version of the Selected Compact Radio Source Catalog as it is currently being distributed from the international network of astronomical data centers. It is intended to enable users to read and process the computerized catalog. The catalog contains 233 strong, compact extragalactic radio sources having identified optical counterparts. The machine version contains the same data as the published catalog and includes source identifications, equatorial positions at J2000.0 and their mean errors, object classifications, visual magnitudes, redshifts, 5-GHz flux densities, and comments.

  3. FORM version 4.0

    NASA Astrophysics Data System (ADS)

    Kuipers, J.; Ueda, T.; Vermaseren, J. A. M.; Vollinga, J.

    2013-05-01

    We present version 4.0 of the symbolic manipulation system FORM. The most important new features are manipulation of rational polynomials and the factorization of expressions. Many other new functions and commands are also added; some of them are very general, while others are designed for building specific high level packages, such as one for Gröbner bases. Also new is the checkpoint facility, which allows for periodic backups during long calculations. Finally, FORM 4.0 has become available as open source under the GNU General Public License version 3. Program summary. Program title: FORM. Catalogue identifier: AEOT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 151599 No. of bytes in distributed program, including test data, etc.: 1 078 748 Distribution format: tar.gz Programming language: The FORM language. FORM itself is programmed in a mixture of C and C++. Computer: All. Operating system: UNIX, LINUX, Mac OS, Windows. Classification: 5. Nature of problem: FORM defines a symbolic manipulation language in which the emphasis lies on fast processing of very large formulas. It has been used successfully for many calculations in Quantum Field Theory and mathematics. In speed and size of formulas that can be handled it outperforms other systems typically by an order of magnitude. Special in this version: Version 4.0 contains many new features, most importantly factorization and rational arithmetic, and the program has become open source under the GPL. Solution method: See "Nature of problem", above. Additional comments: NOTE: The code in CPC is for reference. You are encouraged to download the most recent sources from www.nikhef.nl/form/formcvs.php because of frequent bug fixes.

  4. Contents of the NASA ocean data system archive, version 11-90

    NASA Technical Reports Server (NTRS)

    Smith, Elizabeth A. (Editor); Lassanyi, Ruby A. (Editor)

    1990-01-01

    The National Aeronautics and Space Administration (NASA) Ocean Data System (NODS) archive at the Jet Propulsion Laboratory (JPL) includes satellite data sets for the ocean sciences and global-change research to facilitate multidisciplinary use of satellite ocean data. Parameters include sea-surface height, surface-wind vector, sea-surface temperature, atmospheric liquid water, and surface pigment concentration. NODS will become the Data Archive and Distribution Service of the JPL Distributed Active Archive Center for the Earth Observing System Data and Information System (EOSDIS) and will be the United States distribution site for Ocean Topography Experiment (TOPEX)/POSEIDON data and metadata.

  5. Fifth Fundamental Catalogue (FK5). Part 1: Basic fundamental stars (Fricke, Schwan, and Lederle 1988): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1990-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The Basic FK5 provides improved mean positions and proper motions for the 1535 classical fundamental stars that had been included in the FK3 and FK4 catalogs. The machine version of the catalog contains the positions and proper motions of the Basic FK5 stars for the epochs and equinoxes J2000.0 and B1950.0, the mean epochs of individual observed right ascensions and declinations used to determine the final positions, and the mean errors of the final positions and proper motions for the reported epochs. The cross identifications to other designations used for the FK5 stars that are given in the published catalog were not included in the original machine versions, but the Durchmusterung numbers have been added at the Astronomical Data Center.

  6. A parallel solver for huge dense linear systems

    NASA Astrophysics Data System (ADS)

    Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.

    2011-11-01

    HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) that facilitates the parallel solution of very large dense linear systems for scientists and engineers. The API makes use of parallelism to yield an efficient solution of the systems on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors. It exploits out-of-core strategies that leverage secondary memory in order to solve huge linear systems, on the order of 100 000 equations. The API is based on the parallel linear algebra library PLAPACK and on its Out-Of-Core (OOC) extension POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to the users, hiding almost all the technical aspects related to the parallel execution of the code and the use of the secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes, and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOP with 64 cores to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors. New version program summary. Program title: Huge Dense System Solver (HDSS) Catalogue identifier: AEHU_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 87 062 No. of bytes in distributed program, including test data, etc.: 1 069 110 Distribution format: tar.gz Programming language: Fortran90, C Computer: Parallel architectures: multiprocessors, computer clusters Operating system: Linux/Unix Has the code been vectorized or parallelized?: Yes, includes MPI primitives. RAM: Tested for up to 190 GB Classification: 6.5 External routines: MPI (http://www.mpi-forum.org/), BLAS (http://www.netlib.org/blas/), PLAPACK (http://www.cs.utexas.edu/~plapack/), POOCLAPACK (ftp://ftp.cs.utexas.edu/pub/rvdg/PLAPACK/pooclapack.ps) (code for PLAPACK and POOCLAPACK is included in the distribution). Catalogue identifier of previous version: AEHU_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 533 Does the new version supersede the previous version?: Yes Nature of problem: Huge scale dense systems of linear equations, Ax=B, beyond standard LAPACK capabilities. Solution method: The linear systems are solved by means of parallelized routines based on the LU factorization, using efficient secondary storage algorithms when the available main memory is insufficient. Reasons for new version: In many applications we need to guarantee high accuracy in the solution of very large linear systems, and we can do so by using double-precision arithmetic. Summary of revisions: Version 1.1 can be used to solve linear systems using double-precision arithmetic. New version of the initialization routine. The user can choose the kind of arithmetic and the values of several parameters of the environment. 
Running time: About 5 hours to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors using double-precision arithmetic on an eight-node commodity cluster with a total of 64 Intel cores.
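
    At a toy scale, the computational pattern HDSS applies out-of-core and in parallel looks like the following: factor A once with LU, then reuse the factors for many right-hand sides. This is an in-core, serial sketch; the matrix size and right-hand-side count are arbitrary assumptions.

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      rng = np.random.default_rng(2)
      n, nrhs = 500, 32                             # toy sizes
      A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix
      B = rng.normal(size=(n, nrhs))

      lu, piv = lu_factor(A)             # O(n^3) factorization, done once
      X = lu_solve((lu, piv), B)         # cheap per right-hand side
      print(np.max(np.abs(A @ X - B)))   # residual should be tiny (~1e-10)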

  7. New version of PLNoise: a package for exact numerical simulation of power-law noises

    NASA Astrophysics Data System (ADS)

    Milotti, Edoardo

    2007-08-01

    In a recent paper I have introduced a package for the exact simulation of power-law noises and other colored noises [E. Milotti, Comput. Phys. Comm. 175 (2006) 212]: in particular, the algorithm generates 1/f^α noises with 0<α⩽2. Here I extend the algorithm to generate 1/f^α noises with 2<α⩽4 (black noises). The method is exact in the sense that it produces a sampled process with a theoretically guaranteed range-limited power-law spectrum for any arbitrary sequence of sampling intervals, i.e. the sampling times may be unevenly spaced. Program summary. Title of program: PLNoise Catalogue identifier: ADXV_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXV_v2_0.html Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Programming language used: ANSI C Computer: Any computer with an ANSI C compiler: the package has been tested with gcc version 3.2.3 on Red Hat Linux 3.2.3-52 and gcc versions 4.0.0 and 4.0.1 on Apple Mac OS X 10.4 Operating system: All operating systems capable of running an ANSI C compiler RAM: The code of the test program is very compact (about 60 Kbytes), but the program works with list management and allocates memory dynamically; in a typical run with average list length 2·10, the RAM taken by the list is 200 Kbytes External routines: The package needs external routines to generate uniform and exponential deviates. The implementation described here uses the random number generation library ranlib, freely available from Netlib [B.W. Brown, J. Lovato, K. Russell: ranlib, available from Netlib, http://www.netlib.org/random/index.html, select the C version ranlib.c], but it has also been successfully tested with the random number routines in Numerical Recipes [W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, second ed., Cambridge Univ. Press, Cambridge, 1992, pp. 274-290]. Notice that ranlib requires a pair of routines from the linear algebra package LINPACK, and that the distribution of ranlib includes the C source of these routines, in case LINPACK is not installed on the target machine. No. of lines in distributed program, including test data, etc.: 2975 No. of bytes in distributed program, including test data, etc.: 194 588 Distribution format: tar.gz Catalogue identifier of previous version: ADXV_v1_0 Journal reference of previous version: Comput. Phys. Comm. 175 (2006) 212 Does the new version supersede the previous version?: Yes Nature of problem: Exact generation of different types of colored noise. Solution method: Random superposition of relaxation processes [E. Milotti, Phys. Rev. E 72 (2005) 056701], possibly followed by an integration step to produce noise with spectral index >2. Reasons for the new version: Extension to 1/f^α noises with spectral index 2<α⩽4: the new version generates noises both with spectral index 0<α⩽2 and with 2<α⩽4. Summary of revisions: Although the overall structure remains the same, one routine has been added and several changes have been made throughout the code to include the new integration step. Unusual features: The algorithm is theoretically guaranteed to be exact, and unlike all other existing generators it can generate samples with uneven spacing. Additional comments: The program requires an initialization step; for some parameter sets this may become rather heavy. 
Running time: Running time varies widely with different input parameters; however, in a test run like the one in Section 3 of the long write-up, the generation routine took on average about 75 μs per sample.
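
    The "random superposition of relaxation processes" named as the solution method can be sketched for the evenly sampled case: draw relaxation rates λ with density proportional to λ^(-α) between two cutoffs and sum unit-variance, exponentially correlated processes, which yields an approximately 1/f^α spectrum between the cutoffs. This toy (for 0 < α < 2, α ≠ 1; α = 1 would need log-uniform rates) is not PLNoise's exact, unevenly spaced algorithm, and the cutoffs and process count are assumptions.

      import numpy as np

      def powerlaw_noise(alpha, n_samples, dt=1.0, n_proc=200,
                         lam_min=1e-3, lam_max=1.0, rng=None):
          """Evenly sampled ~1/f^alpha noise, 0 < alpha < 2, alpha != 1."""
          if rng is None:
              rng = np.random.default_rng()
          # inverse-CDF sampling of rates with density ~ lambda**(-alpha)
          u = rng.uniform(size=n_proc)
          k = 1.0 - alpha
          lam = (lam_min**k + u * (lam_max**k - lam_min**k)) ** (1.0 / k)
          a = np.exp(-lam * dt)            # per-step decay of each process
          s = np.sqrt(1.0 - a**2)          # keeps each process at unit variance
          x = rng.normal(size=n_proc)      # start in the stationary state
          out = np.empty(n_samples)
          for t in range(n_samples):
              x = a * x + s * rng.normal(size=n_proc)
              out[t] = x.sum()
          return out

      y = powerlaw_noise(alpha=1.5, n_samples=4096)
      print(y.std())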

  8. Comparing two versions of the Karolinska Sleepiness Scale (KSS).

    PubMed

    Miley, Anna Åkerstedt; Kecklund, Göran; Åkerstedt, Torbjörn

    2016-01-01

    The Karolinska Sleepiness Scale (KSS) is frequently used to study sleepiness in various contexts. However, it exists in two versions, one with labels on every other step (version A), and one with labels on every step (version B) on the 9-point scale. To date, there are no studies examining whether these versions can be used interchangeably. The two versions were here compared in a 24 hr wakefulness study of 12 adults. KSS ratings were obtained every hour, alternating version A and B. Results indicated that the two versions are highly correlated, do not have different response distributions on labeled and unlabeled steps, and that the distributions across all steps have a high level of correspondence (Kappa = 0.73). It was concluded that the two versions are quite similar.
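
    An agreement statistic like the paper's Kappa = 0.73 can be reproduced in a few lines given paired ratings; the ratings below are invented for illustration, not taken from the study.

      from sklearn.metrics import cohen_kappa_score

      # Invented paired ratings on the 9-point KSS, versions A and B
      version_a = [3, 5, 7, 2, 8, 6, 4, 9, 5, 3, 7, 6]
      version_b = [3, 5, 6, 2, 8, 6, 4, 9, 4, 3, 7, 6]
      print(cohen_kappa_score(version_a, version_b))   # 1.0 would be perfect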

  9. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system. Version 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
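
    The "flux iteration" mentioned above can be demonstrated on a deliberately tiny problem: a one-group, one-dimensional finite-difference diffusion equation solved for its fundamental mode and multiplication factor by power (source) iteration. All cross sections and mesh values below are invented for the sketch; VENTURE itself handles multidimensional, multigroup problems.

      import numpy as np

      n, h = 200, 0.5                         # mesh points, mesh spacing (cm)
      D, sig_a, nu_sig_f = 1.3, 0.012, 0.014  # invented one-group constants
      main = 2.0 * D / h**2 + sig_a
      off = -D / h**2
      A = (np.diag(np.full(n, main))
           + np.diag(np.full(n - 1, off), 1)
           + np.diag(np.full(n - 1, off), -1))  # loss operator, zero-flux edges

      phi, k = np.ones(n), 1.0
      for _ in range(200):                    # the "flux iteration"
          src = nu_sig_f * phi / k            # fission source at current estimate
          phi_new = np.linalg.solve(A, src)
          k *= phi_new.sum() / phi.sum()      # eigenvalue update
          phi = phi_new / np.linalg.norm(phi_new)
      print(f"k-effective ~ {k:.5f}")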

  10. Epi info - present and future.

    PubMed

    Su, Y; Yoon, S S

    2003-01-01

    Epi Info is a suite of public domain computer programs for public health professionals developed by the Centers for Disease Control and Prevention (CDC). Epi Info is used for rapid questionnaire design, data entry and validation, data analysis including mapping and graphing, and creation of reports. Epi Info was originally created in 1985 using Turbo Pascal. In 1998, the last version of Epi Info for DOS, version 6, was released. Epi Info for DOS is currently supported by CDC but is no longer updated. The current version, Epi Info 2002, is Windows-based software developed using Microsoft Visual Basic. Approximately 300,000 downloads of Epi Info software occurred in 2002 from approximately 130 countries. These numbers make Epi Info probably one of the most widely distributed and used public domain programs in the world. The DOS version of Epi Info was translated into 13 languages, and efforts are underway to translate the Windows version into other major languages. Versions already exist for Spanish, French, Portuguese, Chinese, Japanese, and Arabic.

  11. A compilation of redshifts and velocity dispersions for Abell clusters (Struble and Rood 1987): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1989-01-01

    The machine readable version of the compilation, as it is currently being distributed from the Astronomical Data Center, is described. The catalog contains redshifts and velocity dispersions for all Abell clusters for which these data had been published up to 1986 July. Also included are 1950 equatorial coordinates for the centers of the listed clusters, numbers of observations used to determine the redshifts, and bibliographical references citing the data sources.

  12. Dyspnoea-12: a translation and linguistic validation study in a Swedish setting

    PubMed Central

    Ekström, Magnus

    2017-01-01

    Background: Dyspnoea consists of multiple dimensions including the intensity, unpleasantness, sensory qualities and emotional responses, which may differ between patient groups, settings and in relation to treatment. The Dyspnoea-12 is a validated and convenient instrument for multidimensional measurement in English. We aimed to take forward a Swedish version of the Dyspnoea-12. Methods: The linguistic validation of the Dyspnoea-12 was performed (Mapi Language Services, Lyon, France). The standardised procedure involved forward and backward translations by three independent certified translators and revisions after feedback from an in-country linguistic consultant, the developer, and three native physicians. The understanding and convenience of the translated version was evaluated using qualitative in-depth interviews with five patients with dyspnoea. Results: A Swedish version of the Dyspnoea-12 was elaborated and evaluated carefully according to international guidelines. The Swedish version, ‘Dyspné-12’, has the same layout as the original version, including 12 items distributed over seven physical and five affective items. The Dyspnoea-12 is copyrighted by the developer but can be used free of charge, after permission, for non-industry-funded research. Conclusion: A Swedish version of the Dyspnoea-12 is now available for clinical validation and multidimensional measurement across diseases and settings, with the aim of improved evaluation and management of dyspnoea. PMID:28592574

  13. ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization

    NASA Astrophysics Data System (ADS)

    Antcheva, I.; Ballintijn, M.; Bellenot, B.; Biskup, M.; Brun, R.; Buncic, N.; Canal, Ph.; Casadei, D.; Couet, O.; Fine, V.; Franco, L.; Ganis, G.; Gheata, A.; Maline, D. Gonzalez; Goto, M.; Iwaszkiewicz, J.; Kreshuk, A.; Segura, D. Marcos; Maunder, R.; Moneta, L.; Naumann, A.; Offermann, E.; Onuchin, V.; Panacek, S.; Rademakers, F.; Russo, P.; Tadel, M.

    2011-06-01

    A new stable version ("production version") v5.28.00 of ROOT [1] has been published [2]. It features several major improvements in many areas, most notably in data storage performance as well as in statistics and graphics features. Some of these improvements were already announced in the original publication, Antcheva et al. (2009) [3]. This version will be maintained for at least 6 months; new minor revisions ("patch releases") will be published [4] to solve problems reported with this version. New version program summary. Program title: ROOT Catalogue identifier: AEFA_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Lesser Public License v.2.1 No. of lines in distributed program, including test data, etc.: 2 934 693 No. of bytes in distributed program, including test data, etc.: 1009 Distribution format: tar.gz Programming language: C++ Computer: Intel i386, Intel x86-64, Motorola PPC, Sun Sparc, HP PA-RISC Operating system: GNU/Linux, Windows XP/Vista/7, Mac OS X, FreeBSD, OpenBSD, Solaris, HP-UX, AIX Has the code been vectorized or parallelized?: Yes RAM: > 55 Mbytes Classification: 4, 9, 11.9, 14 Catalogue identifier of previous version: AEFA_v1_0 Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 2499 Does the new version supersede the previous version?: Yes Nature of problem: Storage, analysis and visualization of scientific data Solution method: Object store, wide range of analysis algorithms and visualization methods Reasons for new version: Added features and corrections of deficiencies Summary of revisions: The release notes at http://root.cern.ch/root/v528/Version528.news.html give a module-oriented overview of the changes in v5.28.00. Highlights include the following. File format: reading of TTrees has been improved dramatically with respect to CPU time (30%) and notably with respect to disk space. Histograms: a new TEfficiency class has been provided to handle the calculation of efficiencies and their uncertainties, TH2Poly for polygon-shaped bins (e.g. maps), TKDE for kernel density estimation, and TSVDUnfold for singular value decomposition. Graphics: kerning is now supported in TLatex, PostScript and PDF; a table of contents can be added to PDF files; a new font provides italic symbols; a TPad containing GL can be stored in a binary (i.e. non-vector) image file; support for full-scene anti-aliasing has been added; usability enhancements to EVE. Math: new interfaces for generating random numbers according to a given distribution, goodness-of-fit tests of unbinned data, binning multidimensional data, and several advanced statistical functions were added. RooFit: introduction of HistFactory; major additions to RooStats. TMVA: updated to version 4.1.0, adding e.g. support for simultaneous classification of multiple output classes for several multivariate methods. PROOF: many new features, adding to PROOF's usability, plus improvements and fixes. PyROOT: support of Python 3 has been added. Tutorials: several new tutorials were provided for the above new features (notably RooStats). A detailed list of all the changes is available at http://root.cern.ch/root/htmldoc/examples/V5. Additional comments: For an up-to-date author list see: http://root.cern.ch/drupal/content/root-development-team and http://root.cern.ch/drupal/content/former-root-developers. 
The distribution file for this program is over 30 Mbytes and therefore is not delivered directly when download or E-mail is requested. Instead an html file giving details of how the program can be obtained is sent. Running time: Depending on the data size and complexity of analysis algorithms. References: [1] http://root.cern.ch. [2] http://root.cern.ch/drupal/content/production-version-528. [3] I. Antcheva, M. Ballintijn, B. Bellenot, M. Biskup, R. Brun, N. Buncic, Ph. Canal, D. Casadei, O. Couet, V. Fine, L. Franco, G. Ganis, A. Gheata, D. Gonzalez Maline, M. Goto, J. Iwaszkiewicz, A. Kreshuk, D. Marcos Segura, R. Maunder, L. Moneta, A. Naumann, E. Offermann, V. Onuchin, S. Panacek, F. Rademakers, P. Russo, M. Tadel, ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization, Comput. Phys. Commun. 180 (2009) 2499. [4] http://root.cern.ch/drupal/content/root-version-v5-28-00-patch-release-notes.
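
    Given the PyROOT note above, a minimal Python 3 session with this release looks like the sketch below. It assumes a local ROOT installation with PyROOT enabled; the histogram parameters and file name are arbitrary.

      import ROOT  # requires a ROOT build with PyROOT

      h = ROOT.TH1F("h", "Gaussian test;x;entries", 100, -5.0, 5.0)
      h.FillRandom("gaus", 10000)          # fill from ROOT's built-in Gaussian
      result = h.Fit("gaus", "SQ")         # S: return fit result, Q: quiet
      print("fitted sigma:", result.Get().Parameter(2))

      c = ROOT.TCanvas("c")
      h.Draw()
      c.SaveAs("hist.png")                 # writes the plot to disk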

  14. GENXICC2.0: An upgraded version of the generator for hadronic production of double heavy baryons Ξ_cc, Ξ_bc and Ξ_bb

    NASA Astrophysics Data System (ADS)

    Chang, Chao-Hsi; Wang, Jian-Xiong; Wu, Xing-Gang

    2010-06-01

    An upgraded (second) version of the package GENXICC (A Generator for Hadronic Production of the Double Heavy Baryons Ξ_cc, Ξ_bc and Ξ_bb by C.H. Chang, J.X. Wang and X.G. Wu [first version in: Comput. Phys. Comm. 177 (2007) 467]) is presented. Users, with this version implemented in PYTHIA and a GNU C compiler, may conveniently simulate full events of these processes in various experimental environments. In comparison with the previous version, in order to implement it in PYTHIA properly, a subprogram for the fragmentation of the produced double heavy diquark into the relevant baryon is supplied, and the interface of the generator to PYTHIA is changed accordingly. In the subprogram, with explanation, certain necessary assumptions (approximations) are made in order to conserve the momenta and the QCD 'color' flow in the fragmentation. Program summary. Program title: GENXICC2.0 Catalogue identifier: ADZJ_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 102 482 No. of bytes in distributed program, including test data, etc.: 1 469 519 Distribution format: tar.gz Programming language: Fortran 77/90 Computer: Any LINUX-based PC with FORTRAN 77 or FORTRAN 90 and a GNU C compiler Operating system: Linux RAM: About 2.0 MByte Classification: 11.2 Catalogue identifier of previous version: ADZJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 177 (2007) 467 Does the new version supersede the previous version?: No Nature of problem: Hadronic production of the double heavy baryons Ξ_cc, Ξ_bc and Ξ_bb. Solution method: The code is based on the NRQCD framework. With proper options, it can generate weighted and un-weighted events of hadronic double heavy baryon production. When the hadronizations of the produced jets and the double heavy diquark are taken into account in the production, the upgraded version, with a proper interface to PYTHIA, can generate full events. Reasons for new version: Responding to feedback from users, we improve the generator mainly by carefully completing the 'final non-perturbative process', i.e. the formation of the double heavy baryon from the relevant intermediate diquark. In the present version, the information for fragmentation about the momentum flow and the color flow, which is necessary for PYTHIA to generate full events, is retained, although reasonable approximations are made. In comparison with the original version, the upgraded one can be implemented in PYTHIA properly to do full event simulation of double heavy baryon production. Summary of revisions: We explain the treatment of the momentum distribution of the process more clearly than in the original version, and show precisely how the final baryon is generated through the typical intermediate diquark. We present the color flow of the involved processes precisely, and the corresponding changes to the program are explained in the paper. Restrictions: The color flow, particularly in the piece of code programming the fragmentation of the produced colorful double heavy diquark into the relevant double heavy baryon, is treated carefully so as to implement it in PYTHIA properly. 
Running time: It depends on which option is chosen to configure PYTHIA when generating full events and on which mechanism is chosen to generate the events. Typically, for the most complicated case of the gluon-gluon fusion mechanism generating mixed events via the intermediate diquark in (cc)[³S₁] and (cc)[¹S₀] states, generating 1000 events under the option IDWTUP=1 takes about 20 hours on a 1.8 GHz Intel P4-processor machine, whereas under the option IDWTUP=3, generating even 10⁶ events takes about 40 minutes on the same machine.
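
    The quoted running times reflect the difference between unweighted (IDWTUP=1) and weighted (IDWTUP=3) event generation: unweighting discards most candidates by acceptance-rejection. A minimal Python sketch of the distinction, with a purely hypothetical toy weight function standing in for the real differential cross section (nothing here is GENXICC code):

        import random

        def toy_weight(pt):
            """Hypothetical event weight, standing in for a differential cross section."""
            return 1.0 / (1.0 + pt) ** 4

        def generate_unweighted(n_events, w_max, pt_max=50.0):
            """Acceptance-rejection (cf. IDWTUP=1): keep a candidate with
            probability w/w_max, so every accepted event carries unit weight."""
            events = []
            while len(events) < n_events:
                pt = random.uniform(0.0, pt_max)              # candidate kinematics
                if random.random() < toy_weight(pt) / w_max:
                    events.append(pt)                         # accepted with weight 1
            return events

        # Weighted mode (cf. IDWTUP=3): keep every candidate and carry its weight along.
        weighted = [(pt, toy_weight(pt))
                    for pt in (random.uniform(0.0, 50.0) for _ in range(1000))]
        unweighted = generate_unweighted(1000, w_max=toy_weight(0.0))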

  15. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data are now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
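
    The in-core flux iterations mentioned above are, at heart, outer (power) iterations on the fission source. A toy illustration, not VENTURE itself: a one-group, one-dimensional finite-difference diffusion problem solved by power iteration in Python, with made-up cross sections:

        import numpy as np

        # One-group, 1-D finite-difference diffusion: -D phi'' + Sigma_a phi = (1/k) nu Sigma_f phi
        n, h = 50, 1.0                      # mesh cells, cell width (cm); values invented
        D, sig_a, nu_sig_f = 1.2, 0.05, 0.06
        A = np.zeros((n, n))                # loss operator (leakage + absorption)
        for i in range(n):
            A[i, i] = 2.0 * D / h**2 + sig_a
            if i > 0: A[i, i - 1] = -D / h**2
            if i < n - 1: A[i, i + 1] = -D / h**2

        phi, k = np.ones(n), 1.0            # initial flux guess and eigenvalue
        for _ in range(200):                # outer (power) iterations, all in core
            src = nu_sig_f * phi            # fission source from the previous flux
            phi_new = np.linalg.solve(A, src / k)
            k *= phi_new.sum() / phi.sum()  # update k from the source ratio
            phi = phi_new
        print("k-effective ~", round(k, 5))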

  16. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 3: Assessment Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller, C.; Hughes, E. D.; Niederauer, G. F.

    1998-10-01

    Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code, as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility, and the resulting pressure and temperature loadings on the walls and internal structures, with or without combustion. A major application of GASFLOW is predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containment and other facilities. It has been applied to situations involving the transport and distribution of combustible gas mixtures. It has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, as well as sonic and diffusion-dominated flows, and chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume III contains some of the assessments performed by LANL and FzK.

  17. SHABERTH - ANALYSIS OF A SHAFT BEARING SYSTEM (CRAY VERSION)

    NASA Technical Reports Server (NTRS)

    Coe, H. H.

    1994-01-01

    The SHABERTH computer program was developed to predict operating characteristics of bearings in a multibearing load support system. Lubricated and non-lubricated bearings can be modeled. SHABERTH calculates the loads, torques, temperatures, and fatigue life for ball and/or roller bearings on a single shaft. The program also allows for an analysis of the system reaction to the termination of lubricant supply to the bearings and other lubricated mechanical elements. SHABERTH has proven to be a valuable tool in the design and analysis of shaft bearing systems. The SHABERTH program is structured with four nested calculation schemes. The thermal scheme performs steady state and transient temperature calculations which predict system temperatures for a given operating state. The bearing dimensional equilibrium scheme uses the bearing temperatures, predicted by the temperature mapping subprograms, and the rolling element raceway load distribution, predicted by the bearing subprogram, to calculate bearing diametral clearance for a given operating state. The shaft-bearing system load equilibrium scheme calculates bearing inner ring positions relative to the respective outer rings such that the external loading applied to the shaft is brought into equilibrium by the rolling element loads which develop at each bearing inner ring for a given operating state. The bearing rolling element and cage load equilibrium scheme calculates the rolling element and cage equilibrium positions and rotational speeds based on the relative inner-outer ring positions, inertia effects, and friction conditions. The ball bearing subprograms in the current SHABERTH program have several model enhancements over similar programs. These enhancements include an elastohydrodynamic (EHD) film thickness model that accounts for thermal heating in the contact area and lubricant film starvation; a new model for traction combined with an asperity load sharing model; a model for the hydrodynamic rolling and shear forces in the inlet zone of lubricated contacts, which accounts for the degree of lubricant film starvation; modeling normal and friction forces between a ball and a cage pocket, which account for the transition between the hydrodynamic and elastohydrodynamic regimes of lubrication; and a model of the effect on fatigue life of the ratio of the EHD plateau film thickness to the composite surface roughness. SHABERTH is intended to be as general as possible. The models in SHABERTH allow for the complete mathematical simulation of real physical systems. Systems are limited to a maximum of five bearings supporting the shaft, a maximum of thirty rolling elements per bearing, and a maximum of one hundred temperature nodes. The SHABERTH program structure is modular and has been designed to permit refinement and replacement of various component models as the need and opportunities develop. A preprocessor is included in the IBM PC version of SHABERTH to provide a user friendly means of developing SHABERTH models and executing the resulting code. The preprocessor allows the user to create and modify data files with minimal effort and a reduced chance for errors. Data is utilized as it is entered; the preprocessor then decides what additional data is required to complete the model. Only this required information is requested. The preprocessor can accommodate data input for any SHABERTH compatible shaft bearing system model. The system may include ball bearings, roller bearings, and/or tapered roller bearings. 
SHABERTH is written in FORTRAN 77, and two machine versions are available from COSMIC. The CRAY version (LEW-14860) has a RAM requirement of 176K of 64 bit words. The IBM PC version (MFS-28818) is written for IBM PC series and compatible computers running MS-DOS, and includes a sample MS-DOS executable. For execution, the PC version requires at least 1Mb of RAM and an 80386 or 486 processor machine with an 80x87 math co-processor. The standard distribution medium for the IBM PC version is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The standard distribution medium for the CRAY version is also a 5.25 inch 360K MS-DOS format diskette, but alternate distribution media and formats are available upon request. The original version of SHABERTH was developed in FORTRAN IV at Lewis Research Center for use on a UNIVAC 1100 series computer. The Cray version was released in 1988, and was updated in 1990 to incorporate fluid rheological data for Rocket Propellant 1 (RP-1), thereby allowing the analysis of bearings lubricated with RP-1. The PC version is a port of the 1990 CRAY version and was developed in 1992 by SRS Technologies under contract to NASA Marshall Space Flight Center.
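
    The four nested calculation schemes described above amount to nested fixed-point loops, each converging its own equilibrium while calling the scheme below it. A structural sketch in Python, where every coupling function is a hypothetical toy contraction rather than bearing physics:

        def fixed_point(update, x0, tol=1e-9, it=200):
            """Generic scheme loop: iterate x <- update(x) until successive values agree."""
            x = x0
            for _ in range(it):
                x_new = update(x)
                if abs(x_new - x) < tol:
                    break
                x = x_new
            return x

        # Hypothetical toy couplings standing in for the real bearing physics:
        def element_eq(load):      # scheme 4: element/cage positions given ring loads
            return fixed_point(lambda e: 0.5 * (e + load), 0.0)

        def load_eq(clearance):    # scheme 3: ring positions given diametral clearance
            return fixed_point(lambda p: 0.8 * element_eq(p) + clearance, 0.0)

        def clearance_eq(temp):    # scheme 2: clearance given system temperatures
            return fixed_point(lambda c: 0.001 * temp + 0.1 * load_eq(c), 0.0)

        def thermal_eq():          # scheme 1 (outermost): steady-state temperatures
            return fixed_point(lambda T: 300.0 + 50.0 * clearance_eq(T), 300.0)

        print("converged shaft temperature (toy):", round(thermal_eq(), 3))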

  18. A Case Study of the United States Navy’s Enterprise Resource Planning System

    DTIC Science & Technology

    2006-06-01

    incarnations, MRP-II added the capabilities of shop-floor management and distribution management activities. Later versions included the ability to manage ... finances, human resources, engineering, and project management. Enterprise Resource Planning systems were then developed as an integrated system

  19. Code Analysis and Refactoring with Clang Tools, Version 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelley, Timothy M.

    2016-12-23

    Code Analysis and Refactoring with Clang Tools is a small set of example code that demonstrates techniques for applying tools distributed with the open source Clang compiler. Examples include analyzing where variables are used and replacing old data structures with standard structures.
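
    As an illustration of the first technique mentioned, analyzing where variables are used, here is a sketch using the libclang Python bindings; the distributed examples presumably use Clang's C++ tooling APIs, and the file name below is hypothetical:

        import clang.cindex
        from clang.cindex import CursorKind

        index = clang.cindex.Index.create()
        tu = index.parse("example.c")        # hypothetical translation unit to analyze

        # Walk the AST and report every reference to a previously declared name.
        for node in tu.cursor.walk_preorder():
            if node.kind == CursorKind.DECL_REF_EXPR:
                loc = node.location
                print(f"{node.spelling} used at {loc.file}:{loc.line}:{loc.column}")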

  20. Trucks involved in fatal accidents codebook 2004 (Version March 23, 2007).

    DOT National Transportation Integrated Search

    2007-03-01

    "This report provides documentation for UMTRIs file of Trucks Involved in Fatal Accidents (TIFA), : 2004, including distributions of the code values for each variable in the file. The 2004 TIFA file is : a census of all medium and heavy trucks inv...

  1. Trucks involved in fatal accidents codebook 2010 (Version October 22, 2012).

    DOT National Transportation Integrated Search

    2012-11-01

    This report provides documentation for UMTRI's file of Trucks Involved in Fatal Accidents (TIFA), 2010, including distributions of the code values for each variable in the file. The 2010 TIFA file is a census of all medium and heavy trucks invo...

  2. Status of a Power Processor for the Prometheus-1 Electric Propulsion System

    NASA Technical Reports Server (NTRS)

    Pinero, Luis R.; Hill, Gerald M.; Aulisio, Michael; Gerber, Scott; Griebeler, Elmer; Hewitt, Frank; Scina, Joseph

    2006-01-01

    NASA is developing technologies for nuclear electric propulsion for proposed deep space missions in support of the Exploration initiative under Project Prometheus. Electrical power produced by the combination of a fission-based power source and a Brayton power conversion and distribution system is used by a high specific impulse ion propulsion system to propel the spacecraft. The ion propulsion system includes the thruster, power processor and propellant feed system. A power processor technology development effort was initiated under Project Prometheus to develop high performance and lightweight power-processing technologies suitable for the application. This effort faces multiple challenges, including developing radiation hardened power modules and converters with very high power capability and efficiency to minimize the impact on the power conversion and distribution system as well as the heat rejection system. This paper documents the design and test results of the first version of the beam supply, the design of a second version of the beam supply, and the design and test results of the ancillary supplies.

  3. Comparison of TRMM 2A25 Products Version 6 and Version 7 with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    NASA Technical Reports Server (NTRS)

    Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Schwaller, M.; Petersen, W; Zhang, J.

    2012-01-01

    Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of Tropical Rainfall Measurement Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at ground level. The problem was addressed in a previous paper by comparison of the 2A25 version 6 (V6) product with reference values derived from NOAA/NSSL's ground radar-based National Mosaic and QPE system (NMQ/Q2). The primary contribution of this study is to compare the new 2A25 version 7 (V7) products that were recently released as a replacement for V6. This new version is considered superior over land areas. Several aspects of the two versions are compared and quantified, including rainfall rate distributions, systematic biases, and random errors. All analyses indicate V7 is an improvement over V6.
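
    The systematic biases and random errors quantified in such comparisons can be illustrated with a few lines of numpy on matched satellite/reference rain-rate pairs; the data below are synthetic and the 0.1 mm/h rain threshold is an assumption:

        import numpy as np

        rng = np.random.default_rng(0)
        ref = rng.gamma(shape=0.8, scale=5.0, size=10_000)   # synthetic ground-radar rates (mm/h)
        sat = 0.9 * ref * rng.lognormal(0.0, 0.3, ref.size)  # synthetic PR estimates: bias + noise

        mask = ref > 0.1                                     # compare where the reference sees rain
        bias = np.mean(sat[mask] - ref[mask])                # systematic (additive) bias
        mult_bias = np.sum(sat[mask]) / np.sum(ref[mask])    # multiplicative bias
        random_err = np.std(sat[mask] - ref[mask])           # random error about the bias
        print(f"bias={bias:.2f} mm/h, mult. bias={mult_bias:.2f}, random error={random_err:.2f} mm/h")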

  4. An evaluation of procedures to estimate monthly precipitation probabilities

    NASA Astrophysics Data System (ADS)

    Legates, David R.

    1991-01-01

    Many frequency distributions have been used to evaluate monthly precipitation probabilities. Eight of these distributions (including Pearson type III, extreme-value, and transform-normal probability density functions) are comparatively examined to determine their ability to accurately represent variations in monthly precipitation totals for global hydroclimatological analyses. Results indicate that a modified version of the Box-Cox transform-normal distribution describes the 'true' precipitation distribution more adequately than any of the other methods. This assessment was made using a cross-validation procedure for a global network of 253 stations for which at least 100 years of monthly precipitation totals were available.
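
    The transform-normal idea can be sketched as follows: Box-Cox transform the monthly totals toward normality, fit a normal in the transformed space, and map thresholds through the fitted transform. A scipy-based illustration on synthetic data (not the modified version used in the study):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        totals = rng.gamma(shape=2.0, scale=30.0, size=120)   # synthetic monthly totals (mm), > 0

        z, lam = stats.boxcox(totals)                         # transformed data and fitted lambda
        mu, sigma = stats.norm.fit(z)                         # normal fit in transformed space

        def prob_below(x):
            """P(monthly total <= x) under the fitted transform-normal distribution."""
            zx = (x**lam - 1.0) / lam if lam != 0 else np.log(x)
            return stats.norm.cdf(zx, mu, sigma)

        print("P(total <= 20 mm) ~", round(prob_below(20.0), 3))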

  5. A new version of a computer program for dynamical calculations of RHEED intensity oscillations

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej; Skrobas, Kazimierz

    2006-01-01

    We present a new version of the RHEED program which contains a graphical user interface enabling the use of the program in the graphical environment. The program also contains a graphical component which enables displaying program data at run-time through an easy-to-use graphical interface. New version program summary: Title of program: RHEEDGr Catalogue identifier: ADWV Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWV Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Catalogue identifier of previous version: ADUY Authors of the original program: A. Daniluk Does the new version supersede the original program: no Computer for which the new version is designed and others on which it has been tested: Pentium-based PC Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT Programming language used: Borland C++ Builder Memory required to execute with typical data: more than 1 MB Number of bits in a word: 64 bits Number of processors used: 1 Number of lines in distributed program, including test data, etc.: 5797 Number of bytes in distributed program, including test data, etc.: 588 121 Distribution format: tar.gz Nature of physical problem: Reflection high-energy electron diffraction (RHEED) is a very useful technique for studying the growth and surface analysis of thin epitaxial structures prepared by molecular beam epitaxy (MBE). The RHEED technique can reveal, almost instantaneously, changes either in the coverage of the sample surface by adsorbates or in the surface structure of a thin film. Method of solution: RHEED intensities are calculated within the framework of the general matrix formulation of Peng and Whelan [1] under the one-beam condition. Reasons for the new version: Responding to user feedback, we designed a graphical package that enables displaying program data at run-time through an easy-to-use graphical interface. Summary of revisions: In its present form the code is an object-oriented extension of the previous version [2]. Fig. 1 shows the static structure of the classes and their possible relationships (i.e. inheritance, association, aggregation and dependency) in the code. The code has been modified and optimized to compile under the C++ Builder integrated development environment (IDE). A graphical user interface (GUI) for the program has been created. The application is a standard multiple document interface (MDI) project from Builder's object repository. The MDI application spawns child windows that reside within the client window; the main form contains the child objects. We have added an original graphical component [3] which has been tested successfully in the C++ Builder programming environment under the Microsoft Windows platform. Fig. 2 shows the internal structure of the component. This diagram is a graphic presentation of the static view, which shows a collection of declarative model elements, such as classes, types, and their relationships. Each of the model elements shown in Fig. 2 is manifested by one header file, Graph2D.h, and one code file, Graph2D.cpp. Fig. 3 sets the stage by showing the package which supplies the C++ Builder elements used in the component. Installation instructions for the TGraph2D.bpk package can be found in the new distribution. The program has been constructed according to the systems development life cycle (SDLC) methodology [4]. Typical running time: The typical running time is machine and user-parameter dependent.
Unusual features of the program: The program is distributed in the form of a main project, RHEEDGr.bpr, with associated files, and should be compiled using the Borland C++ Builder compiler, version 5 or later.

  6. Program package for multicanonical simulations of U(1) lattice gauge theory-Second version

    NASA Astrophysics Data System (ADS)

    Bazavov, Alexei; Berg, Bernd A.

    2013-03-01

    A new version, STMC_U1MUCA_v1_1, of our program package is available. It eliminates compatibility problems of our Fortran 77 code, originally developed for the g77 compiler, with Fortran 90 and 95 compilers. New version program summary: Program title: STMC_U1MUCA_v1_1 Catalogue identifier: AEET_v1_1 Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html Programming language: Fortran 77 compatible with Fortran 90 and 95 Computers: Any capable of compiling and executing Fortran code Operating systems: Any capable of compiling and executing Fortran code RAM: 10 MB and up, depending on the lattice size used No. of lines in distributed program, including test data, etc.: 15059 No. of bytes in distributed program, including test data, etc.: 215733 Keywords: Markov chain Monte Carlo, multicanonical, Wang-Landau recursion, Fortran, lattice gauge theory, U(1) gauge group, phase transitions of continuous systems Classification: 11.5 Catalogue identifier of previous version: AEET_v1_0 Journal reference of previous version: Computer Physics Communications 180 (2009) 2339-2347 Does the new version supersede the previous version?: Yes Nature of problem: Efficient Markov chain Monte Carlo simulation of U(1) lattice gauge theory (or other continuous systems) close to its phase transition. Measurements and analysis of the action per plaquette, the specific heat, Polyakov loops and their structure factors. Solution method: Multicanonical simulations with an initial Wang-Landau recursion to determine suitable weight factors. Reweighting to physical values using logarithmic coding and calculating jackknife error bars. Reasons for the new version: The previous version was developed for the g77 compiler's Fortran 77 dialect. Compiler errors were encountered with Fortran 90 and Fortran 95 compilers (specified below). Summary of revisions: epsilon=one/10**10 is replaced by epsilon=one/10.0D10 in the parameter statements of the subroutines u1_bmha.f, u1_mucabmha.f, u1wl_backup.f, u1wlread_backup.f of the folder Libs/U1_par. For the tested compilers, script files are added in the folder ExampleRuns, and readme.txt files are now provided in all subfolders of ExampleRuns. The gnuplot driver files produced by the routine hist_gnu.f of Libs/Fortran are adapted to the syntax required by gnuplot version 4.0 and higher. Restrictions: Due to the use of explicit real*8 initialization, conversion to real*4 will require extra changes besides replacing the implicit.sta file by its real*4 version. Unusual features: The programs have to be compiled using script files like those contained in the folder ExampleRuns, as explained in the original paper. Running time: The prepared test runs took up to 74 minutes to execute on a 2 GHz PC.
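
    Of the analysis steps listed under the solution method, the jackknife error bars are easy to illustrate outside the Fortran package. A generic delete-one jackknife for the mean of pre-binned Monte Carlo measurements, in Python (illustrative only, not the package's code):

        import numpy as np

        def jackknife_error(bins):
            """Delete-one jackknife error bar for the mean of pre-binned MC data."""
            bins = np.asarray(bins, dtype=float)
            n = bins.size
            jk_means = (bins.sum() - bins) / (n - 1)      # mean with bin i deleted
            center = jk_means.mean()
            var = (n - 1) / n * np.sum((jk_means - center) ** 2)
            return bins.mean(), np.sqrt(var)

        mean, err = jackknife_error(np.random.default_rng(2).normal(1.0, 0.2, 32))
        print(f"plaquette-style estimate: {mean:.4f} +/- {err:.4f}")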

  7. Application of Statistically Derived CPAS Parachute Parameters

    NASA Technical Reports Server (NTRS)

    Romero, Leah M.; Ray, Eric S.

    2013-01-01

    The Capsule Parachute Assembly System (CPAS) Analysis Team is responsible for determining parachute inflation parameters and dispersions that are ultimately used in verifying system requirements. A model memo is internally released semi-annually documenting parachute inflation and other key parameters reconstructed from flight test data. Dispersion probability distributions published in previous versions of the model memo were uniform because insufficient data were available for determination of statistically based distributions. Uniform distributions do not accurately represent the expected distributions, since they treat extreme parameter values as just as likely to occur as the nominal value. CPAS has taken incremental steps to move away from uniform distributions. Model Memo version 9 (MMv9) made the first use of non-uniform dispersions, but only for the reefing cutter timing, for which a large number of samples was available. In order to maximize the utility of the available flight test data, clusters of parachutes were reconstructed individually starting with Model Memo version 10. This allowed statistical assessment of steady-state drag area (CDS) and parachute inflation parameters such as the canopy fill distance (n), profile shape exponent (expopen), over-inflation factor (C(sub k)), and ramp-down time (t(sub k)) distributions. Built-in MATLAB distributions were applied to the histograms, and parameters such as scale (sigma) and location (mu) were output. Engineering judgment was used to determine the "best fit" distribution based on the test data. Results include normal, log-normal, and uniform (where available data remain insufficient) fits of nominal and failure (loss of parachute and skipped stage) cases for all CPAS parachutes. This paper discusses the uniform methodology that was previously used, the process and results of the statistical assessment, how the dispersions were incorporated into Monte Carlo analyses, and the application of the distributions in trajectory benchmark testing assessments with parachute inflation parameters, drag area, and reefing cutter timing used by CPAS.
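
    The fitting step can be illustrated in Python with scipy standing in for the built-in MATLAB distributions the team used: fit each candidate distribution by maximum likelihood and compare log-likelihoods. The data below are synthetic, and the memo's final choice rests on engineering judgment rather than any single score:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        fill_dist = rng.lognormal(mean=2.0, sigma=0.25, size=40)   # synthetic fill-distance data

        candidates = {"normal": stats.norm, "lognormal": stats.lognorm, "uniform": stats.uniform}
        for name, dist in candidates.items():
            params = dist.fit(fill_dist)                           # MLE fit (shape/location/scale)
            loglik = np.sum(dist.logpdf(fill_dist, *params))
            print(f"{name:>9}: log-likelihood = {loglik:.1f}")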

  8. TIM, a ray-tracing program for METATOY research and its dissemination

    NASA Astrophysics Data System (ADS)

    Lambert, Dean; Hamilton, Alasdair C.; Constable, George; Snehanshu, Harsh; Talati, Sharvil; Courtial, Johannes

    2012-03-01

    TIM (The Interactive METATOY) is a ray-tracing program specifically tailored towards our research in METATOYs, which are optical components that appear to be able to create wave-optically forbidden light-ray fields. For this reason, TIM possesses features not found in other ray-tracing programs. TIM can either be used interactively or by modifying the openly available source code; in both cases, it can easily be run as an applet embedded in a web page. Here we describe the basic structure of TIM's source code and how to extend it, and we give examples of how we have used TIM in our own research. Program summary: Program title: TIM Catalogue identifier: AEKY_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License No. of lines in distributed program, including test data, etc.: 124 478 No. of bytes in distributed program, including test data, etc.: 4 120 052 Distribution format: tar.gz Programming language: Java Computer: Any computer capable of running the Java Virtual Machine (JVM) 1.6 Operating system: Any; developed under Mac OS X Version 10.6 RAM: Typically 145 MB (interactive version running under Mac OS X Version 10.6) Classification: 14, 18 External routines: JAMA [1] (source code included) Nature of problem: Visualisation of scenes that include scene objects that create wave-optically forbidden light-ray fields. Solution method: Ray tracing. Unusual features: Specifically designed to visualise wave-optically forbidden light-ray fields; can visualise ray trajectories; can visualise geometric optic transformations; can create anaglyphs (for viewing with coloured "3D glasses") and random-dot autostereograms of the scene; integrable into web pages. Running time: Problem-dependent; typically seconds for a simple scene.

  9. Efficient self-consistency for magnetic tight binding

    NASA Astrophysics Data System (ADS)

    Soin, Preetma; Horsfield, A. P.; Nguyen-Manh, D.

    2011-06-01

    Tight binding can be extended to magnetic systems by including an exchange interaction on an atomic site that favours net spin polarisation. We have used a published model, extended to include long-ranged Coulomb interactions, to study defects in iron. We have found that achieving self-consistency using conventional techniques was either unstable or very slow. By formulating the problem of achieving charge and spin self-consistency as a search for stationary points of a Harris-Foulkes functional, extended to include spin, we have derived a much more efficient scheme based on a Newton-Raphson procedure. We demonstrate the capabilities of our method by looking at vacancies and self-interstitials in iron. Self-consistency can indeed be achieved in a more efficient and stable manner, but care needs to be taken to manage this. The algorithm is implemented in the code PLATO. Program summary: Program title: PLATO Catalogue identifier: AEFC_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFC_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 228 747 No. of bytes in distributed program, including test data, etc.: 1 880 369 Distribution format: tar.gz Programming language: C and PERL Computer: Apple Macintosh, PC, Unix machines Operating system: Unix, Linux, Mac OS X, Windows XP Has the code been vectorised or parallelised?: Yes. Up to 256 processors tested RAM: Up to 2 Gbytes per processor Classification: 7.3 External routines: LAPACK, BLAS and optionally ScaLAPACK, BLACS, PBLAS, FFTW Catalogue identifier of previous version: AEFC_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2616 Does the new version supersede the previous version?: Yes Nature of problem: Achieving charge and spin self-consistency in magnetic tight binding can be very difficult. Our existing schemes failed altogether, or were very slow. Solution method: A new scheme for achieving self-consistency in orthogonal tight binding has been introduced that explicitly evaluates the first and second derivatives of the energy with respect to input charge and spin, and then uses these to search for stationary values of the energy. Reasons for new version: Bug fixes and new functionality. Summary of revisions: New charge and spin mixing scheme for orthogonal tight binding. Numerous small bug fixes. Restrictions: The new mixing scheme scales poorly with system size; in particular, the memory usage scales as the fourth power of the number of atoms, so it is restricted to systems with about 200 atoms or fewer. Running time: Test cases will run in a few minutes; large calculations may run for several days.
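
    The underlying scheme, Newton-Raphson iteration on the gradient of the energy using explicit first and second derivatives, can be shown on a toy two-variable "charge and spin" functional; the functional below is invented for illustration and is not PLATO's:

        import numpy as np

        def grad(x):
            """Gradient of a toy functional E(q, s) = q^2 + 2 s^2 + 0.1 (q s)^2."""
            q, s = x
            return np.array([2 * q + 0.2 * q * s**2, 4 * s + 0.2 * s * q**2])

        def hess(x):
            """Second derivatives of the same toy functional."""
            q, s = x
            return np.array([[2 + 0.2 * s**2, 0.4 * q * s],
                             [0.4 * q * s, 4 + 0.2 * q**2]])

        x = np.array([1.0, -0.5])              # initial charge/spin guess
        for it in range(20):                   # Newton-Raphson search for a stationary point
            step = np.linalg.solve(hess(x), grad(x))
            x -= step
            if np.linalg.norm(step) < 1e-12:
                break
        print("stationary point:", x, "after", it + 1, "iterations")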

  10. Dutch translation and cross-cultural validation of the Adult Social Care Outcomes Toolkit (ASCOT).

    PubMed

    van Leeuwen, Karen M; Bosmans, Judith E; Jansen, Aaltje Pd; Rand, Stacey E; Towers, Ann-Marie; Smith, Nick; Razik, Kamilla; Trukeschitz, Birgit; van Tulder, Maurits W; van der Horst, Henriette E; Ostelo, Raymond W

    2015-05-13

    The Adult Social Care Outcomes Toolkit (ASCOT) was developed to measure outcomes of social care in England. In this study, we translated the four-level self-completion version (SCT-4) of the ASCOT for use in the Netherlands and performed a cross-cultural validation. The ASCOT SCT-4 was translated into Dutch following international guidelines, including two forward and back translations. The resulting version was pilot tested among frail older adults using think-aloud interviews. Furthermore, using a subsample of the Dutch ACT-study, we investigated test-retest reliability and construct validity and compared response distributions with data from a comparable English study. The pilot tests showed that translated items were in general understood as intended, that most items were reliable, and that the response distributions of the Dutch translation and associations with other measures were comparable to the original English version. Based on the results of the pilot tests, some small modifications and a revision of the Dignity items were proposed for the final translation, which were approved by the ASCOT development team. The complete original English version and the final Dutch translation can be obtained after registration on the ASCOT website (http://www.pssru.ac.uk/ascot). This study provides preliminary evidence that the Dutch translation of the ASCOT is valid, reliable and comparable to the original English version. We recommend further research to confirm the validity of the modified Dutch ASCOT translation.

  11. Minimally Clinically Important Change in the Activity Measure for Post-Acute Care (AM-PAC), a Generic Patient-Reported Outcome Tool, in People With Low Back Pain.

    PubMed

    Lee, Natalie; Thompson, Nicolas R; Passek, Sandra; Stilphen, Mary; Katzan, Irene L

    2017-11-01

    The Activity Measure for Post-Acute Care (AM-PAC) is a generic metric of patient-reported functional status. The minimal clinically important difference (MCID) in the AM-PAC score has not been determined. The study objective was to determine the MCID for the AM-PAC in people with low back pain. This was a retrospective cohort study. Anchor-based and distribution-based methods were used to estimate the MCID, with the Modified Low Back Pain Disability Questionnaire used as the anchor. Adults who had a primary ICD-9 code for low back pain in at least 1 outpatient physical therapist visit during an episode of care and who completed both the AM-PAC and the Modified Low Back Pain Disability Questionnaire in at least 2 visits during the care episode were included. The MCID was calculated for the AM-PAC basic mobility version as well as its adapted version, which the Cleveland Clinic uses for patients 65 years old or older. A total of 1,271 participants were eligible for the study. For the AM-PAC basic mobility version, anchor-based methods yielded MCID estimates of between 3.4 and 5.1, whereas distribution-based methods yielded estimates of 1.7 to 4.2. The minimal detectable change (MDC) for the AM-PAC basic mobility version was 3.3. For the adapted AM-PAC basic mobility version, the MCID was estimated to be between 2.9 and 4.0 via anchor-based methods and between 1.2 and 3.5 via distribution-based methods. The MDC for the adapted AM-PAC basic mobility version was 3.5. The estimated MCID applies to people with low back pain only. The MCID ranged from 3.3 to 5.1 for the AM-PAC basic mobility version and 3.5 to 4.0 for the adapted version, with the MDC as the lower limit. Changes in the AM-PAC for people with low back pain may be interpreted using the estimated MCID. Future studies are needed to determine the AM-PAC MCID for populations other than those with low back pain. © 2017 American Physical Therapy Association
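
    The distribution-based side of such analyses typically relies on a few standard formulas, for example 0.5 SD as an MCID proxy and an SEM-based 95% minimal detectable change. The sketch below uses these common forms on synthetic scores and an assumed test-retest reliability; it is not the paper's exact estimator:

        import numpy as np

        def distribution_based_estimates(baseline_scores, test_retest_r):
            """Common distribution-based benchmarks: 0.5*SD as an MCID proxy and
            the 95% minimal detectable change derived from the SEM."""
            sd = np.std(baseline_scores, ddof=1)
            sem = sd * np.sqrt(1.0 - test_retest_r)   # standard error of measurement
            mdc95 = 1.96 * np.sqrt(2.0) * sem         # smallest change beyond measurement noise
            return 0.5 * sd, sem, mdc95

        scores = np.random.default_rng(4).normal(60.0, 8.0, 200)   # synthetic AM-PAC-like scores
        half_sd, sem, mdc = distribution_based_estimates(scores, test_retest_r=0.91)
        print(f"0.5*SD = {half_sd:.1f}, SEM = {sem:.1f}, MDC95 = {mdc:.1f}")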

  12. NETS - A NEURAL NETWORK DEVELOPMENT TOOL, VERSION 3.0 (MACINTOSH VERSION)

    NASA Technical Reports Server (NTRS)

    Phillips, T. A.

    1994-01-01

    NETS, A Tool for the Development and Evaluation of Neural Networks, provides a simulation of Neural Network algorithms plus an environment for developing such algorithms. Neural Networks are a class of systems modeled after the human brain. Artificial Neural Networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to brain neurons. Problems which involve pattern matching readily fit the class of problems which NETS is designed to solve. NETS uses the back propagation learning method for all of the networks which it creates. The nodes of a network are usually grouped together into clumps called layers. Generally, a network will have an input layer through which the various environment stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to some features of the problem being solved. Other layers, which form intermediate stops between the input and output layers, are called hidden layers. NETS allows the user to customize the patterns of connections between layers of a network. NETS also provides features for saving the weight values of a network during the learning process, which allows for more precise control over the learning process. NETS is an interpreter. Its method of execution is the familiar "read-evaluate-print" loop found in interpreted languages such as BASIC and LISP. The user is presented with a prompt which is the simulator's way of asking for input. After a command is issued, NETS will attempt to evaluate the command, which may produce more prompts requesting specific information or an error if the command is not understood. The typical process involved when using NETS consists of translating the problem into a format which uses input/output pairs, designing a network configuration for the problem, and finally training the network with input/output pairs until an acceptable error is reached. NETS allows the user to generate C code to implement the network loaded into the system. This permits the placement of networks as components, or subroutines, in other systems. In short, once a network performs satisfactorily, the Generate C Code option provides the means for creating a program separate from NETS to run the network. Other features: files may be stored in binary or ASCII format; multiple input propagation is permitted; bias values may be included; capability to scale data without writing scaling code; quick interactive testing of network from the main menu; and several options that allow the user to manipulate learning efficiency. NETS is written in ANSI standard C language to be machine independent. The Macintosh version (MSC-22108) includes code for both a graphical user interface version and a command line interface version. The machine independent version (MSC-21588) only includes code for the command line interface version of NETS 3.0. The Macintosh version requires a Macintosh II series computer and has been successfully implemented under System 7. Four executables are included on these diskettes, two for floating point operations and two for integer arithmetic. It requires Think C 5.0 to compile. A minimum of 1Mb of RAM is required for execution. Sample input files and executables for both the command line version and the Macintosh user interface version are provided on the distribution medium. The Macintosh version is available on a set of three 3.5 inch 800K Macintosh format diskettes. 
The machine independent version has been successfully implemented on an IBM PC series compatible running MS-DOS, a DEC VAX running VMS, a SunIPC running SunOS, and a CRAY Y-MP running UNICOS. Two executables for the IBM PC version are included on the MS-DOS distribution media, one compiled for floating point operations and one for integer arithmetic. The machine independent version is available on a set of three 5.25 inch 360K MS-DOS format diskettes (standard distribution medium) or a .25 inch streaming magnetic tape cartridge in UNIX tar format. NETS was developed in 1989 and updated in 1992. IBM PC is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. SunIPC and SunOS are trademarks of Sun Microsystems, Inc. CRAY Y-MP and UNICOS are trademarks of Cray Research, Inc.
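
    The back-propagation learning method that NETS applies to all of its networks can be condensed into a few lines of numpy for a single hidden layer. This toy sketch (synthetic data, sigmoid activations) illustrates the input/hidden/output structure described above, not NETS's C implementation:

        import numpy as np

        rng = np.random.default_rng(5)
        X = rng.uniform(-1.0, 1.0, (200, 2))               # input layer: two stimuli per pattern
        y = (X[:, :1] * X[:, 1:] > 0).astype(float)        # target: XOR-like sign pattern

        W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)   # input -> hidden weights
        W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)   # hidden -> output weights
        sig = lambda z: 1.0 / (1.0 + np.exp(-z))
        lr, n = 1.0, len(X)

        for epoch in range(5000):                          # back-propagation training loop
            h = sig(X @ W1 + b1)                           # hidden layer activations
            out = sig(h @ W2 + b2)                         # output layer response
            d_out = (out - y) * out * (1 - out)            # output delta (squared-error gradient)
            d_h = (d_out @ W2.T) * h * (1 - h)             # delta propagated back to hidden layer
            W2 -= lr * h.T @ d_out / n; b2 -= lr * d_out.mean(0)
            W1 -= lr * X.T @ d_h / n;  b1 -= lr * d_h.mean(0)

        print("training accuracy:", ((out > 0.5) == (y > 0.5)).mean())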

  13. NETS - A NEURAL NETWORK DEVELOPMENT TOOL, VERSION 3.0 (MACHINE INDEPENDENT VERSION)

    NASA Technical Reports Server (NTRS)

    Baffes, P. T.

    1994-01-01

    NETS, A Tool for the Development and Evaluation of Neural Networks, provides a simulation of Neural Network algorithms plus an environment for developing such algorithms. Neural Networks are a class of systems modeled after the human brain. Artificial Neural Networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to brain neurons. Problems which involve pattern matching readily fit the class of problems which NETS is designed to solve. NETS uses the back propagation learning method for all of the networks which it creates. The nodes of a network are usually grouped together into clumps called layers. Generally, a network will have an input layer through which the various environment stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to some features of the problem being solved. Other layers, which form intermediate stops between the input and output layers, are called hidden layers. NETS allows the user to customize the patterns of connections between layers of a network. NETS also provides features for saving the weight values of a network during the learning process, which allows for more precise control over the learning process. NETS is an interpreter. Its method of execution is the familiar "read-evaluate-print" loop found in interpreted languages such as BASIC and LISP. The user is presented with a prompt which is the simulator's way of asking for input. After a command is issued, NETS will attempt to evaluate the command, which may produce more prompts requesting specific information or an error if the command is not understood. The typical process involved when using NETS consists of translating the problem into a format which uses input/output pairs, designing a network configuration for the problem, and finally training the network with input/output pairs until an acceptable error is reached. NETS allows the user to generate C code to implement the network loaded into the system. This permits the placement of networks as components, or subroutines, in other systems. In short, once a network performs satisfactorily, the Generate C Code option provides the means for creating a program separate from NETS to run the network. Other features: files may be stored in binary or ASCII format; multiple input propagation is permitted; bias values may be included; capability to scale data without writing scaling code; quick interactive testing of network from the main menu; and several options that allow the user to manipulate learning efficiency. NETS is written in ANSI standard C language to be machine independent. The Macintosh version (MSC-22108) includes code for both a graphical user interface version and a command line interface version. The machine independent version (MSC-21588) only includes code for the command line interface version of NETS 3.0. The Macintosh version requires a Macintosh II series computer and has been successfully implemented under System 7. Four executables are included on these diskettes, two for floating point operations and two for integer arithmetic. It requires Think C 5.0 to compile. A minimum of 1Mb of RAM is required for execution. Sample input files and executables for both the command line version and the Macintosh user interface version are provided on the distribution medium. The Macintosh version is available on a set of three 3.5 inch 800K Macintosh format diskettes. 
The machine independent version has been successfully implemented on an IBM PC series compatible running MS-DOS, a DEC VAX running VMS, a SunIPC running SunOS, and a CRAY Y-MP running UNICOS. Two executables for the IBM PC version are included on the MS-DOS distribution media, one compiled for floating point operations and one for integer arithmetic. The machine independent version is available on a set of three 5.25 inch 360K MS-DOS format diskettes (standard distribution medium) or a .25 inch streaming magnetic tape cartridge in UNIX tar format. NETS was developed in 1989 and updated in 1992. IBM PC is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. SunIPC and SunOS are trademarks of Sun Microsystems, Inc. CRAY Y-MP and UNICOS are trademarks of Cray Research, Inc.

  14. NHPP for FRBs, Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawrence, Earl; Wiel, Scott Vander

    This code implements the non-homogeneous Poisson process model for estimating the rate of fast radio bursts. It includes modeling terms for the distribution of events in the Universe and the detection sensitivity of the radio telescopes and arrays used in observation. The model is described in LA-UR-16-26261.
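
    A standard way to simulate a non-homogeneous Poisson process, useful for checking such a rate model, is Lewis thinning: draw candidates from a bounding homogeneous process and accept each with probability rate(t)/rate_max. A sketch with an invented burst-rate function, not the LA-UR-16-26261 model:

        import numpy as np

        def simulate_nhpp(rate, t_max, rate_max, rng=np.random.default_rng(6)):
            """Lewis thinning: homogeneous candidates at rate_max, each kept with
            probability rate(t)/rate_max, realising intensity rate(t)."""
            events, t = [], 0.0
            while True:
                t += rng.exponential(1.0 / rate_max)        # next homogeneous candidate
                if t > t_max:
                    return np.array(events)
                if rng.uniform() < rate(t) / rate_max:      # accept by thinning
                    events.append(t)

        # Toy burst rate per day that decays with observing time (illustrative only):
        bursts = simulate_nhpp(lambda t: 5.0 * np.exp(-t / 30.0), t_max=100.0, rate_max=5.0)
        print(len(bursts), "simulated detections")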

  15. SPLICER - A GENETIC ALGORITHM TOOL FOR SEARCH AND OPTIMIZATION, VERSION 1.0 (MACINTOSH VERSION)

    NASA Technical Reports Server (NTRS)

    Wang, L.

    1994-01-01

    SPLICER is a genetic algorithm tool which can be used to solve search and optimization problems. Genetic algorithms are adaptive search procedures (i.e. problem solving methods) based loosely on the processes of natural selection and Darwinian "survival of the fittest." SPLICER provides the underlying framework and structure for building a genetic algorithm application. These algorithms apply genetically-inspired operators to populations of potential solutions in an iterative fashion, creating new populations while searching for an optimal or near-optimal solution to the problem at hand. SPLICER 1.0 was created using a modular architecture that includes a Genetic Algorithm Kernel, interchangeable Representation Libraries, Fitness Modules and User Interface Libraries, and well-defined interfaces between these components. The architecture supports portability, flexibility, and extensibility. SPLICER comes with all source code and several examples. For instance, a "traveling salesperson" example searches for the minimum distance through a number of cities visiting each city only once. Stand-alone SPLICER applications can be used without any programming knowledge. However, to fully utilize SPLICER within new problem domains, familiarity with C language programming is essential. SPLICER's genetic algorithm (GA) kernel was developed independent of representation (i.e. problem encoding), fitness function or user interface type. The GA kernel comprises all functions necessary for the manipulation of populations. These functions include the creation of populations and population members, the iterative population model, fitness scaling, parent selection and sampling, and the generation of population statistics. In addition, miscellaneous functions are included in the kernel (e.g., random number generators). Different problem-encoding schemes and functions are defined and stored in interchangeable representation libraries. This allows the GA kernel to be used with any representation scheme. The SPLICER tool provides representation libraries for binary strings and for permutations. These libraries contain functions for the definition, creation, and decoding of genetic strings, as well as multiple crossover and mutation operators. Furthermore, the SPLICER tool defines the appropriate interfaces to allow users to create new representation libraries. Fitness modules are the only component of the SPLICER system a user will normally need to create or alter to solve a particular problem. Fitness functions are defined and stored in interchangeable fitness modules which must be created using C language. Within a fitness module, a user can create a fitness (or scoring) function, set the initial values for various SPLICER control parameters (e.g., population size), create a function which graphically displays the best solutions as they are found, and provide descriptive information about the problem. The tool comes with several example fitness modules, while the process of developing a fitness module is fully discussed in the accompanying documentation. The user interface is event-driven and provides graphic output in windows. SPLICER is written in Think C for Apple Macintosh computers running System 6.0.3 or later and Sun series workstations running SunOS. The UNIX version is easily ported to other UNIX platforms and requires MIT's X Window System, Version 11 Revision 4 or 5, MIT's Athena Widget Set, and the Xw Widget Set. Example executables and source code are included for each machine version. 
The standard distribution medium for the Macintosh version is a set of three 3.5 inch Macintosh format diskettes. The standard distribution medium for the UNIX version is a .25 inch streaming magnetic tape cartridge in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. SPLICER was developed in 1991.
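
    The kernel operations described above (population creation, parent selection, crossover, mutation, and the iterative population model) fit in a short sketch. A minimal genetic algorithm over binary strings in Python, with a toy one-max-style fitness module; this is illustrative only, as SPLICER itself is written in C:

        import random

        random.seed(7)

        def fitness(bits):                              # fitness module: score a genetic string
            return min(sum(bits), 24)                   # toy objective: 1-bits, capped at 24

        def crossover(a, b):                            # one-point crossover operator
            cut = random.randrange(1, len(a))
            return a[:cut] + b[cut:]

        def mutate(bits, p=0.01):                       # bit-flip mutation operator
            return [1 - g if random.random() < p else g for g in bits]

        pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
        for gen in range(100):                          # iterative population model
            parents = sorted(pop, key=fitness, reverse=True)[:25]   # simple truncation selection
            pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                             for _ in range(25)]
        print("best fitness:", fitness(max(pop, key=fitness)))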

  16. Distributed Interoperable Metadata Registry; How Do Physicists Use an E-Print Archive? Implications for Institutional E-Print Services; A Framework for Building Open Digital Libraries; Implementing Digital Sanborn Maps for Ohio: OhioLINK and OPLIN Collaborative Project.

    ERIC Educational Resources Information Center

    Blanchi, Christophe; Petrone, Jason; Pinfield, Stephen; Suleman, Hussein; Fox, Edward A.; Bauer, Charly; Roddy, Carol Lynn

    2001-01-01

    Includes four articles that discuss a distributed architecture for managing metadata that promotes interoperability between digital libraries; the use of electronic print (e-print) by physicists; the development of digital libraries; and a collaborative project between two library consortia in Ohio to provide digital versions of Sanborn Fire…

  17. Competition between plant functional types in the Canadian Terrestrial Ecosystem Model (CTEM) v. 2.0

    NASA Astrophysics Data System (ADS)

    Melton, J. R.; Arora, V. K.

    2015-06-01

    The Canadian Terrestrial Ecosystem Model (CTEM) is the interactive vegetation component in the Earth system model of the Canadian Centre for Climate Modelling and Analysis. CTEM models land-atmosphere exchange of CO2 through the response of carbon in living vegetation, and dead litter and soil pools, to changes in weather and climate at timescales of days to centuries. Version 1.0 of CTEM uses prescribed fractional coverage of plant functional types (PFTs) although, in reality, vegetation cover continually adapts to changes in climate, atmospheric composition, and anthropogenic forcing. Changes in the spatial distribution of vegetation occur on timescales of years to centuries as vegetation distributions inherently have inertia. Here, we present version 2.0 of CTEM which includes a representation of competition between PFTs based on a modified version of the Lotka-Volterra (L-V) predator-prey equations. Our approach is used to dynamically simulate the fractional coverage of CTEM's seven natural, non-crop PFTs which are then compared with available observation-based estimates. Results from CTEM v. 2.0 show the model is able to represent the broad spatial distributions of its seven PFTs at the global scale. However, differences remain between modelled and observation-based fractional coverages of PFTs since representing the multitude of plant species globally, with just seven non-crop PFTs, only captures the large scale climatic controls on PFT distributions. As expected, PFTs that exist in climate niches are difficult to represent either due to the coarse spatial resolution of the model, and the corresponding driving climate, or the limited number of PFTs used. We also simulate the fractional coverages of PFTs using unmodified L-V equations to illustrate its limitations. The geographic and zonal distributions of primary terrestrial carbon pools and fluxes from the versions of CTEM that use prescribed and dynamically simulated fractional coverage of PFTs compare reasonably well with each other and observation-based estimates. The parametrization of competition between PFTs in CTEM v. 2.0 based on the modified L-V equations behaves in a reasonably realistic manner and yields a tool with which to investigate the changes in spatial distribution of vegetation in response to future changes in climate.

  18. Competition between plant functional types in the Canadian Terrestrial Ecosystem Model (CTEM) v. 2.0

    NASA Astrophysics Data System (ADS)

    Melton, J. R.; Arora, V. K.

    2016-01-01

    The Canadian Terrestrial Ecosystem Model (CTEM) is the interactive vegetation component in the Earth system model of the Canadian Centre for Climate Modelling and Analysis. CTEM models land-atmosphere exchange of CO2 through the response of carbon in living vegetation, and dead litter and soil pools, to changes in weather and climate at timescales of days to centuries. Version 1.0 of CTEM uses prescribed fractional coverage of plant functional types (PFTs) although, in reality, vegetation cover continually adapts to changes in climate, atmospheric composition and anthropogenic forcing. Changes in the spatial distribution of vegetation occur on timescales of years to centuries as vegetation distributions inherently have inertia. Here, we present version 2.0 of CTEM, which includes a representation of competition between PFTs based on a modified version of the Lotka-Volterra (L-V) predator-prey equations. Our approach is used to dynamically simulate the fractional coverage of CTEM's seven natural, non-crop PFTs, which are then compared with available observation-based estimates. Results from CTEM v. 2.0 show the model is able to represent the broad spatial distributions of its seven PFTs at the global scale. However, differences remain between modelled and observation-based fractional coverage of PFTs since representing the multitude of plant species globally, with just seven non-crop PFTs, only captures the large-scale climatic controls on PFT distributions. As expected, PFTs that exist in climate niches are difficult to represent either due to the coarse spatial resolution of the model, and the corresponding driving climate, or the limited number of PFTs used. We also simulate the fractional coverage of PFTs using unmodified L-V equations to illustrate its limitations. The geographic and zonal distributions of primary terrestrial carbon pools and fluxes from the versions of CTEM that use prescribed and dynamically simulated fractional coverage of PFTs compare reasonably well with each other and observation-based estimates. The parametrization of competition between PFTs in CTEM v. 2.0 based on the modified L-V equations behaves in a reasonably realistic manner and yields a tool with which to investigate the changes in spatial distribution of vegetation in response to future changes in climate.
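
    Competitive Lotka-Volterra dynamics of the kind CTEM v. 2.0 modifies can be sketched directly: each PFT's fractional coverage grows at its own rate and is suppressed by weighted competition from all PFTs. A forward-Euler toy with invented coefficients, not CTEM's parametrization:

        import numpy as np

        # Competitive Lotka-Volterra for fractional coverage f_i of n PFTs:
        #   df_i/dt = r_i f_i (1 - sum_j a_ij f_j)
        r = np.array([0.10, 0.06, 0.03])           # expansion rates (1/yr), invented
        a = np.array([[1.0, 0.9, 0.8],             # competition matrix a_ij, invented
                      [1.1, 1.0, 0.7],
                      [1.2, 1.3, 1.0]])
        f = np.full(3, 0.05)                       # initial fractional coverages

        dt = 0.1                                   # time step (yr)
        for _ in range(20_000):                    # integrate toward equilibrium
            f += dt * r * f * (1.0 - a @ f)
            f = np.clip(f, 0.0, 1.0)               # keep coverages physical
        print("equilibrium coverages:", f.round(3))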

  19. Software For Calibration Of Polarimetric SAR Data

    NASA Technical Reports Server (NTRS)

    Van Zyl, Jakob; Zebker, Howard; Freeman, Anthony; Holt, John; Dubois, Pascale; Chapman, Bruce

    1994-01-01

    POLCAL (Polarimetric Radar Calibration) is a software tool intended to assist in the calibration of synthetic-aperture radar (SAR) systems. In particular, it calibrates Stokes-matrix-format data produced as a standard product by the NASA/Jet Propulsion Laboratory (JPL) airborne imaging synthetic aperture radar (AIRSAR). Version 4.0 of POLCAL is an upgrade of version 2.0. New options include automatic absolute calibration of 89/90 data, distributed-target analysis, calibration of nearby scenes with corner reflectors, altitude or roll-angle corrections, and calibration of errors introduced by known topography. POLCAL reduces crosstalk and corrects phase calibration without the use of ground calibration equipment. It is written in FORTRAN 77.

  20. Documentation for the machine-readable version of the catalog of galactic O type stars

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    The Catalog of Galactic O-Type Stars (Garmany, Conti and Chiosi 1982), a compilation from the literature of all O-type stars for which spectral types, luminosity classes and UBV photometry exist, contains 765 stars, for each of which designation (HD, DM, etc.), spectral type, V, B-V, cluster membership, Galactic coordinates, and source references are given. Derived values of absolute visual and bolometric magnitudes and distances are included. The source reference should be consulted for additional details concerning the derived quantities. This description of the machine-readable version of the catalog seeks to enable users to read and process the data with a minimum of guesswork. A copy of this document should be distributed with any machine-readable version of the catalog.

  1. Dyspnoea-12: a translation and linguistic validation study in a Swedish setting.

    PubMed

    Sundh, Josefin; Ekström, Magnus

    2017-06-06

    Dyspnoea consists of multiple dimensions, including the intensity, unpleasantness, sensory qualities and emotional responses, which may differ between patient groups, settings and in relation to treatment. The Dyspnoea-12 is a validated and convenient instrument for multidimensional measurement in English. We aimed to take forward a Swedish version of the Dyspnoea-12. The linguistic validation of the Dyspnoea-12 was performed (Mapi Language Services, Lyon, France). The standardised procedure involved forward and backward translations by three independent certified translators and revisions after feedback from an in-country linguistic consultant, the developer and three native physicians. The understanding and convenience of the translated version were evaluated using qualitative in-depth interviews with five patients with dyspnoea. A Swedish version of the Dyspnoea-12 was elaborated and evaluated carefully according to international guidelines. The Swedish version, 'Dyspné-12', has the same layout as the original version, including 12 items distributed over seven physical and five affective items. The Dyspnoea-12 is copyrighted by the developer but can be used free of charge, with permission, for non-industry-funded research. A Swedish version of the Dyspnoea-12 is now available for clinical validation and multidimensional measurement across diseases and settings, with the aim of improved evaluation and management of dyspnoea. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  2. Volttron version 5.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    VOLTTRON is an agent execution platform providing services to its agents that allow them to easily communicate with physical devices and other resources. VOLTTRON delivers an innovative distributed control and sensing software platform that supports modern control strategies, including agent-based and transaction-based controls. It enables mobile and stationary software agents to perform information gathering, processing, and control actions. VOLTTRON can independently manage a wide range of applications, such as HVAC systems, electric vehicles, distributed energy or entire building loads, leading to improved operational efficiency.

  3. Seafloor mapping and benthic habitat GIS for southern California, volume III

    USGS Publications Warehouse

    Cochrane, Guy R.; Golden, Nadine E.; Dartnell, Pete; Schroeder, Donna M.; Finlayson, David P.

    2007-01-01

    From August 8-27, 2005, more than 75 km of the continental shelf (Fig. 1) in water depths of 20-70 m southeast of Santa Barbara were surveyed during the USGS cruise S-1-05-SC (http://walrus.wr.usgs.gov/infobank/s/s105sc/html/s-1-05-sc.meta.html). Interferometric sonar data and 14 hours of vertical and oblique georeferenced submarine digital video were collected to (1) obtain geophysical data (bathymetry and acoustic reflectance), (2) examine and record geologic characteristics of the sea floor, and (3) construct maps of seafloor geomorphology and habitat distribution. Substrate distribution is predicted using a modified version of the Cochrane and Lafferty (2002) video-supervised statistical classification of sonar data that includes derivatives of the bathymetry data. Specific details of the methods can be found in the metadata of the bathymetry data file. Substrates observed are predominantly sand with some rock. Rocky substrates were restricted primarily to an east-west trending bathymetric high 2,000 m north of the oil platforms. This is an updated report (version 2.0) of the earlier 2007-1271 (version 1.0) open-file report. This updated report re-releases the data files in UTM, zone 11, WGS84 coordinates. Also, the bathymetry data have been corrected for a vertical offset discovered in the earlier 2007-1271 (version 1.0) report.
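
    To make the video-supervised classification idea concrete, here is a minimal sketch (Python with scikit-learn, synthetic data, hypothetical feature names; not the Cochrane and Lafferty method itself): pixels where the video provides substrate labels train a classifier that is then applied to every sonar pixel.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)
      n_pixels = 10_000
      # Per-pixel features: acoustic reflectance, depth, and a bathymetry derivative.
      X = np.column_stack([rng.normal(size=n_pixels),        # reflectance
                           rng.uniform(20, 70, n_pixels),    # depth (m)
                           rng.exponential(1.0, n_pixels)])  # slope

      # Video ground truth exists only along the camera tracklines.
      labeled = rng.choice(n_pixels, size=500, replace=False)
      y_video = (X[labeled, 0] + X[labeled, 2] > 1.5).astype(int)  # toy rule: 1 = rock

      clf = DecisionTreeClassifier(max_depth=4).fit(X[labeled], y_video)
      substrate_map = clf.predict(X)  # predicted substrate class for every sonar pixel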

  4. Development and validation of a novel patient-reported treatment satisfaction measure for hyperfunctional facial lines: facial line satisfaction questionnaire.

    PubMed

    Pompilus, Farrah; Burgess, Somali; Hudgens, Stacie; Banderas, Benjamin; Daniels, Selena

    2015-12-01

    Facial lines or wrinkles are among the most visible signs of aging, and minimally invasive cosmetic procedures are becoming increasingly popular. The aim of this study was to develop and validate the Facial Line Satisfaction Questionnaire (FLSQ) for use in adults with upper facial lines (UFL). A literature review, concept elicitation interviews (n = 33), and cognitive debriefing interviews (n = 23) of adults with UFL were conducted to develop the FLSQ. The FLSQ comprises Baseline and Follow-up versions and was field-tested with 150 subjects in a US observational study designed to assess its psychometric performance. Analyses included acceptability (item and scale distribution [i.e. missingness, floor, and ceiling effects]), reliability, and validity (including concurrent validity). In total, 69 concepts were elicited during patient interviews. Following cognitive debriefing interviews, the FLSQ-Baseline version included 11 items and the Follow-up version included 13 items. Response rates for the FLSQ were 100% and 73% at baseline and follow-up, respectively; no items had excessive missing data. Questionnaire scale scores were normally distributed. Most domain scores demonstrated good internal consistency reliability (Cronbach's α ≥ 0.70). Most items within their respective domains exhibited good convergent (item-scale correlations > 0.40) and discriminant (items had higher correlation with their hypothesized scales than other scales) validity. Concurrent validity correlation coefficients of the FLSQ domain scores with the associated concurrent measures were acceptable (range: r = 0.40-0.70). Six FLSQ items demonstrated reliability and validity as stand-alone items outside their domains. The FLSQ is a valid questionnaire for assessing treatment expectations, satisfaction, impact, and preference in adults with UFL. © 2015 The Authors. Journal of Cosmetic Dermatology Published by Wiley Periodicals, Inc.
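
    The internal-consistency criterion quoted above (Cronbach's α ≥ 0.70) has a simple closed form, α = k/(k−1) · (1 − Σ σᵢ² / σ²_total) for k items; a minimal sketch on simulated responses (Python; not the study's data):

      import numpy as np

      def cronbach_alpha(items):
          # items: (n_subjects, n_items) matrix of item scores.
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          sum_item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1 - sum_item_var / total_var)

      rng = np.random.default_rng(1)
      trait = rng.normal(size=(150, 1))                  # one underlying construct
      items = trait + 0.8 * rng.normal(size=(150, 11))   # 11 noisy items, 150 subjects
      print(cronbach_alpha(items))                       # typically >= 0.70 here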

  5. GRID3D-v2: An updated version of the GRID2D/3D computer program for generating grid systems in complex-shaped three-dimensional spatial domains

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Shih, T. I-P.; Roelke, R. J.

    1991-01-01

    In order to generate good quality grid systems for complicated three-dimensional spatial domains, the grid-generation method used must be able to exert rather precise control over grid-point distributions. Several techniques are presented that enhance control of grid-point distribution for a class of algebraic grid-generation methods known as the two-, four-, and six-boundary methods. These techniques include variable stretching functions from bilinear interpolation, interpolating functions based on tension splines, and normalized K-factors. The techniques developed in this study were incorporated into a new version of GRID3D called GRID3D-v2. The usefulness of GRID3D-v2 was demonstrated by using it to generate a three-dimensional grid system in the coolant passage of a radial turbine blade with serpentine channels and pin fins.
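
    As an illustration of this kind of grid-point distribution control (a generic one-sided hyperbolic-tangent stretching function, not GRID3D-v2's actual formulation):

      import numpy as np

      def tanh_stretch(n, beta):
          # Map n uniform parameter values xi in [0, 1] to grid points in [0, 1]
          # clustered near x = 0; larger beta gives stronger clustering.
          xi = np.linspace(0.0, 1.0, n)
          return 1.0 + np.tanh(beta * (xi - 1.0)) / np.tanh(beta)

      x = tanh_stretch(11, beta=3.0)
      print(np.diff(x))  # spacings grow away from the clustered boundary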

  6. Design and Implementation of a Distributed Version of the NASA Engine Performance Program

    NASA Technical Reports Server (NTRS)

    Cours, Jeffrey T.

    1994-01-01

    Distributed NEPP is a new version of the NASA Engine Performance Program that runs in parallel on a collection of Unix workstations connected through a network. The program is fault-tolerant, efficient, and shows significant speed-up in a multi-user, heterogeneous environment. This report describes the issues involved in designing distributed NEPP, the algorithms the program uses, and the performance distributed NEPP achieves. It develops an analytical model to predict and measure the performance of the simple distribution, multiple distribution, and fault-tolerant distribution algorithms that distributed NEPP incorporates. Finally, the appendices explain how to use distributed NEPP and document the organization of the program's source code.
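
    The report's analytical model is not reproduced in this record, but the flavor of such a model can be sketched (hypothetical, for illustration only): for a set of independent analysis jobs farmed out to a pool of workers, with a per-job compute time and a per-job distribution overhead,

      import math

      def predicted_speedup(n_jobs, n_workers, t_job, t_overhead):
          serial = n_jobs * t_job                       # every job runs back to back
          # Jobs run in ceil(n_jobs / n_workers) waves; the master also pays a
          # per-job distribution overhead.
          parallel = math.ceil(n_jobs / n_workers) * t_job + n_jobs * t_overhead
          return serial / parallel

      print(predicted_speedup(n_jobs=100, n_workers=10, t_job=5.0, t_overhead=0.1))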

  7. Hyper-Fractal Analysis: A visual tool for estimating the fractal dimension of 4D objects

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Grossu, I.; Felea, D.; Besliu, C.; Jipa, Al.; Esanu, T.; Bordeianu, C. C.; Stan, E.

    2013-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images and 3D objects (Grossu et al. (2010) [1]). The program was extended for working with four-dimensional objects stored in comma-separated-values files. This might be of interest in biomedicine, for analyzing the evolution in time of three-dimensional images.
    New version program summary
    Program title: Hyper-Fractal Analysis (Fractal Analysis v03)
    Catalogue identifier: AEEG_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 745761
    No. of bytes in distributed program, including test data, etc.: 12544491
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 100 MB
    Classification: 14
    Catalogue identifier of previous version: AEEG_v2_0
    Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 831-832
    Does the new version supersede the previous version? Yes
    Nature of problem: Estimating the fractal dimension of 4D images.
    Solution method: Optimized implementation of the 4D box-counting algorithm.
    Reasons for new version: Inspired by existing applications of 3D fractals in biomedicine [3], we extended the optimized version of the box-counting algorithm [1,2] to the four-dimensional case. This might be of interest in analyzing the evolution in time of 3D images.
    Summary of revisions: The box-counting algorithm was extended in order to support 4D objects stored in comma-separated-values files. A new form was added for generating 2D, 3D, and 4D test data. The application was tested on 4D objects with known dimension, e.g. the Sierpinski hypertetrahedron gasket, Df = ln(5)/ln(2) ≅ 2.32. The algorithm could be extended, with minimum effort, to a higher number of dimensions. Easy integration with other applications is possible by using the very simple comma-separated-values file format for storing multi-dimensional images. Other additions: implementation of a χ2 test as a criterion for deciding whether an object is fractal or not, and a user-friendly graphical interface.
    Running time: To a first approximation, the algorithm is linear [2].
    References:
    [1] I.V. Grossu, D. Felea, C. Besliu, Al. Jipa, C.C. Bordeianu, E. Stan, T. Esanu, Comput. Phys. Comm. 181 (2010) 831-832.
    [2] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001.
    [3] J. Ruiz de Miras, J. Navas, P. Villoslada, F.J. Esteban, Computer Methods and Programs in Biomedicine 104 (3) (2011) 452-460.
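
    For orientation, the box-counting dimension is the slope of log N(ε) versus log(1/ε), where N(ε) is the number of boxes of side ε occupied by the object; the same code works in 2, 3, or 4 dimensions (Python sketch, not the distributed Visual Basic code):

      import numpy as np

      def box_counting_dimension(points, epsilons):
          # points: (n, d) coordinates scaled into [0, 1]^d; d may be 2, 3, or 4.
          counts = []
          for eps in epsilons:
              boxes = np.floor(points / eps).astype(int)    # box index of each point
              counts.append(len(np.unique(boxes, axis=0)))  # N(eps): occupied boxes
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
          return slope

      # Sanity check: points on a straight line embedded in 4D have dimension ~1.
      t = np.random.default_rng(2).uniform(size=(20_000, 1))
      line_4d = t * np.array([[0.9, 0.5, 0.3, 0.7]])
      print(box_counting_dimension(line_4d, [0.1, 0.05, 0.025, 0.0125]))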

  8. OASIS: a data and software distribution service for Open Science Grid

    NASA Astrophysics Data System (ADS)

    Bockelman, B.; Caballero Bejar, J.; De Stefano, J.; Hover, J.; Quick, R.; Teige, S.

    2014-06-01

    The Open Science Grid encourages the concept of software portability: a user's scientific application should be able to run at as many sites as possible. It is necessary to provide a mechanism for OSG Virtual Organizations to install software at sites. Since its initial release, the OSG Compute Element has provided an application software installation directory to Virtual Organizations, where they can create their own sub-directory, install software into that sub-directory, and have the directory shared on the worker nodes at that site. The current model has shortcomings with regard to permissions, policies, versioning, and the lack of a unified, collective procedure or toolset for deploying software across all sites. Therefore, a new mechanism for data and software distribution is desirable. The architecture for the OSG Application Software Installation Service (OASIS) is a server-client model: the software and data are installed only once in a single place, and are automatically distributed to all client sites simultaneously. Central file distribution offers other advantages, including server-side authentication and authorization, activity records, quota management, data validation and inspection, and well-defined versioning and deletion policies. The architecture, as well as a complete analysis of the current implementation, will be described in this paper.

  9. Computational control of flexible aerospace systems

    NASA Technical Reports Server (NTRS)

    Sharpe, Lonnie, Jr.; Shen, Ji Yao

    1994-01-01

    The main objective of this project is to establish a distributed parameter modeling technique for structural analysis, parameter estimation, vibration suppression and control synthesis of large flexible aerospace structures. This report concentrates on the research outputs produced in the last two years. The main accomplishments can be summarized as follows. A new version of the PDEMOD code has been completed, based on several incomplete versions. Verification of the code has been conducted by comparing its results with examples for which exact theoretical solutions can be obtained. The theoretical background of the package and the verification examples have been reported in a technical paper submitted to the Joint Applied Mechanics & Materials Conference, ASME. A brief USER'S MANUAL has been compiled, which includes three parts: (1) input data preparation; (2) explanation of the subroutines; and (3) specification of control variables. Meanwhile, a theoretical investigation of the NASA MSFC two-dimensional ground-based manipulator facility using the distributed parameter modeling technique has been conducted. A new mathematical treatment for dynamic analysis and control of large flexible manipulator systems has been conceived, which may provide an embryonic form of a more sophisticated mathematical model for future modified versions of PDEMOD.

  10. MADANALYSIS 5, a user-friendly framework for collider phenomenology

    NASA Astrophysics Data System (ADS)

    Conte, Eric; Fuks, Benjamin; Serret, Guillaume

    2013-01-01

    We present MADANALYSIS 5, a new framework for phenomenological investigations at particle colliders. Based on a C++ kernel, this program allows us to efficiently perform, in a straightforward and user-friendly fashion, sophisticated physics analyses of event files such as those generated by a large class of Monte Carlo event generators. MADANALYSIS 5 comes with two modes of running. The first one, easier to handle, uses the strengths of a powerful PYTHON interface in order to implement physics analyses by means of a set of intuitive commands. The second one requires one to implement the analyses in the C++ programming language, directly within the core of the analysis framework. This opens unlimited possibilities concerning the level of complexity which can be reached, limited only by the programming skills and the originality of the user.
    Program summary
    Program title: MadAnalysis 5
    Catalogue identifier: AENO_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENO_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Permission to use, copy, modify and distribute this program is granted under the terms of the GNU General Public License.
    No. of lines in distributed program, including test data, etc.: 31087
    No. of bytes in distributed program, including test data, etc.: 399105
    Distribution format: tar.gz
    Programming language: PYTHON, C++.
    Computer: All platforms on which Python version 2.7, Root version 5.27 and the g++ compiler are available. Compatibility with newer versions of these programs is also ensured; however, the Python version must be below version 3.0.
    Operating system: Unix, Linux and Mac OS operating systems on which the above-mentioned versions of Python and Root, as well as g++, are available.
    Classification: 11.1.
    External routines: ROOT (http://root.cern.ch/drupal/)
    Nature of problem: Implementing sophisticated phenomenological analyses in high-energy physics in a flexible, efficient and straightforward fashion, starting from event files such as those produced by Monte Carlo event generators. The event files may or may not have been matched to parton showering, and may or may not have been processed by a (fast) simulation of a detector. According to the sophistication level of the event files (parton level, hadron level, reconstructed level), several input formats are possible.
    Solution method: We implement an interface allowing the production of predefined as well as user-defined histograms for a large class of kinematical distributions, after applying a set of event selection cuts specified by the user. This allows us to devise robust and novel search strategies for collider experiments, such as those currently running at the Large Hadron Collider at CERN, in a very efficient way.
    Restrictions: Unsupported event file formats.
    Unusual features: The code is fully based on object representations for events, particles, reconstructed objects and cuts, which facilitates the implementation of an analysis.
    Running time: It depends on the purposes of the user and on the number of events to process. It varies from a few seconds to the order of a minute for several millions of events.
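
    The essence of the first running mode (book a histogram of a kinematic distribution after selection cuts) can be conveyed generically (plain Python on synthetic events; this is not MadAnalysis 5's actual command syntax):

      import numpy as np

      rng = np.random.default_rng(3)
      n_events = 100_000
      pt = rng.exponential(30.0, n_events)   # toy transverse momentum spectrum (GeV)
      eta = rng.normal(0.0, 1.5, n_events)   # toy pseudorapidity

      # Event selection cuts, then a histogram of the surviving pT spectrum.
      selected = (pt > 20.0) & (np.abs(eta) < 2.5)
      counts, edges = np.histogram(pt[selected], bins=50, range=(0.0, 200.0))
      print(f"{selected.sum()} / {n_events} events pass the cuts")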

  11. Documentation for the MODFLOW 6 framework

    USGS Publications Warehouse

    Hughes, Joseph D.; Langevin, Christian D.; Banta, Edward R.

    2017-08-10

    MODFLOW is a popular open-source groundwater flow model distributed by the U.S. Geological Survey. Growing interest in surface and groundwater interactions, local refinement with nested and unstructured grids, karst groundwater flow, solute transport, and saltwater intrusion has led to the development of numerous MODFLOW versions, and there are often incompatibilities between these different versions. This report describes a new MODFLOW framework called MODFLOW 6 that is designed to support multiple models and multiple types of models. The framework is written in Fortran using a modular object-oriented design. The primary framework components include the simulation (or main program), Timing Module, Solutions, Models, Exchanges, and Utilities. The first version of the framework focuses on numerical solutions, numerical models, and numerical exchanges. This focus allows multiple numerical models to be tightly coupled at the matrix level.
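
    A toy sketch of the composition described above (Python, hypothetical names; the actual framework is written in Fortran): a simulation owns solutions, each of which couples several models and their exchanges into one matrix problem.

      class GroundwaterModel:
          def __init__(self, name): self.name = name
          def assemble(self): return f"{self.name} equations"

      class Exchange:
          # Connects two models so they can be solved in the same matrix.
          def __init__(self, a, b): self.a, self.b = a, b
          def assemble(self): return f"coupling {self.a.name}-{self.b.name}"

      class Solution:
          def __init__(self, models, exchanges): self.models, self.exchanges = models, exchanges
          def solve(self):
              parts = [m.assemble() for m in self.models] + [e.assemble() for e in self.exchanges]
              print("solving:", "; ".join(parts))

      class Simulation:
          # Main program: steps through time, asking each solution to solve.
          def __init__(self, n_steps, solutions): self.n_steps, self.solutions = n_steps, solutions
          def run(self):
              for _ in range(self.n_steps):
                  for s in self.solutions:
                      s.solve()

      parent, child = GroundwaterModel("parent"), GroundwaterModel("child")
      Simulation(2, [Solution([parent, child], [Exchange(parent, child)])]).run()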

  12. Versioned distributed arrays for resilience in scientific applications: Global view resilience

    DOE PAGES

    Chien, A.; Balaji, P.; Beckman, P.; ...

    2015-06-01

    Exascale studies project reliability challenges for future high-performance computing (HPC) systems. We propose the Global View Resilience (GVR) system, a library that enables applications to add resilience in a portable, application-controlled fashion using versioned distributed arrays. We describe GVR's interfaces to distributed arrays, versioning, and cross-layer error recovery. Using several large applications (OpenMC, the preconditioned conjugate gradient solver PCG, ddcMD, and Chombo), we evaluate the programmer effort to add resilience. The required changes are small (<2% LOC), localized, and machine-independent, requiring no software architecture changes. We also measure the overhead of adding GVR versioning and show that overheads <2% are generally achieved. We conclude that GVR's interfaces and implementation are flexible and portable and create a gentle-slope path to tolerate growing error rates in future systems.
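
    A toy, single-process sketch of the versioned-array idea (hypothetical API; GVR's real interfaces are distributed and documented in the paper): the application commits versions explicitly and can roll back to a known-good one to recover from an error.

      import numpy as np

      class VersionedArray:
          # Toy stand-in for an application-controlled versioned array.
          def __init__(self, shape):
              self.current = np.zeros(shape)
              self.snapshots = []
          def version_inc(self):
              # Commit the current contents as a new read-only version.
              self.snapshots.append(self.current.copy())
              return len(self.snapshots) - 1
          def restore(self, version):
              # Application-controlled rollback (cross-layer error recovery).
              self.current = self.snapshots[version].copy()

      a = VersionedArray((4,))
      a.current[:] = 1.0
      good = a.version_inc()        # checkpoint a known-good state
      a.current[2] = np.nan         # a fault corrupts the working data
      if np.isnan(a.current).any():
          a.restore(good)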

  13. Distribution to the Astronomy Community of the Compressed Digitized Sky Survey

    NASA Astrophysics Data System (ADS)

    Postman, Marc

    1996-03-01

    The Space Telescope Science Institute has compressed an all-sky collection of ground-based images and has printed the data on a two-volume, 102 CD-ROM disc set. The first part of the survey (containing images of the southern sky) was published in May 1994. The second volume (containing images of the northern sky) was published in January 1995. Software which manages the image retrieval is included with each volume. The Astronomical Society of the Pacific (ASP) is handling the distribution of the 10x compressed data and has sold 310 sets as of October 1996. ASP is also handling the distribution of the recently published 100x version of the northern sky survey, which is publicly available at a low cost. The target markets for the 100x compressed data set are the amateur astronomy community, educational institutions, and the general public. During the next year, we plan to publish the first version of a photometric calibration database which will allow users of the compressed sky survey to determine the brightness of stars in the images.

  14. Distribution to the Astronomy Community of the Compressed Digitized Sky Survey

    NASA Technical Reports Server (NTRS)

    Postman, Marc

    1996-01-01

    The Space Telescope Science Institute has compressed an all-sky collection of ground-based images and has printed the data on a two-volume, 102 CD-ROM disc set. The first part of the survey (containing images of the southern sky) was published in May 1994. The second volume (containing images of the northern sky) was published in January 1995. Software which manages the image retrieval is included with each volume. The Astronomical Society of the Pacific (ASP) is handling the distribution of the 10x compressed data and has sold 310 sets as of October 1996. ASP is also handling the distribution of the recently published 100x version of the northern sky survey, which is publicly available at a low cost. The target markets for the 100x compressed data set are the amateur astronomy community, educational institutions, and the general public. During the next year, we plan to publish the first version of a photometric calibration database which will allow users of the compressed sky survey to determine the brightness of stars in the images.

  15. The Mpi-M Aerosol Climatology (MAC)

    NASA Astrophysics Data System (ADS)

    Kinne, S.

    2014-12-01

    Monthly gridded global data-sets for aerosol optical properties (AOD, SSA and g) and for aerosol microphysical properties (CCN and IN) offer a (less complex) alternate path for including aerosol radiative effects and aerosol impacts on cloud microphysics in global simulations. Based on merging AERONET sun-/sky-photometer data onto background maps provided by AeroCom phase 1 modeling output, the MPI-M Aerosol Climatology (MAC) version 1 was developed and applied in IPCC simulations with ECHAM and as an ancillary data-set in satellite-based global data-sets. An updated version 2 of this climatology will be presented, now applying central values from the more recent AeroCom phase 2 modeling and utilizing the better global coverage of trusted sun-photometer data, including statistics from the Maritime Aerosol Network (MAN). Applications include spatial distributions of estimates for aerosol direct and indirect radiative effects.

  16. DEMAID - A DESIGN MANAGER'S AID FOR INTELLIGENT DECOMPOSITION (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.

    1994-01-01

    Many engineering systems are large and multi-disciplinary. Before the design of new complex systems such as large space platforms can begin, the possible interactions among subsystems and their parts must be determined. Once this is completed, the proposed system can be decomposed to identify its hierarchical structure. DeMAID (A Design Manager's Aid for Intelligent Decomposition) is a knowledge-based system for ordering the sequence of modules and identifying a possible multilevel structure for the design problem. DeMAID displays the modules in an N x N matrix format (called a design structure matrix) where a module is any process that requires input and generates an output. (Modules which generate an output but do not require an input, such as an initialization process, are also acceptable.) Although DeMAID requires an investment of time to generate and refine the list of modules for input, it could save a considerable amount of money and time in the total design process, particularly in new design problems where the ordering of the modules has not been defined. The decomposition of a complex design system into subsystems requires the judgement of the design manager. DeMAID reorders and groups the modules based on the links (interactions) among the modules, helping the design manager make decomposition decisions early in the design cycle. The modules are grouped into circuits (the subsystems) and displayed in an N x N matrix format. Feedback links, which indicate an iterative process, are minimized and only occur within a subsystem. Since there are no feedback links among the circuits, the circuits can be displayed in a multilevel format. Thus, a large amount of information is reduced to one or two displays which are stored for later retrieval and modification. The design manager and leaders of the design teams then have a visual display of the design problem and the intricate interactions among the different modules. The design manager could save a substantial amount of time if circuits on the same level of the multilevel structure are executed in parallel. DeMAID estimates the time savings based on the number of available processors. In addition to decomposing the system into subsystems, DeMAID examines the dependencies of a problem with independent variables and dependent functions. A dependency matrix is created to show the relationship. DeMAID is based on knowledge-based techniques to provide flexibility and ease in adding new capabilities. Although DeMAID was originally written for design problems, it has proven to be very general in solving any problem which contains modules (processes) which take an input and generate an output. For example, one group is applying DeMAID to gain understanding of the data flow of a very large computer program. In this example, the modules are the subroutines of the program. The design manager begins the design of a system by determining the level of modules which need to be ordered. The level is the "granularity" of the problem. For example, the design manager may wish to examine disciplines (a coarse model), analysis programs, or the data level (a fine model). Once the system is divided into these modules, each module's input and output is determined, creating a data file for input to the main program. DeMAID is executed through a system of menus. The user has the choice to plan, schedule, display the N x N matrix, display the multilevel organization, or examine the dependency matrix. 
The main program calls a subroutine which reads a rule file and a data file, asserts facts into the knowledge base, and executes the inference engine of the artificial intelligence/expert systems program, CLIPS (C Language Integrated Production System). To determine the effects of changes in the design process, DeMAID includes a trace effects feature. There are two methods available to trace the effects of a change in the design process. The first method traces forward through the outputs to determine the effects of an output with respect to a change in a particular input. The second method traces backward to determine what modules must be re-executed if the output of a module must be recomputed. DeMAID is available in three machine versions: a Macintosh version which is written in Symantec's Think C 3.01, a Sun version, and an SGI IRIS version, both of which are written in C language. The Macintosh version requires system software 6.0.2 or later and CLIPS 4.3. The source code for the Macintosh version will not compile under version 4.0 of Think C; however, a sample executable is provided on the distribution media. QuickDraw is required for plotting. The Sun version requires GKS 4.1 graphics libraries, OpenWindows 3, and CLIPS 4.3. The SGI IRIS version requires CLIPS 4.3. Since DeMAID is not compatible with CLIPS 5.0 or later, the source code for CLIPS 4.3 is included on the distribution media; however, the documentation for CLIPS 4.3 is not included in the documentation package for DeMAID. It is available from COSMIC separately as the documentation for MSC-21208. The standard distribution medium for the Macintosh version of DeMAID is a set of four 3.5 inch 800K Macintosh format diskettes. The standard distribution medium for the Sun version of DeMAID is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. The standard distribution medium for the IRIS version is a .25 inch IRIX compatible streaming magnetic tape cartridge in UNIX tar format. All versions include sample input. DeMAID was originally developed for use on VAX VMS computers in 1989. The Macintosh version of DeMAID was released in 1991 and updated in 1992. The Sun version of DeMAID was released in 1992 and updated in 1993. The SGI IRIS version was released in 1993.
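
    DeMAID's knowledge-based ordering is richer than any short sketch, but its central "circuit" notion (a group of modules tied together by feedback, which must iterate as a unit, with all links between circuits running forward) corresponds to the strongly connected components of the module dependency graph, emitted in a valid execution order. A Python sketch with hypothetical module names:

      def circuits(graph):
          # graph: module -> modules whose outputs it needs. Tarjan's algorithm
          # returns the strongly connected components ("circuits") so that every
          # circuit appears after the circuits it depends on.
          index, low, stack, on_stack, out = {}, {}, [], set(), []
          counter = [0]
          def visit(v):
              index[v] = low[v] = counter[0]; counter[0] += 1
              stack.append(v); on_stack.add(v)
              for w in graph.get(v, ()):
                  if w not in index:
                      visit(w); low[v] = min(low[v], low[w])
                  elif w in on_stack:
                      low[v] = min(low[v], index[w])
              if low[v] == index[v]:            # v is the root of a circuit
                  comp = []
                  while True:
                      w = stack.pop(); on_stack.discard(w); comp.append(w)
                      if w == v: break
                  out.append(comp)
          for v in graph:
              if v not in index:
                  visit(v)
          return out

      deps = {"aero": ["weights"], "weights": ["aero"], "cost": ["aero", "weights"]}
      print(circuits(deps))  # [['weights', 'aero'], ['cost']]: the feedback pair first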

  17. DEMAID - A DESIGN MANAGER'S AID FOR INTELLIGENT DECOMPOSITION (SGI IRIS VERSION)

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.

    1994-01-01

    Many engineering systems are large and multi-disciplinary. Before the design of new complex systems such as large space platforms can begin, the possible interactions among subsystems and their parts must be determined. Once this is completed, the proposed system can be decomposed to identify its hierarchical structure. DeMAID (A Design Manager's Aid for Intelligent Decomposition) is a knowledge-based system for ordering the sequence of modules and identifying a possible multilevel structure for the design problem. DeMAID displays the modules in an N x N matrix format (called a design structure matrix) where a module is any process that requires input and generates an output. (Modules which generate an output but do not require an input, such as an initialization process, are also acceptable.) Although DeMAID requires an investment of time to generate and refine the list of modules for input, it could save a considerable amount of money and time in the total design process, particularly in new design problems where the ordering of the modules has not been defined. The decomposition of a complex design system into subsystems requires the judgement of the design manager. DeMAID reorders and groups the modules based on the links (interactions) among the modules, helping the design manager make decomposition decisions early in the design cycle. The modules are grouped into circuits (the subsystems) and displayed in an N x N matrix format. Feedback links, which indicate an iterative process, are minimized and only occur within a subsystem. Since there are no feedback links among the circuits, the circuits can be displayed in a multilevel format. Thus, a large amount of information is reduced to one or two displays which are stored for later retrieval and modification. The design manager and leaders of the design teams then have a visual display of the design problem and the intricate interactions among the different modules. The design manager could save a substantial amount of time if circuits on the same level of the multilevel structure are executed in parallel. DeMAID estimates the time savings based on the number of available processors. In addition to decomposing the system into subsystems, DeMAID examines the dependencies of a problem with independent variables and dependent functions. A dependency matrix is created to show the relationship. DeMAID is based on knowledge-based techniques to provide flexibility and ease in adding new capabilities. Although DeMAID was originally written for design problems, it has proven to be very general in solving any problem which contains modules (processes) which take an input and generate an output. For example, one group is applying DeMAID to gain understanding of the data flow of a very large computer program. In this example, the modules are the subroutines of the program. The design manager begins the design of a system by determining the level of modules which need to be ordered. The level is the "granularity" of the problem. For example, the design manager may wish to examine disciplines (a coarse model), analysis programs, or the data level (a fine model). Once the system is divided into these modules, each module's input and output is determined, creating a data file for input to the main program. DeMAID is executed through a system of menus. The user has the choice to plan, schedule, display the N x N matrix, display the multilevel organization, or examine the dependency matrix. 
The main program calls a subroutine which reads a rule file and a data file, asserts facts into the knowledge base, and executes the inference engine of the artificial intelligence/expert systems program, CLIPS (C Language Integrated Production System). To determine the effects of changes in the design process, DeMAID includes a trace effects feature. There are two methods available to trace the effects of a change in the design process. The first method traces forward through the outputs to determine the effects of an output with respect to a change in a particular input. The second method traces backward to determine what modules must be re-executed if the output of a module must be recomputed. DeMAID is available in three machine versions: a Macintosh version which is written in Symantec's Think C 3.01, a Sun version, and an SGI IRIS version, both of which are written in C language. The Macintosh version requires system software 6.0.2 or later and CLIPS 4.3. The source code for the Macintosh version will not compile under version 4.0 of Think C; however, a sample executable is provided on the distribution media. QuickDraw is required for plotting. The Sun version requires GKS 4.1 graphics libraries, OpenWindows 3, and CLIPS 4.3. The SGI IRIS version requires CLIPS 4.3. Since DeMAID is not compatible with CLIPS 5.0 or later, the source code for CLIPS 4.3 is included on the distribution media; however, the documentation for CLIPS 4.3 is not included in the documentation package for DeMAID. It is available from COSMIC separately as the documentation for MSC-21208. The standard distribution medium for the Macintosh version of DeMAID is a set of four 3.5 inch 800K Macintosh format diskettes. The standard distribution medium for the Sun version of DeMAID is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. The standard distribution medium for the IRIS version is a .25 inch IRIX compatible streaming magnetic tape cartridge in UNIX tar format. All versions include sample input. DeMAID was originally developed for use on VAX VMS computers in 1989. The Macintosh version of DeMAID was released in 1991 and updated in 1992. The Sun version of DeMAID was released in 1992 and updated in 1993. The SGI IRIS version was released in 1993.

  18. DEMAID - A DESIGN MANAGER'S AID FOR INTELLIGENT DECOMPOSITION (MACINTOSH VERSION)

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.

    1994-01-01

    Many engineering systems are large and multi-disciplinary. Before the design of new complex systems such as large space platforms can begin, the possible interactions among subsystems and their parts must be determined. Once this is completed, the proposed system can be decomposed to identify its hierarchical structure. DeMAID (A Design Manager's Aid for Intelligent Decomposition) is a knowledge-based system for ordering the sequence of modules and identifying a possible multilevel structure for the design problem. DeMAID displays the modules in an N x N matrix format (called a design structure matrix) where a module is any process that requires input and generates an output. (Modules which generate an output but do not require an input, such as an initialization process, are also acceptable.) Although DeMAID requires an investment of time to generate and refine the list of modules for input, it could save a considerable amount of money and time in the total design process, particularly in new design problems where the ordering of the modules has not been defined. The decomposition of a complex design system into subsystems requires the judgement of the design manager. DeMAID reorders and groups the modules based on the links (interactions) among the modules, helping the design manager make decomposition decisions early in the design cycle. The modules are grouped into circuits (the subsystems) and displayed in an N x N matrix format. Feedback links, which indicate an iterative process, are minimized and only occur within a subsystem. Since there are no feedback links among the circuits, the circuits can be displayed in a multilevel format. Thus, a large amount of information is reduced to one or two displays which are stored for later retrieval and modification. The design manager and leaders of the design teams then have a visual display of the design problem and the intricate interactions among the different modules. The design manager could save a substantial amount of time if circuits on the same level of the multilevel structure are executed in parallel. DeMAID estimates the time savings based on the number of available processors. In addition to decomposing the system into subsystems, DeMAID examines the dependencies of a problem with independent variables and dependent functions. A dependency matrix is created to show the relationship. DeMAID is based on knowledge-based techniques to provide flexibility and ease in adding new capabilities. Although DeMAID was originally written for design problems, it has proven to be very general in solving any problem which contains modules (processes) which take an input and generate an output. For example, one group is applying DeMAID to gain understanding of the data flow of a very large computer program. In this example, the modules are the subroutines of the program. The design manager begins the design of a system by determining the level of modules which need to be ordered. The level is the "granularity" of the problem. For example, the design manager may wish to examine disciplines (a coarse model), analysis programs, or the data level (a fine model). Once the system is divided into these modules, each module's input and output is determined, creating a data file for input to the main program. DeMAID is executed through a system of menus. The user has the choice to plan, schedule, display the N x N matrix, display the multilevel organization, or examine the dependency matrix. 
The main program calls a subroutine which reads a rule file and a data file, asserts facts into the knowledge base, and executes the inference engine of the artificial intelligence/expert systems program, CLIPS (C Language Integrated Production System). To determine the effects of changes in the design process, DeMAID includes a trace effects feature. There are two methods available to trace the effects of a change in the design process. The first method traces forward through the outputs to determine the effects of an output with respect to a change in a particular input. The second method traces backward to determine what modules must be re-executed if the output of a module must be recomputed. DeMAID is available in three machine versions: a Macintosh version which is written in Symantec's Think C 3.01, a Sun version, and an SGI IRIS version, both of which are written in C language. The Macintosh version requires system software 6.0.2 or later and CLIPS 4.3. The source code for the Macintosh version will not compile under version 4.0 of Think C; however, a sample executable is provided on the distribution media. QuickDraw is required for plotting. The Sun version requires GKS 4.1 graphics libraries, OpenWindows 3, and CLIPS 4.3. The SGI IRIS version requires CLIPS 4.3. Since DeMAID is not compatible with CLIPS 5.0 or later, the source code for CLIPS 4.3 is included on the distribution media; however, the documentation for CLIPS 4.3 is not included in the documentation package for DeMAID. It is available from COSMIC separately as the documentation for MSC-21208. The standard distribution medium for the Macintosh version of DeMAID is a set of four 3.5 inch 800K Macintosh format diskettes. The standard distribution medium for the Sun version of DeMAID is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. The standard distribution medium for the IRIS version is a .25 inch IRIX compatible streaming magnetic tape cartridge in UNIX tar format. All versions include sample input. DeMAID was originally developed for use on VAX VMS computers in 1989. The Macintosh version of DeMAID was released in 1991 and updated in 1992. The Sun version of DeMAID was released in 1992 and updated in 1993. The SGI IRIS version was released in 1993.

  19. AERO2S - SUBSONIC AERODYNAMIC ANALYSIS OF WINGS WITH LEADING- AND TRAILING-EDGE FLAPS IN COMBINATION WITH CANARD OR HORIZONTAL TAIL SURFACES (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Carlson, H. W.

    1994-01-01

    This code was developed to aid design engineers in the selection and evaluation of aerodynamically efficient wing-canard and wing-horizontal-tail configurations that may employ simple hinged-flap systems. Rapid estimates of the longitudinal aerodynamic characteristics of conceptual airplane lifting surface arrangements are provided. The method is particularly well suited to configurations which, because of high speed flight requirements, must employ thin wings with highly swept leading edges. The code is applicable to wings with either sharp or rounded leading edges. The code provides theoretical pressure distributions over the wing, the canard or horizontal tail, and the deflected flap surfaces as well as estimates of the wing lift, drag, and pitching moments which account for attainable leading edge thrust and leading edge separation vortex forces. The wing planform information is specified by a series of leading edge and trailing edge breakpoints for a right hand wing panel. Up to 21 pairs of coordinates may be used to describe both the leading edge and the trailing edge. The code has been written to accommodate 2000 right hand panel elements, but can easily be modified to accommodate a larger or smaller number of elements depending on the capacity of the target computer platform. The code provides solutions for wing surfaces composed of all possible combinations of leading edge and trailing edge flap settings provided by the original deflection multipliers and by the flap deflection multipliers. Up to 25 pairs of leading edge and trailing edge flap deflection schedules may thus be treated simultaneously. The code also provides for an improved accounting of hinge-line singularities in determination of wing forces and moments. To determine lifting surface perturbation velocity distributions, the code provides for a maximum of 70 iterations. The program is constructed so that successive runs may be made with a given code entry. To make additional runs, it is necessary only to add an identification record and the namelist data that are to be changed from the previous run. This code was originally developed in 1989 in FORTRAN V on a CDC 6000 computer system, and was later ported to an MS-DOS environment. Both versions are available from COSMIC. There are only a few differences between the PC version (LAR-14458) and CDC version (LAR-14178) of AERO2S distributed by COSMIC. The CDC version has one main source code file while the PC version has two files which are easier to edit and compile on a PC. The PC version does not require a FORTRAN compiler which supports NAMELIST because a special INPUT subroutine has been added. The CDC version includes two MODIFY decks which can be used to improve the code and prevent the possibility of some infrequently occurring errors while PC-version users will have to make these code changes manually. The PC version includes an executable which was generated with the Ryan McFarland/FORTRAN compiler and requires 253K RAM and an 80x87 math co-processor. Using this executable, the sample case requires about four hours to execute on an 8MHz AT-class microcomputer with a co-processor. The source code conforms to the FORTRAN 77 standard except that it uses variables longer than six characters. With two minor modifications, the PC version should be portable to any computer with a FORTRAN compiler and sufficient memory. The CDC version of AERO2S is available in CDC NOS Internal format on a 9-track 1600 BPI magnetic tape. 
The PC version is available on a set of two 5.25 inch 360K MS-DOS format diskettes. IBM AT is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. CDC is a registered trademark of Control Data Corporation. NOS is a trademark of Control Data Corporation.

  20. AERO2S - SUBSONIC AERODYNAMIC ANALYSIS OF WINGS WITH LEADING- AND TRAILING-EDGE FLAPS IN COMBINATION WITH CANARD OR HORIZONTAL TAIL SURFACES (CDC VERSION)

    NASA Technical Reports Server (NTRS)

    Darden, C. M.

    1994-01-01

    This code was developed to aid design engineers in the selection and evaluation of aerodynamically efficient wing-canard and wing-horizontal-tail configurations that may employ simple hinged-flap systems. Rapid estimates of the longitudinal aerodynamic characteristics of conceptual airplane lifting surface arrangements are provided. The method is particularly well suited to configurations which, because of high speed flight requirements, must employ thin wings with highly swept leading edges. The code is applicable to wings with either sharp or rounded leading edges. The code provides theoretical pressure distributions over the wing, the canard or horizontal tail, and the deflected flap surfaces as well as estimates of the wing lift, drag, and pitching moments which account for attainable leading edge thrust and leading edge separation vortex forces. The wing planform information is specified by a series of leading edge and trailing edge breakpoints for a right hand wing panel. Up to 21 pairs of coordinates may be used to describe both the leading edge and the trailing edge. The code has been written to accommodate 2000 right hand panel elements, but can easily be modified to accommodate a larger or smaller number of elements depending on the capacity of the target computer platform. The code provides solutions for wing surfaces composed of all possible combinations of leading edge and trailing edge flap settings provided by the original deflection multipliers and by the flap deflection multipliers. Up to 25 pairs of leading edge and trailing edge flap deflection schedules may thus be treated simultaneously. The code also provides for an improved accounting of hinge-line singularities in determination of wing forces and moments. To determine lifting surface perturbation velocity distributions, the code provides for a maximum of 70 iterations. The program is constructed so that successive runs may be made with a given code entry. To make additional runs, it is necessary only to add an identification record and the namelist data that are to be changed from the previous run. This code was originally developed in 1989 in FORTRAN V on a CDC 6000 computer system, and was later ported to an MS-DOS environment. Both versions are available from COSMIC. There are only a few differences between the PC version (LAR-14458) and CDC version (LAR-14178) of AERO2S distributed by COSMIC. The CDC version has one main source code file while the PC version has two files which are easier to edit and compile on a PC. The PC version does not require a FORTRAN compiler which supports NAMELIST because a special INPUT subroutine has been added. The CDC version includes two MODIFY decks which can be used to improve the code and prevent the possibility of some infrequently occurring errors while PC-version users will have to make these code changes manually. The PC version includes an executable which was generated with the Ryan McFarland/FORTRAN compiler and requires 253K RAM and an 80x87 math co-processor. Using this executable, the sample case requires about four hours to execute on an 8MHz AT-class microcomputer with a co-processor. The source code conforms to the FORTRAN 77 standard except that it uses variables longer than six characters. With two minor modifications, the PC version should be portable to any computer with a FORTRAN compiler and sufficient memory. The CDC version of AERO2S is available in CDC NOS Internal format on a 9-track 1600 BPI magnetic tape. 
The PC version is available on a set of two 5.25 inch 360K MS-DOS format diskettes. IBM AT is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. CDC is a registered trademark of Control Data Corporation. NOS is a trademark of Control Data Corporation.

  1. Quantum key distribution without the wavefunction

    NASA Astrophysics Data System (ADS)

    Niestegge, Gerd

    A well-known feature of quantum mechanics is the secure exchange of secret bit strings which can then be used as keys to encrypt messages transmitted over any classical communication channel. It is demonstrated that this quantum key distribution admits a much more general and abstract treatment than commonly thought. The results include some generalizations of the Hilbert space version of quantum key distribution, but are based upon a general nonclassical extension of conditional probability. A special state-independent conditional probability is identified as the origin of the superior security of quantum key distribution; this is a purely algebraic property of the quantum logic and represents the transition probability between the outcomes of two consecutive quantum measurements.
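
    For orientation, the Hilbert-space protocol being generalized can be caricatured in a few lines: in BB84-style key distribution, bits survive sifting only where sender and receiver happened to use the same measurement basis (toy simulation without an eavesdropper; not the paper's algebraic construction):

      import secrets

      def bb84_sifted_key(n):
          alice_bits  = [secrets.randbits(1) for _ in range(n)]
          alice_bases = [secrets.randbits(1) for _ in range(n)]  # 0 rectilinear, 1 diagonal
          bob_bases   = [secrets.randbits(1) for _ in range(n)]
          # A matching basis reproduces Alice's bit; a mismatch yields a random outcome.
          bob_bits = [a if ab == bb else secrets.randbits(1)
                      for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
          # Bases are compared over the public channel; matching positions are kept.
          return [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]

      print(bb84_sifted_key(32))  # on average, half the transmitted bits survive sifting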

  2. Documentation for the machine-readable version of the Revised S201 Catalog of Far-Ultraviolet Objects (Page, Carruthers and Heckathorn 1982)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    A detailed description of the machine-readable revised catalog, as it is currently being distributed from the Astronomical Data Center, is given. This catalog of star images was compiled from imagery obtained by the Naval Research Laboratory (NRL) Far-Ultraviolet Camera/Spectrograph (Experiment S201) operated from 21 to 23 April 1972 on the lunar surface during the Apollo 16 mission. The documentation includes a detailed data format description, a table of characteristics of the magnetic tape file, and a sample listing of data records exactly as they are presented in the machine-readable version.

  3. HELAC-PHEGAS: A generator for all parton level processes

    NASA Astrophysics Data System (ADS)

    Cafarella, Alessandro; Papadopoulos, Costas G.; Worek, Malgorzata

    2009-10-01

    The updated version of the HELAC-PHEGAS event generator is presented. The matrix elements are calculated through Dyson-Schwinger recursive equations using a color connection representation. Phase-space generation is based on a multichannel approach, including optimization. HELAC-PHEGAS generates parton level events with all necessary information, in the most recent Les Houches Accord format, for the study of any process within the Standard Model at hadron and lepton colliders.
    New version program summary
    Program title: HELAC-PHEGAS
    Catalogue identifier: ADMS_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMS_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 35 986
    No. of bytes in distributed program, including test data, etc.: 380 214
    Distribution format: tar.gz
    Programming language: Fortran
    Computer: All
    Operating system: Linux
    Classification: 11.1, 11.2
    External routines: Optionally the Les Houches Accord (LHA) PDF Interface library (http://projects.hepforge.org/lhapdf/)
    Catalogue identifier of previous version: ADMS_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 132 (2000) 306
    Does the new version supersede the previous version? Yes, partly
    Nature of problem: One of the most striking features of final states at current and future colliders is the large number of events with several jets. Being able to predict their features is essential. To achieve this, the calculations need to describe as accurately as possible the full matrix elements for the underlying hard processes. Even at leading order, perturbation theory based on Feynman graphs runs into computational problems, since the number of graphs contributing to the amplitude grows as n!.
    Solution method: Recursive algorithms based on Dyson-Schwinger equations have been developed to overcome these computational obstacles. The calculation of the amplitude using Dyson-Schwinger recursive equations results in a computational cost growing asymptotically as 3^n, where n is the number of particles involved in the process. Off-shell subamplitudes are introduced, for which a recursion relation has been obtained, allowing one to express an n-particle amplitude in terms of subamplitudes with 1, 2, ... up to (n-1) particles. The color connection representation is used in order to treat amplitudes involving colored particles. In the present version HELAC-PHEGAS can be used to efficiently obtain helicity amplitudes, total cross sections, and parton-level event samples in LHA format for arbitrary multiparticle processes in the Standard Model in leptonic, p-pbar, and pp collisions.
    Reasons for new version: Substantial improvements, major functionality upgrade.
    Summary of revisions: Color connection representation, efficient integration over PDFs via the PARNI algorithm, interface to LHAPDF, parton level events generated in the most recent LHA format, k reweighting for parton shower matching, numerical predictions of amplitudes for arbitrary processes at phase-space points provided by the user, a new user interface, and the possibility to run over computer clusters.
    Running time: Depends on the process studied; usually from seconds to hours.
    References:
    [1] A. Kanaki, C.G. Papadopoulos, Comput. Phys. Comm. 132 (2000) 306.
    [2] C.G. Papadopoulos, Comput. Phys. Comm. 137 (2001) 247.
    URL: http://www.cern.ch/helac-phegas.

  4. GDF v2.0, an enhanced version of GDF

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Gavrilis, Dimitris; Dermatas, Evangelos

    2007-12-01

    An improved version of the function estimation program GDF is presented. The main enhancements of the new version include multi-output function estimation, the capability of defining custom functions in the grammar, and selection of the error function. The new version has been evaluated on a series of classification and regression datasets that are widely used for the evaluation of such methods. It is compared to two known neural networks and outperforms them in 5 (out of 10) datasets.
    Program summary
    Title of program: GDF v2.0
    Catalogue identifier: ADXC_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXC_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 98 147
    No. of bytes in distributed program, including test data, etc.: 2 040 684
    Distribution format: tar.gz
    Programming language: GNU C++
    Computer: The program is designed to be portable to all systems running the GNU C++ compiler
    Operating system: Linux, Solaris, FreeBSD
    RAM: 200000 bytes
    Classification: 4.9
    Does the new version supersede the previous version? Yes
    Nature of problem: The technique of function estimation tries to discover, from a series of input data, a functional form that best describes them. This can be performed with the use of parametric models whose parameters can adapt according to the input data.
    Solution method: Functional forms are created by genetic programming as approximations for the symbolic regression problem.
    Reasons for new version: The GDF package was extended in order to be more flexible and user-customizable than the old package. The user can extend the package by defining his own error functions, and can extend the grammar of the package by adding new functions to the function repertoire. Also, the new version can perform function estimation for multi-output functions and can be used for classification problems.
    Summary of revisions: The following features have been added to the package. Multi-output function approximation: the package can now approximate any function f: R^n -> R^m, which also gives it the capability of performing classification and not only regression. User-defined functions can be added to the repertoire of the grammar, extending the regression capabilities of the package; this feature is limited to 3 functions, but this number can easily be increased. Capability of selecting the error function: apart from the mean square error, the package now offers other error functions, such as mean absolute square error and maximum square error, and user-defined error functions can be added to the set. More verbose output: the main program displays more information to the user, as well as the default values of the parameters, and the user can define an output file where the output of the gdf program for the testing set will be stored after the termination of the process.
    Additional comments: A technical report describing the revisions, experiments and test runs is packaged with the source code.
    Running time: Depends on the training data.

  5. Code C# for chaos analysis of relativistic many-body systems with reactions

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Besliu, C.; Jipa, Al.; Stan, E.; Esanu, T.; Felea, D.; Bordeianu, C. C.

    2012-04-01

    In this work we present a reaction module for “Chaos Many-Body Engine” (Grossu et al., 2010 [1]). Following our goal of creating a customizable, object oriented code library, the list of all possible reactions, including the corresponding properties (particle types, probability, cross section, particle lifetime, etc.), could be supplied as parameter, using a specific XML input file. Inspired by the Poincaré section, we propose also the “Clusterization Map”, as a new intuitive analysis method of many-body systems. For exemplification, we implemented a numerical toy-model for nuclear relativistic collisions at 4.5 A GeV/c (the SKM200 Collaboration). An encouraging agreement with experimental data was obtained for momentum, energy, rapidity, and angular π distributions. Catalogue identifier: AEGH_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 184 628 No. of bytes in distributed program, including test data, etc.: 7 905 425 Distribution format: tar.gz Programming language: Visual C#.NET 2005 Computer: PC Operating system: Net Framework 2.0 running on MS Windows Has the code been vectorized or parallelized?: Each many-body system is simulated on a separate execution thread. One processor used for each many-body system. RAM: 128 Megabytes Classification: 6.2, 6.5 Catalogue identifier of previous version: AEGH_v1_0 Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 1464 External routines: Net Framework 2.0 Library Does the new version supersede the previous version?: Yes Nature of problem: Chaos analysis of three-dimensional, relativistic many-body systems with reactions. Solution method: Second order Runge-Kutta algorithm for simulating relativistic many-body systems with reactions. Object oriented solution, easy to reuse, extend and customize, in any development environment which accepts .Net assemblies or COM components. Treatment of two particles reactions and decays. For each particle, calculation of the time measured in the particle reference frame, according to the instantaneous velocity. Possibility to dynamically add particle properties (spin, isospin, etc.), and reactions/decays, using a specific XML input file. Basic support for Monte Carlo simulations. Implementation of: Lyapunov exponent, “fragmentation level”, “average system radius”, “virial coefficient”, “clusterization map”, and energy conservation precision test. As an example of use, we implemented a toy-model for nuclear relativistic collisions at 4.5 A GeV/c. Reasons for new version: Following our goal of applying chaos theory to nuclear relativistic collisions at 4.5 A GeV/c, we developed a reaction module integrated with the Chaos Many-Body Engine. In the previous version, inheriting the Particle class was the only possibility of implementing more particle properties (spin, isospin, and so on). In the new version, particle properties can be dynamically added using a dictionary object. The application was improved in order to calculate the time measured in the own reference frame of each particle. two particles reactions: a+b→c+d, decays: a→c+d, stimulated decays, more complicated schemas, implemented as various combinations of previous reactions. 
Following our goal of creating a flexible application, the reaction list, including the corresponding properties (cross sections, particle lifetimes, etc.), can be supplied as a parameter, using a specific XML configuration file. The simulation output files were modified for systems with reactions, while also assuring backward compatibility. We propose the “Clusterization Map” as a new investigation method for many-body systems. The multi-dimensional Lyapunov exponent was adapted so that it can be used for systems with variable structure. Basic support for Monte Carlo simulations was also added. Additional comments: Windows Forms application for testing the engine. Easy copy/paste-based deployment method. Running time: Quadratic complexity.
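
    As an illustration of the XML-driven reaction-list idea described above, here is a minimal sketch in Python (the package itself is Visual C#.NET); the element and attribute names are hypothetical, not the package's actual schema:

        import random
        import xml.etree.ElementTree as ET

        XML = """
        <reactions>
          <reaction input="p n" output="p n"   probability="0.7"/>
          <reaction input="p n" output="d pi0" probability="0.3"/>
        </reactions>
        """

        def load_reactions(text):
            # Each entry carries its own properties, as in the engine's input file.
            return [(r.get("input"), r.get("output"), float(r.get("probability")))
                    for r in ET.fromstring(text).iter("reaction")]

        def sample_channel(reactions):
            # Monte Carlo choice of the outgoing channel, weighted by probability.
            weights = [p for _, _, p in reactions]
            return random.choices(reactions, weights=weights, k=1)[0]

        print(sample_channel(load_reactions(XML)))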

  6. Time-frequency distributions for propulsion-system diagnostics

    NASA Astrophysics Data System (ADS)

    Griffin, Michael E.; Tulpule, Sharayu

    1991-12-01

    The Wigner distribution and its smoothed versions, i.e., the Choi-Williams and Gaussian kernels, are evaluated for propulsion-system diagnostics. The approach is intended for off-line kernel design, using the ambiguity domain to select an appropriate Gaussian kernel. The features produced by the Wigner distribution and its smoothed versions correlate remarkably well with documented failure indications. The selection of the kernel, on the other hand, is very subjective for our unstructured data.
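
    For readers unfamiliar with the tool, a compact discrete Wigner distribution can be sketched as follows (a simplified Python illustration, not the authors' code); a Choi-Williams or Gaussian smoothing kernel would be applied on top of this:

        import numpy as np

        def wigner(x):
            # For each time n, build the instantaneous autocorrelation
            # x[n+m] * conj(x[n-m]) and Fourier transform over the lag m.
            x = np.asarray(x, dtype=complex)
            N = len(x)
            W = np.zeros((N, N))
            for n in range(N):
                m_max = min(n, N - 1 - n)
                kernel = np.zeros(N, dtype=complex)
                for m in range(-m_max, m_max + 1):
                    kernel[m % N] = x[n + m] * np.conj(x[n - m])
                W[:, n] = np.fft.fft(kernel).real
            return W

        # A linear chirp concentrates along its instantaneous frequency.
        t = np.arange(256)
        W = wigner(np.exp(1j * 2 * np.pi * (0.05 + 0.0005 * t) * t))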

  7. FMM-Yukawa: An adaptive fast multipole method for screened Coulomb interactions

    NASA Astrophysics Data System (ADS)

    Huang, Jingfang; Jia, Jun; Zhang, Bo

    2009-11-01

    A Fortran program package is introduced for the rapid evaluation of the screened Coulomb interactions of N particles in three dimensions. The method utilizes an adaptive oct-tree structure, and is based on the new version of the fast multipole method, in which exponential expansions are used to diagonalize the multipole-to-local translations. The program and its full description, as well as several closely related packages, are also available at http://www.fastmultipole.org/. This paper is a brief review of the program and its performance. Catalogue identifier: AEEQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL 2.0 No. of lines in distributed program, including test data, etc.: 12 385 No. of bytes in distributed program, including test data, etc.: 79 222 Distribution format: tar.gz Programming language: Fortran77 and Fortran90 Computer: Any Operating system: Any RAM: Depends on the number of particles, their distribution, and the adaptive tree structure Classification: 4.8, 4.12 Nature of problem: To evaluate the screened Coulomb potential and force field of N charged particles, and to evaluate a convolution-type integral where the Green's function is the fundamental solution of the modified Helmholtz equation. Solution method: An adaptive oct-tree is generated, and a new version of the fast multipole method is applied in which the "multipole-to-local" translation operator is diagonalized. Restrictions: Only three and six significant digits accuracy options are provided in this version. Unusual features: Most of the code is written in Fortran77; functions for memory allocation from Fortran90 and above are used in one subroutine. Additional comments: For supplementary information see http://www.fastmultipole.org/ Running time: The running time varies depending on the number of particles (denoted by N) in the system and their distribution. The running time scales linearly as a function of N for nearly uniform particle distributions. For three digits accuracy, the solver breaks even with the direct summation method at about N = 750. References: [1] L. Greengard, J. Huang, A new version of the fast multipole method for screened Coulomb interactions in three dimensions, J. Comput. Phys. 180 (2002) 642-658.
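
    The quantity being accelerated is easy to state: the potential phi_i = sum over j != i of q_j exp(-beta r_ij) / r_ij. A direct O(N^2) reference evaluation, against which an FMM's O(N) timing and the N ≈ 750 break-even point can be checked, looks like this (illustrative Python, not part of the package):

        import numpy as np

        def yukawa_direct(pos, q, beta):
            # O(N^2) screened-Coulomb sum; an FMM reproduces this in O(N).
            phi = np.zeros(len(q))
            for i in range(len(q)):
                r = np.linalg.norm(pos - pos[i], axis=1)
                r[i] = np.inf                      # exclude self-interaction
                phi[i] = np.sum(q * np.exp(-beta * r) / r)
            return phi

        rng = np.random.default_rng(0)
        pos = rng.random((750, 3))                 # near the quoted break-even N
        phi = yukawa_direct(pos, rng.random(750) - 0.5, beta=1.0)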

  8. DSN Array Simulator

    NASA Technical Reports Server (NTRS)

    Tikidjian, Raffi; Mackey, Ryan

    2008-01-01

    The DSN Array Simulator (wherein 'DSN' signifies NASA's Deep Space Network) is an updated version of software previously denoted the DSN Receive Array Technology Assessment Simulation. This software is used for computational modeling of a proposed DSN facility comprising user-defined arrays of antennas and transmitting and receiving equipment for microwave communication with spacecraft on interplanetary missions. The simulation covers variations in the number of spacecraft tracked and changes in communication demand over up to several decades of future operation. Such modeling is performed to estimate facility performance, evaluate requirements that govern facility design, and evaluate proposed improvements in hardware and/or software. The updated version affords enhanced capability for characterizing facility performance against user-defined mission sets. The software includes a Monte Carlo simulation component that enables rapid generation of key mission-set metrics (e.g., numbers of links, data rates, and data volumes) and statistical distributions thereof as functions of time. The updated version also offers expanded capability for mixed-asset network modeling--for example, for running scenarios that involve user-definable mixtures of antennas having different diameters (in contradistinction to a fixed number of antennas having the same fixed diameter). The improved version also affords greater simulation fidelity, sufficient for validation by comparison with actual DSN operations and analytically predictable performance metrics.
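
    A toy version of such a Monte Carlo mission-set calculation might look as follows (Python sketch; the mission parameters and distributions are invented for illustration and are not the simulator's models):

        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, n_missions = 10_000, 12

        # Draw per-mission data rates (Mb/s) and daily tracking hours, then
        # accumulate the statistical distribution of total daily data volume.
        rate = rng.uniform(0.5, 8.0, size=(n_trials, n_missions))
        hours = rng.uniform(1.0, 10.0, size=(n_trials, n_missions))
        volume_gb = rate * hours * 3600 / 8 / 1000      # per-mission GB/day
        daily = volume_gb.sum(axis=1)
        print(np.percentile(daily, [5, 50, 95]))        # spread of demand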

  9. A controlled experiment on the impact of software structure on maintainability

    NASA Technical Reports Server (NTRS)

    Rombach, Dieter H.

    1987-01-01

    The impact of software structure on maintainability aspects, including comprehensibility, locality, modifiability, and reusability, in a distributed-system environment is studied in a controlled maintenance experiment involving six medium-size distributed software systems implemented in LADY (language for distributed systems) and six in an extended version of sequential PASCAL. For all maintenance aspects except reusability, the results were quantified in terms of complexity metrics that could be automated. The results showed LADY to be better suited than the extended sequential PASCAL to the development of maintainable software. Strong typing combined with a high degree of parametrization of units is suggested to improve the reusability of units in LADY.

  10. mizuRoute version 1: A river network routing tool for a continental domain water resources applications

    USGS Publications Warehouse

    Mizukami, Naoki; Clark, Martyn P.; Sampson, Kevin; Nijssen, Bart; Mao, Yixin; McMillan, Hilary; Viger, Roland; Markstrom, Steven; Hay, Lauren E.; Woods, Ross; Arnold, Jeffrey R.; Brekke, Levi D.

    2016-01-01

    This paper describes the first version of a stand-alone runoff routing tool, mizuRoute. The mizuRoute tool post-processes runoff outputs from any distributed hydrologic model or land surface model to produce spatially distributed streamflow at various spatial scales, from headwater basins to continent-wide river systems. The tool can utilize both traditional grid-based and vector-based river network data. Both types of river network include river segment lines and the associated drainage basin polygons, but the vector-based river network can represent finer-scale river lines than the grid-based network. Streamflow estimates at any desired location in the river network can be easily extracted from the output of mizuRoute. The routing process is simulated as two separate steps. First, hillslope routing is performed with a gamma-distribution-based unit hydrograph to transport runoff from a hillslope to a catchment outlet. The second step is river channel routing, which is performed with one of two routing scheme options: (1) a kinematic wave tracking (KWT) routing procedure; and (2) an impulse response function – unit hydrograph (IRF-UH) routing procedure. The mizuRoute tool also includes scripts (Python, NetCDF operators) to pre-process spatial river network data. This paper demonstrates mizuRoute's capabilities to produce spatially distributed streamflow simulations based on river networks from the United States Geological Survey (USGS) Geospatial Fabric (GF) data set, in which over 54 000 river segments and their contributing areas are mapped across the contiguous United States (CONUS). A brief analysis of model parameter sensitivity is also provided. The mizuRoute tool can assist model-based water resources assessments, including studies of the impacts of climate change on streamflow.
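
    The hillslope step is a plain convolution with a gamma-distribution unit hydrograph, which can be sketched in a few lines (illustrative Python; parameter values are arbitrary, and mizuRoute itself is a stand-alone tool, not this snippet):

        import numpy as np
        from scipy.stats import gamma

        def hillslope_route(runoff, shape, timescale, dt=1.0, nlag=20):
            # Discretize the gamma unit hydrograph over nlag steps and
            # convolve it with the model runoff series.
            edges = np.arange(nlag + 1) * dt
            uh = np.diff(gamma.cdf(edges, a=shape, scale=timescale))
            uh /= uh.sum()                       # preserve the water balance
            return np.convolve(runoff, uh)[:len(runoff)]

        runoff = np.zeros(100); runoff[10] = 5.0     # a single runoff pulse
        flow = hillslope_route(runoff, shape=2.5, timescale=3.0)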

  11. vh@nnlo-v2: new physics in Higgs Strahlung

    NASA Astrophysics Data System (ADS)

    Harlander, Robert V.; Klappert, Jonas; Liebler, Stefan; Simon, Lukas

    2018-05-01

    Introducing version 2 of the code vh@nnlo [1], we study the effects of a number of new-physics scenarios on the Higgs-Strahlung process. In particular, the cross section is evaluated within a general 2HDM and the MSSM. While the Drell-Yan-like contributions are consistently taken into account by a simple rescaling of the SM result, the gluon-initiated contribution is supplemented by squark-loop-mediated amplitudes and by the s-channel exchange of additional scalars, which may lead to conspicuous interference effects. The latter holds as well for bottom-quark-initiated Higgs-Strahlung, which is also included in the new version of vh@nnlo. Using an orthogonal rotation of the three Higgs CP eigenstates in the 2HDM and the MSSM, vh@nnlo incorporates a simple means of CP mixing in these models. Moreover, the effect of vector-like quarks in the SM on the gluon-initiated contribution can be studied. Beyond concrete models, vh@nnlo allows one to include the effect of higher-dimensional operators on the production of CP-even Higgs bosons. Transverse-momentum distributions of the final-state Higgs boson and invariant-mass distributions of the Vϕ final state for the gluon- and bottom-quark-initiated contributions can be studied. Distributions for the Drell-Yan-like component of Higgs-Strahlung can be included through a link to MCFM. vh@nnlo can also be linked to FeynHiggs and 2HDMC for the calculation of Higgs masses and mixing angles, and it can read these parameters from an SLHA file as produced by standard spectrum generators. Throughout the manuscript, we highlight new-physics effects in various numerical examples, both at the inclusive level and for distributions.

  12. Distributed Energy Resources Customer Adoption Model Plus (DER-CAM+), Version 1.0.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stadler, Michael; Cardoso, Goncalo; Mashayekh, Salman

    DER-CAM+ v1.0.0 is internally referred to as DER-CAM v5.0.0. Due to fundamental changes from previous versions, a new name (DER-CAM+) will be used for DER-CAM version 5.0.0 and above. DER-CAM+ is a Decision Support Tool for Decentralized Energy Systems that has been tailored for microgrid applications, and it now explicitly considers electrical and thermal networks within a microgrid, ancillary services, and operating reserve. DER-CAM was initially created as an exclusively economic energy model, able to find the cost-minimizing combination and operation profile of a set of DER technologies that meets the energy loads of a building or microgrid for a typical test year. The previous versions of DER-CAM were formulated without modeling the electrical/thermal networks within the microgrid and hence used aggregate single-node approaches. Furthermore, they were not able to consider operating reserve constraints or microgrid revenue streams from participating in ancillary services markets. The new version, DER-CAM+, addresses these issues by including electrical power flow and thermal flow equations and constraints within the microgrid, revenues from various ancillary services markets, and operating reserve constraints.
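
    The core "cost-minimizing combination and operation" problem is a mathematical program; a drastically reduced single-node sketch follows (Python, with invented numbers; DER-CAM+ itself adds investment decisions, network flows, ancillary services, and reserve constraints):

        import numpy as np
        from scipy.optimize import linprog

        load = np.array([40.0, 60.0, 80.0, 55.0])    # kW in each period
        price = np.array([0.10, 0.12, 0.30, 0.15])   # $/kWh grid tariff
        gen_cost, gen_cap = 0.18, 50.0               # on-site DER $/kWh, kW

        T = len(load)
        c = np.concatenate([price, np.full(T, gen_cost)])  # minimize total cost
        A_eq = np.hstack([np.eye(T), np.eye(T)])           # g[t] + d[t] = load[t]
        bounds = [(0, None)] * T + [(0, gen_cap)] * T
        res = linprog(c, A_eq=A_eq, b_eq=load, bounds=bounds)
        print(res.x[:T], res.x[T:])   # grid purchase vs. DER dispatch per period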

  13. Integrated Farm System Model Version 4.3 and Dairy Gas Emissions Model Version 3.3 Software development and distribution

    USDA-ARS?s Scientific Manuscript database

    Modeling routines of the Integrated Farm System Model (IFSM version 4.2) and Dairy Gas Emission Model (DairyGEM version 3.2), two whole-farm simulation models developed and maintained by USDA-ARS, were revised with new components for: (1) simulation of ammonia (NH3) and greenhouse gas emissions gene...

  14. mm_par2.0: An object-oriented molecular dynamics simulation program parallelized using a hierarchical scheme with MPI and OPENMP

    NASA Astrophysics Data System (ADS)

    Oh, Kwang Jin; Kang, Ji Hoon; Myung, Hun Joo

    2012-02-01

    We have revised the general-purpose parallel molecular dynamics simulation program mm_par using object-oriented programming. We parallelized the revised version using a hierarchical scheme in order to utilize more processors for a given system size. Benchmark results are presented here. New version program summary. Program title: mm_par2.0 Catalogue identifier: ADXP_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXP_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2 390 858 No. of bytes in distributed program, including test data, etc.: 25 068 310 Distribution format: tar.gz Programming language: C++ Computer: Any system operated by Linux or Unix Operating system: Linux Classification: 7.7 External routines: We provide wrappers for the FFTW [1] and Intel MKL [2] FFT routines; the Numerical Recipes [3] FFT, random number generator, and eigenvalue solver routines; the SPRNG [4] random number generator; the Mersenne Twister [5] random number generator; and a space-filling curve routine. Catalogue identifier of previous version: ADXP_v1_0 Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 560 Does the new version supersede the previous version?: Yes Nature of problem: Structural, thermodynamic, and dynamical properties of fluids and solids, from microscopic to mesoscopic scales. Solution method: Molecular dynamics simulation in the NVE, NVT, and NPT ensembles, Langevin dynamics simulation, and dissipative particle dynamics simulation. Reasons for new version: First, object-oriented programming has been used, which is known to be open for extension and closed for modification, and to be better for maintenance. Second, version 1.0 was based on atom decomposition and domain decomposition schemes [6] for parallelization. Atom decomposition is unpopular due to its poor scalability; domain decomposition scales better, but it is still limited in utilizing the large number of cores on recent petascale computers by the requirement that the domain size be larger than the potential cutoff distance. To go beyond this limitation, a hierarchical parallelization scheme has been adopted in this new version and implemented using MPI [7] and OPENMP [8]. Summary of revisions: (1) Object-oriented programming has been used. (2) A hierarchical parallelization scheme has been adopted. (3) The SPME routine has been fully parallelized with a parallel 3D FFT using a volumetric decomposition scheme [9]. K.J.O. thanks Mr. Seung Min Lee for useful discussions on programming and debugging. Running time: Running time depends on the system size and the methods used. For a test system containing a protein (PDB id: 5DHFR) with the CHARMM22 force field [10] and 7023 TIP3P [11] waters in a simulation box of dimensions 62.23 Å×62.23 Å×62.23 Å, the benchmark results are given in Fig. 1. Here the potential cutoff distance was set to 12 Å, and the switching function was applied from 10 Å for the force calculation in real space. For the SPME [12] calculation, the grid dimensions K1, K2, and K3 were each set to 64 and the interpolation order to 4. For the fast Fourier transform, we used the Intel MKL library. All bonds involving hydrogen atoms were constrained using the SHAKE/RATTLE algorithms [13,14]. The code was compiled using Intel compiler version 11.1 and mvapich2 version 1.5. Fig. 2 shows the performance gains from using the CUDA-enabled version [15] of mm_par for the 5DHFR-in-water simulation on an Intel Core2Quad 2.83 GHz with a GeForce GTX 580. Although mm_par2.0 has not yet been ported to GPU, these data indicate the performance that a GPU port of mm_par2.0 could achieve. (Fig. 1: timing results for 1000 MD steps; 1, 2, 4, and 8 denote the number of OPENMP threads. Fig. 2: timing results for 1000 MD steps from double-precision simulation on CPU, single-precision simulation on GPU, and double-precision simulation on GPU.)
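
    The cutoff-related limitation mentioned above comes from the classic cell/domain decomposition, where each domain edge must be at least the potential cutoff so that all interaction partners lie in neighbouring cells; a minimal Python sketch of that binning (not mm_par's C++ implementation):

        import numpy as np

        def build_cells(pos, box, rcut):
            # Bin particles into cells of edge >= rcut: every interaction
            # partner then sits in the same or one of the 26 adjacent cells.
            ncell = max(1, int(box // rcut))
            edge = box / ncell
            cells = {}
            for i, c in enumerate((pos // edge).astype(int) % ncell):
                cells.setdefault(tuple(c), []).append(i)
            return cells

        rng = np.random.default_rng(1)
        pos = rng.random((1000, 3)) * 62.23    # box size from the benchmark above
        cells = build_cells(pos, box=62.23, rcut=12.0)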

  15. Nonperturbative approach to the parton model

    NASA Astrophysics Data System (ADS)

    Simonov, Yu. A.

    2016-02-01

    In this paper, the nonperturbative parton distributions, obtained from the Lorentz-contracted wave functions, are analyzed in the formalism of many-particle Fock components, and their properties are compared to the standard perturbative distributions. We show that the collinear and IR divergences specific to the perturbative evolution treatment are absent in the nonperturbative version; however, for large momenta p_i^2 ≫ σ (the string tension), the bremsstrahlung kinematics is restored. A preliminary discussion of possible nonperturbative effects in DIS and high-energy scattering is given, including in particular a possible role of multihybrid states in creating ridge-type effects.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadgu, Teklu; Appel, Gordon John

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5, and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling-case output generated in FY15 with GoldSim Version 9.60.300, as documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrading of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  17. CRITIC2: A program for real-space analysis of quantum chemical interactions in solids

    NASA Astrophysics Data System (ADS)

    Otero-de-la-Roza, A.; Johnson, Erin R.; Luaña, Víctor

    2014-03-01

    We present CRITIC2, a program for the analysis of quantum-mechanical atomic and molecular interactions in periodic solids. This code, a greatly improved version of the previous CRITIC program (Otero-de-la Roza et al., 2009), can: (i) find critical points of the electron density and related scalar fields such as the electron localization function (ELF), Laplacian, … (ii) integrate atomic properties in the framework of Bader’s Atoms-in-Molecules theory (QTAIM), (iii) visualize non-covalent interactions in crystals using the non-covalent interactions (NCI) index, (iv) generate relevant graphical representations including lines, planes, gradient paths, contour plots, atomic basins, … and (v) perform transformations between file formats describing scalar fields and crystal structures. CRITIC2 can interface with the output produced by a variety of electronic structure programs including WIEN2k, elk, PI, abinit, Quantum ESPRESSO, VASP, Gaussian, and, in general, any other code capable of writing the scalar field under study to a three-dimensional grid. CRITIC2 is parallelized, completely documented (including illustrative test cases) and publicly available under the GNU General Public License. Catalogue identifier: AECB_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECB_v2_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: yes No. of lines in distributed program, including test data, etc.: 11686949 No. of bytes in distributed program, including test data, etc.: 337020731 Distribution format: tar.gz Programming language: Fortran 77 and 90. Computer: Workstations. Operating system: Unix, GNU/Linux. Has the code been vectorized or parallelized?: Shared-memory parallelization can be used for most tasks. Classification: 7.3. Catalogue identifier of previous version: AECB_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 157 Nature of problem: Analysis of quantum-chemical interactions in periodic solids by means of atoms-in-molecules and related formalisms. Solution method: Critical point search using Newton’s algorithm, atomic basin integration using bisection, qtree and grid-based algorithms, diverse graphical representations and computation of the non-covalent interactions index on a three-dimensional grid. Additional comments: !!!!! The distribution file for this program is over 330 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. !!!!! Running time: Variable, depending on the crystal and the source of the underlying scalar field.
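
    The critical-point search named above is, at heart, a Newton iteration on the gradient of the scalar field; a self-contained Python sketch on a toy analytic field (CRITIC2 itself works on numerically tabulated densities):

        import numpy as np

        def critical_point(grad, hess, x0, tol=1e-10, maxit=50):
            # Solve grad f(x) = 0 via x <- x - H(x)^{-1} grad f(x).
            x = np.asarray(x0, dtype=float)
            for _ in range(maxit):
                g = grad(x)
                if np.linalg.norm(g) < tol:
                    return x
                x = x - np.linalg.solve(hess(x), g)
            raise RuntimeError("Newton iteration did not converge")

        # Toy field f = x^2 - y^4 - y^2 with a saddle point at the origin.
        grad = lambda x: np.array([2 * x[0], -4 * x[1]**3 - 2 * x[1]])
        hess = lambda x: np.array([[2.0, 0.0], [0.0, -12 * x[1]**2 - 2.0]])
        print(critical_point(grad, hess, [0.3, 0.2]))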

  18. Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0

    NASA Astrophysics Data System (ADS)

    Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun

    2013-02-01

    We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related for example to phase space integrals. Program summaryProgram title: SecDec 2.0 Catalogue identifier: AEIR_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 156829 No. of bytes in distributed program, including test data, etc.: 2137907 Distribution format: tar.gz Programming language: Wolfram Mathematica, Perl, Fortran/C++. Computer: From a single PC to a cluster, depending on the problem. Operating system: Unix, Linux. RAM: Depending on the complexity of the problem Classification: 4.4, 5, 11.1. Catalogue identifier of previous version: AEIR_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182(2011)1566 Does the new version supersede the previous version?: Yes Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher order perturbative calculations in gauge theories. Numerical integration in the presence of integrable singularities (e.g., kinematic thresholds). Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. This leads to a Laurent series in the dimensional regularization parameter ɛ, where the coefficients are finite integrals over the unit hypercube. Those integrals are evaluated numerically by Monte Carlo integration. The integrable singularities are handled by choosing a suitable integration contour in the complex plane, in an automated way. Reasons for new version: In the previous version the calculation of multi-scale integrals was restricted to the Euclidean region. Now multi-loop integrals with arbitrary physical kinematics can be evaluated. Another major improvement is the possibility of full parallelization. Summary of revisions: No restriction on the kinematics for multi-loop integrals. The integrand can be constructed from the topological cuts of the diagram. Possibility of full parallelization. Numerical integration of multi-loop integrals written in C++ rather than Fortran. Possibility to loop over ranges of parameters. Restrictions: Depending on the complexity of the problem, limited by memory and CPU time. The restriction that multi-scale integrals could only be evaluated at Euclidean points is superseded in version 2.0. Running time: Between a few minutes and several days, depending on the complexity of the problem. Test runs provided take only seconds.
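
    The essence of the sector-decomposition step can be shown on a one-scale toy integral (Python sketch; SecDec automates this for real multi-loop integrands). For I(eps) = the integral over the unit square of (x+y)^(eps-2), splitting into the sectors x>y and x<y and substituting y = x*t factorizes the endpoint singularity into an explicit 1/eps pole, leaving finite coefficients that can be integrated by Monte Carlo:

        import numpy as np

        # After sector decomposition: I(eps) = (2/eps) * int_0^1 dt (1+t)^(eps-2),
        # whose Laurent coefficients are finite one-dimensional integrals.
        rng = np.random.default_rng(0)
        t = rng.random(200_000)
        f = (1.0 + t) ** -2                # eps^0 part of the sector integrand
        g = f * np.log(1.0 + t)            # eps^1 part from expanding (1+t)^eps
        pole = 2.0 * f.mean()              # coefficient of 1/eps  -> 1.0
        finite = 2.0 * g.mean()            # eps^0 coefficient
        print(f"I(eps) ~ {pole:.4f}/eps + {finite:.4f} + O(eps)")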

  19. MinFinder v2.0: An improved version of MinFinder

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Lagaris, Isaac E.

    2008-10-01

    A new version of the "MinFinder" program is presented that offers an augmented linking procedure for Fortran-77 subprograms, two additional stopping rules and a new start-point rejection mechanism that saves a significant portion of gradient and function evaluations. The method is applied on a set of standard test functions and the results are reported. New version program summaryProgram title: MinFinder v2.0 Catalogue identifier: ADWU_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADWU_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC Licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 14 150 No. of bytes in distributed program, including test data, etc.: 218 144 Distribution format: tar.gz Programming language used: GNU C++, GNU FORTRAN, GNU C Computer: The program is designed to be portable in all systems running the GNU C++ compiler Operating system: Linux, Solaris, FreeBSD RAM: 200 000 bytes Classification: 4.9 Catalogue identifier of previous version: ADWU_v1_0 Journal reference of previous version: Computer Physics Communications 174 (2006) 166-179 Does the new version supersede the previous version?: Yes Nature of problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances that a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques can be trapped in any local minimum. Global optimization is then the appropriate tool. For example, solving a non-linear system of equations via optimization, one may encounter many local minima that do not correspond to solutions, i.e. they are far from zero. Solution method: Using a uniform pdf, points are sampled from a rectangular domain. A clustering technique, based on a typical distance and a gradient criterion, is used to decide from which points a local search should be started. Further searching is terminated when all the local minima inside the search domain are thought to be found. This is accomplished via three stopping rules: the "double-box" stopping rule, the "observables" stopping rule and the "expected minimizers" stopping rule. Reasons for the new version: The link procedure for source code in Fortran 77 is enhanced, two additional stopping rules are implemented and a new criterion for accepting-start points, that economizes on function and gradient calls, is introduced. Summary of revisions:Addition of command line parameters to the utility program make_program. Augmentation of the link process for Fortran 77 subprograms, by linking the final executable with the g2c library. Addition of two probabilistic stopping rules. Introduction of a rejection mechanism to the Checking step of the original method, that reduces the number of gradient evaluations. Additional comments: A technical report describing the revisions, experiments and test runs is packaged with the source code. Running time: Depending on the objective function.
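
    The flavour of the start-point rejection mechanism can be conveyed with a short multistart sketch (Python; MinFinder's actual typical-distance and gradient criteria and its stopping rules are more refined):

        import numpy as np
        from scipy.optimize import minimize

        def multistart(f, bounds, n_samples=200, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            minima = []
            typical = 0.1 * np.linalg.norm(hi - lo)
            for x0 in rng.uniform(lo, hi, size=(n_samples, len(lo))):
                # Reject start points already near a known minimum:
                # this saves the corresponding local search entirely.
                if any(np.linalg.norm(x0 - m) < typical for m in minima):
                    continue
                res = minimize(f, x0)
                if not any(np.linalg.norm(res.x - m) < 1e-4 for m in minima):
                    minima.append(res.x)
            return minima

        # Six-hump camel function: six local minima in [-2,2] x [-1,1].
        f = lambda x: ((4 - 2.1*x[0]**2 + x[0]**4/3) * x[0]**2
                       + x[0]*x[1] + (-4 + 4*x[1]**2) * x[1]**2)
        print(len(multistart(f, [(-2, 2), (-1, 1)])))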

  20. Comparison of LOPES measurements with CoREAS and REAS 3.11 simulations

    NASA Astrophysics Data System (ADS)

    Ludwig, M.; Apel, W. D.; Arteaga-Velázquez, J. C.; Bähren, L.; Bekk, K.; Bertaina, M.; Biermann, P. L.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Chiavassa, A.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Falcke, H.; Fuchs, B.; Fuhrmann, D.; Gemmeke, H.; Grupen, C.; Haug, M.; Haungs, A.; Heck, D.; Hörandel, J. R.; Horneffer, A.; Huber, D.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Krömer, O.; Kuijpers, J.; Link, K.; Łuczak, P.; Mathes, H. J.; Melissas, M.; Morello, C.; Oehlschläger, J.; Palmieri, N.; Pierog, T.; Rautenberg, J.; Rebel, H.; Roth, M.; Rühle, C.; Saftoiu, A.; Schieler, H.; Schmidt, A.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Weindl, A.; Wochele, J.; Zabierowski, J.; Zensus, J. A.

    2013-05-01

    In previous years, LOPES emerged as a very successful experiment measuring the radio emission from air showers in the MHz frequency range. In parallel, the theoretical description of radio emission was developed further, and REAS became a widely used Monte Carlo simulation code. REAS 3 as well as CoREAS are based on the endpoint formalism, i.e., they calculate the emission of the air shower without assuming specific emission mechanisms. While REAS 3 is based on histograms derived from CORSIKA simulations, CoREAS is implemented directly into CORSIKA, without loss of information due to histogramming of the particle distributions. In contrast to earlier versions of REAS, the newest version, REAS 3.11, and CoREAS take into account a realistic atmospheric refractive index. To improve the understanding of the emission processes and judge the quality of the simulations, we compare their predictions with high-quality events measured by LOPES. We present results concerning the lateral distribution measured with 30 east-west aligned LOPES antennas. Only the simulation codes including the refractive index (REAS 3.11 and CoREAS) are able to reproduce the slope of the measured lateral distributions, while REAS 3.0 predicts too-steep lateral distributions and does not predict rising lateral distributions as seen in a few LOPES events. Moreover, REAS 3.11 predicts an absolute amplitude compatible with the LOPES measurements.

  1. The UCLA Design Diversity Experiment (DEDIX) system: A distributed testbed for multiple-version software

    NASA Technical Reports Server (NTRS)

    Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.

    1986-01-01

    To establish a long-term research facility for experimental investigations of design diversity as a means of achieving fault-tolerant systems, a distributed testbed for multiple-version software was designed. It is part of a local network, which utilizes the Locus distributed operating system to operate a set of 20 VAX 11/750 computers. It is used in experiments to measure the efficacy of design diversity and to investigate reliability increases under large-scale, controlled experimental conditions.
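
    The decision core of any such multiple-version scheme is a voter that cross-checks the versions' results; a minimal sketch of the idea (Python; DEDIX's actual cross-check points, tolerances, and recovery logic are far more elaborate):

        from collections import Counter

        def vote(results, quorum=2):
            # Accept the value on which at least `quorum` independently
            # developed versions agree; otherwise signal a failure.
            value, count = Counter(results).most_common(1)[0]
            return value if count >= quorum else None

        print(vote([3.14, 3.14, 2.71]))   # -> 3.14
        print(vote([1.0, 2.0, 3.0]))      # -> None: no majority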

  2. Medium Caliber Lead-Free Electric Primer. Version 2

    DTIC Science & Technology

    2012-09-01

    …variety of techniques including Thermogravimetric Analysis (TGA), base-hydrolysis, and Surface Area Analysis using Brunauer-Emmett-Teller (BET)… Distribution from Thermogravimetric Analysis: Johnson, C. E.; Fallis, S.; Chafin, A. P.; Groshens, T. J.; Higa, K. T.; Ismail, I. M. K.; Hawkins, T. W.

  3. Next-to-minimal SOFTSUSY

    NASA Astrophysics Data System (ADS)

    Allanach, B. C.; Athron, P.; Tunstall, Lewis C.; Voigt, A.; Williams, A. G.

    2014-09-01

    We describe an extension to the SOFTSUSY program that provides for the calculation of the sparticle spectrum in the Next-to-Minimal Supersymmetric Standard Model (NMSSM), where a chiral superfield that is a singlet of the Standard Model gauge group is added to the Minimal Supersymmetric Standard Model (MSSM) fields. Often, a Z3 symmetry is imposed upon the model. SOFTSUSY can calculate the spectrum in this case as well as the case where general Z3-violating terms are added to the soft supersymmetry-breaking terms and the superpotential. The user provides a theoretical boundary condition for the couplings and mass terms of the singlet. Radiative electroweak symmetry breaking data along with electroweak and CKM matrix data are used as weak-scale boundary conditions. The renormalisation group equations are solved numerically between the weak scale and a high energy scale using a nested iterative algorithm. This paper serves as a manual to the NMSSM mode of the program, detailing the approximations and conventions used. Catalogue identifier: ADPM_v4_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADPM_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 154886 No. of bytes in distributed program, including test data, etc.: 1870890 Distribution format: tar.gz Programming language: C++, Fortran. Computer: Personal computer. Operating system: Tested on Linux 3.x. Word size: 64 bits Classification: 11.1, 11.6. Does the new version supersede the previous version?: Yes Catalogue identifier of previous version: ADPM_v3_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 785 Nature of problem: Calculating the supersymmetric particle spectrum and mixing parameters in the next-to-minimal supersymmetric standard model. The solution to the renormalisation group equations must be consistent with boundary conditions on supersymmetry-breaking parameters, as well as with the weak-scale boundary conditions on gauge couplings, Yukawa couplings, and the Higgs potential parameters. Solution method: Nested iterative algorithm and numerical minimisation of the Higgs potential. Reasons for new version: Major extension to include the next-to-minimal supersymmetric standard model. Summary of revisions: Added additional supersymmetric and supersymmetry-breaking parameters associated with the additional gauge singlet. Electroweak symmetry breaking conditions are significantly changed in the next-to-minimal mode, and some sparticle mixing changes. An interface to NMSSMTools has also been included. Some of the object structure has also changed, and the command-line interface has been made more user friendly. Restrictions: SOFTSUSY will provide a solution only in the perturbative regime, and it assumes that all couplings of the model are real (i.e., CP-conserving). If the parameter point under investigation is non-physical for some reason (for example, because the electroweak potential does not have an acceptable minimum), SOFTSUSY returns an error message. Running time: A few seconds per parameter point.

  4. NASADIG - NASA DEVICE INDEPENDENT GRAPHICS LIBRARY (AMDAHL VERSION)

    NASA Technical Reports Server (NTRS)

    Rogers, J. E.

    1994-01-01

    The NASA Device Independent Graphics Library, NASADIG, can be used with many computer-based engineering and management applications. The library gives the user the opportunity to translate data into effective graphic displays for presentation. The software offers many features which allow the user flexibility in creating graphics. These include two-dimensional plots, subplot projections in 3D-space, surface contour line plots, and surface contour color-shaded plots. Routines for three-dimensional plotting, wireframe surface plots, surface plots with hidden line removal, and surface contour line plots are provided. Other features include polar and spherical coordinate plotting, world map plotting utilizing either cylindrical equidistant or Lambert equal area projection, plot translation, plot rotation, plot blowup, splines and polynomial interpolation, area blanking control, multiple log/linear axes, legends and text control, curve thickness control, and multiple text fonts (18 regular, 4 bold). NASADIG contains several groups of subroutines. Included are subroutines for plot area and axis definition; text set-up and display; area blanking; line style set-up, interpolation, and plotting; color shading and pattern control; legend, text block, and character control; device initialization; mixed alphabets setting; and other useful functions. The usefulness of many routines is dependent on the prior definition of basic parameters. The program's control structure uses a serial-level construct with each routine restricted for activation at some prescribed level(s) of problem definition. NASADIG provides the following output device drivers: Selanar 100XL, VECTOR Move/Draw ASCII and PostScript files, Tektronix 40xx, 41xx, and 4510 Rasterizer, DEC VT-240 (4014 mode), IBM AT/PC compatible with SmartTerm 240 emulator, HP Lasergrafix Film Recorder, QMS 800/1200, DEC LN03+ Laserprinters, and HP LaserJet (Series III). NASADIG is written in FORTRAN and is available for several platforms. NASADIG 5.7 is available for DEC VAX series computers running VMS 5.0 or later (MSC-21801), Cray X-MP and Y-MP series computers running UNICOS (COS-10049), and Amdahl 5990 mainframe computers running UTS (COS-10050). NASADIG 5.1 is available for UNIX-based operating systems (MSC-22001). The UNIX version has been successfully implemented on Sun4 series computers running SunOS, SGI IRIS computers running IRIX, Hewlett Packard 9000 computers running HP-UX, and Convex computers running Convex OS (MSC-22001). The standard distribution medium for MSC-21801 is a set of two 6250 BPI 9-track magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. The standard distribution medium for COS-10049 and COS-10050 is a 6250 BPI 9-track magnetic tape in UNIX tar format. Other distribution media and formats may be available upon request. The standard distribution medium for MSC-22001 is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. With minor modification, the UNIX source code can be ported to other platforms including IBM PC/AT series computers and compatibles. NASADIG is also available bundled with TRASYS, the Thermal Radiation Analysis System (COS-10026, DEC VAX version; COS-10040, CRAY version).

  5. NASADIG - NASA DEVICE INDEPENDENT GRAPHICS LIBRARY (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Rogers, J. E.

    1994-01-01

    The NASA Device Independent Graphics Library, NASADIG, can be used with many computer-based engineering and management applications. The library gives the user the opportunity to translate data into effective graphic displays for presentation. The software offers many features which allow the user flexibility in creating graphics. These include two-dimensional plots, subplot projections in 3D-space, surface contour line plots, and surface contour color-shaded plots. Routines for three-dimensional plotting, wireframe surface plots, surface plots with hidden line removal, and surface contour line plots are provided. Other features include polar and spherical coordinate plotting, world map plotting utilizing either cylindrical equidistant or Lambert equal area projection, plot translation, plot rotation, plot blowup, splines and polynomial interpolation, area blanking control, multiple log/linear axes, legends and text control, curve thickness control, and multiple text fonts (18 regular, 4 bold). NASADIG contains several groups of subroutines. Included are subroutines for plot area and axis definition; text set-up and display; area blanking; line style set-up, interpolation, and plotting; color shading and pattern control; legend, text block, and character control; device initialization; mixed alphabets setting; and other useful functions. The usefulness of many routines is dependent on the prior definition of basic parameters. The program's control structure uses a serial-level construct with each routine restricted for activation at some prescribed level(s) of problem definition. NASADIG provides the following output device drivers: Selanar 100XL, VECTOR Move/Draw ASCII and PostScript files, Tektronix 40xx, 41xx, and 4510 Rasterizer, DEC VT-240 (4014 mode), IBM AT/PC compatible with SmartTerm 240 emulator, HP Lasergrafix Film Recorder, QMS 800/1200, DEC LN03+ Laserprinters, and HP LaserJet (Series III). NASADIG is written in FORTRAN and is available for several platforms. NASADIG 5.7 is available for DEC VAX series computers running VMS 5.0 or later (MSC-21801), Cray X-MP and Y-MP series computers running UNICOS (COS-10049), and Amdahl 5990 mainframe computers running UTS (COS-10050). NASADIG 5.1 is available for UNIX-based operating systems (MSC-22001). The UNIX version has been successfully implemented on Sun4 series computers running SunOS, SGI IRIS computers running IRIX, Hewlett Packard 9000 computers running HP-UX, and Convex computers running Convex OS (MSC-22001). The standard distribution medium for MSC-21801 is a set of two 6250 BPI 9-track magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. The standard distribution medium for COS-10049 and COS-10050 is a 6250 BPI 9-track magnetic tape in UNIX tar format. Other distribution media and formats may be available upon request. The standard distribution medium for MSC-22001 is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. With minor modification, the UNIX source code can be ported to other platforms including IBM PC/AT series computers and compatibles. NASADIG is also available bundled with TRASYS, the Thermal Radiation Analysis System (COS-10026, DEC VAX version; COS-10040, CRAY version).

  6. Grid Integrated Distributed PV (GridPV) Version 2.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reno, Matthew J.; Coogan, Kyle

    2014-12-01

    This manual provides the documentation of the MATLAB toolbox of functions for using OpenDSS to simulate the impact of solar energy on the distribution system. The majority of the functions are useful for interfacing OpenDSS and MATLAB, and they are of generic use for commanding OpenDSS from MATLAB and retrieving information from simulations. A set of functions is also included for modeling PV plant output and setting up the PV plant in the OpenDSS simulation. The toolbox contains functions for modeling the OpenDSS distribution feeder on satellite images with GPS coordinates. Finally, example simulation functions are included to show potential uses of the toolbox functions. Each function in the toolbox is documented with the function use syntax, full description, function input list, function output list, example use, and example output.

  7. ASTROP2 Users Manual: A Program for Aeroelastic Stability Analysis of Propfans

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Lucero, John M.

    1996-01-01

    This manual describes the input data required for using the second version of the ASTROP2 (Aeroelastic STability and Response Of Propulsion systems - 2 dimensional analysis) computer code. In ASTROP2 version 2.0, the program is divided into two modules: 2DSTRIP, which calculates the structural dynamic information; and 2DASTROP, which calculates the unsteady aerodynamic force coefficients from which aeroelastic stability can be determined. In the original version of ASTROP2, these two functions were performed by a single program. The improvements in version 2.0 include an option to account for counter-rotation, improved numerical integration, accommodation of non-uniform inflow distributions, and an iterative scheme for flutter-frequency convergence. ASTROP2 can be used for flutter analysis of multi-bladed structures such as those found in compressors, turbines, counter-rotating propellers, or propfans. The analysis combines a two-dimensional unsteady cascade aerodynamics model and a three-dimensional normal-mode structural model using strip theory. The flutter analysis is formulated in the frequency domain, resulting in an eigenvalue determinant. The flutter frequency and damping can be inferred from the eigenvalues.

  8. The International Bathymetric Chart of the Southern Ocean (IBCSO) Version 1.0 - A new bathymetric compilation covering circum-Antarctic waters

    NASA Astrophysics Data System (ADS)

    Arndt, Jan Erik; Schenke, Hans Werner; Jakobsson, Martin; Nitsche, Frank O.; Buys, Gwen; Goleby, Bruce; Rebesco, Michele; Bohoyo, Fernando; Hong, Jongkuk; Black, Jenny; Greku, Rudolf; Udintsev, Gleb; Barrios, Felipe; Reynoso-Peralta, Walter; Taisei, Morishita; Wigley, Rochelle

    2013-06-01

    International Bathymetric Chart of the Southern Ocean (IBCSO) Version 1.0 is a new digital bathymetric model (DBM) portraying the seafloor of the circum-Antarctic waters south of 60°S. IBCSO is a regional mapping project of the General Bathymetric Chart of the Oceans (GEBCO). The IBCSO Version 1.0 DBM has been compiled from all available bathymetric data collectively gathered by more than 30 institutions from 15 countries. These data include multibeam and single-beam echo soundings, digitized depths from nautical charts, regional bathymetric gridded compilations, and predicted bathymetry. Specific gridding techniques were applied to compile the DBM from the bathymetric data of different origin, spatial distribution, resolution, and quality. The IBCSO Version 1.0 DBM has a resolution of 500 × 500 m, based on a polar stereographic projection, and is publicly available together with a digital chart for printing from the project website (www.ibcso.org) and at http://dx.doi.org/10.1594/PANGAEA.805736.

  9. COOL: A code for Dynamic Monte Carlo Simulation of molecular dynamics

    NASA Astrophysics Data System (ADS)

    Barletta, Paolo

    2012-02-01

    COOL is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in a harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually along its trajectory; consequently, properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can easily be generalised to describe more complicated processes, such as the inclusion of inelastic collisions or the presence of more than two species in the trap. New version program summary. Program title: COOL Catalogue identifier: AEHJ_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1 097 733 No. of bytes in distributed program, including test data, etc.: 18 425 722 Distribution format: tar.gz Programming language: C++ Computer: Desktop Operating system: Linux RAM: 500 Mbytes Classification: 16.7, 23 Catalogue identifier of previous version: AEHJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 388 Does the new version supersede the previous version?: Yes Nature of problem: Simulation of the sympathetic cooling process occurring for two molecular gases co-trapped in a deep optical trap. Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus exclusively driven by the external optical field. The rare inter-particle collisions are treated with an acceptance/rejection mechanism, that is, by comparing a random number to the collision probability defined in terms of the inter-particle cross section and the centre-of-mass energy. All particles in the trap are individually simulated, so that at each time step a number of useful quantities, such as the spatial densities or the energy distributions, can be readily evaluated. Reasons for new version: A number of issues made the old version very difficult to port to different architectures, and impossible to compile on Windows. Furthermore, the test-run results could be replicated only poorly, as a consequence of the simulations being very sensitive to the machine background noise. In practice, as the particles are simulated for billions of steps, a small difference in the initial conditions due to the finiteness of double-precision reals can have macroscopic effects on the output. This is not a problem in its own right, but a feature of such simulations. However, for the sake of completeness, we have introduced a quadruple-precision version of the code which yields the same results independently of the software used to compile it or the hardware architecture on which the code is run. Summary of revisions: A number of bugs in the dynamic memory allocation have been detected and removed, mostly in the cool.cpp file. All files have been renamed with a .cpp extension, rather than .c++, to make them compatible with Windows. The random number generator routine, which is the computational core of the algorithm, has been re-written in C++, so there is no longer any need for cross Fortran-C++ compilation.
A quadruple-precision version of the code is provided alongside the original double-precision one. The makefile allows the user to choose which one to compile by setting the switch PRECISION to either double or quad. The source code and header files have been organised into directories to make the source tree neater. Restrictions: The in-trap motion of the particles is treated classically. Running time: The running time is relatively short, 1-2 hours. However, it is convenient to replicate each simulation several times with different initialisations of the random sequence.
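
    The acceptance/rejection collision step described above can be sketched compactly (illustrative Python, not the package's C++):

        import numpy as np

        rng = np.random.default_rng(0)

        def maybe_collide(v1, v2, sigma, sigma_vr_max):
            # Accept a candidate pair with probability ~ sigma * |v1 - v2|,
            # then redraw the relative velocity isotropically about the
            # (conserved) centre-of-mass velocity: an elastic collision.
            vr = np.linalg.norm(v1 - v2)
            if rng.random() * sigma_vr_max >= sigma * vr:
                return v1, v2                      # rejected: no collision
            vcm = 0.5 * (v1 + v2)
            cos_t = 1.0 - 2.0 * rng.random()
            sin_t = np.sqrt(1.0 - cos_t**2)
            phi = 2.0 * np.pi * rng.random()
            e = np.array([sin_t*np.cos(phi), sin_t*np.sin(phi), cos_t])
            return vcm + 0.5*vr*e, vcm - 0.5*vr*e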

  10. SU-E-J-101: Retroactive Calculation of TLD and Film Dose in Anthropomorphic Phantom as Assessment of Updated TPS Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alkhatib, H; Oves, S

    Purpose: To demonstrate a quick and comprehensive method of verifying the accuracy of an updated dose model by recalculating the dose distribution in an anthropomorphic phantom with a new version of the TPS and comparing the results to measured values. Methods: CT images and the IMRT plan of an RPC anthropomorphic head phantom, previously calculated with Pinnacle 9.0, were re-computed using Pinnacle 9.2 and 9.6. The dosimeters within the phantom include four TLD capsules representing a primary PTV, two TLD capsules representing a secondary PTV, and two TLD capsules representing an organ at risk. Also included were three sheets of Gafchromic film. Performance of the updated TPS version was assessed by recalculating point doses and dose profiles corresponding to the TLD and film positions, respectively, and then comparing the results to the values reported by the RPC. Results: Comparing calculated doses to the measured doses reported by the RPC yielded an average disagreement of 1.48%, 2.04%, and 2.10% for versions 9.0, 9.2, and 9.6, respectively. The computed dose points all meet the RPC's passing criteria, with the exception of the point representing the superior organ at risk in version 9.6. However, qualitative analysis of the recalculated dose profiles showed improved agreement with those of the RPC, especially in the penumbra region. Conclusion: This work compares the calculation results of Pinnacle versions 9.2 and 9.6 against version 9.0. Additionally, this study illustrates a method by which the user can gain confidence when upgrading to a newer version of the treatment planning system.

  11. Final Report for the Development of the NASA Technical Report Server (NTRS)

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.

    2005-01-01

    The author performed a variety of research, development and consulting tasks for NASA Langley Research Center in the area of digital libraries (DLs) and supporting technologies, such as the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). In particular, the development focused on the NASA Technical Report Server (NTRS) and its transition from a distributed searching model to one that uses the OAI-PMH. The Open Archives Initiative (OAI) is an international consortium focused on furthering the interoperability of DLs through the use of "metadata harvesting". The OAI-PMH version of NTRS went into public production on April 28, 2003. Since that time, it has been extremely well received. In addition to providing the NTRS user community with a higher level of service than the previous, distributed searching version of NTRS, it has provided more insight into how the user community uses NTRS in a variety of deployment scenarios. This report details the design, implementation and maintenance of the NTRS. Source code is included in the appendices.
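
    For orientation, harvesting Dublin Core records over OAI-PMH takes only a few lines; the sketch below (Python) uses the protocol's standard ListRecords verb against a placeholder endpoint, since NTRS's harvesting URL has changed over the years:

        import urllib.request
        import xml.etree.ElementTree as ET

        BASE = "https://example.org/oai"         # placeholder repository URL
        url = BASE + "?verb=ListRecords&metadataPrefix=oai_dc"
        with urllib.request.urlopen(url) as resp:
            tree = ET.parse(resp)

        ns = {"oai": "http://www.openarchives.org/OAI/2.0/",
              "dc": "http://purl.org/dc/elements/1.1/"}
        for rec in tree.findall(".//oai:record", ns):
            title = rec.find(".//dc:title", ns)
            print(title.text if title is not None else "(no title)")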

  12. A penalized framework for distributed lag non-linear models.

    PubMed

    Gasparrini, Antonio; Scheipl, Fabian; Armstrong, Ben; Kenward, Michael G

    2017-09-01

    Distributed lag non-linear models (DLNMs) are a modelling tool for describing potentially non-linear and delayed dependencies. Here, we illustrate an extension of the DLNM framework through the use of penalized splines within generalized additive models (GAM). This extension offers built-in model selection procedures and the possibility of accommodating assumptions on the shape of the lag structure through specific penalties. In addition, this framework includes, as special cases, simpler models previously proposed for linear relationships (DLMs). Alternative versions of penalized DLNMs are compared with each other and with the standard unpenalized version in a simulation study. Results show that this penalized extension to the DLNM class provides greater flexibility and improved inferential properties. The framework exploits recent theoretical developments of GAMs and is implemented using efficient routines within freely available software. Real-data applications are illustrated through two reproducible examples in time series and survival analysis. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
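
    The mechanics of a penalized distributed-lag fit can be illustrated with a stripped-down linear version (Python, with a second-difference ridge penalty standing in for the paper's spline penalties; the real framework uses GAM machinery):

        import numpy as np

        def lag_basis(x, max_lag):
            # Column l holds the exposure series lagged by l steps.
            Q = np.zeros((len(x), max_lag + 1))
            for l in range(max_lag + 1):
                Q[l:, l] = x[:len(x) - l]
            return Q

        def penalized_dlm(x, y, max_lag, lam):
            # Penalize second differences of the lag curve for smoothness.
            Q = lag_basis(x, max_lag)
            D = np.diff(np.eye(max_lag + 1), 2, axis=0)
            return np.linalg.solve(Q.T @ Q + lam * D.T @ D, Q.T @ y)

        rng = np.random.default_rng(0)
        x = rng.normal(size=500)
        true = np.exp(-np.arange(8) / 2.0)          # smoothly decaying lags
        y = lag_basis(x, 7) @ true + rng.normal(scale=0.5, size=500)
        print(penalized_dlm(x, y, max_lag=7, lam=10.0).round(2))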

  13. Employing online quantum random number generators for generating truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2013-01-01

    The presented package for the Mathematica computing system allows the harnessing of quantum random number generators (QRNG) for investigating the statistical properties of quantum states. The described package implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amount of random data. New version program summaryProgram title: TRQS Catalogue identifier: AEKA_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 18 134 No. of bytes in distributed program, including test data, etc.: 2 520 49 Distribution format: tar.gz Programming language: Mathematica, C. Computer: Any supporting Mathematica in version 7 or higher. Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: Case-dependent Supplementary material: Fig. 1 mentioned below can be downloaded. Classification: 4.15. External routines: Quantis software library (http://www.idquantique.com/support/quantis-trng.html) Catalogue identifier of previous version: AEKA_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183(2012)118 Does the new version supersede the previous version?: Yes Nature of problem: Generation of random density matrices and utilization of high-quality random numbers for the purpose of computer simulation. Solution method: Use of a physical quantum random number generator and an on-line service providing access to the source of true random numbers generated by quantum real number generator. Reasons for new version: Added support for the high-speed on-line quantum random number generator and improved methods for retrieving lists of random numbers. Summary of revisions: The presented version provides two signicant improvements. The first one is the ability to use the on-line Quantum Random Number Generation service developed by PicoQuant GmbH and the Nano-Optics groups at the Department of Physics of Humboldt University. The on-line service supported in the version 2.0 of the TRQS package provides faster access to true randomness sources constructed using the laws of quantum physics. The service is freely available at https://qrng.physik.hu-berlin.de/. The use of this service allows using the presented package with the need of a physical quantum random number generator. The second improvement introduced in this version is the ability to retrieve arrays of random data directly for the used source. This increases the speed of the random number generation, especially in the case of an on-line service, where it reduces the time necessary to establish the connection. Thanks to the speed improvement of the presented version, the package can now be used in simulations requiring larger amounts of random data. Moreover, the functions for generating random numbers provided by the current version of the package more closely follow the pattern of functions for generating pseudo- random numbers provided in Mathematica. 
Additional comments: Speed comparison: The implementation of the support for the QRNG on-line service provides a noticeable improvement in the speed of random number generation. For samples of real numbers of size 10^1, 10^2, …, 10^7, the times required to generate these samples using the Quantis USB device and the QRNG service are compared in Fig. 1. The presented results show that the use of the on-line service provides faster access to random numbers. One should note, however, that the speed gain can increase or decrease depending on the connection speed between the computer and the server providing random numbers. Running time: Depends on the used source of randomness and the amount of random data used in the experiment. References: [1] M. Wahl, M. Leifgen, M. Berlin, T. Röhlicke, H.-J. Rahn, O. Benson, An ultrafast quantum random number generator with provably bounded output bias based on photon arrival time measurements, Applied Physics Letters, Vol. 98, 171105 (2011). http://dx.doi.org/10.1063/1.3578456.
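    For illustration, the random-density-matrix generation at the heart of this record can be sketched outside Mathematica. Below is a minimal Python sketch of one standard construction, the Ginibre-ensemble recipe; the NumPy-based sampling and the dimension chosen are assumptions for the example, not the package's own code.

        import numpy as np

        def random_density_matrix(dim, rng=None):
            # Hilbert-Schmidt-measure random state: a complex Ginibre matrix G
            # is normalized so that rho = G G^dagger / Tr(G G^dagger) is
            # Hermitian, positive semi-definite and has unit trace.
            rng = rng or np.random.default_rng()
            g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
            rho = g @ g.conj().T
            return rho / np.trace(rho).real

        rho = random_density_matrix(4)
        assert np.isclose(np.trace(rho).real, 1.0)

    In the package itself, the random draws would be fed by the Quantis device or the on-line QRNG service rather than a pseudo-random generator.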

  14. CADNA_C: A version of CADNA for use with C or C++ programs

    NASA Astrophysics Data System (ADS)

    Lamotte, Jean-Luc; Chesneaux, Jean-Marie; Jézéquel, Fabienne

    2010-11-01

    The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. The CADNA_C version enables this estimation in C or C++ programs, while the previous version had been developed for Fortran programs. The CADNA_C version has the same features as the previous one: with CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. New version program summary. Program title: CADNA_C Catalogue identifier: AEGQ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 60 075 No. of bytes in distributed program, including test data, etc.: 710 781 Distribution format: tar.gz Programming language: C++ Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM Operating system: LINUX, UNIX Classification: 6.5 Catalogue identifier of previous version: AEAT_v1_0 Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 933 Does the new version supersede the previous version?: No Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: The previous version (AEAT_v1_0) enables the estimation of round-off error propagation in Fortran programs [2]. The new version has been developed to enable this estimation in C or C++ programs. Summary of revisions: The CADNA_C source code consists of one assembly language file (cadna_rounding.s) and twenty-three C++ language files (including three header files). cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the C++ compiler used. This assembly file contains routines which are frequently called in the CADNA_C C++ files to change the rounding mode. The C++ language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA_C-specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. 
As a remark, on 64-bit processors, the mathematical library associated with the GNU C++ compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore, if CADNA_C is used on a 64-bit processor with the GNU C++ compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the argument of a mathematical function is never lost. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf and a reference guide named ref_cadna.pdf. The user guide shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The reference guide briefly describes each function of the library. The source code (which consists of C++ and assembly files) is located in the src directory. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
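    The Discrete Stochastic Arithmetic estimate can be conveyed with a toy Python sketch (an emulation of the idea only, not the CADNA library): every arithmetic result is perturbed by one unit in the last place with a random sign, the computation is repeated, and the spread of the results bounds the number of exact significant digits.

        import math, random

        EPS = 2.0 ** -53  # unit roundoff of IEEE double precision

        def r(x):
            # crude stand-in for CADNA's random rounding mode:
            # nudge the result by one ulp in a random direction
            return x * (1.0 + random.choice((-1.0, 1.0)) * EPS)

        def sample(n):
            # a long accumulation evaluated with randomly perturbed operations;
            # the exact value telescopes to 1 - 1/(n + 1)
            s = 0.0
            for k in range(1, n + 1):
                s = r(s + r(1.0 / k) - r(1.0 / (k + 1)))
            return s

        runs = [sample(100000) for _ in range(3)]
        mean = sum(runs) / len(runs)
        sigma = math.sqrt(sum((x - mean) ** 2 for x in runs) / (len(runs) - 1))
        digits = math.log10(abs(mean) / sigma) if sigma else 15.9
        print(f"result {mean:.17g}, roughly {digits:.1f} exact significant digits")

    The real library overloads C++ operators on stochastic types and switches the hardware rounding mode, which is both more faithful and far cheaper than this emulation.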

  15. The influence of competition between plant functional types in the Canadian Terrestrial Ecosystem Model (CTEM) v. 2.0

    NASA Astrophysics Data System (ADS)

    Melton, Joe; Arora, Vivek

    2015-04-01

    The Canadian Terrestrial Ecosystem Model (CTEM) is the interactive vegetation component in the earth system modelling framework of the Canadian Centre for Climate Modelling and Analysis (CCCma). In its current framework, CTEM uses prescribed fractional coverage of plant functional types (PFTs) in each grid cell. In reality, vegetation cover is continually adjusting to changes in climate, atmospheric composition, and anthropogenic forcing, for example, through human-caused fires and CO2 fertilization. These changes in vegetation spatial patterns occur over timescales of years to centuries as tree migration is a slow process and vegetation distributions inherently have inertia. Here, we present version 2.0 of CTEM that includes a representation of competition between PFTs through a modified version of the Lotka-Volterra (L-V) predator-prey equations. The simulated areal extents of CTEM's seven non-crop PFTs are compared with available observation-based estimates, and simulations using unmodified L-V equations (similar to other models like TRIFFID), to demonstrate that the model is able to represent the broad spatial distributions of its seven PFTs at the global scale. Differences remain, however, since representing the multitude of plant species with just seven non-crop PFTs only allows the large scale climatic controls on the distributions of PFTs to be captured. As expected, PFTs that exist in climate niches are difficult to represent either due to the coarse spatial resolution of the model and the corresponding driving climate or the limited number of PFTs used to model the terrestrial ecosystem processes. The geographic and zonal distributions of primary terrestrial carbon pools and fluxes from the versions of CTEM that use prescribed and dynamically simulated fractional coverage of PFTs compare reasonably with each other and observation-based estimates. These results illustrate that the parametrization of competition between PFTs in CTEM behaves in a reasonably realistic manner while the use of unmodified L-V equations results in unrealistic plant distributions.
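    As an illustration of the competition idea, the sketch below integrates the classic (unmodified) Lotka-Volterra competition equations for three hypothetical PFT fractional coverages in Python. CTEM v. 2.0 uses a modified form of these equations, and the growth rates and competition matrix here are invented purely to show the structure.

        import numpy as np

        def lv_step(f, growth, comp, dt=0.01):
            # one Euler step of df_i/dt = f_i * (growth_i - sum_j comp[i, j] * f_j)
            # for the fractional coverages f of the competing PFTs
            f = f + dt * f * (growth - comp @ f)
            return np.clip(f, 0.0, 1.0)

        f = np.array([0.3, 0.2, 0.1])        # initial fractional coverages
        growth = np.array([1.0, 0.8, 0.6])   # hypothetical expansion rates
        comp = np.array([[1.0, 0.5, 0.4],    # hypothetical competition matrix
                         [0.6, 1.0, 0.3],
                         [0.7, 0.4, 1.0]])
        for _ in range(5000):
            f = lv_step(f, growth, comp)
        print("equilibrium coverages:", f)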

  16. 21 CFR 801.16 - Medical devices; Spanish-language version of certain required statements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Medical devices; Spanish-language version of....16 Medical devices; Spanish-language version of certain required statements. If devices restricted to prescription use only are labeled solely in Spanish for distribution in the Commonwealth of Puerto Rico where...

  17. 21 CFR 201.16 - Drugs; Spanish-language version of certain required statements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 4 2010-04-01 2010-04-01 false Drugs; Spanish-language version of certain...; Spanish-language version of certain required statements. An increasing number of medications restricted to prescription use only are being labeled solely in Spanish for distribution in the Commonwealth of Puerto Rico...

  18. CIF2Cell: Generating geometries for electronic structure programs

    NASA Astrophysics Data System (ADS)

    Björkman, Torbjörn

    2011-05-01

    The CIF2Cell program generates the geometrical setup for a number of electronic structure programs based on the crystallographic information in a Crystallographic Information Framework (CIF) file. The program will retrieve the space group number, Wyckoff positions and crystallographic parameters, make a sensible choice for Bravais lattice vectors (primitive or principal cell) and generate all atomic positions. Supercells can be generated and alloys are handled gracefully. The code currently has output interfaces to the electronic structure programs ABINIT, CASTEP, CPMD, Crystal, Elk, Exciting, EMTO, Fleur, RSPt, Siesta and VASP. Program summary. Program title: CIF2Cell Catalogue identifier: AEIM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIM_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPL version 3 No. of lines in distributed program, including test data, etc.: 12 691 No. of bytes in distributed program, including test data, etc.: 74 933 Distribution format: tar.gz Programming language: Python (versions 2.4-2.7) Computer: Any computer that can run Python (versions 2.4-2.7) Operating system: Any operating system that can run Python (versions 2.4-2.7) Classification: 7.3, 7.8, 8 External routines: PyCIFRW [1] Nature of problem: Generate the geometrical setup of a crystallographic cell for a variety of electronic structure programs from data contained in a CIF file. Solution method: The CIF file is parsed using routines contained in the library PyCIFRW [1], and crystallographic as well as bibliographic information is extracted. The program then generates the principal cell from symmetry information, crystal parameters, space group number and Wyckoff sites. Reduction to a primitive cell is then performed, and the resulting cell is output to suitably named files along with documentation of the information source generated from any bibliographic information contained in the CIF file. If the space-group symmetries are not present in the CIF file, the program falls back on internal tables, so only the minimal input of space group, crystal parameters and Wyckoff positions is required. Additional key features are the handling of alloys and supercell generation. Additional comments: Currently implements support for the following general purpose electronic structure programs: ABINIT [2,3], CASTEP [4], CPMD [5], Crystal [6], Elk [7], exciting [8], EMTO [9], Fleur [10], RSPt [11], Siesta [12] and VASP [13-16]. Running time: The examples provided in the distribution take only seconds to run.
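    The central geometric step, expanding representative Wyckoff positions into the full set of atomic positions with the space-group operators, can be sketched in Python as below. This illustrates the idea rather than CIF2Cell's actual routines, and the two-operator group in the example is hypothetical.

        import numpy as np

        def expand_wyckoff(ops, positions, tol=1e-6):
            # apply each operator (rotation matrix R, translation t) to every
            # representative position, wrap into [0, 1) and drop duplicates
            atoms = []
            for x in positions:
                for rot, trans in ops:
                    y = (rot @ np.asarray(x, dtype=float) + trans) % 1.0
                    if not any(np.allclose(y, a, atol=tol) for a in atoms):
                        atoms.append(y)
            return atoms

        # hypothetical group: identity plus a body-centring translation
        ops = [(np.eye(3), np.zeros(3)),
               (np.eye(3), np.array([0.5, 0.5, 0.5]))]
        print(expand_wyckoff(ops, [[0.0, 0.0, 0.0]]))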

  19. Atmospheric Chemistry Data Products

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This presentation poster covers data products from the Distributed Active Archive Center (DAAC) of the Goddard Earth Sciences (GES) Data and Information Services Center (DISC). Total Ozone Mapping Spectrometer (TOMS) products introduced in the presentation include TOMS Version 8 as well as Aura, which together provide 25 years of TOMS and Upper Atmosphere Research Satellite (UARS) data. The presentation also lists a number of atmospheric chemistry and dynamics data sets at the DAAC.

  20. Cross-cultural adaptation and measurement properties testing of the Iconographical Falls Efficacy Scale (Icon-FES).

    PubMed

    Franco, Marcia Rodrigues; Pinto, Rafael Zambelli; Delbaere, Kim; Eto, Bianca Yumie; Faria, Maíra Sgobbi; Aoyagi, Giovana Ayumi; Steffens, Daniel; Pastre, Carlos Marcelo

    2018-02-14

    The Iconographical Falls Efficacy Scale (Icon-FES) is an innovative tool to assess concern about falling that uses pictures as visual cues to provide more complete environmental contexts. Advantages of Icon-FES over previous scales include the addition of more demanding balance-related activities, the ability to assess concern about falling in highly functioning older people, and its normal distribution. To perform a cross-cultural adaptation and to assess the measurement properties of the 30-item and 10-item Icon-FES in a community-dwelling Brazilian older population. The cross-cultural adaptation followed the recommendations of international guidelines. We evaluated the measurement properties (i.e. internal consistency, test-retest reproducibility, standard error of the measurement, minimal detectable change, construct validity, ceiling/floor effect, data distribution and discriminative validity) in 100 community-dwelling people aged ≥60 years. The 30-item and 10-item Icon-FES-Brazil showed good internal consistency (alpha and omega > 0.70) and excellent intra-rater reproducibility (ICC(2,1) = 0.96 and 0.93, respectively). According to the standard error of the measurement and minimal detectable change, the magnitude of change needed to exceed the measurement error and variability was 7.2 and 3.4 points for the 30-item and 10-item Icon-FES, respectively. We observed an excellent correlation between both versions of the Icon-FES and the Falls Efficacy Scale - International (rho = 0.83, p < 0.001 [30-item version]; rho = 0.76, p < 0.001 [10-item version]). Both Icon-FES versions showed normal distribution, no floor/ceiling effects, and were able to discriminate between groups relating to fall risk factors. Icon-FES-Brazil is a semantically and linguistically appropriate tool with acceptable measurement properties to evaluate concern about falling among the community-dwelling older population. Copyright © 2018 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Published by Elsevier Editora Ltda. All rights reserved.
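    The reported reliability figures follow from the standard formulas SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM. The short Python sketch below reproduces the published MDC values from the published ICCs; the baseline standard deviations are back-solved assumptions for illustration, not values quoted in the paper.

        import math

        def sem(sd, icc):
            # standard error of measurement from test-retest reliability
            return sd * math.sqrt(1.0 - icc)

        def mdc95(sem_value):
            # minimal detectable change at the 95% confidence level
            return 1.96 * math.sqrt(2.0) * sem_value

        print(round(mdc95(sem(13.0, 0.96)), 1))  # ~7.2 points, 30-item version
        print(round(mdc95(sem(4.6, 0.93)), 1))   # ~3.4 points, 10-item version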

  1. The artificial and natural isotopes distribution in sedge (Carex L.) biomass from the Yenisei River flood-plain: Adaptation of the sequential elution technique.

    PubMed

    Kropacheva, Marya; Melgunov, Mikhail; Makarova, Irina

    2017-02-01

    The study of migration pathways of artificial isotopes in the flood-plain biogeocoenoses impacted by nuclear fuel cycle plants requires determination of isotope speciations in the biomass of higher terrestrial plants. The optimal method for their determination is the sequential elution technique (SET). The technique was originally developed to study atmospheric pollution by metals and has been applied to lichens, terrestrial and aquatic bryophytes. Due to morphological and physiological differences, it was necessary to adapt SET for new objects: coastal macrophytes growing on the banks of the Yenisei flood-plain islands in the near impact zone of the Krasnoyarsk Mining and Chemical Combine (KMCC). In the first version of SET, 20 mM Na₂EDTA was used as the reagent at the first stage; in the second version of SET, it was 1 M CH₃COONH₄. Four fractions were extracted. Fraction I included elements from the intercellular space and those connected with the outer side of the cell wall. Fraction II contained intracellular elements; fraction III contained elements firmly bound in the cell wall and associated structures; fraction IV contained the insoluble residue. Adaptation of SET has shown that the first stage should be performed immediately after sampling. Separation of fractions III and IV can be neglected, since the output of isotopes into fraction IV is at the level of the detection error. The most adequate version of SET for terrestrial vascular plants is the version using 20 mM Na₂EDTA at the first stage. The isotope ⁹⁰Sr is most sensitive to changes in the technique. Its distribution depends strongly on both the extractant used at stage 1 and the duration of the first stage. Distribution of artificial radionuclides in the biomass of terrestrial vascular plants can vary from year to year and depends significantly on the age of the plant. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Simulation of ultra-high energy photon propagation with PRESHOWER 2.0

    NASA Astrophysics Data System (ADS)

    Homola, P.; Engel, R.; Pysz, A.; Wilczyński, H.

    2013-05-01

    In this paper we describe a new release of the PRESHOWER program, a tool for Monte Carlo simulation of the propagation of ultra-high energy photons in the magnetic field of the Earth. The PRESHOWER program is designed to calculate magnetic pair production and bremsstrahlung and should be used together with other programs to simulate extensive air showers induced by photons. The main new features of the PRESHOWER code include a much faster algorithm applied in the procedures of simulating the processes of gamma conversion and bremsstrahlung, an update of the geomagnetic field model, and a minor correction. The new simulation procedure increases the flexibility of the code so that it can also be applied to other magnetic field configurations such as those encountered, for example, in the vicinity of the Sun or neutron stars. Program summary. Program title: PRESHOWER 2.0 Catalog identifier: ADWG_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADWG_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3968 No. of bytes in distributed program, including test data, etc.: 37198 Distribution format: tar.gz Programming language: C, FORTRAN 77. Computer: Intel-Pentium based PC. Operating system: Linux or Unix. RAM: < 100 kB Classification: 1.1. Does the new version supersede the previous version?: Yes Catalog identifier of previous version: ADWG_v1_0 Journal reference of previous version: Comput. Phys. Comm. 173 (2005) 71 Nature of problem: Simulation of a cascade of particles initiated by a UHE photon in a magnetic field. Solution method: The primary photon is tracked until its conversion into an e+ e- pair. If conversion occurs, each individual particle in the resultant preshower is checked for either bremsstrahlung radiation (electrons) or secondary gamma conversion (photons). Reasons for new version: Slow and outdated algorithm in the old version (a significant speed-up is possible); extension of the program to allow simulations also for extraterrestrial magnetic field configurations (e.g. neutron stars) and very long path lengths. Summary of revisions: A veto algorithm was introduced in the gamma conversion and bremsstrahlung tracking procedures. The length of the tracking step is now variable along the track and depends on the probability of the process expected to occur. The new algorithm significantly reduces the number of tracking steps and speeds up the execution of the program. The geomagnetic field model has been updated to IGRF-11, allowing for interpolations up to the year 2015. Numerical Recipes procedures to calculate modified Bessel functions have been replaced with an open-source CERN routine, DBSKA. One minor bug has been fixed. Restrictions: Gamma conversion into particles other than an electron pair is not considered. The spatial structure of the cascade is neglected. Additional comments: The following routines are supplied in the package: IGRF [1, 2], DBSKA [3], ran2 [4]. Running time: 100 preshower events with primary energy 10^20 eV require about 200 s of CPU time on a 2.66 GHz processor; at 10^21 eV, about 600 s.
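    The veto algorithm mentioned in the summary of revisions is the standard thinning trick for sampling a process whose rate varies along the track. A minimal Python sketch of the idea follows; the toy rate function and numbers are assumptions, since PRESHOWER's actual rates come from the magnetic pair-production and bremsstrahlung probabilities.

        import math, random

        def first_interaction_point(rate, rate_max, path_length):
            # draw candidate points from a homogeneous process at rate_max and
            # accept each with probability rate(x) / rate_max; returns None if
            # the particle survives the whole path without interacting
            x = 0.0
            while True:
                x += -math.log(1.0 - random.random()) / rate_max
                if x > path_length:
                    return None
                if random.random() < rate(x) / rate_max:
                    return x

        # toy rate that grows along the path, e.g. an increasing field strength
        print(first_interaction_point(lambda x: 0.1 + 0.05 * x, 0.6, 10.0))

    Because steps between candidates are long wherever the true rate is low, the number of tracking steps drops sharply compared with a fixed fine step.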

  3. AF-GEOSPACE Version 2.1

    NASA Astrophysics Data System (ADS)

    Hilmer, R. V.; Ginet, G. P.; Hall, T.; Holeman, E.; Madden, D.; Tautz, M.; Roth, C.

    2004-05-01

    AF-GEOSpace is a graphics-intensive software program with space environment models and applications developed and distributed by the Space Weather Center of Excellence at AFRL. A review of current (Version 2.0) and planned (Version 2.1) AF-GEOSpace capabilities will be given. A wide range of physical domains is represented enabling the software to address such things as solar disturbance propagation, radiation belt configuration, and ionospheric auroral particle precipitation and scintillation. The software is currently being used to aid with the design, operation, and simulation of a wide variety of communications, navigation, and surveillance systems. Building on the success of previous releases, AF-GEOSpace has become a platform for the rapid prototyping of automated operational and simulation space weather visualization products and helps with a variety of tasks, including: orbit specification for radiation hazard avoidance; satellite design assessment and post-event anomaly analysis; solar disturbance effects forecasting; frequency and antenna management for radar and HF communications; determination of link outage regions for active ionospheric conditions; scientific model validation and comparison, physics research, and education. Version 2.0 provided a simplified graphical user interface, improved science and application modules, and significantly enhanced graphical performance. Common input data archive sets, application modules, and 1-D, 2-D, and 3-D visualization tools are provided to all models. Dynamic capabilities permit multiple environments to be generated at user-specified time intervals while animation tools enable displays such as satellite orbits and environment data together as a function of time. Building on the existing Version 2.0 software architecture, AF-GEOSpace Version 2.1 is currently under development and will include a host of new modules to provide, for example, geosynchronous charged particle fluxes, neutral atmosphere densities, cosmic ray cutoff maps, low-altitude trapped proton belt specification, and meteor shower/storm fluxes with spacecraft impact probabilities. AF-GEOSpace Version 2.1 is being developed for Windows NT/2000/XP and Linux systems.

  4. Production version of the extended NASA-Langley Vortex Lattice FORTRAN computer program. Volume 1: User's guide

    NASA Technical Reports Server (NTRS)

    Lamar, J. E.; Herbert, H. E.

    1982-01-01

    The latest production version, MARK IV, of the NASA-Langley vortex lattice computer program is summarized. All viable subcritical aerodynamic features of previous versions were retained. This version extends the previously documented program capabilities to four planforms, 400 panels, and enables the user to obtain vortex-flow aerodynamics on cambered planforms, flowfield properties off the configuration in attached flow, and planform longitudinal load distributions.

  5. Atlantic meridional overturning circulation during the Last Glacial Maximum.

    PubMed

    Lynch-Stieglitz, Jean; Adkins, Jess F; Curry, William B; Dokken, Trond; Hall, Ian R; Herguera, Juan Carlos; Hirschi, Joël J-M; Ivanova, Elena V; Kissel, Catherine; Marchal, Olivier; Marchitto, Thomas M; McCave, I Nicholas; McManus, Jerry F; Mulitza, Stefan; Ninnemann, Ulysses; Peeters, Frank; Yu, Ein-Fen; Zahn, Rainer

    2007-04-06

    The circulation of the deep Atlantic Ocean during the height of the last ice age appears to have been quite different from today. We review observations implying that Atlantic meridional overturning circulation during the Last Glacial Maximum was neither extremely sluggish nor an enhanced version of present-day circulation. The distribution of the decay products of uranium in sediments is consistent with a residence time for deep waters in the Atlantic only slightly greater than today. However, evidence from multiple water-mass tracers supports a different distribution of deep-water properties, including density, which is dynamically linked to circulation.

  6. CPsuperH2.0: An improved computational tool for Higgs phenomenology in the MSSM with explicit CP violation

    NASA Astrophysics Data System (ADS)

    Lee, J. S.; Carena, M.; Ellis, J.; Pilaftsis, A.; Wagner, C. E. M.

    2009-02-01

    We describe the Fortran code CPsuperH2.0, which contains several improvements and extensions of its predecessor CPsuperH. It implements improved calculations of the Higgs-boson pole masses, notably a full treatment of the 4×4 neutral Higgs propagator matrix including the Goldstone boson and a more complete treatment of threshold effects in self-energies and Yukawa couplings, improved treatments of two-body Higgs decays, some important three-body decays, and two-loop Higgs-mediated contributions to electric dipole moments. CPsuperH2.0 also implements an integrated treatment of several B-meson observables, including the branching ratios of B_s→μμ, B_d→ττ, B_u→τν, B→X_sγ and the latter's CP-violating asymmetry A_CP, and the supersymmetric contributions to the B^0_{s,d}-B̄^0_{s,d} mass differences. These additions make CPsuperH2.0 an attractive integrated tool for analyzing supersymmetric CP and flavour physics as well as searches for new physics at high-energy colliders such as the Tevatron, LHC and linear colliders. Program summary. Program title: CPsuperH2.0 Catalogue identifier: ADSR_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSR_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 13 290 No. of bytes in distributed program, including test data, etc.: 89 540 Distribution format: tar.gz Programming language: Fortran 77 Computer: PC running under Linux and computers in Unix environment Operating system: Linux RAM: 32 Mbytes Classification: 11.1 Catalogue identifier of the previous version: ADSR_v1_0 Journal reference of the previous version: CPC 156 (2004) 283 Does the new version supersede the previous version?: Yes Nature of problem: The calculations of the mass spectrum, decay widths and branching ratios of the neutral and charged Higgs bosons in the Minimal Supersymmetric Standard Model with explicit CP violation have been improved. The program is based on recent renormalization-group-improved diagrammatic calculations that include dominant higher-order logarithmic and threshold corrections, b-quark Yukawa-coupling resummation effects and an improved treatment of Higgs-boson pole-mass shifts. The couplings of the Higgs bosons to the Standard Model gauge bosons and fermions, to their supersymmetric partners and all the trilinear and quartic Higgs-boson self-couplings are also calculated. The new implementations include a full treatment of the 4×4 (2×2) neutral (charged) Higgs propagator matrix together with the center-of-mass dependent Higgs-boson couplings to gluons and photons, two-loop Higgs-mediated contributions to electric dipole moments, and an integrated treatment of several B-meson observables. Solution method: One-dimensional numerical integration for several Higgs-decay modes, iterative treatment of the threshold corrections and Higgs-boson pole masses, and the numerical diagonalization of the neutralino mass matrix. Reasons for new version: Mainly to provide a coherent numerical framework which consistently calculates observables for both low- and high-energy experiments. Summary of revisions: Improved treatment of Higgs-boson masses and propagators. Improved treatment of Higgs-boson couplings and decays. Higgs-mediated two-loop electric dipole moments. B-meson observables. Running time: Less than 0.1 seconds. The program may be obtained from http://www.hep.man.ac.uk/u/jslee/CPsuperH.html.

  7. Persistent Identifiers for Data Products: Adoption, Enhancement, and Use

    NASA Astrophysics Data System (ADS)

    Downs, R. R.; Schumacher, J.; Scialdone, J.; Hansen, M.

    2016-12-01

    Persistent identifiers offer value for science and for various science community stakeholders, such as data producers, data distributors, science article authors, scientific journal publishers, research sponsors, libraries, and affiliated institutions. However, to attain the benefits of persistent identifiers, they should be assigned to disseminated data products and included within the references reported in publications that describe the studies in which the data were used. Scientific data centers, archives, digital repositories, and other data publishers also need to determine the level of aggregation, or granularity, of data products to be assigned persistent identifiers as well as the elements to be included in the landing pages to which persistent identifiers will resolve. Similarly, policies and procedures should be clear on decisions about maintenance issues, including versioning of data products and how persistent identifiers to previous versions and new locations will be maintained. With some persistent identifiers, such as Digital Object Identifiers (DOIs), which provide capabilities to link to related identifiers of other works, decisions on the establishment of links also must be clear, including links between early versions of data products and subsequent versions, links between data products and associated documentation, and links between data products and other publications that describe the data. We describe decisions for enabling the adoption and assignment of DOIs as persistent identifiers for data products disseminated by the NASA Socioeconomic Data and Applications Center (SEDAC) along with considerations for policy decisions, testing, implementation, and enhancement. The prevalence of the adoption of DOIs for citing the use of Earth science data disseminated by SEDAC also is described to provide insight into how interdisciplinary data users have engaged in the use of DOIs within their publications along with the implications of such use.

  8. Earth observing system. Output data products and input requirements, version 2.0. Volume 1: Instrument data product characteristics

    NASA Technical Reports Server (NTRS)

    Lu, Yun-Chi; Chang, Hyo Duck; Krupp, Brian; Kumar, Ravindra; Swaroop, Anand

    1992-01-01

    Information on Earth Observing System (EOS) output data products and input data requirements that has been compiled by the Science Processing Support Office (SPSO) at GSFC is presented. Since Version 1.0 of the SPSO Report was released in August 1991, there have been significant changes in the EOS program. In anticipation of a likely budget cut for the EOS Project, NASA HQ restructured the EOS program. An initial program consisting of two large platforms was replaced by plans for multiple, smaller platforms, and some EOS instruments were either deselected or descoped. Updated payload information reflecting the restructured EOS program superseding the August 1991 version of the SPSO report is included. This report has been expanded to cover information on non-EOS data products, and consists of three volumes (Volumes 1, 2, and 3). Volume 1 provides information on instrument outputs and input requirements. Volume 2 is devoted to Interdisciplinary Science (IDS) outputs and input requirements, including the 'best' and 'alternative' match analysis. Volume 3 provides information about retrieval algorithms, non-EOS input requirements of instrument teams and IDS investigators, and availability of non-EOS data products at seven primary Distributed Active Archive Centers (DAAC's).

  9. High-Speed, Low-Cost Workstation for Computation-Intensive Statistics. Phase 1

    DTIC Science & Technology

    1990-06-20

    routine implementation and performance. The two compiled versions given in the table were coded in an attempt to obtain an optimized compiled version... level statistics and linear algebra routines (BSAS and BLAS) that have been prototyped in this study. For each routine, both the C code (Turbo C)... High-performance and low-cost

  10. BOINC: compute for science

    Science.gov Websites

    . Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.

  11. Indoor Semi-volatile Organic Compounds (i-SVOC) Version 1.0

    EPA Pesticide Factsheets

    i-SVOC Version 1.0 is a general-purpose software application for dynamic modeling of the emission, transport, sorption, and distribution of semi-volatile organic compounds (SVOCs) in indoor environments.

  12. Documentation for the machine-readable version of the revised new general catalogue of nonstellar astronomical objects

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    The contents and format of the machine-readable version of the catalogue distributed by the Astronomical Data Center are described. The coding for the various scales and abbreviations used in the catalogue is tabulated, and certain revisions made to the machine version to improve storage efficiency and notation are discussed.

  13. ISICS2011, an updated version of ISICS: A program for calculating K-, L-, and M-shell cross sections from PWBA and ECPSSR theories using a personal computer

    NASA Astrophysics Data System (ADS)

    Cipolla, Sam J.

    2011-11-01

    In this new version of ISICS, called ISICS2011, a few omissions and incorrect entries in the built-in file of electron binding energies have been corrected; operational situations leading to unphysical behavior have been identified and flagged. New version program summary. Program title: ISICS2011 Catalogue identifier: ADDS_v5_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADDS_v5_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 6011 No. of bytes in distributed program, including test data, etc.: 130 587 Distribution format: tar.gz Programming language: C Computer: 80486 or higher-level PCs Operating system: WINDOWS XP and all earlier operating systems Classification: 16.7 Catalogue identifier of previous version: ADDS_v4_0 Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 1716. Does the new version supersede the previous version?: Yes Nature of problem: Ionization and X-ray production cross section calculations for ion-atom collisions. Solution method: Numerical integration of the form factor using a logarithmic transform and Gaussian quadrature, plus exact integration limits. Reasons for new version: General need for higher precision in the output format for projectile energies; some built-in binding energies needed correcting; some anomalous results occur due to faulty read-in data or calculated parameters becoming unphysical; erroneous calculations could result for the L and M shells when restricted K-shell options are inadvertently chosen; and to achieve general compatibility with ISICSoo, a companion C++ version that is portable to Linux and MacOS platforms, which has been submitted for publication in the CPC Program Library at approximately the same time as this present new standalone version of ISICS [1]. Summary of revisions: The format field for projectile energies in the output has been expanded from two to four decimal places in order to distinguish between closely spaced energy values. There were a few entries in the executable binding energy file that needed correcting: the K shell of Eu, the M shells of Zn, and the M1 shell of Kr. The corrected values were also entered in the ENERGY.DAT file. In addition, an alternate data file of binding energies is included, called ENERGY_GW.DAT, which is more up-to-date [2]. Likewise, an alternate atomic parameters data file is now included, called FLOURE_JC.DAT, which contains more up-to-date [3] fluorescence yields for the K and L shells and Coster-Kronig parameters for the L shell. Both data files can be read in using the -f usage option. To do this, the original energy file should be renamed and saved (e.g., ENERGY_BB.DAT) and the new file (ENERGY_GW.DAT) should be duplicated as ENERGY.DAT to be read in using the -f option. Similarly for reading in an alternate FLOURE.DAT file. As with previous versions, the user can also simply input different values of any input quantity by invoking the "specify your own parameters" option from the main menu. This option can also be used simply to check the built-in values of the parameters. If it still happens that a zero binding energy for a particular sub-shell is read in, the program will not completely abort, but will calculate results for the other sub-shells while setting the affected sub-shell output to zero. 
In calculating the Coulomb deflection factor, if the quantity inside the radical sign of the parameter z becomes zero or negative, to prevent the program from aborting, the PWBA cross sections are still calculated while the ECPSSR cross sections are set to zero. This situation can happen for very low-energy collisions, such as those noticed for helium ions on copper at energies of E⩽11.2 keV. It was observed during the engineering of ISICSoo [1] that erroneous calculations could result for the L- and M-shell cases when restricted K-shell R or HSR scaling options were inappropriately chosen. The program has now been fixed so that these inappropriate options are ignored for the L and M shells. In the previous versions, the usage for inputting a batch data file was incorrectly stated in the Users Manual as -Bxxx; the correct designation is -Fxxx, or alternatively, -Ixxx, as indicated on the usage screen in running the program. A revised Users Manual is also available. Restrictions: The consumed CPU time increases with the atomic shell (K, L, M), but execution is still very fast. Running time: This depends on which shell and the number of different energies to be used in the calculation. The running time is not significantly changed from the previous version.
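    The quoted solution method, Gaussian quadrature after a logarithmic transform, can be illustrated with a short Python sketch; the integrand and limits below are placeholders rather than the actual PWBA/ECPSSR form factors.

        import numpy as np

        def gauss_log_integrate(f, a, b, n=48):
            # integrate f on [a, b] (0 < a < b) after substituting q = exp(u),
            # which clusters quadrature nodes near the lower limit where the
            # integrand typically varies fastest; Gauss-Legendre on [ln a, ln b]
            u, w = np.polynomial.legendre.leggauss(n)
            lo, hi = np.log(a), np.log(b)
            u = 0.5 * (hi - lo) * u + 0.5 * (hi + lo)
            q = np.exp(u)
            return 0.5 * (hi - lo) * np.sum(w * q * f(q))

        # sanity check on a known integral: q**-2 from 1 to 100 integrates to 0.99
        print(gauss_log_integrate(lambda q: q ** -2, 1.0, 100.0))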

  14. RIS3: A program for relativistic isotope shift calculations

    NASA Astrophysics Data System (ADS)

    Nazé, C.; Gaidamauskas, E.; Gaigalas, G.; Godefroid, M.; Jönsson, P.

    2013-09-01

    An atomic spectral line is characteristic of the element producing the spectrum. The line also depends on the isotope. The program RIS3 (Relativistic Isotope Shift) calculates the electron density at the origin and the normal and specific mass shift parameters. Combining these electronic quantities with available nuclear data, isotope-dependent energy level shifts are determined. Program summary. Program title: RIS3 Catalogue identifier: ADEK_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADEK_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5147 No. of bytes in distributed program, including test data, etc.: 32869 Distribution format: tar.gz Programming language: Fortran 77. Computer: HP ProLiant BL465c G7 CTO. Operating system: Centos 5.5, which is a Linux distribution compatible with Red Hat Enterprise Advanced Server. Classification: 2.1. Catalogue identifier of previous version: ADEK_v1_0 Journal reference of previous version: Comput. Phys. Comm. 100 (1997) 81 Subprograms used: ADZL_v1_1, GRASP2K version 1_1 (to be published). Does the new version supersede the previous version?: Yes Nature of problem: Prediction of level and transition isotope shifts in atoms using four-component relativistic wave functions. Solution method: The nuclear motion and volume effects are treated in first-order perturbation theory. Taking the zero-order wave function in terms of a configuration state expansion |Ψ⟩ = Σ_μ c_μ |Φ(γ_μ P J M_J)⟩, where P, J and M_J are, respectively, the parity and angular quantum numbers, the electron density at the nucleus and the normal and specific mass shift parameters may generally be expressed as Σ_{μν} c_μ c_ν ⟨γ_μ P J M_J|V|γ_ν P J M_J⟩, where V is the relevant operator. The matrix elements, in turn, can be expressed as sums over radial integrals multiplied by angular coefficients. All the angular coefficients are calculated using routines from the GRASP2K version 1_1 package [1]. Reasons for new version: This new version takes the nuclear recoil corrections into account within the (m²/M) approximation [2] and also allows storage of the angular coefficients for a series of calculations within a given isoelectronic sequence. Furthermore, the program JJ2LSJ, a module of the GRASP2K version 1_1 toolkit that allows a transformation of ASFs from a jj-coupled CSF basis into an LSJ-coupled CSF basis, has been especially adapted to present RIS3 results using LSJ labels of the states. This additional tool is called RIS3_LSJ. Summary of revisions: This version is compatible with the new angular approach of the GRASP2K version 1_1 package [1] and can store the necessary angular coefficients. According to the formalism of the relativistic nuclear recoil, the "uncorrected" expression of the normal mass shift has been fundamentally modified compared with its expression in [3]. Restrictions: The complexity of the cases that can be handled is entirely determined by the GRASP2K package [1] used for the generation of the electronic wave functions. Unusual features: Angular data is stored on disk and can be reused. LSJ labels are used for the states. Running time: As an example, we evaluated the isotope shift parameters and the electron density at the origin using the wave functions of a Be-like system. 
We used the MCDHF wave function built on a complete active space (CAS) with n = 8 (296 626 CSFs, 62 orbitals) that contains 3 non-interacting blocks of given parity and J values, involving 6 different eigenvalues in total. Calculations take around 10 h on one AMD Opteron 6100 @ 2.3 GHz CPU with 8 cores (64 GB DDR3 RAM, 1.333 GHz). If angular files are available, the time is reduced to 20 min. The storage of the angular data takes 139 MB and 7.2 GB for the one-body and the two-body elements, respectively. References: [1] P. Jönsson, G. Gaigalas, J. Bieroń, C. Froese Fischer, I.P. Grant, New version: GRASP2K relativistic atomic structure package, Comput. Phys. Commun. 184 (9) (2013) 2197-2203. [2] E. Gaidamauskas, C. Nazé, P. Rynkun, G. Gaigalas, P. Jönsson, M. Godefroid, J. Phys. B: At. Mol. Opt. Phys. 44 (17) (2011) 175003. [3] P. Jönsson, C. Froese Fischer, Comput. Phys. Commun. 100 (1997) 81-92.
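    For real mixing coefficients, the first-order expression quoted above reduces to a quadratic form in the CI vector. A minimal Python sketch follows; the operator matrix here is a symmetric placeholder standing in for the GRASP2K-derived matrix elements.

        import numpy as np

        def shift_parameter(c, v):
            # sum over mu, nu of c_mu * c_nu * <Phi_mu|V|Phi_nu> for CI
            # coefficients c and the matrix v of the operator in the CSF basis
            c = np.asarray(c, dtype=float)
            return c @ np.asarray(v, dtype=float) @ c

        c = np.array([0.98, 0.17, 0.05])   # hypothetical mixing coefficients
        v = np.array([[1.0, 0.2, 0.0],
                      [0.2, 0.8, 0.1],
                      [0.0, 0.1, 0.5]])    # hypothetical operator matrix
        print(shift_parameter(c, v))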

  15. Distributed and parallel Ada and the Ada 9X recommendations

    NASA Technical Reports Server (NTRS)

    Volz, Richard A.; Goldsack, Stephen J.; Theriault, R.; Waldrop, Raymond S.; Holzbacher-Valero, A. A.

    1992-01-01

    Recently, the DoD has sponsored work towards a new version of Ada, intended to support the construction of distributed systems. The revised version, often called Ada 9X, will become the new standard sometime in the 1990s. It is intended that Ada 9X should provide language features giving limited support for distributed system construction. The requirements for such features are given. Many of the most advanced computer applications involve embedded systems that comprise parallel processors or networks of distributed computers. If Ada is to become the widely adopted language envisioned by many, it is essential that suitable compilers and tools be available to facilitate the creation of distributed and parallel Ada programs for these applications. The major language issues impacting distributed and parallel programming are reviewed, and some principles upon which distributed/parallel language systems should be built are suggested. Based upon these, alternative language concepts for distributed/parallel programming are analyzed.

  16. Continuous-time quantum Monte Carlo impurity solvers

    NASA Astrophysics Data System (ADS)

    Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias

    2011-04-01

    Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single-orbital, single-site) dynamical mean-field problems with arbitrary densities of states. Program summary. Program title: dmft Catalogue identifier: AEIL_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: ALPS LIBRARY LICENSE version 1.1 No. of lines in distributed program, including test data, etc.: 899 806 No. of bytes in distributed program, including test data, etc.: 32 153 916 Distribution format: tar.gz Programming language: C++ Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher) and Intel C++ Compiler (icc version 7.0 and higher); MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0); IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers; Compaq Tru64 UNIX with Compaq C++ Compiler (cxx); SGI IRIX with MIPSpro C++ Compiler (CC); HP-UX with HP C++ Compiler (aCC); Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher) RAM: 10 MB-1 GB Classification: 7.3 External routines: ALPS [1], BLAS/LAPACK, HDF5 Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions. Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2]. Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper. Running time: 60 s-8 h per iteration.
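    The structure of such partition-function sampling can be conveyed by a toy Python sketch: Metropolis insertion and removal of expansion vertices for the trivial weight w_k = (beta*Gamma)^k / k!, whose average order is beta*Gamma. The real solvers use the same move structure but with determinantal weights over imaginary-time vertex configurations; everything below is a pedagogical toy, not the ALPS code.

        import random

        def mean_expansion_order(beta_gamma, sweeps=200000):
            # insertion acceptance w(k+1)/w(k) = beta_gamma / (k + 1);
            # removal acceptance  w(k-1)/w(k) = k / beta_gamma;
            # for this weight the sampled order follows Poisson(beta_gamma)
            k, total = 0, 0
            for _ in range(sweeps):
                if random.random() < 0.5:
                    if random.random() < beta_gamma / (k + 1):
                        k += 1
                elif k > 0:
                    if random.random() < k / beta_gamma:
                        k -= 1
                total += k
            return total / sweeps

        print(mean_expansion_order(3.0))  # should approach 3.0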

  17. Climatology of the Aerosol Optical Depth by Components from the Multi-Angle Imaging Spectroradiometer (MISR) and Chemistry Transport Models

    NASA Technical Reports Server (NTRS)

    Lee, Huikyo; Kalashnikova, Olga V.; Suzuki, Kentaroh; Braverman, Amy; Garay, Michael J.; Kahn, Ralph A.

    2016-01-01

    The Multi-angle Imaging Spectroradiometer (MISR) Joint Aerosol (JOINT_AS) Level 3 product has provided a global, descriptive summary of MISR Level 2 aerosol optical depth (AOD) and aerosol type information for each month over 16+ years since March 2000. Using Version 1 of JOINT_AS, which is based on the operational (Version 22) MISR Level 2 aerosol product, this study analyzes, for the first time, characteristics of observed and simulated distributions of AOD for three broad classes of aerosols: spherical nonabsorbing, spherical absorbing, and nonspherical - near or downwind of their major source regions. The statistical moments (means, standard deviations, and skewnesses) and distributions of AOD by components derived from the JOINT_AS are compared with results from two chemistry transport models (CTMs), the Goddard Chemistry Aerosol Radiation and Transport (GOCART) and SPectral RadIatioN-TrAnSport (SPRINTARS). Overall, the AOD distributions retrieved from MISR and modeled by GOCART and SPRINTARS agree with each other in a qualitative sense. Marginal distributions of AOD for each aerosol type in both MISR and the models show considerable positive skewness, which indicates the importance of including extreme AOD events when comparing satellite retrievals with models. The MISR JOINT_AS product will greatly facilitate comparisons between satellite observations and model simulations of aerosols by type.
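    The statistical moments in question are simple to compute once a sample of AOD values is in hand; the Python sketch below uses synthetic lognormal-like values as a stand-in for an actual JOINT_AS sample.

        import numpy as np

        def aod_moments(aod):
            # mean, standard deviation and sample skewness; strong positive
            # skewness flags the heavy right tail of extreme-AOD events
            aod = np.asarray(aod, dtype=float)
            mean, std = aod.mean(), aod.std(ddof=1)
            skew = float(np.mean(((aod - mean) / std) ** 3))
            return mean, std, skew

        rng = np.random.default_rng(0)
        sample = rng.lognormal(mean=-2.0, sigma=0.8, size=10000)
        print(aod_moments(sample))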

  18. Recent Theoretical Advances in Analysis of AIRS/AMSU Sounding Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2007-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next-generation polar-orbiting infrared and microwave atmospheric sounding system. This paper describes the AIRS Science Team Version 5.0 retrieval algorithm. Starting in early 2007, the Goddard DAAC will use this algorithm to analyze near-real-time AIRS/AMSU observations. These products are then made available to the scientific community for research purposes. The products include twice-daily measurements of the Earth's three-dimensional global temperature, water vapor, and ozone distribution as well as cloud cover. In addition, accurate twice-daily measurements of the Earth's land and ocean temperatures are derived and reported. Scientists use this important set of observations for two major applications. They provide important information for climate studies of global and regional variability and trends of different aspects of the Earth's atmosphere. They also provide information for researchers to improve the skill of weather forecasting. A very important new product of the AIRS Version 5 algorithm is accurate case-by-case error estimates of the retrieved products. This heightens their utility for use in both weather and climate applications. These error estimates are also used directly for quality control of the retrieved products. Version 5 also allows for accurate quality-controlled AIRS-only retrievals, called "Version 5 AO retrievals", which can be used as a backup methodology if AMSU fails. Examples of the accuracy of error estimates and quality-controlled retrieval products of the AIRS/AMSU Version 5 and Version 5 AO algorithms are given, and shown to be significantly better than those of the previously used Version 4 algorithm. Assimilation of Version 5 retrievals is also shown to significantly improve forecast skill, especially when the case-by-case error estimates are utilized in the data assimilation process.

  19. Kernel User’s Manual Version 1.0

    DTIC Science & Technology

    1989-02-01

    especially on distributed systems. There are issues concerning functionality (amply documented in [ARTEWG 86b]), customization, tool support (especially... a far lower level, including special device drivers, special message or signaling systems, and even a custom executive. There is far less general... functionality; the implementors of the language do not know how to satisfy the variety of needs of real-time applications; the vendors are unable to customize

  20. Traffic-Adaptive, Flow-Specific Medium Access for Wireless Networks

    DTIC Science & Technology

    2009-09-01

    hybrid, contention and non-contention schemes are shown to be special cases. This work also compares the energy efficiency of centralized and distributed...solutions and proposes an energy efficient version of traffic-adaptive CWS-MAC that includes an adaptive sleep cycle coordinated through the use of...preamble sampling. A preamble sampling probability parameter is introduced to manage the trade-off between energy efficiency and throughput and delay

  1. Finding the Missing Physics: Simulating Polydisperse Polymer Melts

    NASA Astrophysics Data System (ADS)

    Rorrer, Nicholas; Dorgan, John

    2014-03-01

    A Monte Carlo algorithm has been developed to model polydisperse polymer melts. For the first time, this enables the specification of a predetermined molecular weight distribution for lattice-based simulations. It is demonstrated how to map an arbitrary probability distribution onto a discrete number of chains residing on an fcc lattice. The resulting algorithm is able to simulate a wide variety of behaviors for polydisperse systems including confinement effects, shear flow, and parabolic flow. The dynamic version of the algorithm accurately captures Rouse dynamics for short polymer chains, and reptation-like dynamics for longer chain lengths [1]. When polydispersity is introduced, smaller Rouse times and a broadened transition between different scaling regimes are observed. Rouse times also decrease under confinement for both polydisperse and monodisperse systems, and chain-length-dependent migration effects are observed. The steady-state version of the algorithm enables the simulation of flow, and when polydisperse systems are subject to parabolic (Poiseuille) flow, a migration phenomenon based on chain length is again present. These and other phenomena highlight the importance of including polydispersity in obtaining physically realistic simulations of polymeric melts. 1. Dorgan, J.R.; Rorrer, N.A.; Maupin, C.M., Macromolecules 2012, 45(21), 8833-8840. Work funded by the Fluid Dynamics program of the National Science Foundation under grant CBET-1067707.
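    Mapping a prescribed molecular weight distribution onto a discrete set of chains can be sketched in Python. The Schulz-Zimm (gamma) form and the target values below are illustrative assumptions, not the distribution or algorithm of the cited work.

        import numpy as np

        def polydisperse_chain_lengths(n_chains, mn_target, pdi, rng=None):
            # Schulz-Zimm lengths: for a gamma distribution with shape k,
            # Mw/Mn = 1 + 1/k, so k is fixed by the target polydispersity
            rng = rng or np.random.default_rng()
            k = 1.0 / (pdi - 1.0)
            lengths = rng.gamma(k, mn_target / k, size=n_chains)
            return np.maximum(1, np.rint(lengths).astype(int))

        L = polydisperse_chain_lengths(2000, mn_target=100, pdi=1.5)
        mn = L.mean()
        mw = (L.astype(float) ** 2).sum() / L.sum()
        print(mn, mw / mn)  # close to the targets 100 and 1.5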

  2. Versioning System for Distributed Ontology Development

    DTIC Science & Technology

    2016-03-15

    provides guidelines for evaluating the impact of the version changes. ...conformance to a clear set of development and versioning guidelines to assure that changes and extensions can be integrated back into the “main development... guidelines for evolution of an ontology would have considerably helped the users of the ontology in these situations. The currently accessible

  3. AEOSS runtime manual for system analysis on Advanced Earth-Orbital Spacecraft Systems

    NASA Technical Reports Server (NTRS)

    Lee, Hwa-Ping

    1990-01-01

    The Advanced Earth-Orbital Spacecraft System (AEOSS) enables users to project the required power, weight, and cost for a generic earth-orbital spacecraft system. These variables are calculated on the component and subsystem levels, and then on the system level. The six included subsystems are electric power, thermal control, structure, auxiliary propulsion, attitude control, and communication, command, and data handling. The costs are computed using statistically determined models that were derived from spacecraft flown in the past and categorized into classes according to their functions and structural complexity. Selected design and performance analyses for essential components and subsystems are also provided. AEOSS has a feature permitting a user to enter known values of these parameters, in whole or in part, at all levels. All this information is of vital importance to project managers of subsystems or of a spacecraft system. AEOSS is specially tailored software built on the ACIUS 4th Dimension relational database program in a Macintosh version. Because of the licensing agreements, two versions of the AEOSS documents were prepared. This version, the AEOSS Runtime Manual, is permitted to be distributed with a finite number of copies of the restrictive 4D Runtime version. It can perform all contained applications without any programming alterations.

  4. BSR: B-spline atomic R-matrix codes

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg

    2006-02-01

    BSR is a general program to calculate atomic continuum processes using the B-spline R-matrix method, including electron-atom and electron-ion scattering, and radiative processes such as bound-bound transitions, photoionization and polarizabilities. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme by including terms of the Breit-Pauli Hamiltonian.
    New version program summary
    Title of program: BSR
    Catalogue identifier: ADWY
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWY
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computers on which the program has been tested: Microway Beowulf cluster; Compaq Beowulf cluster; DEC Alpha workstation; DELL PC
    Operating systems under which the new version has been tested: UNIX, Windows XP
    Programming language used: FORTRAN 95
    Memory required to execute with typical data: Typically 256-512 Mwords; since all the principal dimensions are allocatable, the available memory defines the maximum complexity of the problem
    No. of bits in a word: 8
    No. of processors used: 1
    Has the code been vectorized or parallelized?: No
    No. of lines in distributed program, including test data, etc.: 69 943
    No. of bytes in distributed program, including test data, etc.: 746 450
    Peripherals used: scratch disk store; permanent disk store
    Distribution format: tar.gz
    Nature of physical problem: This program uses the R-matrix method to calculate electron-atom and electron-ion collision processes, with options to calculate radiative data, photoionization, etc. The calculations can be performed in LS-coupling or in an intermediate-coupling scheme, with options to include Breit-Pauli terms in the Hamiltonian.
    Method of solution: The R-matrix method is used [P.G. Burke, K.A. Berrington, Atomic and Molecular Processes: An R-Matrix Approach, IOP Publishing, Bristol, 1993; P.G. Burke, W.D. Robb, Adv. At. Mol. Phys. 11 (1975) 143; K.A. Berrington, W.B. Eissner, P.H. Norrington, Comput. Phys. Comm. 92 (1995) 290].

  5. TIM Version 3.0 beta - Technical Description and User's Guidance

    EPA Pesticide Factsheets

    Provides technical information on version 3.0 of the Terrestrial Investigation Model (TIM v.3.0). Describes how TIM derives joint distributions of exposure and toxicity to calculate the risk of mortality to birds.
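
    TIM's actual algorithms are not described in this factsheet; purely as an illustration of deriving a mortality risk from joint distributions of exposure and toxicity, one can Monte Carlo sample both and count exceedances (Python with NumPy; the distribution shapes and parameters are hypothetical):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000

      # Hypothetical lognormal exposure and individual tolerance
      # (toxicity threshold) distributions for a bird population.
      exposure = rng.lognormal(mean=0.0, sigma=0.8, size=n)
      tolerance = rng.lognormal(mean=1.0, sigma=0.5, size=n)

      # Risk of mortality = probability that exposure exceeds tolerance.
      print(f"estimated mortality risk: {np.mean(exposure > tolerance):.3f}")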

  6. Short version of the "instrument for assessment of stress in nursing students" in the Brazilian reality.

    PubMed

    Costa, Ana Lúcia Siqueira; Silva, Rodrigo Marques da; Mussi, Fernanda Carneiro; Serrano, Patrícia Maria; Graziano, Eliane da Silva; Batista, Karla de Melo

    2018-01-08

    To validate a short version of the Instrument for Assessment of Stress in Nursing Students for the Brazilian reality. Methodological study conducted with 1047 nursing students from five Brazilian institutions, who answered the 30 items initially distributed across eight domains. Data were analyzed in the R statistical package; in the latent variable analysis, exploratory and confirmatory factor analyses, Cronbach's alpha and item-total correlation were used. The short version of the instrument had 19 items distributed into four domains: Environment, Professional Training, Theoretical Activities and Performance of Practical Activities. The confirmatory analysis showed absolute and parsimony fit to the proposed model with satisfactory residual levels. Alpha values per factor ranged from 0.736 (Environment) to 0.842 (Performance of Practical Activities). The short version of the instrument has construct validity and reliability for application to Brazilian nursing undergraduates at any stage of the course.
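
    The analyses named above were run in R; for readers who want the two reliability statistics in self-contained form, a minimal Python equivalent (NumPy only, with synthetic scores standing in for the real data) is:

      import numpy as np

      def cronbach_alpha(items):
          # items: (n_respondents, n_items) score matrix.
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      def item_total_correlations(items):
          # Correlation of each item with the sum of the remaining items.
          return [np.corrcoef(items[:, j],
                              np.delete(items, j, axis=1).sum(axis=1))[0, 1]
                  for j in range(items.shape[1])]

      scores = np.random.default_rng(1).integers(1, 5, size=(200, 19))
      print(cronbach_alpha(scores), item_total_correlations(scores)[:3])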

  7. The global aerosol-climate model ECHAM-HAM, version 2: sensitivity to improvements in process representations

    NASA Astrophysics Data System (ADS)

    Zhang, K.; O'Donnell, D.; Kazil, J.; Stier, P.; Kinne, S.; Lohmann, U.; Ferrachat, S.; Croft, B.; Quaas, J.; Wan, H.; Rast, S.; Feichter, J.

    2012-03-01

    This paper introduces and evaluates the second version of the global aerosol-climate model ECHAM-HAM. Major changes have been introduced into the model, including new parameterizations for aerosol nucleation and water uptake, an explicit treatment of secondary organic aerosols, modified emission calculations for sea salt and mineral dust, the coupling of aerosol microphysics to a two-moment stratiform cloud microphysics scheme, and alternative wet scavenging parameterizations. These revisions extend the model's capability to represent details of the aerosol lifecycle and its interaction with climate. Sensitivity experiments are carried out to analyse the effects of these improvements in process representation on the simulated aerosol properties and global distribution. The new parameterizations with the largest impact on the global mean aerosol optical depth and radiative effects turn out to be the water uptake scheme and the cloud microphysics. The former leads to a significant decrease of aerosol water content in the lower troposphere and consequently a smaller optical depth; the latter results in higher aerosol loading and longer lifetime due to weaker in-cloud scavenging. The combined effects of the new and updated parameterizations are demonstrated by comparing the new model results with those from the earlier version, and against observations. Model simulations are evaluated in terms of aerosol number concentrations against measurements collected from twenty field campaigns as well as from fixed measurement sites, and in terms of optical properties against AERONET measurements. Results indicate a general improvement with respect to the earlier version. The aerosol size distribution and spatio-temporal variability simulated by HAM2 are in better agreement with the observations. Biases in the earlier model version in aerosol optical depth and in the Ångström parameter have been reduced. The paper also points out the remaining model deficiencies that need to be addressed in the future.

  8. Implementing security in a distributed web-based EHCR.

    PubMed

    Sucurovic, Snezana

    2007-01-01

    In many countries there are initiatives for building an integrated patient-centric electronic health record, and there are also initiatives for transnational integration. These growing demands for integration result from the fact that integration can improve healthcare treatment and reduce the cost of healthcare services. While computerisation of the healthcare sector in highly developed European countries began in the 1970s and has reached a high level, some developing countries, Serbia among them, have started computerisation only recently. This is why MEDIS (MEDical Information System) has aimed at integration from the very beginning, instead of integrating heterogeneous information systems at a middle layer or via the HL7 protocol. The implementation of a national healthcare information system requires using standards as integrated and widely accepted solutions. We have therefore built MEDIS to meet the requirements of the CEN ENV 13606 and CEN ENV 13729 standards. The prototype version has a distributed, component-based architecture with modern security solutions applied. MEDIS has been implemented as a federated system in which the central server hosts basic EHCR information about a patient and the clinical servers contain their own parts of patients' EHCRs. At present, there is an initial prototype version planned to be deployed first in a small community. In particular, an open-source API for X.509 authentication and authorisation has been developed. Our project meets the requirements for education in health informatics, including appropriate knowledge and skills on EHCR. The points included in this article have been presented at several national conferences and widely discussed. MEDIS has explored a federated, component-based EHCR architecture and related security aspects. In its initial version it shows acceptable performance and administrative simplicity. It emphasizes the importance of using standards in building an EHCR in our country, in order to prepare it for future integrations.

  9. Simulated and measured neutron/gamma light output distribution for poly-energetic neutron/gamma sources

    NASA Astrophysics Data System (ADS)

    Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.

    2018-03-01

    In the present paper, the light output distribution due to a poly-energetic neutron/gamma (neutron or gamma) source was calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of the light output distribution includes modeling of the particle transport, calculation of the scintillation photons induced by charged particles, simulation of the scintillation photon transport, and consideration of the light resolution obtained from experiment. The developed computational code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, neutron-gamma discrimination based on the light output distribution was performed using the zero-crossing method. As a case study, a 241Am-9Be source was considered and the simulated and measured neutron/gamma light output distributions were compared. There is acceptable agreement between the discriminated neutron/gamma light output distributions obtained from simulation and experiment.
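
    The zero-crossing method mentioned above shapes each pulse with a bipolar (zero-net-area) filter and uses the time at which the shaped signal crosses zero as the discrimination parameter; pulses with more slow scintillation light (neutrons) cross later. A toy sketch (Python with NumPy; the two-exponential pulses and all constants are illustrative, not the MCNPX-ESUT-PE implementation):

      import numpy as np

      def bipolar_shape(pulse, tau=10.0):
          # CR-RC style bipolar kernel with zero net area; the zero-crossing
          # time of the shaped pulse depends on the input decay constants.
          t = np.arange(0.0, 10 * tau)
          kernel = (1.0 - t / tau) * np.exp(-t / tau)
          return np.convolve(pulse, kernel)[: pulse.size]

      def zero_crossing_time(pulse):
          s = bipolar_shape(pulse)
          peak = s.argmax()
          return peak + np.argmax(s[peak:] < 0.0)  # first negative sample

      t = np.arange(0.0, 300.0)
      gamma = np.exp(-t / 5.0) + 0.05 * np.exp(-t / 30.0)    # little slow light
      neutron = np.exp(-t / 5.0) + 0.30 * np.exp(-t / 30.0)  # more slow light

      print(zero_crossing_time(gamma), zero_crossing_time(neutron))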

  10. Introduction to the Canadian Cardiovascular Outcomes Research Team's (CCORT) Canadian Cardiovascular Atlas project.

    PubMed

    Tu, Jack V; Brien, Susan E; Kennedy, Courtney C; Pilote, Louise; Ghali, William A

    2003-03-15

    The Canadian Cardiovascular Outcomes Research Team's (CCORT) Canadian Cardiovascular Atlas project was developed to provide Canadians with a national report on the state of cardiovascular health and health services in Canada. Written by a group of Canada's leading experts in cardiovascular outcomes research, the CCORT cardiac Atlas will cover a wide variety of topics ranging from cardiac risk factors and cardiac mortality rates to the treatment of patients with acute myocardial infarction and congestive heart failure and the outcomes of invasive cardiac procedures across Canada. Data in the Atlas will be presented at a national, provincial and health region level. The Atlas will be published as a series of 20 articles and chapters in future issues of The Canadian Journal of Cardiology and on CCORT's web site (www.ccort.ca). The journal version of the Atlas chapters will be written for a clinical audience and will include editorials written by invited experts, whereas the web-based version of each chapter will be written for a more general audience and will include additional supplemental information (for example, interactive colour maps and tables) that cannot be included in the journal version. Material from the Journal and the web will eventually be compiled into a book that will be distributed across Canada. This article serves as an introduction to the Atlas project and describes the rationale for and objectives of the CCORT national cardiac Atlas project.

  11. micrOMEGAs 2.0.7: a program to calculate the relic density of dark matter in a generic model

    NASA Astrophysics Data System (ADS)

    Bélanger, G.; Boudjema, F.; Pukhov, A.; Semenov, A.

    2007-12-01

    micrOMEGAs2.0.7 is a code which calculates the relic density of a stable massive particle in an arbitrary model. The underlying assumption is that there is a conservation law, like R-parity in supersymmetry, which guarantees the stability of the lightest odd particle. The new physics model must be incorporated in the notation of CalcHEP, a package for the automatic generation of squared matrix elements. Once this is done, all annihilation and coannihilation channels are included automatically in any model. Cross-sections at v=0, relevant for indirect detection of dark matter, are also computed automatically. The package includes three sample models: the minimal supersymmetric standard model (MSSM), the MSSM with complex phases and the NMSSM. Extension to other models, including non-supersymmetric models, is described.
    Program summary
    Title of program: micrOMEGAs2.0.7
    Catalogue identifier: ADQR_v2_1
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADQR_v2_1.html
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 216 529
    No. of bytes in distributed program, including test data, etc.: 1 848 816
    Distribution format: tar.gz
    Programming language used: C and Fortran
    Computer: PC, Alpha, Mac, Sun
    Operating system: UNIX (Linux, OSF1, SunOS, Darwin, Cygwin)
    RAM: 17 MB depending on the number of processes required
    Classification: 1.9, 11.6
    Catalogue identifier of previous version: ADQR_v2_0
    Journal reference of previous version: Comput. Phys. Comm. 176 (2007) 367
    Does the new version supersede the previous version?: Yes
    Nature of problem: Calculation of the relic density of the lightest stable particle in a generic new model of particle physics.
    Solution method: In numerically solving the evolution equation for the density of dark matter, relativistic formulae for the thermal average are used. All tree-level processes for annihilation and coannihilation of new particles in the model are included. The cross-sections for all processes are calculated exactly with CalcHEP after definition of a model file. Higher-order QCD corrections to Higgs couplings to quark pairs are included.
    Reasons for new version: The main changes in this new version consist, on the one hand, of improvements to the user interface and the treatment of error codes when using spectrum calculators in the MSSM and, on the other hand, of a completely revised code for the calculation of the relic density in the NMSSM, based on the code NMSSMTools1.0.2 for the computation of the spectrum.
    Summary of revisions: The version of CalcHEP was updated to CalcHEP 2.4. The procedure for shared library generation has been improved; the libraries are now recalculated each time the model is modified. The default value for the top quark mass has been set to 171.4 GeV. Changes specific to the MSSM model: The deltaMb correction is now included in the b,t,H vertex and is always included for the other Higgs vertices. In case of a fatal error in an RGE program, micrOMEGAs now continues operation while issuing a warning that the given point is not valid. This is important when running scans over parameter space. However, this means that the standard ^C command that could be used to cancel a job now only cancels the RGE program. To cancel a job, use "kill -9 -N" where N is the micrOMEGAs process id; all child processes launched by micrOMEGAs will be killed at once.
    Following the last SLHA2 release, we use the key=26 item of the EXTPAR block for the pole mass of the CP-odd Higgs, so that micrOMEGAs can now use SoftSUSY for spectrum calculation with EWSB input. The Isajet interface was corrected too, so the user has to recompile the isajet_slha executable. For SuSpect we still support an old "wrong" interface where key=24 is used for the mass of the CP-odd Higgs. In the non-universal SUGRA model, we set the value of M0 (respectively M1/2, A0) to the value of the largest subset of equal parameters among the scalar masses (gaugino masses, trilinear couplings). In the previous version these parameters were set arbitrarily to be equal to MH2, MG2 and At, respectively. The spectrum calculators need an input value for M0, M1/2 and A0 for initialisation purposes. We have removed bugs in the micrOMEGAs-Isajet interface in the case of non-universal SUGRA. $(FFLAGS) is added to the compilation instruction of suspect.exe; it was omitted in version 2.0. The treatment of errors in reading the Les Houches accord file is improved: if the SPINFO block is absent in the SLHA output, it is now considered a fatal error. Instructions for calculation of the Δρ, (g-2)μ, Br(b→sγ) and Br(Bs→μ+μ-) constraints are included in the EWSB sample main programs omg.c/omg.cpp/omg.F. We have corrected the name of the library for neutralino-neutralino annihilation in our sample files MSSM/cs br.*. Changes specific to the NMSSM model: The NMSSM has been completely revised; it is now based on NMSSMTools_1.0.2. The deltaMb corrections in the NMSSM are included in the Higgs potential. CP violation model: We have included in our package the MSSM with CP violation. Our implementation was described in Phys. Rev. D 73 (2006) 115007. It is based on the CPSUPERH package published in Comput. Phys. Comm. 156 (2004) 283.
    Unusual features: Depending on the parameters of the model, the program generates additional new code, compiles it and loads it dynamically.
    Running time: 0.2 seconds
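
    micrOMEGAs' own numerics are considerably more complete, but the evolution equation it solves has the standard textbook form dY/dx = -(λ/x²)(Y² - Y_eq²) with x = m/T. A toy freeze-out integration (Python with SciPy; λ and the Y_eq prefactor are illustrative values only, not taken from any model) shows the mechanics:

      import numpy as np
      from scipy.integrate import solve_ivp

      lam = 1.0e9  # bundles s(m)*<sigma v>/H(m); illustrative value only

      def yeq(x):
          # Nonrelativistic equilibrium yield, Yeq ~ x**1.5 * exp(-x).
          return 0.145 * x ** 1.5 * np.exp(-x)

      def rhs(x, y):
          return [-(lam / x ** 2) * (y[0] ** 2 - yeq(x) ** 2)]

      sol = solve_ivp(rhs, (1.0, 1000.0), [yeq(1.0)],
                      method="Radau", rtol=1e-8, atol=1e-12)
      print("asymptotic yield Y ~", sol.y[0, -1])

    The yield tracks Y_eq until freeze-out and then stays roughly constant; the relic density follows from the asymptotic yield times the particle mass and the entropy density today.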

  12. Guidelines for chemical peeling in Japan (3rd edition).

    PubMed

    2012-04-01

    Chemical peeling may be defined as the therapies, procedures and techniques used for the treatment of certain cutaneous diseases or conditions, and for aesthetic improvement. The procedures include the application of one or more chemical agents to the skin. Chemical peeling has become very popular in both the medical and aesthetic fields. Because its scientific background is not well understood and no systematic approach has been established, medical and social problems have arisen. This prompted us to establish and distribute a standard guideline of care for chemical peeling. Previous guidelines, such as the 2001 and 2004 versions, included minimum standards of care such as indications, chemicals, applications, and associated precautions, including post-peeling care. The principles in this updated version of chemical peeling are as follows: (i) chemical peeling should be performed under the strict technical control and responsibility of a physician; (ii) the physician should have sufficient knowledge of the structure and physiology of the skin and subcutaneous tissues, and understand the mechanisms of wound-healing induced by chemical peeling; (iii) the physician should be board-certified in an appropriate specialty such as dermatology; and (iv) the ultimate judgment regarding the appropriateness of any specific chemical peeling procedure must be made by the physician in consideration of all standard therapeutic protocols, which should be presented to each individual patient. Keeping these concepts in mind, this new version of the guidelines takes a more scientific and detailed approach from the viewpoint of evidence-based medicine. © 2011 Japanese Dermatological Association.

  13. PSsolver: A Maple implementation to solve first order ordinary differential equations with Liouvillian solutions

    NASA Astrophysics Data System (ADS)

    Avellar, J.; Duarte, L. G. S.; da Mota, L. A. C. P.

    2012-10-01

    We present a set of software routines in Maple 14 for solving first order ordinary differential equations (FOODEs). The package implements the Prelle-Singer method in its original form together with its extension to include integrating factors in terms of elementary functions. The package also presents a theoretical extension to deal with all FOODEs presenting Liouvillian solutions. Applications to ODEs taken from standard references show that it solves ODEs which remain unsolved using Maple's standard ODE solution routines.
    New version program summary
    Program title: PSsolver
    Catalogue identifier: ADPR_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADPR_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 2302
    No. of bytes in distributed program, including test data, etc.: 31962
    Distribution format: tar.gz
    Programming language: Maple 14 (also tested using Maple 15 and 16)
    Computer: Intel Pentium Processor P6000, 1.86 GHz
    Operating system: Windows 7
    RAM: 4 GB DDR3
    Classification: 4.3
    Catalogue identifier of previous version: ADPR_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 144 (2002) 46
    Does the new version supersede the previous version?: Yes
    Nature of problem: Symbolic solution of first order differential equations via the Prelle-Singer method.
    Solution method: The method of solution is based on the standard Prelle-Singer method, with extensions for the cases when the FOODE contains elementary functions. Additionally, an extension of our own which solves FOODEs with Liouvillian solutions is included.
    Reasons for new version: The program was no longer running due to changes in the latest versions of Maple. Additionally, we corrected/changed some bugs/details that were hampering the smoother functioning of the routines.
    Summary of revisions:
    • As time went by, many commands in Maple were deprecated, so, in order to make the program run with the newer versions, we have checked and changed some of those. For instance, the command sum had changed, and some program lines were substituted so that the package works properly.
    • In the old version the user had to supply the degree of the Darboux polynomials to be determined. In the present version the user can set the degree by typing Deg = number in the command call (e.g., PSsolve(ode, Deg=3) tells the command PSsolve to use Darboux polynomials of degree up to three). If the user does not specify the degree, the routines default to degree 1.
    Restrictions: If the integrating factor for the FOODE under consideration has factors of high degree in the dependent and independent variables and in the elementary functions appearing in the FOODE, the package may spend a long time finding the solution. Also, when dealing with FOODEs containing elementary functions, it is essential that the algebraic dependency between them is recognized; if that does not happen, our program can miss some solutions.
    Unusual features: Our implementation of the Prelle-Singer approach not only solves FOODEs, but can also be used as a research tool that allows the user to follow all the steps of the procedure. For example, the Darboux polynomials (eigenpolynomials) of the D-operator associated with a FOODE (see Section 4) can be calculated.
    In addition, our package succeeds in solving FOODEs that are not solved by some of the most commonly available solvers. Finally, our package implements a theoretical extension (for details, see [1,2]) to the original Prelle-Singer approach that enhances its scope, allowing it to tackle some FOODEs whose solutions involve non-elementary Liouvillian functions.
    Running time: This depends strongly on the FOODE, but is usually under 2 seconds when running our 'arena' test file: the non-linear FOODEs presented in the book by Kamke [3]. These times were obtained using an Intel Pentium Processor P6000, 1.86 GHz, with 4 GB RAM.
    References: [1] M. Singer, Liouvillian first integrals of differential equations, Trans. Amer. Math. Soc. 333 (1992) 673-688. [2] L.G.S. Duarte, S.E.S. Duarte, L.A.C.P. da Mota, J.E.F. Skea, A method to tackle first order ordinary differential equations with Liouvillian functions in the solution, J. Phys. A: Math. Gen. 35 (17) (2002) 3899-3910. [3] E. Kamke, Differentialgleichungen: Lösungsmethoden und Lösungen, Chelsea Publishing Co., New York, 1959.
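
    PSsolver itself is Maple code; for readers without Maple, an analogous (much weaker) check on a simple FOODE can be run in Python's SymPy. This only illustrates the class of problem, not the Prelle-Singer algorithm:

      import sympy as sp

      x = sp.symbols("x")
      y = sp.Function("y")

      # A simple FOODE with an elementary (hence Liouvillian) solution:
      # y' = y/x + x  has general solution  y = x**2 + C*x
      ode = sp.Eq(y(x).diff(x), y(x) / x + x)
      print(sp.dsolve(ode, y(x)))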

  14. Secure and Cost-Effective Distributed Aggregation for Mobile Sensor Networks

    PubMed Central

    Guo, Kehua; Zhang, Ping; Ma, Jianhua

    2016-01-01

    Secure data aggregation (SDA) schemes are widely used in distributed applications, such as mobile sensor networks, to reduce communication cost, prolong the network life cycle and provide security. However, most SDA schemes are suited only to a single type of statistic (i.e., summation-based or comparison-based statistics) and are not applicable to obtaining multiple statistical results. Most SDA schemes are also inefficient for dynamic networks. This paper presents multi-functional secure data aggregation (MFSDA), in which a mapping step and a coding step are introduced to provide value preservation and order preservation and thereby to enable support for arbitrary statistics in the same query. MFSDA is suited to dynamic networks because active nodes can be counted directly from the aggregated data. The proposed scheme is tolerant to many types of attacks. The network load of the proposed scheme is balanced, and no significant bottleneck exists. MFSDA includes two versions: MFSDA-I and MFSDA-II. The first can obtain accurate results, while the second is a more generalized version that can significantly reduce network traffic at the expense of a small loss of accuracy. PMID:27120599
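
    The abstract does not spell out the mapping and coding steps; one well-known construction with the same multi-functional property (shown here without any security layer, purely as an illustration) is to aggregate per-node value histograms, from which count, sum, minimum and maximum are all recoverable from a single aggregate:

      import numpy as np

      BINS = np.arange(0, 101)  # assumed discrete sensor range 0..100

      def encode(value):
          # Coding step: one-hot histogram preserving value and order.
          h = np.zeros(BINS.size, dtype=int)
          h[int(value)] = 1
          return h

      def aggregate(encoded):
          # In-network aggregation is an element-wise sum along the route.
          return np.sum(encoded, axis=0)

      def decode(hist):
          present = BINS[hist > 0]
          return {"count": int(hist.sum()), "sum": int((hist * BINS).sum()),
                  "min": int(present.min()), "max": int(present.max())}

      readings = [12, 57, 57, 99, 3]
      print(decode(aggregate([encode(v) for v in readings])))

    The active-node count falls out of the same aggregate, which is the property the abstract highlights for dynamic networks.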

  15. Distribution of a Generic Mission Planning and Scheduling Toolkit for Astronomical Spacecraft

    NASA Technical Reports Server (NTRS)

    Kleiner, Steven C.

    1996-01-01

    Work is progressing as outlined in the proposal for this contract. A working planning and scheduling system has been documented, packaged, and made available to the WIRE Small Explorer group at JPL, the FUSE group at JHU, the NASA/GSFC Laboratory for Astronomy and Solar Physics, and the Advanced Planning and Scheduling Branch at STScI. The package is running successfully on the WIRE computer system. It is expected that the WIRE group will reuse significant portions of the SWAS code in its system. The scheduling system itself was tested successfully against the spacecraft hardware in December 1995. A fully automatic scheduling module has been developed and is being added to the toolkit. In order to maximize reuse, the code is being reorganized during the current build into object-oriented class libraries. A paper describing the toolkit has been written and is included in the software distribution. We have experienced interference between the export and production versions of the toolkit. We will be requesting permission to reprogram funds in order to purchase a standalone PC onto which to offload the export version.

  16. TRASYS - THERMAL RADIATION ANALYZER SYSTEM (DEC VAX VERSION WITH NASADIG)

    NASA Technical Reports Server (NTRS)

    Anderson, G. E.

    1994-01-01

    The Thermal Radiation Analyzer System, TRASYS, is a computer software system with generalized capability to solve the radiation related aspects of thermal analysis problems. TRASYS computes the total thermal radiation environment for a spacecraft in orbit. The software calculates internode radiation interchange data as well as incident and absorbed heat rate data originating from environmental radiant heat sources. TRASYS provides data of both types in a format directly usable by such thermal analyzer programs as SINDA/FLUINT (available from COSMIC, program number MSC-21528). One primary feature of TRASYS is that it allows users to write their own driver programs to organize and direct the preprocessor and processor library routines in solving specific thermal radiation problems. The preprocessor first reads and converts the user's geometry input data into the form used by the processor library routines. Then, the preprocessor accepts the user's driving logic, written in the TRASYS modified FORTRAN language. In many cases, the user has a choice of routines to solve a given problem. Users may also provide their own routines where desirable. In particular, the user may write output routines to provide for an interface between TRASYS and any thermal analyzer program using the R-C network concept. Input to the TRASYS program consists of Options and Edit data, Model data, and Logic Flow and Operations data. Options and Edit data provide for basic program control and user edit capability. The Model data describe the problem in terms of geometry and other properties. This information includes surface geometry data, documentation data, nodal data, block coordinate system data, form factor data, and flux data. Logic Flow and Operations data house the user's driver logic, including the sequence of subroutine calls and the subroutine library. Output from TRASYS consists of two basic types of data: internode radiation interchange data, and incident and absorbed heat rate data. The flexible structure of TRASYS allows considerable freedom in the definition and choice of solution method for a thermal radiation problem. The program's flexible structure has also allowed TRASYS to retain the same basic input structure as the authors update it in order to keep up with changing requirements. Among its other important features are the following: 1) up to 3200 node problem size capability with shadowing by intervening opaque or semi-transparent surfaces; 2) choice of diffuse, specular, or diffuse/specular radiant interchange solutions; 3) a restart capability that minimizes recomputing; 4) macroinstructions that automatically provide the executive logic for orbit generation that optimizes the use of previously completed computations; 5) a time variable geometry package that provides automatic pointing of the various parts of an articulated spacecraft and an automatic look-back feature that eliminates redundant form factor calculations; 6) capability to specify submodel names to identify sets of surfaces or components as an entity; and 7) subroutines to perform functions which save and recall the internodal and/or space form factors in subsequent steps for nodes with fixed geometry during a variable geometry run. There are two machine versions of TRASYS v27: a DEC VAX version and a Cray UNICOS version. Both versions require installation of the NASADIG library (MSC-21801 for DEC VAX or COS-10049 for CRAY), which is available from COSMIC either separately or bundled with TRASYS. 
The NASADIG (NASA Device Independent Graphics Library) plot package provides a pictorial representation of input geometry, orbital/orientation parameters, and heating rate output as a function of time. NASADIG supports Tektronix terminals. The CRAY version of TRASYS v27 is written in FORTRAN 77 for batch or interactive execution and has been implemented on CRAY X-MP and CRAY Y-MP series computers running UNICOS. The standard distribution medium for MSC-21959 (CRAY version without NASADIG) is a 1600 BPI 9-track magnetic tape in UNIX tar format. The standard distribution medium for COS-10040 (CRAY version with NASADIG) is a set of two 6250 BPI 9-track magnetic tapes in UNIX tar format. Alternate distribution media and formats are available upon request. The DEC VAX version of TRASYS v27 is written in FORTRAN 77 for batch execution (only the plotting driver program is interactive) and has been implemented on a DEC VAX 8650 computer under VMS. Since the source codes for MSC-21030 and COS-10026 are in VAX/VMS text library files and DEC Command Language files, COSMIC will only provide these programs in the following formats: MSC-21030, TRASYS (DEC VAX version without NASADIG) is available on a 1600 BPI 9-track magnetic tape in VAX BACKUP format (standard distribution medium) or in VAX BACKUP format on a TK50 tape cartridge; COS-10026, TRASYS (DEC VAX version with NASADIG), is available in VAX BACKUP format on a set of three 6250 BPI 9-track magnetic tapes (standard distribution medium) or a set of three TK50 tape cartridges in VAX BACKUP format. TRASYS was last updated in 1993.

  17. TRASYS - THERMAL RADIATION ANALYZER SYSTEM (CRAY VERSION WITH NASADIG)

    NASA Technical Reports Server (NTRS)

    Anderson, G. E.

    1994-01-01

    The Thermal Radiation Analyzer System, TRASYS, is a computer software system with generalized capability to solve the radiation related aspects of thermal analysis problems. TRASYS computes the total thermal radiation environment for a spacecraft in orbit. The software calculates internode radiation interchange data as well as incident and absorbed heat rate data originating from environmental radiant heat sources. TRASYS provides data of both types in a format directly usable by such thermal analyzer programs as SINDA/FLUINT (available from COSMIC, program number MSC-21528). One primary feature of TRASYS is that it allows users to write their own driver programs to organize and direct the preprocessor and processor library routines in solving specific thermal radiation problems. The preprocessor first reads and converts the user's geometry input data into the form used by the processor library routines. Then, the preprocessor accepts the user's driving logic, written in the TRASYS modified FORTRAN language. In many cases, the user has a choice of routines to solve a given problem. Users may also provide their own routines where desirable. In particular, the user may write output routines to provide for an interface between TRASYS and any thermal analyzer program using the R-C network concept. Input to the TRASYS program consists of Options and Edit data, Model data, and Logic Flow and Operations data. Options and Edit data provide for basic program control and user edit capability. The Model data describe the problem in terms of geometry and other properties. This information includes surface geometry data, documentation data, nodal data, block coordinate system data, form factor data, and flux data. Logic Flow and Operations data house the user's driver logic, including the sequence of subroutine calls and the subroutine library. Output from TRASYS consists of two basic types of data: internode radiation interchange data, and incident and absorbed heat rate data. The flexible structure of TRASYS allows considerable freedom in the definition and choice of solution method for a thermal radiation problem. The program's flexible structure has also allowed TRASYS to retain the same basic input structure as the authors update it in order to keep up with changing requirements. Among its other important features are the following: 1) up to 3200 node problem size capability with shadowing by intervening opaque or semi-transparent surfaces; 2) choice of diffuse, specular, or diffuse/specular radiant interchange solutions; 3) a restart capability that minimizes recomputing; 4) macroinstructions that automatically provide the executive logic for orbit generation that optimizes the use of previously completed computations; 5) a time variable geometry package that provides automatic pointing of the various parts of an articulated spacecraft and an automatic look-back feature that eliminates redundant form factor calculations; 6) capability to specify submodel names to identify sets of surfaces or components as an entity; and 7) subroutines to perform functions which save and recall the internodal and/or space form factors in subsequent steps for nodes with fixed geometry during a variable geometry run. There are two machine versions of TRASYS v27: a DEC VAX version and a Cray UNICOS version. Both versions require installation of the NASADIG library (MSC-21801 for DEC VAX or COS-10049 for CRAY), which is available from COSMIC either separately or bundled with TRASYS. 
The NASADIG (NASA Device Independent Graphics Library) plot package provides a pictorial representation of input geometry, orbital/orientation parameters, and heating rate output as a function of time. NASADIG supports Tektronix terminals. The CRAY version of TRASYS v27 is written in FORTRAN 77 for batch or interactive execution and has been implemented on CRAY X-MP and CRAY Y-MP series computers running UNICOS. The standard distribution medium for MSC-21959 (CRAY version without NASADIG) is a 1600 BPI 9-track magnetic tape in UNIX tar format. The standard distribution medium for COS-10040 (CRAY version with NASADIG) is a set of two 6250 BPI 9-track magnetic tapes in UNIX tar format. Alternate distribution media and formats are available upon request. The DEC VAX version of TRASYS v27 is written in FORTRAN 77 for batch execution (only the plotting driver program is interactive) and has been implemented on a DEC VAX 8650 computer under VMS. Since the source codes for MSC-21030 and COS-10026 are in VAX/VMS text library files and DEC Command Language files, COSMIC will only provide these programs in the following formats: MSC-21030, TRASYS (DEC VAX version without NASADIG) is available on a 1600 BPI 9-track magnetic tape in VAX BACKUP format (standard distribution medium) or in VAX BACKUP format on a TK50 tape cartridge; COS-10026, TRASYS (DEC VAX version with NASADIG), is available in VAX BACKUP format on a set of three 6250 BPI 9-track magnetic tapes (standard distribution medium) or a set of three TK50 tape cartridges in VAX BACKUP format. TRASYS was last updated in 1993.

  18. TRASYS - THERMAL RADIATION ANALYZER SYSTEM (DEC VAX VERSION WITHOUT NASADIG)

    NASA Technical Reports Server (NTRS)

    Vogt, R. A.

    1994-01-01

    The Thermal Radiation Analyzer System, TRASYS, is a computer software system with generalized capability to solve the radiation related aspects of thermal analysis problems. TRASYS computes the total thermal radiation environment for a spacecraft in orbit. The software calculates internode radiation interchange data as well as incident and absorbed heat rate data originating from environmental radiant heat sources. TRASYS provides data of both types in a format directly usable by such thermal analyzer programs as SINDA/FLUINT (available from COSMIC, program number MSC-21528). One primary feature of TRASYS is that it allows users to write their own driver programs to organize and direct the preprocessor and processor library routines in solving specific thermal radiation problems. The preprocessor first reads and converts the user's geometry input data into the form used by the processor library routines. Then, the preprocessor accepts the user's driving logic, written in the TRASYS modified FORTRAN language. In many cases, the user has a choice of routines to solve a given problem. Users may also provide their own routines where desirable. In particular, the user may write output routines to provide for an interface between TRASYS and any thermal analyzer program using the R-C network concept. Input to the TRASYS program consists of Options and Edit data, Model data, and Logic Flow and Operations data. Options and Edit data provide for basic program control and user edit capability. The Model data describe the problem in terms of geometry and other properties. This information includes surface geometry data, documentation data, nodal data, block coordinate system data, form factor data, and flux data. Logic Flow and Operations data house the user's driver logic, including the sequence of subroutine calls and the subroutine library. Output from TRASYS consists of two basic types of data: internode radiation interchange data, and incident and absorbed heat rate data. The flexible structure of TRASYS allows considerable freedom in the definition and choice of solution method for a thermal radiation problem. The program's flexible structure has also allowed TRASYS to retain the same basic input structure as the authors update it in order to keep up with changing requirements. Among its other important features are the following: 1) up to 3200 node problem size capability with shadowing by intervening opaque or semi-transparent surfaces; 2) choice of diffuse, specular, or diffuse/specular radiant interchange solutions; 3) a restart capability that minimizes recomputing; 4) macroinstructions that automatically provide the executive logic for orbit generation that optimizes the use of previously completed computations; 5) a time variable geometry package that provides automatic pointing of the various parts of an articulated spacecraft and an automatic look-back feature that eliminates redundant form factor calculations; 6) capability to specify submodel names to identify sets of surfaces or components as an entity; and 7) subroutines to perform functions which save and recall the internodal and/or space form factors in subsequent steps for nodes with fixed geometry during a variable geometry run. There are two machine versions of TRASYS v27: a DEC VAX version and a Cray UNICOS version. Both versions require installation of the NASADIG library (MSC-21801 for DEC VAX or COS-10049 for CRAY), which is available from COSMIC either separately or bundled with TRASYS. 
The NASADIG (NASA Device Independent Graphics Library) plot package provides a pictorial representation of input geometry, orbital/orientation parameters, and heating rate output as a function of time. NASADIG supports Tektronix terminals. The CRAY version of TRASYS v27 is written in FORTRAN 77 for batch or interactive execution and has been implemented on CRAY X-MP and CRAY Y-MP series computers running UNICOS. The standard distribution medium for MSC-21959 (CRAY version without NASADIG) is a 1600 BPI 9-track magnetic tape in UNIX tar format. The standard distribution medium for COS-10040 (CRAY version with NASADIG) is a set of two 6250 BPI 9-track magnetic tapes in UNIX tar format. Alternate distribution media and formats are available upon request. The DEC VAX version of TRASYS v27 is written in FORTRAN 77 for batch execution (only the plotting driver program is interactive) and has been implemented on a DEC VAX 8650 computer under VMS. Since the source codes for MSC-21030 and COS-10026 are in VAX/VMS text library files and DEC Command Language files, COSMIC will only provide these programs in the following formats: MSC-21030, TRASYS (DEC VAX version without NASADIG) is available on a 1600 BPI 9-track magnetic tape in VAX BACKUP format (standard distribution medium) or in VAX BACKUP format on a TK50 tape cartridge; COS-10026, TRASYS (DEC VAX version with NASADIG), is available in VAX BACKUP format on a set of three 6250 BPI 9-track magnetic tapes (standard distribution medium) or a set of three TK50 tape cartridges in VAX BACKUP format. TRASYS was last updated in 1993.

  19. A measurement of global event shape distributions in the hadronic decays of the Z0

    NASA Astrophysics Data System (ADS)

    Akrawy, M. Z.; Alexander, G.; Allison, J.; Allport, P. P.; Anderson, K. J.; Armitage, J. C.; Arnison, G. T. J.; Ashton, P.; Azuelos, G.; Baines, J. T. M.; Ball, A. H.; Banks, J.; Barker, G. J.; Barlow, R. J.; Batley, J. R.; Becker, J.; Behnke, T.; Bell, K. W.; Bella, G.; Bethke, S.; Biebel, O.; Binder, U.; Bloodworth, L. J.; Bock, P.; Breuker, H.; Brown, R. M.; Brun, R.; Buijs, A.; Burckhart, H. J.; Capiluppi, P.; Carnegie, R. K.; Carter, A. A.; Carter, J. R.; Chang, C. Y.; Charlton, D. G.; Chrin, J. T. M.; Cohen, I.; Collins, W. J.; Conboy, J. E.; Couch, M.; Coupland, M.; Cuffiani, M.; Dado, S.; Dallavalle, G. M.; Debu, P.; Deninno, M. M.; Dieckmann, A.; Dittmar, M.; Dixit, M. S.; Duchovni, E.; Duerdoth, I. P.; Dumas, D.; El Mamouni, H.; Elcombe, P. A.; Estabrooks, P. G.; Etzion, E.; Fabbri, F.; Farthouat, P.; Fischer, H. M.; Fong, D. G.; French, M. T.; Fukunaga, C.; Gaidot, A.; Ganel, O.; Gary, J. W.; Gascon, J.; Geddes, N. I.; Gee, C. N. P.; Geich-Gimbel, C.; Gensler, S. W.; Gentit, F. X.; Giacomelli, G.; Gibson, V.; Gibson, W. R.; Gillies, J. D.; Goldberg, J.; Goodrick, M. J.; Gorn, W.; Granite, D.; Gross, E.; Grosse-Wiesmann, P.; Grunhaus, J.; Hagedorn, H.; Hagemann, J.; Hansroul, M.; Hargrove, C. K.; Hart, J.; Hattersley, P. M.; Hauschild, M.; Hawkes, C. M.; Heflin, E.; Hemingway, R. J.; Heuer, R. D.; Hill, J. C.; Hillier, S. J.; Ho, C.; Hobbs, J. D.; Hobson, P. R.; Hochman, D.; Holl, B.; Homer, R. J.; Hou, S. R.; Howarth, C. P.; Hughes-Jones, R. E.; Igo-Kemenes, P.; Ihssen, H.; Imrie, D. C.; Jawahery, A.; Jeffreys, P. W.; Jeremie, H.; Jimack, M.; Jobes, M.; Jones, R. W. L.; Jovanovic, P.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Kellogg, R. G.; Kennedy, B. W.; Kleinwort, C.; Klem, D. E.; Knop, G.; Kobayashi, T.; Kokott, T. P.; Köpke, L.; Kowalewski, R.; Kreutzmann, H.; von Krogh, J.; Kroll, J.; Kuwano, M.; Kyberd, P.; Lafferty, G. D.; Lamarche, F.; Larson, W. J.; Lasota, M. M. B.; Layter, J. G.; Le Du, P.; Leblanc, P.; Lee, A. M.; Lellouch, D.; Lennert, P.; Lessard, L.; Levinson, L.; Lloyd, S. L.; Loebinger, F. K.; Lorah, J. M.; Lorazo, B.; Losty, M. J.; Ludwig, J.; Lupu, N.; Ma, J.; MacBeth, A. A.; Mannelli, M.; Marcellini, S.; Maringer, G.; Martin, A. J.; Martin, J. P.; Mashimo, T.; Mättig, P.; Maur, U.; McMahon, T. J.; McPherson, A. C.; Meijers, F.; Menszner, D.; Merritt, F. S.; Mes, H.; Michelini, A.; Middleton, R. P.; Mikenberg, G.; Miller, D. J.; Milstene, C.; Minowa, M.; Mohr, W.; Montanari, A.; Mori, T.; Moss, M. W.; Murphy, P. G.; Murray, W. J.; Nellen, B.; Nguyen, H. H.; Nozaki, M.; O'Dowd, A. J. P.; O'Neale, S. W.; O'Neill, B. P.; Oakham, F. G.; Odorici, F.; Ogg, M.; Oh, H.; Oreglia, M. J.; Orito, S.; Pansart, J. P.; Patrick, G. N.; Pawley, S. J.; Pfister, P.; Pilcher, J. E.; Pinfold, J. L.; Plane, D. E.; Poli, B.; Pouladdej, A.; Pritchard, P. W.; Quast, G.; Raab, J.; Redmond, M. W.; Rees, D. L.; Regimbald, M.; Riles, K.; Roach, C. M.; Robins, S. A.; Rollnik, A.; Roney, J. M.; Rossberg, S.; Rossi, A. M.; Routenburg, P.; Runge, K.; Runolfsson, O.; Sanghera, S.; Sansum, R. A.; Sasaki, M.; Saunders, B. J.; Schaile, A. D.; Schaile, O.; Schappert, W.; Scharff-Hansen, P.; von der Schmitt, H.; Schreiber, S.; Schwarz, J.; Shapira, A.; Shen, B. C.; Sherwood, P.; Simon, A.; Siroli, G. P.; Skuja, A.; Smith, A. M.; Smith, T. J.; Snow, G. A.; Spreadbury, E. J.; Springer, R. W.; Sproston, M.; Stephens, K.; Stier, H. E.; Ströhmer, R.; Strom, D.; Takeda, H.; Takeshita, T.; Tsukamoto, T.; Turner, M. F.; Tysarczyk-Niemeyer, G.; van den Plas, D.; Vandalen, G. J.; Vasseur, G.; Virtue, C. 
J.; Wagner, A.; Wahl, C.; Ward, C. P.; Ward, D. R.; Waterhouse, J.; Watkins, P. M.; Watson, A. T.; Watson, N. K.; Weber, M.; Weisz, S.; Wermes, N.; Weymann, M.; Wilson, G. W.; Wilson, J. A.; Wingerter, I.; Winterer, V.-H.; Wood, N. C.; Wotton, S.; Wuensch, B.; Wyatt, T. R.; Yaari, R.; Yang, Y.; Yekutieli, G.; Yoshida, T.; Zeuner, W.; Zorn, G. T.

    1990-12-01

    We present measurements of global event shape distributions in the hadronic decays of the Z0. The data sample, corresponding to an integrated luminosity of about 1.3 pb-1, was collected with the OPAL detector at LEP. Most of the experimental distributions we present are unfolded for the finite acceptance and resolution of the OPAL detector. Through comparison with our unfolded data, we tune the parameter values of several Monte Carlo computer programs which simulate perturbative QCD and the hadronization of partons. Jetset version 7.2, Herwig version 3.4 and Ariadne version 3.1 all provide good descriptions of the experimental distributions; in addition, they describe lower energy data using the parameter values adjusted at the Z0 energy. A complete second order matrix element Monte Carlo program with a modified perturbation scale is also compared to our 91 GeV data and its parameter values are adjusted. We obtain an unfolded value for the mean charged multiplicity of 21.28±0.04±0.84, where the first error is statistical and the second is systematic.
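
    The event shapes themselves are standard quantities; for example, thrust is T = max_n Σ_i |p_i·n| / Σ_i |p_i| over unit axes n. A brute-force evaluation (Python with NumPy; random axis scanning is an illustrative approximation, not the OPAL analysis code):

      import numpy as np

      def thrust(momenta, n_trials=20000, seed=0):
          # Approximate the thrust axis by scanning random unit vectors.
          # momenta: (N, 3) array of particle 3-momenta.
          rng = np.random.default_rng(seed)
          axes = rng.normal(size=(n_trials, 3))
          axes /= np.linalg.norm(axes, axis=1, keepdims=True)
          numer = np.abs(axes @ momenta.T).sum(axis=1)  # sum_i |p_i . n|
          return numer.max() / np.linalg.norm(momenta, axis=1).sum()

      # Two nearly back-to-back jets give thrust close to 1.
      p = np.array([[10.0, 0.1, 0.0], [-10.0, -0.1, 0.0],
                    [5.0, 0.0, 0.2], [-5.0, 0.0, -0.2]])
      print(thrust(p))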

  20. Modeling dust as component minerals in the Community Atmosphere Model: development of framework and impact on radiative forcing

    DOE PAGES

    Scanza, R. A.; Mahowald, N.; Ghan, S.; ...

    2014-07-02

    The mineralogy of desert dust is important due to its effect on radiation, clouds and biogeochemical cycling of trace nutrients. This study presents the simulation of dust radiative forcing as a function of both mineral composition and size at the global scale, using mineral soil maps for estimating emissions. Externally mixed mineral aerosols in the bulk aerosol module in the Community Atmosphere Model version 4 (CAM4) and internally mixed mineral aerosols in the modal aerosol module in the Community Atmosphere Model version 5.1 (CAM5), embedded in the Community Earth System Model version 1.0.5 (CESM), are speciated into common mineral components in place of total dust. The simulations with mineralogy are compared to available observations of mineral atmospheric distribution and deposition, along with observations of clear-sky radiative forcing efficiency. Based on these simulations, we estimate the all-sky direct radiative forcing at the top of the atmosphere as +0.05 W m⁻² for both CAM4 and CAM5 simulations with mineralogy, and compare this both with simulations of dust in release versions of CAM4 and CAM5 (+0.08 and +0.17 W m⁻²) and of dust with optimized optical properties, wet scavenging and particle size distribution in CAM4 and CAM5 (-0.05 and -0.17 W m⁻², respectively). The ability to correctly include the mineralogy of dust in climate models is hindered by its spatial and temporal variability, as well as by insufficient global in-situ observations, incomplete and uncertain source mineralogies, and the uncertainties associated with data retrieved from remote sensing methods.

  1. Modeling dust as component minerals in the Community Atmosphere Model: development of framework and impact on radiative forcing

    DOE PAGES

    Scanza, Rachel; Mahowald, N.; Ghan, Steven J.; ...

    2015-01-01

    The mineralogy of desert dust is important due to its effect on radiation, clouds and biogeochemical cycling of trace nutrients. This study presents the simulation of dust radiative forcing as a function of both mineral composition and size at the global scale, using mineral soil maps for estimating emissions. Externally mixed mineral aerosols in the bulk aerosol module in the Community Atmosphere Model version 4 (CAM4) and internally mixed mineral aerosols in the modal aerosol module in the Community Atmosphere Model version 5.1 (CAM5), embedded in the Community Earth System Model version 1.0.5 (CESM), are speciated into common mineral components in place of total dust. The simulations with mineralogy are compared to available observations of mineral atmospheric distribution and deposition, along with observations of clear-sky radiative forcing efficiency. Based on these simulations, we estimate the all-sky direct radiative forcing at the top of the atmosphere as +0.05 W m⁻² for both CAM4 and CAM5 simulations with mineralogy. We compare this to the radiative forcing from simulations of dust in release versions of CAM4 and CAM5 (+0.08 and +0.17 W m⁻²) and of dust with optimized optical properties, wet scavenging and particle size distribution in CAM4 and CAM5 (-0.05 and -0.17 W m⁻², respectively). The ability to correctly include the mineralogy of dust in climate models is hindered by its spatial and temporal variability, as well as by insufficient global in situ observations, incomplete and uncertain source mineralogies, and the uncertainties associated with data retrieved from remote sensing methods.

  2. FY17 Status Report on the Computing Systems for the Yucca Mountain Project TSPA-LA Models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appel, Gordon John; Hadgu, Teklu

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014), Hadgu et al. (2015) and Hadgu and Appel (2016). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim, with versions 9.60.300, 10.5, 11.1 and 12.0, was installed on the cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. The current tasks included a preliminary upgrade of the TSPA-LA from version 9.60.300 to the latest version 12.0 and addressing DLL-related issues observed in the FY16 work. The model upgrade task successfully converted the Nominal Modeling case to GoldSim versions 11.1/12.0. Conversions of the rest of the TSPA models were also attempted, but program and operational difficulties precluded their completion. Upgrade of the remaining modeling cases and distributed processing tasks is expected to continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  3. Sensitivity of biogenic volatile organic compounds to land surface parameterizations and vegetation distributions in California

    NASA Astrophysics Data System (ADS)

    Zhao, Chun; Huang, Maoyi; Fast, Jerome D.; Berg, Larry K.; Qian, Yun; Guenther, Alex; Gu, Dasa; Shrivastava, Manish; Liu, Ying; Walters, Stacy; Pfister, Gabriele; Jin, Jiming; Shilling, John E.; Warneke, Carsten

    2016-05-01

    Current climate models still have large uncertainties in estimating biogenic trace gases, which can significantly affect atmospheric chemistry and secondary aerosol formation and ultimately influence air quality and aerosol radiative forcing. These uncertainties result from many factors, including uncertainties in land surface processes and in the specification of vegetation types, both of which can affect the simulated near-surface fluxes of biogenic volatile organic compounds (BVOCs). In this study, the latest version of the Model of Emissions of Gases and Aerosols from Nature (MEGAN v2.1) is coupled within the land surface scheme CLM4 (Community Land Model version 4.0) in the Weather Research and Forecasting model with chemistry (WRF-Chem). In this implementation, MEGAN v2.1 shares a consistent vegetation map with CLM4 for estimating BVOC emissions, unlike MEGAN v2.0 in the public version of WRF-Chem, which uses a stand-alone vegetation map that differs from the one used by the land surface schemes. This improved modeling framework is used to investigate the impact of two land surface schemes, CLM4 and Noah, on BVOCs, and to examine the sensitivity of BVOCs to vegetation distributions in California. The measurements collected during the Carbonaceous Aerosol and Radiative Effects Study (CARES) and the California Nexus of Air Quality and Climate Experiment (CalNex) conducted in June of 2010 provided an opportunity to evaluate the simulated BVOCs. Sensitivity experiments show that land surface schemes do influence the simulated BVOCs, but the impact is much smaller than that of vegetation distributions. This study indicates that more effort is needed to obtain the most appropriate and accurate land cover data sets for climate and air quality models in terms of simulating BVOCs, oxidant chemistry and, consequently, secondary organic aerosol formation.

  4. The Community Climate System Model Version 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gent, Peter R.; Danabasoglu, Gokhan; Donner, Leo J.

    The fourth version of the Community Climate System Model (CCSM4) was recently completed and released to the climate community. This paper describes developments to all the CCSM components, and documents fully coupled pre-industrial control runs compared to the previous version, CCSM3. Using the standard atmosphere and land resolution of 1° results in sea surface temperature biases in the major upwelling regions comparable to those of the 1.4°-resolution CCSM3. Two changes to the deep convection scheme in the atmosphere component result in the CCSM4 producing El Nino/Southern Oscillation variability with a much more realistic frequency distribution than the CCSM3, although the amplitude is too large compared to observations. They also improve the representation of the Madden-Julian Oscillation and the frequency distribution of tropical precipitation. A new overflow parameterization in the ocean component leads to an improved simulation of the deep ocean density structure, especially in the North Atlantic. Changes to the CCSM4 land component lead to a much improved annual cycle of water storage, especially in the tropics. The CCSM4 sea ice component uses much more realistic albedos than the CCSM3, and the Arctic sea ice concentration is improved in the CCSM4. An ensemble of 20th-century simulations produces an excellent match to the observed September Arctic sea ice extent from 1979 to 2005. The CCSM4 ensemble-mean increase in globally averaged surface temperature between 1850 and 2005 is larger than the observed increase by about 0.4°C. This is consistent with the fact that the CCSM4 does not include a representation of the indirect effects of aerosols, although other factors may come into play. The CCSM4 still has significant biases, such as the mean precipitation distribution in the tropical Pacific Ocean, too much low cloud in the Arctic, and the latitudinal distributions of short-wave and long-wave cloud forcings.

  5. Stratigraphic framework of Cambrian and Ordovician rocks in the central Appalachian basin from Medina County, Ohio, through southwestern and south-central Pennsylvania to Hampshire County, West Virginia: Chapter E.2.2 in Coal and petroleum resources in the Appalachian basin: distribution, geologic framework, and geochemical character

    USGS Publications Warehouse

    Ryder, Robert T.; Harris, Anita G.; Repetski, John E.; Crangle, Robert D.; Ruppert, Leslie F.; Ryder, Robert T.

    2014-01-01

    This chapter is a re-release of U.S. Geological Survey Bulletin 1839-K, of the same title, by Ryder and others (1992; online version 2.0 revised and digitized by Robert D. Crangle, Jr., 2003). It consists of one file of the report text as it appeared in USGS Bulletin 1839-K and a second file containing the cross section, figures 1 and 2, and tables 1 and 2 on one oversized sheet; the second file was digitized in 2003 as version 2.0 and also includes the gamma-ray well log traces.

  6. SAGE: The Self-Adaptive Grid Code. 3

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1999-01-01

    The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.
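
    To make the equal-error-distribution idea concrete, the following minimal sketch (Python, with illustrative names; it is not the SAGE code, which works in multiple dimensions and enforces smoothness constraints) redistributes a one-dimensional grid so that every interval carries an equal share of a gradient-based weight:

        import numpy as np

        def equidistribute(x, f, n_new=None):
            """Move grid points so each interval holds an equal share of a
            gradient-based weight; a 1-D illustration of equidistribution."""
            n_new = len(x) if n_new is None else n_new
            # Constant part preserves smoothness; gradient part clusters points.
            w = 1.0 + np.abs(np.gradient(f, x))
            # Cumulative weight via trapezoidal integration.
            W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
            # Invert W(x) at equally spaced levels to get the new point locations.
            return np.interp(np.linspace(0.0, W[-1], n_new), W, x)

        # Example: points cluster around the sharp front of a tanh profile.
        x = np.linspace(0.0, 1.0, 41)
        f = np.tanh(50.0 * (x - 0.5))
        x_adapted = equidistribute(x, f)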

  7. The AE-8 trapped electron model environment

    NASA Technical Reports Server (NTRS)

    Vette, James I.

    1991-01-01

    The machine sensible version of the AE-8 electron model environment was completed in December 1983. It has been sent to users on the model environment distribution list and is made available to new users by the National Space Science Data Center (NSSDC). AE-8 is the last in a series of terrestrial trapped radiation models that includes eight proton and eight electron versions. With the exception of AE-8, all these models were documented in formal reports as well as being available in a machine sensible form. The purpose of this report is to complete the documentation, finally, for AE-8 so that users can understand its construction and see the comparison of the model with the new data used, as well as with the AE-4 model.

  8. Temperature and Humidity Profiles in the TqJoint Data Group of AIRS Version 6 Product for the Climate Model Evaluation

    NASA Technical Reports Server (NTRS)

    Ding, Feng; Fang, Fan; Hearty, Thomas J.; Theobald, Michael; Vollmer, Bruce; Lynnes, Christopher

    2014-01-01

    The Atmospheric Infrared Sounder (AIRS) mission is entering its 13th year of global observations of the atmospheric state, including temperature and humidity profiles, outgoing long-wave radiation, cloud properties, and trace gases. AIRS data have therefore been widely used, among other things, for short-term climate research and as an observational component for model evaluation. One instance is the fifth phase of the Coupled Model Intercomparison Project (CMIP5), which used AIRS version 5 data in climate model evaluation. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is the home of processing, archiving, and distribution services for data from the AIRS mission. The GES DISC, in collaboration with the AIRS Project, released data from the version 6 algorithm in early 2013. The new algorithm represents a significant improvement over previous versions in terms of greater stability, yield, and quality of products. The ongoing Earth System Grid for next generation climate model research project, a collaborative effort of the GES DISC and NASA JPL, will bring in temperature and humidity profiles from AIRS version 6. The AIRS version 6 product adds a new "TqJoint" data group, which contains data for a common set of observations across water vapor and temperature at all atmospheric levels and is suitable for climate process studies. How different might the monthly temperature and humidity profiles in the "TqJoint" group be from those in the "Standard" group, where temperature and water vapor are not always valid at the same time? This study aims to answer that question by comprehensively comparing the temperature and humidity profiles from the "TqJoint" group and the "Standard" group. The comparison includes mean differences at different levels globally and over land and ocean. We are also examining the sampling differences between the "TqJoint" and "Standard" groups using MERRA data.
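
    As a sketch of the kind of comparison described (Python; the array names, shapes and use of uniform random stand-ins are purely illustrative, not the AIRS file layout), a level-by-level mean difference between the two groups can be computed while ignoring missing retrievals:

        import numpy as np

        # Stand-ins for monthly gridded temperature from the two groups,
        # dimensioned (month, level, lat, lon), with NaN marking missing retrievals.
        t_tqjoint = np.random.rand(12, 24, 90, 180)
        t_standard = np.random.rand(12, 24, 90, 180)

        # Global mean TqJoint-minus-Standard difference per level (area
        # weighting by cos(latitude) is omitted here for brevity).
        mean_diff = np.nanmean(t_tqjoint - t_standard, axis=(0, 2, 3))
        for lev, d in enumerate(mean_diff):
            print(f"level {lev}: mean difference = {d:+.3f}")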

  9. Parallelization and Visual Analysis of Multidimensional Fields: Application to Ozone Production, Destruction, and Transport in Three Dimensions

    NASA Technical Reports Server (NTRS)

    Schwan, Karsten; Alyea, Fred; Ribarsky, M. William; Trauner, Mary; Eisenhauer, Greg; Jean, Yves; Gu, Weiming; Wang, Ray; Waldrop, Jeffrey; Schroeder, Beth; hide

    1996-01-01

    The three-dimensional, spectral transport model used in the current project was first successfully integrated over climatological time scales by Dr. Guang Ping Lou for the simulation of atmospheric N2O using the United Kingdom Meteorological Office (UKMO) 4-dimensional, assimilated wind and temperature data set. A non-parallel, FORTRAN version of this integration using a fairly simple N2O chemistry package containing only photo-chemical reactions was used to verify our initial parallel model results. The integrations reproduced the gross features of the observed stratospheric climatological N2O distributions but also simulated the structure of the stratospheric Antarctic vortex and its evolution. Subsequently, Dr. Thomas Kindler, who produced much of the parallel version of our model, enlarged the N2O model chemistry package to include N2O reactions involving O(1D) and also introduced assimilated wind data from NASA as well as UKMO. Initially, transport calculations without chemistry were run using Carbon-14 as a non-reactive tracer gas, with the result that large differences in the transport properties of the two assimilated wind data sets were apparent from the resultant Carbon-14 distributions. Subsequent calculations for N2O, including its chemistry, with the two input wind data sets, verified against UARS satellite observations, have refined the transport differences between the two such that the model's steering capabilities could be used to infer the correct climatological vertical velocity fields required to support the N2O observations. During this process, it was also discovered that both the NASA and the UKMO data contained spurious values in some of the higher frequency wave components, leading to incorrect local transport calculations and ultimately affecting the large scale properties of the model's N2O distributions, particularly at tropical latitudes. Subsequent model runs with wind data that had been filtered to remove some of the high frequency components produced much more realistic N2O distributions. During the past few months, the UKMO wind data base for a complete two-year period was processed into spectral form for model use. This new version of the input transport data base now includes complete temperature fields as well as the necessary wind data. This was done to facilitate advanced chemical calculations in the parallel model, which often depend upon temperature. Additional UKMO data are being added as they become available.

  10. SPheno 3.1: extensions including flavour, CP-phases and models beyond the MSSM

    NASA Astrophysics Data System (ADS)

    Porod, W.; Staub, F.

    2012-11-01

    We describe recent extensions of the program SPheno, including flavour aspects, CP-phases, R-parity violation and low energy observables. In the case of flavour mixing all masses of supersymmetric particles are calculated including the complete flavour structure and all possible CP-phases at the 1-loop level. We give details on implemented seesaw models, low energy observables and the corresponding extension of the SUSY Les Houches Accord. Moreover, we comment on the possibilities to include MSSM extensions in SPheno. Catalogue identifier: ADRV_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADRV_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 154062 No. of bytes in distributed program, including test data, etc.: 1336037 Distribution format: tar.gz Programming language: Fortran95. Computer: PC running under Linux, should run in every Unix environment. Operating system: Linux, Unix. Classification: 11.6. Catalogue identifier of previous version: ADRV_v1_0 Journal reference of previous version: Comput. Phys. Comm. 153(2003)275 Does the new version supersede the previous version?: Yes Nature of problem: The first issue is the determination of the masses and couplings of supersymmetric particles in various supersymmetric models, the R-parity conserved MSSM with generation mixing and including CP-violating phases, various seesaw extensions of the MSSM and the MSSM with bilinear R-parity breaking. Low energy data on Standard Model fermion masses, gauge couplings and electroweak gauge boson masses serve as constraints. Radiative corrections from supersymmetric particles to these inputs must be calculated. Theoretical constraints on the soft SUSY breaking parameters from a high scale theory are imposed and the parameters at the electroweak scale are obtained from the high scale parameters by evaluating the corresponding renormalisation group equations. These parameters must be consistent with the requirement of correct electroweak symmetry breaking. The second issue is to use the obtained masses and couplings for calculating decay widths and branching ratios of supersymmetric particles as well as the cross sections for these particles in electron-positron annihilation. The third issue is to calculate low energy constraints in the B-meson sector such as BR(b → sγ) and ΔM_Bs, rare lepton decays such as BR(μ → eγ), the SUSY contributions to anomalous magnetic moments and electric dipole moments of leptons, the SUSY contributions to the ρ parameter as well as lepton flavour violating Z decays. Solution method: The renormalisation group equations connecting a high scale and the electroweak scale are solved numerically by the Runge-Kutta method. Iteration provides a solution consistent with the multi-boundary conditions. In the case of three-body decays and for the calculation of initial state radiation Gaussian quadrature is used for the numerical solution of the integrals. Reasons for new version: Inclusion of new models as well as additional observables. Moreover, a new standard for data transfer has been established, which is now supported. Summary of revisions: The already existing models have been extended to include also CP-violation and flavour mixing. The data transfer is done using the so-called SLHA2 standard. In addition new models have been included: all three types of seesaw models as well as bilinear R-parity violation.
Moreover, additional observables are calculated: branching ratios for flavour violating lepton decays, EDMs of leptons and of the neutron, CP-violating mass difference in the B-meson sector and branching ratios for flavour violating b-quark decays. Restrictions: In case of R-parity violation the cross sections are not calculated. Running time: 0.2 seconds on an Intel(R) Core(TM)2 Duo CPU T9900 with 3.06 GHz
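
    The solution method lends itself to a compact illustration. The toy sketch below (Python) integrates a single one-loop renormalisation group equation dg/dt = b g^3/(16π²), with t = ln Q, using a classical Runge-Kutta scheme; the coupling value and coefficient are illustrative only, and SPheno itself evolves the full coupled MSSM parameter set while iterating between low- and high-scale boundary conditions:

        import numpy as np

        def rk4(f, y, t0, t1, n=400):
            """Classical 4th-order Runge-Kutta integration of dy/dt = f(t, y)."""
            h = (t1 - t0) / n
            t = t0
            for _ in range(n):
                k1 = f(t, y)
                k2 = f(t + h / 2, y + h / 2 * k1)
                k3 = f(t + h / 2, y + h / 2 * k2)
                k4 = f(t + h, y + h * k3)
                y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
                t += h
            return y

        b = 6.6  # one-loop beta coefficient (illustrative value)
        beta = lambda t, g: b * g**3 / (16 * np.pi**2)

        g_ew = 0.46  # illustrative low-scale input
        g_gut = rk4(beta, g_ew, np.log(91.2), np.log(2.0e16))
        # A full spectrum calculator imposes high-scale boundary conditions at
        # this point, runs back down, and iterates until both ends are consistent.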

  11. Current Status of Japan's Activity for GPM/DPR and Global Rainfall Map algorithm development

    NASA Astrophysics Data System (ADS)

    Kachi, M.; Kubota, T.; Yoshida, N.; Kida, S.; Oki, R.; Iguchi, T.; Nakamura, K.

    2012-04-01

    The Global Precipitation Measurement (GPM) mission is composed of two categories of satellites: 1) a Tropical Rainfall Measuring Mission (TRMM)-like non-sun-synchronous orbit satellite (GPM Core Observatory); and 2) a constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory carries the Dual-frequency Precipitation Radar (DPR), which is being developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and a microwave radiometer provided by the National Aeronautics and Space Administration (NASA). The GPM Core Observatory will be launched in February 2014, and development of algorithms is underway. The DPR Level 1 algorithm, which provides the DPR L1B product including received power, is being developed by JAXA. The first version was submitted in March 2011. Development of the second version of the DPR L1B algorithm (Version 2) will be completed in March 2012. The Version 2 algorithm includes all basic functions, a preliminary database, the HDF5 interface, and minimum error handling. Pre-launch code will be developed by the end of October 2012. The DPR Level 2 algorithm has been developed by the DPR Algorithm Team led by Japan, which is under the NASA-JAXA Joint Algorithm Team. The first version of the GPM/DPR Level-2 Algorithm Theoretical Basis Document was completed in November 2010. The second version, the "Baseline code", was completed in January 2012. The Baseline code includes the main module and eight basic sub-modules (Preparation module, Vertical Profile module, Classification module, SRT module, DSD module, Solver module, Input module, and Output module). The Level-2 algorithms will provide KuPR-only products, KaPR-only products, and Dual-frequency Precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright band height. It is important to develop an algorithm applicable to both TRMM/PR and KuPR in order to produce a long-term continuous data set. Pre-launch code will be developed by autumn 2012. The Global Rainfall Map algorithm has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm builds on the heritage of the Global Satellite Mapping of Precipitation (GSMaP) project between 2002 and 2007 and of the near-real-time version operating at JAXA since 2007. The "Baseline code" uses the current operational GSMaP code (V5.222), and its development was completed in January 2012. Pre-launch code will be developed by autumn 2012, including an update of the databases for rain type classification and rain/no-rain classification, and the introduction of rain-gauge correction.

  12. micrOMEGAs 2.0: A program to calculate the relic density of dark matter in a generic model

    NASA Astrophysics Data System (ADS)

    Bélanger, G.; Boudjema, F.; Pukhov, A.; Semenov, A.

    2007-03-01

    micrOMEGAs 2.0 is a code which calculates the relic density of a stable massive particle in an arbitrary model. The underlying assumption is that there is a conservation law like R-parity in supersymmetry which guarantees the stability of the lightest odd particle. The new physics model must be incorporated in the notation of CalcHEP, a package for the automatic generation of squared matrix elements. Once this is done, all annihilation and coannihilation channels are included automatically in any model. Cross-sections at v=0, relevant for indirect detection of dark matter, are also computed automatically. The package includes three sample models: the minimal supersymmetric standard model (MSSM), the MSSM with complex phases and the NMSSM. Extension to other models, including non-supersymmetric models, is described. Program summary Title of program: micrOMEGAs 2.0 Catalogue identifier: ADQR_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADQR_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computers for which the program is designed and others on which it has been tested: PC, Alpha, Mac, Sun Operating systems under which the program has been tested: UNIX (Linux, OSF1, SunOS, Darwin, Cygwin) Programming language used: C and Fortran Memory required to execute with typical data: 17 MB depending on the number of processes required No. of processors used: 1 Has the code been vectorized or parallelized: no No. of lines in distributed program, including test data, etc.: 91 778 No. of bytes in distributed program, including test data, etc.: 1 306 726 Distribution format: tar.gz External routines/libraries used: no Catalogue identifier of previous version: ADQR_v1_3 Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 577 Does the new version supersede the previous version: yes Nature of physical problem: Calculation of the relic density of the lightest stable particle in a generic new model of particle physics. Method of solution: In numerically solving the evolution equation for the density of dark matter, relativistic formulae for the thermal average are used. All tree-level processes for annihilation and coannihilation of new particles in the model are included. The cross-sections for all processes are calculated exactly with CalcHEP after definition of a model file. Higher-order QCD corrections to Higgs couplings to quark pairs are included. Reasons for the new version: There are many models of new physics that propose a candidate for dark matter besides the much studied minimal supersymmetric standard model. This new version not only incorporates extensions of the MSSM, such as the MSSM with complex phases, or the NMSSM which contains an extra singlet superfield, but also gives the possibility for the user to incorporate easily a new model. For this the user only needs to redefine appropriately a new model file. Summary of revisions: Possibility to include in the package any particle physics model with a discrete symmetry that guarantees the stability of the cold dark matter candidate (LOP) and to compute the relic density of CDM. Compute automatically the cross-sections for annihilation of the LOP at small velocities into SM final states and provide the energy spectra for γ, e, p¯, ν final states. For the MSSM with input parameters defined at the GUT scale, the interface with any of the spectrum calculator codes reads an input file in the SUSY Les Houches Accord format (SLHA).
Implementation of the MSSM with complex parameters (CPV-MSSM) with an interface to CPsuperH to calculate the spectrum. Routine to calculate the electric dipole moment of the electron in the CPV-MSSM. In the NMSSM, new interface compatible with NMHDECAY2.1. Typical running time: 0.2 sec Unusual features of the program: Depending on the parameters of the model, the program generates additional new code, compiles it and loads it dynamically.
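
    The evolution equation being solved has a well-known schematic form, sketched below (Python; the constants lump together the entropy density, Hubble rate and thermally averaged cross-section and are purely illustrative, whereas micrOMEGAs assembles the cross-sections from all annihilation and coannihilation channels via CalcHEP with relativistic thermal averages):

        import numpy as np

        lam = 1.0e9   # illustrative strength of the annihilation term
        a = 0.145     # illustrative prefactor of the equilibrium yield

        def yeq(x):
            # Non-relativistic equilibrium yield, x = m/T.
            return a * x**1.5 * np.exp(-x)

        # dY/dx = -(lam/x^2) * (Y^2 - Yeq^2); the equation is stiff near
        # equilibrium, so take implicit (backward Euler) steps.
        x, Y, dx = 1.0, yeq(1.0), 0.01
        while x < 1000.0:
            k = lam / x**2
            c = Y + k * dx * yeq(x + dx)**2
            Y = (-1.0 + np.sqrt(1.0 + 4.0 * k * dx * c)) / (2.0 * k * dx)
            x += dx
        print(f"frozen-out yield Y ~ {Y:.2e}")  # relic density scales with m*Y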

  13. Data analysis environment (DASH2000) for the Subaru telescope

    NASA Astrophysics Data System (ADS)

    Mizumoto, Yoshihiko; Yagi, Masafumi; Chikada, Yoshihiro; Ogasawara, Ryusuke; Kosugi, George; Takata, Tadafumi; Yoshida, Michitoshi; Ishihara, Yasuhide; Yanaka, Hiroshi; Yamamoto, Tadahiro; Morita, Yasuhiro; Nakamoto, Hiroyuki

    2000-06-01

    A new framework for data analysis (DASH) has been developed for the SUBARU Telescope. It is designed using object-oriented methodology and adopts a restaurant model. DASH shares the CPU and I/O load among distributed heterogeneous computers. The distributed object environment of the system is implemented with JAVA and CORBA. DASH has been evaluated through several prototypes. DASH2000 is the latest version, which will be released as the beta version of the data analysis system for the SUBARU Telescope.

  14. User's guide to the wetland creation/restoration data base, version 2

    USGS Publications Warehouse

    Miller, Lee; Auble, Gregor T.; Schneller-McDonald, Keith

    1991-01-01

    Wetland creation or restoration projects are frequently proposed as mitigation for unavoidable wetland losses, as components of wetland enhancement programs, and as tools to accomplish specific objectives such as waterfowl production or flood control. There is considerable controversy concerning the effectiveness of such projects as well as the most appropriate and efficient techniques to employ. The importance of the resource and the long time scales involved in fully evaluating a creation or restoration effort make it imperative to consider existing information as fully as possible in the development and evaluation of wetland creation or restoration proposals. To aid in the evaluation of wetland creation/restoration efforts, the U.S. Fish and Wildlife Service (FWS), National Ecology Research Center, has developed the Wetland Creation/Restoration (WCR) Data Base. The data base is a highly indexed or keyworded bibliography of wetland creation or restoration articles. ("Articles" refers to any type of publication that deals specifically with wetland creation/restoration projects or studies.) The scope of the articles is international, although most of them are concerned with projects conducted in the United States. Information coded for each article includes author; citation; type of wetland and its location in terms of state, ecoregion, and FWS region; type of study undertaken; objectives in creating or restoring the wetland; actions performed to realize those objectives; length of time encompassed by the study; evaluation of results and responses to the wetland creation/restoration actions; and a listing of plant species significant to the project. A brief annotation summarizes the article and includes any significant additional information that may not be adequately reflected in the above described fields. Many of these articles describe only one or two components of a total wetland restoration effort. Planning a project that is designed to restore a wetland system (including at least some of its functions) is similar to constructing a picture from a number of puzzle pieces--missing pieces represent data gaps or information that is not available. Articles range from specific case studies, to overviews of restoration methods and techniques, to planning restoration projects and assessing programmatic and administrative backgrounds and interactions. In this data base, the term "restoration" is applied loosely to include rehabilitation of wetlands. It may refer to a number of situations or actions including, but not limited to: 1. breaching dikes or plugging drains; 2. water pollution clean-up; 3. conversion of eutrophic conditions; 4. wastewater treatment; 5. recolonization of previously disturbed or denuded areas; 6. amelioration of adverse conditions (erosion, wave, or wind action); 7. soil treatment--mulching, fertilization; 8. rerouting streams--may include construction of meander patterns; 9. monitoring natural vegetation; or 10. excluding grazers (geese, cattle) and monitoring results. This report describes the format and content of Version 2 of the WCR data base. Version 2 differs from the previous version described in Schneller-McDonald et al. (1988): several fields have been dropped and condensed and new records have been added. Version 2 includes all records distributed with the earlier version and its updates. We recommend you replace any previous version with Version 2.

  15. The CEOS International Directory Network: Progress and Plans, Spring, 1999

    NASA Technical Reports Server (NTRS)

    Olsen, Lola M.

    1999-01-01

    The Global Change Master Directory (GCMD) serves as the software development hub for the Committee on Earth Observation Satellites' (CEOS) International Directory Network (IDN). The GCMD has upgraded the software for the IDN nodes as Version 7 of the GCMD: MD7-Oracle and MD7-Isite, as well as three other MD7 experimental interfaces. The contribution by DLR representatives (Germany) of the DLR Thesaurus will be demonstrated as an educational tool for use with MD7-Isite. The software will be installed at twelve nodes around the world: Brazil, Argentina, the Netherlands, Canada, France, Germany, Italy, Japan, Australia, New Zealand, Switzerland, and several sites in the United States. Representing NASA for the International Directory Network and the CEOS Data Access Subgroup, NASA's contribution to this international interoperability effort will be updated. Discussion will include interoperability with the CEOS Interoperability Protocol (CIP), features of the latest version of the software, including upgraded capabilities for distributed input by the IDN nodes, installation logistics, "mirroring", population objectives, and future plans.

  16. The CEOS International Directory Network Progress and Plans: Spring, 1999

    NASA Technical Reports Server (NTRS)

    Olsen, Lola M.

    1999-01-01

    The Global Change Master Directory (GCMD) serves as the software development hub for the Committee on Earth Observation Satellites' (CEOS) International Directory Network (IDN). The GCMD has upgraded the software for the IDN nodes as Version 7 of the GCMD: MD7-Oracle and MD7-Isite, as well as three other MD7 experimental interfaces. The contribution by DLR representatives (Germany) of the DLR Thesaurus will be demonstrated as an educational tool for use with MD7-Isite. The software will be installed at twelve nodes around the world: Brazil, Argentina, the Netherlands, Canada, France, Germany, Italy, Japan, Australia, New Zealand, Switzerland, and several sites in the United States. Representing NASA for the International Directory Network and the CEOS Data Access Subgroup, NASA's contribution to this international interoperability effort will be updated. Discussion will include interoperability with the CEOS Interoperability Protocol (CIP), features of the latest version of the software, including upgraded capabilities for distributed input by the IDN nodes, installation logistics, "mirroring", population objectives, and future plans.

  17. NASA Access Mechanism: Lessons learned document

    NASA Technical Reports Server (NTRS)

    Burdick, Lisa; Dunbar, Rick; Duncan, Denise; Generous, Curtis; Hunter, Judy; Lycas, John; Taber-Dudas, Ardeth

    1994-01-01

    The six-month beta test of the NASA Access Mechanism (NAM) prototype was completed on June 30, 1993. This report documents the lessons learned from the use of this Graphical User Interface to NASA databases such as the NASA STI Database, outside databases, Internet resources, and peers in the NASA R&D community. Design decisions, such as the use of XWindows software, a client-server distributed architecture, and use of the NASA Science Internet, are explained. Users' reactions to the interface and suggestions for design changes are reported, as are the changes made by the software developers based on new technology for information discovery and retrieval. The lessons learned section also reports reactions from the public, both at demonstrations and in response to articles in the trade press and journals. Recommendations are included for future versions, such as a World Wide Web (WWW) and Mosaic based interface to heterogeneous databases, and NAM-Lite, a version which allows customization to include utilities provided locally at NASA Centers.

  18. Sea Level Affecting Marshes Model (SLAMM) ‐ New functionality for predicting changes in distribution of submerged aquatic vegetation in response to sea level rise

    USGS Publications Warehouse

    Lee II, Henry; Reusser, Deborah A.; Frazier, Melanie R; McCoy, Lee M; Clinton, Patrick J.; Clough, Jonathan S.

    2014-01-01

    The “Sea‐Level Affecting Marshes Model” (SLAMM) is a moderate resolution model used to predict the effects of sea level rise on marsh habitats (Craft et al. 2009). SLAMM has been used extensively on both the west coast (e.g., Glick et al., 2007) and east coast (e.g., Geselbracht et al., 2011) of the United States to evaluate potential changes in the distribution and extent of tidal marsh habitats. However, a limitation of the current version of SLAMM (Version 6.2) is that it lacks the ability to model distribution changes in seagrass habitat resulting from sea level rise. Because of the ecological importance of SAV habitats, the U.S. EPA, USGS, and USDA partnered with Warren Pinnacle Consulting to enhance the SLAMM modeling software with new functionality to predict changes in Zostera marina distribution within Pacific Northwest estuaries in response to sea level rise. Specifically, the objective was to develop an SAV model that used generally available GIS data, had predictive parameters, and could be customized for other estuaries that have GIS layers of existing SAV distribution. This report describes the procedure used to develop the SAV model for the Yaquina Bay Estuary, Oregon, appends a statistical script based on the open source R software to generate a similar SAV model for other estuaries that have data layers of existing SAV, and describes how to incorporate the model coefficients from the site‐specific SAV model into SLAMM to predict the effects of sea level rise on Zostera marina distributions. To demonstrate the applicability of the R tools, we utilize them to develop model coefficients for Willapa Bay, Washington, using site‐specific SAV data.
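
    The report's statistical script is written in R; the following Python sketch (all variable names and data are hypothetical) shows the same basic idea of fitting a presence/absence model for SAV from GIS-derived predictors, whose coefficients could then be handed to a SLAMM-style projection:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical training table: one row per GIS cell, with predictors
        # such as depth and distance to channel, and observed eelgrass presence.
        rng = np.random.default_rng(0)
        depth = rng.uniform(-2.0, 5.0, 500)            # m relative to datum
        dist_channel = rng.uniform(0.0, 2000.0, 500)   # m
        p = 1.0 / (1.0 + np.exp(-(1.5 - 1.2 * np.abs(depth - 0.5))))
        presence = (rng.random(500) < p).astype(int)   # synthetic 0/1 labels

        X = np.column_stack([depth, dist_channel])
        model = LogisticRegression(max_iter=1000).fit(X, presence)
        # The fitted coefficients play the role of the site-specific model
        # coefficients that are incorporated into SLAMM for SLR projections.
        print(model.intercept_, model.coef_)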

  19. Research notes : high-speed rail survey results.

    DOT National Transportation Integrated Search

    2010-08-01

    The survey was conducted from April 2010 to June 2010 using both a print and a web version with identical questions. The print version of the survey was distributed at open house meetings on high-speed rail held in Eugene, Junction City, Albany, Sale...

  20. A Truncated Cauchy Distribution

    ERIC Educational Resources Information Center

    Nadarajah, Saralees; Kotz, Samuel

    2006-01-01

    A truncated version of the Cauchy distribution is introduced. Unlike the Cauchy distribution, this possesses finite moments of all orders and could therefore be a better model for certain practical situations. One such situation in finance is discussed. Explicit expressions for the moments of the truncated distribution are also derived.
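
    A worked special case (location 0, scale 1, derived here from the stated setup rather than reproduced from the paper) shows why truncation restores finite moments. Truncating the standard Cauchy to [a, b] and normalizing by F(b) - F(a), with F(x) = 1/2 + (1/\pi)\arctan x, gives the density

        f_T(x) = \frac{1}{(\arctan b - \arctan a)\,(1 + x^2)}, \qquad a \le x \le b,

    and since \int x/(1+x^2)\,dx = \tfrac{1}{2}\ln(1+x^2), the first moment, divergent for the untruncated distribution, is now finite:

        \mathrm{E}[X] = \frac{\ln\bigl((1+b^2)/(1+a^2)\bigr)}{2\,(\arctan b - \arctan a)}.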

  1. PROUCL VERSION 3.0

    EPA Science Inventory

    The computation of a (1-α)100% upper confidence limit (UCL) of the population mean depends upon the data distribution. Typically, environmental data are positively skewed, and a default lognormal distribution (EPA, 1992) is often used to model such data distributions. The H-stati...
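
    Although the abstract is truncated, the H-statistic it begins to describe enters through Land's upper confidence limit for a lognormal mean; in its standard form (stated here from the general literature, not from this specific document), with \bar{y} and s_y the sample mean and standard deviation of the log-transformed data and H_{1-\alpha} a tabulated function of s_y and n:

        \mathrm{UCL}_{1-\alpha} = \exp\!\left( \bar{y} + \frac{s_y^2}{2} + \frac{s_y\, H_{1-\alpha}}{\sqrt{n-1}} \right)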

  2. LERC-SLAM - THE NASA LEWIS RESEARCH CENTER SATELLITE LINK ATTENUATION MODEL PROGRAM (MACINTOSH VERSION)

    NASA Technical Reports Server (NTRS)

    Manning, R. M.

    1994-01-01

    The frequency and intensity of rain attenuation affecting the communication between a satellite and an earth terminal is an important consideration in planning satellite links. The NASA Lewis Research Center Satellite Link Attenuation Model Program (LeRC-SLAM) provides a static and dynamic statistical assessment of the impact of rain attenuation on a communications link established between an earth terminal and a geosynchronous satellite. The program is designed for use in the specification, design and assessment of satellite links for any terminal location in the continental United States. The basis for LeRC-SLAM is the ACTS Rain Attenuation Prediction Model, which uses a log-normal cumulative probability distribution to describe the random process of rain attenuation on satellite links. The derivation of the statistics for the rainrate process at the specified terminal location relies on long term rainfall records compiled by the U.S. Weather Service during time periods of up to 55 years in length. The theory of extreme value statistics is also utilized. The user provides 1) the longitudinal position of the satellite in geosynchronous orbit, 2) the geographical position of the earth terminal in terms of latitude and longitude, 3) the height above sea level of the terminal site, 4) the yearly average rainfall at the terminal site, and 5) the operating frequency of the communications link (within 1 to 1000 GHz, inclusive). Based on the yearly average rainfall at the terminal location, LeRC-SLAM calculates the relevant rain statistics for the site using an internal data base. The program then generates rain attenuation data for the satellite link. This data includes a description of the static (i.e., yearly) attenuation process, an evaluation of the cumulative probability distribution for attenuation effects, and an evaluation of the probability of fades below selected fade depths. In addition, LeRC-SLAM calculates the elevation and azimuth angles of the terminal antenna required to establish a link with the satellite, the statistical parameters that characterize the rainrate process at the terminal site, the length of the propagation path within the potential rain region, and its projected length onto the local horizontal. The IBM PC version of LeRC-SLAM (LEW-14979) is written in Microsoft QuickBASIC for an IBM PC compatible computer with a monitor and printer capable of supporting an 80-column format. The IBM PC version is available on a 5.25 inch MS-DOS format diskette. The program requires about 30K RAM. The source code and executable are included. The Macintosh version of LeRC-SLAM (LEW-14977) is written in Microsoft Basic, Binary (b) v2.00 for Macintosh II series computers running MacOS. This version requires 400K RAM and is available on a 3.5 inch 800K Macintosh format diskette, which includes source code only. The Macintosh version was developed in 1987 and the IBM PC version was developed in 1989. IBM PC is a trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. Macintosh is a registered trademark of Apple Computer, Inc.
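
    A minimal sketch of the kind of exceedance statistic such a log-normal fade model yields (Python; the parameter values are illustrative and are not LeRC-SLAM's internal site statistics):

        import math

        def exceedance_prob(a_db, mu, sigma, p_rain=0.05):
            """P(attenuation > a_db) when attenuation during rain is log-normal
            (mu, sigma on the ln-dB scale) and rain occurs with probability p_rain."""
            z = (math.log(a_db) - mu) / sigma
            q = 0.5 * math.erfc(z / math.sqrt(2.0))  # standard normal survival function
            return p_rain * q

        # Fraction of an average year a link exceeds a 10 dB fade (toy numbers):
        print(exceedance_prob(10.0, mu=1.0, sigma=1.2))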

  3. LERC-SLAM - THE NASA LEWIS RESEARCH CENTER SATELLITE LINK ATTENUATION MODEL PROGRAM (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Manning, R. M.

    1994-01-01

    The frequency and intensity of rain attenuation affecting the communication between a satellite and an earth terminal is an important consideration in planning satellite links. The NASA Lewis Research Center Satellite Link Attenuation Model Program (LeRC-SLAM) provides a static and dynamic statistical assessment of the impact of rain attenuation on a communications link established between an earth terminal and a geosynchronous satellite. The program is designed for use in the specification, design and assessment of satellite links for any terminal location in the continental United States. The basis for LeRC-SLAM is the ACTS Rain Attenuation Prediction Model, which uses a log-normal cumulative probability distribution to describe the random process of rain attenuation on satellite links. The derivation of the statistics for the rainrate process at the specified terminal location relies on long term rainfall records compiled by the U.S. Weather Service during time periods of up to 55 years in length. The theory of extreme value statistics is also utilized. The user provides 1) the longitudinal position of the satellite in geosynchronous orbit, 2) the geographical position of the earth terminal in terms of latitude and longitude, 3) the height above sea level of the terminal site, 4) the yearly average rainfall at the terminal site, and 5) the operating frequency of the communications link (within 1 to 1000 GHz, inclusive). Based on the yearly average rainfall at the terminal location, LeRC-SLAM calculates the relevant rain statistics for the site using an internal data base. The program then generates rain attenuation data for the satellite link. This data includes a description of the static (i.e., yearly) attenuation process, an evaluation of the cumulative probability distribution for attenuation effects, and an evaluation of the probability of fades below selected fade depths. In addition, LeRC-SLAM calculates the elevation and azimuth angles of the terminal antenna required to establish a link with the satellite, the statistical parameters that characterize the rainrate process at the terminal site, the length of the propagation path within the potential rain region, and its projected length onto the local horizontal. The IBM PC version of LeRC-SLAM (LEW-14979) is written in Microsoft QuickBASIC for an IBM PC compatible computer with a monitor and printer capable of supporting an 80-column format. The IBM PC version is available on a 5.25 inch MS-DOS format diskette. The program requires about 30K RAM. The source code and executable are included. The Macintosh version of LeRC-SLAM (LEW-14977) is written in Microsoft Basic, Binary (b) v2.00 for Macintosh II series computers running MacOS. This version requires 400K RAM and is available on a 3.5 inch 800K Macintosh format diskette, which includes source code only. The Macintosh version was developed in 1987 and the IBM PC version was developed in 1989. IBM PC is a trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. Macintosh is a registered trademark of Apple Computer, Inc.

  4. NDL-v2.0: A new version of the numerical differentiation library for parallel architectures

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Voglis, C.; Papageorgiou, D. G.; Lagaris, I. E.

    2014-07-01

    We present a new version of the numerical differentiation library (NDL) used for the numerical estimation of first and second order partial derivatives of a function by finite differencing. In this version we have restructured the serial implementation of the code so as to achieve optimal task-based parallelization. The pure shared-memory parallelization of the library has been based on the lightweight OpenMP tasking model, allowing for the full extraction of the available parallelism and efficient scheduling of multiple concurrent library calls. On multicore clusters, parallelism is exploited by means of TORC, an MPI-based multi-threaded tasking library. The new MPI implementation of NDL provides optimal performance in terms of function calls and, furthermore, supports asynchronous execution of multiple library calls within legacy MPI programs. In addition, a Python interface has been implemented for all cases, exporting the functionality of our library to sequential Python codes. Catalog identifier: AEDG_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 63036 No. of bytes in distributed program, including test data, etc.: 801872 Distribution format: tar.gz Programming language: ANSI Fortran-77, ANSI C, Python. Computer: Distributed systems (clusters), shared memory systems. Operating system: Linux, Unix. Has the code been vectorized or parallelized?: Yes. RAM: The library uses O(N) internal storage, N being the dimension of the problem. It can use up to O(N²) internal storage for Hessian calculations, if a task throttling factor has not been set by the user. Classification: 4.9, 4.14, 6.5. Catalog identifier of previous version: AEDG_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180(2009)1404 Does the new version supersede the previous version?: Yes Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, and sensitivity analysis. For a large number of scientific and engineering applications, the underlying functions correspond to simulation codes for which analytical estimation of derivatives is difficult or almost impossible. A parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Reasons for new version: The updated version was motivated by our endeavors to extend a parallel Bayesian uncertainty quantification framework [1] by incorporating higher order derivative information, as in most state-of-the-art stochastic simulation methods such as Stochastic Newton MCMC [2] and Riemannian Manifold Hamiltonian MC [3]. The function evaluations are simulations with significant time-to-solution, which also varies with the input parameters, as in [1,4]. The runtime of an N-body-type problem, for example, changes considerably with the introduction of a longer cut-off between the bodies.
In the first version of the library, the OpenMP-parallel subroutines spawn a new team of threads and distribute the function evaluations with a PARALLEL DO directive. This limits the functionality of the library as multiple concurrent calls require nested parallelism support from the OpenMP environment. Therefore, either their function evaluations will be serialized or processor oversubscription is likely to occur due to the increased number of OpenMP threads. In addition, the Hessian calculations include two explicit parallel regions that compute first the diagonal and then the off-diagonal elements of the array. Due to the barrier between the two regions, the parallelism of the calculations is not fully exploited. These issues have been addressed in the new version by first restructuring the serial code and then running the function evaluations in parallel using OpenMP tasks. Although the MPI-parallel implementation of the first version is capable of fully exploiting the task parallelism of the PNDL routines, it does not utilize the caching mechanism of the serial code and, therefore, performs some redundant function evaluations in the Hessian and Jacobian calculations. This can lead to: (a) higher execution times if the number of available processors is lower than the total number of tasks, and (b) significant energy consumption due to wasted processor cycles. Overcoming these drawbacks, which become critical as the time of a single function evaluation increases, was the primary goal of this new version. Due to the code restructure, the MPI-parallel implementation (and the OpenMP-parallel in accordance) avoids redundant calls, providing optimal performance in terms of the number of function evaluations. Another limitation of the library was that the library subroutines were collective and synchronous calls. In the new version, each MPI process can issue any number of subroutines for asynchronous execution. We introduce two library calls that provide global and local task synchronizations, similarly to the BARRIER and TASKWAIT directives of OpenMP. The new MPI-implementation is based on TORC, a new tasking library for multicore clusters [5-7]. TORC improves the portability of the software, as it relies exclusively on the POSIX-Threads and MPI programming interfaces. It allows MPI processes to utilize multiple worker threads, offering a hybrid programming and execution environment similar to MPI+OpenMP, in a completely transparent way. Finally, to further improve the usability of our software, a Python interface has been implemented on top of both the OpenMP and MPI versions of the library. This allows sequential Python codes to exploit shared and distributed memory systems. Summary of revisions: The revised code improves the performance of both parallel (OpenMP and MPI) implementations. The functionality and the user-interface of the MPI-parallel version have been extended to support the asynchronous execution of multiple PNDL calls, issued by one or multiple MPI processes. A new underlying tasking library increases portability and allows MPI processes to have multiple worker threads. For both implementations, an interface to the Python programming language has been added. Restrictions: The library uses only double precision arithmetic. The MPI implementation assumes the homogeneity of the execution environment provided by the operating system. 
Specifically, the processes of a single MPI application must have identical address spaces, so that a user function resides at the same virtual address in each process. In addition, address space layout randomization should not be used for the application. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 23 ms for the serial distribution, 25 ms for the OpenMP version with 2 threads, and 53 ms and 1.01 s for the MPI parallel distribution using 2 threads and 2 processes respectively, with the yield-time for idle workers equal to 10 ms. References: [1] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Bayesian uncertainty quantification and propagation in molecular dynamics simulations: a high performance computing framework, J. Chem. Phys. 137 (14). [2] H.P. Flath, L.C. Wilcox, V. Akcelik, J. Hill, B. van Bloemen Waanders, O. Ghattas, Fast algorithms for Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial Hessian approximations, SIAM J. Sci. Comput. 33 (1) (2011) 407-432. [3] M. Girolami, B. Calderhead, Riemann manifold Langevin and Hamiltonian Monte Carlo methods, J. R. Stat. Soc. Ser. B (Stat. Methodol.) 73 (2) (2011) 123-214. [4] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Data driven, predictive molecular dynamics for nanoscale flow simulations under uncertainty, J. Phys. Chem. B 117 (47) (2013) 14808-14816. [5] P.E. Hadjidoukas, E. Lappas, V.V. Dimakopoulos, A runtime library for platform-independent task parallelism, in: PDP, IEEE, 2012, pp. 229-236. [6] C. Voglis, P.E. Hadjidoukas, D.G. Papageorgiou, I. Lagaris, A parallel hybrid optimization algorithm for fitting interatomic potentials, Appl. Soft Comput. 13 (12) (2013) 4481-4492. [7] P.E. Hadjidoukas, C. Voglis, V.V. Dimakopoulos, I. Lagaris, D.G. Papageorgiou, Supporting adaptive and irregular parallelism for non-linear numerical optimization, Appl. Math. Comput. 231 (2014) 544-559.
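
    The step-selection idea at the heart of the library is easy to illustrate (Python; a generic textbook sketch, not the NDL code, which also honors bound constraints and multiple accuracy levels): for central differences, the O(h²) truncation error and the O(ε/h) round-off error balance at a step of roughly ε^(1/3), with ε the machine epsilon:

        import numpy as np

        def central_diff(f, x, h=None):
            """First derivative by central differencing with a step chosen to
            balance truncation against round-off error."""
            if h is None:
                h = np.finfo(float).eps ** (1.0 / 3.0) * max(1.0, abs(x))
            return (f(x + h) - f(x - h)) / (2.0 * h)

        print(central_diff(np.sin, 0.7), np.cos(0.7))  # agree to ~10 digits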

  5. Revision of FMM-Yukawa: An adaptive fast multipole method for screened Coulomb interactions

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Huang, Jingfang; Pitsianis, Nikos P.; Sun, Xiaobai

    2010-12-01

    FMM-YUKAWA is a mathematical software package primarily for rapid evaluation of the screened Coulomb interactions of N particles in three dimensional space. Since its release, we have revised and re-organized the data structure, software architecture, and user interface, for the purpose of enabling more flexible, broader and easier use of the package. The package and its documentation are available at http://www.fastmultipole.org/, along with a few other closely related mathematical software packages. New version program summary Program title: FMM-Yukawa Catalogue identifier: AEEQ_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEQ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPL 2.0 No. of lines in distributed program, including test data, etc.: 78 704 No. of bytes in distributed program, including test data, etc.: 854 265 Distribution format: tar.gz Programming language: FORTRAN 77, FORTRAN 90, and C. Requires gcc and gfortran version 4.4.3 or later Computer: All Operating system: Any Classification: 4.8, 4.12 Catalogue identifier of previous version: AEEQ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2331 Does the new version supersede the previous version?: Yes Nature of problem: To evaluate the screened Coulomb potential and force field of N charged particles, and to evaluate a convolution type integral where the Green's function is the fundamental solution of the modified Helmholtz equation. Solution method: The new version of the fast multipole method (FMM) that diagonalizes the multipole-to-local translation operator is applied with the tree structure adaptive to sample particle locations. Reasons for new version: To handle much larger particle ensembles, to enable the iterative use of the subroutines in a solver, and to remove potential contention in assignments for parallelization. Summary of revisions: The software package FMM-Yukawa has been revised and re-organized in data structure, software architecture, programming methods, and user interface. The revision enables more flexible use of the package and economical use of memory resources. It consists of five stages. The initial stage (stage 1) determines, based on the accuracy requirement and FMM theory, the length of multipole expansions and the number of quadrature points for diagonalization, and loads the quadrature nodes and weights that are computed off line. Stage 2 constructs the oct-tree and interaction lists, with adaptation to the sparsity or density of particles and employing a dynamic memory allocation scheme at every tree level. Stage 3 executes the core FMM subroutine for numerical calculation of the particle interactions. The subroutine can now be used iteratively as in a solver, while the particle locations remain the same. Stage 4 releases the memory allocated in Stage 2 for the adaptive tree and interaction lists. The user can modify the iterative routine easily. When the particle locations are changed, such as in a molecular dynamics simulation, stages 2 to 4 can also be used together repeatedly. The final stage releases the memory space used for the quadrature and other remaining FMM parameters. Programs at the stage level and at the user interface are re-written in the C programming language, while most of the translation and interaction operations remain in FORTRAN.
As a result of the change in data structures and memory allocation, the revised package can accommodate much larger particle ensembles while maintaining the same accuracy-efficiency performance. The new version is also developed as an important precursor to its parallel counterpart on multi-core or many-core processors in a shared memory programming environment. Particularly, in order to ensure mutual exclusion in concurrent updates without incurring extra latency, we have replaced all the assignment statements at a source box that put its data to multiple target boxes with assignments at every target box that gather data from source boxes. This amounts to replacing the column version of matrix-vector multiplication with the row version. The matrix here, however, is in compressive representation. Sufficient care is taken in the revision not to alter the algorithmic complexity or numerical behavior, as concurrent writing potentially takes place in the upward calculation of the multipole expansion coefficients, interactions at every level of the FMM tree, and downward calculation of the local expansion coefficients. The software modules and their compositions are also organized according to the stages in which they are used. Demonstration files and makefiles for merging the user routines and the library routines are provided. Restrictions: Accuracy requirement is described in terms of three or six digits. Higher multiples of three digits will be allowed in a later version. Finer decimation in digits for accuracy specification may or may not be necessary. Unusual features: Ready and friendly for customized use and instrumental in expression of concurrency and dependency for efficient parallelization. Running time: The running time depends linearly on the number N of particles and varies with the characteristics of the particle distribution. It also depends on the accuracy requirement; a higher accuracy requirement takes relatively longer. The code outperforms the direct summation method when N⩾750.
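
    For reference, the direct summation method that the FMM outperforms beyond roughly N = 750 is the brute-force O(N²) evaluation below (Python sketch; positions, charges and the screening parameter are illustrative):

        import numpy as np

        def yukawa_direct(pos, q, beta):
            """Direct O(N^2) screened Coulomb (Yukawa) potentials:
            phi_i = sum over j != i of q_j * exp(-beta * r_ij) / r_ij."""
            phi = np.zeros(len(q))
            for i in range(len(q)):
                d = pos - pos[i]
                r = np.sqrt((d * d).sum(axis=1))
                r[i] = np.inf  # exclude the self-interaction term
                phi[i] = np.sum(q * np.exp(-beta * r) / r)
            return phi

        rng = np.random.default_rng(1)
        pos = rng.random((1000, 3))       # N particles in the unit cube
        q = rng.standard_normal(1000)     # charges
        phi = yukawa_direct(pos, q, beta=0.1)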

  6. Resampling methods in Microsoft Excel® for estimating reference intervals

    PubMed Central

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose, including the recommended interpolation procedures for estimating the 2.5th and 97.5th percentiles.
The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to using Microsoft Excel® 2010 for estimating reference intervals in particular.
Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and when the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples. PMID:26527366
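
    The procedure described translates directly into a few lines of code. The sketch below uses Python rather than Excel purely for compactness (the data and seed are illustrative); it draws resamples with replacement and averages the percentile estimates:

        import numpy as np

        def bootstrap_reference_interval(values, n_boot=1000, seed=0):
            """Percentile-bootstrap 2.5th/97.5th reference limits using
            resampling with replacement (at least 500-1000 resamples advised)."""
            rng = np.random.default_rng(seed)
            lows, highs = [], []
            for _ in range(n_boot):
                resample = rng.choice(values, size=len(values), replace=True)
                lows.append(np.percentile(resample, 2.5))
                highs.append(np.percentile(resample, 97.5))
            return np.mean(lows), np.mean(highs)

        # ~40 reference samples, as in the small-sample case discussed above.
        measurements = np.random.default_rng(42).lognormal(1.0, 0.4, 40)
        print(bootstrap_reference_interval(measurements))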

  7. Resampling methods in Microsoft Excel® for estimating reference intervals.

    PubMed

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose, including the recommended interpolation procedures for estimating the 2.5th and 97.5th percentiles.
The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to using Microsoft Excel® 2010 for estimating reference intervals in particular.
Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and when the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.

  8. The CMIP5 archive architecture: A system for petabyte-scale distributed archival of climate model data

    NASA Astrophysics Data System (ADS)

    Pascoe, Stephen; Cinquini, Luca; Lawrence, Bryan

    2010-05-01

    The Phase 5 Coupled Model Intercomparison Project (CMIP5) will produce a petabyte-scale archive of climate data relevant to future international assessments of climate science (e.g., the IPCC's 5th Assessment Report scheduled for publication in 2013). The infrastructure for the CMIP5 archive must meet many challenges to support this ambitious international project. We describe here the distributed software architecture being deployed worldwide to meet these challenges. The CMIP5 architecture extends the Earth System Grid (ESG) distributed architecture of Datanodes, providing data access and visualisation services, and Gateways, providing the user interface including registration, search and browse services. Additional features developed for CMIP5 include a publication workflow incorporating quality control and metadata submission, data replication, version control, update notification and production of citable metadata records. Implementation of these features has been driven by the requirements of reliable global access to over 1 PB of data and consistent citability of data and metadata. Central to the implementation is the concept of Atomic Datasets that are identifiable through a Data Reference Syntax (DRS). Atomic Datasets are immutable so that they can be replicated and tracked while maintaining data consistency. However, since occasional errors in data production and processing are inevitable, new versions can be published and users notified of these updates. As deprecated datasets may be the target of existing citations, they can remain visible in the system. Replication of Atomic Datasets is designed to improve regional access and provide fault tolerance. Several datanodes in the system are designated replicating nodes and hold replicas of a portion of the archive expected to be of broad interest to the community. Gateways provide a system-wide interface where users can track the version history and location of replicas to select the most appropriate location for download. In addition to meeting the immediate needs of CMIP5, this architecture provides a basis for the Earth System Modeling e-infrastructure being further developed within the EU FP7 IS-ENES project.
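
    The Data Reference Syntax idea can be sketched as a fixed-order, dotted identifier; the field order below follows the published CMIP5 DRS, while the example value and the Python helper are illustrative only:

        DRS_FIELDS = ["activity", "product", "institute", "model", "experiment",
                      "frequency", "realm", "mip_table", "ensemble", "version"]

        def parse_drs(dataset_id):
            """Split a dotted DRS-style dataset identifier into named fields."""
            parts = dataset_id.split(".")
            if len(parts) != len(DRS_FIELDS):
                raise ValueError("unexpected number of DRS components")
            return dict(zip(DRS_FIELDS, parts))

        rec = parse_drs("cmip5.output1.MOHC.HadGEM2-ES.rcp45.mon.atmos.Amon.r1i1p1.v20110101")
        # Publishing a corrected dataset changes only rec["version"], so replicas
        # and citations can track updates without renaming the underlying data.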

  9. New version: GRASP2K relativistic atomic structure package

    NASA Astrophysics Data System (ADS)

    Jönsson, P.; Gaigalas, G.; Bieroń, J.; Fischer, C. Froese; Grant, I. P.

    2013-09-01

    A revised version of GRASP2K [P. Jönsson, X. He, C. Froese Fischer, I.P. Grant, Comput. Phys. Commun. 177 (2007) 597] is presented. It supports earlier non-block and block versions of codes as well as a new block version in which the njgraf library module [A. Bar-Shalom, M. Klapisch, Comput. Phys. Commun. 50 (1988) 375] has been replaced by the librang angular package developed by Gaigalas based on the theory of [G. Gaigalas, Z.B. Rudzikas, C. Froese Fischer, J. Phys. B: At. Mol. Phys. 30 (1997) 3747, G. Gaigalas, S. Fritzsche, I.P. Grant, Comput. Phys. Commun. 139 (2001) 263]. Tests have shown that errors encountered by njgraf do not occur with the new angular package. The three versions are denoted v1, v2, and v3, respectively. In addition, in v3, the coefficients of fractional parentage have been extended to j=9/2, making calculations feasible for the lanthanides and actinides. Changes in v2 include minor improvements. For example, the new version of rci2 may be used to compute quantum electrodynamic (QED) corrections only from selected orbitals. In v3, a new program, jj2lsj, reports the percentage composition of the wave function in LSJ and the program rlevels has been modified to report the configuration state function (CSF) with the largest coefficient of an LSJ expansion. The bioscl2 and bioscl3 application programs have been modified to produce a file of transition data with one record for each transition in the same format as in ATSP2K [C. Froese Fischer, G. Tachiev, G. Gaigalas, M.R. Godefroid, Comput. Phys. Commun. 176 (2007) 559], which identifies each atomic state by the total energy and a label for the CSF with the largest expansion coefficient in LSJ intermediate coupling. All versions of the codes have been adapted for 64-bit computer architecture. Program Summary. Program title: GRASP2K, version 1_1 Catalogue identifier: ADZL_v1_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZL_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 730252 No. of bytes in distributed program, including test data, etc.: 14808872 Distribution format: tar.gz Programming language: Fortran. Computer: Intel Xeon, 2.66 GHz. Operating system: Suse, Ubuntu, and Debian Linux 64-bit. RAM: 500 MB or more. Classification: 2.1. Catalogue identifier of previous version: ADZL_v1_0 Journal reference of previous version: Comput. Phys. Comm. 177 (2007) 597 Does the new version supersede the previous version?: Yes Nature of problem: Prediction of atomic properties — atomic energy levels, oscillator strengths, radiative decay rates, hyperfine structure parameters, Landé gJ-factors, and specific mass shift parameters — using a multiconfiguration Dirac-Hartree-Fock approach. Solution method: The computational method is the same as in the previous GRASP2K [1] version except that for v3 codes the njgraf library module [2] for recoupling has been replaced by librang [3,4]. Reasons for new version: New angular libraries with improved performance are available. Also, methodology for transforming from jj- to LSJ-coupling has been developed. Summary of revisions: New angular libraries where the coefficients of fractional parentage have been extended to j=9/2, making calculations feasible for the lanthanides and actinides.
Inclusion of a new program jj2lsj, which reports the percentage composition of the wave function in LSJ. Transition programs have been modified to produce a file of transition data with one record for each transition in the same format as ATSP2K [C. Froese Fischer, G. Tachiev, G. Gaigalas and M.R. Godefroid, Comput. Phys. Commun. 176 (2007) 559], which identifies each atomic state by the total energy and a label for the CSF with the largest expansion coefficient in LSJ intermediate coupling. Updated to 64-bit architecture. A comprehensive user manual in PDF format for the program package has been added. Restrictions: The packing algorithm restricts the maximum number of orbitals to be ≤214. The tables of reduced coefficients of fractional parentage used in this version are limited to subshells with j≤9/2 [5]; occupied subshells with j>9/2 are, therefore, restricted to a maximum of two electrons. Some other parameters, such as the maximum number of subshells of a CSF outside a common set of closed shells, are determined by a parameter.def file that can be modified prior to compile time. Unusual features: The bioscl3 program reports transition data in the same format as in ATSP2K [6], and the data processing program tables of the latter package can be used. The tables program takes a name.lsj file, usually a concatenated file of all the .lsj transition files for a given atom or ion, and finds the energy structure of the levels and the multiplet transition arrays. The tables posted at the website http://atoms.vuse.vanderbilt.edu are examples of tables produced by the tables program. With the extension of coefficients of fractional parentage to j=9/2, calculations for the lanthanides and actinides become possible. Running time: CPU time required to execute test cases: 70.5 s.

  10. GRASP92: a package for large-scale relativistic atomic structure calculations

    NASA Astrophysics Data System (ADS)

    Parpia, F. A.; Froese Fischer, C.; Grant, I. P.

    2006-12-01

    Program summary. Title of program: GRASP92 Catalogue identifier: ADCU_v1_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADCU_v1_1 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: no Programming language used: Fortran Computer: IBM POWERstation 320H Operating system: IBM AIX 3.2.5+ RAM: 64M words No. of lines in distributed program, including test data, etc.: 65 224 No. of bytes in distributed program, including test data, etc.: 409 198 Distribution format: tar.gz Catalogue identifier of previous version: ADCU_v1_0 Journal reference of previous version: Comput. Phys. Comm. 94 (1996) 249 Does the new version supersede the previous version?: Yes Nature of problem: Prediction of atomic spectra—atomic energy levels, oscillator strengths, and radiative decay rates—using a 'fully relativistic' approach. Solution method: Atomic orbitals are assumed to be four-component spinor eigenstates of the angular momentum operator, j=l+s, and the parity operator Π=βπ. Configuration state functions (CSFs) are linear combinations of Slater determinants of atomic orbitals, and are simultaneous eigenfunctions of the atomic electronic angular momentum operator, J, and the atomic parity operator, P. Lists of CSFs are either explicitly prescribed by the user or generated from a set of reference CSFs, a set of subshells, and rules for deriving other CSFs from these. Approximate atomic state functions (ASFs) are linear combinations of CSFs. A variational functional may be constructed by combining expressions for the energies of one or more ASFs. Average level (AL) functionals are weighted sums of energies of all possible ASFs that may be constructed from a set of CSFs; the number of ASFs is then the same as the number, n, of CSFs. Optimal level (OL) functionals are weighted sums of energies of some subset of ASFs; the GRASP92 package is optimized for this latter class of functionals. The composition of an ASF in terms of CSFs sharing the same quantum numbers is determined using the configuration-interaction (CI) procedure that results upon varying the expansion coefficients to determine the extremum of a variational functional. Radial functions may be determined by numerically solving the multiconfiguration Dirac-Fock (MCDF) equations that result upon varying the orbital radial functions or some subset thereof so as to obtain an extremum of the variational functional. Radial wavefunctions may also be determined using a screened hydrogenic or Thomas-Fermi model, although these schemes generally provide initial estimates for MCDF self-consistent-field (SCF) calculations. Transition properties for pairs of ASFs are computed from matrix elements of multipole operators of the electromagnetic field. All matrix elements of CSFs are evaluated using the Racah algebra. Reasons for the new version: During recent studies using the general relativistic atomic structure package (GRASP92), several errors were found, some of which might have been present already in the earlier GRASP version (program ABJN_v1_0, Comput. Phys. Comm. 55 (1989) 425). These errors were reported and discussed by Froese Fischer, Gaigalas, and Ralchenko in a separate publication [C. Froese Fischer, G. Gaigalas, Y. Ralchenko, Comput. Phys. Comm. 175 (2006) 738-744].
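    The CI step described above (varying the expansion coefficients to make the variational functional stationary) reduces, in an orthonormal CSF basis, to an ordinary matrix eigenvalue problem. A toy numpy illustration of that step only: the matrix values are invented, and GRASP92 assembles the actual matrix elements via Racah algebra.

```python
import numpy as np

# Toy configuration-interaction step: the ASF expansion coefficients are
# the eigenvectors of the Hamiltonian matrix in an orthonormal CSF basis.
# Matrix values are illustrative placeholders, not real atomic data.
H = np.array([[-1.50, 0.05, 0.00],
              [ 0.05, -1.20, 0.10],
              [ 0.00,  0.10, -0.90]])

energies, coeffs = np.linalg.eigh(H)  # variational extrema of the functional
for E, c in zip(energies, coeffs.T):
    print(f"E = {E:.4f}  CSF composition = {np.round(c**2, 3)}")
```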

  11. Research in computer science

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1985-01-01

    Synopses are given for NASA supported work in computer science at the University of Virginia. Some areas of research include: error seeding as a testing method; knowledge representation for engineering design; analysis of faults in a multi-version software experiment; implementation of a parallel programming environment; two computer graphics systems for visualization of pressure distribution and convective density particles; task decomposition for multiple robot arms; vectorized incomplete conjugate gradient; and iterative methods for solving linear equations on the Flex/32.

  12. Multizone accretional evolution of planetesimal swarms

    NASA Technical Reports Server (NTRS)

    Spaute, D.; Davis, D. R.; Weidenschilling, S. J.

    1990-01-01

    The general features of a new numerical simulation of planetesimal accretion which models multiple heliocentric distance zones, together with a detailed model for the planetesimal size and orbital distribution in each zone, are described. A restricted version of this model which allows only a single heliocentric distance zone has been used to test the validity of the code by comparing with results from earlier authors when the same physical phenomena are included. Generally, very good agreement is found.

  13. Catalog of SAS-2 gamma-ray observations (Fichtel et al. 1990)

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1990-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The SAS-2 gamma-ray catalog contains fluxes measured with the high-energy gamma-ray telescope flown aboard the second NASA Small Astronomy Satellite. The objects measured include various types of galaxies, quasi-stellar objects, BL Lacertae objects, and pulsars. The catalog contains separate files for galaxies, pulsars, other objects, notes, and references.

  14. An atlas of stellar spectra between 2.00 and 2.45 micrometers (Arnaud, Gilmore, and Collier Cameron 1989)

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1990-01-01

    The machine-readable version of the atlas, as it is currently being distributed from the Astronomical Data Center, is described. The atlas represents a collection of spectra in the wavelength range 2.00 to 2.45 microns, with a resolution of approximately 0.02 micron. The sample of 73 stars includes a supergiant, giants, dwarfs, and subdwarfs with a chemical abundance range of about -2 to +0.5 dex.

  15. Integrated Baseline System (IBS). Version 1.03, System Management Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, J.R.; Bailey, S.; Bower, J.C.

    This IBS System Management Guide explains how to install or upgrade the Integrated Baseline System (IBS) software package. The IBS is an emergency management planning and analysis tool that was developed under the direction of the Federal Emergency Management Agency (FEMA). This guide includes detailed instructions for installing the IBS software package on a Digital Equipment Corporation (DEC) VAX computer from the IBS distribution tapes. The installation instructions include procedures for both first-time installations and upgrades to existing IBS installations. To ensure that the system manager has the background necessary for successful installation of the IBS package, this guide also includes information on IBS computer requirements, software organization, and the generation of IBS distribution tapes. When special utility programs are used during IBS installation and setups, this guide refers you to the IBS Utilities Guide for specific instructions. This guide also refers you to the IBS Data Management Guide for detailed descriptions of some IBS data files and structures. Any special requirements for installation are not documented here but should be included in a set of installation notes that come with the distribution tapes.

  16. The Evolution of the DARWIN System

    NASA Technical Reports Server (NTRS)

    Walton, Joan D.; Filman, Robert E.; Korsmeyer, David J.; Norvig, Peter (Technical Monitor)

    1999-01-01

    DARWIN is a web-based system for presenting the results of wind-tunnel testing and computational model analyses to aerospace designers. DARWIN captures the data, maintains the information, and manages derived knowledge (e.g., visualizations) of large quantities of aerospace data. In addition, it provides tools and an environment for distributed collaborative engineering. We are currently constructing the third version of the DARWIN software system. DARWIN's development history has, in some sense, tracked the development of web applications. The 1995 DARWIN reflected the latest web technologies--CGI scripts, Java applets and a three-layer architecture--available at that time. The 1997 version of DARWIN expanded on this base, making extensive use of a plethora of web technologies, including Java/JavaScript and Dynamic HTML. While more powerful, this multiplicity has proven to be a maintenance and development headache. The year 2000 version of DARWIN will provide a more stable and uniform foundation environment, composed primarily of Java mechanisms. In this paper, we discuss this evolution, comparing the strengths and weaknesses of the various architectural approaches and describing the lessons learned about building complex web applications.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enders, Alexander L.; Lousteau, Angela L.

    The Desktop Analysis Reporting Tool (DART) is a software package that allows users to easily view and analyze daily files that span long periods. DART gives users the capability to quickly determine the state of health of a radiation portal monitor (RPM), troubleshoot and diagnose problems, and view data in various time frames to perform trend analysis. In short, it converts the data strings written in the daily files into meaningful tables and plots. The standalone version of DART (“soloDART”) utilizes a database engine that is included with the application; no additional installations are necessary. There is also a networked version of DART (“polyDART”) that is designed to maximize the benefit of a centralized data repository while distributing the workload to individual desktop machines. This networked approach requires a more complex database manager, Structured Query Language (SQL) Server; however, SQL Server is not currently provided with DART. Regardless of which version is used, DART will import daily files from RPMs, store the relevant data in its database, and produce reports for status, trend analysis, and reporting purposes.
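    The core of what DART automates, turning the data strings in daily files into tables suitable for trend analysis, can be pictured with a small pandas sketch. The column layout below is purely hypothetical; real RPM daily files have their own record format, which DART's importer parses natively.

```python
import io
import pandas as pd

# Hypothetical daily-file excerpt, invented for illustration only.
daily_file = io.StringIO(
    "timestamp,lane,gamma_counts,neutron_counts\n"
    "2013-04-01 00:00:05,1,2450,3\n"
    "2013-04-01 00:00:10,1,2490,2\n"
    "2013-04-01 00:00:15,1,2370,4\n"
)

records = pd.read_csv(daily_file, parse_dates=["timestamp"])

# Trend analysis over a chosen time frame: resample to per-minute means,
# the kind of table a state-of-health report might plot over weeks.
trend = records.set_index("timestamp").resample("1min")[["gamma_counts"]].mean()
print(trend)
```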

  18. Seismic Canvas: Evolution as a Data Exploration and Analysis Tool

    NASA Astrophysics Data System (ADS)

    Kroeger, G. C.

    2015-12-01

    SeismicCanvas, originally developed as a prototype interactive waveform display and printing application for educational use, has evolved to include significant data exploration and analysis functionality. The most recent version supports data import from a variety of standard file formats, including SAC and mini-SEED, as well as search and download capabilities via IRIS/FDSN Web Services. Data processing tools now include removal of means and trends, interactive windowing, filtering, smoothing, tapering, and resampling. Waveforms can be displayed in a free-form canvas or as a record section based on angular or great circle distance, azimuth or back azimuth. Integrated tau-p code allows the calculation and display of theoretical phase arrivals from a variety of radial Earth models. Waveforms can be aligned by absolute time, event time, or picked or theoretical arrival times, and can be stacked after alignment. Interactive measurements include means, amplitudes, time delays, ray parameters and apparent velocities. Interactive picking of an arbitrary list of seismic phases is supported. Bode plots of amplitude and phase spectra and spectrograms can be created from multiple seismograms or selected windows of seismograms. Direct printing is implemented on all supported platforms along with output of high-resolution pdf files. With these added capabilities, the application is now being used as a data exploration tool for research. Coded in C++ and using the cross-platform Qt framework, the most recent version is available as a 64-bit application for Windows 7-10, Mac OS X 10.6-10.11, and most distributions of Linux, and as a 32-bit version for Windows XP and 7. With the latest improvements and refactoring of trace display classes, the 64-bit versions have been tested with over 250 million samples and remain responsive in interactive operations. The source code is available under an LGPLv3 license and both source and executables are available through the IRIS SeisCode repository.
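    For readers who want to reproduce the theoretical-arrival functionality outside the application, the same kind of tau-p computation is available in open-source libraries. A sketch using ObsPy's TauPyModel as a stand-in (SeismicCanvas integrates its own tau-p code; the depth, distance, and phase list here are arbitrary):

```python
from obspy.taup import TauPyModel

# iasp91 is one of the standard radial Earth models
model = TauPyModel(model="iasp91")
arrivals = model.get_travel_times(source_depth_in_km=15.0,
                                  distance_in_degree=60.0,
                                  phase_list=["P", "S", "PP"])
for arr in arrivals:
    # arr.ray_param is in s/rad; arr.time is seconds after origin time
    print(f"{arr.name:4s} {arr.time:8.1f} s  ray param = {arr.ray_param:.1f} s/rad")
```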

  19. tweezercalib 2.1: Faster version of MatLab package for precise calibration of optical tweezers

    NASA Astrophysics Data System (ADS)

    Hansen, Poul Martin; Tolic-Nørrelykke, Iva Marija; Flyvbjerg, Henrik; Berg-Sørensen, Kirstine

    2006-10-01

    New version program summary. Title of program: tweezercalib Catalogue identifier: ADTV_v2_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTV_v2_1 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: no No. of lines in distributed program, including test data, etc.: 134 188 No. of bytes in distributed program, including test data, etc.: 1 050 368 Distribution format: tar.gz Programming language: MatLab (Mathworks Inc.), standard license Computer: General computer running MatLab (Mathworks Inc.) Operating system: Windows 2000, Windows XP, Linux RAM: Of order four times the size of the data file Classification: 3, 4.14, 18, 23 Catalogue identifier of previous version: ADTV_v2_0 Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 518 Does the new version supersede the previous version?: yes Nature of problem: Calibrate optical tweezers with precision by fitting theory to the experimental power spectrum of the position of a bead doing Brownian motion in an incompressible fluid, possibly near a microscope cover slip, while trapped in optical tweezers. Thereby determine the spring constant of the optical trap and the conversion factor for arbitrary-units-to-nanometers for the detection system. The theoretical underpinnings of the procedure may be found in Ref. [3]. Solution method: Elimination of cross-talk between quadrant photo-diode output channels for positions (optional). Check that the distribution of recorded positions agrees with the Boltzmann distribution of a bead in a harmonic trap. Data compression and noise reduction by the blocking method applied to the power spectrum. Full accounting for hydrodynamic effects: frequency-dependent drag force and interaction with a nearby cover slip (optional). Full accounting for electronic filters (optional) and for "virtual filtering" caused by the detection system (optional). Full accounting for aliasing caused by the finite sampling rate (optional). Standard non-linear least-squares fitting with custom-written routines based on Refs. [1,2]. Statistical support for the fit is given, with several plots facilitating inspection of the consistency and quality of data and fit. Reasons for the new version: Recent progress in the field has provided a better approximation of the formula for the theoretical power spectrum, with corrections due to the frequency dependence of the motion and the distance to a nearby surface. Summary of revisions: The expression for the theoretical power spectrum when accounting for corrections to Stokes law, P(f), has been updated to agree with a better approximation of the theoretical spectrum, as discussed in Ref. [4]. The units of the kinematic viscosity applied in the program are now stated in the input window. Greek letters and exponents are inserted in the input window. The graphical output has improved: the figures now bear a meaningful title, and the four figures that test the quality of the fit are now combined in one figure with four parts. Restrictions: Data should be positions of a bead doing Brownian motion while held by optical tweezers. For high precision in the final results, data should be a time series measured over a long time, with a sufficiently high experimental sampling rate; the sampling rate should be well above the characteristic frequency of the trap, the so-called corner frequency. Thus, the sampling frequency should typically be larger than 10 kHz. The Fast Fourier Transform used works optimally when the time series contains 2^n data points, and long measurement time is obtained with n > 12-15. Finally, the optics should be set to ensure a harmonic trapping potential in the range of positions visited by the bead. The fitting procedure checks for a harmonic potential. Running time: seconds. References: [1] J. Nocedal, Y.x. Yuan, Combining trust region and line search techniques, Technical Report OTC 98/04, Optimization Technology Center, 1998. [2] W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes. The Art of Scientific Computing, Cambridge University Press, Cambridge, 1986. [3] (The theoretical underpinnings for the procedure) K. Berg-Sørensen and Henrik Flyvbjerg, Power spectrum analysis for optical tweezers, Rev. Sci. Instrum. 75 (2004) 594-612. [4] S.F. Tolic-Nørrelykke, et al., Calibration of optical tweezers with position detection in the back focal plane, arXiv:physics/0603037 v2, 2006.
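    At its core, the calibration fits the theoretical power spectrum of a trapped bead to the measured one. In the simplest approximation the spectrum is the Lorentzian P(f) = D / (2π²(f_c² + f²)), and tweezercalib's refinements add the hydrodynamic, filter, and aliasing corrections listed above. A minimal Python sketch of the uncorrected fit, with blocking, on synthetic data (illustrative only; not a port of the MatLab package):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, D, fc):
    """Uncorrected power spectrum of a trapped bead:
    P(f) = D / (2*pi^2*(fc^2 + f^2))."""
    return D / (2 * np.pi**2 * (fc**2 + f**2))

rng = np.random.default_rng(0)
f = np.linspace(10, 10_000, 2000)    # Hz, extending well above fc as required
true_D, true_fc = 0.4, 600.0         # illustrative values only
# Periodogram values scatter exponentially around the true spectrum
P = lorentzian(f, true_D, true_fc) * rng.exponential(1.0, f.size)

# Blocking (averaging neighbouring frequencies) before fitting, as in the text
nblock = 50
fb = f.reshape(-1, nblock).mean(axis=1)
Pb = P.reshape(-1, nblock).mean(axis=1)

(D_fit, fc_fit), _ = curve_fit(lorentzian, fb, Pb, p0=(1.0, 100.0))
print(f"corner frequency ~ {fc_fit:.0f} Hz, D ~ {D_fit:.3f}")
# The trap stiffness follows from fc: kappa = 2*pi*gamma*fc (gamma = drag)
```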

  20. Modeling runoff and erosion risk in a small steep cultivated watershed using different data sources: from on-site measurements to farmers' perceptions

    NASA Astrophysics Data System (ADS)

    Auvet, B.; Lidon, B.; Kartiwa, B.; Le Bissonnais, Y.; Poussin, J.-C.

    2015-09-01

    This paper presents an approach to model runoff and erosion risk in a context of data scarcity, whereas the majority of available models require large quantities of physical data that are frequently not accessible. To overcome this problem, our approach uses different sources of data, particularly on agricultural practices (tillage and land cover) and farmers' perceptions of runoff and erosion. The model was developed on a small (5 ha) cultivated watershed characterized by extreme conditions (slopes of up to 55 %, extreme rainfall events) on the Merapi volcano in Indonesia. Runoff was modelled using two versions of STREAM. First, a lumped version was used to determine the global parameters of the watershed. Second, a distributed version used three parameters for the production of runoff (slope, land cover and roughness), a precise DEM, and the position of waterways for runoff distribution. This information was derived from field observations and interviews with farmers. Both surface runoff models accurately reproduced runoff at the outlet. However, the distributed model (Nash-Sutcliffe = 0.94) was more accurate than the adjusted lumped model (N-S = 0.85), especially for the smallest and biggest runoff events, and produced accurate spatial distribution of runoff production and concentration. Different types of erosion processes (landslides, linear inter-ridge erosion, linear erosion in main waterways) were modelled as a combination of a hazard map (the spatial distribution of runoff/infiltration volume provided by the distributed model), and a susceptibility map combining slope, land cover and tillage, derived from in situ observations and interviews with farmers. Each erosion risk map gives a spatial representation of the different erosion processes including risk intensities and frequencies that were validated by the farmers and by in situ observations. Maps of erosion risk confirmed the impact of the concentration of runoff, the high susceptibility of long steep slopes, and revealed the critical role of tillage direction. Calibrating and validating models using in situ measurements, observations and farmers' perceptions made it possible to represent runoff and erosion risk despite the initial scarcity of hydrological data. Even if the models mainly provided orders of magnitude and qualitative information, they significantly improved our understanding of the watershed dynamics. In addition, the information produced by such models is easy for farmers to use to manage runoff and erosion by using appropriate agricultural practices.
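    The Nash-Sutcliffe efficiency used above to compare the lumped and distributed model versions is a one-line formula; a small Python sketch with invented runoff values (not the study's data):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency:
    NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    NS = 1 is a perfect fit; NS <= 0 means the model predicts no better
    than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Illustrative runoff volumes at the outlet (m^3), invented for the example
obs = np.array([12.0, 85.0, 40.0, 3.0, 150.0])
sim = np.array([10.0, 80.0, 48.0, 5.0, 140.0])
print(f"NS = {nash_sutcliffe(obs, sim):.2f}")
```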

  1. The EOSDIS Version 0 Distributed Active Archive Center for physical oceanography and air-sea interaction

    NASA Technical Reports Server (NTRS)

    Hilland, Jeffrey E.; Collins, Donald J.; Nichols, David A.

    1991-01-01

    The Distributed Active Archive Center (DAAC) at the Jet Propulsion Laboratory will support scientists specializing in physical oceanography and air-sea interaction. As part of the NASA Earth Observing System Data and Information System Version 0, the DAAC will build on existing capabilities to provide services for data product generation, archiving, distribution, and management of information about data. To meet scientists' immediate needs for data, existing data sets from missions such as Seasat, Geosat, the NOAA series of satellites, and the Global Positioning Satellite system will be distributed to investigators upon request. In 1992, ocean topography, wave and surface roughness data from the Topex/Poseidon radar altimeter mission will be archived and distributed. New data products will be derived from Topex/Poseidon and other sensor systems based on recommendations of the science community. In 1995, ocean wind field measurements from the NASA Scatterometer will be supported by the DAAC.

  2. AN OVERVIEW OF EPANET VERSION 3.0

    EPA Science Inventory

    EPANET is a widely used public domain software package for modeling the hydraulic and water quality behavior of water distribution systems over an extended period of time. The last major update to the code was version 2.0 released in 2000 (Rossman, 2000). Since that time there ha...

  3. IDC Re-Engineering Phase 2 Glossary Version 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, Christopher J.; Harris, James M.

    2017-01-01

    This document contains the glossary of terms used for the IDC Re-Engineering Phase 2 project. This version was created for Iteration E3. The IDC applies automatic processing methods in order to produce, archive, and distribute standard IDC products on behalf of all States Parties.

  4. The Effect of Complementary and Alternative Medicine Claims on Risk Adjustment

    PubMed Central

    Lind, Bonnie K.; Abrams, Chad; Lafferty, William E.; Diehr, Paula K.; Grembowski, David E.

    2006-01-01

    Objective To assess how the inclusion of diagnoses from complementary and alternative medicine (CAM) providers affects measures of morbidity burden and expectations of health care resource use for insured patients. Methods Claims data from Washington State were used to create two versions of a case-mix index. One version included claims from all provider types; the second version omitted claims from CAM providers who are covered under commercial insurance. Expected resource use was also calculated. The distribution of expected and actual resource use was then compared for the two indices. Results Inclusion of CAM providers shifts many patients into higher morbidity categories; 54% of 61,914 CAM users had higher risk scores in the index which included CAM providers. When expected resource use categories were defined based on all providers, CAM users in the highest morbidity category had average (± s.d.) annual expenditures of $6661 (± $13,863). This was less than those in the highest morbidity category when CAM providers were not included in the index ($8562 ± $16,354), and was also lower than the highest morbidity patients who did not use any CAM services ($8419 ± $18,885). Conclusions Inclusion of services from CAM providers under third party payment increases risk scores for their patients but expectations of costs for this group are lower than expected had costs been estimated based only on services from traditional providers. Additional work is needed to validate risk adjustment indices when adding services from provider groups not included in the development of the index. PMID:17122711

  5. 1979 SIGNUM meeting on numerical ordinary differential equations. [University Inn, Champaign, IL, April 3-5, 1979]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skeel, R. D.

    1979-04-01

    This report gives a summary of the papers presented at the meeting. It consists of all working papers distributed at the conference and all working papers received too late for distribution. In addition, abstracts and/or summaries are included where practical for those talks and workshop sessions that did not generate papers. This document should be a useful reference to very current research in ODEs. These papers are preliminary versions of papers that will be submitted for publication. One paper in this volume has been cited in ERA, and can be located by reference to the entry CONF-790403-- in the Report Number Index.

  6. ARC2D - EFFICIENT SOLUTION METHODS FOR THE NAVIER-STOKES EQUATIONS (DEC RISC ULTRIX VERSION)

    NASA Technical Reports Server (NTRS)

    Biyabani, S. R.

    1994-01-01

    ARC2D is a computational fluid dynamics program developed at the NASA Ames Research Center specifically for airfoil computations. The program uses implicit finite-difference techniques to solve two-dimensional Euler equations and thin layer Navier-Stokes equations. It is based on the Beam and Warming implicit approximate factorization algorithm in generalized coordinates. The methods are either time accurate or accelerated non-time accurate steady state schemes. The evolution of the solution through time is physically realistic; good solution accuracy is dependent on mesh spacing and boundary conditions. The mathematical development of ARC2D begins with the strong conservation law form of the two-dimensional Navier-Stokes equations in Cartesian coordinates, which admits shock capturing. The Navier-Stokes equations can be transformed from Cartesian coordinates to generalized curvilinear coordinates in a manner that permits one computational code to serve a wide variety of physical geometries and grid systems. ARC2D includes an algebraic mixing length model to approximate the effect of turbulence. In cases of high Reynolds number viscous flows, the thin layer approximation can be applied. ARC2D allows for a variety of solutions to stability boundaries, such as those encountered in flows with shocks. The user has considerable flexibility in assigning geometry and developing grid patterns, as well as in assigning boundary conditions. However, the ARC2D model is most appropriate for attached and mildly separated boundary layers; no attempt is made to model wake regions and widely separated flows. The techniques have been successfully used for a variety of inviscid and viscous flowfield calculations. The Cray version of ARC2D is written in FORTRAN 77 for use on Cray series computers and requires approximately 5Mb memory. The program is fully vectorized. The tape includes variations for the COS and UNICOS operating systems. Also included is a sample routine for CONVEX computers to emulate Cray system time calls, which should be easy to modify for other machines as well. The standard distribution medium for this version is a 9-track 1600 BPI ASCII Card Image format magnetic tape. The Cray version was developed in 1987. The IBM ES/3090 version is an IBM port of the Cray version. It is written in IBM VS FORTRAN and has the capability of executing in both vector and parallel modes on the MVS/XA operating system and in vector mode on the VM/XA operating system. Various options of the IBM VS FORTRAN compiler provide new features for the ES/3090 version, including 64-bit arithmetic and up to 2 GB of virtual addressability. The IBM ES/3090 version is available only as a 9-track, 1600 BPI IBM IEBCOPY format magnetic tape. The IBM ES/3090 version was developed in 1989. The DEC RISC ULTRIX version is a DEC port of the Cray version. It is written in FORTRAN 77 for RISC-based Digital Equipment platforms. The memory requirement is approximately 7Mb of main memory. It is available in UNIX tar format on TK50 tape cartridge. The port to DEC RISC ULTRIX was done in 1990. COS and UNICOS are trademarks and Cray is a registered trademark of Cray Research, Inc. IBM, ES/3090, VS FORTRAN, MVS/XA, and VM/XA are registered trademarks of International Business Machines. DEC and ULTRIX are registered trademarks of Digital Equipment Corporation.
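    The Beam and Warming approximate factorization at the heart of ARC2D splits the two-dimensional implicit operator into a sequence of one-dimensional sweeps, each of which reduces to a (block-)tridiagonal solve. The scalar Thomas-algorithm sketch below illustrates one such sweep; this is our simplification, as ARC2D solves block systems in generalized coordinates.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal).
    Each 1D sweep of an implicitly factored scheme reduces to solves of
    this form (block-tridiagonal in the full equations; scalar here)."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: one implicit diffusion-like sweep (I - r*delta_xx) x = d, r = 0.5
n, r = 6, 0.5
a = np.full(n, -r); a[0] = 0.0     # sub-diagonal (no entry in first row)
c = np.full(n, -r); c[-1] = 0.0    # super-diagonal (no entry in last row)
b = np.full(n, 1.0 + 2.0 * r)      # main diagonal
d = np.ones(n)                     # right-hand side
print(thomas_solve(a, b, c, d))
```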

  7. ARC2D - EFFICIENT SOLUTION METHODS FOR THE NAVIER-STOKES EQUATIONS (CRAY VERSION)

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.

    1994-01-01

    ARC2D is a computational fluid dynamics program developed at the NASA Ames Research Center specifically for airfoil computations. The program uses implicit finite-difference techniques to solve two-dimensional Euler equations and thin layer Navier-Stokes equations. It is based on the Beam and Warming implicit approximate factorization algorithm in generalized coordinates. The methods are either time accurate or accelerated non-time accurate steady state schemes. The evolution of the solution through time is physically realistic; good solution accuracy is dependent on mesh spacing and boundary conditions. The mathematical development of ARC2D begins with the strong conservation law form of the two-dimensional Navier-Stokes equations in Cartesian coordinates, which admits shock capturing. The Navier-Stokes equations can be transformed from Cartesian coordinates to generalized curvilinear coordinates in a manner that permits one computational code to serve a wide variety of physical geometries and grid systems. ARC2D includes an algebraic mixing length model to approximate the effect of turbulence. In cases of high Reynolds number viscous flows, the thin layer approximation can be applied. ARC2D allows for a variety of solutions to stability boundaries, such as those encountered in flows with shocks. The user has considerable flexibility in assigning geometry and developing grid patterns, as well as in assigning boundary conditions. However, the ARC2D model is most appropriate for attached and mildly separated boundary layers; no attempt is made to model wake regions and widely separated flows. The techniques have been successfully used for a variety of inviscid and viscous flowfield calculations. The Cray version of ARC2D is written in FORTRAN 77 for use on Cray series computers and requires approximately 5Mb memory. The program is fully vectorized. The tape includes variations for the COS and UNICOS operating systems. Also included is a sample routine for CONVEX computers to emulate Cray system time calls, which should be easy to modify for other machines as well. The standard distribution medium for this version is a 9-track 1600 BPI ASCII Card Image format magnetic tape. The Cray version was developed in 1987. The IBM ES/3090 version is an IBM port of the Cray version. It is written in IBM VS FORTRAN and has the capability of executing in both vector and parallel modes on the MVS/XA operating system and in vector mode on the VM/XA operating system. Various options of the IBM VS FORTRAN compiler provide new features for the ES/3090 version, including 64-bit arithmetic and up to 2 GB of virtual addressability. The IBM ES/3090 version is available only as a 9-track, 1600 BPI IBM IEBCOPY format magnetic tape. The IBM ES/3090 version was developed in 1989. The DEC RISC ULTRIX version is a DEC port of the Cray version. It is written in FORTRAN 77 for RISC-based Digital Equipment platforms. The memory requirement is approximately 7Mb of main memory. It is available in UNIX tar format on TK50 tape cartridge. The port to DEC RISC ULTRIX was done in 1990. COS and UNICOS are trademarks and Cray is a registered trademark of Cray Research, Inc. IBM, ES/3090, VS FORTRAN, MVS/XA, and VM/XA are registered trademarks of International Business Machines. DEC and ULTRIX are registered trademarks of Digital Equipment Corporation.

  8. CPsuperH2.3: An updated tool for phenomenology in the MSSM with explicit CP violation

    NASA Astrophysics Data System (ADS)

    Lee, J. S.; Carena, M.; Ellis, J.; Pilaftsis, A.; Wagner, C. E. M.

    2013-04-01

    We describe the Fortran code CPsuperH2.3, which incorporates the following updates compared with its predecessor CPsuperH2.0. It implements improved calculations of the Higgs-boson masses and mixing including stau contributions and finite threshold effects on the tau-lepton Yukawa coupling. It incorporates the LEP limits on the processes e+e- → HiZ, HiHj and the CMS limits on Hi→τ¯τ obtained from 4.6 fb^-1 of data at a center-of-mass energy of 7 TeV. It also includes the decay mode Hi→Zγ and the Schiff-moment contributions to the electric dipole moments of Mercury and Radium 225, with several calculational options for the case of Mercury. These additions make CPsuperH2.3 a suitable tool for analyzing possible CP-violating effects in the MSSM in the era of the LHC and a new generation of EDM experiments.

  9. CPsuperH2.3: an Updated Tool for Phenomenology in the MSSM with Explicit CP Violation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, J.S.; Carena, M.; Ellis, J.

    2013-04-01

    We describe the Fortran code CPsuperH2.3, which incorporates the following updates compared with its predecessor CPsuperH2.0. It implements improved calculations of the Higgs-boson masses and mixing including stau contributions and finite threshold effects on the tau-lepton Yukawa coupling. It incorporates the LEP limits on the processes e^+e^- -> H_iZ, H_iH_j and the CMS limits on H_i -> τ¯τ obtained from 4.6 fb^-1 of data at a center-of-mass energy of 7 TeV. It also includes the decay mode H_i -> Zγ and the Schiff-moment contributions to the electric dipole moments of Mercury and Radium 225, with several calculational options for the case of Mercury. These additions make CPsuperH2.3 a suitable tool for analyzing possible CP-violating effects in the MSSM in the era of the LHC and a new generation of EDM experiments. Program summary: Program title: CPsuperH2.3 Catalogue identifier: ADSR_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSR_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 24058 No. of bytes in distributed program, including test data, etc.: 158721 Distribution format: tar.gz Programming language: Fortran77. Computer: PC running under Linux and computers in Unix environment. Operating system: Linux. RAM: 32 MB Classification: 11.1. Does the new version supersede the previous version?: Yes Catalogue identifier of previous version: ADSR_v2_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 312 Nature of problem: The calculations of the mass spectrum, decay widths and branching ratios of the neutral and charged Higgs bosons in the Minimal Supersymmetric Standard Model with explicit CP violation have been improved. The program is based on renormalization-group-improved diagrammatic calculations that include dominant higher-order logarithmic and threshold corrections, b-quark and τ-lepton Yukawa-coupling resummation effects and improved treatment of Higgs-boson pole-mass shifts. The couplings of the Higgs bosons to the Standard Model gauge bosons and fermions, to their supersymmetric partners and all the trilinear and quartic Higgs-boson self-couplings are also calculated. Also included are a full treatment of the 4x4 (2x2) neutral (charged) Higgs propagator matrix together with the center-of-mass dependent Higgs-boson couplings to gluons and photons, and an integrated treatment of several B-meson observables. The new implementations include the EDMs of Thallium, neutron, Mercury, Deuteron, Radium, and muon, as well as the anomalous magnetic moment of the muon, (g_μ-2), the top-quark decays, improved calculations of the Higgs-boson masses and mixing including stau contributions, the LEP limits, and the CMS limits on H_i -> ττ¯. It also implements the decay mode H_i -> Zγ and includes the corresponding Standard Model branching ratios of the three neutral Higgs bosons in the array GAMBRN(IM,IWB = 2,IH). Solution method: One-dimensional numerical integration for several Higgs-decay modes and EDMs, iterative treatment of the threshold corrections and Higgs-boson pole masses, and the numerical diagonalization of the neutralino mass matrix. Reasons for new version: Mainly to provide the full calculations of the EDMs of Thallium, neutron, Mercury, Deuteron, Radium, and muon as well as (g_μ-2), improved calculations of the Higgs-boson masses and mixing including stau contributions, the LEP limits, the CMS limits on H_i -> ττ¯, the top-quark decays, the H_i -> Zγ decay, and the corresponding Standard Model branching ratios of the three neutral Higgs bosons. Summary of revisions: Full calculations of the EDMs of Thallium, neutron, Mercury, Deuteron, Radium, and muon as well as (g_μ-2). Improved treatment of Higgs-boson masses and mixing including stau contributions. The LEP limits. The CMS limits on H_i -> ττ¯. The top-quark decays. The H_i -> Zγ decay. The corresponding Standard Model branching ratios of the three neutral Higgs bosons. Running time: Less than 1.0 s.

  10. DAMT - DISTRIBUTED APPLICATION MONITOR TOOL (HP9000 VERSION)

    NASA Technical Reports Server (NTRS)

    Keith, B.

    1994-01-01

    Typical network monitors measure the status of host computers and data traffic among hosts. A monitor to collect statistics about individual processes must be unobtrusive and possess the ability to locate and monitor processes, locate and monitor circuits between processes, and report traffic back to the user through a single application program interface (API). DAMT, the Distributed Application Monitor Tool, is a distributed application program that collects network statistics and makes them available to the user. This distributed application has one component (i.e., process) on each host the user wishes to monitor, as well as a set of components at a centralized location. DAMT provides the first known implementation of a network monitor at the application layer of abstraction. Potential users only need to know the process names of the distributed application they wish to monitor. The tool locates the processes and the circuit between them, and reports any traffic between them at a user-defined rate. The tool operates without the cooperation of the processes it monitors. Application processes require no changes to be monitored by this tool, nor does DAMT require the UNIX kernel to be recompiled. The tool obtains process and circuit information by accessing the operating system's existing process database. This database contains all information available about currently executing processes. Expanding the information monitored by the tool can be done by utilizing more information from the process database. Traffic on a circuit between processes is monitored by a low-level LAN analyzer that has access to the raw network data. The tool also provides features such as dynamic event reporting and virtual path routing. A reusable object approach was used in the design of DAMT. The tool has four main components: the Virtual Path Switcher, the Central Monitor Complex, the Remote Monitor, and the LAN Analyzer. All of DAMT's components are independent, asynchronously executing processes. The independent processes communicate with each other via UNIX sockets through a Virtual Path router, or Switcher. The Switcher maintains a routing table showing the host of each component process of the tool, eliminating the need for each process to do so. The Central Monitor Complex provides the single application program interface (API) to the user and coordinates the activities of DAMT. The Central Monitor Complex is itself divided into independent objects that perform its functions. The component objects are the Central Monitor, the Process Locator, the Circuit Locator, and the Traffic Reporter. Each of these objects is an independent, asynchronously executing process. User requests to the tool are interpreted by the Central Monitor. The Process Locator identifies whether a named process is running on a monitored host and which host that is. The circuit between any two processes in the distributed application is identified using the Circuit Locator. The Traffic Reporter handles communication with the LAN Analyzer and accumulates traffic updates until it must send a traffic report to the user. The Remote Monitor process is replicated on each monitored host. It serves the Central Monitor Complex processes with application process information. The Remote Monitor process provides access to operating system information about currently executing processes. It allows the Process Locator to find processes and the Circuit Locator to identify circuits between processes.
It also provides lifetime information about currently monitored processes. The LAN Analyzer consists of two processes. Low-level monitoring is handled by the Sniffer. The Sniffer analyzes the raw data on a single, physical LAN. It responds to commands from the Analyzer process, which maintains the interface to the Traffic Reporter and keeps track of which circuits to monitor. DAMT is written in C-language for HP-9000 series computers running HP-UX and Sun 3 and 4 series computers running SunOS. DAMT requires 1Mb of disk space and 4Mb of RAM for execution. This package requires MIT's X Window System, Version 11 Revision 4, with OSF/Motif 1.1. The HP-9000 version (GSC-13589) includes sample HP-9000/375 and HP-9000/730 executables which were compiled under HP-UX, and the Sun version (GSC-13559) includes sample Sun3 and Sun4 executables compiled under SunOS. The standard distribution medium for the HP version of DAMT is a .25 inch HP pre-formatted streaming magnetic tape cartridge in UNIX tar format. It is also available on a 4mm magnetic tape in UNIX tar format. The standard distribution medium for the Sun version of DAMT is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. DAMT was developed in 1992.
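    The Process Locator / Circuit Locator pattern, which interrogates the operating system's existing process database instead of instrumenting the application, is easy to sketch with modern tooling. The Python analogue below uses psutil as a stand-in (DAMT itself is C code reading the UNIX process database directly; the target process name is hypothetical, and listing another user's connections may require elevated privileges):

```python
import psutil  # cross-platform access to the OS process tables

def locate_process(name):
    """Analogue of DAMT's Process Locator: find a process by name using
    the operating system's process database, with no cooperation from
    the monitored process."""
    for proc in psutil.process_iter(["pid", "name"]):
        if proc.info["name"] == name:
            return proc
    return None

def locate_circuits(proc):
    """Analogue of the Circuit Locator: list the TCP endpoints the
    process currently holds open."""
    return [(c.laddr, c.raddr, c.status) for c in proc.connections("tcp")]

proc = locate_process("sshd")  # hypothetical target process name
if proc is not None:
    for laddr, raddr, status in locate_circuits(proc):
        print(laddr, "->", raddr, status)
```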

  11. A Case Study on Multiple-Choice Testing in Anatomical Sciences

    ERIC Educational Resources Information Center

    Golda, Stephanie DuPont

    2011-01-01

    Objective testing techniques, such as multiple-choice examinations, are a widely accepted method of assessment in gross anatomy. In order to deter cheating on these types of examinations, instructors often design several versions of an examination to distribute. These versions usually involve the rearrangement of questions and their corresponding…

  12. Validation of a Japanese version of the Scoliosis Research Society-22 Patient Questionnaire among idiopathic scoliosis patients in Japan.

    PubMed

    Hashimoto, Hideki; Sase, Takeshi; Arai, Yasuhisa; Maruyama, Toru; Isobe, Keijirou; Shouno, Yasuhiro

    2007-02-15

    A cross-sectional observational study to determine the response distribution, internal consistency, and construct, concurrent, and discriminative validities of the Scoliosis Research Society-22 (SRS-22) Patient Questionnaire translated into Japanese, as compared with the other language versions. To validate the Japanese version of the SRS-22. The SRS-22 has been translated into several languages, but not yet into Japanese. The Japanese SRS-22 and the Medical Outcomes Study Short Form 36 were simultaneously administered to 114 adolescent idiopathic scoliosis patients. Exploratory factor analysis revealed a 4-factor structure, though several items did not load as theoretically expected. The originally constructed Japanese SRS-22 subscales and the English version showed similar response distributions. Internal consistency was fair but lower than that of the English version. The concurrent validity of the translated version, except for the self-image subscale, was supported using the Medical Outcomes Study Short Form 36 subscales as a reference. The function scale differed significantly by curve angle magnitude and treatment status. The self-image score was highest in patients under observation when the curve angle was < 40 degrees, while postsurgical patients had the highest scores when the angle was ≥ 40 degrees. The Japanese SRS-22 is valid and may be useful for clinical evaluation of Japanese scoliosis patients, though the self-image subscale may need further assessment.

  13. Evolution Models with Conditional Mutation Rates: Strange Plateaus in Population Distribution

    NASA Astrophysics Data System (ADS)

    Saakian, David B.

    2017-08-01

    Cancer is related to clonal evolution with strongly nonlinear, collective behavior. Here we investigate a recently suggested, slightly generalized version of the popular Crow-Kimura evolution model, obtained by simply assuming a conditional mutation rate. We investigated the steady-state solution and found a highly intriguing plateau in the distribution. There are selective and nonselective phases, with a rather narrow plateau in the distribution at the peak in the first phase, and a wide plateau spanning many Hamming classes (a collection of genomes with the same number of mutations from the reference genome) in the second phase. We analytically solved for the steady-state distribution in the selective and nonselective phases, calculating the widths of the plateaus. Numerically, we also found an intermediate phase with several plateaus in the steady-state distribution, related to large finite-genome-length corrections. We expect that the newly observed phenomena should exist in other versions of evolution dynamics when the parameters of the model are conditioned on the population distribution.
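    To see how such plateaus can be probed numerically, the sketch below performs a minimal Euler integration of the class-based Crow-Kimura equations with a class-dependent ("conditional") mutation rate. The fitness landscape, mutation rates, and genome length are illustrative choices of ours, not the paper's parameters.

```python
import numpy as np

# Crow-Kimura (parallel) model on Hamming classes k = 0..L:
# dP_k/dt = (f_k - <f> - mu_k) P_k + mutation flux from neighbouring classes.
L = 100
k = np.arange(L + 1)
f = 2.0 * np.exp(-k / 10.0)          # illustrative fitness landscape
mu = np.where(k < 30, 1.0, 0.3)      # illustrative class-conditional mutation rate

P = np.zeros(L + 1)
P[0] = 1.0                           # start at the reference genome
dt = 0.02
for _ in range(50_000):
    mean_f = np.dot(f, P)
    dP = (f - mean_f - mu) * P
    dP[1:] += mu[:-1] * (L - k[:-1]) / L * P[:-1]   # forward mutations k -> k+1
    dP[:-1] += mu[1:] * k[1:] / L * P[1:]           # back mutations k+1 -> k
    P = np.clip(P + dt * dP, 0.0, None)
    P /= P.sum()                     # keep the distribution normalized

# A plateau shows up as a flat stretch of P over many Hamming classes
print(np.round(P[:20], 4))
```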

  14. Automated evaluation of matrix elements between contracted wavefunctions: A Mathematica version of the FRODO program

    NASA Astrophysics Data System (ADS)

    Angeli, C.; Cimiraglia, R.

    2013-02-01

    A symbolic program performing the Formal Reduction of Density Operators (FRODO), formerly developed in the MuPAD computer algebra system with the purpose of evaluating the matrix elements of the electronic Hamiltonian between internally contracted functions in a complete active space (CAS) scheme, has been rewritten in Mathematica. New version: A program summary. Program title: FRODO Catalogue identifier: ADVY_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVY_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3878 No. of bytes in distributed program, including test data, etc.: 170729 Distribution format: tar.gz Programming language: Mathematica Computer: Any computer on which the Mathematica computer algebra system can be installed Operating system: Linux Classification: 5 Catalogue identifier of previous version: ADVY_v1_0 Journal reference of previous version: Comput. Phys. Comm. 171 (2005) 63 Does the new version supersede the previous version?: No Nature of problem: In order to improve on the CAS-SCF wavefunction one can resort to multireference perturbation theory or configuration interaction based on internally contracted functions (ICFs), which are obtained by application of the excitation operators to the reference CAS-SCF wavefunction. The previous formulation of such matrix elements, in the MuPAD computer algebra system, has been rewritten using Mathematica. Solution method: The method adopted consists in successively eliminating all occurrences of inactive orbital indices (core and virtual) from the products of excitation operators which appear in the definition of the ICFs and in the electronic Hamiltonian expressed in the second quantization formalism. Reasons for new version: Some years ago we published in this journal a couple of papers [1,2], hereafter to be referred to as papers I and II, respectively dedicated to the automated evaluation of the matrix elements of the molecular electronic Hamiltonian between internally contracted functions [3] (ICFs). In paper II the program FRODO (after Formal Reduction Of Density Operators) was presented with the purpose of providing working formulas for each occurrence of the ICFs. The original FRODO program was written in the MuPAD computer algebra system [4] and was actively used in our group for the generation of the matrix elements to be employed in the third-order n-electron valence state perturbation theory (NEVPT) [5-8] as well as in the internally contracted configuration interaction (IC-CI) [9]. We present a new version of the program FRODO written in the Mathematica system [10]. The reason for the rewriting of the program lies in the fact that, on the one hand, MuPAD does not seem to be any longer available as a stand-alone system and, on the other hand, Mathematica, due to its ubiquitousness, appears to be increasingly the computer algebra system most widely used nowadays. Restrictions: The program is limited to no more than doubly excited ICFs. Running time: The examples described in the Readme file take a few seconds to run. References: [1] C. Angeli, R. Cimiraglia, Comp. Phys. Comm. 166 (2005) 53. [2] C. Angeli, R. Cimiraglia, Comp. Phys. Comm. 171 (2005) 63. [3] H.-J. Werner, P. J. Knowles, Adv. Chem. Phys. 89 (1988) 5803. [4] B. Fuchssteiner, W.
Oevel: http://www.mupad.de Mupad research group, university of Paderborn. Mupad version 2.5.3 for Linux. [5] C. Angeli, R. Cimiraglia, S. Evangelisti, T. Leininger, J.-P. Malrieu, J. Chem. Phys. 114 (2001) 10252. [6] C. Angeli, R. Cimiraglia, J.-P. Malrieu, J. Chem. Phys. 117 (2002) 9138. [7] C. Angeli, B. Bories, A. Cavallini, R. Cimiraglia, J. Chem. Phys. 124 (2006) 054108. [8] C. Angeli, M. Pastore, R. Cimiraglia, Theor. Chem. Acc. 117 (2007) 743. [9] C. Angeli, R. Cimiraglia, Mol. Phys. in press, DOI:10.1080/00268976.2012.689872 [10] http://www.wolfram.com/Mathematica. Mathematica version 8 for Linux.
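    The operator reduction FRODO performs, normal-ordering products of second-quantized excitation operators so that inactive indices collapse into Kronecker deltas, has a loose open-source analogue in SymPy's second-quantization module. The sketch below is ours, not a port of FRODO, and the index choices are arbitrary:

```python
from sympy import symbols
from sympy.physics.secondquant import F, Fd, wicks

# Treat all four indices as virtual ("above Fermi") orbitals, so the only
# nonzero contraction is between an annihilator and a following creator.
p, q, r, s = symbols("p q r s", above_fermi=True)

# Wick's theorem rewrites the operator string as normal-ordered products
# plus contraction (Kronecker delta) terms, the step that eliminates
# explicit orbital indices from the working equations.
expr = Fd(p) * F(q) * Fd(r) * F(s)
print(wicks(expr))
```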

  15. An upgraded version of the generator BCVEGPY2.0 for hadronic production of Bc meson and its excited states

    NASA Astrophysics Data System (ADS)

    Chang, Chao-Hsi; Wang, Jian-Xiong; Wu, Xing-Gang

    2006-11-01

    An upgraded version of the package BCVEGPY2.0: [C.-H. Chang, J.-X. Wang, X.-G. Wu, Comput. Phys. Commun. 174 (2006) 241] is presented, which works under LINUX system and is named as BCVEGPY2.1. With the version and a GNU C compiler additionally, users may simulate the B-events in various experimental environments very conveniently. It has been manipulated in better modularity and code reusability (less cross communication among various modules) than BCVEGPY2.0 has. Furthermore, in the upgraded version a special execution is arranged as that the GNU command make compiles a requested code with the help of a master makefile in main code directory, and then builds an executable file with the default name run. Finally, this paper may also be considered as an erratum, i.e., typo errors in BCVEGPY2.0 and corrections accordingly have been listed. New version program (BCVEGPY2.1) summaryTitle of program: BCVEGPY2.1 Catalogue identifier: ADTJ_v2_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADTJ_v2_1 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Reference to original program: BCVEGPY2.0 Reference in CPC: Comput. Phys. Commun. 174 (2006) 241 Does the new version supersede the old program: No Computer: Any LINUX based on PC with FORTRAN 77 or FORTRAN 90 and GNU C compiler as well Operating systems: LINUX Programming language used: FORTRAN 77/90 Memory required to execute with typical data: About 2.0 MB No. of lines in distributed program, including test data, etc.: 31 521 No. of bytes in distributed program, including test data, etc.: 1 310 179 Distribution format: tar.gz Nature of physical problem: Hadronic production of B meson itself and its excited states Method of solution: The code with option can generate weighted and unweighted events. An interface to PYTHIA is provided to meet the needs of jets hadronization in the production. Restrictions on the complexity of the problem: The hadronic production of (cb¯)-quarkonium in S-wave and P-wave states via the mechanism of gluon-gluon fusion are given by the so-called 'complete calculation' approach. Reasons for new version: Responding to the feedback from users, we rearrange the program in a convenient way and then it can be easily adopted by the users to do the simulations according to their own experimental environment (e.g. detector acceptances and experimental cuts). We have paid many efforts to rearrange the program into several modules with less cross communication among the modules, the main program is slimmed down and all the further actions are decoupled from the main program and can be easily called for various purposes. Typical running time: The typical running time is machine and user-parameters dependent. Typically, for production of the S-wave (cb¯)-quarkonium, when IDWTUP = 1, it takes about 20 hour on a 1.8 GHz Intel P4-processor machine to generate 1000 events; however, when IDWTUP = 3, to generate 10 6 events it takes about 40 minutes only. Of the production, the time for the P-wave (cb¯)-quarkonium will take almost two times longer than that for its S-wave quarkonium. Summary of the changes (improvements): (1) The structure and organization of the program have been changed a lot. The new version package BCVEGPY2.1 has been divided into several modules with less cross communication among the modules (some old version source files are divided into several parts for the purpose). 
The main program is slimmed down and all further actions are decoupled from the main program so that they can easily be called for various applications. All of the Fortran code is organized in the main code directory, named bcvegpy2.1, which contains the main program, all of its prerequisite files and subsidiary folders (subdirectories of the main code directory). The method for setting the parameters is the same as in the previous versions [C.-H. Chang, C. Driouich, P. Eerola, X.-G. Wu, Comput. Phys. Commun. 159 (2004) 192, hep-ph/0309120] [1].
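
The weighted/unweighted distinction behind the IDWTUP timings above is a generic Monte Carlo device: unit-weight output is obtained by accept-reject against the maximum weight, at the cost of many discarded trials. A minimal Python sketch under that assumption, with a toy weight function standing in for the differential cross section (none of these names belong to BCVEGPY itself):

```python
import random

def toy_weight():
    """Toy stand-in for the event weight (differential cross section
    times phase-space Jacobian) that a generator like BCVEGPY computes."""
    x = random.random()          # a mock phase-space point
    return x * (1.0 - x), x      # (weight, event)

def unweighted_events(n, w_max):
    """Accept-reject unweighting: keep an event with probability w/w_max.
    Unit-weight output costs many trials, which is why unweighted
    (IDWTUP = 1) runs are far slower than weighted (IDWTUP = 3) runs."""
    events = []
    while len(events) < n:
        w, ev = toy_weight()
        if random.random() < w / w_max:
            events.append(ev)
    return events

print(len(unweighted_events(1000, w_max=0.25)))  # 0.25 = max of x(1-x)
```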

  16. Low-cost small scale parabolic trough collector design for manufacturing and deployment in Africa

    NASA Astrophysics Data System (ADS)

    Orosz, Matthew; Mathaha, Paul; Tsiu, Anadola; Taele, B. M.; Mabea, Lengeta; Ntee, Marcel; Khakanyo, Makoanyane; Teker, Tamer; Stephens, Jordan; Mueller, Amy

    2016-05-01

Concentrating Solar Power is expanding its deployment on the African subcontinent, highlighting the importance of efforts to indigenize manufacturing of this technology to increase local content and therefore the local economic benefits of these projects. In this study a design for manufacturing (DFM) exercise was conducted to create a locally produced parabolic trough collector (the G4 PTC). All parts were sourced or fabricated at a production facility in Lesotho, and several examples of the design were prototyped and tested with collaborators in the Government of Lesotho's Appropriate Technology Services division and the National University of Lesotho. Optical and thermal performance was simulated and experimentally validated, and pedagogical pre-commercial versions of the PTC have been distributed to higher education partners in Lesotho and Europe. The cost to produce the PTC is 180 USD/m² for a locally manufactured heat collection element (HCE) capable of sustaining 250 °C operation at ~65% efficiency. A version with an imported evacuated HCE can operate at 300 °C with 70% efficiency. Economically relevant applications for this locally produced PTC include industrial process heat and distributed generation scenarios where cogeneration is required.

  17. Environmental Information Management For Data Discovery and Access System

    NASA Astrophysics Data System (ADS)

    Giriprakash, P.

    2011-01-01

Mercury is a federated metadata harvesting, search and retrieval tool based on both open source software and software developed at Oak Ridge National Laboratory. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. A major new version of Mercury was developed during 2007 and released in early 2008. This new version provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, support for RSS delivery of search results, and ready customization to meet the needs of the multiple projects which use Mercury. For the end users, Mercury provides a single portal to very quickly search for data and information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data.
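
Mercury's harvest-then-index architecture can be caricatured in a few lines. A sketch assuming hypothetical provider records and a plain in-memory inverted index; the real system's harvesters, metadata formats, and search engine are of course much richer:

```python
from collections import defaultdict

# Hypothetical harvested metadata records; a real harvester would pull
# these periodically from distributed provider servers.
records = [
    {"id": "prov1/001", "title": "soil moisture grids", "bbox": (-110, 30, -100, 40)},
    {"id": "prov2/042", "title": "soil carbon cores",   "bbox": (10, 45, 20, 55)},
]

index = defaultdict(set)          # term -> record ids (the central index)
store = {}
for rec in records:
    store[rec["id"]] = rec
    for term in rec["title"].split():
        index[term].add(rec["id"])

def search(term, bbox=None):
    """Fielded + spatial search against the central index only; the data
    themselves stay with the providers."""
    hits = [store[i] for i in index.get(term, ())]
    if bbox:
        w, s, e, n = bbox
        hits = [r for r in hits
                if r["bbox"][0] >= w and r["bbox"][1] >= s
                and r["bbox"][2] <= e and r["bbox"][3] <= n]
    return hits

print(search("soil", bbox=(-120, 20, -90, 50)))
```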

  18. Design notes for the next generation persistent object manager for CAP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isely, M.; Fischler, M.; Galli, M.

    1995-05-01

The CAP query system software at Fermilab has several major components, including SQS (for managing the query), the retrieval system (for fetching auxiliary data), and the query software itself. The central query software in particular is essentially a modified version of the `ptool` product created at UIC (University of Illinois at Chicago) as part of the PASS project under Bob Grossman. The original UIC version was designed for use in a single-user non-distributed Unix environment. The Fermi modifications were an attempt to permit multi-user access to a data set distributed over a set of storage nodes. (The hardware is an IBM SP-x system - a cluster of AIX POWER2 nodes with an IBM-proprietary high speed switch interconnect.) Since the implementation work of the Fermi-ized ptool, the CAP members have learned quite a bit about the nature of queries and where the current performance bottlenecks exist. This has led them to design a persistent object manager that will overcome these problems. For backwards compatibility with ptool, the ptool persistent object API will largely be retained, but the implementation will be entirely different.

  19. Stochastic generators of multi-site daily temperature: comparison of performances in various applications

    NASA Astrophysics Data System (ADS)

    Evin, Guillaume; Favre, Anne-Catherine; Hingray, Benoit

    2018-02-01

    We present a multi-site stochastic model for the generation of average daily temperature, which includes a flexible parametric distribution and a multivariate autoregressive process. Different versions of this model are applied to a set of 26 stations located in Switzerland. The importance of specific statistical characteristics of the model (seasonality, marginal distributions of standardized temperature, spatial and temporal dependence) is discussed. In particular, the proposed marginal distribution is shown to improve the reproduction of extreme temperatures (minima and maxima). We also demonstrate that the frequency and duration of cold spells and heat waves are dramatically underestimated when the autocorrelation of temperature is not taken into account in the model. An adequate representation of these characteristics can be crucial depending on the field of application, and we discuss potential implications in different contexts (agriculture, forestry, hydrology, human health).
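
For readers who want the gist of the model structure, here is a toy two-site version: a smooth seasonal mean plus standardized anomalies driven by a lag-1 vector autoregression with spatially correlated innovations. All parameter values are illustrative, not fitted to the Swiss stations:

```python
import numpy as np

rng = np.random.default_rng(0)
ndays, nsites = 365, 2
A = np.array([[0.7, 0.1],            # illustrative AR(1) coefficient matrix
              [0.1, 0.7]])
L = np.linalg.cholesky([[1.0, 0.6],  # spatially correlated innovations
                        [0.6, 1.0]])

t = np.arange(ndays)
season = 10.0 + 8.0 * np.sin(2 * np.pi * (t - 100) / 365.25)  # mean cycle

z = np.zeros((ndays, nsites))        # standardized temperature anomalies
for d in range(1, ndays):
    z[d] = A @ z[d - 1] + L @ rng.standard_normal(nsites)

temps = season[:, None] + 3.0 * z    # re-scale and add seasonality back
print(temps.mean(axis=0), np.corrcoef(temps.T)[0, 1])
```

The autoregressive term is exactly what the paper credits with restoring realistic cold-spell and heat-wave persistence: dropping the A @ z[d - 1] term leaves the correct margins but destroys the temporal clustering.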

  20. Global Distribution and Variability of Surface Skin and Surface Air Temperatures as Depicted in the AIRS Version-6 Data Set

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Lee, Jae N.; Iredell, Lena

    2014-01-01

In this presentation, we will briefly describe the significant improvements made in the AIRS Version-6 retrieval algorithm, especially as to how they affect retrieved surface skin and surface air temperatures. The global distribution of seasonal 1:30 AM and 1:30 PM local time 12-year climatologies of Ts,a will be presented for the first time. We will also present the spatial distribution of short-term 12-year anomaly trends of Ts,a at 1:30 AM and 1:30 PM, as well as the spatial distribution of temporal correlations of Ts,a with the El Niño Index. It will be shown that there are significant differences between the behavior of the 1:30 AM and 1:30 PM Ts,a anomalies in some arid land areas.
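
The climatology/anomaly/trend chain used in such analyses follows a standard recipe that is easy to sketch: average each calendar month over the record, subtract to get anomalies, then fit a linear slope. A minimal sketch on synthetic monthly data (not AIRS files):

```python
import numpy as np

rng = np.random.default_rng(1)
nyears = 12
months = np.arange(nyears * 12)
# Synthetic monthly surface temperature: cycle + weak trend + noise
ts = 288 + 10 * np.cos(2 * np.pi * months / 12) \
     + 0.002 * months + 0.3 * rng.standard_normal(months.size)

clim = ts.reshape(nyears, 12).mean(axis=0)    # 12-year monthly climatology
anom = ts - np.tile(clim, nyears)             # anomaly time series

slope = np.polyfit(months, anom, 1)[0] * 12   # trend in K per year
print(f"anomaly trend: {slope:.4f} K/yr")
```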

  1. A brief introduction to PYTHIA 8.1

    NASA Astrophysics Data System (ADS)

    Sjöstrand, Torbjörn; Mrenna, Stephen; Skands, Peter

    2008-06-01

The PYTHIA program is a standard tool for the generation of high-energy collisions, comprising a coherent set of physics models for the evolution from a few-body hard process to a complex multihadronic final state. It contains a library of hard processes and models for initial- and final-state parton showers, multiple parton-parton interactions, beam remnants, string fragmentation and particle decays. It also has a set of utilities and interfaces to external programs. While previous versions were written in Fortran, PYTHIA 8 represents a complete rewrite in C++. The current release is the first main one after this transition, and does not yet in every respect replace the old code. On the other hand, it does contain some new physics aspects that should make it an attractive option, especially for LHC physics studies. Program summary: Program title: PYTHIA 8.1 Catalogue identifier: ACTU_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ACTU_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL version 2 No. of lines in distributed program, including test data, etc.: 176 981 No. of bytes in distributed program, including test data, etc.: 2 411 876 Distribution format: tar.gz Programming language: C++ Computer: Commodity PCs Operating system: Linux; should also work on other systems RAM: 8 megabytes Classification: 11.2 Does the new version supersede the previous version?: Yes, partly Nature of problem: High-energy collisions between elementary particles normally give rise to complex final states, with large multiplicities of hadrons, leptons, photons and neutrinos. The relation between these final states and the underlying physics description is not a simple one, for two main reasons. Firstly, we do not even in principle have a complete understanding of the physics. Secondly, any analytical approach is made intractable by the large multiplicities. Solution method: Complete events are generated by Monte Carlo methods. The complexity is mastered by a subdivision of the full problem into a set of simpler separate tasks. All main aspects of the events are simulated, such as hard-process selection, initial- and final-state radiation, beam remnants, fragmentation, decays, and so on. Therefore events should be directly comparable with experimentally observable ones. The programs can be used to extract physics from comparisons with existing data, or to study physics at future experiments. Reasons for new version: Improved and expanded physics models, transition from Fortran to C++. Summary of revisions: New user interface, transverse-momentum-ordered showers, interleaving with multiple interactions, and much more. Restrictions: Depends on the problem studied. Running time: 10-1000 events per second, depending on process studied. References: [1] T. Sjöstrand, P. Edén, C. Friberg, L. Lönnblad, G. Miu, S. Mrenna, E. Norrbin, Comput. Phys. Comm. 135 (2001) 238.
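
For orientation, using the generator follows a configure-initialize-loop pattern. A minimal sketch assuming the optional pythia8 Python bindings are available (the native interface of the release described here is C++):

```python
import pythia8  # optional Python bindings to the C++ library

pythia = pythia8.Pythia()
pythia.readString("Beams:eCM = 14000.")       # LHC-like pp collisions
pythia.readString("HardQCD:all = on")         # enable hard QCD processes
pythia.readString("PhaseSpace:pTHatMin = 20.")
pythia.init()

ncharged = 0
nev = 100
for _ in range(nev):
    if not pythia.next():                     # generate one event
        continue
    for i in range(pythia.event.size()):      # loop over the event record
        p = pythia.event[i]
        if p.isFinal() and p.isCharged():
            ncharged += 1
print("mean charged multiplicity:", ncharged / nev)
```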

  2. Spring dehydration in the Antarctic stratospheric vortex observed by HALOE

    NASA Technical Reports Server (NTRS)

    Pierce, R. Bradley; Grose, William L.; Russell, James M., III; Tuck, Adrian F.; Swinbank, Richard; O'Neill, Alan

    1994-01-01

    The distribution of dehydrated air in the middle and lower stratosphere during the 1992 Southern Hemisphere spring is investigated using Halogen Occultation Experiment (HALOE) observations and trajectory techniques. Comparisons between previously published Version 9 and the improved Version 16 retrievals on the 700-K isentropic surface show very slight (0.05 ppmv) increases in Version 16 CH4 relative to Version 9 within the polar vortex. Version 16 H2O mixing ratios show a reduction of 0.5 ppmv relative to Version 9 within the polar night jet and a reduction of nearly 1.0 ppmv in middle latitudes when compared to Version 9. The version 16 HALOE retrievals show low mixing ratios of total hydrogen (2CH4 + H2O) within the polar vortex on both 700 and 425 K isentropic surfaces relative to typical middle-stratospheric 2CH4 + H2O mixing ratios. The low 2CH4 + H2O mixing ratios are associated with dehydration. Slight reductions in total hydrogen, relative to typical middle-stratospheric values, are found at these levels throughout the Southern Hemisphere during this period. Trajectory calculations show that middle-latitude air masses are composed of a mixture of air from within the polar night jet and air from middle latitudes. A strong kinematic barrier to large-scale exchange is found on the poleward flank of the polar night jet at 700 K. A much weaker kinematic barrier is found at 425 K. The impact of the finite tangent pathlength of the HALOE measurements is investigated using an idealized tracer distribution. This experiment suggests that HALOE should be able to resolve the kinematic barrier, if it exists.

  3. Axially deformed solution of the Skyrme-Hartree-Fock-Bogoliubov equations using the transformed harmonic oscillator basis (II) HFBTHO v2.00d: A new version of the program

    NASA Astrophysics Data System (ADS)

    Stoitsov, M. V.; Schunck, N.; Kortelainen, M.; Michel, N.; Nam, H.; Olsen, E.; Sarich, J.; Wild, S.

    2013-06-01

We describe the new version 2.00d of the code HFBTHO that solves the nuclear Skyrme-Hartree-Fock (HF) or Skyrme-Hartree-Fock-Bogoliubov (HFB) problem by using the cylindrical transformed deformed harmonic oscillator basis. In the new version, we have implemented the following features: (i) the modified Broyden method for non-linear problems, (ii) optional breaking of reflection symmetry, (iii) calculation of axial multipole moments, (iv) finite temperature formalism for the HFB method, (v) linear constraint method based on the approximation of the Random Phase Approximation (RPA) matrix for multi-constraint calculations, (vi) blocking of quasi-particles in the Equal Filling Approximation (EFA), (vii) framework for generalized energy density with arbitrary density-dependences, and (viii) shared memory parallelism via OpenMP pragmas. Program summary: Program title: HFBTHO v2.00d Catalog identifier: ADUI_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUI_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 167228 No. of bytes in distributed program, including test data, etc.: 2672156 Distribution format: tar.gz Programming language: FORTRAN-95. Computer: Intel Pentium-III, Intel Xeon, AMD-Athlon, AMD-Opteron, Cray XT5, Cray XE6. Operating system: UNIX, LINUX, WindowsXP. RAM: 200 Mwords Word size: 8 bits Classification: 17.22. Does the new version supersede the previous version?: Yes Catalog identifier of previous version: ADUI_v1_0 Journal reference of previous version: Comput. Phys. Comm. 167 (2005) 43 Nature of problem: The solution of self-consistent mean-field equations for weakly-bound paired nuclei requires a correct description of the asymptotic properties of nuclear quasi-particle wave functions. In the present implementation, this is achieved by using the single-particle wave functions of the transformed harmonic oscillator, which allows for an accurate description of deformation effects and pairing correlations in nuclei arbitrarily close to the particle drip lines. Solution method: The program uses the axial Transformed Harmonic Oscillator (THO) single-particle basis to expand quasi-particle wave functions. It iteratively diagonalizes the Hartree-Fock-Bogoliubov Hamiltonian based on generalized Skyrme-like energy densities and zero-range pairing interactions until a self-consistent solution is found. A previous version of the program was presented in: M.V. Stoitsov, J. Dobaczewski, W. Nazarewicz, P. Ring, Comput. Phys. Commun. 167 (2005) 43-63. Reasons for new version: Version 2.00d of HFBTHO provides a number of new options such as the optional breaking of reflection symmetry, the calculation of axial multipole moments, the finite temperature formalism for the HFB method, optimized multi-constraint calculations, the treatment of odd-even and odd-odd nuclei in the blocking approximation, and the framework for generalized energy density with arbitrary density-dependences. It is also the first version of HFBTHO to contain threading capabilities.
Summary of revisions: The modified Broyden method has been implemented; optional breaking of reflection symmetry has been implemented; the calculation of all axial multipole moments up to λ=8 has been implemented; the finite temperature formalism for the HFB method has been implemented; the linear constraint method based on the approximation of the Random Phase Approximation (RPA) matrix for multi-constraint calculations has been implemented; the blocking of quasi-particles in the Equal Filling Approximation (EFA) has been implemented; the framework for generalized energy density functionals with arbitrary density-dependence has been implemented; shared memory parallelism via OpenMP pragmas has been implemented. Restrictions: Axial- and time-reversal symmetries are assumed. Unusual features: The user must have access to the LAPACK subroutines DSYEVD, DSYTRF and DSYTRI, and their dependences, which compute eigenvalues and eigenfunctions of real symmetric matrices, the LAPACK subroutines DGETRI and DGETRF, which invert arbitrary real matrices, and the BLAS routines DCOPY, DSCAL, DGEMM and DGEMV for double-precision linear algebra (or provide another set of subroutines that can perform such tasks). The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/. Running time: Highly variable, as it depends on the nucleus, the size of the basis, the requested accuracy, the requested configuration, the compiler and libraries, and the hardware architecture. An order of magnitude would be a few seconds for ground-state configurations in small bases N≈8-12, to a few minutes in very deformed configurations of a heavy nucleus with a large basis N>20.
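
The LAPACK dependency above (DSYEVD and friends) is ordinary dense symmetric eigensolving. A small sketch that calls the same divide-and-conquer kernel through SciPy on a toy symmetric matrix; the matrix is illustrative, not an HFB Hamiltonian:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
a = rng.standard_normal((6, 6))
h = (a + a.T) / 2                 # toy real symmetric matrix

# driver='evd' selects the divide-and-conquer routine (DSYEVD),
# one of the LAPACK kernels HFBTHO requires.
vals, vecs = eigh(h, driver="evd")
print(vals)

# Self-consistency check: H v = lambda v for each eigenpair
assert np.allclose(h @ vecs, vecs * vals)
```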

  4. DESI-Detection of early-season invasives (software-installation manual and user's guide version 1.0)

    USGS Publications Warehouse

    Kokaly, Raymond F.

    2011-01-01

    This report describes a software system for detecting early-season invasive plant species, such as cheatgrass. The report includes instructions for installing the software and serves as a user's guide in processing Landsat satellite remote sensing data to map the distributions of cheatgrass and other early-season invasive plants. The software was developed for application to the semi-arid regions of southern Utah; however, the detection parameters can be altered by the user for application to other areas.

  5. CALNPS: Computer Analysis Language Naval Postgraduate School Version

    DTIC Science & Technology

    1989-06-01

The graphics capabilities were expanded to include hard copy options using the Plot10 and Disspla graphics libraries. The display options are now available and the user now has the capability to plot curves from data files from within the CALNPS domain. As CALNPS is a very large program

  6. xPerm: fast index canonicalization for tensor computer algebra

    NASA Astrophysics Data System (ADS)

    Martín-García, José M.

    2008-10-01

We present a very fast implementation of the Butler-Portugal algorithm for index canonicalization with respect to permutation symmetries. It is called xPerm, and has been written as a combination of a Mathematica package and a C subroutine. The latter performs the most demanding parts of the computations and can be linked from any other program or computer algebra system. We demonstrate with tests and timings the effectively polynomial performance of the Butler-Portugal algorithm with respect to the number of indices, though we also show a case in which it is exponential. Our implementation handles generic tensorial expressions with several dozen indices in hundredths of a second, or one hundred indices in a few seconds, clearly outperforming all other current canonicalizers. The code has already been under intensive testing for several years and has been essential in recent investigations in large-scale tensor computer algebra. Program summary: Program title: xPerm Catalogue identifier: AEBH_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 93 582 No. of bytes in distributed program, including test data, etc.: 1 537 832 Distribution format: tar.gz Programming language: C and Mathematica (version 5.0 or higher) Computer: Any computer running C and Mathematica (version 5.0 or higher) Operating system: Linux, Unix, Windows XP, MacOS RAM: 20 Mbyte Word size: 64 or 32 bits Classification: 1.5, 5 Nature of problem: Canonicalization of indexed expressions with respect to permutation symmetries. Solution method: The Butler-Portugal algorithm. Restrictions: Multiterm symmetries are not considered. Running time: A few seconds with generic expressions of up to 100 indices. The xPermDoc.nb notebook supplied with the distribution takes approximately one and a half hours to execute in full.
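
Index canonicalization can be illustrated in a drastically simplified special case: a single totally (anti)symmetric slot group, where the canonical form is just the sorted index list together with a permutation sign. A toy sketch; the Butler-Portugal algorithm handles far more general symmetry groups and dummy-index renaming:

```python
def canonicalize(indices, antisymmetric=False):
    """Sort the index list into canonical (lexicographic) order and
    track the sign of the permutation for antisymmetric slots."""
    sign = 1
    idx = list(indices)
    for i in range(len(idx)):            # bubble sort to count swaps
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                if antisymmetric:
                    sign = -sign
    if antisymmetric and len(set(idx)) < len(idx):
        return 0, idx                    # repeated index kills the term
    return sign, idx

print(canonicalize(["c", "a", "b"], antisymmetric=True))  # (1, ['a','b','c'])
```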

  7. Constructing an AIRS Climatology for Data Visualization and Analysis to Serve the Climate Science and Application Communities

    NASA Technical Reports Server (NTRS)

    Ding, Feng; Keim, Elaine; Hearty, Thomas J.; Wei, Jennifer; Savtchenko, Andrey; Theobald, Michael; Vollmer, Bruce

    2016-01-01

The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is the home of processing, archiving, and distribution services for NASA sounders: the present Aqua AIRS mission and the succeeding SNPP CrIS mission. The AIRS mission is entering its 15th year of global observations of the atmospheric state, including temperature and humidity profiles, outgoing longwave radiation, cloud properties, and trace gases. The GES DISC, in collaboration with the AIRS Project, released products from the version 6 algorithm in early 2013. Giovanni, a Web-based application developed by the GES DISC, provides a simple and intuitive way to visualize, analyze, and access vast amounts of Earth science remote sensing data without having to download the data. The most important variables from the version 6 AIRS product are available in Giovanni. We are developing a climatology product using 14 years of AIRS retrievals. The study can be a good start for a long-term climatology from the NASA sounders: AIRS and the succeeding CrIS. This presentation will show the impacts of different aggregation methods on the climatology product. The climatology can serve the climate science and application communities in data visualization and analysis, which will be demonstrated using a variety of functions in version 4 Giovanni. The highlights of these functions include user-defined monthly and seasonal climatologies, interannual seasonal time series, and anomaly analysis.

  8. Constructing an AIRS Climatology for Data Visualization and Analysis to Serve the Climate Science and Application Communities

    NASA Astrophysics Data System (ADS)

    Ding, F.; Keim, E.; Hearty, T. J., III; Wei, J. C.; Savtchenko, A.; Theobald, M.; Vollmer, B.

    2016-12-01

The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is the home of processing, archiving, and distribution services for NASA sounders: the present Aqua AIRS mission and the succeeding SNPP CrIS mission. The AIRS mission is entering its 15th year of global observations of the atmospheric state, including temperature and humidity profiles, outgoing longwave radiation, cloud properties, and trace gases. The GES DISC, in collaboration with the AIRS Project, released products from the version 6 algorithm in early 2013. Giovanni, a Web-based application developed by the GES DISC, provides a simple and intuitive way to visualize, analyze, and access vast amounts of Earth science remote sensing data without having to download the data. The most important variables from the version 6 AIRS product are available in Giovanni. We are developing a climatology product using 14 years of AIRS retrievals. The study can be a good start for a long-term climatology from the NASA sounders: AIRS and the succeeding CrIS. This presentation will show the impacts of different aggregation methods on the climatology product. The climatology can serve the climate science and application communities in data visualization and analysis, which will be demonstrated using a variety of functions in version 4 Giovanni. The highlights of these functions include user-defined monthly and seasonal climatologies, interannual seasonal time series, and anomaly analysis.

  9. MC-TESTER v. 1.23: A universal tool for comparisons of Monte Carlo predictions for particle decays in high energy physics

    NASA Astrophysics Data System (ADS)

Davidson, N.; Golonka, P.; Przedziński, T.; Wąs, Z.

    2011-03-01

Theoretical predictions in high energy physics are routinely provided in the form of Monte Carlo generators. Comparisons of predictions from different programs and/or different initialization set-ups are often necessary. MC-TESTER can be used for such tests of decays of intermediate states (particles or resonances) in a semi-automated way. Since 2002 new functionalities have been introduced into the package. In particular, it now works with the HepMC event record, the standard for C++ programs. The complete set-up for benchmarking the interfaces, such as the interface between τ-lepton production and decay, including QED bremsstrahlung effects, is shown. The example is chosen to illustrate the new options introduced into the program. From the technical perspective, our paper documents software updates and supplements previous documentation. As in the past, our test consists of two steps. Distinct Monte Carlo programs are run separately; events with decays of a chosen particle are searched for, and information is stored by MC-TESTER. Then, at the analysis step, information from a pair of runs may be compared and represented in the form of tables and plots. Updates introduced in the program up to version 1.24.4 are also documented. In particular, new configuration scripts and a script to combine results from a multitude of runs into a single information file to be used in the analysis step are explained. Program summary: Program title: MC-TESTER, version 1.23 and version 1.24.4 Catalog identifier: ADSM_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSM_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 250 548 No. of bytes in distributed program, including test data, etc.: 4 290 610 Distribution format: tar.gz Programming language: C++, FORTRAN77 Tested and compiled with: gcc 3.4.6, 4.2.4 and 4.3.2 with g77/gfortran Computer: Tested on various platforms Operating system: Tested on operating systems: Linux SLC 4.6 and SLC 5, Fedora 8, Ubuntu 8.2, etc. Classification: 11.9 External routines: HepMC (https://savannah.cern.ch/projects/hepmc/), PYTHIA8 (http://home.thep.lu.se/~torbjorn/Pythia.html), LaTeX (http://www.latex-project.org/) Catalog identifier of previous version: ADSM_v1_0 Journal reference of previous version: Comput. Phys. Comm. 157 (2004) 39 Does the new version supersede the previous version?: Yes Nature of problem: The decays of individual particles are well defined modules of a typical Monte Carlo program chain in high energy physics. A fast, semi-automatic way of comparing results from different programs is often desirable for the development of new programs, in order to check the correctness of installations or for discussion of uncertainties. Solution method: A typical HEP Monte Carlo program stores the generated events in event records such as HepMC, HEPEVT or PYJETS. MC-TESTER scans, event by event, the contents of the record and searches for the decays of the particle under study. The list of the found decay modes is successively incremented, and histograms of all invariant masses which can be calculated from the momenta of the particle decay products are defined and filled. The outputs from the two runs of distinct programs can be later compared.
A booklet of comparisons is created: for every decay channel, all histograms present in the two outputs are plotted, and a parameter quantifying the shape difference is calculated. Its maximum over every decay channel is printed in the summary table. Reasons for new version: An interface for the HepMC event record is introduced. A setup for benchmarking the interfaces, such as τ-lepton production and decay, including QED bremsstrahlung effects, is introduced as well. This required significant changes in the algorithm. As a consequence, a new version of the code was introduced. Restrictions: Only the first 200 decay channels found will initialize histograms, and if the multiplicity of decay products in a given channel is larger than 7, histograms will not be created for that channel. Additional comments: New features: HepMC interface, use of lists in the definition of histograms and decay channels, filters for decay products or secondary decays to be omitted, bug fixes, extended flexibility in the representation of program output, installation configuration scripts, merging of multiple output files from separate generations. Running time: Varies substantially with the analyzed decay particle, but generally the speed estimate for the old version remains valid. On a PC/Linux with 2.0 GHz processors, MC-TESTER increases the run time of the τ-lepton Monte Carlo program TAUOLA by 4.0 seconds for every 100 000 analyzed events (generation itself takes 26 seconds). The analysis step takes 13 seconds; LATEX processing takes an additional 10 seconds. Generation-step runs may be executed simultaneously on multiprocessor machines.
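
The core bookkeeping step, histogramming every invariant mass computable from a decay's product momenta, reduces to Lorentz-vector sums over subsets. A minimal sketch with toy four-momenta (event-record scanning and the booklet generation are omitted):

```python
import itertools
import math

def inv_mass(momenta):
    """Invariant mass of a set of four-momenta (E, px, py, pz)."""
    E, px, py, pz = (sum(c) for c in zip(*momenta))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Toy decay products of one analyzed decay (GeV); values are illustrative
products = [(1.0, 0.2, 0.1, 0.9), (0.8, -0.3, 0.2, 0.6), (0.5, 0.1, -0.2, 0.4)]

# Like MC-TESTER: one histogram entry per subset of >= 2 decay products
for r in range(2, len(products) + 1):
    for combo in itertools.combinations(products, r):
        print(r, round(inv_mass(list(combo)), 3))
```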

  10. A new version of the CADNA library for estimating round-off error propagation in Fortran programs

    NASA Astrophysics Data System (ADS)

    Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc

    2010-11-01

The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors. New version program summary: Program title: CADNA Catalogue identifier: AEAT_v1_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 28 488 No. of bytes in distributed program, including test data, etc.: 463 778 Distribution format: tar.gz Programming language: Fortran (NOTE: A C++ version of this program is available in the Library as AEGQ_v1_0) Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM Operating system: LINUX, UNIX Classification: 6.5 Catalogue identifier of previous version: AEAT_v1_0 Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933 Does the new version supersede the previous version?: Yes Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: On 64-bit processors, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, on which the random rounding mode is based. Therefore a particular definition of mathematical functions for stochastic arguments has been included in the CADNA library to enable its use with the GNU Fortran compiler on 64-bit processors. Summary of revisions: If CADNA is used on a 64-bit processor with the GNU Fortran compiler, mathematical functions are computed with rounding to the nearest; otherwise they are computed with the random rounding mode. It must be pointed out that knowledge of the accuracy of the stochastic argument of a mathematical function is never lost. Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated.
Furthermore array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf which shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The source code, which is located in the src directory, consists of one assembly language file (cadna_rounding.s) and eighteen Fortran language files. cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the Fortran compiler used. This assembly file contains routines which are frequently called in the CADNA Fortran files to change the rounding mode. The Fortran language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
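
The underlying idea of Discrete Stochastic Arithmetic, run the computation a few times under randomized rounding and keep only the digits on which the samples agree, can be imitated crudely with random relative perturbations. A toy sketch; CADNA itself switches the hardware rounding mode, which this emulation does not:

```python
import math
import random
import statistics

def noisy(x, ulp=1e-16):
    """Crude stand-in for a random rounding mode: perturb by +/- 1 ulp."""
    return x * (1.0 + random.choice((-1.0, 1.0)) * ulp)

def sample_run():
    """A long accumulation whose round-off the perturbed samples expose."""
    s = 0.0
    for k in range(1, 2000):
        s = noisy(s + noisy(1.0 / k) - noisy(1.0 / (k + 1)))
    return s

runs = [sample_run() for _ in range(3)]     # DSA typically uses 3 samples
mean, spread = statistics.mean(runs), statistics.stdev(runs)
digits = math.floor(math.log10(abs(mean) / spread)) if spread else 15
print(f"result ~ {mean:.15f}, ~{digits} significant digits")
```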

  11. Permutational distribution of the log-rank statistic under random censorship with applications to carcinogenicity assays.

    PubMed

    Heimann, G; Neuhaus, G

    1998-03-01

    In the random censorship model, the log-rank test is often used for comparing a control group with different dose groups. If the number of tumors is small, so-called exact methods are often applied for computing critical values from a permutational distribution. Two of these exact methods are discussed and shown to be incorrect. The correct permutational distribution is derived and studied with respect to its behavior under unequal censoring in the light of recent results proving that the permutational version and the unconditional version of the log-rank test are asymptotically equivalent even under unequal censoring. The log-rank test is studied by simulations of a realistic scenario from a bioassay with small numbers of tumors.
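
The permutational construction discussed here is simple to state: recompute the statistic over random reassignments of the group labels while keeping each (time, status) pair intact. A self-contained sketch with a basic two-group log-rank statistic on synthetic data:

```python
import numpy as np

def logrank_stat(time, event, group):
    """Standardized two-group log-rank statistic (O - E over event times)."""
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        d = ((time == t) & (event == 1)).sum()                # deaths at t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n
        if n > 1:                       # hypergeometric variance term
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e / np.sqrt(var)

rng = np.random.default_rng(3)
time = rng.exponential(1.0, 40)
event = (rng.random(40) < 0.7).astype(int)    # 1 = tumor observed
group = np.repeat([0, 1], 20)                 # control vs dose group
obs = logrank_stat(time, event, group)

# Permutational distribution: permute group labels only, keeping each
# (time, event) pair intact -- the construction studied in the paper.
perm = [logrank_stat(time, event, rng.permutation(group))
        for _ in range(2000)]
print("two-sided permutation p-value:", np.mean(np.abs(perm) >= abs(obs)))
```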

  12. Comprehensive Monitoring Program: Air Quality Data Assessment Report for FY90. Volume 2. Version 3.1

    DTIC Science & Technology

    1991-09-01

Comprehensive Monitoring Program, Contract Number DAAA15-87-0095: Final Air Quality Data Assessment Report for FY90, Version 3.1, Volume II. Approved for public release; distribution is unlimited. Abstract: The objective of this CMP is to verify and evaluate potential air quality health

  13. Documentation for the machine-readable version of the Cape Photographic Durchmusterung (CPD)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The complete catalog is contained in the magnetic tape file, and corrections published in all errata have been made to the data. The machine version contains 454877 records, but only 454875 stars (two stars were later deleted, but their logical records are retained in the file so that the zone counts are not different from the published catalog).

  14. Documentation for the machine-readable version of the Cordoba Durchmusterung (CD)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is presented. The complete catalog is contained in the magnetic tape file, and corrections published in all corrigenda were made to the data. The machine version contains 613959 records, but only 613953 stars (six stars were later deleted, but their logical records are retained in the file so that the zone counts are not different from the published catalog).

  15. Documentation for the machine readable version of the Yale Catalogue of the Positions and Proper Motions of Stars between Declinations -60 deg and -70 deg (Fallon 1983)

    NASA Technical Reports Server (NTRS)

    Roman, N. G.; Warren, W. H., Jr.

    1984-01-01

    The machine-readable, character-coded version of the catalog, as it is currently being distributed from the Astronomical Data Center(ADC), is described. The format and data provided in the magnetic tape version differ somewhat from those of the published catalog, which was also produced from a tape prepared at the ADC. The primary catalog data are positions and proper motions (equinox 1950.0) for 14597 stars.

  16. SARAH 3.2: Dirac gauginos, UFO output, and more

    NASA Astrophysics Data System (ADS)

    Staub, Florian

    2013-07-01

SARAH is a Mathematica package optimized for the fast, efficient and precise study of supersymmetric models beyond the MSSM: a new model can be defined in a short form and all vertices are derived. This allows SARAH to create model files for FeynArts/FormCalc, CalcHep/CompHep and WHIZARD/O'Mega. The newest version of SARAH now provides the possibility to create model files in the UFO format, which is supported by MadGraph 5, MadAnalysis 5, GoSam, and soon by Herwig++. Furthermore, SARAH also calculates the mass matrices, RGEs and 1-loop corrections to the mass spectrum. This information is used to write source code for SPheno in order to create a precision spectrum generator for the given model. This spectrum-generator-generator functionality as well as the output of WHIZARD and CalcHep model files has seen further improvement in this version. Models including Dirac gauginos are also supported in the new version of SARAH, and additional checks for the consistency of the implementation of new models have been created. Program summary: Program title: SARAH Catalogue identifier: AEIB_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIB_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 322 411 No. of bytes in distributed program, including test data, etc.: 3 629 206 Distribution format: tar.gz Programming language: Mathematica. Computer: All for which Mathematica is available. Operating system: All for which Mathematica is available. Classification: 11.1, 11.6. Catalogue identifier of previous version: AEIB_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 808 Does the new version supersede the previous version?: Yes, the new version includes all known features of the previous version but also provides the new features mentioned below. Nature of problem: To use Madgraph for new models it is necessary to provide the corresponding model files, which include all information about the interactions of the model. However, deriving the vertices for a given model and putting them into model files which can be used with Madgraph is usually very time consuming. Dirac gauginos are not present in the minimal supersymmetric standard model (MSSM) or many extensions of it. Dirac mass terms for vector superfields lead to new structures in the supersymmetric (SUSY) Lagrangian (a bilinear mass term between gaugino and matter fermion as well as new D-terms) and also modify the SUSY renormalization group equations (RGEs). The Dirac character of gauginos can change the collider phenomenology. In addition, they come with an extended Higgs sector for which a precise calculation of the 1-loop masses has not been available so far. Solution method: SARAH calculates the complete Lagrangian for a given model whose gauge sector can be any direct product of SU(N) gauge groups. The chiral superfields can transform as any irreducible representation with respect to these gauge groups, and it is possible to handle an arbitrary number of symmetry breakings or particle rotations. The gauge fixing is also added automatically. Using this information, SARAH derives all vertices for a model. These vertices can be exported to model files in the UFO format, which is supported by Madgraph and other codes like GoSam, MadAnalysis or ALOHA. The user can also study models with Dirac gauginos.
In that case SARAH includes all possible terms in the Lagrangian stemming from the new structures and can also calculate the RGEs. The entire impact of these terms is then taken into account in the output of SARAH to UFO, CalcHep, WHIZARD, FeynArts and SPheno. Reasons for new version: SARAH provides, with this version, the possibility of creating model files in the UFO format. The UFO format is supposed to become a standard format for model files which should be supported by many different tools in the future. Also, models with Dirac gauginos were not supported in earlier versions. Summary of revisions: Support of models with Dirac gauginos; output of model files in the UFO format; speed improvement in the output of WHIZARD model files; CalcHep output supports the internal diagonalization of mass matrices; output of control files for the LHPC spectrum plotter; support of the generalized PDG numbering scheme PDG.IX; improvement of the calculation of the decay widths and branching ratios with SPheno; the calculation of new low-energy observables has been added to the SPheno output; the handling of gauge fixing terms has been significantly simplified. Restrictions: SARAH can only derive the Lagrangian in an automatized way for N=1 SUSY models, but not for those with more SUSY generators. Furthermore, SARAH supports only renormalizable operators in the output of model files in the UFO format and also for CalcHep, FeynArts and WHIZARD. Color sextets are not yet included in the model files for Monte Carlo tools. Dimension 5 operators are only supported in the calculation of the RGEs and mass matrices. Unusual features: SARAH does not need the Lagrangian of a model as input to calculate the vertices. The gauge structure, particle content and superpotential as well as rotations stemming from gauge symmetry breaking are sufficient. All further information is derived by SARAH on its own. Therefore, the model files are very short and the implementation of new models is fast and easy. In addition, the implementation of a model can be checked for physical and formal consistency, and SARAH can generate Fortran code for a full 1-loop analysis of the mass spectrum in the context of Dirac gauginos. Running time: Measured CPU time for the evaluation of the MSSM using a Lenovo Thinkpad X220 with i7 processor (2.53 GHz). Calculating the complete Lagrangian: 9 s. Calculating all vertices: 51 s. Output of the UFO model files: 49 s.

  17. Development of a Distributed Hydrologic Model Using Triangulated Irregular Networks for Continuous, Real-Time Flood Forecasting

    NASA Astrophysics Data System (ADS)

    Ivanov, V. Y.; Vivoni, E. R.; Bras, R. L.; Entekhabi, D.

    2001-05-01

Triangulated Irregular Networks (TINs) are widespread in many finite-element modeling applications stressing high spatial non-uniformity while describing the domain of interest in an optimized fashion that results in superior computational efficiency. TINs, being adaptive to the complexity of any terrain, are capable of maintaining topological relations between critical surface features and therefore afford higher flexibility in data manipulation. The TIN-based Real-time Integrated Basin Simulator (tRIBS) is a distributed hydrologic model that utilizes the mesh architecture and the software environment developed for the CHILD landscape evolution model and employs the hydrologic routines of its raster-oriented predecessor, RIBS. As a fully independent software unit, tRIBS consolidates the strengths of the distributed approach with an efficient computational data platform. The current version couples the unsaturated and saturated zones and accounts for the interaction of moving infiltration fronts with a variable groundwater surface, allowing the model to handle both storm and interstorm periods in a continuous fashion. Recent model enhancements have included the development of interstorm hydrologic fluxes through an evapotranspiration scheme as well as the incorporation of a rainfall interception module. Overall, the tRIBS model has proven to properly mimic successive phases of the distributed catchment response by reproducing various runoff production mechanisms and handling their meteorological constraints. Important improvements in modeling options, robustness to data availability, and overall design flexibility have also been accomplished. Current efforts are focused on further model development as well as the application of tRIBS to various watersheds.

  18. Critic: a new program for the topological analysis of solid-state electron densities

    NASA Astrophysics Data System (ADS)

    Otero-de-la-Roza, A.; Blanco, M. A.; Pendás, A. Martín; Luaña, Víctor

    2009-01-01

In this paper we introduce CRITIC, a new program for the topological analysis of the electron densities of crystalline solids. Two different versions of the code are provided, one adapted to the LAPW (Linear Augmented Plane Wave) density calculated by the WIEN2K package and the other to the ab initio Perturbed Ion (aiPI) density calculated with the PI7 code. Using the converged ground state densities, CRITIC can locate their critical points, determine atomic basins and integrate properties within them, and generate several graphical representations, which include topological atomic basins and primary bundles, contour maps of ρ and ∇ρ, vector maps of ∇ρ, chemical graphs, etc. Program summary: Program title: CRITIC Catalogue identifier: AECB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL, version 3 No. of lines in distributed program, including test data, etc.: 1 206 843 No. of bytes in distributed program, including test data, etc.: 12 648 065 Distribution format: tar.gz Programming language: FORTRAN 77 and 90 Computer: Any computer capable of compiling Fortran Operating system: Unix, GNU/Linux Classification: 7.3 Nature of problem: Topological analysis of the electron density in periodic solids. Solution method: The automatic localization of the electron density critical points is based on a recursive partitioning of the Wigner-Seitz cell into tetrahedra, followed by a Newton search from significant points in each tetrahedron. Plotting of, and integration on, the atomic basins is currently based on a new implementation of Keith's promega algorithm. Running time: Variable, depending on the task. From seconds to a few minutes for the localization of critical points. Hours to days for the determination of the atomic basin shapes and properties. Times correspond to a typical 2007 PC.
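
The critical-point search at the heart of such codes is a root search on the gradient of the density, seeded from many starting points. A toy sketch for a model density made of two Gaussians, with a coarse line of seeds standing in for the tetrahedral partitioning:

```python
import numpy as np
from scipy.optimize import root

centers = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])  # toy "atoms"

def grad_rho(x):
    """Gradient of a model density: sum of two unit Gaussians."""
    g = np.zeros(3)
    for c in centers:
        d = x - c
        g += -2.0 * d * np.exp(-np.dot(d, d))
    return g

found = []
for seed in np.linspace(-0.5, 2.5, 7):          # coarse seeding along x
    sol = root(grad_rho, x0=[seed, 0.0, 0.0])   # Newton-type gradient root
    if sol.success and not any(np.allclose(sol.x, f, atol=1e-5) for f in found):
        found.append(sol.x)

for p in found:
    print(np.round(p, 4))   # expect maxima at the nuclei plus the midpoint CP
```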

  19. Smart Grid Educational Series | Energy Systems Integration Facility | NREL

    Science.gov Websites

Smart Grid Educational Series workshop materials cover the power system from generation through transmission, all the way to the distribution infrastructure. Available downloads include presentations (with text versions) on key takeaways from breakout group discussions and on using the MultiSpeak Data Model Standard & Essence Anomaly Detection for ICS.

  20. A Review of DIMPACK Version 1.0: Conditional Covariance-Based Test Dimensionality Analysis Package

    ERIC Educational Resources Information Center

    Deng, Nina; Han, Kyung T.; Hambleton, Ronald K.

    2013-01-01

    DIMPACK Version 1.0 for assessing test dimensionality based on a nonparametric conditional covariance approach is reviewed. This software was originally distributed by Assessment Systems Corporation and now can be freely accessed online. The software consists of Windows-based interfaces of three components: DIMTEST, DETECT, and CCPROX/HAC, which…

  1. The Relationship between Distributed Leadership and Teachers' Academic Optimism

    ERIC Educational Resources Information Center

    Mascall, Blair; Leithwood, Kenneth; Straus, Tiiu; Sacks, Robin

    2008-01-01

    Purpose: The goal of this study was to examine the relationship between four patterns of distributed leadership and a modified version of a variable Hoy et al. have labeled "teachers' academic optimism." The distributed leadership patterns reflect the extent to which the performance of leadership functions is consciously aligned across…

  2. An Evaluation of Short-Term Distributed Online Learning Events

    ERIC Educational Resources Information Center

    Barker, Bradley; Brooks, David

    2005-01-01

    The purpose of this study was to evaluate the effectiveness of short-term distributed online training events using an adapted version of the compressed evaluation form developed by Wisher and Curnow (1998). Evaluating online distributed training events provides insight into course effectiveness, the contribution of prior knowledge to learning, and…

  3. Simulation of n-qubit quantum systems. III. Quantum operations

    NASA Astrophysics Data System (ADS)

    Radtke, T.; Fritzsche, S.

    2007-05-01

During the last decade, several quantum information protocols, such as quantum key distribution, teleportation or quantum computation, have attracted a lot of interest. Despite the recent success and research efforts in quantum information processing, however, we are just at the beginning of understanding the role of entanglement and the behavior of quantum systems in noisy environments, i.e. for nonideal implementations. Therefore, in order to facilitate the investigation of entanglement and decoherence in n-qubit quantum registers, here we present a revised version of the FEYNMAN program for working with quantum operations and their associated (Jamiołkowski) dual states. Based on the implementation of several popular decoherence models, we provide tools especially for the quantitative analysis of quantum operations. Apart from the implementation of different noise models, the current program extension may help investigate the fragility of many quantum states, one of the main obstacles in realizing quantum information protocols today. Program summary: Title of program: Feynman Catalogue identifier: ADWE_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWE_v3_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: None Operating systems: Any system that supports MAPLE; tested under Microsoft Windows XP, SuSe Linux 10 Program language used: MAPLE 10 Typical time and memory requirements: Most commands that act upon quantum registers with five or fewer qubits take ⩽10 seconds of processor time (on a Pentium 4 processor with ⩾2 GHz or equivalent) and 5-20 MB of memory. Especially when working with symbolic expressions, however, the memory and time requirements critically depend on the number of qubits in the quantum registers, owing to the exponential dimension growth of the associated Hilbert space. For example, complex (symbolic) noise models (with several Kraus operators) for multi-qubit systems often result in very large symbolic expressions that dramatically slow down the evaluation of measures or other quantities. In these cases, MAPLE's assume facility sometimes helps to reduce the complexity of symbolic expressions, but often only numerical evaluation is possible. Since the complexity of the FEYNMAN commands is very different, no general scaling law for the CPU time and memory usage can be given. No. of bytes in distributed program including test data, etc.: 799 265 No. of lines in distributed program including test data, etc.: 18 589 Distribution format: tar.gz Reasons for new version: While the previous program versions were designed mainly to create and manipulate the state of quantum registers, the present extension aims to support quantum operations as the essential ingredient for studying the effects of noisy environments. Does this version supersede the previous version: Yes Nature of the physical problem: Today, entanglement is identified as the essential resource in virtually all aspects of quantum information theory. In most practical implementations of quantum information protocols, however, decoherence typically limits the lifetime of entanglement. It is therefore necessary and highly desirable to understand the evolution of entanglement in noisy environments.
Method of solution: Using the computer algebra system MAPLE, we have developed a set of procedures that support the definition and manipulation of n-qubit quantum registers as well as (unitary) logic gates and (nonunitary) quantum operations that act on the quantum registers. The provided hierarchy of commands can be used interactively in order to simulate and analyze the evolution of n-qubit quantum systems in ideal and nonideal quantum circuits.
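
The central new object, a quantum operation acting on a density matrix through Kraus operators, is compact to state numerically. A one-qubit sketch of the depolarizing channel in NumPy; the FEYNMAN package itself is a MAPLE program with symbolic support:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def depolarize(rho, p):
    """Apply the depolarizing channel rho -> sum_k K_k rho K_k^dagger."""
    ks = [np.sqrt(1 - 3 * p / 4) * I,
          np.sqrt(p / 4) * X, np.sqrt(p / 4) * Y, np.sqrt(p / 4) * Z]
    return sum(k @ rho @ k.conj().T for k in ks)

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # pure |0><0|
out = depolarize(rho, p=0.2)
print(out.real)               # populations drift toward the mixed state
print(np.trace(out).real)     # trace preserved: Kraus completeness
```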

  4. Multiple scattering of 13 and 20 MeV electrons by thin foils: a Monte Carlo study with GEANT, Geant4, and PENELOPE.

    PubMed

    Vilches, M; García-Pareja, S; Guerrero, R; Anguiano, M; Lallena, A M

    2009-09-01

In this work, recent results from experiments and simulations (with EGSnrc) performed by Ross et al. [Med. Phys. 35, 4121-4131 (2008)] on electron scattering by foils of different materials and thicknesses are compared to those obtained using several Monte Carlo codes. Three codes have been used: GEANT (version 3.21), Geant4 (version 9.1, patch03), and PENELOPE (version 2006). In the case of PENELOPE, mixed and fully detailed simulations have been carried out. Transverse dose distributions in air have been obtained in order to compare with measurements. The detailed PENELOPE simulations show excellent agreement with experiment. The calculations performed with GEANT and PENELOPE (mixed) agree with experiment within 3% except for the Be foil. In the case of Geant4, the distributions are 5% narrower compared to the experimental ones, though the agreement is very good for the Be foil. The transverse dose distribution in water obtained with PENELOPE (mixed) is 4% wider than that calculated by Ross et al. using EGSnrc and is 1% narrower than the transverse dose distribution in air, as considered in the experiment. All the codes give reasonable agreement (within 5%) with the experimental results for all the materials and thicknesses studied.

  5. flexsurv: A Platform for Parametric Survival Modeling in R

    PubMed Central

    Jackson, Christopher H.

    2018-01-01

    flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
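
The fitting strategy flexsurv describes, maximizing the full log-likelihood with density terms for observed times and survival terms for censored ones, ports directly to other stacks. A minimal Weibull sketch in Python with SciPy; the names here are illustrative and not flexsurv's API:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(4)
t_true = weibull_min.rvs(1.5, scale=2.0, size=200, random_state=rng)
c = rng.uniform(0, 4, 200)                 # censoring times
time = np.minimum(t_true, c)
event = (t_true <= c).astype(float)        # 1 = observed, 0 = censored

def negloglik(params):
    """Full likelihood: density for events, survival for censored times."""
    shape, scale = np.exp(params)          # log-parameterization keeps > 0
    logf = weibull_min.logpdf(time, shape, scale=scale)
    logS = weibull_min.logsf(time, shape, scale=scale)
    return -np.sum(event * logf + (1 - event) * logS)

fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("shape, scale:", np.exp(fit.x))      # should approach (1.5, 2.0)
```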

  6. Measurement of the steady surface pressure distribution on a single rotation large scale advanced prop-fan blade at Mach numbers from 0.03 to 0.78

    NASA Technical Reports Server (NTRS)

    Bushnell, Peter

    1988-01-01

    The aerodynamic pressure distribution was determined on a rotating Prop-Fan blade at the S1-MA wind tunnel facility operated by the Office National d'Etudes et de Recherches Aerospatiales (ONERA) in Modane, France. The pressure distributions were measured at thirteen radial stations on a single rotation Large Scale Advanced Prop-Fan (LAP/SR7) blade, for a sequence of operating conditions including inflow Mach numbers ranging from 0.03 to 0.78. Pressure distributions for more than one power coefficient and/or advance ratio setting were measured for most of the inflow Mach numbers investigated. Due to facility power limitations, the Prop-Fan test installation was a two-bladed version of the eight-bladed design configuration. The power coefficient range investigated was therefore selected to cover typical power-loading-per-blade conditions which occur within the Prop-Fan operating envelope. The experimental results provide an extensive source of information on the aerodynamic behavior of the swept Prop-Fan blade, including details which have been elusive to current computational models and do not appear in the two-dimensional airfoil data.

  7. The Reading the Mind in the Eyes test: validation of a French version and exploration of cultural variations in a multi-ethnic city.

    PubMed

    Prevost, Marie; Carrier, Marie-Eve; Chowne, Gabrielle; Zelkowitz, Phyllis; Joseph, Lawrence; Gold, Ian

    2014-01-01

    The first aim of our study was to validate the French version of the Reading the Mind in the Eyes test, a theory of mind test. The second aim was to test whether cultural differences modulate performance on this test. A total of 109 participants completed the original English version and 97 participants completed the French version. Another group of 30 participants completed the French version twice, one week apart. We report a similar overall distribution of scores in both versions and no differences in the mean scores between them. However, two items in the French version did not attract a majority response, in contrast to the results of the English version. Test-retest showed good stability of the French version. As expected, participants who do not speak French or English at home, and those born in Asia, performed worse than North American participants and those who speak English or French at home. We report a French version with acceptable validity and good stability. The cultural differences observed support the idea that Asian culture does not use theory of mind to explain people's behaviours as much as North American people do.

  8. Reference aquaplanet climate in the Community Atmosphere Model, Version 5

    DOE PAGES

    Medeiros, Brian; Williamson, David L.; Olson, Jerry G.

    2016-03-18

    In this study, fundamental characteristics of the aquaplanet climate simulated by the Community Atmosphere Model, Version 5.3 (CAM5.3) are presented. The assumptions and simplifications of the configuration are described. A 16-year-long, perpetual-equinox integration with prescribed SST using the model's standard 1° grid spacing is presented as a reference simulation. Statistical analysis is presented that shows similar aquaplanet configurations can be run for about 2 years to obtain robust climatological structures, including global and zonal means, eddy statistics, and precipitation distributions. Such a simulation can be compared to the reference simulation to discern differences in the climate, including an assessment of confidence in the differences. To aid such comparisons, the reference simulation has been made available via earthsystemgrid.org. Examples are shown comparing the reference simulation with simulations from the CAM5 series that make different microphysical assumptions and use a different dynamical core.

  9. Improved reference models for middle atmosphere ozone

    NASA Technical Reports Server (NTRS)

    Keating, G. M.; Pitts, M. C.; Chen, C.

    1990-01-01

    This paper describes the improvements introduced into the original version of the ozone reference model of Keating and Young (1985, 1987), which is to be incorporated in the next COSPAR International Reference Atmosphere (CIRA). The ozone reference model will provide information on the global ozone distribution (including the ozone vertical structure as a function of month and latitude from 25 to 90 km) combining data from five recent satellite experiments: the Nimbus 7 LIMS, Nimbus 7 SBUV, AE-2 Stratospheric Aerosol and Gas Experiment (SAGE), Solar Mesosphere Explorer (SME) UV Spectrometer, and SME 1.27 Micron Airglow. The improved version of the reference model uses reprocessed AE-2 SAGE data (sunset) and extends the use of SAGE data from 1981 to the 1981-1983 time period. Comparisons are presented between the results of this ozone model and various nonsatellite measurements at different levels in the middle atmosphere.

  10. Southern Durchmusterung (Schoenfeld 1886): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.; Ochsenbein, Francois

    1989-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The Southern Durchmusterung (SD) was computerized at the Centre de Donnees Astronomiques de Strasbourg and at the Astronomical Data Center at the National Space Science Data Center, NASA/Goddard Space Flight Center. Corrigenda listed in the original SD volume and published by Kuenster and Sticker were incorporated into the machine file. In addition, one star indicated to be missing in a published list, and later verified, is flagged so that it can be omitted from computer plotted charts if desired. Stars deleted in the various errata lists were similarly flagged, while those with revised data are flagged and listed in a separate table. This catalog covers the zones -02 to -23 degrees; zones +89 to -01 degrees (the Bonner Durchmusterung) are included in a separate catalog available in machine-readable form.

  11. JPL Development Ephemeris number 96

    NASA Technical Reports Server (NTRS)

    Standish, E. M., Jr.; Keesey, M. S. W.; Newhall, X. X.

    1976-01-01

    The fourth issue of JPL Planetary Ephemerides, designated JPL Development Ephemeris No. 96 (DE96), is described. This ephemeris replaces a previous issue which has become obsolete since its release in 1969. Improvements in this issue include more recent and more accurate observational data, new types of data, better processing of the data, and refined equations of motion which more accurately describe the actual physics of the solar system. The descriptions in this report include these new features as well as the new export version of the ephemeris. The tapes and requisite software will be distributed through the NASA Computer Software Management and Information Center (COSMIC) at the University of Georgia.

  12. CATS Version 2 Aerosol Feature Detection and Applications for Data Assimilation

    NASA Technical Reports Server (NTRS)

    Nowottnick, E. P.; Yorks, J. E.; Selmer, P. A.; Palm, S. P.; Hlavka, D. L.; Pauly, R. M.; Ozog, S.; McGill, M. J.; Da Silva, A.

    2017-01-01

    The Cloud Aerosol Transport System (CATS) lidar has been operating onboard the International Space Station (ISS) since February 2015 and provides vertical observations of clouds and aerosols using total attenuated backscatter and depolarization measurements. From February to March 2015, CATS operated in Mode 1, providing backscatter and depolarization measurements at 532 and 1064 nm. CATS began operation in Mode 2 in March 2015, providing backscatter and depolarization measurements at 1064 nm, and has continued to operate in this mode to the present. CATS level 2 products are derived from these measurements, including feature detection, cloud-aerosol discrimination, cloud and aerosol typing, and optical properties of cloud and aerosol layers. Here, we present changes to our level 2 algorithms, which were aimed at reducing several biases in our version 1 level 2 data products. These changes will be incorporated into our upcoming version 2 level 2 data release in summer 2017. Additionally, owing to the near real time (NRT) data downlinking capabilities of the ISS, CATS provides expedited NRT data products within 6 hours of observation time. This capability provides a unique opportunity for supporting field campaigns and for developing data assimilation techniques to improve simulated cloud and aerosol vertical distributions in models. We additionally present preliminary work toward assimilating CATS observations into the NASA Goddard Earth Observing System version 5 (GEOS-5) global atmospheric model and data assimilation system.

  13. Trends and Variability of Global Fire Emissions Due To Historical Anthropogenic Activities

    NASA Astrophysics Data System (ADS)

    Ward, Daniel S.; Shevliakova, Elena; Malyshev, Sergey; Rabin, Sam

    2018-01-01

    Globally, fires are a major source of carbon from the terrestrial biosphere to the atmosphere, occurring on a seasonal cycle and with substantial interannual variability. To understand past trends and variability in sources and sinks of terrestrial carbon, we need quantitative estimates of global fire distributions. Here we introduce an updated version of the Fire Including Natural and Agricultural Lands model, version 2 (FINAL.2), modified to include multiday burning and enhanced fire spread rate in forest crowns. We demonstrate that the improved model reproduces the interannual variability and spatial distribution of fire emissions reported in present-day remotely sensed inventories. We use FINAL.2 to simulate historical (post-1700) fires and attribute past fire trends and variability to individual drivers: land use and land cover change, population growth, and lightning variability. Global fire emissions of carbon increase by about 10% between 1700 and 1900, reaching a maximum of 3.4 Pg C yr⁻¹ in the 1910s, followed by a decrease to about 5% below year-1700 levels by 2010. The decrease in emissions from the 1910s to the present day is driven mainly by land use change, with a smaller contribution from increased fire suppression due to increased human population, and is largest in Sub-Saharan Africa and South Asia. Interannual variability of global fire emissions is similar in the present day to that in the early historical period, but present-day wildfires would be more variable in the absence of land use change.

  14. PRMS-IV, the precipitation-runoff modeling system, version 4

    USGS Publications Warehouse

    Markstrom, Steven L.; Regan, R. Steve; Hay, Lauren E.; Viger, Roland J.; Webb, Richard M.; Payn, Robert A.; LaFontaine, Jacob H.

    2015-01-01

    Computer models that simulate the hydrologic cycle at a watershed scale facilitate assessment of the effects of variability in climate, biota, geology, and human activities on water availability and flow. This report describes an updated version of the Precipitation-Runoff Modeling System. The Precipitation-Runoff Modeling System is a deterministic, distributed-parameter, physical-process-based modeling system developed to evaluate the response of various combinations of climate and land use on streamflow and general watershed hydrology. Several new model components were developed, and all existing components were updated, to enhance performance and supportability. This report describes the history, application, concepts, organization, and mathematical formulation of the Precipitation-Runoff Modeling System and its model components. This updated version provides improvements in (1) system flexibility for integrated science, (2) verification of conservation of water during simulation, (3) methods for spatial distribution of climate boundary conditions, and (4) methods for simulation of soil-water flow and storage.

  15. Audio distribution and Monitoring Circuit

    NASA Technical Reports Server (NTRS)

    Kirkland, J. M.

    1983-01-01

    Versatile circuit accepts and distributes TV audio signals. The three-meter audio distribution and monitoring circuit provides flexibility in monitoring, mixing, and distributing audio inputs and outputs at various signal and impedance levels. Program material can be monitored simultaneously on three channels, or a single-channel version can be built to monitor transmitted or received signal levels, drive speakers, interface to building communications, and drive long-line circuits.

  16. Psychometric properties of the Persian version of the Ambulatory Care Learning Educational Environment Measure (ACLEEM) questionnaire, Shiraz, Iran

    PubMed Central

    Parvizi, Mohammad Mahdi; Amini, Mitra; Dehghani, Mohammad Reza; Jafari, Peyman; Parvizi, Zahra

    2016-01-01

    Purpose: Evaluation is the main component in design and implementation of educational activities and rapid growth of educational institution programs. Outpatient medical education and clinical training environment is one of the most important parts of training of medical residents. This study aimed to determine the validity and reliability of the Persian version of the Ambulatory Care Learning Educational Environment Measure (ACLEEM) questionnaire, as an instrument for assessment of educational environments in residency medical clinics. Materials and methods: This study was performed on 180 residents in Shiraz University of Medical Sciences, Shiraz, Iran, in 2014–2015. The questionnaire designers’ electronic permission (by email) and the residents’ verbal consent were obtained before distributing the questionnaires. The study data were gathered using the ACLEEM questionnaire developed by Arnoldo Riquelme in 2013. The data were analyzed using the SPSS statistical software, version 14, and MedCalc® software. Then, the construct validity, including convergent and discriminant validities, of the Persian version of the ACLEEM questionnaire was assessed. Its internal consistency was also checked by Cronbach’s alpha coefficient. Results: Five team members who were experts in medical education were consulted to test the cultural adaptation, linguistic equivalency, and content validity of the Persian version of the questionnaire. Content validity indexes were >0.9 in all items. In factor analysis of the instrument, the Kaiser–Meyer–Olkin index was 0.928 and Bartlett’s sphericity test yielded the following results: χ² = 6,717.551, df = 1,225, and P ≤ 0.001. Besides, Cronbach’s alpha coefficient of the ACLEEM questionnaire was 0.964. Cronbach’s alpha coefficients were also >0.80 in all three domains of the questionnaire. Overall, the Persian version of ACLEEM showed excellent convergent validity and acceptable discriminant validity, except for the clinical training domain. Conclusion: According to the results, the Persian version of the ACLEEM questionnaire was a valid and reliable instrument for Iranian residents to assess specialized clinics and residency ambulatory settings. PMID:27729824
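
    Cronbach's alpha, the internal-consistency statistic reported above, is straightforward to reproduce from an item-score matrix: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal Python sketch follows; the toy scores are invented for illustration and have no relation to the ACLEEM data.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, k_items) matrix of item scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances / total_variance)

    scores = np.array([[4, 5, 4], [2, 3, 3], [5, 5, 4], [3, 4, 4], [1, 2, 2]])
    print(round(cronbach_alpha(scores), 3))   # near 1 for highly consistent items
    ```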

  17. Psychometric properties of the Persian version of the Ambulatory Care Learning Educational Environment Measure (ACLEEM) questionnaire, Shiraz, Iran.

    PubMed

    Parvizi, Mohammad Mahdi; Amini, Mitra; Dehghani, Mohammad Reza; Jafari, Peyman; Parvizi, Zahra

    2016-01-01

    Evaluation is the main component in design and implementation of educational activities and rapid growth of educational institution programs. Outpatient medical education and clinical training environment is one of the most important parts of training of medical residents. This study aimed to determine the validity and reliability of the Persian version of the Ambulatory Care Learning Educational Environment Measure (ACLEEM) questionnaire, as an instrument for assessment of educational environments in residency medical clinics. This study was performed on 180 residents in Shiraz University of Medical Sciences, Shiraz, Iran, in 2014-2015. The questionnaire designers' electronic permission (by email) and the residents' verbal consent were obtained before distributing the questionnaires. The study data were gathered using the ACLEEM questionnaire developed by Arnoldo Riquelme in 2013. The data were analyzed using the SPSS statistical software, version 14, and MedCalc® software. Then, the construct validity, including convergent and discriminant validities, of the Persian version of the ACLEEM questionnaire was assessed. Its internal consistency was also checked by Cronbach's alpha coefficient. Five team members who were experts in medical education were consulted to test the cultural adaptation, linguistic equivalency, and content validity of the Persian version of the questionnaire. Content validity indexes were >0.9 in all items. In factor analysis of the instrument, the Kaiser-Meyer-Olkin index was 0.928 and Bartlett's sphericity test yielded the following results: χ² = 6,717.551, df = 1,225, and P ≤ 0.001. Besides, Cronbach's alpha coefficient of the ACLEEM questionnaire was 0.964. Cronbach's alpha coefficients were also >0.80 in all three domains of the questionnaire. Overall, the Persian version of ACLEEM showed excellent convergent validity and acceptable discriminant validity, except for the clinical training domain. According to the results, the Persian version of the ACLEEM questionnaire was a valid and reliable instrument for Iranian residents to assess specialized clinics and residency ambulatory settings.

  18. Finding higher symmetries of differential equations using the MAPLE package DESOLVII

    NASA Astrophysics Data System (ADS)

    Vu, K. T.; Jefferson, G. F.; Carminati, J.

    2012-04-01

    We present and describe, with illustrative examples, the MAPLE computer algebra package DESOLVII, which is a major upgrade of DESOLV. DESOLVII now includes new routines allowing the determination of higher symmetries (contact and Lie-Bäcklund) for systems of both ordinary and partial differential equations. Catalogue identifier: ADYZ_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADYZ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 10 858 No. of bytes in distributed program, including test data, etc.: 112 515 Distribution format: tar.gz Programming language: MAPLE internal language Computer: PCs and workstations Operating system: Linux, Windows XP and Windows 7 RAM: Depends on the type of problem and the complexity of the system (small ≈ MB, large ≈ GB) Classification: 4.3, 5 Catalogue identifier of previous version: ADYZ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 176 (2007) 682 Does the new version supersede the previous version?: Yes Nature of problem: There are a number of approaches one may use to find solutions to systems of differential equations. These include numerical, perturbative, and algebraic methods. Unfortunately, approximate or numerical solution methods may be inappropriate in many cases or even impossible due to the nature of the system, and hence exact methods are important. In their own right, exact solutions are valuable not only as a yardstick for approximate/numerical solutions but also as a means of elucidating the physical meaning of fundamental quantities in systems. One particular method of finding special exact solutions is afforded by the work of Sophus Lie and the use of continuous transformation groups. The power of Lie's group theoretic method lies in its ability to unify a number of ad hoc integration methods through the use of symmetries, that is, continuous groups of transformations which leave the differential system “unchanged”. These symmetry groups may then be used to find special solutions. Solutions found in this manner are called similarity or invariant solutions. The method of finding symmetry transformations initially requires the generation of a large overdetermined system of linear, homogeneous, coupled PDEs. The integration of this system is usually reasonably straightforward, requiring the (often elementary) integration of equations by splitting the system according to dependency on different orders and degrees of the dependent variable(s). Unfortunately, in the case of contact and Lie-Bäcklund symmetries, the integration of the determining system becomes increasingly more difficult as the order of the symmetry is increased. This is because the symmetry generating functions become dependent on higher orders of the derivatives of the dependent variables, and this diminishes the overall resulting “separable” differential conditions derived from the main determining system. Furthermore, typical determining systems consist of tens to hundreds of equations and this, combined with standard mechanical solution methods, makes the process well suited to automation using computer algebra systems. The new MAPLE package DESOLVII, which is a major upgrade of DESOLV, now includes routines allowing the determination of higher symmetries (contact and Lie-Bäcklund) for systems of both ordinary and partial differential equations.
In addition, significant improvements have been implemented to the algorithm for PDE solution. Finally, we have made some improvements in the overall automated process so as to improve user-friendliness by reducing user intervention where possible. Solution method: See “Nature of problem” above. Reasons for new version: New and improved functionality. New functionality: the package can now compute generalised symmetries. Much improved efficiency (speed and memory use) of existing routines. Restrictions: Sufficient memory may be required for complex systems. Running time: Depends on the type of problem and the complexity of the system (small ≈ seconds, large ≈ hours).
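
    DESOLVII assembles and integrates the determining system in MAPLE, but the defining property of a symmetry, that it maps solutions to solutions, can be checked in a few lines of any computer algebra system. The Python/SymPy sketch below verifies the classical Galilean-boost symmetry of the heat equation on one concrete solution; both the equation and the solution are textbook examples chosen purely for illustration, not output of the package.

    ```python
    import sympy as sp

    x, t, lam = sp.symbols('x t lam')

    def heat_residual(f):
        """Residual of the heat equation u_t = u_xx."""
        return sp.simplify(sp.diff(f, t) - sp.diff(f, x, 2))

    u = x**2 + 2*t                          # one solution of u_t = u_xx
    assert heat_residual(u) == 0

    # Galilean boost: v(x,t) = exp(-lam*x + lam**2*t) * u(x - 2*lam*t, t)
    v = sp.exp(-lam*x + lam**2*t) * u.subs(x, x - 2*lam*t)
    assert heat_residual(v) == 0            # the boost maps solutions to solutions
    print("Galilean boost is a symmetry of the heat equation")
    ```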

  19. Validation of a Persian Short-Form Version of a Standardised Questionnaire Assessing Oral Cancer Knowledge, Practice and Attitudes Among Dentists.

    PubMed

    Navabi, Nader; Hashemipour, Maryam A; Roughani, Aida

    2017-02-01

    Oral cancer is a global health problem; however, many dentists lack the necessary skills, knowledge and capacity to diagnose oral cancers early. This study aimed to examine the validity and reliability of a Persian short-form version of a standardised questionnaire to assess dentists' knowledge, practice and attitudes towards oral cancer. This cross-sectional analytical study was carried out in May 2015 in Tehran, Iran. An original 39-item English-language questionnaire developed by Yellowitz et al. was translated into Persian using forward and backward translation methods. A total of 15 dental professionals were asked to assess the questionnaire for content validity. Based on their feedback, a 20-item short-form version was prepared, including six demographic, six knowledge, four attitude and four practice items. The translated short-form questionnaire was subsequently distributed to 973 general dental practitioners attending a dental conference in Tehran. Internal consistency and reliability were assessed with Cronbach's alpha coefficient and item-total correlation calculations. A total of 13 professionals and 313 general dentists participated in the study (response rates: 86.7% and 32.2%, respectively). After the elimination of six items (two knowledge, two attitude and two practice items), the validity and reliability of the questionnaire was confirmed. The final Persian 14-item version of the questionnaire had acceptable validity and internal consistency. These results indicate that researchers can use this translated short-form version to evaluate oral cancer knowledge, attitudes and practices among Persian-speaking dentists; this will allow for a comparison of data between different populations.

  20. tweezercalib 2.0: Faster version of MatLab package for precise calibration of optical tweezers

    NASA Astrophysics Data System (ADS)

    Hansen, Poul Martin; Tolić-Nørrelykke, Iva Marija; Flyvbjerg, Henrik; Berg-Sørensen, Kirstine

    2006-03-01

    We present a vectorized version of the MatLab (MathWorks Inc.) package tweezercalib for calibration of optical tweezers with precision. The calibration is based on the power spectrum of the Brownian motion of a dielectric bead trapped in the tweezers. Precision is achieved by accounting for a number of factors that affect this power spectrum, as described in version 1 of the package [I.M. Tolić-Nørrelykke, K. Berg-Sørensen, H. Flyvbjerg, Matlab program for precision calibration of optical tweezers, Comput. Phys. Comm. 159 (2004) 225-240]. The graphical user interface allows the user to include or leave out each of these factors. Several "health tests" are applied to the experimental data during calibration, and test results are displayed graphically. Thus, the user can easily see whether the data comply with the theory used for their interpretation. Final calibration results are given with statistical errors and covariance matrix. New version program summary Title of program: tweezercalib Catalogue identifier: ADTV_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADTV_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Reference in CPC to previous version: I.M. Tolić-Nørrelykke, K. Berg-Sørensen, H. Flyvbjerg, Comput. Phys. Comm. 159 (2004) 225 Catalogue identifier of previous version: ADTV Does the new version supersede the original program: Yes Computer for which the program is designed and others on which it has been tested: General computer running MatLab (Mathworks Inc.) Operating systems under which the program has been tested: Windows2000, Windows-XP, Linux Programming language used: MatLab (Mathworks Inc.), standard license Memory required to execute with typical data: Of order four times the size of the data file High speed storage required: none No. of lines in distributed program, including test data, etc.: 135 989 No. of bytes in distributed program, including test data, etc.: 1 527 611 Distribution format: tar.gz Nature of physical problem: Calibrate optical tweezers with precision by fitting theory to experimental power spectrum of position of bead doing Brownian motion in incompressible fluid, possibly near microscope cover slip, while trapped in optical tweezers. Thereby determine spring constant of optical trap and conversion factor for arbitrary-units-to-nanometers for detection system. Method of solution: Elimination of cross-talk between quadrant photo-diode's output channels for positions (optional). Check that distribution of recorded positions agrees with Boltzmann distribution of bead in harmonic trap. Data compression and noise reduction by blocking method applied to power spectrum. Full accounting for hydrodynamic effects: Frequency-dependent drag force and interaction with nearby cover slip (optional). Full accounting for electronic filters (optional), for "virtual filtering" caused by detection system (optional). Full accounting for aliasing caused by finite sampling rate (optional). Standard non-linear least-squares fitting. Statistical support for fit is given, with several plots facilitating inspection of consistency and quality of data and fit. Summary of revisions: A faster fitting routine, adapted from [J. Nocedal, Y.x. Yuan, Combining trust region and line search techniques, Technical Report OTC 98/04, Optimization Technology Center, 1998; W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes. The Art of Scientific Computing, Cambridge University Press, Cambridge, 1986], is applied.
It uses fewer function evaluations, and the remaining function evaluations have been vectorized. Calls to routines in Toolboxes not included with a standard MatLab license have been replaced by calls to routines that are included in the present package. Fitting parameters are rescaled to ensure that they are all of roughly the same size (of order 1) while being fitted. Generally, the program package has been updated to comply with MatLab version 7.0, and optimized for speed. Restrictions on the complexity of the problem: Data should be positions of bead doing Brownian motion while held by optical tweezers. For high precision in final results, data should be time series measured over a long time, with sufficiently high experimental sampling rate: The sampling rate should be well above the characteristic frequency of the trap, the so-called corner frequency. Thus, the sampling frequency should typically be larger than 10 kHz. The Fast Fourier Transform used works optimally when the time series contain 2^n data points, and long measurement time is obtained with n > 12-15. Finally, the optics should be set to ensure a harmonic trapping potential in the range of positions visited by the bead. The fitting procedure checks for harmonic potential. Typical running time: Seconds Unusual features of the program: None References: The theoretical underpinnings for the procedure are found in [K. Berg-Sørensen, H. Flyvbjerg, Power spectrum analysis for optical tweezers, Rev. Sci. Instrum. 75 (2004) 594-612].
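
    The idealized core of the method is a least-squares fit of a Lorentzian, P(f) = D / (2 pi^2 (fc^2 + f^2)), to the measured power spectrum; the corner frequency fc then gives the trap stiffness through k = 2 pi gamma fc. The Python sketch below performs that fit on synthetic data; all numbers are invented, and the real package additionally blocks the spectrum, propagates errors, and applies the hydrodynamic, filter, and aliasing corrections listed above.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(f, D, fc):
        return D / (2 * np.pi**2 * (fc**2 + f**2))

    # Synthetic spectrum: fc = 500 Hz, D = 0.1 (arbitrary units); each bin of a
    # periodogram scatters exponentially around the true spectrum.
    f = np.linspace(10, 10_000, 2000)
    rng = np.random.default_rng(0)
    P = lorentzian(f, 0.1, 500.0) * rng.exponential(1.0, f.size)

    (D_hat, fc_hat), _ = curve_fit(lorentzian, f, P, p0=[1.0, 100.0])
    print(fc_hat)   # recovered corner frequency, roughly 500 Hz
    ```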

  1. Overview and Evaluation of the Community Multiscale Air Quality (CMAQ) Modeling System Version 5.2

    EPA Science Inventory

    A new version of the Community Multiscale Air Quality (CMAQ) model, version 5.2 (CMAQv5.2), is currently being developed, with a planned release date in 2017. The new model includes numerous updates from the previous version of the model (CMAQv5.1). Specific updates include a new...

  2. A catalog of stellar spectrophotometry (Adelman, et al. 1989): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.; Adelman, Saul J.

    1990-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the astronomical data centers, is described. The catalog is a collection of spectrophotometric observations made using rotating grating scanners and calibrated with the fluxes of Vega. The observations cover various wavelength regions between about 330 and 1080 nm.

  3. Comparisons between GRNTRN simulations and beam measurements of proton lateral broadening distributions

    NASA Astrophysics Data System (ADS)

    Mertens, Christopher; Moyers, Michael; Walker, Steven; Tweed, John

    Recent developments in NASA's High Charge and Energy Transport (HZETRN) code have included lateral broadening of primary ion beams due to small-angle multiple Coulomb scattering, and coupling of the ion-nuclear scattering interactions with energy loss and straggling. The new version of HZETRN based on Green function methods, GRNTRN, is suitable for modeling transport with both space environment and laboratory boundary conditions. Multiple scattering processes are a necessary extension to GRNTRN in order to accurately model ion beam experiments, to simulate the physical and biological-effective radiation dose, and to develop new methods and strategies for light ion radiation therapy. In this paper we compare GRNTRN simulations of proton lateral scattering distributions with beam measurements taken at Loma Linda Medical University. The simulated and measured lateral proton distributions will be compared for a 250 MeV proton beam on aluminum, polyethylene, polystyrene, bone, iron, and lead target materials.
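
    The small-angle multiple-Coulomb-scattering width these codes compute can be estimated with the standard Highland/PDG parametrization, theta0 = (13.6 MeV / (beta c p)) z sqrt(x/X0) [1 + 0.038 ln(x/X0)]. The Python sketch below evaluates it for 250 MeV protons in aluminum; the 1 cm thickness and the radiation length are illustrative values, and this formula is only a rough cross-check, not the transport method used by GRNTRN or the Monte Carlo codes.

    ```python
    import numpy as np

    # Highland/PDG estimate of the multiple-scattering angle:
    # theta0 = (13.6 MeV / (beta*c*p)) * z * sqrt(x/X0) * (1 + 0.038*ln(x/X0))
    def highland_theta0(T_MeV, x_over_X0, m_MeV=938.272, z=1):
        E = T_MeV + m_MeV                    # total energy of the proton
        p = np.sqrt(E**2 - m_MeV**2)         # momentum in MeV/c
        beta = p / E
        return (13.6 / (beta * p)) * z * np.sqrt(x_over_X0) * (
            1 + 0.038 * np.log(x_over_X0))

    # 250 MeV protons traversing 1 cm of aluminum (X0 ~ 8.9 cm, illustrative):
    print(highland_theta0(250.0, 1.0 / 8.9) * 1e3, "mrad")   # roughly 9 mrad
    ```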

  4. HEPLIB '91: International users meeting on the support and environments of high energy physics computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnstad, H.

    The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.

  5. HEPLIB '91: International users meeting on the support and environments of high energy physics computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnstad, H.

    The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.

  6. A finite difference Hartree-Fock program for atoms and diatomic molecules

    NASA Astrophysics Data System (ADS)

    Kobus, Jacek

    2013-03-01

    The newest version of the two-dimensional finite difference Hartree-Fock program for atoms and diatomic molecules is presented. This is an updated and extended version of the program published in this journal in 1996. It can be used to obtain reference, Hartree-Fock limit values of total energies and multipole moments for a wide range of diatomic molecules and their ions in order to calibrate existing and develop new basis sets, calculate (hyper)polarizabilities (αzz, βzzz, γzzzz, Az,zz, Bzz,zz) of atoms, homonuclear and heteronuclear diatomic molecules and their ions via the finite field method, perform DFT-type calculations using LDA or B88 exchange functionals and LYP or VWN correlation ones or the self-consistent multiplicative constant method, perform one-particle calculations with (smooth) Coulomb and Kramers-Henneberger potentials and take account of finite nucleus models. The program is easy to install and compile (tarball+configure+make) and can be used to perform calculations within double- or quadruple-precision arithmetic. Catalogue identifier: ADEB_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADEB_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 2 No. of lines in distributed program, including test data, etc.: 171196 No. of bytes in distributed program, including test data, etc.: 9481802 Distribution format: tar.gz Programming language: Fortran 77, C. Computer: any 32- or 64-bit platform. Operating system: Unix/Linux. RAM: Case dependent, from a few MB to many GB Classification: 16.1. Catalogue identifier of previous version: ADEB_v1_0 Journal reference of previous version: Comput. Phys. Comm. 98(1996)346 Does the new version supersede the previous version?: Yes Nature of problem: The program finds virtually exact solutions of the Hartree-Fock and density functional theory type equations for atoms, diatomic molecules and their ions. The lowest energy eigenstates of a given irreducible representation and spin can be obtained. The program can be used to perform one-particle calculations with (smooth) Coulomb and Kramers-Henneberger potentials and also DFT-type calculations using LDA or B88 exchange functionals and LYP or VWN correlation ones or the self-consistent multiplicative constant method. Solution method: Single-particle two-dimensional numerical functions (orbitals) are used to construct an antisymmetric many-electron wave function of the restricted open-shell Hartree-Fock model. The orbitals are obtained by solving the Hartree-Fock equations as coupled two-dimensional second-order (elliptic) partial differential equations (PDEs). The Coulomb and exchange potentials are obtained as solutions of the corresponding Poisson equations. The PDEs are discretized by the eighth-order central difference stencil on a two-dimensional single grid, and the resulting large and sparse system of linear equations is solved by the (multicolour) successive overrelaxation ((MC)SOR) method. The self-consistent-field iterations are interwoven with the (MC)SOR ones and orbital energies and normalization factors are used to monitor the convergence. The accuracy of solutions depends mainly on the grid and the system under consideration, which means that within double precision arithmetic one can obtain orbitals and energies having up to 12 significant figures. If more accurate results are needed, quadruple-precision floating-point arithmetic can be used.
Reasons for new version: Additional features, many modifications and corrections, improved convergence rate, overhauled code and documentation. Summary of revisions: see ChangeLog found in tar.gz archive Restrictions: The present version of the program is restricted to 60 orbitals. The maximum grid size is determined at compilation time. Unusual features: The program uses two C routines for allocating and deallocating memory. Several BLAS (Basic Linear Algebra Subprograms) routines are emulated by the program. When possible they should be replaced by their library equivalents. Additional comments: automake and autoconf tools are required to build and compile the program; checked with f77, gfortran and ifort compilers Running time: Very case dependent - from a few CPU seconds for the H2 molecule defined on a small grid up to several weeks for the Hartree-Fock-limit calculations for 40-50 electron molecules.
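
    The (MC)SOR kernel at the center of the method is compact enough to sketch. Below is a plain Python successive-overrelaxation solver for a 2D Poisson problem with a second-order stencil and a manufactured solution; the eighth-order stencil, multicolour ordering, and the coupling to the SCF cycle of the real program are omitted, and the grid size and relaxation factor are arbitrary choices.

    ```python
    import numpy as np

    def sor_poisson(rho, h, omega=1.8, tol=1e-8, max_sweeps=5000):
        """SOR for -(u_xx + u_yy) = rho with zero Dirichlet boundaries."""
        u = np.zeros_like(rho)
        for _ in range(max_sweeps):
            change = 0.0
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    new = (1 - omega) * u[i, j] + omega * 0.25 * (
                        u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1]
                        + h * h * rho[i, j])
                    change = max(change, abs(new - u[i, j]))
                    u[i, j] = new
            if change < tol:
                break
        return u

    n = 33; h = 1.0 / (n - 1)
    x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
    rho = 2 * np.pi**2 * np.sin(np.pi * x) * np.sin(np.pi * y)  # exact u = sin*sin
    u = sor_poisson(rho, h)
    print(abs(u - np.sin(np.pi * x) * np.sin(np.pi * y)).max())  # O(h^2) error
    ```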

  7. Filtered gradient reconstruction algorithm for compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure seldom follows a dense matrix distribution, as is the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm is proposed that introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm, yielding improved image quality. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φ^T y, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better quality performance results than the unfiltered version. Simulation results highlight the relative performance gain over the existing iterative algorithms.
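
    The idea, a conventional iterative sparse-recovery solver with an extra filtering step after each gradient update, can be sketched generically. The Python fragment below adds a small smoothing filter to plain ISTA for min (1/2)||Ax - y||^2 + lam*||x||_1; the random sensing matrix, the 3-tap kernel, and all parameters are stand-ins for illustration and do not reproduce the paper's CSI-specific operator Φ.

    ```python
    import numpy as np

    def soft_threshold(x, thresh):
        return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

    def filtered_ista(A, y, lam=0.05, iters=200, kernel=(0.25, 0.5, 0.25)):
        """ISTA with a smoothing filter applied after each proximal step."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L with L = ||A||_2^2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - y)                  # gradient of the data term
            x = soft_threshold(x - step * grad, step * lam)
            x = np.convolve(x, kernel, mode="same")   # the extra filtering step
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((60, 120)) / np.sqrt(60)
    x_true = np.zeros(120); x_true[10:14] = 1.0       # piecewise-smooth sparse signal
    y = A @ x_true + 0.01 * rng.standard_normal(60)
    print(np.linalg.norm(filtered_ista(A, y) - x_true))   # small recovery error
    ```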

  8. Distributed Visualization Project

    NASA Technical Reports Server (NTRS)

    Craig, Douglas; Conroy, Michael; Kickbusch, Tracey; Mazone, Rebecca

    2016-01-01

    Distributed Visualization allows anyone, anywhere to see any simulation at any time. Development focuses on algorithms, software, data formats, data systems and processes to enable sharing simulation-based information across temporal and spatial boundaries without requiring stakeholders to possess highly-specialized and very expensive display systems. It also introduces abstraction between the native and shared data, which allows teams to share results without giving away proprietary or sensitive data. The initial implementation of this capability is the Distributed Observer Network (DON) version 3.1. DON 3.1 is available for public release in the NASA Software Store (https://software.nasa.gov/software/KSC-13775) and works with version 3.0 of the Model Process Control specification (an XML Simulation Data Representation and Communication Language) to display complex graphical information and associated Meta-Data.

  9. A new version of Scilab software package for the study of dynamical systems

    NASA Astrophysics Data System (ADS)

    Bordeianu, C. C.; Felea, D.; Beşliu, C.; Jipa, Al.; Grossu, I. V.

    2009-11-01

    This work presents a new version of a software package for the study of chaotic flows, maps and fractals [1]. The codes were written using Scilab, a software package for numerical computations providing a powerful open computing environment for engineering and scientific applications. It was found that Scilab provides various functions for ordinary differential equation solving, Fast Fourier Transform, autocorrelation, and excellent 2D and 3D graphical capabilities. The chaotic behaviors of the nonlinear dynamical systems were analyzed using phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropy. Various well-known examples are implemented, with the capability for users to insert their own ODEs or iterative equations. New version program summary Program title: Chaos v2.0 Catalogue identifier: AEAP_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAP_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1275 No. of bytes in distributed program, including test data, etc.: 7135 Distribution format: tar.gz Programming language: Scilab 5.1.1. Scilab 5.1.1 should be installed before running the program. Information about the installation can be found at http://wiki.scilab.org/howto/install/windows. Computer: PC-compatible running Scilab on MS Windows or Linux Operating system: Windows XP, Linux RAM: below 150 Megabytes Classification: 6.2 Catalogue identifier of previous version: AEAP_v1_0 Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 788 Does the new version supersede the previous version?: Yes Nature of problem: Any physical model containing linear or nonlinear ordinary differential equations (ODE). Solution method: Numerical solving of ordinary differential equations for the study of chaotic flows. The chaotic behavior of the nonlinear dynamical system is analyzed using Poincare sections, phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropies. Numerical solving of iterative equations for the study of maps and fractals. Reasons for new version: The program has been updated to use the new version 5.1.1 of Scilab with new graphical capabilities [2]. Moreover, new use cases have been added which make the handling of the program easier and more efficient. Summary of revisions: A new use case concerning coupled predator-prey models has been added [3]. Three new use cases concerning fractals (Sierpinski gasket, Barnsley's Fern and Tree) have been added [3]. The graphical user interface (GUI) of the program has been reconstructed to include the new use cases. The program has been updated to use Scilab 5.1.1 with the new graphical capabilities. Additional comments: The program package contains 12 subprograms.
interface.sce - the graphical user interface (GUI) that permits the choice of a routine, as follows: 1.sci - Lorenz dynamical system; 2.sci - Chua dynamical system; 3.sci - Rössler dynamical system; 4.sci - Henon map; 5.sci - Lyapunov exponents for the Lorenz dynamical system; 6.sci - Lyapunov exponent for the logistic map; 7.sci - Shannon entropy for the logistic map; 8.sci - Coupled predator-prey model; 1f.sci - Sierpinski gasket; 2f.sci - Barnsley's Fern; 3f.sci - Barnsley's Tree. Running time: 10 to 20 seconds for problems that do not involve Lyapunov exponents calculation; 60 to 1000 seconds for problems that involve high-order ODEs, Lyapunov exponents calculation and fractals. References: C.C. Bordeianu, C. Besliu, Al. Jipa, D. Felea, I. V. Grossu, Comput. Phys. Comm. 178 (2008) 788. S. Campbell, J.P. Chancelier, R. Nikoukhah, Modeling and Simulation in Scilab/Scicos, Springer, 2006. R.H. Landau, M.J. Paez, C.C. Bordeianu, A Survey of Computational Physics, Introductory Computational Science, Princeton University Press, 2008.
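
    Among the quantities the package computes, the Lyapunov exponent of the logistic map is the easiest to reproduce anywhere: lambda = lim (1/n) sum ln|f'(x_i)| with f(x) = r*x*(1-x). A Python version is sketched below; at r = 4 the exact value is ln 2, which makes a convenient check. The orbit length and seed are arbitrary choices.

    ```python
    import numpy as np

    def logistic_lyapunov(r, x0=0.3, n=100_000, burn=1_000):
        """Lyapunov exponent of x -> r*x*(1-x): mean of ln|f'(x)| on the orbit."""
        x = x0
        for _ in range(burn):                 # discard the transient
            x = r * x * (1 - x)
        total = 0.0
        for _ in range(n):
            total += np.log(abs(r * (1 - 2 * x)))
            x = r * x * (1 - x)
        return total / n

    print(logistic_lyapunov(4.0))   # approx ln 2 = 0.6931, fully chaotic regime
    ```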

  10. Supporting the Loewenstein occupational therapy cognitive assessment using distributed user interfaces.

    PubMed

    Tesoriero, Ricardo; Gallud Lazaro, Jose A; Altalhi, Abdulrahman H

    2017-02-01

    The objective was to improve the quantity and quality of information obtained from traditional Loewenstein Occupational Therapy Cognitive Assessment (LOTCA) Battery systems to monitor the evolution of patients' rehabilitation process as well as to compare different rehabilitation therapies. The system replaces traditional artefacts with virtual versions of them to take advantage of cutting-edge interaction technology. The system is defined as a Distributed User Interface (DUI) supported by a display ecosystem, including mobile devices as well as multi-touch surfaces. Due to the heterogeneity of the devices involved in the system, the software technology is based on a client-server architecture using the Web as the software platform. The system provides therapists with information that is not available (or is very difficult to gather) using traditional technologies (e.g., response time measurements, object tracking, and information storage and retrieval facilities). The use of DUIs allows therapists to gather information that is unavailable using traditional assessment methods, as well as to adapt the system to a patient's profile to increase the range of patients who are able to take this assessment. Implications for Rehabilitation: Using a Distributed User Interface environment to carry out LOTCAs improves the quality of the information gathered during the rehabilitation assessment. The system captures physical data regarding the patient's interaction during the assessment to improve the rehabilitation process analysis. It allows professionals to adapt the assessment procedure to create different versions according to the patient's profile, and it improves the availability of patients' profile information to therapists to adapt the assessment procedure.

  11. AFMPB: An adaptive fast multipole Poisson-Boltzmann solver for calculating electrostatics in biomolecular systems

    NASA Astrophysics Data System (ADS)

    Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, J. Andrew

    2010-06-01

    A Fortran program package is introduced for rapid evaluation of the electrostatic potentials and forces in biomolecular systems modeled by the linearized Poisson-Boltzmann equation. The numerical solver utilizes a well-conditioned boundary integral equation (BIE) formulation, a node-patch discretization scheme, a Krylov subspace iterative solver package with reverse communication protocols, and an adaptive new version of the fast multipole method in which the exponential expansions are used to diagonalize the multipole-to-local translations. The program and its full description, as well as several closely related libraries and utility tools, are available at http://lsec.cc.ac.cn/~lubz/afmpb.html and a mirror site at http://mccammon.ucsd.edu/. This paper is a brief summary of the program: the algorithms, the implementation and the usage. Program summary Program title: AFMPB: Adaptive fast multipole Poisson-Boltzmann solver Catalogue identifier: AEGB_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL 2.0 No. of lines in distributed program, including test data, etc.: 453 649 No. of bytes in distributed program, including test data, etc.: 8 764 754 Distribution format: tar.gz Programming language: Fortran Computer: Any Operating system: Any RAM: Depends on the size of the discretized biomolecular system Classification: 3 External routines: Pre- and post-processing tools are required for generating the boundary elements and for visualization. Users can use MSMS (http://www.scripps.edu/~sanner/html/msms_home.html) for pre-processing, and VMD (http://www.ks.uiuc.edu/Research/vmd/) for visualization. Sub-programs included: An iterative Krylov subspace solver package from SPARSKIT by Yousef Saad (http://www-users.cs.umn.edu/~saad/software/SPARSKIT/sparskit.html), and the fast multipole methods subroutines from FMMSuite (http://www.fastmultipole.org/). Nature of problem: Numerical solution of the linearized Poisson-Boltzmann equation that describes electrostatic interactions of molecular systems in ionic solutions. Solution method: A novel node-patch scheme is used to discretize the well-conditioned boundary integral equation formulation of the linearized Poisson-Boltzmann equation. Various Krylov subspace solvers can be subsequently applied to solve the resulting linear system, with a bounded number of iterations independent of the number of discretized unknowns. The matrix-vector multiplication at each iteration is accelerated by the adaptive new versions of fast multipole methods. The AFMPB solver requires other stand-alone pre-processing tools for boundary mesh generation, post-processing tools for data analysis and visualization, and can be conveniently coupled with different time stepping methods for dynamics simulation. Restrictions: Only three- or six-significant-digit options are provided in this version. Unusual features: Most of the codes are in Fortran77 style. Memory allocation functions from Fortran90 and above are used in a few subroutines. Additional comments: The current version of the codes is designed and written for single core/processor desktop machines. Check http://lsec.cc.ac.cn/~lubz/afmpb.html and http://mccammon.ucsd.edu/ for updates and changes. Running time: The running time varies with the number of discretized elements (N) in the system and their distributions. In most cases, it scales linearly as a function of N.
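
    The solver pattern described here, a Krylov iteration that touches the matrix only through matrix-vector products, is easy to demonstrate with SciPy. In the sketch below a dense screened-Coulomb (Yukawa) kernel over hypothetical surface points stands in for the FMM-accelerated boundary-integral operator; the point set, the screening constant, and the second-kind scaling are all invented for illustration.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    # Hypothetical "surface patches" and a screened-Coulomb (Yukawa) kernel; the
    # dense mat-vec stands in for the FMM-accelerated one in the real solver.
    rng = np.random.default_rng(2)
    pts = rng.standard_normal((400, 3))
    kappa = 0.5                                   # inverse Debye length (made up)
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    K = np.where(r > 0, np.exp(-kappa * r) / np.where(r > 0, r, 1.0), 0.0)

    # A second-kind form A = I + c*K is well conditioned, so GMRES converges in
    # a number of iterations that barely grows with the number of unknowns.
    A = LinearOperator((400, 400), matvec=lambda q: q + 0.05 * (K @ q))
    q, info = gmres(A, np.ones(400))
    print(info, np.linalg.norm(q + 0.05 * (K @ q) - 1.0))   # info == 0 on success
    ```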

  12. Static Chemistry in Disks or Clouds

    NASA Astrophysics Data System (ADS)

    Semenov, D.; Wiebe, D.

    2006-11-01

    This FORTRAN77 code can be used to model static, time-dependent chemistry in the ISM and circumstellar disks. The current version is based on the OSU'06 gas-grain astrochemical network with all updates to the reaction rates, and includes surface chemistry from Hasegawa & Herbst (1993) and Hasegawa, Herbst, and Leung (1992). Surface chemistry can be modeled either with the standard rate equation approach or the modified rate equation approach (useful in disks). Gas-grain interactions include sticking of neutral molecules to grains, dissociative recombination of ions on grains, as well as thermal, UV, X-ray, and CRP-induced desorption of frozen species. An advanced X-ray chemistry and three grain sizes with a power-law size distribution are also included. A deuterium extension to this chemical model is available.
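
    The rate-equation approach reduces the chemistry to a stiff ODE system dn_i/dt = (formation) - (destruction). The Python sketch below integrates a deliberately tiny toy network, H2 formation balanced against dissociation, with SciPy; both rate coefficients and the initial density are invented order-of-magnitude placeholders, not values from the OSU'06 network.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy network (hypothetical rates): H + H -> H2 at rate k_form, and
    # H2 -> 2H by CRP/UV dissociation at rate zeta.
    k_form, zeta = 1e-17, 1e-11          # cm^3 s^-1 and s^-1, illustrative only

    def rhs(t, n):
        nH, nH2 = n
        form = k_form * nH * nH
        diss = zeta * nH2
        return [-2 * form + 2 * diss, form - diss]

    sol = solve_ivp(rhs, [0, 1e15], [1e4, 0.0], method="LSODA", rtol=1e-8)
    print(sol.y[:, -1])   # approach to formation/destruction balance
    ```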

  13. Multi-objective optimal dispatch of distributed energy resources

    NASA Astrophysics Data System (ADS)

    Longe, Ayomide

    This thesis is composed of two papers that investigate optimal dispatch for distributed energy resources. In the first paper, an economic dispatch problem for a community microgrid is studied. In this microgrid, each agent pursues an economic dispatch for its personal resources. In addition, each agent is capable of trading electricity with other agents through a local energy market. In this paper, a simple market structure is introduced as a framework for energy trades in a small community microgrid such as the Solar Village. It was found that both sellers and buyers benefited by participating in this market. In the second paper, Semidefinite Programming (SDP) for convex relaxation of power flow equations is used for optimal active and reactive dispatch for Distributed Energy Resources (DER). Various objective functions, including voltage regulation, reduced transmission line power losses, and minimized reactive power charges for a microgrid, are introduced. Combinations of these goals are attained by solving a multiobjective optimization for the proposed optimal reactive power dispatch (ORPD) problem. Also, both centralized and distributed versions of this optimal dispatch are investigated. It was found that SDP made the optimal dispatch faster and the distributed solution allowed for scalability.
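
    The economic dispatch in the first paper can be illustrated with a toy version: minimize total quadratic generation cost subject to meeting demand, whose optimum equalizes marginal costs across resources. The Python sketch below solves a two-DER instance with SciPy; the cost coefficients, bounds, and demand are hypothetical numbers, and the thesis's market mechanism and SDP relaxation are not modeled here.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy economic dispatch: two DERs with cost_i(p) = a_i*p^2 + b_i*p must
    # jointly meet an 8 kW demand; all coefficients are made up.
    a = np.array([0.10, 0.25]); b = np.array([2.0, 1.0])
    demand = 8.0

    cost = lambda p: np.sum(a * p**2 + b * p)
    res = minimize(cost, x0=np.full(2, demand / 2),
                   constraints=[{"type": "eq", "fun": lambda p: p.sum() - demand}],
                   bounds=[(0, 10), (0, 10)])
    print(res.x, cost(res.x))   # optimum equalizes marginal costs 2*a_i*p_i + b_i
    ```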

  14. SHABERTH - ANALYSIS OF A SHAFT BEARING SYSTEM (CRAY VERSION)

    NASA Technical Reports Server (NTRS)

    Coe, H. H.

    1994-01-01

    The SHABERTH computer program was developed to predict operating characteristics of bearings in a multibearing load support system. Lubricated and non-lubricated bearings can be modeled. SHABERTH calculates the loads, torques, temperatures, and fatigue life for ball and/or roller bearings on a single shaft. The program also allows for an analysis of the system reaction to the termination of lubricant supply to the bearings and other lubricated mechanical elements. SHABERTH has proven to be a valuable tool in the design and analysis of shaft bearing systems. The SHABERTH program is structured with four nested calculation schemes. The thermal scheme performs steady state and transient temperature calculations which predict system temperatures for a given operating state. The bearing dimensional equilibrium scheme uses the bearing temperatures, predicted by the temperature mapping subprograms, and the rolling element raceway load distribution, predicted by the bearing subprogram, to calculate bearing diametral clearance for a given operating state. The shaft-bearing system load equilibrium scheme calculates bearing inner ring positions relative to the respective outer rings such that the external loading applied to the shaft is brought into equilibrium by the rolling element loads which develop at each bearing inner ring for a given operating state. The bearing rolling element and cage load equilibrium scheme calculates the rolling element and cage equilibrium positions and rotational speeds based on the relative inner-outer ring positions, inertia effects, and friction conditions. The ball bearing subprograms in the current SHABERTH program have several model enhancements over similar programs. These enhancements include an elastohydrodynamic (EHD) film thickness model that accounts for thermal heating in the contact area and lubricant film starvation; a new model for traction combined with an asperity load sharing model; a model for the hydrodynamic rolling and shear forces in the inlet zone of lubricated contacts, which accounts for the degree of lubricant film starvation; modeling normal and friction forces between a ball and a cage pocket, which account for the transition between the hydrodynamic and elastohydrodynamic regimes of lubrication; and a model of the effect on fatigue life of the ratio of the EHD plateau film thickness to the composite surface roughness. SHABERTH is intended to be as general as possible. The models in SHABERTH allow for the complete mathematical simulation of real physical systems. Systems are limited to a maximum of five bearings supporting the shaft, a maximum of thirty rolling elements per bearing, and a maximum of one hundred temperature nodes. The SHABERTH program structure is modular and has been designed to permit refinement and replacement of various component models as the need and opportunities develop. A preprocessor is included in the IBM PC version of SHABERTH to provide a user friendly means of developing SHABERTH models and executing the resulting code. The preprocessor allows the user to create and modify data files with minimal effort and a reduced chance for errors. Data is utilized as it is entered; the preprocessor then decides what additional data is required to complete the model. Only this required information is requested. The preprocessor can accommodate data input for any SHABERTH compatible shaft bearing system model. The system may include ball bearings, roller bearings, and/or tapered roller bearings. 
SHABERTH is written in FORTRAN 77, and two machine versions are available from COSMIC. The CRAY version (LEW-14860) has a RAM requirement of 176K of 64-bit words. The IBM PC version (MFS-28818) is written for IBM PC series and compatible computers running MS-DOS, and includes a sample MS-DOS executable. For execution, the PC version requires at least 1Mb of RAM and an 80386 or 486 processor machine with an 80x87 math co-processor. The standard distribution medium for the IBM PC version is a set of two 5.25 inch 360K MS-DOS format diskettes.
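
    The nesting of the four calculation schemes can be made concrete with a small Python sketch. Every "physics" function below is an invented toy placeholder, not SHABERTH's actual bearing, friction, or thermal models; only the fixed-point structure mirrors the description above.

      def clearance_from_temp(temp):
          # dimensional equilibrium (toy): clearance shrinks as the bearing heats up
          return max(1.0e-5 - 1.0e-8 * (temp - 20.0), 1.0e-6)

      def element_load(shaft_load, clearance):
          # load equilibrium (toy): smaller clearance -> stiffer load sharing
          return shaft_load / (1.0 + 1.0e4 * clearance)

      def friction_heat(load, speed):
          # rolling-element/cage equilibrium by-product (toy): friction power
          return 1.0e-4 * load * speed

      def steady_temp(heat, t_ambient):
          # thermal scheme (toy): lumped steady-state node temperature
          return t_ambient + 0.05 * heat

      def solve(shaft_load=5000.0, speed=3000.0, t_ambient=20.0,
                tol=1e-6, max_iter=200):
          temp = t_ambient
          for _ in range(max_iter):
              c = clearance_from_temp(temp)
              q = friction_heat(element_load(shaft_load, c), speed)
              new_temp = steady_temp(q, t_ambient)
              if abs(new_temp - temp) < tol:   # converged operating state
                  return {"temperature": new_temp, "clearance": c,
                          "element_load": element_load(shaft_load, c)}
              temp = new_temp
          raise RuntimeError("nested schemes did not converge")

      print(solve())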

  15. Locally-Adaptive, Spatially-Explicit Projection of U.S. Population for 2030 and 2050

    DOE PAGES

    McKee, Jacob J.; Rose, Amy N.; Bright, Eddie A.; ...

    2015-02-03

Localized adverse events, including natural hazards, epidemiological events, and human conflict, underscore the criticality of quantifying and mapping current population. Moreover, knowing the spatial distribution of future population allows for increased preparation in the event of an emergency. Building on the spatial interpolation technique previously developed for high resolution population distribution data (LandScan Global and LandScan USA), we have constructed an empirically-informed spatial distribution of the projected population of the contiguous U.S. for 2030 and 2050. Whereas most current large-scale, spatially explicit population projections typically rely on a population gravity model to determine areas of future growth, our projection model departs from these by accounting for multiple components that affect population distribution. Modelled variables, which included land cover, slope, distances to larger cities, and a moving average of current population, were locally adaptive and geographically varying. The resulting weighted surface was used to determine which areas had the greatest likelihood for future population change. Population projections of county level numbers were developed using a modified version of the U.S. Census's projection methodology with the U.S. Census's official projection as the benchmark. Applications of our model include, but are not limited to, suitability modelling, service area planning for governmental agencies, consequence assessment, mitigation planning and implementation, and assessment of spatially vulnerable populations.
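
    The weighted-surface allocation the abstract describes can be illustrated with a schematic Python sketch. The covariates, weights, and county growth figure below are invented placeholders, not the paper's calibrated, locally adaptive model.

      import numpy as np

      rng = np.random.default_rng(0)
      shape = (50, 50)                       # one county's grid cells (toy)
      developable = rng.random(shape) > 0.2  # land-cover mask
      slope = rng.random(shape)              # normalized slope, 0 flat .. 1 steep
      dist_to_city = rng.random(shape)       # normalized distance to larger city
      pop_now = rng.poisson(5, shape).astype(float)

      # moving average of current population (3x3 neighborhood mean)
      k = np.ones((3, 3)) / 9.0
      pad = np.pad(pop_now, 1, mode="edge")
      pop_smooth = sum(pad[i:i + shape[0], j:j + shape[1]] * k[i, j]
                       for i in range(3) for j in range(3))

      # weighted surface: higher where flat, near cities, already settled
      weight = developable * (1 - slope) * (1 - dist_to_city) * (1 + pop_smooth)
      weight /= weight.sum()

      county_growth_2030 = 12_000            # from a county-level projection (toy)
      pop_2030 = pop_now + county_growth_2030 * weight
      print(pop_2030.sum() - pop_now.sum())  # equals county_growth_2030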

  16. Locally-Adaptive, Spatially-Explicit Projection of U.S. Population for 2030 and 2050

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKee, Jacob J.; Rose, Amy N.; Bright, Eddie A.

Localized adverse events, including natural hazards, epidemiological events, and human conflict, underscore the criticality of quantifying and mapping current population. Moreover, knowing the spatial distribution of future population allows for increased preparation in the event of an emergency. Building on the spatial interpolation technique previously developed for high resolution population distribution data (LandScan Global and LandScan USA), we have constructed an empirically-informed spatial distribution of the projected population of the contiguous U.S. for 2030 and 2050. Whereas most current large-scale, spatially explicit population projections typically rely on a population gravity model to determine areas of future growth, our projection model departs from these by accounting for multiple components that affect population distribution. Modelled variables, which included land cover, slope, distances to larger cities, and a moving average of current population, were locally adaptive and geographically varying. The resulting weighted surface was used to determine which areas had the greatest likelihood for future population change. Population projections of county level numbers were developed using a modified version of the U.S. Census's projection methodology with the U.S. Census's official projection as the benchmark. Applications of our model include, but are not limited to, suitability modelling, service area planning for governmental agencies, consequence assessment, mitigation planning and implementation, and assessment of spatially vulnerable populations.

  17. Single Source Evaluation for the Hartford Working Group and Premcor Distribution Center

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  18. STS-97 (4A) EVA training in NBL pool

    NASA Image and Video Library

    2000-10-23

JSC2000-07082 (October 2000)--- Wearing a training version of the shuttle extravehicular mobility unit (EMU) space suit, astronaut Joseph R. Tanner, STS-97 mission specialist, simulates a space walk underwater in the giant Neutral Buoyancy Laboratory (NBL). Tanner was there, along with astronaut Carlos I. Noriega, to rehearse one of three scheduled space walks to make additions to the International Space Station (ISS). In early December, the five-man crew will deliver the P6 Integrated Truss Segment, which includes the first U.S. solar arrays and a power distribution system.

  19. Celerity Energy, Inc., Networked Distributed Resource Project City of Albuquerque and Bernalillo County, New Mexico

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  20. FEWZ 2.0: A code for hadronic Z production at next-to-next-to-leading order

    NASA Astrophysics Data System (ADS)

    Gavin, Ryan; Li, Ye; Petriello, Frank; Quackenbush, Seth

    2011-11-01

We introduce an improved version of the simulation code FEWZ (Fully Exclusive W and Z Production) for hadron collider production of lepton pairs through the Drell-Yan process at next-to-next-to-leading order (NNLO) in the strong coupling constant. The program is fully differential in the phase space of leptons and additional hadronic radiation. The new version offers users significantly more options for customization. FEWZ now bins multiple, user-selectable histograms during a single run, and produces parton distribution function (PDF) errors automatically. It also features a significantly improved integration routine, and can take advantage of multiple processor cores locally or on the Condor distributed computing system. We illustrate the new features of FEWZ by presenting numerous phenomenological results for LHC physics. We compare NNLO QCD with initial ATLAS and CMS results, and discuss in detail the effects of detector acceptance on the measurement of angular quantities associated with Z-boson production. We address the issue of technical precision in the presence of severe phase-space cuts.
    Program summary
    Program title: FEWZ
    Catalogue identifier: AEJP_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJP_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 6 280 771
    No. of bytes in distributed program, including test data, etc.: 173 027 645
    Distribution format: tar.gz
    Programming language: Fortran 77, C++, Python
    Computer: Mac, PC
    Operating system: Mac OSX, Unix/Linux
    Has the code been vectorized or parallelized?: Yes. User-selectable, 1 to 2^19
    RAM: 200 Mbytes for common parton distribution functions
    Classification: 11.1
    External routines: CUBA numerical integration library, numerous parton distribution sets (see text); these are provided with the code.
    Nature of problem: Determination of the Drell-Yan Z/photon production cross section and decay into leptons, with kinematic distributions of leptons and jets including full spin correlations, at next-to-next-to-leading order in the strong coupling constant.
    Solution method: Virtual loop integrals are decomposed into master integrals using automated techniques. Singularities are extracted from real radiation terms via sector decomposition, which separates singularities and maps onto suitable phase space variables. The result is convoluted with parton distribution functions. Each piece is numerically integrated over phase space, which allows arbitrary cuts on the observed particles. Each sample point may be binned during numerical integration, providing histograms, and reweighted by parton distribution function error eigenvectors, which provides PDF errors.
    Restrictions: Output does not correspond to unweighted events, and cannot be interfaced with a shower Monte Carlo.
    Additional comments: The distribution file for this program is over 170 Mbytes and therefore is not delivered directly when download or E-mail is requested. Instead an html file giving details of how the program can be obtained is sent.
    Running time: One day for total cross sections with 0.1% integration errors assuming typical cuts, up to 1 week for smooth kinematic distributions with sub-percent integration errors for each bin.
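
    The single-run binning-and-reweighting strategy described under "Solution method" can be shown schematically in Python. The "matrix element" and the PDF error weights below are invented toys, not FEWZ's NNLO machinery or its PDF interface.

      import numpy as np

      rng = np.random.default_rng(1)
      n_events, n_members = 100_000, 5
      bins = np.linspace(0.0, 1.0, 21)             # histogram for a toy observable

      hist = np.zeros((n_members, len(bins) - 1))  # one histogram per PDF member
      x = rng.random(n_events)                     # phase-space points in [0, 1)
      w0 = 3.0 * x**2                              # toy differential weight
      # toy relative weights for PDF error members (member 0 = central value)
      rel = 1.0 + 0.02 * np.arange(n_members)[:, None] * (x - 0.5)
      idx = np.digitize(x, bins) - 1
      for m in range(n_members):
          # every sampled point is binned and reweighted during integration
          np.add.at(hist[m], idx, w0 * rel[m] / n_events)

      central = hist[0]
      pdf_err = np.sqrt(((hist[1:] - central) ** 2).sum(axis=0))  # toy error formula
      print(round(central.sum(), 3))               # ~1.0, the integral of 3x^2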

  1. Integrated Farm System Model Version 4.2 and Dairy Gas Emissions Model Version 3.2 Software development and distribution

    USDA-ARS?s Scientific Manuscript database

    Emissions of ammonia (NH3) and nitrous oxide (N2O) vary among animal facilities due to differences in housing structure and associated manure management. Bedded pack barns are structures with a roof and sidewalls resulting in a lower air velocity and evaporation potential inside the structure. But s...

  2. Using Screening Level Environmental Life Cycle Assessment to Aid Decision Making: A Case Study of a College Annual Report

    ERIC Educational Resources Information Center

    Ingwersen, Wesley W.; Curran, Mary Ann; Gonzalez, Michael A.; Hawkins, Troy R.

    2012-01-01

    Purpose: The purpose of this study is to compare the life cycle environmental impacts of the University of Cincinnati College of Engineering and Applied Sciences' current printed annual report to a version distributed via the internet. Design/methodology/approach: Life cycle environmental impacts of both versions of the report are modeled using…

  3. The Autism-Spectrum Quotient--Italian Version: A Cross-Cultural Confirmation of the Broader Autism Phenotype

    ERIC Educational Resources Information Center

    Ruta, Liliana; Mazzone, Domenico; Mazzone, Luigi; Wheelwright, Sally; Baron-Cohen, Simon

    2012-01-01

The Autism Spectrum Quotient (AQ) has been used to define the "broader" (BAP), "medium" (MAP) and "narrow" autism phenotypes (NAP). We used a new Italian version of the AQ to test whether differences in AQ scores and in the distribution of BAP, MAP and NAP between autism parents (n = 245) and control parents (n = 300) were…

  4. [Validation of an instrument to measure health-related quality of life in Chilean children and adolescents].

    PubMed

Sepúlveda P, Rodrigo; Molina G, Temístocles; Molina C, Ramiro; Martínez N, Vania; González A, Electra; George L, Myriam; Montaño E, Rosa; Hidalgo-Rasmussen, Carlos

    2013-10-01

KIDSCREEN-52 is an instrument to assess health-related quality of life in children and adolescents. The aim was to culturally adapt and validate the KIDSCREEN-52 questionnaire in Chilean children and adolescents. Two independent translations from English into Spanish were reconciled and back-translated into English. The reconciled version was tested in cognitive interviews with adolescents of different socioeconomic levels. The final version was validated in 7,910 school-attending adolescents. In the cross-cultural adaptation, 50 of the 52 items presented low or medium levels of difficulty and a high semantic equivalence. Distribution according to gender, grades and types of schools was similar to that of the sample. Single ages were not affected by sex distribution. The confirmatory factor analysis indices were χ²(1229) = 20996.7, Root Mean Square Error of Approximation = .045 and Comparative Fit Index = .96. The instrument had a Cronbach's alpha of .93. The domains had alpha scores over 0.70, with the exception of the "Self-perception" domain, with a score of 0.62. The Chilean version of KIDSCREEN-52 is culturally appropriate and semantically equivalent to its English and Spanish (from Spain) versions. Its reliability and validity were adequate.
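
    Reliability figures like the Cronbach's alpha of .93 quoted above come from a simple formula; here is a minimal Python sketch using simulated item scores (not KIDSCREEN data).

      import numpy as np

      def cronbach_alpha(items: np.ndarray) -> float:
          # items: (n_respondents, k_items) matrix of scores
          # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      rng = np.random.default_rng(0)
      latent = rng.normal(size=(500, 1))                # common trait
      data = latent + 0.8 * rng.normal(size=(500, 10))  # 10 noisy items
      print(round(cronbach_alpha(data), 2))             # high alpha expected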

  5. Knoto-ID: a tool to study the entanglement of open protein chains using the concept of knotoids.

    PubMed

    Dorier, Julien; Goundaroulis, Dimos; Benedetti, Fabrizio; Stasiak, Andrzej

    2018-05-02

The backbone of most proteins forms an open curve. To study their entanglement, a common strategy consists in searching for the presence of knots in their backbones using topological invariants. However, this approach requires closing the curve into a loop, which alters the geometry of the curve. Knoto-ID allows evaluating the entanglement of open curves without the need to close them, using the recent concept of knotoids, which is a generalization of classical knot theory to open curves. Knoto-ID can analyse the global topology of the full chain as well as the local topology, by exhaustively studying all subchains or only determining the knotted core. Knoto-ID makes it possible to localize topologically non-trivial protein folds that are not detected by informatics tools searching for knotted protein folds. Knoto-ID is written in C++ and includes R (www.R-project.org) scripts to generate plots of projection maps, fingerprint matrices and disk matrices. Knoto-ID is distributed under the GNU General Public License (GPL), version 2 or any later version, and is available at https://github.com/sib-swiss/Knoto-ID. A binary distribution for Mac OS X, Linux and Windows with a detailed user guide and examples can be obtained from https://www.vital-it.ch/software/Knoto-ID. Contact: julien.dorier@sib.swiss.

  6. The Case For Prediction-based Best-effort Real-time Systems.

    DTIC Science & Technology

    1999-01-01

The Case for Prediction-based Best-effort Real-time Systems. Peter A. Dinda, Loukas Kallivokas; Carnegie Mellon University, Pittsburgh, PA 15213. Distribution Statement A: approved for public release, distribution unlimited. A version of this paper appeared in the Seventh Workshop on Parallel and Distributed Real-Time Systems.

  7. A demographic study of the exponential distribution applied to uneven-aged forests

    Treesearch

    Jeffrey H. Gove

    2016-01-01

    A demographic approach based on a size-structured version of the McKendrick-Von Foerster equation is used to demonstrate a theoretical link between the population size distribution and the underlying vital rates (recruitment, mortality and diameter growth) for the population of individuals whose diameter distribution is negative exponential. This model supports the...
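
    The theoretical link the abstract describes can be compressed into the standard steady-state argument; the following is a textbook sketch, not the paper's full demographic treatment.

      % Steady-state McKendrick-von Foerster balance with diameter growth g(D)
      % and mortality m(D):
      \[
        \frac{\partial}{\partial D}\bigl[g(D)\,n(D)\bigr] = -\,m(D)\,n(D),
      \]
      % which integrates to
      \[
        n(D) = \frac{R}{g(D)}\,\exp\!\left(-\int_{D_0}^{D}\frac{m(x)}{g(x)}\,dx\right),
      \]
      % with recruitment R entering at the minimum diameter D_0. Constant vital
      % rates m and g give the negative exponential n(D) = (R/g)e^{-(m/g)(D-D_0)}
      % referred to in the abstract.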

  8. Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.

    ERIC Educational Resources Information Center

    Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas

    2002-01-01

    Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…
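
    The comparison method itself is easy to reproduce in miniature: simulate the statistic under the model, then compare its empirical quantiles with standard normal deviates. For brevity this Python sketch uses the classical standardized log-likelihood statistic lz in place of U3; the item parameters are invented.

      import numpy as np

      rng = np.random.default_rng(2)
      b = np.linspace(-2, 2, 40)              # known Rasch item difficulties (toy)
      theta = 0.0                             # fixed examinee ability
      p = 1.0 / (1.0 + np.exp(b - theta))     # correct-response probabilities

      # exact mean and variance of the response log-likelihood under the model
      e = (p * np.log(p) + (1 - p) * np.log(1 - p)).sum()
      v = (p * (1 - p) * np.log(p / (1 - p)) ** 2).sum()

      u = rng.random((20_000, p.size)) < p    # simulated 0/1 response matrix
      l0 = (u * np.log(p) + (~u) * np.log(1 - p)).sum(axis=1)
      lz = (l0 - e) / np.sqrt(v)              # standardized person-fit statistic

      # compare simulated quantiles with standard normal deviates
      for q, z in [(0.05, -1.645), (0.50, 0.0), (0.95, 1.645)]:
          print(q, round(float(np.quantile(lz, q)), 2), "vs normal", z)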

  9. Representations of the Stratospheric Polar Vortices in Versions 1 and 2 of the Goddard Earth Observing System Chemistry-Climate Model (GEOS CCM)

    NASA Technical Reports Server (NTRS)

    Pawson, S.; Stolarski, R.S.; Nielsen, J.E.; Perlwitz, J.; Oman, L.; Waugh, D.

    2009-01-01

This study will document the behavior of the polar vortices in two versions of the GEOS CCM. Both versions of the model include the same stratospheric chemistry; they differ in the underlying circulation model. Version 1 of the GEOS CCM is based on the Goddard Earth Observing System, Version 4, general circulation model, which includes the finite-volume (Lin-Rood) dynamical core and physical parameterizations from the Community Climate Model, Version 3. GEOS CCM Version 2 is based on the GEOS-5 GCM, which includes a different tropospheric physics package. Baseline simulations of both models, performed at two-degree spatial resolution, show some improvements in Version 2, but also some degradation. In the Antarctic, both models show an over-persistent stratospheric polar vortex with late breakdown, but the year-to-year variations that are overestimated in Version 1 are more realistic in Version 2. The implications of this for the interactions with tropospheric climate, the Southern Annular Mode, will be discussed. In the Arctic, both model versions show dominant dynamically forced variability, but Version 2 has a persistent warm bias in the low stratosphere and there are seasonal differences in the simulations. These differences will be quantified in terms of climate change and ozone loss. Impacts of model resolution, using simulations at one-degree and half-degree resolution, and changes in physical parameterizations (especially the gravity wave drag) will be discussed.

  10. C++QEDv2 Milestone 10: A C++/Python application-programming framework for simulating open quantum dynamics

    NASA Astrophysics Data System (ADS)

    Sandner, Raimar; Vukics, András

    2014-09-01

The v2 Milestone 10 release of C++QED is primarily a feature release, which also corrects some problems of the previous release, especially as regards the build system. The adoption of C++11 features has led to many simplifications in the codebase. A full doxygen-based API manual [1] is now provided together with updated user guides. A largely automated, versatile new testsuite directed both towards computational and physics features allows for quickly spotting arising errors. The states of trajectories are now savable and recoverable with full binary precision, allowing for trajectory continuation regardless of evolution method (single/ensemble Monte Carlo wave-function or Master equation trajectory). As the main new feature, the framework now presents Python bindings to the highest-level programming interface, so that actual simulations for given composite quantum systems can now be performed from Python.
    Catalogue identifier: AELU_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELU_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: yes
    No. of lines in distributed program, including test data, etc.: 492 422
    No. of bytes in distributed program, including test data, etc.: 8 070 987
    Distribution format: tar.gz
    Programming language: C++/Python
    Computer: i386-i686, x86_64
    Operating system: In principle cross-platform; as yet tested only on UNIX-like systems (including Mac OS X).
    RAM: The framework itself takes about 60 MB, which is fully shared. The additional memory taken by the program which defines the actual physical system (script) is typically less than 1 MB. The memory storing the actual data scales with the system dimension for state-vector manipulations, and with the square of the dimension for density-operator manipulations. This might easily be GBs, and often the memory of the machine limits the size of the simulated system.
    Classification: 4.3, 4.13, 6.2
    External routines: Boost C++ libraries, GNU Scientific Library, Blitz++, FLENS, NumPy, SciPy
    Catalogue identifier of previous version: AELU_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 1381
    Does the new version supersede the previous version?: Yes
    Nature of problem: Definition of (open) composite quantum systems out of elementary building blocks [2,3]. Manipulation of such systems, with emphasis on dynamical simulations such as Master-equation evolution [4] and Monte Carlo wave-function simulation [5].
    Solution method: Master equation, Monte Carlo wave-function method.
    Reasons for new version: The new version is mainly a feature release, but it does correct some problems of the previous version, especially as regards the build system.
    Summary of revisions: An example is given of a typical Python script implementing the ring-cavity system presented in Sec. 3.3 of Ref. [2].
    Restrictions: Total dimensionality of the system. Master equation: a few thousands. Monte Carlo wave-function trajectory: several millions.
    Unusual features: Because of the heavy use of compile-time algorithms, compilation of programs written in the framework may take a long time and much memory (up to several GBs).
    Additional comments: The framework is not a program, but provides and implements an application-programming interface for developing simulations in the indicated problem domain. Several C++11 features are used, which limits the range of supported compilers (g++ 4.7, clang++ 3.1). Documentation: http://cppqed.sourceforge.net/
    Running time: Depending on the magnitude of the problem, can vary from a few seconds to weeks.
    References:
    [1] Entry point: http://cppqed.sf.net
    [2] A. Vukics, C++QEDv2: The multi-array concept and compile-time algorithms in the definition of composite quantum systems, Comput. Phys. Comm. 183 (2012) 1381.
    [3] A. Vukics, H. Ritsch, C++QED: an object-oriented framework for wave-function simulations of cavity QED systems, Eur. Phys. J. D 44 (2007) 585.
    [4] H. J. Carmichael, An Open Systems Approach to Quantum Optics, Springer, 1993.
    [5] J. Dalibard, Y. Castin, K. Molmer, Wave-function approach to dissipative processes in quantum optics, Phys. Rev. Lett. 68 (1992) 580.
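
    The Monte Carlo wave-function method named under "Solution method" can be sketched in Python for the simplest open system, a decaying two-level atom. This follows the textbook algorithm of Ref. [5]; it does not use or reproduce C++QED's actual API.

      import numpy as np

      rng = np.random.default_rng(3)
      gamma, dt, steps, ntraj = 1.0, 0.002, 2000, 200
      C = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)  # decay operator
      H = np.zeros((2, 2), dtype=complex)           # no coherent drive in this toy
      H_eff = H - 0.5j * (C.conj().T @ C)           # non-Hermitian effective Hamiltonian

      excited = np.zeros(steps)
      for _ in range(ntraj):
          psi = np.array([0.0, 1.0], dtype=complex)  # start in the excited state
          for t in range(steps):
              excited[t] += abs(psi[1]) ** 2 / ntraj
              jump_prob = dt * np.real(psi.conj() @ (C.conj().T @ C) @ psi)
              if rng.random() < jump_prob:
                  psi = C @ psi                      # quantum jump: emit a photon
              else:
                  psi = psi - 1j * dt * (H_eff @ psi)  # no-jump evolution
              psi /= np.linalg.norm(psi)

      # the trajectory average should reproduce exp(-gamma * t)
      print(round(excited[-1], 3), "vs", round(np.exp(-gamma * dt * steps), 3))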

  11. Introducing PROFESS 2.0: A parallelized, fully linear scaling program for orbital-free density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Hung, Linda; Huang, Chen; Shin, Ilgyou; Ho, Gregory S.; Lignères, Vincent L.; Carter, Emily A.

    2010-12-01

Orbital-free density functional theory (OFDFT) is a first principles quantum mechanics method to find the ground-state energy of a system by variationally minimizing with respect to the electron density. No orbitals are used in the evaluation of the kinetic energy (unlike Kohn-Sham DFT), and the method scales nearly linearly with the size of the system. The PRinceton Orbital-Free Electronic Structure Software (PROFESS) uses OFDFT to model materials from the atomic scale to the mesoscale. This new version of PROFESS allows the study of larger systems with two significant changes: PROFESS is now parallelized, and the ion-electron and ion-ion terms scale quasilinearly, instead of quadratically as in PROFESS v1 (L. Hung and E.A. Carter, Chem. Phys. Lett. 475 (2009) 163). At the start of a run, PROFESS reads the various input files that describe the geometry of the system (ion positions and cell dimensions), the type of elements (defined by electron-ion pseudopotentials), the actions you want it to perform (minimize with respect to electron density and/or ion positions and/or cell lattice vectors), and the various options for the computation (such as which functionals you want it to use). Based on these inputs, PROFESS sets up a computation and performs the appropriate optimizations. Energies, forces, stresses, material geometries, and electron density configurations are some of the values that can be output throughout the optimization.
    New version program summary
    Program title: PROFESS
    Catalogue identifier: AEBN_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBN_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 68 721
    No. of bytes in distributed program, including test data, etc.: 1 708 547
    Distribution format: tar.gz
    Programming language: Fortran 90
    Computer: Intel with ifort; AMD Opteron with pathf90
    Operating system: Linux
    Has the code been vectorized or parallelized?: Yes. Parallelization is implemented through domain decomposition using MPI.
    RAM: Problem dependent, but 2 GB is sufficient for up to 10,000 ions.
    Classification: 7.3
    External routines: FFTW 2.1.5 (http://www.fftw.org)
    Catalogue identifier of previous version: AEBN_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 179 (2008) 839
    Does the new version supersede the previous version?: Yes
    Nature of problem: Given a set of coordinates describing the initial ion positions under periodic boundary conditions, recovers the ground state energy, electron density, ion positions, and cell lattice vectors predicted by orbital-free density functional theory. The computation of all terms is effectively linear scaling. Parallelization is implemented through domain decomposition, and up to ~10,000 ions may be included in the calculation on just a single processor, limited by RAM. For example, when optimizing the geometry of ~50,000 aluminum ions (plus vacuum) on 48 cores, a single iteration of conjugate gradient ion geometry optimization takes ~40 minutes wall time. However, each CG geometry step requires two or more electron density optimizations, so step times will vary.
    Solution method: Computes energies as described in text; minimizes this energy with respect to the electron density, ion positions, and cell lattice vectors.
    Reasons for new version: To allow much larger systems to be simulated using PROFESS.
    Restrictions: PROFESS cannot use nonlocal (such as ultrasoft) pseudopotentials. A variety of local pseudopotential files are available at the Carter group website (http://www.princeton.edu/mae/people/faculty/carter/homepage/research/localpseudopotentials/). Also, due to the current state of the kinetic energy functionals, PROFESS is only reliable for main group metals and some properties of semiconductors.
    Running time: Problem dependent: the test example provided with the code takes less than a second to run. Timing results for large-scale problems are given in the PROFESS paper and Ref. [1].
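
    As a caricature of what "variationally minimizing with respect to the electron density" means in practice, here is a self-contained Python sketch that relaxes a 1D toy density under a Thomas-Fermi-plus-external-potential energy functional. The functional, grid, and step size are invented for illustration and are unrelated to the functionals PROFESS ships.

      import numpy as np

      x = np.linspace(-4.0, 4.0, 401)
      dx = x[1] - x[0]
      v = 0.5 * x**2                    # external potential (toy)
      c_tf = 2.871                      # 3D Thomas-Fermi constant, reused as a toy
      n_elec = 2.0                      # fixed electron number

      n = np.full(x.size, n_elec / (x.size * dx))        # uniform starting density
      for _ in range(5000):
          dEdn = (5.0 / 3.0) * c_tf * n ** (2.0 / 3.0) + v  # functional derivative
          mu = (dEdn * n).sum() / n.sum()                   # chemical-potential estimate
          n = np.clip(n - 0.01 * (dEdn - mu), 0.0, None)    # crude projected descent
          n *= n_elec / (n.sum() * dx)                      # re-impose the constraint

      energy = (c_tf * n ** (5.0 / 3.0) + v * n).sum() * dx
      print("toy OFDFT energy:", round(energy, 4))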

  12. GeoSciML v3.0 - a significant upgrade of the CGI-IUGS geoscience data model

    NASA Astrophysics Data System (ADS)

    Raymond, O.; Duclaux, G.; Boisvert, E.; Cipolloni, C.; Cox, S.; Laxton, J.; Letourneau, F.; Richard, S.; Ritchie, A.; Sen, M.; Serrano, J.-J.; Simons, B.; Vuollo, J.

    2012-04-01

GeoSciML version 3.0 (http://www.geosciml.org), released in late 2011, is the latest version of the CGI-IUGS* Interoperability Working Group geoscience data interchange standard. The new version is a significant upgrade and refactoring of GeoSciML v2, which was released in 2008. GeoSciML v3 has already been adopted by several major international interoperability initiatives, including OneGeology, the EU INSPIRE program, and the US Geoscience Information Network, as their standard data exchange format for geoscience data. GeoSciML v3 makes use of recently upgraded versions of several Open Geospatial Consortium (OGC) and ISO data transfer standards, including GML v3.2, SWE Common v2.0, and Observations and Measurements v2 (ISO 19156). The GeoSciML v3 data model has been refactored from a single large application schema with many packages into a number of smaller, but related, application schema modules with individual namespaces. This refactoring allows the use and future development of modules of GeoSciML (e.g. GeologicUnit, GeologicStructure, GeologicAge, Borehole) in smaller, more manageable units. As a result of this refactoring and the integration with new OGC and ISO standards, GeoSciML v3 is not backward compatible with previous GeoSciML versions. The scope of GeoSciML has been extended in version 3.0 to include new models for geomorphological data (a Geomorphology application schema) and for geological specimens, geochronological interpretations, and metadata for geochemical and geochronological analyses (a LaboratoryAnalysis-Specimen application schema). In addition, there is better support for borehole data, and the PhysicalProperties model now supports a wider range of petrophysical measurements. The previously used CGI_Value data type has been superseded in favour of externally governed data types provided by OGC's SWE Common v2 and GML v3.2 data standards. The GeoSciML v3 release includes worked examples of best practice in delivering geochemical analytical data using the Observations and Measurements (ISO 19156) and SWE Common v2 models. The GeoSciML v3 data model does not include vocabularies to support the data model. However, it does provide a standard pattern to reference controlled vocabulary concepts using HTTP-URIs. The international GeoSciML community has developed distributed RDF-based geoscience vocabularies that can be accessed by GeoSciML web services using the standard pattern recommended in GeoSciML v3. GeoSciML v3 is the first version of GeoSciML to be accompanied by web service validation tools using Schematron rules. For example, these validation tools may check for compliance of a web service to a particular profile of GeoSciML, or for logical consistency of data content that cannot be enforced by the application schemas. This validation process will support accreditation of GeoSciML services and a higher degree of semantic interoperability. * International Union of Geological Sciences Commission for the Management and Application of Geoscience Information (CGI-IUGS)

  13. Past Changes in the Vertical Distribution of Ozone Part 1: Measurement Techniques, Uncertainties and Availability

    NASA Technical Reports Server (NTRS)

Hassler, B.; Petropavlovskikh, I.; Staehelin, J.; August, T.; Bhartia, P. K.; Clerbaux, C.; Degenstein, D.; Maziere, M. De; Dinelli, B. M.; Dudhia, A.; et al.

    2014-01-01

Peak stratospheric chlorofluorocarbon (CFC) and other ozone depleting substance (ODS) concentrations were reached in the mid- to late 1990s. Detection and attribution of the expected recovery of the stratospheric ozone layer in an atmosphere with reduced ODSs, as well as efforts to understand the evolution of stratospheric ozone in the presence of increasing greenhouse gases, are key current research topics. These require a critical examination of the ozone changes with an accurate knowledge of the spatial (geographical and vertical) and temporal ozone response. For such an examination, it is vital that the quality of the measurements used be as high as possible and measurement uncertainties well quantified. In preparation for the 2014 United Nations Environment Programme (UNEP)/World Meteorological Organization (WMO) Scientific Assessment of Ozone Depletion, the SPARC/IO3C/IGACO-O3/NDACC (SI2N) Initiative was designed to study and document changes in the global ozone profile distribution. This requires assessing long-term ozone profile data sets with regard to measurement stability and uncertainty characteristics. The ultimate goal is to establish suitability for estimating long-term ozone trends to contribute to ozone recovery studies. Some of the data sets have been improved as part of this initiative, with updated versions now available. This summary presents an overview of stratospheric ozone profile measurement data sets (ground- and satellite-based) available for ozone recovery studies. Here we document measurement techniques, spatial and temporal coverage, vertical resolution, native units and measurement uncertainties. In addition, the latest data versions are briefly described (including data version updates as well as detailing multiple retrievals when available for a given satellite instrument). Archive location information for each data set is also given.

  14. Internet MEMS design tools based on component technology

    NASA Astrophysics Data System (ADS)

    Brueck, Rainer; Schumer, Christian

    1999-03-01

The micro electromechanical systems (MEMS) industry in Europe is characterized by small and medium sized enterprises specialized in products that solve problems in specific domains like medicine, automotive sensor technology, etc. In this field of business the technology-driven design approach known from microelectronics is not appropriate. Instead, each design problem aims at its own specific technology to be used for the solution. The variety of technologies at hand, like Si-surface, Si-bulk, LIGA, laser, and precision engineering, requires a huge set of different design tools to be available. No single SME can afford to hold licenses for all these tools. This calls for a new and flexible way of designing, implementing and distributing design software. The Internet provides a flexible manner of offering software access along with methodologies of flexible licensing, e.g. on a pay-per-use basis. New communication technologies like ADSL, TV cable or satellites as carriers promise to offer a bandwidth sufficient even for interactive tools with graphical interfaces in the near future. INTERLIDO is an experimental tool suite for process specification and layout verification for lithography-based MEMS technologies to be accessed via the Internet. The first version provides a Java implementation, even including a graphical editor for process specification. Currently, a new version is being brought into operation that is based on JavaBeans component technology. JavaBeans offers the possibility to realize independent interactive design assistants, like a design rule checking assistant, a process consistency checking assistant, a technology definition assistant, a graphical editor assistant, etc., that may reside distributed over the Internet, communicating via Internet protocols. Each potential user is thus able to configure his own dedicated version of a design tool set suited to the requirements of the current problem to be solved.

  15. [Gender-related achievements and challenges in the 2006 National Health Survey: analysis of adults and households].

    PubMed

    Ruiz-Cantero, María Teresa; Carrasco-Portiño, Mercedes; Artazcoz, Lucía

    2011-01-01

    To examine the ability of the 2006 Spanish Health Survey (SHS-2006) to analyze the population's health from a gender perspective and identify gender-related inequalities in health, and to compare the 2006 version with that of 2003. A contents analysis of the adults and households questionnaires was performed from the gender perspective, taking gender as (a) the basis of social norms and values, (b) the organizer of social structure: gender division of labor, double workload, vertical/horizontal segregation, and access to resources and power, and (c) a component of individual identity. The 2006 SHS uses neutral language. The referent is the interviewee, substituting the head of the family/breadwinner of past surveys. A new section focuses on reproductive labor (caregiving and domestic tasks) and the time distribution for these tasks. However, some limitations in the questions about time distribution were identified, hampering accurate estimations. The time devoted to paid labor is not recorded. The 2006 version includes new information about family commitments as an obstacle to accessing healthcare and on the delay between seeking and receiving healthcare appointments. The SHS 2006 introduces sufficient variations to confirm its improvement from a gender perspective. Future surveys should reformulate the questions about the time devoted to paid and reproductive labor, which is essential to characterize gender division of labor and double workload. Updating future versions of the SHS will also involve gathering information on maternity/paternity and parental leave. The 2006 survey allows delays in receiving healthcare to be measured, but does not completely allow other delays, such as diagnostic and treatment delays, to be quantified. Copyright © 2010 SESPAS. Published by Elsevier Espana. All rights reserved.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Chun; Huang, Maoyi; Fast, Jerome D.

Current climate models still have large uncertainties in estimating biogenic trace gases, which can significantly affect atmospheric chemistry and secondary aerosol formation that ultimately influences air quality and aerosol radiative forcing. These uncertainties result from many factors, including uncertainties in land surface processes and specification of vegetation types, both of which can affect the simulated near-surface fluxes of biogenic volatile organic compounds (BVOCs). In this study, the latest version of the Model of Emissions of Gases and Aerosols from Nature (MEGAN v2.1) is coupled within the land surface scheme CLM4 (Community Land Model version 4.0) in the Weather Research and Forecasting model with chemistry (WRF-Chem). In this implementation, MEGAN v2.1 shares a consistent vegetation map with CLM4 for estimating BVOC emissions. This is unlike MEGAN v2.0 in the public version of WRF-Chem, which uses a stand-alone vegetation map that differs from what is used by land surface schemes. This improved modeling framework is used to investigate the impact of two land surface schemes, CLM4 and Noah, on BVOCs and examine the sensitivity of BVOCs to vegetation distributions in California. The measurements collected during the Carbonaceous Aerosol and Radiative Effects Study (CARES) and the California Nexus of Air Quality and Climate Experiment (CalNex) conducted in June of 2010 provided an opportunity to evaluate the simulated BVOCs. Sensitivity experiments show that land surface schemes do influence the simulated BVOCs, but the impact is much smaller than that of vegetation distributions. This study indicates that more effort is needed to obtain the most appropriate and accurate land cover data sets for climate and air quality models in terms of simulating BVOCs, oxidant chemistry and, consequently, secondary organic aerosol formation.

  17. Code OK3 - An upgraded version of OK2 with beam wobbling function

    NASA Astrophysics Data System (ADS)

    Ogoyski, A. I.; Kawata, S.; Popov, P. H.

    2010-07-01

For computer simulations of heavy ion beam (HIB) irradiation onto a target with an arbitrary shape and structure in heavy ion fusion (HIF), the code OK2 was developed and presented in Computer Physics Communications 161 (2004). Code OK3 is an upgrade of OK2 including an important capability of wobbling beam illumination. The wobbling beam introduces a unique possibility for a smooth mechanism of inertial fusion target implosion, so that sufficient fusion energy is released to construct a fusion reactor in future.
    New version program summary
    Program title: OK3
    Catalogue identifier: ADST_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADST_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 221 517
    No. of bytes in distributed program, including test data, etc.: 2 471 015
    Distribution format: tar.gz
    Programming language: C++
    Computer: PC (Pentium 4, 1 GHz or more recommended)
    Operating system: Windows or UNIX
    RAM: 2048 MBytes
    Classification: 19.7
    Catalogue identifier of previous version: ADST_v2_0
    Journal reference of previous version: Comput. Phys. Comm. 161 (2004) 143
    Does the new version supersede the previous version?: Yes
    Nature of problem: In heavy ion fusion (HIF), ion cancer therapy, material processing, etc., a precise beam energy deposition is essentially important [1]. Codes OK1 and OK2 have been developed to simulate the heavy ion beam energy deposition in three-dimensional arbitrarily shaped targets [2,3]. Wobbling beam illumination is important to smooth the beam energy deposition non-uniformity in HIF, so that a uniform target implosion is realized and a sufficient fusion output energy is released.
    Solution method: OK3 works on the basis of OK1 and OK2 [2,3]. The code simulates a multi-beam illumination on a target with arbitrary shape and structure, including the beam wobbling function.
    Reasons for new version: The code OK3 is based on OK2 [3] and uses the same algorithm with some improvements, the most important one being the beam wobbling function.
    Summary of revisions: In the code OK3, beams are subdivided into many bunches. The displacement of each bunch center from the initial beam direction is calculated. Code OK3 allows the beamlet number to vary from bunch to bunch, which reduces the calculation error, especially in the case of a very complicated mesh structure with big internal holes. The target temperature rises during the time of energy deposition. Some procedures are improved to perform faster. The energy conservation is checked at each step of the calculation process and corrected if necessary.
    New procedures included in OK3: Procedure BeamCenterRot( ) rotates the beam axis around the impinging direction of each beam. Procedure BeamletRot( ) rotates the beamlet axes that belong to each beam. Procedure Rotation( ) sets the coordinates of rotated beams and beamlets in chamber and pellet systems. Procedure BeamletOut( ) calculates the lost energy of ions that have not impinged on the target. Procedure TargetT( ) sets the temperature of the target layer of energy deposition during the irradiation process. Procedure ECL( ) checks the energy conservation law at each step of the energy deposition process. Procedure ECLt( ) performs the final check of the energy conservation law at the end of the deposition process.
    Modified procedures in OK3: Procedure InitBeam( ) initializes the beam radius and the coefficients A1, A2, A3, A4 and A5 for Gauss-distributed beams [2]; it is enlarged in OK3 and can set beams with radii from 1 to 20 mm. Procedure kBunch( ) is modified to allow beamlet number variation from bunch to bunch during the deposition. Procedure ijkSp( ) and procedure Hole( ) are modified to perform faster. Procedure Espl( ) and procedure ChechE( ) are modified to increase the calculation accuracy. Procedure SD( ) calculates the total relative root-mean-square (RMS) deviation and the total relative peak-to-valley (PTV) deviation of the energy deposition non-uniformity; this procedure is not included in code OK2 because of its limited applications (for spherical targets only) and is taken from code OK1 and modified to work with code OK3.
    Running time: The execution time depends on the pellet mesh number and the number of beams in the simulated illumination, as well as on the beam characteristics (beam radius on the pellet surface, beam subdivision, projectile particle energy and so on). In almost all of the practical running tests performed, the typical running time for one beam deposition is about 30 s on a PC with a Pentium 4, 2.4 GHz CPU.
    References:
    [1] A.I. Ogoyski, et al., Heavy ion beam irradiation non-uniformity in inertial fusion, Phys. Lett. A 315 (2003) 372-377.
    [2] A.I. Ogoyski, et al., Code OK1 - Simulation of multi-beam irradiation on a spherical target in heavy ion fusion, Comput. Phys. Comm. 157 (2004) 160-172.
    [3] A.I. Ogoyski, et al., Code OK2 - A simulation code of ion-beam illumination on an arbitrary shape and structure target, Comput. Phys. Comm. 161 (2004) 143-150.
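
    What Procedure SD( ) is described as computing can be sketched in a few lines of Python: the relative RMS and peak-to-valley deviations of the deposited energy over the target mesh. The deposition array here is randomly generated for illustration.

      import numpy as np

      rng = np.random.default_rng(4)
      # toy deposition energies per mesh cell of a spherical target surface
      deposition = 1.0 + 0.03 * rng.standard_normal((64, 32))

      mean = deposition.mean()
      rms_dev = np.sqrt(((deposition - mean) ** 2).mean()) / mean   # relative RMS
      ptv_dev = (deposition.max() - deposition.min()) / mean        # peak-to-valley

      print(f"RMS non-uniformity: {100 * rms_dev:.2f}%")
      print(f"PTV non-uniformity: {100 * ptv_dev:.2f}%")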

  18. The Boeing plastic analysis capability for engines

    NASA Technical Reports Server (NTRS)

    Vos, R. G.

    1976-01-01

The current BOPACE program is described as a nonlinear stress analysis program based on a family of isoparametric finite elements. The theoretical, user, programmer, and preprocessing aspects are discussed, and example problems are included. New features in the current program version include substructuring, an out-of-core Gauss wavefront equation solver, multipoint constraints, combined material and geometric nonlinearities, automatic calculation of inertia effects, provision for distributed as well as concentrated mechanical loads, follower forces, singular crack-tip elements, the SAIL automatic generation capability, and expanded user control over input quantity definition, output selection, and program execution. BOPACE is written in FORTRAN IV and is currently available for both the IBM 360/370 and the UNIVAC 1108 machines.

  19. Documentation for the machine-readable version of the revised Catalogue of Stellar Rotational Velocities of Uesugi and Fukuda (1982)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1983-01-01

The machine-readable catalog provides mean data on the old Slettebak system for 6472 stars. The catalog results from the review, analysis and transformation of 11460 data values from 102 sources. Star identifications (major catalog number, name if the star has one, or cluster identification, etc.), a mean projected rotational velocity, and a list of source references are included. The references are given in a second file included with the catalog when it is distributed on magnetic tape. The contents and formats of the data and reference files of the machine-readable catalog are described to enable users to read and process the data.

  20. The Open-source Data Inventory for Anthropogenic CO2, version 2016 (ODIAC2016): a global monthly fossil fuel CO2 gridded emissions data product for tracer transport simulations and surface flux inversions

    NASA Astrophysics Data System (ADS)

    Oda, Tomohiro; Maksyutov, Shamil; Andres, Robert J.

    2018-01-01

    The Open-source Data Inventory for Anthropogenic CO2 (ODIAC) is a global high-spatial-resolution gridded emissions data product that distributes carbon dioxide (CO2) emissions from fossil fuel combustion. The emissions spatial distributions are estimated at a 1 × 1 km spatial resolution over land using power plant profiles (emissions intensity and geographical location) and satellite-observed nighttime lights. This paper describes the year 2016 version of the ODIAC emissions data product (ODIAC2016) and presents analyses that help guide data users, especially for atmospheric CO2 tracer transport simulations and flux inversion analysis. Since the original publication in 2011, we have made modifications to our emissions modeling framework in order to deliver a comprehensive global gridded emissions data product. Major changes from the 2011 publication are (1) the use of emissions estimates made by the Carbon Dioxide Information Analysis Center (CDIAC) at the Oak Ridge National Laboratory (ORNL) by fuel type (solid, liquid, gas, cement manufacturing, gas flaring, and international aviation and marine bunkers); (2) the use of multiple spatial emissions proxies by fuel type such as (a) nighttime light data specific to gas flaring and (b) ship/aircraft fleet tracks; and (3) the inclusion of emissions temporal variations. Using global fuel consumption data, we extrapolated the CDIAC emissions estimates for the recent years and produced the ODIAC2016 emissions data product that covers 2000-2015. Our emissions data can be viewed as an extended version of CDIAC gridded emissions data product, which should allow data users to impose global fossil fuel emissions in a more comprehensive manner than the original CDIAC product. Our new emissions modeling framework allows us to produce future versions of the ODIAC emissions data product with a timely update. Such capability has become more significant given the CDIAC/ORNL's shutdown. The ODIAC data product could play an important role in supporting carbon cycle science, especially modeling studies with space-based CO2 data collected in near real time by ongoing carbon observing missions such as the Japanese Greenhouse gases Observing SATellite (GOSAT), NASA's Orbiting Carbon Observatory-2 (OCO-2), and upcoming future missions. The ODIAC emissions data product including the latest version of the ODIAC emissions data (ODIAC2017, 2000-2016) is distributed from http://db.cger.nies.go.jp/dataset/ODIAC/ with a DOI (https://doi.org/10.17595/20170411.001).
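
    The disaggregation idea is easy to state in code: place point-source (power plant) emissions directly, then spread the remaining national total across cells in proportion to nighttime-light brightness. The Python sketch below uses invented numbers and arrays; it is a schematic of the approach, not ODIAC's calibrated procedure.

      import numpy as np

      rng = np.random.default_rng(5)
      nightlights = rng.gamma(2.0, 1.0, size=(100, 100))  # toy radiance field
      national_total = 500.0                              # toy national MtCO2/yr

      # point sources placed directly at their (invented) grid cells
      plants = [((10, 20), 60.0), ((75, 40), 45.0)]       # (cell index, MtCO2/yr)
      grid = np.zeros_like(nightlights)
      for (i, j), e in plants:
          grid[i, j] += e

      # remaining non-point total spread proportionally to nightlight brightness
      nonpoint = national_total - sum(e for _, e in plants)
      grid += nonpoint * nightlights / nightlights.sum()

      print(round(grid.sum(), 6))  # recovers the national total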

  1. Distributed Coordination of Heterogeneous Agents Using a Semantic Overlay Network and a Goal-Directed Graphplan Planner

    PubMed Central

    Lopes, António Luís; Botelho, Luís Miguel

    2013-01-01

    In this paper, we describe a distributed coordination system that allows agents to seamlessly cooperate in problem solving by partially contributing to a problem solution and delegating the subproblems for which they do not have the required skills or knowledge to appropriate agents. The coordination mechanism relies on a dynamically built semantic overlay network that allows the agents to efficiently locate, even in very large unstructured networks, the necessary skills for a specific problem. Each agent performs partial contributions to the problem solution using a new distributed goal-directed version of the Graphplan algorithm. This new goal-directed version of the original Graphplan algorithm provides an efficient solution to the problem of "distraction", which most forward-chaining algorithms suffer from. We also discuss a set of heuristics to be used in the backward-search process of the planning algorithm in order to distribute this process amongst idle agents in an attempt to find a solution in less time. The evaluation results show that our approach is effective in building a scalable and efficient agent society capable of solving complex distributable problems. PMID:23704885
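
    For readers unfamiliar with Graphplan, the following minimal Python sketch shows the planning-graph idea the paper builds on: expand alternating fact/action layers until the goals appear, then extract a plan backward from that level. Mutex reasoning and the paper's distributed, goal-directed extensions are deliberately omitted, and the domain is invented.

      from itertools import count

      # action: (name, preconditions, effects)
      ACTIONS = [
          ("buy_flour", frozenset({"money"}), frozenset({"flour"})),
          ("buy_eggs",  frozenset({"money"}), frozenset({"eggs"})),
          ("bake",      frozenset({"flour", "eggs"}), frozenset({"cake"})),
      ]

      def acts_per_level(layers):
          # actions applicable at each fact layer (except the last)
          return [[a for a in ACTIONS if a[1] <= f] for f in layers[:-1]]

      def graphplan(state, goals):
          layers = [frozenset(state)]
          for level in count(1):
              acts = [a for a in ACTIONS if a[1] <= layers[-1]]
              facts = layers[-1] | frozenset(e for a in acts for e in a[2])
              layers.append(facts)
              if goals <= facts:
                  return extract(layers, acts_per_level(layers), goals, level)
              if facts == layers[-2]:
                  return None                 # graph levelled off: unsolvable

      def extract(layers, acts, goals, level):
          if level == 0:
              return [] if goals <= layers[0] else None
          plan_step, needed = [], set()
          for g in goals:
              if g in layers[level - 1]:
                  needed.add(g)               # persist the goal (a "no-op")
                  continue
              a = next((a for a in acts[level - 1] if g in a[2]), None)
              if a is None:
                  return None
              plan_step.append(a[0])
              needed |= a[1]
          rest = extract(layers, acts, frozenset(needed), level - 1)
          return None if rest is None else rest + [sorted(set(plan_step))]

      print(graphplan({"money"}, frozenset({"cake"})))
      # [['buy_eggs', 'buy_flour'], ['bake']]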

  2. Model-Driven Development for scientific computing. An upgrade of the RHEEDGr program

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2009-11-01

Model-Driven Engineering (MDE) is the software engineering discipline which considers models as the most important element for software development, and for the maintenance and evolution of software, through model transformation. Model-Driven Architecture (MDA) is the approach for software development under the Model-Driven Engineering framework. This paper surveys the core MDA technology that was used to upgrade the RHEEDGr program to C++0x language standards.
    New version program summary
    Program title: RHEEDGR-09
    Catalogue identifier: ADUY_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUY_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 21 263
    No. of bytes in distributed program, including test data, etc.: 1 266 982
    Distribution format: tar.gz
    Programming language: CodeGear C++ Builder
    Computer: Intel Core Duo-based PC
    Operating system: Windows XP, Vista, 7
    RAM: more than 1 MB
    Classification: 4.3, 7.2, 6.2, 8, 14
    Does the new version supersede the previous version?: Yes
    Nature of problem: Reflection High-Energy Electron Diffraction (RHEED) is a very useful technique for studying the growth and surface analysis of thin epitaxial structures prepared by Molecular Beam Epitaxy (MBE). The RHEED technique can reveal, almost instantaneously, changes either in the coverage of the sample surface by adsorbates or in the surface structure of a thin film.
    Solution method: The calculations are based on the use of a dynamical diffraction theory in which the electrons are taken to be diffracted by a potential which is periodic in the dimension perpendicular to the surface.
    Reasons for new version: Responding to user feedback, the graphical version of the RHEED program has been upgraded to C++0x language standards. Also, the functionality and documentation of the program have been improved.
    Summary of revisions: Model-Driven Architecture (MDA) is the approach defined by the Object Management Group (OMG) for software development under the Model-Driven Engineering framework [1]. The MDA approach shifts the focus of software development from writing code to building models. By adopting a model-centric approach, MDA aims to automate the generation of system implementation artifacts directly from the model. The following three models are the core of the MDA: (i) the Computation Independent Model (CIM), which is focused on the basic requirements of the system; (ii) the Platform Independent Model (PIM), which is used by software architects and designers, and is focused on the operational capabilities of a system outside the context of a specific platform; and (iii) the Platform Specific Model (PSM), which is used by software developers and programmers, and includes details relating to the system for a specific platform. Basic requirements for the calculation of the RHEED intensity rocking curves in the one-beam condition have been described in Ref. [2]. Fig. 1 shows the PIM for the present version of the program. Fig. 2 presents the PSM for the program. The TGraph2D.bpk package has been recompiled to Graph2D0x.bpl and upgraded according to C++0x language standards. Fig. 3 shows the PSM of the Graph2D component, which is presently manifested by the Graph2D0x.bpl package. This diagram is a graphic presentation of the static view, which shows a collection of declarative model elements and their relationships. Installation instructions for the Graph2D0x package can be found in the new distribution. The program requires the user to provide the appropriate parameters for the crystal structure under investigation. These parameters are loaded from the parameters.ini file at run-time. Instructions for the preparation of the .ini files can be found in the new distribution. The program enables carrying out one-dimensional dynamical calculations for the fcc lattice with a two-atom basis and the fcc lattice with a one-atom basis, but the zeroth Fourier component of the scattering potential in the TRHEED1D::crystPotUg() function can be modified according to users' specific application requirements. The graphical user interface (GUI) of the program has been reconstructed. The program has been compiled with English/USA regional and language options.
    Unusual features: The program is distributed in the form of the main projects RHEEDGr_09.cbproj and Graph2D0x.cbproj with associated files, and should be compiled using CodeGear C++ Builder 2009 compilers.
    Running time: The typical running time is machine and user-parameters dependent.
    References:
    [1] OMG, Model Driven Architecture Guide Version 1.0.1, 2003, http://www.omg.org/cgi-bin/doc?omg/03-06-01.
    [2] A. Daniluk, Comput. Phys. Comm. 166 (2005) 123.

  3. The Data Dealers.

    ERIC Educational Resources Information Center

    Tenopir, Carol; Barry, Jeff

    1997-01-01

    Profiles 25 database distribution and production companies, all of which responded to a 1997 survey with information on 54 separate online, Web-based, or CD-ROM systems. Highlights increased competition, distribution formats, Web versions versus local area networks, full-text delivery, and pricing policies. Tables present a sampling of customers…

  4. The German Version of the Dutch Eating Behavior Questionnaire: Psychometric Properties, Measurement Invariance, and Population-Based Norms

    PubMed Central

    Hilbert, Anja; de Zwaan, Martina; Braehler, Elmar; Kersting, Anette

    2016-01-01

The Dutch Eating Behavior Questionnaire (DEBQ) is an internationally widely used instrument assessing different eating styles that may contribute to weight gain and overweight: emotional eating, external eating, and restraint. This study aimed to evaluate the psychometric properties of the 30-item German version of the DEBQ, including its measurement invariance across gender, age, and BMI-status, in a representative German population sample. Furthermore, we examined the distribution of eating styles in the general population and provide population-based norms for the DEBQ scales. A representative sample of the German general population (N = 2513, age ≥ 14 years) was assessed with the German version of the DEBQ along with information on sociodemographic characteristics and body weight and height. The German version of the DEBQ demonstrates good item characteristics and reliability (restraint: α = .92, emotional eating: α = .94, external eating: α = .89). The 3-factor structure of the DEBQ could be replicated in exploratory and confirmatory factor analyses, and results of multi-group confirmatory factor analyses supported its metric and scalar measurement invariance across gender, age, and BMI-status. External eating was the most prevalent eating style in the German general population. Women scored higher on the emotional and restrained eating scales than men, and overweight individuals scored higher in all three eating styles compared to normal weight individuals. Small differences across age were found for external eating. Norms were provided according to gender, age, and BMI-status. Our findings suggest that the German version of the DEBQ has good reliability and construct validity, and is suitable to reliably measure eating styles across age, gender, and BMI-status. Furthermore, the results demonstrate a considerable variation of eating styles across gender and BMI-status. PMID:27656879

  5. The German Version of the Dutch Eating Behavior Questionnaire: Psychometric Properties, Measurement Invariance, and Population-Based Norms.

    PubMed

    Nagl, Michaela; Hilbert, Anja; de Zwaan, Martina; Braehler, Elmar; Kersting, Anette

    The Dutch Eating Behavior Questionnaire (DEBQ) is an instrument widely used internationally to assess different eating styles that may contribute to weight gain and overweight: emotional eating, external eating, and restraint. This study aimed to evaluate the psychometric properties of the 30-item German version of the DEBQ, including its measurement invariance across gender, age, and BMI-status, in a representative German population sample. Furthermore, we examined the distribution of eating styles in the general population and provide population-based norms for the DEBQ scales. A representative sample of the German general population (N = 2513, age ≥ 14 years) was assessed with the German version of the DEBQ along with information on sociodemographic characteristics and body weight and height. The German version of the DEBQ demonstrates good item characteristics and reliability (restraint: α = .92, emotional eating: α = .94, external eating: α = .89). The 3-factor structure of the DEBQ could be replicated in exploratory and confirmatory factor analyses, and results of multi-group confirmatory factor analyses supported its metric and scalar measurement invariance across gender, age, and BMI-status. External eating was the most prevalent eating style in the German general population. Women scored higher on the emotional and restrained eating scales than men, and overweight individuals scored higher in all three eating styles than normal-weight individuals. Small differences across age were found for external eating. Norms were provided according to gender, age, and BMI-status. Our findings suggest that the German version of the DEBQ has good reliability and construct validity, and is suitable to reliably measure eating styles across age, gender, and BMI-status. Furthermore, the results demonstrate a considerable variation of eating styles across gender and BMI-status.

  6. Improved Soundings and Error Estimates using AIRS/AMSU Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2006-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next-generation polar-orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice-daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud-related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer-mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm, which differed only in minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm, which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case-by-case error estimates for retrieved geophysical parameters and for the channel-by-channel cloud-cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear-column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described, as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.

  7. Creation of a Genome-Wide Metabolic Pathway Database for Populus trichocarpa Using a New Approach for Reconstruction and Curation of Metabolic Pathways for Plants

    PubMed Central

    Zhang, Peifen; Dreher, Kate; Karthikeyan, A.; Chi, Anjo; Pujar, Anuradha; Caspi, Ron; Karp, Peter; Kirkup, Vanessa; Latendresse, Mario; Lee, Cynthia; Mueller, Lukas A.; Muller, Robert; Rhee, Seung Yon

    2010-01-01

    Metabolic networks reconstructed from sequenced genomes or transcriptomes can help visualize and analyze large-scale experimental data, predict metabolic phenotypes, discover enzymes, engineer metabolic pathways, and study metabolic pathway evolution. We developed a general approach for reconstructing metabolic pathway complements of plant genomes. Two new reference databases were created and added to the core of the infrastructure: a comprehensive, all-plant reference pathway database, PlantCyc, and a reference enzyme sequence database, RESD, for annotating metabolic functions of protein sequences. PlantCyc (version 3.0) includes 714 metabolic pathways and 2,619 reactions from over 300 species. RESD (version 1.0) contains 14,187 literature-supported enzyme sequences from across all kingdoms. We used RESD, PlantCyc, and MetaCyc (an all-species reference metabolic pathway database), in conjunction with the pathway prediction software Pathway Tools, to reconstruct a metabolic pathway database, PoplarCyc, from the recently sequenced genome of Populus trichocarpa. PoplarCyc (version 1.0) contains 321 pathways with 1,807 assigned enzymes. Comparing PoplarCyc (version 1.0) with AraCyc (version 6.0, Arabidopsis [Arabidopsis thaliana]) showed comparable numbers of pathways distributed across all domains of metabolism in both databases, except for a higher number of AraCyc pathways in secondary metabolism and a 1.5-fold increase in carbohydrate metabolic enzymes in PoplarCyc. Here, we introduce these new resources and demonstrate the feasibility of using them to identify candidate enzymes for specific pathways and to analyze metabolite profiling data through concrete examples. These resources can be searched by text or BLAST, browsed, and downloaded from our project Web site (http://plantcyc.org). PMID:20522724

  8. MCNP output data analysis with ROOT (MODAR)

    NASA Astrophysics Data System (ADS)

    Carasco, C.

    2010-12-01

    MCNP Output Data Analysis with ROOT (MODAR) is a tool based on CERN's ROOT software. MODAR has been designed to handle time-energy data produced by MCNP simulations of neutron inspection devices using the associated particle technique. MODAR exploits ROOT's Graphical User Interface and functionalities to visualize and process MCNP simulation results in a fast and user-friendly way. MODAR makes it possible to take into account the detection system's time resolution (which is not possible with MCNP), as well as the detectors' energy response functions and counting statistics, in a straightforward way. New version program summary. Program title: MODAR. Catalogue identifier: AEGA_v1_1. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGA_v1_1.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 150 927. No. of bytes in distributed program, including test data, etc.: 4 981 633. Distribution format: tar.gz. Programming language: C++. Computer: Most Unix workstations and PCs. Operating system: Most Unix systems, Linux and Windows, provided the ROOT package has been installed; examples were tested under SuSE Linux and Windows XP. RAM: Depends on the size of the MCNP output file. The example presented in the article, which involves three two-dimensional 139×740-bin histograms, allocates about 60 MB; this figure applies when running under ROOT and includes the memory consumed by ROOT itself. Classification: 17.6. Catalogue identifier of previous version: AEGA_v1_0. Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 1161. External routines: ROOT version 5.24.00 (http://root.cern.ch/drupal/). Does the new version supersede the previous version?: Yes. Nature of problem: The output of an MCNP simulation is an ASCII file. The data processing is usually performed by copying and pasting the relevant parts of the ASCII file into Microsoft Excel. Such an approach is satisfactory when the quantity of data is small, but it is not efficient when the size of the simulated data set is large, for example when time-energy correlations are studied in detail, as in problems involving the associated particle technique. In addition, since the finite time resolution of the simulated detector cannot be modeled with MCNP, systems in which time-energy correlation is crucial cannot be described in a satisfactory way. Finally, realistic particle energy deposit in detectors is calculated with MCNP in a two-step process involving type-5 and then type-8 tallies. In the first step, the photon flux energy spectrum associated with a time region is selected and serves as a source energy distribution for the second step. Thus, several files must be manipulated before obtaining the result, which can be time consuming if one needs to study several time regions or different detector performances. In the same way, modeling the counting statistics obtained in a limited acquisition time requires several steps and can also be time consuming. Solution method: In order to overcome the previous limitations, the MODAR C++ code has been written to make use of CERN's ROOT data analysis software. MCNP output data are read from the MCNP output file with dedicated routines. Two-dimensional histograms are filled and can be handled efficiently within the ROOT framework. To keep the analysis tool user friendly, all processing and data display can be done by means of the ROOT Graphical User Interface.
    Specific routines have been written to include the detectors' finite time resolution and energy response functions, as well as counting statistics, in a straightforward way. Reasons for new version: For applications involving the associated particle technique, a large number of gamma rays are produced by fast neutron interactions. To study the energy spectra, it is useful to identify the gamma-ray energy peaks in a straightforward way. Therefore, the possibility of displaying the gamma rays corresponding to specific reactions has been added to MODAR. Summary of revisions: A gamma-ray database can be used to identify, in the energy spectra, gamma-ray peaks together with their first and second escape peaks. Histograms can be scaled by the number of source particles to evaluate the expected number of counts without statistical uncertainties. Additional comments: The possibility of adding tallies has also been incorporated in MODAR in order to describe systems in which the signals from several detectors can be summed. Moreover, MODAR can be adapted to handle other problems involving two-dimensional data. Running time: The CPU time needed to smear a two-dimensional histogram depends on the size of the histogram. In the presented example, the time-energy smearing of one of the 139×740 two-dimensional histograms takes 3 minutes on a Dell computer equipped with an Intel Core 2 processor.
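    MODAR itself performs this smearing in C++ on ROOT histograms; as a hedged, library-neutral illustration of the underlying operation, the following Python sketch redistributes the counts of each time bin of a time-energy histogram with a Gaussian of width equal to the detector time resolution:

        # Gaussian time-resolution smearing of a (time, energy) histogram.
        # Illustrative numpy version of the principle, not MODAR code.
        import numpy as np

        def smear_time(hist, dt, sigma_t):
            # hist: (n_time, n_energy) counts; dt: time bin width; sigma_t: resolution
            n_t = hist.shape[0]
            t = (np.arange(n_t) + 0.5) * dt
            out = np.zeros_like(hist, dtype=float)
            for i in range(n_t):
                w = np.exp(-0.5 * ((t - t[i]) / sigma_t) ** 2)   # Gaussian weights
                w /= w.sum()
                out += np.outer(w, hist[i])   # spread bin i along the time axis
            return out

        h = np.zeros((139, 740))
        h[70, 370] = 1000.0                        # a single sharp time-energy peak
        print(smear_time(h, dt=1.0, sigma_t=2.0)[68:73, 370])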

  9. Parallelization of KENO-Va Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Ramón, Javier; Peña, Jorge

    1995-07-01

    KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation by the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared-memory machines and another for distributed-memory systems using the PVM message-passing system. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for the random numbers were used. The CONVEX C3440 with four processors and shared memory at the CSN was used to implement the shared-memory version. An FDDI network of 6 HP9000/735 workstations was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.
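    A hedged sketch of the reproducibility idea mentioned above: give every history a deterministic random stream derived from its (generation, index), so results do not depend on how histories are distributed over processors. This is a toy model, not the KENO-Va implementation:

        # Each history gets a random stream seeded by (generation, index),
        # so parallel and serial runs agree exactly. Toy model only.
        import numpy as np
        from multiprocessing import Pool

        def track_history(args):
            generation, history = args
            rng = np.random.default_rng([generation, history])
            return int(rng.geometric(p=0.3))   # stand-in for a neutron history

        if __name__ == "__main__":
            jobs = [(0, h) for h in range(1000)]
            with Pool(4) as pool:
                parallel = pool.map(track_history, jobs)
            serial = [track_history(j) for j in jobs]
            assert parallel == serial   # same results for any processor layout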

  10. Unified EDGE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2007-06-18

    UEDGE is an interactive suite of physics packages using the Python or BASIS scripting systems. The plasma is described by time-dependent 2D plasma fluid equations that include equations for density, velocity, ion temperature, electron temperature, electrostatic potential, and gas density in the edge region of a magnetic fusion energy confinement device. Slab, cylindrical, and toroidal geometries are allowed, and closed and open magnetic field-line regions are included. Classical transport is assumed along magnetic field lines, and anomalous transport is assumed across field lines. Multi-charge-state impurities can be included with the corresponding line-radiation energy loss. Although UEDGE is written in Fortran, for efficient execution and analysis of results it utilizes either the Python or BASIS scripting shells. Python is easily available for many platforms (http://www.Python.org/). The features and availability of BASIS are described in "Basis Manual Set" by P.F. Dubois, Z.C. Motteler, et al., Lawrence Livermore National Laboratory report UCRL-MA-118541, June 2002, and http://basis.llnl.gov. BASIS has been reviewed and released by LLNL for unlimited distribution. The Python version utilizes PYBASIS scripts developed by D.P. Grote, LLNL. The Python version also uses MPPL code and a MAC Perl script, available from the public-domain BASIS source above. The Forthon version of UEDGE uses the same source files, but utilizes Forthon to produce a Python-compatible source. Forthon has been developed by D.P. Grote at LBL (see http://hifweb.lbl.gov/Forthon/ and Grote et al. in the references below), and it is freely available. The graphics can be performed by any package importable to Python, such as PYGIST.

  11. Publisher Correction: The global distribution of tetrapods reveals a need for targeted reptile conservation.

    PubMed

    Roll, Uri; Feldman, Anat; Novosolov, Maria; Allison, Allen; Bauer, Aaron M; Bernard, Rodolphe; Böhm, Monika; Castro-Herrera, Fernando; Chirio, Laurent; Collen, Ben; Colli, Guarino R; Dabool, Lital; Das, Indraneil; Doan, Tiffany M; Grismer, Lee L; Hoogmoed, Marinus; Itescu, Yuval; Kraus, Fred; LeBreton, Matthew; Lewin, Amir; Martins, Marcio; Maza, Erez; Meirte, Danny; Nagy, Zoltán T; de C Nogueira, Cristiano; Pauwels, Olivier S G; Pincheira-Donoso, Daniel; Powney, Gary D; Sindaco, Roberto; Tallowin, Oliver J S; Torres-Carvajal, Omar; Trape, Jean-François; Vidan, Enav; Uetz, Peter; Wagner, Philipp; Wang, Yuezhao; Orme, C David L; Grenyer, Richard; Meiri, Shai

    2017-11-01

    In the version of this Article originally published, owing to a technical error, the author 'Laurent Chirio' was mistakenly designated as a corresponding author in the HTML version; the PDF was correct. This error has now been corrected in the HTML version. Further, in Supplementary Table 3, the authors misspelt the surname of 'Danny Meirte'; this file has now been replaced.

  12. Effectiveness of Two Versions of a STD/HIV Prevention Program

    DTIC Science & Technology

    2002-01-01

    Naval Health Research Center. Effectiveness of Two Versions of a STD/HIV Prevention Program. S. Booth-Kewley, R. A. Shaffer, R. Y. Minagawa, S. K. Brodine. Report No. 01-01. Approved for public release; distribution unlimited. ...of a behavioral intervention called the STD/HIV Intervention Program (SHIP) in a sample of Marines. Marines were exposed to either a 6 hr or a 3 hr

  13. Documentation for the machine-readable version of the catalogue of 20457 Star positions obtained by photography in the declination zone -48 deg to -54 deg (1950)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1983-01-01

    The machine-readable catalog, as it is distributed from the Astronomical Data Center, is described. Some minor reformatting of the magnetic tape version as received was performed to decrease the record size and conserve space; the data content is identical to the sample shown in Table VI of the source reference.

  14. The orbifolder: A tool to study the low-energy effective theory of heterotic orbifolds

    NASA Astrophysics Data System (ADS)

    Nilles, H. P.; Ramos-Sánchez, S.; Vaudrevange, P. K. S.; Wingerter, A.

    2012-06-01

    The orbifolder is a program developed in C++ that computes and analyzes the low-energy effective theory of heterotic orbifold compactifications. The program includes routines to compute the massless spectrum, to identify the allowed couplings in the superpotential, to automatically generate large sets of orbifold models, to identify phenomenologically interesting models (e.g. MSSM-like models) and to analyze their vacuum configurations. Program summary. Program title: orbifolder. Catalogue identifier: AELR_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELR_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License version 3. No. of lines in distributed program, including test data, etc.: 145 572. No. of bytes in distributed program, including test data, etc.: 930 517. Distribution format: tar.gz. Programming language: C++. Computer: Personal computer. Operating system: Tested on Linux (Fedora 15, Ubuntu 11, SuSE 11). Word size: 32 bits or 64 bits. Classification: 11.1. External routines: Boost (http://www.boost.org/), GSL (http://www.gnu.org/software/gsl/). Nature of problem: Calculating the low-energy spectrum of heterotic orbifold compactifications. Solution method: Quadratic equations on a lattice; representation theory; polynomial algebra. Running time: Less than a second per model.

  15. Locally adaptive, spatially explicit projection of US population for 2030 and 2050.

    PubMed

    McKee, Jacob J; Rose, Amy N; Bright, Edward A; Huynh, Timmy; Bhaduri, Budhendra L

    2015-02-03

    Localized adverse events, including natural hazards, epidemiological events, and human conflict, underscore the critical importance of quantifying and mapping current population. Building on the spatial interpolation technique previously developed for high-resolution population distribution data (LandScan Global and LandScan USA), we have constructed an empirically informed spatial distribution of the projected population of the contiguous United States for 2030 and 2050, depicting one of many possible population futures. Whereas most current large-scale, spatially explicit population projections typically rely on a population gravity model to determine areas of future growth, our projection model departs from these by accounting for multiple components that affect population distribution. Modeled variables, which included land cover, slope, distances to larger cities, and a moving average of current population, were locally adaptive and geographically varying. The resulting weighted surface was used to determine which areas had the greatest likelihood of future population change. County-level population projections were developed using a modified version of the US Census's projection methodology, with the US Census's official projection as the benchmark. Applications of our model include incorporating various scenario-driven events to produce a range of spatially explicit population futures for suitability modeling, service-area planning for governmental agencies, consequence assessment, mitigation planning and implementation, and assessment of spatially vulnerable populations.
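    As a toy, hedged illustration of the general approach described above (not the LandScan model itself), the following Python sketch combines covariate layers into a weight surface and allocates a county's projected growth to cells in proportion to their weights; the layers and weighting are invented for the example:

        # Disaggregate a county-level projection onto a raster via a
        # weight surface built from covariates. All inputs are synthetic.
        import numpy as np

        rng = np.random.default_rng(1)
        shape = (50, 50)
        slope = rng.random(shape)            # stand-ins for real covariate rasters
        dist_to_city = rng.random(shape)
        current_pop = rng.random(shape) * 100

        # Favor flat cells, cells near cities, and already-populated cells.
        weight = (1 - slope) * (1 - dist_to_city) * (current_pop / current_pop.max())
        weight /= weight.sum()

        county_growth = 25_000               # projected county-level change
        future_pop = current_pop + county_growth * weight
        print(round(future_pop.sum() - current_pop.sum()))   # 25000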

  16. Simulation of n-qubit quantum systems. V. Quantum measurements

    NASA Astrophysics Data System (ADS)

    Radtke, T.; Fritzsche, S.

    2010-02-01

    The FEYNMAN program has been developed during the last years to support case studies on the dynamics and entanglement of n-qubit quantum registers. Apart from basic transformations and (gate) operations, it currently supports a good number of separability criteria and entanglement measures, quantum channels, as well as the parametrizations of various frequently applied objects in quantum information theory, such as (pure and mixed) quantum states, Hermitian and unitary matrices or classical probability distributions. With the present update of the FEYNMAN program, we provide simple access to (the simulation of) quantum measurements. This includes not only the widely applied projective measurements upon the eigenspaces of some given operator, but also single-qubit measurements in various pre- and user-defined bases as well as support for two-qubit Bell measurements. In addition, generalized and POVM measurements are supported. Knowing the importance of measurements for many quantum information protocols, e.g., one-way computing, we hope that this update makes the FEYNMAN code an attractive and versatile tool for both research and education. New version program summary. Program title: FEYNMAN. Catalogue identifier: ADWE_v5_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWE_v5_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 27 210. No. of bytes in distributed program, including test data, etc.: 1 960 471. Distribution format: tar.gz. Programming language: Maple 12. Computer: Any computer with Maple software installed. Operating system: Any system that supports Maple; the program has been tested under Microsoft Windows XP and Linux. Classification: 4.15. Catalogue identifier of previous version: ADWE_v4_0. Journal reference of previous version: Comput. Phys. Commun. 179 (2008) 647. Does the new version supersede the previous version?: Yes. Nature of problem: During the last decade, the field of quantum information science has contributed greatly to our understanding of quantum mechanics, and has also provided new and efficient protocols that rely on quantum entanglement. To further analyze the amount and transfer of entanglement in n-qubit quantum protocols, symbolic and numerical simulations need to be handled efficiently. Solution method: Using the computer algebra system Maple, we developed a set of procedures to support the definition, manipulation and analysis of n-qubit quantum registers. These procedures also help to deal with (unitary) logic gates and (non-unitary) quantum operations and measurements that act upon the quantum registers. All commands are organized in a hierarchical order and can be used interactively in order to simulate and analyze the evolution of n-qubit quantum systems, both in ideal and noisy quantum circuits. Reasons for new version: Until the present, the FEYNMAN program supported the basic data structures and operations of n-qubit quantum registers [1], a good number of separability and entanglement measures [2], quantum operations (noisy channels) [3], as well as the parametrizations of various frequently applied objects, such as (pure and mixed) quantum states, Hermitian and unitary matrices or classical probability distributions [4].
    With the current extension, we add all features necessary to simulate quantum measurements, including projective measurements in various single-qubit bases and in the two-qubit Bell basis, as well as POVM measurements. Together with the previously implemented functionality, this greatly enhances the possibilities of analyzing quantum information protocols in which measurements play a central role, e.g., one-way computation. Running time: Most commands require ⩽10 seconds of processor time on a ⩾2 GHz Pentium 4 processor or newer, if they work with quantum registers of five or fewer qubits. Moreover, about 5-20 MB of working memory is typically needed (in addition to the memory for the Maple environment itself). However, especially when working with symbolic expressions, the requirements on CPU time and memory depend critically on the size of the quantum registers, owing to the exponential growth of the dimension of the associated Hilbert space. For example, complex (symbolic) noise models, i.e. with several Kraus operators, may result in very large expressions that dramatically slow down the evaluation of, e.g., distance measures or the final-state entropy. In these cases, Maple's assume facility sometimes helps to reduce the complexity of the symbolic expressions, but more often than not only a numerical evaluation is feasible. Since the various commands can be applied to quite different scenarios, no general scaling rule can be given for the CPU time or memory requirements. References: [1] T. Radtke, S. Fritzsche, Comput. Phys. Commun. 173 (2005) 91. [2] T. Radtke, S. Fritzsche, Comput. Phys. Commun. 175 (2006) 145. [3] T. Radtke, S. Fritzsche, Comput. Phys. Commun. 176 (2007) 617. [4] T. Radtke, S. Fritzsche, Comput. Phys. Commun. 179 (2008) 647.
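    FEYNMAN itself is a Maple package; as a hedged numpy illustration of the generalized-measurement rule it implements (outcome probabilities p_k = Tr(E_k ρ), with POVM elements E_k = M_k†M_k built from Kraus operators M_k), consider:

        # POVM measurement of a one-qubit density matrix: outcome
        # probabilities and post-measurement states. Textbook rule only,
        # not FEYNMAN code.
        import numpy as np

        rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)   # mixed state

        # Two Kraus operators with E0 + E1 = identity (a complete POVM).
        M0 = np.sqrt(0.7) * np.eye(2)
        M1 = np.sqrt(0.3) * np.array([[0, 1], [1, 0]], dtype=complex)

        for M in (M0, M1):
            E = M.conj().T @ M
            p = np.trace(E @ rho).real            # outcome probability
            post = M @ rho @ M.conj().T / p       # post-measurement state
            print(round(p, 3), np.round(post, 3))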

  17. TSPP - A Collection of FORTRAN Programs for Processing and Manipulating Time Series

    USGS Publications Warehouse

    Boore, David M.

    2008-01-01

    This report lists a number of FORTRAN programs that I have developed over the years for processing and manipulating strong-motion accelerograms. The collection is titled TSPP, which stands for Time Series Processing Programs. I have excluded 'strong-motion accelerograms' from the title, however, as the boundary between 'strong' and 'weak' motion has become blurred with the advent of broadband sensors and high-dynamic-range dataloggers, and many of the programs can be used with any evenly spaced time series, not just acceleration time series. This version of the report is relatively brief, consisting primarily of an annotated list of the programs, with two examples of processing, and a few comments on usage. I do not include a parameter-by-parameter guide to the programs. Future versions might include more examples of processing, illustrating the various parameter choices in the programs. Although these programs have been used by the U.S. Geological Survey, no warranty, expressed or implied, is made by the USGS as to the accuracy or functioning of the programs and related program material, nor shall the fact of distribution constitute any such warranty, and no responsibility is assumed by the USGS in connection therewith. The programs are distributed on an 'as is' basis, with no warranty of support from me. These programs were written for my use and are being publicly distributed in the hope that others might find them as useful as I have. I would, however, appreciate being informed about bugs, and I always welcome suggestions for improvements to the codes. Please note that I have made little effort to optimize the coding of the programs or to include a user-friendly interface (many of the programs in this collection have been included in the software usdp (Utility Software for Data Processing), being developed by Akkar et al. (personal communication, 2008); usdp includes a graphical user interface). Speed of execution has been sacrificed in favor of code that is intended to be easy to understand, although on modern computers speed of execution is rarely a problem. I will be pleased if users incorporate portions of my programs into their own applications; I only ask that reference be made to this report as the source of the programs.

  18. Assessment of radionuclide databases in CAP88 mainframe version 1.0 and Windows-based version 3.0.

    PubMed

    LaBone, Elizabeth D; Farfán, Eduardo B; Lee, Patricia L; Jannik, G Timothy; Donnelly, Elizabeth H; Foley, Trevor Q

    2009-09-01

    In this study the radionuclide databases for two versions of the Clean Air Act Assessment Package-1988 (CAP88) computer model were assessed in detail. CAP88 estimates radiation dose and the risk of health effects to human populations from radionuclide emissions to air. This program is used by several U.S. Department of Energy (DOE) facilities to comply with National Emission Standards for Hazardous Air Pollutants regulations. CAP88 Mainframe, referred to as version 1.0 on the U.S. Environmental Protection Agency Web site (http://www.epa.gov/radiation/assessment/CAP88/), was the very first CAP88 version, released in 1988. Some DOE facilities, including the Savannah River Site, still employ this version (1.0), while others use the more user-friendly personal computer Windows-based version 3.0 released in December 2007. Version 1.0 uses the program RADRISK, based on International Commission on Radiological Protection Publication 30, as its radionuclide database. Version 3.0 uses half-life, dose, and risk factor values based on Federal Guidance Report 13. Differences in these values could produce different results for the same input exposure data (the same scenario), depending on which version of CAP88 is used. Consequently, the differences between the two versions are being assessed in detail at Savannah River National Laboratory. The version 1.0 and 3.0 database files contain 496 and 838 radionuclides, respectively, and though one would expect the newer version to include all 496 radionuclides, 35 radionuclides listed in version 1.0 are not included in version 3.0. The majority of these have either extremely short or extremely long half-lives or are no longer produced; however, some of the short-lived radionuclides might produce progeny of great interest at DOE sites. In addition, 122 radionuclides were found to have different half-lives in the two versions, 21 of them differing by more than 3 percent and 12 by more than 10 percent.
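    A hedged sketch of the kind of cross-version comparison described above; the nuclides and half-life values below are placeholders (X-99 is invented), not the actual CAP88 data:

        # Flag nuclides missing from the newer database and nuclides whose
        # half-lives differ by more than 3% between versions.
        half_life_v1 = {"H-3": 12.35, "Cs-137": 30.00, "X-99": 10.0}   # years
        half_life_v3 = {"H-3": 12.32, "Cs-137": 30.17, "X-99": 11.2, "Fe-55": 2.7}

        print("in v1 only:", sorted(set(half_life_v1) - set(half_life_v3)))
        for nuclide in sorted(set(half_life_v1) & set(half_life_v3)):
            a, b = half_life_v1[nuclide], half_life_v3[nuclide]
            diff = abs(a - b) / a * 100
            if diff > 3.0:
                print(f"{nuclide}: {diff:.1f}% half-life difference")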

  19. PC-SEAPAK - ANALYSIS OF COASTAL ZONE COLOR SCANNER AND ADVANCED VERY HIGH RESOLUTION RADIOMETER DATA

    NASA Technical Reports Server (NTRS)

    Mcclain, C. R.

    1994-01-01

    PC-SEAPAK is a user-interactive satellite data analysis software package specifically developed for oceanographic research. The program is used to process and interpret data obtained from the Nimbus-7/Coastal Zone Color Scanner (CZCS), and the NOAA Advanced Very High Resolution Radiometer (AVHRR). PC-SEAPAK is a set of independent microcomputer-based image analysis programs that provide the user with a flexible, user-friendly, standardized interface, and facilitates relatively low-cost analysis of oceanographic satellite data. Version 4.0 includes 114 programs. PC-SEAPAK programs are organized into categories which include CZCS and AVHRR level-1 ingest, level-2 analyses, statistical analyses, data extraction, remapping to standard projections, graphics manipulation, image board memory manipulation, hardcopy output support and general utilities. Most programs allow user interaction through menu and command modes and also by the use of a mouse. Most programs also provide for ASCII file generation for further analysis in spreadsheets, graphics packages, etc. The CZCS scanning radiometer aboard the NIMBUS-7 satellite was designed to measure the concentration of photosynthetic pigments and their degradation products in the ocean. AVHRR data is used to compute sea surface temperatures and is supported for the NOAA 6, 7, 8, 9, 10, 11, and 12 satellites. The CZCS operated from November 1978 to June 1986. CZCS data may be obtained free of charge from the CZCS archive at NASA/Goddard Space Flight Center. AVHRR data may be purchased through NOAA's Satellite Data Service Division. Ordering information is included in the PC-SEAPAK documentation. Although PC-SEAPAK was developed on a COMPAQ Deskpro 386/20, it can be run on most 386-compatible computers with an AT bus, EGA controller, Intel 80387 coprocessor, and MS-DOS 3.3 or higher. A Matrox MVP-AT image board with appropriate monitor and cables is also required. Note that the authors have received some reports of incompatibilities between the MVP-AT image board and ZENITH computers. Also, the MVP-AT image board is not necessarily compatible with 486-based systems; users of 486-based systems should consult with Matrox about compatibility concerns. Other PC-SEAPAK requirements include a Microsoft mouse (serial version), 2Mb RAM, and 100Mb hard disk space. For data ingest and backup, 9-track tape, 8mm tape and optical disks are supported and recommended. PC-SEAPAK has been under development since 1988. Version 4.0 was updated in 1992, and is distributed without source code. It is available only as a set of 36 1.2Mb 5.25 inch IBM MS-DOS format diskettes. PC-SEAPAK is a copyrighted product with all copyright vested in the National Aeronautics and Space Administration. Phar Lap's DOS_Extender run-time version is integrated into several of the programs; therefore, the PC-SEAPAK programs may not be duplicated. Three of the distribution diskettes contain DOS_Extender files. One of the distribution diskettes contains Media Cybernetics' HALO88 font files, also licensed by NASA for dissemination but not duplication. IBM is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. HALO88 is a registered trademark of Media Cybernetics, but the product was discontinued in 1991.

  20. BioconductorBuntu: a Linux distribution that implements a web-based DNA microarray analysis server.

    PubMed

    Geeleher, Paul; Morris, Dermot; Hinde, John P; Golden, Aaron

    2009-06-01

    BioconductorBuntu is a custom distribution of Ubuntu Linux that automatically installs a server-side microarray processing environment, providing a user-friendly web-based GUI to many of the tools developed by the Bioconductor Project, accessible locally or across a network. System installation is via booting off a CD image or by using a Debian package provided to upgrade an existing Ubuntu installation. In its current version, several microarray analysis pipelines are supported, including oligonucleotide and dual- or single-dye experiments, with post-processing by Gene Set Enrichment Analysis. BioconductorBuntu is designed to be extensible by server-side integration of further relevant Bioconductor modules as required, facilitated by its straightforward underlying Python-based infrastructure. BioconductorBuntu offers an ideal environment for the development of processing procedures to facilitate the analysis of next-generation sequencing datasets. BioconductorBuntu is available for download under a Creative Commons license, along with additional documentation and a tutorial, from http://bioinf.nuigalway.ie.

  1. Batching System for Superior Service

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Veridian's Portable Batch System (PBS) was the recipient of the 1997 NASA Space Act Award for outstanding software. A batch system is a set of processes for managing queues and jobs. Without a batch system, it is difficult to manage the workload of a computer system. By bundling the enterprise's computing resources, the PBS technology offers users a single coherent interface, resulting in efficient management of the batch services. Users choose which information to package into "containers" for system-wide use. PBS also provides detailed system usage data, a procedure not easily executed without this software. PBS operates on networked, multi-platform UNIX environments. Veridian's new version, PBS Pro™, has additional features and enhancements, including support for additional operating systems. Veridian distributes the original version of PBS as Open Source software via the PBS website. Customers can register and download the software at no cost. PBS Pro is also available via the web and offers additional features such as increased stability, reliability, and fault tolerance. A company using PBS can expect a significant increase in the effective management of its computing resources. Tangible benefits include increased utilization of costly resources and enhanced understanding of computational requirements and user needs.
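    As a hedged illustration of everyday PBS usage (not taken from the PBS documentation), the Python sketch below composes a minimal job script with standard PBS directives and hands it to qsub; resource-line syntax varies between PBS variants and sites, so the values are illustrative only:

        # Compose and submit a tiny PBS job. Requires a PBS installation
        # that provides the qsub command on this machine.
        import subprocess, tempfile

        lines = [
            "#!/bin/sh",
            "#PBS -N demo_job",
            "#PBS -l walltime=00:10:00",
            "#PBS -l nodes=1:ppn=4",
            'cd "$PBS_O_WORKDIR"',
            'echo "running on $(hostname)"',
        ]
        with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
            f.write("\n".join(lines) + "\n")
            script = f.name

        # qsub prints the new job identifier on success.
        print(subprocess.run(["qsub", script], capture_output=True, text=True).stdout)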

  2. The Monte Carlo event generator AcerMC versions 2.0 to 3.8 with interfaces to PYTHIA 6.4, HERWIG 6.5 and ARIADNE 4.1

    NASA Astrophysics Data System (ADS)

    Kersevan, Borut Paul; Richter-Waş, Elzbieta

    2013-03-01

    The AcerMC Monte Carlo generator is dedicated to the generation of Standard Model background processes which were recognised as critical for the searches at the LHC, and whose generation was previously either unavailable or not straightforward. The program itself provides a library of the massive matrix elements (coded by MADGRAPH) and native phase-space modules for the generation of a set of selected processes. The hard-process event can be completed by initial- and final-state radiation, hadronisation and decays through the existing interfaces with the PYTHIA, HERWIG or ARIADNE event generators and (optionally) TAUOLA and PHOTOS. Interfaces to all these packages are provided in the distribution version. The phase-space generation is based on the multi-channel self-optimising approach using the modified Kajantie-Byckling formalism for phase-space construction; further smoothing of the phase space is obtained by using a modified ac-VEGAS algorithm. An additional improvement in the recent versions is the inclusion of a consistent prescription for matching the matrix-element calculations with parton showering for a select list of processes. Catalogue identifier: ADQQ_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADQQ_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 3 853 309. No. of bytes in distributed program, including test data, etc.: 68 045 728. Distribution format: tar.gz. Programming language: FORTRAN 77 with popular extensions (g77, gfortran). Computer: All running Linux. Operating system: Linux. Classification: 11.2, 11.6. External routines: CERNLIB (http://cernlib.web.cern.ch/cernlib/), LHAPDF (http://lhapdf.hepforge.org/). Catalogue identifier of previous version: ADQQ_v1_0. Journal reference of previous version: Comput. Phys. Comm. 149 (2003) 142. Does the new version supersede the previous version?: Yes. Nature of problem: Despite the large repertoire of processes implemented for generation in event generators like PYTHIA [1] or HERWIG [2], a number of background processes crucial for studying the expected physics of the LHC experiments are missing. For some of these processes the matrix-element expressions are rather lengthy, and/or to achieve a reasonable generation efficiency it is necessary to tailor the phase-space selection procedure to the dynamics of the process. That is why it is not practical to expect that any of the above general-purpose generators will contain every process, or even every observable one, occurring in LHC collisions. A more practical solution is a library of dedicated matrix-element-based generators, with standardised interfaces like that proposed in [3], to the more universal generator which is used to complete the event generation. Solution method: The AcerMC event generator provides a library of matrix-element-based generators for several processes. The initial- and final-state showers, beam remnants and underlying events, fragmentation and remaining decays are supposed to be performed by the other, universal generator to which this one is interfaced; we call it a supervising generator. The interfaces to PYTHIA 6.4, ARIADNE 4.1 and HERWIG 6.5, as such generators, are provided.
    An interface to the TAUOLA [4] and PHOTOS [5] packages, for τ-lepton decays (including spin correlations treatment) and QED radiation in particle decays, is also provided. At present, the following matrix-element-based processes have been implemented: gg,qq¯→tt¯bb¯; qq¯→W(→ℓν)bb¯; qq¯→W(→ℓν)tt¯; gg,qq¯→Z/γ∗(→ℓℓ)bb¯; gg,qq¯→Z/γ∗(→ℓℓ,νν,bb¯)tt¯; complete EW gg,qq¯→(Z/W/γ∗→)tt¯bb¯; gg,qq¯→tt¯tt¯; gg,qq¯→(tt¯→)ff¯bff¯b¯; gg,qq¯→(WWbb→)ff¯ff¯bb¯. Both interfaces allow the use of the LHAPDF/LHAGLUE library of parton density functions. A set of control processes is also provided: qq¯→W→ℓν; qq¯→Z/γ∗→ℓℓ; gg,qq¯→tt¯; and gg→(tt¯→)WbWb¯. Reasons for new version: Implementation of several new processes and methods. Summary of revisions: Each version added new processes or functionalities; a detailed list is given in the section "Changes since AcerMC 1.0". Restrictions: The package is optimised for 14 TeV pp collisions simulated in the LHC environment and also works at the achieved LHC energies of 7 TeV and 8 TeV. The consistency between the results of complete generation using the PYTHIA 6.4 or HERWIG 6.5 interfaces is technically limited by the different approaches taken in these two generators for evaluating the αQCD and αQED couplings and by their different fragmentation/hadronisation models. For consistency checks, the AcerMC library contains natively coded definitions of the αQCD and αQED couplings; using these native definitions leads to the same total cross-sections with both the PYTHIA 6.4 and HERWIG 6.5 interfaces.

  3. Parameterization of dust emissions in the global atmospheric chemistry-climate model EMAC: impact of nudging and soil properties

    NASA Astrophysics Data System (ADS)

    Astitha, M.; Lelieveld, J.; Abdel Kader, M.; Pozzer, A.; de Meij, A.

    2012-11-01

    Airborne desert dust influences radiative transfer, atmospheric chemistry and dynamics, as well as nutrient transport and deposition. It directly and indirectly affects climate on regional and global scales. Two versions of a parameterization scheme to compute desert dust emissions are incorporated into the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). One uses a globally uniform soil particle size distribution, whereas the other explicitly accounts for different soil textures worldwide. We have tested these two versions and investigated the sensitivity to input parameters, using remote sensing data from the Aerosol Robotic Network (AERONET) and dust concentrations and deposition measurements from the AeroCom dust benchmark database (and others). The two versions are shown to produce similar atmospheric dust loads in the N-African region, while they deviate in the Asian, Middle Eastern and S-American regions. The dust outflow from Africa over the Atlantic Ocean is accurately simulated by both schemes, in magnitude, location and seasonality. Approximately 70% of the modelled annual deposition data and 70-75% of the modelled monthly aerosol optical depth (AOD) in the Atlantic Ocean stations lay in the range 0.5 to 2 times the observations for all simulations. The two versions have similar performance, even though the total annual source differs by ~50%, which underscores the importance of transport and deposition processes (being the same for both versions). Even though the explicit soil particle size distribution is considered more realistic, the simpler scheme appears to perform better in several locations. This paper discusses the differences between the two versions of the dust emission scheme, focusing on their limitations and strengths in describing the global dust cycle and suggests possible future improvements.

  4. Psychometric properties of the Norwegian version of the Safety Attitudes Questionnaire (SAQ), Generic version (Short Form 2006).

    PubMed

    Deilkås, Ellen T; Hofoss, Dag

    2008-09-22

    How to protect patients from harm is a question of universal interest. Measuring and improving safety culture in care-giving units is an important strategy for promoting a safe environment for patients. The Safety Attitudes Questionnaire (SAQ) is the only instrument that measures safety culture in a way which correlates with patient outcome. We have translated the SAQ to Norwegian and validated the translated version. The psychometric properties of the translated questionnaire are presented in this article. The questionnaire was translated with the back-translation technique and tested in 47 clinical units in a Norwegian university hospital. SAQs (the Generic version (Short Form 2006), the version with two sets of questions on perceptions of management: on unit management and on hospital management) were distributed to 1911 frontline staff. 762 were distributed during unit meetings and 1149 through the postal system. Cronbach alphas, item-to-own correlations, and test-retest correlations were calculated, and response distribution analysis and confirmatory factor analysis were performed, as well as early validity tests. 1306 staff members completed and returned the questionnaire: a response rate of 68%. Questionnaire acceptability was good. The reliability measures were acceptable. The factor structure of the responses was tested by confirmatory factor analysis. 36 items were ascribed to seven underlying factors: Teamwork Climate, Safety Climate, Stress Recognition, Perceptions of Hospital Management, Perceptions of Unit Management, Working conditions, and Job satisfaction. Goodness-of-fit indices showed reasonable, but not indisputable, model fit. External validity indicators - recognizability of results, correlations with "trigger tool"-identified adverse events, with patient satisfaction with hospitalization, patient reports of possible maltreatment, and patient evaluation of the organization of hospital work - provided preliminary validation. Based on the data from Akershus University Hospital, we conclude that the Norwegian translation of the SAQ showed satisfactory internal psychometric properties. With data from one hospital only, we cannot draw strong conclusions on its external validity. Further validation studies linking SAQ scores to patient outcome data should be performed.

  5. TWOS - TIME WARP OPERATING SYSTEM, VERSION 2.5.1

    NASA Technical Reports Server (NTRS)

    Bellenot, S. F.

    1994-01-01

    The Time Warp Operating System (TWOS) is a special-purpose operating system designed to support parallel discrete-event simulation. TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual time synchronization based on process rollback and message annihilation. Version 2.5.1 supports simulations and other computations using both virtual time and dynamic load balancing; it does not support general time-sharing or multi-process jobs using conventional message synchronization and communication. The program utilizes the underlying operating system's resources. TWOS runs a single simulation at a time, executing it concurrently on as many processors of a distributed system as are allocated. The simulation needs only to be decomposed into objects (logical processes) that interact through time-stamped messages. TWOS provides transparent synchronization. The user does not have to add any more special logic to aid in synchronization, nor give any synchronization advice, nor even understand much about how the Time Warp mechanism works. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface compatible with TWOS. This means that an application designer and programmer who wish to use TWOS can prototype code on TWSIM on a single processor and/or workstation before having to deal with the complexity of working on a distributed system. TWSIM also provides statistics about the application which may be helpful for determining the correctness of an application and for achieving good performance on TWOS. Version 2.5.1 has an updated interface that is not compatible with 2.0. The program's user manual assists the simulation programmer in the design, coding, and implementation of discrete-event simulations running on TWOS. The manual also includes a practical user's guide to the TWOS application benchmark, Colliding Pucks. TWOS supports simulations written in the C programming language. It is designed to run on the Sun3/Sun4 series computers and the BBN "Butterfly" GP-1000 computer. The standard distribution medium for this package is a .25 inch tape cartridge in TAR format. TWOS was developed in 1989 and updated in 1991. This program is a copyrighted work with all copyright vested in NASA. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
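    As a highly simplified, hedged sketch of the Time Warp idea (optimistic execution with rollback on a straggler message; antimessages, GVT computation, and actual distribution are all omitted), consider:

        # One logical process: execute events optimistically, saving state
        # snapshots; on a straggler, roll back and re-execute. Toy model only.
        class LogicalProcess:
            def __init__(self):
                self.state = 0
                self.processed = []            # executed events, in time order
                self.history = [(0.0, 0)]      # (virtual time, state snapshot)

            def handle(self, t, delta):
                redo = [(t, delta)]
                if t < self.history[-1][0]:    # straggler: roll back past it
                    while self.processed and self.processed[-1][0] > t:
                        redo.append(self.processed.pop())
                    self.history = self.history[:len(self.processed) + 1]
                    self.state = self.history[-1][1]
                for ev_t, ev_d in sorted(redo):    # (re)execute in time order
                    self.state += ev_d
                    self.processed.append((ev_t, ev_d))
                    self.history.append((ev_t, self.state))

        lp = LogicalProcess()
        for t, d in [(1.0, 5), (3.0, 2), (2.0, 7)]:   # the 2.0 event arrives late
            lp.handle(t, d)
        print(lp.state)   # 14, as if events had arrived in timestamp order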

  6. The Lunar Mapping and Modeling Project Update

    NASA Technical Reports Server (NTRS)

    Noble, S.; French, R.; Nall, M.; Muery, K.

    2010-01-01

    The Lunar Mapping and Modeling Project (LMMP) is managing the development of a suite of lunar mapping and modeling tools and data products that support lunar exploration activities, including the planning, design, development, test, and operations associated with crewed and/or robotic operations on the lunar surface. In addition, LMMP should prove to be a convenient and useful tool for scientific analysis and for education and public outreach (E/PO) activities. LMMP will utilize data predominantly from the Lunar Reconnaissance Orbiter, but also historical and international lunar mission data (e.g. Lunar Prospector, Clementine, Apollo, Lunar Orbiter, Kaguya, and Chandrayaan-1) as available and appropriate. LMMP will provide such products as image mosaics, DEMs, hazard assessment maps, temperature maps, lighting maps and models, gravity models, and resource maps. We are working closely with the LRO team to prevent duplication of efforts and ensure the highest quality data products. A beta version of the LMMP software was released for limited distribution in December 2009, with the public release of version 1 expected in the Fall of 2010.

  7. HELAC-Onia 2.0: An upgraded matrix-element and event generator for heavy quarkonium physics

    NASA Astrophysics Data System (ADS)

    Shao, Hua-Sheng

    2016-01-01

    We present an upgraded version (denoted version 2.0) of the program HELAC-ONIA for the automated computation of heavy-quarkonium helicity amplitudes within the non-relativistic QCD framework. The new code has been designed to include many new and useful features for practical phenomenological simulations. It is designed for job submission in a cluster environment for parallel computations via Python scripts. We have interfaced HELAC-ONIA to the parton-shower Monte Carlo programs PYTHIA 8 and QEDPS to take into account parton-shower effects. Moreover, the decay module guarantees that the program can perform the spin-entangled (cascade) decay of heavy quarkonium after its generation. We have also implemented a reweighting method to automatically estimate the uncertainties from renormalization and/or factorization scales as well as parton-distribution functions for weighted or unweighted events. A further update is the possibility of generating one-dimensional or two-dimensional plots, encoded in the analysis files, on the fly. Some dedicated examples are given at the end of the write-up.
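    The reweighting idea is generic enough to sketch: each event weight is rescaled by the ratio of the weighting function evaluated at the varied scale to its nominal value, and the spread of the rescaled totals estimates the scale uncertainty. The "weight function" below is a toy stand-in, not HELAC-ONIA's matrix-element machinery:

        # Toy scale-variation reweighting of weighted events.
        import math, random

        def toy_weight(x, mu):
            return x * math.exp(-x * mu)   # stand-in for a scale-dependent rate

        random.seed(0)
        events = [(random.random(), 1.0) for _ in range(1000)]  # (kinematics, weight)

        for mu in (0.5, 1.0, 2.0):         # vary the scale around mu = 1
            total = sum(w * toy_weight(x, mu) / toy_weight(x, 1.0) for x, w in events)
            print(mu, round(total, 1))     # spread estimates the uncertainty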

  8. ROBUS-2: A Fault-Tolerant Broadcast Communication System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Miner, Paul S.

    2005-01-01

    The Reliable Optical Bus (ROBUS) is the core communication system of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER), a general-purpose fault-tolerant integrated modular architecture currently under development at NASA Langley Research Center. The ROBUS is a time-division multiple access (TDMA) broadcast communication system with medium access control by means of a time-indexed communication schedule. ROBUS-2 is a developmental version of the ROBUS providing guaranteed fault-tolerant services to the attached processing elements (PEs) in the presence of a bounded number of faults. These services include message broadcast (Byzantine Agreement), dynamic communication schedule update, clock synchronization, and distributed diagnosis (group membership). The ROBUS also features fault-tolerant startup and restart capabilities. ROBUS-2 is tolerant to internal as well as PE faults, and incorporates a dynamic self-reconfiguration capability driven by the internal diagnostic system. This version of the ROBUS is intended for laboratory experimentation and demonstrations of the capability to reintegrate failed nodes, dynamically update the communication schedule, and tolerate and recover from correlated transient faults.

  9. Enabling IPv6 at FZU - WLCG Tier2 in Prague

    NASA Astrophysics Data System (ADS)

    Kouba, Tomáš; Chudoba, Jiří; Eliáš, Marek

    2014-06-01

    The usage of the new IPv6 protocol in production is becoming a reality in the HEP community, and the Computing Centre of the Institute of Physics in Prague participates in many IPv6-related activities. Our contribution presents experience with monitoring in the HEPiX distributed IPv6 testbed, which includes 11 remote sites. We use Nagios to check the availability of services and Smokeping for monitoring the network latency. Since it is not always trivial to set up DNS properly in a dual-stack environment, we developed a Nagios plugin for checking whether a domain name is resolvable when using only IP protocol version 6 and only version 4. We will also present local area network monitoring and tuning related to IPv6 performance. One of the most important pieces of software for a grid site is the batch system for job execution. We will present our experience with configuring and running the Torque batch system in a dual-stack environment. We also discuss the steps needed to run VO-specific jobs in our IPv6 testbed.
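    The core of such a plugin is a resolution test restricted to one address family. Here is a small Python sketch; a real Nagios plugin would map the outcome to exit codes 0/2, and the host name is just an example:

        # Check whether a name resolves over IPv6 only and over IPv4 only.
        import socket

        def resolves(host, family):
            try:
                return bool(socket.getaddrinfo(host, None, family))
            except socket.gaierror:
                return False

        host = "www.fzu.cz"   # example host name
        print(host, "AAAA:", resolves(host, socket.AF_INET6),
                    "A:", resolves(host, socket.AF_INET))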

  10. Binary Population and Spectral Synthesis Version 2.1: Construction, Observational Verification, and New Results

    NASA Astrophysics Data System (ADS)

    Eldridge, J. J.; Stanway, E. R.; Xiao, L.; McClelland, L. A. S.; Taylor, G.; Ng, M.; Greis, S. M. L.; Bray, J. C.

    2017-11-01

    The Binary Population and Spectral Synthesis suite of binary stellar evolution models and synthetic stellar populations provides a framework for the physically motivated analysis of both the integrated light from distant stellar populations and the detailed properties of those nearby. We present a new version 2.1 data release of these models, detailing the methodology by which Binary Population and Spectral Synthesis incorporates binary mass transfer and its effect on stellar evolution pathways, as well as the construction of simple stellar populations. We present key tests of the latest Binary Population and Spectral Synthesis model suite, demonstrating its ability to reproduce the colours and derived properties of resolved stellar populations, including well-constrained eclipsing binaries. We consider observational constraints on the ratio of massive star types and the distribution of stellar remnant masses. We describe the identification of supernova progenitors in our models, and demonstrate good agreement with the properties of observed progenitors. We also test our models against photometric and spectroscopic observations of unresolved stellar populations, both in the local and distant Universe, finding that binary models provide a self-consistent explanation for observed galaxy properties across a broad redshift range. Finally, we carefully describe the limitations of our models, and areas where we expect to see significant improvement in future versions.

  11. User guide for MODPATH Version 7—A particle-tracking model for MODFLOW

    USGS Publications Warehouse

    Pollock, David W.

    2016-09-26

    MODPATH is a particle-tracking post-processing program designed to work with MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. MODPATH version 7 is the fourth major release since its original publication. Previous versions were documented in USGS Open-File Reports 89–381 and 94–464 and in USGS Techniques and Methods 6–A41. MODPATH version 7 works with MODFLOW-2005 and MODFLOW–USG. Support for unstructured grids in MODFLOW–USG is limited to smoothed, rectangular-based quadtree and quadpatch grids. A software distribution package containing the computer program and supporting documentation, such as input instructions, output file descriptions, and example problems, is available from the USGS over the Internet (http://water.usgs.gov/ogw/modpath/).

  12. Modeling the Influence of Hemispheric Transport on Trends in O3 Distributions

    EPA Science Inventory

    We describe the development and application of the hemispheric version of the CMAQ model to examine the influence of long-range pollutant transport on trends in surface-level O3 distributions. The WRF-CMAQ model is expanded to hemispheric scales, and multi-decadal model simulations were...

  13. Enhanced representation of soil NO emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    EPA Science Inventory

    Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent their spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community...

  14. A NetCDF version of the two-dimensional energy balance model based on the full multigrid algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Kelin; North, Gerald R.; Stevens, Mark J.

    A NetCDF version of the two-dimensional energy balance model based on the full multigrid method in Fortran is introduced, for both pedagogical and research purposes. Based on the land-sea-ice distribution, orbital elements, greenhouse gas concentrations, and albedo, the code calculates the global seasonal surface temperature. A step-by-step guide with examples is provided for practice.
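
    As a flavor of what an energy balance model computes, the sketch below steps a minimal one-dimensional (latitude-only) analogue to equilibrium. The constants follow the classic North-style parameterization and are illustrative only; this is not taken from the authors' two-dimensional multigrid Fortran code.

      import numpy as np

      # Illustrative North-style constants (assumptions, not the authors' values)
      A, B = 203.3, 2.09       # outgoing longwave: A + B*T [W m^-2], T in deg C
      D = 0.44                 # diffusion coefficient [W m^-2 K^-1]
      Q = 340.0                # mean solar irradiance [W m^-2]
      C = 10.0                 # column heat capacity [W yr m^-2 K^-1]

      n = 90
      x = np.linspace(-0.95, 0.95, n)      # x = sin(latitude), uniform grid
      dx = x[1] - x[0]
      lat = np.rad2deg(np.arcsin(x))
      S = 1.0 - 0.482 * 0.5 * (3.0 * x**2 - 1.0)         # insolation S(x)
      albedo = np.where(np.abs(lat) > 65.0, 0.62, 0.30)  # crude ice line

      T = np.zeros(n)
      dt = 0.005  # years
      for _ in range(5000):  # march toward equilibrium
          # divergence of the diffusive flux D * (1 - x^2) * dT/dx
          flux = D * (1.0 - (x[:-1] + dx / 2.0) ** 2) * np.diff(T) / dx
          dTdt = Q * S * (1.0 - albedo) - (A + B * T)
          dTdt[1:-1] += np.diff(flux) / dx
          T += dt / C * dTdt

      print(f"global mean T ~ {T.mean():.1f} C")   # uniform-x grid is area-weighted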

  15. Software Design Description for the Navy Coastal Ocean Model (NCOM) Version 4.0

    DTIC Science & Technology

    2008-12-31

    Naval Research Laboratory, Stennis Space Center, MS 39529-5004. NRL/MR/7320--08-9149. Approved for public release; distribution is unlimited. Software Design Description for the Navy Coastal Ocean Model (NCOM) Version 4.0. Paul Martin, Charlie N. Barron, Lucy F

  16. Structure and haemostatic effects of generic versions of enoxaparin available for clinical use in Brazil: similarity to the original drug.

    PubMed

    Glauser, Bianca F; Vairo, Bruno C; Oliveira, Stephan-Nicollas M C G; Cinelli, Leonardo P; Pereira, Mariana S; Mourão, Paulo A S

    2012-02-01

    Patent protection for enoxaparin has expired, and generic preparations have been developed and approved for clinical use in different countries. However, there is still skepticism about the possibility of making an exact copy of the original drug due to the complex processes involved in generating low-molecular-weight heparins. We have undertaken a careful analysis of generic versions of enoxaparin available for clinical use in Brazil. Thirty-three batches of active ingredient and 70 of the final pharmaceutical product were obtained from six different suppliers. They were analysed for their chemical composition, molecular size distribution, in vitro anticoagulant activity and pharmacological effects on animal models of experimental thrombosis and bleeding. Clearly, the generic versions of enoxaparin available for clinical use in Brazil are similar to the original drug. Only three out of 33 batches of active ingredient, from one supplier, showed differences in molecular size distribution, resulting from a low percentage of tetrasaccharide or the presence of a minor component eluting as a monosaccharide. Three out of 70 batches of the final pharmaceutical products contained lower amounts of the active ingredient than declared by the suppliers. Our results suggest that the generic versions of enoxaparin are a viable therapeutic option, but their use requires strict regulations to ensure accurate standards.

  17. Analysis of Ultra High Resolution Sea Surface Temperature Level 4 Datasets

    NASA Technical Reports Server (NTRS)

    Wagner, Grant

    2011-01-01

    Sea surface temperature (SST) studies are often focused on improving accuracy, or on understanding and quantifying uncertainties in the measurement, as SST is a leading indicator of climate change and represents the longest time series of any ocean variable observed from space. Over the past several decades SST has been studied with the use of satellite data, which allows a larger area to be studied, with much more frequent measurements, than direct measurements collected aboard ships or from buoys. The Group for High Resolution Sea Surface Temperature (GHRSST) is an international project that distributes satellite-derived sea surface temperature (SST) data from multiple platforms and sensors. The goal of the project is to distribute these SSTs for operational uses such as ocean model assimilation and decision support applications, as well as to support fundamental SST research and climate studies. Examples of near-real-time applications include hurricane and fisheries studies and numerical weather forecasting. The JPL group has produced a new 1 km daily global Level 4 SST product, the Multiscale Ultrahigh Resolution (MUR) product, which blends SST data from three distinct NASA radiometers: the Moderate Resolution Imaging Spectroradiometer (MODIS), the Advanced Very High Resolution Radiometer (AVHRR), and the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E). This new product requires further validation and accuracy assessment, especially in coastal regions. We examined the accuracy of the new MUR SST product by comparing the high-resolution version and a lower-resolution version that has been smoothed to 19 km (but still gridded at 1 km). Both versions were compared to the same data set of in situ buoy temperature measurements, with a focus on study regions of the oceans surrounding North and Central America as well as two smaller regions around the Gulf Stream and the California coast. Ocean fronts exhibit high temperature gradients (Roden, 1976), and thus satellite data of SST can be used in the detection of these fronts. In this case, accuracy is less of a concern because the primary focus is on the spatial derivative of SST. We calculated the gradients for both versions of the MUR data set and did statistical comparisons focusing on the same regions.
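
    Since the comparison here hinges on the spatial derivative of SST rather than its absolute accuracy, a gradient-magnitude field is the natural diagnostic. Below is a hedged numpy sketch; the array name and grid spacing are assumptions for illustration, not the study's actual processing chain.

      import numpy as np

      def gradient_magnitude(sst: np.ndarray, dx_km: float = 1.0) -> np.ndarray:
          """|grad SST| in K/km via centered differences (NaNs propagate)."""
          gy, gx = np.gradient(sst, dx_km)   # derivatives along rows and columns
          return np.hypot(gx, gy)

      # Example: a synthetic front of 2 K across ~10 km on a 1 km grid
      sst = np.tile(15.0 + 2.0 / (1 + np.exp(-np.arange(-50, 50) / 5.0)), (100, 1))
      print(gradient_magnitude(sst).max())   # ~0.1 K/km at the front axis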

  18. New approaches in cataloging and distributing multi-dimensional scientific data: Federal Data Repositories example

    NASA Astrophysics Data System (ADS)

    Devarakonda, R.; Thornton, M.; Wei, Y.; Krishna, B.; Frame, M. T.; Zolly, L.; Records, R.; Palanisamy, G.

    2016-12-01

    Observational data should be collected and stored in a logical and scalable way. Most of the time, observational data capture variables or measurements at an exact point in time and are thus not reproducible; it is therefore imperative that initial data be captured and stored correctly the first time. In this paper, we will discuss how big federal data centers and repositories, such as DOE's Atmospheric Radiation Measurement (ARM) archive, NASA's Distributed Active Archive Center (DAAC) and the USGS's Science Data Catalog (SDC) at Oak Ridge National Laboratory, are preparing, storing and distributing huge multi-dimensional scientific data. We will discuss tools and services, including data formats, that are being used within the ORNL DAAC for managing huge data sets such as Daymet, which provides gridded estimates of various daily weather parameters at a 1 km x 1 km resolution. The recently released Daymet version 3 [1] data set covers the period from January 1, 1980, to December 31, 2015, for North America and Hawaii, including Canada, Mexico, the United States of America, Puerto Rico, and Bermuda. We will also discuss the latest tools and services within ARM and SDC that are built on popular open source software such as Apache Solr 6, Cassandra, Spark, etc. The ARM Data Center (http://www.archive.arm.gov/discovery) archives and distributes various data streams, which are collected through the routine operations and scientific field experiments of the ARM Climate Research Facility. The SDC (http://data.usgs.gov/datacatalog/) provides seamless access to USGS research and monitoring data from across the nation. Every month, tens of thousands of users download portions of these datasets, totaling several terabytes per month. The popularity of the data results from many characteristics, but at the forefront is the careful consideration of community needs, both in terms of data content and accessibility. Fundamental to this is adherence to data archive and distribution best practices, providing open, standardized, and self-describing data which enables development of specialized tools and web services. References: [1] Thornton, P.E., M.M. Thornton, B.W. Mayer, Y. Wei, R. Devarakonda, R.S. Vose, and R.B. Cook. 2016. Daymet: Daily Surface Weather Data on a 1-km Grid for North America, Version 3. ORNL DAAC, Oak Ridge, Tennessee, USA.
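
    As a concrete flavor of working with such gridded archives, the hedged sketch below subsets a Daymet-style NetCDF granule with xarray. The file name, variable name and coordinate ranges are assumptions for illustration rather than a prescription of the DAAC's actual layout.

      import xarray as xr

      # Hypothetical per-variable, per-year granule name (Daymet v3 style)
      ds = xr.open_dataset("daymet_v3_tmax_1990_na.nc4")

      # Pull one year of daily maximum temperature on a small tile, using the
      # dataset's projected x/y coordinates (y typically decreases northward).
      subset = ds["tmax"].sel(x=slice(0, 50_000), y=slice(50_000, 0))
      print(subset.mean(dim="time"))   # 1990 mean tmax on the selected tile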

  19. Milne, a routine for the numerical solution of Milne's problem

    NASA Astrophysics Data System (ADS)

    Rawat, Ajay; Mohankumar, N.

    2010-11-01

    The routine Milne provides accurate numerical values for the classical Milne's problem of neutron transport for the planar, one-speed, isotropic scattering case. The solution is based on the Case eigenfunction formalism. The relevant X functions are evaluated accurately by Double Exponential quadrature. The calculated quantities are the extrapolation distance and the scalar and angular fluxes. Also, the H function needed in astrophysical calculations is evaluated as a byproduct.

    Program summary
    Program title: Milne
    Catalogue identifier: AEGS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGS_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 701
    No. of bytes in distributed program, including test data, etc.: 6845
    Distribution format: tar.gz
    Programming language: Fortran 77
    Computer: PC under Linux or Windows
    Operating system: Ubuntu 8.04 (Kernel version 2.6.24-16-generic), Windows XP
    Classification: 4.11, 21.1, 21.2
    Nature of problem: The X functions are integral expressions. The convergence of these regular and Cauchy Principal Value integrals is impaired by the singularities of the integrand in the complex plane. The DE quadrature scheme tackles these singularities in a robust manner compared to standard Gauss quadrature.
    Running time: The test included in the distribution takes a few seconds to run.

  20. Lambert W function for applications in physics

    NASA Astrophysics Data System (ADS)

    Veberič, Darko

    2012-12-01

    The Lambert W(x) function and its possible applications in physics are presented. The actual numerical implementation in C++ consists of Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion.

    Program summary
    Program title: LambertW
    Catalogue identifier: AENC_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENC_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License version 3
    No. of lines in distributed program, including test data, etc.: 1335
    No. of bytes in distributed program, including test data, etc.: 25 283
    Distribution format: tar.gz
    Programming language: C++ (with suitable wrappers it can be called from C, Fortran, etc.); the supplied command-line utility is suitable for other scripting languages like sh, csh, awk, perl, etc.
    Computer: All systems with a C++ compiler
    Operating system: All Unix flavors, Windows. It might work with others.
    RAM: Small memory footprint, less than 1 MB
    Classification: 1.1, 4.7, 11.3, 11.9
    Nature of problem: Find a fast and accurate numerical implementation for the Lambert W function.
    Solution method: Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion.
    Additional comments: The distribution file contains the command-line utility lambert-w, Doxygen comments included in the source files, and a Makefile.
    Running time: The tests provided take only a few seconds to run.
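
    The core Halley iteration named in the summary is compact enough to sketch. The following Python transcription of the principal branch W0 is a simplified illustration with cruder starting guesses than the paper's branch-point and rational-fit machinery; it is not the distributed C++ code.

      import math

      def lambert_w0(x: float, tol: float = 1e-12) -> float:
          """Principal branch W0(x) for x >= -1/e via Halley iteration on
          f(w) = w*e^w - x (simplified initial guesses; illustrative only)."""
          if x < -1.0 / math.e:
              raise ValueError("W0 is real only for x >= -1/e")
          # crude starting point: series-like guess near 0, log-based for large x
          w = math.log1p(x) if x < math.e else math.log(x) - math.log(math.log(x))
          for _ in range(50):
              ew = math.exp(w)
              f = w * ew - x
              # Halley step: w -= f / (f' - f*f''/(2*f'))
              step = f / (ew * (w + 1.0) - (w + 2.0) * f / (2.0 * w + 2.0))
              w -= step
              if abs(step) <= tol * (1.0 + abs(w)):
                  return w
          return w

      print(lambert_w0(1.0))   # ~0.567143, the omega constant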

  1. Mapping evidence on the distribution of human papillomavirus-related cancers in sub-Saharan Africa: scoping review protocol.

    PubMed

    Lekoane, Bridget K M; Mashamba-Thompson, Tivani P; Ginindza, Themba G

    2017-11-17

    Despite the introduction of HPV vaccines, the incidence of HPV-related cancers (cervical, penile, anal, vulvar, vaginal, head, and neck) in sub-Saharan Africa has been rising. The increasing incidence of these HPV-related cancers has been attributed to changes in lifestyle-related risk factors, most notably sexual behavior. The main objective of this study is to map evidence on the distribution of HPV-related cancers in sub-Saharan Africa (SSA). We will conduct a scoping review to explore, describe, and map the literature on the distribution of HPV-related cancers in SSA. The primary search will include peer-reviewed and review articles. The reference lists of included studies will also be searched. The search will be performed on the EBSCOhost platform using the following databases: Academic Search Complete, Health Source: Nursing/Academic Edition, and CINAHL with Full Text, as well as PubMed, Science Direct, Google Scholar, World Health Organization (WHO) library databases, and gray literature. Articles will be searched using keywords; abstracts and full articles will be screened by two independent reviewers, guided by the inclusion and exclusion criteria. A thematic content analysis will be used to present a narrative account of the reviews, using NVivo version 10. We anticipate finding relevant literature on the distribution of HPV-related cancers in sub-Saharan Africa. The study findings will help reveal research gaps to guide future research. PROSPERO CRD42017062403.

  2. The NERC Vocabulary Server: Version 2.0

    NASA Astrophysics Data System (ADS)

    Leadbetter, A. M.; Lowry, R. K.

    2012-12-01

    The Natural Environment Research Council (NERC) Vocabulary Server (NVS) has been used to publish controlled vocabularies of terms relevant to marine environmental sciences since 2006 (version 0), with version 1 being introduced in 2007. It has been used for:
    - metadata mark-up with verifiable content
    - populating dynamic drop-down lists
    - semantic cross-walk between metadata schemata
    - so-called smart search
    - the semantic enablement of Open Geospatial Consortium (OGC) Web Processing Services in the NERC Data Grid and the European Commission SeaDataNet, Geo-Seas, and European Marine Observation and Data Network (EMODnet) projects.
    The NVS is based on the Simple Knowledge Organization System (SKOS) model. SKOS is based on the "concept", which it defines as a "unit of thought", that is, an idea or notion such as "oil spill". Following a version change for SKOS in 2009 there was a desire to upgrade the NVS to incorporate the changes. This version of SKOS introduces the ability to aggregate concepts in both collections and schemes. The design of version 2 of the NVS uses both types of aggregation: schemes for the discovery of content through hierarchical thesauri, and collections for the publication and addressing of content. Other desired changes from version 1 of the NVS included:
    - the removal of the potential for multiple identifiers for the same concept, to ensure consistent addressing of concepts
    - the addition of content and technical governance information in the payload documents, to provide an audit trail to users of NVS content
    - the removal of XML snippets from concept definitions, in order to correctly validate XML serializations of the SKOS
    - the addition of the ability to map into external knowledge organization systems, in order to extend the knowledge base
    - a more truly RESTful approach to URL access to the NVS, to make the development of applications on top of the NVS easier
    - support for multiple human languages, to increase the user base of the NVS.
    Version 2 of the NVS (NVS2.0) underpins the semantic layer for the Open Service Network for Marine Environmental Data (NETMAR) project, funded by the European Commission under the Seventh Framework Programme. Within NETMAR, NVS2.0 has been used for:
    - semantic validation of inputs to chained OGC Web Processing Services
    - smart discovery of data and services
    - integration of data from distributed nodes of the International Coastal Atlas Network.
    Since its deployment, NVS2.0 has been adopted within the European SeaDataNet community's software products, which has significantly increased the usage of the NVS2.0 Application Programming Interface (API), as illustrated in Table 1. Here we present the results of upgrading the NVS to version 2 and show applications which have been built on top of the NVS2.0 API, including a SPARQL endpoint and a hierarchical catalogue of oceanographic hardware.
    Table 1. NVS2.0 API usage by month from 467 unique IP addresses.
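
    As an illustration of what the SPARQL endpoint mentioned above enables, the hedged sketch below asks a SKOS vocabulary service for the first few members of a concept collection. The endpoint URL and collection URI are assumptions for illustration, not guaranteed NVS2.0 addresses.

      import requests

      ENDPOINT = "http://vocab.nerc.ac.uk/sparql/sparql"   # assumed endpoint

      QUERY = """
      PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
      SELECT ?concept ?label WHERE {
        <http://vocab.nerc.ac.uk/collection/P01/current/> skos:member ?concept .
        ?concept skos:prefLabel ?label .
      } LIMIT 10
      """

      resp = requests.get(
          ENDPOINT,
          params={"query": QUERY},
          headers={"Accept": "application/sparql-results+json"},
          timeout=30,
      )
      resp.raise_for_status()
      for row in resp.json()["results"]["bindings"]:
          print(row["concept"]["value"], "-", row["label"]["value"])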

  3. A graphical user interface (GUI) toolkit for the calculation of three-dimensional (3D) multi-phase biological effective dose (BED) distributions including statistical analyses.

    PubMed

    Kauweloa, Kevin I; Gutierrez, Alonso N; Stathakis, Sotirios; Papanikolaou, Niko; Mavroidis, Panayiotis

    2016-07-01

    A toolkit has been developed for calculating the 3-dimensional biological effective dose (BED) distributions in multi-phase, external beam radiotherapy treatments such as those applied in liver stereotactic body radiation therapy (SBRT) and in multi-prescription treatments. The toolkit also provides a wide range of statistical results related to dose and BED distributions. MATLAB 2010a (version 7.10) was used to create this GUI toolkit. The input data consist of the dose distribution matrices, organ contour coordinates, and treatment planning parameters from the treatment planning system (TPS). The toolkit is capable of calculating the multi-phase BED distributions using different formulas (denoted as true and approximate). Following the calculation of the BED distributions, the dose and BED distributions can be viewed in different projections (e.g. coronal, sagittal and transverse). The different elements of the toolkit are presented and the important steps for the execution of its calculations are illustrated. The toolkit is applied to brain, head & neck, and prostate cancer patients who received primary and boost phases, in order to demonstrate its capability in calculating BED distributions as well as in measuring the inaccuracy and imprecision of the approximate BED distributions. Finally, the clinical situations in which use of the present toolkit would have a significant clinical impact are indicated.
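
    The linear-quadratic BED sum that such a toolkit evaluates voxel-by-voxel is straightforward to sketch. The fragment below is a minimal illustration of the standard multi-phase formula; the toolkit's distinct "true" and "approximate" variants are not reproduced here, and the example doses are invented.

      import numpy as np

      def bed_multiphase(dose_per_phase, n_fractions, alpha_beta):
          """Voxel-wise BED = sum_i n_i * d_i * (1 + d_i / (alpha/beta)).

          dose_per_phase : list of 3-D arrays of dose per fraction d_i [Gy]
          n_fractions    : list of fraction counts n_i, one per phase
          alpha_beta     : tissue-specific alpha/beta ratio [Gy]
          """
          bed = np.zeros_like(dose_per_phase[0])
          for d, n in zip(dose_per_phase, n_fractions):
              bed += n * d * (1.0 + d / alpha_beta)
          return bed

      # Example: a primary phase (25 x 2 Gy) plus a boost (5 x 3 Gy), alpha/beta = 10
      primary = np.full((4, 4, 4), 2.0)
      boost = np.full((4, 4, 4), 3.0)
      print(bed_multiphase([primary, boost], [25, 5], 10.0)[0, 0, 0])  # 79.5 Gy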

  4. Battery Storage Evaluation Tool, version 1.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-02

    The battery storage evaluation tool developed at Pacific Northwest National Laboratory is used to run a one-year simulation to evaluate the benefits of battery storage for multiple grid applications, including energy arbitrage, balancing service, capacity value, distribution system equipment deferral, and outage mitigation. The tool is based on optimal control strategies that capture multiple services from a single energy storage device. In this control strategy, at each hour a lookahead optimization is first formulated and solved to determine the battery base operating point; a minute-by-minute simulation is then performed to simulate the actual battery operation.
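
    The hourly lookahead step described above can be illustrated as a small linear program. The sketch below schedules one day of energy arbitrage only, with invented prices and battery parameters, and omits losses and the other services (balancing, capacity, deferral, outage mitigation) that the actual tool co-optimizes.

      import numpy as np
      from scipy.optimize import linprog

      prices = 30 + 20 * np.sin(np.linspace(0, 2 * np.pi, 24))  # $/MWh, synthetic
      P_MAX, E_CAP, SOC0 = 1.0, 4.0, 2.0    # MW, MWh capacity, initial MWh

      # Decision: hourly power x_t (discharge > 0, charge < 0). Maximize revenue
      # sum(price * x) => minimize -price @ x, keeping the state of charge
      # SOC_t = SOC0 - cumsum(x) inside [0, E_CAP].
      T = np.tril(np.ones((24, 24)))        # cumulative-sum operator
      A_ub = np.vstack([T, -T])
      b_ub = np.concatenate([np.full(24, SOC0),             # cumsum(x) <= SOC0
                             np.full(24, E_CAP - SOC0)])    # -cumsum(x) <= E - SOC0
      res = linprog(-prices, A_ub=A_ub, b_ub=b_ub,
                    bounds=[(-P_MAX, P_MAX)] * 24, method="highs")
      print("revenue: $%.2f" % (prices @ res.x))
      print("schedule:", np.round(res.x, 2))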

  5. Distribution of Report on Procedures to Estimate Nitrogen Oxides (NOx) Emission Increases from Mobile and Area Sources for Prevention of Significant Deterioration (PSD) Increment Analyses

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  6. A Very Large Area Network (VLAN) knowledge-base applied to space communication problems

    NASA Technical Reports Server (NTRS)

    Zander, Carol S.

    1988-01-01

    This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit from the model are discussed, and an enhanced version of the model incorporating the knowledge needed for the missile detection-destruction problem is then presented. A satellite network, or VLAN, is a network which includes at least one satellite. Due to the complexity, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes, with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically, existing in at least two nodes. Each satellite node has a back-up earth node. Knowledge must be distributed in such a way as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.

  7. A fragmentation model of earthquake-like behavior in internet access activity

    NASA Astrophysics Data System (ADS)

    Paguirigan, Antonino A.; Angco, Marc Jordan G.; Bantang, Johnrob Y.

    We present a fragmentation model that generates almost any inverse power-law size distribution, including dual-scaled versions, consistent with the underlying dynamics of systems with earthquake-like behavior. We apply the model to explain the dual-scaled power-law statistics observed in an Internet access dataset that covers more than 32 million requests. The non-Poissonian statistics of the requested data sizes m and the amount of time τ needed for complete processing are consistent with the Gutenberg-Richter law. Inter-event times δt between subsequent requests are also shown to exhibit power-law distributions consistent with the generalized Omori law. Thus, the dataset is similar to earthquake data, except that two power-law regimes are observed. Using the proposed model, we are able to identify the underlying dynamics responsible for generating the observed dual power-law distributions. The model is general enough to apply to any physical or human dynamics limited by finite resources such as space, energy, time or opportunity.
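
    For readers who want to experiment with such statistics, the hedged sketch below draws sizes from a single-regime inverse power law by inverse-transform sampling and recovers the exponent from a log-log fit. The paper's fragmentation model generalizes this to dual-scaled versions; all parameters here are invented.

      import numpy as np

      rng = np.random.default_rng(0)

      def powerlaw_sample(alpha: float, m_min: float, size: int) -> np.ndarray:
          """Draw m ~ p(m) ∝ m^(-alpha), m >= m_min, via inverse transform."""
          u = rng.random(size)
          return m_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

      m = powerlaw_sample(alpha=2.0, m_min=1.0, size=100_000)
      dens, edges = np.histogram(m, bins=np.logspace(0, 3, 30), density=True)
      centers = np.sqrt(edges[1:] * edges[:-1])
      mask = dens > 0
      slope = np.polyfit(np.log(centers[mask]), np.log(dens[mask]), 1)[0]
      print(f"fitted log-log slope ~ {slope:.2f} (expect about -2.0)")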

  8. Mapping the spatial distribution of global anthropogenic mercury atmospheric emission inventories

    NASA Astrophysics Data System (ADS)

    Wilson, Simon J.; Steenhuisen, Frits; Pacyna, Jozef M.; Pacyna, Elisabeth G.

    This paper describes the procedures employed to spatially distribute global inventories of anthropogenic emissions of mercury to the atmosphere, prepared by Pacyna, E.G., Pacyna, J.M., Steenhuisen, F., Wilson, S. [2006. Global anthropogenic mercury emission inventory for 2000. Atmospheric Environment, this issue, doi:10.1016/j.atmosenv.2006.03.041], and briefly discusses the results of this work. A new spatially distributed global emission inventory for the (nominal) year 2000 and a revised version of the 1995 inventory are presented. Emission estimates for total mercury and major species groups are distributed within latitude/longitude-based grids at resolutions of 1°×1° and 0.5°×0.5°. A key component of the spatial distribution procedure is the use of population distribution as a surrogate parameter to distribute emissions from sources that cannot be accurately geographically located. In this connection, new gridded population datasets were prepared, based on the CIESIN GPW3 datasets (CIESIN, 2004. Gridded Population of the World (GPW), Version 3. Center for International Earth Science Information Network (CIESIN), Columbia University and Centro Internacional de Agricultura Tropical (CIAT). GPW3 data are available at http://beta.sedac.ciesin.columbia.edu/gpw/index.jsp). The spatially distributed emission inventories and population datasets prepared in the course of this work are available on the Internet at www.amap.no/Resources/HgEmissions/
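
    The surrogate-distribution step is easy to state concretely: a national total with no point-source coordinates is spread over the country's cells in proportion to gridded population. A toy numpy sketch follows; the arrays are invented stand-ins for the GPW-style population and country-mask grids.

      import numpy as np

      pop = np.array([[10., 40., 0.],
                      [30., 20., 0.]])       # population per grid cell
      country_mask = pop > 0                 # cells belonging to the country
      E_national = 5.0                       # national Hg emissions [tonnes/yr]

      weights = np.where(country_mask, pop, 0.0)
      emis = E_national * weights / weights.sum()
      print(emis)          # per-cell emissions, proportional to population
      print(emis.sum())    # sums back to the national total, 5.0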

  9. Data Citation Concept for CMIP6

    NASA Astrophysics Data System (ADS)

    Stockhause, M.; Toussaint, F.; Lautenschlager, M.; Lawrence, B.

    2015-12-01

    There is a broad consensus among data centers and scientific publishers on Force 11's 'Joint Declaration of Data Citation Principles'. Putting these principles into operation, however, is not always straightforward. The focus for CMIP6 data citations lies on the citation of data created by others and used in an analysis underlying an article; for such source data, an article by the data creators is usually not available ('stand-alone data publication'). The planned data citation granularities are model data (data collections containing all datasets provided for the project by a single model) and experiment data (data collections containing all datasets for a scientific experiment run by a single model). In the case of large international projects or activities like CMIP, the data is commonly stored and disseminated by multiple repositories in a federated data infrastructure such as the Earth System Grid Federation (ESGF). The individual repositories are subject to different institutional and national policies. A Data Management Plan (DMP) will define a certain standard for the repositories, including data handling procedures. Another aspect of CMIP data relevant for data citations is its dynamic nature. For such large data collections, datasets are added, revised and retracted for years before the data collection becomes stable enough for a data citation entity including all model or simulation data. Thus, a critical issue for ESGF is data consistency, requiring thorough dataset versioning to enable the identification of the data collection in the cited version. Currently, the ESGF is designed for accessing the latest dataset versions; data citation introduces the necessity to support older and retracted dataset versions by storing metadata even beyond data availability (data unpublished in ESGF). Apart from ESGF, other infrastructure components exist for CMIP which provide information that has to be connected to the CMIP6 data, e.g. ES-DOC, providing information on models and simulations, and the IPCC Data Distribution Centre (DDC), storing a subset of data together with available metadata (ES-DOC) for the long-term reuse of the interdisciplinary community. Other connections exist to standard project vocabularies, to personal identifiers (e.g. ORCID), or to data products (including provenance information).

  10. Atomization and vaporization characteristics of airblast fuel injection inside a venturi tube

    NASA Technical Reports Server (NTRS)

    Sun, H.; Chue, T.-H.; Lai, M.-C.; Tacina, R. R.

    1993-01-01

    This paper describes the experimental and numerical characterization of capillary fuel injection, atomization, dispersion, and vaporization of liquid fuel in a coflowing air stream inside a single venturi tube. The experimental techniques used are all laser-based. A phase Doppler analyzer was used to characterize the atomization and vaporization process. Planar laser-induced fluorescence visualizations give a good qualitative picture of the fuel droplet and vapor distribution; limited quantitative capabilities of the technique are also demonstrated. A modified version of the KIVA-II code was used to simulate the entire spray process, including breakup and vaporization. The advantage of the venturi nozzle is demonstrated in terms of better atomization, a more uniform fuel/air (F/A) distribution, and lower pressure drop. Multidimensional spray calculations can be used as a design tool only if care is taken with the proper breakup model and the wall impingement process.

  11. New and Improved Version of the ASDC MOPITT Search and Subset Web Application

    Atmospheric Science Data Center

    2016-07-06

    Friday, June 24, 2016. A new and improved version of the ASDC MOPITT Search and Subset Web Application has been released. New features include: Versions 5 and 6 ...

  12. Newly Released TRMM Version 7 Products, GPCP Version 2.2 Precipitation Dataset and Data Services at NASA GES DISC

    NASA Astrophysics Data System (ADS)

    Ostrenga, D.; Liu, Z.; Teng, W. L.; Trivedi, B.; Kempler, S.

    2011-12-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is home to global precipitation product archives, in particular the Tropical Rainfall Measuring Mission (TRMM) products. TRMM is a joint U.S.-Japan satellite mission to monitor tropical and subtropical (40° S - 40° N) precipitation and to estimate its associated latent heating. The TRMM satellite provides the first detailed and comprehensive dataset on the four-dimensional distribution of rainfall and latent heating over vastly undersampled tropical and subtropical oceans and continents. The TRMM satellite was launched on November 27, 1997. TRMM data products are archived at and distributed by the GES DISC. The newly released TRMM Version 7 contains several changes, including new parameters, new products, metadata, data structures, etc. For example, hydrometeor profiles in 2A12 now have 28 layers (14 in V6). New parameters have been added to several popular Level-3 products, such as 3B42 and 3B43. Version 2.2 of the Global Precipitation Climatology Project (GPCP) dataset has been added to the TRMM Online Visualization and Analysis System (TOVAS; URL: http://disc2.nascom.nasa.gov/Giovanni/tovas/), allowing online analysis and visualization without downloading data and software. The GPCP dataset extends back to 1979. Results of a basic intercomparison between the new and previous versions of both TRMM and GPCP will be presented to help understand changes in data product characteristics. To facilitate data and information access and to support precipitation research and applications, we have developed a Precipitation Data and Information Services Center (PDISC; URL: http://disc.gsfc.nasa.gov/precipitation). In addition to TRMM, PDISC provides current and past observational precipitation data. Users can access precipitation data archives consisting of both remote sensing and in-situ observations, and can use these products for a wide variety of activities, including case studies, model evaluation, uncertainty investigation, etc. To support Earth science applications, PDISC provides users near-real-time precipitation products over the Internet. At PDISC, users can access tools and software; documentation, FAQs and assistance are also available. Other capabilities include: 1) Mirador (http://mirador.gsfc.nasa.gov/), a simplified interface for searching, browsing, and ordering Earth science data at the GES DISC, designed to be fast and easy to learn; 2) TOVAS; 3) NetCDF data download for the GIS community; 4) data via OPeNDAP (http://disc.sci.gsfc.nasa.gov/services/opendap/), which provides remote access to individual variables within datasets in a form usable by many tools, such as IDV, McIDAS-V, Panoply, Ferret and GrADS; 5) the Open Geospatial Consortium (OGC) Web Map Service (WMS) (http://disc.sci.gsfc.nasa.gov/services/wxs_ogc.shtml), an interface that allows the use of data and enables clients to build customized maps with data coming from a different network. More details along with examples will be presented.

  13. Version 3.0 of EMINERS - Economic Mineral Resource Simulator

    USGS Publications Warehouse

    Duval, Joseph S.

    2012-01-01

    Quantitative mineral resource assessment, as developed by the U.S. Geological Survey (USGS), consists of three parts: (1) development of grade and tonnage mineral deposit models; (2) delineation of tracts permissive for each deposit type; and (3) probabilistic estimation of the numbers of undiscovered deposits for each deposit type. The estimate of the number of undiscovered deposits at different levels of probability is the input to the EMINERS (Economic Mineral Resource Simulator) program. EMINERS uses a Monte Carlo statistical process to combine probabilistic estimates of undiscovered mineral deposits with models of mineral deposit grade and tonnage to estimate mineral resources. Version 3.0 of the EMINERS program is available as this USGS Open-File Report 2004-1344. Changes from version 2.0 include updating 87 grade and tonnage models, designing new templates to produce graphs showing cumulative distributions and summary tables, and disabling the economic filters. The economic filters were disabled because the embedded data for costs of labor and materials, mining techniques, and beneficiation methods are out of date; however, the cost algorithms used in the disabled filters remain in the program and are available for reference on mining methods and milling techniques. The release notes included with this report give more details on changes in EMINERS over the years. EMINERS is written in C++ and depends upon the Microsoft Visual C++ 6.0 programming environment. The code depends heavily on the use of Microsoft Foundation Classes (MFC) for implementation of the Windows interface. The program works only on personal computers running Microsoft Windows XP or newer; it does not work on Macintosh computers. For help in using the program, see the "Quick-Start Guide for Version 3.0 of EMINERS-Economic Mineral Resource Simulator" (W.J. Bawiec and G.T. Spanski, 2012, USGS Open-File Report 2009-1057), which demonstrates how to execute EMINERS using default settings and existing deposit models.
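
    The Monte Carlo combination EMINERS performs can be sketched compactly: draw a number of undiscovered deposits from the probabilistic estimate, then draw each deposit's tonnage and grade from a grade-and-tonnage model and aggregate. The probabilities and lognormal parameters below are invented for illustration, not taken from the 87 USGS models.

      import numpy as np

      rng = np.random.default_rng(42)

      n_deposits_pmf = {0: 0.2, 1: 0.4, 2: 0.3, 5: 0.1}   # expert estimate (invented)
      ks, ps = zip(*n_deposits_pmf.items())

      def simulate(trials: int = 100_000) -> np.ndarray:
          totals = np.empty(trials)
          for i in range(trials):
              n = rng.choice(ks, p=ps)                             # number of deposits
              tonnage = rng.lognormal(mean=2.0, sigma=1.2, size=n) # Mt of ore
              grade = rng.lognormal(mean=-1.0, sigma=0.5, size=n)  # % metal
              totals[i] = np.sum(tonnage * grade / 100.0)          # Mt contained metal
          return totals

      metal = simulate()
      print("P90/P50/P10 contained metal (Mt):",
            np.percentile(metal, [10, 50, 90]).round(3))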

  14. NOAA's National Air Quality Prediction and Development of Aerosol and Atmospheric Composition Prediction Components for NGGPS

    NASA Astrophysics Data System (ADS)

    Stajner, I.; McQueen, J.; Lee, P.; Stein, A. F.; Wilczak, J. M.; Upadhayay, S.; daSilva, A.; Lu, C. H.; Grell, G. A.; Pierce, R. B.

    2017-12-01

    NOAA's operational air quality predictions of ozone, fine particulate matter (PM2.5) and wildfire smoke over the United States, and of airborne dust over the contiguous 48 states, are distributed at http://airquality.weather.gov. The National Air Quality Forecast Capability (NAQFC) providing these predictions was updated in June 2017. Ozone and PM2.5 predictions are now produced using a system linking the Community Multiscale Air Quality (CMAQ) model version 5.0.2 with meteorological inputs from the North American Mesoscale Forecast System (NAM) version 4. Predictions of PM2.5 include intermittent dust emissions and wildfire emissions from an updated version of the BlueSky system. For the latter, the CMAQ system is initialized by rerunning it over the previous 24 hours to include wildfire emissions at the time when they were observed from satellites. Post-processing to reduce the bias in PM2.5 predictions was updated using the Kalman filter analog (KFAN) technique. Dust-related aerosol species at the CMAQ domain lateral boundaries now come from the NEMS Global Aerosol Component (NGAC) v2 predictions. Further development of NAQFC includes testing of CMAQ predictions out to 72 hours, Canadian fire emissions data from Environment and Climate Change Canada (ECCC), and the KFAN technique to reduce bias in ozone predictions. NOAA is developing the Next Generation Global Prediction System (NGGPS) with an aerosol and gaseous atmospheric composition component to improve and integrate aerosol and ozone predictions and to evaluate their impacts on physics, data assimilation and weather prediction. Efforts are underway to improve cloud microphysics, investigate aerosol effects, and include representations of atmospheric composition of varying complexity in NGGPS: from the operational ozone parameterization and GOCART aerosols with simplified ozone chemistry, to CMAQ chemistry with aerosol modules. We will present progress on community building, planning and development of NGGPS.
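
    The Kalman-filter part of the KFAN idea reduces, in its simplest scalar form, to a recursive estimate of a slowly varying forecast bias. The sketch below shows only that recursion on a synthetic error series; the operational technique applies it in the space of past analog forecasts, which is omitted here, and the noise parameters are invented.

      import numpy as np

      def kf_bias(errors, q=0.01, r=1.0):
          """Track a slowly varying forecast bias b_t from errors e_t = obs - fcst."""
          b, p = 0.0, 1.0                 # initial bias estimate and its variance
          for e in errors:
              p = p + q                   # predict: bias persists, uncertainty grows
              k = p / (p + r)             # Kalman gain
              b = b + k * (e - b)         # update with the newest error
              p = (1.0 - k) * p
              yield b                     # subtract from the next raw forecast

      errors = 2.0 + np.random.default_rng(1).normal(0, 1, 200)  # biased by +2
      print(f"final bias estimate ~ {list(kf_bias(errors))[-1]:.2f} (true 2.0)")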

  15. IDC Re-Engineering Phase 2 System Requirements Document Version 1.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, James M.; Burns, John F.; Satpathi, Meara Allena

    This System Requirements Document (SRD) defines waveform data processing requirements for the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). The IDC applies, on a routine basis, automatic processing methods and interactive analysis to raw International Monitoring System (IMS) data in order to produce, archive, and distribute standard IDC products on behalf of all States Parties. The routine processing includes characterization of events, with the objective of screening out events considered to be consistent with natural phenomena or non-nuclear, man-made phenomena. This document does not address requirements concerning the acquisition, processing and analysis of radionuclide data, but it includes requirements for the dissemination of radionuclide data and products.

  16. LARCMACS: A TEX macro set for typesetting NASA reports

    NASA Technical Reports Server (NTRS)

    Woessner, Linda H.; Mccaskill, Mary K.

    1988-01-01

    This LARCMACS user's manual describes the February 1988 version of LARCMACS, the TEX macro set used by the Technical Editing Branch (TEB) at NASA Langley Research Center. These macros were developed by the authors to facilitate the typesetting of NASA formal reports. They are also useful, however, for informal NASA reports and other technical documents such as meeting papers. LARCMACS are distributed by TEB for the convenience of the Langley TEX user community. LARCMACS contain macros for obtaining the standard double-column format for NASA reports, for typesetting tables in the ruled format traditional in NASA reports, and for typesetting difficult mathematical expressions. Each macro is described and numerous examples are included. Definitions of the LARCMACS macros are also included.

  17. Reliability and Validity of a Shorter Chinese Version for Ryff's Psychological Well-Being Scale

    ERIC Educational Resources Information Center

    Li, Ren-Hau

    2014-01-01

    Objective: The aim of this study was to develop a new and shorter Chinese version of Ryff's psychological well-being scale. Design: Cross-sectional survey. Setting: In recent years there have been several versions of this scale, including 84-item, 54-item and 18-item versions. Researchers in different countries have built on Ryff's version to…

  18. Derivation and Error Analysis of the Earth Magnetic Anomaly Grid at 2 arc min Resolution Version 3 (EMAG2v3)

    NASA Astrophysics Data System (ADS)

    Meyer, B.; Chulliat, A.; Saltus, R.

    2017-12-01

    The Earth Magnetic Anomaly Grid at 2 arc min resolution version 3, EMAG2v3, combines marine and airborne trackline observations, satellite data, and magnetic observatory data to map the location, intensity, and extent of lithospheric magnetic anomalies. EMAG2v3 includes over 50 million new data points added to NCEI's Geophysical Database System (GEODAS) in recent years. The new grid relies only on observed data and does not utilize a priori geologic structure or ocean-age information. Comparing this grid to other global magnetic anomaly compilations (e.g., EMAG2 and WDMAM), we can see that the inclusion of a priori ocean-age patterns imposes an artificial linear pattern on the grid; the data-only approach allows for greater complexity in representing the evolution along oceanic spreading ridges and continental margins. EMAG2v3 also makes use of the satellite-derived lithospheric field model MF7 in order to accurately represent anomalies with wavelengths greater than 300 km and to create smooth grid-merging boundaries. The heterogeneous distribution of errors in the observations used in compiling EMAG2v3 was explored and is reported with the final distributed grid. The grid is delivered both at a continuous altitude of 4 km above the WGS84 ellipsoid and at sea level for all oceanic and coastal regions.

  19. Large-Scale Low-Cost NGS Library Preparation Using a Robust Tn5 Purification and Tagmentation Protocol

    PubMed Central

    Hennig, Bianca P.; Velten, Lars; Racke, Ines; Tu, Chelsea Szu; Thoms, Matthias; Rybin, Vladimir; Besir, Hüseyin; Remans, Kim; Steinmetz, Lars M.

    2017-01-01

    Efficient preparation of high-quality sequencing libraries that well represent the biological sample is a key step for using next-generation sequencing in research. Tn5 enables fast, robust, and highly efficient processing of limited input material while scaling to the parallel processing of hundreds of samples. Here, we present a robust Tn5 transposase purification strategy based on an N-terminal His6-Sumo3 tag. We demonstrate that libraries prepared with our in-house Tn5 are of the same quality as those processed with a commercially available kit (Nextera XT), while they dramatically reduce the cost of large-scale experiments. We introduce improved purification strategies for two versions of the Tn5 enzyme. The first version carries the previously reported point mutations E54K and L372P, and stably produces libraries of constant fragment size distribution, even if the Tn5-to-input molecule ratio varies. The second Tn5 construct carries an additional point mutation (R27S) in the DNA-binding domain. This construct allows for adjustment of the fragment size distribution based on enzyme concentration during tagmentation, a feature that opens new opportunities for use of Tn5 in customized experimental designs. We demonstrate the versatility of our Tn5 enzymes in different experimental settings, including a novel single-cell polyadenylation site mapping protocol as well as ultralow input DNA sequencing. PMID:29118030

  20. Bridge-Scour Data Management System user's manual

    USGS Publications Warehouse

    Landers, Mark N.; Mueller, David S.; Martin, Gary R.

    1996-01-01

    The Bridge-Scour Data Management System (BSDMS) supports preparation, compilation, and analysis of bridge-scour data. The BSDMS provides interactive storage, retrieval, selection, editing, and display of bridge-scour data sets. Bridge-scour data sets include more than 200 site and measurement attributes of the channel geometry, flow hydraulics, hydrology, sediment, geomorphic-setting, location, and bridge specifications. This user's manual provides a general overview of the structure and organization of BSDMS data sets and detailed instructions to operate the program. Attributes stored by the BSDMS are described along with an illustration of the input screen where the attribute can be entered or edited. Measured scour depths can be compared with scour depths predicted by selected published equations using the BSDMS. The selected published equations available in the computational portion of the BSDMS are described. This manual is written for BSDMS, version 2.0. The data base will facilitate: (1) developing improved estimators of scour for specific regions or conditions; (2) describing scour processes; and (3) reducing risk from scour at bridges. BSDMS is available in DOS and UNIX versions. The program was written to be portable and, therefore, can be used on multiple computer platforms. Installation procedures depend on the computer platform, and specific installation instructions are distributed with the software. Sample data files and data sets of 384 pier-scour measurements from 56 bridges in 14 States are also distributed with the software.

  1. Computational Control of Flexible Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Sharpe, Lonnie, Jr.; Shen, Ji Yao

    1994-01-01

    The main objective of this project is to establish a distributed parameter modeling technique for structural analysis, parameter estimation, vibration suppression and control synthesis of large flexible aerospace structures. This report concentrates on the research outputs produced in the last two years of the project. The main accomplishments can be summarized as follows. A new version of the PDEMOD code has been completed. A theoretical investigation of the NASA MSFC two-dimensional ground-based manipulator facility using the distributed parameter modelling technique has been conducted. A new mathematical treatment for the dynamic analysis and control of large flexible manipulator systems has been conceived, which may provide an embryonic form of a more sophisticated mathematical model for future versions of the PDEMOD code.

  2. Flood predictions using the parallel version of distributed numerical physical rainfall-runoff model TOPKAPI

    NASA Astrophysics Data System (ADS)

    Boyko, Oleksiy; Zheleznyak, Mark

    2015-04-01

    The original numerical code TOPKAPI-IMMS of the distributed rainfall-runoff model TOPKAPI (Todini et al., 1996-2014) was developed and implemented in Ukraine. A parallel version of the code has recently been developed for use on multiprocessor systems - multicore/multiprocessor PCs and clusters. The algorithm is based on a binary-tree decomposition of the watershed that balances the amount of computation across processors/cores. The Message Passing Interface (MPI) protocol is used as the parallel computing framework. The numerical efficiency of the parallelization algorithm is demonstrated in case studies of flood prediction for mountain watersheds in the Ukrainian Carpathian region. The modeling results are compared with predictions based on lumped-parameter models.
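
    The balancing idea can be illustrated with a small recursive bisection: subcatchment units carrying work estimates are split into two near-equal-weight halves until one group per processor remains. This is a hedged illustration of the approach with invented workloads, not the TOPKAPI-IMMS implementation; the MPI communication layer is omitted.

      def split_balanced(units, n_parts):
          """Recursively bisect units = [(id, workload), ...] into n_parts groups
          (n_parts a power of two) with near-equal total workload."""
          if n_parts == 1:
              return [units]
          units = sorted(units, key=lambda u: u[1], reverse=True)
          left, right, wl, wr = [], [], 0.0, 0.0
          for u in units:                  # greedy bisection by workload
              if wl <= wr:
                  left.append(u); wl += u[1]
              else:
                  right.append(u); wr += u[1]
          return (split_balanced(left, n_parts // 2) +
                  split_balanced(right, n_parts // 2))

      cells = [(i, w) for i, w in enumerate([5, 3, 8, 1, 7, 2, 6, 4])]
      for rank, part in enumerate(split_balanced(cells, 4)):   # e.g. 4 MPI ranks
          print(f"rank {rank}: work={sum(w for _, w in part)} "
                f"cells={[i for i, _ in part]}")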

  3. Computerized Engineering

    NASA Technical Reports Server (NTRS)

    1998-01-01

    In 1966, MacNeal-Schwendler Corporation (MSC) was awarded a contract by NASA to develop a general purpose structural analysis program dubbed NASTRAN (NASA structural analysis). The first operational version was delivered in 1969. In 1982, MSC procured the rights to market their subsequent version of NASTRAN to industry as a problem solver for applications ranging from acoustics to heat transfer. Known today as MSC/NASTRAN, the program has thousands of users worldwide. NASTRAN is also distributed through COSMIC.

  4. Lowell proper motion survey: Southern Hemisphere (Giclas, Burnham, and Thomas 1978). Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.

    1989-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The catalog is a summary compilation of the Lowell Proper Motion Survey for the Southern Hemisphere, as completed to mid-1978 and published in the Lowell Observatory Bulletins. This summary catalog serves as a Southern Hemisphere companion to the Lowell Proper Motion Survey, Northern Hemisphere.

  5. Weighted Lin-Wang Tests for Crossing Hazards

    PubMed Central

    Koziol, James A.; Jia, Zhenyu

    2014-01-01

    Lin and Wang have introduced a quadratic version of the logrank test, appropriate for situations in which the underlying survival distributions may cross. In this note, we generalize the Lin-Wang procedure to incorporate weights and investigate the performance of Lin and Wang's test and weighted versions in various scenarios. We find that weighting does increase statistical power in certain situations; however, none of the procedures was dominant under every scenario. PMID:24795776

  6. Method of Characteristic (MOC) Nozzle Flowfield Solver - User’s Guide and Input Manual Version 2.0

    DTIC Science & Technology

    2018-01-01

    TECHNICAL REPORT RDMR-SS-17-13. Method of Characteristic (MOC) Nozzle Flowfield Solver - User's Guide and Input Manual, Version 2.0. Kevin D. Kennedy, System Simulation and Development Directorate, Aviation and Missile Research, Development, and Engineering Center, January 2018. Distribution Statement...

  7. Documentation for the machine-readable version of A Finding List for the Multiplet Tables of NSRDS-NBS 3, Sections 1-10 (Adelman, Adelman, Fischel and Warren 1984)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    The machine-readable finding list, as it is currently being distributed from the Astronomical Data Center, is described. This version of the list supersedes an earlier one (1977) containing only Sections 1 through 7 of the NSRDS-NBS 3 multiplet tables publications. Additional sections are to be incorporated into this list as they are published.

  8. Fast computation of close-coupling exchange integrals using polynomials in a tree representation

    NASA Astrophysics Data System (ADS)

    Wallerberger, Markus; Igenbergs, Katharina; Schweinzer, Josef; Aumayr, Friedrich

    2011-03-01

    The semi-classical atomic-orbital close-coupling method is a well-known approach for the calculation of cross sections in ion-atom collisions. It relies strongly on the fast and stable computation of exchange integrals. We present an upgrade to earlier implementations of the Fourier-transform method. For this purpose, we implement an extensive library for the symbolic storage of polynomials, relying on sophisticated tree structures to allow fast manipulation and numerically stable evaluation. Using this library, we considerably speed up the creation and computation of exchange integrals, which enables us to compute cross sections for more complex collision systems.

    Program summary
    Program title: TXINT
    Catalogue identifier: AEHS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHS_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 12 332
    No. of bytes in distributed program, including test data, etc.: 157 086
    Distribution format: tar.gz
    Programming language: Fortran 95
    Computer: All with a Fortran 95 compiler
    Operating system: All with a Fortran 95 compiler
    RAM: Depends heavily on input, usually less than 100 MiB
    Classification: 16.10
    Nature of problem: Analytical calculation of one- and two-center exchange matrix elements for the close-coupling method in the impact parameter model.
    Solution method: Similar to the code of Hansen and Dubois [1], we use the Fourier-transform method suggested by Shakeshaft [2] to compute the integrals. However, we heavily speed up the calculation using a library for symbolic manipulation of polynomials.
    Restrictions: We restrict ourselves to a defined collision system in the impact parameter model.
    Unusual features: A library for symbolic manipulation of polynomials, where polynomials are stored in a space-saving left-child right-sibling binary tree. This provides stable numerical evaluation and fast mutation while maintaining full compatibility with the original code.
    Additional comments: This program makes heavy use of the features introduced by the Fortran 90 standard, most prominently pointers, derived types and allocatable structures, and a small portion of Fortran 95. Only newer compilers support these features. The following compilers support all features needed by the program: GNU Fortran Compiler "gfortran" from version 4.3.0; GNU Fortran 95 Compiler "g95" from version 4.2.0; Intel Fortran Compiler "ifort" from version 11.0.

  9. Application of the MacCormack scheme to overland flow routing for high-spatial resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia

    2018-03-01

    Although process-based distributed hydrological models (PDHMs) have been evolving rapidly over the last few decades, their extensive application is still challenged by computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, namely the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (in contrast with a day and a half for the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs to either significantly improve their computational efficiency or make kinematic wave routing for high-resolution modeling computationally feasible.
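
    For readers unfamiliar with the scheme, the sketch below applies the full (predictor-corrector) MacCormack method to a 1-D kinematic wave with a power-law rating curve, the equation family used for overland flow routing. All parameters are invented and the fragment is an illustration of the numerical scheme, not the DHSVM-MacCormack code.

      import numpy as np

      # Kinematic wave: dh/dt + dq/dx = r, with q = alpha * h**m
      alpha, m = 1.5, 5.0 / 3.0          # Manning-type rating curve
      nx, dx, dt = 100, 10.0, 0.5        # cells, spacing [m], time step [s]
      r = 1e-5                           # lateral inflow (rain excess) [m/s]

      h = np.full(nx, 1e-4)              # initial thin sheet of water [m]
      q = lambda h: alpha * h**m

      for _ in range(2000):
          # predictor: forward difference in space
          hp = h.copy()
          hp[:-1] = h[:-1] - dt / dx * (q(h)[1:] - q(h)[:-1]) + dt * r
          # corrector: backward difference on the predicted field, then average
          hc = hp.copy()
          hc[1:] = hp[1:] - dt / dx * (q(hp)[1:] - q(hp)[:-1]) + dt * r
          h_new = 0.5 * (h + hc)
          h_new[0] = 1e-4                # upstream boundary: no inflow
          h = np.maximum(h_new, 0.0)

      print(f"outlet depth {h[-1]:.4f} m, outlet discharge {q(h)[-1]:.6f} m^2/s")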

  10. Technical Data Exchange Software Tools Adapted to Distributed Microsatellite Design

    NASA Astrophysics Data System (ADS)

    Pache, Charly

    2002-01-01

    One critical issue in the distributed design of satellites is the collaborative work it requires. In particular, the exchange of data between the groups responsible for each subsystem can be complex and very time-consuming. The goal of this paper is to present a collaborative design tool, the SSETI Design Model (SDM), specifically developed to enable distributed satellite design. SDM is actually used in the ongoing Student Space Exploration & Technology (SSETI) initiative (www.sseti.net). SSETI is led by the European Space Agency (ESA) outreach office (http://www.estec.esa.nl/outreach), involving student groups from all over Europe in the design, construction and launch of a microsatellite. The first part of this paper presents the current version of the SDM tool, a collection of linked Microsoft Excel worksheets, one for each subsystem. An overview of the project framework/structure is given, explaining the different actors, the flows between them, as well as the different types of data and the links - formulas - between data sets. Unified Modeling Language (UML) diagrams give an overview of the different parts. Then the SDM's functionalities, developed as VBA (Visual Basic for Applications) scripts, are introduced, as well as the interactive features, user interfaces and administration tools. The second part discusses the capabilities and limitations of the current version of SDM. Taking these capabilities and limitations into account, the third part outlines the next version of SDM, a web-oriented, database-driven evolution of the current version. This new approach will enable real-time data exchange and processing between the different actors of the mission. Comprehensive UML diagrams will guide the audience through the entire modeling process of such a system. Trade-off simulation capabilities, security, reliability, hardware and software issues will also be thoroughly discussed.

  11. GENXICC2.1: An improved version of GENXICC for hadronic production of doubly heavy baryons

    NASA Astrophysics Data System (ADS)

    Wang, Xian-You; Wu, Xing-Gang

    2013-03-01

    We present an improved version of GENXICC, which is a generator for hadronic production of the doubly heavy baryons Ξcc, Ξbc and Ξbb and has been introduced by C.H. Chang, J.X. Wang and X.G. Wu [Comput. Phys. Commun. 177 (2007) 467; Comput. Phys. Commun. 181 (2010) 1144]. In comparison with the previous GENXICC versions, we update the program in order to generate the unweighted baryon events more effectively under various simulation environments, whose distributions are now generated according to the probability proportional to the integrand. One Les Houches Event (LHE) common block has been added to produce a standard LHE data file that contains useful information of the doubly heavy baryon and its accompanying partons. Such LHE data can be conveniently imported into PYTHIA to do further hadronization and decay simulation, especially, the color-flow problem can be solved with PYTHIA8.0. NEW VERSION PROGRAM SUMMARYTitle of program: GENXICC2.1 Program obtained from: CPC Program Library Reference to original program: GENXICC Reference in CPC: Comput. Phys. Commun. 177, 467 (2007); Comput. Phys. Commun. 181, 1144 (2010) Does the new version supersede the old program: No Computer: Any LINUX based on PC with FORTRAN 77 or FORTRAN 90 and GNU C compiler as well Operating systems: LINUX Programming language used: FORTRAN 77/90 Memory required to execute with typical data: About 2.0 MB No. of bytes in distributed program: About 2 MB, including PYTHIA6.4 Distribution format: .tar.gz Nature of physical problem: Hadronic production of doubly heavy baryons Ξcc, Ξbc and Ξbb. Method of solution: The upgraded version with a proper interface to PYTHIA can generate full production and decay events, either weighted or unweighted, conveniently and effectively. Especially, the unweighted events are generated by using an improved hit-and-miss approach. Reasons for new version: Responding to the feedback from users of CMS and LHCb groups at the Large Hadron Collider, and based on the recent improvements of PYTHIA on the color-flow problem, we improve the efficiency for generating the unweighted events, and also improve the color-flow part for further hadronization. Especially, an interface has been added to import the output production events into a suitable form for PYTHIA8.0 simulation, in which the color-flow during the simulation can be correctly set. Typical running time: It depends on which option is chosen to match PYTHIA when generating the full events and also on which mechanism is chosen to generate the events. Typically, for the dominant gluon-gluon fusion mechanism to generate the mixed events via the intermediate diquarks in (cc)[3S1]3¯ and (cc)[1S0]6 states, setting IDWTUP=3 and unwght =.true., it takes 30 min to generate 105 unweighted events on a 2.27 GHz Intel Xeon E5520 processor machine; setting IDWTUP=3 and unwght =.false. or IDWTUP=1 and IGENERATE=0, it only needs 2 min to generate the 105 baryon events (the fastest way, for theoretical purposes only). As a comparison, for previous GENXICC versions, if setting IDWTUP=1 and IGENERATE=1, it takes about 22 hours to generate 1000 unweighted events. Keywords: Event generator; Doubly heavy baryons; Hadronic production. 
Summary of the changes (improvements): (1) the scheme for generating unweighted events has been improved; (2) a Les Houches Event (LHE) common block has been added to record the standard LHE data so that it serves as correct input to PYTHIA8.0 for later simulation; (3) we present the code for connecting GENXICC to PYTHIA8.0, where three color flows have to be correctly set for later simulation. More specifically, the changes, together with their detailed explanations, are presented in the following:
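
    As an aside on the improved hit-and-miss approach mentioned under "Method of solution", acceptance-rejection unweighting can be sketched in a few lines of Python; the proposal and weight function below are hypothetical stand-ins, not GENXICC's actual integrand:

        import random

        def unweight_events(sample_event, weight, w_max, n_wanted):
            """Hit-and-miss (acceptance-rejection) unweighting.

            sample_event: draws a candidate event from the proposal density
            weight:       returns the event weight (proportional to the integrand)
            w_max:        an upper bound on the weight over the sampled region
            """
            events = []
            while len(events) < n_wanted:
                ev = sample_event()
                # accept with probability weight/w_max; accepted events are unweighted
                if random.random() * w_max <= weight(ev):
                    events.append(ev)
            return events

        # toy usage with a hypothetical 1-D "event": x in [0, 1], weight ~ x**2
        evts = unweight_events(lambda: random.random(), lambda x: x * x, 1.0, 1000)

    The efficiency of such a scheme is the mean weight divided by w_max, which is why a tighter bound (or a better proposal) shortens the running times quoted above.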

  12. The Polish version of Skindex-29: psychometric properties of an instrument to measure quality of life in dermatology.

    PubMed

    Janowski, Konrad; Steuden, Stanisława; Bereza, Bernarda

    2014-02-01

    Skin conditions have a negative impact on quality of life, and it is necessary to quantify this impact. Skindex-29 is a self-report questionnaire developed to measure dermatology-specific quality of life. The objective of this study is to adapt this questionnaire to Polish conditions. The adaptation procedure involved work on the linguistic content of the items and testing the psychometric properties of the Polish version of Skindex-29, including item characteristics, factorial structure, and aspects of reliability and validity. Two hundred and ninety patients (63.4% women and 35.2% men) suffering from a range of skin conditions were recruited from several dermatological out-patient and in-patient clinics in Poland. Quality of life was measured using Skindex-29 and appropriate clinical data were collected. The global score of Skindex-29 showed a normal distribution. Cronbach's α reliability coefficients were found to be high to very high for all Skindex-29 indexes. Factor analysis yielded four factors, in contrast to the original version of the questionnaire, for which a three-factor solution had been reported. Skindex-29 validity was demonstrated by showing differences in the quality of life scores across different diagnostic categories, and between in-patients and out-patients. Skindex-29 global scores were found to be significantly predicted by the localization of the skin lesions on the legs, anogenital areas and palms. The findings of this study support the reliability and validity of the Polish version of Skindex-29, but they also raise questions about its three-factor structure.
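
    Cronbach's α, the reliability coefficient reported above, is computed from the item-score matrix as α = k/(k−1) · (1 − Σσ²_item/σ²_total); a minimal Python sketch, with simulated data standing in for the Skindex-29 responses:

        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """items: respondents x items score matrix."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
            total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(0)
        scores = rng.integers(0, 5, size=(290, 29))  # 290 respondents, 29 items (simulated)
        print(cronbach_alpha(scores))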

  13. DIRAC: A new version of computer algebra tools for studying the properties and behavior of hydrogen-like ions

    NASA Astrophysics Data System (ADS)

    McConnell, Sean; Fritzsche, Stephan; Surzhykov, Andrey

    2010-03-01

    During recent years, the DIRAC package has proved to be an efficient tool for studying the structural properties and dynamic behavior of hydrogen-like ions. Originally designed as a set of MAPLE procedures, this package provides interactive access to the wave and Green's functions in the non-relativistic and relativistic frameworks and supports analytical evaluation of a large number of radial integrals that are required for the construction of transition amplitudes and interaction cross sections. We provide here a new version of the DIRAC program which is developed within the framework of MATHEMATICA (version 6.0). This new version aims to cater to a wider community of researchers that use the MATHEMATICA platform and to take advantage of the generally faster processing times therein. Moreover, the addition of new procedures, a more convenient and detailed help system, as well as source code revisions to overcome identified shortcomings should ensure expanded use of the new DIRAC program over its predecessor. New version program summary Program title: DIRAC Catalogue identifier: ADUQ_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUQ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 45 073 No. of bytes in distributed program, including test data, etc.: 285 828 Distribution format: tar.gz Programming language: Mathematica 6.0 or higher Computer: All computers with a license for the computer algebra package Mathematica (version 6.0 or higher) Operating system: Mathematica is O/S independent Classification: 2.1 Catalogue identifier of previous version: ADUQ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 165 (2005) 139 Does the new version supersede the previous version?: Yes Nature of problem: Since the early days of quantum mechanics, the "hydrogen atom" has served as one of the key models for studying the structure and dynamics of various quantum systems. Its analytic solutions are frequently used in case studies in atomic and molecular physics, quantum optics, plasma physics, or even in the field of quantum information and computation. Fast and reliable access to the functions and properties of hydrogenic systems is frequently required, in both the non-relativistic and relativistic frameworks. Despite all the knowledge about one-electron ions, providing such access is not a simple task, owing to the rather complicated mathematical structure of the Schrödinger and especially Dirac equations. Moreover, for analyzing experimental results as well as for performing advanced theoretical studies one often needs (apart from the detailed information on atomic wave- and Green's functions) to be able to calculate a number of integrals involving these functions. Although for many types of transition operators these integrals can be evaluated analytically in terms of special mathematical functions, such an evaluation is usually rather involved and prone to mistakes. Solution method: A set of Mathematica procedures is developed which provides both the non-relativistic and relativistic solutions of the "hydrogen atom model". It facilitates, moreover, the symbolic evaluation of integrals involved in the calculations of cross sections and transition amplitudes. 
These procedures are based on a large number of relations among special mathematical functions, information about their integral representations, recurrence formulae and series expansions. Based on this knowledge, the DIRAC tools provide fast and reliable algebraic (and, if necessary, numeric) manipulation of the functions and properties of one-electron systems, thus helping to obtain further insight into the behavior of quantum physical systems. Reasons for new version: The original version of the DIRAC program was developed as a toolbox of Maple procedures and was submitted to the CPC library in 2004 (cf. Ref. [1]). Since then DIRAC has found its niche in advanced theoretical studies carried out in the realm of heavy-ion physics. With the help of this program, detailed analysis has been performed, in particular, for the various excitation and ionization processes occurring in relativistic ion-atom collisions [2], the polarization of the characteristic X-ray radiation following radiative electron capture [3], the correlation properties of the two-photon emission from few-electron heavy ions [4], the spin entanglement phenomena in atomic photoionization [5] and even for exploring the vibrational excitations of heavy nuclei [6]. Although these studies have conclusively proven the potential of the program, they have also illuminated routes for its further enhancement. Apart from certain source code revisions, demand has grown for a new version of DIRAC compatible with the Mathematica platform. The version presented here includes a wider-ranging and more user-friendly interactive help system, a number of new procedures and reprogramming for greater computational efficiency. Summary of revisions: The most important new capabilities of the DIRAC program since the previous version are: The utilization of the Mathematica (version 6.0) platform. The addition of a number of new procedures. Since the complete list of the new (and updated) procedures can be found in the interactive help library of the program, we mention here only the most important ones: DiracGlobal[] - Displays a list of the current global settings which specify the framework, nuclear charge and the units which are to be used by the DIRAC program. DiracRadialOrbitalMomentum[] - Returns a non-relativistic radial orbital in momentum space for both the bound and free electron states. DiracSlaterRadial[] - Evaluates the radial Slater integral with both the non-relativistic and relativistic wavefunctions. In the previous version of the program this procedure was restricted to the non-relativistic framework only. DiracGreensIntegralRadial[] - Evaluates the two-dimensional radial integrals with the wave- and Green's functions in both the non-relativistic and relativistic frameworks. DiracAngularMatrixElement[] - Calculates the angular matrix elements for various irreducible tensor operators. The elimination of some redundant procedures. In particular, the previous version supported evaluation of the spherical Bessel functions, Wigner 3j symbols, Clebsch-Gordan coefficients and spherical harmonics functions. These tools are now superseded by built-in procedures of Mathematica. The development of a full-featured interactive help system which follows the style of the Mathematica Help Pages. Extensive revision of the source code in order to correct a number of bugs and inconsistencies that had been identified during use of the previous version of DIRAC. 
The DIRAC package is distributed as a compressed tar file from which the DIRAC root directory can be (re-)generated. The root directory contains the source code and help libraries, a "Readme" file, Dirac_Installation_Instructions, as well as the notebook DemonstrationNotebook.nb that includes a number of test cases to illustrate the use of the program. These test cases, which concern the theoretical analysis of wavefunctions and the fine structure of hydrogen-like ions, have already been discussed in detail in Ref. [1] and are provided here in order to underline the continuity between the previous (Maple) and new (Mathematica) versions of the DIRAC program. Unusual features: Even though all basic features of the previous Maple version have been retained in as close to the original form as possible, some small syntax changes became necessary in the new version of DIRAC in order to follow Mathematica standards. First of all, these changes concern naming conventions for DIRAC's procedures. As was discussed in Ref. [1], previously rather long names were employed in which each word was separated by an underscore. For example, when running the Maple version of the program one had to call the procedure Dirac_Slater_radial() in order to evaluate the Slater integral. Such a naming convention, however, cannot be used in the Mathematica framework, where the underscore character is reserved to represent Blank, a built-in symbol. In the new version of DIRAC we therefore follow the Mathematica convention of delimiting each word in a procedure's name by capitalization. Evaluation of the Slater integral can now be accomplished simply by entering DiracSlaterRadial[]. Besides procedure names, a new convention is introduced to represent fundamental physical constants. In this version of DIRAC the group of (preset) global variables has been changed to resemble their conventional symbols, specifically α, a, e, m, c and ℏ, being the fine structure constant, Bohr radius, electron charge, electron mass, speed of light and the reduced Planck constant, respectively. If the numerical evaluator N is wrapped around any of these constants, their numerical values are returned. Running time: Although the program replies promptly upon most requests, the running time depends on the particular task. For example, computation of (radial) matrix elements involving components of relativistic wavefunctions might require a few seconds of runtime. A number of test calculations performed for this and other tasks clearly indicate that the new version of DIRAC requires up to 90% less evaluation time compared to its predecessor. References: [1] A. Surzhykov, P. Koval, S. Fritzsche, Comput. Phys. Comm. 165 (2005) 139. [2] H. Ogawa, et al., Phys. Rev. A 75 (2007) 1. [3] A.V. Maiorova, et al., J. Phys. B: At. Mol. Opt. Phys. 42 (2009) 125003. [4] L. Borowska, A. Surzhykov, Th. Stöhlker, S. Fritzsche, Phys. Rev. A 74 (2006) 062516. [5] T. Radtke, S. Fritzsche, A. Surzhykov, Phys. Rev. A 74 (2006) 032709. [6] A. Pálffy, Z. Harman, A. Surzhykov, U.D. Jentschura, Phys. Rev. A 75 (2007) 012712.
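
    For orientation, the kind of closed-form object these procedures manipulate is exemplified by the non-relativistic bound-state radial orbital of a hydrogen-like ion with nuclear charge Z (atomic units; modern convention for the associated Laguerre polynomials):

        R_{nl}(r) = \sqrt{ (2Z/n)^3 \, \frac{(n-l-1)!}{2n\,(n+l)!} } \; e^{-Zr/n} \, \left(\frac{2Zr}{n}\right)^{l} L_{n-l-1}^{2l+1}\!\left(\frac{2Zr}{n}\right)

    The relativistic counterparts have large and small radial components with a similar confluent-hypergeometric structure, which is what makes fully symbolic evaluation of the radial integrals feasible.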

  14. PC Utilities: Small Programs with a Big Impact

    ERIC Educational Resources Information Center

    Baule, Steven

    2004-01-01

    Three types of utility programs are available on the Internet: commercial programs, which are like software packages purchased through a vendor or over the Internet; shareware programs, which are developed by individuals and distributed via the Internet for a small fee that obtains the complete version of the product; and freeware programs, which are distributed via the Internet free of cost.…

  15. Flux tubes in the SU(3) vacuum

    NASA Astrophysics Data System (ADS)

    Cardaci, M. S.; Cea, P.; Cosmai, L.; Falcone, R.; Papa, A.

    We analyze the distribution of the chromoelectric field generated by a static quark-antiquark pair in the SU(3) vacuum. We find that the transverse profile of the flux tube resembles the dual version of the Abrikosov vortex field distribution and give an estimate of the London penetration length in the confined vacuum.
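
    For reference, in the London limit the field profile of an Abrikosov vortex carrying flux Φ is

        B(x_t) = \frac{\Phi}{2\pi\lambda^2} \, K_0\!\left(\frac{x_t}{\lambda}\right),

    with λ the penetration length and K₀ a modified Bessel function; in the dual picture invoked here the longitudinal chromoelectric field plays the role of B, and fitting the measured transverse profile to such a form yields the quoted estimate of the London penetration length. (This is the standard London-model expression, given for orientation; the paper's actual fit ansatz may differ.)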

  16. The Spatial Distribution of Attention within and across Objects

    ERIC Educational Resources Information Center

    Hollingworth, Andrew; Maxcey-Richard, Ashleigh M.; Vecera, Shaun P.

    2012-01-01

    Attention operates to select both spatial locations and perceptual objects. However, the specific mechanism by which attention is oriented to objects is not well understood. We examined the means by which object structure constrains the distribution of spatial attention (i.e., a "grouped array"). Using a modified version of the Egly et…

  17. PISMA: A Visual Representation of Motif Distribution in DNA Sequences.

    PubMed

    Alcántara-Silva, Rogelio; Alvarado-Hermida, Moisés; Díaz-Contreras, Gibrán; Sánchez-Barrios, Martha; Carrera, Samantha; Galván, Silvia Carolina

    2017-01-01

    Because the graphical presentation and analysis of motif distribution can provide insights for experimental hypotheses, PISMA aims to identify motifs in DNA sequences and to count and display them graphically. The motif length ranges from 2 to 10 bases, and the DNA sequences range up to 10 kb. The motif distribution is shown as a bar-code-like scheme, as a gene-map-like scheme, and as a transcript scheme. We obtained graphical schemes of the CpG site distribution from 91 human papillomavirus genomes. We also present 2 analyses: one of DNA motifs associated with either methylation-resistant or methylation-sensitive CpG islands, and another of motifs associated with exosome RNA secretion. PISMA is developed in Java; it is executable on any type of hardware and under diverse operating systems. PISMA is freely available to noncommercial users. The English version and the User Manual are provided in Supplementary Files 1 and 2, and a Spanish version is available at www.biomedicas.unam.mx/wp-content/software/pisma.zip and www.biomedicas.unam.mx/wp-content/pdf/manual/pisma.pdf.

  18. PISMA: A Visual Representation of Motif Distribution in DNA Sequences

    PubMed Central

    Alcántara-Silva, Rogelio; Alvarado-Hermida, Moisés; Díaz-Contreras, Gibrán; Sánchez-Barrios, Martha; Carrera, Samantha; Galván, Silvia Carolina

    2017-01-01

    Background: Because the graphical presentation and analysis of motif distribution can provide insights for experimental hypotheses, PISMA aims to identify motifs in DNA sequences and to count and display them graphically. The motif length ranges from 2 to 10 bases, and the DNA sequences range up to 10 kb. The motif distribution is shown as a bar-code-like scheme, as a gene-map-like scheme, and as a transcript scheme. Results: We obtained graphical schemes of the CpG site distribution from 91 human papillomavirus genomes. We also present 2 analyses: one of DNA motifs associated with either methylation-resistant or methylation-sensitive CpG islands, and another of motifs associated with exosome RNA secretion. Availability and Implementation: PISMA is developed in Java; it is executable on any type of hardware and under diverse operating systems. PISMA is freely available to noncommercial users. The English version and the User Manual are provided in Supplementary Files 1 and 2, and a Spanish version is available at www.biomedicas.unam.mx/wp-content/software/pisma.zip and www.biomedicas.unam.mx/wp-content/pdf/manual/pisma.pdf. PMID:28469418
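
    The core operation PISMA performs, locating and counting a short motif along a sequence, reduces to a sliding-window scan; an illustrative sketch in Python (PISMA itself is implemented in Java, and the toy sequence is hypothetical):

        def motif_positions(seq: str, motif: str) -> list[int]:
            """0-based start positions of (possibly overlapping) motif hits."""
            hits, k = [], len(motif)
            for i in range(len(seq) - k + 1):
                if seq[i:i + k] == motif:
                    hits.append(i)
            return hits

        genome = "ACGCGTTACGCGCGA"          # toy sequence standing in for an HPV genome
        cpg = motif_positions(genome, "CG")
        print(len(cpg), cpg)                # count plus a bar-code-like position list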

  19. A massively parallel computational approach to coupled thermoelastic/porous gas flow problems

    NASA Technical Reports Server (NTRS)

    Shia, David; Mcmanus, Hugh L.

    1995-01-01

    A new computational scheme for coupled thermoelastic/porous gas flow problems is presented. Heat transfer, gas flow, and dynamic thermoelastic governing equations are expressed in fully explicit form, and solved on a massively parallel computer. The transpiration cooling problem is used as an example problem. The numerical solutions have been verified by comparison to available analytical solutions. Transient temperature, pressure, and stress distributions have been obtained. Small spatial oscillations in pressure and stress have been observed, which would be impractical to predict with previously available schemes. Comparisons between serial and massively parallel versions of the scheme have also been made. The results indicate that for small scale problems the serial and parallel versions use practically the same amount of CPU time. However, as the problem size increases the parallel version becomes more efficient than the serial version.
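
    The fully explicit formulation described above can be illustrated in miniature by a one-dimensional explicit heat-conduction update; this is a sketch of the general scheme under assumed parameter values, not the coupled thermoelastic/porous-flow code itself:

        import numpy as np

        # 1-D explicit (forward-time, centered-space) heat equation update:
        # T_i^{n+1} = T_i^n + r (T_{i+1}^n - 2 T_i^n + T_{i-1}^n),  r = alpha*dt/dx**2
        alpha, dx, dt = 1.0e-5, 1.0e-3, 2.0e-2
        r = alpha * dt / dx**2
        assert r <= 0.5, "explicit scheme is stable only for r <= 1/2"

        T = np.full(101, 300.0)      # initial temperature field (K)
        T[0], T[-1] = 500.0, 300.0   # fixed boundary temperatures

        for _ in range(1000):
            T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])

    Each interior point is updated independently from the previous time level, which is what makes such explicit schemes map naturally onto massively parallel hardware.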

  20. Documentation for the machine-readable character coded version of the SKYMAP catalogue

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1981-01-01

    The SKYMAP catalogue is a compilation of astronomical data prepared primarily for purposes of attitude guidance for satellites. In addition to the SKYMAP Master Catalogue data base, a software package of data base management and utility programs is available. The tape version of the SKYMAP Catalogue, as received by the Astronomical Data Center (ADC), contains logical records consisting of a combination of binary and EBCDIC data. Certain character coded data in each record are redundant in that the same data are present in binary form. In order to facilitate wider use of all SKYMAP data by the astronomical community, a formatted (character) version was prepared by eliminating all redundant character data and converting all binary data to character form. The character version of the catalogue is described. The document is intended to describe the formatted tape fully, so that users can process the data without problems and guesswork; it should be distributed with any character version of the catalogue.

  1. Prognosis of Electrical Faults in Permanent Magnet AC Machines using the Hidden Markov Model

    DTIC Science & Technology

    2010-11-10

    time resolution and high frequency resolution; tiling is variable. The Wigner-Ville distribution is defined as W(t, ω) = ∫ s(t + τ/2) s*(t − τ/2) e^(−jωτ) dτ ... The Choi-Williams distribution is a smoothed version of the Wigner distribution; the amount of smoothing is controlled by σ, and smoothing comes with a tradeoff of reduced resolution ... the Wigner or Choi-Williams distributions. Although for the Wigner and Choi-Williams distributions the probabilities are close for the early fault
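
    For illustration, the Wigner-Ville definition above translates into a short discrete-time computation, one FFT of the instantaneous autocorrelation per time sample; this is a simplified textbook sketch, not the report's implementation:

        import numpy as np

        def wigner_ville(x: np.ndarray) -> np.ndarray:
            """Discrete Wigner-Ville distribution of an analytic signal x.

            Returns W[time, frequency]; each row is the FFT of the
            instantaneous autocorrelation x[t+m] * conj(x[t-m]).
            """
            n = len(x)
            W = np.zeros((n, n), dtype=complex)
            for t in range(n):
                m_max = min(t, n - 1 - t)            # how far the lag can extend
                m = np.arange(-m_max, m_max + 1)
                kernel = np.zeros(n, dtype=complex)
                kernel[m % n] = x[t + m] * np.conj(x[t - m])  # wrap negative lags
                W[t] = np.fft.fft(kernel)
            return W.real

        # toy usage: a linear chirp; energy should concentrate along the rising frequency
        t = np.arange(256)
        sig = np.exp(1j * 2 * np.pi * (0.05 + 0.001 * t) * t / 2)
        tfd = wigner_ville(sig)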

  2. Dissemination of Clinical Practice Guidelines: A Content Analysis of Patient Versions.

    PubMed

    Santesso, Nancy; Morgano, Gian Paolo; Jack, Susan M; Haynes, R Brian; Hill, Sophie; Treweek, Shaun; Schünemann, Holger J

    2016-08-01

    Clinical practice guidelines (CPGs) are typically written for health care professionals but are meant to assist patients with health care decisions. A number of guideline producers have started to develop patient versions of CPGs to reach this audience. To describe the content and purpose of patient versions of CPGs and compare with patient and public views of CPGs. A descriptive qualitative study with a directed content analysis of a sample of patient versions of CPGs published and freely available in English from 2012 to 2014. We included 34 patient versions of CPGs from 17 guideline producers. Over half of the patient versions were in dedicated patient sections of national/professional agency websites. There was essentially no information about how to manage care in the health care system. The most common purpose was to equip people with information about disease, tests or treatments, and recommendations, but few provided quantitative data about benefits and harms of treatments. Information about beliefs, values and preferences, accessibility, costs, or feasibility of the interventions was rarely addressed. Few provided personal stories or scenarios to personalize the information. Three versions described the strength of the recommendation or the level of evidence. Our search for key institutions that produce patient versions of guidelines was comprehensive, but we only included English and freely available versions. Future work will include other languages. This review describes the current landscape of patient versions of CPGs and suggests that these versions may not address the needs of their targeted audience. Research is needed about how to personalize information, provide information about factors contributing to the recommendations, and provide access. © The Author(s) 2016.

  3. Improved Temperature Sounding and Quality Control Methodology Using AIRS/AMSU Data: The AIRS Science Team Version 5 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John M.; Iredell, Lena; Keita, Fricky

    2009-01-01

    This paper describes the AIRS Science Team Version 5 retrieval algorithm in terms of its three most significant improvements over the methodology used in the AIRS Science Team Version 4 retrieval algorithm. Improved physics in Version 5 allows for use of AIRS clear column radiances in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profiles T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations are now used primarily in the generation of clear column radiances R(sub i) for all channels. This new approach allows for the generation of more accurate values of R(sub i) and T(p) under most cloud conditions. Secondly, Version 5 contains a new methodology to provide accurate case-by-case error estimates for retrieved geophysical parameters and for channel-by-channel clear column radiances. Thresholds of these error estimates are used in a new approach for Quality Control. Finally, Version 5 also contains for the first time an approach to provide AIRS soundings in partially cloudy conditions that does not require use of any microwave data. This new AIRS Only sounding methodology, referred to as AIRS Version 5 AO, was developed as a backup to AIRS Version 5 should the AMSU-A instrument fail. Results are shown comparing the relative performance of AIRS Version 4, Version 5, and Version 5 AO for the single day, January 25, 2003. The Goddard DISC is now generating and distributing products derived using the AIRS Science Team Version 5 retrieval algorithm. This paper also describes the Quality Control flags contained in the DISC AIRS/AMSU retrieval products and their intended use for scientific research purposes.

  4. Quantifying Parameter Sensitivity, Interaction and Transferability in Hydrologically Enhanced Versions of Noah-LSM over Transition Zones

    NASA Technical Reports Server (NTRS)

    Rosero, Enrique; Yang, Zong-Liang; Wagener, Thorsten; Gulden, Lindsey E.; Yatheendradas, Soni; Niu, Guo-Yue

    2009-01-01

    We use sensitivity analysis to identify the parameters that are most responsible for shaping land surface model (LSM) simulations and to understand the complex interactions in three versions of the Noah LSM: the standard version (STD), a version enhanced with a simple groundwater module (GW), and a version augmented with a dynamic phenology module (DV). We use warm-season, high-frequency, near-surface states and turbulent fluxes collected over nine sites in the US Southern Great Plains. We quantify changes in the pattern of sensitive parameters, the amount and nature of the interaction between parameters, and the covariance structure of the distribution of behavioral parameter sets. Using Sobol's total and first-order sensitivity indices, we show that very few parameters directly control the variance of the model output. Significant parameter interaction occurs, so that not only do the optimal parameter values differ between models, but the relationships between parameters also change. GW decreases parameter interaction and appears to improve model realism, especially at wetter sites. DV increases parameter interaction and decreases identifiability, implying it is overparameterized and/or underconstrained. A case study at a wet site shows GW has two functional modes: one that mimics STD and a second in which GW improves model function by decoupling direct evaporation and baseflow. Unsupervised classification of the posterior distributions of behavioral parameter sets cannot group similar sites based solely on soil or vegetation type, helping to explain why transferability between sites and models is not straightforward. This evidence suggests a priori assignment of parameters should also consider climatic differences.
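
    Sobol's first-order and total indices used above can be estimated with the standard pick-and-freeze (Saltelli) sampling scheme; a minimal Python sketch on a made-up toy function (`model` is a stand-in, not Noah-LSM):

        import numpy as np

        def sobol_indices(model, d, n=100_000, rng=np.random.default_rng(0)):
            """First-order (S) and total (ST) Sobol indices via Saltelli's estimators."""
            A = rng.random((n, d))
            B = rng.random((n, d))
            fA, fB = model(A), model(B)
            var = np.var(np.concatenate([fA, fB]))
            S, ST = np.empty(d), np.empty(d)
            for i in range(d):
                ABi = A.copy()
                ABi[:, i] = B[:, i]          # resample only input x_i
                fABi = model(ABi)
                S[i] = np.mean(fB * (fABi - fA)) / var
                ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var
            return S, ST

        # toy model: only x0 and the x0*x1 interaction matter
        S, ST = sobol_indices(lambda X: X[:, 0] + 0.5 * X[:, 0] * X[:, 1], d=3)

    A gap between ST and S for a parameter signals interaction effects, the quantity the study uses to compare the STD, GW and DV versions.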

  5. TS-SRP/PACK - COMPUTER PROGRAMS TO CHARACTERIZE ALLOYS AND PREDICT CYCLIC LIFE USING THE TOTAL STRAIN VERSION OF STRAINRANGE PARTITIONING

    NASA Technical Reports Server (NTRS)

    Saltsman, J. F.

    1994-01-01

    TS-SRP/PACK is a set of computer programs for characterizing and predicting fatigue and creep-fatigue resistance of metallic materials in the high-temperature, long-life regime for isothermal and nonisothermal fatigue. The programs use the total strain version of Strainrange Partitioning (TS-SRP). The user should be thoroughly familiar with the TS-SRP method before attempting to use any of these programs. The documentation for this program includes a theory manual as well as a detailed user's manual with a tutorial to guide the user in the proper use of TS-SRP. An extensive database has also been developed in a parallel effort. This database is an excellent source of high-temperature, creep-fatigue test data and can be used with other life-prediction methods as well. Five programs are included in TS-SRP/PACK along with the alloy database. The TABLE program is used to print the datasets, which are in NAMELIST format, in a reader-friendly format. INDATA is used to create new datasets or add to existing ones. The FAIL program is used to characterize the failure behavior of an alloy as given by the constants in the strainrange-life relations used by the total strain version of SRP (TS-SRP) and the inelastic strainrange-based version of SRP. The program FLOW is used to characterize the flow behavior (the constitutive response) of an alloy as given by the constants in the flow equations used by TS-SRP. Finally, LIFE is used to predict the life of a specified cycle, using the constants characterizing failure and flow behavior determined by FAIL and FLOW. LIFE is written in interpretive BASIC to avoid compiling and linking every time the equation constants are changed. Four out of five programs in this package are written in FORTRAN 77 for IBM PC series and compatible computers running MS-DOS and are designed to read data using the NAMELIST format statement. The fifth is written in BASIC version 3.0 for IBM PC series and compatible computers running MS-DOS version 3.10. The executables require at least 239K of memory and DOS 3.1 or higher. To compile the source, a Lahey FORTRAN compiler is required. Source code modifications will be necessary if the compiler to be used does not support NAMELIST input. Probably the easiest revision to make is to use a list-directed READ statement. The standard distribution medium for this program is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. TS-SRP/PACK was developed in 1992.

  6. Survey of computer vision technology for UAV navigation

    NASA Astrophysics Data System (ADS)

    Xie, Bo; Fan, Xiang; Li, Sijian

    2017-11-01

    Navigation based on computer vision technology, which has the characteristics of strong independence and high precision and is not susceptible to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision technology were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aerial vehicles, deep space probes and underwater robots, which has further stimulated research on integrated navigation algorithms based on computer vision technology. In China, with the development of many types of UAV and the start of the third phase of the lunar exploration project, there has been significant progress in the study of visual navigation. The paper reviews the development of computer-vision-based navigation in the field of UAV navigation research and concludes that visual navigation is mainly applied to three aspects. (1) Acquisition of UAV navigation parameters. The parameters, including UAV attitude, position and velocity information, can be obtained from the relationship between sensor images and the carrier's attitude, the relationship between instantaneously matched images and reference images, and the relationship between the carrier's velocity and the characteristics of sequential images. (2) Autonomous obstacle avoidance. There are many ways to achieve obstacle avoidance in UAV navigation; the methods based on computer vision technology, including feature matching, template matching and image-frame analysis, are mainly introduced. (3) Target tracking and positioning. Using the acquired images, positions are calculated by means of optical flow methods, the MeanShift and CamShift algorithms, Kalman filtering and particle filter algorithms. The paper then describes three kinds of mainstream visual systems. (1) High-speed visual systems, which use a parallel structure so that image detection and processing are carried out at high speed; such systems are applied to rapid-response tasks. (2) Distributed-network visual systems, in which several discrete image acquisition sensors in different locations transmit image data to a node processor to increase the sampling rate. (3) Visual systems combined with observers, which pair image sensors with external observers to compensate for the limitations of the visual equipment. To some degree, these systems overcome the shortcomings of early visual systems, including low frame rates, low processing efficiency and strong noise. Finally, the difficulties of navigation based on computer vision technology in practical applications are briefly discussed: (1) the huge workload of image operations makes the real-time performance of such systems poor; (2) strong environmental effects make their anti-interference ability poor; (3) because they work only in particular environments, such systems have poor adaptability.
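
    As an illustration of the optical-flow approach mentioned for velocity estimation and target tracking, a minimal dense-flow sketch with OpenCV; the file names are placeholders, and the surveyed systems are not implied to use this code:

        import cv2

        prev = cv2.cvtColor(cv2.imread("frame0.png"), cv2.COLOR_BGR2GRAY)
        curr = cv2.cvtColor(cv2.imread("frame1.png"), cv2.COLOR_BGR2GRAY)

        # Farneback dense optical flow: per-pixel (dx, dy) between consecutive frames
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            pyr_scale=0.5, levels=3, winsize=15,
                                            iterations=3, poly_n=5, poly_sigma=1.2,
                                            flags=0)

        # average image motion, a crude proxy for camera/vehicle ego-motion
        mean_dx, mean_dy = flow[..., 0].mean(), flow[..., 1].mean()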

  7. Rigorous Results for the Distribution of Money on Connected Graphs

    NASA Astrophysics Data System (ADS)

    Lanchier, Nicolas; Reed, Stephanie

    2018-05-01

    This paper is concerned with general spatially explicit versions of three stochastic models for the dynamics of money that have been introduced and studied numerically by statistical physicists: the uniform reshuffling model, the immediate exchange model and the model with saving propensity. All three models consist of systems of economical agents that consecutively engage in pairwise monetary transactions. Computer simulations performed in the physics literature suggest that, when the number of agents and the average amount of money per agent are large, the limiting distribution of money as time goes to infinity approaches the exponential distribution for the first model, the gamma distribution with shape parameter two for the second model and a distribution similar but not exactly equal to a gamma distribution whose shape parameter depends on the saving propensity for the third model. The main objective of this paper is to give rigorous proofs of these conjectures and also extend these conjectures to generalizations of the first two models and a variant of the third model that include local rather than global interactions, i.e., instead of choosing the two interacting agents uniformly at random from the system, the agents are located on the vertex set of a general connected graph and can only interact with their neighbors.
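
    The uniform reshuffling model is simple to simulate; a minimal sketch of the mean-field version (the spatial versions studied in the paper restrict pairs to graph neighbors), whose empirical wealth distribution should approach the exponential limit identified above:

        import numpy as np

        rng = np.random.default_rng(1)
        n_agents, avg_money, steps = 10_000, 10.0, 2_000_000
        money = np.full(n_agents, avg_money)

        for _ in range(steps):
            i, j = rng.integers(n_agents, size=2)   # two agents chosen uniformly at random
            if i != j:
                total = money[i] + money[j]
                u = rng.random()                    # uniform reshuffling of the pair's money
                money[i], money[j] = u * total, (1 - u) * total

        # a histogram of `money` should now be close to Exp(mean=avg_money)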

  8. NASA AVOSS Fast-Time Wake Prediction Models: User's Guide

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew

    2014-01-01

    The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and the TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset is also provided.
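
    For orientation, the aircraft-dependent initial conditions named above (initial vortex descent velocity and vortex pair separation) follow from classical relations for an elliptically loaded wing; the relations are standard textbook wake-vortex formulas, and the numerical values below are hypothetical, not from the APA or TDP documentation:

        import math

        # hypothetical aircraft and atmosphere
        W   = 2.5e6      # weight (N)
        B   = 60.0       # wingspan (m)
        V   = 70.0       # airspeed (m/s)
        rho = 1.2        # air density (kg/m^3)

        b0     = math.pi * B / 4.0              # initial vortex pair separation (elliptic loading)
        gamma0 = W / (rho * V * b0)             # initial circulation from the lift balance
        w0     = gamma0 / (2.0 * math.pi * b0)  # initial mutually induced descent velocity

        print(f"b0 = {b0:.1f} m, Gamma0 = {gamma0:.0f} m^2/s, descent = {w0:.2f} m/s")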

  9. LION4; LION; three-dimensional temperature distribution program. [CDC6600,7600; UNIVAC1108; IBM360,370; FORTRAN IV and ASCENT (CDC6600,7600), FORTRAN IV (UNIVAC1108A,B and IBM360,370)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binney, E.J.

    LION4 is a computer program for calculating one-, two-, or three-dimensional transient and steady-state temperature distributions in reactor and reactor plant components. It is used primarily for thermal-structural analyses. It utilizes finite difference techniques with first-order forward difference integration and is capable of handling a wide variety of bounding conditions. Heat transfer situations accommodated include forced and free convection in both reduced and fully-automated temperature-dependent forms, coolant flow effects, a limited thermal radiation capability, a stationary or stagnant fluid gap, a dual-dependency (temperature difference and temperature level) heat transfer, an alternative heat transfer mode comparison and selection facility combined with a heat flux direction sensor, and any form of time-dependent boundary temperatures. The program, which handles time- and space-dependent internal heat generation, can also provide temperature-dependent material properties with limited non-isotropic properties. User-oriented capabilities available include temperature means with various weightings and a complete heat flow rate surveillance system. CDC6600,7600; UNIVAC1108; IBM360,370; FORTRAN IV and ASCENT (CDC6600,7600), FORTRAN IV (UNIVAC1108A,B and IBM360,370); SCOPE (CDC6600,7600), EXEC8 (UNIVAC1108A,B), OS/360,370 (IBM360,370). The CDC6600 version plotter routine LAPL4 is used to produce the input required by the associated CalComp plotter for graphical output. The IBM360 version requires 350K for execution and one additional input/output unit besides the standard units.

  10. Patterns of care analysis for head & neck cancer of unknown primary site: a survey inside the German society of radiation oncology (DEGRO).

    PubMed

    Müller von der Grün, Jens; Bon, Dimitra; Rödel, Claus; Balermpas, Panagiotis

    2018-05-14

    Due to the absence of randomized trials, the optimal management for squamous cell cancer of unknown primary in the head and neck region (SCCHN CUP) remains controversial. Current strategies are based on retrospective studies, clinical experience, and institutional policies. An anonymous questionnaire with a total of 24 questions was created and distributed in an online version (Google Forms®, Google, Mountain View, CA, USA) as well as an equivalent printed version. An email with a link to the survey and with the questionnaire as an attachment was sent to 361 departments associated with the DEGRO (German Society of Radiation Oncology). Frequency distributions of responses for each question were calculated. The data were also analyzed by type of practice. The representativeness of the sample for the DEGRO was also evaluated. 66 responses were received, including answers from 20 (30%) university departments, 16 (24%) non-university institutions, and 30 (46%) radiation oncology practices. 95% of the participants routinely present these cases in an interdisciplinary tumor board and use intensity-modulated radiotherapy (IMRT) techniques for SCCHN CUP treatment. Surgery includes neck dissection in 83% and tonsillectomy in 73% of the cases. Human papilloma virus (HPV) status is routinely determined in 82% of the departments. Statistically significant differences between university and non-university institutions and between clinics and practices were found with respect to positron emission tomography-computed tomography (PET-CT) utilization, indications for chemotherapy, radiotherapy volumes, and cumulative doses. Diagnostics and treatment for SCCHN CUP within the DEGRO remain heterogeneous. A prospective register trial with standard operating procedures is warranted to homogenize and possibly improve management.

  11. Reliability and validity of the Dutch version of the Consultation and Relational Empathy Measure in primary care.

    PubMed

    van Dijk, Inge; Scholten Meilink Lenferink, Nick; Lucassen, Peter L B J; Mercer, Stewart W; van Weel, Chris; Olde Hartman, Tim C; Speckens, Anne E M

    2017-02-01

    Empathy is an essential skill in doctor-patient communication with positive effects on compliance, patient satisfaction and symptom duration. There are no validated patient-rated empathy measures available in Dutch. To investigate the validity and reliability of a Dutch version of the Consultation and Relational Empathy (CARE) Measure, a widely used 10-item patient-rated questionnaire of physician empathy. After translation and back translation, the Dutch CARE Measure was distributed among patients from 19 general practitioners in 5 primary care centers. Tests of internal reliability and validity included Cronbach's alpha, item total correlations and factor analysis. Seven items of the QUality Of care Through the patient's Eyes (QUOTE) questionnaire assessing 'affective performance' of the physician were included in factor analysis and used to investigate convergent validity. Of the 800 distributed questionnaires, 655 (82%) were returned. Acceptability and face validity were supported by a low number of 'does not apply' responses (range 0.2%-11.9%). Internal reliability was high (Cronbach's alpha 0.974). Corrected item total correlations were at a minimum of 0.837. Factor analysis on the 10 items of the CARE Measure and 7 QUOTE items resulted in two factors (Eigenvalue > 1), the first containing the CARE Measure items and the second containing the QUOTE items. Convergent construct validity between the CARE Measure and QUOTE was confirmed with a modest positive correlation (r = 0.34, n = 654, P < 0.001). The findings support the preliminary validity and reliability of the Dutch CARE Measure. Future research is required to investigate divergent validity and discriminant ability between doctors. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Evolving the Technical Infrastructure of the Planetary Data System for the 21st Century

    NASA Technical Reports Server (NTRS)

    Beebe, Reta F.; Crichton, D.; Hughes, S.; Grayzeck, E.

    2010-01-01

    The Planetary Data System (PDS) was established in 1989 as a distributed system to assure scientific oversight. Initially the PDS followed guidelines recommended by the National Academies Committee on Data Management and Computation (CODMAC, 1982) and placed emphasis on archiving validated datasets. But over time, users, supported by increased computing capabilities and communication methods, have placed increasing demands on the PDS. The PDS must add services to better enable scientific analysis within distributed environments and ensure that those services integrate with existing systems and data. To face these challenges, the PDS must modernize its architecture and technical implementation. The PDS 2010 project addresses these challenges. As part of this project, the PDS has three fundamental goals: (1) providing more efficient delivery of data by data providers to the PDS; (2) enabling a stable, long-term usable planetary science data archive; and (3) enabling services for the data consumer to find, access and use the data they require in contemporary data formats. In order to achieve these goals, the PDS 2010 project is upgrading both the technical infrastructure and the data standards to support increased efficiency in data delivery as well as usability of the PDS. Efforts are underway to interface with missions as early as possible and to streamline the preparation and delivery of data to the PDS. Likewise, the PDS is working to define and plan for data services that will help researchers to perform analysis in cost-constrained environments. This presentation will cover the PDS 2010 project, including the goals, data standards and technical implementation plans that are underway within the Planetary Data System. It will discuss the plans for moving from the current system, version PDS 3, to version PDS 4.

  13. Upgrading to MARPLOT 5.x

    EPA Pesticide Factsheets

    MARPLOT 5.x versions include significant changes from both the previous 4.x versions and the 3.x versions. To ensure that your data is successfully transferred from your old MARPLOT, follow these instructions carefully.

  14. Versions of the Waste Reduction Model (WARM)

    EPA Pesticide Factsheets

    This page provides a brief chronology of changes made to EPA’s Waste Reduction Model (WARM), organized by WARM version number. The page includes brief summaries of changes and updates since the previous version.

  16. SCELib3.0: The new revision of SCELib, the parallel computational library of molecular properties in the Single Center Approach

    NASA Astrophysics Data System (ADS)

    Sanna, N.; Baccarelli, I.; Morelli, G.

    2009-12-01

    SCELib is a computer program which implements the Single Center Expansion (SCE) method to describe molecular electronic densities and the interaction potentials between a charged projectile (electron or positron) and a target molecular system. The first version (CPC Catalog identifier ADMG_v1_0) was submitted to the CPC Program Library in 2000, and version 2.0 (ADMG_v2_0) was submitted in 2004. Here we announce the new release 3.0, which presents additional features with respect to the previous versions, aimed at significantly enhancing its capabilities to deal with larger molecular systems. SCELib 3.0 allows for ab initio effective core potential (ECP) calculations of the molecular wavefunctions to be used in the SCE method in addition to the standard all-electron description of the molecule. The list of supported architectures has been updated, the code has been ported to platforms based on accelerating coprocessors, such as the NVIDIA GPGPU, and the new parallel model adopted is able to run efficiently on a mixed many-core computing system. Program summary Program title: SCELib3.0 Catalogue identifier: ADMG_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMG_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2 018 862 No. of bytes in distributed program, including test data, etc.: 4 955 014 Distribution format: tar.gz Programming language: C Compilers used: xlc V8.x, Intel C V10.x, Portland Group V7.x, nvcc V2.x Computer: All SMP platforms based on AIX, Linux and SUNOS operating systems over SPARC, POWER, Intel Itanium2, X86, em64t and Opteron processors Operating system: SUNOS, IBM AIX, Linux RedHat (Enterprise), Linux SuSE (SLES) Has the code been vectorized or parallelized?: Yes, 1 to 32 CPUs or GPUs used RAM: Up to 32 GB depending on the molecular system and runtime parameters Classification: 16.5 Catalogue identifier of previous version: ADMG_v2_0 Journal reference of previous version: Comput. Phys. Comm. 162 (2004) 51 External routines: CUDA libraries (SDK V2.x). Does the new version supersede the previous version?: Yes Nature of problem: In this set of codes an efficient procedure is implemented to describe the wavefunction and related molecular properties of a polyatomic molecular system within the Single Center of Expansion (SCE) approximation. The resulting SCE wavefunction, electron density, electrostatic and correlation/polarization potentials can then be used in a wide variety of applications, such as electron-molecule scattering calculations, quantum chemistry studies, biomodelling and drug design. Solution method: The polycentre Hartree-Fock solution for a molecule of arbitrary geometry, based on a linear combination of Gaussian-Type Orbitals (GTO), is expanded over a single center, typically the Center Of Mass (C.O.M.), by means of a Gauss-Legendre/Chebyshev quadrature over the θ,φ angular coordinates. The resulting SCE numerical wavefunction is then used to calculate the one-particle electron density, the electrostatic potential and two different models for the correlation/polarization potentials induced by the impinging electron, which have the correct asymptotic behavior for the leading dipole molecular polarizabilities. 
Reasons for new version: The present release of SCELib allows the study of larger molecular systems with respect to the previous versions by means of theoretical and technological advances, with the first implementation of the code over a many-core computing system. Summary of revisions: The major features added with respect to SCELib Version 2.0 are: molecular wavefunctions obtained via the Los Alamos (Hay and Wadt) LAN ECP plus DZ description of the inner-shell electrons (for the Na-La and Hf-Bi elements) [1] can now be single-center-expanded; this addition required modifications of (i) the filtering code readgau, (ii) the main reading function setinp, (iii) the sphint code (including changes to the CalcMO code), (iv) the densty code, and (v) the vst code. The classes of platforms supported now include two more architectures based on accelerated coprocessors (NVIDIA G-Series GPGPU and ClearSpeed e720; the ClearSpeed version is experimental, an initial preliminary porting of the sphint() function not intended for production runs - see the code documentation for additional detail). A single-precision representation for real numbers in the SCE mapping of the GTOs (sphint code) has been implemented in the new code; the Ih symmetry point group for the molecular systems has been added to those already allowed in the SCE procedure; the orientation of the molecular axis system for the Cs (planar) symmetry has been changed in accordance with the standard orientation adopted by the latest version of the quantum chemistry code Gaussian 03 [2], which is used to generate the input multi-centre molecular wavefunctions (z-axis perpendicular to the symmetry plane); the abelian subgroup for the Cs point group has been changed from C1 to Cs; and atomic basis functions including g-type GTOs can now be single-center-expanded. Restrictions: Depending on the molecular system under study and on the operating conditions, the program may or may not fit into available RAM memory. In this case a feature of the program is to memory-map a disk file in order to efficiently access the memory data through a disk device. The parallel GP-GPU implementation limits the number of CPU threads to the number of GPU cores present. Running time: The execution time strongly depends on the molecular target description and on the hardware/OS chosen; it is directly proportional to the (r,θ,φ) grid size and to the number of angular basis functions used. Thus, from the program printout of the main arrays' memory occupancy, the user can approximately derive the expected computer time needed for a given calculation executed in serial mode. For parallel executions the overall efficiency must also be taken into account; this depends on the number of processors used as well as on the parallel architecture chosen, so a simple general law is at present not determinable. References: [1] P.J. Hay, W.R. Wadt, J. Chem. Phys. 82 (1985) 270; W.R. Wadt, P.J. Hay, J. Chem. Phys. 82 (1985) 284; P.J. Hay, W.R. Wadt, J. Chem. Phys. 82 (1985) 299. [2] M.J. Frisch et al., Gaussian 03, revision C.02, Gaussian, Inc., Wallingford, CT, 2004.
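
    The angular projection at the core of the SCE method (the Gauss-Legendre/Chebyshev quadrature over θ,φ mentioned under "Solution method") can be sketched in a few lines; this is an illustrative Python rendering of the quadrature step for one radial point, not SCELib's C implementation, and the target function is a made-up example:

        import numpy as np
        from scipy.special import sph_harm

        def sce_coefficients(f, r, l_max, n_theta=64, n_phi=128):
            """f_lm(r) = integral of conj(Y_lm) * f(r,theta,phi) over the sphere,
            via Gauss-Legendre nodes in cos(theta) and a uniform grid in phi."""
            x, w = np.polynomial.legendre.leggauss(n_theta)   # nodes/weights in cos(theta)
            theta = np.arccos(x)
            phi = 2 * np.pi * np.arange(n_phi) / n_phi
            TH, PH = np.meshgrid(theta, phi, indexing="ij")
            vals = f(r, TH, PH)
            coeffs = {}
            for l in range(l_max + 1):
                for m in range(-l, l + 1):
                    Y = sph_harm(m, l, PH, TH)   # scipy argument order: (m, l, azimuth, polar)
                    integrand = np.conj(Y) * vals
                    coeffs[(l, m)] = (w[:, None] * integrand).sum() * (2 * np.pi / n_phi)
            return coeffs

        # toy target: a p_z-like function, which should project almost entirely onto (l=1, m=0)
        c = sce_coefficients(lambda r, th, ph: np.cos(th) * np.exp(-r), r=1.0, l_max=2)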

  17. FEAT - FAILURE ENVIRONMENT ANALYSIS TOOL (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Pack, G.

    1994-01-01

    The Failure Environment Analysis Tool, FEAT, enables people to see and better understand the effects of failures in a system. FEAT uses digraph models to determine what will happen to a system if a set of failure events occurs and to identify the possible causes of a selected set of failures. Failures can be user-selected from either engineering schematic or digraph model graphics, and the effects or potential causes of the failures will be color highlighted on the same schematic or model graphic. As a design tool, FEAT helps design reviewers understand exactly what redundancies have been built into a system and where weaknesses need to be protected or designed out. A properly developed digraph will reflect how a system functionally degrades as failures accumulate. FEAT is also useful in operations, where it can help identify causes of failures after they occur. Finally, FEAT is valuable both in conceptual development and as a training aid, since digraphs can identify weaknesses in scenarios as well as hardware. Digraphs models for use with FEAT are generally built with the Digraph Editor, a Macintosh-based application which is distributed with FEAT. The Digraph Editor was developed specifically with the needs of FEAT users in mind and offers several time-saving features. It includes an icon toolbox of components required in a digraph model and a menu of functions for manipulating these components. It also offers FEAT users a convenient way to attach a formatted textual description to each digraph node. FEAT needs these node descriptions in order to recognize nodes and propagate failures within the digraph. FEAT users store their node descriptions in modelling tables using any word processing or spreadsheet package capable of saving data to an ASCII text file. From within the Digraph Editor they can then interactively attach a properly formatted textual description to each node in a digraph. Once descriptions are attached to them, a selected set of nodes can be saved as a library file which represents a generic digraph structure for a class of components. The Generate Model feature can then use library files to generate digraphs for every component listed in the modeling tables, and these individual digraph files can be used in a variety of ways to speed generation of complete digraph models. FEAT contains a preprocessor which performs transitive closure on the digraph. This multi-step algorithm builds a series of phantom bridges, or gates, that allow accurate bi-directional processing of digraphs. This preprocessing can be time-consuming, but once preprocessing is complete, queries can be answered and displayed within seconds. A UNIX X-Windows port of version 3.5 of FEAT, XFEAT, is also available to speed the processing of digraph models created on the Macintosh. FEAT v3.6, which is only available for the Macintosh, has some report generation capabilities which are not available in XFEAT. For very large integrated systems, FEAT can be a real cost saver in terms of design evaluation, training, and knowledge capture. The capability of loading multiple digraphs and schematics into FEAT allows modelers to build smaller, more focused digraphs. Typically, each digraph file will represent only a portion of a larger failure scenario. FEAT will combine these files and digraphs from other modelers to form a continuous mathematical model of the system's failure logic. 
Since multiple digraphs can be cumbersome to use, FEAT ties propagation results to schematic drawings produced using MacDraw II (v1.1v2 or later) or MacDraw Pro. This makes it easier to identify single and double point failures that may have to cross several system boundaries and multiple engineering disciplines before creating a hazardous condition. FEAT v3.6 for the Macintosh is written in C-language using Macintosh Programmer's Workshop C v3.2. It requires at least a Mac II series computer running System 7 or System 6.0.8 and 32 Bit QuickDraw. It also requires a math coprocessor or coprocessor emulator and a color monitor (or one with 256 gray scale capability). A minimum of 4Mb of free RAM is highly recommended. The UNIX version of FEAT includes both FEAT v3.6 for the Macintosh and XFEAT. XFEAT is written in C-language for Sun series workstations running SunOS, SGI workstations running IRIX, DECstations running ULTRIX, and Intergraph workstations running CLIX version 6. It requires the MIT X Window System, Version 11 Revision 4, with OSF/Motif 1.1.3, and 16Mb of RAM. The standard distribution medium for FEAT 3.6 (Macintosh version) is a set of three 3.5 inch Macintosh format diskettes. The standard distribution package for the UNIX version includes the three FEAT 3.6 Macintosh diskettes plus a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format which contains XFEAT. Alternate distribution media and formats for XFEAT are available upon request. FEAT has been under development since 1990. Both FEAT v3.6 for the Macintosh and XFEAT v3.5 were released in 1993.
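
    The transitive-closure preprocessing described above is, at its core, the classic reachability computation on a digraph; a minimal Python sketch (Warshall's algorithm) under the simplifying assumption of plain directed edges, without FEAT's phantom gates:

        def transitive_closure(n: int, edges: set[tuple[int, int]]) -> set[tuple[int, int]]:
            """Warshall's algorithm: (i, j) is in the result iff j is reachable from i."""
            reach = [[False] * n for _ in range(n)]
            for i, j in edges:
                reach[i][j] = True
            for k in range(n):              # allow paths through intermediate node k
                for i in range(n):
                    if reach[i][k]:
                        for j in range(n):
                            if reach[k][j]:
                                reach[i][j] = True
            return {(i, j) for i in range(n) for j in range(n) if reach[i][j]}

        # toy digraph: failure 0 propagates to 1, and 1 to 2, so 0 reaches 2
        print(transitive_closure(3, {(0, 1), (1, 2)}))

    Precomputing reachability this way is what lets FEAT answer propagation queries within seconds after the one-time preprocessing cost.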

  18. SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
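
    To make the solution step concrete: PAWS and STEM solve the Markov model's system of differential equations through the matrix exponential, p(t) = p(0)·exp(Qt). The following is a minimal Python sketch of that calculation for a hypothetical three-state reliability model; the states and rates are illustrative assumptions, not taken from SURE or PAWS/STEM.

      import numpy as np
      from scipy.linalg import expm

      lam = 1e-4   # per-hour failure rate of one unit (assumed)

      # Generator matrix Q for a hypothetical triplex: state 0 = all units up,
      # state 1 = one unit failed, state 2 = system failure (absorbing death state).
      # Q[i, j] is the transition rate from state i to state j; rows sum to zero.
      Q = np.array([
          [-3 * lam,  3 * lam,  0.0],
          [ 0.0,     -2 * lam,  2 * lam],
          [ 0.0,      0.0,      0.0],
      ])

      p0 = np.array([1.0, 0.0, 0.0])   # start with everything working
      t = 10.0                         # mission time, hours
      pt = p0 @ expm(Q * t)            # p(t) = p(0) exp(Q t)
      print(f"P(death state by {t} h) = {pt[2]:.3e}")

    SURE, by contrast, does not exponentiate the matrix at all; its bounding theorems give algebraic upper and lower bounds on the death-state probability, which is why it scales to semi-Markov models.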

  19. SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.

  20. Timing of delivery after external cephalic version and the risk for cesarean delivery.

    PubMed

    Kabiri, Doron; Elram, Tamar; Aboo-Dia, Mushira; Elami-Suzin, Matan; Elchalal, Uriel; Ezra, Yossef

    2011-08-01

    To estimate the association between time of delivery after external cephalic version at term and the risk for cesarean delivery. This retrospective cohort study included all successful external cephalic versions performed in a tertiary center between January 1997 and January 2010. Stepwise logistic regression was used to calculate the odds ratio (OR) for cesarean delivery. We included 483 external cephalic versions in this study, representing 53.1% of all external cephalic version attempts. The incidence of cesarean delivery for the 139 women (29%) who gave birth less than 96 hours after external cephalic version was 16.5%; for the 344 women (71%) who gave birth more than 96 hours after external cephalic version, the incidence of cesarean delivery was 7.8% (P = .004). The adjusted OR for cesarean delivery was 2.541 (95% confidence interval 1.36-4.72). When stratified by parity, the OR for cesarean delivery when delivery occurred less than 96 hours after external cephalic version was 2.97 for nulliparous women and 2.28 for multiparous women. Delivery less than 96 hours after successful external cephalic version was associated with an increased risk for cesarean delivery. Level of evidence: III.
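
    As a check on the arithmetic, the crude (unadjusted) odds ratio can be reconstructed from the reported group sizes and rates; it differs from the 2.541 above, which is adjusted by stepwise logistic regression. A minimal Python sketch, with cell counts rounded from the stated percentages (an assumption):

      import math

      # Counts reconstructed from the abstract (rounded from percentages; assumed):
      a = round(0.165 * 139)   # cesarean deliveries, <96 h group  -> 23
      b = 139 - a              # vaginal deliveries,  <96 h group  -> 116
      c = round(0.078 * 344)   # cesarean deliveries, >96 h group  -> 27
      d = 344 - c              # vaginal deliveries,  >96 h group  -> 317

      or_crude = (a / b) / (c / d)            # odds ratio from the 2x2 table
      se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
      lo = math.exp(math.log(or_crude) - 1.96 * se)
      hi = math.exp(math.log(or_crude) + 1.96 * se)
      print(f"crude OR = {or_crude:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ~2.33 (1.28-4.22)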

  1. Stages of Plasma Cell Neoplasms (Including Multiple Myeloma)

    MedlinePlus

  2. Computer version of astronomical ephemerides.

    NASA Astrophysics Data System (ADS)

    Choliy, V. Ya.

    A computer version of astronomical ephemerides for bodies of the Solar System, stars, and astronomical phenomena was created at the Main Astronomical Observatory of the National Academy of Sciences of Ukraine and the Astronomy and Cosmic Physics Department of the Taras Shevchenko National University. The ephemerides will be distributed via the Internet or as files. This information is accessible via the web servers space.ups.kiev.ua and alfven.ups.kiev.ua or at the address choliy@astrophys.ups.kiev.ua.

  3. Documentation for the machine-readable version of the general catalogue of trigonometric stellar parallaxes and supplement

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    The machine-readable version of the General Catalogue of Trigonometric Stellar Parallaxes, as distributed by the Astronomical Data Center, is described. It is intended to enable users to read and process the data without problems or guesswork. The source reference should be consulted for details concerning the compilation of the main catalogue and supplement, the probable errors, and the weighting system used to combine determinations from different observatories.

  4. MPI-Defrost: Extension of Defrost to MPI-based Cluster Environment

    NASA Astrophysics Data System (ADS)

    Amin, Mustafa A.; Easther, Richard; Finkel, Hal

    2011-06-01

    MPI-Defrost extends Frolov’s Defrost to an MPI-based cluster environment. This version has been restricted to a single field. Restoring two-field support should be straightforward but will require some code changes. Some output options may also not be fully supported under MPI. This code was produced to support our own work and has been made available for the benefit of anyone interested in either oscillon simulations or an MPI-capable version of Defrost; it is provided on an "as-is" basis. Andrei Frolov is the primary developer of Defrost, and we thank him for placing his work under the GPL (GNU General Public License), thus allowing us to distribute this modified version.
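
    The usual pattern for moving a lattice field solver such as Defrost onto MPI is slab decomposition with ghost-cell (halo) exchange between neighboring ranks each time step. A minimal mpi4py sketch of that pattern follows; it is illustrative only and is not code from MPI-Defrost (the 1-D slab, periodic wrap, and field values are assumptions):

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Each rank owns a slab of the lattice plus one ghost cell on each side.
      n_local = 8
      field = np.full(n_local + 2, float(rank))   # interior cells + 2 ghosts

      left = (rank - 1) % size                    # periodic neighbor topology
      right = (rank + 1) % size

      # Send edge cells to neighbors; receive their edges into the ghost cells.
      comm.Sendrecv(field[1:2],   dest=left,  recvbuf=field[-1:], source=right)
      comm.Sendrecv(field[-2:-1], dest=right, recvbuf=field[0:1], source=left)

      # With ghosts filled, each rank can apply its finite-difference update
      # (e.g., the field-equation leapfrog step) to its interior cells.

    Run with, e.g., mpiexec -n 4 python halo_demo.py; each rank then holds its neighbors' edge values in its ghost cells before the update.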

  5. Allowable residual contamination levels of radionuclides in soil from pathway analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nyquist, J.E.; Baes, C.F. III

    1987-01-01

    The uncertainty regarding radionuclide distributions among Remedial Action Program (RAP) sites and long-term decommissioning and closure options for these sites requires a flexible approach capable of handling different levels of contamination, dose limits, and closure scenarios. We identified a commercially available pathway analysis model, DECOM, which had been used previously in support of remedial activities involving contaminated soil at the Savannah River Plant. The DECOM computer code, which estimates concentrations of radionuclides uniformly distributed in soil that correspond to an annual effective dose equivalent, is written in BASIC and runs on an IBM PC or compatible microcomputer. We obtained the latest version of DECOM and modified it to make it more user friendly and applicable to the Oak Ridge National Laboratory (ORNL) RAP. Some modifications involved changes in default parameters or changes in models based on approaches used by the EPA in regulating remedial actions for hazardous substances. We created a version of DECOM as a LOTUS spreadsheet, using the same models as the BASIC version of DECOM. We discuss the specific modeling approaches taken, the regulatory framework that guided our efforts, the strengths and limitations of each approach, and areas for improvement. We also demonstrate how the LOTUS version of DECOM can be applied to specific problems that may be encountered during ORNL RAP activities. 18 refs., 2 figs., 3 tabs.
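
    The core calculation in a pathway code of this kind is linear: each pathway contributes a dose factor (dose per unit soil concentration), and the allowable residual concentration is the dose limit divided by the summed factors. A toy Python sketch, with placeholder numbers that are assumptions rather than DECOM values:

      # Illustrative pathway-sum calculation; all numbers are placeholders.
      dose_limit = 100.0   # annual effective dose equivalent limit, mrem/yr (assumed)

      # Pathway dose factors in (mrem/yr) per (pCi/g) of soil (hypothetical):
      pathway_factors = {
          "external exposure": 1.2,
          "inhalation":        0.05,
          "ingestion":         0.8,
      }

      total_factor = sum(pathway_factors.values())   # (mrem/yr) per (pCi/g)
      allowable = dose_limit / total_factor          # pCi/g of soil
      print(f"allowable residual concentration = {allowable:.1f} pCi/g")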

  6. Doing It Right: 366 answers to computing questions you didn't know you had

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herring, Stuart Davis

    Slides include information on history: version control, version control: branches, version control: Git, releases, requirements, readability, readability control flow, global variables, architecture, architecture redundancy, processes, input/output, unix, etcetera.

  7. Power Aware Signal Processing Environment (PASPE) for PAC/C

    DTIC Science & Technology

    2003-02-01

    ... vs. FFT Size. For our implementation, the Annapolis FFT core was radix-256, and therefore the smallest PN code length that could be processed was the ... PN-64. A C-code version of correlate was compared to the FPGA implementation. The results in Figure 68 show that for a PN-1024, the ...

  8. Global direct radiative forcing by process-parameterized aerosol optical properties

    NASA Astrophysics Data System (ADS)

    Kirkevåg, Alf; Iversen, Trond

    2002-10-01

    A parameterization of aerosol optical parameters is developed and implemented in an extended version of the Community Climate Model version 3.2 (CCM3) of the U.S. National Center for Atmospheric Research. Direct radiative forcing (DRF) by monthly averaged calculated concentrations of non-sea-salt sulfate and black carbon (BC) is estimated. Inputs are production-specific BC and sulfate from [2002] and background aerosol size distribution and composition. The scheme interpolates between tabulated values to obtain the aerosol single scattering albedo, asymmetry factor, extinction coefficient, and specific extinction coefficient. The tables are constructed by full calculations of optical properties for an array of aerosol input values, for which size-distributed aerosol properties are estimated from theory for condensation and Brownian coagulation, an assumed distribution of cloud-droplet residuals from aqueous-phase oxidation, and prescribed properties of the background aerosols. Humidity swelling is estimated from the Köhler equation, and Mie calculations finally yield spectrally resolved aerosol optical parameters for 13 solar bands. The scheme is shown to give excellent agreement with nonparameterized DRF calculations for a wide range of situations. Using IPCC emission scenarios for the years 2000 and 2100, calculations with an atmospheric global climate model (AGCM) yield a global net anthropogenic DRF of -0.11 and 0.11 W m-2, respectively, when 90% of BC from biomass burning is assumed anthropogenic. In the 2000 scenario, the individual DRF due to sulfate and BC has separately been estimated at -0.29 and 0.19 W m-2, respectively. Our estimates of DRF by BC per BC mass burden are lower than earlier published estimates. Some sensitivity tests are included to investigate to what extent uncertain assumptions may influence these results.
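
    The table-lookup step the abstract describes amounts to precomputing Mie results on a grid of aerosol inputs and interpolating at run time. A minimal one-dimensional Python sketch (hypothetical table values; the actual scheme tabulates several optical parameters against several aerosol inputs):

      import numpy as np

      # Hypothetical lookup table: single-scattering albedo tabulated against
      # relative humidity for one aerosol mixture (placeholder values).
      rh_grid  = np.array([0.00, 0.50, 0.80, 0.90, 0.95, 0.99])
      ssa_grid = np.array([0.89, 0.91, 0.94, 0.96, 0.97, 0.98])

      def single_scattering_albedo(rh):
          """Interpolate between tabulated values instead of re-running Mie code."""
          return np.interp(rh, rh_grid, ssa_grid)

      print(single_scattering_albedo(0.85))   # falls between the 0.80 and 0.90 entries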

  9. Magnetic Levitation Coupled with Portable Imaging and Analysis for Disease Diagnostics.

    PubMed

    Knowlton, Stephanie M; Yenilmez, Bekir; Amin, Reza; Tasoglu, Savas

    2017-02-19

    Currently, many clinical diagnostic procedures are complex, costly, inefficient, and inaccessible to a large population in the world. The requirements for specialized equipment and trained personnel mean that many diagnostic tests must be performed at remote, centralized clinical laboratories. Magnetic levitation is a simple yet powerful technique that can be applied to levitate cells, which are suspended in a paramagnetic solution and placed in a magnetic field, at a position determined by equilibrium between a magnetic force and a buoyancy force. Here, we present a versatile platform technology designed for point-of-care diagnostics which uses magnetic levitation coupled to microscopic imaging and automated analysis to determine the density distribution of a patient's cells as a useful diagnostic indicator. We present two platforms operating on this principle: (i) a smartphone-compatible version of the technology, where the built-in smartphone camera is used to image cells in the magnetic field and a smartphone application processes the images and measures the density distribution of the cells, and (ii) a self-contained version where a camera board is used to capture images and an embedded processing unit with an attached thin-film-transistor (TFT) screen measures and displays the results. Demonstrated applications include: (i) measuring the altered distribution of a cell population with a disease phenotype compared to a healthy phenotype, which is applied to sickle cell disease diagnosis, and (ii) separation of different cell types based on their characteristic densities, which is applied to separating white blood cells from red blood cells for white blood cell cytometry. These applications, as well as future extensions of the essential density-based measurements enabled by this portable, user-friendly platform technology, will significantly enhance disease diagnostic capabilities at the point of care.
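
    The density readout rests on the equilibrium condition the abstract mentions: at the levitation height, the magnetic force on a cell balances its net buoyancy. A Python sketch of that force balance follows; the susceptibilities and field-gradient value below are assumptions for illustration, not the paper's calibration:

      import math

      MU0 = 4e-7 * math.pi   # vacuum permeability, T·m/A
      G = 9.81               # gravitational acceleration, m/s^2

      def cell_density(rho_medium, chi_medium, chi_cell, b_dbdz):
          """Equilibrium force balance for a levitated cell:
          (chi_cell - chi_medium)/MU0 * B*dB/dz = (rho_cell - rho_medium) * G,
          where b_dbdz = B*dB/dz (T^2/m) at the cell's levitation height."""
          return rho_medium + (chi_cell - chi_medium) * b_dbdz / (MU0 * G)

      # Hypothetical: diamagnetic cell in a dilute paramagnetic medium.
      print(cell_density(rho_medium=1005.0, chi_medium=9e-6,
                         chi_cell=-9e-6, b_dbdz=-80.0))   # ~1122 kg/m^3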

  10. Performance of Versions 1,2 and 3 of the Goddard Earth Observing System (GEOS) Chemistry-Climate Model (CCM)

    NASA Technical Reports Server (NTRS)

    Pawson, Steven; Stolarski, Richard S.; Nielsen, J. Eric; Duncan, Bryan N.

    2008-01-01

    Version 1 of the Goddard Earth Observing System Chemistry-Climate Model (GEOS CCM) was used in the first CCMVal model evaluation and forms the basis for several studies of links between ozone and the circulation. That version of the CCM was based on the GEOS-4 GCM. Versions 2 and 3 of the GEOS CCM are based on the GEOS-5 GCM, which retains the "Lin-Rood" dynamical core but has a totally different set of physical parameterizations from GEOS-4. In Version 2 of the GEOS CCM, the Goddard stratospheric chemistry module is retained. Differences between Versions 1 and 2 thus reflect the physics changes of the underlying GCMs. Several comparisons between these two models are made, a number of which reveal improvements in Version 2 (including a more realistic representation of the interannual variability of the Antarctic vortex). In Version 3 of the GEOS CCM, the stratospheric chemistry mechanism is replaced by the "GMI COMBO" code, which includes tropospheric chemistry and different computational approaches. An advantage of this model version is the reduction of the high ozone biases that prevail at low chlorine loadings in Versions 1 and 2. This poster will compare and contrast various aspects of the three model versions that are relevant for understanding interactions between ozone and climate.

  11. LAS - LAND ANALYSIS SYSTEM, VERSION 5.0

    NASA Technical Reports Server (NTRS)

    Pease, P. B.

    1994-01-01

    The Land Analysis System (LAS) is an image analysis system designed to manipulate and analyze digital data in raster format and provide the user with a wide spectrum of functions and statistical tools for analysis. LAS offers these features under VMS with optional image display capabilities for IVAS and other display devices as well as the X-Windows environment. LAS provides a flexible framework for algorithm development as well as for the processing and analysis of image data. Users may choose between mouse-driven commands or the traditional command line input mode. LAS functions include supervised and unsupervised image classification, film product generation, geometric registration, image repair, radiometric correction and image statistical analysis. Data files accepted by LAS include formats such as Multi-Spectral Scanner (MSS), Thematic Mapper (TM) and Advanced Very High Resolution Radiometer (AVHRR). The enhanced geometric registration package now includes both image to image and map to map transformations. The over 200 LAS functions fall into image processing scenario categories which include: arithmetic and logical functions, data transformations, Fourier transforms, geometric registration, hard copy output, image restoration, intensity transformation, multispectral and statistical analysis, file transfer, tape profiling and file management among others. Internal improvements to the LAS code have eliminated the VAX VMS dependencies and improved overall system performance. The maximum LAS image size has been increased to 20,000 lines by 20,000 samples with a maximum of 256 bands per image. The catalog management system used in earlier versions of LAS has been replaced by a more streamlined and maintenance-free method of file management. This system is not dependent on VAX/VMS and relies on file naming conventions alone to allow the use of identical LAS file names on different operating systems. While the LAS code has been improved, the original capabilities of the system have been preserved. These include maintaining associated image history, session logging, and batch, asynchronous and interactive modes of operation. The LAS application programs are integrated under version 4.1 of an interface called the Transportable Applications Executive (TAE). TAE 4.1 has four modes of user interaction: menu, direct command, tutor (or help), and dynamic tutor. In addition, TAE 4.1 allows the operation of LAS functions using mouse-driven commands under the TAE-Facelift environment provided with TAE 4.1. These modes of operation allow users, from the beginner to the expert, to exercise specific application options. LAS is written in C-language and FORTRAN 77 for use with DEC VAX computers running VMS with approximately 16Mb of physical memory. This program runs under TAE 4.1. Since TAE 4.1 is not a current version of TAE, TAE 4.1 is included within the LAS distribution. Approximately 130,000 blocks (65Mb) of disk storage space are necessary to store the source code and files generated by the installation procedure for LAS, and 44,000 blocks (22Mb) of disk storage space are necessary for TAE 4.1 installation. The only other dependencies for LAS are the subroutine libraries for the specific display device(s) that will be used with LAS/DMS (e.g. X-Windows and/or IVAS). The standard distribution medium for LAS is a set of two 9-track 6250 BPI magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. 
This program was developed in 1986 and last updated in 1992.

  12. Treatment Options for Plasma Cell Neoplasms (Including Multiple Myeloma)

    MedlinePlus

  13. Treatment Option Overview (Plasma Cell Neoplasms Including Multiple Myeloma)

    MedlinePlus

  14. Analytical and Experimental Evaluation of the Heat Transfer Distribution over the Surfaces of Turbine Vanes

    NASA Technical Reports Server (NTRS)

    Hylton, L. D.; Mihelc, M. S.; Turner, E. R.; Nealy, D. A.; York, R. E.

    1983-01-01

    Three airfoil data sets were selected for use in evaluating currently available analytical models for predicting airfoil surface heat transfer distributions in a 2-D flow field. Two additional airfoils, representative of highly loaded, low solidity airfoils currently being designed, were selected for cascade testing at simulated engine conditions. Some 2-D analytical methods were examined and a version of the STAN5 boundary layer code was chosen for modification. The final form of the method utilized a time-dependent, transonic inviscid cascade code coupled to a modified version of the STAN5 boundary layer code featuring zero order turbulence modeling. The boundary layer code is structured to accommodate a full spectrum of empirical correlations addressing the coupled influences of pressure gradient, airfoil curvature, and free-stream turbulence on airfoil surface heat transfer distribution and boundary layer transitional behavior. Comparison of predictions made with the model to the data base indicates a significant improvement in predictive capability.

  15. Analytical and experimental evaluation of the heat transfer distribution over the surfaces of turbine vanes

    NASA Astrophysics Data System (ADS)

    Hylton, L. D.; Mihelc, M. S.; Turner, E. R.; Nealy, D. A.; York, R. E.

    1983-05-01

    Three airfoil data sets were selected for use in evaluating currently available analytical models for predicting airfoil surface heat transfer distributions in a 2-D flow field. Two additional airfoils, representative of highly loaded, low solidity airfoils currently being designed, were selected for cascade testing at simulated engine conditions. Some 2-D analytical methods were examined and a version of the STAN5 boundary layer code was chosen for modification. The final form of the method utilized a time-dependent, transonic inviscid cascade code coupled to a modified version of the STAN5 boundary layer code featuring zero order turbulence modeling. The boundary layer code is structured to accommodate a full spectrum of empirical correlations addressing the coupled influences of pressure gradient, airfoil curvature, and free-stream turbulence on airfoil surface heat transfer distribution and boundary layer transitional behavior. Comparison of predictions made with the model to the data base indicates a significant improvement in predictive capability.

  16. Analytical determination of the effect of structural elasticity on landing stability of a version of the Viking Lander

    NASA Technical Reports Server (NTRS)

    Laurenson, R. M.

    1972-01-01

    A limited analytical investigation was conducted to assess the effects of structural elasticity on the landing stability of a version of the Viking Lander. Two landing conditions and two lander mass and inertia distributions were considered. The results of this investigation show that the stability-critical surface slopes were lower for an uphill landing than for a downhill landing. In addition, the heavy footpad mass with its corresponding inertia distribution resulted in lower stability-critical ground slopes than were obtained for the light footpad mass and its corresponding inertia distribution. Structural elasticity was observed to have a large effect on the downhill landing stability of the light footpad mass configuration but had a negligible effect on the stability of the other configuration examined. Because of the limited nature of this study, care must be exercised in drawing conclusions from these results relative to the overall stability characteristics of the Viking Lander.

  17. Podcasting and the Long Tail

    ERIC Educational Resources Information Center

    Bull, Glen

    2005-01-01

    Podcasting allows distribution of audio files through an RSS feed. This permits users to subscribe to a series of podcasts that are automatically sent to their computer or MP3 player. The capability to receive podcasts is built into freely distributed software such as iPodder as well as the most recent version of iTunes, a free download. In this…

  18. Multi-GPU and multi-CPU accelerated FDTD scheme for vibroacoustic applications

    NASA Astrophysics Data System (ADS)

    Francés, J.; Otero, B.; Bleda, S.; Gallego, S.; Neipp, C.; Márquez, A.; Beléndez, A.

    2015-06-01

    The Finite-Difference Time-Domain (FDTD) method is applied to the analysis of vibroacoustic problems and to the study of the propagation of longitudinal and transverse waves in stratified media. The potential of the scheme and the relevance of each acceleration strategy for massive computations in FDTD are demonstrated in this work. In this paper, we propose two new specific implementations of the two-dimensional FDTD scheme using multi-CPU and multi-GPU, respectively. In the first implementation, an open-source message passing interface (OMPI) has been included in order to fully exploit the resources of a biprocessor station with two Intel Xeon processors. Moreover, in the CPU code version, the streaming SIMD extensions (SSE) and the advanced vector extensions (AVX) have been included, along with shared-memory approaches that take advantage of multi-core platforms. The second implementation, the multi-GPU code version, is based on peer-to-peer communications available in CUDA on two GPUs (NVIDIA GTX 670). Subsequently, this paper presents an accurate analysis of the influence of the different code versions, including shared-memory approaches, vector instructions and multi-processors (both CPU and GPU), and compares them in order to delimit the degree of improvement of distributed solutions based on multi-CPU and multi-GPU. The performance of both approaches was analysed, and it is shown that adding shared-memory schemes to CPU computing substantially improves the performance of vector instructions, enlarging the simulation sizes that use the cache memory of the CPUs efficiently. In this case, GPU computing is roughly twice as fast as the fine-tuned CPU version for both one and two nodes. However, for massive computations, explicit vector instructions are not worthwhile, since memory bandwidth is the limiting factor and performance tends to match that of the sequential version with auto-vectorisation and the shared-memory approach. In this scenario, GPU computing is the best option, since it provides homogeneous behaviour. More specifically, the speedup of GPU computing reaches an upper limit of 12 for both one and two GPUs, whereas performance reaches peak values of 80 GFlops for one GPU and 146 GFlops for two GPUs. Finally, the method is applied to an earth crust profile in order to demonstrate the potential of our approach and the necessity of applying acceleration strategies in this type of application.
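
    For readers unfamiliar with why the update step vectorizes so well: each time step applies the same local stencil at every grid point. A minimal 2-D scalar-wave FDTD step in NumPy illustrates the pattern (illustrative only; the paper's vibroacoustic scheme uses coupled velocity-stress fields and layered material parameters):

      import numpy as np

      nx = ny = 512
      c, dx, dt = 1500.0, 1.0, 4e-4          # wave speed (m/s), grid step, time step
      assert c * dt / dx < 1 / np.sqrt(2)    # 2-D CFL stability condition

      u_prev = np.zeros((ny, nx))
      u      = np.zeros((ny, nx))
      u[ny // 2, nx // 2] = 1.0              # point excitation

      coef = (c * dt / dx) ** 2
      for _ in range(100):
          # 5-point Laplacian over the interior, fully vectorized:
          lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
                 - 4.0 * u[1:-1, 1:-1])
          u_next = u.copy()                  # boundaries stay clamped at zero
          u_next[1:-1, 1:-1] = 2*u[1:-1, 1:-1] - u_prev[1:-1, 1:-1] + coef * lap
          u_prev, u = u, u_next

    The same arithmetic maps directly onto SSE/AVX lanes on a CPU or onto one CUDA thread per grid point on a GPU, which is where the acceleration strategies compared in the paper come in.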

  19. CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Riley, G.

    1994-01-01

    The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches them against the rule network. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. The PC version and the Macintosh version each contain a windowing variant of CLIPS as well as the standard line oriented version. The mouse/window interface version for the PC works with a Microsoft compatible mouse or without a mouse. This window version uses the proprietary CURSES library for the PC, but a working executable of the window version is provided. The window oriented version for the Macintosh includes a version which uses a full Macintosh-style interface, including an integrated editor. This version allows the user to observe the changing fact base and rule activations in separate windows while a CLIPS program is executing. The IBM PC version is available bundled with CLIPSITS, the CLIPS Intelligent Tutoring System, for a special combined price (COS-10025). The goal of CLIPSITS is to provide the student with a tool to practice the syntax and concepts covered in the CLIPS User's Guide. It attempts to provide expert diagnosis and advice during problem solving which is typically not available without an instructor. CLIPSITS is divided into 10 lessons which mirror the first 10 chapters of the CLIPS User's Guide. The program was developed for the IBM PC series with a hard disk. CLIPSITS is also available separately as MSC-21679. The CLIPS program is written in C for interactive execution and has been implemented on an IBM PC computer operating under DOS, a Macintosh, and DEC VAX series computers operating under VMS or ULTRIX. 
The line oriented version should run on any computer system which supports a full (Kernighan and Ritchie) C compiler or the ANSI standard C language. CLIPS was developed in 1986 and Version 4.2 was released in July of 1988. Version 4.3 was released in June of 1989.
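
    The forward-chaining cycle CLIPS implements can be pictured in a few lines of Python: facts accumulate in working memory, and any rule whose conditions are all present fires and asserts new facts until quiescence. This naive loop rescans every rule on every pass, which is exactly the cost the Rete algorithm avoids (toy example; not CLIPS syntax or internals):

      facts = {("duck", "quacks"), ("duck", "has-feathers")}

      # Each rule: (set of condition facts, fact to assert when all match).
      rules = [
          ({("duck", "quacks"), ("duck", "has-feathers")}, ("duck", "is-bird")),
          ({("duck", "is-bird")}, ("duck", "can-be-classified")),
      ]

      # Naive forward chaining: fire any rule whose conditions are all present,
      # repeating until no new facts are produced.
      changed = True
      while changed:
          changed = False
          for conditions, conclusion in rules:
              if conditions <= facts and conclusion not in facts:
                  facts.add(conclusion)
                  changed = True

      print(facts)   # both derived facts now appear in working memory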

  20. CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (MACINTOSH VERSION)

    NASA Technical Reports Server (NTRS)

    Culbert, C.

    1994-01-01

    The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches them against the rule network. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. The PC version and the Macintosh version each contain a windowing variant of CLIPS as well as the standard line oriented version. The mouse/window interface version for the PC works with a Microsoft compatible mouse or without a mouse. This window version uses the proprietary CURSES library for the PC, but a working executable of the window version is provided. The window oriented version for the Macintosh includes a version which uses a full Macintosh-style interface, including an integrated editor. This version allows the user to observe the changing fact base and rule activations in separate windows while a CLIPS program is executing. The IBM PC version is available bundled with CLIPSITS, the CLIPS Intelligent Tutoring System, for a special combined price (COS-10025). The goal of CLIPSITS is to provide the student with a tool to practice the syntax and concepts covered in the CLIPS User's Guide. It attempts to provide expert diagnosis and advice during problem solving which is typically not available without an instructor. CLIPSITS is divided into 10 lessons which mirror the first 10 chapters of the CLIPS User's Guide. The program was developed for the IBM PC series with a hard disk. CLIPSITS is also available separately as MSC-21679. The CLIPS program is written in C for interactive execution and has been implemented on an IBM PC computer operating under DOS, a Macintosh, and DEC VAX series computers operating under VMS or ULTRIX. 
The line oriented version should run on any computer system which supports a full (Kernighan and Ritchie) C compiler or the ANSI standard C language. CLIPS was developed in 1986 and Version 4.2 was released in July of 1988. Version 4.3 was released in June of 1989.

  1. CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION WITH CLIPSITS)

    NASA Technical Reports Server (NTRS)

    Riley, G.

    1994-01-01

    The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches them against the rule network. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. The PC version and the Macintosh version each contain a windowing variant of CLIPS as well as the standard line oriented version. The mouse/window interface version for the PC works with a Microsoft compatible mouse or without a mouse. This window version uses the proprietary CURSES library for the PC, but a working executable of the window version is provided. The window oriented version for the Macintosh includes a version which uses a full Macintosh-style interface, including an integrated editor. This version allows the user to observe the changing fact base and rule activations in separate windows while a CLIPS program is executing. The IBM PC version is available bundled with CLIPSITS, the CLIPS Intelligent Tutoring System, for a special combined price (COS-10025). The goal of CLIPSITS is to provide the student with a tool to practice the syntax and concepts covered in the CLIPS User's Guide. It attempts to provide expert diagnosis and advice during problem solving which is typically not available without an instructor. CLIPSITS is divided into 10 lessons which mirror the first 10 chapters of the CLIPS User's Guide. The program was developed for the IBM PC series with a hard disk. CLIPSITS is also available separately as MSC-21679. The CLIPS program is written in C for interactive execution and has been implemented on an IBM PC computer operating under DOS, a Macintosh, and DEC VAX series computers operating under VMS or ULTRIX. 
The line oriented version should run on any computer system which supports a full (Kernighan and Ritchie) C compiler or the ANSI standard C language. CLIPS was developed in 1986 and Version 4.2 was released in July of 1988. Version 4.3 was released in June of 1989.

  2. Static analysis of the hull plate using the finite element method

    NASA Astrophysics Data System (ADS)

    Ion, A.

    2015-11-01

    This paper presents the static analysis of a container ship's structure at two levels: the first level is the girder/hull plate, and the second level is the entire strength hull of the vessel. This article describes the work for the static analysis of a hull plate, using the software package ANSYS Mechanical 14.5. The program is run on a computer with four Intel Xeon X5260 processors at 3.33 GHz and 32 GB of installed memory. In terms of software, the shared-memory parallel version of ANSYS refers to running ANSYS across multiple cores on an SMP system, while the distributed-memory parallel version of ANSYS (Distributed ANSYS) refers to running ANSYS across multiple processors on SMP or DMP systems.
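
    Stripped to its essentials, a linear static analysis of this kind assembles a global stiffness matrix K from element contributions and solves K u = f for the nodal displacements. A toy one-dimensional Python sketch (a clamped bar of three spring-like elements; the stiffness and load values are assumptions, and real shell/plate elements are far richer):

      import numpy as np

      # Toy linear static FEM: a clamped 1-D bar of three 2-node elements.
      # Each element contributes k * [[1, -1], [-1, 1]] to the global stiffness.
      n_nodes, k = 4, 2.0e6        # k = EA/L per element (N/m), assumed
      K = np.zeros((n_nodes, n_nodes))
      for e in range(n_nodes - 1):
          K[e:e+2, e:e+2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

      f = np.zeros(n_nodes)
      f[-1] = 1000.0               # 1 kN axial load at the free end

      # Apply the clamped boundary condition at node 0 and solve K u = f.
      u = np.zeros(n_nodes)
      u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
      print(u)                     # nodal displacements, increasing along the bar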

  3. Developing GIOVANNI-based Online Prototypes to Intercompare TRMM-Related Global Gridded-Precipitation Products

    NASA Technical Reports Server (NTRS)

    Liu, Zhong; Ostrenga, Dana; Teng, William; Kempler, Steven; Milich, Lenard

    2014-01-01

    New online prototypes have been developed to extend and enhance the previous effort by facilitating investigation of product characteristics and intercomparison of precipitation products across algorithms and versions, at spatial scales ranging from local to global, without downloading data or software. Several popular Tropical Rainfall Measuring Mission (TRMM) products and the TRMM Composite Climatology are included. In addition, users can download customized data in several popular formats for further analysis. Examples show product quality problems and differences in several monthly precipitation products. Differences in daily and monthly precipitation products are distributed unevenly in space, so tools such as those presented here are necessary for customized and detailed investigations. A simple time series and two area maps allow the discovery of abnormal values of 3A25 in one of the months. One example shows a V-shaped valley issue in the Version 6 3B43 time series, and another shows a sudden drop in the 3A25 monthly rain rate, both of which provide important information when the products are used for long-term trend studies. Future plans include adding more products and statistical functionality to the prototypes.

  4. Spectra of late type dwarf stars of known abundance for stellar population models

    NASA Technical Reports Server (NTRS)

    O'Connell, R. W.

    1990-01-01

    The project consisted of two parts. The first was to obtain new low-dispersion, long-wavelength, high S/N IUE spectra of F-G-K dwarf stars with previously determined abundances, temperatures, and gravities. To ensure high quality, the spectra are either trailed, or multiple exposures are taken within the large aperture. Second, the spectra are assembled into a library which combines the new data with existing IUE Archive data to yield mean spectral energy distributions for each important type of star. My principal responsibility is the construction and maintenance of this UV spectral library. It covers the spectral range 1200-3200 Å and is maintained in two parts: a version including complete wavelength coverage at the full spectral resolution of the Low Resolution cameras; and a selected bandpass version, consisting of the mean flux in pre-selected 20 Å bands. These bands are centered on spectral features or continuum regions of special utility - e.g. the C IV λ1550 or Mg II λ2800 feature. In the middle-UV region, special emphasis is given to those features (including continuum 'breaks') which are most useful in the study of F-G-K star spectra in the integrated light of old stellar populations.
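
    The selected-bandpass library reduces each spectrum to mean fluxes in fixed 20 Å windows around diagnostic features, which is straightforward to express in code. A minimal Python sketch with a synthetic spectrum (the function name and test data are illustrative, not the project's software):

      import numpy as np

      def band_means(wave, flux, centers, width=20.0):
          """Mean flux in `width`-Angstrom bands centered on selected features."""
          out = []
          for c in centers:
              sel = (wave >= c - width / 2) & (wave <= c + width / 2)
              out.append(flux[sel].mean())
          return np.array(out)

      # Example: C IV 1550 and Mg II 2800 bands on a synthetic spectrum.
      wave = np.arange(1200.0, 3200.0, 1.0)
      flux = np.random.default_rng(0).random(wave.size)
      print(band_means(wave, flux, centers=[1550.0, 2800.0]))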

  5. Spanish version of Colquitt's Organizational Justice Scale.

    PubMed

    Díaz-Gracia, Liliana; Barbaranelli, Claudio; Moreno-Jiménez, Bernardo

    2014-01-01

    Organizational justice (OJ) is an important predictor of different work attitudes and behaviors. Colquitt's Organizational Justice Scale (COJS) was designed to assess employees' perceptions of fairness. This scale has four dimensions: distributive, procedural, informational, and interpersonal justice. The objective of this study is to validate it in a Spanish sample. The scale was administered to 460 Spanish employees from the service sector; 40.4% were men and 59.6% women. The Confirmatory Factor Analysis (CFA) supported the four-dimension structure of the Spanish version of the COJS. This model showed a better fit to the data than the other models tested. Cronbach's alpha for the subscales ranged between .88 and .95. Correlations of the Spanish version of the COJS with measures of incivility and job satisfaction were statistically significant and of moderate to high magnitude, indicating a reasonable degree of construct validity. The Spanish version of the COJS has adequate psychometric properties and may be of value in assessing OJ in Spanish settings.
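
    For reference, the reliability figure reported here is Cronbach's alpha, α = k/(k−1) · (1 − Σ item variances / variance of the total score). A minimal Python sketch (the response matrix is made up for illustration):

      import numpy as np

      def cronbach_alpha(items):
          """items: (n_respondents, n_items) matrix of scale responses."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
          total_var = items.sum(axis=1).var(ddof=1)    # variance of total score
          return k / (k - 1) * (1.0 - item_var / total_var)

      # Hypothetical responses to a 4-item justice subscale (1-5 Likert):
      x = np.array([[4, 5, 4, 4], [2, 2, 3, 2], [5, 5, 5, 4], [3, 3, 2, 3]])
      print(round(cronbach_alpha(x), 3))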

  6. MAFIA Version 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiland, T.; Bartsch, M.; Becker, U.

    1997-02-01

    MAFIA Version 4.0 is an almost completely new version of the general-purpose electromagnetic simulator that has been known for 13 years. The major improvements concern the new graphical user interface based on state-of-the-art technology, as well as a series of new solvers for new physics problems. MAFIA now covers heat distribution, electro-quasistatics, S-parameters in the frequency domain, particle beam tracking in linear accelerators, acoustics, and even elastodynamics. The solvers that were available in earlier versions have also been improved and/or extended, for example the complex eigenmode solver and the 2D-3D coupled PIC solvers. Time domain solvers have new waveguide boundary conditions with extremely low reflection even near the cutoff frequency, concentrated elements are available, as well as a variety of signal processing options. Probably the most valuable addition is the recursive sub-grid capability that enables modeling of very small details in large structures.

  7. MAFIA Version 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiland, T.; Bartsch, M.; Becker, U.

    1997-02-01

    MAFIA Version 4.0 is an almost completely new version of the general-purpose electromagnetic simulator that has been known for 13 years. The major improvements concern the new graphical user interface based on state-of-the-art technology, as well as a series of new solvers for new physics problems. MAFIA now covers heat distribution, electro-quasistatics, S-parameters in the frequency domain, particle beam tracking in linear accelerators, acoustics, and even elastodynamics. The solvers that were available in earlier versions have also been improved and/or extended, for example the complex eigenmode solver and the 2D-3D coupled PIC solvers. Time domain solvers have new waveguide boundary conditions with extremely low reflection even near the cutoff frequency, concentrated elements are available, as well as a variety of signal processing options. Probably the most valuable addition is the recursive sub-grid capability that enables modeling of very small details in large structures.

  8. Developments in fiber optics for distribution automation

    NASA Technical Reports Server (NTRS)

    Kirkham, H.; Friend, H.; Jackson, S.; Johnston, A.

    1991-01-01

    An optical fiber based communications system of unusual design is described. The system consists of a network of optical fibers overlaid on the distribution system. It is configured as a large number of interconnected rings, with some spurs. Protocols for access to and control of the network are described. Because of the way they function, the protocols are collectively called AbNET, in commemoration of the microbiologists' abbreviation Ab for antibody. Optical data links that could be optically powered are described. There are two versions, each of which has a good frequency response and minimal filtering requirements. In one, a conventional FM pulse train is used at the transmitter, and a novel form of phase-locked loop is used as demodulator. In the other, the FM transmitter is replaced with a pulse generator arranged so that the period between pulses represents the modulating signal. Transmitter and receiver designs, including temperature compensation methods, are presented. Experimental results are given.

  9. Resilient off-grid microgrids: Capacity planning and N-1 security

    DOE PAGES

    Madathil, Sreenath Chalil; Yamangil, Emre; Nagarajan, Harsha; ...

    2017-06-13

    Over the past century the electric power industry has evolved to support the delivery of power over long distances with highly interconnected transmission systems. Despite this evolution, some remote communities are not connected to these systems. These communities rely on small, disconnected distribution systems, i.e., microgrids, to deliver power. However, as microgrids often are not held to the same reliability standards as transmission grids, remote communities can be at risk for extended blackouts. To address this issue, we develop an optimization model and an algorithm for capacity planning and operations of microgrids that include N-1 security and other practical modeling features like AC power flow physics, component efficiencies and thermal limits. Lastly, we demonstrate the computational effectiveness of our approach on two test systems: a modified version of the IEEE 13 node test feeder and a model of a distribution system in a remote community in Alaska.
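
    The N-1 criterion itself is simple to state: the system must still serve its load after the loss of any single component. A toy Python feasibility check over generator capacities shows the idea (hypothetical unit sizes and load; the paper's model couples this criterion with AC power flow physics and investment decisions):

      def n1_secure(capacities, peak_load):
          """True if the peak load can be served with any one generator out."""
          total = sum(capacities)
          return all(total - c >= peak_load for c in capacities)

      # Hypothetical microgrid against a 900 kW peak load:
      print(n1_secure([600, 500, 400], 900))   # True: losing any unit leaves >= 900 kW
      print(n1_secure([900, 300, 200], 900))   # False: losing the 900 kW unit fails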

  10. Space-Time Dynamics of Soil Moisture and Temperature: Scale issues

    NASA Technical Reports Server (NTRS)

    Mohanty, Binayak P.; Miller, Douglas A.; van Genuchten, M. Th.

    2003-01-01

    The goal of this project is to gain further understanding of soil moisture/temperature dynamics at different spatio-temporal scales and their physical controls/parameters. We created a comprehensive GIS database, which has been accessed extensively by NASA Land Surface Hydrology investigators (and others) and is located at the following URL: http://www.essc.psu.edu/nasalsh. For soil moisture field experiments such as SGP97, SGP99, SMEX02, and SMEX03, cartographic products were designed for multiple applications, both pre- and post-mission. Pre-mission applications included flight line planning and field operations logistics, as well as general insight into the extent and distribution of soil, vegetation, and topographic properties for the study areas. The cartographic products were created from original spatial information resources that were imported into Adobe Illustrator, where the maps were created and PDF versions were made for distribution and download.

  11. Resilient off-grid microgrids: Capacity planning and N-1 security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madathil, Sreenath Chalil; Yamangil, Emre; Nagarajan, Harsha

    Over the past century the electric power industry has evolved to support the delivery of power over long distances with highly interconnected transmission systems. Despite this evolution, some remote communities are not connected to these systems. These communities rely on small, disconnected distribution systems, i.e., microgrids, to deliver power. However, as microgrids often are not held to the same reliability standards as transmission grids, remote communities can be at risk for extended blackouts. To address this issue, we develop an optimization model and an algorithm for capacity planning and operations of microgrids that include N-1 security and other practical modeling features like AC power flow physics, component efficiencies and thermal limits. Lastly, we demonstrate the computational effectiveness of our approach on two test systems: a modified version of the IEEE 13 node test feeder and a model of a distribution system in a remote community in Alaska.

  12. MISR-Versioning-V23

    Atmospheric Science Data Center

    2018-02-21

    Version Number: F13_0023 (aerosol), F08_0023 (land). Production Start Date: 11/1/2017. Product Updates: This is a major revision to the aerosol and land surface products, including both product format and algorithm ...

  13. ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Johnson, S. C.

    1994-01-01

    ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, is an interface that enables reliability engineers to design large semi-Markov models accurately. The user describes the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. The abstract language allows efficient description of large, complex systems; a one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. Instead of listing the individual states of the Markov model, reliability engineers can specify the rules governing the behavior of a system, and these are used to generate the model automatically. ASSIST reads an input file describing the failure behavior of a system in an abstract language and generates a Markov model in the format needed for input to SURE, the Semi-Markov Unreliability Range Evaluator program, and to PAWS/STEM, the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. A Markov model consists of a number of system states and transitions between them. Each state in the model represents a possible state of the system in terms of which components have failed, which ones have been removed, etc. Within ASSIST, each state is defined by a state vector, where each element of the vector takes on an integer value within a defined range. An element can represent any meaningful characteristic, such as the number of working components of one type in the system, or the number of faulty components of another type in use. Statements representing transitions between states in the model have three parts: a condition expression, a destination expression, and a rate expression. The first expression is a Boolean expression describing the state space variable values of states for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized in different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs is: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS.
The VMS version (LAR-14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR-14923) is written in ANSI C-language. An ANSI-compliant C compiler is required to compile this package. The standard distribution medium for the Sun version of ASSIST is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the documentation in PostScript, TeX, and DVI formats are provided on the distribution medium. (The VMS distribution lacks the .DVI format files, however.) ASSIST was developed in 1986 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
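
    The transition statements described above (condition, destination, and rate expressions) lend themselves to a simple illustration. The following minimal Python sketch, not ASSIST syntax, expands a rule-based description into an explicit state and transition listing in the way the abstract describes; the component counts and rates are hypothetical.

      from collections import deque

      LAMBDA = 1e-4    # component failure rate (hypothetical)
      DELTA = 1e3      # reconfiguration/removal rate (hypothetical)

      def rules(state):
          """Yield (destination, rate): the condition, destination, rate triplets."""
          working, faulty = state
          if working > 0:                                       # condition expression
              yield (working - 1, faulty + 1), working * LAMBDA # a component fails
          if faulty > 0:                                        # condition expression
              yield (working, faulty - 1), DELTA                # a faulty unit is removed

      start = (4, 0)                   # state vector: (working, faulty)
      states, transitions, queue = {start}, [], deque([start])
      while queue:
          s = queue.popleft()
          for dest, rate in rules(s):
              transitions.append((s, dest, rate))
              if dest not in states:
                  states.add(dest)
                  queue.append(dest)

      for s, d, r in transitions:      # explicit state/transition listing, SURE-style
          print(f"{s} -> {d} at rate {r:g}")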

  14. ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Johnson, S. C.

    1994-01-01

    ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, is an interface that enables reliability engineers to design large semi-Markov models accurately. The user describes the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. The abstract language allows efficient description of large, complex systems; a one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. Instead of listing the individual states of the Markov model, reliability engineers can specify the rules governing the behavior of a system, and these are used to generate the model automatically. ASSIST reads an input file describing the failure behavior of a system in an abstract language and generates a Markov model in the format needed for input to SURE, the Semi-Markov Unreliability Range Evaluator program, and to PAWS/STEM, the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. A Markov model consists of a number of system states and transitions between them. Each state in the model represents a possible state of the system in terms of which components have failed, which ones have been removed, etc. Within ASSIST, each state is defined by a state vector, where each element of the vector takes on an integer value within a defined range. An element can represent any meaningful characteristic, such as the number of working components of one type in the system, or the number of faulty components of another type in use. Statements representing transitions between states in the model have three parts: a condition expression, a destination expression, and a rate expression. The first expression is a Boolean expression describing the state space variable values of states for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized in different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs is: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS.
The VMS version (LAR-14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR-14923) is written in ANSI C-language. An ANSI-compliant C compiler is required to compile this package. The standard distribution medium for the Sun version of ASSIST is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the documentation in PostScript, TeX, and DVI formats are provided on the distribution medium. (The VMS distribution lacks the .DVI format files, however.) ASSIST was developed in 1986 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.

  15. Technical report series on global modeling and data assimilation. Volume 4: Documentation of the Goddard Earth Observing System (GEOS) data assimilation system, version 1

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Pfaendtner, James; Bloom, Stephen; Lamich, David; Seablom, Michael; Sienkiewicz, Meta; Stobie, James; Dasilva, Arlindo

    1995-01-01

    This report describes the analysis component of the Goddard Earth Observing System, Data Assimilation System, Version 1 (GEOS-1 DAS). The general features of the data assimilation system are outlined, followed by a thorough description of the statistical interpolation algorithm, including specification of error covariances and quality control of observations. We conclude with a discussion of the current status of development of the GEOS data assimilation system. The main components of GEOS-1 DAS are an atmospheric general circulation model and an Optimal Interpolation algorithm. The system is cycled using the Incremental Analysis Update (IAU) technique, in which analysis increments are introduced as time-independent forcing terms in a forecast model integration. The system is capable of producing dynamically balanced states without the explicit use of initialization, as well as a time-continuous representation of non-observables such as precipitation and radiational fluxes. This version of the data assimilation system was used in the five-year reanalysis project completed in April 1994 by Goddard's Data Assimilation Office (DAO). Data from this reanalysis are available from the Goddard Distributed Active Archive Center (DAAC), which is part of NASA's Earth Observing System Data and Information System (EOSDIS). For information on how to obtain these data sets, contact the Goddard DAAC at (301) 286-3209, email daac@gsfc.nasa.gov.
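
    A minimal sketch of the IAU idea described above, assuming a toy model and a toy increment: rather than replacing the model state with the analysis, the analysis increment is applied as a constant forcing spread over the steps of the assimilation window.

      import numpy as np

      def model_step(x, dt=1.0):
          """Toy forecast-model dynamics (stand-in for the GCM)."""
          return x + dt * (-0.1 * x)

      x = np.array([1.0, 2.0])                  # background state
      increment = np.array([0.3, -0.2])         # analysis minus background
      N = 6                                     # model steps in the IAU window

      for _ in range(N):
          x = model_step(x) + increment / N     # time-independent forcing over the window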

  16. E-Roadway Animation (Text Version) | Transportation Research | NREL

    Science.gov Websites

    E-Roadway Animation (Text Version). This text version of the e-roadway animation covers overall emissions. Background images include 1) a U.S. map with text (80% overall emissions reduction by ), 3) a California map with text (80% transportation emissions reduction by 2050), and 4) a European map.

  17. The Wonders of Physics

    NASA Astrophysics Data System (ADS)

    Sprott, J. C.

    2003-04-01

    In 1984 the University of Wisconsin began an outreach program called The Wonders of Physics. The program initially consisted of a series of public lectures intended to generate interest in physics through a series of fast-paced demonstrations suitable for a diverse audience. The demonstrations are organized around the areas of classical physics, including motion, heat, sound, electricity, magnetism, and light. The presentations include music, costumes, skits, and surprise appearances of special guests. The presentation has been given about 160 times on the Madison campus, nearly always to capacity crowds totaling over 50,000. Each year the program is videotaped and distributed to individuals, schools, and cable TV stations. In 1990, a Lecture Kit was produced and is widely distributed. A traveling version of the show was developed in 1988 and has been given about 800 times to a total audience of approximately 100,000, mostly school children in nineteen states and provinces. The program is funded by the Office of Fusion Energy Sciences of the Department of Energy and by donations from those for whom the presentations are made as well as a few corporations and benefactors.

  18. OSMEAN - OSCULATING/MEAN CLASSICAL ORBIT ELEMENTS CONVERSION (HP9000/7XX VERSION)

    NASA Technical Reports Server (NTRS)

    Guinn, J. R.

    1994-01-01

    OSMEAN is a sophisticated FORTRAN algorithm that converts between osculating and mean classical orbit elements. Mean orbit elements are advantageous for trajectory design and maneuver planning since they can be propagated very quickly; however, mean elements cannot describe the exact orbit at any given time. Osculating elements enable the engineer to give an exact description of an orbit; however, computation costs are significantly higher due to the numerical integration required for propagation. By calculating accurate conversions between osculating and mean orbit elements, OSMEAN allows the engineer to exploit the advantages of each approach for the design of orbital trajectories and for maneuver planning. OSMEAN is capable of converting mean elements to osculating elements or vice versa. The conversion is based on modeling of all first-order aspherical and lunar-solar gravitational perturbations, as well as a second-order aspherical term based on the second-degree central body zonal perturbation. OSMEAN is written in FORTRAN 77 for HP 9000 series computers running HP-UX (NPO-18796) and DEC VAX series computers running VMS (NPO-18741). The HP version requires 388K of RAM for execution and the DEC VAX version requires 254K of RAM for execution. Sample input and output are listed in the documentation. Sample input is also provided on the distribution medium. The standard distribution medium for the HP 9000 series version is a .25 inch streaming magnetic IOTAMAT tape cartridge in UNIX tar format. It is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format or on a 3.5 inch diskette in UNIX tar format. The standard distribution medium for the DEC VAX version is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. It is also available on a TK50 tape cartridge in DEC VAX BACKUP format. OSMEAN was developed on a VAX 6410 in 1989, and was ported to the HP 9000 series platform in 1991. It is a copyrighted work with all copyright vested in NASA.
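
    OSMEAN's perturbation model is not reproduced here, but one common way to invert a known mean-to-osculating mapping is fixed-point iteration, sketched below with a stand-in short_period() term; the element values are illustrative.

      import numpy as np

      def short_period(mean_elements):
          """Stand-in for the periodic perturbation terms (not OSMEAN's model)."""
          return 1e-3 * np.sin(mean_elements)

      def osculating_from_mean(mean):
          return mean + short_period(mean)

      def mean_from_osculating(osc, tol=1e-12, max_iter=50):
          mean = osc.copy()                     # first guess: mean = osculating
          for _ in range(max_iter):
              new = osc - short_period(mean)    # fixed-point update
              if np.max(np.abs(new - mean)) < tol:
                  return new
              mean = new
          return mean

      osc = np.array([7000.0, 0.001, 0.9, 0.1, 0.2, 0.3])  # a, e, i, node, argp, M
      print(osculating_from_mean(mean_from_osculating(osc)) - osc)  # ~0: round trip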

  19. OSMEAN - OSCULATING/MEAN CLASSICAL ORBIT ELEMENTS CONVERSION (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Guinn, J. R.

    1994-01-01

    OSMEAN is a sophisticated FORTRAN algorithm that converts between osculating and mean classical orbit elements. Mean orbit elements are advantageous for trajectory design and maneuver planning since they can be propagated very quickly; however, mean elements cannot describe the exact orbit at any given time. Osculating elements enable the engineer to give an exact description of an orbit; however, computation costs are significantly higher due to the numerical integration required for propagation. By calculating accurate conversions between osculating and mean orbit elements, OSMEAN allows the engineer to exploit the advantages of each approach for the design of orbital trajectories and for maneuver planning. OSMEAN is capable of converting mean elements to osculating elements or vice versa. The conversion is based on modeling of all first-order aspherical and lunar-solar gravitational perturbations, as well as a second-order aspherical term based on the second-degree central body zonal perturbation. OSMEAN is written in FORTRAN 77 for HP 9000 series computers running HP-UX (NPO-18796) and DEC VAX series computers running VMS (NPO-18741). The HP version requires 388K of RAM for execution and the DEC VAX version requires 254K of RAM for execution. Sample input and output are listed in the documentation. Sample input is also provided on the distribution medium. The standard distribution medium for the HP 9000 series version is a .25 inch streaming magnetic IOTAMAT tape cartridge in UNIX tar format. It is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format or on a 3.5 inch diskette in UNIX tar format. The standard distribution medium for the DEC VAX version is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. It is also available on a TK50 tape cartridge in DEC VAX BACKUP format. OSMEAN was developed on a VAX 6410 in 1989, and was ported to the HP 9000 series platform in 1991. It is a copyrighted work with all copyright vested in NASA.

  20. 3DFEMWATER: A three-dimensional finite element model of water flow through saturated-unsaturated media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, G.T.

    1987-08-01

    The 3DFEMWATER model is designed to treat heterogeneous and anisotropic media consisting of as many geologic formations as desired; consider both distributed and point sources/sinks that are spatially and temporally dependent; accept prescribed initial conditions or obtain them by simulating a steady-state version of the system under consideration; deal with a transient head distributed over the Dirichlet boundary; handle time-dependent fluxes due to a pressure gradient varying along the Neumann boundary; treat time-dependent total fluxes distributed over the Cauchy boundary; automatically determine variable boundary conditions of evaporation, infiltration, or seepage on the soil-air interface; include the off-diagonal hydraulic conductivity components in the modified Richards equation for dealing with cases when the coordinate system does not coincide with the principal directions of the hydraulic conductivity tensor; give three options for estimating the nonlinear matrix; include two options (successive subregion block iterations and successive point iterations) for solving the linearized matrix equations; automatically reset the time step size when boundary conditions or sources/sinks change abruptly; and check the mass balance computation over the entire region for every time step. The model is verified against analytical solutions or other numerical models for three examples.
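
    The "options for estimating the nonlinear matrix" mentioned above refer to linearizing a system whose coefficients depend on the solution. A minimal Picard-style sketch, with a toy coefficient matrix standing in for the 3DFEMWATER finite element matrix:

      import numpy as np

      def K(h):
          """Toy solution-dependent coefficient matrix (stand-in for K(h) in FEM)."""
          k = 1.0 + 0.1 * np.tanh(h)
          n = len(h)
          return np.diag(k + 1.0) - 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))

      b = np.ones(5)
      h = np.zeros(5)                           # initial pressure-head guess
      for _ in range(100):
          h_new = np.linalg.solve(K(h), b)      # freeze coefficients, solve linear system
          if np.linalg.norm(h_new - h) < 1e-12:
              break
          h = h_new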

  1. Performance and Application of Parallel OVERFLOW Codes on Distributed and Shared Memory Platforms

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Rizk, Yehia M.

    1999-01-01

    The presentation discusses recent studies on the performance of the two parallel versions of the aerodynamics CFD code, OVERFLOW_MPI and _MLP. Developed at NASA Ames, the serial version, OVERFLOW, is a multidimensional Navier-Stokes flow solver based on overset (Chimera) grid technology. The code has recently been parallelized in two ways. One is based on the explicit message-passing interface (MPI) across processors and uses the _MPI communication package. This approach is primarily suited for distributed memory systems and workstation clusters. The second, termed the multi-level parallel (MLP) method, is simple and uses shared memory for all communications. The _MLP code is suitable on distributed-shared memory systems. For both methods, the message passing takes place across the processors or processes at the advancement of each time step. This procedure is, in effect, the Chimera boundary conditions update, which is done in an explicit "Jacobi" style. In contrast, the update in the serial code is done in more of a "Gauss-Seidel" fashion. The programming effort for the _MPI code is greater than for the _MLP code; the former requires modification of the outer and some inner shells of the serial code, whereas the latter focuses only on the outer shell of the code. The _MPI version offers a great deal of flexibility in distributing grid zones across a specified number of processors in order to achieve load balancing. The approach is capable of partitioning zones across multiple processors or sending each zone and/or cluster of several zones to a single processor. The message passing across the processors consists of Chimera boundary and/or an overlap of "halo" boundary points for each partitioned zone. The _MLP version is a new coarse-grain parallel concept at the zonal and intra-zonal levels. A grouping strategy is used to distribute zones into several groups forming sub-processes which run in parallel. The total volume of grid points in each group is approximately balanced. A proper number of threads is initially allocated to each group, and in subsequent iterations during run-time, the number of threads is adjusted to achieve load balancing across the processes. Each process exploits the multitasking directives already established in OVERFLOW.
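
    A minimal sketch of the explicit "Jacobi"-style halo update described above, using mpi4py and a toy one-dimensional zone per process; this illustrates the communication pattern only and is not OVERFLOW code.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.rank, comm.size
      u = np.full(12, float(rank))              # interior of this process's zone
      halo_l = halo_r = 0.0                     # boundary values from neighbors

      for step in range(10):
          padded = np.concatenate(([halo_l], u, [halo_r]))
          u = 0.5 * (padded[:-2] + padded[2:])  # advance using last step's halos
          if rank < size - 1:                   # exchange with right neighbor
              halo_r = comm.sendrecv(u[-1], dest=rank + 1, source=rank + 1)
          if rank > 0:                          # exchange with left neighbor
              halo_l = comm.sendrecv(u[0], dest=rank - 1, source=rank - 1)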

  2. Large-Scale Simulation of Multi-Asset Ising Financial Markets

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2017-03-01

    We perform a large-scale simulation of an Ising-based financial market model that includes 300 asset time series. The financial system simulated by the model shows a fat-tailed return distribution and volatility clustering, and exhibits unstable periods indicated by the volatility index, measured as the average of absolute returns. Moreover, we determine that the cumulative risk fraction, which measures the system risk, changes during high-volatility periods. We also calculate the inverse participation ratio (IPR) and its higher-power version, IPR6, from the absolute-return cross-correlation matrix. Finally, we show that the IPR and IPR6 also change during high-volatility periods.
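
    The IPR quantities are straightforward to compute from the eigenvectors of the cross-correlation matrix; a short sketch with synthetic data, where IPR is the sum of fourth powers of eigenvector components and IPR6 the sixth-power analogue:

      import numpy as np

      rng = np.random.default_rng(0)
      returns = rng.standard_normal((1000, 300))          # 300 synthetic asset series
      C = np.corrcoef(np.abs(returns), rowvar=False)      # absolute-return correlations
      eigvals, eigvecs = np.linalg.eigh(C)                # columns are unit eigenvectors

      ipr = np.sum(eigvecs ** 4, axis=0)    # IPR per eigenvector
      ipr6 = np.sum(eigvecs ** 6, axis=0)   # higher-power version, IPR6
      print(ipr[-1], ipr6[-1])              # mode with the largest eigenvalue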

  3. Flexible binding simulation by a novel and improved version of virtual-system coupled adaptive umbrella sampling

    NASA Astrophysics Data System (ADS)

    Dasgupta, Bhaskar; Nakamura, Haruki; Higo, Junichi

    2016-10-01

    Virtual-system coupled adaptive umbrella sampling (VAUS) enhances sampling along a reaction coordinate by using a virtual degree of freedom. However, VAUS and regular adaptive umbrella sampling (AUS) methods are still computationally expensive. To decrease the computational burden further, improvements of VAUS for all-atom explicit solvent simulation are presented here. The improvements include probability distribution calculation by a Markov approximation, parameterization of biasing forces by iterative polynomial fitting, and force scaling. When applied to study Ala-pentapeptide dimerization in explicit solvent, these improvements showed an advantage over regular AUS. The improved VAUS makes larger biological systems amenable to such simulations.
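
    A hedged sketch of two of the ingredients named above for plain adaptive umbrella sampling, histogram-based bias updating and polynomial parameterization of the biasing force; the sampler is a toy stand-in, not the VAUS implementation.

      import numpy as np

      kT = 1.0
      edges = np.linspace(0.0, 1.0, 22)             # reaction-coordinate bins
      centers = 0.5 * (edges[:-1] + edges[1:])
      bias = np.zeros_like(centers)

      def sample_histogram(bias):
          """Stand-in for an MD run under the current bias; this toy ignores the bias."""
          rng = np.random.default_rng(0)
          x = rng.beta(2, 5, size=5000)
          hist, _ = np.histogram(x, bins=edges)
          return np.maximum(hist / hist.sum(), 1e-12)

      for _ in range(5):                            # adaptive iterations
          p = sample_histogram(bias)
          bias += kT * np.log(p)                    # update bias to flatten sampled P
          coeffs = np.polyfit(centers, bias, deg=6) # polynomial parameterization
          force = -np.polyval(np.polyder(coeffs), centers)   # biasing force, -dW/dx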

  4. Equilibrium polymerization on the equivalent-neighbor lattice

    NASA Technical Reports Server (NTRS)

    Kaufman, Miron

    1989-01-01

    The equilibrium polymerization problem is solved exactly on the equivalent-neighbor lattice. The Flory-Huggins (Flory, 1986) entropy of mixing is exact for this lattice. The discrete version of the n-vector model in the limit as n approaches 0 is verified to be equivalent to the equal-reactivity polymerization process in the whole parameter space, including the polymerized phase. The polymerization processes for polymers satisfying the Schulz (1939) distribution exhibit nonuniversal critical behavior. A close analogy is found between the polymerization problem of Schulz index r and the Bose-Einstein ideal gas in d = -2r dimensions, with critical polymerization corresponding to Bose-Einstein condensation.

  5. smwrData—An R package of example hydrologic data, version 1.1.1

    USGS Publications Warehouse

    Lorenz, David L.

    2015-11-06

    A collection of 24 datasets, including streamflow, well characteristics, groundwater elevations, and discrete water-quality concentrations, is provided to produce a consistent set of example data to demonstrate typical data manipulations or statistical analysis of hydrologic data. These example data are provided in an R package called smwrData. The data in the package have been collected by the U.S. Geological Survey or published in its reports, for example Helsel and Hirsch (2002). The R package provides a convenient mechanism for distributing the data to users of R within the U.S. Geological Survey and other users in the R community.

  6. Description of Blankaartia shatrovi n. sp. (Acari: Trombiculidae) From Brazil.

    PubMed

    Bassini-Silva, R; Jacinavicius, F C; Mendoza-Roldan, J A; Daemon, E; Barros-Battesti, D M

    2017-01-01

    The chigger mite genus Blankaartia includes 28 known species, of which 10 are distributed in the Nearctic and Neotropical regions. These species preferentially parasitize birds, but occasionally they can also be found on rodents, bats, and reptiles, showing low host selectivity. In the present study, we report the presence of this genus in Brazil for the first time, including the first report of Blankaartia sinnamaryi (Floch and Fauran) and the description of a new species of Blankaartia collected from birds (Order Passeriformes).

  7. Model-Driven Development for scientific computing. Computations of RHEED intensities for a disordered surface. Part I

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2010-03-01

    Scientific computing is the field of study concerned with constructing mathematical models and numerical solution techniques and with using computers to analyse and solve scientific and engineering problems. Model-Driven Development (MDD) has been proposed as a means to support the software development process through the use of a model-centric approach. This paper surveys the core MDD technology that was used to develop an application that allows computation of the RHEED intensities dynamically for a disordered surface. New version program summary. Program title: RHEED1DProcess. Catalogue identifier: ADUY_v4_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUY_v4_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 31 971. No. of bytes in distributed program, including test data, etc.: 3 039 820. Distribution format: tar.gz. Programming language: Embarcadero C++ Builder. Computer: Intel Core Duo-based PC. Operating system: Windows XP, Vista, 7. RAM: more than 1 GB. Classification: 4.3, 7.2, 6.2, 8, 14. Catalogue identifier of previous version: ADUY_v3_0. Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2394. Does the new version supersede the previous version?: No. Nature of problem: An application that implements numerical simulations should be constructed according to the CSFAR rules: clear and well-documented, simple, fast, accurate, and robust. A clearly written, externally and internally documented program is much easier to understand and modify. A simple program is much less prone to error and is more easily modified than one that is complicated. Simplicity and clarity also help make the program flexible. Making the program fast has economic benefits. It also allows flexibility because some of the features that make a program efficient can be traded off for greater accuracy. Making the program fast also has the benefit of allowing longer calculations with better resolution. The compromise between speed and accuracy has always posed one of the most troublesome challenges for the programmer. Almost all advances in numerical analysis have come about from trying to reach these twin goals. Changes in the basic algorithms will give greater improvements in accuracy and speed than using special numerical tricks or changing programming language. A robust program works correctly over a broad spectrum of input data. Solution method: The computational model of the program is based on the use of a dynamical diffraction theory in which the electrons are taken to be diffracted by a potential which is periodic in the dimension perpendicular to the surface. In the case of a disordered surface we can use the proportional model of the scattering potential, in which the potential of a partially filled layer is taken to be the product of the coverage of this layer and the potential of a fully filled layer: U(θ,z) = Σ_n θ_n(t/τ) U_n(1,z), where U_n(1,z) stands for the potential of the full nth layer and U(θ,z) for the potential of the growing layer. Reasons for new version: Responding to user feedback, the RHEEDGR-09 program has been upgraded to a standard that allows carrying out computations of the RHEED intensities for a disordered surface. Also, the functionality and documentation of the program have been improved.
Summary of revisions: The logical structure of the Platform-Specific Model of the RHEEDGR-09 program has been modified according to the scheme shown in Fig. 1*. The class diagram in Fig. 1* is a static view of the main platform-specific elements of the RHEED1DProcess architecture. Fig. 2* provides a dynamic view by showing a simplified sequence diagram of the creation and destruction of the process. Fig. 3* shows the RHEED1DProcess use case model. As can be seen in Figs. 2-3*, the RHEED1DProcess has been designed as a slave process that runs as a separate thread inside each transaction generated by the master Growth09 program (see pii:S0010-4655(09)00386-5, A. Daniluk, Model-Driven Development for scientific computing. Computations of RHEED intensities for a disordered surface. Part II). The RHEED1DProcess requires the user to provide the appropriate parameters for the crystal structure under investigation. These parameters are loaded from the parameters.ini file at run-time. Instructions on the preparation of the .ini files can be found in the new distribution. The RHEED1DProcess also requires the user to provide the appropriate values of the layer coverage profiles. These values are loaded at run-time from the CoverageProfiles.dat file (generated by the Growth09 master application). The RHEED1DProcess enables carrying out one-dimensional dynamical calculations for the fcc lattice with a two-atom basis and the fcc lattice with a one-atom basis; the zeroth Fourier component of the scattering potential in the TRHEED1D::crystPotUg() function can, however, be modified according to users' specific application requirements. * The figures mentioned can be downloaded; see "Supplementary material" below. Unusual features: The program is distributed in the form of the main projects RHEED1DProcess.cbproj and Graph2D0x.cbproj with associated files, and should be compiled using Embarcadero RAD Studio 2010 along with the Together visual-modelling platform. The program should be compiled with English/USA regional and language options. Additional comments: This version of the RHEED program is designed to run in conjunction with the GROWTH09 (ADVL_v3_0) program. It does not replace the previous, stand-alone, RHEEDGR-09 (ADUY_v3_0) version. Running time: The typical running time is machine and user-parameters dependent. References: [1] OMG, Model Driven Architecture Guide Version 1.0.1, 2003.
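
    The proportional-potential model quoted in the solution method reduces to a weighted sum of full-layer potentials. A small numeric sketch, with toy Gaussian layer potentials in place of the program's crystal potential:

      import numpy as np

      z = np.linspace(0.0, 12.0, 400)           # depth coordinate (arbitrary units)
      d = 2.0                                   # layer spacing (assumed)

      def U_full(n, z):
          """Toy potential of the fully filled nth layer (Gaussian stand-in)."""
          return -np.exp(-((z - n * d) ** 2))

      theta = [1.0, 1.0, 0.6, 0.1]              # layer coverages at some time t/tau
      U = sum(t * U_full(n, z) for n, t in enumerate(theta))   # U = sum_n theta_n U_n(1,z)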

  8. An implementation of the NiftyRec medical imaging library for PIXE-tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Michelet, C.; Barberet, P.; Desbarats, P.; Giovannelli, J.-F.; Schou, C.; Chebil, I.; Delville, M.-H.; Gordillo, N.; Beasley, D. G.; Devès, G.; Moretto, P.; Seznec, H.

    2017-08-01

    A new development of the TomoRebuild software package is presented, including "thick sample" correction for non-linear X-ray production (NLXP) and X-ray absorption (XA). As in the previous versions, C++ programming with standard libraries was used for easier portability. Data reduction requires different steps, which may be run either from a command line instruction or via a user-friendly interface, developed as a portable Java plugin in ImageJ. All experimental and reconstruction parameters can be easily modified, either directly in the ASCII parameter files or via the ImageJ interface. A detailed user guide in English is provided. Sinograms and final reconstructed images are generated in usual binary formats that can be read by most public domain graphics software. New MLEM and OSEM methods are proposed, using optimized methods from the NiftyRec medical imaging library. An overview of the different medical imaging methods that have been used for ion beam microtomography applications is presented. In TomoRebuild, PIXET data reduction is performed for each chemical element independently and separately from STIMT, except for two steps where the fusion of STIMT and PIXET data is required: the calculation of the correction matrix and the normalization of PIXET data to obtain mass fraction distributions. Correction matrices for NLXP and XA are calculated using procedures extracted from the DISRA code, taking into account a large X-ray detection solid angle. For this, the 3D STIMT mass density distribution is used, considering a homogeneous global composition. A first example of a PIXET experiment using two detectors is presented. Reconstruction results are compared and found to be in good agreement between different codes: FBP, NiftyRec MLEM and OSEM of the TomoRebuild software package, the original DISRA, its accelerated version provided in JPIXET, and the accelerated MLEM version of JPIXET, with or without correction.
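
    The MLEM method mentioned above has a compact multiplicative form that is easy to sketch; the system matrix here is a random toy stand-in, not the NiftyRec projector.

      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.random((60, 40))          # toy system matrix: 60 measurements, 40 pixels
      x_true = rng.random(40)
      y = A @ x_true                    # noiseless "sinogram" for the demo

      x = np.ones(40)                   # uniform non-negative initial image
      sens = A.T @ np.ones(60)          # sensitivity image, A^T 1
      for _ in range(50):
          x *= (A.T @ (y / (A @ x))) / sens   # multiplicative MLEM update
      print(np.max(np.abs(x - x_true)))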

  9. Joint Seismic-Geodetic Algorithm for Finite-Fault Detection and Slip Inversion in the West Coast ShakeAlert System

    NASA Astrophysics Data System (ADS)

    Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Murray, J. R.

    2016-12-01

    Finite-fault source algorithms can greatly benefit earthquake early warning (EEW) systems. Estimates of finite-fault parameters provide spatial information, which can significantly improve real-time shaking calculations and help with disaster response. In this project, we have focused on integrating a finite-fault seismic-geodetic algorithm into the West Coast ShakeAlert framework. The seismic part is FinDer 2, a C++ version of the algorithm developed by Böse et al. (2012). It interpolates peak ground accelerations and calculates the best fault length and strike from template matching. The geodetic part is a C++ version of BEFORES, the algorithm developed by Minson et al. (2014) that uses a Bayesian methodology to search for the most probable slip distribution on a fault of unknown orientation. Ultimately, these two will be used together, with FinDer generating a Bayesian prior for BEFORES via the methodology of Minson et al. (2015); the joint solution will generate estimates of finite-fault extent, strike, dip, best slip distribution, and magnitude. We have created C++ versions of both FinDer and BEFORES using open source libraries and have developed a C++ Application Programming Interface (API) for each. Their APIs allow FinDer and BEFORES to contribute to the ShakeAlert system via an open source messaging system, ActiveMQ. FinDer has been receiving real-time data, detecting earthquakes, and reporting messages on the development system for several months. We are also testing FinDer extensively with Earthworm tankplayer files. BEFORES has been tested with ActiveMQ messaging in the ShakeAlert framework and works off a FinDer trigger. We are finishing the FinDer-BEFORES connections in this framework and testing the combined system via seismic-geodetic tankplayer files, which will include actual and simulated data.
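
    A heavily simplified sketch of the Bayesian orientation search described above: candidate strikes are scored by how well a least-squares slip amplitude fits the geodetic offsets. The forward model predict() is a hypothetical stand-in, not BEFORES or an elastic dislocation code.

      import numpy as np

      rng = np.random.default_rng(2)
      stations = rng.uniform(-50.0, 50.0, size=(20, 2))   # station coordinates, km

      def predict(strike_deg, xy):
          """Toy unit-slip surface-displacement pattern for a vertical fault (assumed)."""
          s = np.deg2rad(strike_deg)
          along = xy @ np.array([np.cos(s), np.sin(s)])
          across = xy @ np.array([-np.sin(s), np.cos(s)])
          return np.sign(across) / (1.0 + np.abs(along) / 10.0)

      sigma = 0.02
      d_obs = 0.8 * predict(40.0, stations) + sigma * rng.standard_normal(20)

      strikes = np.arange(0.0, 180.0, 2.0)
      log_post = []
      for strike in strikes:
          g = predict(strike, stations)
          slip = (g @ d_obs) / (g @ g)            # least-squares slip amplitude
          resid = d_obs - slip * g
          log_post.append(-0.5 * resid @ resid / sigma**2)   # Gaussian log likelihood
      print("most probable strike:", strikes[int(np.argmax(log_post))])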

  10. Prepositioned Stocks: Marine Corps Needs to Improve Cost Estimate Reliability and Oversight of Inventory Systems for Equipment in Norway

    DTIC Science & Technology

    2015-09-01

    HMMWV), M1A1 Main Battle Tanks, Tank Retrievers, Armored Breeching Vehicles, Amphibious Assault Vehicles, and several variants of the Medium...MCPP-N equipment stored in the Norwegian caves. As noted earlier, Marine Corps equipment is distributed among six caves. While the current version of...according to Marine Corps Business System Integration Team officials, the initial plan was for the first version of the Global Combat Support System

  11. Factor Information Retrieval System Version 2.0 (FIRE) (for microcomputers). Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    FIRE Version 2.0 contains EPA's recommended emission estimation factors for criteria and toxic air pollutants. FIRE consists of: (1) an EPA internal repository system that contains the emission factor data identified and collected, and (2) an external distribution system that contains only EPA's recommended factors. The emission factors, compiled from a review of the literature, are identified by pollutant name, CAS number, process and emission source descriptions, SIC code, SCC, and control status. The factors are rated for quality using AP-42 rating criteria.
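
    The record structure listed above (pollutant name, CAS number, SCC, control status, AP-42 rating) maps naturally onto a small data type with keyed lookup; the field values below are illustrative, not EPA data.

      from dataclasses import dataclass

      @dataclass
      class EmissionFactor:
          pollutant: str      # pollutant name
          cas: str            # CAS registry number
          scc: str            # source classification code
          control: str        # control status
          factor: float       # emission factor (units depend on the process; assumed)
          rating: str         # AP-42 quality rating, A (best) to E

      db = [  # illustrative records, not EPA data
          EmissionFactor("Benzene", "71-43-2", "3-01-001-01", "uncontrolled", 0.50, "C"),
          EmissionFactor("Benzene", "71-43-2", "3-01-001-02", "controlled", 0.10, "B"),
      ]

      def lookup(pollutant, scc):
          return [f for f in db if f.pollutant == pollutant and f.scc == scc]

      print(lookup("Benzene", "3-01-001-01"))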

  12. Documentation for the machine-readable version of a supplement to the Bright Star catalogue (Hoffleit, Saladyga and Wlasuk 1983)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    Detailed descriptions of the three files of the machine-readable catalog are given. The files of the original tape have been restructured and the data records reformatted to produce a uniform data file having a single logical record per star and homogeneous data fields. The characteristics of the tape version as it is presently being distributed from the Astronomical Data Center are given and the changes to the original tape supplied are described.

  13. Experimental fusion of different versions of the total laboratory automation system and improvement of laboratory turnaround time.

    PubMed

    Chung, Hee-Jung; Song, Yoon Kyung; Hwang, Sang-Hyun; Lee, Do Hoon; Sugiura, Tetsuro

    2018-02-25

    Use of total laboratory automation (TLA) systems has expanded to microbiology and hemostasis, and the systems have been upgraded to second and third generations. We herein report the first successful upgrade and fusion of different versions of the TLA system, thus improving laboratory turnaround time (TAT). A 21-day schedule was planned from the time of the pre-meeting to installation and clinical sample application. We analyzed the monthly TAT in each menu, the distribution of the "out of range for acceptable TAT" samples, and the "prolonged time out of acceptable TAT," before and after the upgrade and fusion. We installed and customized hardware, middleware, and software. The one-way CliniLog 2.0 version track, 50.0 m long, was changed to a 23.2-m long one-way 2.0 version and an 18.7-m long two-way 4.0 version. The monthly TATs in the outpatient samples, before and after upgrading the TLA system, were uniformly satisfactory in the chemistry and viral marker menus. However, in the tumor marker menu, the target TAT (98.0% of samples ≤60 minutes) was not satisfied during the familiarization period. There was no significant difference in the proportion of "out of acceptable TAT" samples before and after the TLA system upgrades (7.4‰ and 8.5‰). However, the mean "prolonged time out of acceptable TAT" in the chemistry samples was significantly shortened from 34.5 (±43.4) minutes to 17.4 (±24.0) minutes after the fusion. Despite experimental challenges, a fusion of the TLA system shortened the "prolonged time out of acceptable TAT," indicating a distribution change in overall TAT.
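
    The two summary statistics reported above, the share of samples out of the acceptable TAT and the mean prolonged time beyond it, can be computed as in the following sketch; the 60-minute target and the sample times are illustrative.

      import numpy as np

      tat = np.array([35, 42, 58, 61, 75, 40, 66, 52, 49, 90], dtype=float)  # minutes
      target = 60.0                                  # acceptable TAT (illustrative)

      out = tat > target
      share_permille = 1000.0 * out.mean()           # cf. the 7.4 and 8.5 permille above
      prolonged = tat[out] - target                  # "prolonged time out of acceptable TAT"
      print(f"{share_permille:.0f} permille out; mean prolonged {prolonged.mean():.1f} min")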

  14. Xyce release and distribution management : version 1.2.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, Scott Alan; Williamson, Charles Michael

    2003-10-01

    This document presents a high-level description of the Xyce (TM) Parallel Electronic Simulator Release and Distribution Management Process. The purpose of this process is to standardize the manner in which all Xyce software products progress toward release and how releases are made available to customers. Rigorous Release Management will assure that Xyce releases are created in such a way that the elements comprising the release are traceable and the release itself is reproducible. Distribution Management describes what is to be done with a Xyce release that is eligible for distribution.

  15. Parallel grid library for rapid and flexible simulation development

    NASA Astrophysics Data System (ADS)

    Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.

    2013-04-01

    We present an easy to use and flexible grid library for developing highly scalable parallel simulations. The distributed cartesian cell-refinable grid (dccrg) supports adaptive mesh refinement and allows an arbitrary C++ class to be used as cell data. The amount of data in grid cells can vary both in space and time allowing dccrg to be used in very different types of simulations, for example in fluid and particle codes. Dccrg transfers the data between neighboring cells on different processes transparently and asynchronously allowing one to overlap computation and communication. This enables excellent scalability at least up to 32 k cores in magnetohydrodynamic tests depending on the problem and hardware. In the version of dccrg presented here part of the mesh metadata is replicated between MPI processes reducing the scalability of adaptive mesh refinement (AMR) to between 200 and 600 processes. Dccrg is free software that anyone can use, study and modify and is available at https://gitorious.org/dccrg. Users are also kindly requested to cite this work when publishing results obtained with dccrg. Catalogue identifier: AEOM_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOM_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU Lesser General Public License version 3. No. of lines in distributed program, including test data, etc.: 54975. No. of bytes in distributed program, including test data, etc.: 974015. Distribution format: tar.gz. Programming language: C++. Computer: PC, cluster, supercomputer. Operating system: POSIX. The code has been parallelized using MPI and tested with 1-32768 processes. RAM: 10 MB-10 GB per process. Classification: 4.12, 4.14, 6.5, 19.3, 19.10, 20. External routines: MPI-2 [1], boost [2], Zoltan [3], sfc++ [4]. Nature of problem: Grid library supporting arbitrary data in grid cells, parallel adaptive mesh refinement, transparent remote neighbor data updates and load balancing. Solution method: The simulation grid is represented by an adjacency list (graph) with vertices stored into a hash table and edges into contiguous arrays. Message Passing Interface standard is used for parallelization. Cell data is given as a template parameter when instantiating the grid. Restrictions: Logically cartesian grid. Running time: Running time depends on the hardware, problem and the solution method. Small problems can be solved in under a minute and very large problems can take weeks. The examples and tests provided with the package take less than about one minute using default options. In the version of dccrg presented here the speed of adaptive mesh refinement is at most of the order of 10^6 total created cells per second. References: [1] http://www.mpi-forum.org/. [2] http://www.boost.org/. [3] K. Devine, E. Boman, R. Heaphy, B. Hendrickson, C. Vaughan, Zoltan data management services for parallel dynamic applications, Comput. Sci. Eng. 4 (2002) 90-97, http://dx.doi.org/10.1109/5992.988653. [4] https://gitorious.org/sfc++.
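
    A tiny sketch of the storage scheme the summary describes, with cells as vertices in a hash table and neighbor lists as contiguous arrays; the cell-ID convention for refinement below is a simplified assumption, not dccrg's actual scheme.

      cells = {}         # hash table: cell ID -> user data (any type per cell)
      neighbors = {}     # cell ID -> list of neighbor IDs (contiguous array)

      def add_cell(cell_id, data, neigh):
          cells[cell_id] = data
          neighbors[cell_id] = list(neigh)

      def refine(cell_id, n_children=8):
          """Replace a cell by its children; ID convention is a simplified assumption."""
          data = cells.pop(cell_id)
          neigh = neighbors.pop(cell_id)
          for k in range(n_children):
              add_cell(cell_id * 10 + k, dict(data), neigh)  # children inherit data

      add_cell(1, {"rho": 1.0}, [2, 3])
      refine(1)          # cell 1 becomes cells 10..17
      print(sorted(cells))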

  16. Smart-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Venkat K; Palmintier, Bryan S; Hodge, Brian S

    The National Renewable Energy Laboratory (NREL), in collaboration with the Massachusetts Institute of Technology (MIT), Universidad Pontificia Comillas (Comillas-IIT, Spain), and GE Grid Solutions, is working on an ARPA-E GRID DATA project, titled Smart-DS, to create: 1) high-quality, realistic, synthetic distribution network models, and 2) advanced tools for automated scenario generation based on high-resolution weather data and generation growth projections. Through these advancements, the Smart-DS project is envisioned to accelerate the development, testing, and adoption of advanced algorithms, approaches, and technologies for sustainable and resilient electric power systems, especially in the realm of U.S. distribution systems. This talk will present the goals and overall approach of the Smart-DS project, including the process of creating the synthetic distribution datasets using the reference network model (RNM) and the comprehensive validation process to ensure network realism, feasibility, and applicability to advanced use cases. The talk will provide demonstrations of early versions of the synthetic models, along with the lessons learnt from expert engagements to enhance future iterations. Finally, the scenario generation framework, its development plans, and coordination with GRID DATA repository teams to house these datasets for public access will also be discussed.

  17. Simple stochastic birth and death models of genome evolution: was there enough time for us to evolve?

    PubMed

    Karev, Georgy P; Wolf, Yuri I; Koonin, Eugene V

    2003-10-12

    The distributions of many genome-associated quantities, including the membership of paralogous gene families, can be approximated with power laws. We are interested in developing mathematical models of genome evolution that adequately account for the shape of these distributions and describe the evolutionary dynamics of their formation. We show that simple stochastic models of genome evolution lead to power-law asymptotics of the protein domain family size distribution. These models, called Birth, Death and Innovation Models (BDIM), represent a special class of balanced birth-and-death processes, in which domain duplication and deletion rates are asymptotically equal up to the second order. The simplest, linear BDIM shows an excellent fit to the observed distributions of domain family size in diverse prokaryotic and eukaryotic genomes. However, the stochastic version of the linear BDIM explored here predicts that the actual size of large paralogous families is reached on an unrealistically long timescale. We show that the introduction of non-linearity, which might be interpreted as interaction of a particular order between individual family members, allows the model to achieve genome evolution rates that are much more compatible with the current estimates of the rates of individual duplication/loss events.
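
    A minimal stochastic sketch of a linear BDIM of the kind analyzed above: per-domain duplication and deletion at nearly balanced rates, plus innovation of new single-domain families. The rates and the discrete-time update are illustrative choices, not the paper's parameterization.

      import numpy as np

      rng = np.random.default_rng(3)
      lam, delta, nu, dt = 0.010, 0.011, 0.5, 1.0   # nearly balanced rates (illustrative)
      families = [1] * 100                          # initial single-domain families

      for _ in range(5000):
          sizes = np.array(families, dtype=float)
          births = rng.poisson(lam * sizes * dt)    # per-domain duplications
          deaths = rng.poisson(delta * sizes * dt)  # per-domain deletions
          sizes = sizes + births - deaths
          families = [int(s) for s in sizes if s > 0]   # extinct families drop out
          families += [1] * rng.poisson(nu * dt)        # innovation: new families

      sizes, counts = np.unique(families, return_counts=True)
      print(list(zip(sizes[:10], counts[:10])))     # family-size spectrum (heavy tail)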

  18. Resources | Division of Cancer Prevention

    Cancer.gov

    Manual of Operations, Version 3, 12/13/2012 (PDF, 162KB); Database Sources: Consortium for Functional Glycomics databases; Design Studies Related to the Development of Distributed, Web-based European Carbohydrate Databases (EUROCarbDB)

  19. Translating and culturally adapting the shortened version of the Hospital Ethical Climate Survey (HECS-S) - retaining or modifying validated instruments.

    PubMed

    Pergert, Pernilla; Bartholdson, Cecilia; Wenemark, Marika; Lützén, Kim; Af Sandeberg, Margareta

    2018-05-10

    The Hospital Ethical Climate Survey (HECS) was developed in the USA and later shortened (HECS-S). HECS has previously been translated into Swedish, and the aim of this study was to describe a process of translating and culturally adapting HECS-S and to develop a Swedish multi-professional version relevant for paediatrics. Another aim was to describe decisions about retaining versus modifying the questionnaire in order to keep the Swedish version as close as possible to the original while achieving a good functional level and trustworthiness. In HECS-S, the respondents are asked to indicate the veracity of statements. In HECS and HECS-S the labels of the scale range from 'almost never true' to 'almost always true', while the Swedish HECS labels range from 'never' to 'always'. The procedure of translating and culturally adapting the Swedish version followed the scientific structure of guidelines. Three focus group interviews and three cognitive interviews were conducted with healthcare professionals. Furthermore, descriptive data were used from a previous study with healthcare professionals (n = 89) employing a modified Swedish HECS. Decisions on retaining or modifying items were made in a review group. The Swedish HECS-S consists of 21 items, including all 14 items from HECS-S and items added to develop a multi-professional version relevant for paediatrics. The descriptive data showed that few respondents selected 'never' and 'always'. To obtain a more even distribution of responses and keep the Swedish HECS-S close to HECS-S, the original labels were retained. Linguistic adjustments were made to retain the intended meaning of the original items. The word 'respect' was used in HECS-S with two different meanings and was replaced in one of these because participants were concerned that respecting patients' wishes implied always complying with them. The process of developing the Swedish HECS-S included decisions on whether to retain or modify items. Only minor adjustments were needed to achieve a good functional level and trustworthiness, although some items needed to be added. The adjustments made could also be used to improve the English HECS-S. The results shed further light on the need to continuously evaluate even validated instruments and adapt them before use.

  20. Westerly wind bursts simulated in CAM4 and CCSM4

    NASA Astrophysics Data System (ADS)

    Lian, Tao; Tang, Youmin; Zhou, Lei; Islam, Siraj Ul; Zhang, Chan; Li, Xiaojing; Ling, Zheng

    2018-02-01

    The equatorial westerly wind bursts (WWBs) play an important role in modulating and predicting the El Niño-Southern Oscillation (ENSO). In this study, the ability of the Community Atmospheric Model version 4 (CAM4) and the Community Climate System Model version 4 (CCSM4) to simulate WWBs is systematically evaluated. Many characteristics of WWBs, including their longitude distributions, durations, zonal extensions, variabilities at seasonal, intraseasonal, and interannual timescales, as well as their relations with the Madden-Julian Oscillation (MJO) and ENSO, are discussed. Generally speaking, these characteristics of WWBs can be successfully reproduced by CAM4, owing to the improvement of deep convection in the model. In CCSM4, significant biases, such as the lack of equatorial Pacific WWBs in the boreal spring season and the weak modulation by a strong MJO, are found. Our findings confirm that the WWBs are greatly modulated by the surface temperature. It is also suggested that improving the air-sea coupling in CCSM4 may improve model performance in simulating WWBs, and may further improve the predictability of ENSO in the coupled model.
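
    WWB detection in model output typically reduces to a threshold-and-duration rule on equatorial zonal wind anomalies; the sketch below uses illustrative synthetic data and thresholds, not the paper's exact criteria.

      import numpy as np

      rng = np.random.default_rng(4)
      u_anom = 3.0 * rng.standard_normal(365)   # daily zonal wind anomaly, m/s (synthetic)
      thresh, min_days = 5.0, 3                 # illustrative criteria

      above = u_anom > thresh
      events, start = [], None
      for day, flag in enumerate(above):
          if flag and start is None:
              start = day                       # event onset
          elif not flag and start is not None:
              if day - start >= min_days:
                  events.append((start, day - 1))   # (onset, end) of a WWB
              start = None
      if start is not None and len(above) - start >= min_days:
          events.append((start, len(above) - 1))    # event still running at series end
      print(events)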
