ERIC Educational Resources Information Center
Pilot, A.
TAIGA (Twente Advanced Interactive Graphic Authoring system) is a system which can be used to develop instructional software. It is written in MS-PASCAL, and runs on computers that support MS-DOS. Designed to support the production of structured software, TAIGA has a hierarchical structure of three layers, each with a specific function, and each…
ELM - A SIMPLE TOOL FOR THERMAL-HYDRAULIC ANALYSIS OF SOLID-CORE NUCLEAR ROCKET FUEL ELEMENTS
NASA Technical Reports Server (NTRS)
Walton, J. T.
1994-01-01
ELM is a simple computational tool for modeling the steady-state thermal-hydraulics of propellant flow through fuel element coolant channels in nuclear thermal rockets. Written for the nuclear propulsion project of the Space Exploration Initiative, ELM evaluates the various heat transfer coefficient and friction factor correlations available for turbulent pipe flow with heat addition. In the past, these correlations were found in different reactor analysis codes, but now comparisons are possible within one program. The logic of ELM is based on the one-dimensional conservation of energy in combination with Newton's Law of Cooling to determine the bulk flow temperature and the wall temperature across a control volume. Since the control volume is an incremental length of tube, the corresponding pressure drop is determined by application of the Law of Conservation of Momentum. The size, speed, and accuracy of ELM make it a simple tool for use in fuel element parametric studies. ELM is a machine independent program written in FORTRAN 77. It has been successfully compiled on an IBM PC compatible running MS-DOS using Lahey FORTRAN 77, a DEC VAX series computer running VMS, and a Sun4 series computer running SunOS UNIX. ELM requires 565K of RAM under SunOS 4.1, 360K of RAM under VMS 5.4, and 406K of RAM under MS-DOS. Because this program is machine independent, no executable is provided on the distribution media. The standard distribution medium for ELM is one 5.25 inch 360K MS-DOS format diskette. ELM was developed in 1991. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. Sun4 and SunOS are trademarks of Sun Microsystems, Inc. IBM PC is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation.
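The marching logic described above is compact enough to sketch. The fragment below is an illustration only, not ELM's FORTRAN: it applies the one-dimensional energy balance, Newton's Law of Cooling, and a momentum balance over incremental control volumes, with the Dittus-Boelter Nusselt correlation and the Blasius friction factor standing in (as assumptions) for ELM's selectable correlation set, and constant illustrative gas properties.

```python
import math

def march_channel(mdot, T_in, p_in, q_wall, L, D, n=100,
                  cp=14300.0, mu=1.3e-5, k=0.18, rho=1.0):
    """March down a heated coolant channel; q_wall is wall heat flux [W/m^2]."""
    A = math.pi * D**2 / 4.0              # flow area
    P = math.pi * D                       # heated perimeter
    dz = L / n
    Tb, p = T_in, p_in
    for _ in range(n):
        v = mdot / (rho * A)
        Re = rho * v * D / mu
        Pr = cp * mu / k
        h = 0.023 * Re**0.8 * Pr**0.4 * k / D    # Dittus-Boelter (assumed)
        Tb += q_wall * P * dz / (mdot * cp)      # 1-D energy balance
        Tw = Tb + q_wall / h                     # Newton's Law of Cooling
        f = 0.316 * Re**-0.25                    # Blasius friction (assumed)
        p -= f * (dz / D) * 0.5 * rho * v**2     # momentum balance
    return Tb, Tw, p

# Illustrative channel: bulk and wall temperature out, exit pressure
print(march_channel(mdot=0.001, T_in=300.0, p_in=5.0e6,
                    q_wall=2.0e6, L=1.0, D=0.003))
```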
Mount, D W; Conrad, B
1986-01-01
We have previously described programs for a variety of types of sequence analysis (1-4). These programs have now been integrated into a single package. They are written in the standard C programming language and run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The programs are widely distributed and may be obtained from the authors as described below. PMID:3753780
KINEXP: Computer Simulation in Enzyme Kinetics.
ERIC Educational Resources Information Center
Gelpi, Josep Lluis; Domenech, Carlos
1988-01-01
Describes a program which allows students to identify and characterize several kinetic inhibitory mechanisms. Uses the generic model of reversible inhibition of a monosubstrate enzyme but can be easily modified to run other models such as bisubstrate enzymes. Uses MS-DOS BASIC. (MVL)
Controlling Laboratory Processes From A Personal Computer
NASA Technical Reports Server (NTRS)
Will, H.; Mackin, M. A.
1991-01-01
Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without an operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.
Computing Equilibrium Chemical Compositions
NASA Technical Reports Server (NTRS)
Mcbride, Bonnie J.; Gordon, Sanford
1995-01-01
Chemical Equilibrium With Transport Properties, 1993 (CET93) computer program provides data on chemical-equilibrium compositions. Aids calculation of thermodynamic properties of chemical systems. Information essential in design and analysis of such equipment as compressors, turbines, nozzles, engines, shock tubes, heat exchangers, and chemical-processing equipment. CET93/PC is version of CET93 specifically designed to run within 640K memory limit of MS-DOS operating system. CET93/PC written in FORTRAN.
GPR data processing computer software for the PC
Lucius, Jeffrey E.; Powers, Michael H.
2002-01-01
The computer software described in this report is designed for processing ground penetrating radar (GPR) data on Intel-compatible personal computers running the MS-DOS operating system or MS Windows 3.x/95/98/ME/2000. The earliest versions of these programs were written starting in 1990. At that time, commercially available GPR software did not meet the processing and display requirements of the USGS. Over the years, the programs were refined and new features and programs were added. The collection of computer programs presented here can perform all basic processing of GPR data, including velocity analysis and generation of CMP stacked sections and data volumes, as well as create publication quality data images.
A Review of MS-DOS Bulletin Board Software Suitable for Long Distance Learning.
ERIC Educational Resources Information Center
Sessa, Anneliese
This paper describes the advantages of using computer bulletin boards systems (BBS) for distance learning, including the use of the New York City Education Network (NYCENET) to access various databases and to communicate with individuals or the public. Questions to be answered in order to determine the most appropriate software for running a BBS…
PRELIMINARY DESIGN ANALYSIS OF AXIAL FLOW TURBINES
NASA Technical Reports Server (NTRS)
Glassman, A. J.
1994-01-01
A computer program has been developed for the preliminary design analysis of axial-flow turbines. Rapid approximate generalized procedures requiring minimum input are used to provide turbine overall geometry and performance adequate for screening studies. The computations are based on mean-diameter flow properties and a stage-average velocity diagram. Gas properties are assumed constant throughout the turbine. For any given turbine, all stages, except the first, are specified to have the same shape velocity diagram. The first stage differs only in the value of inlet flow angle. The velocity diagram shape depends upon the stage work factor value and the specified type of velocity diagram. Velocity diagrams can be specified as symmetrical, zero exit swirl, or impulse; or by inputting stage swirl split. Exit turning vanes can be included in the design. The 1991 update includes a generalized velocity diagram, a more flexible meanline path, a reheat model, a radial component of velocity, and a computation of free-vortex hub and tip velocity diagrams. Also, a loss-coefficient calibration was performed to provide recommended values for airbreathing engine turbines. Input design requirements include power or pressure ratio, mass flow rate, inlet temperature and pressure, and rotative speed. The design variables include inlet and exit diameters, stator angle or exit radius ratio, and number of stages. Gas properties are input as gas constant, specific heat ratio, and viscosity. The program output includes inlet and exit annulus dimensions, exit temperature and pressure, total and static efficiencies, flow angles, blading angles, and last stage absolute and relative Mach numbers. This program is written in FORTRAN 77 and can be ported to any computer with a standard FORTRAN compiler which supports NAMELIST. It was originally developed on an IBM 7000 series computer running VM and has been implemented on IBM PC computers and compatibles running MS-DOS under Lahey FORTRAN, and DEC VAX series computers running VMS. Format statements in the code may need to be rewritten depending on your FORTRAN compiler. The source code and sample data are available on a 5.25 inch 360K MS-DOS format diskette. This program was developed in 1972 and was last updated in 1991. IBM and IBM PC are registered trademarks of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation.
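For orientation, the mean-diameter bookkeeping behind such a screening analysis reduces to Euler's turbine equation: stage specific work is w = U * dVu, and with the stage work factor defined as lambda = dVu / U this becomes w = lambda * U^2. The sketch below is an assumption-laden illustration of that relation, not the program's own routine; all numbers are made up.

```python
import math

def stages_required(power_W, mdot, U, work_factor):
    """Estimate stage count from Euler work and a stage work factor."""
    w_stage = work_factor * U**2          # J/kg of specific work per stage
    w_total = power_W / mdot              # required specific work overall
    return math.ceil(w_total / w_stage)

# e.g. 2 MW turbine, 10 kg/s, 340 m/s mean blade speed, work factor 1.5
print(stages_required(2.0e6, 10.0, 340.0, 1.5))   # -> 2
```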
Computing Operating Characteristics Of Bearing/Shaft Systems
NASA Technical Reports Server (NTRS)
Moore, James D.
1996-01-01
SHABERTH computer program predicts operating characteristics of bearings in multibearing load-support system. Lubricated and nonlubricated bearings modeled. Calculates loads, torques, temperatures, and fatigue lives of ball and/or roller bearings on single shaft. Provides for analysis of reaction of system to termination of supply of lubricant to bearings and other lubricated mechanical elements. Valuable in design and analysis of shaft/bearing systems. Two versions of SHABERTH available. Cray version (LEW-14860), "Computing Thermal Performances Of Shafts and Bearings". IBM PC version (MFS-28818), written for IBM PC-series and compatible computers running MS-DOS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, D.W.; Johnston, W.E.; Hall, D.E.
1990-03-01
We describe the use of the Sun Remote Procedure Call and Unix socket interprocess communication mechanisms to provide the network transport for a distributed, client-server based, image handling system. Clients run under Unix or UNICOS and servers run under Unix or MS-DOS. The use of remote procedure calls across local or wide-area networks to make video movies is addressed.
Myers, E W; Mount, D W
1986-01-01
We describe a program which may be used to find approximate matches to a short predefined DNA sequence in a larger target DNA sequence. The program predicts the usefulness of specific DNA probes and sequencing primers and finds nearly identical sequences that might represent the same regulatory signal. The program is written in the C programming language and will run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The program has been integrated into an existing software package for the IBM personal computer (see article by Mount and Conrad, this volume). Some examples of its use are given. PMID:3753785
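The abstract does not name the algorithm, but finding approximate occurrences of a short probe in a long target is classically done with a dynamic program over edit distance in which a match may begin at any target position. A minimal sketch of that standard approach, with illustrative sequences:

```python
def approx_matches(probe, target, k):
    """Positions in target where probe matches with at most k differences
    (mismatches plus insertions/deletions)."""
    m = len(probe)
    prev = list(range(m + 1))            # column for the empty target prefix
    hits = []
    for j, t in enumerate(target, 1):
        curr = [0]                       # a match may start anywhere: row 0 = 0
        for i, p in enumerate(probe, 1):
            cost = 0 if p == t else 1
            curr.append(min(prev[i] + 1,          # deletion
                            curr[i - 1] + 1,      # insertion
                            prev[i - 1] + cost))  # substitution or match
        if curr[m] <= k:
            hits.append(j)               # probe ends at target position j
        prev = curr
    return hits

print(approx_matches("GATTACA", "CCGATTTACAGG", 1))   # -> [10]
```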
The Physician's Workstation: Recording a Physical Examination Using a Controlled Vocabulary
Cimino, James J.; Barnett, G. Octo
1987-01-01
A system has been developed which runs on MS-DOS personal computers and serves as an experimental model of a physician's workstation. The program provides an interface to a controlled vocabulary which allows rapid selection of appropriate terms and modifiers for entry of clinical information. Because it captures patient descriptions, it has the ability to serve as an intermediary between the physician and computer-based medical knowledge resources. At present, the vocabulary permits rapid, reliable representation of cardiac physical examination findings.
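The paper does not describe the lookup mechanism; as one plausible sketch of rapid term selection from a sorted controlled vocabulary, a binary search on the typed prefix followed by a scan of the run of terms sharing it (the terms below are illustrative, not the system's vocabulary):

```python
import bisect

VOCAB = sorted(["S3 gallop", "S4 gallop", "murmur, diastolic",
                "murmur, systolic", "rub, pericardial"])

def complete(prefix):
    """Return all vocabulary terms beginning with the typed prefix."""
    i = bisect.bisect_left(VOCAB, prefix)
    out = []
    while i < len(VOCAB) and VOCAB[i].startswith(prefix):
        out.append(VOCAB[i])
        i += 1
    return out

print(complete("murmur"))   # -> ['murmur, diastolic', 'murmur, systolic']
```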
DATALINK: Records inventory data collection software. User's guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, B.A.
1995-03-01
DATALINK was created to provide an easy-to-use data collection program for records management software products. It provides several useful tools for capturing and validating record index data in the field. It also allows users to easily create a comma-delimited, ASCII text file for data export into most records management software products. It runs on virtually any computer using MS-DOS.
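As an illustration of the comma-delimited ASCII export described above (the field names and records here are hypothetical, not DATALINK's schema):

```python
import csv

records = [
    {"box": "B-0001", "title": "FY94 invoices", "date_from": "1993-10-01"},
    {"box": "B-0002", "title": "Site drawings", "date_from": "1991-02-15"},
]
# Write a comma-delimited ASCII file most records management products can import.
with open("export.txt", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["box", "title", "date_from"])
    writer.writeheader()
    writer.writerows(records)
```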
COMPPAP - COMPOSITE PLATE BUCKLING ANALYSIS PROGRAM (IBM PC VERSION)
NASA Technical Reports Server (NTRS)
Smith, J. P.
1994-01-01
The Composite Plate Buckling Analysis Program (COMPPAP) was written to help engineers determine buckling loads of orthotropic (or isotropic) irregularly shaped plates without requiring hand calculations from design curves or extensive finite element modeling. COMPPAP is a one-element finite element program that utilizes high-order displacement functions. The high order of the displacement functions enables the user to produce results more accurate than traditional h-finite elements. This program uses these high-order displacement functions to perform a plane stress analysis of a general plate followed by a buckling calculation based on the stresses found in the plane stress solution. The current version assumes a flat plate (constant thickness) subject to a constant edge load (normal or shear) on one or more edges. COMPPAP uses the power method to find the eigenvalues of the buckling problem. The power method provides an efficient solution when only one eigenvalue is desired. Once the eigenvalue is found, the eigenvector, which corresponds to the plate buckling mode shape, results as a by-product. A positive feature of the power method is that the dominant eigenvalue is the first found, which in this case is the plate buckling load. The reported eigenvalue expresses a load factor to induce plate buckling. COMPPAP is written in ANSI FORTRAN 77. Two machine versions are available from COSMIC: a PC version (MSC-22428), which is for IBM PC 386 series and higher computers and compatibles running MS-DOS; and a UNIX version (MSC-22286). The distribution medium for both machine versions includes source code for both single and double precision versions of COMPPAP. The PC version includes source code which has been optimized for implementation within DOS memory constraints as well as sample executables for both the single and double precision versions of COMPPAP. The double precision versions of COMPPAP have been successfully implemented on an IBM PC 386 compatible running MS-DOS, a Sun4 series computer running SunOS, an HP-9000 series computer running HP-UX, and a CRAY X-MP series computer running UNICOS. COMPPAP requires 1Mb of RAM and the BLAS and LINPACK math libraries, which are included on the distribution medium. The COMPPAP documentation provides instructions for using the commercial post-processing package PATRAN for graphical interpretation of COMPPAP output. The UNIX version includes two electronic versions of the documentation: one in LaTeX format and one in PostScript format. The standard distribution medium for the PC version (MSC-22428) is a 5.25 inch 1.2Mb MS-DOS format diskette. The standard distribution medium for the UNIX version (MSC-22286) is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. COMPPAP was developed in 1992.
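The power method named in the abstract is a few lines of iteration: repeatedly multiply a trial vector by the matrix and normalize, and the Rayleigh quotient converges to the dominant eigenvalue while the vector converges to its eigenvector (the buckling mode shape in COMPPAP's use). A minimal sketch with an illustrative 2x2 matrix, not COMPPAP's stiffness matrices:

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000):
    """Dominant eigenvalue and eigenvector of A by power iteration."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = x @ y / (x @ x)        # Rayleigh quotient estimate
        x = y / np.linalg.norm(y)        # normalize to prevent overflow
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x

lam, mode = power_method(np.array([[2.0, 1.0], [1.0, 3.0]]))
print(lam)   # ~3.618, the dominant eigenvalue; mode is its eigenvector
```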
COMPPAP - COMPOSITE PLATE BUCKLING ANALYSIS PROGRAM (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Smith, J. P.
1994-01-01
The Composite Plate Buckling Analysis Program (COMPPAP) was written to help engineers determine buckling loads of orthotropic (or isotropic) irregularly shaped plates without requiring hand calculations from design curves or extensive finite element modeling. COMPPAP is a one-element finite element program that utilizes high-order displacement functions. The high order of the displacement functions enables the user to produce results more accurate than traditional h-finite elements. This program uses these high-order displacement functions to perform a plane stress analysis of a general plate followed by a buckling calculation based on the stresses found in the plane stress solution. The current version assumes a flat plate (constant thickness) subject to a constant edge load (normal or shear) on one or more edges. COMPPAP uses the power method to find the eigenvalues of the buckling problem. The power method provides an efficient solution when only one eigenvalue is desired. Once the eigenvalue is found, the eigenvector, which corresponds to the plate buckling mode shape, results as a by-product. A positive feature of the power method is that the dominant eigenvalue is the first found, which in this case is the plate buckling load. The reported eigenvalue expresses a load factor to induce plate buckling. COMPPAP is written in ANSI FORTRAN 77. Two machine versions are available from COSMIC: a PC version (MSC-22428), which is for IBM PC 386 series and higher computers and compatibles running MS-DOS; and a UNIX version (MSC-22286). The distribution medium for both machine versions includes source code for both single and double precision versions of COMPPAP. The PC version includes source code which has been optimized for implementation within DOS memory constraints as well as sample executables for both the single and double precision versions of COMPPAP. The double precision versions of COMPPAP have been successfully implemented on an IBM PC 386 compatible running MS-DOS, a Sun4 series computer running SunOS, an HP-9000 series computer running HP-UX, and a CRAY X-MP series computer running UNICOS. COMPPAP requires 1Mb of RAM and the BLAS and LINPACK math libraries, which are included on the distribution medium. The COMPPAP documentation provides instructions for using the commercial post-processing package PATRAN for graphical interpretation of COMPPAP output. The UNIX version includes two electronic versions of the documentation: one in LaTeX format and one in PostScript format. The standard distribution medium for the PC version (MSC-22428) is a 5.25 inch 1.2Mb MS-DOS format diskette. The standard distribution medium for the UNIX version (MSC-22286) is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. COMPPAP was developed in 1992.
NASA Technical Reports Server (NTRS)
Truong, L. V.
1994-01-01
Computer graphics are often applied for better understanding and interpretation of data under observation. These graphics become more complicated when animation is required during "run-time", as found in many typical modern artificial intelligence and expert systems. Living Color Frame Maker is a solution to many of these real-time graphics problems. Living Color Frame Maker (LCFM) is a graphics generation and management tool for IBM or IBM compatible personal computers. To eliminate graphics programming, the graphic designer can use LCFM to generate computer graphics frames. The graphical frames are then saved as text files, in a readable and disclosed format, which can be easily accessed and manipulated by user programs for a wide range of "real-time" visual information applications. For example, LCFM can be implemented in a frame-based expert system for visual aids in management of systems. For monitoring, diagnosis, and/or controlling purposes, circuit or systems diagrams can be brought to "life" by using designated video colors and intensities to symbolize the status of hardware components (via real-time feedback from sensors). Thus status of the system itself can be displayed. The Living Color Frame Maker is user friendly with graphical interfaces, and provides on-line help instructions. All options are executed using mouse commands and are displayed on a single menu for fast and easy operation. LCFM is written in C++ using the Borland C++ 2.0 compiler for IBM PC series computers and compatible computers running MS-DOS. The program requires a mouse and an EGA/VGA display. A minimum of 77K of RAM is also required for execution. The documentation is provided in electronic form on the distribution medium in WordPerfect format. A sample MS-DOS executable is provided on the distribution medium. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The Living Color Frame Maker tool was developed in 1992.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busbey, A.B.
A number of methods and products, both hardware and software, allow data exchange between Apple Macintosh computers and MS-DOS based systems. These include serial null-modem connections, MS-DOS hardware and/or software emulation, MS-DOS disk-reading hardware, and networking.
Simple and powerful visual stimulus generator.
Kremlácek, J; Kuba, M; Kubová, Z; Vít, F
1999-02-01
We describe a cheap, simple, portable and efficient approach to visual stimulation for neurophysiology which does not need any special hardware equipment. The method, based on an animation technique, uses the Autodesk Animator FLI format. The animation is replayed by a special program ('player') that provides synchronisation pulses to the recording system via the parallel port. The 'player' runs on an IBM compatible personal computer under the MS-DOS operating system, and the stimulus is displayed on a VGA computer monitor. Various stimuli created with this technique for visual evoked potentials (VEPs) are presented.
Program Helps Generate And Manage Graphics
NASA Technical Reports Server (NTRS)
Truong, L. V.
1994-01-01
Living Color Frame Maker (LCFM) computer program generates computer-graphics frames. Graphical frames saved as text files, in readable and disclosed format, easily retrieved and manipulated by user programs for wide range of real-time visual information applications. LCFM implemented in frame-based expert system for visual aids in management of systems. For monitoring, diagnosis, and/or control, diagrams of circuits or systems brought to "life" by use of designated video colors and intensities to symbolize status of hardware components (via real-time feedback from sensors). Status of systems can be displayed. Written in C++ using Borland C++ 2.0 compiler for IBM PC-series computers and compatible computers running MS-DOS.
TFSSRA - THICK FREQUENCY SELECTIVE SURFACE WITH RECTANGULAR APERTURES
NASA Technical Reports Server (NTRS)
Chen, J. C.
1994-01-01
Thick Frequency Selective Surface with Rectangular Apertures (TFSSRA) was developed to calculate the scattering parameters for a thick frequency selective surface with rectangular apertures on a skew grid at oblique angle of incidence. The method of moments is used to transform the integral equation into a matrix equation suitable for evaluation on a digital computer. TFSSRA predicts the reflection and transmission characteristics of a thick frequency selective surface for both TE and TM orthogonal linearly polarized plane waves. A model of a half-space infinite array is used in the analysis. A complete set of basis functions with unknown coefficients is developed for the waveguide region (waveguide modes) and for the free space region (Floquet modes) in order to represent the electromagnetic fields. To ensure the convergence of the solutions, the number of waveguide modes is adjustable. The method of moments is used to compute the unknown mode coefficients. Then, the scattering matrix of the half-space infinite array is calculated. Next, the reference plane of the scattering matrix is moved half a plate thickness in the negative z-direction, and a frequency selective surface of finite thickness is synthesized by positioning two plates of half-thickness back-to-back. The total scattering matrix is obtained by cascading the scattering matrices of the two half-space infinite arrays. TFSSRA is written in FORTRAN 77 with single precision. It has been successfully implemented on a Sun4 series computer running SunOS, an IBM PC compatible running MS-DOS, and a CRAY series computer running UNICOS, and should run on other systems with slight modifications. Double precision is recommended for running on a PC if many modes are used or if high accuracy is required. This package requires the LINPACK math library, which is included. TFSSRA requires 1Mb of RAM for execution. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. It is also available on a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. This program was developed in 1992 and is a copyrighted work with all copyright vested in NASA.
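The final cascading step the abstract describes has a standard closed form. For single-mode two-port blocks with parameters (S11, S12, S21, S22), the combined network follows the usual cascade formulas; the sketch below uses assumed scalar notation (TFSSRA's blocks are matrices over waveguide and Floquet modes) and illustrative values for two identical half-thickness plates:

```python
def cascade(a, b):
    """Cascade two 2-port S-parameter blocks given as dicts."""
    d = 1.0 - a["S22"] * b["S11"]            # multiple-reflection denominator
    return {
        "S11": a["S11"] + a["S12"] * b["S11"] * a["S21"] / d,
        "S12": a["S12"] * b["S12"] / d,
        "S21": a["S21"] * b["S21"] / d,
        "S22": b["S22"] + b["S21"] * a["S22"] * b["S12"] / d,
    }

half = {"S11": 0.3 + 0.1j, "S12": 0.9j, "S21": 0.9j, "S22": 0.3 + 0.1j}
print(cascade(half, half))   # synthesized full-thickness surface (illustrative)
```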
WT - WIND TUNNEL PERFORMANCE ANALYSIS
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
WT was developed to calculate fan rotor power requirements and output thrust for a closed loop wind tunnel. The program uses blade element theory to calculate aerodynamic forces along the blade using airfoil lift and drag characteristics at an appropriate blade aspect ratio. A tip loss model is also used which reduces the lift coefficient to zero for the outer three percent of the blade radius. Momentum theory is not used to determine the axial velocity at the rotor plane: unlike a propeller, the wind tunnel rotor is prevented from producing an increase in velocity in the slipstream. Instead, velocities at the rotor plane are used as input. Other input for WT includes rotational speed, rotor geometry, and airfoil characteristics. Inputs for rotor blade geometry include blade radius, hub radius, number of blades, and pitch angle. Airfoil aerodynamic inputs include angle at zero lift coefficient, positive stall angle, drag coefficient at zero lift coefficient, and drag coefficient at stall. WT is written in APL2 using IBM's APL2 interpreter for IBM PC series and compatible computers running MS-DOS. WT requires a CGA or better color monitor for display. It also requires 640K of RAM and MS-DOS v3.1 or later for execution. Both an MS-DOS executable and the source code are provided on the distribution medium. The standard distribution medium for WT is a 5.25 inch 360K MS-DOS format diskette in PKZIP format. The utility to unarchive the files, PKUNZIP, is also included. WT was developed in 1991. APL2 and IBM PC are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation. PKUNZIP is a registered trademark of PKWare, Inc.
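The blade-element sum the abstract describes can be sketched directly: evaluate lift and drag at radial stations from the local inflow angle, zero the lift coefficient over the outer three percent of the radius as the tip-loss model, and integrate thrust and torque. The linear lift-curve slope, constant drag coefficient, and all numbers below are illustrative assumptions, not WT's airfoil inputs:

```python
import math

def rotor_loads(R, R_hub, B, chord, pitch_deg, rpm, v_axial,
                rho=1.225, n=50, cl_alpha=2*math.pi, cd0=0.01):
    """Blade-element thrust [N] and power [W] for B blades of constant chord."""
    omega = rpm * 2 * math.pi / 60
    thrust = torque = 0.0
    dr = (R - R_hub) / n
    for i in range(n):
        r = R_hub + (i + 0.5) * dr
        v_tan = omega * r
        w2 = v_axial**2 + v_tan**2                  # relative speed squared
        phi = math.atan2(v_axial, v_tan)            # inflow angle
        alpha = math.radians(pitch_deg) - phi       # angle of attack
        cl = 0.0 if r > 0.97 * R else cl_alpha * alpha   # tip loss: outer 3%
        cd = cd0
        q = 0.5 * rho * w2 * chord * dr * B
        thrust += q * (cl * math.cos(phi) - cd * math.sin(phi))
        torque += q * (cl * math.sin(phi) + cd * math.cos(phi)) * r
    return thrust, torque * omega

print(rotor_loads(R=2.0, R_hub=0.4, B=8, chord=0.25,
                  pitch_deg=35.0, rpm=600, v_axial=40.0))
```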
NASA Technical Reports Server (NTRS)
Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.
2012-01-01
This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was coded using the FORTRAN 77 (The Portland Group, Lake Oswego, OR) programming language to run in a command shell similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation that runs on a modern graphical operating system. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.
IRDS prototyping with applications to the representation of EA/RA models
NASA Technical Reports Server (NTRS)
Lekkos, Anthony A.; Greenwood, Bruce
1988-01-01
The requirements and system overview for the Information Resources Dictionary System (IRDS) are described. A formal design specification for a scaled-down IRDS implementation compatible with the proposed FIPS IRDS standard is presented. The major design objectives for this IRDS include a menu-driven user interface, implementation of basic IRDS operations, and PC compatibility. The IRDS was implemented using the Smalltalk/V object-oriented programming system and an AT&T 6300 personal computer running under MS-DOS 3.1. The difficulties encountered in using Smalltalk are discussed.
SSL - THE SIMPLE SOCKETS LIBRARY
NASA Technical Reports Server (NTRS)
Campbell, C. E.
1994-01-01
The Simple Sockets Library (SSL) allows C programmers to develop systems of cooperating programs using Berkeley streaming Sockets running under the TCP/IP protocol over Ethernet. The SSL provides a simple way to move information between programs running on the same or different machines and does so with little overhead. The SSL can create three types of Sockets: namely a server, a client, and an accept Socket. The SSL's Sockets are designed to be used in a fashion reminiscent of the use of FILE pointers so that a C programmer who is familiar with reading and writing files will immediately feel comfortable with reading and writing with Sockets. The SSL consists of three parts: the library, PortMaster, and utilities. The user of the SSL accesses it by linking programs to the SSL library. The PortMaster initializes connections between clients and servers. The PortMaster also supports a "firewall" facility to keep out socket requests from unapproved machines. The "firewall" is a file which contains Internet addresses for all approved machines. There are three utilities provided with the SSL. SKTDBG can be used to debug programs that make use of the SSL. SPMTABLE lists the servers and port numbers on requested machine(s). SRMSRVR tells the PortMaster to forcibly remove a server name from its list. The package also includes two example programs: multiskt.c, which makes multiple accepts on one server, and sktpoll.c, which repeatedly attempts to connect a client to some server at one second intervals. SSL is a machine independent library written in the C-language for computers connected via Ethernet using the TCP/IP protocol. It has been successfully compiled and implemented on a variety of platforms, including Sun series computers running SunOS, DEC VAX series computers running VMS, SGI computers running IRIX, DECstations running ULTRIX, DEC alpha AXPs running OSF/1, IBM RS/6000 computers running AIX, IBM PC and compatibles running BSD/386 UNIX and HP Apollo 3000/4000/9000/400T computers running HP-UX. SSL requires 45K of RAM to run under SunOS and 80K of RAM to run under VMS. For use on IBM PC series computers and compatibles running DOS, SSL requires Microsoft C 6.0 and the Wollongong TCP/IP package. Source code for sample programs and debugging tools are provided. The documentation is available on the distribution medium in TeX and PostScript formats. The standard distribution medium for SSL is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format and a 5.25 inch 360K MS-DOS format diskette. The SSL was developed in 1992 and was updated in 1993.
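SSL itself is a C library, but the usage pattern the abstract describes, streaming TCP sockets handled like FILE pointers, can be illustrated with Python's socket module, whose makefile() call yields exactly that file-like read/write interface. This is an analogy, not SSL's API; host, port, and message are illustrative.

```python
import socket

def serve_once(port=5050):
    """Accept one connection and echo one line, reading/writing like a file."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()             # the "accept Socket" role
        with conn, conn.makefile("rw") as f:
            line = f.readline()            # read like a FILE pointer
            f.write("echo: " + line)       # write like a FILE pointer
            f.flush()

def client(port=5050):
    with socket.create_connection(("localhost", port)) as c:
        with c.makefile("rw") as f:
            f.write("hello\n")
            f.flush()
            print(f.readline())            # -> "echo: hello"

# Run serve_once() in one process and client() in another.
```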
NASA Technical Reports Server (NTRS)
Saltsman, J. F.
1994-01-01
TS-SRP/PACK is a set of computer programs for characterizing and predicting fatigue and creep-fatigue resistance of metallic materials in the high-temperature, long-life regime for isothermal and nonisothermal fatigue. The programs use the total strain version of the Strainrange Partitioning (TS-SRP). The user should be thoroughly familiar with the TS-SRP method before attempting to use any of these programs. The document for this program includes a theory manual as well as a detailed user's manual with a tutorial to guide the user in the proper use of TS-SRP. An extensive database has also been developed in a parallel effort. This database is an excellent source of high-temperature, creep-fatigue test data and can be used with other life-prediction methods as well. Five programs are included in TS-SRP/PACK along with the alloy database. The TABLE program is used to print the datasets, which are in NAMELIST format, in a reader friendly format. INDATA is used to create new datasets or add to existing ones. The FAIL program is used to characterize the failure behavior of an alloy as given by the constants in the strainrange-life relations used by the total strain version of SRP (TS-SRP) and the inelastic strainrange-based version of SRP. The program FLOW is used to characterize the flow behavior (the constitutive response) of an alloy as given by the constants in the flow equations used by TS-SRP. Finally, LIFE is used to predict the life of a specified cycle, using the constants characterizing failure and flow behavior determined by FAIL and FLOW. LIFE is written in interpretive BASIC to avoid compiling and linking every time the equation constants are changed. Four out of five programs in this package are written in FORTRAN 77 for IBM PC series and compatible computers running MS-DOS and are designed to read data using the NAMELIST format statement. The fifth is written in BASIC version 3.0 for IBM PC series and compatible computers running MS-DOS version 3.10. The executables require at least 239K of memory and DOS 3.1 or higher. To compile the source, a Lahey FORTRAN compiler is required. Source code modifications will be necessary if the compiler to be used does not support NAMELIST input. Probably the easiest revision to make is to use a list-directed READ statement. The standard distribution medium for this program is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. TS-SRP/PACK was developed in 1992.
GAP: yet another image processing system for solar observations.
NASA Astrophysics Data System (ADS)
Keller, C. U.
GAP is a versatile, interactive image processing system for analyzing solar observations, in particular extended time sequences, and for preparing publication quality figures. It consists of an interpreter that is based on a language with a control flow similar to PASCAL and C. The interpreter may be accessed from a command line editor and from user-supplied functions, procedures, and command scripts. GAP is easily expandable via external FORTRAN programs that are linked to the GAP interface routines. The current version of GAP runs on VAX, DECstation, Sun, and Apollo computers. Versions for MS-DOS and OS/2 are in preparation.
Picco, P; Di Rocco, M; Buoncompagni, A; Gandullia, P; Lattere, M; Borrone, C
1991-01-01
A computerized program for children and adolescents with insulin-dependent diabetes mellitus (IDDM) and their parents has been developed. Our program consists of computer-assisted education, an aid to routine insulin dosage self-adjustment, and records of home and hospital controls. Technically, it has been implemented in dBASE III Plus; it runs on IBM PC computers (and compatible computers) under MS-DOS (version 3.0 and later). The computer-assisted education consists of 80 multiple-choice questions divided into two parts: the first concerns basic information about diabetes, while the second concerns the patient's behavior in particular situations. Explanations are displayed after every question, whether the choice is correct or incorrect. Help for self-adjustment of routine insulin dosage is offered in the third part. Finally, daily home urine and/or blood controls and results of hospital admissions are stored in a database.
NASA Technical Reports Server (NTRS)
Szatmary, Steven A.; Gyekenyesi, John P.; Nemeth, Noel N.
1990-01-01
This manual describes the operation and theory of the PC-CARES (Personal Computer-Ceramic Analysis and Reliability Evaluation of Structures) computer program for the IBM PC and compatibles running PC-DOS/MS-DOS or IBM/MS OS/2 (version 1.1 or higher) operating systems. The primary purpose of this code is to estimate Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities. Included in the manual is the description of the calculation of shape and scale parameters of the two-parameter Weibull distribution using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. The methods for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull line, as well as the techniques for calculating the Batdorf flaw-density constants, are also described.
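For the complete-sample case, the maximum likelihood estimate of the two-parameter Weibull mentioned above has a standard form: the shape parameter m solves sum(x^m ln x)/sum(x^m) - 1/m = mean(ln x), after which the scale is sigma0 = (mean(x^m))^(1/m). A minimal sketch solving the shape equation by bisection; the strength values are made up, and PC-CARES additionally handles censored samples, outlier tests, and confidence bands, none of which are attempted here:

```python
import numpy as np

def weibull_mle(x, lo=0.1, hi=100.0, iters=100):
    """Shape m and scale sigma0 for a complete two-parameter Weibull sample."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x)
    def g(m):   # monotonically increasing in m; root is the MLE shape
        xm = x**m
        return (xm @ logx) / xm.sum() - 1.0 / m - logx.mean()
    for _ in range(iters):                # bisection on the shape equation
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    m = 0.5 * (lo + hi)
    sigma0 = (np.mean(x**m))**(1.0 / m)
    return m, sigma0

strengths = [317., 342., 355., 361., 370., 381., 392., 410.]  # MPa, illustrative
print(weibull_mle(strengths))
```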
37 CFR 1.96 - Submission of computer program listings.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Apple Macintosh; (ii) Operating System Compatibility: MS-DOS, MS-Windows, Unix, or Macintosh; (iii) Line... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Submission of computer... Models, Exhibits, Specimens § 1.96 Submission of computer program listings. (a) General. Descriptions of...
37 CFR 1.96 - Submission of computer program listings.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Apple Macintosh; (ii) Operating System Compatibility: MS-DOS, MS-Windows, Unix, or Macintosh; (iii) Line... 37 Patents, Trademarks, and Copyrights 1 2014-07-01 2014-07-01 false Submission of computer... Models, Exhibits, Specimens § 1.96 Submission of computer program listings. (a) General. Descriptions of...
37 CFR 1.96 - Submission of computer program listings.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Apple Macintosh; (ii) Operating System Compatibility: MS-DOS, MS-Windows, Unix, or Macintosh; (iii) Line... 37 Patents, Trademarks, and Copyrights 1 2011-07-01 2011-07-01 false Submission of computer... Models, Exhibits, Specimens § 1.96 Submission of computer program listings. (a) General. Descriptions of...
37 CFR 1.96 - Submission of computer program listings.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Apple Macintosh; (ii) Operating System Compatibility: MS-DOS, MS-Windows, Unix, or Macintosh; (iii) Line... 37 Patents, Trademarks, and Copyrights 1 2013-07-01 2013-07-01 false Submission of computer... Models, Exhibits, Specimens § 1.96 Submission of computer program listings. (a) General. Descriptions of...
37 CFR 1.96 - Submission of computer program listings.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Apple Macintosh; (ii) Operating System Compatibility: MS-DOS, MS-Windows, Unix, or Macintosh; (iii) Line... 37 Patents, Trademarks, and Copyrights 1 2012-07-01 2012-07-01 false Submission of computer... Models, Exhibits, Specimens § 1.96 Submission of computer program listings. (a) General. Descriptions of...
ERIC Educational Resources Information Center
Mauriello, David
1984-01-01
Reviews an interactive statistical analysis package (designed to run on 8- and 16-bit machines that utilize CP/M 80 and MS-DOS operating systems), considering its features and uses, documentation, operation, and performance. The package consists of 40 general purpose statistical procedures derived from the classic textbook "Statistical…
Program helps quickly calculate deviated well path
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, M.P.
1993-11-22
A BASIC computer program quickly calculates the angle and measured depth of a simple directional well given only the true vertical depth and total displacement of the target. Many petroleum engineers and geologists need a quick, easy method to calculate the angle and measured depth necessary to reach a target in a proposed deviated well bore. Too many of the existing programs are large and require much input data. The drilling literature is full of equations and methods to calculate the course of well paths from surveys taken after a well is drilled. Very little information, however, covers how to calculate well bore trajectories for proposed wells from limited data. Furthermore, many of the equations are quite complex and difficult to use. A figure lists a computer program with the equations to calculate the well bore trajectory necessary to reach a given displacement and true vertical depth (TVD) for a simple build plan. It can be run on an IBM compatible computer with MS-DOS version 5 or higher, QBasic, or any BASIC that does not require line numbers. The QuickBASIC 4.5 compiler will also run the program. The equations are based on conventional geometry and trigonometry.
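The article's program is in BASIC and is reproduced in its figure; as a hedged sketch of the same conventional build-and-hold geometry (all inputs illustrative), the hold angle t satisfies V sin t + (R - H) cos t = R, where V is the vertical reach below the kickoff point, H is the target displacement, and R is the build radius:

```python
import math

def build_and_hold(kop, build_rate, tvd, disp):
    """Hold angle [deg] and measured depth [ft] for a simple build-and-hold well.
    kop: kickoff depth [ft]; build_rate: deg per 100 ft; tvd, disp: target [ft]."""
    R = 18000.0 / (math.pi * build_rate)        # build radius from build rate
    V = tvd - kop                               # vertical reach below KOP
    # Solve V*sin(t) + (R - disp)*cos(t) = R for the hold angle t.
    t = math.asin(R / math.hypot(V, R - disp)) - math.atan2(R - disp, V)
    arc = R * t                                 # curved build-section length
    hold = (V - R * math.sin(t)) / math.cos(t)  # straight hold-section length
    return math.degrees(t), kop + arc + hold

print(build_and_hold(kop=2000.0, build_rate=2.0, tvd=10000.0, disp=3000.0))
# -> roughly 22 degrees and about 10,570 ft measured depth
```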
Whenever You Use a Computer You Are Using a Program Called an Operating System.
ERIC Educational Resources Information Center
Cook, Rick
1984-01-01
Examines design, features, and shortcomings of eight disk-based operating systems designed for general use that are popular or most likely to affect the future of microcomputing. Included are the CP/M family, MS-DOS, Apple DOS/ProDOS, Unix, Pick, the p-System, TRSDOS, and Macintosh/Lisa. (MBR)
ALPS - A LINEAR PROGRAM SOLVER
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
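ALPS itself is an APL2 workspace; purely to illustrate the kind of problem statement it accepts, here is a small LP solved with SciPy's linprog (all coefficients made up): maximize 3x + 5y subject to x + 2y <= 14, 3x - y >= 0, x - y <= 2, with x, y >= 0.

```python
from scipy.optimize import linprog

# linprog minimizes, so negate the objective to maximize 3x + 5y;
# the ">=" constraint is rewritten as -3x + y <= 0.
res = linprog(c=[-3, -5],
              A_ub=[[1, 2], [-3, 1], [1, -1]],
              b_ub=[14, 0, 2],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)    # optimum at x=6, y=4 with objective value 38
```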
DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA
NASA Technical Reports Server (NTRS)
Ledbetter, F. E.
1994-01-01
Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret it. For example, the analysis of a linear viscoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data), to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data is increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and Apple Macintosh computers running MacOS. With minor modifications the source code is portable to any platform that supports an ANSI FORTRAN 77 compiler. Microsoft FORTRAN v2.1 is required for the Macintosh version. An executable is included with the PC version. DATASPACE is available on a 5.25 inch 360K MS-DOS format diskette (standard distribution) or on a 3.5 inch 800K Macintosh format diskette. This program was developed in 1991. IBM PC is a trademark of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation. Macintosh and MacOS are trademarks of Apple Computer, Inc.
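The procedure described above is short enough to sketch with a library spline routine: take logs of the abscissas, fit in the log domain, resample at evenly spaced log values, and exponentiate back, so the returned abscissas are increasingly spaced. The data below are illustrative, not from the program's test cases:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def log_respace(t, y, n=50):
    """Resample (t, y) so the new t values are logarithmically spaced."""
    logt = np.log10(t)
    spline = CubicSpline(logt, y)               # interpolate in log-abscissa
    logt_even = np.linspace(logt[0], logt[-1], n)
    return 10.0**logt_even, spline(logt_even)   # increasingly spaced in t

t = np.array([0.1, 0.3, 0.8, 2.0, 7.0, 30.0, 100.0])   # variably spaced times
E = np.array([9.1, 8.7, 8.0, 7.1, 6.0, 4.9, 4.2])      # e.g. relaxation modulus
t_new, E_new = log_respace(t, E)
```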
HZEFRG1 - SEMIEMPIRICAL NUCLEAR FRAGMENTATION MODEL
NASA Technical Reports Server (NTRS)
Townsend, L. W.
1994-01-01
The high charge and energy (HZE), Semiempirical Nuclear Fragmentation Model, HZEFRG1, was developed to provide a computationally efficient, user-friendly, physics-based program package for generating nuclear fragmentation databases. These databases can then be used in radiation transport applications such as space radiation shielding and dosimetry, cancer therapy with laboratory heavy ion beams, and simulation studies of detector design in nuclear physics experiments. The program provides individual element and isotope production cross sections for the breakup of high energy heavy ions by the combined nuclear and Coulomb fields of the interacting nuclei. The nuclear breakup contributions are estimated using an energy-dependent abrasion-ablation model of heavy ion fragmentation. The abrasion step involves removal of nucleons by direct knockout in the overlap region of the colliding nuclei. The abrasions are treated on a geometric basis and uniform spherical nuclear density distributions are assumed. Actual experimental nuclear radii obtained from tabulations of electron scattering data are incorporated. Nuclear transparency effects are included by using an energy-dependent, impact-parameter-dependent average transmission factor for the projectile and target nuclei, which accounts for the finite mean free path of nucleons in nuclear matter. The ablation step, as implemented by Bowman, Swiatecki, and Tsang (LBL report no. LBL-2908, July 1973), was treated as a single-nucleon emission for every 10 MeV of excitation energy. Fragmentation contributions from electromagnetic dissociation (EMD) processes, arising from the interacting Coulomb fields, are estimated by using the Weizsäcker-Williams theory, extended to include electric dipole and electric quadrupole contributions to one-nucleon removal cross sections. HZEFRG1 consists of a main program, seven function subprograms, and thirteen subroutines. Each is fully commented and begins with a brief description of its functionality. The inputs, which are provided interactively by the user in response to on-screen questions, consist of the projectile kinetic energy in units of MeV/nucleon and the masses and charges of the projectile and target nuclei. With proper inputs, HZEFRG1 first calculates the EMD cross sections and then begins the calculations for nuclear fragmentation by searching through a specified number of isotopes for each charge number (Z) from Z=1 (hydrogen) to the charge of the incident fragmenting nucleus (Zp). After completing the nuclear fragmentation cross sections, HZEFRG1 sorts through the results and writes the sorted output to a file in descending order, based on the charge number of the fragmented nucleus. Details of the theory, extensive comparisons of its predictions with available experimental cross section data, and a complete description of the code implementing it are given in the program documentation. HZEFRG1 is written in ANSI FORTRAN 77 to be machine independent. It was originally developed on a DEC VAX series computer, and has been successfully implemented on a DECstation running RISC ULTRIX 4.3, a Sun4 series computer running SunOS 4.1, an HP 9000 series computer running HP-UX 8.0.1, a Cray Y-MP series computer running UNICOS, and IBM PC series computers running MS-DOS 3.3 and higher. HZEFRG1 requires 1Mb of RAM for execution. In addition, a FORTRAN 77 compiler is required to create an executable. A sample output run is included on the distribution medium for numerical comparison. The standard distribution medium for this program is a 3.5 inch 1.44Mb MS-DOS format diskette. Alternate distribution media and formats are available upon request. HZEFRG1 was completed in 1992.
PC-SEAPAK - ANALYSIS OF COASTAL ZONE COLOR SCANNER AND ADVANCED VERY HIGH RESOLUTION RADIOMETER DATA
NASA Technical Reports Server (NTRS)
Mcclain, C. R.
1994-01-01
PC-SEAPAK is a user-interactive satellite data analysis software package specifically developed for oceanographic research. The program is used to process and interpret data obtained from the Nimbus-7/Coastal Zone Color Scanner (CZCS), and the NOAA Advanced Very High Resolution Radiometer (AVHRR). PC-SEAPAK is a set of independent microcomputer-based image analysis programs that provide the user with a flexible, user-friendly, standardized interface, and facilitates relatively low-cost analysis of oceanographic satellite data. Version 4.0 includes 114 programs. PC-SEAPAK programs are organized into categories which include CZCS and AVHRR level-1 ingest, level-2 analyses, statistical analyses, data extraction, remapping to standard projections, graphics manipulation, image board memory manipulation, hardcopy output support and general utilities. Most programs allow user interaction through menu and command modes and also by the use of a mouse. Most programs also provide for ASCII file generation for further analysis in spreadsheets, graphics packages, etc. The CZCS scanning radiometer aboard the NIMBUS-7 satellite was designed to measure the concentration of photosynthetic pigments and their degradation products in the ocean. AVHRR data is used to compute sea surface temperatures and is supported for the NOAA 6, 7, 8, 9, 10, 11, and 12 satellites. The CZCS operated from November 1978 to June 1986. CZCS data may be obtained free of charge from the CZCS archive at NASA/Goddard Space Flight Center. AVHRR data may be purchased through NOAA's Satellite Data Service Division. Ordering information is included in the PC-SEAPAK documentation. Although PC-SEAPAK was developed on a COMPAQ Deskpro 386/20, it can be run on most 386-compatible computers with an AT bus, EGA controller, Intel 80387 coprocessor, and MS-DOS 3.3 or higher. A Matrox MVP-AT image board with appropriate monitor and cables is also required. Note that the authors have received some reports of incompatibilities between the MVP-AT image board and ZENITH computers. Also, the MVP-AT image board is not necessarily compatible with 486-based systems; users of 486-based systems should consult with Matrox about compatibility concerns. Other PC-SEAPAK requirements include a Microsoft mouse (serial version), 2Mb RAM, and 100Mb hard disk space. For data ingest and backup, 9-track tape, 8mm tape and optical disks are supported and recommended. PC-SEAPAK has been under development since 1988. Version 4.0 was updated in 1992, and is distributed without source code. It is available only as a set of 36 1.2Mb 5.25 inch IBM MS-DOS format diskettes. PC-SEAPAK is a copyrighted product with all copyright vested in the National Aeronautics and Space Administration. Phar Lap's DOS_Extender run-time version is integrated into several of the programs; therefore, the PC-SEAPAK programs may not be duplicated. Three of the distribution diskettes contain DOS_Extender files. One of the distribution diskettes contains Media Cybernetics' HALO88 font files, also licensed by NASA for dissemination but not duplication. IBM is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. HALO88 is a registered trademark of Media Cybernetics, but the product was discontinued in 1991.
MAT - MULTI-ATTRIBUTE TASK BATTERY FOR HUMAN OPERATOR WORKLOAD AND STRATEGIC BEHAVIOR RESEARCH
NASA Technical Reports Server (NTRS)
Comstock, J. R.
1994-01-01
MAT, a Multi-Attribute Task battery, gives the researcher the capability of performing multi-task workload and performance experiments. The battery provides a benchmark set of tasks for use in a wide range of laboratory studies of operator performance and workload. MAT incorporates tasks analogous to activities that aircraft crew members perform in flight, while providing a high degree of experiment control, performance data on each subtask, and freedom to use non-pilot test subjects. The MAT battery primary display is composed of four separate task windows which are as follows: a monitoring task window which includes gauges and warning lights, a tracking task window for the demands of manual control, a communication task window to simulate air traffic control communications, and a resource management task window which permits maintaining target levels on a fuel management task. In addition, a scheduling task window gives the researcher information about future task demands. The battery also provides the option of manual or automated control of tasks. The task generates performance data for each subtask. The task battery may be paused and onscreen workload rating scales presented to the subject. The MAT battery was designed to use a serially linked second computer to generate the voice messages for the Communications task. The MATREMX program and support files, which are included in the MAT package, were designed to work with the Heath Voice Card (Model HV-2000, available through the Heath Company, Benton Harbor, Michigan 49022); however, the MATREMX program and support files may easily be modified to work with other voice synthesizer or digitizer cards. The MAT battery task computer may also be used independent of the voice computer if no computer synthesized voice messages are desired or if some other method of presenting auditory messages is devised. MAT is written in QuickBasic and assembly language for IBM PC series and compatible computers running MS-DOS. The code in MAT is written for Microsoft QuickBasic 4.5 and Microsoft Macro Assembler 5.1. This package requires a joystick and EGA or VGA color graphics. An 80286, 386, or 486 processor machine is highly recommended. The standard distribution medium for MAT is a 5.25 inch 360K MS-DOS format diskette. The files are compressed using the PKZIP file compression utility. PKUNZIP is included on the distribution diskette. MAT was developed in 1992. IBM PC is a registered trademark of International Business Machines. MS-DOS, Microsoft QuickBasic, and Microsoft Macro Assembler are registered trademarks of Microsoft Corporation. PKZIP and PKUNZIP are registered trademarks of PKWare, Inc.
JTMIX - CRYOGENIC MIXED FLUID JOULE-THOMSON ANALYSIS PROGRAM
NASA Technical Reports Server (NTRS)
Jones, J. A.
1994-01-01
JTMIX was written to allow the prediction of both ideal and realistic properties of mixed gases in the 65-80K temperature range. It allows mixed gas J-T analysis for any fluid combination of neon, nitrogen, various hydrocarbons, argon, oxygen, carbon monoxide, carbon dioxide, and hydrogen sulfide. When used in conjunction with the NIST computer program DDMIX, JTMIX has accurately predicted order-of-magnitude increases in J-T cooling capacities when various hydrocarbons are added to nitrogen, and it predicts nitrogen normal boiling point depressions to as low as 60K when neon is added. JTMIX searches for heat exchanger "pinch points" that can result from insolubility of various components in each other. Such points produce numerical solutions that are not physically realizable. The length of the heat exchanger is searched for such points and, if they exist, the user is warned and the temperatures and heat exchanger effectiveness are corrected to provide a real solution. JTMIX gives very good correlation (within data accuracy) to mixed gas data published by the USSR and data taken by APD for the U.S. Naval Weapons Lab. Data taken at JPL also confirms JTMIX for all cases tested. JTMIX is written in Turbo C for IBM PC compatible computers running MS-DOS. The National Institute of Standards and Technology's (NIST, Gaithersburg, MD, 301-975-2208) computer code DDMIX is required to provide mixed-fluid enthalpy data which is input into JTMIX. The standard distribution medium for this program is a 5.25 inch 360K MS-DOS format diskette. JTMIX was developed in 1991 and is a copyrighted work with all copyright vested in NASA.
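A crude sketch of the pinch-point search idea (illustrative temperature profiles, not JTMIX's enthalpy-based routines): walk along the heat exchanger and flag the first station where the hot-side temperature falls to or below the cold side, which would make the numerical solution unphysical.

```python
import numpy as np

def find_pinch(T_hot, T_cold, min_dT=0.0):
    """Index of the first station where the temperature approach closes."""
    dT = np.asarray(T_hot) - np.asarray(T_cold)
    bad = np.where(dT <= min_dT)[0]
    return bad[0] if bad.size else None    # first offending station, or None

x = np.linspace(0.0, 1.0, 101)             # normalized heat exchanger length
T_hot = 300.0 - 220.0 * x                  # made-up hot-stream profile [K]
T_cold = 70.0 + 40.0 * np.sin(2.5 * x)     # made-up cold-stream profile [K]
print(find_pinch(T_hot, T_cold))           # station index of the pinch, or None
```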
StarTrax --- The Next Generation User Interface
NASA Astrophysics Data System (ADS)
Richmond, Alan; White, Nick
StarTrax is a software package to be distributed to end users for installation on their local computing infrastructure. It will provide access to many services of the HEASARC, i.e. bulletins, catalogs, proposal and analysis tools, initially for the ROSAT MIPS (Mission Information and Planning System) and later for the Next Generation Browse. A user activating the GUI will reach all HEASARC capabilities through a uniform view of the system, independent of the local computing environment and of the networking method used to access StarTrax. Use it if you prefer the point-and-click metaphor of modern GUI technology to classical command-line interfaces (CLIs). Notable strengths include: ease of use; excellent portability; very robust server support; a feedback button on every dialog; and a painstakingly crafted User Guide. It is designed to support a large number of input devices, including terminals, workstations, and personal computers. XVT's Portability Toolkit is used to build the GUI in C/C++ to run on OSF/Motif (UNIX or VMS), OPEN LOOK (UNIX), Macintosh, MS-Windows (DOS), or character systems.
Code of Federal Regulations, 2014 CFR
2014-07-01
... which the data were recorded on the computer readable form, the operating system used, a reference...) Operating System Compatibility: MS-DOS, MS-Windows, Unix or Macintosh; (3) Line Terminator: ASCII Carriage... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of...
Code of Federal Regulations, 2012 CFR
2012-07-01
... which the data were recorded on the computer readable form, the operating system used, a reference...) Operating System Compatibility: MS-DOS, MS-Windows, Unix or Macintosh; (3) Line Terminator: ASCII Carriage... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of...
Code of Federal Regulations, 2013 CFR
2013-07-01
... which the data were recorded on the computer readable form, the operating system used, a reference...) Operating System Compatibility: MS-DOS, MS-Windows, Unix or Macintosh; (3) Line Terminator: ASCII Carriage... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of...
RESPSYST: An Interactive Microcomputer Program for Education.
ERIC Educational Resources Information Center
Boyle, Joseph
1985-01-01
RESPSYST is a computer program (written in BASICA and using MS-DOS/PC-DOS microcomputers) incorporating more than 20 of the factors that determine gas transport by the cardio-respiratory system. The five-part program discusses most of these factors, provides several question/answer sections, and relies heavily on graphics to demonstrate…
FASTRAN II - FATIGUE CRACK GROWTH STRUCTURAL ANALYSIS (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Newman, J. C.
1994-01-01
Predictions of fatigue crack growth behavior can be made with the Fatigue Crack Growth Structural Analysis (FASTRAN II) computer program. As cyclic loads are applied to a selected crack configuration with an initial crack size, FASTRAN II predicts crack growth as a function of cyclic load history until either a desired crack size is reached or failure occurs. FASTRAN II is based on plasticity-induced crack-closure behavior of cracks in metallic materials and accounts for load-interaction effects, such as retardation and acceleration, under variable-amplitude loading. The closure model is based on the Dugdale model with modifications to allow plastically deformed material to be left along the crack surfaces as the crack grows. Plane stress and plane strain conditions, as well as conditions between these two, can be simulated in FASTRAN II by using a constraint factor on tensile yielding at the crack front to approximately account for three-dimensional stress states. FASTRAN II contains seventeen predefined crack configurations (standard laboratory fatigue crack growth rate specimens and many common crack configurations found in structures); and the user can define one additional crack configuration. The baseline crack growth rate properties (effective stress-intensity factor against crack growth rate) may be given in either equation or tabular form. For three-dimensional crack configurations, such as surface cracks or corner cracks at holes or notches, the fatigue crack growth rate properties may be different in the crack depth and crack length directions. Final failure of the cracked structure can be modelled with fracture toughness properties using either linear-elastic fracture mechanics (brittle materials), a two-parameter fracture criterion (brittle to ductile materials), or plastic collapse (extremely ductile materials). The crack configurations in FASTRAN II can be subjected to either constant-amplitude, variable-amplitude or spectrum loading. The applied loads may be either tensile or compressive. Several standardized aircraft flight-load histories, such as TWIST, Mini-TWIST, FALSTAFF, Inverted FALSTAFF, Felix and Gaussian, are included as options. FASTRAN II also includes two other methods that will help the user input spectrum load histories. The two methods are: (1) a list of stress points, and (2) a flight-by-flight history of stress points. Examples are provided in the user manual. Developed as a research program, FASTRAN II has successfully predicted crack growth in many metallic materials under various aircraft spectrum loading. A computer program DKEFF which is a part of the FASTRAN II package was also developed to analyze crack growth rate data from laboratory specimens to obtain the effective stress-intensity factor against crack growth rate relations used in FASTRAN II. FASTRAN II is written in standard FORTRAN 77. It has been successfully compiled and implemented on Sun4 series computers running SunOS and on IBM PC compatibles running MS-DOS using the Lahey F77L FORTRAN compiler. Sample input and output data are included with the FASTRAN II package. The UNIX version requires 660K of RAM for execution. The standard distribution medium for the UNIX version (LAR-14865) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The standard distribution medium for the MS-DOS version (LAR-14944) is a 5.25 inch 360K MS-DOS format diskette. 
The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The program was developed in 1984 and revised in 1992. Sun4 and SunOS are trademarks of Sun Microsystems, Inc. IBM PC is a trademark of International Business Machines Corp. MS-DOS is a trademark of Microsoft, Inc. F77L is a trademark of the Lahey Computer Systems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. PKWARE and PKUNZIP are trademarks of PKWare, Inc.
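To make the cycle-by-cycle crack-growth integration concrete, here is a hedged Python sketch. It uses a simple Paris-type law with an illustrative fracture-toughness cutoff, not FASTRAN II's plasticity-induced closure model; the constants and the center-crack stress-intensity formula are textbook assumptions, not values taken from the program.

    import math

    # Minimal cycle-by-cycle integration of a Paris-type growth law,
    # da/dN = C * (dK)^m, for a center crack under constant-amplitude
    # loading, stopping at a target crack size or at fracture.
    def grow_crack(a0, a_final, dsigma, C, m, K_Ic):
        a, n = a0, 0
        while a < a_final:
            dK = dsigma * math.sqrt(math.pi * a)   # MPa*sqrt(m), center crack
            if dK >= K_Ic:                         # simple fracture check
                return n, a, "fracture"
            a += C * dK ** m                       # growth this cycle
            n += 1
        return n, a, "reached target size"

    # Illustrative constants only (aluminum-like orders of magnitude).
    print(grow_crack(a0=0.001, a_final=0.01, dsigma=100.0,
                     C=1e-11, m=3.0, K_Ic=30.0))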
FASTRAN II - FATIGUE CRACK GROWTH STRUCTURAL ANALYSIS (IBM PC VERSION)
NASA Technical Reports Server (NTRS)
Newman, J. C.
1994-01-01
Predictions of fatigue crack growth behavior can be made with the Fatigue Crack Growth Structural Analysis (FASTRAN II) computer program. As cyclic loads are applied to a selected crack configuration with an initial crack size, FASTRAN II predicts crack growth as a function of cyclic load history until either a desired crack size is reached or failure occurs. FASTRAN II is based on plasticity-induced crack-closure behavior of cracks in metallic materials and accounts for load-interaction effects, such as retardation and acceleration, under variable-amplitude loading. The closure model is based on the Dugdale model with modifications to allow plastically deformed material to be left along the crack surfaces as the crack grows. Plane stress and plane strain conditions, as well as conditions between these two, can be simulated in FASTRAN II by using a constraint factor on tensile yielding at the crack front to approximately account for three-dimensional stress states. FASTRAN II contains seventeen predefined crack configurations (standard laboratory fatigue crack growth rate specimens and many common crack configurations found in structures); and the user can define one additional crack configuration. The baseline crack growth rate properties (effective stress-intensity factor against crack growth rate) may be given in either equation or tabular form. For three-dimensional crack configurations, such as surface cracks or corner cracks at holes or notches, the fatigue crack growth rate properties may be different in the crack depth and crack length directions. Final failure of the cracked structure can be modelled with fracture toughness properties using either linear-elastic fracture mechanics (brittle materials), a two-parameter fracture criterion (brittle to ductile materials), or plastic collapse (extremely ductile materials). The crack configurations in FASTRAN II can be subjected to either constant-amplitude, variable-amplitude or spectrum loading. The applied loads may be either tensile or compressive. Several standardized aircraft flight-load histories, such as TWIST, Mini-TWIST, FALSTAFF, Inverted FALSTAFF, Felix and Gaussian, are included as options. FASTRAN II also includes two other methods that will help the user input spectrum load histories. The two methods are: (1) a list of stress points, and (2) a flight-by-flight history of stress points. Examples are provided in the user manual. Developed as a research program, FASTRAN II has successfully predicted crack growth in many metallic materials under various aircraft spectrum loading. A computer program DKEFF which is a part of the FASTRAN II package was also developed to analyze crack growth rate data from laboratory specimens to obtain the effective stress-intensity factor against crack growth rate relations used in FASTRAN II. FASTRAN II is written in standard FORTRAN 77. It has been successfully compiled and implemented on Sun4 series computers running SunOS and on IBM PC compatibles running MS-DOS using the Lahey F77L FORTRAN compiler. Sample input and output data are included with the FASTRAN II package. The UNIX version requires 660K of RAM for execution. The standard distribution medium for the UNIX version (LAR-14865) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The standard distribution medium for the MS-DOS version (LAR-14944) is a 5.25 inch 360K MS-DOS format diskette. 
The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The program was developed in 1984 and revised in 1992. Sun4 and SunOS are trademarks of Sun Microsystems, Inc. IBM PC is a trademark of International Business Machines Corp. MS-DOS is a trademark of Microsoft, Inc. F77L is a trademark of the Lahey Computer Systems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. PKWARE and PKUNZIP are trademarks of PKWare, Inc.
CWG - MUTUAL COUPLING PROGRAM FOR CIRCULAR WAVEGUIDE-FED APERTURE ARRAY (IBM PC VERSION)
NASA Technical Reports Server (NTRS)
Bailey, M. C.
1994-01-01
Mutual Coupling Program for Circular Waveguide-fed Aperture Array (CWG) was developed to calculate the electromagnetic interaction between elements of an antenna array of circular apertures with specified aperture field distributions. The field distributions were assumed to be a superposition of the modes which could exist in a circular waveguide. Various external media were included to provide flexibility of use, for example, the flexibility to determine the effects of dielectric covers (i.e., thermal protection system tiles) upon the impedance of aperture type antennas. The impedance and radiation characteristics of planar array antennas depend upon the mutual interaction between all the elements of the array. These interactions are influenced by several parameters (e.g., the array grid geometry, the geometry and excitation of each array element, the medium outside the array, and the internal network feeding the array). For the class of array antenna whose radiating elements consist of small holes in a flat conducting plate, the electromagnetic problem can be divided into two parts, the internal and the external. In solving the external problem for an array of circular apertures, CWG computes the mutual interaction between various combinations of circular modal distributions and apertures. CWG computes the mutual coupling between various modes assumed to exist in circular apertures located in a flat conducting plane of infinite dimensions. The apertures can radiate into free space, a homogeneous medium, a multilayered region, or a reflecting surface. These apertures are assumed to be excited by one or more modes corresponding to the modal distributions in circular waveguides of the same cross sections as the apertures. The apertures may be of different sizes and also of different polarizations. However, the program assumes that each aperture field contains the same modal distributions, and it calculates the complex scattering matrix between all mode and aperture combinations. The scattering matrix can then be used to determine the complex modal field amplitudes for each aperture with a specified array excitation. CWG is written in VAX FORTRAN for DEC VAX series computers running VMS (LAR-15236) and IBM PC series and compatible computers running MS-DOS (LAR-15226). It requires 360K of RAM for execution. To compile the source code for the PC version, the NDP Fortran compiler and linker are required; however, the distribution medium for the PC version of CWG includes a sample MS-DOS executable which was created using NDP Fortran with the -vms compiler option. The standard distribution medium for the PC version of CWG is a 3.5 inch 1.44Mb MS-DOS format diskette. The standard distribution medium for the VAX version of CWG is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. The VAX version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. Both machine versions of CWG include an electronic version of the documentation in Microsoft Word for Windows format. CWG was developed in 1993 and is a copyrighted work with all copyright vested in NASA.
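The role of the scattering matrix can be shown in a few lines. This is a hedged sketch (CWG itself is VAX FORTRAN): given the complex scattering matrix S between mode/aperture combinations and an incident modal excitation vector a, the coupled modal amplitudes follow as b = S a. The 2x2 matrix values below are invented purely for illustration.

    # Two mode/aperture combinations; S would normally come from the
    # mutual coupling computation, a from the specified array excitation.
    S = [[0.10 + 0.05j, 0.02 - 0.01j],
         [0.02 - 0.01j, 0.12 + 0.03j]]
    a = [1.0 + 0.0j, 0.0 + 0.0j]          # drive the first mode only

    # Matrix-vector product b = S a with plain complex arithmetic.
    b = [sum(S[i][j] * a[j] for j in range(len(a))) for i in range(len(S))]
    print("coupled modal amplitudes:", b)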
CWG - MUTUAL COUPLING PROGRAM FOR CIRCULAR WAVEGUIDE-FED APERTURE ARRAY (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Bailey, M. C.
1994-01-01
Mutual Coupling Program for Circular Waveguide-fed Aperture Array (CWG) was developed to calculate the electromagnetic interaction between elements of an antenna array of circular apertures with specified aperture field distributions. The field distributions were assumed to be a superposition of the modes which could exist in a circular waveguide. Various external media were included to provide flexibility of use, for example, the flexibility to determine the effects of dielectric covers (i.e., thermal protection system tiles) upon the impedance of aperture type antennas. The impedance and radiation characteristics of planar array antennas depend upon the mutual interaction between all the elements of the array. These interactions are influenced by several parameters (e.g., the array grid geometry, the geometry and excitation of each array element, the medium outside the array, and the internal network feeding the array). For the class of array antenna whose radiating elements consist of small holes in a flat conducting plate, the electromagnetic problem can be divided into two parts, the internal and the external. In solving the external problem for an array of circular apertures, CWG computes the mutual interaction between various combinations of circular modal distributions and apertures. CWG computes the mutual coupling between various modes assumed to exist in circular apertures located in a flat conducting plane of infinite dimensions. The apertures can radiate into free space, a homogeneous medium, a multilayered region, or a reflecting surface. These apertures are assumed to be excited by one or more modes corresponding to the modal distributions in circular waveguides of the same cross sections as the apertures. The apertures may be of different sizes and also of different polarizations. However, the program assumes that each aperture field contains the same modal distributions, and it calculates the complex scattering matrix between all mode and aperture combinations. The scattering matrix can then be used to determine the complex modal field amplitudes for each aperture with a specified array excitation. CWG is written in VAX FORTRAN for DEC VAX series computers running VMS (LAR-15236) and IBM PC series and compatible computers running MS-DOS (LAR-15226). It requires 360K of RAM for execution. To compile the source code for the PC version, the NDP Fortran compiler and linker are required; however, the distribution medium for the PC version of CWG includes a sample MS-DOS executable which was created using NDP Fortran with the -vms compiler option. The standard distribution medium for the PC version of CWG is a 3.5 inch 1.44Mb MS-DOS format diskette. The standard distribution medium for the VAX version of CWG is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. The VAX version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. Both machine versions of CWG include an electronic version of the documentation in Microsoft Word for Windows format. CWG was developed in 1993 and is a copyrighted work with all copyright vested in NASA.
Tools of the Courseware Trade: A Comparison of ToolBook 1.0 and HyperCard 2.0.
ERIC Educational Resources Information Center
Brader, Lorinda L.
1990-01-01
Compares two authoring tools that were developed to enable users without programing experience to create and modify software. HyperCard, designed for Macintosh microcomputers, and ToolBook, for microcomputers that run on MS-DOS, are compared in the areas of programing languages, graphics and printing capabilities, user interface, system…
ERIC Educational Resources Information Center
Batt, Russell H., Ed.
1989-01-01
Describes two chemistry computer programs: (1) "Eureka: A Chemistry Problem Solver" (problem files may be written by the instructor, MS-DOS 2.0, IBM with 384K); and (2) "PC-File+" (database management, IBM with 416K and two floppy drives). (MVL)
Multiple running speed signals in medial entorhinal cortex
Hinman, James R.; Brandon, Mark P.; Climer, Jason R.; Chapman, G. William; Hasselmo, Michael E.
2016-01-01
Grid cells in medial entorhinal cortex (MEC) can be modeled using oscillatory interference or attractor dynamic mechanisms that perform path integration, a computation requiring information about running direction and speed. The two classes of computational models often use either an oscillatory frequency or a firing rate that increases as a function of running speed. Yet it is currently not known whether these are two manifestations of the same speed signal or dissociable signals with potentially different anatomical substrates. We examined coding of running speed in MEC and identified these two speed signals to be independent of each other within individual neurons. The medial septum (MS) is strongly linked to locomotor behavior and removal of MS input resulted in strengthening of the firing rate speed signal, while decreasing the strength of the oscillatory speed signal. Thus two speed signals are present in MEC that are differentially affected by disrupted MS input. PMID:27427460
Remote media vision-based computer input device
NASA Astrophysics Data System (ADS)
Arabnia, Hamid R.; Chen, Ching-Yi
1991-11-01
In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are: a monitor, an image capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.
BALANCER: A Computer Program for Balancing Chemical Equations.
ERIC Educational Resources Information Center
Jones, R. David; Schwab, A. Paul
1989-01-01
Describes the theory and operation of a computer program which was written to balance chemical equations. Software consists of a compiled file of 46K for use under MS-DOS 2.0 or later on IBM PC or compatible computers. Additional specifications of courseware and availability information are included. (Author/RT)
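The abstract does not say how BALANCER balances equations internally, but a standard approach is to find integer coefficients in the nullspace of an element-by-species composition matrix. The Python sketch below (using the sympy library) shows that generic method, offered only as an illustration.

    from sympy import Matrix, lcm

    # Balance H2 + O2 -> H2O. Rows are elements (H, O), columns are
    # species (H2, O2, H2O); product columns are negated so that a
    # balanced equation corresponds to a nullspace vector.
    A = Matrix([[2, 0, -2],
                [0, 2, -1]])

    v = A.nullspace()[0]                  # rational solution vector
    scale = lcm([term.q for term in v])   # clear the denominators
    coeffs = [int(term * scale) for term in v]
    print(coeffs)                         # [2, 1, 2]: 2H2 + O2 -> 2H2O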
Strong running coupling at τ and Z(0) mass scales from lattice QCD.
Blossier, B; Boucaud, Ph; Brinet, M; De Soto, F; Du, X; Morenas, V; Pène, O; Petrov, K; Rodríguez-Quintero, J
2012-06-29
This Letter reports on the first computation, from data obtained in lattice QCD with u, d, s, and c quarks in the sea, of the running strong coupling via the ghost-gluon coupling renormalized in the momentum-subtraction Taylor scheme. We provide readers with estimates of $\alpha_{\overline{\mathrm{MS}}}(m_\tau^2)$ and $\alpha_{\overline{\mathrm{MS}}}(m_Z^2)$ in very good agreement with experimental results. Including a dynamical c quark makes the needed running of $\alpha_{\overline{\mathrm{MS}}}$ safer.
METCAN-PC - METAL MATRIX COMPOSITE ANALYZER
NASA Technical Reports Server (NTRS)
Murthy, P. L.
1994-01-01
High temperature metal matrix composites offer great potential for use in advanced aerospace structural applications. The realization of this potential, however, requires concurrent developments in (1) a technology base for fabricating high temperature metal matrix composite structural components, (2) experimental techniques for measuring their thermal and mechanical characteristics, and (3) computational methods to predict their behavior. METCAN (METal matrix Composite ANalyzer) is a computer program developed to predict this behavior. METCAN can be used to computationally simulate the non-linear behavior of high temperature metal matrix composites (HT-MMC), thus allowing the potential payoff for the specific application to be assessed. It provides a comprehensive analysis of composite thermal and mechanical performance. METCAN treats material nonlinearity at the constituent (fiber, matrix, and interphase) level, where the behavior of each constituent is modeled accounting for time-temperature-stress dependence. The composite properties are synthesized from the constituent instantaneous properties by making use of composite micromechanics and macromechanics. Factors which affect the behavior of the composite properties include the fabrication process variables, the fiber and matrix properties, the bonding between the fiber and matrix, and/or the properties of the interphase between the fiber and matrix. The METCAN simulation is performed as a point-wise analysis and produces composite properties which are readily incorporated into a finite element code to perform a global structural analysis. After the global structural analysis is performed, METCAN decomposes the composite properties back into the localized response at the various levels of the simulation. At this point the constituent properties are updated and the next iteration in the analysis is initiated. This cyclic procedure is referred to as the integrated approach to metal matrix composite analysis. METCAN-PC is written in FORTRAN 77 for IBM PC series and compatible computers running MS-DOS. An 80286 machine with an 80287 math co-processor is required for execution. The executable requires at least 640K of RAM and DOS 3.1 or higher. The package includes sample executables which were compiled under Microsoft FORTRAN v. 5.1. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. METCAN-PC was developed in 1992.
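The flavor of the micromechanics synthesis step can be conveyed with a deliberately simple sketch. This is a hedged illustration only: METCAN's actual equations also treat the interphase and the time-temperature-stress dependence described above, whereas the rule-of-mixtures formulas below are textbook simplifications.

    # Simple rule-of-mixtures synthesis for a unidirectional
    # fiber/matrix composite (Vf = fiber volume fraction).
    def longitudinal_modulus(Vf, Ef, Em):
        return Vf * Ef + (1.0 - Vf) * Em            # Voigt (iso-strain)

    def transverse_modulus(Vf, Ef, Em):
        return 1.0 / (Vf / Ef + (1.0 - Vf) / Em)    # Reuss (iso-stress)

    # Illustrative moduli in GPa, roughly SiC fiber in a titanium matrix.
    print(longitudinal_modulus(0.35, 400.0, 110.0))
    print(transverse_modulus(0.35, 400.0, 110.0))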
Emissivity of Rocket Plume Particulates
1992-09-01
[Front-matter residue; recoverable section titles: V. Experimental Results; VI. Conclusions and Recommendations; Appendix A. CATS-E Software.] The instrumentation is interfaced through the CATS E Thermal Analysis software, which is MS-DOS based and can be run on any 286 or higher CPU. This system allows real-time ... body source to establish the parameters required by the CATS program for proper microscope/scanner interface. A complete description of the microscope ...
Computer Code Aids Design Of Wings
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Darden, Christine M.
1993-01-01
AERO2S computer code developed to aid design engineers in selection and evaluation of aerodynamically efficient wing/canard and wing/horizontal-tail configurations that include simple hinged-flap systems. Code rapidly estimates longitudinal aerodynamic characteristics of conceptual airplane lifting-surface arrangements. Developed in FORTRAN V on CDC 6000 computer system, and later ported to MS-DOS environment.
ERIC Educational Resources Information Center
Moore, John W., Ed.
1988-01-01
Describes five computer software packages; four for MS-DOS Systems and one for Apple II. Included are SPEC20, an interactive simulation of a Bausch and Lomb Spectronic-20; a database for laboratory chemicals and programs for visualizing Boltzmann-like distributions, orbital plot for the hydrogen atom and molecular orbital theory. (CW)
Development of a prototype commonality analysis tool for use in space programs
NASA Technical Reports Server (NTRS)
Yeager, Dorian P.
1988-01-01
A software tool to aid in performing commonality analyses, called the Commonality Analysis Problem Solver (CAPS), was designed, and a prototype version (CAPS 1.0) was implemented and tested. CAPS 1.0 runs in an MS-DOS or IBM PC-DOS environment. CAPS is designed around a simple input language which provides a natural syntax for the description of feasibility constraints. It provides its users with the ability to load a database representing a set of design items, describe the feasibility constraints on items in that database, and perform a comprehensive cost analysis to find the most economical substitution pattern.
RSM 1.0 - A RESUPPLY SCHEDULER USING INTEGER OPTIMIZATION
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
RSM, Resupply Scheduling Modeler, is a fully menu-driven program that uses integer programming techniques to determine an optimum schedule for replacing components on or before the end of a fixed replacement period. Although written to analyze the electrical power system on the Space Station Freedom, RSM is quite general and can be used to model the resupply of almost any system subject to user-defined resource constraints. RSM is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more computationally intensive, integer programming was required for accuracy when modeling systems with small quantities of components. Input values for component life can be real numbers; RSM converts them to integers by dividing the lifetime by the period duration, then reducing the result to the next lowest integer. For each component, there is a set of constraints that ensure it is replaced before its lifetime expires. RSM includes user-defined constraints such as transportation mass and volume limits, as well as component life, available repair crew time, and assembly sequences. A weighting factor allows the program to minimize factors such as cost. The program then performs an iterative analysis, which is displayed during the processing. On each iteration, a message gives the first period in which resources are exceeded. If the scheduling problem is infeasible, the final message will also indicate the first period in which resources were exceeded. RSM is written in APL2 for IBM PC series computers and compatibles. A stand-alone executable version of RSM is provided; however, this is a "packed" version of RSM which can only utilize the memory within the 640K DOS limit. This executable requires at least 640K of memory and DOS 3.1 or higher. Source code for an APL2/PC workspace version is also provided. This version of RSM can make full use of any installed extended memory but must be run with the APL2 interpreter; it requires an 80486-based microcomputer or an 80386-based microcomputer with an 80387 math coprocessor, at least 2Mb of extended memory, and DOS 3.3 or higher. The standard distribution medium for this package is one 5.25 inch 360K MS-DOS format diskette. RSM was developed in 1991. APL2 and IBM PC are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
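Two of the modeling steps described above are easy to show in a short sketch. RSM itself is APL2; the Python below, including the one-replacement-per-lifetime feasibility check, is an invented illustration of the stated conversion rule (lifetime divided by period duration, reduced to the next lowest integer) and the replace-before-expiry constraint.

    import math

    def life_in_periods(lifetime, period_duration):
        # Reduce a real-valued lifetime to whole replacement periods.
        n = math.floor(lifetime / period_duration)
        return max(n, 1)   # assumption for the sketch: lasts >= 1 period

    def schedule_is_feasible(replacements, horizon, life):
        # A component installed at period 0 must be replaced at least
        # every `life` periods through the end of the horizon.
        last = 0
        for p in sorted(replacements):
            if p - last > life:
                return False
            last = p
        return horizon - last <= life

    # A 3.7-year lifetime with 1-year periods becomes 3 whole periods,
    # so replacements at periods 3 and 6 cover an 8-period horizon.
    life = life_in_periods(3.7, 1.0)                  # -> 3
    print(schedule_is_feasible({3, 6}, horizon=8, life=life))   # True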
VPI - VIBRATION PATTERN IMAGER: A CONTROL AND DATA ACQUISITION SYSTEM FOR SCANNING LASER VIBROMETERS
NASA Technical Reports Server (NTRS)
Rizzi, S. A.
1994-01-01
The Vibration Pattern Imager (VPI) system was designed to control and acquire data from laser vibrometer sensors. The PC-based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor (Ometron Limited, Kelvin House, Worsley Bridge Road, London, SE26 5BX, England), but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. VPI's graphical user interface allows the operation of the program to be controlled interactively through keyboard and mouse-selected menu options. The main menu controls all functions for setup, data acquisition, display, file operations, and exiting the program. Two types of data may be acquired with the VPI system: single point or "full field". In the single point mode, time series data is sampled by the A/D converter on the I/O board at a user-defined rate for the selected number of samples. The position of the measuring point, adjusted by mirrors in the sensor, is controlled via a mouse input. In the "full field" mode, the measurement point is moved over a user-selected rectangular area with up to 256 positions in both x and y directions. The time series data is sampled by the A/D converter on the I/O board and converted to a root-mean-square (rms) value by the DSP board. The rms "full field" velocity distribution is then uploaded for display and storage. VPI is written in C language and Texas Instruments' TMS320C30 assembly language for IBM PC series and compatible computers running MS-DOS. The program requires 640K of RAM for execution, and a hard disk with 10Mb or more of disk space is recommended. The program also requires a mouse, a VGA graphics display, a Four Channel analog I/O board (Spectrum Signal Processing, Inc.; Westborough, MA), a break-out box and a Spirit-30 board (Sonitech International, Inc.; Wellesley, MA) which includes a TMS320C30 DSP processor, 256Kb zero wait state SRAM, and a daughter board with 8Mb one wait state DRAM. Please contact COSMIC for additional information on required hardware and software. In order to compile the provided VPI source code, a Microsoft C version 6.0 compiler, a Texas Instruments' TMS320C30 assembly language compiler, and the Spirit 30 run time libraries are required. A math co-processor is highly recommended. A sample MS-DOS executable is provided on the distribution medium. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. VPI was developed in 1991-1992.
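The "full field" reduction step described above amounts to computing one rms value per scan position. The following hedged Python sketch shows that reduction only; in VPI the equivalent work is done in C and TMS320C30 assembly, and the acquire function here is a hypothetical stand-in for the hardware sampling.

    import math

    def acquire(x, y, n=1024):
        # Hypothetical stand-in for sampling the laser vibrometer at
        # scan position (x, y); returns n velocity samples.
        return [0.0] * n

    def rms(samples):
        # Reduce a sampled time series to a root-mean-square value,
        # as the DSP board does for each "full field" position.
        return math.sqrt(sum(v * v for v in samples) / len(samples))

    # A scan is then just a grid of rms values (4x4 here; VPI allows
    # up to 256 positions in each of x and y).
    field = [[rms(acquire(x, y)) for x in range(4)] for y in range(4)]
    print(field[0][0])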
TRL - A FORMAL TEST REPRESENTATION LANGUAGE AND TOOL FOR FUNCTIONAL TEST DESIGNS
NASA Technical Reports Server (NTRS)
Hops, J. M.
1994-01-01
A Formal Test Representation Language and Tool for Functional Test Designs (TRL) is an automatic tool and a formal language that is used to implement the Category-Partition Method and produce the specification of test cases in the testing phase of software development. The Category-Partition Method is particularly useful in defining the inputs, outputs and purpose of the test design phase and combines the benefits of choosing normal cases with error exposing properties. Traceability can be maintained quite easily by creating a test design for each objective in the test plan. The effort to transform the test cases into procedures is simplified by using an automatic tool to create the cases based on the test design. The method allows the rapid elimination of undesired test cases from consideration, and easy review of test designs by peer groups. The first step in the category-partition method is functional decomposition, in which the specification and/or requirements are decomposed into functional units that can be tested independently. A secondary purpose of this step is to identify the parameters that affect the behavior of the system for each functional unit. The second step, category analysis, carries the work done in the previous step further by determining the properties or sub-properties of the parameters that would make the system behave in different ways. The designer should analyze the requirements to determine the features or categories of each parameter and how the system may behave if the category were to vary its value. If the parameter undergoing refinement is a data-item, then categories of this data-item may be any of its attributes, such as type, size, value, units, frequency of change, or source. After all the categories for the parameters of the functional unit have been determined, the next step is to partition each category's range space into mutually exclusive values that the category can assume. In choosing partition values, all possible kinds of values should be included, especially the ones that will maximize error detection. The purpose of the final step, partition constraint analysis, is to refine the test design specification so that only the technically effective and economically feasible test cases are implied. TRL is written in C to be machine independent. It has been successfully implemented on an IBM PC compatible running MS DOS, a Sun4 series computer running SunOS, an HP 9000/700 series workstation running HP-UX, a DECstation running DEC RISC ULTRIX, and a DEC VAX series computer running VMS. TRL requires 1Mb of disk space and a minimum of 84K of RAM. The documentation is available in electronic form in WordPerfect format. The standard distribution medium for TRL is a 5.25 inch 360K MS-DOS format diskette. Alternate distribution media and formats are available upon request. TRL was developed in 1993 and is a copyrighted work with all copyright vested in NASA.
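The category-partition steps above can be made concrete with a small sketch. This is a hedged Python illustration of the method itself, not of TRL's input language or C implementation: the categories, partition values, and the feasibility constraint are invented for the example.

    from itertools import product

    # Categories for the parameters of one functional unit; each
    # category lists its mutually exclusive partition values.
    categories = {
        "file_size": ["empty", "small", "huge"],
        "permissions": ["readable", "unreadable"],
    }

    def feasible(frame):
        # Partition constraint analysis: an empty file's readability
        # is irrelevant, so drop that combination.
        return not (frame["file_size"] == "empty"
                    and frame["permissions"] == "unreadable")

    # Raw test frames are the cross product of all partition values;
    # constraints then eliminate the undesired cases.
    names = list(categories)
    frames = [dict(zip(names, combo))
              for combo in product(*categories.values())]
    print([f for f in frames if feasible(f)])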
Pizzuti, A; Baralis, G; Bassignana, A; Antonielli, E; Di Leo, M
1997-01-01
The MS200 Cardioscope, from MRT Micro AS, Norway, is a 12-channel ECG card to be inserted directly into a standard personal computer (PC). The standard ISA Bus compatible half-length card comes with a set of 10 cables with electrodes and the software for recording, displaying and saving ECG signals. The system is supplied with DOS or Windows software. The goal of the present work was to evaluate the affordability and usability of the MS200 in a clinical setting. We tested the 1.5 DOS version of the software. In 30 patients with various cardiac diseases the ECG signal was recorded with the MS200 and with standard Hellige CardioSmart equipment. The saved ECGs were recalled and printed using an Epson Stylus 800 ink-jet printer. Two cardiologists reviewed the recordings, looking at output quality, amplitude and speed precision, artifacts, etc. 1) Installation: the card proved to be totally compatible with the hardware; no changes in default settings had to be made. 2) Usage: the screens are clear; the commands and menus are intuitive and easy to use. Due to the boot-strap and software loading procedures and, most important, off-line printing, the time needed to obtain a complete ECG printout was longer than that of the reference machine. 3) Archiving and retrieval of ECGs: the ECG curves can be saved in original or compressed form; selecting the latter, the noise and non-ECG information is filtered away and the space consumption on disk is reduced: on average, 20 Kb are needed for 10 seconds of signal. The MS200 can be run on a Local Area Network and is prepared for integration with an existing information system: we are currently testing the system in this scenario. 4) The MS200 includes options for on-line diagnosis, a technology we have not tested in the present work. 5) The only setting allowed for printing full pages is A4: the quality of printouts is good, with a resolution of 180 DPI. In conclusion, the MS200 system seems reliable and safe. In the configuration we tested, it cannot substitute for dedicated ECG equipment: from this point of view, a smaller PCMCIA-type card with a battery-operated notebook PC would be more suitable for clinical use. Nevertheless, the possibility to log and track ECG records, integrated into the department information system, may provide a valuable tool for improving access to medical information.
An Expert System for Identification of Minerals in Thin Section.
ERIC Educational Resources Information Center
Donahoe, James Louis; And Others
1989-01-01
Discusses a computer database which includes optical properties of 142 minerals. Uses fuzzy logic to identify minerals from incomplete and imprecise information. Written in Turbo PASCAL for MS-DOS with 128K. (MVL)
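The fuzzy-logic ranking idea can be sketched briefly. This is a hedged, generic illustration only: the abstract does not publish the program's membership functions or database, so the minerals, properties, tolerances, and scoring rule below are all invented.

    # Tiny stand-in database of optical properties per mineral.
    MINERALS = {
        "quartz":  {"birefringence": 0.009, "relief": 1.0},
        "olivine": {"birefringence": 0.040, "relief": 3.0},
    }

    def closeness(observed, reference, tolerance):
        # Fuzzy membership in [0, 1]: 1 at an exact match, falling
        # linearly to 0 at the tolerance limit.
        return max(0.0, 1.0 - abs(observed - reference) / tolerance)

    def rank(observations, tolerances):
        # Average the memberships over whichever properties were
        # observed, so incomplete data still yields a ranking.
        scores = {}
        for name, props in MINERALS.items():
            vals = [closeness(observations[p], props[p], tolerances[p])
                    for p in observations if p in props]
            scores[name] = sum(vals) / len(vals)
        return sorted(scores.items(), key=lambda kv: -kv[1])

    print(rank({"birefringence": 0.010, "relief": 1.2},
               {"birefringence": 0.02, "relief": 2.0}))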
Updated System-Availability and Resource-Allocation Program
NASA Technical Reports Server (NTRS)
Viterna, Larry
2004-01-01
A second version of the Availability, Cost and Resource Allocation (ACARA) computer program has become available. The first version was reported in an earlier tech brief. To recapitulate: ACARA analyzes the availability, mean-time-between-failures of components, life-cycle costs, and scheduling of resources of a complex system of equipment. ACARA uses a statistical Monte Carlo method to simulate the failure and repair of components while complying with user-specified constraints on spare parts and resources. ACARA evaluates the performance of the system on the basis of a mathematical model developed from a block-diagram representation. The previous version utilized the MS-DOS operating system and could not be run by use of the most recent versions of the Windows operating system. The current version incorporates the algorithms of the previous version but is compatible with Windows and utilizes menus and a file-management approach typical of Windows-based software.
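The Monte Carlo idea the abstract describes can be shown in miniature. This hedged Python sketch alternates exponentially distributed failure and repair times for a single component and reports the fraction of mission time spent up; ACARA's actual model, with spares, resource constraints, and block diagrams, is far richer.

    import random

    def availability(mtbf, mttr, mission_time, trials=1000):
        up_total = 0.0
        for _ in range(trials):
            t, up = 0.0, 0.0
            while t < mission_time:
                run = random.expovariate(1.0 / mtbf)       # time to failure
                up += min(run, mission_time - t)           # uptime this cycle
                t += run + random.expovariate(1.0 / mttr)  # plus repair time
            up_total += up / mission_time
        return up_total / trials

    # Illustrative numbers: ~0.95 expected for mtbf/(mtbf + mttr).
    print(availability(mtbf=1000.0, mttr=50.0, mission_time=8760.0))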
The report is a reference manual for RASSMIT Version 2.1, a computer program that was developed to simulate and aid in the design of sub-slab depressurization systems used for indoor radon mitigation. The program was designed to run on DOS-compatible personal computers to ensure ...
Use of Optical Storage Devices as Shared Resources in Local Area Networks
1989-09-01
[Front-matter residue; recoverable table and figure titles: Service Calls for MS-DOS CD-ROM Extensions; MS-DOS Primitive Groups; RAM Usage for Various LAN ...; Service Call Translation to DOS Primitives; MS-DOS Device Drivers; MS-DOS/ROM ...] Instructions directed to I/O devices are referred to as primitive instruction groups; these include keyboard, video, disk, and serial ...
MATH77 - A LIBRARY OF MATHEMATICAL SUBPROGRAMS FOR FORTRAN 77, RELEASE 4.0
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1994-01-01
MATH77 is a high quality library of ANSI FORTRAN 77 subprograms implementing contemporary algorithms for the basic computational processes of science and engineering. The portability of MATH77 meets the needs of present-day scientists and engineers who typically use a variety of computing environments. Release 4.0 of MATH77 contains 454 user-callable and 136 lower-level subprograms. Usage of the user-callable subprograms is described in 69 sections of the 416 page users' manual. The topics covered by MATH77 are indicated by the following list of chapter titles in the users' manual: Mathematical Functions, Pseudo-random Number Generation, Linear Systems of Equations and Linear Least Squares, Matrix Eigenvalues and Eigenvectors, Matrix Vector Utilities, Nonlinear Equation Solving, Curve Fitting, Table Look-Up and Interpolation, Definite Integrals (Quadrature), Ordinary Differential Equations, Minimization, Polynomial Rootfinding, Finite Fourier Transforms, Special Arithmetic, Sorting, Library Utilities, Character-based Graphics, and Statistics. Besides subprograms that are adaptations of public domain software, MATH77 contains a number of unique packages developed by the authors of MATH77. Instances of the latter type include (1) adaptive quadrature, allowing for exceptional generality in multidimensional cases, (2) the ordinary differential equations solver used in spacecraft trajectory computation for JPL missions, (3) univariate and multivariate table look-up and interpolation, allowing for "ragged" tables, and providing error estimates, and (4) univariate and multivariate derivative-propagation arithmetic. MATH77 release 4.0 is a subroutine library which has been carefully designed to be usable on any computer system that supports the full ANSI standard FORTRAN 77 language. It has been successfully implemented on a CRAY Y/MP computer running UNICOS, a UNISYS 1100 computer running EXEC 8, a DEC VAX series computer running VMS, a Sun4 series computer running SunOS, a Hewlett-Packard 720 computer running HP-UX, a Macintosh computer running MacOS, and an IBM PC compatible computer running MS-DOS. Accompanying the library is a set of 196 "demo" drivers that exercise all of the user-callable subprograms. The FORTRAN source code for MATH77 comprises 109K lines of code in 375 files with a total size of 4.5Mb. The demo drivers comprise 11K lines of code with a total size of 418K. Forty-four percent of the lines of the library code and 29% of those in the demo code are comment lines. The standard distribution medium for MATH77 is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 9-track 1600 BPI magnetic tape in VAX BACKUP format and a TK50 tape cartridge in VAX BACKUP format. An electronic copy of the documentation is included on the distribution media. Previous releases of MATH77 have been used over a number of years in a variety of JPL applications. MATH77 Release 4.0 was completed in 1992. MATH77 is a copyrighted work with all copyright vested in NASA.
PGCA: An algorithm to link protein groups created from MS/MS data
Sasaki, Mayu; Hollander, Zsuzsanna; Smith, Derek; McManus, Bruce; McMaster, W. Robert; Ng, Raymond T.; Cohen Freue, Gabriela V.
2017-01-01
The quantitation of proteins using shotgun proteomics has gained popularity in the last decades, simplifying sample handling procedures, removing extensive protein separation steps and achieving a relatively high throughput readout. The process starts with the digestion of the protein mixture into peptides, which are then separated by liquid chromatography and sequenced by tandem mass spectrometry (MS/MS). At the end of the workflow, recovering the identity of the proteins originally present in the sample is often a difficult and ambiguous process, because more than one protein identifier may match a set of peptides identified from the MS/MS spectra. To address this identification problem, many MS/MS data processing software tools combine all plausible protein identifiers matching a common set of peptides into a protein group. However, this solution introduces new challenges in studies with multiple experimental runs, which can be characterized by three main factors: i) protein groups’ identifiers are local, i.e., they vary run to run, ii) the composition of each group may change across runs, and iii) the supporting evidence of proteins within each group may also change across runs. Since in general there is no conclusive evidence about the absence of proteins in the groups, protein groups need to be linked across different runs in subsequent statistical analyses. We propose an algorithm, called Protein Group Code Algorithm (PGCA), to link groups from multiple experimental runs by forming global protein groups from connected local groups. The algorithm is computationally inexpensive and enables the connection and analysis of lists of protein groups across runs needed in biomarkers studies. We illustrate the identification problem and the stability of the PGCA mapping using 65 iTRAQ experimental runs. Further, we use two biomarker studies to show how PGCA enables the discovery of relevant candidate protein group markers with similar but non-identical compositions in different runs. PMID:28562641
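The connected-groups idea behind the linking step can be sketched with a plain union-find. This is a hedged illustration of forming global groups from local groups that share protein identifiers, not the published PGCA code (which also assigns persistent group codes); the identifiers and runs below are invented.

    def global_groups(runs):
        """runs: list of runs; each run is a list of local protein
        groups, each group a set of protein identifiers. Local groups
        sharing any identifier, within or across runs, are merged."""
        parent = {}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        def union(a, b):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb

        for run in runs:
            for group in run:
                ids = list(group)
                for p in ids:
                    parent.setdefault(p, p)
                for p in ids[1:]:
                    union(ids[0], p)

        clusters = {}
        for p in parent:
            clusters.setdefault(find(p), set()).add(p)
        return list(clusters.values())

    run1 = [{"P1", "P2"}, {"P5"}]
    run2 = [{"P2", "P3"}, {"P5", "P6"}]
    print(global_groups([run1, run2]))   # {P1,P2,P3} and {P5,P6}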
An MS-DOS-based program for analyzing plutonium gamma-ray spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruhter, W.D.; Buckley, W.M.
1989-09-07
A plutonium gamma-ray analysis system that operates on MS-DOS-based computers has been developed for the International Atomic Energy Agency (IAEA) to perform in-field analysis of plutonium gamma-ray spectra for plutonium isotopics. The program, titled IAEAPU, consists of three separate applications: a data-transfer application for transferring spectral data from a CICERO multichannel analyzer to a binary data file; a data-analysis application for analyzing plutonium gamma-ray spectra for plutonium isotopic ratios and weight percents of total plutonium; and a data-quality assurance application for checking spectral data for proper data-acquisition setup and performance. Volume 3 contains the software listings for these applications.
QCD Coupling from a Nonperturbative Determination of the Three-Flavor Λ Parameter
Bruno, Mattia; Brida, Mattia Dalla; Fritzsch, Patrick; ...
2017-09-08
We present a lattice determination of the Λ parameter in three-flavor QCD and the strong coupling at the Z pole mass. Computing the nonperturbative running of the coupling in the range from 0.2 to 70 GeV, and using experimental input values for the masses and decay constants of the pion and the kaon, we obtain $\Lambda^{(3)}_{\overline{\mathrm{MS}}} = 341(12)$ MeV. The nonperturbative running up to very high energies guarantees that systematic effects associated with perturbation theory are well under control. Using the four-loop prediction for $\Lambda^{(5)}_{\overline{\mathrm{MS}}}/\Lambda^{(3)}_{\overline{\mathrm{MS}}}$ yields $\alpha^{(5)}_{\overline{\mathrm{MS}}}(m_Z) = 0.11852(84)$.
NASA Technical Reports Server (NTRS)
Carlson, H. W.
1994-01-01
This code was developed to aid design engineers in the selection and evaluation of aerodynamically efficient wing-canard and wing-horizontal-tail configurations that may employ simple hinged-flap systems. Rapid estimates of the longitudinal aerodynamic characteristics of conceptual airplane lifting surface arrangements are provided. The method is particularly well suited to configurations which, because of high speed flight requirements, must employ thin wings with highly swept leading edges. The code is applicable to wings with either sharp or rounded leading edges. The code provides theoretical pressure distributions over the wing, the canard or horizontal tail, and the deflected flap surfaces as well as estimates of the wing lift, drag, and pitching moments which account for attainable leading edge thrust and leading edge separation vortex forces. The wing planform information is specified by a series of leading edge and trailing edge breakpoints for a right hand wing panel. Up to 21 pairs of coordinates may be used to describe both the leading edge and the trailing edge. The code has been written to accommodate 2000 right hand panel elements, but can easily be modified to accommodate a larger or smaller number of elements depending on the capacity of the target computer platform. The code provides solutions for wing surfaces composed of all possible combinations of leading edge and trailing edge flap settings provided by the original deflection multipliers and by the flap deflection multipliers. Up to 25 pairs of leading edge and trailing edge flap deflection schedules may thus be treated simultaneously. The code also provides for an improved accounting of hinge-line singularities in determination of wing forces and moments. To determine lifting surface perturbation velocity distributions, the code provides for a maximum of 70 iterations. The program is constructed so that successive runs may be made with a given code entry. To make additional runs, it is necessary only to add an identification record and the namelist data that are to be changed from the previous run. This code was originally developed in 1989 in FORTRAN V on a CDC 6000 computer system, and was later ported to an MS-DOS environment. Both versions are available from COSMIC. There are only a few differences between the PC version (LAR-14458) and CDC version (LAR-14178) of AERO2S distributed by COSMIC. The CDC version has one main source code file while the PC version has two files which are easier to edit and compile on a PC. The PC version does not require a FORTRAN compiler which supports NAMELIST because a special INPUT subroutine has been added. The CDC version includes two MODIFY decks which can be used to improve the code and prevent the possibility of some infrequently occurring errors while PC-version users will have to make these code changes manually. The PC version includes an executable which was generated with the Ryan McFarland/FORTRAN compiler and requires 253K RAM and an 80x87 math co-processor. Using this executable, the sample case requires about four hours to execute on an 8MHz AT-class microcomputer with a co-processor. The source code conforms to the FORTRAN 77 standard except that it uses variables longer than six characters. With two minor modifications, the PC version should be portable to any computer with a FORTRAN compiler and sufficient memory. The CDC version of AERO2S is available in CDC NOS Internal format on a 9-track 1600 BPI magnetic tape. 
The PC version is available on a set of two 5.25 inch 360K MS-DOS format diskettes. IBM AT is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. CDC is a registered trademark of Control Data Corporation. NOS is a trademark of Control Data Corporation.
NASA Technical Reports Server (NTRS)
Darden, C. M.
1994-01-01
This code was developed to aid design engineers in the selection and evaluation of aerodynamically efficient wing-canard and wing-horizontal-tail configurations that may employ simple hinged-flap systems. Rapid estimates of the longitudinal aerodynamic characteristics of conceptual airplane lifting surface arrangements are provided. The method is particularly well suited to configurations which, because of high speed flight requirements, must employ thin wings with highly swept leading edges. The code is applicable to wings with either sharp or rounded leading edges. The code provides theoretical pressure distributions over the wing, the canard or horizontal tail, and the deflected flap surfaces as well as estimates of the wing lift, drag, and pitching moments which account for attainable leading edge thrust and leading edge separation vortex forces. The wing planform information is specified by a series of leading edge and trailing edge breakpoints for a right hand wing panel. Up to 21 pairs of coordinates may be used to describe both the leading edge and the trailing edge. The code has been written to accommodate 2000 right hand panel elements, but can easily be modified to accommodate a larger or smaller number of elements depending on the capacity of the target computer platform. The code provides solutions for wing surfaces composed of all possible combinations of leading edge and trailing edge flap settings provided by the original deflection multipliers and by the flap deflection multipliers. Up to 25 pairs of leading edge and trailing edge flap deflection schedules may thus be treated simultaneously. The code also provides for an improved accounting of hinge-line singularities in determination of wing forces and moments. To determine lifting surface perturbation velocity distributions, the code provides for a maximum of 70 iterations. The program is constructed so that successive runs may be made with a given code entry. To make additional runs, it is necessary only to add an identification record and the namelist data that are to be changed from the previous run. This code was originally developed in 1989 in FORTRAN V on a CDC 6000 computer system, and was later ported to an MS-DOS environment. Both versions are available from COSMIC. There are only a few differences between the PC version (LAR-14458) and CDC version (LAR-14178) of AERO2S distributed by COSMIC. The CDC version has one main source code file while the PC version has two files which are easier to edit and compile on a PC. The PC version does not require a FORTRAN compiler which supports NAMELIST because a special INPUT subroutine has been added. The CDC version includes two MODIFY decks which can be used to improve the code and prevent the possibility of some infrequently occurring errors while PC-version users will have to make these code changes manually. The PC version includes an executable which was generated with the Ryan McFarland/FORTRAN compiler and requires 253K RAM and an 80x87 math co-processor. Using this executable, the sample case requires about four hours to execute on an 8MHz AT-class microcomputer with a co-processor. The source code conforms to the FORTRAN 77 standard except that it uses variables longer than six characters. With two minor modifications, the PC version should be portable to any computer with a FORTRAN compiler and sufficient memory. The CDC version of AERO2S is available in CDC NOS Internal format on a 9-track 1600 BPI magnetic tape. 
The PC version is available on a set of two 5.25 inch 360K MS-DOS format diskettes. IBM AT is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. CDC is a registered trademark of Control Data Corporation. NOS is a trademark of Control Data Corporation.
Newsletter for Asian and Middle Eastern Languages on Computer, Volume 1, Numbers 3 & 4.
ERIC Educational Resources Information Center
Meadow, Anthony, Ed.
1986-01-01
Volume 1, numbers 3 and 4, of the newsletter on the use of non-Western languages with computers contains the following articles: "Reversing the Screen under MS/PC-DOS" (Dan Brink); "Comments on Diacritics Using Wordstar, etc. and CP/M Software for Non-Western Languages" (Michael Broschat); "Carving Tibetan in Silicon: A…
The K-12 Hardware Industry: A Heated Race that Shows No Sign of Letting Up.
ERIC Educational Resources Information Center
McCarthy, Robert
1989-01-01
This overview of the computer industry vendors that supply microcomputer hardware to educators for use in kindergarten through high school focuses on Apple, Tandy, Commodore, and IBM. The use of MS-DOS versus the operating system used in Apple computers is discussed, and pricing and service issues are raised. (LRW)
John R. Mills
1989-01-01
The timber resource inventory model (TRIM) has been adapted to run on personal computers. The personal computer version of TRIM (PC-TRIM) is more widely used than its mainframe parent. Errors that existed in previous versions of TRIM have been corrected. Information is presented to help users with program input and output management in the DOS environment, to...
RighTime: A real time clock correcting program for MS-DOS-based computer systems
NASA Technical Reports Server (NTRS)
Becker, G. Thomas
1993-01-01
A computer program is described which effectively eliminates the shortcomings of the DOS system clock in PC/AT-class computers. RighTime is a small, sophisticated memory-resident program that automatically corrects both the DOS system clock and the hardware 'CMOS' real time clock (RTC) in real time. RighTime learns what corrections are required without operator interaction beyond the occasional accurate time set. Both warm (power on) and cool (power off) errors are corrected, usually yielding better than one part per million accuracy in the typical desktop computer with no additional hardware, and RighTime increases the system clock resolution from approximately 0.0549 second to 0.01 second. Program tools are also available which allow visualization of RighTime's actions, verification of its performance, and display of its history log, and which provide data for graphing of the system clock behavior. The program has found application in a wide variety of industries, including astronomy, satellite tracking, communications, broadcasting, transportation, public utilities, manufacturing, medicine, and the military.
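The core correction idea can be illustrated simply: estimate the clock's drift rate from two accurate time sets, then rescale subsequent readings. The hedged Python sketch below shows only that linear-drift correction; RighTime itself is a DOS TSR that also disciplines the CMOS RTC and handles power-off error, and the class below is invented for the example.

    class DriftCorrector:
        def __init__(self, system_then, true_then, system_now, true_now):
            # Seconds of error accumulated per second of true time,
            # learned from two accurate time sets.
            self.rate = ((system_now - true_now)
                         - (system_then - true_then)) / (true_now - true_then)
            self.system_now, self.true_now = system_now, true_now

        def corrected(self, system_time):
            # Rescale elapsed system time by the learned drift rate.
            elapsed = system_time - self.system_now
            return self.true_now + elapsed / (1.0 + self.rate)

    # A clock that gained 2 s over a day keeps being corrected back.
    c = DriftCorrector(0.0, 0.0, 86402.0, 86400.0)
    print(round(c.corrected(172804.0), 3))   # ~172800.0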
LABORATORY PROCESS CONTROLLER USING NATURAL LANGUAGE COMMANDS FROM A PERSONAL COMPUTER
NASA Technical Reports Server (NTRS)
Will, H.
1994-01-01
The complex environment of the typical research laboratory requires flexible process control. This program provides natural language process control from an IBM PC or compatible machine. Process control schedules sometimes require frequent changes, even several times per day; these changes may include adding, deleting, and rearranging steps in a process. This program sets up a process control system that can either run without an operator or be run by workers with limited programming skills. The software system includes three programs. Two of the programs, written in FORTRAN 77, record data and control research processes. The third program, written in Pascal, generates the FORTRAN subroutines used by the other two programs to identify the user commands with the user-written device drivers. The software system also includes an input data set which allows the user to define the user commands which are to be executed by the computer. To set the system up, the operator writes device-driver routines for all of the controlled devices. Once set up, the system requires only an input file containing natural language command lines which tell the system what to do and when to do it. The operator can make up custom commands for operating and taking data from external research equipment at any time of the day or night without being in attendance. This process control system requires a personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. The program requires a FORTRAN 77 compiler and user-written device drivers. This program was developed in 1989 and has a memory requirement of about 62 Kbytes.
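The binding of user commands to user-written device drivers is essentially a dispatch table keyed on command words. A toy Python sketch of the concept (the command names and driver routines here are invented; the real system generates FORTRAN glue code instead):

    # Toy natural-language dispatcher: each command word maps to a
    # user-supplied device-driver routine; a schedule is one command per line.

    def heater_on(args):
        print("heater set to", *args)

    def read_probe(args):
        print("reading", *args)

    DRIVERS = {"HEAT": heater_on, "READ": read_probe}   # invented commands

    def run_schedule(lines):
        for line in lines:
            word, *args = line.split()
            DRIVERS[word.upper()](args)     # dispatch to the device driver

    run_schedule(["HEAT 350", "READ thermocouple1"])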
COATING ALTERNATIVES GUIDE (CAGE) USER'S GUIDE
The guide provides instructions for using the Coating Alternatives GuidE (CAGE) software program, version 1.0. It assumes that the user is familiar with the fundamentals of operating an IBM-compatible personal computer (PC) under the Microsoft disk operating system (MS-DOS). CAGE...
SAGE 2.0 SOLVENT ALTERNATIVES GUIDE - USER'S GUIDE
The guide provides instruction for using the SAGE (Solvent Alternatives Guide) software system, version 2.0. It assumes that the user is familiar with the fundamentals of operating a personal computer under the Microsoft disk operating system (MS-DOS). SAGE recommends solvent repl...
Welsch, Goetz H; Laqmani, Azien; Henes, Frank O; Kaul, Michael G; Schoen, Gerhard; Adam, Gerhard; Regier, Marc
2016-01-01
Objective: To quantitatively assess the immediate effect of long-distance running on T2 and T2* relaxation times of the articular cartilage of the knee at 3.0 T in young healthy adults. Methods: 30 healthy male adults (18–31 years) who perform sports at an amateur level underwent an initial MRI at 3.0 T with T2 weighted [16 echo times (TEs): 9.7–154.6 ms] and T2* weighted (24 TEs: 4.6–53.6 ms) relaxation measurements. Thereafter, all participants performed a 45-min run. After the run, all individuals were immediately re-examined. Data sets were post-processed using dedicated software (ImageJ; National Institutes of Health, Bethesda, MD). 22 regions of interest were manually drawn in segmented areas of the femoral, tibial and patellar cartilage. For statistical evaluation, Pearson product–moment correlation coefficients and confidence intervals were computed. Results: Mean initial values were 35.7 ms for T2 and 25.1 ms for T2*. After the run, a significant decrease in the mean T2 and T2* relaxation times was observed for all segments in all participants. A mean decrease in relaxation time of 4.6 ms (±3.6 ms) for T2 and 3.6 ms (±5.1 ms) for T2* was observed after running. Conclusion: A significant decrease could be observed in all cartilage segments for both biomarkers. Both quantitative techniques, T2 and T2*, seem to be valuable parameters in the evaluation of immediate changes in the cartilage ultrastructure after running. Advances in knowledge: This is the first direct comparison of immediate changes in T2 and T2* relaxation times after running in healthy adults. PMID:27336705
Behzadi, Cyrus; Welsch, Goetz H; Laqmani, Azien; Henes, Frank O; Kaul, Michael G; Schoen, Gerhard; Adam, Gerhard; Regier, Marc
2016-08-01
To quantitatively assess the immediate effect of long-distance running on T2 and T2* relaxation times of the articular cartilage of the knee at 3.0 T in young healthy adults. 30 healthy male adults (18-31 years) who perform sports at an amateur level underwent an initial MRI at 3.0 T with T2 weighted [16 echo times (TEs): 9.7-154.6 ms] and T2* weighted (24 TEs: 4.6-53.6 ms) relaxation measurements. Thereafter, all participants performed a 45-min run. After the run, all individuals were immediately re-examined. Data sets were post-processed using dedicated software (ImageJ; National Institutes of Health, Bethesda, MD). 22 regions of interest were manually drawn in segmented areas of the femoral, tibial and patellar cartilage. For statistical evaluation, Pearson product-moment correlation coefficients and confidence intervals were computed. Mean initial values were 35.7 ms for T2 and 25.1 ms for T2*. After the run, a significant decrease in the mean T2 and T2* relaxation times was observed for all segments in all participants. A mean decrease in relaxation time of 4.6 ms (±3.6 ms) for T2 and 3.6 ms (±5.1 ms) for T2* was observed after running. A significant decrease could be observed in all cartilage segments for both biomarkers. Both quantitative techniques, T2 and T2*, seem to be valuable parameters in the evaluation of immediate changes in the cartilage ultrastructure after running. This is the first direct comparison of immediate changes in T2 and T2* relaxation times after running in healthy adults.
Teaching Bibliometric Analysis and MS/DOS Commands.
ERIC Educational Resources Information Center
Dou, Henri; And Others
1988-01-01
Outlines the steps involved in bibliometric studies, and demonstrates the ability to execute simple studies on microcomputers by downloading files using only the capability of MS/DOS. Detailed illustrations of the MS/DOS commands used are provided. (eight references) (CLB)
NASA Technical Reports Server (NTRS)
Ricks, W. R.
1994-01-01
PWC is used for pair-wise comparisons in both psychometric scaling techniques and cognitive research. The cognitive tasks and processes of a human operator of automated systems are now prominent considerations when defining system requirements. Recent developments in cognitive research have emphasized the potential utility of psychometric scaling techniques, such as multidimensional scaling, for representing human knowledge and cognitive processing structures. Such techniques involve collecting measurements of stimulus-relatedness from human observers. When data are analyzed using this scaling approach, an n-dimensional representation of the stimuli is produced. This resulting representation is said to describe the subject's cognitive or perceptual view of the stimuli. PWC applies one of the many techniques commonly used to acquire the data necessary for these types of analyses: pair-wise comparisons. PWC administers the task, collects the data from the test subject, and formats the data for analysis. It therefore addresses many of the limitations of the traditional "pen-and-paper" methods. Automating the data collection process prevents subjects from going back to check previous responses, eliminates the possibility of erroneous data transfer, and eases the burden of administering and taking the test. By using randomization, PWC ensures that subjects see the stimuli pairs presented in random order, and that each subject sees pairs in a different random order. PWC is written in Turbo Pascal v6.0 for IBM PC compatible computers running MS-DOS. The program has also been successfully compiled with Turbo Pascal v7.0. A sample executable is provided. PWC requires 30K of RAM for execution. The standard distribution medium for this program is a 5.25 inch 360K MS-DOS format diskette. Two electronic versions of the documentation are included on the diskette: one in ASCII format and one in MS Word for Windows format. PWC was developed in 1993.
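The randomization PWC performs is simple to reproduce: enumerate every unordered pair of stimuli, then shuffle the list independently for each subject. A small Python sketch of that pairing logic (the original program is Turbo Pascal; this illustrates the idea only):

    import itertools, random

    def pair_order(stimuli, seed):
        """All unordered stimulus pairs, in a per-subject random order."""
        pairs = list(itertools.combinations(stimuli, 2))
        random.Random(seed).shuffle(pairs)   # use a different seed per subject
        return pairs

    print(pair_order(["A", "B", "C", "D"], seed=1))   # the 6 pairs, shuffled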
Potential-Field Geophysical Software for the PC
1995-01-01
The computer programs of the Potential-Field Software Package run under the DOS operating system on IBM-compatible personal computers. They are used for the processing, display, and interpretation of potential-field geophysical data (gravity- and magnetic-field measurements) and other data sets that can be represented as grids or profiles. These programs have been developed on a variety of computer systems over a period of 25 years by the U.S. Geological Survey.
Distance Education at Memorial University of Newfoundland.
ERIC Educational Resources Information Center
Keough, Erin
This presentation describes the distance education program at Memorial University (Newfoundland), which operates the Telemedicine Centre, including an audiographic teleconference network that uses a combination of hardware and software to turn an MS-DOS computer into an interactive long distance blackboard. Topics covered by the presentation…
QCD Coupling from a Nonperturbative Determination of the Three-Flavor Λ Parameter.
Bruno, Mattia; Brida, Mattia Dalla; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Schaefer, Stefan; Simma, Hubert; Sint, Stefan; Sommer, Rainer
2017-09-08
We present a lattice determination of the Λ parameter in three-flavor QCD and the strong coupling at the Z pole mass. Computing the nonperturbative running of the coupling in the range from 0.2 to 70 GeV, and using experimental input values for the masses and decay constants of the pion and the kaon, we obtain $\Lambda^{(3)}_{\overline{\mathrm{MS}}} = 341(12)$ MeV. The nonperturbative running up to very high energies guarantees that systematic effects associated with perturbation theory are well under control. Using the four-loop prediction for $\Lambda^{(5)}_{\overline{\mathrm{MS}}}/\Lambda^{(3)}_{\overline{\mathrm{MS}}}$ yields $\alpha^{(5)}_{\overline{\mathrm{MS}}}(m_Z) = 0.11852(84)$.
COATING ALTERNATIVES GUIDE (CAGE) USER'S GUIDE (EPA/600/R-01/030)
The guide provides instructions for using the Coating Alternatives GuidE (CAGE) software program, version 1.0. It assumes that the user is familiar with the fundamentals of operating an IBM-compatible personal computer (PC) under the Microsoft disk operating system (MS-DOS). CAGE...
NETS - A NEURAL NETWORK DEVELOPMENT TOOL, VERSION 3.0 (MACINTOSH VERSION)
NASA Technical Reports Server (NTRS)
Phillips, T. A.
1994-01-01
NETS, A Tool for the Development and Evaluation of Neural Networks, provides a simulation of Neural Network algorithms plus an environment for developing such algorithms. Neural Networks are a class of systems modeled after the human brain. Artificial Neural Networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to brain neurons. Problems which involve pattern matching readily fit the class of problems which NETS is designed to solve. NETS uses the back propagation learning method for all of the networks which it creates. The nodes of a network are usually grouped together into clumps called layers. Generally, a network will have an input layer through which the various environment stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to some features of the problem being solved. Other layers, which form intermediate stops between the input and output layers, are called hidden layers. NETS allows the user to customize the patterns of connections between layers of a network. NETS also provides features for saving the weight values of a network during the learning process, which allows for more precise control over the learning process. NETS is an interpreter. Its method of execution is the familiar "read-evaluate-print" loop found in interpreted languages such as BASIC and LISP. The user is presented with a prompt which is the simulator's way of asking for input. After a command is issued, NETS will attempt to evaluate the command, which may produce more prompts requesting specific information or an error if the command is not understood. The typical process involved when using NETS consists of translating the problem into a format which uses input/output pairs, designing a network configuration for the problem, and finally training the network with input/output pairs until an acceptable error is reached. NETS allows the user to generate C code to implement the network loaded into the system. This permits the placement of networks as components, or subroutines, in other systems. In short, once a network performs satisfactorily, the Generate C Code option provides the means for creating a program separate from NETS to run the network. Other features: files may be stored in binary or ASCII format; multiple input propagation is permitted; bias values may be included; capability to scale data without writing scaling code; quick interactive testing of network from the main menu; and several options that allow the user to manipulate learning efficiency. NETS is written in ANSI standard C language to be machine independent. The Macintosh version (MSC-22108) includes code for both a graphical user interface version and a command line interface version. The machine independent version (MSC-21588) only includes code for the command line interface version of NETS 3.0. The Macintosh version requires a Macintosh II series computer and has been successfully implemented under System 7. Four executables are included on these diskettes, two for floating point operations and two for integer arithmetic. It requires Think C 5.0 to compile. A minimum of 1Mb of RAM is required for execution. Sample input files and executables for both the command line version and the Macintosh user interface version are provided on the distribution medium. The Macintosh version is available on a set of three 3.5 inch 800K Macintosh format diskettes. 
The machine independent version has been successfully implemented on an IBM PC series compatible running MS-DOS, a DEC VAX running VMS, a SunIPC running SunOS, and a CRAY Y-MP running UNICOS. Two executables for the IBM PC version are included on the MS-DOS distribution media, one compiled for floating point operations and one for integer arithmetic. The machine independent version is available on a set of three 5.25 inch 360K MS-DOS format diskettes (standard distribution medium) or a .25 inch streaming magnetic tape cartridge in UNIX tar format. NETS was developed in 1989 and updated in 1992. IBM PC is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. SunIPC and SunOS are trademarks of Sun Microsystems, Inc. CRAY Y-MP and UNICOS are trademarks of Cray Research, Inc.
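For readers unfamiliar with the method, back propagation, the only learning scheme NETS uses, amounts to a forward pass followed by gradient-descent steps on the connection weights. A generic textbook sketch in Python with NumPy (not NETS source code), training the classic XOR input/output pairs:

    import numpy as np

    def sigmoid(x):                       # classic back-propagation activation
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # input patterns
    y = np.array([[0], [1], [1], [0]], float)              # target outputs (XOR)
    Xb = np.hstack([X, np.ones((4, 1))])  # constant column acts as a bias input

    W1 = rng.normal(size=(3, 4))          # input(+bias) -> 4 hidden nodes
    W2 = rng.normal(size=(5, 1))          # hidden(+bias) -> 1 output node
    for _ in range(20000):
        h = sigmoid(Xb @ W1)                        # forward pass, hidden layer
        hb = np.hstack([h, np.ones((4, 1))])
        out = sigmoid(hb @ W2)                      # forward pass, output layer
        d_out = (out - y) * out * (1 - out)         # output error gradient
        d_h = (d_out @ W2[:4].T) * h * (1 - h)      # back-propagated gradient
        W2 -= 0.5 * hb.T @ d_out                    # gradient-descent updates
        W1 -= 0.5 * Xb.T @ d_h
    print(out.round(2))                   # should approach [[0], [1], [1], [0]]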
NETS - A NEURAL NETWORK DEVELOPMENT TOOL, VERSION 3.0 (MACHINE INDEPENDENT VERSION)
NASA Technical Reports Server (NTRS)
Baffes, P. T.
1994-01-01
NETS, A Tool for the Development and Evaluation of Neural Networks, provides a simulation of Neural Network algorithms plus an environment for developing such algorithms. Neural Networks are a class of systems modeled after the human brain. Artificial Neural Networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to brain neurons. Problems which involve pattern matching readily fit the class of problems which NETS is designed to solve. NETS uses the back propagation learning method for all of the networks which it creates. The nodes of a network are usually grouped together into clumps called layers. Generally, a network will have an input layer through which the various environment stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to some features of the problem being solved. Other layers, which form intermediate stops between the input and output layers, are called hidden layers. NETS allows the user to customize the patterns of connections between layers of a network. NETS also provides features for saving the weight values of a network during the learning process, which allows for more precise control over the learning process. NETS is an interpreter. Its method of execution is the familiar "read-evaluate-print" loop found in interpreted languages such as BASIC and LISP. The user is presented with a prompt which is the simulator's way of asking for input. After a command is issued, NETS will attempt to evaluate the command, which may produce more prompts requesting specific information or an error if the command is not understood. The typical process involved when using NETS consists of translating the problem into a format which uses input/output pairs, designing a network configuration for the problem, and finally training the network with input/output pairs until an acceptable error is reached. NETS allows the user to generate C code to implement the network loaded into the system. This permits the placement of networks as components, or subroutines, in other systems. In short, once a network performs satisfactorily, the Generate C Code option provides the means for creating a program separate from NETS to run the network. Other features: files may be stored in binary or ASCII format; multiple input propagation is permitted; bias values may be included; capability to scale data without writing scaling code; quick interactive testing of network from the main menu; and several options that allow the user to manipulate learning efficiency. NETS is written in ANSI standard C language to be machine independent. The Macintosh version (MSC-22108) includes code for both a graphical user interface version and a command line interface version. The machine independent version (MSC-21588) only includes code for the command line interface version of NETS 3.0. The Macintosh version requires a Macintosh II series computer and has been successfully implemented under System 7. Four executables are included on these diskettes, two for floating point operations and two for integer arithmetic. It requires Think C 5.0 to compile. A minimum of 1Mb of RAM is required for execution. Sample input files and executables for both the command line version and the Macintosh user interface version are provided on the distribution medium. The Macintosh version is available on a set of three 3.5 inch 800K Macintosh format diskettes. 
The machine independent version has been successfully implemented on an IBM PC series compatible running MS-DOS, a DEC VAX running VMS, a SunIPC running SunOS, and a CRAY Y-MP running UNICOS. Two executables for the IBM PC version are included on the MS-DOS distribution media, one compiled for floating point operations and one for integer arithmetic. The machine independent version is available on a set of three 5.25 inch 360K MS-DOS format diskettes (standard distribution medium) or a .25 inch streaming magnetic tape cartridge in UNIX tar format. NETS was developed in 1989 and updated in 1992. IBM PC is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. SunIPC and SunOS are trademarks of Sun Microsystems, Inc. CRAY Y-MP and UNICOS are trademarks of Cray Research, Inc.
Response to "Comparison and Evaluation of Clustering Algorithms for Tandem Mass Spectra".
Griss, Johannes; Perez-Riverol, Yasset; The, Matthew; Käll, Lukas; Vizcaíno, Juan Antonio
2018-05-04
In the recent benchmarking article entitled "Comparison and Evaluation of Clustering Algorithms for Tandem Mass Spectra", Rieder et al. compared several different approaches to cluster MS/MS spectra. While we certainly recognize the value of the manuscript, here we report some shortcomings detected in the original analyses. First, for most analyses, the authors clustered only single MS/MS runs. In one of the reported analyses, three MS/MS runs were processed together, which already led to computational performance issues in many of the tested approaches. This fact highlights the difficulties of using many of the tested algorithms on the average proteomics data sets produced nowadays. Second, the authors only processed identified spectra when merging MS runs. Thereby, all unidentified spectra that are of lower quality were already removed from the data set and could not influence the clustering results. Next, we found that the authors did not analyze the effect of chimeric spectra on the clustering results. In our analysis, we found that 3% of the spectra in the used data sets were chimeric, and this had marked effects on the behavior of the different clustering algorithms tested. Finally, the authors' choice to evaluate the MS-Cluster and spectra-cluster algorithms using a precursor tolerance of 5 Da for high-resolution Orbitrap data only was, in our opinion, not adequate to assess the performance of MS/MS clustering approaches.
Tempest: Accelerated MS/MS database search software for heterogeneous computing platforms
Adamo, Mark E.; Gerber, Scott A.
2017-01-01
MS/MS database search algorithms derive a set of candidate peptide sequences from in silico digest of a protein sequence database, and compute theoretical fragmentation patterns to match these candidates against observed MS/MS spectra. The original Tempest publication described these operations mapped to a CPU-GPU model, in which the CPU generates peptide candidates that are asynchronously sent to a discrete GPU to be scored against experimental spectra in parallel (Milloy et al., 2012). The current version of Tempest expands this model, incorporating OpenCL to offer seamless parallelization across multicore CPUs, GPUs, integrated graphics chips, and general-purpose coprocessors. Three protocols describe how to configure and run a Tempest search, including discussion of how to leverage Tempest's unique feature set to produce optimal results. PMID:27603022
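The scoring step described above can be reduced to its simplest form: counting theoretical fragment m/z values matched by observed peaks within a tolerance. A deliberately crude sketch in Python (Tempest's actual scoring and its OpenCL kernels are far more elaborate):

    import bisect

    def shared_peaks(theoretical_mz, observed_mz, tol=0.5):
        """Count theoretical fragment m/z values that some observed peak
        matches within +/- tol (a crude stand-in for a real search score)."""
        observed = sorted(observed_mz)
        score = 0
        for mz in theoretical_mz:
            i = bisect.bisect_left(observed, mz - tol)
            if i < len(observed) and observed[i] <= mz + tol:
                score += 1
        return score

    print(shared_peaks([175.1, 304.2, 433.2], [175.3, 433.0, 600.9]))  # 2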
ms2: A molecular simulation tool for thermodynamic properties
NASA Astrophysics Data System (ADS)
Deublein, Stephan; Eckl, Bernhard; Stoll, Jürgen; Lishchuk, Sergey V.; Guevara-Carrion, Gabriela; Glass, Colin W.; Merker, Thorsten; Bernreuther, Martin; Hasse, Hans; Vrabec, Jadran
2011-11-01
This work presents the molecular simulation program ms2 that is designed for the calculation of thermodynamic properties of bulk fluids in equilibrium consisting of small electro-neutral molecules. ms2 features the two main molecular simulation techniques, molecular dynamics (MD) and Monte-Carlo. It supports the calculation of vapor-liquid equilibria of pure fluids and multi-component mixtures described by rigid molecular models on the basis of the grand equilibrium method. Furthermore, it is capable of sampling various classical ensembles and yields numerous thermodynamic properties. To evaluate the chemical potential, Widom's test molecule method and gradual insertion are implemented. Transport properties are determined by equilibrium MD simulations following the Green-Kubo formalism. ms2 is designed to meet the requirements of academia and industry, particularly achieving short response times and straightforward handling. It is written in Fortran90 and optimized for fast execution on a broad range of computer architectures, spanning from single-processor PCs over PC-clusters and vector computers to high-end parallel machines. The standard Message Passing Interface (MPI) is used for parallelization, and ms2 is therefore easily portable to different computing platforms. Feature tools facilitate interaction with the code and interpretation of input and output files. The accuracy and reliability of ms2 has been shown for a large variety of fluids in preceding work.
Program summary:
Program title: ms2
Catalogue identifier: AEJF_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJF_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Special Licence supplied by the authors
No. of lines in distributed program, including test data, etc.: 82 794
No. of bytes in distributed program, including test data, etc.: 793 705
Distribution format: tar.gz
Programming language: Fortran90
Computer: The simulation tool ms2 is usable on a wide variety of platforms, from single-processor machines over PC-clusters and vector computers to vector-parallel architectures. (Tested with Fortran compilers: gfortran, Intel, PathScale, Portland Group and Sun Studio.)
Operating system: Unix/Linux, Windows
Has the code been vectorized or parallelized?: Yes, using the Message Passing Interface (MPI) protocol.
Scalability: Excellent scalability up to 16 processors for molecular dynamics and more than 512 processors for Monte-Carlo simulations.
RAM: ms2 runs on single processors with 512 MB RAM. The memory demand rises with increasing number of processors used per node and increasing number of molecules.
Classification: 7.7, 7.9, 12
External routines: Message Passing Interface (MPI)
Nature of problem: Calculation of application-oriented thermodynamic properties for rigid electro-neutral molecules: vapor-liquid equilibria, thermal and caloric data, as well as transport properties of pure fluids and multi-component mixtures.
Solution method: Molecular dynamics, Monte-Carlo, various classical ensembles, grand equilibrium method, Green-Kubo formalism.
Restrictions: None; the system size is user-defined. Typical problems addressed by ms2 can be solved by simulating systems containing 2000 molecules or less.
Unusual features: Feature tools are available for creating input files, analyzing simulation results and visualizing molecular trajectories.
Additional comments: Sample makefiles for multiple operating platforms are provided.
Documentation: Provided with the installation package and available at http://www.ms-2.de.
Running time: The running time of ms2 depends on the problem set, the system size and the number of processes used in the simulation. Running four processes on a "Nehalem" processor, simulations calculating VLE data take between two and twelve hours; calculating transport properties takes between six and 24 hours.
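As a worked illustration of the Green-Kubo formalism named above, the self-diffusion coefficient follows from the time integral of the velocity autocorrelation function, D = (1/3) ∫ ⟨v(0)·v(t)⟩ dt. A minimal numerical sketch in Python with NumPy (purely illustrative; ms2 itself is Fortran90/MPI):

    import numpy as np

    def self_diffusion(vel, dt):
        """Green-Kubo self-diffusion estimate from stored velocities.

        vel -- array of shape (steps, n_molecules, 3); dt -- sampling interval.
        """
        steps = vel.shape[0] // 2
        vacf = np.array([
            np.mean(np.sum(vel[:steps] * vel[t:t + steps], axis=2))
            for t in range(steps)
        ])                                  # <v(0).v(t)>, averaged over origins
        return np.trapz(vacf, dx=dt) / 3.0  # D = (1/3) integral of the VACF

    vel = np.random.default_rng(0).normal(size=(2000, 10, 3))  # toy trajectory
    print(self_diffusion(vel, dt=0.002))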
ERIC Educational Resources Information Center
Zambon, Franco
This study sought to determine a useful frequency for refreshing students' memories of complex procedures that involved a formal computer language. Students were required to execute the Microsoft Disk Operating System (MS-DOS) commands for "copy," "backup," and "restore." A total of 126 college students enrolled in six…
10 CFR 435.304 - The COSTSAFR Program.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) operates on a micro-computer system that uses the MS DOS operating system and is equipped with an 8087 co... (Version 3.0) also prints out a point system that identifies a wide array of different energy conservation... goal and the point system in the design and procurement procedures so that designers and builders can...
10 CFR 435.304 - The COSTSAFR Program.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) operates on a micro-computer system that uses the MS DOS operating system and is equipped with an 8087 co... (Version 3.0) also prints out a point system that identifies a wide array of different energy conservation... goal and the point system in the design and procurement procedures so that designers and builders can...
10 CFR 435.304 - The COSTSAFR Program.
Code of Federal Regulations, 2014 CFR
2014-01-01
...) operates on a micro-computer system that uses the MS DOS operating system and is equipped with an 8087 co... (Version 3.0) also prints out a point system that identifies a wide array of different energy conservation... goal and the point system in the design and procurement procedures so that designers and builders can...
10 CFR 435.304 - The COSTSAFR Program.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) operates on a micro-computer system that uses the MS DOS operating system and is equipped with an 8087 co... (Version 3.0) also prints out a point system that identifies a wide array of different energy conservation... goal and the point system in the design and procurement procedures so that designers and builders can...
10 CFR 435.304 - The COSTSAFR Program.
Code of Federal Regulations, 2012 CFR
2012-01-01
...) operates on a micro-computer system that uses the MS DOS operating system and is equipped with an 8087 co... (Version 3.0) also prints out a point system that identifies a wide array of different energy conservation... goal and the point system in the design and procurement procedures so that designers and builders can...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, T.
2000-07-01
The Write One, Run Many (WORM) site (worm.csirc.net) is the on-line home of the WORM language and is hosted by the Criticality Safety Information Resource Center (CSIRC) (www.csirc.net). The purpose of this web site is to create an on-line community for WORM users to gather, share, and archive WORM-related information. WORM is an embedded, functional programming language designed to facilitate the creation of input decks for computer codes that take standard ASCII text files as input. A functional programming language is one that emphasizes the evaluation of expressions, rather than execution of commands. The simplest and perhaps most common example of a functional language is a spreadsheet such as Microsoft Excel. The spreadsheet user specifies expressions to be evaluated, while the spreadsheet itself determines the commands to execute, as well as the order of execution/evaluation. WORM functions in a similar fashion and, as a result, is very simple to use and easy to learn. WORM improves the efficiency of today's criticality safety analyst by allowing: (1) input decks for parameter studies to be created quickly and easily; (2) calculations and variables to be embedded into any input deck, thus allowing for meaningful parameter specifications; (3) problems to be specified using any combination of units; and (4) complex mathematically defined models to be created. WORM is completely written in Perl. Running on all variants of UNIX, Windows, MS-DOS, MacOS, and many other operating systems, Perl is one of the most portable programming languages available. As such, WORM works on practically any computer platform.
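The flavor of what such a deck-generation language buys can be imitated with ordinary template expansion. The sketch below, in Python, mimics the concept of a WORM parameter study but is emphatically not WORM syntax (see worm.csirc.net for that):

    # Expand one template input deck over a parameter sweep, evaluating the
    # embedded expressions for each case -- the essence of a WORM-style study.
    template = "sphere radius {r:.2f} cm  volume {v:.3f} cc\n"

    for i in range(3):
        r = 5.0 + 0.5 * i                         # radius parameter study
        deck = template.format(r=r, v=4 / 3 * 3.14159265 * r ** 3)
        with open(f"case{i}.inp", "w") as f:      # one ASCII deck per case
            f.write(deck)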
IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1994-01-01
IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS-DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
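A bare-bones version of the modified exploratory move is easy to state: round the continuous solution to a feasible integer point, then repeatedly step each variable through its unit neighborhood, keeping feasible improvements. A sketch in Python under those assumptions (not the TURBO Pascal source, and omitting the pattern moves and greedy elements of the full method):

    def exploratory_search(x0, objective, feasible):
        """Hooke-and-Jeeves-style exploratory moves restricted to integers."""
        x = [round(v) for v in x0]          # feasible integer starting point
        improved = True
        while improved:
            improved = False
            for i in range(len(x)):
                for step in (1, -1):        # unit neighborhood of variable i
                    trial = x.copy()
                    trial[i] += step
                    if feasible(trial) and objective(trial) > objective(x):
                        x, improved = trial, True
        return x

    # Maximize 3a + 2b subject to a + b <= 7 over nonnegative integers.
    best = exploratory_search(
        [2.4, 3.7],
        objective=lambda v: 3 * v[0] + 2 * v[1],
        feasible=lambda v: min(v) >= 0 and v[0] + v[1] <= 7)
    print(best)   # [3, 4]: a local optimum; the full method's pattern moves
                  # and greedy elements exist to escape such points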
A Management Information System for Allocating, Monitoring and Reviewing Work Assignments.
1986-06-01
This thesis investigated the feasibility of developing a small scale management information system on a micro-computer. The working system was...ORSA journal. The management information system was designed using Ashton-Tate's dBaseIII software. As designed, the system will operate on any...computer operating under Microsoft's Disk Operating System (MS-DOS). The user must provide his own dBaseIII software. A similar management information system could
General-Purpose Ada Software Packages
NASA Technical Reports Server (NTRS)
Klumpp, Allan R.
1991-01-01
Collection of subprograms brings to Ada many features from other programming languages. All generic packages designed to be easily instantiated for types declared in user's facility. Most packages have widespread applicability, although some oriented for avionics applications. All designed to facilitate writing new software in Ada. Written on IBM/AT personal computer running under PC DOS, v.3.1.
Tempest: Accelerated MS/MS Database Search Software for Heterogeneous Computing Platforms.
Adamo, Mark E; Gerber, Scott A
2016-09-07
MS/MS database search algorithms derive a set of candidate peptide sequences from in silico digest of a protein sequence database, and compute theoretical fragmentation patterns to match these candidates against observed MS/MS spectra. The original Tempest publication described these operations mapped to a CPU-GPU model, in which the CPU (central processing unit) generates peptide candidates that are asynchronously sent to a discrete GPU (graphics processing unit) to be scored against experimental spectra in parallel. The current version of Tempest expands this model, incorporating OpenCL to offer seamless parallelization across multicore CPUs, GPUs, integrated graphics chips, and general-purpose coprocessors. Three protocols describe how to configure and run a Tempest search, including discussion of how to leverage Tempest's unique feature set to produce optimal results. © 2016 John Wiley & Sons, Inc.
Operating System For Numerically Controlled Milling Machine
NASA Technical Reports Server (NTRS)
Ray, R. B.
1992-01-01
OPMILL program is operating system for Kearney and Trecker milling machine providing fast easy way to program manufacture of machine parts with IBM-compatible personal computer. Gives machinist "equation plotter" feature, which plots equations that define movements and converts equations to milling-machine-controlling program moving cutter along defined path. System includes tool-manager software handling up to 25 tools and automatically adjusts to account for each tool. Developed on IBM PS/2 computer running DOS 3.3 with 1 MB of random-access memory.
Thermomechanical Multiaxial Fatigue Testing Capability Developed
NASA Technical Reports Server (NTRS)
1996-01-01
Structural components in aeronautical gas turbine engines typically experience multiaxial states of stress under nonisothermal conditions. To estimate the durability of the various components in the engine, one must characterize the cyclic deformation and fatigue behavior of the materials used under thermal and complex mechanical loading conditions. To this end, a testing protocol and associated test control software were developed at the NASA Lewis Research Center for thermomechanical axial-torsional fatigue tests. These tests are to be performed on thin-walled, tubular specimens fabricated from the cobalt-based superalloy Haynes 188. The software is written in C and runs on an MS-DOS based microcomputer.
28 CFR 51.20 - Form of submissions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... megabyte MS-DOS formatted diskettes; 5 1/4″ 1.2 megabyte MS-DOS formatted floppy disks; nine-track tape... provided in hard copy. (c) All magnetic media shall be clearly labeled with the following information: (1... a disk operating system (DOS) file, it shall be formatted in a standard American Standard Code for...
An Ada Linear-Algebra Software Package Modeled After HAL/S
NASA Technical Reports Server (NTRS)
Klumpp, Allan R.; Lawson, Charles L.
1990-01-01
New avionics software written more easily. Software package extends Ada programming language to include linear-algebra capabilities similar to those of HAL/S programming language. Designed for such avionics applications as Space Station flight software. In addition to built-in functions of HAL/S, package incorporates quaternion functions used in Space Shuttle and Galileo projects and routines from LINPACK solving systems of equations involving general square matrices. Contains two generic programs: one for floating-point computations and one for integer computations. Written on IBM/AT personal computer running under PC DOS, v.3.1.
KERNELHR: A program for estimating animal home ranges
Seaman, D.E.; Griffith, B.; Powell, R.A.
1998-01-01
Kernel methods are state of the art for estimating animal home-range area and utilization distribution (UD). The KERNELHR program was developed to provide researchers and managers a tool to implement this extremely flexible set of methods with many variants. KERNELHR runs interactively or from the command line on any personal computer (PC) running DOS. KERNELHR provides output of fixed and adaptive kernel home-range estimates, as well as density values in a format suitable for in-depth statistical and spatial analyses. An additional package of programs creates contour files for plotting in geographic information systems (GIS) and estimates core areas of ranges.
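Conceptually, a fixed-kernel utilization distribution is a sum of smooth bumps centered on the relocation points, evaluated over a grid. A minimal fixed-bandwidth Gaussian sketch in Python (KERNELHR itself adds adaptive kernels, GIS contour output, and core-area estimation):

    import numpy as np

    def fixed_kernel_ud(points, grid_x, grid_y, h):
        """Fixed-kernel utilization distribution on a grid.

        points -- (n, 2) array of relocations; h -- smoothing bandwidth.
        """
        gx, gy = np.meshgrid(grid_x, grid_y)
        dens = np.zeros_like(gx)
        for x, y in points:
            r2 = (gx - x) ** 2 + (gy - y) ** 2
            dens += np.exp(-r2 / (2 * h ** 2))      # bivariate normal kernel
        dens /= 2 * np.pi * h ** 2 * len(points)    # normalize to a density
        return dens

    pts = np.random.default_rng(1).normal(0, 50, size=(100, 2))  # toy fixes
    ud = fixed_kernel_ud(pts, np.linspace(-200, 200, 81),
                         np.linspace(-200, 200, 81), h=25.0)
    print(ud.sum() * 5.0 * 5.0)   # grid cells are 5 x 5; total mass near 1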
A FORTRAN program for multivariate survival analysis on the personal computer.
Mulder, P G
1988-01-01
In this paper a FORTRAN program is presented for multivariate survival or life table regression analysis in a competing-risks situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include the variable time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained from the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.
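The model and iteration can be written compactly. With failure indicator d_i, exposure time t_i, and rate λ_i = exp(x_i'β), the log likelihood is ℓ(β) = Σ_i [d_i x_i'β − t_i exp(x_i'β)], whose gradient and Hessian have closed forms. A generic Python sketch of the Newton-Raphson fit (not the original FORTRAN program; the toy data are uncensored):

    import numpy as np

    def fit_loglinear_rate(X, t, d, iters=25):
        """Newton-Raphson MLE for failure rate lambda_i = exp(X_i . beta)."""
        beta = np.zeros(X.shape[1])
        for _ in range(iters):
            lam = np.exp(X @ beta)
            score = X.T @ (d - t * lam)           # gradient of log likelihood
            hess = -(X.T * (t * lam)) @ X         # Hessian, negative definite
            beta -= np.linalg.solve(hess, score)  # Newton step
        return beta

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(500), rng.normal(size=500)])
    t = rng.exponential(1 / np.exp(X @ np.array([-1.0, 0.7])))  # toy times
    print(fit_loglinear_rate(X, t, d=np.ones(500)))             # near [-1.0, 0.7]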
Computer virus information update CIAC-2301
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orvis, W.J.
1994-01-15
While CIAC periodically issues bulletins about specific computer viruses, these bulletins do not cover all the computer viruses that affect desktop computers. The purpose of this document is to identify most of the known viruses for the MS-DOS and Macintosh platforms and give an overview of the effects of each virus. The authors also include information on some Windows, Atari, and Amiga viruses. This document is revised periodically as new virus information becomes available. This document replaces all earlier versions of the CIAC Computer Virus Information Update. The date on the front cover indicates the date on which the information in this document was extracted from CIAC's virus database.
A demonstrative model of a lunar base simulation on a personal computer
NASA Technical Reports Server (NTRS)
1985-01-01
The initial demonstration model of a lunar base simulation is described. This initial model was developed at the personal computer level to demonstrate feasibility and technique before proceeding to a larger computer-based model. Lotus Symphony Version 1.1 software was used to build the demonstration model on a personal computer with an MS-DOS operating system. The personal computer-based model determined the applicability of lunar base modeling techniques developed at an LSPI/NASA workshop. In addition, the personal computer-based demonstration model defined a modeling structure that could be employed on a larger, more comprehensive VAX-based lunar base simulation. Refinement of this personal computer model and the development of a VAX-based model is planned in the near future.
Functional programming interpreter. M. S. thesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robison, A.D.
1987-03-01
Functional Programming (FP) [BAC87] is an alternative to conventional imperative programming languages. This thesis describes an FP interpreter implementation. Superficially, FP appears to be a simple but very inefficient language. Its simplicity, however, allows it to be interpreted quickly. Much of the inefficiency can be removed by simple interpreter techniques. This thesis describes the Illinois Functional Programming (IFP) interpreter, an interactive functional programming implementation which runs under both MS-DOS and UNIX. The IFP interpreter allows functions to be created, executed, and debugged in an environment very similar to UNIX. IFP's speed is competitive with other interpreted languages such as BASIC.
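The flavor of Backus-style FP is that programs are built from functions with functional forms such as composition, construction, and insert (fold), rather than from statements. A conceptual rendering of three such forms in Python (not IFP's implementation):

    from functools import reduce

    # Three classic FP functional forms, rendered as Python combinators.
    compose   = lambda f, g: lambda x: f(g(x))            # f o g
    construct = lambda *fs: lambda x: [f(x) for f in fs]  # [f1, ..., fn]
    insert    = lambda f: lambda xs: reduce(f, xs)        # /f  (fold)

    add  = lambda a, b: a + b
    sums = compose(insert(add), construct(min, max))
    print(sums([3, 1, 4, 1, 5]))   # min + max = 1 + 5 = 6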
Brusniak, Mi-Youn; Bodenmiller, Bernd; Campbell, David; Cooke, Kelly; Eddes, James; Garbutt, Andrew; Lau, Hollis; Letarte, Simon; Mueller, Lukas N; Sharma, Vagisha; Vitek, Olga; Zhang, Ning; Aebersold, Ruedi; Watts, Julian D
2008-01-01
Background Quantitative proteomics holds great promise for identifying proteins that are differentially abundant between populations representing different physiological or disease states. A range of computational tools is now available for both isotopically labeled and label-free liquid chromatography mass spectrometry (LC-MS) based quantitative proteomics. However, they are generally not comparable to each other in terms of functionality, user interfaces, information input/output, and do not readily facilitate appropriate statistical data analysis. These limitations, along with the array of choices, present a daunting prospect for biologists, and other researchers not trained in bioinformatics, who wish to use LC-MS-based quantitative proteomics. Results We have developed Corra, a computational framework and tools for discovery-based LC-MS proteomics. Corra extends and adapts existing algorithms used for LC-MS-based proteomics, and statistical algorithms, originally developed for microarray data analyses, appropriate for LC-MS data analysis. Corra also adapts software engineering technologies (e.g. Google Web Toolkit, distributed processing) so that computationally intense data processing and statistical analyses can run on a remote server, while the user controls and manages the process from their own computer via a simple web interface. Corra also allows the user to output significantly differentially abundant LC-MS-detected peptide features in a form compatible with subsequent sequence identification via tandem mass spectrometry (MS/MS). We present two case studies to illustrate the application of Corra to commonly performed LC-MS-based biological workflows: a pilot biomarker discovery study of glycoproteins isolated from human plasma samples relevant to type 2 diabetes, and a study in yeast to identify in vivo targets of the protein kinase Ark1 via phosphopeptide profiling. Conclusion The Corra computational framework leverages computational innovation to enable biologists or other researchers to process, analyze and visualize LC-MS data with what would otherwise be a complex and not user-friendly suite of tools. Corra enables appropriate statistical analyses, with controlled false-discovery rates, ultimately to inform subsequent targeted identification of differentially abundant peptides by MS/MS. For the user not trained in bioinformatics, Corra represents a complete, customizable, free and open source computational platform enabling LC-MS-based proteomic workflows, and as such, addresses an unmet need in the LC-MS proteomics field. PMID:19087345
Margaret R. Holdaway
1994-01-01
Describes Geo-CLM, a computer application (for Mac or DOS) whose primary aim is to perform multiple kriging runs to interpolate the historic climatic record at research plots in the Lake States. It is an exploration and analysis tool. Additional capabilities include climatic databases, a flexible test mode, cross validation, lat/long conversion, English/metric units,...
Distributed run of a one-dimensional model in a regional application using SOAP-based web services
NASA Astrophysics Data System (ADS)
Smiatek, Gerhard
This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
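The work-distribution pattern, a pool of worker threads feeding grid cells to remote services, translates directly into modern standard libraries. A Python sketch of the same shape, with a stub standing in for the SOAP call (the host names and the run_cell function are invented):

    from concurrent.futures import ThreadPoolExecutor

    HOSTS = ["pc01", "pc02", "pc03"]       # hypothetical remote model hosts

    def run_cell(args):
        host, cell = args
        # In the original system this is a SOAP web-service call invoking
        # the one-dimensional model for one grid cell on the remote host.
        return cell, f"emissions computed on {host}"

    cells = range(12)                      # regional grid cells to simulate
    jobs = [(HOSTS[i % len(HOSTS)], c) for i, c in enumerate(cells)]
    with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
        for cell, result in pool.map(run_cell, jobs):
            print(cell, result)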
Simulation of two dimensional electrophoresis and tandem mass spectrometry for teaching proteomics.
Fisher, Amanda; Sekera, Emily; Payne, Jill; Craig, Paul
2012-01-01
In proteomics, complex mixtures of proteins are separated (usually by chromatography or electrophoresis) and identified by mass spectrometry. We have created 2DE Tandem MS, a computer program designed for use in the biochemistry, proteomics, or bioinformatics classroom. It contains two simulations: 2D electrophoresis and tandem mass spectrometry. The two simulations are integrated and are designed to teach the concept of proteome analysis of prokaryotic and eukaryotic organisms. 2DE Tandem MS can be used as a freestanding simulation, or in conjunction with a wet lab, to introduce proteomics in the undergraduate classroom. 2DE Tandem MS is a free program available on SourceForge at https://sourceforge.net/projects/jbf/. It was developed using Java Swing and functions in Mac OS X, Windows, and Linux, ensuring that every student sees a consistent and informative graphical user interface no matter the computer platform they choose. Java must be installed on the host computer to run 2DE Tandem MS. Example classroom exercises are provided in the Supporting Information. Copyright © 2012 Wiley Periodicals, Inc.
Low-Budget, Cost-Effective OCR: Optical Character Recognition for MS-DOS Micros.
ERIC Educational Resources Information Center
Perez, Ernest
1990-01-01
Discusses optical character recognition (OCR) for use with MS-DOS microcomputers. Cost effectiveness is considered, three types of software approaches to character recognition are explained, hardware and operation requirements are described, possible library applications are discussed, future OCR developments are suggested, and a list of OCR…
NASA Astrophysics Data System (ADS)
Feudo, Christopher V.
1994-04-01
This dissertation demonstrates that inadequately protected wireless LANs are more vulnerable to rogue program attack than traditional LANs. Wireless LANs not only run the same risks as traditional LANs, but they also run additional risks associated with an open transmission medium. Intruders can scan radio waves and, given enough time and resources, intercept, analyze, decipher, and reinsert data into the transmission medium. This dissertation describes the development and instantiation of an abstract model of the rogue code insertion process into a DOS-based wireless communications system using radio frequency (RF) atmospheric signal transmission. The model is general enough to be applied to widely used target environments such as UNIX, Macintosh, and DOS operating systems. The methodology and three modules, the prober, activator, and trigger modules, to generate rogue code and insert it into a wireless LAN were developed to illustrate the efficacy of the model. Also incorporated into the model are defense measures against remotely introduced rogue programs and a cost-benefit analysis that determined that such defenses for a specific environment were cost-justified.
ERIC Educational Resources Information Center
Traven, Bill
1988-01-01
Discusses using the DOS PATH command (for MS-DOS) to enable the microcomputer user to move from directory to directory on a hard drive. Lists the commands to be programmed, gives examples, and explains the use of each. (MVL)
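For example, a single line such as the following (typically placed in AUTOEXEC.BAT) lets programs in any listed directory be run from whatever directory is current; the directory names here are examples only:

    PATH C:\DOS;C:\UTIL

DOS searches the listed directories, in order, for an executable matching the command typed.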
Toward Interactive Scenario Analysis and Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gayle, Thomas R.; Summers, Kenneth Lee; Jungels, John
2015-01-01
As Modeling and Simulation (M&S) tools have matured, their applicability and importance have increased across many national security challenges. In particular, they provide a way to test how something may behave without the need for real-world testing. However, current and future changes across several factors, including capabilities, policy, and funding, are driving a need for rapid response or evaluation in ways that many M&S tools cannot address. Issues around large data, computational requirements, delivery mechanisms, and analyst involvement already exist and pose significant challenges. Furthermore, rising expectations, rising input complexity, and increasing depth of analysis will only increase the difficulty of these challenges. In this study we examine whether innovations in M&S software coupled with advances in "cloud" computing and "big-data" methodologies can overcome many of these challenges. In particular, we propose a simple, horizontally-scalable distributed computing environment that could provide the foundation (i.e., "cloud") for next-generation M&S-based applications based on the notion of "parallel multi-simulation". In our context, the goal of parallel multi-simulation is to consider as many simultaneous paths of execution as possible. Therefore, with sufficient resources, the complexity is dominated by the cost of single scenario runs as opposed to the number of runs required. We show the feasibility of this architecture through a stable prototype implementation coupled with the Umbra Simulation Framework [6]. Finally, we highlight the utility through multiple novel analysis tools and by showing the performance improvement compared to existing tools.
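The claim that cost is dominated by a single scenario run, given sufficient resources, is just the observation that independent runs parallelize perfectly. A schematic Python rendering of that idea (run_scenario is a stand-in, not part of Umbra):

    from concurrent.futures import ProcessPoolExecutor

    def run_scenario(params):
        """Stand-in for one simulation run; returns a summary metric."""
        x, y = params
        return sum((x * i - y) ** 2 for i in range(10_000))

    if __name__ == "__main__":
        variants = [(x, y) for x in range(8) for y in range(8)]  # 64 paths
        with ProcessPoolExecutor() as pool:    # horizontal scale-out
            results = list(pool.map(run_scenario, variants))
        print(min(results))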
Laboratory process control using natural language commands from a personal computer
NASA Technical Reports Server (NTRS)
Will, Herbert A.; Mackin, Michael A.
1989-01-01
PC software is described which provides flexible natural language process control capability with an IBM PC or compatible machine. Hardware requirements include the PC, and suitable hardware interfaces to all controlled devices. Software required includes the Microsoft Disk Operating System (MS-DOS) operating system, a PC-based FORTRAN-77 compiler, and user-written device drivers. Instructions for use of the software are given as well as a description of an application of the system.
TARGET - TASK ANALYSIS REPORT GENERATION TOOL, VERSION 1.0
NASA Technical Reports Server (NTRS)
Ortiz, C. J.
1994-01-01
The Task Analysis Report Generation Tool, TARGET, is a graphical interface tool used to capture procedural knowledge and translate that knowledge into a hierarchical report. TARGET is based on VISTA, a knowledge acquisition tool developed by the Naval Systems Training Center. TARGET assists a programmer and/or task expert in organizing and understanding the steps involved in accomplishing a task. The user can label individual steps in the task through a dialogue box and get immediate graphical feedback for analysis. TARGET users can decompose tasks into basic action kernels or minimal steps to provide a clear picture of all basic actions needed to accomplish a job. This method allows the user to go back and critically examine the overall flow and makeup of the process. The user can switch between graphics (box flow diagrams) and text (task hierarchy) versions to more easily study the process being documented. As the practice of decomposition continues, tasks and their subtasks can be continually modified to more accurately reflect the user's procedures and rationale. This program is designed to help a programmer document an expert's task, thus allowing the programmer to build an expert system which can help others perform the task. Flexibility is a key element of the system design and of the knowledge acquisition session. If the expert is not able to find time to work on the knowledge acquisition process with the program developer, the developer and subject matter expert may work in iterative sessions. TARGET is easy to use and is tailored to accommodate users ranging from the novice to the experienced expert systems builder. TARGET is written in C for IBM PC series and compatible computers running MS-DOS and Microsoft Windows version 3.0 or 3.1. No source code is supplied. The executable requires 2Mb of RAM, a Microsoft compatible mouse, a VGA display and an 80286, 386 or 486 processor machine. The standard distribution medium for TARGET is one 5.25 inch 360K MS-DOS format diskette. TARGET was developed in 1991.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reaugh, J. E.
HE ignition caused by shear localization is the principal concern for safety analyses of postulated mechanical insults to explosive assemblies. Although prompt detonation from shock is certainly a concern, insults that lead to prompt detonation are associated with high velocity, and are correspondingly rare. For high-density HMX assemblies, an impact speed (by a steel object) of 400 m/s is needed to develop a detonation in a run distance less than 30 mm. To achieve a steady plane shock, which results in the shortest run distance to detonation for a given peak pressure, the impactor diameter must exceed 60 mm, and the thickness must approach 20 mm. Thinner plates and/or smaller-diameter ones require even higher impact velocity. Ignitions from shear localization, however, have been observed from impacts less than 50 m/s in Steven tests, less than 30 m/s in spigot impact tests, and less than 10 m/s in various drop tests. This lower velocity range is much more frequent in postulated mechanical insults. Preliminary computer simulations and analyses of a variety of such tests have suggested that although each is accompanied by shear localization, there are differing detailed mechanisms at work that cause the ignitions. We identify those mechanisms that may be at work in a variety of such tests, and suggest how models of shear ignition, such as HERMES, may be revised and calibrated to conform to experiment. We suggest combining additional experiments with computer simulations and model development to begin to confirm or uncover mechanisms that may be at work in a specific postulated event.
mr: A C++ library for the matching and running of the Standard Model parameters
NASA Astrophysics Data System (ADS)
Kniehl, Bernd A.; Pikelner, Andrey F.; Veretin, Oleg L.
2016-09-01
We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the $\overline{\mathrm{MS}}$ renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.
Program summary:
Catalogue identifier: AFAI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFAI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 3
No. of lines in distributed program, including test data, etc.: 517613
No. of bytes in distributed program, including test data, etc.: 2358729
Distribution format: tar.gz
Programming language: C++
Computer: IBM PC
Operating system: Linux, Mac OS X
RAM: 1 GB
Classification: 11.1
External routines: TSIL [1], OdeInt [2], boost [3]
Nature of problem: The running parameters of the Standard Model renormalized in the $\overline{\mathrm{MS}}$ scheme at some high renormalization scale, which is chosen by the user, are evaluated in perturbation theory as precisely as possible in two steps. First, the initial conditions at the electroweak energy scale are evaluated from the Fermi constant GF and the pole masses of the W, Z, and Higgs bosons and the bottom and top quarks, including the full two-loop threshold corrections. Second, the evolution to the high energy scale is performed by numerically solving the renormalization group evolution equations through three loops. Pure QCD corrections to the matching and running are included through four loops.
Solution method: Numerical integration of analytic expressions.
Additional comments: Available for download from URL: http://apik.github.io/mr/. The MathLink interface is tested to work with Mathematica 7-9 and, with an additional flag, also with Mathematica 10 under Linux and with Mathematica 10 under Mac OS X.
Running time: Less than 1 second.
References: [1] S. P. Martin and D. G. Robertson, Comput. Phys. Commun. 174 (2006) 133-151 [hep-ph/0501132]. [2] K. Ahnert and M. Mulansky, AIP Conf. Proc. 1389 (2011) 1586-1589 [arXiv:1110.3397 [cs.MS]].
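For a feel of what the running step does, the one-loop QCD evolution of the strong coupling has a closed form, α_s(μ) = α_s(μ0) / (1 + b0 α_s(μ0) ln(μ²/μ0²)) with b0 = (33 − 2 n_f)/(12π); mr performs the same task to three- and four-loop accuracy. A quick check in Python (α_s(m_Z) = 0.1179 and a fixed n_f = 5 are illustrative inputs):

    import math

    def alpha_s(mu, mu0=91.1876, a0=0.1179, nf=5):
        """One-loop QCD running coupling from alpha_s(mu0) at the Z mass."""
        b0 = (33 - 2 * nf) / (12 * math.pi)
        return a0 / (1 + b0 * a0 * math.log(mu ** 2 / mu0 ** 2))

    for mu in (10.0, 91.1876, 1000.0):
        print(mu, round(alpha_s(mu), 4))   # coupling decreases with scale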
LOP- LONG-TERM ORBIT PREDICTOR
NASA Technical Reports Server (NTRS)
Kwok, J. H.
1994-01-01
The Long-Term Orbit Predictor (LOP) trajectory propagation program is a useful tool in lifetime analysis of orbiting spacecraft. LOP is suitable for studying planetary orbit missions with reconnaissance (flyby) and exploratory (mapping) trajectories. Sample data is included for a geosynchronous station drift cycle study, a Venus radar mapping strategy, a frozen orbit about Mars, and a repeat ground trace orbit. LOP uses the variation-of-parameters method in formulating the equations of motion. Terms involving the mean anomaly are removed from numerical integrations so that large step sizes, on the order of days, are possible. Consequently, LOP executes much faster than programs based on Cowell's method, such as the companion program ASAP (the Artificial Satellite Analysis Program, NPO-17522, also available through COSMIC). The program uses a force model with a gravity field of up to 21 by 21, lunisolar perturbations, drag, and solar radiation pressure. The input includes classical orbital elements (either mean or osculating), orbital elements of the sun relative to the planet, reference time and dates, drag coefficients, gravitational constants, planet radius, and rotation rate. The printed output contains the classical elements for each time step or event step, and additional orbital data such as true anomaly, eccentric anomaly, latitude, longitude, periapsis altitude, and the rate of change per day of certain elements. Selected output is additionally written to a plot file for postprocessing by the user. LOP is written in FORTRAN 77 for batch execution on IBM PC compatibles running MS-DOS with a minimum of 256K RAM. Recompiling the source requires the Lahey F77 v2.2 compiler. The LOP package includes examples that use LOTUS 1-2-3 for graphical displays, but any graphics software package should be able to handle the ASCII plot file. The program is available on two 5.25 inch 360K MS-DOS format diskettes. The program was written in 1986 and last updated in 1989. LOP is a copyrighted work with all copyright vested in NASA. IBM PC is a registered trademark of International Business Machines Corporation. Lotus 1-2-3 is a registered trademark of Lotus Development Corporation. MS-DOS is a trademark of Microsoft Corporation.
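The payoff of the averaged, variation-of-parameters formulation is that the slow elements can be stepped in day-sized increments. As a minimal sketch of the idea (not LOP's actual force model), the familiar first-order J2 secular rates for the ascending node and argument of perigee can be evaluated and stepped directly, because the fast mean-anomaly dependence has been averaged out:

import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
RE = 6378137.0       # Earth equatorial radius, m
J2 = 1.08263e-3

def j2_secular_rates(a, e, i):
    # First-order secular drift rates (rad/s) of the ascending node and
    # argument of perigee under the J2 zonal harmonic.
    n = math.sqrt(MU / a ** 3)      # mean motion
    p = a * (1.0 - e * e)           # semi-latus rectum
    k = n * J2 * (RE / p) ** 2
    raan_dot = -1.5 * k * math.cos(i)
    argp_dot = 0.75 * k * (5.0 * math.cos(i) ** 2 - 1.0)
    return raan_dot, argp_dot

# With the fast anomaly averaged out, a day-long "step" is just rate * 86400.
rd, ad = j2_secular_rates(a=7178137.0, e=0.001, i=math.radians(98.6))
print(math.degrees(rd * 86400.0), math.degrees(ad * 86400.0))  # deg/day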
Stretching Your Energetic Budget: How Tendon Compliance Affects the Metabolic Cost of Running
Uchida, Thomas K.; Hicks, Jennifer L.; Dembia, Christopher L.; Delp, Scott L.
2016-01-01
Muscles attach to bones via tendons that stretch and recoil, affecting muscle force generation and metabolic energy consumption. In this study, we investigated the effect of tendon compliance on the metabolic cost of running using a full-body musculoskeletal model with a detailed model of muscle energetics. We performed muscle-driven simulations of running at 2–5 m/s with tendon force–strain curves that produced between 1 and 10% strain when the muscles were developing maximum isometric force. We computed the average metabolic power consumed by each muscle when running at each speed and with each tendon compliance. Average whole-body metabolic power consumption increased as running speed increased, regardless of tendon compliance, and was lowest at each speed when tendon strain reached 2–3% as muscles were developing maximum isometric force. When running at 2 m/s, the soleus muscle consumed less metabolic power at high tendon compliance because the strain of the tendon allowed the muscle fibers to operate nearly isometrically during stance. In contrast, the medial and lateral gastrocnemii consumed less metabolic power at low tendon compliance because less compliant tendons allowed the muscle fibers to operate closer to their optimal lengths during stance. The software and simulations used in this study are freely available at simtk.org and enable examination of muscle energetics with unprecedented detail. PMID:26930416
OCEAN-PC and a distributed network for ocean data
NASA Technical Reports Server (NTRS)
Mclain, Douglas R.
1992-01-01
The Intergovernmental Oceanographic Commission (IOC) wishes to develop an integrated software package for oceanographic data entry and access in developing countries. The software, called 'OCEAN-PC', would run on low cost PC microcomputers and would encourage and standardize: (1) entry of local ocean observations; (2) quality control of the local data; (3) merging local data with historical data; (4) improved display and analysis of the merged data; and (5) international data exchange. OCEAN-PC will link existing MS-DOS oceanographic programs and data sets with table-driven format conversions. Since many ocean data sets are now being distributed on optical discs (Compact Discs - Read Only Memory, CD-ROM, Mass et al. 1987), OCEAN-PC will emphasize access to CD-ROMs.
ADRPM-VII applied to the long-range acoustic detection problem
NASA Technical Reports Server (NTRS)
Shalis, Edward; Koenig, Gerald
1990-01-01
An acoustic detection range prediction model (ADRPM-VII) has been written for IBM PC/AT machines running the MS-DOS operating system. The software allows the user to predict detection distances of ground combat vehicles and their associated targets when they are involved in quasi-military settings. The program can also calculate individual attenuation losses due to spherical spreading, atmospheric absorption, ground reflection, and atmospheric refraction due to temperature and wind gradients while varying parameters affecting the source-receiver problem. The purpose here is to examine the strengths and limitations of ADRPM-VII by modeling the losses due to atmospheric refraction and ground absorption, commonly known as excess attenuation, when applied to the long-range detection problem for distances greater than 3 kilometers.
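Two of the simpler loss terms such a model combines, spherical spreading and atmospheric absorption, are easy to sketch. The Python snippet below is illustrative rather than ADRPM-VII's actual implementation, and the absorption coefficient is a placeholder (in practice it depends on frequency, temperature, and humidity):

import math

def propagation_loss_db(r, r_ref=1.0, alpha_db_per_km=3.0):
    # Spherical spreading (6 dB per doubling of distance) plus linear
    # atmospheric absorption; alpha is a placeholder value in dB/km.
    spreading = 20.0 * math.log10(r / r_ref)
    absorption = alpha_db_per_km * r / 1000.0
    return spreading + absorption

for r in (1000.0, 3000.0, 6000.0):
    print(f"{r / 1000:.0f} km: {propagation_loss_db(r):.1f} dB")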
NASA Technical Reports Server (NTRS)
Lamar, J. E.
1994-01-01
This program represents a subsonic aerodynamic method for determining the mean camber surface of trimmed noncoplanar planforms with minimum vortex drag. With this program, multiple surfaces can be designed together to yield a trimmed configuration with minimum induced drag at some specified lift coefficient. The method uses a vortex-lattice and overcomes previous difficulties with chord loading specification. A Trefftz plane analysis is used to determine the optimum span loading for minimum drag. The program then solves for the mean camber surface of the wing associated with this loading. Pitching-moment or root-bending-moment constraints can be employed at the design lift coefficient. Sensitivity studies of vortex-lattice arrangements have been made with this program, and comparisons with other theories show generally good agreement. The program is very versatile and has been applied to isolated wings, wing-canard configurations, a tandem wing, and a wing-winglet configuration. The design problem solved with this code is essentially an optimization one. A subsonic vortex-lattice is used to determine the span load distribution(s) on bent lifting line(s) in the Trefftz plane. A Lagrange multiplier technique determines the required loading, which is used to calculate the mean camber slopes, which are then integrated to yield the local elevation surface. The problem of determining the necessary circulation matrix is simplified by having the chordwise shape of the bound circulation remain unchanged across each span, though the chordwise shape may vary from one planform to another. The circulation matrix is obtained by calculating the spanwise scaling of the chordwise shapes. A chordwise summation of the lift and pitching-moment is utilized in the Trefftz plane solution on the assumption that the trailing wake does not roll up and that the general configuration has specifiable chord loading shapes. VLMD is written in FORTRAN for IBM PC series and compatible computers running MS-DOS. This program requires 360K of RAM for execution. The Ryan McFarland FORTRAN compiler and PLINK86 are required to recompile the source code; however, a sample executable is provided on the diskette. The standard distribution medium for VLMD is a 5.25 inch 360K MS-DOS format diskette. VLMD was originally developed for use on CDC 6000 series computers in 1976. It was originally ported to the IBM PC in 1986, and, after minor modifications, the IBM PC port was released in 1993.
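The Lagrange multiplier step can be pictured as a small linear-algebra problem: minimize a quadratic induced-drag form over the circulation strengths subject to a linear lift constraint. The Python sketch below (matrix names and toy numbers invented; VLMD's Trefftz-plane matrices are built from its vortex-lattice geometry) solves the resulting KKT system directly:

import numpy as np

def min_drag_loading(A, b, L):
    # Minimize Gamma^T A Gamma subject to b^T Gamma = L by solving the
    # KKT system [[A + A^T, b], [b^T, 0]] [Gamma; lambda] = [0; L].
    n = len(b)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = A + A.T
    K[:n, n] = b
    K[n, :n] = b
    rhs = np.zeros(n + 1)
    rhs[n] = L
    return np.linalg.solve(K, rhs)[:n]  # optimal circulation strengths

# Toy four-panel case with a diagonal drag matrix and unit design lift.
A = np.diag([1.0, 1.2, 1.5, 2.0])
b = np.array([0.9, 1.0, 1.0, 0.9])
print(min_drag_loading(A, b, L=1.0))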
Real-time machine vision system using FPGA and soft-core processor
NASA Astrophysics Data System (ADS)
Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad
2012-06-01
This paper presents a machine vision system for real-time computation of distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. Image component labeling and feature extraction modules were running in parallel having a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through Fast Simplex Link (FSL). The latency for computing distance and angle of camera from the reference points was measured to be 2 ms on the MicroBlaze, running at 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The FPGA based machine vision system that we propose has high frame speed, low latency and a power consumption that is much lower compared to commercially available smart camera solutions.
Mercury⊕: An evidential reasoning image classifier
NASA Astrophysics Data System (ADS)
Peddle, Derek R.
1995-12-01
MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package is described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under the Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
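The core of any Dempster-Shafer classifier is the combination rule that fuses mass assignments from independent evidence sources, renormalizing away conflicting mass. A minimal Python sketch of that step follows; the class names and masses are invented for illustration, and MERCURY⊕ itself derives its masses from training-data frequencies:

def dempster_combine(m1, m2):
    # Dempster's rule: multiply masses over intersecting hypotheses and
    # renormalize by 1 - K, where K is the mass assigned to conflict.
    combined, conflict = {}, 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            inter = h1 & h2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Two evidence sources over hypothetical land-cover classes.
A = frozenset({"tundra"})
B = frozenset({"shrub"})
Theta = A | B  # the full frame of discernment
m_spectral = {A: 0.6, Theta: 0.4}
m_terrain = {A: 0.3, B: 0.5, Theta: 0.2}
print(dempster_combine(m_spectral, m_terrain))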
NASA Technical Reports Server (NTRS)
Davis, G. J.
1994-01-01
One area of research of the Information Sciences Division at NASA Ames Research Center is devoted to the analysis and enhancement of processors and advanced computer architectures, specifically in support of automation and robotic systems. To compare systems' abilities to efficiently process Lisp and Ada, scientists at Ames Research Center have developed a suite of non-parallel benchmarks called ELAPSE. The benchmark suite was designed to test a single computer's efficiency as well as to compare alternative machines on the Lisp and/or Ada languages. ELAPSE tests the efficiency with which a machine can execute the various routines in each environment. The sample routines are based on numeric and symbolic manipulations and include two-dimensional fast Fourier transformations, Cholesky decomposition and substitution, Gaussian elimination, high-level data processing, and symbol-list references. Also included is a routine based on a Bayesian classification program sorting data into optimized groups. The ELAPSE benchmarks are available for any computer with a validated Ada compiler and/or Common Lisp system. Of the 18 routines that comprise ELAPSE, 14 are provided within this package, developed or translated at Ames. The others are readily available in the literature. The benchmark that requires the most memory is CHOLESKY.ADA. Under VAX/VMS, CHOLESKY.ADA requires 760K of main memory. ELAPSE is available on either two 5.25 inch 360K MS-DOS format diskettes (standard distribution) or a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The ELAPSE benchmarks were written in 1990. VAX and VMS are trademarks of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
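ELAPSE times language implementations rather than providing a library, but the flavor of its kernels is easy to picture. Below is a plain-Python rendering of the Cholesky decomposition-and-substitution computation that CHOLESKY.ADA exercises; the benchmark's Ada and Lisp sources are what the suite actually measures.

def cholesky(A):
    # Lower-triangular factor L with A = L L^T for symmetric
    # positive-definite A (the kernel timed by CHOLESKY.ADA).
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (A[i][i] - s) ** 0.5
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def forward_sub(L, b):
    # The "substitution" half: solve L y = b by forward substitution.
    y = []
    for i in range(len(b)):
        s = sum(L[i][k] * y[k] for k in range(i))
        y.append((b[i] - s) / L[i][i])
    return y

A = [[4.0, 2.0], [2.0, 3.0]]
L = cholesky(A)
print(L, forward_sub(L, [2.0, 1.0]))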
Constructive Engineering of Simulations
NASA Technical Reports Server (NTRS)
Snyder, Daniel R.; Barsness, Brendan
2011-01-01
Joint experimentation that investigates sensor optimization, re-tasking, and management has far-reaching implications for Department of Defense, interagency, and multinational partners. An adaptation of traditional human-in-the-loop (HITL) Modeling and Simulation (M&S) was one approach used to generate the findings necessary to derive and support these implications. Here an entity-based simulation was re-engineered to run on USJFCOM's High Performance Computer (HPC). The HPC was used to support the vast number of constructive runs necessary to produce statistically significant data in a timely manner. Then, from the resulting sensitivity analysis, event designers blended the necessary visualization and decision-making components into a synthetic environment for the HITL simulation trials. These trials focused on areas where human decision making had the greatest impact on the sensor investigations. Thus, this paper discusses how re-engineering existing M&S for constructive applications can positively influence the design of an associated HITL experiment.
SEEK: A FORTRAN optimization program using a feasible directions gradient search
NASA Technical Reports Server (NTRS)
Savage, M.
1995-01-01
This report describes the use of the computer program SEEK, which works in conjunction with two user-written subroutines and an input data file to perform an optimization procedure on a user's problem. The optimization method uses a modified feasible directions gradient technique. SEEK is written in ANSI standard Fortran 77, has an object size of about 46K bytes, and can be used on a personal computer running DOS. This report describes the use of the program and discusses the optimizing method. The program's use is illustrated with four example problems: a bushing design, a helical coil spring design, a gear mesh design, and a two-parameter Weibull life-reliability curve fit.
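As a loose illustration of constrained gradient search (not Zoutendijk's feasible-directions method as SEEK actually modifies it), the Python sketch below keeps every iterate feasible by projecting steepest-descent steps onto simple bounds:

def projected_gradient(grad, lo, hi, x, step=0.2, iters=500):
    # Steepest descent with projection onto simple bounds; every iterate
    # stays feasible, loosely mimicking a feasible-directions search.
    for _ in range(iters):
        x = [min(max(xi - step * gi, l), h)
             for xi, gi, l, h in zip(x, grad(x), lo, hi)]
    return x

# Minimize (x - 3)^2 + (y - 2)^2 subject to 0 <= x <= 2.5, 0 <= y <= 2.5.
grad = lambda v: [2.0 * (v[0] - 3.0), 2.0 * (v[1] - 2.0)]
print(projected_gradient(grad, [0.0, 0.0], [2.5, 2.5], [0.0, 0.0]))
# -> about [2.5, 2.0]: the x bound is active, the y bound is not.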
SHABERTH - ANALYSIS OF A SHAFT BEARING SYSTEM (CRAY VERSION)
NASA Technical Reports Server (NTRS)
Coe, H. H.
1994-01-01
The SHABERTH computer program was developed to predict operating characteristics of bearings in a multibearing load support system. Lubricated and non-lubricated bearings can be modeled. SHABERTH calculates the loads, torques, temperatures, and fatigue life for ball and/or roller bearings on a single shaft. The program also allows for an analysis of the system reaction to the termination of lubricant supply to the bearings and other lubricated mechanical elements. SHABERTH has proven to be a valuable tool in the design and analysis of shaft bearing systems. The SHABERTH program is structured with four nested calculation schemes. The thermal scheme performs steady state and transient temperature calculations which predict system temperatures for a given operating state. The bearing dimensional equilibrium scheme uses the bearing temperatures, predicted by the temperature mapping subprograms, and the rolling element raceway load distribution, predicted by the bearing subprogram, to calculate bearing diametral clearance for a given operating state. The shaft-bearing system load equilibrium scheme calculates bearing inner ring positions relative to the respective outer rings such that the external loading applied to the shaft is brought into equilibrium by the rolling element loads which develop at each bearing inner ring for a given operating state. The bearing rolling element and cage load equilibrium scheme calculates the rolling element and cage equilibrium positions and rotational speeds based on the relative inner-outer ring positions, inertia effects, and friction conditions. The ball bearing subprograms in the current SHABERTH program have several model enhancements over similar programs. These enhancements include an elastohydrodynamic (EHD) film thickness model that accounts for thermal heating in the contact area and lubricant film starvation; a new model for traction combined with an asperity load sharing model; a model for the hydrodynamic rolling and shear forces in the inlet zone of lubricated contacts, which accounts for the degree of lubricant film starvation; modeling normal and friction forces between a ball and a cage pocket, which account for the transition between the hydrodynamic and elastohydrodynamic regimes of lubrication; and a model of the effect on fatigue life of the ratio of the EHD plateau film thickness to the composite surface roughness. SHABERTH is intended to be as general as possible. The models in SHABERTH allow for the complete mathematical simulation of real physical systems. Systems are limited to a maximum of five bearings supporting the shaft, a maximum of thirty rolling elements per bearing, and a maximum of one hundred temperature nodes. The SHABERTH program structure is modular and has been designed to permit refinement and replacement of various component models as the need and opportunities develop. A preprocessor is included in the IBM PC version of SHABERTH to provide a user friendly means of developing SHABERTH models and executing the resulting code. The preprocessor allows the user to create and modify data files with minimal effort and a reduced chance for errors. Data is utilized as it is entered; the preprocessor then decides what additional data is required to complete the model. Only this required information is requested. The preprocessor can accommodate data input for any SHABERTH compatible shaft bearing system model. The system may include ball bearings, roller bearings, and/or tapered roller bearings. 
SHABERTH is written in FORTRAN 77, and two machine versions are available from COSMIC. The CRAY version (LEW-14860) has a RAM requirement of 176K of 64 bit words. The IBM PC version (MFS-28818) is written for IBM PC series and compatible computers running MS-DOS, and includes a sample MS-DOS executable. For execution, the PC version requires at least 1Mb of RAM and an 80386 or 486 processor machine with an 80x87 math co-processor. The standard distribution medium for the IBM PC version is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The standard distribution medium for the CRAY version is also a 5.25 inch 360K MS-DOS format diskette, but alternate distribution media and formats are available upon request. The original version of SHABERTH was developed in FORTRAN IV at Lewis Research Center for use on a UNIVAC 1100 series computer. The Cray version was released in 1988, and was updated in 1990 to incorporate fluid rheological data for Rocket Propellant 1 (RP-1), thereby allowing the analysis of bearings lubricated with RP-1. The PC version is a port of the 1990 CRAY version and was developed in 1992 by SRS Technologies under contract to NASA Marshall Space Flight Center.
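The nesting of the four calculation schemes can be pictured as fixed-point loops calling fixed-point loops: each level iterates its own variables to convergence, re-solving the level below at every trial point. The toy Python sketch below shows only that control structure; the scalar "thermal" and "torque" relations are invented stand-ins for SHABERTH's detailed models.

def fixed_point(update, x, tol=1e-9, max_iter=200):
    # Generic successive-substitution loop used at each nesting level.
    for _ in range(max_iter):
        x_new = update(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy two-level nesting in the spirit of SHABERTH's schemes: the outer
# "thermal" balance depends on a friction torque that the inner "load"
# loop computes for the current temperature. All relations are invented.
def solve_system(t_start=300.0):
    def thermal_balance(T):
        torque = fixed_point(lambda q: 0.5 * q + 0.01 * (400.0 - T), 0.0)
        return 300.0 + 2.0 * torque  # temperature implied by that torque
    return fixed_point(thermal_balance, t_start, tol=1e-6)

print(solve_system())  # converges near 303.85 once both loops agree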
NASA Technical Reports Server (NTRS)
Tournier, Jean-Michel; El-Genk, Mohamed S.
1995-01-01
This report describes the user's manual for HPTAM, a two-dimensional Heat Pipe Transient Analysis Model. HPTAM is described in detail in the UNM-ISNPS-3-1995 report which accompanies the present manual. The model offers a menu that lists a number of working fluids and wall and wick materials from which the user can choose. HPTAM is capable of simulating the startup of heat pipes from either a fully-thawed or frozen condition of the working fluid in the wick structure. The manual includes instructions for installing and running HPTAM on either a UNIX, MS-DOS, or VMS operating system. Sample input and output files are also provided to help the user with the code.
Bringing the medical library to the office desktop.
Brown, S R; Decker, G; Pletzke, C J
1991-01-01
This demonstration illustrates LRC Remote Computer Services, a dual operating system, multi-protocol system for delivering medical library services to the medical professional's desktop. A working model draws resources from CD-ROM and magnetic media file services, Novell and AppleTalk network protocol suites and gating, LAN and asynchronous (dial-in) access strategies, commercial applications for MS-DOS and Macintosh workstations, and custom user interfaces. The demonstration includes a discussion of issues relevant to the delivery of said services, particularly with respect to maintenance, security, training/support, staffing, software licensing, and costs.
Spreadsheet macros for coloring sequence alignments.
Haygood, M G
1993-12-01
This article describes a set of Microsoft Excel macros designed to color amino acid and nucleotide sequence alignments for review and preparation of visual aids. The colored alignments can then be modified to emphasize features of interest. Procedures for importing and coloring sequences are described. The macro file adds a new menu to the menu bar containing sequence-related commands to enable users unfamiliar with Excel to use the macros more readily. The macros were designed for use with Macintosh computers but will also run with the DOS version of Excel.
Martin W. Ritchie; Robert F. Powers
1993-01-01
SYSTUM-1 is an individual-tree/distance-independent simulator developed for use in young plantations in California and southern Oregon. The program was developed to run under the DOS operating system and requires DOS 3.0 or higher running on an 8086 or higher processor. The simulator is designed to provide a link with existing PC-based simulators (CACTOS and ORGANON)...
Microcosm to Cosmos: The Growth of a Divisional Computer Network
Johannes, R.S.; Kahane, Stephen N.
1987-01-01
In 1982, we reported the deployment of a network of microcomputers in the Division of Gastroenterology [1]. This network was based upon Corvus Systems Omninet®. Corvus was one of the very first firms to offer networking products for PCs. This PC development occurred coincident with the planning phase of the Johns Hopkins Hospital's multisegment ethernet project. A rich communications infrastructure is now in place at the Johns Hopkins Medical Institutions [2,3]. Shortly after hospital development began under the direction of the Operational and Clinical Systems Division (OCS), the Johns Hopkins School of Medicine began an Integrated Academic Information Management Systems (IAIMS) planning effort. We now present a model that uses aspects of all three planning efforts (PC networks, Hospital Information Systems, and IAIMS) to build a divisional computing facility. This facility is viewed as a terminal leaf on the institutional network diagram. Nevertheless, it is noteworthy that this leaf, the divisional resource in the Division of Gastroenterology (GASNET), has a rich substructure and functionality of its own, perhaps revealing the recursive nature of network architecture. The current status, design, and function of the GASNET computational facility are discussed. Among the major positive aspects of this design are the sharing and centralization of MS-DOS software and the high-speed DOS/Unix link that makes available most of our institution's computing resources.
Lahr, John C.
1999-01-01
This report provides Fortran source code and program manuals for HYPOELLIPSE, a computer program for determining hypocenters and magnitudes of near regional earthquakes and the ellipsoids that enclose the 68-percent confidence volumes of the computed hypocenters. HYPOELLIPSE was developed to meet the needs of U.S. Geological Survey (USGS) scientists studying crustal and sub-crustal earthquakes recorded by a sparse regional seismograph network. The program was extended to locate hypocenters of volcanic earthquakes recorded by seismographs distributed on and around the volcanic edifice, at elevations above and below the hypocenter. HYPOELLIPSE was used to locate events recorded by the USGS southern Alaska seismograph network from October 1971 to the early 1990s. Both UNIX and PC/DOS versions of the source code of the program are provided along with sample runs.
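Hypocenter determination of this kind is classically done with Geiger's method: linearize the travel-time residuals about a trial hypocenter and origin time, solve the least-squares system for a correction, and repeat. The Python sketch below does this for a uniform half-space, a toy stand-in for HYPOELLIPSE's layered velocity models, and omits the confidence-ellipsoid computation:

import numpy as np

def locate(stations, t_obs, v=6.0, start=(0.0, 0.0, 15.0, 0.0), iters=8):
    # Gauss-Newton (Geiger) iteration: linearize travel-time residuals in
    # (x, y, z, t0) and solve the least-squares system for a correction.
    x, y, z, t0 = start
    for _ in range(iters):
        G, r = [], []
        for (sx, sy, sz), t in zip(stations, t_obs):
            d = np.sqrt((x - sx) ** 2 + (y - sy) ** 2 + (z - sz) ** 2)
            r.append(t - (t0 + d / v))  # observed minus predicted time
            G.append([(x - sx) / (d * v), (y - sy) / (d * v),
                      (z - sz) / (d * v), 1.0])
        dm = np.linalg.lstsq(np.array(G), np.array(r), rcond=None)[0]
        x, y, z, t0 = x + dm[0], y + dm[1], z + dm[2], t0 + dm[3]
    return x, y, z, t0

# Synthetic P arrivals from a "true" source at (5, 5, 8) km with t0 = 0.
stations = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (10, 10, 0), (5, 12, 0)]
t_obs = [np.sqrt((5 - sx) ** 2 + (5 - sy) ** 2 + 64.0) / 6.0
         for sx, sy, sz in stations]
print(locate(stations, t_obs))  # recovers approximately (5, 5, 8, 0)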
Modeling and simulation in biomedicine.
Aarts, J.; Möller, D.; van Wijk van Brievingh, R.
1991-01-01
A group of researchers and educators in The Netherlands, Germany and Czechoslovakia have developed and adapted mathematical computer models of phenomena in the field of physiology and biomedicine for use in higher education. The models are graphical and highly interactive, and are all written in TurboPascal or the mathematical simulation language PSI. An educational shell has been developed to launch the models. The shell allows students to interact with the models and teachers to edit the models, to add new models, and to monitor the achievements of the students. The models and the shell have been implemented on an MS-DOS personal computer. This paper describes the features of the modeling package and presents the modeling and simulation of the heart muscle as an example. PMID:1807745
ACARA - AVAILABILITY, COST AND RESOURCE ALLOCATION
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
ACARA is a program for analyzing availability, lifecycle cost, and resource scheduling. It uses a statistical Monte Carlo method to simulate a system's capacity states as well as component failure and repair. Component failures are modelled using a combination of exponential and Weibull probability distributions. ACARA schedules component replacement to achieve optimum system performance. The scheduling will comply with any constraints on component production, resupply vehicle capacity, on-site spares, or crew manpower and equipment. ACARA is capable of many types of analyses and trade studies because of its integrated approach. It characterizes the system performance in terms of both state availability and equivalent availability (a weighted average of state availability). It can determine the probability of exceeding a capacity state to assess reliability and loss of load probability. It can also evaluate the effect of resource constraints on system availability and lifecycle cost. ACARA interprets the results of a simulation and displays tables and charts for: (1) performance, i.e., availability and reliability of capacity states, (2) frequency of failure and repair, (3) lifecycle cost, including hardware, transportation, and maintenance, and (4) usage of available resources, including mass, volume, and maintenance man-hours. ACARA incorporates a user-friendly, menu-driven interface with full screen data entry. It provides a file management system to store and retrieve input and output datasets for system simulation scenarios. ACARA is written in APL2 using the APL2 interpreter for IBM PC compatible systems running MS-DOS. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. A dot matrix printer is required if the user wishes to print a graph from a results table. A sample MS-DOS executable is provided on the distribution medium. The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The standard distribution medium for this program is a set of three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ACARA was developed in 1992.
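The Monte Carlo kernel of such an availability analysis is simple to sketch: alternate random times-to-failure with random repair times over the mission horizon and average the resulting up-time fraction. The single-component Python sketch below uses a Weibull failure draw and an exponential repair draw; all parameter values are illustrative rather than ACARA's.

import math
import random

def simulate_availability(beta, eta, mttr, horizon, n_runs=2000, seed=1):
    # Alternate Weibull(beta, eta) times-to-failure with exponential repairs
    # of mean mttr over the horizon; return the mean up-time fraction.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        t, up = 0.0, 0.0
        while t < horizon:
            ttf = eta * (-math.log(rng.random())) ** (1.0 / beta)  # Weibull draw
            up += min(ttf, horizon - t)
            t += ttf
            if t < horizon:
                t += rng.expovariate(1.0 / mttr)  # repair (downtime) draw
        total += up / horizon
    return total / n_runs

# Ten-year horizon in hours, one-year characteristic life, 72 h mean repair.
print(simulate_availability(beta=1.5, eta=8760.0, mttr=72.0, horizon=87600.0))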
Johnsen Lind, Andreas; Helge Johnsen, Bjorn; Hill, Labarron K; Sollers III, John J; Thayer, Julian F
2011-01-01
The aim of the present manuscript is to present a user-friendly and flexible platform for transforming Kubios HRV output files to the .xls file format used by MS Excel. The program utilizes either native or bundled Java and is platform-independent and mobile. This means that it can run without being installed on a computer. It also has an option for continuous transfer of data, meaning that it can run in the background while Kubios produces output files. The program checks for changes in the file structure and automatically updates the .xls output file.
NASA Technical Reports Server (NTRS)
Tennant, Allyn F.
1991-01-01
PLT is a high-level plotting package. A programmer can create a default plot suited to the data being displayed. At run time, users can then interact with the plot, overriding any or all of these defaults. The user is also provided the capability to fit functions to the displayed data. The ability to display, interact with, and fit the data makes PLT a useful tool in the analysis of data. The Quick and Dandy Plotter (QDP) program will read ASCII text files that contain PLT commands and data. Thus, QDP provides an easy way to use the PLT software, and QDP files provide a convenient way to exchange data. The QDP/PLT software is written in standard FORTRAN 77 and has been ported to VAX VMS, SUN UNIX, IBM AIX, NeXT NextStep, and MS-DOS systems.
Yildirim, Ilyas; Park, Hajeung; Disney, Matthew D.; Schatz, George C.
2013-01-01
One class of functionally important RNA is repeating transcripts that cause disease through various mechanisms. For example, expanded r(CAG) repeats can cause Huntington's and other diseases through translation of toxic proteins. Herein, the crystal structure of r[5ʹUUGGGC(CAG)3GUCC]2, a model of CAG expanded transcripts, refined to 1.65 Å resolution, is disclosed, showing both anti-anti and syn-anti orientations for 1×1 nucleotide AA internal loops. Molecular dynamics (MD) simulations using the Amber force field in explicit solvent were run for over 500 ns on model systems r(5ʹGCGCAGCGC)2 (MS1) and r(5ʹCCGCAGCGG)2 (MS2). In these MD simulations, both anti-anti and syn-anti AA base pairs appear to be stable. While anti-anti AA base pairs were dynamic and sampled multiple anti-anti conformations, no syn-anti↔anti-anti transformations were observed. Umbrella sampling simulations were run on MS2, and a 2D free energy surface was created to extract transformation pathways. In addition, an explicit solvent MD simulation of over 800 ns was run on r[5ʹGGGC(CAG)3GUCC]2, which closely represents the refined crystal structure. One of the terminal AA base pairs (syn-anti conformation) transformed to the anti-anti conformation. The pathway followed in this transformation was the one predicted by the umbrella sampling simulations. Further analysis showed a binding pocket near AA base pairs in syn-anti conformations. Computational results combined with the refined crystal structure show that the global minimum conformation of 1×1 nucleotide AA internal loops in r(CAG) repeats is anti-anti, but they can adopt syn-anti depending on the environment. These results are important to understand RNA dynamics-function relationships and to develop small molecules that target RNA dynamic ensembles. PMID:23441937
POMESH - DIFFRACTION ANALYSIS OF REFLECTOR ANTENNAS
NASA Technical Reports Server (NTRS)
Hodges, R. E.
1994-01-01
POMESH is a computer program capable of predicting the performance of reflector antennas. Both far field pattern and gain calculations are performed using the Physical Optics (PO) approximation of the equivalent surface currents. POMESH is primarily intended for relatively small reflectors. It is useful in situations where the surface is described by irregular data that must be interpolated and for cases where the surface derivatives are not known. This method is flexible and robust and also supports near field calculations. Because of the near field computation ability, this computational engine is quite useful for subreflector computations. The program is constructed in a highly modular form so that it may be readily adapted to perform tasks other than the one that is explicitly described here. Since the computationally intensive portions of the algorithm are simple loops, the program can be easily adapted to take advantage of vector processor and parallel architectures. In POMESH the reflector is represented as a piecewise planar surface comprised of triangular regions known as facets. A uniform physical optics (PO) current is assumed to exist on each triangular facet. Then, the PO integral on a facet is approximated by the product of the PO current value at the center and the area of the triangle. In this way, the PO integral over the reflector surface is reduced to a summation of the contribution from each triangular facet. The source horn, or feed, that illuminates the subreflector is approximated by a linear combination of plane patterns. POMESH contains three polarization pattern definitions for the feed: a linear x-polarized element, a linear y-polarized element, and a circularly polarized element. If a more general feed pattern is required, it is a simple matter to replace the subroutine that implements the pattern definitions. POMESH obtains information necessary to specify the coordinate systems, the location of other data files, and the parameters of the desired calculation from a user-provided data file. A numerical description of the principal plane patterns of the source horn must also be provided. The program is supplied with an analytically defined parabolic reflector surface. However, it is a simple matter to replace it with a user-defined reflector surface. Output is given in the form of a data stream to the terminal; a summary of the parameters used in the computation and some sample results in a file; and a data file of the results of the pattern calculations suitable for plotting. POMESH is written in FORTRAN 77 for execution on CRAY series computers running UNICOS. With minor modifications, it has also been successfully implemented on a Sun4 series computer running SunOS, a DEC VAX series computer running VMS, and an IBM PC series computer running OS/2. It requires 2.5Mb of RAM under SunOS 4.1.1, 2.5Mb of RAM under VMS 5-4.3, and 2.5Mb of RAM under OS/2. The OS/2 version requires the Lahey F77L compiler. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. It is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format and a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. POMESH was developed in 1989 and is a copyrighted work with all copyright vested in NASA. CRAY and UNICOS are registered trademarks of Cray Research, Inc. SunOS and Sun4 are trademarks of Sun Microsystems, Inc. DEC, DEC FILES-11, VAX and VMS are trademarks of Digital Equipment Corporation.
IBM PC and OS/2 are registered trademarks of International Business Machines, Inc. UNIX is a registered trademark of Bell Laboratories.
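The facet-sum approximation at the heart of the method reduces the radiation integral to a phased sum of (current at facet center) times (facet area) contributions. A scalar Python sketch of that summation follows; square grid cells stand in for POMESH's triangular facets, and the uniform current is a toy choice rather than a PO current computed from a feed:

import numpy as np

def po_far_field(centers, areas, currents, k, directions):
    # Each facet contributes (current at center) * (area) with far-field
    # phase k * r_hat . r_center; the pattern is the coherent facet sum.
    out = []
    for r_hat in directions:
        phase = np.exp(1j * k * centers @ r_hat)
        out.append(np.sum(currents * areas * phase))
    return np.array(out)

# Toy 1 m x 1 m flat plate in the z = 0 plane with uniform current,
# observed near broadside at a 10 cm wavelength.
n = 20
xs, ys = np.meshgrid(np.linspace(-0.5, 0.5, n), np.linspace(-0.5, 0.5, n))
centers = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(n * n)])
areas = np.full(n * n, (1.0 / n) ** 2)
currents = np.ones(n * n, dtype=complex)
k = 2.0 * np.pi / 0.1
thetas = np.radians(np.linspace(-10.0, 10.0, 5))
dirs = [np.array([np.sin(t), 0.0, np.cos(t)]) for t in thetas]
print(np.abs(po_far_field(centers, areas, currents, k, dirs)))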
Rotordynamics on the PC: Transient Analysis With ARDS
NASA Technical Reports Server (NTRS)
Fleming, David P.
1997-01-01
Personal computers can now do many jobs that formerly required a large mainframe computer. An example is NASA Lewis Research Center's program Analysis of RotorDynamic Systems (ARDS), which uses the component mode synthesis method to analyze the dynamic motion of up to five rotating shafts. As originally written in the early 1980's, this program was considered large for the mainframe computers of the time. ARDS, which was written in Fortran 77, has been successfully ported to a 486 personal computer. Plots appear on the computer monitor via calls programmed for the original CALCOMP plotter; plots can also be output on a standard laser printer. The executable code, which uses the full array sizes of the mainframe version, easily fits on a high-density floppy disk. The program runs under DOS with an extended memory manager. In addition to transient analysis of blade loss, step turns, and base acceleration, with simulation of squeeze-film dampers and rubs, ARDS calculates natural frequencies and unbalance response.
Simultaneous real-time data collection methods
NASA Technical Reports Server (NTRS)
Klincsek, Thomas
1992-01-01
This paper describes the development of electronic test equipment which executes, supervises, and reports on various tests. This validation process uses computers to analyze test results and report conclusions. The test equipment consists of an electronics component and the data collection and reporting unit. The PC software, display screens, and real-time database are described. Pass-fail procedures and data replay are discussed. The OS/2 operating system and Presentation Manager user interface system were used to create a highly interactive automated system. The system outputs are hardcopy printouts and MS-DOS format files which may be used as input for other PC programs.
Mosher, T J; Liu, Y; Torok, C M
2010-03-01
To characterize effects of age and physical activity level on cartilage thickness and T2 response immediately after running. Institutional review board approval was obtained and all subjects provided informed consent prior to study participation. Cartilage thickness and magnetic resonance imaging (MRI) T2 values of 22 marathon runners and 15 sedentary controls were compared before and after 30 min of running. Runner and control groups were stratified by age
Monte Carlo simulation of electrothermal atomization on a desktop personal computer
NASA Astrophysics Data System (ADS)
Histen, Timothy E.; Güell, Oscar A.; Chavez, Iris A.; Holcombea, James A.
1996-07-01
Monte Carlo simulations have been applied to electrothermal atomization (ETA) using a tubular atomizer (e.g., graphite furnace) because of the complexity in the geometry, heating, molecular interactions, etc. The intensive computation needed to accurately model ETA often limited its effective implementation to supercomputers. However, with the advent of more powerful desktop processors, this is no longer the case. A C-based program has been developed and can be used under Windows (TM) or DOS. With this program, basic parameters such as furnace dimensions, sample placement, and furnace heating, as well as kinetic parameters such as activation energies for desorption and adsorption, can be varied to show the absorbance profile dependence on these parameters. Even data such as the time-dependent spatial distribution of analyte inside the furnace can be collected. The DOS version also permits input of external temperature-time data to permit comparison of simulated profiles with experimentally obtained absorbance data. The run-time versions are provided along with the source code. This article is an electronic publication in Spectrochimica Acta Electronica (SAE), the electronic section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by a diskette with a program (PC format), data files, and text files.
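The kinetic core of such a simulation is easy to sketch: atoms on the furnace wall desorb with an Arrhenius rate under the temperature ramp, and gas-phase atoms are swept out of the observation volume. The Python toy below, with all parameter values illustrative and the physics far cruder than the published model, produces the characteristic rise-and-fall "absorbance" transient.

import math
import random

def eta_trace(e_des=2.5e5, nu=1e13, ramp=1500.0, t_res=0.02,
              n_atoms=2000, dt=1e-3, t_max=2.0, seed=2):
    # Atoms desorb from the wall at an Arrhenius rate nu*exp(-E/(R*T))
    # under a linear temperature ramp; gas-phase atoms leave the observed
    # volume with mean residence time t_res. Returns (time, gas count).
    R = 8.314
    rng = random.Random(seed)
    wall, gas, trace, t = n_atoms, 0, [], 0.0
    while t < t_max:
        T = 300.0 + ramp * t
        p_des = 1.0 - math.exp(-nu * math.exp(-e_des / (R * T)) * dt)
        freed = sum(1 for _ in range(wall) if rng.random() < p_des)
        gone = sum(1 for _ in range(gas) if rng.random() < dt / t_res)
        wall -= freed
        gas += freed - gone
        trace.append((t, gas))
        t += dt
    return trace

peak_t, peak_n = max(eta_trace(), key=lambda p: p[1])
print(f"peak gas-phase population {peak_n} atoms at t = {peak_t:.2f} s")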
White, Jennifer; Scurr, Joanna; Hedger, Wendy
2011-02-01
Comparisons of breast support requirements during overground and treadmill running have yet to be explored. The purpose of this study was to investigate 3D breast displacement and breast comfort during overground and treadmill running. Six female D cup participants had retro-reflective markers placed on the nipples, anterior superior iliac spines and clavicles. Five ProReflex infrared cameras (100 Hz) measured 3D marker displacement in four breast support conditions. For overground running, participants completed 5 running trials (3.1 m/s ± 0.1 m/s) over a 10 m indoor runway; for treadmill running, speed was steadily increased to 3.1 m/s and 5 gait cycles were analyzed. Subjective feedback on breast discomfort was collected using a visual analog scale. Running modality had no significant effect on breast displacement (p > .05). Moderate correlations (r = .45 to .68, p < .05) were found between breast discomfort and displacement. Stride length (m) and frequency (Hz) did not differ (p > .05) between breast support conditions or running modalities. Findings suggest that breast motion studies that examine treadmill running are applicable to overground running.
Hamzeiy, Hamid; Cox, Jürgen
2017-02-01
Computational workflows for mass spectrometry-based shotgun proteomics and untargeted metabolomics share many steps. Despite the similarities, untargeted metabolomics is lagging behind in terms of reliable fully automated quantitative data analysis. We argue that metabolomics will strongly benefit from the adaptation of successful automated proteomics workflows to metabolomics. MaxQuant is a popular platform for proteomics data analysis and is widely considered to be superior in achieving high precursor mass accuracies through advanced nonlinear recalibration, usually leading to five- to ten-fold better accuracy in complex LC-MS/MS runs. This translates to a sharp decrease in the number of peptide candidates per measured feature, thereby strongly improving the coverage of identified peptides. We argue that similar strategies can be applied to untargeted metabolomics, leading to equivalent improvements in metabolite identification. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
MsSpec-1.0: A multiple scattering package for electron spectroscopies in material science
NASA Astrophysics Data System (ADS)
Sébilleau, Didier; Natoli, Calogero; Gavaza, George M.; Zhao, Haifeng; Da Pieve, Fabiana; Hatada, Keisuke
2011-12-01
We present a multiple scattering package to calculate the cross-section of various spectroscopies, namely photoelectron diffraction (PED), Auger electron diffraction (AED), X-ray absorption (XAS), low-energy electron diffraction (LEED) and Auger photoelectron coincidence spectroscopy (APECS). This package is composed of three main codes, computing respectively the cluster, the potential and the cross-section. In the latter case, in order to cover a range of energies as wide as possible, three different algorithms are provided to perform the multiple scattering calculation: full matrix inversion, series expansion or correlation expansion of the multiple scattering matrix. Numerous other small Fortran codes or bash/csh shell scripts are also provided to perform specific tasks. The cross-section code is built by the user from a library of subroutines using a makefile.
Program title: MsSpec-1.0
Catalogue identifier: AEJT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 504438
No. of bytes in distributed program, including test data, etc.: 14448180
Distribution format: tar.gz
Programming language: Fortran 77
Computer: Any
Operating system: Linux, Mac OS
Classification: 7.2
External routines: Lapack (http://www.netlib.org/lapack/)
Nature of problem: Calculation of the cross-section of various spectroscopies.
Solution method: Multiple scattering.
Running time: The test runs provided only take a few seconds to run.
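The trade-off between the first two algorithms can be seen in miniature: full matrix inversion sums all scattering orders at once, while the series expansion truncates a Neumann series and only converges when scattering is sufficiently weak. A toy Python comparison follows; the random weak-scattering matrix is illustrative, whereas MsSpec's actual matrices are built from phase shifts and propagators:

import numpy as np

def full_inversion(M, psi0):
    # "Full matrix inversion" route: all scattering orders at once.
    return np.linalg.solve(np.eye(M.shape[0]) - M, psi0)

def series_expansion(M, psi0, orders=30):
    # "Series expansion" route: truncated Neumann series
    # psi0 + M psi0 + M^2 psi0 + ..., valid for weak scattering.
    psi, term = psi0.copy(), psi0.copy()
    for _ in range(orders):
        term = M @ term
        psi = psi + term
    return psi

rng = np.random.default_rng(0)
M = 0.1 * rng.standard_normal((6, 6))  # weak-scattering toy matrix
psi0 = rng.standard_normal(6)
print(np.max(np.abs(full_inversion(M, psi0) - series_expansion(M, psi0))))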
Flexibility and running economy in female collegiate track athletes.
Beaudoin, C M; Whatley Blum, J
2005-09-01
Limited information exists regarding the association between flexibility and running economy in female athletes. This study examined relationships between lower limb and trunk flexibility and running economy in 17 female collegiate track athletes (20.12+/-1.80 y). In a correlational design, subjects completed 4 testing sessions over a 2-week period. The 1st session assessed maximal oxygen uptake (VO2max=55.39+/-6.96 ml.kg-1.min-1). The 2nd session assessed trunk and lower limb flexibility. Two sets of 6 trunk and lower limb flexibility measures were performed after a 10-min treadmill warm-up at 2.68 m.s-1. The 3rd session consisted of 3 10-min accommodation runs at a speed of 2.68 m.s-1, which was approximately 60% VO2max. Each accommodation bout was separated by a 10-min rest. The 4th session assessed running economy. Subjects completed a 5-min warm-up at 2.68 m.s-1 followed by a 10-min economy run at 2.68 m.s-1. Pearson product moment correlations revealed no significant correlations between running economy and flexibility measures. Results are in contrast to studies demonstrating an inverse relationship between trunk and/or lower limb flexibility and running economy in males. Furthermore, results are in contrast to studies reporting positive relationships between flexibility and running economy.
A memory efficient user interface for CLIPS micro-computer applications
NASA Technical Reports Server (NTRS)
Sterle, Mark E.; Mayer, Richard J.; Jordan, Janice A.; Brodale, Howard N.; Lin, Min-Jin
1990-01-01
The goal of the Integrated Southern Pine Beetle Expert System (ISPBEX) is to provide expert level knowledge concerning treatment advice that is convenient and easy to use for Forest Service personnel. ISPBEX was developed in CLIPS and delivered on an IBM PC AT class microcomputer running the MS-DOS operating system. This restricted the size of the run-time system to 640K. In order to provide a robust expert system, with on-line explanation, help, and alternative actions menus, as well as features that allow the user to back up or execute 'what if' scenarios, a memory efficient menuing system was developed to interface with the CLIPS programs. By robust, we mean an expert system that (1) is user friendly, (2) provides reasonable solutions for a wide variety of domain specific problems, (3) explains why some solutions were suggested but others were not, and (4) provides technical information relating to the problem solution. Several advantages were gained by using this type of user interface (UI). First, by storing the menus on the hard disk (instead of main memory) during program execution, a more robust system could be implemented. Second, since the menus were built rapidly, development time was reduced. Third, the user may try a new scenario by backing up to any of the input screens and revising segments of the original input without having to retype all the information. And fourth, asserting facts from the menus provided for a dynamic and flexible fact base. This UI technology has been applied successfully in expert systems applications in forest management, agriculture, and manufacturing. This paper discusses the architecture of the UI system, human factors considerations, and the menu syntax design.
The Effects of Denial-of-Service Attacks on Secure Time-Critical Communications in the Smart Grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fengli; Li, QInghua; Mantooth, Homer Alan
2016-04-02
According to IEC 61850, many smart grid communications require messages to be delivered in a very short time: trip messages and sample values applied to the transmission level within 3 ms, and interlocking messages applied to the distribution level within 10 ms. Such time-critical communications are vulnerable to denial-of-service (DoS) attacks, for example the flooding attack, in which an attacker floods many messages to the target network or machine. We conducted a systematic, experimental study of how DoS attacks affect message delivery delays.
IOS: PDP 11/45 formatted input/output task stacker and processor. [In MACRO-11]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koschik, J.
1974-07-08
IOS allows the programmer to perform formatted input/output at the assembly language level to/from any peripheral device. It runs under DOS versions V8-O8 or V9-19, reading and writing DOS-compatible files. Additionally, IOS will run, with total transparency, in an environment with memory management enabled. The minimum hardware required is a 16K PDP 11/45, keyboard device, disk (DK, DF, or DC), and line frequency clock. The source language is MACRO-11 (3.3K decimal words).
Wilson, J Adam; Williams, Justin C
2009-01-01
The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
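The first two stages of that chain are conventional dense linear algebra, which is what makes them natural GPU targets. A CPU-side Python sketch of the same two steps follows: spatial filtering as a matrix-matrix product, then a Yule-Walker autoregressive spectrum per channel (a stand-in; the paper's exact AR estimator may differ, and the sizes below are illustrative).

import numpy as np

def spatial_filter(W, X):
    # Stage 1: spatial filtering is a matrix-matrix product
    # (filters x channels) @ (channels x samples).
    return W @ X

def ar_psd(x, order=16, n_freq=64):
    # Stage 2: autoregressive spectrum per channel via the Yule-Walker
    # equations; the spectrum is sigma^2 / |1 - sum a_k e^{-i w k}|^2.
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])       # AR coefficients
    sigma2 = r[0] - a @ r[1:order + 1]           # driving-noise variance
    w = np.linspace(0.0, np.pi, n_freq)
    H = 1.0 - np.exp(-1j * np.outer(w, np.arange(1, order + 1))) @ a
    return sigma2 / np.abs(H) ** 2

# 32 raw channels, 250 ms at 1 kHz, reduced to 8 filtered channels.
rng = np.random.default_rng(3)
X = rng.standard_normal((32, 250))
W = rng.standard_normal((8, 32)) / 32.0
Y = spatial_filter(W, X)
print(np.array([ar_psd(y) for y in Y]).shape)  # (8, 64)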
Speckle interferometry. Data acquisition and control for the SPID instrument.
NASA Astrophysics Data System (ADS)
Altarac, S.; Tallon, M.; Thiebaut, E.; Foy, R.
1998-08-01
SPID (SPeckle Imaging by Deconvolution) is a new speckle camera currently under construction at CRAL-Observatoire de Lyon. Its high spectral resolution and high image restoration capabilities open new astrophysical programs. The instrument SPID is composed of four main optical modules which are fully automated and computer controlled by software written in Tcl/Tk/Tix and C. This software provides intelligent assistance to the user by choosing observational parameters as a function of atmospheric parameters, computed in real time, and the desired restored image quality. Data acquisition is made by a photon-counting detector (CP40). A VME-based computer under OS9 controls the detector and stores the data. The intelligent system runs under Linux on a PC. A slave PC under DOS commands the motors. These 3 computers communicate through an Ethernet network. SPID can be considered a precursor for the VLT's (Very Large Telescope, four 8-meter telescopes currently being built in Chile by the European Southern Observatory) very high spatial resolution camera.
NASA Technical Reports Server (NTRS)
Thompson, R. A.
1994-01-01
Accurate numerical prediction of high-temperature, chemically reacting flowfields requires a knowledge of the physical properties and reaction kinetics for the species involved in the reacting gas mixture. Assuming an 11-species air model at temperatures below 30,000 degrees Kelvin, SPECIES (Computer Codes for the Evaluation of Thermodynamic Properties, Transport Properties, and Equilibrium Constants of an 11-Species Air Model) computes values for the species thermodynamic and transport properties, diffusion coefficients and collision cross sections for any combination of the eleven species, and reaction rates for the twenty reactions normally occurring. The species represented in the model are diatomic nitrogen, diatomic oxygen, atomic nitrogen, atomic oxygen, nitric oxide, ionized nitric oxide, the free electron, ionized atomic nitrogen, ionized atomic oxygen, ionized diatomic nitrogen, and ionized diatomic oxygen. Sixteen subroutines compute the following properties for both a single species, interaction pair, or reaction, and an array of all species, pairs, or reactions: species specific heat and static enthalpy, species viscosity, species frozen thermal conductivity, diffusion coefficient, collision cross section (OMEGA 1,1), collision cross section (OMEGA 2,2), collision cross section ratio, and equilibrium constant. The program uses least squares polynomial curve-fits of the most accurate data believed available to provide the requested values more quickly than is possible with table look-up methods. The subroutines for computing transport coefficients and collision cross sections use additional code to correct for any electron pressure when working with ionic species. SPECIES was developed on a SUN 3/280 computer running the SunOS 3.5 operating system. It is written in standard FORTRAN 77 for use on any machine, and requires roughly 92K memory. The standard distribution medium for SPECIES is a 5.25 inch 360K MS-DOS format diskette. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. This program was last updated in 1991. SUN and SunOS are registered trademarks of Sun Microsystems, Inc.
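The curve-fit evaluation such subroutines perform is essentially Horner's rule on fitted polynomial coefficients. As an illustration, the Python sketch below evaluates specific heat from the classic five-term NASA-style form Cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4; the N2 coefficients shown are widely published approximate values, not necessarily the fits used in SPECIES.

def cp_from_fit(T, coeffs, R=8.314462618):
    # Horner evaluation of Cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4.
    poly = 0.0
    for a in reversed(coeffs):
        poly = poly * T + a
    return R * poly

# Approximate published low-temperature (300-1000 K) coefficients for N2;
# illustrative only, not necessarily the fits used in SPECIES.
N2 = [3.298677, 1.4082404e-3, -3.963222e-6, 5.641515e-9, -2.444854e-12]
print(cp_from_fit(300.0, N2))  # about 29 J/(mol K)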
SHABERTH - ANALYSIS OF A SHAFT BEARING SYSTEM (CRAY VERSION)
NASA Technical Reports Server (NTRS)
Coe, H. H.
1994-01-01
The SHABERTH computer program was developed to predict operating characteristics of bearings in a multibearing load support system. Lubricated and non-lubricated bearings can be modeled. SHABERTH calculates the loads, torques, temperatures, and fatigue life for ball and/or roller bearings on a single shaft. The program also allows for an analysis of the system reaction to the termination of lubricant supply to the bearings and other lubricated mechanical elements. SHABERTH has proven to be a valuable tool in the design and analysis of shaft bearing systems. The SHABERTH program is structured with four nested calculation schemes. The thermal scheme performs steady state and transient temperature calculations which predict system temperatures for a given operating state. The bearing dimensional equilibrium scheme uses the bearing temperatures, predicted by the temperature mapping subprograms, and the rolling element raceway load distribution, predicted by the bearing subprogram, to calculate bearing diametral clearance for a given operating state. The shaft-bearing system load equilibrium scheme calculates bearing inner ring positions relative to the respective outer rings such that the external loading applied to the shaft is brought into equilibrium by the rolling element loads which develop at each bearing inner ring for a given operating state. The bearing rolling element and cage load equilibrium scheme calculates the rolling element and cage equilibrium positions and rotational speeds based on the relative inner-outer ring positions, inertia effects, and friction conditions. The ball bearing subprograms in the current SHABERTH program have several model enhancements over similar programs. These enhancements include an elastohydrodynamic (EHD) film thickness model that accounts for thermal heating in the contact area and lubricant film starvation; a new model for traction combined with an asperity load sharing model; a model for the hydrodynamic rolling and shear forces in the inlet zone of lubricated contacts, which accounts for the degree of lubricant film starvation; modeling normal and friction forces between a ball and a cage pocket, which account for the transition between the hydrodynamic and elastohydrodynamic regimes of lubrication; and a model of the effect on fatigue life of the ratio of the EHD plateau film thickness to the composite surface roughness. SHABERTH is intended to be as general as possible. The models in SHABERTH allow for the complete mathematical simulation of real physical systems. Systems are limited to a maximum of five bearings supporting the shaft, a maximum of thirty rolling elements per bearing, and a maximum of one hundred temperature nodes. The SHABERTH program structure is modular and has been designed to permit refinement and replacement of various component models as the need and opportunities develop. A preprocessor is included in the IBM PC version of SHABERTH to provide a user friendly means of developing SHABERTH models and executing the resulting code. The preprocessor allows the user to create and modify data files with minimal effort and a reduced chance for errors. Data is utilized as it is entered; the preprocessor then decides what additional data is required to complete the model. Only this required information is requested. The preprocessor can accommodate data input for any SHABERTH compatible shaft bearing system model. The system may include ball bearings, roller bearings, and/or tapered roller bearings. 
SHABERTH is written in FORTRAN 77, and two machine versions are available from COSMIC. The CRAY version (LEW-14860) has a RAM requirement of 176K of 64-bit words. The IBM PC version (MFS-28818) is written for IBM PC series and compatible computers running MS-DOS, and includes a sample MS-DOS executable. For execution, the PC version requires at least 1Mb of RAM and an 80386 or 80486 machine with an 80x87 math co-processor. The standard distribution medium for the IBM PC version is a set of two 5.25 inch 360K MS-DOS format diskettes.
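The nested calculation schemes described above amount to a fixed-point iteration between the thermal, dimensional, and load states. A minimal sketch in Python, with stand-in physics rather than SHABERTH's actual thermal, EHD, and load models:

    # Sketch of SHABERTH-style nested equilibrium iteration (stand-in physics;
    # the real code solves full thermal, clearance, and rolling-element models).
    def bearing_loads(clearance, shaft_load):
        # stand-in: rolling-element load share grows as clearance shrinks
        return shaft_load / (1.0 + clearance)

    def clearance_from(temps, loads):
        # stand-in: thermal growth and load deflection adjust diametral clearance
        return max(0.05 - 1e-4 * (temps - 300.0) + 1e-6 * loads, 0.0)

    def temperatures_from(loads, speed):
        # stand-in: friction heating raises the steady-state temperature
        return 300.0 + 1e-6 * loads * speed

    def solve(shaft_load=5000.0, speed=3000.0, tol=1e-6):
        temps, clearance = 300.0, 0.05
        for _ in range(200):                                 # outer thermal scheme
            loads = bearing_loads(clearance, shaft_load)     # load equilibrium scheme
            new_clr = clearance_from(temps, loads)           # dimensional equilibrium
            new_tmp = temperatures_from(loads, speed)        # thermal scheme
            if abs(new_tmp - temps) < tol and abs(new_clr - clearance) < tol:
                break
            temps, clearance = new_tmp, new_clr
        return temps, clearance, loads

    print(solve())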
SUBSONIC WIND TUNNEL PERFORMANCE ANALYSIS SOFTWARE
NASA Technical Reports Server (NTRS)
Eckert, W. T.
1994-01-01
This program was developed as an aid in the design and analysis of subsonic wind tunnels. It brings together and refines previously scattered and over-simplified techniques used for the design and loss prediction of the components of subsonic wind tunnels. It implements a system of equations for determining the total pressure losses and provides general guidelines for the design of diffusers, contractions, corners and the inlets and exits of non-return tunnels. The algorithms used in the program are applicable to compressible flow through most closed- or open-throated, single-, double- or non-return wind tunnels or ducts. A comparison between calculated performance and that actually achieved by several existing facilities produced generally good agreement. Any system through which air flows and which involves turns, fans, contractions, etc. (e.g., an HVAC system) may benefit from analysis using this software. This program is an update of ARC-11138 that adds PC compatibility and an improved user interface. The method of loss analysis used by the program is a synthesis of theoretical and empirical techniques. Generally, the algorithms used are those which have been substantiated by experimental test. The basic flow-state parameters used by the program are determined from input information about the reference control section and the test section. These parameters were derived from standard relationships for compressible flow. The local flow conditions, including Mach number, Reynolds number, and friction coefficient, are determined for each end of each component or section. The loss in total pressure caused by each section is calculated in a form non-dimensionalized by local dynamic pressure. The individual losses are based on the nature of the section, local flow conditions, and input geometry and parameter information. The loss forms for typical wind tunnel sections considered by the program include: constant area ducts, open throat ducts, contractions, constant area corners, diffusing corners, diffusers, exits, flow straighteners, fans, and fixed, known losses. Input to this program consists of data describing each section: the section type, the section end shapes, the section diameters, and parameters which vary from section to section. Output from the program consists of a tabulation of the performance-related parameters for each section of the wind tunnel circuit and the overall performance values, which include the total circuit length, the total pressure losses and energy ratios for the circuit, and the total operating power required. If requested, the output also includes an echo of the input data, a summary of the circuit characteristics, and plotted results for the cumulative pressure losses and the wall pressure differentials. The Subsonic Wind Tunnel Performance Analysis Software is written in FORTRAN 77 (71%) and BASIC (29%) for IBM PC series computers and compatibles running MS-DOS 2.1 or higher. The machine requirements include either an 80286 or 80386 processor, a math co-processor, and 640K of main memory. The PERFORM analysis software is written for the RM/FORTRAN v2.4 compiler. This portion of the code is portable to other platforms which support a standard FORTRAN 77 compiler. Source code and executables for the PC are included with the distribution. They are compressed using the PKWARE archiving tool; the utility to unarchive the files, PKUNZIP.EXE, is included.
The PERFINTER program interface allows the user to enter the wind tunnel characteristics via a menu-driven program; this interface is available only for the PC. The standard distribution medium for this package is a 5.25 inch 360K MS-DOS format diskette. This software package was developed in 1990. DEC, VAX and VMS are trademarks of Digital Equipment Corporation. RM/FORTRAN is a trademark of Ryan McFarland Corporation. PERFORM is a trademark of Prime Computer Inc. MS-DOS is a registered trademark of Microsoft Corporation.
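The loss bookkeeping described above, where each section's total-pressure loss is non-dimensionalized by local dynamic pressure and then summed into circuit totals, can be sketched as follows (Python, with hypothetical loss coefficients and dynamic pressures; the program derives these from theory and experiment):

    # Sketch: sum per-section total-pressure losses, each expressed as a loss
    # coefficient K = dP0 / q_local, and form an energy ratio referenced to the
    # test section. All numbers are illustrative placeholders.
    sections = [                  # (name, K, local dynamic pressure q in Pa)
        ("test section", 0.010, 1500.0),
        ("diffuser",     0.030,  900.0),
        ("corner 1",     0.050,  400.0),
        ("fan section",  0.020,  350.0),
        ("contraction",  0.008, 1500.0),
    ]
    q_ts = 1500.0                                   # test-section dynamic pressure
    losses = [(name, K * q) for name, K, q in sections]
    total_loss = sum(dp for _, dp in losses)
    energy_ratio = q_ts / total_loss                # one common definition
    for name, dp in losses:
        print(f"{name:14s} dP0 = {dp:7.1f} Pa")
    print(f"total dP0 = {total_loss:.1f} Pa, energy ratio = {energy_ratio:.2f}")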
SILHOUETTE - HIDDEN LINE COMPUTER CODE WITH GENERALIZED SILHOUETTE SOLUTION
NASA Technical Reports Server (NTRS)
Hedgley, D. R.
1994-01-01
Flexibility in choosing how to display computer-generated three-dimensional drawings has become increasingly important in recent years. A major consideration is the enhancement of the realism and aesthetics of the presentation. A polygonal representation of objects, even with hidden lines removed, is not always desirable. A more pleasing pictorial representation often can be achieved by removing some of the remaining visible lines, thus creating silhouettes (or outlines) of selected surfaces of the object. Additionally, this silhouette feature permits warped polygons: any polygon can be decomposed into constituent triangles, and treating those triangles as members of the same family presents the polygon with no interior lines, thus removing the restriction to flat polygons. SILHOUETTE is a program for calligraphic drawings that can render any subset of polygons as a silhouette with respect to itself. The program is flexible enough to be applicable to every class of object. SILHOUETTE offers all possible combinations of silhouette and nonsilhouette specifications for an arbitrary solid. Thus, it is possible to enhance the clarity of any three-dimensional scene presented in two dimensions. Input to the program can be line segments or polygons. Polygons designated with the same number will be drawn as a silhouette of those polygons. SILHOUETTE is written in FORTRAN 77 and requires a graphics package such as DI-3000. The program has been implemented on a DEC VAX series computer running VMS and used 65K of virtual memory without a graphics package linked in. The source code is intended to be machine independent. This program is available on a 5.25 inch 360K MS-DOS format diskette (standard distribution) and is also available on a 9-track 1600 BPI ASCII CARD IMAGE magnetic tape. SILHOUETTE was developed in 1986 and was last updated in 1992.
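The warped-polygon device mentioned above, decomposing a polygon into constituent triangles that are treated as one silhouette family, can be sketched with a simple fan decomposition (Python; the family id stands in for the polygon numbering the program uses):

    # Sketch: decompose an n-gon into a fan of triangles sharing vertex 0.
    # Tagging all triangles with the polygon's family id lets a silhouette
    # renderer suppress the interior edges they share.
    def fan_triangles(vertices, family_id):
        v0 = vertices[0]
        return [((v0, vertices[i], vertices[i + 1]), family_id)
                for i in range(1, len(vertices) - 1)]

    quad = [(0, 0, 0), (1, 0, 0.2), (1, 1, 0), (0, 1, -0.2)]   # warped quad
    for tri, fam in fan_triangles(quad, family_id=7):
        print(fam, tri)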
LERC-SLAM - THE NASA LEWIS RESEARCH CENTER SATELLITE LINK ATTENUATION MODEL PROGRAM (IBM PC VERSION)
NASA Technical Reports Server (NTRS)
Manning, R. M.
1994-01-01
The frequency and intensity of rain attenuation affecting the communication between a satellite and an earth terminal is an important consideration in planning satellite links. The NASA Lewis Research Center Satellite Link Attenuation Model Program (LeRC-SLAM) provides a static and dynamic statistical assessment of the impact of rain attenuation on a communications link established between an earth terminal and a geosynchronous satellite. The program is designed for use in the specification, design and assessment of satellite links for any terminal location in the continental United States. The basis for LeRC-SLAM is the ACTS Rain Attenuation Prediction Model, which uses a log-normal cumulative probability distribution to describe the random process of rain attenuation on satellite links. The derivation of the statistics for the rainrate process at the specified terminal location relies on long term rainfall records compiled by the U.S. Weather Service during time periods of up to 55 years in length. The theory of extreme value statistics is also utilized. The user provides 1) the longitudinal position of the satellite in geosynchronous orbit, 2) the geographical position of the earth terminal in terms of latitude and longitude, 3) the height above sea level of the terminal site, 4) the yearly average rainfall at the terminal site, and 5) the operating frequency of the communications link (within 1 to 1000 GHz, inclusive). Based on the yearly average rainfall at the terminal location, LeRC-SLAM calculates the relevant rain statistics for the site using an internal data base. The program then generates rain attenuation data for the satellite link. This data includes a description of the static (i.e., yearly) attenuation process, an evaluation of the cumulative probability distribution for attenuation effects, and an evaluation of the probability of fades below selected fade depths. In addition, LeRC-SLAM calculates the elevation and azimuth angles of the terminal antenna required to establish a link with the satellite, the statistical parameters that characterize the rainrate process at the terminal site, the length of the propagation path within the potential rain region, and its projected length onto the local horizontal. The IBM PC version of LeRC-SLAM (LEW-14979) is written in Microsoft QuickBASIC for an IBM PC compatible computer with a monitor and printer capable of supporting an 80-column format. The IBM PC version is available on a 5.25 inch MS-DOS format diskette. The program requires about 30K RAM. The source code and executable are included. The Macintosh version of LeRC-SLAM (LEW-14977) is written in Microsoft Basic, Binary (b) v2.00 for Macintosh II series computers running MacOS. This version requires 400K RAM and is available on a 3.5 inch 800K Macintosh format diskette, which includes source code only. The Macintosh version was developed in 1987 and the IBM PC version was developed in 1989. IBM PC is a trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. Macintosh is a registered trademark of Apple Computer, Inc.
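The log-normal statistic at the core of the model reduces to a standard exceedance computation. A minimal sketch in Python with placeholder parameters (LeRC-SLAM derives the actual site statistics from its internal rainfall database):

    # Sketch: probability that rain attenuation A exceeds a dB, assuming
    # ln(A) ~ Normal(mu, sigma). mu and sigma here are made-up placeholders.
    import math

    def p_exceed(a_dB, mu, sigma):
        return 0.5 * math.erfc((math.log(a_dB) - mu) / (sigma * math.sqrt(2.0)))

    mu, sigma = math.log(3.0), 1.2      # median 3 dB attenuation (placeholder)
    for fade in (1.0, 3.0, 6.0, 10.0):
        print(f"P(A > {fade:4.1f} dB) = {p_exceed(fade, mu, sigma):.4f}")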
NASA Technical Reports Server (NTRS)
Hoelzer, H. D.; Fourroux, K. A.; Rickman, D. L.; Schrader, C. M.
2011-01-01
Figures of Merit (FoMs) and the FoM software provide a method for quantitatively evaluating the quality of a regolith simulant by comparing the simulant to a reference material. FoMs may be used for comparing a simulant to actual regolith material, for specification (by stating the values a simulant's FoMs must attain to be suitable for a given application), and for comparing simulants from different vendors or production runs. FoMs may even be used to compare different simulants to each other. A single FoM is conceptually an algorithm that computes a single number quantifying the similarity or difference of a single characteristic of a simulant material and a reference material, providing a clear measure of how well the two materials match. FoMs have been constructed to lie between zero and 1, with zero indicating a poor or no match and 1 indicating a perfect match. FoMs are defined for modal composition, particle size distribution, particle shape distribution (aspect ratio and angularity), and density. This TM covers the mathematics, use, installation, and licensing of the existing FoM code in detail.
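One way to construct a [0, 1] figure of merit of the kind described, with 1 a perfect match and 0 none, is an overlap measure on binned distributions. A hypothetical sketch in Python, not the TM's actual definitions:

    # Sketch: FoM = 1 - (1/2) * sum |p_i - q_i| over normalized bins, which is
    # 1 when simulant and reference distributions coincide and 0 when disjoint.
    def fom(simulant_bins, reference_bins):
        s = sum(simulant_bins)
        r = sum(reference_bins)
        p = [x / s for x in simulant_bins]
        q = [x / r for x in reference_bins]
        return 1.0 - 0.5 * sum(abs(a - b) for a, b in zip(p, q))

    ref = [5, 20, 40, 25, 10]      # reference particle-size histogram
    sim = [8, 18, 38, 24, 12]      # simulant histogram
    print(f"particle-size FoM = {fom(sim, ref):.3f}")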
[Automatic tracing of conversion scales from conventional units to the SI system of units].
Besozzi, M; Bianchi, P; Agrifoglio, L
1988-01-01
American medical journals, such as the Journal of the American Medical Association (JAMA) and the American Journal of Clinical Pathology (AJCP), the journal of the American Society of Clinical Pathologists (ASCP), are shifting to selected SI (Système International d'Unités) units for reporting measurements. Further discussion by the AMA, the ASCP, and other organizations is required before the US medical community can reach consensus on the extent of, and time frame for, conversion to SI in reporting clinical laboratory measurements; this decision, however, will certainly speed up the process of conversion in European countries as well. Transition to SI units will require the use of different reference ranges, and there will be a potential for serious misinterpretation of laboratory data unless well-planned educational programs are instituted before the change. A simple program written in Microsoft BASIC is presented here for automatically tracing, on a personal computer (PC) monitor, a dual scale in both the conventional and the SI system of units. The program may be easily implemented and run on any PC operating under MS-DOS and equipped with a CGA or AT&T 6300 graphics card; through the operating system, the scales may also be printed on a dot-matrix graphics printer. We believe that this and other tools of its kind will be useful in the educational process of those reading the reports and will be an important factor in the success of conversion to SI reporting.
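The dual-scale idea carries over directly to any environment; a minimal text-mode sketch in Python using the standard serum glucose conversion (1 mg/dL = 0.0555 mmol/L) in place of the article's BASIC/CGA graphics:

    # Sketch: print an aligned dual scale, conventional units on the left,
    # SI units on the right, for serum glucose.
    FACTOR = 0.0555                      # mmol/L per mg/dL for glucose
    for mg_dl in range(0, 401, 50):
        print(f"{mg_dl:4d} mg/dL  |  {mg_dl * FACTOR:6.2f} mmol/L")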
MIMS - MEDICAL INFORMATION MANAGEMENT SYSTEM
NASA Technical Reports Server (NTRS)
Frankowski, J. W.
1994-01-01
MIMS, the Medical Information Management System, is an interactive, general-purpose information storage and retrieval system. It was first designed to be used in medical data management, and can be used to handle all aspects of data related to patient care. Other areas of application for MIMS include: managing occupational safety data in the public and private sectors; handling judicial information where speed and accuracy are high priorities; systemizing purchasing and procurement systems; and analyzing organizational cost structures. Because of its free format design, MIMS can offer immediate assistance where manipulation of large data bases is required. File structures, data categories, field lengths and formats, including alphabetic and/or numeric, are all user defined. The user can quickly and efficiently extract, display, and analyze the data. Three means of extracting data are provided: certain short items of information, such as social security numbers, can be used to uniquely identify each record for quick access; records can be selected which match conditions defined by the user; and specific categories of data can be selected. Data may be displayed and analyzed in several ways which include: generating tabular information assembled from comparison of all the records on the system; generating statistical information on numeric data such as means, standard deviations and standard errors; and displaying formatted listings of output data. The MIMS program is written in Microsoft FORTRAN-77. It was designed to operate on IBM Personal Computers and compatibles running under PC-DOS or MS-DOS 2.00 or higher. MIMS was developed in 1987.
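The statistical outputs named above (means, standard deviations, and standard errors over a selected numeric category) amount to the following; a sketch in Python rather than MIMS's FORTRAN:

    # Sketch: summary statistics for a numeric field extracted from records.
    import math

    def summarize(values):
        n = len(values)
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / (n - 1)   # sample variance
        sd = math.sqrt(var)
        return mean, sd, sd / math.sqrt(n)                     # mean, SD, SE

    ages = [34, 41, 29, 50, 45, 38]
    mean, sd, se = summarize(ages)
    print(f"mean={mean:.2f}  sd={sd:.2f}  se={se:.2f}")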
GPU-Accelerated Stony-Brook University 5-class Microphysics Scheme in WRF
NASA Astrophysics Data System (ADS)
Mielikainen, J.; Huang, B.; Huang, A.
2011-12-01
The Weather Research and Forecasting (WRF) model is a next-generation mesoscale numerical weather prediction system. Microphysics plays an important role in weather and climate prediction. Several bulk water microphysics schemes are available within the WRF, with different numbers of simulated hydrometeor classes and methods for estimating their size distributions, fall speeds, and densities. The Stony-Brook University scheme (SBU-YLIN) is a 5-class scheme with riming intensity predicted to account for mixed-phase processes. In the past few years, co-processing on Graphics Processing Units (GPUs) has been a disruptive technology in High Performance Computing (HPC). GPUs use the ever-increasing transistor count to add more processor cores. Therefore, GPUs are well suited for massively data-parallel processing with high floating-point arithmetic intensity. Thus, it is imperative to update legacy scientific applications to take advantage of this unprecedented increase in computing power. CUDA is an extension to the C programming language that allows programming GPUs directly. It is designed so that its constructs allow for natural expression of data-level parallelism. A CUDA program is organized into two parts: a serial program running on the CPU and a CUDA kernel running on the GPU. The CUDA code consists of three computational phases: transmission of data into the global memory of the GPU, execution of the CUDA kernel, and transmission of results from the GPU into the memory of the CPU. CUDA takes a bottom-up view of parallelism in which a thread is the atomic unit of parallelism. Individual threads are part of groups called warps, within which every thread executes exactly the same sequence of instructions. To test SBU-YLIN, we used a CONtinental United States (CONUS) benchmark data set for a 12 km resolution domain for October 24, 2001. A WRF domain is a geographic region of interest discretized into a 2-dimensional grid parallel to the ground. Each grid point has multiple levels, which correspond to various vertical heights in the atmosphere. The size of the CONUS 12 km domain is 433 x 308 horizontal grid points with 35 vertical levels. First, the entire SBU-YLIN Fortran code was rewritten in C in preparation for the GPU-accelerated version. After that, the C code was verified against the Fortran code for identical outputs. Default compiler options from WRF were used for the gfortran and gcc compilers. The processing time for the original Fortran code is 12274 ms, and 12893 ms for the C version. The processing times for the GPU implementation of the SBU-YLIN microphysics scheme with I/O are 57.7 ms and 37.2 ms for 1 and 2 GPUs, respectively. The corresponding speedups are 213x and 330x compared to the Fortran implementation. Without I/O, the speedup is 896x on 1 GPU. Ignoring I/O time, the speedup scales linearly with the number of GPUs; thus, 2 GPUs have a speedup of 1788x without I/O. Microphysics computation is just a small part of the whole WRF model. Once WRF is fully implemented on the GPU, the inputs for SBU-YLIN will not have to be transferred from the CPU; instead, they will be the results of previous WRF modules. Therefore, the role of I/O is greatly diminished once all of WRF has been converted to run on GPUs. In the near future, we expect to have WRF running completely on GPUs for superior performance.
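The quoted speedups follow directly from the reported timings, as a quick check shows:

    # Speedups reproduced from the timings quoted above.
    fortran_ms = 12274.0
    for label, gpu_ms in [("1 GPU, with I/O", 57.7),
                          ("2 GPUs, with I/O", 37.2)]:
        print(f"{label}: {fortran_ms / gpu_ms:.0f}x")
    # 12274 / 57.7 = 213x and 12274 / 37.2 = 330x, matching the text.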
McCallion, Ciara; Donne, Bernard; Fleming, Neil; Blanksby, Brian
2014-01-01
This study compared stride length, stride frequency, contact time, flight time and foot-strike patterns (FSP) when running barefoot, and in minimalist and conventional running shoes. Habitually shod male athletes (n = 14; age 25 ± 6 yr; competitive running experience 8 ± 3 yr) completed a randomised order of 6 by 4-min treadmill runs at velocities (V1 and V2) equivalent to 70 and 85% of best 5-km race time, in the three conditions. Synchronous recording of 3-D joint kinematics and ground reaction force data examined spatiotemporal variables and FSP. Most participants adopted a mid-foot strike pattern, regardless of condition. Heel-toe latency was less at V2 than V1 (-6 ± 20 vs. -1 ± 13 ms, p < 0.05), which indicated a velocity-related shift towards a more forefoot-strike (FFS) pattern. Stride duration and flight time, when shod and in minimalist footwear, were greater than barefoot (713 ± 48 and 701 ± 49 vs. 679 ± 56 ms, p < 0.001; and 502 ± 45 and 503 ± 41 vs. 488 ± 49 ms, p < 0.05, respectively). Contact time was significantly longer when running shod than barefoot or in minimalist footwear (211 ± 30 vs. 191 ± 29 ms and 198 ± 33 ms, p < 0.001). When running barefoot, stride frequency was significantly higher (p < 0.001) than in conventional and minimalist footwear (89 ± 7 vs. 85 ± 6 and 86 ± 6 strides·min(-1)). In conclusion, differences in spatiotemporal variables occurred within a single running session, irrespective of barefoot running experience, and without a detectable change in FSP. Key points: Differences in spatiotemporal variables occurred within a single running session, without a change in foot strike pattern. Stride duration and flight time were greater when shod and in minimalist footwear than when barefoot. Stride frequency when barefoot was higher than when shod or in minimalist footwear. Contact time when shod was longer than when barefoot or in minimalist footwear. Spatiotemporal variables when running in minimalist footwear more closely resemble shod than barefoot running. PMID:24790480
PYROLASER - PYROLASER OPTICAL PYROMETER OPERATING SYSTEM
NASA Technical Reports Server (NTRS)
Roberts, F. E.
1994-01-01
The PYROLASER package is an operating system for the Pyrometer Instrument Company's Pyrolaser. There are 6 individual programs in the PYROLASER package: two main programs, two lower level subprograms, and two programs which, although independent, function predominantly as macros. The package provides a quick and easy way to setup, control, and program a standard Pyrolaser. Temperature and emissivity measurements may be either collected as if the Pyrolaser were in the manual operations mode, or displayed on real time strip charts and stored in standard spreadsheet format for post-test analysis. A shell is supplied to allow macros, which are test-specific, to be easily added to the system. The Pyrolaser Simple Operation program provides full on-screen remote operation capabilities, thus allowing the user to operate the Pyrolaser from the computer just as it would be operated manually. The Pyrolaser Simple Operation program also allows the use of "quick starts". Quick starts provide an easy way to permit routines to be used as setup macros for specific applications or tests. The specific procedures required for a test may be ordered in a sequence structure and then the sequence structure can be started with a simple button in the cluster structure provided. One quick start macro is provided for continuous Pyrolaser operation. A subprogram, Display Continuous Pyr Data, is used to display and store the resulting data output. Using this macro, the system is set up for continuous operation and the subprogram is called to display the data in real time on strip charts. The data is simultaneously stored in a spreadsheet format. The resulting spreadsheet file can be opened in any one of a number of commercially available spreadsheet programs. The Read Continuous Pyrometer program is provided as a continuously run subprogram for incorporation of the Pyrolaser software into a process control or feedback control scheme in a multi-component system. The program requires the Pyrolaser to be set up using the Pyrometer String Transfer macro. It requires no inputs and provides temperature and emissivity as outputs. The Read Continuous Pyrometer program can be run continuously and the data can be sampled as often or as seldom as updates of temperature and emissivity are required. PYROLASER is written using the Labview software for use on Macintosh series computers running System 6.0.3 or later, Sun Sparc series computers running OpenWindows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatibles running Microsoft Windows 3.1 or later. Labview requires a minimum of 5Mb of RAM on a Macintosh, 24Mb of RAM on a Sun, and 8Mb of RAM on an IBM PC or compatible. The Labview software is a product of National Instruments (Austin,TX; 800-433-3488), and is not included with this program. The standard distribution medium for PYROLASER is a 3.5 inch 800K Macintosh format diskette. It is also available on a 3.5 inch 720K MS-DOS format diskette, a 3.5 inch diskette in UNIX tar format, and a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation in Macintosh WordPerfect version 2.0.4 format is included on the distribution medium. Printed documentation is included in the price of the program. PYROLASER was developed in 1992.
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Meana-Pañeda, Rubén; Truhlar, Donald G.
2013-08-01
We present an improved version of the MSTor program package, which calculates partition functions and thermodynamic functions of complex molecules involving multiple torsions; the method is based on either a coupled torsional potential or an uncoupled torsional potential. The program can also carry out calculations in the multiple-structure local harmonic approximation. The program package also includes seven utility codes that can be used as stand-alone programs to calculate reduced moment-of-inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files for the MSTor and Voronoi calculations, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Restrictions: There is no limit on the number of torsions that can be included in either the Voronoi calculation or the full MS-T calculation. In practice, the range of problems that can be addressed with the present method consists of all multitorsional problems for which one can afford to calculate all the conformational structures and their frequencies. Unusual features: The method can be applied to transition states as well as stable molecules. The program package also includes the hull program for the calculation of Voronoi volumes and the symmetry program for determining the point-group symmetry of a molecule, in addition to the seven stand-alone utility codes described above. Additional comments: The program package includes a manual, installation script, and input and output files for a test suite. Running time: There are 26 test runs. The running time of the test runs on a single processor of the Itasca computer is less than 2 s. References: [1] MS-T(C) method: Quantum Thermochemistry: Multi-Structural Method with Torsional Anharmonicity Based on a Coupled Torsional Potential, J. Zheng and D.G. Truhlar, Journal of Chemical Theory and Computation 9 (2013) 1356-1367, DOI: http://dx.doi.org/10.1021/ct3010722. [2] MS-T(U) method: Practical Methods for Including Torsional Anharmonicity in Thermochemical Calculations of Complex Molecules: The Internal-Coordinate Multi-Structural Approximation, J. Zheng, T. Yu, E. Papajak, I. M. Alecu, S.L. Mielke, and D.G. Truhlar, Physical Chemistry Chemical Physics 13 (2011) 10885-10907.
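The torsional eigenvalue summation used for the one-dimensional partition functions is the usual Boltzmann sum over levels. A minimal sketch in Python with placeholder eigenvalues (MSTor obtains them by solving the one-dimensional torsional Schrödinger equation):

    # Sketch: 1-D torsional partition function by eigenvalue summation,
    # Q(T) = sum_i exp(-(E_i - E_0) / (kB * T)). Levels are placeholders.
    import math

    KB = 3.166811563e-6   # Boltzmann constant, hartree per kelvin

    def q_torsion(levels, T):
        e0 = min(levels)
        return sum(math.exp(-(e - e0) / (KB * T)) for e in levels)

    levels = [0.0, 0.0005, 0.0011, 0.0018, 0.0026]   # hartree, placeholders
    for T in (200.0, 298.15, 500.0):
        print(f"T = {T:6.1f} K   Q = {q_torsion(levels, T):.3f}")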
Clearing a Path: The 16-Bit Operating System Jungle Offers Confusion, Not Standardization.
ERIC Educational Resources Information Center
Pournelle, Jerry
1984-01-01
Discusses the design and limited uses of the Pascal, MS-DOS, CP/M, and PC-DOS operating systems as standard operating systems for 16-bit microprocessors, especially with the more sophisticated microcomputers currently being developed. Advantages and disadvantages of Unix--a multitasking, multiuser operating system--as a standard operating system…
Effect of Minimalist Footwear on Running Efficiency
Gillinov, Stephen M.; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M.
2015-01-01
Background: Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study Design: Randomized crossover trial. Level of Evidence: Level 3. Methods: Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Results: Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. Conclusion: When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. Clinical Relevance: With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes. PMID:26131304
PMARC_12 - PANEL METHOD AMES RESEARCH CENTER, VERSION 12
NASA Technical Reports Server (NTRS)
Ashby, D. L.
1994-01-01
Panel method computer programs are software tools of moderate cost used for solving a wide range of engineering problems. The panel code PMARC_12 (Panel Method Ames Research Center, version 12) can compute the potential flow field around complex three-dimensional bodies such as complete aircraft models. PMARC_12 is a well-documented, highly structured code with an open architecture that facilitates modifications and the addition of new features. Adjustable arrays are used throughout the code, with dimensioning controlled by a set of parameter statements contained in an include file; thus, the size of the code (i.e. the number of panels that it can handle) can be changed very quickly. This allows the user to tailor PMARC_12 to specific problems and computer hardware constraints. In addition, PMARC_12 can be configured (through one of the parameter statements in the include file) so that the code's iterative matrix solver is run entirely in RAM, rather than reading a large matrix from disk at each iteration. This significantly increases the execution speed of the code, but it requires a large amount of RAM memory. PMARC_12 contains several advanced features, including internal flow modeling, a time-stepping wake model for simulating either steady or unsteady (including oscillatory) motions, a Trefftz plane induced drag computation, off-body and on-body streamline computations, and computation of boundary layer parameters using a two-dimensional integral boundary layer method along surface streamlines. In a panel method, the surface of the body over which the flow field is to be computed is represented by a set of panels. Singularities are distributed on the panels to perturb the flow field around the body surfaces. PMARC_12 uses constant strength source and doublet distributions over each panel, thus making it a low order panel method. Higher order panel methods allow the singularity strength to vary linearly or quadratically across each panel. Experience has shown that low order panel methods can provide nearly the same accuracy as higher order methods over a wide range of cases with significantly reduced computation times; hence, the low order formulation was adopted for PMARC_12. The flow problem is solved by modeling the body as a closed surface dividing space into two regions: the region external to the surface in which an unknown velocity potential exists representing the flow field of interest, and the region internal to the surface in which a known velocity potential (representing a fictitious flow) is prescribed as a boundary condition. Both velocity potentials are required to satisfy Laplace's equation. A surface integral equation for the unknown potential external to the surface can be written by applying Green's Theorem to the external region. Using the internal potential and zero flow through the surface as boundary conditions, the unknown potential external to the surface can be solved for. When the internal flow option, which allows the analysis of closed ducts, wind tunnels, and similar internal flow problems, is selected, the geometry is modeled such that the flow field of interest is inside the geometry and the fictitious flow is outside the geometry. Items such as wings, struts, or aircraft models can be included in the internal flow problem. The time-stepping wake model gives PMARC_12 the ability to model both steady and unsteady flow problems. The wake is convected downstream from the wake-separation line by the local velocity field. 
With each time step, a new row of wake panels is added to the wake at the wake-separation line. Time stepping can start from time t=0 (no initial wake) or from time t=t0 (an initial wake is specified). A wide range of motions can be prescribed, including constant rates of translation, constant rate of rotation about an arbitrary axis, oscillatory translation, and oscillatory rotation about any of the three coordinate axes. Investigators interested in a visual representation of the phenomenon they are studying with PMARC_12 may want to consider obtaining the program GVS (ARC-13361), the General Visualization System. GVS is a Silicon Graphics IRIS program which was created for the purpose of supporting the scientific visualization needs of PMARC_12. GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input. This makes the code fairly machine independent. A compiler which supports the NAMELIST extension is required. The amount of free disk space and RAM memory required for PMARC_12 will vary depending on how the code is dimensioned using the parameter statements in the include file. The recommended minimum requirements are 20Mb of free disk space and 4Mb of RAM. PMARC_12 has been successfully implemented on a Macintosh II running System 6.0.7 or 7.0 (using MPW/Language Systems Fortran 3.0), a Sun SLC running SunOS 4.1.1, an HP 720 running HP-UX 8.07, an SGI IRIS running IRIX 4.0 (it will not run under IRIX 3.x.x without modifications), an IBM RS/6000 running AIX, a DECstation 3100 running ULTRIX, and a CRAY-YMP running UNICOS 6.0 or later. Due to its memory requirements, this program does not readily lend itself to implementation on MS-DOS based machines. The standard distribution medium for PMARC_12 is a set of three 3.5 inch 800K Macintosh format diskettes and one 3.5 inch 1.44Mb Macintosh format diskette which contains an electronic copy of the documentation in MS Word 5.0 format for the Macintosh. Alternate distribution media and formats are available upon request, but these will not include the electronic version of the document. No executables are included on the distribution media. This program is an update to PMARC version 11, which was released in 1989. PMARC_12 was released in 1993. It is available only for use by United States citizens.
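The panel-method solve described above reduces to assembling a dense influence matrix and solving a linear system for the singularity strengths. A generic Python skeleton with a placeholder kernel, not PMARC_12's constant source/doublet formulation:

    # Sketch: A[i][j] is the effect of a unit-strength singularity on panel j
    # at control point i; solving A * strengths = rhs enforces the boundary
    # condition (no flow through the surface).
    import numpy as np

    def influence(p_i, p_j):
        # placeholder kernel; PMARC_12 uses constant source/doublet panels
        r = np.linalg.norm(p_i - p_j)
        return 1.0 / (2.0 * np.pi * r) if r > 1e-12 else 0.5  # stand-in self term

    theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
    pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)    # control points
    A = np.array([[influence(pi, pj) for pj in pts] for pi in pts])
    rhs = -pts[:, 0]              # stand-in for the freestream normal component
    strengths = np.linalg.solve(A, rhs)
    print(np.round(strengths, 3))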
DataPlus - a revolutionary applications generator for DOS hand-held computers
David Dean; Linda Dean
2000-01-01
DataPlus allows the user to easily design data collection templates for DOS-based hand-held computers that mimic clipboard data sheets. The user designs and tests the application on the desktop PC and then transfers it to a DOS field computer. Other features include: error checking, missing data checks, and sensor input from RS-232 devices such as bar code wands,...
TLIFE: a Program for Spur, Helical and Spiral Bevel Transmission Life and Reliability Modeling
NASA Technical Reports Server (NTRS)
Savage, M.; Prasanna, M. G.; Rubadeux, K. L.
1994-01-01
This report describes a computer program, 'TLIFE', which models the service life of a transmission. The program is written in ANSI standard Fortran 77 and has an executable size of about 157 K bytes for use on a personal computer running DOS. It can also be compiled and executed in UNIX. The computer program can analyze any one of eleven unit transmissions either singly or in a series combination of up to twenty-five unit transmissions. Metric or English unit calculations are performed with the same routines using consistent input data and a units flag. Primary outputs are the dynamic capacity of the transmission and the mean lives of the transmission and of the sum of its components. The program uses a modular approach to separate the load analyses from the system life calculations. The program and its input and output data files are described herein. Three examples illustrate its use. A development of the theory behind the analysis in the program is included after the examples.
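If the component lives follow two-parameter Weibull distributions with a common slope, the series-system combination that underlies life models of this kind is the standard relation sketched below (an assumption for illustration; the report's formulation may differ in detail):

    # Sketch: strict-series Weibull system life. If each component's life has
    # a two-parameter Weibull distribution with common slope e, the system
    # life at a fixed reliability satisfies L_sys = (sum_i L_i^(-e))^(-1/e).
    def system_life(component_lives, slope=1.125):   # 1.125: a typical rolling-element slope
        return sum(l ** -slope for l in component_lives) ** (-1.0 / slope)

    gears_and_bearings = [900.0, 1500.0, 2200.0, 3100.0]   # hours, placeholders
    print(f"system life = {system_life(gears_and_bearings):.0f} hours")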
Space station operating system study
NASA Technical Reports Server (NTRS)
Horn, Albert E.; Harwell, Morris C.
1988-01-01
The current phase of the Space Station Operating System study is based on the analysis, evaluation, and comparison of the operating systems implemented on the computer systems and workstations in the software development laboratory. Primary emphasis has been placed on the DEC MicroVMS operating system as implemented on the MicroVax II computer, with comparative analysis of the SUN UNIX system on the SUN 3/260 workstation computer, and to a limited extent, the IBM PC/AT microcomputer running PC-DOS. Some benchmark development and testing was also done for the Motorola MC68010 (VM03 system) before the system was taken from the laboratory. These systems were studied with the objective of determining their capability to support Space Station software development requirements, specifically for multi-tasking and real-time applications. The methodology utilized consisted of development, execution, and analysis of benchmark programs and test software, and the experimentation and analysis of specific features of the system or compilers in the study.
Fukuchi, Reginaldo K; Fukuchi, Claudiane A; Duarte, Marcos
2017-01-01
The goals of this study were (1) to present the set of data evaluating running biomechanics (kinematics and kinetics), including data on running habits, demographics, and levels of muscle strength and flexibility made available at Figshare (DOI: 10.6084/m9.figshare.4543435); and (2) to examine the effect of running speed on selected gait-biomechanics variables related to both running injuries and running economy. The lower-extremity kinematics and kinetics data of 28 regular runners were collected using a three-dimensional (3D) motion-capture system and an instrumented treadmill while the subjects ran at 2.5 m/s, 3.5 m/s, and 4.5 m/s wearing standard neutral shoes. A dataset comprising raw and processed kinematics and kinetics signals pertaining to this experiment is available in various file formats. In addition, a file of metadata, including demographics, running characteristics, foot-strike patterns, and muscle strength and flexibility measurements is provided. Overall, there was an effect of running speed on most of the gait-biomechanics variables selected for this study. However, the foot-strike patterns were not affected by running speed. Several applications of this dataset can be anticipated, including testing new methods of data reduction and variable selection; for educational purposes; and answering specific research questions. This last application was exemplified in the study's second objective.
Acute changes in knee cartilage transverse relaxation time after running and bicycling.
Gatti, Anthony A; Noseworthy, Michael D; Stratford, Paul W; Brenneman, Elora C; Totterman, Saara; Tamez-Peña, José; Maly, Monica R
2017-02-28
To compare the acute effect of running and bicycling of an equivalent cumulative load on knee cartilage composition and morphometry in healthy young men. A secondary analysis investigated the relationship between activity history and the change in cartilage composition after activity. In fifteen men (25.8 ± 4.2 years), the vertical ground reaction force was measured to determine the cumulative load exposure of a 15-min run. The vertical pedal reaction force was recorded during bicycling to define the bicycling duration of an equivalent cumulative load. On separate visits that were spaced on average 17 days apart, participants completed these running and bicycling bouts. Mean cartilage transverse relaxation times (T2) were determined for cartilage on the tibia and weight-bearing femur before and after each exercise. T2 was measured using a multi-echo spin-echo sequence and 3T MRI. Cartilage of the weight-bearing femur and tibia was segmented using a highly-automated segmentation algorithm. Activity history was captured using the International Physical Activity Questionnaire. The response of T2 to bicycling and running was different (p = 0.019; mean T2: pre-running = 34.27 ms, pre-bicycling = 32.93 ms, post-running = 31.82 ms, post-bicycling = 32.36 ms). While bicycling produced no change (-1.7%, p = 0.300), running shortened T2 (-7.1%, p < 0.001). Greater activity history predicted smaller changes in tibial, but not femoral, T2. Changes in knee cartilage vary based on activity type, independent of total load exposure, in healthy young men. Smaller changes in T2 were observed after bicycling relative to running. Activity history was inversely related to tibial T2, suggesting cartilage conditioning. Copyright © 2017 Elsevier Ltd. All rights reserved.
Liew, Bernard X W; Morris, Susan; Netto, Kevin
2016-06-01
Investigating the impact of incremental load magnitude on running joint power and kinematics is important for understanding the energy cost burden and potential injury-causative mechanisms associated with load carriage. It was hypothesized that incremental load magnitude would result in phase-specific, joint power and kinematic changes within the stance phase of running, and that these relationships would vary at different running velocities. Thirty-one participants performed running while carrying three load magnitudes (0%, 10%, 20% body weight), at three velocities (3, 4, 5m/s). Lower limb trajectories and ground reaction forces were captured, and global optimization was used to derive the variables. The relationships between load magnitude and joint power and angle vectors, at each running velocity, were analyzed using Statistical Parametric Mapping Canonical Correlation Analysis. Incremental load magnitude was positively correlated to joint power in the second half of stance. Increasing load magnitude was also positively correlated with alterations in three dimensional ankle angles during mid-stance (4.0 and 5.0m/s), knee angles at mid-stance (at 5.0m/s), and hip angles during toe-off (at all velocities). Post hoc analyses indicated that at faster running velocities (4.0 and 5.0m/s), increasing load magnitude appeared to alter power contribution in a distal-to-proximal (ankle→hip) joint sequence from mid-stance to toe-off. In addition, kinematic changes due to increasing load influenced both sagittal and non-sagittal plane lower limb joint angles. This study provides a list of plausible factors that may influence running energy cost and injury risk during load carriage running. Copyright © 2016 Elsevier B.V. All rights reserved.
Modular use of human body models of varying levels of complexity: Validation of head kinematics.
Decker, William; Koya, Bharath; Davis, Matthew L; Gayzik, F Scott
2017-05-29
The significant computational resources required to execute detailed human body finite-element models have motivated the development of faster-running, simplified models (e.g., GHBMC M50-OS). Previous studies have demonstrated the ability to modularly incorporate the validated GHBMC M50-O brain model into the simplified model (GHBMC M50-OS+B), which allows for localized analysis of the brain in a fraction of the computation time required for the detailed model. The objective of this study is to validate the head and neck kinematics of the GHBMC M50-O and M50-OS (detailed and simplified versions of the same model) against human volunteer test data in frontal and lateral loading. Furthermore, the effect of modular insertion of the detailed brain model into the M50-OS is quantified. Data from the Navy Biodynamics Laboratory (NBDL) human volunteer studies, including a 15g frontal, 8g frontal, and 7g lateral impact, were reconstructed and simulated using LS-DYNA. A five-point restraint system was used for all simulations, and initial positions of the models were matched with volunteer data using settling and positioning techniques. Both the frontal and lateral simulations were run with the M50-O, M50-OS, and M50-OS+B with active musculature for a total of nine runs. Normalized run times for the various models used in this study were 8.4 min/ms for the M50-O, 0.26 min/ms for the M50-OS, and 0.97 min/ms for the M50-OS+B, a 32- and 9-fold reduction in run time, respectively. Corridors were reanalyzed for head and T1 kinematics from the NBDL studies. Qualitative evaluation of head rotational accelerations and linear resultant acceleration, as well as linear resultant T1 acceleration, showed reasonable results between all models and the experimental data. Objective evaluation of the results for head center of gravity (CG) accelerations was completed via ISO TS 18571, and indicated scores of 0.673 (M50-O), 0.638 (M50-OS), and 0.656 (M50-OS+B) for the 15g frontal impact. Scores at lower g levels yielded similar results: 0.667 (M50-O), 0.675 (M50-OS), and 0.710 (M50-OS+B) for the 8g frontal impact. The 7g lateral simulations also compared fairly well, with an average ISO score of 0.565 for the M50-O, 0.634 for the M50-OS, and 0.606 for the M50-OS+B. The three HBMs experienced similar head and neck motion in the frontal simulations, but the M50-O predicted significantly greater head rotation in the lateral simulation. The greatest departure from the detailed occupant models was noted in lateral flexion, potentially indicating the need for further study. Precise modeling of the belt system, however, was limited by available data. A sensitivity study of these parameters in the frontal condition showed that belt slack and muscle activation have a modest effect on the ISO score. The reduction in computation time of the M50-OS+B reduces the burden of high computational requirements when handling detailed HBMs. Future work will focus on harmonizing the lateral head response of the models and studying localized injury criteria within the brain from the M50-O and M50-OS+B.
NLM microcomputer-based tutorials (for microcomputers). Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkins, M.
1990-04-01
The package consists of TOXLEARN--a microcomputer-based training package for TOXLINE (Toxicology Information Online), CHEMLEARN--a microcomputer-based training package for CHEMLINE (Chemical Information Online), MEDTUTOR--a microcomputer-based training package for MEDLINE (Medical Information Online), and ELHILL LEARN--a microcomputer-based training package for the ELHILL search and retrieval software that supports the above-mentioned databases. Software Description: The programs were developed under PILOTplus using the NLM LEARN Programmer. They run on IBM-PC, XT, AT, PS/2, and fully compatible computers. The programs require 512K of RAM, one disk drive, and DOS 2.0 or higher. The software supports most monochrome, color graphics, enhanced color graphics, or video graphics displays.
Molecular Diagnostics for the Study of Hypersonic Flows
2000-04-01
(Figure captions recovered from extraction residue: Figure 4, schematic diagram of the discharge apparatus with grounded electrode; Figure 5, typical F4 high-enthalpy wind tunnel run [21], flow at 90 ms, convection imaged 5 μs after beam emission; Figure 6, velocity profile at 90 ms run. Surviving text fragments note that the fast electrons exit the anode disk, that one term accounts for the classical phenomena like absorption and refraction, and that χ(2) is the second-order…)
Zhang, Zhengxiang; Yan, Bo; Liu, Kelin; Liao, Yiping; Liu, Huwei
2009-01-01
The first application of charged polymer-protected gold nanoparticles (Au NPs) as a semi-permanent capillary coating in CE-MS is presented. Poly(diallyldimethylammonium chloride) (PDDA) served as the sole reducing and stabilizing agent for Au NP preparation. A stable and repeatable coating with good tolerance to 0.1 M HCl, methanol, and ACN was obtained via a simple rinsing procedure. The Au NPs enhanced the coating's stability toward flushing by methanol, improved run-to-run and capillary-to-capillary repeatability, and improved the separation efficiency of heroin and its basic impurities for tracing the geographical origins of illicit samples. Baseline resolution of eight heroin-related alkaloids was achieved on the PDDA-protected Au NPs-coated capillary under the optimum conditions: 120 mM ammonium acetate (pH 5.2) with 13% methanol added, separation temperature 20 degrees C, applied voltage -20 kV, and capillary effective length 60.0 cm. CE-MS analysis with run-to-run RSDs (n=5) of migration time in the range of 0.43-0.62% and RSDs (n=5) of peak area in the range of 1.49-4.68% was obtained. The established CE-MS method offers sensitive detection and confident identification of heroin and related compounds and provides an alternative to LC-MS and GC-MS for illicit drug control.
Fiesta, Matthew P; Eagleman, David M
2008-09-15
As the frequency of a flickering light is increased, the perception of flicker is replaced by the perception of steady light at what is known as the critical flicker fusion threshold (CFFT). This threshold provides a useful measure of the brain's information processing speed, and has been used in medicine for over a century both for diagnostic and drug efficacy studies. However, the hardware for presenting the stimulus has not advanced to take advantage of computers, largely because the refresh rates of typical monitors are too slow to provide fine-grained changes in the alternation rate of a visual stimulus. For example, a cathode ray tube (CRT) computer monitor running at 100 Hz will render a new frame every 10 ms, thus restricting the period of a flickering stimulus to multiples of 20 ms. These multiples provide a temporal resolution far too low to make precise threshold measurements, since typical CFFT values are in the neighborhood of 35 ms. We describe here a simple and novel technique to enable alternating images at several closely spaced periods on a standard monitor. The key to our technique is to programmatically control the video card to dynamically reset the refresh rate of the monitor. Different refresh rates allow slightly different frame durations; this can be leveraged to vastly increase the resolution of stimulus presentation times. This simple technique opens new inroads for experiments on computers that require more finely spaced temporal resolution than a monitor at a single, fixed refresh rate can allow.
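To make the sampling argument concrete: at a refresh rate of R Hz each frame lasts 1000/R ms, and a flicker built from n light frames followed by n dark frames has a period of 2n·(1000/R) ms. The sketch below (the swept refresh rates are illustrative, not the authors' exact set) shows how varying R fills in periods that a single 100 Hz monitor cannot produce:

```python
def flicker_periods_ms(refresh_hz, max_frames=5):
    """Achievable flicker periods: each half-cycle is a whole number of frames."""
    frame_ms = 1000.0 / refresh_hz
    return [2 * n * frame_ms for n in range(1, max_frames + 1)]

# A single 100 Hz monitor only reaches multiples of 20 ms:
print(flicker_periods_ms(100))                    # [20.0, 40.0, 60.0, 80.0, 100.0]

# Sweeping the refresh rate fills in the gaps near a ~35 ms threshold:
rates_hz = [60, 75, 85, 100, 120]                 # illustrative, not from the paper
periods = sorted({round(p, 2) for r in rates_hz for p in flicker_periods_ms(r)})
print([p for p in periods if 25 <= p <= 45])      # e.g. [26.67, 33.33, 40.0]
```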
Peer mentorship in student-run free clinics: the impact on preclinical education.
Choudhury, Noura; Khanwalkar, Ashoke; Kraninger, Jennifer; Vohra, Adam; Jones, Kohar; Reddy, Shalini
2014-03-01
Our study examines the perceptions of first-year medical students (MS1s) toward fourth-year colleagues (MS4s) in student-run free clinics to investigate the impact of peer mentorship on augmenting the clinical education received by MS1s in a primary care setting. To our knowledge, this is the first study examining the impact of MS4 mentorship in free clinics. A 55-item online questionnaire was administered to MS1s 9 months after matriculation in April 2012. Questions focused on MS1 perceptions of MS4 impact on comfort with patients, self-reported improvement in clinical skills, and overall satisfaction with mentorship in free clinics. The MS4s referenced in the questionnaire were enrolled in a longitudinal service-learning elective. Results were analyzed using a one-sample Wilcoxon signed-rank median test and ordered logistic regression with STATA software. Fifty-five of 77 (71.4%) eligible students began the online survey, with 48 (62.3%) completing it. Responses reflected experiences at four student-run free clinics. Overall, MS4 presence improved MS1 comfort with patients and enhanced interactions with attendings. MS1s were satisfied with the level of MS4 mentorship and agreed that MS4s had a mentoring role distinct from that of attendings. Ordered logistic regression showed that the presence of MS4s was significantly associated with self-reported improvements in physical exam skills at one clinic. At each clinic, MS1s reported improved comfort with patients and satisfaction with the mentorship received from MS4s. MS4s did not merely duplicate the role of attending physicians but enhanced interactions between MS1s and physicians. This suggests that the consistent presence of MS4s is a valuable adjunct to the educational experience of free clinic volunteering for MS1s.
Improving the visualization of 3D ultrasound data with 3D filtering
NASA Astrophysics Data System (ADS)
Shamdasani, Vijay; Bae, Unmin; Managuli, Ravi; Kim, Yongmin
2005-04-01
3D ultrasound imaging is quickly gaining widespread clinical acceptance as a visualization tool that allows clinicians to obtain unique views not available with traditional 2D ultrasound imaging and an accurate understanding of patient anatomy. The ability to acquire, manipulate and interact with the 3D data in real time is an important feature of 3D ultrasound imaging. Volume rendering is often used to transform the 3D volume into 2D images for visualization. Unlike computed tomography (CT) and magnetic resonance imaging (MRI), volume rendering of 3D ultrasound data creates noisy images in which surfaces cannot be readily discerned due to speckles and low signal-to-noise ratio. The degrading effect of speckles is especially severe when gradient shading is performed to add depth cues to the image. Several researchers have reported that smoothing the pre-rendered volume with a 3D convolution kernel, such as 5x5x5, can significantly improve the image quality, but at the cost of decreased resolution. In this paper, we have analyzed the reasons for the improvement in image quality with 3D filtering and determined that the improvement is due to two effects. The filtering reduces speckles in the volume data, which leads to (1) more accurate gradient computation and better shading and (2) decreased noise during compositing. We have found that applying a moderate-size smoothing kernel (e.g., 7x7x7) to the volume data before gradient computation combined with some smoothing of the volume data (e.g., with a 3x3x3 lowpass filter) before compositing yielded images with good depth perception and no appreciable loss in resolution. Providing the clinician with the flexibility to control both of these effects (i.e., shading and compositing) independently could improve the visualization of the 3D ultrasound data. Introducing this flexibility into the ultrasound machine requires 3D filtering to be performed twice on the volume data, once before gradient computation and again before compositing. 3D filtering of an ultrasound volume containing millions of voxels requires a large amount of computation, and doing it twice decreases the number of frames that can be visualized per second. To address this, we have developed several techniques to make computation efficient. For example, we have used the moving average method to filter a 128x128x128 volume with a 3x3x3 boxcar kernel in 17 ms on a single MAP processor running at 400 MHz. The same methods reduced the computing time on a Pentium 4 running at 3 GHz from 110 ms to 62 ms. We believe that our proposed method can improve 3D ultrasound visualization without sacrificing resolution and incurring an excessive computing time.
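The moving-average method mentioned above works because a boxcar kernel is separable: a k x k x k box filter reduces to three 1-D running means, each computable with a constant number of operations per voxel regardless of k. A minimal NumPy paraphrase of the idea (not the MAP-processor implementation; edges are handled in 'valid' mode for brevity):

```python
import numpy as np

def running_mean(a, k, axis):
    """1-D running mean of window k along `axis`, via cumulative sums."""
    n = a.shape[axis]
    c = np.cumsum(a, axis=axis, dtype=np.float32)
    first = np.take(c, [k - 1], axis=axis)                 # sum of the first window
    rest = (np.take(c, range(k, n), axis=axis)
            - np.take(c, range(0, n - k), axis=axis))      # sliding window sums
    return np.concatenate([first, rest], axis=axis) / k

def boxcar3d(volume, k=3):
    """Separable k x k x k boxcar filter: three 1-D passes."""
    out = volume.astype(np.float32)
    for axis in range(3):
        out = running_mean(out, k, axis)
    return out

vol = np.random.rand(128, 128, 128).astype(np.float32)
smoothed = boxcar3d(vol)   # (126, 126, 126); cost per voxel is independent of k
```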
Kosonen, Jukka; Kulmala, Juha-Pekka; Müller, Erich; Avela, Janne
2017-03-21
Anti-pronation orthoses, like medially posted insoles (MPI), have traditionally been used to treat a variety of lower limb problems. Yet we know surprisingly little about their effects on overall foot motion and lower limb mechanics across walking and running, which represent highly different loading conditions. To address this issue, multi-segment foot and lower limb mechanics were examined in 11 overpronating men with normal (NORM) and MPI insoles during walking (self-selected speed, 1.70±0.19 m/s vs 1.72±0.20 m/s, respectively) and running (4.04±0.17 m/s vs 4.10±0.13 m/s, respectively). The kinematic results showed that MPI reduced the peak forefoot eversion movement with respect to both the hindfoot and the tibia across walking and running when compared to NORM (p<0.05-0.01). No differences were found in hindfoot eversion between conditions. The kinetic results showed no insole effects in walking, but during running MPI shifted the center of pressure medially under the foot (p<0.01), leading to an increase in frontal plane moments at the hip (p<0.05) and knee (p<0.05) joints and a reduction at the ankle joint (p<0.05). These findings indicate that MPI primarily controlled forefoot motion across walking and running. While the kinetic response to MPI was more pronounced in running than walking, the kinematic effects were essentially similar across both modes. This suggests that despite the higher loads placed upon the lower limb during running, a stiffer insole is not needed to achieve a reduction in forefoot motion similar to that in walking. Copyright © 2017 Elsevier Ltd. All rights reserved.
Global EOS: exploring the 300-ms-latency region
NASA Astrophysics Data System (ADS)
Mascetti, L.; Jericho, D.; Hsu, C.-Y.
2017-10-01
EOS, the CERN open-source distributed disk storage system, provides the high-performance storage solution for HEP analysis and the back-end for various work-flows. Recently EOS became the back-end of CERNBox, the cloud synchronisation service for CERN users. EOS can take advantage of wide-area distributed installations: for the last few years CERN EOS has used a common deployment across two computer centres (Geneva-Meyrin and Budapest-Wigner) about 1,000 km apart (∼20 ms latency) with about 200 PB of disk (JBOD). In late 2015, the CERN-IT Storage group and AARNET (Australia) set up a challenging R&D project: a single EOS instance spanning CERN and AARNET with more than 300 ms latency (16,500 km apart). This paper reports on the successful deployment and operation of a distributed storage system across Europe (Geneva, Budapest), Australia (Melbourne) and later Asia (ASGC Taipei), allowing different types of data placement and data access across these four sites.
HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE
NASA Technical Reports Server (NTRS)
De Salvo, L. J.
1994-01-01
HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value of consumer's risk and fraction nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculating the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan, but only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
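The worked example is straightforward to reproduce. Under a zero-acceptance plan the lot passes only if the sample contains no nonconforming units, so the minimum sample size is the smallest n for which the hypergeometric probability of drawing zero nonconforming units falls at or below 1 minus the confidence level. A sketch of that search (the same arithmetic HYPERSAMP performs iteratively; SciPy handles the factorials here):

```python
from scipy.stats import hypergeom

def min_sample_size(lot, fraction_nonconforming, confidence):
    """Smallest n with P(0 nonconforming in sample) <= 1 - confidence (acceptance number 0)."""
    defects = round(lot * fraction_nonconforming)   # 400 units at 1.0% -> 4 nonconforming
    risk = 1.0 - confidence
    for n in range(1, lot + 1):
        if hypergeom.pmf(0, lot, defects, n) <= risk:
            return n
    return lot

print(min_sample_size(400, 0.01, 0.99))   # 273, vs. 400 under the Binomial plan
```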
PIPI: PTM-Invariant Peptide Identification Using Coding Method.
Yu, Fengchao; Li, Ning; Yu, Weichuan
2016-12-02
In computational proteomics, the identification of peptides with an unlimited number of post-translational modification (PTM) types is a challenging task. The computational cost associated with database search increases exponentially with respect to the number of modified amino acids and linearly with respect to the number of potential PTM types at each amino acid. The problem becomes intractable very quickly if we want to enumerate all possible PTM patterns. To address this issue, one group of methods, named restricted tools (including Mascot, Comet, and MS-GF+), allows only a small number of PTM types in the database search process. Alternatively, the other group of methods, named unrestricted tools (including MS-Alignment, ProteinProspector, and MODa), avoids enumerating PTM patterns with an alignment-based approach to localizing and characterizing modified amino acids. However, because of the large search space and the PTM localization issue, the sensitivity of these unrestricted tools is low. This paper proposes a novel method named PIPI to achieve PTM-invariant peptide identification. PIPI belongs to the category of unrestricted tools. It first codes peptide sequences into Boolean vectors and codes experimental spectra into real-valued vectors. For each coded spectrum, it then searches the coded sequence database to find the top-scored peptide sequences as candidates. After that, PIPI uses dynamic programming to localize and characterize modified amino acids in each candidate. We used simulation experiments and real data experiments to evaluate the performance in comparison with restricted tools (i.e., Mascot, Comet, and MS-GF+) and unrestricted tools (i.e., Mascot with error tolerant search, MS-Alignment, ProteinProspector, and MODa). Comparison with restricted tools shows that PIPI has comparable sensitivity and running speed. Comparison with unrestricted tools shows that PIPI has the highest sensitivity except for Mascot with error tolerant search and ProteinProspector. These two tools simplify the task by considering at most one modified amino acid in each peptide, which results in higher sensitivity but has difficulty dealing with multiple modified amino acids. The simulation experiments also show that PIPI has the lowest false discovery proportion, the highest PTM characterization accuracy, and the shortest running time among the unrestricted tools.
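A toy illustration of the coding idea, not PIPI's actual encoding: peptides can be coded as Boolean vectors over short sequence tags, spectra as real-valued vectors of tag evidence, and candidates ranked by inner product as a crude stand-in for PIPI's scoring:

```python
import itertools
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
TAGS = ["".join(t) for t in itertools.product(AA, repeat=2)]   # all length-2 tags
IDX = {t: i for i, t in enumerate(TAGS)}

def code_peptide(seq):
    """Boolean vector marking which 2-mer tags occur in the sequence."""
    v = np.zeros(len(TAGS), dtype=bool)
    for i in range(len(seq) - 1):
        v[IDX[seq[i:i + 2]]] = True
    return v

def code_spectrum(tag_evidence):
    """Real-valued vector of tag evidence (real tools infer tags from peak mass gaps)."""
    v = np.zeros(len(TAGS), dtype=np.float32)
    for tag, weight in tag_evidence.items():
        v[IDX[tag]] = weight
    return v

database = ["PEPTIDE", "ELVISLIVES", "EDITPEP"]
spectrum = code_spectrum({"PE": 0.9, "EP": 0.8, "TI": 0.5, "DE": 0.4})
scores = {p: float(code_peptide(p) @ spectrum) for p in database}
print(max(scores, key=scores.get))   # "PEPTIDE": the most tags supported by the spectrum
```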
ARNICA: the Arcetri Observatory NICMOS3 imaging camera
NASA Astrophysics Data System (ADS)
Lisi, Franco; Baffa, Carlo; Hunt, Leslie K.
1993-10-01
ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1" per pixel, with sky coverage of more than 4' X 4' on the NICMOS 3 (256 X 256 pixels, 40 micrometers side) detector array. The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature is 76 K. The camera is remotely controlled by a 486 PC, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the 486 PC, acquires and stores the frames, and controls the timing of the array. We give an estimate of performance, in terms of sensitivity with an assigned observing time, along with some details on the main parameters of the NICMOS 3 detector.
ARNICA, the NICMOS 3 imaging camera of TIRGO.
NASA Astrophysics Data System (ADS)
Lisi, F.; Baffa, C.; Hunt, L.; Stanga, R.
ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 μm that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1″ per pixel, with sky coverage of more than 4′×4′ on the NICMOS 3 (256×256 pixels, 40 μm side) detector array. The camera is remotely controlled by a PC 486, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the PC 486, acquires and stores the frames, and controls the timing of the array. The camera is intended for imaging of large extra-galactic and Galactic fields; a large effort has been dedicated to explore the possibility of achieving precise photometric measurements in the J, H, K astronomical bands, with very promising results.
BCTC for Windows: Abstract of Issue 9903W
NASA Astrophysics Data System (ADS)
Whisnant, David M.; McCormick, James A.
1999-05-01
BCTC for Windows was originally published by JCE Software in 1992 (1) in Series B for PC-compatible (MS-DOS) computers. JCE Software is now re-releasing BCTC for Windows as issue 9903W to make it more accessible to Windows users, especially those running Windows 95 and Windows 98, while we continue to phase out Series B (DOS) issues. Aside from a new Windows-compatible installation program, BCTC is unchanged. BCTC is an environmental simulation modeled after the dioxin controversy (2). In the simulation, students are involved in the investigation of a suspected carcinogen called BCTC, which has been found in a river below a chemical plant and above the water supply of a nearby city. The students have the options of taking water samples, analyzing the water (for BCTC, oxygen, metals, and pesticides), determining LD50s in an animal lab, visiting a library, making economic analyses, and conferring with colleagues, all using the computer. In the classroom, BCTC gives students experience with science in the context of a larger social and political problem. It can serve as the basis for a scientific report, class discussion, or a role-playing exercise (3). Because it requires no previous laboratory experience, this simulation can be used by students in middle and high school science classes, or in college courses for non-science majors. It also has been used in introductory chemistry courses for science majors. One of the intentions of BCTC is to involve students in an exercise (2) that closely approximates what scientists do. The realistic pictures, many of them captured with a video camera, create an atmosphere that furthers this goal. BCTC also reflects the comments of teachers who have used the program (4) and accounts of dioxin research (5). [Screen capture: location of the effluent entry into the river, the city, and the city water supply.]
MicroSPE-nanoLC-ESI-MS/MS Using 10-μm-i.d. Silica-Based Monolithic Columns for Proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Quanzhou; Page, Jason S.; Tang, Keqi
2007-01-01
Silica-based monolithic narrow-bore capillary columns (25 cm x 10 µm i.d.) with an integrated nanoESI emitter have been developed to provide high-quality and robust microSPE-nanoLC-ESI-MS analyses. The integrated nanoESI emitter adds no dead volume to the LC separation, allowing stable electrospray performance to be obtained at flow rates of ~10 nL/min. In an initial application we identified 5510 unique peptides covering 1443 distinct Shewanella oneidensis proteins from a 300 ng tryptic digest sample in a single 4-h LC-MS/MS analysis using a linear ion trap MS (LTQ). We found that the use of an integrated monolithic ESI emitter provided enhanced resistance to clogging and good run-to-run reproducibility.
DOT National Transportation Integrated Search
1973-06-01
This manual describes the internal workings of the Disk Operating System (DOS-32) for the Honeywell H-632 computer. DOS-32 is a core-resident, single-user, console-oriented operating system written primarily in FORTRAN. A companion document DOS-32 ...
DOT National Transportation Integrated Search
1973-01-01
This manual describes the internal workings of the Disk Operating System (DOS-32) for the Honeywell H-632 computer. DOS-32 is a core-resident, single-user, console-oriented operating system written primarily in FORTRAN. A companion document DOS-32 ...
Is midsole thickness a key parameter for the running pattern?
Chambon, Nicolas; Delattre, Nicolas; Guéguen, Nils; Berton, Eric; Rao, Guillaume
2014-01-01
Many studies have highlighted differences in foot strike pattern comparing habitually shod runners who ran barefoot and with running shoes. Barefoot running results in a flatter foot landing and a decreased vertical ground reaction force compared to shod running. The aim of this study was to investigate one possible parameter influencing the running pattern: midsole thickness. Fifteen participants ran overground at 3.3 m/s barefoot and with five shoes of different midsole thickness (0 mm, 2 mm, 4 mm, 8 mm, 16 mm) with no height difference between rearfoot and forefoot. Impact magnitude was evaluated using the transient peak of vertical ground reaction force, loading rate, and tibial acceleration peak and rate. Hip, knee and ankle flexion angles were computed at touch-down and during the stance phase (range of motion and maximum values). External net joint moments and stiffness for the hip, knee and ankle joints were also observed, as well as global leg stiffness. No significant effect of midsole thickness was observed on ground reaction force and tibial acceleration. However, contact time increased with midsole thickness. Barefoot running compared to shod running induced ankle plantar flexion at touch-down, higher ankle dorsiflexion and lower knee flexion during the stance phase. These adjustments are suspected to explain the absence of differences in ground reaction force and tibial acceleration. This study showed that the presence of a very thin upper and sole was sufficient to significantly influence the running pattern. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Polansky, A. C.
1982-01-01
A method for diagnosing surface parameters on a regional scale via geosynchronous satellite imagery is presented. Moisture availability, thermal inertia, atmospheric heat flux, and total evaporation are determined from three infrared images obtained from the Geostationary Operational Environmental Satellite (GOES). Three GOES images (early morning, midafternoon, and night) are obtained from computer tape. Two temperature-difference images are then created. The boundary-layer model is run, and its output is inverted via cubic regression equations. The satellite imagery is efficiently converted into output-variable fields. All computations are executed on a PDP 11/34 minicomputer. Output fields can be produced within one hour of the availability of aligned satellite subimages of a target area.
Li, Zhengdong; Zou, Donghua; Liu, Ningguo; Zhong, Liangwei; Shao, Yu; Wan, Lei; Huang, Ping; Chen, Yijiu
2013-06-10
The elucidation and prediction of the biomechanics of lower limb fractures could serve as a useful tool in forensic practice. Finite element (FE) analysis could potentially help in understanding the fracture mechanisms of lower limb fractures frequently caused by car-pedestrian accidents. Our aim was (1) to develop and validate a FE model of the human lower limb, (2) to assess the biomechanics of specific injuries under run-over and impact loading conditions, and (3) to reconstruct one real car-pedestrian collision case using the model created in this study. We developed a novel lower limb FE model and simulated three different loading scenarios. The geometry of the model was reconstructed using Mimics 13.0 based on computed tomography (CT) scans from an actual traffic accident. The material properties were based upon a synthesis of data found in the published literature. The FE model validation and injury reconstruction were conducted using the LS-DYNA code. The FE model was validated by comparing the simulation results of three-point bending and overall lateral impact tests with published postmortem human surrogate (PMHS) results. Simulated loading scenarios of running over the thigh with a wheel, impact on the upper leg, and impact on the lower thigh were conducted with velocities of 10 m/s, 20 m/s, and 40 m/s, respectively. We compared the injuries from one actual case with the simulated results in order to explore the possible fracture mechanism. The peak fracture forces, maximum bending moments, and energy loss ratio exhibited no significant differences between the FE simulations and the literature data. Under simulated run-over conditions, a segmental fracture pattern was formed, and the femur fracture patterns and mechanisms were consistent with the actual injury features of the case. Our study demonstrated that this simulation method could potentially be effective in identifying forensic cases and exploring the injury mechanisms of lower limb fractures due to inflicted lesions. This model can also help to distinguish between possible and impossible scenarios. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Graumann, Johannes; Scheltema, Richard A; Zhang, Yong; Cox, Jürgen; Mann, Matthias
2012-03-01
In the analysis of complex peptide mixtures by MS-based proteomics, many more peptides elute at any given time than can be identified and quantified by the mass spectrometer. This makes it desirable to optimally allocate peptide sequencing and narrow mass range quantification events. In computer science, intelligent agents are frequently used to make autonomous decisions in complex environments. Here we develop and describe a framework for intelligent data acquisition and real-time database searching and showcase selected examples. The intelligent agent is implemented in the MaxQuant computational proteomics environment, termed MaxQuant Real-Time. It analyzes data as it is acquired on the mass spectrometer, constructs isotope patterns and SILAC pair information as well as controls MS and tandem MS events based on real-time and prior MS data or external knowledge. Re-implementing a top10 method in the intelligent agent yields similar performance to the data dependent methods running on the mass spectrometer itself. We demonstrate the capabilities of MaxQuant Real-Time by creating a real-time search engine capable of identifying peptides "on-the-fly" within 30 ms, well within the time constraints of a shotgun fragmentation "topN" method. The agent can focus sequencing events onto peptides of specific interest, such as those originating from a specific gene ontology (GO) term, or peptides that are likely modified versions of already identified peptides. Finally, we demonstrate enhanced quantification of SILAC pairs whose ratios were poorly defined in survey spectra. MaxQuant Real-Time is flexible and can be applied to a large number of scenarios that would benefit from intelligent, directed data acquisition. Our framework should be especially useful for new instrument types, such as the quadrupole-Orbitrap, that are currently becoming available.
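The core of any data-dependent "topN" method is a small decision loop: rank survey-scan precursors by intensity, skip those on a dynamic-exclusion list, and queue the best N for fragmentation. A schematic sketch of that loop (the real agent additionally builds isotope patterns and SILAC pair information, omitted here):

```python
def pick_topn(survey_peaks, excluded, n=10, mz_tol=0.01):
    """survey_peaks: list of (mz, intensity); excluded: m/z values recently sequenced."""
    chosen = []
    for mz, intensity in sorted(survey_peaks, key=lambda p: -p[1]):
        if any(abs(mz - ex) < mz_tol for ex in excluded):
            continue                    # dynamic exclusion: already sequenced
        chosen.append(mz)
        if len(chosen) == n:
            break
    return chosen

survey = [(445.12, 3e6), (512.30, 9e5), (622.84, 4e6), (445.12, 2.9e6)]
print(pick_topn(survey, excluded=[445.12], n=2))   # [622.84, 512.3]
```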
In vivo behavior of the human soleus muscle with increasing walking and running speeds.
Lai, Adrian; Lichtwark, Glen A; Schache, Anthony G; Lin, Yi-Chung; Brown, Nicholas A T; Pandy, Marcus G
2015-05-15
The interaction between the muscle fascicle and tendon components of the human soleus (SO) muscle influences the capacity of the muscle to generate force and mechanical work during walking and running. In the present study, ultrasound-based measurements of in vivo SO muscle fascicle behavior were combined with an inverse dynamics analysis to investigate the interaction between the muscle fascicle and tendon components over a broad range of steady-state walking and running speeds: slow-paced walking (0.7 m/s) through to moderate-paced running (5.0 m/s). Irrespective of a change in locomotion mode (i.e., walking vs. running) or an increase in steady-state speed, SO muscle fascicles were found to exhibit minimal shortening compared with the muscle-tendon unit (MTU) throughout stance. During walking and running, the muscle fascicles contributed only 35 and 20% of the overall MTU length change and shortening velocity, respectively. Greater levels of muscle activity resulted in increasingly shorter SO muscle fascicles as locomotion speed increased, both of which facilitated greater tendon stretch and recoil. Thus the elastic tendon contributed the majority of the MTU length change during walking and running. When transitioning from walking to running near the preferred transition speed (2.0 m/s), greater, more economical ankle torque development is likely explained by the SO muscle fascicles shortening more slowly and operating on a more favorable portion (i.e., closer to the plateau) of the force-length curve. Copyright © 2015 the American Physiological Society.
Cheng, Keding; Sloan, Angela; McCorrister, Stuart; Peterson, Lorea; Chui, Huixia; Drebot, Mike; Nadon, Celine; Knox, J David; Wang, Gehua
2014-12-01
The need for rapid and accurate H typing is evident during Escherichia coli outbreak situations. This study explores the transition of MS-H, a method originally developed for rapid H antigen typing of E. coli using LC-MS/MS of flagella digests of reference strains and some clinical strains, to E. coli isolates in a clinical scenario through quantitative analysis and method validation. Motile and nonmotile strains were examined in batches to simulate a clinical sample scenario. Various LC-MS/MS batch run procedures and MS-H typing rules were compared and summarized through quantitative analysis of the MS-H data output to develop a standard method. Label-free quantitative data analysis of MS-H typing proved very useful for examining the quality of MS-H results and the effects of sample carryover from motile E. coli isolates. Based on this, a refined procedure and protein identification rule specific to clinical MS-H typing was established and validated. With an LC-MS/MS batch run procedure and database search parameters specific to E. coli MS-H typing, the standard procedure maintained high accuracy and specificity in clinical situations, and its potential for use in a clinical setting was clearly established. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
POSTMan (POST-translational modification analysis), a software application for PTM discovery.
Arntzen, Magnus Ø; Osland, Christoffer Leif; Raa, Christopher Rasch-Olsen; Kopperud, Reidun; Døskeland, Stein-Ove; Lewis, Aurélia E; D'Santos, Clive S
2009-03-01
Post-translationally modified peptides present in low concentrations are often not selected for CID, resulting in no sequence information for these peptides. We have developed a software application, POSTMan (POST-translational Modification analysis), allowing post-translationally modified peptides to be targeted for fragmentation. The software aligns LC-MS runs (MS(1) data) between individual runs or within a single run and isolates pairs of peptides which differ by a user-defined mass difference (post-translationally modified peptides). The method was validated for acetylated peptides and allowed an assessment of even the basal protein phosphorylation of phenylalanine hydroxylase (PHA) in intact cells.
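The pairing step reduces to scanning aligned MS(1) features for mass differences that match a modification delta within tolerance. A minimal sketch (the feature format and tolerance are illustrative, not POSTMan's internals):

```python
ACETYL_DA = 42.0106   # monoisotopic mass shift of acetylation (Da)

def find_ptm_pairs(features, delta=ACETYL_DA, tol=0.01):
    """features: list of (neutral_mass, retention_time); returns candidate pairs."""
    feats = sorted(features)            # ascending neutral mass
    pairs = []
    for i, (m1, rt1) in enumerate(feats):
        for m2, rt2 in feats[i + 1:]:
            if m2 - m1 > delta + tol:
                break                   # sorted by mass: no later match possible
            if abs((m2 - m1) - delta) <= tol:
                pairs.append(((m1, rt1), (m2, rt2)))
    return pairs

feats = [(800.40, 21.5), (842.41, 24.0), (900.00, 30.1)]
print(find_ptm_pairs(feats))   # the 800.40/842.41 pair differs by ~42.01 Da
```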
A graphics subsystem retrofit design for the bladed-disk data acquisition system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Carney, R. R.
1983-01-01
A graphics subsystem retrofit design for the turbojet blade vibration data acquisition system is presented. The graphics subsystem will operate in two modes, permitting the system operator to view blade vibrations on an oscilloscope type of display. The first mode is a real-time mode that displays only gross blade characteristics, such as maximum deflections and standing waves. This mode is used to aid the operator in determining when to collect detailed blade vibration data. The second mode of operation is a post-processing mode that will animate the actual blade vibrations using the detailed data collected on an earlier data collection run. The operator can vary the rate of playback to view differing characteristics of blade vibrations. The heart of the graphics subsystem is a modified version of AMD's "super sixteen" computer, called the graphics preprocessor computer (GPC). This computer is based on AMD's 2900 series of bit-slice components.
Effects of blue light on pigment biosynthesis of Monascus.
Chen, Di; Xue, Chunmao; Chen, Mianhua; Wu, Shufen; Li, Zhenjing; Wang, Changlu
2016-04-01
The influence of different illumination levels of blue light on the growth and intracellular pigment yields of Monascus strain M9 was investigated. Compared with darkness, constant exposure to blue light of 100 lux reduced the yields of six pigments, namely rubropunctamine (RUM), monascorubramine (MOM), rubropunctatin (RUN), monascorubrin (MON), monascin (MS), and ankaflavin (AK). However, exposure to varying levels of blue light had different effects on pigment production. Exposure to 100 lux of blue light once for 30 min/day, or once or twice for 15 min/day, enhanced RUM, MOM, MS, and AK production and reduced RUN and MON compared with non-exposure. Exposure to 100 lux twice for 30 min/day and to 200 lux once for 45 min/day decreased the RUM, MOM, MS, and AK yields and increased RUN and MON. Meanwhile, the expression levels of pigment biosynthetic genes were analyzed by real-time quantitative PCR. Results indicated that the genes MpPKS5, mppR1, mppA, mppB, mppC, mppD, MpFasA, MpFasB, and mppF were positively correlated with the yields of RUN and MON, whereas mppE and mppR2 were associated with RUM, MOM, MS, and AK production.
A Computer Model of Drafting Effects on Collective Behavior in Elite 10,000-m Runners.
Trenchard, Hugh; Renfree, Andrew; Peters, Derek M
2017-03-01
Drafting in cycling influences collective behavior of pelotons. Although evidence for collective behavior in competitive running events exists, it is not clear if this results from energetic savings conferred by drafting. This study modeled the effects of drafting on behavior in elite 10,000-m runners. Using performance data from a men's elite 10,000-m track running event, computer simulations were constructed using Netlogo 5.1 to test the effects of 3 different drafting quantities on collective behavior: no drafting, drafting to 3 m behind with up to ~8% energy savings (a realistic running draft), and drafting up to 3 m behind with up to 38% energy savings (a realistic cycling draft). Three measures of collective behavior were analyzed in each condition: mean speed, mean group stretch (distance between first- and last-placed runner), and runner-convergence ratio (RCR), which represents the degree of drafting benefit obtained by the follower in a pair of coupled runners. Mean speeds were 6.32 ± 0.28, 5.57 ± 0.18, and 5.51 ± 0.13 m/s in the cycling-draft, runner-draft, and no-draft conditions, respectively (all P < .001). RCR was lower in the cycling-draft condition but did not differ between the other 2. Mean stretch did not differ between conditions. Collective behaviors observed in running events cannot be fully explained through energetic savings conferred by realistic drafting benefits. They may therefore result from other, possibly psychological, processes. The benefits or otherwise of engaging in such behavior are as yet unclear.
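The drafting rule in such simulations reduces to a simple benefit function: a follower within the drafting range receives a fractional energy saving, capped by the condition's maximum. A Python paraphrase under an assumed linear falloff over the 3 m range (the paper's exact rule may differ):

```python
def drafting_saving(gap_m, max_saving):
    """Fractional energy saving for a follower gap_m behind a leader.
    Linear falloff to zero at 3 m is an assumption, not the paper's exact rule."""
    if gap_m <= 0.0 or gap_m > 3.0:
        return 0.0
    return max_saving * (1.0 - gap_m / 3.0)

# The abstract's three conditions: no draft, realistic running draft (~8%),
# and realistic cycling draft (38%).
for label, max_s in [("no draft", 0.0), ("runner draft", 0.08), ("cycling draft", 0.38)]:
    print(f"{label}: saving at a 1 m gap = {drafting_saving(1.0, max_s):.1%}")
```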
Heegaard, P M; Holm, A; Hagerup, M
1993-01-01
A personal computer program for the conversion of linear amino acid sequences to multiple, small, overlapping peptide sequences has been developed. Peptide lengths and "jumps" (the distance between two consecutive overlapping peptides) are defined by the user. To facilitate the use of the program for parallel solid-phase chemical peptide syntheses for the synchronous production of multiple peptides, amino acids at each acylation step are laid out by the program in a convenient standard multi-well setup. Also, the total number of equivalents, as well as the derived amount in milligrams (depending on user-defined equivalent weights and molar surplus), of each amino acid are given. The program facilitates the implementation of multipeptide synthesis, e.g., for the elucidation of polypeptide structure-function relationships, and greatly reduces the risk of introducing mistakes at the planning step. It is written in Pascal and runs on any DOS-based personal computer. No special graphic display is needed.
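The sequence-to-peptides conversion itself is a sliding-window computation. A short paraphrase of that logic (the original is a Pascal DOS program; names here are illustrative):

```python
def overlapping_peptides(sequence, length, jump):
    """Windows of `length` residues, starting every `jump` residues."""
    peptides = []
    for start in range(0, len(sequence) - length + 1, jump):
        peptides.append(sequence[start:start + length])
    return peptides

# 10-mers overlapping by 8 residues (jump of 2):
print(overlapping_peptides("MKTAYIAKQRQISFVKSHFSRQ", length=10, jump=2))
```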
"Observation Obscurer" - Time Series Viewer, Editor and Processor
NASA Astrophysics Data System (ADS)
Andronov, I. L.
The program is described, which contains a set of subroutines suitable for fast viewing and interactive filtering and processing of regularly and irregularly spaced time series. Being a 32-bit DOS application, it may be used as a default fast viewer/editor of time series in any computer shell ("commander") or in Windows. It allows the user to view the data in "time" or "phase" mode; to remove ("obscure") or filter out bad points; to make scale transformations and smoothing using a few methods (e.g. mean with phase binning, determination of the statistically optimal number of phase bins, and the "running parabola" fit (Andronov, 1997, As. Ap. Suppl., 125, 207)); and to perform time series analysis using several methods, e.g. correlation, autocorrelation and histogram analysis, determination of extrema, etc. Some features have been developed specially for variable star observers, e.g. the barycentric correction and the creation and fast analysis of "O-C" diagrams. The manual for "hot keys" is presented. The computer code was compiled with a 32-bit Free Pascal (www.freepascal.org).
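The "phase" viewing mode folds observation times on a trial period, and the "mean with phase binning" smoother averages the folded data per bin. A compact sketch of both (the bin count is fixed here, whereas the program selects it statistically):

```python
import numpy as np

def fold(t, period, t0=0.0):
    """Phases in [0, 1) of observation times t for a trial period."""
    return ((np.asarray(t) - t0) / period) % 1.0

def binned_mean(phase, mag, nbins=10):
    """Mean magnitude per phase bin: the 'mean with phase binning' smoother."""
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    return np.array([mag[bins == b].mean() for b in range(nbins)])

t = np.sort(37.0 * np.random.rand(200))           # irregularly spaced times (days)
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 2.5)    # toy variable with a 2.5-day period
light_curve = binned_mean(fold(t, 2.5), mag)      # smoothed folded curve, 10 points
```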
Web services as applications' integration tool: QikProp case study.
Laoui, Abdel; Polyakov, Valery R
2011-07-15
Web services are a new technology that enables the integration of applications running on different platforms, using primarily XML to enable communication among different computers over the Internet. A large number of applications were designed as stand-alone systems before the concept of Web services was introduced, and it is a challenge to integrate them into larger computational networks. A generally applicable method of wrapping stand-alone applications into Web services was developed and is described. To test the technology, it was applied to QikProp for DOS (Windows). Although the performance of the application did not change when it was delivered as a Web service, this form of deployment offered several advantages, such as simplified and centralized maintenance, a smaller number of licenses, and practically no training for the end user. Because by using the described approach almost any legacy application can be wrapped as a Web service, this form of delivery may be recommended as a global alternative to traditional deployment solutions. Copyright © 2011 Wiley Periodicals, Inc.
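The wrapping pattern is generic: receive input over the network, hand it to the legacy executable, and return the captured output. A stdlib-only Python sketch of the idea (the executable path and port are placeholders; the original wrapper used the XML Web-service tooling of its era):

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

LEGACY_EXE = "./legacy_app"   # placeholder path to the stand-alone program

class WrapperHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        # Hand the request body to the legacy program on stdin, capture stdout.
        result = subprocess.run([LEGACY_EXE], input=body,
                                capture_output=True, timeout=60)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(result.stdout)

if __name__ == "__main__":
    HTTPServer(("", 8080), WrapperHandler).serve_forever()
```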
Cheng, Wing-Chi; Yau, Tsan-Sang; Wong, Ming-Kei; Chan, Lai-Ping; Mok, Vincent King-Kuen
2006-10-16
A rapid urinalysis system based on SPE-LC-MS/MS with an in-house post-analysis data management system has been developed for the simultaneous identification and semi-quantitation of opiates (morphine, codeine), methadone, amphetamines (amphetamine, methylamphetamine (MA), 3,4-methylenedioxyamphetamine (MDA) and 3,4-methylenedioxymethamphetamine (MDMA)), 11 benzodiazepines or their metabolites, and ketamine. The urine samples are subjected to automated solid phase extraction prior to analysis by LC-MS (Finnigan Surveyor LC connected to a Finnigan LCQ Advantage) fitted with an Alltech Rocket Platinum EPS C-18 column. With a single-point calibration at the cut-off concentration for each analyte, simultaneous identification and semi-quantitation of the above-mentioned drugs can be achieved in a 10 min run per urine sample. A computer macro-program package was developed to automatically retrieve appropriate data from the analytical data files, compare results with preset values (such as cut-off concentrations and MS matching scores) for each drug being analyzed, and generate user-defined Excel reports that indicate all positive and negative results batch-wise for ease of checking. The final analytical results are automatically copied into an Access database for report generation purposes. Through the use of automation in sample preparation, simultaneous identification and semi-quantitation by LC-MS/MS, and a tailor-made post-analysis data management system, this new urinalysis system significantly improves the quality of results, reduces post-analysis data treatment time and errors due to data transfer, and is suitable for high-throughput laboratories operating batch-wise.
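The decision logic of such a post-analysis package reduces to comparing each semi-quantitated analyte against its cut-off. A schematic paraphrase (the cut-off values below are placeholders, not the laboratory's):

```python
CUTOFFS_NG_ML = {"morphine": 300, "codeine": 300, "methadone": 300,
                 "MA": 500, "MDMA": 500, "ketamine": 100}   # placeholder values

def screen(sample):
    """sample: {analyte: semi-quantitative concentration (ng/mL)}."""
    return {a: ("POSITIVE" if conc >= CUTOFFS_NG_ML[a] else "negative")
            for a, conc in sample.items()}

batch = [{"morphine": 850, "ketamine": 20}, {"MA": 1200, "MDMA": 90}]
for i, s in enumerate(batch, 1):
    print(f"sample {i}:", screen(s))
```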
Martin, Daniel B; Holzman, Ted; May, Damon; Peterson, Amelia; Eastham, Ashley; Eng, Jimmy; McIntosh, Martin
2008-11-01
Multiple reaction monitoring (MRM) mass spectrometry identifies and quantifies specific peptides in a complex mixture with very high sensitivity and speed and thus has promise for the high-throughput screening of clinical samples for candidate biomarkers. We have developed an interactive software platform, called MRMer, for managing highly complex MRM-MS experiments, including quantitative analyses using heavy/light isotopic peptide pairs. MRMer parses and extracts information from MS files encoded in the platform-independent mzXML data format. It extracts and infers precursor-product ion transition pairings, computes integrated ion intensities, and permits rapid visual curation for analyses exceeding 1000 precursor-product pairs. Results can be easily output for quantitative comparison of consecutive runs. Additionally, MRMer incorporates features that permit quantitative analysis of experiments using heavy and light isotopic peptide pairs. MRMer is open source and provided under the Apache 2.0 license.
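Heavy/light quantification from MRM data reduces to the ratio of integrated transition intensities for the endogenous peptide and the labeled standard. A minimal sketch of that computation (trapezoidal integration over chromatographic points; not MRMer's internals):

```python
import numpy as np

def integrate(times, intensities):
    """Integrated ion intensity of one precursor-product transition."""
    return np.trapz(intensities, times)

t = np.array([10.0, 10.1, 10.2, 10.3, 10.4])       # retention time (min)
light = np.array([0.0, 4e4, 9e4, 5e4, 0.0])        # endogenous peptide
heavy = np.array([0.0, 8e4, 1.8e5, 1.0e5, 0.0])    # isotope-labeled standard
ratio = integrate(t, light) / integrate(t, heavy)
print(round(ratio, 2))   # 0.5 -> endogenous level is half the spiked standard
```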
xQTL workbench: a scalable web environment for multi-level QTL analysis.
Arends, Danny; van der Velde, K Joeri; Prins, Pjotr; Broman, Karl W; Möller, Steffen; Jansen, Ritsert C; Swertz, Morris A
2012-04-01
xQTL workbench is a scalable web platform for the mapping of quantitative trait loci (QTLs) at multiple levels: for example gene expression (eQTL), protein abundance (pQTL), metabolite abundance (mQTL) and phenotype (phQTL) data. Popular QTL mapping methods for model organism and human populations are accessible via the web user interface. Large calculations scale easily on to multi-core computers, clusters and Cloud. All data involved can be uploaded and queried online: markers, genotypes, microarrays, NGS, LC-MS, GC-MS, NMR, etc. When new data types become available, xQTL workbench is quickly customized using the Molgenis software generator. xQTL workbench runs on all common platforms, including Linux, Mac OS X and Windows. An online demo system, installation guide, tutorials, software and source code are available under the LGPL3 license from http://www.xqtl.org. Contact: m.a.swertz@rug.nl
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelley, B. M.
The electric utility industry is undergoing significant transformations in its operation model, including a greater emphasis on automation, monitoring technologies, and distributed energy resource management systems (DERMS). While driving greater efficiencies and reliability, these changes and new technologies may introduce new vectors of cyber attack. The appropriate cybersecurity controls to address and mitigate these newly introduced attack vectors and potential vulnerabilities are still widely unknown, and the performance of the controls is difficult to vet. This proposal argues that modeling and simulation (M&S) is a necessary tool to address and better understand the problems introduced by emerging technologies for the grid. M&S will provide electric utilities a platform to model their transmission and distribution systems and run various simulations against the model to better understand the operational impact and performance of cybersecurity controls.
Letarte, Simon; Brusniak, Mi-Youn; Campbell, David; Eddes, James; Kemp, Christopher J; Lau, Hollis; Mueller, Lukas; Schmidt, Alexander; Shannon, Paul; Kelly-Spratt, Karen S; Vitek, Olga; Zhang, Hui; Aebersold, Ruedi; Watts, Julian D
2008-12-01
A proof-of-concept demonstration of the use of label-free quantitative glycoproteomics in a biomarker discovery workflow is presented here, using a mouse model for skin cancer as an example. Blood plasma was collected from 10 control mice and 10 mice having a mutation in the p19(ARF) gene, conferring a high propensity to develop skin cancer after carcinogen exposure. We enriched for N-glycosylated plasma proteins, ultimately generating deglycosylated forms of the modified tryptic peptides for liquid chromatography mass spectrometry (LC-MS) analyses. LC-MS runs for each sample were then performed with a view to identifying proteins that were differentially abundant between the two mouse populations. We then used a recently developed computational framework, Corra, to perform peak picking and alignment and to compute the statistical significance of any observed changes in individual peptide abundances. Once determined, the most discriminating peptide features were fragmented and identified by tandem mass spectrometry with the use of inclusion lists. We next assessed the identified proteins to see if there were sets of proteins indicative of specific biological processes that correlate with the presence of disease, and specifically cancer, according to their functional annotations. As expected for such sick animals, many of the proteins identified were related to host immune response. However, a significant number of proteins were also directly associated with processes linked to cancer development, including proteins related to the cell cycle, localisation, transport, and cell death. Additional analysis of the same samples in profiling mode, and in triplicate, confirmed that replicate MS analysis of the same plasma sample generated less variation than that observed between plasma samples from different individuals, demonstrating that the reproducibility of the LC-MS platform was sufficient for this application. These results thus show that an LC-MS-based workflow can be a useful tool for the generation of candidate proteins of interest as part of a disease biomarker discovery effort.
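The feature-selection step described above is, in essence, a per-feature two-sample test with multiple-testing control across thousands of aligned peptide features. A schematic stand-in (Welch's t-test with Benjamini-Hochberg selection; Corra's actual statistical model differs):

```python
import numpy as np
from scipy import stats

def discriminating_features(control, mutant, fdr=0.05):
    """Welch t-test per aligned feature + Benjamini-Hochberg selection.
    control, mutant: (n_features x n_samples) intensity matrices."""
    p = np.array([stats.ttest_ind(c, m, equal_var=False).pvalue
                  for c, m in zip(control, mutant)])
    n = len(p)
    order = np.argsort(p)
    passed = p[order] <= np.arange(1, n + 1) / n * fdr   # BH step-up criterion
    if not passed.any():
        return np.array([], dtype=int)
    return np.sort(order[: passed.nonzero()[0].max() + 1])

rng = np.random.default_rng(7)
ctrl = rng.normal(10, 1, size=(200, 10))     # 200 features x 10 control mice
mut = rng.normal(10, 1, size=(200, 10))      # 200 features x 10 mutant mice
mut[:5] += 3                                 # plant five genuinely shifted features
print(discriminating_features(ctrl, mut))    # typically recovers indices 0..4
```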
Using Pedagogical Tools to Help Hispanics be Successful in Computer Science
NASA Astrophysics Data System (ADS)
Irish, Rodger
Irish, Rodger, Using Pedagogical Tools to Help Hispanics Be Successful in Computer Science. Master of Science (MS), July 2017, 68 pp., 4 tables, 2 figures, references, 48 titles. Computer science (CS) jobs are a growing field and pay a living wage, but Hispanics are underrepresented in the field. This project gives an overview of several factors contributing to this problem. It then explores some possible solutions and how a combination of tools (teaching methods) can create the best possible outcome. It is my belief that this approach can help produce successful Hispanic students to fill needed jobs in the CS field. The project then tests this hypothesis. I discuss the tools used to measure progress in both the affective and the cognitive domains, show how the decision to run a Computer Club was reached, and present the results of the research. The conclusion summarizes the results and describes future research that still needs to be done.
Brumberg, Jonathan S; Lorenz, Sean D; Galbraith, Byron V; Guenther, Frank H
2012-01-01
In this paper we present a framework for reducing the development time needed for creating applications for use in non-invasive brain-computer interfaces (BCI). Our framework is primarily focused on facilitating rapid software "app" development akin to current efforts in consumer portable computing (e.g. smart phones and tablets). This is accomplished by handling intermodule communication without direct user or developer implementation, instead relying on a core subsystem for communication of standard, internal data formats. We also provide a library of hardware interfaces for common mobile EEG platforms for immediate use in BCI applications. A use-case example is described in which a user with amyotrophic lateral sclerosis participated in an electroencephalography-based BCI protocol developed using the proposed framework. We show that our software environment is capable of running in real-time with updates occurring 50-60 times per second with limited computational overhead (5 ms system lag) while providing accurate data acquisition and signal analysis.
Dribbling determinants in sub-elite youth soccer players.
Zago, Matteo; Piovan, Andrea Gianluca; Annoni, Isabella; Ciprandi, Daniela; Iaia, F Marcello; Sforza, Chiarella
2016-01-01
Dribbling speed in soccer is considered critical to the outcome of the game and can assist in the talent identification process. However, little is known about the biomechanics of this skill. By means of a motion capture system, we aimed to quantitatively investigate the determinants of effective dribbling skill in a group of 10 Under-13 sub-elite players, divided by the median-split technique according to their dribbling test time (faster and slower groups). Foot-ball contact cadence, centre of mass (CoM) ranges of motion (RoM), velocity and acceleration, as well as stride length, cadence and variability, were computed. Hip and knee joint RoMs were also considered. Faster players, as compared to slower players, showed a 30% higher foot-ball contact cadence (3.0 ± 0.1 vs. 2.3 ± 0.2 contacts · s(-1), P < 0.01); reduced CoM mediolateral (0.91 ± 0.05 vs. 1.14 ± 0.16 m, P < 0.05) and vertical (0.19 ± 0.01 vs. 0.25 ± 0.03 m, P < 0.05) RoMs; higher right stride cadence (+20%, P < 0.05) with lower variability (P < 0.05); and reduced hip and knee flexion RoMs (P < 0.05). In conclusion, faster players are able to run with the ball through a shorter path in a more economical way. To effectively develop dribbling skill, coaches are encouraged to design specific practices where high stride frequency and narrow run trajectories are required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, T.P.; Clark, R.M.; Mostrom, M.A.
This report discusses the following topics on the LAMDA program: general maintenance; CTSS FCL script; DOS batch files; Macintosh MPW scripts; UNICOS FCL script; VAX/VMS command file; LINC calling tree; and LAMDA calling tree.
Effects of forefoot bending elasticity of running shoes on gait and running performance.
Chen, Chia-Hsiang; Tu, Kuan-Hua; Liu, Chiang; Shiang, Tzyy-Yuang
2014-12-01
The aim of this study was to investigate the effects of forefoot bending elasticity of running shoes on kinetics and kinematics during walking and running. Twelve healthy male participants wore normal and elastic shoes while walking at 1.5 m/s, jogging at 2.5 m/s, and running at 3.5 m/s. The elastic shoes were designed by modifying the stiffness of flexible shoes with elastic bands added to the forefoot part of the shoe sole. A Kistler force platform and Vicon system were used to collect kinetic and kinematic data during push-off. Electromyography was used to record the muscle activity of the medial gastrocnemius and medial tibialis anterior. A paired dependent t-test was used to compare the various shoes and the level of significance was set at α = .05. The range of motion of the ankle joint and the maximal anterior-posterior propulsive force differed significantly between elastic and flexible shoes in walking and jogging. The contact time and medial gastrocnemius muscle activation in the push-off phase were significantly lower for the elastic shoes compared with the flexible shoes in walking and jogging. The elastic forefoot region of shoes can alter movement characteristics in walking and jogging. However, for running, the elasticity used in this study was not strong enough to exert a similar effect. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Discrete normal plantar stress variations with running speed.
Gross, T S; Bunch, R P
1989-01-01
The distribution of force beneath the plantar foot surface during shod distance running, a kinetic descriptor of locomotion not previously reported, was recorded for ten rearfoot striking runners. Normal discrete stresses were assessed while the subjects ran on a treadmill at 2.98, 3.58, and 4.47 m·s⁻¹, with eight piezoceramic transducers secured inside the left shoe. Between 2.98 and 4.47 m·s⁻¹, mean peak stress increased significantly beneath the calcaneus (303.9-426.6 kPa), second metatarsal head (633.5-730.5 kPa), and hallux (575.1-712.4 kPa). Calcaneal stresses were notable for their rapid loading rate, averaging 10.1 kPa·ms⁻¹ at 3.58 m·s⁻¹. Highest stresses were measured beneath the second and third metatarsal heads and hallux. Peak first metatarsal head stress was less than peak second and third metatarsal head stresses in each of the 30 combinations of subjects and running speeds. However, lower stresses do not necessarily imply lower forces, as the force bearing surface area of each metatarsal head must be considered.
User's guide for the thermal analyst's help desk expert system
NASA Technical Reports Server (NTRS)
Ormsby, Rachel A.
1994-01-01
A guide for users of the Thermal Analyst's Help Desk is provided. Help Desk is an expert system that runs on a DOS based personal computer and operates within the EXSYS expert system shell. Help Desk is an analysis tool designed to provide users having various degrees of experience with the capability to determine first approximations of thermal capacity for spacecraft and instruments. The five analyses supported in Help Desk are: surface area required for a radiating surface, equilibrium temperature of a surface, enclosure temperature and heat loads for a defined position in orbit, enclosure temperature and heat loads over a complete orbit, and selection of appropriate surface properties. The two geometries supported by Help Desk are a single flat plate and a rectangular box enclosure.
Röst, Hannes L; Liu, Yansheng; D'Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi
2016-09-01
Next-generation mass spectrometric (MS) techniques such as SWATH-MS have substantially increased the throughput and reproducibility of proteomic analysis, but ensuring consistent quantification of thousands of peptide analytes across multiple liquid chromatography-tandem MS (LC-MS/MS) runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we developed TRIC (http://proteomics.ethz.ch/tric/), a software tool that utilizes fragment-ion data to perform cross-run alignment, consistent peak-picking and quantification for high-throughput targeted proteomics. TRIC reduced the identification error compared to a state-of-the-art SWATH-MS analysis without alignment by more than threefold at constant recall while correcting for highly nonlinear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups. Thus, TRIC fills a gap in the pipeline for automated analysis of massively parallel targeted proteomics data sets.
Effects of Run-Up Velocity on Performance, Kinematics, and Energy Exchanges in The Pole Vault
Linthorne, Nicholas P.; Weetman, A. H. Gemma
2012-01-01
This study examined the effect of run-up velocity on the peak height achieved by the athlete in the pole vault and on the corresponding changes in the athlete's kinematics and energy exchanges. Seventeen jumps by an experienced male pole vaulter were video recorded in the sagittal plane and a wide range of run-up velocities (4.5-8.5 m/s) was obtained by setting the length of the athlete's run-up (2-16 steps). A selection of performance variables, kinematic variables, energy variables, and pole variables were calculated from the digitized video data. We found that the athlete's peak height increased linearly at a rate of 0.54 m per 1 m/s increase in run-up velocity and this increase was achieved through a combination of a greater grip height and a greater push height. At the athlete's competition run-up velocity (8.4 m/s) about one third of the rate of increase in peak height arose from an increase in grip height and about two thirds arose from an increase in push height. Across the range of run-up velocities examined here the athlete always performed the basic actions of running, planting, jumping, and inverting on the pole. However, he made minor systematic changes to his jumping kinematics, vaulting kinematics, and selection of pole characteristics as the run-up velocity increased. The increase in run-up velocity and changes in the athlete's vaulting kinematics resulted in substantial changes to the magnitudes of the energy exchanges during the vault. A faster run-up produced a greater loss of energy during the take-off, but this loss was not sufficient to negate the increase in run-up velocity and the increase in work done by the athlete during the pole support phase. The athlete therefore always had a net energy gain during the vault. However, the magnitude of this gain decreased slightly as run-up velocity increased. Key points: In the pole vault the optimum technique is to run up as fast as possible. The athlete's vault height increases at a rate of about 0.5 m per 1 m/s increase in run-up velocity. The increase in vault height is achieved through a greater grip height and a greater push height. At the athlete's competition run-up velocity about one third of the rate of increase in vault height arises from an increase in grip height and two thirds arises from an increase in push height. The athlete has a net energy gain during the vault. A faster run-up velocity produces a greater loss of energy during the take-off but this loss of energy is not sufficient to negate the increase in run-up velocity and the increase in the work done by the athlete during the pole support phase. PMID:24149197
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J
2009-01-01
The orthogonal recursive bisection (ORB) algorithm can be used as a data decomposition strategy to distribute a large data set of a cardiac model across a distributed memory supercomputer. It has been shown previously that good scaling results can be achieved using the ORB algorithm for data decomposition. However, the ORB algorithm depends on the distribution of computational load of each element in the data set. In this work we investigated the dependence of data decomposition and load balancing on different rotations of the anatomical data set to achieve optimization in load balancing. The anatomical data set was given by both ventricles of the Visible Female data set in a 0.2 mm resolution. Fiber orientation was included. The data set was rotated by 90 degrees around the x, y and z axes, respectively. By either translating or by simply taking the magnitude of the resulting negative coordinates we were able to create 14 data sets of the same anatomy with different orientations and positions in the overall volume. Computational load ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100 to investigate the effect of different load ratios on the data decomposition. The ten Tusscher et al. (2004) electrophysiological cell model was used in monodomain simulations of 1 ms simulation time to compare performance using the different data sets and orientations. The simulations were carried out for load ratios 1:10, 1:25 and 1:38.85 on a 512 processor partition of the IBM Blue Gene/L supercomputer. The results show that the data decomposition does depend on the orientation and position of the anatomy in the global volume. The difference in total run time between the data sets is 10 s for a simulation time of 1 ms. This yields a difference of about 28 h for a simulation of 10 s simulation time. However, given larger processor partitions, the difference in run time decreases and becomes less significant. Depending on the processor partition size, future work will have to consider the orientation of the anatomy in the global volume for longer simulation runs.
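The decomposition step described above can be illustrated with a minimal orthogonal recursive bisection: split the element set along its longest spatial axis so that the two halves carry equal summed computational weight (e.g., weight 10 for tissue and 1 for non-tissue under a 1:10 load ratio), then recurse. This is a hedged sketch of the general technique, not the authors' implementation; all names are illustrative and the part count is assumed to work best as a power of two.

```python
# Orthogonal recursive bisection sketch: balance summed element weight
# across parts by recursively cutting along the longest coordinate axis.
import random

def orb(elements, n_parts):
    """elements: list of ((x, y, z), weight) tuples -> n_parts sub-lists."""
    if n_parts == 1:
        return [elements]
    half = n_parts // 2
    # Split along the axis with the largest spatial extent.
    axis = max(range(3), key=lambda a: max(e[0][a] for e in elements)
                                     - min(e[0][a] for e in elements))
    elements = sorted(elements, key=lambda e: e[0][axis])
    # Cut where the accumulated weight reaches the left half's share.
    target = sum(w for _, w in elements) * half / n_parts
    acc, cut = 0.0, 1
    for i, (_, w) in enumerate(elements):
        acc += w
        if acc >= target:
            cut = max(1, min(i + 1, len(elements) - 1))
            break
    return orb(elements[:cut], half) + orb(elements[cut:], n_parts - half)

# Example: decompose 1000 random elements (tissue weight 10, other 1).
rng = random.Random(0)
els = [((rng.random(), rng.random(), rng.random()), rng.choice((1, 10)))
       for _ in range(1000)]
print([sum(w for _, w in p) for p in orb(els, 8)])  # roughly equal weights
```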
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, A.E.; Tschanz, J.; Monarch, M.
1996-05-01
The Air Quality Utility Information System (AQUIS) is a database management system that operates under dBASE IV. It runs on an IBM-compatible personal computer (PC) with MS DOS 5.0 or later, 4 megabytes of memory, and 30 megabytes of disk space. AQUIS calculates emissions for both traditional and toxic pollutants and reports emissions in user-defined formats. The system was originally designed for use at 7 facilities of the Air Force Materiel Command, and now more than 50 facilities use it. Within the last two years, the system has been used in support of Title V permit applications at Department of Defense facilities. Growth in the user community, changes and additions to reference emission factor data, and changing regulatory requirements have demanded additions and enhancements to the system. These changes have ranged from adding or updating an emission factor to restructuring databases and adding new capabilities. Quality assurance (QA) procedures have been developed to ensure that emission calculations are correct even when databases are reconfigured and major changes in calculation procedures are implemented. This paper describes these QA and updating procedures. Some user facilities include light industrial operations associated with aircraft maintenance. These facilities have operations such as fiberglass and composite layup and plating operations for which standard emission factors are not available or are inadequate. In addition, generally applied procedures such as material balances may need special treatment to work in an automated environment, for example, in the use of oils and greases and when materials such as polyurethane paints react chemically during application. Some techniques used in these situations are highlighted here. To provide a framework for the main discussions, this paper begins with a description of AQUIS.
Aerospace Power Systems Design and Analysis (APSDA) Tool
NASA Technical Reports Server (NTRS)
Truong, Long V.
1998-01-01
The conceptual design of space and/or planetary electrical power systems has required considerable effort. Traditionally, in the early stages of the design cycle (conceptual design), the researchers have had to thoroughly study and analyze tradeoffs between system components, hardware architectures, and operating parameters (such as frequencies) to optimize system mass, efficiency, reliability, and cost. This process could take anywhere from several months to several years (as for the former Space Station Freedom), depending on the scale of the system. Although there are many sophisticated commercial software design tools for personal computers (PC's), none of them can support or provide total system design. To meet this need, researchers at the NASA Lewis Research Center cooperated with Professor George Kusic from the University of Pittsburgh to develop a new tool to help project managers and design engineers choose the best system parameters as quickly as possible in the early design stages (in days instead of months). It is called the Aerospace Power Systems Design and Analysis (APSDA) Tool. By using this tool, users can obtain desirable system design and operating parameters such as system weight, electrical distribution efficiency, bus power, and electrical load schedule. With APSDA, a large-scale specific power system was designed in a matter of days. It is an excellent tool to help designers make tradeoffs between system components, hardware architectures, and operation parameters in the early stages of the design cycle. It operates on any PC running the MS-DOS (Microsoft Corp.) operating system, version 5.0 or later. A color monitor (EGA or VGA) and two-button mouse are required. The APSDA tool was presented at the 30th Intersociety Energy Conversion Engineering Conference (IECEC) and is being beta tested at several NASA centers. Beta test packages are available for evaluation by contacting the author.
MS-BWME: A Wireless Real-Time Monitoring System for Brine Well Mining Equipment
Xiao, Xinqing; Zhu, Tianyu; Qi, Lin; Moga, Liliana Mihaela; Zhang, Xiaoshuan
2014-01-01
This paper describes a wireless real-time monitoring system (MS-BWME) to monitor the running state of pump equipment in brine well mining and prevent potential failures that may produce unexpected interruptions with severe consequences. MS-BWME consists of two units: the ZigBee Wireless Sensor Network (WSN) unit and the real-time remote monitoring unit. MS-BWME was implemented and tested in sampled brine well mining operations in Qinghai Province and four kinds of indicators were selected to evaluate the performance of the MS-BWME, i.e., sensor calibration, the system's real-time data reception, Received Signal Strength Indicator (RSSI) and sensor node lifetime. The results show that MS-BWME can accurately judge the running state of the pump equipment by acquiring and transmitting the real-time voltage and electric current data of the equipment from the spot and provide real-time decision support to help workers overhaul the equipment in a timely manner and resolve failures that might produce unexpected production down-time. The MS-BWME can also be extended to a wide range of equipment monitoring applications. PMID:25340455
Nguyen, Tinh T.; Martí-Arbona, Ricardo; Hall, Richard S.; ...
2013-05-21
Transcriptional regulators (TRs) are an important and versatile group of proteins, yet very little progress has been achieved towards the discovery and annotation of their biological functions. We have characterized a previously unknown organic hydroperoxide resistance regulator from Burkholderia xenovorans LB400, Bxe_B2842, which is homologous to E. coli’s OhrR. Bxe_B2842 regulates the expression of an organic hydroperoxide resistance protein (OsmC). We utilized frontal affinity chromatography coupled with mass spectrometry (FAC-MS) and electrophoretic mobility gel shift assays (EMSA) to identify and characterize the possible effectors of the regulation by Bxe_B2842. Without an effector, Bxe_B2842 binds a DNA operator sequence (DOS) upstream of osmC. FAC-MS results suggest that 2-aminophenol binds to the protein and is potentially an effector molecule. EMSA analysis shows that 2-aminophenol also attenuates Bxe_B2842’s affinity for its DOS. EMSA analysis also shows that organic peroxides attenuate Bxe_B2842/DOS affinity, suggesting that binding of the TR to its DOS is regulated by the two-cysteine mechanism common to TRs in this family. Bxe_B2842 is the first OhrR TR shown to have both oxidative and effector-binding mechanisms of regulation. Our paper reveals further mechanistic diversity in TR-mediated gene regulation and provides insights into methods for function discovery of TRs.
Au, Ivan P H; Lau, Fannie O Y; An, Winko W; Zhang, Janet H; Chen, Tony L; Cheung, Roy T H
2018-02-01
This study investigated the immediate and short-term effects of minimalist shoes (MS) and traditional running shoes (TRS) on vertical loading rates, foot strike pattern and lower limb kinematics in a group of habitual barefoot runners. Twelve habitual barefoot runners were randomly given a pair of MS or TRS and were asked to run with the prescribed shoes for 1 month. Outcome variables were obtained before, immediately after and 1 month after shoe prescription. Average and instantaneous vertical loading rates at the 1-month follow-up were significantly higher than at the pre-shod session (P < 0.034, ηp² > 0.474). Foot strike angle in the TRS group was significantly lower than that in the MS group (P = 0.045, ηp² = 0.585). However, there was no significant time or shoe effect on overstride, knee and ankle excursion (P > 0.061). Habitual barefoot runners appeared to land with a greater impact during shod running and they tended to have a more rearfoot strike pattern while wearing TRS. Lower limb kinematics were comparable before and after shoe prescription. A longer period of follow-up is suggested to further investigate the footwear effect on running biomechanics in habitual barefoot runners.
Mixture model normalization for non-targeted gas chromatography/mass spectrometry metabolomics data.
Reisetter, Anna C; Muehlbauer, Michael J; Bain, James R; Nodzenski, Michael; Stevens, Robert D; Ilkayeva, Olga; Metzger, Boyd E; Newgard, Christopher B; Lowe, William L; Scholtens, Denise M
2017-02-02
Metabolomics offers a unique integrative perspective for health research, reflecting genetic and environmental contributions to disease-related phenotypes. Identifying robust associations in population-based or large-scale clinical studies demands large numbers of subjects and therefore sample batching for gas-chromatography/mass spectrometry (GC/MS) non-targeted assays. When run over weeks or months, technical noise due to batch and run-order threatens data interpretability. Application of existing normalization methods to metabolomics is challenged by unsatisfied modeling assumptions and, notably, failure to address batch-specific truncation of low abundance compounds. To curtail technical noise and make GC/MS metabolomics data amenable to analyses describing biologically relevant variability, we propose mixture model normalization (mixnorm) that accommodates truncated data and estimates per-metabolite batch and run-order effects using quality control samples. Mixnorm outperforms other approaches across many metrics, including improved correlation of non-targeted and targeted measurements and superior performance when metabolite detectability varies according to batch. For some metrics, particularly when truncation is less frequent for a metabolite, mean centering and median scaling demonstrate comparable performance to mixnorm. When quality control samples are systematically included in batches, mixnorm is uniquely suited to normalizing non-targeted GC/MS metabolomics data due to explicit accommodation of batch effects, run order and varying thresholds of detectability. Especially in large-scale studies, normalization is crucial for drawing accurate conclusions from non-targeted GC/MS metabolomics data.
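Mixnorm itself is a mixture model that handles batch-specific truncation explicitly; as a point of reference, the simpler median-scaling baseline it is compared against can be sketched in a few lines. The function name and array layout below are hypothetical, and missing (truncated) measurements are represented as NaN.

```python
# Baseline normalization sketch: scale each metabolite per batch by the
# median of its QC-sample log intensities (the "median scaling" comparator
# mentioned above, not mixnorm's mixture model).
import numpy as np

def median_scale(log_abund, batches, is_qc):
    """log_abund: (samples x metabolites) log intensities, NaN = not detected.
    batches: batch label per sample; is_qc: boolean mask for QC samples."""
    out = log_abund.copy()
    for b in np.unique(batches):
        in_batch = batches == b
        qc = in_batch & is_qc
        # Per-metabolite batch effect estimated from QC medians only.
        batch_med = np.nanmedian(log_abund[qc], axis=0)
        out[in_batch] -= batch_med
    return out

rng = np.random.default_rng(0)
x = rng.normal(10.0, 1.0, size=(12, 5))          # 12 samples x 5 metabolites
batches = np.repeat([0, 1, 2], 4)
is_qc = np.tile([True, False, False, False], 3)  # first sample per batch is QC
print(median_scale(x, batches, is_qc).round(2))
```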
Muons in the CMS High Level Trigger System
NASA Astrophysics Data System (ADS)
Verwilligen, Piet; CMS Collaboration
2016-04-01
The trigger systems of LHC detectors play a fundamental role in defining the physics capabilities of the experiments. A reduction of several orders of magnitude in the rate of collected events, with respect to the proton-proton bunch crossing rate generated by the LHC, is mandatory to cope with the limits imposed by the readout and storage system. An accurate and efficient online selection mechanism is thus required to fulfill the task keeping maximal the acceptance to physics signals. The CMS experiment operates using a two-level trigger system. Firstly a Level-1 Trigger (L1T) system, implemented using custom-designed electronics, is designed to reduce the event rate to a limit compatible to the CMS Data Acquisition (DAQ) capabilities. A High Level Trigger System (HLT) follows, aimed at further reducing the rate of collected events finally stored for analysis purposes. The latter consists of a streamlined version of the CMS offline reconstruction software and operates on a computer farm. It runs algorithms optimized to make a trade-off between computational complexity, rate reduction and high selection efficiency. With the computing power available in 2012 the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. An efficient selection of muons at HLT, as well as an accurate measurement of their properties, such as transverse momentum and isolation, is fundamental for the CMS physics programme. The performance of the muon HLT for single and double muon triggers achieved in Run I will be presented. Results from new developments, aimed at improving the performance of the algorithms for the harsher scenarios of collisions per event (pile-up) and luminosity expected for Run II will also be discussed.
LAMDA programmer's manual. [Final report, Part 1]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, T.P.; Clark, R.M.; Mostrom, M.A.
This report discusses the following topics on the LAMDA program: General maintenance; CTSS FCL script; DOS batch files; Macintosh MPW scripts; UNICOS FCL script; VAX/MS command file; LINC calling tree; and LAMDA calling tree.
Wager, Justin C; Challis, John H
2016-03-21
During locomotion, the lower limb tendons undergo stretch and recoil, functioning like springs that recycle energy with each step. Cadaveric testing has demonstrated that the arch of the foot operates in this capacity during simple loading, yet it remains unclear whether this function exists during locomotion. In this study, one of the arch's passive elastic tissues (the plantar aponeurosis; PA) was investigated to glean insights about it and the entire arch of the foot during running. Subject specific computer models of the foot were driven using the kinematics of eight subjects running at 3.1 m/s using two initial contact patterns (rearfoot and non-rearfoot). These models were used to estimate PA strain, force, and elastic energy storage during the stance phase. To examine the release of stored energy, the foot joint moments, powers, and work created by the PA were computed. Mean elastic energy stored in the PA was 3.1 ± 1.6 J, which was comparable to in situ testing values. Changes to the initial contact pattern did not change elastic energy storage or late stance PA function, but did alter PA pre-tensioning and function during early stance. In both initial contact pattern conditions, the PA power was positive during late stance, which reveals that the release of the stored elastic energy assists with shortening of the arch during push-off. As the PA is just one of the arch's passive elastic tissues, the entire arch may store additional energy and impact the metabolic cost of running. Copyright © 2016 Elsevier Ltd. All rights reserved.
1982-12-01
run to run. A Karl Fischer automatic titrimeter has been ordered to enable routine analysis of water in both the inlet and exit streams to determine...Block-Styrene)," M.S. Thesis, Chemical Engineering, June 1982, by D. E. Zurawski. "Electron Optical Methods and the Study of Corrosion," M.S. Thesis...interface as viewed through a thin transparent metal deposited onto glass. The latter method will permit quantitative studies of the corrosion and
Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance
2013-10-01
Projectiles with impact velocities in the range 700 m/s to 1000 m/s are modeled using SPH elements. Model validation runs with monolithic SiC tiles are conducted based on the DoP experiments described in reference... Subject terms: .30cal AP M2 Projectile, 7.62x39 PS Projectile, SPH, Aluminum 5083, SiC, DoP Experiments, AutoDyn Simulations, Tile Gap.
Microwave assisted pyrolysis of halogenated plastics recovered from waste computers.
Rosi, Luca; Bartoli, Mattia; Frediani, Marco
2018-03-01
Microwave Assisted Pyrolysis (MAP) of the plastic fraction of Waste from Electric and Electronic Equipment (WEEE) from end-of-life computers was run with different absorbers and set-ups in a multimode batch reactor. Various liquid fractions formed in large amounts (up to 76.6 wt%), together with a remarkable reduction of the solid residue (down to 14.2 wt%). The liquid fractions were characterized using the following techniques: FT-IR ATR, ¹H NMR and quantitative GC-MS analysis. The liquid fractions showed low density and viscosity, together with a high concentration of useful chemicals such as styrene (up to 117.7 mg/mL) and xylenes (up to 25.6 mg/mL for p-xylene), whereas halogenated compounds were absent or present only in very low amounts. Copyright © 2017 Elsevier Ltd. All rights reserved.
Identification of peptide features in precursor spectra using Hardklör and Krönik
Hoopmann, Michael R.; MacCoss, Michael J.; Moritz, Robert L.
2013-01-01
Hardklör and Krönik are software tools for feature detection and data reduction of high resolution mass spectra. Hardklör is used to reduce peptide isotope distributions to a single monoisotopic mass and charge state, and can deconvolve overlapping peptide isotope distributions. Krönik filters, validates, and summarizes peptide features identified with Hardklör from data obtained during liquid chromatography mass spectrometry (LC-MS). Both software tools contain a simple user interface and can be run from nearly any desktop computer. These tools are freely available from http://proteome.gs.washington.edu/software/hardklor. PMID:22389013
Computer-Assisted Instruction to Teach DOS Commands: A Pilot Study.
ERIC Educational Resources Information Center
McWeeney, Mark G.
1992-01-01
Describes a computer-assisted instruction (CAI) program used to teach DOS commands. Pretest and posttest results for 65 graduate students using the program are reported, and it is concluded that the CAI program significantly aided the students. Sample screen displays for the program and several questions from the pre/posttest are included. (nine…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Youngjoo; Kim, Keeman.
1991-01-01
An operating system shell GPDAS (General Purpose Data Acquisition Shell) on MS-DOS-based microcomputers has been developed to provide flexibility in data acquisition and device control for magnet measurements at the Advanced Photon Source. GPDAS is both a command interpreter and an integrated script-based programming environment. It also incorporates the MS-DOS shell to make use of the existing utility programs for file manipulation and data analysis. Features include: alias definition, virtual memory, windows, graphics, data and procedure backup, background operation, script programming language, and script level debugging. Data acquisition system devices can be controlled through an IEEE488 board, multifunction I/O board, digital I/O board and Gespac crate via the Euro G-64 bus. GPDAS is now being used for diagnostics R&D and accelerator physics studies as well as for magnet measurements. Their hardware configurations will also be discussed. 3 refs., 3 figs.
CRISP90 - SOFTWARE DESIGN ANALYZER SYSTEM
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1994-01-01
The CRISP90 Software Design Analyzer System, an update of CRISP-80, is a set of programs forming a software design and documentation tool which supports top-down, hierarchic, modular, structured design and programming methodologies. The quality of a computer program can often be significantly influenced by the design medium in which the program is developed. The medium must foster the expression of the programmer's ideas easily and quickly, and it must permit flexible and facile alterations, additions, and deletions to these ideas as the design evolves. The CRISP90 software design analyzer system was developed to provide the PDL (Programmer Design Language) programmer with such a design medium. A program design using CRISP90 consists of short, English-like textual descriptions of data, interfaces, and procedures that are imbedded in a simple, structured, modular syntax. The display is formatted into two-dimensional, flowchart-like segments for a graphic presentation of the design. Together with a good interactive full-screen editor or word processor, the CRISP90 design analyzer becomes a powerful tool for the programmer. In addition to being a text formatter, the CRISP90 system prepares material that would be tedious and error prone to extract manually, such as a table of contents, module directory, structure (tier) chart, cross-references, and a statistics report on the characteristics of the design. Referenced modules are marked by schematic logic symbols to show conditional, iterative, and/or concurrent invocation in the program. A keyword usage profile can be generated automatically and glossary definitions inserted into the output documentation. Another feature is the capability to detect changes that were made between versions. Thus, "change-bars" can be placed in the output document along with a list of changed pages and a version history report. Also, items may be marked as "to be determined" and each will appear on a special table until the item is supplied. The CRISP90 software design analyzer system is written in Microsoft QuickBasic. The program requires an IBM PC compatible with a hard disk, 128K RAM, and an ASCII printer. The program operates under MS-DOS/PC-DOS 3.10 or later. The program was developed in 1983 and updated in 1990. Microsoft and MS-DOS are registered trademarks of Microsoft Corporation. IBM PC and PC-DOS are registered trademarks of International Business Machines Corporation. CRISP90 is a copyrighted work with all copyright vested in NASA.
Chen, Jie; Tabatabaei, Ali; Zook, Doug; Wang, Yan; Danks, Anne; Stauber, Kathe
2017-11-30
A robust high-performance liquid chromatography tandem mass spectrometry (LC-MS/MS) assay was developed and qualified for the measurement of cyclic nucleotides (cNTs) in rat brain tissue. Stable isotopically labeled 3',5'-cyclic adenosine-¹³C₅ monophosphate (¹³C₅-cAMP) and 3',5'-cyclic guanosine-¹³C,¹⁵N₂ monophosphate (¹³C,¹⁵N₂-cGMP) were used as surrogate analytes to measure endogenous 3',5'-cyclic adenosine monophosphate (cAMP) and 3',5'-cyclic guanosine monophosphate (cGMP). Pre-weighed frozen rat brain samples were rapidly homogenized in 0.4 M perchloric acid at a ratio of 1:4 (w/v). Following internal standard addition and dilution, the resulting extracts were analyzed using negative ion mode electrospray ionization LC-MS/MS. The calibration curves for both analytes ranged from 5 to 2000 ng/g and showed excellent linearity (r² > 0.996). Relative surrogate analyte-to-analyte LC-MS/MS responses were determined to correct concentrations derived from the surrogate curves. The intra-run precision (CV%) for ¹³C₅-cAMP and ¹³C,¹⁵N₂-cGMP was below 6.6% and 7.4%, respectively, while the inter-run precision (CV%) was 8.5% and 5.8%, respectively. The intra-run accuracy (Dev%) for ¹³C₅-cAMP and ¹³C,¹⁵N₂-cGMP was <11.9% and 10.3%, respectively, and the inter-run Dev% was <6.8% and 5.5%, respectively. Qualification experiments demonstrated high analyte recoveries, minimal matrix effects and low autosampler carryover. Acceptable frozen storage, freeze/thaw, benchtop, processed sample and autosampler stability were shown in brain sample homogenates as well as post-processed samples. The method was found to be suitable for the analysis of rat brain tissue cAMP and cGMP levels in preclinical biomarker development studies. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Kang, Joonhee; Han, Byungchan
2016-07-21
Using first-principles density functional theory calculations and ab initio molecular dynamics (AIMD) simulations, we demonstrate the crystal structure of Li7P2S8I (LPSI) and its Li ionic conductivity at room temperature, along with the atomic-level mechanism. By successively applying three rigorous conceptual approaches, we identify that LPSI has a similar symmetry class to Li10GeP2S12 (LGPS) and estimate the Li ionic conductivity to be 0.3 mS cm⁻¹ with an activation energy of 0.20 eV, similar to the experimental value of 0.63 mS cm⁻¹. Iodine ions provide an additional path for Li ion diffusion, but a strong Li-I attractive interaction degrades Li ionic transport. The calculated density of states (DOS) for LPSI indicates that electrochemical instability can be substantially improved by incorporating iodine at the Li metallic anode via formation of a LiI compound. Our methods propose a computational design concept for sulfide-based solid electrolytes with heteroatom doping for high-voltage Li ion batteries.
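As a rough consistency check of the numbers above, a simple Arrhenius form σ(T) = A·exp(−Ea/kBT), anchored at the computed room-temperature conductivity, lets one extrapolate to other temperatures. This is an illustrative sketch only, not the paper's method (AIMD-derived conductivities are typically fit in a σT Arrhenius form); the 350 K query is an invented example.

```python
# Arrhenius extrapolation of ionic conductivity from one reference point.
import math

KB_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def arrhenius_sigma(T, sigma_ref=0.3, T_ref=300.0, ea_ev=0.20):
    """Extrapolate conductivity (mS/cm) given sigma(T_ref) and Ea (eV)."""
    a = sigma_ref * math.exp(ea_ev / (KB_EV * T_ref))  # fix the prefactor
    return a * math.exp(-ea_ev / (KB_EV * T))

print(f"sigma(350 K) ~ {arrhenius_sigma(350.0):.2f} mS/cm")  # ~0.9 mS/cm
```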
Mass spectrometry. [review of techniques
NASA Technical Reports Server (NTRS)
Burlingame, A. L.; Kimble, B. J.; Derrick, P. J.
1976-01-01
Advances in mass spectrometry (MS) and its applications over the past decade are reviewed in depth, with annotated literature references. New instrumentation and techniques surveyed include: modulated-beam MS, chromatographic MS, on-line computer techniques, digital computer-compatible quadrupole MS, selected ion monitoring (mass fragmentography), and computer-aided management of MS data and interpretation. Areas of application surveyed include: organic MS and electron impact MS, field ionization kinetics, appearance potentials, translational energy release, studies of metastable species, photoionization, calculations of molecular orbitals, chemical kinetics, field desorption MS, high pressure MS, ion cyclotron resonance, biochemistry, medical/clinical chemistry, pharmacology, and environmental chemistry and pollution studies.
Armenta, Jenny M; Hoeschele, Ina; Lazar, Iulia M
2009-07-01
An isotope tags for relative and absolute quantitation (iTRAQ)-based reversed-phase liquid chromatography (RPLC)-tandem mass spectrometry (MS/MS) method was developed for differential protein expression profiling in complex cellular extracts. The estrogen-positive MCF-7 cell line, cultured in the presence of 17beta-estradiol (E2) and tamoxifen (Tam), was used as a model system. MS analysis was performed with a linear trap quadrupole (LTQ) instrument operated using pulsed Q dissociation (PQD) detection. Optimization experiments were conducted to maximize the iTRAQ labeling efficiency and the number of quantified proteins. MS data filtering criteria were chosen to result in a false positive identification rate of <4%. The reproducibility of protein identifications was approximately 60%-67% between duplicate, and approximately 50% among triplicate LC-MS/MS runs, respectively. The run-to-run reproducibility, in terms of relative standard deviations (RSD) of global mean iTRAQ ratios, was better than 10%. The quantitation accuracy improved with the number of peptides used for protein identification. From a total of 530 identified proteins (P < 0.001) in the E2/Tam treated MCF-7 cells, a list of 255 proteins (quantified by at least two peptides) was generated for differential expression analysis. A method was developed for the selection, normalization, and statistical evaluation of such datasets. An approximately 2-fold change in protein expression levels was necessary for a protein to be selected as a biomarker candidate. According to this data processing strategy, approximately 16 proteins involved in biological processes such as apoptosis, RNA processing/metabolism, DNA replication/transcription/repair, cell proliferation and metastasis, were found to be up- or down-regulated.
ChemoPy: freely available python package for computational biology and chemoinformatics.
Cao, Dong-Sheng; Xu, Qing-Song; Hu, Qian-Nan; Liang, Yi-Zeng
2013-04-15
Molecular representation for small molecules has been routinely used in QSAR/SAR, virtual screening, database search, ranking, drug ADME/T prediction and other drug discovery processes. To facilitate extensive studies of drug molecules, we developed a freely available, open-source python package called chemoinformatics in python (ChemoPy) for calculating the commonly used structural and physicochemical features. It computes 16 drug feature groups composed of 19 descriptors that include 1135 descriptor values. In addition, it provides seven types of molecular fingerprint systems for drug molecules, including topological fingerprints, electro-topological state (E-state) fingerprints, MACCS keys, FP4 keys, atom pairs fingerprints, topological torsion fingerprints and Morgan/circular fingerprints. By applying a semi-empirical quantum chemistry program MOPAC, ChemoPy can also compute a large number of 3D molecular descriptors conveniently. The python package, ChemoPy, is freely available via http://code.google.com/p/pychem/downloads/list, and it runs on Linux and MS-Windows. Supplementary data are available at Bioinformatics online.
ETARA - EVENT TIME AVAILABILITY, RELIABILITY ANALYSIS
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
The ETARA system was written to evaluate the performance of the Space Station Freedom Electrical Power System, but the methodology and software can be modified to simulate any system that can be represented by a block diagram. ETARA is an interactive, menu-driven reliability, availability, and maintainability (RAM) simulation program. Given a Reliability Block Diagram representation of a system, the program simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair times as a function of exponential and/or Weibull distributions. ETARA can calculate availability parameters such as equivalent availability, state availability (percentage of time at a particular output state capability), continuous state duration and number of state occurrences. The program can simulate initial spares allotment and spares replenishment for a resupply cycle. The number of block failures are tabulated both individually and by block type. ETARA also records total downtime, repair time, and time waiting for spares. Maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can also be calculated. The key to using ETARA is the development of a reliability or availability block diagram. The block diagram is a logical graphical illustration depicting the block configuration necessary for a function to be successfully accomplished. Each block can represent a component, a subsystem, or a system. The function attributed to each block is considered for modeling purposes to be either available or unavailable; there are no degraded modes of block performance. A block does not have to represent physically connected hardware in the actual system to be connected in the block diagram. The block needs only to have a role in contributing to an available system function. ETARA can model the RAM characteristics of systems represented by multilayered, nesting block diagrams. There are no restrictions on the number of total blocks or on the number of blocks in a series, parallel, or M-of-N parallel subsystem. In addition, the same block can appear in more than one subsystem if such an arrangement is necessary for an accurate model. ETARA 3.3 is written in APL2 for IBM PC series computers or compatibles running MS-DOS and the APL2 interpreter. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. The standard distribution medium for this package is a set of two 5.25 inch 360K MS-DOS format diskettes. A sample executable is included. The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ETARA was developed in 1990 and last updated in 1991.
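The Monte Carlo core that ETARA applies to block diagrams can be illustrated with a single repairable block: draw exponential failure and repair times over a mission and accumulate uptime. Below is a minimal sketch in Python (ETARA itself is APL2 and additionally handles Weibull draws, nested block diagrams, spares, and output states); the parameter values are invented for the example.

```python
# Monte Carlo availability estimate for one repairable block with
# exponential time-to-failure (MTBF) and time-to-repair (MTTR).
import random

def simulate_availability(mtbf, mttr, mission_time, n_runs=10_000, seed=1):
    rng = random.Random(seed)
    up_total = 0.0
    for _ in range(n_runs):
        t, up = 0.0, 0.0
        while t < mission_time:
            ttf = rng.expovariate(1.0 / mtbf)      # time to next failure
            up += min(ttf, mission_time - t)       # uptime within mission
            t += ttf
            if t < mission_time:
                t += rng.expovariate(1.0 / mttr)   # repair downtime
        up_total += up
    return up_total / (n_runs * mission_time)

# Should approach the analytic steady state MTBF/(MTBF+MTTR) ~= 0.990.
print(simulate_availability(mtbf=1000.0, mttr=10.0, mission_time=8760.0))
```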
TRIC: an automated alignment strategy for reproducible protein quantification in targeted proteomics
Röst, Hannes L.; Liu, Yansheng; D’Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C.; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi
2016-01-01
Large scale, quantitative proteomic studies have become essential for the analysis of clinical cohorts, large perturbation experiments and systems biology studies. While next-generation mass spectrometric techniques such as SWATH-MS have substantially increased throughput and reproducibility, ensuring consistent quantification of thousands of peptide analytes across multiple LC-MS/MS runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we have developed the TRIC software which utilizes fragment ion data to perform cross-run alignment, consistent peak-picking and quantification for high throughput targeted proteomics. TRIC uses a graph-based alignment strategy based on non-linear retention time correction to integrate peak elution information from all LC-MS/MS runs acquired in a study. When compared to state-of-the-art SWATH-MS data analysis, the algorithm was able to reduce the identification error by more than 3-fold at constant recall, while correcting for highly non-linear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem (iPS) cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups and substantially increased the quantitative completeness and biological information in the data, providing insights into protein dynamics of iPS cells. Overall, this study demonstrates the importance of consistent quantification in highly challenging experimental setups, and proposes an algorithm to automate this task, constituting the last missing piece in a pipeline for automated analysis of massively parallel targeted proteomics datasets. PMID:27479329
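The nonlinear retention-time correction at the heart of the alignment can be sketched as follows: anchor peptides confidently identified in both a reference run and the run to be aligned define a monotone, piecewise-linear mapping used to transfer peak coordinates between runs. TRIC's graph-based strategy is considerably richer than this; the function names and retention times below are illustrative only.

```python
# Nonlinear retention-time (RT) mapping from shared anchor peptides.
import numpy as np

def build_rt_mapping(rt_run, rt_ref):
    """rt_run/rt_ref: anchor retention times (s) of peptides found in both runs."""
    order = np.argsort(rt_run)
    xs, ys = np.asarray(rt_run, float)[order], np.asarray(rt_ref, float)[order]
    # Piecewise-linear, monotone mapping; flat extrapolation at the ends.
    return lambda rt: np.interp(rt, xs, ys)

to_ref = build_rt_mapping(rt_run=[100, 410, 980, 1500],
                          rt_ref=[120, 400, 1015, 1490])
print(to_ref(700.0))  # corrected RT in reference-run coordinates (~712.9 s)
```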
Validation Study of Unnotched Charpy and Taylor-Anvil Impact Experiments using Kayenta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamojjala, Krishna; Lacy, Jeffrey; Chu, Henry S.
2015-03-01
Validation of a single computational model with multiple available strain-to-failure fracture theories is presented through experimental tests and numerical simulations of the standardized unnotched Charpy and Taylor-anvil impact tests, both run using the same material model (Kayenta). Unnotched Charpy tests are performed on rolled homogeneous armor steel. The fracture patterns using Kayenta’s various failure options that include aleatory uncertainty and scale effects are compared against the experiments. Other quantities of interest include the average value of the absorbed energy and bend angle of the specimen. Taylor-anvil impact tests are performed on Ti6Al4V titanium alloy. The impact speeds of the specimen are 321 m/s and 393 m/s. The goal of the numerical work is to reproduce the damage patterns observed in the laboratory. For the numerical study, the Johnson-Cook failure model is used as the ductile fracture criterion, and aleatory uncertainty is applied to rate-dependence parameters to explore its effect on the fracture patterns.
Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea
2015-09-01
The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and by employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
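In miniature, the surrogate-modeling idea reads: evaluate the expensive code at a few design points, fit a cheap regressor, then query the regressor many times inside the risk analysis. The sketch below uses a polynomial as a stand-in for whatever reduced order model the toolkit actually constructs, and the "expensive simulation" is an invented analytic placeholder, not RELAP-7.

```python
# Surrogate-model sketch: few expensive runs -> cheap approximate model.
import numpy as np

def expensive_simulation(x):
    """Placeholder for an hours-long physics run; analytic stand-in here."""
    return np.sin(x) + 0.1 * x**2

train_x = np.linspace(0.0, 4.0, 8)            # a handful of costly runs
train_y = expensive_simulation(train_x)

coeffs = np.polyfit(train_x, train_y, deg=4)  # fit the cheap surrogate
surrogate = np.poly1d(coeffs)

# The risk analysis can now sample the surrogate massively, in microseconds.
query = np.random.default_rng(0).uniform(0.0, 4.0, 100_000)
print(surrogate(query).mean())
```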
CFD Validation for Hypersonic Flight: Real Gas Flows
2006-01-01
Calculations: Two research groups contributed results to this test case. The group of Drs. Marco Marini and Salvatore Borelli from the Italian Aerospace...and Borelli ran calculations at four test conditions, and we simulated Run 46, which is the high-enthalpy air case. The test section conditions were...N2 at ρ∞ = 3.31 g/cm³, T∞ = 556 K, u∞ = 4450 m/s. Run 46: Air at ρ∞ = 3.28 g/cm³, T∞ = 672 K, u∞ = 4480 m/s. Marini and Borelli used grids of 144 × 40
Microcomputer Decisions for the 1990s [and] Apple's Macintosh: A Viable Choice.
ERIC Educational Resources Information Center
Grosch, Audrey N.
1989-01-01
Discussion of the factors that should be considered when purchasing or upgrading a microcomputer focuses on the MS-DOS and OS/2 operating systems. Macintosh purchasing decisions are discussed in a sidebar. A glossary is provided. (CLB)
Another Program For Generating Interactive Graphics
NASA Technical Reports Server (NTRS)
Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl;
1991-01-01
VAX/Ultrix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. When used throughout company for wide range of applications, makes both application program and computer seem transparent, with noticeable improvements in learning curve. Available in form suitable for following six different groups of computers: DEC VAX station and other VMS VAX computers, Macintosh II computers running AUX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS and IBM RT/PC's and PS/2 computers running AIX, and HP 9000 S
PChopper: high throughput peptide prediction for MRM/SRM transition design.
Afzal, Vackar; Huang, Jeffrey T-J; Atrih, Abdel; Crowther, Daniel J
2011-08-15
The use of selective reaction monitoring (SRM) based LC-MS/MS analysis for the quantification of phosphorylation stoichiometry has been rapidly increasing. At the same time, the number of sites that can be monitored in a single LC-MS/MS experiment is also increasing. The manual processes associated with running these experiments have highlighted the need for computational assistance to quickly design MRM/SRM candidates. PChopper has been developed to predict peptides that can be produced via enzymatic protein digest; this includes single enzyme digests, and combinations of enzymes. It also allows digests to be simulated in 'batch' mode and can combine information from these simulated digests to suggest the most appropriate enzyme(s) to use. PChopper also allows users to define the characteristic of their target peptides, and can automatically identify phosphorylation sites that may be of interest. Two application end points are available for interacting with the system; the first is a web based graphical tool, and the second is an API endpoint based on HTTP REST. Service oriented architecture was used to rapidly develop a system that can consume and expose several services. A graphical tool was built to provide an easy to follow workflow that allows scientists to quickly and easily identify the enzymes required to produce multiple peptides in parallel via enzymatic digests in a high throughput manner.
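The core prediction step PChopper automates, simulating an enzymatic digest and filtering the resulting peptides, can be sketched for the single-enzyme trypsin case (cleave after K or R unless the next residue is P). PChopper's actual enzyme set, batch mode, and MRM/SRM filters go well beyond this; the length window and example sequence below are assumed placeholders.

```python
# In-silico tryptic digest with a simple MRM-friendly length filter.
def trypsin_digest(sequence, min_len=7, max_len=25):
    peptides, start = [], 0
    for i, aa in enumerate(sequence):
        # Canonical trypsin rule: cut after K/R, but not before proline.
        if aa in "KR" and (i + 1 == len(sequence) or sequence[i + 1] != "P"):
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])  # C-terminal peptide
    return [p for p in peptides if min_len <= len(p) <= max_len]

print(trypsin_digest("MKWVTFISLLLLFSSAYSRGVFRRDTHKSEIAHRFKDLGE"))
```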
Patteet, Lisbeth; Maudens, Kristof E; Sabbe, Bernard; Morrens, Manuel; De Doncker, Mireille; Neels, Hugo
2014-02-15
Therapeutic drug monitoring of antipsychotics is important for optimizing therapy, explaining adverse effects, non-response or poor compliance. We developed a UHPLC-MS/MS method for quantification of 16 commonly used and recently marketed antipsychotics and 8 metabolites in serum. After liquid-liquid extraction using methyl tert-butyl ether, analysis was performed on an Agilent Technologies 1290 Infinity LC system coupled with an Agilent Technologies 6460 Triple Quadrupole MS. Separation with a C18 column and gradient elution at 0.5 mL/min resulted in a 6-min run-time. Detection was performed in dynamic MRM, monitoring 3 ion transitions per compound. Isotope labeled internal standards were used for every compound, except for bromperidol and levosulpiride. Mean recovery was 86.8%. Matrix effects were -18.4 to +9.1%. Accuracy ranged between 91.3 and 107.0% at low, medium and high concentrations and between 76.2 and 113.9% at LLOQ. Within-run precision was <15% (CV), except for asenapine and hydroxy-iloperidone. Between-run precision was aberrant only for 7-hydroxy-N-desalkylquetiapine, asenapine and reduced haloperidol. No interferences were found. No problems of instability were observed, even for olanzapine. The method was successfully applied on patient samples. The liquid-liquid extraction and UHPLC-MS/MS technique allows robust target screening and quantification of 23 antipsychotics and metabolites. Copyright © 2013 Elsevier B.V. All rights reserved.
Bayesian Normalization Model for Label-Free Quantitative Analysis by LC-MS
Nezami Ranjbar, Mohammad R.; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.
2016-01-01
We introduce a new method for normalization of data acquired by liquid chromatography coupled with mass spectrometry (LC-MS) in label-free differential expression analysis. Normalization of LC-MS data is desired prior to subsequent statistical analysis to adjust variabilities in ion intensities that are not caused by biological differences but experimental bias. There are different sources of bias including variabilities during sample collection and sample storage, poor experimental design, noise, etc. In addition, instrument variability in experiments involving a large number of LC-MS runs leads to a significant drift in intensity measurements. Although various methods have been proposed for normalization of LC-MS data, there is no universally applicable approach. In this paper, we propose a Bayesian normalization model (BNM) that utilizes scan-level information from LC-MS data. Specifically, the proposed method uses peak shapes to model the scan-level data acquired from extracted ion chromatograms (EIC) with parameters considered as a linear mixed effects model. We extended the model into BNM with drift (BNMD) to compensate for the variability in intensity measurements due to long LC-MS runs. We evaluated the performance of our method using synthetic and experimental data. In comparison with several existing methods, the proposed BNM and BNMD yielded significant improvement. PMID:26357332
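One ingredient of the approach, modeling scan-level peak shapes from extracted ion chromatograms (EICs), can be illustrated by estimating a Gaussian peak's parameters by the method of moments. This sketch deliberately omits the Bayesian mixed-effects machinery BNM builds on top; the toy EIC is synthetic and all names are illustrative.

```python
# Method-of-moments Gaussian fit to a scan-level extracted ion chromatogram.
import numpy as np

def gaussian_moments(rt, intensity):
    """Estimate (area, center, width) of an EIC peak from scan-level data."""
    w = intensity / intensity.sum()
    mu = float((rt * w).sum())                       # intensity-weighted mean
    sigma = float(np.sqrt(((rt - mu) ** 2 * w).sum()))
    area = float(intensity.sum() * (rt[1] - rt[0]))  # rectangle-rule integral
    return area, mu, sigma

rt = np.linspace(100.0, 110.0, 40)                               # scan times, s
eic = 5000.0 * np.exp(-0.5 * ((rt - 105.0) / 1.2) ** 2) + 50.0   # toy scans
print(gaussian_moments(rt, eic))
```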
Hollander, Karsten; Argubi-Wollesen, Andreas; Reer, Rüdiger; Zech, Astrid
2015-01-01
Possible benefits of barefoot running have been widely discussed in recent years. Uncertainty exists about which footwear strategy adequately simulates barefoot running kinematics. The objective of this study was to investigate the effects of athletic footwear with different minimalist strategies on running kinematics. Thirty-five distance runners (22 males, 13 females, 27.9 ± 6.2 years, 179.2 ± 8.4 cm, 73.4 ± 12.1 kg, 24.9 ± 10.9 km·week⁻¹) performed a treadmill protocol at three running velocities (2.22, 2.78 and 3.33 m·s⁻¹) using four footwear conditions: barefoot, uncushioned minimalist shoes, cushioned minimalist shoes, and standard running shoes. 3D kinematic analysis was performed to determine ankle and knee angles at initial foot-ground contact, rate of rear-foot strikes, stride frequency and step length. Ankle angle at foot strike, step length and stride frequency were significantly influenced by footwear conditions (p<0.001) at all running velocities. Post-hoc pairwise comparisons showed significant differences (p<0.001) between running barefoot and all shod situations as well as between the uncushioned minimalist shoe and both cushioned shoe conditions. The rate of rear-foot strikes was lowest during barefoot running (58.6% at 3.33 m·s⁻¹), followed by running with uncushioned minimalist shoes (62.9%), cushioned minimalist (88.6%) and standard shoes (94.3%). Aside from showing the influence of shod conditions on running kinematics, this study helps to elucidate differences between footwear marketed as minimalist shoes and their ability to mimic barefoot running adequately. These findings have implications for the use of footwear applied in future research debating the topic of barefoot or minimalist shoe running. PMID:26011042
Effects of Spontaneous Locomotion on the Cricket's Walking Response to a Wind Stimulus
NASA Astrophysics Data System (ADS)
Gras, Heribert; Bartels, Anke
Tethered walking crickets often respond to single wind puffs (50 ms duration) directed from 45° left or right to the abdominal cerci with a short running bout of about 300 ms, followed by normal locomotion. To test for an effect of the current behavioral state on the running response, we applied wind stimuli when the insect attained a predefined translatorial and/or rotatorial velocity during spontaneous walking. The latency, duration, and velocity profile of the running bout always proved to be constant, representing a reflexlike all-or-nothing reaction, while the probability of this response was low after even brief standing and increased with the forward speed of spontaneous walking at the moment of stimulation. In contrast, the current rotatorial speed did not affect the stimulus response.
CheD: chemical database compilation tool, Internet server, and client for SQL servers.
Trepalin, S V; Yarkov, A V
2001-01-01
An efficient program for the storage, retrieval, and processing of chemical information, which runs on a personal computer, is presented. The program can work either as a stand-alone application or in conjunction with a specifically written Web server application or with some standard SQL servers, e.g., Oracle, Interbase, and MS SQL. New types of data fields are introduced, e.g., arrays for spectral information storage, HTML and database links, and user-defined functions. CheD has an open architecture; thus, custom data types, controls, and services may be added. A WWW server application for chemical data retrieval features an easy and user-friendly installation on Windows NT or 95 platforms.
DECIDE: a software for computer-assisted evaluation of diagnostic test performance.
Chiecchio, A; Bo, A; Manzone, P; Giglioli, F
1993-05-01
The evaluation of the performance of clinical tests is a complex problem involving different steps and many statistical tools, not always structured in an organic and rational system. This paper presents software that provides an organic system of statistical tools to help evaluate clinical test performance. The program allows (a) the building and organization of a working database, (b) the selection of the minimal set of tests with the maximum information content, (c) the search for the model that best fits the distribution of the test values, (d) the selection of the optimal diagnostic cut-off value of the test for every positive/negative situation, and (e) the evaluation of the performance of combinations of correlated and uncorrelated tests. The uncertainty associated with all the variables involved is evaluated. The program works in an MS-DOS environment with an EGA or better graphics card.
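For step (d), a standard criterion for choosing a diagnostic cut-off is to maximize Youden's J (sensitivity + specificity - 1) over candidate thresholds. The sketch below illustrates that criterion in Python on synthetic data; the paper does not state which criterion DECIDE actually implements, so this is an assumed stand-in.

```python
import numpy as np

def best_cutoff(values, labels):
    """values: test results; labels: 1 = diseased, 0 = healthy.
    Returns (J, cutoff) maximizing Youden's J = sensitivity + specificity - 1."""
    best = (-2.0, float(np.min(values)))
    for c in np.unique(values):
        pred = values >= c                    # classify as positive at/above c
        sens = np.mean(pred[labels == 1])     # true-positive rate
        spec = np.mean(~pred[labels == 0])    # true-negative rate
        best = max(best, (sens + spec - 1.0, float(c)))
    return best

# Synthetic example: healthy vs diseased test values (illustrative only).
rng = np.random.default_rng(5)
vals = np.concatenate([rng.normal(1.0, 1.0, 200), rng.normal(2.2, 1.0, 200)])
labs = np.concatenate([np.zeros(200, dtype=int), np.ones(200, dtype=int)])
print(best_cutoff(vals, labs))
```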
Al Haddad, Hani; Méndez-Villanueva, Alberto; Torreño, Nacho; Munguía-Izquierdo, Diego; Suárez-Arrones, Luis
2017-09-22
The aim of this study was to assess the match-to-match variability obtained using GPS devices, collected during official games in professional soccer players. GPS-derived data from nineteen elite soccer players were collected over two consecutive seasons. Time-motion data for players with more than five full matches were analyzed (n=202). Total distance covered (TD), TD >13-18 km/h, TD >18-21 km/h, TD >21 km/h, and the number of accelerations >2.5-4 m.s-2 and >4 m.s-2 were calculated. The match-to-match variation in running activity was assessed by the typical error expressed as a coefficient of variation (CV,%), and the magnitude of the CV was calculated (effect size). When all players were pooled together, CVs ranged from 5% to 77% (first half) and from 5% to 90% (second half) for TD and the number of accelerations >4 m.s-2, respectively, and the magnitudes of the CVs were rated from small to moderate (effect size = 0.57-0.98). The CVs were likely to increase with running/acceleration intensity, and were likely to differ between playing positions (e.g., TD >13-18 km/h: 3.4% for second strikers vs 14.2% for strikers, and 14.9% for wide-defenders vs 9.7% for wide-midfielders). Present findings indicate that variability in players' running performance is high for some variables and likely position-dependent. Such variability should be taken into account when using these variables to prescribe and/or monitor training intensity/load. In short, GPS-derived match-to-match variability in the locomotor performance of professional soccer players during official games is high for some variables, particularly high-speed running, reflecting both the complexity of match running performance and the reliability limits of the devices.
SOFTCOST - DEEP SPACE NETWORK SOFTWARE COST MODEL
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1994-01-01
The early-on estimation of required resources and a schedule for the development and maintenance of software is usually the least precise aspect of the software life cycle. However, it is desirable to make some sort of an orderly and rational attempt at estimation in order to plan and organize an implementation effort. The Software Cost Estimation Model program, SOFTCOST, was developed to provide a consistent automated resource and schedule model which is more formalized than the often-used guesswork model based on experience, intuition, and luck. SOFTCOST was developed after the evaluation of a number of existing cost estimation programs indicated that there was a need for a cost estimation program with a wide range of application and adaptability to diverse kinds of software. SOFTCOST combines several software cost models found in the open literature into one comprehensive set of algorithms that compensate for nearly fifty implementation factors relative to size of the task, inherited baseline, organizational and system environment, and difficulty of the task. SOFTCOST produces mean and variance estimates of software size, implementation productivity, recommended staff level, probable duration, amount of computer resources required, and amount and cost of software documentation. Since the confidence level for a project using mean estimates is small, the user is given the opportunity to enter risk-biased values for effort, duration, and staffing, to achieve higher confidence levels. SOFTCOST then produces a PERT/CPM file with subtask efforts, durations, and precedences defined so as to produce a Work Breakdown Structure (WBS) and schedule with the requested overall effort and duration. The SOFTCOST program operates in an interactive environment, prompting the user for all of the required input. The program builds the supporting PERT database in a file for later report generation or revision. The PERT schedule and the WBS schedule may be printed and stored in a file for later use. The SOFTCOST program is written in Microsoft BASIC for interactive execution and has been implemented on an IBM PC-XT/AT running MS-DOS 2.1 or higher with 256K bytes of memory. SOFTCOST was originally developed for the Zilog Z80 system running under CP/M in 1981. It was converted to run on the IBM PC XT/AT in 1986. SOFTCOST is a copyrighted work with all copyright vested in NASA.
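The mean-and-variance estimates feeding a PERT/CPM schedule can be illustrated with the classic three-point (beta-PERT) estimator below. This is a generic sketch of the idea, not SOFTCOST's actual algorithm, and the effort figures are invented.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Return (mean, variance) under the classic beta-PERT approximation.
    Illustrative only; not SOFTCOST's published algorithm."""
    mean = (optimistic + 4.0 * most_likely + pessimistic) / 6.0
    variance = ((pessimistic - optimistic) / 6.0) ** 2
    return mean, variance

# Example: a subtask's effort in person-months (invented figures).
mean, var = pert_estimate(4.0, 6.0, 12.0)
print(f"mean = {mean:.2f} PM, std dev = {var ** 0.5:.2f} PM")
```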
OpenKnowledge for peer-to-peer experimentation in protein identification by MS/MS
2011-01-01
Background: Traditional scientific workflow platforms usually run individual experiments with little of the evaluation and analysis of performance required by automated experimentation, in which scientists can access numerous applicable workflows rather than being committed to a single one. Experimental protocols and data under a peer-to-peer environment could potentially be shared freely without any single point of authority to dictate how experiments should be run. In such an environment it is necessary to have mechanisms by which each individual scientist (peer) can assess, locally, how he or she wants to be involved with others in experiments. This study aims to implement and demonstrate simple peer ranking under the OpenKnowledge peer-to-peer infrastructure by both simulated and real-world bioinformatics experiments involving multi-agent interactions. Methods: A simulated experiment environment with a peer ranking capability was specified by the Lightweight Coordination Calculus (LCC) and automatically executed under the OpenKnowledge infrastructure. The peers, such as MS/MS protein identification services (including web-enabled and independent programs), were made accessible as OpenKnowledge Components (OKCs) for automated execution as peers in the experiments. The performance of the peers in these automated experiments was monitored and evaluated by simple peer ranking algorithms. Results: Peer ranking experiments with simulated peers exhibited characteristic behaviours, e.g., a power-law effect (a few peers dominate), similar to that observed in the traditional Web. Real-world experiments were run using an interaction model in LCC involving two different types of MS/MS protein identification peers, viz., peptide fragment fingerprinting (PFF) and de novo sequencing, with another peer-ranking algorithm based simply on counting successful and failed runs. This study demonstrated a novel integration and useful evaluation of specific proteomic peers and found MASCOT to be a dominant peer as judged by peer ranking. Conclusion: The simulated and real-world experiments in the present study demonstrated that the OpenKnowledge infrastructure with peer ranking capability can serve as an evaluative environment for automated experimentation. PMID:22192521
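The "counting successful and failed runs" ranking can be made concrete with a few lines of Python. The class and peer names below are illustrative, not part of the OpenKnowledge API.

```python
from collections import defaultdict

class PeerRanker:
    """Rank peers by their fraction of successful runs (illustrative sketch)."""

    def __init__(self):
        self.success = defaultdict(int)
        self.failure = defaultdict(int)

    def record(self, peer, ok):
        (self.success if ok else self.failure)[peer] += 1

    def score(self, peer):
        s, f = self.success[peer], self.failure[peer]
        return s / (s + f) if s + f else 0.0

    def ranking(self):
        peers = set(self.success) | set(self.failure)
        return sorted(peers, key=self.score, reverse=True)

ranker = PeerRanker()
for peer, ok in [("MASCOT", True), ("MASCOT", True), ("denovo_peer", False)]:
    ranker.record(peer, ok)               # log outcome of each run
print(ranker.ranking())                   # peers ordered by success rate
```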
AN OPTIMIZED 64X64 POINT TWO-DIMENSIONAL FAST FOURIER TRANSFORM
NASA Technical Reports Server (NTRS)
Miko, J.
1994-01-01
Scientists at Goddard have developed an efficient and powerful program-- An Optimized 64x64 Point Two-Dimensional Fast Fourier Transform-- which combines the performance of real and complex valued one-dimensional Fast Fourier Transforms (FFTs) to execute a two-dimensional FFT and its power spectrum coefficients. These coefficients can be used in many applications, including spectrum analysis, convolution, digital filtering, image processing, and data compression. The program's efficiency results from its technique of expanding all arithmetic operations within one 64-point FFT; its high processing rate results from its operation on a high-speed digital signal processor. For non-real-time analysis, the program requires as input an ASCII data file of 64x64 (4096) real valued data points. As output, this analysis produces an ASCII data file of 64x64 power spectrum coefficients. To generate these coefficients, the program employs a row-column decomposition technique. First, it performs a radix-4 one-dimensional FFT on each row of input, producing complex valued results. Then, it performs a one-dimensional FFT on each column of these results to produce complex valued two-dimensional FFT results. Finally, the program sums the squares of the real and imaginary values to generate the power spectrum coefficients. The program requires a Banshee accelerator board with 128K bytes of memory from Atlanta Signal Processors (404/892-7265) installed on an IBM PC/AT compatible computer (DOS ver. 3.0 or higher) with at least one 16-bit expansion slot. For real-time operation, an ASPI daughter board is also needed. The real-time configuration reads 16-bit integer input data directly into the accelerator board, operating on 64x64 point frames of data. The program's memory management also allows accumulation of the coefficient results. The real-time processing time to calculate and accumulate the 64x64 power spectrum output coefficients is less than 17.0 ms. Documentation is included in the price of the program. Source code is written in C, 8086 Assembly, and Texas Instruments TMS320C30 Assembly Languages. This program is available on a 5.25 inch 360K MS-DOS format diskette. IBM and IBM PC are registered trademarks of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation.
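The row-column decomposition described above maps directly onto array operations. The NumPy sketch below mirrors the three steps (row FFTs, column FFTs, sum of squares) and checks the result against a direct 2-D FFT; random data stands in for the 64x64 input file.

```python
import numpy as np

x = np.random.default_rng(0).standard_normal((64, 64))  # stand-in for the 64x64 input

rows = np.fft.fft(x, axis=1)               # step 1: 1-D FFT of each row
full = np.fft.fft(rows, axis=0)            # step 2: 1-D FFT of each column -> 2-D FFT
power = full.real ** 2 + full.imag ** 2    # step 3: 64x64 power spectrum coefficients

assert np.allclose(full, np.fft.fft2(x))   # row-column result matches direct 2-D FFT
print(power.shape)
```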
Multiple scattering in the remote sensing of natural surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Wen-Hao; Weeks, R.; Gillespie, A.R.
1996-07-01
Radiosity models predict the amount of light scattered many times (multiple scattering) among scene elements in addition to light interacting with a surface only once (direct reflectance). Such models are little used in remote sensing studies because they require accurate digital terrain models and, typically, large amounts of computer time. We have developed a practical radiosity model that runs relatively quickly within suitable accuracy limits, and have used it to explore problems caused by multiple scattering in image calibration, terrain correction, and surface roughness estimation for optical images. We applied the radiosity model to real topographic surfaces sampled at two very different spatial scales: 30 m (rugged mountains) and 1 cm (cobbles and gravel on an alluvial fan). The magnitude of the multiple-scattering (MS) effect varies with solar illumination geometry, surface reflectivity, sky illumination and surface roughness. At the coarse scale, for typical illumination geometries, as much as 20% of the image can be significantly affected (>5%) by MS, which can account for as much as ~10% of the radiance from sunlit slopes, and much more for shadowed slopes, otherwise illuminated only by skylight. At the fine scale, radiance from as much as 30-40% of the scene can have a significant MS component, and the MS contribution is locally as high as ~70%, although integrating to the meter scale reduces this limit to ~10%. Because the amount of MS increases with reflectivity as well as roughness, MS effects will distort the shape of reflectance spectra as well as changing their overall amplitude. The change is proportional to surface roughness. Our results have significant implications for determining reflectivity and surface roughness in remote sensing.
DCMS: A data analytics and management system for molecular simulation.
Kumar, Anand; Grupcev, Vladimir; Berrada, Meryem; Fogarty, Joseph C; Tu, Yi-Cheng; Zhu, Xingquan; Pandit, Sagar A; Xia, Yuni
Molecular Simulation (MS) is a powerful tool for studying physical/chemical features of large systems and has seen applications in many scientific and engineering domains. During the simulation process, the experiments generate data for a very large number of atoms, with the aim of observing their spatial and temporal relationships for scientific analysis. The sheer data volumes and their intensive interactions impose significant challenges for data accessing, managing, and analysis. To date, existing MS software systems fall short on storage and handling of MS data, mainly because of the lack of a platform to support applications that involve intensive data access and analytical processing. In this paper, we present the database-centric molecular simulation (DCMS) system our team developed in the past few years. The main idea behind DCMS is to store MS data in a relational database management system (DBMS) to take advantage of the declarative query interface (i.e., SQL), data access methods, query processing, and optimization mechanisms of modern DBMSs. A unique challenge is to handle the analytical queries that are often compute-intensive. For that, we developed novel indexing and query processing strategies (including algorithms running on modern co-processors) as integrated components of the DBMS. As a result, researchers can upload and analyze their data using efficient functions implemented inside the DBMS. Index structures are generated to store analysis results that may be interesting to other users, so that the results are readily available without duplicating the analysis. We have developed a prototype of DCMS based on the PostgreSQL system, and experiments using real MS data and workloads show that DCMS significantly outperforms existing MS software systems. We also used it as a platform to test other data management issues such as security and compression.
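The database-centric idea can be sketched with a toy relational schema for trajectory frames and a declarative analytical query. The schema and column names below are hypothetical, not the actual DCMS schema (which is built on PostgreSQL); SQLite is used here only to keep the sketch self-contained.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Hypothetical per-frame atom table; DCMS's real schema is not shown in the abstract.
db.execute("""CREATE TABLE atoms (
    frame INTEGER, atom_id INTEGER, element TEXT,
    x REAL, y REAL, z REAL)""")
db.executemany("INSERT INTO atoms VALUES (?,?,?,?,?,?)",
               [(0, 1, "O", 0.00, 0.0, 0.0),
                (0, 2, "H", 0.96, 0.0, 0.0)])

# A declarative analytical query: mean x-coordinate of oxygen atoms per frame.
for frame, mean_x in db.execute(
        "SELECT frame, AVG(x) FROM atoms WHERE element='O' GROUP BY frame"):
    print(frame, mean_x)
```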
French, Deborah
2013-01-16
At our institution, serum testosterone in adult males is measured by immunoassay while female and pediatric specimens are sent to a reference laboratory for liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis due to low concentrations. As this is of significant cost, a testosterone LC-MS/MS assay was developed in-house. A 5500 QTRAP® using electrospray ionization and a Shimadzu Prominence with a C18 column were used. Gradient elution with formic acid, water and methanol:acetonitrile at 0.5 ml/min had a 7-min run-time. A liquid-liquid extraction with hexane:ethyl acetate was carried out on 200 μl of serum. Multiple reaction monitoring was employed. Sample preparation took ~80 min for 21 samples. Six calibrators were used (0-1263 ng/dl; concentration assigned by NIST SRM 971) with 3 quality controls (9, 168 and 532 ng/dl). The limits of detection and quantitation were 1 and 2 ng/dl respectively. Extraction recovery was ~90% and ion suppression ~5%. Within-run and total precision studies yielded <15% CV at the limit of quantitation and <7% CV through the rest of the linear range. Isobaric interferences were baseline separated from testosterone. Method comparisons between this assay, an immunoassay, and another LC-MS/MS assay were completed. An accurate and sensitive LC-MS/MS assay for total testosterone was developed. Bringing this assay in-house reduces turnaround time for clinicians and patients and saves our institution funds. Copyright © 2012 Elsevier B.V. All rights reserved.
Tibiofemoral contact forces during walking, running and sidestepping.
Saxby, David J; Modenese, Luca; Bryant, Adam L; Gerus, Pauline; Killen, Bryce; Fortin, Karine; Wrigley, Tim V; Bennell, Kim L; Cicuttini, Flavia M; Lloyd, David G
2016-09-01
We explored the tibiofemoral contact forces and the relative contributions of muscles and external loads to those contact forces during various gait tasks. Second, we assessed the relationships between external gait measures and contact forces. A calibrated electromyography-driven neuromusculoskeletal model estimated the tibiofemoral contact forces during walking (1.44±0.22 m.s-1), running (4.38±0.42 m.s-1) and sidestepping (3.58±0.50 m.s-1) in healthy adults (n=60, 27.3±5.4 years, 1.75±0.11 m, and 69.8±14.0 kg). Contact forces increased from walking (∼1-2.8 BW) to running (∼3-8 BW); sidestepping had the largest maximum total (8.47±1.57 BW) and lateral contact forces (4.3±1.05 BW), while running had the largest maximum medial contact forces (5.1±0.95 BW). Relative muscle contributions increased across gait tasks (up to 80-90% of medial contact forces), and peaked during running for lateral contact forces (∼90%). Knee adduction moment (KAM) had weak relationships with tibiofemoral contact forces (all R²<0.36) and the relationships were gait task-specific. Step-wise regression of multiple external gait measures strengthened relationships (0.20
A comparison of kinematic algorithms to estimate gait events during overground running.
Smith, Laura; Preece, Stephen; Mason, Duncan; Bramah, Christopher
2015-01-01
The gait cycle is frequently divided into two distinct phases, stance and swing, which can be accurately determined from ground reaction force data. In the absence of such data, kinematic algorithms can be used to estimate footstrike and toe-off. The performance of previously published algorithms is not consistent between studies. Furthermore, previous algorithms have not been tested at higher running speeds nor used to estimate ground contact times. Therefore, the purpose of this study was both to develop a new, custom-designed event-detection algorithm and to compare its performance with four previously tested algorithms at higher running speeds. Kinematic and force data were collected from twenty runners during overground running at 5.6 m/s. The five algorithms were then implemented, and the estimated times for footstrike, toe-off and contact time were compared to ground reaction force data. There were large differences in the performance of each algorithm. The custom-designed algorithm provided the most accurate estimation of footstrike (True Error 1.2 ± 17.1 ms) and contact time (True Error 3.5 ± 18.2 ms). Compared to the other tested algorithms, the custom-designed algorithm provided an accurate estimation of footstrike and toe-off across different footstrike patterns. The custom-designed algorithm provides a simple but effective method to accurately estimate footstrike, toe-off and contact time from kinematic data. Copyright © 2014 Elsevier B.V. All rights reserved.
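As a flavor of what a kinematic event-detection rule looks like, the sketch below marks footstrike candidates at low-lying local minima of a foot marker's vertical trajectory. This is one common family of rules applied to toy data, not the custom algorithm validated in the paper.

```python
import numpy as np

def detect_footstrikes(z, floor_pct=20):
    """z: vertical foot-marker trajectory (m). Returns indices of local
    minima lying below the given percentile of the signal."""
    lo = np.percentile(z, floor_pct)
    return np.array([i for i in range(1, len(z) - 1)
                     if z[i] < z[i - 1] and z[i] <= z[i + 1] and z[i] < lo])

fs = 200.0                                      # sampling rate (Hz), illustrative
t = np.arange(0, 5, 1 / fs)
z = 0.05 + 0.04 * np.cos(2 * np.pi * 1.4 * t)   # toy 1.4 Hz stride signal
strikes = detect_footstrikes(z)
print("stride times (ms):", np.diff(strikes) / fs * 1e3)
```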
Effect of different exercise intensities on the pancreas of animals with metabolic syndrome.
Amaral, Fernanda; Lima, Nathalia Ea; Ornelas, Elisabete; Simardi, Lucila; Fonseca, Fernando Luiz Affonso; Maifrino, Laura Beatriz Mesiano
2015-01-01
Metabolic syndrome (MS) comprises several metabolic disorders that are risk factors for cardiovascular disease and has its source in the accumulation of visceral adipose tissue (VAT) and the development of insulin resistance. Despite studies showing beneficial effects of exercise on several risk factors for cardiovascular disease, studies evaluating the effects of different intensities of exercise training on the pancreas in experimental models are scarce. In total, 20 Wistar rats were used, divided into four groups: control (C), metabolic syndrome (MS, without exercise), metabolic syndrome and practice of walking (MSWalk), and metabolic syndrome and practice of running (MSRun). The applied procedures were induction of MS by fructose in drinking water; an experimental protocol of walking and running; weighing of body mass and VAT; sacrifice of the animals with blood collection and removal of organs; and processing of samples for light microscopy using the analysis of volume densities (Vv) of the studied structures. Running showed a reduction of VAT weight (-54%), triglyceride levels (-40%), Vv[islet] (-62%), Vv[islet.cells] (-22%), Vv[islet.interstitial] (-44%), and Vv[acinar.interstitial] (-24%) and an increase of Vv[acini] (+21%) and Vv[acinar.cells] (+22%). Regarding walking, we observed a decrease of VAT weight (-34%) and triglyceride levels (-27%), an increase of Vv[islet.cells] (+72%) and Vv[acinar.cells] (+7%), and a decrease of Vv[acini] (-4%) and Vv[acinar.interstitial] (-16%) when compared with those in the MS group. Our results suggest that the experimental model with low-intensity exercise (walking) seems particularly recommended for preventing the morphological and metabolic disorders occurring in MS.
Caminal, Pere; Sola, Fuensanta; Gomis, Pedro; Guasch, Eduard; Perera, Alexandre; Soriano, Núria; Mont, Lluis
2018-03-01
This study was conducted to test, in mountain running route conditions, the accuracy of the Polar V800™ monitor as a suitable device for monitoring the heart rate variability (HRV) of runners. Eighteen healthy subjects ran a route that included a range of running slopes such as those encountered in trail and ultra-trail races. The comparative study of the V800 and a Holter SEER 12 ECG Recorder™ included the analysis of RR time series and short-term HRV analysis. A correction algorithm was designed to obtain the corrected Polar RR intervals. Six 5-min segments related to different running slopes were considered for each subject. The correlation between corrected V800 RR intervals and Holter RR intervals was very high (r = 0.99, p < 0.001), and the bias was less than 1 ms. The limits of agreement (LoA) obtained for SDNN and RMSSD were (-0.25 to 0.32 ms) and (-0.90 to 1.08 ms), respectively. The effect size (ES) obtained for the time-domain HRV parameters was considered small (ES < 0.2). Frequency-domain HRV parameters did not differ (p > 0.05) and were well correlated (r ≥ 0.96, p < 0.001). Narrow limits of agreement, high correlations and small effect sizes suggest that the Polar V800 is a valid tool for the analysis of heart rate variability in athletes while running endurance events such as marathon, trail, and ultra-trail races.
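SDNN and RMSSD, the two time-domain statistics compared above, have standard definitions that are easy to compute from an RR-interval series in milliseconds; the RR values below are invented for illustration.

```python
import numpy as np

def sdnn(rr_ms):
    """Standard deviation of all RR (normal-to-normal) intervals."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences."""
    return float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))

rr = np.array([812, 790, 805, 798, 820, 801], dtype=float)  # toy RR series (ms)
print(f"SDNN = {sdnn(rr):.1f} ms, RMSSD = {rmssd(rr):.1f} ms")
```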
Feys, Peter; Moumdjian, Lousin; Van Halewyck, Florian; Wens, Inez; Eijnde, Bert O; Van Wijmeersch, Bart; Popescu, Veronica; Van Asch, Paul
2017-11-01
Exercise therapy studies in persons with multiple sclerosis (pwMS) have primarily focused on motor outcomes in mid disease stage, while cognitive function and neural correlates were only limitedly addressed. This pragmatic randomized controlled study investigated the effects of a remotely supervised, community-located "start-to-run" program on physical and cognitive function, fatigue, quality of life, brain volume, and connectivity. In all, 42 pwMS were randomized to either an experimental (EXP) or a waiting list control (WLC) group. The EXP group received individualized training instructions during 12 weeks (3×/week), to be performed in their community, aiming to participate in a running event. Measures were physical function (VO2max, sit-to-stand test, Six-Minute Walk Test (6MWT), Multiple Sclerosis Walking Scale-12 (MSWS-12)), cognitive function (Rao's Brief Repeatable Battery (BRB), Paced Auditory Serial Attention Test (PASAT)), fatigue (Fatigue Scale for Motor and Cognitive Function (FSMC)), quality of life (Multiple Sclerosis Impact Scale-29 (MSIS-29)), and imaging. Brain volumes and diffusion tensor imaging (DTI) were quantified using FSL-SIENA/FIRST and FSL-TBSS. In all, 35 pwMS completed the trial. Interaction effects in favor of the EXP group were found for VO2max, sit-to-stand test, MSWS-12, Spatial Recall Test, FSMC, MSIS-29, and pallidum volume. VO2max improved by 1.5 mL/kg/min, MSWS-12 by 4 points, FSMC by 11 points, and MSIS-29 by 14 points. The Spatial Recall Test improved by more than 10%. Community-located run training improved aerobic capacity, functional mobility, visuospatial memory, fatigue, quality of life, and pallidum volume in pwMS.
Appel, R D; Palagi, P M; Walther, D; Vargas, J R; Sanchez, J C; Ravier, F; Pasquali, C; Hochstrasser, D F
1997-12-01
Although two-dimensional electrophoresis (2-DE) computer analysis software packages have existed ever since 2-DE technology was developed, it is only now that the hardware and software technology allows large-scale studies to be performed on low-cost personal computers or workstations, and that setting up a 2-DE computer analysis system in a small laboratory is no longer considered a luxury. After a first attempt in the seventies and early eighties to develop 2-DE analysis software systems on hardware that had poor or even no graphical capabilities, followed in the late eighties by a wave of innovative software developments that were possible thanks to new graphical interface standards such as XWindows, a third generation of 2-DE analysis software packages has now come to maturity. It can be run on a variety of low-cost, general-purpose personal computers, thus making the purchase of a 2-DE analysis system easily attainable for even the smallest laboratory that is involved in proteome research. Melanie II 2-D PAGE, developed at the University Hospital of Geneva, is such a third-generation software system for 2-DE analysis. Based on unique image processing algorithms, this user-friendly object-oriented software package runs on multiple platforms, including Unix, MS-Windows 95 and NT, and Power Macintosh. It provides efficient spot detection and quantitation, state-of-the-art image comparison, statistical data analysis facilities, and is Internet-ready. Linked to proteome databases such as those available on the World Wide Web, it represents a valuable tool for the "Virtual Lab" of the post-genome era.
Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya
2013-01-01
Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data have been model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data analysis originating from multiple experimental runs. In the second part, we apply cyclic-Loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods, if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607
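For two runs, cyclic-Loess normalization amounts to smoothing the log-ratio M as a function of the average log-intensity A and moving each run half-way toward the fit; with more runs the procedure cycles over all pairs until convergence. Below is a minimal two-run sketch using statsmodels' lowess as the smoother; data and parameters are illustrative.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def cyclic_loess_pair(x, y, frac=0.4):
    """Normalize two positive intensity vectors for the same features."""
    lx, ly = np.log2(x), np.log2(y)
    m, a = lx - ly, 0.5 * (lx + ly)                     # MA representation
    fit = lowess(m, a, frac=frac, return_sorted=False)  # smooth M as a function of A
    return 2 ** (lx - fit / 2), 2 ** (ly + fit / 2)     # move each run half-way

rng = np.random.default_rng(4)
x = rng.lognormal(6, 1, 500)                            # run 1 intensities (synthetic)
y = 1.5 * x * np.exp(0.1 * rng.standard_normal(500))    # run 2: systematic 1.5x shift
xn, yn = cyclic_loess_pair(x, y)
print(np.median(np.log2(x / y)), np.median(np.log2(xn / yn)))  # ~ -0.58 -> ~ 0
```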
The Effect of Increasing Inertia upon Vertical Ground Reaction Forces during Locomotion
NASA Technical Reports Server (NTRS)
DeWitt, John K.; Hagan, R. Donald; Cromwell, Ronita L.
2007-01-01
The addition of inertia to exercising astronauts could increase ground reaction forces and potentially provide a greater health benefit. However, conflicting results have been reported regarding the adaptations to additional mass (inertia) without additional net weight (gravitational force) during locomotion. We examined the effect of increasing inertia while maintaining net gravitational force on vertical ground reaction forces and kinematics during walking and running. Vertical ground reaction force was measured for ten healthy adults (5 male/5 female) during walking (1.34 m/s) and running (3.13 m/s) using a force-measuring treadmill. Subjects completed locomotion at normal weight and mass, and at 10, 20, 30, and 40% of added inertial force. The added gravitational force was relieved with overhead suspension, so that the net force between the subject and treadmill at rest remained equal to 100% body weight. Peak vertical impact forces and loading rates increased with increased inertia during walking, and decreased during running. As inertia increased, peak vertical propulsive forces decreased during walking and did not change during running. Stride time increased during walking and running, and contact time increased during running. Vertical ground reaction force production and adaptations in gait kinematics were different between walking and running. The increased inertial forces were utilized independently from gravitational forces by the motor control system when determining coordination strategies.
de Jager, Andrew D; Bailey, Neville L
2011-09-01
A rapid LC-MS/MS method for confirmatory testing of five major categories of drugs of abuse (amphetamine-type substances, opiates, cocaine, cannabis metabolites and benzodiazepines) in urine has been developed. All drugs of abuse mandated by the Australian/New Zealand Standard AS/NZS 4308:2008 are quantified in a single chromatographic run. Urine samples are diluted with a mixture of isotope-labelled internal standards. An on-line trap-and-flush approach, followed by LC-ESI-MS/MS, has been successfully used to process samples in a functioning drugs-of-abuse laboratory. Following injection of diluted urine samples, compounds retained on the trap cartridge are flushed onto a reverse-phase C18 HPLC column (5-μm particle size) with embedded hydrophilic functionality. A total chromatographic run time of 15 min is required for adequate resolution. Automated quantitation software algorithms have been developed in-house using XML scripting to partially automate the identification of positive samples, taking into account ion ratios (IR) and retention times (Rt). The sensitivity of the assay was found to be adequate for the quantitation of drugs in urine at and below the confirmation cut-off concentrations prescribed by AS/NZS 4308:2008. Copyright © 2011 Elsevier B.V. All rights reserved.
Downloading from the OPAC: The Innovative Interfaces Environment.
ERIC Educational Resources Information Center
Spore, Stuart
1991-01-01
Discussion of downloading from online public access catalogs focuses on downloading to MS-DOS microcomputers from the INNOPAC online catalog system. Tools for capturing and postprocessing downloaded files are described, technical and institutional constraints on downloading are addressed, and an innovative program for overcoming such constraints…
Tsai, Tsung-Heng; Tadesse, Mahlet G.; Di Poto, Cristina; Pannell, Lewis K.; Mechref, Yehia; Wang, Yue; Ressom, Habtom W.
2013-01-01
Motivation: Liquid chromatography-mass spectrometry (LC-MS) has been widely used for profiling expression levels of biomolecules in various ‘-omic’ studies including proteomics, metabolomics and glycomics. Appropriate LC-MS data preprocessing steps are needed to detect true differences between biological groups. Retention time (RT) alignment, which is required to ensure that ion intensity measurements among multiple LC-MS runs are comparable, is one of the most important yet challenging preprocessing steps. Current alignment approaches estimate RT variability using either single chromatograms or detected peaks, but do not simultaneously take into account the complementary information embedded in the entire LC-MS data. Results: We propose a Bayesian alignment model for LC-MS data analysis. The alignment model provides estimates of the RT variability along with uncertainty measures. The model enables integration of multiple sources of information, including internal standards and clustered chromatograms, in a mathematically rigorous framework. We apply the model to LC-MS metabolomic, proteomic and glycomic data. The performance of the model is evaluated based on ground-truth data, by measuring correlation of variation, RT difference across runs and peak-matching performance. We demonstrate that the Bayesian alignment model significantly improves RT alignment performance through appropriate integration of relevant information. Availability and implementation: MATLAB code, raw and preprocessed LC-MS data are available at http://omics.georgetown.edu/alignLCMS.html Contact: hwr@georgetown.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24013927
The Weekly Fab Five: Things You Should Do Every Week To Keep Your Computer Running in Tip-Top Shape.
ERIC Educational Resources Information Center
Crispen, Patrick
2001-01-01
Describes five steps that school librarians should follow every week to keep their computers running at top efficiency. Explains how to update virus definitions; run Windows update; run ScanDisk to repair errors on the hard drive; run a disk defragmenter; and backup all data. (LRW)
Modeling of Women's 100-m Dash World Record: Wind-Aided or Not?
NASA Astrophysics Data System (ADS)
Hazelrigg, Conner; Waibel, Bryson; Baker, Blane
2015-11-01
On July 16, 1988, Florence Griffith Joyner (FGJ) shattered the women's 100-m dash world record (WR) with a time of 10.49 s, breaking the previous mark by an astonishing 0.27 s. By all accounts FGJ dominated the race that day, securing her place as the premier female sprinter of that era, and possibly all time. In the aftermath of such an extraordinary performance, track officials immediately assumed that her posted time was wind aided—that is, attained under tailwind conditions beyond the legal limit of 2.0 m/s for world records. However, wind-measuring devices at the track site showed zero wind conditions during her WR performance. Before and during FGJ's race, other wind-measuring devices indicated speeds exceeding 4.0 m/s at the site of the triple jump runway, located on the same field as the running track. Video clips of flags placed near the starting line of FGJ's race also revealed tailwind conditions. Using available data from that era, the study here incorporates modeling techniques to compute velocity and position as functions of time for no wind and tailwind conditions. Modeling under no wind conditions produces a 100-m time of 10.70 s, a performance clearly attainable by FGJ during this stage of her sprinting career. Incorporating tailwinds of 4.0 m/s into the computations reduces this time by approximately 0.20 s, in close agreement with FGJ's record-breaking performance. These results strongly suggest that tailwinds of order 4 m/s were present during FGJ's world record race even though wind-measuring devices at the track site did not register these speeds. In spite of such strong evidence to support a wind-aided race on July 16, 1988, FGJ remains one of the top female sprinters in history and would likely hold the WR even today, given that she attained a non-wind-aided 100-m time of 10.61 s on the day following her WR performance.
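A minimal version of such a sprint model treats the runner as a point mass with constant net propulsion, linear internal resistance (as in Keller's classic model), and aerodynamic drag taken relative to the air, so a tailwind w enters through a (v - w)² term. The sketch below uses an assumed equation of motion with illustrative parameter values tuned only to give times near 10.5 s; it is not the authors' exact model.

```python
def race_time(wind, F=9.7, tau=1.15, k=0.0040, dist=100.0, dt=1e-3):
    """Euler-integrate dv/dt = F - v/tau - k*(v - wind)*|v - wind| and return
    the time (s) to cover `dist` metres. F is propulsion per unit mass
    (m/s^2); all parameter values are illustrative, not fitted to FGJ."""
    t = v = x = 0.0
    while x < dist:
        a = F - v / tau - k * (v - wind) * abs(v - wind)
        v += a * dt
        x += v * dt
        t += dt
    return t

print(f"still air:      {race_time(0.0):.2f} s")
print(f"4 m/s tailwind: {race_time(4.0):.2f} s")   # ~0.2-0.3 s faster
```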
Code of Federal Regulations, 2013 CFR
2013-07-01
... meters per run) Performance test (Method 29 at 40 CFR part 60, appendix A-8). Use GFAAS or ICP/MS for the...-8. Use GFAAS or ICP/MS for the analytical finish. Fugitive emissions from ash handling Visible...
Code of Federal Regulations, 2014 CFR
2014-07-01
... meters per run) Performance test (Method 29 at 40 CFR part 60, appendix A-8). Use GFAAS or ICP/MS for the...-8. Use GFAAS or ICP/MS for the analytical finish. Fugitive emissions from ash handling Visible...
AUTOMATED SOLID PHASE EXTRACTION GC/MS FOR ANALYSIS OF SEMIVOLATILES IN WATER AND SEDIMENTS
Data are presented on the development of a new automated system combining solid-phase extraction (SPE) with GC/MS for the single-run analysis of water samples containing a broad range of organic compounds. The system uses commercially available automated in-line sampl...
Program For Generating Interactive Displays
NASA Technical Reports Server (NTRS)
Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl;
1991-01-01
Sun/Unix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to easily construct custom software interface between user and application program and to move resulting interface program and its application program to different computers. TAE+ viewed as productivity tool for application developers and application end users, who benefit from resultant consistent and well-designed user interface sheltering them from intricacies of computer. Available in form suitable for following six different groups of computers: DEC VAXstation and other VMS VAX computers, Macintosh II computers running A/UX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running SunOS, and IBM RT/PC and PS/2 computers.
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing enables researchers and scientists to form large parallel computing environments that harness the computing power of millions of computers on the Internet, and to use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications, and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computing resources toward running advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational chunks. A relational database system is utilized for managing data connections and work-unit queues for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
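The work-unit life cycle described above (split the run into small chunks, queue them, let volunteers pull units and post results) can be sketched in a few lines. The platform itself runs JavaScript in the browser, so the Python below is only a language-neutral illustration with invented names.

```python
import queue

work = queue.Queue()
results = {}

# Coordinator: split a model run into small work units (sizes are invented).
for unit_id in range(8):
    work.put({"id": unit_id, "cells": range(unit_id * 100, (unit_id + 1) * 100)})

def volunteer_step():
    unit = work.get()                             # volunteer pulls a work unit
    total = sum(c * 0.5 for c in unit["cells"])   # stand-in for a model kernel
    results[unit["id"]] = total                   # ...and posts the result back

while not work.empty():
    volunteer_step()
print(f"{len(results)} units completed")
```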
Using a Population-Ecology Simulation in College Courses.
ERIC Educational Resources Information Center
Hinze, Kenneth E.
1984-01-01
Describes instructional use of a microcomputer version of the WORLD2 global population-ecology simulation. Reactions of students and instructors are discussed and a WORLD2 simulation assignment is appended. The BASIC version used by the author runs on Apple II, DOS 3.3, with 80 column board. (MBR)
Jourdil, Jean-François; Némoz, Benjamin; Gautier-Veyret, Elodie; Romero, Charlotte; Stanke-Labesque, Françoise
2018-03-30
Adalimumab (ADA) and infliximab (IFX) are therapeutic monoclonal antibodies (TMabs) targeting tumor necrosis factor-alpha (TNFα). They are used to treat inflammatory diseases. Clinical trials have suggested that therapeutic drug monitoring of ADA or IFX could improve treatment response and cost-effectiveness. However, ADA and IFX were quantified by ELISA in all these studies, and the discrepancies between the results obtained raise questions about their reliability. We describe here the validation of a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for the simultaneous quantification of ADA and IFX in human samples. Full-length antibodies labeled with stable isotopes were added to plasma samples as an internal standard. Samples were then prepared using Mass Spectrometry Immuno Assay (MSIA) followed by trypsin digestion prior to ADA and IFX quantification by LC-MS/MS. ADA and IFX were quantified in serum from patients treated with ADA (n=21) or IFX (n=22), and the concentrations obtained were compared with those obtained with a commercial ELISA kit. The chromatography run lasted 8.6 minutes and the quantification range was 1 to 26 mg/L. The method was reproducible, repeatable and accurate. For both levels of internal quality control, for ADA and IFX, inter- and intra-day coefficients of variation and accuracies were all within 15%, in accordance with FDA recommendations. No significant cross-contamination effect was noted. Good agreement was found between LC-MS/MS and ELISA results, for both ADA and IFX. This LC-MS/MS method can be used for the quantification of ADA and IFX in a single analytical run and for the optimization of LC-MS/MS resource use in clinical pharmacology laboratories.
Shibata, Kaito; Naito, Takafumi; Okamura, Jun; Hosokawa, Seiji; Mineta, Hiroyuki; Kawakami, Junichi
2017-11-30
Proteomic approaches using liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) without an immunopurification technique have not been applied to the determination of serum cetuximab. This study developed a simple and rapid LC-MS/MS method for the absolute determination of cetuximab in human serum and applied it to clinical settings. Surrogate peptides derived from cetuximab digests were selected using a Fourier transform mass spectrometer. Reduced-alkylated serum cetuximab without immunopurification was digested for 20 minutes using immobilized trypsin, and the digestion products were purified by solid-phase extraction. The LC-MS/MS was run in positive ion multiple reaction monitoring mode. This method was applied to the determination of serum samples from head and neck cancer patients treated with cetuximab. The chromatographic run time was 10 minutes and no peaks interfering with the surrogate peptides were observed in serum digestion products. The calibration curve of cetuximab in serum was linear over the concentration range of 4-200 μg/mL. The lower limit of quantification of cetuximab in human serum was 4 μg/mL. The intra-assay and inter-assay precision and accuracy were less than 13.2% and within 88.0-100.7%, respectively. The serum concentration range of cetuximab was 19-140 μg/mL in patients. The serum cetuximab concentrations by LC-MS/MS were correlated with those by ELISA (r=0.899, P<0.01) and the mean bias was 1.5% in cancer patients. In conclusion, the present simple and rapid method with acceptable analytical performance can be helpful for evaluating the absolute concentration of serum cetuximab in clinical settings. Copyright © 2017 Elsevier B.V. All rights reserved.
Sodium Binding Sites and Permeation Mechanism in the NaChBac Channel: A Molecular Dynamics Study.
Guardiani, Carlo; Rodger, P Mark; Fedorenko, Olena A; Roberts, Stephen K; Khovanov, Igor A
2017-03-14
NaChBac was the first discovered bacterial sodium voltage-dependent channel, yet computational studies are still limited due to the lack of a crystal structure. In this work, a pore-only construct built using the NavMs template was investigated using unbiased molecular dynamics and metadynamics. The potential of mean force (PMF) from the unbiased run features four minima, three of which correspond to sites IN, CEN, and HFS discovered in NavAb. During the run, the selectivity filter (SF) is spontaneously occupied by two ions, and frequent access of a third one is often observed. In the innermost sites IN and CEN, Na + is fully hydrated by six water molecules and occupies an on-axis position. In site HFS sodium interacts with a glutamate and a serine from the same subunit and is forced to adopt an off-axis placement. Metadynamics simulations biasing one and two ions show an energy barrier in the SF that prevents single-ion permeation. An analysis of the permeation mechanism was performed both computing minimum energy paths in the axial-axial PMF and through a combination of Markov state modeling and transition path theory. Both approaches reveal a knock-on mechanism involving at least two but possibly three ions. The currents predicted from the unbiased simulation using linear response theory are in excellent agreement with single-channel patch-clamp recordings.
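For readers unfamiliar with the construction, a PMF along a coordinate can be extracted from an unbiased trajectory by Boltzmann inversion of the sampled distribution, F(z) = -kT ln P(z). The sketch below applies this generic recipe to synthetic data in reduced units; it is not the paper's analysis pipeline.

```python
import numpy as np

kT = 1.0                                                  # reduced units
z = np.random.default_rng(3).normal(0.0, 0.5, 100_000)    # toy axial ion trajectory
hist, edges = np.histogram(z, bins=50, density=True)      # sampled distribution P(z)
centers = 0.5 * (edges[:-1] + edges[1:])

mask = hist > 0                                           # avoid log(0) in empty bins
pmf = -kT * np.log(hist[mask])                            # Boltzmann inversion
pmf -= pmf.min()                                          # zero the global minimum
print(f"apparent barrier at the edges: {pmf.max():.2f} kT")
```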
GRIDVIEW: Recent Improvements in Research and Education Software for Exploring Mars Topography
NASA Technical Reports Server (NTRS)
Roark, J. H.; Masuoka, C. M.; Frey, H. V.
2004-01-01
GRIDVIEW is being developed by the GEODYNAMICS Branch at NASA's Goddard Space Flight Center and can be downloaded on the web at http://geodynamics.gsfc.nasa.gov/gridview/. The program is very mature and has been successfully used for more than four years, but is still under development as we add new features for data analysis and visualization. The software can run on any computer supported by the IDL virtual machine application supplied by RSI. The virtual machine application is currently available for recent versions of MS Windows, MacOS X, Red Hat Linux and UNIX. Minimum system memory requirement is 32 MB, however loading large data sets may require larger amounts of RAM to function adequately.
Microgravity computing codes. User's guide
NASA Astrophysics Data System (ADS)
1982-01-01
Codes used in microgravity experiments to compute fluid parameters and to obtain data graphically are introduced. The computer programs are stored on two diskettes, compatible with the floppy disk drives of the Apple II. Two versions of both disks are available (DOS-2 and DOS-3). The codes are written in BASIC and are structured as interactive programs. Interaction takes place through the keyboard of any standard 48K Apple II system with a single floppy disk drive. The programs are protected against wrong commands given by the operator. The programs are described step by step, in the same order as the instructions displayed on the monitor. Most of these instructions are shown, with samples of computation and of graphics.
Vibration Pattern Imager (VPI): A control and data acquisition system for scanning laser vibrometers
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Brown, Donald E.; Shaffer, Thomas A.
1993-01-01
The Vibration Pattern Imager (VPI) system was designed to control and acquire data from scanning laser vibrometer sensors. The PC-based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor, but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. The sensor itself is not part of the VPI system. A graphical interface program, which runs on a PC under the MS-DOS operating system, functions in an interactive mode and communicates with the DSP and I/O boards in a user-friendly fashion with the aid of pop-up menus. Two types of data may be acquired with the VPI system: single point or 'full field.' In the single point mode, time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and is stored by the PC. The position of the measuring point (adjusted by mirrors in the sensor) is controlled via a mouse input. The mouse input is translated to output voltages by the D/A converter on the I/O board to control the mirror servos. In the 'full field' mode, the measurement point is moved over a user-selectable rectangular area. The time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and converted to a root-mean-square (rms) value by the DSP board. The rms 'full field' velocity distribution is then uploaded for display and storage on the PC.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 CFR part 60, appendix A-8). Use GFAAS or ICP/MS for the analytical finish. Lead 0.00062 milligrams... per run) Performance test (Method 29 at 40 CFR part 60, appendix A-8. Use GFAAS or ICP/MS for the...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 CFR part 60, appendix A-8). Use GFAAS or ICP/MS for the analytical finish. Lead 0.00062 milligrams... per run) Performance test (Method 29 at 40 CFR part 60, appendix A-8. Use GFAAS or ICP/MS for the...
de Kanel, J; Vickery, W E; Waldner, B; Monahan, R M; Diamond, F X
1998-05-01
A forensic procedure for the quantitative confirmation of lysergic acid diethylamide (LSD) and the qualitative confirmation of its metabolite, N-demethyl-LSD, in blood, serum, plasma, and urine samples is presented. The Zymark RapidTrace was used to perform fully automated solid-phase extractions of all specimen types. After extract evaporation, confirmations were performed using liquid chromatography (LC) followed by positive electrospray ionization (ESI+) mass spectrometry/mass spectrometry (MS/MS) without derivatization. Quantitation of LSD was accomplished using LSD-d3 as an internal standard. The limit of quantitation (LOQ) for LSD was 0.05 ng/mL. The limit of detection (LOD) for both LSD and N-demethyl-LSD was 0.025 ng/mL. The recovery of LSD was greater than 95% at levels of 0.1 ng/mL and 2.0 ng/mL. For LSD at 1.0 ng/mL, the within-run and between-run (different day) relative standard deviations (RSDs) were 2.2% and 4.4%, respectively.
Kostić, Nađa; Dotsikas, Yannis; Jović, Nebojša; Stevanović, Galina; Malenović, Anđelija; Medenica, Mirjana
2014-07-01
This paper presents an LC-MS/MS method for the determination of the antiepileptic drug vigabatrin in dried plasma spots (DPS). Due to its zwitterionic chemical structure, a pre-column derivatization procedure was performed, aiming to yield enhanced ionization efficiency and improved chromatographic behaviour. Propyl chloroformate, in the presence of propanol, was selected as the best derivatization reagent, providing a strong signal along with a reasonable run time. A relatively novel sample collection technique, DPS, was utilized, offering easy sample handling and analysis using micro-volume samples (∼5 μL). Derivatized vigabatrin and its internal standard, 4-aminocyclohexanecarboxylic acid, were extracted by liquid-liquid extraction (LLE) and determined in positive ion mode by applying two SRM transitions per analyte. A Zorbax Eclipse XDB-C8 column (150×4.6 mm, 5 μm particle size), maintained at 30°C, was utilized with a mobile phase composed of acetonitrile:0.15% formic acid (85:15, v/v). The flow rate was 550 μL/min and the total run time 4.5 min. The assay exhibited excellent linearity over the concentration range of 0.500-50.0 μg/mL, which is suitable for the determination of vigabatrin levels after per os administration in children and youths with epilepsy who were on vigabatrin therapy, with or without co-medication. Specificity, accuracy, precision, recovery, matrix effect and stability were also estimated and assessed within acceptance criteria. Copyright © 2014 Elsevier B.V. All rights reserved.
Characterizing the Mechanical Properties of Running-Specific Prostheses
Beck, Owen N.; Taboga, Paolo; Grabowski, Alena M.
2016-01-01
The mechanical stiffness of running-specific prostheses likely affects the functional abilities of athletes with leg amputations. However, each prosthetic manufacturer recommends prostheses based on subjective stiffness categories rather than performance-based metrics. The actual mechanical stiffness values of running-specific prostheses (i.e., kN/m) are unknown. Consequently, we sought to characterize and disseminate the stiffness values of running-specific prostheses so that researchers, clinicians, and athletes can objectively evaluate prosthetic function. We characterized the stiffness values of 55 running-specific prostheses across various models, stiffness categories, and heights using forces and angles representative of those measured from athletes with transtibial amputations during running. Characterizing prosthetic force-displacement profiles with a 2nd-degree polynomial explained 4.4% more of the variance than a linear function (p<0.001). The prosthetic stiffness values of manufacturer-recommended stiffness categories varied between prosthetic models (p<0.001). Also, prosthetic stiffness was 10% to 39% less at angles typical of running at 3 m/s and 6 m/s (10°-25°) compared to neutral (0°) (p<0.001). Furthermore, prosthetic stiffness was inversely related to height in J-shaped (p<0.001), but not C-shaped, prostheses. Running-specific prostheses should be tested under the demands of the respective activity in order to derive relevant characterizations of stiffness and function. In all, our results indicate that when athletes with leg amputations alter prosthetic model, height, and/or sagittal plane alignment, their prosthetic stiffness profiles also change; therefore variations in comfort, performance, etc. may be indirectly due to altered stiffness. PMID:27973573
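The characterization step, fitting force-displacement profiles with linear and 2nd-degree polynomials and comparing variance explained, can be reproduced generically as below. The force-displacement data are synthetic placeholders, not measurements from the study.

```python
import numpy as np

disp = np.linspace(0.0, 0.06, 20)                   # displacement (m)
force = 18e3 * disp + 120e3 * disp ** 2             # toy stiffening spring (N)
force += np.random.default_rng(0).normal(0, 30, disp.size)  # measurement noise

def r_squared(y, yhat):
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

for deg in (1, 2):
    coef = np.polyfit(disp, force, deg)
    print(f"degree {deg}: R^2 = {r_squared(force, np.polyval(coef, disp)):.4f}")

# Tangent stiffness (kN/m) at a given displacement from the quadratic fit:
c2, c1, c0 = np.polyfit(disp, force, 2)
print(f"stiffness at 30 mm: {(2 * c2 * 0.03 + c1) / 1e3:.1f} kN/m")
```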
The Effect of Increasing Mass upon Locomotion
NASA Technical Reports Server (NTRS)
DeWitt, John; Hagan, Donald
2007-01-01
The purpose of this investigation was to determine if increasing body mass while maintaining bodyweight would affect ground reaction forces and joint kinetics during walking and running. It was hypothesized that performing gait with increased mass while maintaining body weight would result in greater ground reaction forces, and would affect the net joint torques and work at the ankle, knee and hip when compared to gait with normal mass and bodyweight. Vertical ground reaction force was measured for ten subjects (5M/5F) during walking (1.34 m/s) and running (3.13 m/s) on a treadmill. Subjects completed one minute of locomotion at normal mass and bodyweight and at four added mass (AM) conditions (10%, 20%, 30% and 40% of body mass) in random order. Three-dimensional joint position data were collected via videography. Walking and running were analyzed separately. The addition of mass resulted in several effects. Peak impact forces and loading rates increased during walking, but decreased during running. Peak propulsive forces decreased during walking and did not change during running. Stride time increased and hip extensor angular impulse and positive work increased as mass was added for both styles of locomotion. Work increased at a greater rate during running than walking. The adaptations to additional mass that occur during walking are different than during running. Increasing mass during exercise in microgravity may be beneficial to increasing ground reaction forces during walking and strengthening hip musculature during both walking and running. Future study in true microgravity is required to determine if the adaptations found would be similar in a weightless environment.
FPGA-based real-time swept-source OCT systems for B-scan live-streaming or volumetric imaging
NASA Astrophysics Data System (ADS)
Bandi, Vinzenz; Goette, Josef; Jacomet, Marcel; von Niederhäusern, Tim; Bachmann, Adrian H.; Duelk, Marcus
2013-03-01
We have developed a Swept-Source Optical Coherence Tomography (Ss-OCT) system with high-speed, real-time signal processing on a commercially available Data-Acquisition (DAQ) board with a Field-Programmable Gate Array (FPGA). The Ss-OCT system simultaneously acquires OCT and k-clock reference signals at 500 MS/s. From the k-clock signal of each A-scan we extract a remap vector for the k-space linearization of the OCT signal. The linear but oversampled interpolation is followed by a 2048-point FFT, additional auxiliary computations, and a data transfer to a host computer for real-time, live-streaming B-scan or volumetric C-scan OCT visualization. We achieve a 100 kHz A-scan rate by parallelization of our hardware algorithms, which run on standard and affordable, commercially available DAQ boards. Our main development tool for signal analysis as well as for hardware synthesis is MATLAB® with add-on toolboxes and 3rd-party tools.
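Per A-scan, the processing chain is: build a remap vector from the k-clock, resample the fringe onto a uniform wavenumber grid, then FFT. The NumPy sketch below is a software stand-in for the FPGA pipeline, with an invented nonlinear sweep as input.

```python
import numpy as np

def process_ascan(fringe, kclock_phase, n_fft=2048):
    """fringe: raw OCT samples; kclock_phase: unwrapped k-clock phase per
    sample (monotonic, proportional to wavenumber). Returns an A-scan."""
    k_uniform = np.linspace(kclock_phase[0], kclock_phase[-1], len(fringe))
    remapped = np.interp(k_uniform, kclock_phase, fringe)   # k-space linearization
    spectrum = np.fft.fft(remapped * np.hanning(len(remapped)), n_fft)
    return np.abs(spectrum[:n_fft // 2])                    # magnitude A-scan

# Toy input: a nonlinear sweep producing a chirped interference fringe.
n = 1800
phase = np.linspace(0, 1, n) ** 1.2 * 600 * np.pi           # nonlinear k-clock phase
ascan = process_ascan(np.cos(phase), phase)
print("peak depth bin:", int(np.argmax(ascan)))
```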
NASA Astrophysics Data System (ADS)
Compton, Duane C.; Snapp, Robert R.
2007-09-01
TWiGS (two-dimensional wavelet transform with generalized cross validation and soft thresholding) is a novel algorithm for denoising liquid chromatography-mass spectrometry (LC-MS) data for use in "shotgun" proteomics. Proteomics, the study of all proteins in an organism, is an emerging field that has already proven successful for drug and disease discovery in humans. There are a number of constraints that limit the effectiveness of LC-MS for shotgun proteomics, where the chemical signals are typically weak and data sets are computationally large. Most algorithms suffer greatly from researcher-driven bias, making the results irreproducible and unusable by other laboratories. We thus introduce a new algorithm, TWiGS, that removes electrical (additive white) and chemical noise from LC-MS data sets. TWiGS is developed to be a true two-dimensional algorithm, which operates in the time-frequency domain and minimizes the amount of researcher bias. It is based on the traditional discrete wavelet transform (DWT), which allows for fast and reproducible analysis. The separable two-dimensional DWT decomposition is paired with generalized cross validation and soft thresholding. The wavelet (Haar, Coiflet-6, or Daubechies-4) and the number of decomposition levels are determined from observed experimental results. Using a synthetic LC-MS data model, TWiGS accurately retains key characteristics of the peaks in both the time and m/z domains, and can detect peaks from noise of the same intensity. TWiGS is applied to angiotensin I and II samples run on an LC-ESI-TOF-MS (liquid chromatography-electrospray ionization-time-of-flight mass spectrometry) instrument to demonstrate its utility for the detection of low-lying peaks obscured by noise.
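The transform-threshold-reconstruct structure of TWiGS can be sketched with PyWavelets. Note that the threshold below is the universal (MAD-based) estimate rather than the generalized cross validation criterion the algorithm actually uses, so this is an illustration of the pipeline's shape, not the published method.

```python
import numpy as np
import pywt

def denoise_lcms(intensity, wavelet="db4", levels=3):
    """Soft-threshold the detail bands of a separable 2D DWT.

    intensity: 2D array (retention-time bins x m/z bins).
    """
    coeffs = pywt.wavedec2(intensity, wavelet, level=levels)
    # Noise sigma from the median absolute deviation of the finest
    # diagonal detail band (a stand-in for the GCV threshold choice).
    sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(intensity.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```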
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro
2016-08-01
We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N²) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10⁷) to 300 ms (N = 10⁹). These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
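FDPS itself is a C++ template library, but the division of labor it proposes is easy to illustrate: the user supplies only the particle data and the pairwise interaction, on the order of the direct-sum kernel sketched below (here in Python for brevity), while the framework supplies domain decomposition, particle exchange, and tree-based acceleration of exactly this O(N²) computation.

```python
import numpy as np

def direct_gravity(pos, mass, eps=1.0e-2):
    """O(N^2) direct-sum gravitational accelerations (G = 1).

    pos: (N, 3) positions, mass: (N,) masses, eps: Plummer softening.
    In an FDPS program this pairwise force is essentially all the
    physics the user writes; parallelization is the framework's job.
    """
    n = pos.shape[0]
    acc = np.zeros_like(pos)
    for i in range(n):
        dr = pos - pos[i]                       # vectors to all others
        r2 = np.sum(dr * dr, axis=1) + eps**2   # softened distances
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                         # skip self-interaction
        acc[i] = np.sum((mass * inv_r3)[:, None] * dr, axis=0)
    return acc
```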
Resource Sharing of Micro-Software, or, What Ever Happened to All That CP/M Compatibility?
ERIC Educational Resources Information Center
DeYoung, Barbara
1984-01-01
Explores incompatible operating systems as the basic reason why software packages will not work on different microcomputers; defines operating system; explores compatibility issues surrounding the IBM MS-DOS; and presents two future trends in hardware and software developments which indicate a return to true compatibility. (Author/MBR)
Extending the Online Public Access Catalog into the Microcomputer Environment.
ERIC Educational Resources Information Center
Sutton, Brett
1990-01-01
Describes PCBIS, a database program for MS-DOS microcomputers that features a utility for automatically converting online public access catalog search results stored as text files into structured database files that can be searched, sorted, edited, and printed. Topics covered include the general features of the program, record structure, record…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Jacobs, Jon M.
2011-12-01
Quantification of LC-MS peak intensities assigned during peptide identification in a typical comparative proteomics experiment will deviate from run to run of the instrument due to both technical and biological variation. Thus, normalization of peak intensities across an LC-MS proteomics dataset is a fundamental step in pre-processing. However, the downstream analysis of LC-MS proteomics data can be dramatically affected by the normalization method selected. Current normalization procedures for LC-MS proteomics data are presented in the context of normalization values derived from subsets of the full collection of identified peptides. The distribution of these normalization values is unknown a priori. If they are not independent from the biological factors associated with the experiment, the normalization process can introduce bias into the data, which will affect downstream statistical biomarker discovery. We present a novel approach to evaluate normalization strategies, where a normalization strategy includes the peptide selection component associated with the derivation of normalization values. Our approach evaluates the effect of normalization on the between-group variance structure in order to identify candidate normalization strategies that improve the structure of the data without introducing bias into the normalized peak intensities.
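As a concrete reference point for what a "normalization strategy" involves, the sketch below implements one of the simplest candidates, global median scaling over all identified peptides, together with a crude between-group variance summary of the kind the evaluation targets. Both functions are illustrative stand-ins and not the authors' procedure.

```python
import numpy as np

def median_normalize(log_intensities):
    """Shift each run so its peptide median matches the grand median.

    log_intensities: peptides x runs matrix of log2 peak intensities,
    with NaN marking peptides not observed in a given run.
    """
    run_medians = np.nanmedian(log_intensities, axis=0)
    return log_intensities - run_medians + np.nanmean(run_medians)

def between_group_variance(data, groups):
    """Per-peptide variance of group means. A normalization strategy
    that introduces bias shows up as a distorted version of this
    between-group variance structure."""
    means = np.array([np.nanmean(data[:, groups == g], axis=1)
                      for g in np.unique(groups)])
    return np.nanvar(means, axis=0)
```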
Searleman, Adam C.; Iliuk, Anton B.; Collier, Timothy S.; Chodosh, Lewis A.; Tao, W. Andy; Bose, Ron
2014-01-01
Altered protein phosphorylation is a feature of many human cancers that can be targeted therapeutically. Phosphopeptide enrichment is a critical step for maximizing the depth of phosphoproteome coverage by MS, but remains challenging for tissue specimens because of their high complexity. We describe the first analysis of a tissue phosphoproteome using polymer-based metal ion affinity capture (PolyMAC), a nanopolymer that has excellent yield and specificity for phosphopeptide enrichment, on a transgenic mouse model of HER2-driven breast cancer. By combining phosphotyrosine immunoprecipitation with PolyMAC, 411 unique peptides with 139 phosphotyrosine, 45 phosphoserine, and 29 phosphothreonine sites were identified from five LC-MS/MS runs. Combining reverse phase liquid chromatography fractionation at pH 8.0 with PolyMAC identified 1571 unique peptides with 1279 phosphoserine, 213 phosphothreonine, and 21 phosphotyrosine sites from eight LC-MS/MS runs. Linear motif analysis indicated that many of the phosphosites correspond to well-known phosphorylation motifs. Analysis of the tyrosine phosphoproteome with the Drug Gene Interaction database uncovered a network of potential therapeutic targets centered on Src family kinases with inhibitors that are either FDA-approved or in clinical development. These results demonstrate that PolyMAC is well suited for phosphoproteomic analysis of tissue specimens. PMID:24723360
Effects of static stretching on 1-mile uphill run performance.
Lowery, Ryan P; Joy, Jordan M; Brown, Lee E; Oliveira de Souza, Eduardo; Wistocki, David R; Davis, Gregory S; Naimo, Marshall A; Zito, Gina A; Wilson, Jacob M
2014-01-01
It has previously been demonstrated that static stretching was associated with a decrease in running economy and distance run during a 30-minute time trial in trained runners. Recently, the detrimental effects of static stretching on economy were found to be limited to the first few minutes of an endurance bout. However, economy remains to be studied for its direct effects on performance during shorter endurance events. The aim of this study was to investigate the effects of static stretching on 1-mile uphill run performance, electromyography (EMG), ground contact time (GCT), and flexibility. Ten trained male distance runners aged 24 ± 5 years with an average VO2max of 64.9 ± 6.5 mL·kg-1·min-1 were recruited. Subjects reported to the laboratory on 3 separate days interspersed by 72 hours. On day 1, anthropometrics and VO2max were determined on a motor-driven treadmill. On days 2 and 3, subjects performed a 5-minute treadmill warm-up and either performed a series of 6 lower-body stretches for three 30-second repetitions or sat still for 10 minutes. Time to complete a 1-mile run under stretching and nonstretching conditions took place in randomized order. For the performance run, subjects were instructed to run as fast as possible at a set incline of 5% until a distance of 1 mile was completed. Flexibility from the sit and reach test, EMG, GCT, and performance, determined by time to complete the 1-mile run, were recorded after each condition. Time to complete the run was significantly less (6:51 ± 0:28 minutes) in the nonstretching condition as compared with the stretching condition (7:04 ± 0:32 minutes). A significant condition-by-time interaction for muscle activation existed, with no change in the nonstretching condition (pre 91.3 ± 11.6 mV to post 92.2 ± 12.9 mV) but an increase in the stretching condition (pre 91.0 ± 11.6 mV to post 105.3 ± 12.9 mV). A significant condition-by-time interaction for GCT was also present, with no change in the nonstretching condition (pre 211.4 ± 20.8 ms to post 212.5 ± 21.7 ms) but an increase in the stretching trial (pre 210.7 ± 19.6 ms to post 237.21 ± 22.4 ms). A significant condition-by-time interaction for flexibility was found, which increased in the stretching condition (pre 33.1 ± 2 to post 38.8 ± 2) but was unchanged in the nonstretching condition (pre 33.5 ± 2 to post 35.2 ± 2). Study findings indicate that static stretching decreases performance in short endurance bouts (∼8%) while increasing GCT and muscle activation. Coaches and athletes may be at risk for decreased performance after a static stretching bout. Therefore, static stretching should be avoided before a short endurance bout.
Chronic exercise is considered one of the most effective means of countering symptoms of the metabolic syndrome (MS) such as obesity and hyperglycemia. Rodent models of forced or voluntary exercise are often used to study the mechanisms of MS and type 2 diabetes. However, there ...
ERIC Educational Resources Information Center
Ratliff, Richard G.; And Others
1976-01-01
A total of 540 college students were run in two verbal discrimination learning studies (the second, a replication of the first) with one of three verbal reward conditions. In both studies, equal numbers of male and female subjects were run in each reward condition by each male and female experimenter. (MS)
Noetzli, Muriel; Ansermot, Nicolas; Dobrinas, Maria; Eap, Chin B
2012-05-01
A previously developed high performance liquid chromatography-mass spectrometry (HPLC-MS) procedure for the simultaneous determination of antidementia drugs, including donepezil, galantamine, memantine, rivastigmine and its metabolite NAP 226-90, was transferred to an ultra performance liquid chromatography system coupled to a tandem mass spectrometer (UPLC-MS/MS). The drugs and their internal standards ([²H₇]-donepezil, [¹³C,²H₃]-galantamine, [¹³C₂,²H₆]-memantine, [²H₆]-rivastigmine) were extracted from 250 μL human plasma by protein precipitation with acetonitrile. Chromatographic separation was achieved on a reverse-phase column (BEH C18 2.1 mm × 50 mm; 1.7 μm) with a gradient elution of an ammonium acetate buffer at pH 9.3 and acetonitrile at a flow rate of 0.4 mL/min and an overall run time of 4.5 min. The analytes were detected on a tandem quadrupole mass spectrometer operated in positive electrospray ionization mode, and quantification was performed using multiple reaction monitoring. The method was validated according to the recommendations of international guidelines over a calibration range of 1-300 ng/mL for donepezil, galantamine and memantine, and 0.2-50 ng/mL for rivastigmine and NAP 226-90. The trueness (86-108%), repeatability (0.8-8.3%), intermediate precision (2.3-10.9%) and selectivity of the method were found to be satisfactory. Matrix effect variability was below 15% for the analytes and below 5% after correction by internal standards. A method comparison was performed with patients' samples, showing similar results between the HPLC-MS and UPLC-MS/MS procedures. Thus, this validated UPLC-MS/MS method makes it possible to reduce the required amount of plasma, to use a simplified sample preparation, and to obtain higher sensitivity and specificity with a much shorter run time. Copyright © 2012 Elsevier B.V. All rights reserved.
Dynamic stabilization of rapid hexapedal locomotion.
Jindrich, Devin L; Full, Robert J
2002-09-01
To stabilize locomotion, animals must generate forces appropriate to overcome the effects of perturbations and to maintain a desired speed or direction of movement. We studied the stabilizing mechanism employed by rapidly running insects by using a novel apparatus to perturb running cockroaches (Blaberus discoidalis). The apparatus used chemical propellants to accelerate a small projectile, generating reaction force impulses of less than 10 ms duration. The apparatus was mounted onto the thorax of the insect, oriented to propel the projectile laterally and loaded with propellant sufficient to cause a nearly tenfold increase in lateral velocity relative to maxima observed during unperturbed locomotion. Cockroaches were able to recover from these perturbations in 27 ± 12 ms (mean ± S.D., N=9) when running on a high-friction substratum. Lateral velocity began to decrease 13 ± 5 ms (mean ± S.D., N=11) following the start of a perturbation, a time comparable with the fastest reflexes measured in cockroaches. Cockroaches did not require step transitions to recover from lateral perturbations. Instead, they exhibited viscoelastic behavior in the lateral direction, with spring constants similar to those observed during unperturbed locomotion. The rapid onset of recovery from lateral perturbations supports the possibility that, during fast locomotion, intrinsic properties of the musculoskeletal system augment neural stabilization by reflexes.
CARES/PC - CERAMICS ANALYSIS AND RELIABILITY EVALUATION OF STRUCTURES
NASA Technical Reports Server (NTRS)
Szatmary, S. A.
1994-01-01
The beneficial properties of structural ceramics include their high-temperature strength, light weight, hardness, and corrosion and oxidation resistance. For advanced heat engines, ceramics have demonstrated functional abilities at temperatures well beyond the operational limits of metals. This is offset by the fact that ceramic materials tend to be brittle. When a load is applied, their lack of significant plastic deformation causes the material to crack at microscopic flaws, destroying the component. CARES/PC performs statistical analysis of data obtained from the fracture of simple, uniaxial tensile or flexural specimens and estimates the Weibull and Batdorf material parameters from this data. CARES/PC is a subset of the program CARES (COSMIC program number LEW-15168) which calculates the fast-fracture reliability or failure probability of ceramic components utilizing the Batdorf and Weibull models to describe the effects of multi-axial stress states on material strength. CARES additionally requires that the ceramic structure be modeled by a finite element program such as MSC/NASTRAN or ANSYS. The more limited CARES/PC does not perform fast-fracture reliability estimation of components. CARES/PC estimates ceramic material properties from uniaxial tensile or from three- and four-point bend bar data. In general, the parameters are obtained from the fracture stresses of many specimens (30 or more are recommended) whose geometry and loading configurations are held constant. Parameter estimation can be performed for single or multiple failure modes by using the least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests measure the accuracy of the hypothesis that the fracture data comes from a population with a distribution specified by the estimated Weibull parameters. Ninety-percent confidence intervals on the Weibull parameters and the unbiased value of the shape parameter for complete samples are provided when the maximum likelihood technique is used. CARES/PC is written and compiled with the Microsoft FORTRAN v5.0 compiler using the VAX FORTRAN extensions and dynamic array allocation supported by this compiler for the IBM/MS-DOS or OS/2 operating systems. The dynamic array allocation routines allow the user to match the number of fracture sets and test specimens to the memory available. Machine requirements include IBM PC compatibles with optional math coprocessor. Program output is designed to fit 80-column format printers. Executables for both DOS and OS/2 are provided. CARES/PC is distributed on one 5.25 inch 360K MS-DOS format diskette in compressed format. The expansion tool PKUNZIP.EXE is supplied on the diskette. CARES/PC was developed in 1990. IBM PC and OS/2 are trademarks of International Business Machines. MS-DOS and MS OS/2 are trademarks of Microsoft Corporation. VAX is a trademark of Digital Equipment Corporation.
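The maximum likelihood portion of this workflow is straightforward to reproduce with modern tools. The sketch below fits a two-parameter Weibull distribution to hypothetical fracture stresses and applies a Kolmogorov-Smirnov check, using SciPy in place of CARES/PC's FORTRAN routines; the stress values are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical fracture stresses (MPa) from 30 nominally identical
# flexure specimens (CARES/PC recommends 30 or more).
stresses = np.array([312., 335., 348., 356., 361., 370., 377., 382.,
                     390., 394., 399., 405., 411., 416., 420., 426.,
                     430., 437., 441., 448., 452., 459., 466., 472.,
                     480., 486., 495., 503., 514., 528.])

# Two-parameter Weibull fit by maximum likelihood (location fixed at 0):
# the shape m is the Weibull modulus; the scale is the characteristic
# strength of the specimen population.
m, loc, sigma0 = stats.weibull_min.fit(stresses, floc=0)
print(f"Weibull modulus m = {m:.1f}, "
      f"characteristic strength = {sigma0:.0f} MPa")

# Goodness of fit in the spirit of the program's K-S test.
ks = stats.kstest(stresses, "weibull_min", args=(m, loc, sigma0))
print(f"K-S statistic = {ks.statistic:.3f}, p = {ks.pvalue:.2f}")
```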
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs in order to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
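The rescaling idea itself (often called "white" Monte Carlo) is that a single absorption-free simulation records each detected photon's total path length, after which the reflectance for any absorption coefficient is a Beer-Lambert reweighting rather than a new simulation. The NumPy sketch below shows that reweighting step only; on a GPU it becomes an embarrassingly parallel multiply-exponentiate-sum, but the authors' MATLAB/CUDA layering is not reproduced here.

```python
import numpy as np

def rescale_reflectance(path_lengths, mu_a_values, n_launched):
    """Diffuse reflectance at many absorption coefficients from one
    absorption-free Monte Carlo run.

    path_lengths: total path length (cm) of each detected photon,
    recorded once during the baseline simulation.
    mu_a_values: absorption coefficients (1/cm) to evaluate.
    """
    # Photons x mu_a grid of Beer-Lambert survival weights,
    # then an average over all launched photons.
    attenuation = np.exp(-np.outer(path_lengths, mu_a_values))
    return attenuation.sum(axis=0) / n_launched
```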
System and method for controlling power consumption in a computer system based on user satisfaction
Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok
2014-04-22
Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
NASA Technical Reports Server (NTRS)
Crawford, Bradley L.
2007-01-01
The angle measurement system (AMS) developed at NASA Langley Research Center (LaRC) is a multi-purpose system. It was originally developed to check taper fits in the wind tunnel model support system. The system was further developed to measure simultaneous pitch and roll angles using 3 orthogonally mounted accelerometers (3-axis). This 3-axis arrangement is used as a transfer standard from the calibration standard to the wind tunnel facility. It is generally used to establish model pitch and roll zero and performs the in-situ calibration on model attitude devices. The AMS originally used a laptop computer running DOS-based software but has recently been upgraded to operate in a Windows environment. Other improvements have also been made to the software to enhance its accuracy and add features. This paper will discuss the accuracy and calibration methodologies used in this system and some of the features that have contributed to its popularity.
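The measurement principle of a 3-axis arrangement is the textbook resolution of the gravity vector: with the model at rest, pitch and roll follow from arctangents of the accelerometer components. The sketch below shows those standard formulas under an assumed axis convention (x forward, y right, z down); the AMS's actual axis assignment, calibration, and error corrections are not described in the abstract.

```python
import numpy as np

def pitch_roll_from_gravity(ax, ay, az):
    """Static pitch and roll (degrees) from a 3-axis accelerometer.

    Assumes the sensor is at rest (measuring only gravity) with
    x forward, y right, z down.
    """
    pitch = np.degrees(np.arctan2(-ax, np.hypot(ay, az)))
    roll = np.degrees(np.arctan2(ay, az))
    return pitch, roll

print(pitch_roll_from_gravity(0.0, 0.0, 1.0))   # level -> (0.0, 0.0)
```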
Yan, Zhengyin; Maher, Noureddine; Torres, Rhoda; Cotto, Carlos; Hastings, Becki; Dasgupta, Malini; Hyman, Rolanda; Huebert, Norman; Caldwell, Gary W
2008-07-01
In addition to matrix effects, common interferences observed in liquid chromatography/tandem mass spectrometry (LC/MS/MS) analyses can be caused by the response of drug-related metabolites to the multiple reaction monitoring (MRM) channel of a given drug, as a result of in-source reactions or decomposition of either phase I or II metabolites. However, it has been largely ignored that, for some drugs, metabolism can lead to the formation of isobaric or isomeric metabolites that exhibit the same MRM transitions as the parent drugs. The present study describes two examples demonstrating that interference caused by isobaric or isomeric metabolites is a practical issue in analyzing biological samples by LC/MS/MS. In the first case, two sequential metabolic reactions, demethylation followed by oxidation of a primary alcohol moiety to a carboxylic acid, produced an isobaric metabolite that exhibits an MRM transition identical to the parent drug's. Because the drug compound was rapidly metabolized in rats and completely disappeared in plasma samples, the isobaric metabolite appeared as a single peak in the total ion current (TIC) trace and could easily be quantified as the drug, since it eluted at a retention time very close to that of the drug in a 12-min LC run. In the second example, metabolism via the ring-opening of a substituted isoxazole moiety led to the formation of an isomeric product that showed an almost identical collision-induced dissociation (CID) MS spectrum to the original drug. Because the two components were co-eluted, the isomeric product could be mistakenly quantified and reported by data processing software as the parent drug if the TIC trace was not carefully inspected. Nowadays, all LC/MS data are processed by computer software in a highly automated fashion, and some analysts may spend much less time visually examining raw TIC traces than they used to. The two examples described in this article remind us that quality data require both adequate chromatographic separations and close examination of raw data in LC/MS/MS analyses of drugs in biological matrices.
Benson, Curtis; Paylor, John W; Tenorio, Gustavo; Winship, Ian; Baker, Glen; Kerr, Bradley J
2015-09-01
Multiple sclerosis (MS) is classically defined by motor deficits, but it is also associated with the secondary symptoms of pain, depression, and anxiety. Up to this point, modifying these secondary symptoms has been difficult. There is evidence that both MS and the animal model experimental autoimmune encephalomyelitis (EAE), commonly used to study the pathophysiology of the disease, can be modulated by exercise. To examine whether limited voluntary wheel running could modulate EAE disease progression and the co-morbid symptoms of pain, mice with EAE were allowed access to running wheels for 1 h every day. Allowing only 1 h of voluntary running every day led to a significant delay in the onset of clinical signs of the disease. The development of mechanical allodynia was assessed using von Frey hairs and indicated that wheel running had a modest positive effect on the pain hypersensitivity associated with EAE. These behavioral changes were associated with reduced numbers of cFOS- and phosphorylated NR1-positive cells in the dorsal horn of the spinal cord compared to no-run EAE controls. In addition, within the dorsal horn, voluntary wheel running reduced the number of infiltrating CD3(+) T-cells and reduced the overall levels of Iba1 immunoreactivity. Using high performance liquid chromatography (HPLC), we observed that wheel running led to significant changes in the spinal cord levels of the antioxidant glutathione. Oxidative stress has separately been shown to contribute to EAE disease progression and neuropathic pain. Together these results indicate that in mice with EAE, voluntary motor activity can delay the onset of clinical signs and reduce pain symptoms associated with the disease. Copyright © 2015 Elsevier Inc. All rights reserved.
Degroeve, Sven; Maddelein, Davy; Martens, Lennart
2015-07-01
We present an MS2 peak intensity prediction server that computes MS2 charge 2+ and 3+ spectra from peptide sequences for the most common fragment ions. The server integrates the Unimod public domain post-translational modification database for modified peptides. The prediction model is an improvement of the previously published MS2PIP model for Orbitrap-LTQ CID spectra. Predicted MS2 spectra can be downloaded as a spectrum file and can be visualized in the browser for comparison with observations. In addition, we added prediction models for HCD fragmentation (Q-Exactive Orbitrap) and show that these models compute accurate intensity predictions on par with CID performance. We also show that training prediction models for CID and HCD separately improves the accuracy for each fragmentation method. The MS2PIP prediction server is accessible from http://iomics.ugent.be/ms2pip. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Visualization of LC-MS/MS proteomics data in MaxQuant.
Tyanova, Stefka; Temu, Tikira; Carlson, Arthur; Sinitcyn, Pavel; Mann, Matthias; Cox, Juergen
2015-04-01
Modern software platforms enable the analysis of shotgun proteomics data in an automated fashion, resulting in high quality identification and quantification results. Additional understanding of the underlying data can be gained with the help of advanced visualization tools that allow for easy navigation through large LC-MS/MS datasets potentially consisting of terabytes of raw data. The updated MaxQuant version has a map navigation component that steers users through mass- and retention time-dependent mass spectrometric signals. It can be used to monitor a peptide feature used in label-free quantification over many LC-MS runs and visualize it with advanced 3D graphic models. An expert annotation system aids the interpretation of the MS/MS spectra used for the identification of these peptide features. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
National Collegiate Software Clearinghouse, Durham, NC.
Over 250 microcomputer software packages, intended for use on MS-DOS machines by scholars and teachers in the humanities and social sciences, are included in this catalog. The clearinghouse's first Macintosh listing is included, with many more Macintosh programs and data sets being planned and tested for future inclusion. Most programs were…
Software To Go: A Catalog of Software Available for Loan.
ERIC Educational Resources Information Center
Kurlychek, Ken, Comp.
This catalog lists the holdings of the Software To Go software lending library and clearinghouse for programs and agencies serving students or clients who are deaf or hard of hearing. An introduction describes the clearinghouse and its collection of software, much of it commercial and copyrighted material, for Apple, Macintosh, and IBM (MS-DOS)…
Zhou, Ruokun; Tseng, Chiao-Li; Huan, Tao; Li, Liang
2014-05-20
A chemical isotope labeling or isotope-coded derivatization (ICD) metabolomics platform uses a chemical derivatization method to introduce a mass tag to all of the metabolites having a common functional group (e.g., amine), followed by LC-MS analysis of the labeled metabolites. To apply this platform to metabolomics studies involving quantitative analysis of different groups of samples, automated data processing is required. Herein, we report a data processing method based on the use of a mass spectral feature unique to the chemical labeling approach, i.e., any differential-isotope-labeled metabolites are detected as peak pairs with a fixed mass difference in a mass spectrum. A software tool, IsoMS, has been developed to process the raw data generated from one or multiple LC-MS runs by peak picking, peak pairing, peak-pair filtering, and peak-pair intensity ratio calculation. The same peak pairs detected from multiple samples are then aligned to produce a CSV file that contains the metabolite information and peak ratios relative to a control (e.g., a pooled sample). This file can be readily exported for further data and statistical analysis, which is illustrated in an example of comparing the metabolomes of human urine samples collected before and after drinking coffee. To demonstrate that this method is reliable for data processing, five ¹³C₂-/¹²C₂-dansyl labeled metabolite standards were analyzed by LC-MS. IsoMS was able to detect these metabolites correctly. In addition, in the analysis of a ¹³C₂-/¹²C₂-dansyl labeled human urine sample, IsoMS detected 2044 peak pairs, and manual inspection of these peak pairs found 90 false peak pairs, representing a false positive rate of 4.4%. IsoMS for Windows, running in R, is freely available for noncommercial use from www.mycompoundid.org/IsoMS.
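The peak-pairing step at the heart of this approach reduces to a search for peaks separated by the fixed ¹³C₂/¹²C₂ mass difference (2 × 1.003355 Da). The sketch below shows that search for singly charged ions; the tolerance value, charge handling, and the peak-pair filtering rules of the published IsoMS implementation are simplified away.

```python
import numpy as np

C13_C12_DELTA = 2 * 1.003355   # mass difference of two 13C vs two 12C

def find_peak_pairs(mz, intensity, tol=0.005):
    """Pair light/heavy dansyl-labeled peaks in one mass spectrum.

    Assumes singly charged ions, so pairs are separated by ~2.0067
    in m/z; tol is the matching tolerance in Da. Returns
    (light_idx, heavy_idx, light/heavy ratio) tuples indexed into
    the m/z-sorted arrays.
    """
    order = np.argsort(mz)
    mz, intensity = mz[order], intensity[order]
    pairs = []
    for i, m in enumerate(mz):
        j = np.searchsorted(mz, m + C13_C12_DELTA - tol)
        while j < mz.size and mz[j] <= m + C13_C12_DELTA + tol:
            pairs.append((i, j, intensity[i] / intensity[j]))
            j += 1
    return pairs
```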
Viidanoja, Jyrki
2015-09-15
A new, sensitive and selective liquid chromatography-electrospray ionization-tandem mass spectrometric (LC-ESI-MS/MS) method was developed for the analysis of phospholipids (PLs) in bio-oils and fats. The analysis employs hydrophilic interaction liquid chromatography with scheduled multiple reaction monitoring (HILIC-sMRM) on a ZIC-cHILIC column. Eight PL-class-selective internal standards (homologs) were used for the semi-quantification of 14 PL classes for the first time. More than 400 scheduled MRMs were used for the measurement of PLs with a run time of 34 min. The method's performance was evaluated for vegetable oil, animal fat and algae oil. The average within-run and between-run precisions were ≤10% for all of the PL classes that had a direct homolog as an internal standard. The method accuracy was generally within 80-120% for the tested PL analytes in all three sample matrices. Copyright © 2015 Elsevier B.V. All rights reserved.
Kageyama, Shinji; Shinmura, Kazuya; Yamamoto, Hiroko; Goto, Masanori; Suzuki, Koichi; Tanioka, Fumihiko; Tsuneyoshi, Toshihiro; Sugimura, Haruhiko
2008-04-01
The PCR-based DNA fingerprinting method called the methylation-sensitive amplified fragment length polymorphism (MS-AFLP) analysis is used for genome-wide scanning of methylation status. In this study, we developed a method of fluorescence-labeled MS-AFLP (FL-MS-AFLP) analysis by applying a fluorescence-labeled primer and fluorescence-detecting electrophoresis apparatus to the existing method of MS-AFLP analysis. The FL-MS-AFLP analysis enables quantitative evaluation of more than 350 random CpG loci per run. It was shown to allow evaluation of the differences in methylation level of blood DNA of gastric cancer patients and evaluation of hypermethylation and hypomethylation in DNA from gastric cancer tissue in comparison with adjacent non-cancerous tissue.
Tracking at High Level Trigger in CMS
NASA Astrophysics Data System (ADS)
Tosi, M.
2016-04-01
The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of experiments. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with detector readout, offline storage and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, for the reconstruction of the physics objects as well as in the identification of b-jets and lepton isolation. Reconstructed tracks are also used to distinguish the primary vertex, which identifies the hard interaction process, from the pileup ones. This task is particularly important in the LHC environment given the large number of interactions per bunch crossing: on average 25 in 2012, and expected to be around 40 in Run II. We will present the performance of HLT tracking algorithms, discussing its impact on CMS physics program, as well as new developments done towards the next data taking in 2015.
Fast words boundaries localization in text fields for low quality document images
NASA Astrophysics Data System (ADS)
Ilin, Dmitry; Novikov, Dmitriy; Polevoy, Dmitry; Nikolaev, Dmitry
2018-04-01
The paper examines the problem of precise localization of word boundaries in document text zones. Document processing on a mobile device consists of document localization, perspective correction, localization of individual fields, finding words in separate zones, segmentation, and recognition. While capturing an image with a mobile digital camera under uncontrolled conditions, digital noise, perspective distortions or glares may occur. Further document processing is complicated by the document's specifics: layout elements, complex backgrounds, static text, document security elements, and a variety of text fonts. However, the problem of word boundary localization has to be solved at runtime on a mobile CPU with limited computing capabilities under the specified restrictions. At the moment, there are several groups of methods optimized for different conditions. Methods for scanned printed text are quick but limited to images of high quality. Methods for text in the wild have an excessively high computational complexity and are thus hardly suitable for running on mobile devices as part of a mobile document recognition system. The method presented in this paper solves a more specialized problem than the task of finding text in natural images. It uses local features, a sliding window and a lightweight neural network in order to achieve an optimal speed-precision ratio. The algorithm takes 12 ms per field running on an ARM processor of a mobile device. The error rate for boundary localization on a test sample of 8000 fields is 0.3
Tian, Baomin; Wong, Wah Yau; Hegmann, Elda; Gaspar, Kim; Kumar, Praveen; Chao, Heman
2015-06-17
A novel immunoconjugate (L-DOS47) was developed and characterized as a therapeutic agent for tumors expressing CEACAM6. The single-domain antibody AFAIKL2, which targets CEACAM6, was expressed in the Escherichia coli BL21 (DE3) pT7-7 system. High purity urease (HPU) was extracted and purified from jack bean meal. AFAIKL2 was activated using N-succinimidyl [4-iodoacetyl] aminobenzoate (SIAB) as the cross-linker and then conjugated to urease. The activation and conjugation reactions were controlled by altering pH. Under these conditions, conjugation ratios of 8-11 antibodies per urease molecule were achieved, the residual free urease content was practically negligible (<2%), and high purity (>95%) L-DOS47 conjugate was produced using only ultradiafiltration to remove unreacted antibody and hydrolyzed cross-linker. L-DOS47 was characterized by a panel of analytical techniques including SEC, IEC, Western blot, ELISA, and LC-MS(E) peptide mapping. As the antibody-urease conjugation ratio increased, a higher binding signal was observed. The specificity and cytotoxicity of L-DOS47 were confirmed by screening in four cell lines (BxPC-3, A549, MCF7, and CEACAM6-transfected H23). BxPC-3, a CEACAM6-expressing cell line, was found to be most susceptible to L-DOS47. L-DOS47 is being investigated as a potential therapeutic agent in human phase I clinical studies for non-small cell lung cancer.
NASA Astrophysics Data System (ADS)
Abdul Ghani, B.
2005-09-01
"TEA CO 2 Laser Simulator" has been designed to simulate the dynamic emission processes of the TEA CO 2 laser based on the six-temperature model. The program predicts the behavior of the laser output pulse (power, energy, pulse duration, delay time, FWHM, etc.) depending on the physical and geometrical input parameters (pressure ratio of gas mixture, reflecting area of the output mirror, media length, losses, filling and decay factors, etc.). Program summaryTitle of program: TEA_CO2 Catalogue identifier: ADVW Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVW Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: P.IV DELL PC Setup: Atomic Energy Commission of Syria, Scientific Services Department, Mathematics and Informatics Division Operating system: MS-Windows 9x, 2000, XP Programming language: Delphi 6.0 No. of lines in distributed program, including test data, etc.: 47 315 No. of bytes in distributed program, including test data, etc.:7 681 109 Distribution format:tar.gz Classification: 15 Laser Physics Nature of the physical problem: "TEA CO 2 Laser Simulator" is a program that predicts the behavior of the laser output pulse by studying the effect of the physical and geometrical input parameters on the characteristics of the output laser pulse. The laser active medium consists of a CO 2-N 2-He gas mixture. Method of solution: Six-temperature model, for the dynamics emission of TEA CO 2 laser, has been adapted in order to predict the parameters of laser output pulses. A simulation of the laser electrical pumping was carried out using two approaches; empirical function equation (8) and differential equation (9). Typical running time: The program's running time mainly depends on both integration interval and step; for a 4 μs period of time and 0.001 μs integration step (defaults values used in the program), the running time will be about 4 seconds. Restrictions on the complexity: Using a very small integration step might leads to stop the program run due to the huge number of calculating points and to a small paging file size of the MS-Windows virtual memory. In such case, it is recommended to enlarge the paging file size to the appropriate size, or to use a bigger value of integration step.
COSTMODL - AN AUTOMATED SOFTWARE DEVELOPMENT COST ESTIMATION TOOL
NASA Technical Reports Server (NTRS)
Roush, G. B.
1994-01-01
The cost of developing computer software consumes an increasing portion of many organizations' budgets. As this trend continues, the capability to estimate the effort and schedule required to develop a candidate software product becomes increasingly important. COSTMODL is an automated software development estimation tool which fulfills this need. Adapting COSTMODL to any organization's particular environment can yield a significant reduction in the risk of cost overruns and failed projects. This user-customization capability is unmatched by any other available estimation tool. COSTMODL accepts a description of a software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of the defined set of development life-cycle phases. This is accomplished by the five cost estimation algorithms incorporated into COSTMODL: the NASA-developed KISS model; the Basic, Intermediate, and Ada COCOMO models; and the Incremental Development model. This choice affords the user the ability to handle project complexities ranging from small, relatively simple projects to very large projects. Unique to COSTMODL is the ability to redefine the life-cycle phases of development and the capability to display a graphic representation of the optimum organizational structure required to develop the subject project, along with required staffing levels and skills. The program is menu-driven and mouse-sensitive, with an extensive context-sensitive help system that makes it possible for a new user to easily install and operate the program and to learn the fundamentals of cost estimation without having prior training or separate documentation. The implementation of these functions, along with the customization feature, into one program makes COSTMODL unique within the industry. COSTMODL was written for IBM PC compatibles, and it requires Turbo Pascal 5.0 or later and Turbo Professional 5.0 for recompilation. An executable is provided on the distribution diskettes. COSTMODL requires 512K RAM. The standard distribution medium for COSTMODL is three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. COSTMODL was developed in 1991. IBM PC is a registered trademark of International Business Machines. Borland and Turbo Pascal are registered trademarks of Borland International, Inc. Turbo Professional is a trademark of TurboPower Software. MS-DOS is a registered trademark of Microsoft Corporation.
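Of the five algorithms, Basic COCOMO is compact enough to illustrate directly: effort in person-months scales as a power of program size in thousands of delivered source lines, and schedule as a power of effort. The sketch below uses the standard published Basic COCOMO constants; COSTMODL's NASA KISS model, its calibrated variants, and its life-cycle phase distribution are not reproduced.

```python
# Basic COCOMO constants per development mode:
# (a, b) scale effort with size; (c, d) turn effort into schedule.
MODES = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Effort (person-months) and schedule (months) for a project
    of `kloc` thousand delivered source lines of code."""
    a, b, c, d = MODES[mode]
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule

effort, months = basic_cocomo(32, "semidetached")
print(f"{effort:.0f} person-months over {months:.0f} months")
```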
TopoMS: Comprehensive topological exploration for molecular and condensed-matter systems.
Bhatia, Harsh; Gyulassy, Attila G; Lordi, Vincenzo; Pask, John E; Pascucci, Valerio; Bremer, Peer-Timo
2018-06-15
We introduce TopoMS, a computational tool enabling detailed topological analysis of molecular and condensed-matter systems, including the computation of atomic volumes and charges through the quantum theory of atoms in molecules, as well as the complete molecular graph. With roots in techniques from computational topology, and using a shared-memory parallel approach, TopoMS provides scalable, numerically robust, and topologically consistent analysis. TopoMS can be used as a command-line tool or with a GUI (graphical user interface), where the latter also enables an interactive exploration of the molecular graph. This paper presents algorithmic details of TopoMS and compares it with state-of-the-art tools: Bader charge analysis v1.0 (Arnaldsson et al., 01/11/17) and molecular graph extraction using Critic2 (Otero-de-la-Roza et al., Comput. Phys. Commun. 2014, 185, 1007). TopoMS not only combines the functionality of these individual codes but also demonstrates up to 4× performance gain on a standard laptop, faster convergence to fine-grid solution, robustness against lattice bias, and topological consistency. TopoMS is released publicly under BSD License. © 2018 Wiley Periodicals, Inc.
Simulating three dimensional wave run-up over breakwaters covered by antifer units
NASA Astrophysics Data System (ADS)
Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader
2014-06-01
The paper presents the numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within the armour blocks is used to provide a more reliable approach to simulating wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for the CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three-dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble-mound breakwaters. The results showed that the placement pattern of the antifer units had a great impact on wave run-up values: changing the placement pattern from regular to double pyramid reduced the wave run-up by approximately 30%. Analysis was done to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer, and reduced wave run-up due to inflow into the armour and stone layers.
NASA Astrophysics Data System (ADS)
Tibermacine, T.; Merazga, A.; Ledra, M.; Ouhabab, N.
2015-09-01
The constant photocurrent method in the ac-mode (ac-CPM) is used to determine the defect density of states (DOS) in hydrogenated microcrystalline silicon (μc-Si:H) prepared by very high frequency plasma-enhanced chemical vapor deposition (VHF-PECVD). The absorption coefficient spectrum, ac-α(hν), is measured under ac-CPM conditions at 60 Hz. The measured ac-α(hν) is converted by the CPM spectroscopy into a DOS distribution covering a portion in the lower energy range of occupied states. We have found that the density of valence band-tail states falls exponentially towards the gap with a typical band-tail width of 63 meV. Independently, computer simulations of the ac-CPM are developed using a DOS model that is consistent with the measured ac-α(hν) in the present work and a previously measured transient photocurrent (TPC) for the same material. The DOS distribution model suggested by the measurements in the lower and in the upper part of the energy gap, as well as by the numerical modelling in the middle part of the energy gap, coincides reasonably well with the real DOS distribution in hydrogenated microcrystalline silicon, because the computed ac-α(hν) is found to agree satisfactorily with the measured ac-α(hν).
Ogawa, Shoujiro; Kittaka, Hiroki; Nakata, Akiho; Komatsu, Kenji; Sugiura, Takahiro; Satoh, Mamoru; Nomura, Fumio; Higashi, Tatsuya
2017-03-20
The plasma/serum concentration of 25-hydroxyvitamin D3 [25(OH)D3] is a diagnostic index for vitamin D deficiency/insufficiency, which is associated with a wide range of diseases, such as rickets, cancer and diabetes. We have reported that derivatization with 4-(4-dimethylaminophenyl)-1,2,4-triazoline-3,5-dione (DAPTAD) works well in the liquid chromatography/electrospray ionization-tandem mass spectrometry (LC/ESI-MS/MS) assay of serum/plasma 25(OH)D3 for enhancing the sensitivity and the separation from a potent interfering metabolite, 3-epi-25-hydroxyvitamin D3 [3-epi-25(OH)D3]. However, enhancing the analysis throughput remains an issue in the LC/ESI-MS/MS assay of 25(OH)D3. The most obvious restriction on LC/MS/MS throughput is the chromatographic run time. In this study, we developed an enhanced-throughput method for the determination of plasma 25(OH)D3 by LC/ESI-MS/MS combined with derivatization using the triplex (²H₀-, ²H₃- and ²H₆-) DAPTAD isotopologues. After separate derivatization with one of the three different isotopologues, the three samples were combined and injected together into the LC/ESI-MS/MS system. Based on the mass differences between the isotopologues, the derivatized 25(OH)D3 in the three different samples was quantified within a single run. The developed method tripled the hourly analysis throughput without sacrificing assay performance, i.e., ease of pretreatment of the plasma sample (only deproteinization), limit of quantification (1.0 ng/mL when a 5 μL plasma sample was used), precision (intra-assay RSD ≤5.9% and inter-assay RSD ≤5.5%), accuracy (98.7-102.2%), matrix effects, and capability of separation from the interfering metabolite 3-epi-25(OH)D3. The multiplexing of samples by isotopologue derivatization was applied to the analysis of plasma samples from healthy subjects, and the developed method was proven to have satisfactory applicability. Copyright © 2016 Elsevier B.V. All rights reserved.
WinSCP for Windows File Transfers | High-Performance Computing | NREL
WinSCP can be used to securely transfer files between your local computer running Microsoft Windows and a remote computer running Linux.
Fast response air-to-fuel ratio measurements using a novel device based on a wide band lambda sensor
NASA Astrophysics Data System (ADS)
Regitz, S.; Collings, N.
2008-07-01
A crucial parameter influencing the formation of pollutant gases in internal combustion engines is the air-to-fuel ratio (AFR). During transients on gasoline and diesel engines, significant AFR excursions from target values can occur, but cycle-by-cycle AFR resolution, which is helpful in understanding the origin of deviations, is difficult to achieve with existing hardware. This is because current electrochemical devices such as universal exhaust gas oxygen (UEGO) sensors have a time constant of 50-100 ms, depending on the engine running conditions. This paper describes the development of a fast-reacting device based on a wide-band lambda sensor which has a maximum time constant of ~20 ms and enables cyclic AFR measurements for engine speeds of up to ~4000 rpm. The design incorporates a controlled sensor environment which results in insensitivity to sample temperature and pressure. In order to guide the development process, a computational model was developed to predict the effect of pressure and temperature on the diffusion mechanism. Investigations regarding the sensor output and response were carried out, and sensitivities to temperature and pressure are examined. Finally, engine measurements are presented.
The Software Element of the NASA Portable Electronic Device Radiated Emissions Investigation
NASA Technical Reports Server (NTRS)
Koppen, Sandra V.; Williams, Reuben A. (Technical Monitor)
2002-01-01
NASA Langley Research Center's (LaRC) High Intensity Radiated Fields Laboratory (HIRF Lab) recently conducted a series of electromagnetic radiated emissions tests under a cooperative agreement with Delta Airlines and an interagency agreement with the FAA. The frequency spectrum environment at a commercial airport was measured on location. The environment survey provides a comprehensive picture of the complex nature of the electromagnetic environment present in those areas outside the aircraft. In addition, radiated emissions tests were conducted on portable electronic devices (PEDs) that may be brought onboard aircraft. These tests were performed in both semi-anechoic and reverberation chambers located in the HIRF Lab. The PEDs included cell phones, laptop computers, electronic toys, and family radio systems. The data generated during the tests are intended to support research on the effect of radiated emissions from wireless devices on aircraft systems. Both test systems relied on customized control and data reduction software to provide test and instrument control, data acquisition, a user interface, real-time data reduction, and data analysis. The software executed on PCs running MS Windows 98 and 2000, and used Agilent Pro Visual Engineering Environment (VEE) development software, Common Object Model (COM) technology, and MS Excel.
Annotation: a computational solution for streamlining metabolomics analysis
Domingo-Almenara, Xavier; Montenegro-Burke, J. Rafael; Benton, H. Paul; Siuzdak, Gary
2017-01-01
Metabolite identification is still considered an imposing bottleneck in liquid chromatography mass spectrometry (LC/MS) untargeted metabolomics. The identification workflow usually begins with detecting relevant LC/MS peaks via peak-picking algorithms and retrieving putative identities based on accurate mass searching. However, accurate mass search alone provides poor evidence for metabolite identification. For this reason, computational annotation is used to reveal the underlying metabolites' monoisotopic masses, improving putative identification in addition to confirmation with tandem mass spectrometry. This review examines LC/MS data from a computational and analytical perspective, focusing on the occurrence of neutral losses and in-source fragments, to understand the challenges in computational annotation methodologies. Herein, we examine the state-of-the-art strategies for computational annotation, including: (i) peak grouping or full scan (MS1) pseudo-spectra extraction, i.e., clustering all mass spectral signals stemming from each metabolite; (ii) annotation using ion adduction and mass distance among ion peaks; (iii) incorporation of biological knowledge such as biotransformations or pathways; (iv) tandem MS data; and (v) metabolite retention time calibration, usually achieved by prediction from molecular descriptors. Advantages and pitfalls of each of these strategies are discussed, as well as expected future trends in computational annotation. PMID:29039932
SuperState: a computer program for the control of operant behavioral experimentation.
Zhang, Fuqiang
2006-09-15
Operant behavioral research requires precise control of experimental devices for delivering stimuli and monitoring behavioral responses. The author developed a software solution named SuperState for controlling hardware devices and running reinforcement schedules. The Microsoft Windows compatible software was written in the object-oriented programming language Borland Delphi 5.0, which simplified the programming of the application. SuperState is a stand-alone, easy-to-use program that requires no installation, and the experimenter does not need to master any scripting language. It features: (1) control of multiple operant cages running independent reinforcement schedules; (2) enough cage devices (16 digital inputs and 16 digital outputs for each cage) to suit most operant behavioral equipment; (3) control of most standard ISA-type digital interface cards, including Med-Associates Super-port cards and the PCI-type card AC6412, with high expandability to support other PCI-type interface cards; (4) high-resolution device control (1 ms); (5) a built-in real-time cumulative recorder; (6) extensive data analysis, including an event recorder, a cumulative recorder, and block analysis; the summarized results can be transferred easily to Microsoft Excel spreadsheets through the Clipboard.
NASA Astrophysics Data System (ADS)
Hatze, Herbert; Baca, Arnold
1993-01-01
The development of noninvasive techniques for the determination of biomechanical body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers, etc.) receives increasing attention from the medical sciences (e.g., orthopaedic gait analysis), bioengineering, sport biomechanics, and the various space programs. In the present paper, a novel method is presented for determining body segment parameters rapidly and accurately. It is based on the video-image processing of four different body configurations and a finite mass-element human body model. The four video images of the subject in question are recorded against a black background, thus permitting the application of shape recognition procedures incorporating edge detection and calibration algorithms. In this way, a total of 181 object space dimensions of the subject's body segments can be reconstructed and used as anthropometric input data for the mathematical finite mass-element body model. The latter comprises 17 segments (abdomino-thoracic, head-neck, shoulders, upper arms, forearms, hands, abdomino-pelvic, thighs, lower legs, feet) and enables the user to compute all the required segment parameters for each of the 17 segments by means of the associated computer program. The hardware requirements are an IBM-compatible PC (1 MB memory) operating under MS-DOS or PC-DOS (Version 3.1 onwards) and incorporating a VGA board with a feature connector for connecting it to a super video windows framegrabber board, which requires a free 16-bit slot. In addition, a VGA monitor (50-70 Hz, horizontal scan rate at least 31.5 kHz), a common video camera and recorder, and a simple rectangular calibration frame are required. The advantage of the new method lies in its ease of application, its comparatively high accuracy, and in the rapid availability of the body segment parameters, which is particularly useful in clinical practice. An example of its practical application illustrates the technique.
Tempest: GPU-CPU computing for high-throughput database spectral matching.
Milloy, Jeffrey A; Faherty, Brendan K; Gerber, Scott A
2012-07-06
Modern mass spectrometers are now capable of producing hundreds of thousands of tandem (MS/MS) spectra per experiment, making the translation of these fragmentation spectra into peptide matches a common bottleneck in proteomics research. When coupled with experimental designs that enrich for post-translational modifications such as phosphorylation and/or include isotopically labeled amino acids for quantification, additional burdens are placed on this computational infrastructure by shotgun sequencing. To address this issue, we have developed a new database searching program that utilizes the massively parallel compute capabilities of a graphics processing unit (GPU) to produce peptide spectral matches in a very high-throughput fashion. Our program, named Tempest, combines efficient database digestion and MS/MS spectral indexing on a CPU with fast similarity scoring on a GPU. In our implementation, the entire similarity score, including the generation of full theoretical peptide candidate fragmentation spectra and its comparison to experimental spectra, is conducted on the GPU. Although Tempest uses the classical SEQUEST XCorr score as a primary metric for evaluating similarity for spectra collected at unit resolution, we have developed a new "Accelerated Score" for MS/MS spectra collected at high resolution that is based on a computationally inexpensive dot product but exhibits scoring accuracy similar to that of the classical XCorr. In our experience, Tempest provides compute-cluster level performance in an affordable desktop computer.
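The "Accelerated Score" is described above only as a computationally inexpensive dot product; the following generic Python sketch shows that idea (binned spectra, normalized dot product). It is not Tempest's actual implementation, and the bin width and peaks are assumptions.

    # Generic normalized dot product between a binned experimental spectrum
    # and a theoretical candidate spectrum -- a sketch of the kind of
    # inexpensive similarity score described above, not Tempest's code.
    import numpy as np

    def binned(spectrum, bin_width=0.02, max_mz=2000.0):
        """Convert (m/z, intensity) pairs into a fixed-length intensity vector."""
        vec = np.zeros(int(max_mz / bin_width))
        for mz, inten in spectrum:
            idx = int(mz / bin_width)
            if idx < vec.size:
                vec[idx] += inten
        return vec

    def dot_score(experimental, theoretical):
        a, b = binned(experimental), binned(theoretical)
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return float(a @ b / (na * nb)) if na and nb else 0.0

    exp_spec = [(175.119, 120.0), (303.178, 40.0)]   # hypothetical fragment peaks
    theo_spec = [(175.119, 1.0), (303.177, 1.0)]
    print(f"similarity = {dot_score(exp_spec, theo_spec):.3f}")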
Alagandula, Ravali; Zhou, Xiang; Guo, Baochuan
2017-01-15
Liquid chromatography/tandem mass spectrometry (LC/MS/MS) is the gold standard of urine drug testing. However, current LC-based methods are time consuming, limiting the throughput of MS-based testing and increasing the cost. This is particularly problematic for quantification of drugs such as phenobarbital, which is often analyzed in a separate run because they must be negatively ionized. This study examined the feasibility of using a dilute-and-shoot flow-injection method without LC separation to quantify drugs with phenobarbital as a model system. Briefly, a urine sample containing phenobarbital was first diluted 10-fold, followed by flow injection of the diluted sample into the mass spectrometer. Quantification and detection of phenobarbital were achieved by an electrospray negative ionization MS/MS system operated in the multiple reaction monitoring (MRM) mode with the stable-isotope-labeled drug as internal standard. The dilute-and-shoot flow-injection method developed was linear with a dynamic range of 50-2000 ng/mL of phenobarbital and correlation coefficient > 0.9996. The coefficients of variation and relative errors for intra- and inter-assays at four quality control (QC) levels (50, 125, 445 and 1600 ng/mL) were 3.0% and 5.0%, respectively. The total run time to quantify one sample was 2 min, and the sensitivity and specificity of the method did not deteriorate even after 1200 consecutive injections. Our method can accurately and robustly quantify phenobarbital in urine without LC separation. Because of its 2 min run time, the method can process 720 samples per day. This feasibility study shows that the dilute-and-shoot flow-injection method can be a general approach for fast analysis of drugs in urine. Copyright © 2016 John Wiley & Sons, Ltd.
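A hedged sketch of the quantification step described above: the analyte/internal-standard area ratio is fit against calibrator concentrations, and unknowns are back-calculated. All peak areas and the unknown ratio below are invented for illustration.

    # Sketch of quantification against a stable-isotope-labeled internal
    # standard: fit area ratio vs. concentration for calibrators, then
    # back-calculate an unknown. All values below are hypothetical.
    import numpy as np

    cal_conc = np.array([50, 125, 445, 1600, 2000.0])        # ng/mL
    cal_ratio = np.array([0.051, 0.124, 0.447, 1.59, 2.01])  # analyte/IS area

    slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)    # linear calibration
    r = np.corrcoef(cal_conc, cal_ratio)[0, 1]
    print(f"r = {r:.4f}")                                    # linearity check

    unknown_ratio = 0.62
    print(f"phenobarbital ~= {(unknown_ratio - intercept) / slope:.0f} ng/mL")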
Data is presented on the development of a new automated system combining solid phase extraction (SPE) with GC/MS spectrometry for the single-run analysis of water samples containing a broad range of organic compounds. The system uses commercially available automated in-line 10-m...
RAPPORT: running scientific high-performance computing applications on the cloud.
Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt
2013-01-28
Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.
GEANT4 distributed computing for compact clusters
NASA Astrophysics Data System (ADS)
Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.
2014-11-01
A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User designed 'work tickets' are distributed to clients using a client-server work flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 for large discrete data sets such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions, or simply speeding the throughput of a single model.
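A toy sketch of the 'work ticket' client-server idea follows, using Python's multiprocessing pool as a stand-in for the inter-node protocol; the ticket contents are hypothetical and no GEANT4 code is invoked.

    # Toy illustration of the 'work ticket' idea: a queue of per-run
    # parameters consumed by worker processes. A stand-in sketch, not the
    # g4DistributedRunManager protocol itself.
    from multiprocessing import Pool

    def run_simulation(ticket):
        """Pretend GEANT4 run: one angle of a tomography sweep (hypothetical)."""
        angle, n_events = ticket
        return angle, n_events  # a real worker would launch a G4 run here

    if __name__ == "__main__":
        tickets = [(angle, 100000) for angle in range(0, 180, 15)]  # work tickets
        with Pool(processes=4) as pool:
            for angle, n in pool.map(run_simulation, tickets):
                print(f"run at {angle:3d} deg finished ({n} events)")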
Shaik, Abdul Naveed; Grater, Richard; Lulla, Mukesh; Williams, David A; Gan, Lawrence L; Bohnert, Tonika; LeDuc, Barbara W
2016-01-01
Warfarin is an anticoagulant used in the treatment of thrombosis and thromboembolism. It is given as a racemic mixture of R and S enantiomers. These two enantiomers show differences in metabolism by CYPs: S-warfarin undergoes 7-hydroxylation by CYP2C9, and R-warfarin is metabolized by CYP3A4 to form 10-hydroxywarfarin. In addition, warfarin is acted upon by different CYPs to form the minor metabolites 3'-hydroxy, 4'-hydroxy, 6-hydroxy, and 8-hydroxy warfarin. For analysis, separation of these metabolites is necessary since all have the same m/z ratio and similar fragmentation pattern. Enzyme kinetics for the formation of all six hydroxylated metabolites of warfarin from human liver microsomes were determined using an LC-MS/MS QTrap and LC-MS/MS with a differential mobility spectrometry (DMS) (SelexION™) interface to compare the kinetic parameters. These two methods were chosen to compare their selectivity and sensitivity. Substrate curves for 3'-OH, 4'-OH, 6-OH, 7-OH, 8-OH and 10-OH warfarin formation were generated to determine the kinetic parameters (Km and Vmax) in human liver microsomal preparations. The limit of quantitation (LOQ) for all six hydroxylated metabolites of warfarin was in the range of 1-3 nM using an LC-MS/MS QTrap method which had a run time of 22 min. In contrast, the LOQ for all six hydroxylated metabolites using DMS interface technology was 100 nM with a run time of 2.8 min. We compare these two MS methods and discuss the kinetics of metabolite formation for the metabolites generated from racemic warfarin. In addition, we show inhibition of major metabolic pathways of warfarin by sulfaphenazole and ketoconazole, which are known specific inhibitors of CYP2C9 and CYP3A4, respectively. Copyright © 2015 Elsevier B.V. All rights reserved.
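A sketch of how Km and Vmax are typically estimated from substrate curves like those described above, by fitting the Michaelis-Menten equation v = Vmax·S/(Km + S); the substrate concentrations and rates below are made up, not the study's data.

    # Hedged sketch: estimating Km and Vmax for one hydroxylation pathway.
    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(s, vmax, km):
        return vmax * s / (km + s)

    s = np.array([1, 2, 5, 10, 25, 50, 100.0])          # uM substrate (made up)
    v = np.array([0.9, 1.7, 3.4, 5.1, 7.2, 8.3, 9.0])   # pmol/min/mg (made up)

    (vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(10.0, 10.0))
    print(f"Vmax ~= {vmax:.1f}, Km ~= {km:.1f} uM")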
Liebenberg, Jacobus; Woo, Jeonghyun; Park, Sang-Kyoon; Yoon, Suk-Hoon; Cheung, Roy Tsz-Hei; Ryu, Jiseon
2018-01-01
Background Tibial stress fracture (TSF) is a common injury in basketball players. This condition has been associated with high tibial shock and impact loading, which can be affected by running speed, footwear condition, and footstrike pattern. However, these relationships were established in runners but not in basketball players, with very little research done on impact loading and speed. Hence, this study compared tibial shock, impact loading, and foot strike pattern in basketball players running at different speeds with different shoe cushioning properties/performances. Methods Eighteen male collegiate basketball players performed straight running trials with different shoe cushioning (regular-, better-, and best-cushioning) and running speed conditions (3.0 m/s vs. 6.0 m/s) on a flat instrumented runway. Tri-axial accelerometer, force plate and motion capture system were used to determine tibial accelerations, vertical ground reaction forces and footstrike patterns in each condition, respectively. Comfort perception was indicated on a 150 mm Visual Analogue Scale. A 2 (speed) × 3 (footwear) repeated measures ANOVA was used to examine the main effects of shoe cushioning and running speeds. Results Greater tibial shock (P < 0.001; η2 = 0.80) and impact loading (P < 0.001; η2 = 0.73–0.87) were experienced at faster running speeds. Interestingly, shoes with regular-cushioning or best-cushioning resulted in greater tibial shock (P = 0.03; η2 = 0.39) and impact loading (P = 0.03; η2 = 0.38–0.68) than shoes with better-cushioning. Basketball players continued using a rearfoot strike during running, regardless of running speed and footwear cushioning conditions (P > 0.14; η2 = 0.13). Discussion There may be an optimal band of shoe cushioning for better protection against TSF. These findings may provide insights to formulate rehabilitation protocols for basketball players who are recovering from TSF. PMID:29770274
Running Pace Decrease during a Marathon Is Positively Related to Blood Markers of Muscle Damage
Del Coso, Juan; Fernández, David; Abián-Vicen, Javier; Salinero, Juan José; González-Millán, Cristina; Areces, Francisco; Ruiz, Diana; Gallo, César; Calleja-González, Julio; Pérez-González, Benito
2013-01-01
Background Completing a marathon is one of the most challenging sports activities, yet the source of running fatigue during this event is not completely understood. The aim of this investigation was to determine the cause(s) of running fatigue during a marathon in warm weather. Methodology/Principal Findings We recruited 40 amateur runners (34 men and 6 women) for the study. Before the race, body core temperature, body mass, leg muscle power output during a countermovement jump, and blood samples were obtained. During the marathon (27 °C; 27% relative humidity) running fatigue was measured as the pace reduction from the first 5-km to the end of the race. Within 3 min after the marathon, the same pre-exercise variables were obtained. Results Marathoners reduced their running pace from 3.5 ± 0.4 m/s after 5-km to 2.9 ± 0.6 m/s at the end of the race (P<0.05), although the running fatigue experienced by the marathoners was uneven. Marathoners with greater running fatigue (> 15% pace reduction) had elevated post-race myoglobin (1318 ± 1411 vs 623 ± 391 µg·L⁻¹; P<0.05), lactate dehydrogenase (687 ± 151 vs 583 ± 117 U·L⁻¹; P<0.05), and creatine kinase (564 ± 469 vs 363 ± 158 U·L⁻¹; P = 0.07) in comparison with marathoners that preserved their running pace reasonably well throughout the race. However, they did not differ in their body mass change (−3.1 ± 1.0 vs −3.0 ± 1.0%; P = 0.60) or post-race body temperature (38.7 ± 0.7 vs 38.9 ± 0.9 °C; P = 0.35). Conclusions/Significance Running pace decline during a marathon was positively related to muscle breakdown blood markers. To elucidate whether muscle damage during a marathon is related to mechanistic or metabolic factors requires further investigation. PMID:23460881
Computational Methods for Feedback Controllers for Aerodynamics Flow Applications
2007-08-15
… Iteration #, and y-translation extracted by the following MATLAB commands (reconstructed from a garbled listing; variable and column names as in the original):
    >> Fy = [unf(:,8); runA(:,8); runB(:,8); runC(:,8); runD(:,8); runE(:,8)];
    >> Oy = [unf(:,23); runA(:,23); runB(:,23); runC(:,23); runD(:,23); runE(:,23)];
    >> Iter = [unf(:,1); runA(:,1); runB(:,1); runC(:,1); runD(:,1); runE(:,1)];
    >> plot(Fy)
Cobalt version 4.0 …
Is There a Disk of Satellites around the Milky Way?
NASA Astrophysics Data System (ADS)
Maji, Moupiya; Zhu, Qirong; Marinacci, Federico; Li, Yuexing
2017-07-01
The “disk of satellites” (DoS) around the Milky Way is a highly debated topic with conflicting interpretations of observations and their theoretical models. We perform a comprehensive analysis of all of the dwarfs detected in the Milky Way and find that the DoS structure depends strongly on the plane identification method and the sample size. In particular, we demonstrate that a small sample size can artificially produce a highly anisotropic spatial distribution and a strong clustering of the angular momentum of the satellites. Moreover, we calculate the evolution of the 11 classical satellites with proper motion measurements and find that the thin DoS in which they currently reside is transient. Furthermore, we analyze two cosmological simulations using the same initial conditions of a Milky-Way-sized galaxy, an N-body run with dark matter only, and a hydrodynamic one with both baryonic and dark matter, and find that the hydrodynamic simulation produces a more anisotropic distribution of satellites than the N-body one. Our results suggest that an anisotropic distribution of satellites in galaxies can originate from baryonic processes in the hierarchical structure formation model, but the claimed highly flattened, coherently rotating DoS of the Milky Way may be biased by the small-number selection effect. These findings may help resolve the contradictory claims of DoS in galaxies and the discrepancy among numerical simulations.
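A common way to quantify the kind of flattening discussed above is the minor-to-major axis ratio from the second-moment (shape) tensor of the satellite positions; the Python sketch below uses mock coordinates and is not the paper's method or data.

    # Sketch: quantify flattening of a satellite distribution via the
    # eigenvalues of the position second-moment tensor; c/a near 0 means
    # a thin plane, near 1 isotropy. Coordinates here are random mock data.
    import numpy as np

    rng = np.random.default_rng(42)
    pos = rng.normal(size=(11, 3)) * [100.0, 100.0, 20.0]  # kpc, flattened mock

    tensor = pos.T @ pos / len(pos)            # 3x3 second-moment tensor
    eigvals = np.sort(np.linalg.eigvalsh(tensor))
    c_over_a = np.sqrt(eigvals[0] / eigvals[-1])
    print(f"minor-to-major axis ratio c/a ~= {c_over_a:.2f}")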
No Influence of Positive Emotion on Orbitofrontal Reality Filtering: Relevance for Confabulation
Liverani, Maria Chiara; Manuel, Aurélie L.; Guggisberg, Adrian G.; Nahum, Louis; Schnider, Armin
2016-01-01
Orbitofrontal reality filtering (ORFi) is a mechanism that allows us to keep thought and behavior in phase with reality. Its failure induces reality confusion with confabulation and disorientation. Confabulations have been claimed to have a positive emotional bias, suggesting that they emanate from a tendency to embellish the situation of a handicap. Here we tested the influence of positive emotion on ORFi in healthy subjects using a paradigm validated in reality-confusing patients and with a known electrophysiological signature, a frontal positivity at 200–300 ms after memory evocation. Subjects performed two continuous recognition tasks ("two runs"), composed of the same set of neutral and positive pictures, but arranged in a different order. In both runs, participants had to indicate picture repetitions within, and only within, the ongoing run. The first run measures learning and recognition. The second run, where all items are familiar, requires ORFi to avoid false positive responses. High-density evoked potentials were recorded from 19 healthy subjects during completion of the task. Performance was more accurate and faster on neutral than positive pictures in both runs and for all conditions. Evoked potential correlates of emotion and reality filtering occurred at 260–350 ms but dissociated in terms of amplitude and topography. In both runs, positive stimuli evoked a more negative frontal potential than neutral ones. In the second run, the frontal positivity characteristic of reality filtering was separately, and to the same degree, expressed for positive and neutral stimuli. We conclude that ORFi, the ability to place oneself correctly in time and space, is not influenced by emotional positivity of the processed material. PMID:27303276
Ground reaction forces and kinematics in distance running in older-aged men.
Bus, Sicco A
2003-07-01
The biomechanics of distance running has not been studied before in older-aged runners but may be different than in younger-aged runners because of musculoskeletal degeneration at older age. This study aimed at determining whether the stance phase kinematics and ground reaction forces in running are different between younger- and older-aged men. Lower-extremity kinematics using three-dimensional motion analysis and ground reaction forces (GRF) using a force plate were assessed in 16 older-aged (55-65 yr) and 13 younger-aged (20-35 yr) well-trained male distance runners running at a self-selected (SRS) and a controlled (CRS) speed of 3.3 m·s⁻¹. The older subjects ran at significantly lower self-selected speeds than the younger subjects (mean 3.34 vs 3.77 m·s⁻¹). In both speed conditions, the older runners exhibited significantly more knee flexion at heel strike and significantly less knee flexion and extension range of motion. No age group differences were present in subtalar joint motion. Impact peak force (1.91 vs 1.70 BW) and maximal initial loading rate (107.5 vs 85.5 BW·s⁻¹) were significantly higher in the older runners at the CRS. Maximal peak vertical and anteroposterior forces and impulses were significantly lower in the older runners at the SRS. The biomechanics of running is different between older- and younger-aged runners on several relevant parameters. The larger impact peak force and initial loading rate indicate a loss of shock-absorbing capacity in the older runners. This may increase their susceptibility to lower-extremity overuse injuries. Moreover, it emphasizes the focus on optimizing cushioning properties in the design and prescription of running shoes and suggests that older-aged runners should be cautious with running under conditions of high impact.
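A sketch of how an impact peak and maximal initial loading rate can be computed from a vertical GRF trace follows; the synthetic trace and the 50 ms analysis window are assumptions, not the study's actual processing pipeline.

    # Sketch: impact peak (BW) and maximal initial loading rate (BW/s)
    # from a vertical ground reaction force trace. The trace is synthetic.
    import numpy as np

    fs = 1000.0                                  # sampling rate, Hz
    t = np.arange(0, 0.25, 1 / fs)               # 250 ms stance segment
    grf_bw = (1.9 * np.sin(np.pi * t / 0.05) * (t < 0.05)
              + 2.5 * np.sin(np.pi * (t - 0.03) / 0.22) * (t >= 0.05))

    first_50ms = grf_bw[: int(0.05 * fs)]
    impact_peak = first_50ms.max()
    loading_rate = np.max(np.diff(first_50ms)) * fs   # max slope, BW/s
    print(f"impact peak ~= {impact_peak:.2f} BW, "
          f"max initial loading rate ~= {loading_rate:.0f} BW/s")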
Proposal for grid computing for nuclear applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.
2014-02-12
The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power needed to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, supplying computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.
Logistics Force Planner Assistant (Log Planner)
1989-09-01
… elements. The system is implemented on an MS-DOS based microcomputer, using the "Knowledge Pro" software tool. … service support structure. A microcomputer-based knowledge system was developed and successfully demonstrated. Four modules of information are … combat service support (CSS) units planning process to Army Staff logistics planners. Personnel newly assigned to logistics planning need an …
A Software Package for Neural Network Applications Development
NASA Technical Reports Server (NTRS)
Baran, Robert H.
1993-01-01
Original Backprop (Version 1.2) is an MS-DOS package of four stand-alone C-language programs that enable users to develop neural network solutions to a variety of practical problems. Original Backprop generates three-layer, feed-forward (series-coupled) networks which map fixed-length input vectors into fixed-length output vectors through an intermediate (hidden) layer of binary threshold units. Version 1.2 can handle up to 200 input vectors at a time, each having up to 128 real-valued components. The first subprogram, TSET, appends a number (up to 16) of classification bits to each input, thus creating a training set of input-output pairs. The second subprogram, BACKPROP, creates a trilayer network to do the prescribed mapping and modifies the weights of its connections incrementally until the training set is learned. The learning algorithm is the 'back-propagating' error correction procedure first described by F. Rosenblatt in 1961. The third subprogram, VIEWNET, lets the trained network be examined, tested, and 'pruned' (by the deletion of unnecessary hidden units). The fourth subprogram, DONET, makes a TSR routine by which the finished product of the neural net design-and-training exercise can be consulted under other MS-DOS applications.
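A minimal numpy sketch in the spirit of BACKPROP's three-layer training follows. Note one plainly labeled substitution: the package uses binary threshold hidden units, whereas this sketch uses sigmoid units so the error is differentiable; the XOR training set is purely illustrative.

    # Minimal three-layer feed-forward net trained by back-propagating
    # error. Sigmoids replace the package's binary threshold units so the
    # gradient exists; the XOR data set is illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1.0]])   # input vectors
    Y = np.array([[0], [1], [1], [0.0]])               # classification bits

    W1 = rng.normal(scale=0.5, size=(2, 4))            # input -> hidden
    W2 = rng.normal(scale=0.5, size=(4, 1))            # hidden -> output
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):                              # incremental updates
        H = sig(X @ W1)                                # hidden activations
        O = sig(H @ W2)                                # network output
        dO = (O - Y) * O * (1 - O)                     # output error term
        dH = (dO @ W2.T) * H * (1 - H)                 # back-propagated error
        W2 -= H.T @ dO
        W1 -= X.T @ dH

    print(np.round(O.ravel(), 2))                      # should approach [0, 1, 1, 0]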
ASAP- ARTIFICIAL SATELLITE ANALYSIS PROGRAM
NASA Technical Reports Server (NTRS)
Kwok, J.
1994-01-01
The Artificial Satellite Analysis Program (ASAP) is a general orbit prediction program which incorporates sufficient orbit modeling accuracy for mission design, maneuver analysis, and mission planning. ASAP is suitable for studying planetary orbit missions with spacecraft trajectories of reconnaissance (flyby) and exploratory (mapping) nature. Sample data is included for a geosynchronous station drift cycle study, a Venus radar mapping strategy, a frozen orbit about Mars, and a repeat ground trace orbit. ASAP uses Cowell's method in the numerical integration of the equations of motion. The orbital mechanics calculation contains perturbations due to non-sphericity (up to a 40 X 40 field) of the planet, lunar and solar effects, and drag and solar radiation pressure. An 8th order Runge-Kutta integration scheme with variable step size control is used for efficient propagation. The input includes the classical osculating elements, orbital elements of the sun relative to the planet, reference time and dates, drag coefficient, gravitational constants, and planet radius, rotation rate, etc. The printed output contains Cartesian coordinates, velocity, equinoctial elements, and classical elements for each time step or event step. At each step, selected output is added to a plot file. The ASAP package includes a program for sorting this plot file. LOTUS 1-2-3 is used in the supplied examples to graph the results, but any graphics software package could be used to process the plot file. ASAP is not written to be mission-specific. Instead, it is intended to be used for most planetary orbiting missions. As a consequence, the user has to have some basic understanding of orbital mechanics to provide the correct input and interpret the subsequent output. ASAP is written in FORTRAN 77 for batch execution and has been implemented on an IBM PC compatible computer operating under MS-DOS. The ASAP package requires a math coprocessor and a minimum of 256K RAM. This program was last updated in 1988 with version 2.03. IBM PC is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. Lotus and 1-2-3 are registered trademarks of Lotus Development Corporation.
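A sketch of Cowell-style propagation as described above: direct numerical integration of the equations of motion, here with scipy's Runge-Kutta integrator standing in for ASAP's 8th-order scheme, and with all perturbations (non-sphericity, third bodies, drag, radiation pressure) omitted.

    # Two-body Cowell propagation sketch; scipy's RK45 stands in for an
    # 8th-order Runge-Kutta scheme. Perturbations are omitted.
    import numpy as np
    from scipy.integrate import solve_ivp

    MU = 398600.4418  # Earth's GM, km^3/s^2

    def two_body(t, y):
        r = y[:3]
        a = -MU * r / np.linalg.norm(r) ** 3   # point-mass gravity
        return np.hstack([y[3:], a])

    # Roughly geosynchronous circular orbit: r = 42164 km
    y0 = [42164.0, 0.0, 0.0, 0.0, np.sqrt(MU / 42164.0), 0.0]
    sol = solve_ivp(two_body, (0.0, 86164.0), y0, rtol=1e-9, atol=1e-9)
    print(f"position offset after one sidereal day: "
          f"{np.linalg.norm(sol.y[:3, -1] - y0[:3]):.3f} km")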
The effect of inflammation and its reduction on brain plasticity in multiple sclerosis: MRI evidence
Tomassini, Valentina; d'Ambrosio, Alessandro; Petsas, Nikolaos; Wise, Richard G.; Sbardella, Emilia; Allen, Marek; Tona, Francesca; Fanelli, Fulvia; Foster, Catherine; Carnì, Marco; Gallo, Antonio; Pantano, Patrizia; Pozzilli, Carlo
2016-01-01
Brain plasticity is the basis for systems-level functional reorganization that promotes recovery in multiple sclerosis (MS). As inflammation interferes with plasticity, its pharmacological modulation may restore plasticity by promoting desired patterns of functional reorganization. Here, we tested the hypothesis that brain plasticity probed by a visuomotor adaptation task is impaired with MS inflammation and that pharmacological reduction of inflammation facilitates its restoration. MS patients were assessed twice before (sessions 1 and 2) and once after (session 3) the beginning of Interferon beta (IFN beta), using behavioural and structural MRI measures. During each session, 2 functional MRI runs of a visuomotor task, separated by 25 minutes of task practice, were performed. Within-session between-run change in task-related functional signal was our imaging marker of plasticity. During session 1, patients were compared with healthy controls. Comparison of patients' sessions 2 and 3 tested the effect of reduced inflammation on our imaging marker of plasticity. The proportion of patients with gadolinium-enhancing lesions reduced significantly during IFN beta. In session 1, patients demonstrated a greater between-run difference in functional MRI activity of secondary visual areas and cerebellum than controls. This abnormally large practice-induced signal change in visual areas, and in functionally connected posterior parietal and motor cortices, was reduced in patients in session 3 compared with 2. Our results suggest that MS inflammation alters short-term plasticity underlying motor practice. Reduction of inflammation with IFN beta is associated with a restoration of this plasticity, suggesting that modulation of inflammation may enhance recovery-oriented strategies that rely on patients' brain plasticity. Hum Brain Mapp 37:2431–2445, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc. PMID:26991559
Quantitative Method for Simultaneous Analysis of Acetaminophen and 6 Metabolites.
Lammers, Laureen A; Achterbergh, Roos; Pistorius, Marcel C M; Romijn, Johannes A; Mathôt, Ron A A
2017-04-01
Hepatotoxicity after ingestion of high-dose acetaminophen [N-acetyl-para-aminophenol (APAP)] is caused by the metabolites of the drug. To gain more insight into factors influencing susceptibility to APAP hepatotoxicity, quantification of APAP and metabolites is important. A few methods have been developed to simultaneously quantify APAP and its most important metabolites. However, these methods require comprehensive sample preparation and long run times. The aim of this study was to develop and validate a simplified, but sensitive method for the simultaneous quantification of acetaminophen, the main metabolites acetaminophen glucuronide and acetaminophen sulfate, and 4 Cytochrome P450-mediated metabolites by using liquid chromatography with mass spectrometric (LC-MS) detection. The method was developed and validated for human plasma and entailed a single sample-preparation procedure, enabling quick processing of the samples, followed by an LC-MS method with a chromatographic run time of 9 minutes. The method was validated for selectivity, linearity, accuracy, imprecision, dilution integrity, recovery, process efficiency, ionization efficiency, and carryover effect. The method showed good selectivity without matrix interferences. For all analytes, the mean process efficiency was >86%, and the mean ionization efficiency was >94%. Furthermore, the accuracy was between 90.3% and 112% for all analytes, and the within- and between-run imprecision were <20% for the lower limit of quantification and <14.3% for the middle level and upper limit of quantification. The method presented here enables the simultaneous quantification of APAP and 6 of its metabolites. It is less time consuming than previously reported methods because it requires only a single and simple method for the sample preparation followed by an LC-MS method with a short run time. Therefore, this analytical method provides a useful method for both clinical and research purposes.
NASA Astrophysics Data System (ADS)
Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun
2015-01-01
Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N,N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive group (a triazine ester), are cost-effective because of their synthetic simplicity, and increase throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
Isotope-dilution gas chromatography-mass spectrometry method for the analysis of hydroxyurea.
Garg, Uttam; Scott, David; Frazee, Clint; Kearns, Gregory; Neville, Kathleen
2015-06-01
Hydroxyurea is used in the treatment of various malignancies and sickle cell disease. There are limited studies on the pharmacokinetics of hydroxyurea, particularly in pediatric patients. An accurate, precise, and sensitive method is needed to support such studies and to monitor therapeutic adherence. We describe a novel gas chromatography-mass spectrometry (GC-MS) method for the determination of hydroxyurea concentration in plasma using stable-isotope-labeled hydroxyurea (13C,15N2) as an internal standard. The method involved an organic extraction followed by the preparation of trimethylsilyl (TMS) derivatives of hydroxyurea for GC-MS selected ion-monitoring analysis. The following mass-to-charge (m/z) ratio ions for silylated hydroxyurea and 13C,15N2-hydroxyurea were monitored: hydroxyurea, quantitative ion 277, qualifier ions 292 and 249; 13C,15N2-hydroxyurea, quantitative ion 280, qualifier ion 295. This method was evaluated for reportable range, accuracy, within-run and between-run imprecisions, and limits of quantification. The reportable range for the method was 0.1-100 mcg/mL. All results were accurate within an allowable error of 15%. Within-run and between-run imprecisions were <15%. Samples were stable for at least 4 hours at room temperature, 2 months at -20°C, and 6 months at -70°C, and after 3 freeze/thaw cycles. Extraction efficiency for 1-, 5-, 10-, and 50-mcg/mL samples averaged 2.2%, 1.8%, 1.6%, and 1.4%, respectively. The isotope-dilution GC-MS method for analysis of hydroxyurea described here is accurate, sensitive, precise, and robust. Its characteristics make the method suitable for supporting pharmacokinetic studies and/or clinical therapeutic monitoring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-20
An automated software tool that extracts drift times and computes the associated collision cross sections for small molecules analyzed by ion mobility spectrometry-mass spectrometry (IMS-MS), based on a target list of expected ions provided by the user.
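The record does not give the conversion itself; the standard route from a measured drift time to a collision cross section is the Mason-Schamp equation, sketched below with entirely hypothetical instrument geometry, buffer gas, and drift time.

    # Sketch: drift time -> collision cross section via Mason-Schamp.
    # Drift length, voltage, pressure, masses, and drift time are hypothetical.
    import math

    E_CHARGE = 1.602176634e-19     # C
    KB = 1.380649e-23              # J/K
    AMU = 1.66053906660e-27        # kg
    N0 = 2.6867811e25              # Loschmidt number density, m^-3

    def ccs_A2(t_drift_ms, L_m=0.90, V=3000.0, T=300.0, p_torr=4.0,
               m_ion_amu=500.0, m_gas_amu=28.0):
        """Mason-Schamp CCS (in A^2) for a singly charged ion in N2."""
        K = (L_m ** 2 / V) / (t_drift_ms * 1e-3)       # mobility, m^2/(V s)
        K0 = K * (p_torr / 760.0) * (273.15 / T)       # reduced mobility
        mu = (m_ion_amu * m_gas_amu / (m_ion_amu + m_gas_amu)) * AMU
        ccs = (3 * E_CHARGE / (16 * N0 * K0)) * \
              math.sqrt(2 * math.pi / (mu * KB * T))
        return ccs * 1e20                              # m^2 -> A^2

    print(f"CCS ~= {ccs_A2(25.0):.0f} A^2")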
NASA Astrophysics Data System (ADS)
Hansen, Rebecca L.; Lee, Young Jin
2017-09-01
Metabolomics experiments require chemical identifications, often through MS/MS analysis. In mass spectrometry imaging (MSI), this necessitates running several serial tissue sections or using a multiplex data acquisition method. We have previously developed a multiplex MSI method to obtain MS and MS/MS data in a single experiment to acquire more chemical information in less data acquisition time. In this method, each raster step is composed of several spiral steps and each spiral step is used for a separate scan event (e.g., MS or MS/MS). One main limitation of this method is the loss of spatial resolution as the number of spiral steps increases, limiting its applicability for high-spatial resolution MSI. In this work, we demonstrate multiplex MS imaging is possible without sacrificing spatial resolution by the use of overlapping spiral steps, instead of spatially separated spiral steps as used in the previous work. Significant amounts of matrix and analytes are still left after multiple spectral acquisitions, especially with nanoparticle matrices, so that high quality MS and MS/MS data can be obtained on virtually the same tissue spot. This method was then applied to visualize metabolites and acquire their MS/MS spectra in maize leaf cross-sections at 10 μm spatial resolution.
Numerically modeling oxide entrainment in the filling of castings: The effect of the Weber number
NASA Astrophysics Data System (ADS)
Cuesta, Rafael; Delgado, Angel; Maroto, Antonio; Mozo, David
2006-11-01
In the casting of aluminum alloys and, in general, in the casting of film-forming alloys, the entrainment of oxides into the bulk liquid severely reduces the strength of the cast part. To avoid this, the melt velocity must be kept below a certain value, namely the critical velocity, which is widely assumed to be 0.5 m/s. In this paper the authors investigate, by means of fluid-dynamic computer simulation, the dependence of critical velocity on geometrical features of the running channels and thermophysical properties of the molten metal. For each of the geometries studied, once the critical velocity is exceeded, the amount of oxide entrained in the liquid is quantified. The analysis of the results reveals that surface entrainment is much more related to the non-dimensional Weber number than to melt velocity.
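For reference, the Weber number compares inertial to surface-tension forces: We = ρ·v²·L/σ, where ρ is the melt density, v the velocity, L a characteristic length (e.g., channel depth), and σ the surface tension. As an illustrative estimate with assumed values (not from the paper): liquid aluminum with ρ ≈ 2380 kg/m³ and σ ≈ 0.9 N/m flowing at the 0.5 m/s critical velocity through a channel of depth L = 0.01 m gives We ≈ 2380 × 0.5² × 0.01 / 0.9 ≈ 6.6.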
Accelerated numerical processing of electronically recorded holograms with reduced speckle noise.
Trujillo, Carlos; Garcia-Sucerquia, Jorge
2013-09-01
The numerical reconstruction of digitally recorded holograms suffers from speckle noise. An accelerated method that uses general-purpose computing in graphics processing units to reduce that noise is shown. The proposed methodology utilizes parallelized algorithms to record, reconstruct, and superimpose multiple uncorrelated holograms of a static scene. For the best tradeoff between reduction of the speckle noise and processing time, the method records, reconstructs, and superimposes six holograms of 1024 × 1024 pixels in 68 ms; for this case, the methodology reduces the speckle noise by 58% compared with that exhibited by a single hologram. The fully parallelized method running on a commodity graphics processing unit is one order of magnitude faster than the same technique implemented on a regular CPU using its multithreading capabilities. Experimental results are shown to validate the proposal.
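A sketch of the superposition idea described above: averaging the intensities of N uncorrelated speckled reconstructions lowers the speckle contrast roughly as 1/√N, consistent in magnitude with the ~58% reduction reported for six holograms. The flat test scene and the fully developed (exponential) speckle model are assumptions.

    # Sketch: speckle reduction by averaging N uncorrelated reconstructions.
    import numpy as np

    rng = np.random.default_rng(1)
    scene = np.ones((256, 256))                 # flat test object

    def speckled(img):
        """Multiply by a fully developed speckle pattern (exponential)."""
        return img * rng.exponential(1.0, img.shape)

    single = speckled(scene)
    averaged = np.mean([speckled(scene) for _ in range(6)], axis=0)

    contrast = lambda im: im.std() / im.mean()
    print(f"speckle contrast: single {contrast(single):.2f}, "
          f"6-average {contrast(averaged):.2f}")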
Li, Fumin; Wang, Jun; Jenkins, Rand
2016-05-01
There is an ever-increasing demand for high-throughput LC-MS/MS bioanalytical assays to support drug discovery and development. Matrix effects of sofosbuvir (protonated) and paclitaxel (sodiated) were thoroughly evaluated using high-throughput chromatography (defined as having a run time ≤1 min) under 14 elution conditions with extracts from protein precipitation, liquid-liquid extraction and solid-phase extraction. A slight separation, in terms of retention time, between underlying matrix components and sofosbuvir/paclitaxel can greatly alleviate matrix effects. High-throughput chromatography, with proper optimization, can provide rapid and effective chromatographic separation under 1 min to alleviate matrix effects and enhance assay ruggedness for regulated bioanalysis.
Murai, Akihiko; Kurosaki, Kosuke; Yamane, Katsu; Nakamura, Yoshihiko
2010-12-01
In this paper, we present a system that estimates and visualizes muscle tensions in real time using optical motion capture and electromyography (EMG). The system overlays rendered musculoskeletal human model on top of a live video image of the subject. The subject therefore has an impression that he/she sees the muscles with tension information through the cloth and skin. The main technical challenge lies in real-time estimation of muscle tension. Since existing algorithms using mathematical optimization to distribute joint torques to muscle tensions are too slow for our purpose, we develop a new algorithm that computes a reasonable approximation of muscle tensions based on the internal connections between muscles known as neuronal binding. The algorithm can estimate the tensions of 274 muscles in only 16 ms, and the whole visualization system runs at about 15 fps. The developed system is applied to assisting sport training, and the user case studies show its usefulness. Possible applications include interfaces for assisting rehabilitation. Copyright © 2010 Elsevier Ltd. All rights reserved.
Mei, Shenghui; Zhu, Leting; Li, Xingang; Wang, Jiaqing; Jiang, Xueyun; Chen, Haiyan; Huo, Jiping; Yang, Li; Lin, Song; Zhao, Zhigang
2017-01-01
Methotrexate (MTX) plasma concentration is routinely monitored to guide the dosage regimen of rescue drugs. This study aims to develop and validate an ultra-performance liquid chromatography tandem mass spectrometry (UPLC-MS/MS) method for plasma MTX analysis, and to establish its agreement with the fluorescence polarization immunoassay (FPIA) in patients with high-dose MTX therapy. Separation was achieved by gradient elution with methanol and water (0.05% formic acid) at 40°C with a run time of 3 min. The intra- and inter-day inaccuracy and imprecision of the UPLC-MS/MS method were -4.25 to 3.1 and less than 7.63%, respectively. The IS-normalized recovery and matrix effect were 87.05 to 92.81 and 124.43 to 134.57%. The correlation coefficients between UPLC-MS/MS and FPIA were greater than 0.98. The UPLC-MS/MS method was in agreement with the FPIA at high levels of MTX (1.0 - 100 μmol/L), but not at low levels (0.01 - 1.0 μmol/L). Further studies are warranted to confirm these results.
De Pooter, Jan; El Haddad, Milad; Stroobandt, Roland; De Buyzere, Marc; Timmermans, Frank
2017-06-01
QRS duration (QRSD) plays a key role in the field of cardiac resynchronization therapy (CRT). Computer-calculated QRSD assessments are widely used, however inter-manufacturer differences have not been investigated in CRT candidates. QRSD was assessed in 377 digitally stored ECGs: 139 narrow QRS, 140 LBBB and 98 ventricular paced ECGs. Manual QRSD was measured as global QRSD, using digital calipers, by two independent observers. Computer-calculated QRSD was assessed by Marquette 12SL (GE Healthcare, Waukesha, WI, USA) and SEMA3 (Schiller, Baar, Switzerland). Inter-manufacturer differences of computer-calculated QRSD assessments vary among different QRS morphologies: narrow QRSD: 4 [2-9] ms (median [IQR]), p=0.010; LBBB QRSD: 7 [2-10] ms, p=0.003 and paced QRSD: 13 [6-18] ms, p=0.007. Interobserver differences of manual QRSD assessments measured: narrow QRSD: 4 [2-6] ms, p=non-significant; LBBB QRSD: 6 [3-12] ms, p=0.006; paced QRSD: 8 [4-18] ms, p=0.001. In LBBB ECGs, intraclass correlation coefficients (ICCs) were comparable for inter-manufacturer and interobserver agreement (ICC 0.830 versus 0.837). When assessing paced QRSD, manual measurements showed higher ICC compared to inter-manufacturer agreement (ICC 0.902 versus 0.776). Using guideline cutoffs of 130ms, up to 15% of the LBBB ECGs would be misclassified as <130ms or ≥130ms by at least one method. Using a cutoff of 150ms, this number increases to 33% of ECGs being misclassified. However, by combining LBBB-morphology and QRSD, the number of misclassified ECGs can be decreased by half. Inter-manufacturer differences in computer-calculated QRSD assessments are significant and may compromise adequate selection of individual CRT candidates when using QRSD as sole parameter. Paced QRSD should preferentially be assessed by manual QRSD measurements. Copyright © 2017 Elsevier B.V. All rights reserved.
Evidence for abiotic sulfurization of marine dissolved organic matter in sulfidic environments
NASA Astrophysics Data System (ADS)
Pohlabeln, A. M.; Niggemann, J.; Dittmar, T.
2016-02-01
Sedimentary organic matter abiotically sulfurizes in sulfidic marine environments. Here we hypothesize that sulfurization also affects dissolved organic matter (DOM), and that sulfidic marine environments are sources of dissolved organic sulfur (DOS) to the ocean. To test these hypotheses we studied solid-phase extractable (SPE) DOS in the Black Sea at various water column depths (oxic and anoxic) and in sediment porewaters from the German Wadden Sea. The concentration and molecular composition of SPE-DOS from these sites and from the oxic water columns of the North Sea (Germany) and of the North Pacific were compared. In support of our hypotheses, SPE-DOS concentrations were elevated in sulfidic waters compared to oxic waters. For a detailed molecular characterization of SPE-DOS, selective wet-chemical alteration experiments targeting different sulfur-containing functional groups were applied prior to Fourier-transform ion cyclotron resonance mass spectrometry (FT-ICR-MS). These experiments included harsh hydrolysis, selective derivatization of thiols, oxidation, and deoxygenation to test for thioesters, sulfonic acid esters, alkylsulfates, thiols, non-aromatic thioethers, and sulfoxides. Additionally, collision-induced fragmentation experiments were applied to test for sulfonic acids. The tests revealed that the sulfonic acid group was the main structural feature in SPE-DOS, independent of the environmental conditions of the sampling site. Only in Wadden Sea anoxic porewater also non-aromatic thioethers were found which are presumably not stable in oxic waters. The findings from our field studies were confirmed in laboratory experiments, where we abiotically sulfurized marine and algal-derived DOM under conditions similar to that in anoxic marine sediments.
Pichler, Peter; Mazanek, Michael; Dusberger, Frederico; Weilnböck, Lisa; Huber, Christian G; Stingl, Christoph; Luider, Theo M; Straube, Werner L; Köcher, Thomas; Mechtler, Karl
2012-11-02
While the performance of liquid chromatography (LC) and mass spectrometry (MS) instrumentation continues to increase, applications such as analyses of complete or near-complete proteomes and quantitative studies require constant and optimal system performance. For this reason, research laboratories and core facilities alike are recommended to implement quality control (QC) measures as part of their routine workflows. Many laboratories perform sporadic quality control checks. However, successive and systematic longitudinal monitoring of system performance would be facilitated by dedicated automatic or semiautomatic software solutions that aid an effortless analysis and display of QC metrics over time. We present the software package SIMPATIQCO (SIMPle AuTomatIc Quality COntrol) designed for evaluation of data from LTQ Orbitrap, Q-Exactive, LTQ FT, and LTQ instruments. A centralized SIMPATIQCO server can process QC data from multiple instruments. The software calculates QC metrics supervising every step of data acquisition from LC and electrospray to MS. For each QC metric the software learns the range indicating adequate system performance from the uploaded data using robust statistics. Results are stored in a database and can be displayed in a comfortable manner from any computer in the laboratory via a web browser. QC data can be monitored for individual LC runs as well as plotted over time. SIMPATIQCO thus assists the longitudinal monitoring of important QC metrics such as peptide elution times, peak widths, intensities, total ion current (TIC) as well as sensitivity, and overall LC-MS system performance; in this way the software also helps identify potential problems. The SIMPATIQCO software package is available free of charge.
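A sketch of the 'learn the adequate range with robust statistics' step mentioned above: a median ± k·MAD band tolerates occasional outlier runs. The retention-time series and the choice k = 3 are assumptions for illustration, not SIMPATIQCO's actual rule.

    # Sketch: robust "adequate performance" band for a QC metric.
    import numpy as np

    rt = np.array([33.1, 33.4, 33.2, 33.3, 41.8, 33.2, 33.5])  # min, per run

    med = np.median(rt)
    mad = 1.4826 * np.median(np.abs(rt - med))   # MAD scaled to ~sigma
    lo, hi = med - 3 * mad, med + 3 * mad
    flagged = rt[(rt < lo) | (rt > hi)]
    print(f"expected range {lo:.2f}-{hi:.2f} min; flagged runs: {flagged}")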
Numerical study of canister filters with alternative filter cap configurations
NASA Astrophysics Data System (ADS)
Mohammed, A. N.; Daud, A. R.; Abdullah, K.; Seri, S. M.; Razali, M. A.; Hushim, M. F.; Khalid, A.
2017-09-01
Air filtration systems and filters play an important role in delivering good-quality air to turbomachinery such as gas turbines. The filtration system and filter improve the quality of the air and protect the gas turbine parts from contaminants which could cause damage. During separation of contaminants from the air, a pressure drop cannot be avoided, but it can be minimized, which helps to reduce the intake losses of the engine [1]. This study is focused on the configuration of the filter in order to obtain the minimal pressure drop along the filter. The configurations used are the basic filter geometry provided by Salutary Avenue Manufacturing Sdn. Bhd. and two modified canister filter caps designed based on the basic filter model. The geometries of the filter are generated using SOLIDWORKS software, and Computational Fluid Dynamics (CFD) software is used to analyse and simulate the flow through the filter. In this study, the inlet velocities are 0.032 m/s, 0.063 m/s, 0.094 m/s and 0.126 m/s. The total pressure drops produced by the basic filter, modified filter 1 and modified filter 2 are 292.3 Pa, 251.11 Pa and 274.7 Pa, respectively. The pressure drop reduction for modified filter 1 is 41.19 Pa, 14.1% lower than the basic filter, and the pressure drop reduction for modified filter 2 is 17.6 Pa, 6.02% lower than the basic filter. The pressure drops for the basic filter differ slightly from those of the Salutary Avenue filter due to limited data and experiment details. CFD software is very reliable for running simulations, rather than producing prototypes and conducting experiments, thus reducing the overall time and cost of this study.
Wang, Guanghui; Wu, Wells W; Zeng, Weihua; Chou, Chung-Lin; Shen, Rong-Fong
2006-05-01
A critical step in protein biomarker discovery is the ability to contrast proteomes, a process referred to generally as quantitative proteomics. While stable-isotope labeling (e.g., ICAT, 18O- or 15N-labeling, or AQUA) remains the core technology used in mass spectrometry-based proteomic quantification, increasing efforts have been directed to the label-free approach that relies on direct comparison of peptide peak areas between LC-MS runs. This latter approach is attractive to investigators for its simplicity as well as cost effectiveness. In the present study, the reproducibility and linearity of using a label-free approach to highly complex proteomes were evaluated. Various amounts of proteins from different proteomes were subjected to repeated LC-MS analyses using an ion trap or Fourier transform mass spectrometer. Highly reproducible data were obtained between replicated runs, as evidenced by nearly ideal Pearson's correlation coefficients (for ion peak areas or retention time) and average peak area ratios. In general, more than 50% and nearly 90% of the peptide ion ratios deviated less than 10% and 20%, respectively, from the average in duplicate runs. In addition, the multiplicity ratios of the amounts of proteins used correlated nicely with the observed averaged ratios of peak areas calculated from detected peptides. Furthermore, the removal of abundant proteins from the samples led to an improvement in reproducibility and linearity. A computer program has been written to automate the processing of data sets from experiments with groups of multiple samples for statistical analysis. Algorithms for outlier-resistant mean estimation and for adjusting the statistical significance threshold in multiplicity of testing were incorporated to minimize the rate of false positives. The program was applied to quantify changes in proteomes of parental and p53-deficient HCT-116 human cells and found to yield reproducible results. Overall, this study demonstrates an alternative approach that allows global quantification of differentially expressed proteins in complex proteomes. The utility of this method for biomarker discovery is likely to synergize with future improvements in the detection sensitivity of mass spectrometers.
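A sketch of the label-free comparison described above: Pearson correlation and per-peptide peak-area ratios across matched ions from two runs. The matched peak areas below are invented for illustration.

    # Sketch: reproducibility metrics for label-free LC-MS comparison.
    import numpy as np

    run1 = np.array([1.2e6, 8.4e5, 3.1e6, 5.5e5, 2.2e6])  # matched ion areas
    run2 = np.array([1.3e6, 8.0e5, 3.0e6, 6.1e5, 2.1e6])

    r = np.corrcoef(run1, run2)[0, 1]                     # Pearson correlation
    ratios = run2 / run1
    within_10 = np.mean(np.abs(ratios - ratios.mean()) / ratios.mean() < 0.10)
    print(f"Pearson r = {r:.3f}; mean ratio = {ratios.mean():.2f}; "
          f"{within_10:.0%} of ions within 10% of the mean ratio")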
2011-01-01
Background Labeling whole Arabidopsis (Arabidopsis thaliana) plants to high enrichment with 13C for proteomics and metabolomics applications would facilitate experimental approaches not possible by conventional methods. Such a system would use the plant's native capacity for carbon fixation to ubiquitously incorporate 13C from 13CO2 gas. Because of the high cost of 13CO2 it is critical that the design conserve the labeled gas. Results A fully enclosed automated plant growth enclosure has been designed and assembled where the system simultaneously monitors humidity, temperature, pressure and 13CO2 concentration with continuous adjustment of humidity, pressure and 13CO2 levels controlled by a computer running LabView software. The enclosure is mounted on a movable cart for mobility among growth environments. Arabidopsis was grown in the enclosure for up to 8 weeks and obtained on average >95 atom% enrichment for small metabolites, such as amino acids and >91 atom% for large metabolites, including proteins and peptides. Conclusion The capability of this labeling system for isotope dilution experiments was demonstrated by evaluation of amino acid turnover using GC-MS as well as protein turnover using LC-MS/MS. Because this 'open source' Arabidopsis 13C-labeling growth environment was built using readily available materials and software, it can be adapted easily to accommodate many different experimental designs. PMID:21310072
Large scale cardiac modeling on the Blue Gene supercomputer.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J
2008-01-01
Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution due to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data in segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data-set was given by both ventricles of the Visible Female data-set in a 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even if the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heart beat.
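A sketch of recursive bisection in the spirit of ORB: split the weighted elements along the longest axis so each half carries about half the computational load, recursing until there is one partition per compute node. The mock geometry below uses the abstract's setting (b), weights 10 for tissue and 1 for non-tissue; this is not the authors' implementation.

    # Sketch of optimal-recursive-bisection-style load-balanced partitioning.
    import numpy as np

    def orb(points, weights, n_parts):
        if n_parts == 1:
            return [points]
        axis = np.argmax(points.max(0) - points.min(0))   # longest extent
        order = np.argsort(points[:, axis])
        csum = np.cumsum(weights[order])
        cut = np.searchsorted(csum, csum[-1] / 2)          # balance the load
        left, right = order[:cut], order[cut:]
        return (orb(points[left], weights[left], n_parts // 2)
                + orb(points[right], weights[right], n_parts - n_parts // 2))

    rng = np.random.default_rng(3)
    pts = rng.random((10000, 3))                           # mock element positions
    w = np.where(rng.random(10000) < 0.3, 10.0, 1.0)       # tissue vs non-tissue
    parts = orb(pts, w, 8)
    print([len(p) for p in parts])                         # element counts per node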
NASA Astrophysics Data System (ADS)
Le, Anh H.; Park, Young W.; Ma, Kevin; Jacobs, Colin; Liu, Brent J.
2010-03-01
Multiple Sclerosis (MS) is a progressive neurological disease affecting myelin pathways in the brain. Multiple lesions in the white matter can cause paralysis and severe motor disabilities in affected patients. To address the inconsistency and user-dependency of manual lesion measurement on MRI, we have proposed a 3-D automated lesion quantification algorithm to enable objective and efficient lesion volume tracking. The computer-aided detection (CAD) of MS, written in MATLAB, utilizes the K-Nearest Neighbors (KNN) method to compute the probability of lesions on a per-voxel basis. Despite the highly optimized image-processing algorithms used in CAD development, integrating and evaluating MS CAD in the clinical workflow is technically challenging because the recursive nature of the algorithm demands high computation rates and memory bandwidth. In this paper, we present the development and evaluation of a computing engine in the graphical processing unit (GPU) with MATLAB for segmentation of MS lesions. The paper investigates the utilization of a high-end GPU for parallel computing of KNN in the MATLAB environment to improve algorithm performance. The integration is accomplished using NVIDIA's CUDA development toolkit for MATLAB. The results of this study will validate the practicality and effectiveness of the prototype MS CAD in a clinical setting. The GPU method may allow MS CAD to integrate rapidly with an electronic patient record or any disease-centric health care system.
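The per-voxel KNN probability step can be sketched on the CPU as follows (the paper's implementation runs in MATLAB on an NVIDIA GPU via CUDA); the features, training data, and value of k below are synthetic assumptions.

```python
import numpy as np

def knn_lesion_probability(voxel_features, train_features, train_labels, k=5):
    """For each voxel, the lesion probability is the fraction of its k
    nearest training samples (in feature space) labeled as lesion."""
    d = np.linalg.norm(train_features[None, :, :] -
                       voxel_features[:, None, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return train_labels[nearest].mean(axis=1)

rng = np.random.default_rng(0)
train_x = rng.normal(size=(200, 3))             # e.g. T1/T2/FLAIR intensities
train_y = (train_x[:, 0] > 0.5).astype(float)   # synthetic lesion labels
voxels = rng.normal(size=(4, 3))
print(knn_lesion_probability(voxels, train_x, train_y))
```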
Do, Thanh Nhut; Gelin, Maxim F; Tan, Howe-Siang
2017-10-14
We derive general expressions that incorporate finite pulse envelope effects into a coherent two-dimensional optical spectroscopy (2DOS) technique. These expressions are simpler and less computationally intensive than the conventional triple integral calculations needed to simulate 2DOS spectra. The simplified expressions, involving multiplications of arbitrary pulse spectra with the 2D spectral response function, are shown to be exactly equal to the conventional triple integral calculations of 2DOS spectra if the 2D spectral response functions do not vary with population time. With minor modifications, they are also accurate for 2D spectral response functions with quantum beats and exponential decay during population time. These conditions cover a broad range of experimental 2DOS spectra. For certain analytically defined pulse spectra, we also derive expressions for 2D spectra with arbitrary population-time-dependent 2DOS spectral response functions. Having simpler and more efficient methods to calculate experimentally relevant 2DOS spectra that account for finite pulse effects will be important in the simulation and understanding of the complex systems routinely studied using 2DOS.
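A toy numerical sketch of the central simplification, weighting the 2D spectral response function by the pulse spectra on each frequency axis instead of evaluating triple time integrals, is given below; the Lorentzian response and Gaussian pulse spectra are illustrative assumptions, not the paper's model system.

```python
import numpy as np

# Frequency grids for the excitation (w1) and detection (w3) axes
w1 = np.linspace(-5, 5, 256)
w3 = np.linspace(-5, 5, 256)
W1, W3 = np.meshgrid(w1, w3, indexing="ij")

def response(W1, W3, gamma=0.5):
    """Toy 2D spectral response: a single peak at (0, 0)."""
    return 1.0 / ((W1**2 + gamma**2) * (W3**2 + gamma**2))

def pulse_spectrum(W, w0=0.0, sigma=1.5):
    """Gaussian pulse spectrum; an impulsive pulse would be flat (== 1)."""
    return np.exp(-(W - w0) ** 2 / (2 * sigma**2))

# Finite-pulse 2D spectrum: the response weighted by the pulse spectra acting
# on each frequency axis, replacing the conventional triple time integrals.
S_impulsive = response(W1, W3)
S_finite = S_impulsive * pulse_spectrum(W1) * pulse_spectrum(W3)

# Finite pulses leave the peak intact but attenuate the spectral wings
print("peak ratio:", S_finite.max() / S_impulsive.max())
print("wing ratio:", S_finite[0, 0] / S_impulsive[0, 0])
```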
Jerz, Gerold; Wybraniec, Sławomir; Gebers, Nadine; Winterhalter, Peter
2010-07-02
In this study, preparative ion-pair high-speed countercurrent chromatography was directly coupled to an electrospray ionization mass-spectrometry device (IP-HSCCC/ESI-MS-MS) for target-guided fractionation of high molecular weight acyl-oligosaccharide linked betacyanins from purple bracts of Bougainvillea glabra (Nyctaginaceae). The direct identification of six principal acyl-oligosaccharide linked betacyanins in the mass range between m/z 859 and m/z 1359 was achieved by positive ESI-MS ionization and gave access to the genuine pigment profile already during the course of the preparative separation. In addition, all MS/MS fragmentation data were provided during the chromatographic run, allowing a complete analysis of the substitution pattern. On-line purity evaluation of the recovered fractions is of high value in target-guided screening procedures and for immediate decisions about suitable fractions for further structural analysis. The applied preparative hyphenation was shown to be a versatile screening method for on-line monitoring of countercurrent chromatographic separations of polar crude pigment extracts, and it also traced some compounds present at minor concentrations. For the separation of 760 mg of crude pigment extract, the biphasic solvent system tert.-butylmethylether/n-butanol/acetonitrile/water 2:2:1:5 (v/v/v/v) was used with addition of the ion-pair-forming reagent trifluoroacetic acid. The preparative HSCCC eluate had to be modified by post-column addition of a make-up solvent stream containing formic acid to reduce ion suppression caused by trifluoroacetic acid, which significantly increased the response of ESI-MS/MS detection of the target substances. A variable low-pressure split unit guided a micro-eluate to the ESI-MS interface for sensitive and direct on-line detection, while the major volume of the effluent stream was directed to the fraction collector for preparative sample recovery. The applied make-up solvent mixture significantly improved the smoothness of the continuously measured IP-HSCCC-ESI-MS base peak ion trace in the experimental range of m/z 50-2200 by masking stationary phase bleeding and generating a stable single solvent phase for ESI-MS/MS detection. Immediate structural data were retrieved throughout the countercurrent chromatography run, including the complete MS/MS fragmentation patterns of the separated acyl-substituted betanidin oligoglycosides. Single ion monitoring clearly indicated the baseline separation of the more concentrated acylated betacyanin components. Copyright 2010 Elsevier B.V. All rights reserved.
Everything You Always Wanted to Know about Computers but Were Afraid to Ask.
ERIC Educational Resources Information Center
DiSpezio, Michael A.
1989-01-01
An overview of the basics of computers is presented. Definitions and discussions of processing, programs, memory, DOS, anatomy and design, central processing unit (CPU), disk drives, floppy disks, and peripherals are included. This article was designed to help teachers to understand basic computer terminology. (CW)
Personal-Computer Video-Terminal Emulator
NASA Technical Reports Server (NTRS)
Buckley, R. H.; Koromilas, A.; Smith, R. M.; Lee, G. E.; Giering, E. W.
1985-01-01
OWL-1200 video terminal emulator has been written for IBM Personal Computer. The OWL-1200 is a simple user terminal with some intelligent capabilities. These capabilities include screen formatting and block transmission of data. Emulator is written in PASCAL and Assembler for the IBM Personal Computer operating under DOS 1.1.
Simulation of LHC events on a million threads
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2015-12-01
Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
Herath, H M D R; Shaw, P N; Cabot, P; Hewavitharana, A K
2010-06-15
The high-performance liquid chromatography (HPLC) column is capable of enriching/pre-concentrating trace impurities from the mobile phase during column equilibration, prior to sample injection and elution. These impurities elute during gradient elution and result in significant chromatographic peaks. Three types of purified water were tested for their impurity levels, and hence their performance as mobile phase, in HPLC followed by the total ion current (TIC) mode of MS. Two types of HPLC-grade water produced 3-4 significant peaks in solvent blanks, while LC/MS-grade water produced no peaks (although the LC/MS-grade water also produced peaks after a few days of standing). None of the three waters produced peaks in HPLC followed by UV-Vis detection. These peaks, if co-eluted with the analyte, are capable of suppressing or enhancing the analyte signal in an MS detector. As it is not common practice to run solvent blanks in TIC mode when quantification is carried out using single ion monitoring (SIM) or single or multiple reaction monitoring (SRM or MRM), the effect of co-eluting impurities on the analyte signal, and hence on the accuracy of the results, is often unknown to the analyst. Running solvent blanks in TIC mode, regardless of the MS mode used for quantification, is essential in order to detect this problem and to take subsequent precautions. Copyright (c) 2010 John Wiley & Sons, Ltd.
Albrecht, Simone; Mittermayr, Stefan; Smith, Josh; Martín, Silvia Millán; Doherty, Margaret; Bones, Jonathan
2017-01-01
Quantitative glycomics represents an actively expanding research field, ranging from the discovery of disease-associated glycan alterations to the quantitative characterization of N-glycans on therapeutic proteins. Commonly used analytical platforms for comparative relative quantitation of complex glycan samples include MALDI-TOF-MS and chromatographic glycan profiling with subsequent data alignment and statistical evaluation. Limitations of such approaches include run-to-run technical variation and the potential introduction of subjectivity during data processing. Here, we introduce an offline 2D LC-MSE workflow for the fractionation and relative quantitation of twoplex isotopically labeled N-linked oligosaccharides using neutral 12C6 and 13C6 aniline (Δmass = 6 Da). Additional linkage-specific derivatization of sialic acids using 4-(4,6-dimethoxy-1,3,5-triazin-2-yl)-4-methylmorpholinium chloride offered simultaneous and advanced in-depth structural characterization. The potential of the method was demonstrated for the differential analysis of structurally defined N-glycans released from serum proteins of patients diagnosed with various stages of colorectal cancer. The described twoplex 12C6/13C6 aniline 2D LC-MS platform is ideally suited for differential glycomic analysis of structurally complex N-glycan pools due to the combination and analysis of samples in a single LC-MS injection and the associated minimization of technical variation. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
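A minimal sketch of the twoplex readout, pairing light (12C6) and heavy (13C6) aniline-labeled signals by their mass difference and taking intensity ratios, is shown below. The peak list and tolerance are synthetic, and singly charged ions are assumed; the nominal label difference is 6 Da, or about 6.0201 Da exact (6 × 1.003355 Da per 13C substitution).

```python
# Pair light/heavy labeled peaks and compute intensity ratios (illustrative).
DELTA = 6 * 1.003355   # exact 13C6 - 12C6 mass difference, Da
TOL = 0.01             # m/z matching tolerance, an assumed instrument value

peaks = [(1200.430, 8.1e5), (1206.450, 7.9e5),   # a light/heavy pair
         (1444.533, 2.2e5), (1450.553, 4.5e5)]   # another pair

def pair_ratios(peaks, delta=DELTA, tol=TOL):
    out = []
    for mz, inten in peaks:
        for mz2, inten2 in peaks:
            if abs((mz2 - mz) - delta) <= tol:   # heavy partner found
                out.append((mz, mz2, inten / inten2))
    return out

for light, heavy, ratio in pair_ratios(peaks):
    print(f"light {light:.3f} / heavy {heavy:.3f}: ratio = {ratio:.2f}")
```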
Running Jobs on the Peregrine System | High-Performance Computing | NREL
Documentation for running jobs on the Peregrine high-performance computing (HPC) system, covering how to run different types of jobs, batch job scheduling policies (queue names, limits, etc.), requesting different node types, and sample batch scripts.
Zhang, Chenghao; Luo, Huafei; Wu, Yubo; Zhang, Junyun; Zhang, Furong; Lin, Guobei; Wang, Hao
2016-02-01
A chiral UFLC-MS/MS method was established and validated for quantifying d-threo-methylphenidate (d-threo-MPH), l-threo-methylphenidate (l-threo-MPH), d-threo-ethylphenidate (d-threo-EPH), l-threo-ethylphenidate (l-threo-EPH) and d,l-threo-ritalinic acid (d,l-threo-RA) in rat plasma over the linear range of 1-500 ng/mL. Chiral separation was performed on an Astec Chirobiotic V2 column (5 μm, 250 × 2.1 mm) with isocratic elution using methanol containing 0.003% ammonium acetate (w/v) and 0.003% trifluoroacetic acid (v/v) at a flow rate of 0.3 mL/min. All analytes and the IS were extracted from rat plasma by a one-step liquid-liquid extraction (LLE) method. The intra- and inter-run accuracies were within 85-115%, and the intra- and inter-run precisions were <10% for all analytes. Extraction recoveries were 55-62% for d-threo-MPH, 54-60% for l-threo-MPH, 55-60% for d-threo-EPH, 53-57% for l-threo-EPH and 25-30% for d,l-threo-RA. The validated UFLC-MS/MS method was successfully applied to a pharmacokinetic interaction study of oral d-threo-MPH and l-threo-MPH (alone or in combination) in female Sprague Dawley rats. EPH was not detected in rat plasma following orally administered MPH without EtOH. As far as is known to the authors, this study is the first to use a one-step liquid-liquid extraction to extract, and a UFLC-MS/MS method to quantify, d-threo-MPH, l-threo-MPH, d-threo-EPH, l-threo-EPH and d,l-threo-RA simultaneously. Copyright © 2015 Elsevier B.V. All rights reserved.
Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories
NASA Technical Reports Server (NTRS)
Ng, Hok Kwan; Sridhar, Banavar
2016-01-01
This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared against a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs, ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement from parallel processing on computer clusters. The study also re-implements the trajectory optimization algorithm, reducing computational time through algorithm modifications, and integrates it with FACET so that the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field, can be used with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing computational efficiencies and are based on the potential application of optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
NASA Astrophysics Data System (ADS)
Chuluunbaatar, O.; Gusev, A. A.; Vinitsky, S. I.; Abrashkevich, A. G.
2008-11-01
A FORTRAN 77 program for calculating energy values, reaction matrix and corresponding radial wave functions in a coupled-channel approximation of the hyperspherical adiabatic approach is presented. In this approach, a multi-dimensional Schrödinger equation is reduced to a system of coupled second-order ordinary differential equations on a finite interval with homogeneous boundary conditions: (i) the Dirichlet, Neumann and third type at the left and right boundary points for the continuous spectrum problem, (ii) the Dirichlet and Neumann type conditions at the left boundary point and Dirichlet, Neumann and third type at the right boundary point for the discrete spectrum problem. The resulting system of radial equations containing the potential matrix elements and first-derivative coupling terms is solved using high-order accuracy approximations of the finite element method. As a test case, the program is applied to the calculation of the reaction matrix and radial wave functions for a 3D model of a hydrogen-like atom in a homogeneous magnetic field. This version extends the previous version 1.0 of the KANTBP program [O. Chuluunbaatar, A.A. Gusev, A.G. Abrashkevich, A. Amaya-Tapia, M.S. Kaschiev, S.Y. Larsen, S.I. Vinitsky, Comput. Phys. Commun. 177 (2007) 649-675].

Program summary
Program title: KANTBP
Catalogue identifier: ADZH_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZH_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 20 403
No. of bytes in distributed program, including test data, etc.: 147 563
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: Intel Xeon EM64T, Alpha 21264A, AMD Athlon MP, Pentium IV Xeon, Opteron 248, Intel Pentium IV
Operating system: Linux, Unix AIX 5.3, SunOS 5.8, Solaris, Windows XP
RAM: Depends on the number of differential equations, the number and order of finite elements, the number of hyperradial points, and the number of eigensolutions required. The test run requires 2 MB.
Classification: 2.1, 2.4
External routines: GAULEG and GAUSSJ [2]
Nature of problem: In the hyperspherical adiabatic approach [3-5], a multidimensional Schrödinger equation for a two-electron system [6] or a hydrogen atom in a magnetic field [7-9] is reduced, by separating the radial coordinate ρ from the angular variables, to a system of second-order ordinary differential equations containing potential matrix elements and first-derivative coupling terms. The purpose of this paper is to present the finite element method procedure, based on the use of high-order accuracy approximations, for calculating approximate eigensolutions of the continuum spectrum for such systems of coupled differential equations on finite intervals of the radial variable ρ∈[ρmin, ρmax]. This approach can be used in calculations of the effects of electron screening on low-energy fusion cross sections [10-12].
Solution method: The boundary problems for the coupled second-order differential equations are solved by the finite element method using high-order accuracy approximations [13]. The generalized algebraic eigenvalue problem AF=EBF with respect to the pair of unknowns (E, F), arising after replacement of the differential problem by the finite-element approximation, is solved by the subspace iteration method using the SSPACE program [14].
The generalized algebraic eigenvalue problem (A-EB)F=λDF with respect to the pair of unknowns (λ, F), arising after the corresponding replacement of the scattering boundary problem in open channels at a fixed energy value E, is solved by LDL factorization of the symmetric matrix and back-substitution using the DECOMP and REDBAK programs, respectively [14]. As a test case, the program is applied to the calculation of the reaction matrix and corresponding radial wave functions for the 3D model of a hydrogen-like atom in a homogeneous magnetic field described in [9] on finite intervals of the radial variable ρ∈[ρmin, ρmax]. For this benchmark model, the required analytical expressions for the asymptotics of the potential matrix elements and first-derivative coupling terms, as well as the asymptotics of the radial solutions of the boundary problems for the coupled differential equations, were produced with the help of the MAPLE computer algebra system.
Restrictions: The computer memory requirements depend on: the number of differential equations; the number and order of finite elements; the total number of hyperradial points; and the number of eigensolutions required. Restrictions due to dimension sizes may be easily alleviated by altering PARAMETER statements (see Section 3 and [1] for details). The user must also supply subroutine POTCAL for evaluating potential matrix elements. The user should also supply subroutines ASYMEV (when solving the eigenvalue problem) or ASYMS0 and ASYMSC (when solving the scattering problem), which evaluate the asymptotics of the radial wave functions at the left and right boundary points in the case of a boundary condition of the third type for the above problems.
Running time: The running time depends critically upon: the number of differential equations; the number and order of finite elements; the total number of hyperradial points on the interval [ρmin, ρmax]; and the number of eigensolutions required. The test run which accompanies this paper took 2 s, without calculation of matrix potentials, on an Intel Pentium IV 2.4 GHz.
References:
[1] O. Chuluunbaatar, A.A. Gusev, A.G. Abrashkevich, A. Amaya-Tapia, M.S. Kaschiev, S.Y. Larsen, S.I. Vinitsky, Comput. Phys. Commun. 177 (2007) 649-675; http://cpc.cs.qub.ac.uk/summaries/ADZHv10.html.
[2] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, Cambridge, 1986.
[3] J. Macek, J. Phys. B 1 (1968) 831-843.
[4] U. Fano, Rep. Progr. Phys. 46 (1983) 97-165.
[5] C.D. Lin, Adv. Atom. Mol. Phys. 22 (1986) 77-142.
[6] A.G. Abrashkevich, D.G. Abrashkevich, M. Shapiro, Comput. Phys. Commun. 90 (1995) 311-339.
[7] M.G. Dimova, M.S. Kaschiev, S.I. Vinitsky, J. Phys. B 38 (2005) 2337-2352.
[8] O. Chuluunbaatar, A.A. Gusev, V.L. Derbov, M.S. Kaschiev, L.A. Melnikov, V.V. Serov, S.I. Vinitsky, J. Phys. A 40 (2007) 11485-11524.
[9] O. Chuluunbaatar, A.A. Gusev, V.P. Gerdt, V.A. Rostovtsev, S.I. Vinitsky, A.G. Abrashkevich, M.S. Kaschiev, V.V. Serov, Comput. Phys. Commun. 178 (2008) 301-330; http://cpc.cs.qub.ac.uk/summaries/AEAAv10.html.
[10] H.J. Assenbaum, K. Langanke, C. Rolfs, Z. Phys. A 327 (1987) 461-468.
[11] V. Melezhik, Nucl. Phys. A 550 (1992) 223-234.
[12] L. Bracci, G. Fiorentini, V.S. Melezhik, G. Mezzorani, P. Pasini, Phys. Lett. A 153 (1991) 456-460.
[13] A.G. Abrashkevich, D.G. Abrashkevich, M.S. Kaschiev, I.V. Puzynin, Comput. Phys. Commun. 85 (1995) 40-64.
[14] K.J. Bathe, Finite Element Procedures in Engineering Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1982.
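The algebraic core of the solution method, the generalized symmetric eigenvalue problem AF=EBF, can be reproduced in a few lines with SciPy on toy matrices (KANTBP itself uses subspace iteration via the SSPACE program in FORTRAN 77); the tridiagonal matrices below are generic stand-ins, not FEM matrices from the physics.

```python
import numpy as np
from scipy.linalg import eigh

# Toy banded stiffness/overlap matrices standing in for the FEM matrices A, B.
n = 6
A = np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1) \
                             + np.diag(np.full(n - 1, -1.0), -1)
B = (np.diag(np.full(n, 4.0)) + np.diag(np.full(n - 1, 1.0), 1)
                              + np.diag(np.full(n - 1, 1.0), -1)) / 6.0

# Generalized symmetric-definite eigenproblem A F = E B F
E, F = eigh(A, B)
print("lowest eigenvalues:", E[:3])
print("residual:", np.linalg.norm(A @ F[:, 0] - E[0] * (B @ F[:, 0])))
```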
An 81.6 μW FastICA processor for epileptic seizure detection.
Yang, Chia-Hsiang; Shih, Yi-Hsin; Chiueh, Herming
2015-02-01
To improve the performance of epileptic seizure detection, independent component analysis (ICA) is applied to multi-channel signals to separate artifacts from signals of interest. FastICA is an efficient algorithm for computing ICA. To reduce energy dissipation, eigenvalue decomposition (EVD) is utilized in the preprocessing stage to shorten the convergence time of the iterative calculation of ICA components. EVD is computed efficiently through an array of processing elements running in parallel. An area-efficient EVD architecture is realized by leveraging the approximate Jacobi algorithm, leading to a 77.2% area reduction. By choosing a proper memory element and a reduced wordlength, the power and area of the storage memory are reduced by 95.6% and 51.7%, respectively. The chip area is minimized through fixed-point implementation and architectural transformations. Given a latency constraint of 0.1 s, an 86.5% area reduction is achieved compared to the direct-mapped architecture. Fabricated in 90 nm CMOS, the core area of the chip is 0.40 mm². The FastICA processor, part of an integrated epileptic control SoC, dissipates 81.6 μW at 0.32 V. The computation delay for a frame of 256 samples across 8 channels is 84.2 ms. Compared to prior work, 0.5% power dissipation, 26.7% silicon area, and a 3.4× computation speedup are achieved. The performance of the chip was verified on a human dataset.
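The two algorithmic stages of the processor, EVD-based whitening followed by the FastICA fixed-point iteration, can be sketched in floating point as follows (the chip uses fixed-point arithmetic and an approximate Jacobi EVD); the mixing matrix and sources are synthetic.

```python
import numpy as np

def fastica_one_unit(X, max_iter=100, tol=1e-6):
    """Extract one independent component from whitened data X (dims x samples)
    using the classic FastICA fixed-point iteration with g = tanh."""
    rng = np.random.default_rng(1)
    w = rng.normal(size=X.shape[0]); w /= np.linalg.norm(w)
    for _ in range(max_iter):
        wx = w @ X
        w_new = (X * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx)**2).mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1) < tol:   # direction has converged
            return w_new
        w = w_new
    return w

# Synthetic two-channel mixture of independent sources
rng = np.random.default_rng(0)
S = np.vstack([np.sign(rng.normal(size=5000)),   # sub-Gaussian source
               rng.laplace(size=5000)])          # super-Gaussian source
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S

# EVD whitening: the preprocessing stage that shortens ICA convergence
X -= X.mean(axis=1, keepdims=True)
vals, vecs = np.linalg.eigh(np.cov(X))
Z = np.diag(vals**-0.5) @ vecs.T @ X

print("recovered unmixing direction:", fastica_one_unit(Z))
```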
NASA Astrophysics Data System (ADS)
Patanè, Domenico; Ferrari, Ferruccio; Giampiccolo, Elisabetta; Gresta, Stefano
A few automated data acquisition and processing systems operate on mainframes; some run on UNIX-based workstations and others on personal computers equipped with either DOS/Windows or UNIX-derived operating systems. Several large and complex software packages for automatic and interactive analysis of seismic data have been developed in recent years (mainly for UNIX-based systems). Some of these programs use a variety of artificial intelligence techniques. The first operational version of a new software package, named PC-Seism, for analyzing seismic data from a local network is presented in Patanè et al. (1999). This package, composed of three separate modules, provides an example of a new generation of visual object-oriented programs for interactive and automatic seismic data processing running on a personal computer. In this work, we mainly discuss the automatic procedures implemented in the ASDP (Automatic Seismic Data-Processing) module and its real-time application to data acquired by a seismic network running in eastern Sicily. This software uses a multi-algorithm approach and a new procedure, MSA (multi-station analysis), for signal detection, phase grouping and event identification and location. It is designed for efficient and accurate processing of local earthquake records provided by single-site and array stations. Results from ASDP processing of two different data sets recorded at Mt. Etna volcano by a regional network are analyzed to evaluate its performance. By comparing the ASDP pickings with those revised manually, the detection and subsequently the location capabilities of this software are assessed. The first data set is composed of 330 local earthquakes recorded in the Mt. Etna area during 1997 by the telemetered analog seismic network. The second data set comprises about 970 automatic locations of more than 2600 local events recorded at Mt. Etna during the last eruption (July 2001) by the present network. For the former data set, a comparison of the automatic results with the manual picks indicates that the ASDP module can accurately pick 80% of the P-waves and 65% of the S-waves. The on-line application to the latter data set shows that the automatic locations are affected by larger errors, due to the preliminary setting of the configuration parameters in the program. However, both automatic ASDP and manual hypocenter locations are comparable within the estimated error bounds. New improvements of the PC-Seism software for on-line analysis are also discussed.
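ASDP's multi-algorithm detector is specific to the package; as a generic illustration of the kind of single-station trigger such pickers build on, a classic STA/LTA energy ratio is sketched below with synthetic data. This is a stand-in for illustration, not the MSA procedure itself.

```python
import numpy as np

def sta_lta(trace, fs, sta_win=0.5, lta_win=10.0):
    """Short-term/long-term average ratio of signal energy, the classic
    single-station seismic trigger; both windows end at the same sample."""
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    sq = np.asarray(trace, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(sq)))
    sta = (csum[sta_n:] - csum[:-sta_n]) / sta_n
    lta = (csum[lta_n:] - csum[:-lta_n]) / lta_n
    n = len(lta)                       # lta is the shorter array
    return sta[-n:] / np.maximum(lta, 1e-12)

fs = 100.0
rng = np.random.default_rng(2)
trace = rng.normal(size=int(60 * fs))
trace[3000:3200] += 8 * rng.normal(size=200)    # synthetic arrival at 30 s
ratio = sta_lta(trace, fs)
trigger = np.argmax(ratio > 4.0) + int(10.0 * fs)   # offset back to samples
print("trigger near sample:", trigger, "(true onset at 3000)")
```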
WinHPC System | High-Performance Computing | NREL
NREL's WinHPC system is a computing cluster running the Microsoft Windows operating system. It allows users to run jobs requiring a Windows environment, such as ANSYS and MATLAB.
NASA Astrophysics Data System (ADS)
Ma, Peng; Pan, Yong; Jiang, Juncheng; Zhu, Shunguan
2017-10-01
A novel explosive, ethylenediamine triethylenediamine tetraperchlorate (ETT), was synthesized by a rapid "one-pot" method. The molecular and crystal structures of ETT were determined by X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy. The purity of the ETT was characterized by hydrogen nuclear magnetic resonance (1H NMR) spectra and elemental analysis (EA). The chemical and physical properties of the co-crystal ETT were further explored, including impact sensitivity, detonation velocity, and thermal behavior. The impact sensitivity of ETT (h50% = 9.50 cm) is much lower than that of its components, ethylenediamine diperchlorate (ED) (h50% = 5.60 cm) and triethylenediamine diperchlorate (TD) (h50% = 2.10 cm). The measured detonation velocity is 8956 m/s (ρ = 1.873 g/cm3), which is much higher than that of TNT (6900 m/s) or RDX (8350 m/s). The co-crystal ETT shows a unique thermal behavior, with a decomposition peak temperature at 365 °C. The band structure and density of states (DOS) of ETT were calculated with the CASTEP code. The first-principles tight-binding method within the generalized gradient approximation (GGA) was employed to study the electronic band structure as well as the DOS and Fermi energy. Hirshfeld surfaces were applied to analyze the intermolecular interactions in the co-crystal, and the results showed that the weak interactions were dominated by H⋯O hydrogen bonds. By analyzing bond lengths at different temperatures, the N-H covalent bond was identified as the trigger bond for ETT.
Gich, Jordi; Freixenet, Jordi; Garcia, Rafael; Vilanova, Joan Carles; Genís, David; Silva, Yolanda; Montalban, Xavier; Ramió-Torrentà, Lluís
2015-09-01
Cognitive rehabilitation is often delayed in multiple sclerosis (MS). The objective was to develop a free, MS-specific cognitive rehabilitation programme that can be used from the early stages of the disease and does not interfere with activities of daily living. MS-line!, a set of cognitive rehabilitation materials consisting of written, manipulative and computer-based materials with graded difficulty levels, was developed by a multidisciplinary team. Mathematical, problem-solving and word-based exercises were designed. Physical materials included spatial, coordination and reasoning games. Computer-based material included logic and reasoning, working memory and processing speed games. Cognitive rehabilitation exercises that are specific for MS patients have been successfully developed. © The Author(s), 2014.
Reactive Aggregate Model Protecting Against Real-Time Threats
2014-09-01
on the underlying functionality of three core components:
• MS SQL Server 2008 backend database.
• Microsoft IIS running on Windows Server 2008... services.
The capstone tested a Linux-based Apache web server with the following software implementations:
• MySQL as a Linux-based backend server for... malicious compromise.
1. Assumptions
• GINA could connect to a backend MS SQL database through proper configuration of DotNetNuke.
• GINA had access
Viidanoja, Jyrki
2017-01-13
A new liquid chromatography-electrospray ionization-tandem mass spectrometry (LC-ESI-MS/MS) method was developed for the determination of more than 20 C1-C6 alkyl- and alkanolamines in aqueous matrices. The method employs Hydrophilic Interaction Liquid Chromatography with Multiple Reaction Monitoring (HILIC-MRM) on a ZIC-pHILIC column and four stable-isotope-labeled amines as internal standards for signal normalization and quantification of the amines. The method was validated using a refinery process water sample obtained from a cooling cycle of crude oil distillation. The average within-run precision, between-run precision and accuracy were generally within 2-10%, 1-9% and 80-120%, respectively, depending on the analyte and concentration level. Selected aqueous process samples were analyzed with the method. Copyright © 2016 Elsevier B.V. All rights reserved.
Lee, Byeong Ill; Park, Min-Ho; Heo, Soon Chul; Park, Yuri; Shin, Seok-Ho; Byeon, Jin-Ju; Kim, Jae Ho; Shin, Young G
2018-03-01
A liquid chromatographic-electrospray ionization-time-of-flight/mass spectrometric (LC-ESI-TOF/MS) method was developed and applied for the determination of the WKYMVm peptide in rat plasma to support preclinical pharmacokinetic studies. The method consisted of micro-elution solid-phase extraction (SPE) for sample preparation and LC-ESI-TOF/MS in the positive ion mode for analysis. Phenanthroline (10 mg/mL) was added to rat blood immediately for plasma preparation, followed by addition of a trace amount of 2 M hydrogen chloride to the plasma before SPE to stabilize the WKYMVm peptide. Sample preparation using micro-elution SPE was then performed with verapamil as an internal standard. A quadratic regression (weighted 1/concentration²), with the equation y = ax² + bx + c, was used to fit calibration curves over the concentration range of 3.02-2200 ng/mL for WKYMVm peptide. The quantification run met the acceptance criteria of ±25% accuracy and precision values. For quality control samples at 15, 165 and 1820 ng/mL from the quantification experiment, the within-run and between-run accuracy ranged from 92.5 to 123.4% of the nominal values, with precision values ≤15.1% for WKYMVm peptide. This novel LC-ESI-TOF/MS method was successfully applied to evaluate the pharmacokinetics of WKYMVm peptide in rat plasma. Copyright © 2017 John Wiley & Sons, Ltd.
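The weighted quadratic calibration described above can be reproduced with NumPy's weighted least squares; the standards, response model, and QC level below are synthetic stand-ins for the study's data.

```python
import numpy as np

# Synthetic calibration standards spanning the reported range (ng/mL)
conc = np.array([3.02, 10, 30, 100, 300, 1000, 2200], dtype=float)
resp = 2e-5 * conc**2 + 0.15 * conc + 0.8           # toy detector response
resp *= 1 + np.random.default_rng(3).normal(scale=0.02, size=conc.size)

# Weighted quadratic fit y = a x^2 + b x + c with weights 1/x^2.
# np.polyfit's `w` multiplies the unsquared residuals, so pass 1/x to get
# an effective 1/x^2 weighting of the squared residuals.
a, b, c = np.polyfit(conc, resp, deg=2, w=1.0 / conc)
print(f"y = {a:.3e} x^2 + {b:.4f} x + {c:.3f}")

# Back-calculate a QC sample's concentration from its measured response
y_qc = 2e-5 * 165**2 + 0.15 * 165 + 0.8
roots = np.roots([a, b, c - y_qc])
print("back-calculated conc (ng/mL):", roots[roots > 0].real)
```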
Sherwood, Carly A; Eastham, Ashley; Lee, Lik Wee; Risler, Jenni; Mirzaei, Hamid; Falkner, Jayson A; Martin, Daniel B
2009-07-01
Multiple reaction monitoring (MRM) is a highly sensitive method of targeted mass spectrometry (MS) that can be used to selectively detect and quantify peptides based on the screening of specified precursor peptide-to-fragment ion transitions. MRM-MS sensitivity depends critically on the tuning of instrument parameters, such as collision energy and cone voltage, for the generation of maximal product ion signal. Although generalized equations and values exist for such instrument parameters, there is no clear indication that optimal signal can be reliably produced for all types of MRM transitions using such an algorithmic approach. To address this issue, we have devised a workflow functional on both Waters Quattro Premier and ABI 4000 QTRAP triple quadrupole instruments that allows rapid determination of the optimal value of any programmable instrument parameter for each MRM transition. Here, we demonstrate the strategy for the optimizations of collision energy and cone voltage, but the method could be applied to other instrument parameters, such as declustering potential, as well. The workflow makes use of the incremental adjustment of the precursor and product m/z values at the hundredth decimal place to create a series of MRM targets at different collision energies that can be cycled through in rapid succession within a single run, avoiding any run-to-run variability in execution or comparison. Results are easily visualized and quantified using the MRM software package Mr. M to determine the optimal instrument parameters for each transition.
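A sketch of the transition-list trick described above: each collision-energy step is encoded as an m/z offset at the hundredth decimal place, so the instrument cycles through what it treats as distinct MRM targets within a single run. The precursor/product masses and CE values are illustrative, not taken from the paper.

```python
# Build a CE-ramp transition list by nudging m/z at the hundredth decimal.
def ce_ramp_transitions(precursor_mz, product_mz, ce_center, ce_step=2, n_steps=5):
    rows = []
    for i in range(n_steps):
        offset = i / 100.0              # 0.00, 0.01, 0.02, ... m/z
        ce = ce_center + (i - n_steps // 2) * ce_step
        rows.append((round(precursor_mz + offset, 2),
                     round(product_mz + offset, 2), ce))
    return rows

# Hypothetical peptide transition centered at CE = 24 eV
for q1, q3, ce in ce_ramp_transitions(653.36, 904.46, ce_center=24):
    print(f"Q1={q1:.2f}  Q3={q3:.2f}  CE={ce} eV")
```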
Beyond the proteome: Mass Spectrometry Special Interest Group (MS-SIG) at ISMB/ECCB 2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryu, Soyoung; Payne, Samuel H.; Schaab, Christoph
2014-07-02
Mass spectrometry special interest group (MS-SIG) aims to bring together experts from the global research community to discuss highlights and challenges in the field of mass spectrometry (MS)-based proteomics and computational biology. The rapid technological developments in MS-based proteomics have enabled the generation of a large amount of meaningful information on hundreds to thousands of proteins simultaneously from a biological sample; however, the complexity of the MS data requires sophisticated computational algorithms and software for data analysis and interpretation. This year's MS-SIG meeting theme was 'Beyond the Proteome', with major focuses on improving protein identification/quantification and using proteomics data to solve interesting problems in systems biology and clinical research.
Analyzing Spacecraft Telecommunication Systems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric
2004-01-01
Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
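MMTAT's own models and API calls are not described here; the flavor of a parameterized link analysis can be conveyed with a generic link-budget calculation built from standard formulas, with all parameter values hypothetical.

```python
import math

def link_margin(p_tx_dbw, g_tx_dbi, g_rx_dbi, freq_hz, range_m,
                data_rate_bps, sys_noise_temp_k, ebn0_req_db, misc_loss_db=2.0):
    """Generic telecom link budget: achieved Eb/N0 minus the required Eb/N0."""
    c = 299_792_458.0
    fspl_db = 20 * math.log10(4 * math.pi * range_m * freq_hz / c)
    k_db = -228.6                                 # Boltzmann constant, dBW/K/Hz
    eirp_dbw = p_tx_dbw + g_tx_dbi
    rx_pwr_dbw = eirp_dbw - fspl_db + g_rx_dbi - misc_loss_db
    ebn0_db = rx_pwr_dbw - k_db - 10 * math.log10(sys_noise_temp_k) \
                         - 10 * math.log10(data_rate_bps)
    return ebn0_db - ebn0_req_db

# Hypothetical X-band downlink at Mars-like range
print(f"margin = {link_margin(10, 25, 68, 8.4e9, 2.0e11, 2e3, 35, 2.5):.1f} dB")
```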
NASA Astrophysics Data System (ADS)
Sridevi, B.; Supriya, T. S.; Rajaram, S.
2013-01-01
The current generation of wireless networks has been designed predominantly to support voice and, more recently, data traffic. WiMAX is currently one of the hottest technologies in wireless. A main goal of mobile technologies is to provide seamless, cost-effective mobility, but this is hindered by authentication cost and handover delay, since on each handoff the Mobile Station (MS) must repeat all steps of authentication. Pre-authentication is used to reduce the handover delay and increase the speed of the intra-ASN handover. The proposed pre-authentication method is intended to reduce the authentication delay by having the MS pre-authenticated by a central authority called the Pre-Authentication Authority (PAA). The MS requests a Pre-Authentication Certificate (PAC) from the PAA before performing handoff. The PAA verifies the identity of the MS and provides the PAC to the MS and also to the neighboring target Base Stations (tBSs). An MS holding a time-bound PAC can skip the authentication process when recognized by the target BS during handoff. The scheme also prevents denial-of-service (DoS) and replay attacks and avoids wasting resources on unnecessary key exchanges. The proposed work is simulated with an NS2 model and with MATLAB.
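A minimal sketch of the time-bound PAC idea follows: the PAA signs an (MS identity, expiry) pair, and a target BS verifies it without re-running full authentication, rejecting forged or stale certificates. An HMAC over a pre-shared key is used here purely as a stand-in for whatever cryptographic scheme the paper assumes; all names are hypothetical.

```python
import hmac, hashlib, time

PAA_KEY = b"shared-secret-between-PAA-and-BSs"   # hypothetical provisioning

def issue_pac(ms_id: str, lifetime_s: int = 30):
    """PAA issues a time-bound Pre-Authentication Certificate (PAC)."""
    expiry = int(time.time()) + lifetime_s
    payload = f"{ms_id}|{expiry}".encode()
    tag = hmac.new(PAA_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_pac(payload: bytes, tag: str) -> bool:
    """Target BS skips full authentication if the PAC checks out and is fresh."""
    expected = hmac.new(PAA_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False                       # forged PAC -> full authentication
    _, expiry = payload.decode().rsplit("|", 1)
    return int(expiry) >= time.time()      # stale PAC rejected (anti-replay)

pac, tag = issue_pac("MS-42")
print("handover fast-path:", verify_pac(pac, tag))
```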
CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Donnell, B.
1994-01-01
CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm, including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs, including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases.
COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform-specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or Microsoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5, or higher, and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a .25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request.
The CLIPS 6.0 documentation includes a User's Guide and a three-volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in Microsoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and Microsoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.
CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (IBM PC VERSION)
NASA Technical Reports Server (NTRS)
Donnell, B.
1994-01-01
CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (MACINTOSH VERSION)
NASA Technical Reports Server (NTRS)
Riley, G.
1994-01-01
CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs including the ability to generate stand alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous version of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. 
COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform-specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or Microsoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5 or higher and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a .25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request.
The CLIPS 6.0 documentation includes a User's Guide and a three-volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in Microsoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and Microsoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.
CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (DEC VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Donnell, B.
1994-01-01
Real-time stylistic prediction for whole-body human motions.
Matsubara, Takamitsu; Hyon, Sang-Ho; Morimoto, Jun
2012-01-01
The ability to predict human motion is crucial in several contexts such as human tracking by computer vision and the synthesis of human-like computer graphics. Previous work has focused on off-line processes with well-segmented data; however, many applications such as robotics require real-time control with efficient computation. In this paper, we propose a novel approach called real-time stylistic prediction for whole-body human motions to satisfy these requirements. This approach uses a novel generative model to represent a whole-body human motion including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of a low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of motion styles in humans. A real-time adaptation algorithm was derived to estimate both state variables and style parameter of the model from non-stationary unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, it takes less than 15 ms for both adaptation and prediction at each observation. Our real-time stylistic prediction was evaluated for human walking, running, and jumping behaviors. Copyright © 2011 Elsevier Ltd. All rights reserved.
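The paper's generative model itself is not reproduced here, but the prediction step it enables can be sketched: once a phase variable and a style parameter have been estimated from observations, future poses are read out by integrating the low-dimensional phase dynamics and decoding each phase through a style-conditioned observation model. A minimal numpy illustration (the function name, the linear style blend, and the Fourier decoding are assumptions of this sketch, not the authors' model):

    import numpy as np

    def predict_motion(phase, omega, style_w, style_bases, dt, n_steps):
        """Roll the phase dynamics forward and decode joint angles.

        phase       -- current estimated phase (radians)
        omega       -- estimated phase velocity
        style_w     -- weights over style bases (estimated online)
        style_bases -- array (n_styles, n_dofs, n_harmonics, 2) of
                       cosine/sine coefficients, one set per motion style
        """
        coeffs = np.tensordot(style_w, style_bases, axes=1)  # blend styles
        n_dofs, n_h, _ = coeffs.shape
        poses = np.empty((n_steps, n_dofs))
        for k in range(n_steps):
            phase = (phase + omega * dt) % (2 * np.pi)       # phase dynamics
            h = np.arange(1, n_h + 1) * phase
            poses[k] = coeffs[:, :, 0] @ np.cos(h) + coeffs[:, :, 1] @ np.sin(h)
        return poses

Each prediction step is a handful of small matrix operations, which is consistent with the millisecond-scale per-observation cost reported above.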
Guitton, Yann; Tremblay-Franco, Marie; Le Corguillé, Gildas; Martin, Jean-François; Pétéra, Mélanie; Roger-Mele, Pierrick; Delabrière, Alexis; Goulitquer, Sophie; Monsoor, Misharl; Duperier, Christophe; Canlet, Cécile; Servien, Rémi; Tardivel, Patrick; Caron, Christophe; Giacomoni, Franck; Thévenot, Etienne A
2017-12-01
Metabolomics is a key approach in modern functional genomics and systems biology. Due to the complexity of metabolomics data, the variety of experimental designs, and the multiplicity of bioinformatics tools, providing experimenters with a simple and efficient resource to conduct comprehensive and rigorous analysis of their data is of utmost importance. In 2014, we launched the Workflow4Metabolomics (W4M; http://workflow4metabolomics.org) online infrastructure for metabolomics built on the Galaxy environment, which offers user-friendly features to build and run data analysis workflows including preprocessing, statistical analysis, and annotation steps. Here we present the new W4M 3.0 release, which contains twice as many tools as the first version, and provides two features which are, to our knowledge, unique among online resources. First, data from the four major metabolomics technologies (i.e., LC-MS, FIA-MS, GC-MS, and NMR) can be analyzed on a single platform. By using three studies in human physiology, algal evolution, and animal toxicology, we demonstrate how the 40 available tools can be easily combined to address biological issues. Second, the full analysis (including the workflow, the parameter values, the input data, and the output results) can be referenced with a permanent digital object identifier (DOI). Publication of data analyses is of major importance for robust and reproducible science. Furthermore, the publicly shared workflows are of high value for e-learning and training. The Workflow4Metabolomics 3.0 e-infrastructure thus not only offers a unique online environment for analysis of data from the main metabolomics technologies, but is also the first reference repository for metabolomics workflows. Copyright © 2017 Elsevier Ltd. All rights reserved.
Yu, Kate; Di, Li; Kerns, Edward; Li, Susan Q; Alden, Peter; Plumb, Robert S
2007-01-01
We report in this paper an ultra-performance liquid chromatography/tandem mass spectrometric (UPLC(R)/MS/MS) method utilizing an ESI-APCI multimode ionization source to quantify structurally diverse analytes. Eight commercial drugs were used as test compounds. Each LC injection was completed in 1 min using a UPLC system coupled with MS/MS multiple reaction monitoring (MRM) detection. Results from three separate sets of experiments are reported. In the first set of experiments, the eight test compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes (ESI+, ESI-, APCI-, and APCI+) during an LC run. Approximately 8-10 data points were collected across each LC peak. This was insufficient for a quantitative analysis. In the second set of experiments, four compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes during an LC run. Approximately 15 data points were obtained for each LC peak. Quantification results were obtained with a limit of detection (LOD) as low as 0.01 ng/mL. For the third set of experiments, the eight test compounds were analyzed as a batch. During each LC injection, a single compound was analyzed. The mass spectrometer was detecting at a particular ionization mode during each LC injection. More than 20 data points were obtained for each LC peak. Quantification results were also obtained. This single-compound analytical method was applied to a microsomal stability test. Compared with a typical HPLC method currently used for the microsomal stability test, the injection-to-injection cycle time was reduced to 1.5 min (UPLC method) from 3.5 min (HPLC method). The microsome stability results were comparable with those obtained by traditional HPLC/MS/MS.
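The sampling-rate arithmetic behind these observations is simple: the number of data points across a chromatographic peak is the peak width divided by the full MS duty cycle, and that cycle grows with both the number of ionization modes polled and the number of MRM transitions monitored. A back-of-the-envelope sketch (the peak width and dwell time below are illustrative assumptions, not values from the paper):

    # data points across an LC peak ~= peak width / full MS duty cycle
    peak_width = 2.4   # s, assumed base width of a 1-min UPLC peak
    dwell = 0.01       # s per MRM transition, incl. switching (assumed)

    def points_per_peak(n_modes, n_transitions):
        cycle = n_modes * n_transitions * dwell
        return peak_width / cycle

    print(round(points_per_peak(4, 8)))   # 8 compounds, 4 modes -> few points
    print(round(points_per_peak(4, 4)))   # 4 compounds, 4 modes -> more points

With these assumed constants the sketch reproduces the reported trend: halving the number of co-monitored compounds roughly doubles the points per peak, and dropping to a single compound in a single mode gives ample sampling for quantification.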
Park, Min-Ho; Lee, Yun Young; Cho, Kyung Hee; La, Sookie; Lee, Hee Joo; Yim, Dong-Seok; Ban, Sooho; Park, Moon-Young; Kim, Yong-Chul; Kim, Yoon-Gyoon; Shin, Young G
2016-03-01
A liquid chromatography-triple quadrupole mass spectrometric (LC-MS/MS) method was developed and validated for the determination of 5-nitro-5'-hydroxy-indirubin-3'-oxime (AGM-130) in human plasma to support a microdose clinical trial. The method consisted of liquid-liquid extraction for sample preparation followed by LC-MS/MS analysis in the positive ion mode using a TurboIonSpray(TM) source. d3-AGM-130 was used as the internal standard. A linear regression (weighted 1/concentration) was used to fit calibration curves over the concentration range of 10-2000 pg/mL for AGM-130. There were no endogenous interference components in the blank human plasma tested. The accuracy at the lower limit of quantitation was 96.6%, with a precision (coefficient of variation, CV) of 4.4%. For quality control samples at 30, 160 and 1600 pg/mL, the between-run CV was ≤5.0%. Between-run accuracy ranged from 98.1 to 101.0%. AGM-130 was stable in 50% acetonitrile for 168 h at 4°C and 6 h at room temperature. AGM-130 was also stable in human plasma at room temperature for 6 h and through three freeze-thaw cycles. The variability of selected samples for the incurred sample reanalysis was ≤12.7% when compared with the original sample concentrations. This validated LC-MS/MS method for the determination of AGM-130 was used to support a phase 0 microdose clinical trial. Copyright © 2015 John Wiley & Sons, Ltd.
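A 1/x-weighted linear calibration of the kind described can be sketched in a few lines of numpy. Note that numpy's polyfit applies weights to the residuals, so a 1/concentration weight on the squared residuals corresponds to w = 1/sqrt(x). The standard responses below are invented for illustration:

    import numpy as np

    conc = np.array([10, 30, 100, 300, 1000, 2000], float)   # pg/mL standards
    resp = np.array([0.021, 0.060, 0.205, 0.62, 2.04, 4.1])  # peak-area ratios

    # weighted 1/x regression: polyfit minimizes sum(w**2 * residual**2)
    slope, intercept = np.polyfit(conc, resp, 1, w=1.0 / np.sqrt(conc))

    def back_calc(peak_area_ratio):
        """Back-calculate concentration from a measured response."""
        return (peak_area_ratio - intercept) / slope

    qc = back_calc(0.33)   # e.g., a mid-level quality-control sample
    print(f"{qc:.1f} pg/mL")

The 1/x weighting keeps the low end of a wide calibration range (here spanning more than two orders of magnitude) from being dominated by the large absolute residuals of the high standards.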
Danese, Elisa; Tarperi, Cantor; Salvagno, Gian Luca; Guzzo, Alessandra; Sanchis-Gomar, Fabian; Festa, Luca; Bertinato, Luciano; Montagnana, Martina; Schena, Federico; Lippi, Giuseppe
2018-03-20
The sympatho-adrenergic activation during exercise is implicated in many cardiovascular, respiratory, and metabolic adaptations, which have been thought to partially explain the different levels of performance observed between trained and untrained subjects. To date, no evidence exists about the association between competition performance and markers of "acute stress response". We designed this study to investigate: (i) the acute sympatho-adrenergic activation during endurance exercise in recreational runners, by measuring plasma levels of free metanephrine (MN) and normetanephrine (NMN) before and after a half-marathon run; and (ii) the association between metanephrine levels and running time. 26 amateur runners (15 males, 11 females) aged 30 to 63 years were enrolled. The quantification of MN and NMN was performed by LC-MS/MS. Anthropometric, ergonomic, and routine laboratory data were recorded. Statistical analyses included the paired t-test and univariate and multivariate regressions. The post-run values of MN and NMN displayed nearly 3.5- and 7-fold increases, respectively, compared to the baseline values (p < 0.0001 for both). NMN pre-run values and pre/post-run delta values showed a significant direct and inverse association, respectively (p = 0.021 and p = 0.033), with running performance. No correlations were found for MN values. NMN is a reliable marker of sympatho-adrenergic activation by exercise and can predict endurance performance in the individual athlete. Adaptation phenomena occurring not only in the adrenal medulla might represent the biological mechanism underlying this association. Further studies on sympatho-adrenergic activation, competition performance, and training status should contemplate the measurement of these metabolites instead of their unstable precursors.
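The core statistics reported here, pre/post fold changes tested with a paired t-test plus a univariate regression against running time, can be reproduced on any such dataset with scipy. The arrays below are placeholders, not the study's data:

    import numpy as np
    from scipy import stats

    # plasma normetanephrine, pg/mL, one value per runner (illustrative)
    nmn_pre = np.array([45.0, 60.0, 52.0, 71.0, 48.0, 66.0])
    nmn_post = np.array([310.0, 420.0, 365.0, 500.0, 330.0, 455.0])

    fold = nmn_post.mean() / nmn_pre.mean()      # fold increase post vs. pre
    t, p = stats.ttest_rel(nmn_post, nmn_pre)    # paired t-test
    print(f"fold increase {fold:.1f}, p = {p:.2g}")

    # univariate association between baseline NMN and running time
    run_time = np.array([95.0, 110.0, 102.0, 121.0, 98.0, 116.0])  # minutes
    slope, intercept, r, p_reg, se = stats.linregress(nmn_pre, run_time)
    print(f"slope {slope:.2f} min per pg/mL, p = {p_reg:.2g}")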
2011-08-01
[Figure residue from the source document: Figure 4, "Architectural diagram of running Blender on Amazon EC2 through Nimbis"; a second figure showed classification of streaming data, with example input images (top left) and all digit prototypes (cluster centers) found, sized proportionally to frequency.]
PLNoise: a package for exact numerical simulation of power-law noises
NASA Astrophysics Data System (ADS)
Milotti, Edoardo
2006-08-01
Many simulations of stochastic processes require colored noises: here I describe a small program library that generates samples with a tunable power-law spectral density: the algorithm can be modified to generate more general colored noises, and is exact for all time steps, even when they are unevenly spaced (as may often happen in the case of astronomical data, see e.g. [N.R. Lomb, Astrophys. Space Sci. 39 (1976) 447]). The method is exact in the sense that it reproduces a process that is theoretically guaranteed to produce a range-limited power-law spectrum 1/f^β with -1<β⩽1. The algorithm has a well-behaved computational complexity, it produces a nearly perfect Gaussian noise, and its computational efficiency depends on the required degree of noise Gaussianity.
Program summary
Title of program: PLNoise
Catalogue identifier: ADXV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Programming language used: ANSI C
Computer: Any computer with an ANSI C compiler; the package has been tested with gcc version 3.2.3 on Red Hat Linux 3.2.3-52 and gcc versions 4.0.0 and 4.0.1 on Apple Mac OS X 10.4
Operating system: All operating systems capable of running an ANSI C compiler
No. of lines in distributed program, including test data, etc.: 6238
No. of bytes in distributed program, including test data, etc.: 52 387
Distribution format: tar.gz
RAM: The code of the test program is very compact (about 50 Kbytes), but the program works with list management and allocates memory dynamically; in a typical run (like the one discussed in Section 4 in the long write-up) with average list length 2·10, the RAM taken by the list is 200 Kbytes.
External routines: The package needs external routines to generate uniform and exponential deviates. The implementation described here uses the random number generation library ranlib, freely available from Netlib [B.W. Brown, J. Lovato, K. Russell, ranlib, available from Netlib, http://www.netlib.org/random/index.html, select the C version ranlib.c], but it has also been successfully tested with the random number routines in Numerical Recipes [W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, second ed., Cambridge Univ. Press, Cambridge, 1992, pp. 274-290]. Notice that ranlib requires a pair of routines from the linear algebra package LINPACK, and that the distribution of ranlib includes the C source of these routines in case LINPACK is not installed on the target machine.
Nature of problem: Exact generation of different types of Gaussian colored noise.
Solution method: Random superposition of relaxation processes [E. Milotti, Phys. Rev. E 72 (2005) 056701].
Unusual features: The algorithm is theoretically guaranteed to be exact, and unlike all other existing generators it can generate samples with uneven spacing.
Additional comments: The program requires an initialization step; for some parameter sets this may become rather heavy.
Running time: Running time varies widely with different input parameters; however, in a test run like the one in Section 4 of this work, the generation routine took on average about 7 ms per sample.
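The named solution method, random superposition of relaxation processes, can be sketched compactly: draw relaxation rates with density p(λ) ∝ λ^(-β), propagate each Ornstein-Uhlenbeck component exactly across each (possibly uneven) time step, and sum. The following is a minimal numpy illustration of the idea, not the library's C implementation; all parameter values are assumptions:

    import numpy as np

    def powerlaw_noise(times, beta=0.8, n_proc=200,
                       lam_min=1e-3, lam_max=1e3, seed=0):
        """Approximate 1/f**beta noise at arbitrary sample times as a sum of
        exactly propagated Ornstein-Uhlenbeck processes whose relaxation
        rates are drawn with density p(lam) ~ lam**(-beta)."""
        rng = np.random.default_rng(seed)
        u = rng.random(n_proc)
        if abs(beta - 1.0) > 1e-9:            # inverse-CDF sampling of rates
            e = 1.0 - beta
            lam = (lam_min**e + u * (lam_max**e - lam_min**e)) ** (1.0 / e)
        else:                                 # beta = 1: log-uniform rates
            lam = lam_min * (lam_max / lam_min) ** u
        x = rng.standard_normal(n_proc)       # each component starts stationary
        out = np.empty(len(times))
        out[0] = x.sum()
        for k in range(1, len(times)):
            a = np.exp(-lam * (times[k] - times[k - 1]))   # exact propagator
            x = a * x + np.sqrt(1.0 - a * a) * rng.standard_normal(n_proc)
            out[k] = x.sum()
        return out / np.sqrt(n_proc)

    t = np.sort(np.random.default_rng(1).uniform(0.0, 1e3, 4096))  # uneven
    y = powerlaw_noise(t, beta=0.8)

Because the sum of many independent components tends to a Gaussian, increasing n_proc trades run time for Gaussianity, mirroring the efficiency/Gaussianity trade-off noted in the summary above.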
NASA Astrophysics Data System (ADS)
Chuluunbaatar, O.; Gusev, A. A.; Vinitsky, S. I.; Abrashkevich, A. G.
2009-08-01
A FORTRAN 77 program is presented for calculating with the given accuracy eigenvalues, eigenfunctions and their first derivatives with respect to the parameter of the parametric self-adjoint Sturm-Liouville problem with parametric third-type boundary conditions on a finite interval. The program also calculates potential matrix elements - integrals of the eigenfunctions multiplied by their first derivatives with respect to the parameter. Eigenvalues and matrix elements computed by the ODPEVP program can be used for solving bound state and multi-channel scattering problems for a system of coupled second-order ordinary differential equations with the help of the KANTBP programs [O. Chuluunbaatar, A.A. Gusev, A.G. Abrashkevich, A. Amaya-Tapia, M.S. Kaschiev, S.Y. Larsen, S.I. Vinitsky, Comput. Phys. Commun. 177 (2007) 649-675; O. Chuluunbaatar, A.A. Gusev, S.I. Vinitsky, A.G. Abrashkevich, Comput. Phys. Commun. 179 (2008) 685-693]. As a test case, the program is applied to the calculation of the potential matrix elements for an integrable 2D model of three identical particles on a line with pair zero-range potentials, a 3D model of a hydrogen atom in a homogeneous magnetic field, and a hydrogen atom on a three-dimensional sphere.
Program summary
Program title: ODPEVP
Catalogue identifier: AEDV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3001
No. of bytes in distributed program, including test data, etc.: 24 195
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: Intel Xeon EM64T, Alpha 21264A, AMD Athlon MP, Pentium IV Xeon, Opteron 248, Intel Pentium IV
Operating system: Linux, Unix AIX 5.3, SunOS 5.8, Solaris, Windows XP
RAM: Depends on the number and order of finite elements, the number of points, and the number of eigenfunctions required. The test run requires 4 MB
Classification: 2.1, 2.4
External routines: GAULEG [3]
Nature of problem: The three-dimensional boundary problem for the elliptic partial differential equation with axial symmetry, similar to the Schrödinger equation with Coulomb and transverse oscillator potentials, is reduced to a two-dimensional one. The latter finds wide application in modeling photoionization and recombination of oppositely charged particles (positrons, antiprotons) in a magneto-optical trap [4], optical absorption in quantum wells [5], and channeling of like-charged particles in thin doped films [6,7] or of neutral atoms and molecules in artificial waveguides or on surfaces [8,9]. In the adiabatic approach [10], known in mathematics as the Kantorovich method [11], the solution of the two-dimensional elliptic partial differential equation is expanded over basis functions with respect to the fast variable (for example, the angular variable) that depend on the slow variable (for example, the radial coordinate) as a parameter. Averaging the problem over such a basis leads to a system of second-order ordinary differential equations which contain potential matrix elements and first-derivative coupling terms (see, e.g., [12,13,14]).
The purpose of this paper is to present a finite element method procedure based on the use of high-order accuracy approximations for calculating eigenvalues, eigenfunctions and their first derivatives with respect to the parameter of the parametric self-adjoint Sturm-Liouville problem with parametric third-type boundary conditions on a finite interval. The program developed calculates the potential matrix elements - integrals of the eigenfunctions multiplied by their derivatives with respect to the parameter. These matrix elements can be used for solving bound state and multi-channel scattering problems for a system of coupled second-order ordinary differential equations with the help of the KANTBP programs [1,2].
Solution method: The parametric self-adjoint Sturm-Liouville problem with parametric third-type boundary conditions is solved by the finite element method using high-order accuracy approximations [15]. The generalized algebraic eigenvalue problem AF=EBF with respect to the pair of unknowns (E,F), arising after replacement of the differential problem by the finite-element approximation, is solved by the subspace iteration method using the SSPACE program [16]. First derivatives of the eigenfunctions with respect to the parameter, which enter the potential matrix elements of the coupled system of equations, are obtained by solving inhomogeneous algebraic equations. As a test case, the program is applied to the calculation of the potential matrix elements for an integrable 2D model of three identical particles on a line with pair zero-range potentials described in [1,17,18], a 3D model of a hydrogen atom in a homogeneous magnetic field described in [14,19], and a hydrogen atom on a three-dimensional sphere [20].
Restrictions: The computer memory requirements depend on the number and order of finite elements, the number of points, and the number of eigenfunctions required. Restrictions due to dimension sizes may be easily alleviated by altering PARAMETER statements (see sections below and listing for details). The user must also supply DOUBLE PRECISION functions POTCCL and POTCC1 for evaluating the potential function U(ρ,z) of Eq. (1) and its first derivative with respect to the parameter ρ. The user should supply DOUBLE PRECISION functions F1FUNC and F2FUNC that evaluate the functions f1(z) and f2(z) of Eq. (1). The user must also supply subroutine BOUNCF for evaluating the parametric third-type boundary conditions.
Running time: The running time depends critically upon the number and order of finite elements, the number of points on the interval [z_min, z_max], and the number of eigenfunctions required. The test run which accompanies this paper took 2 s, including calculation of the matrix potentials, on an Intel Pentium IV 2.4 GHz.
References:
[1] O. Chuluunbaatar, A.A. Gusev, A.G. Abrashkevich, A. Amaya-Tapia, M.S. Kaschiev, S.Y. Larsen, S.I. Vinitsky, Comput. Phys. Comm. 177 (2007) 649-675.
[2] O. Chuluunbaatar, A.A. Gusev, S.I. Vinitsky, A.G. Abrashkevich, Comput. Phys. Comm. 179 (2008) 685-693.
[3] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, Cambridge, 1986.
[4] O. Chuluunbaatar, A.A. Gusev, S.I. Vinitsky, V.L. Derbov, L.A. Melnikov, V.V. Serov, Phys. Rev. A 77 (2008) 034702-1-4.
[5] E.M. Kazaryan, A.A. Kostanyan, H.A. Sarkisyan, Physica E 28 (2005) 423-430.
[6] Yu.N. Demkov, J.D. Meyer, Eur. Phys. J. B 42 (2004) 361-365.
[7] P.M. Krassovitskiy, N.Zh. Takibaev, Bull. Russian Acad. Sci. Phys. 70 (2006) 815-818.
[8] V.S. Melezhik, J.I. Kim, P. Schmelcher, Phys. Rev. A 76 (2007) 053611-1-15.
[9] F.M. Pen'kov, Phys. Rev. A 62 (2000) 044701-1-4.
[10] M. Born, X. Huang, Dynamical Theory of Crystal Lattices, The Clarendon Press, Oxford, England, 1954.
[11] L.V. Kantorovich, V.I. Krylov, Approximate Methods of Higher Analysis, Wiley, New York, 1964.
[12] U. Fano, Colloq. Int. C.N.R.S. 273 (1977) 127; A.F. Starace, G.L. Webster, Phys. Rev. A 19 (1979) 1629-1640.
[13] C.V. Clark, K.T. Lu, A.F. Starace, in: H.G. Beyer, H. Kleinpoppen (Eds.), Progress in Atomic Spectroscopy, Part C, Plenum, New York, 1984, pp. 247-320.
[14] O. Chuluunbaatar, A.A. Gusev, V.L. Derbov, M.S. Kaschiev, L.A. Melnikov, V.V. Serov, S.I. Vinitsky, J. Phys. A 40 (2007) 11485-11524.
[15] A.G. Abrashkevich, D.G. Abrashkevich, M.S. Kaschiev, I.V. Puzynin, Comput. Phys. Comm. 85 (1995) 40-64.
[16] K.J. Bathe, Finite Element Procedures in Engineering Analysis, Prentice-Hall, Englewood Cliffs, New York, 1982.
[17] O. Chuluunbaatar, A.A. Gusev, M.S. Kaschiev, V.A. Kaschieva, A. Amaya-Tapia, S.Y. Larsen, S.I. Vinitsky, J. Phys. B 39 (2006) 243-269.
[18] Yu.A. Kuperin, P.B. Kurasov, Yu.B. Melnikov, S.P. Merkuriev, Ann. Phys. 205 (1991) 330-361.
[19] O. Chuluunbaatar, A.A. Gusev, V.P. Gerdt, V.A. Rostovtsev, S.I. Vinitsky, A.G. Abrashkevich, M.S. Kaschiev, V.V. Serov, Comput. Phys. Comm. 178 (2008) 301-330.
[20] A.G. Abrashkevich, M.S. Kaschiev, S.I. Vinitsky, J. Comp. Phys. 163 (2000) 328-348.
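The central numerical object here, the generalized algebraic eigenvalue problem AF = EBF produced by the finite element discretization, can be illustrated with a toy Sturm-Liouville problem. The sketch below assembles a linear-element stiffness matrix A (with a lumped potential term) and a consistent mass matrix B for -u'' + V(z)u = Eu with Dirichlet ends; it stands in for, and is far simpler than, ODPEVP's high-order elements and subspace iteration:

    import numpy as np
    from scipy.linalg import eigh

    n = 200                                   # interior nodes
    z = np.linspace(0.0, 1.0, n + 2)
    h = z[1] - z[0]
    V = 50.0 * (z[1:-1] - 0.5) ** 2           # illustrative potential V(z)

    I = np.eye(n)
    off = np.eye(n, k=1) + np.eye(n, k=-1)
    A = (2.0 / h) * I - (1.0 / h) * off + np.diag(V * h)   # stiffness + V
    B = (4.0 * h / 6.0) * I + (h / 6.0) * off              # consistent mass

    E, F = eigh(A, B)                         # solves A F = E B F
    print(E[:4])                              # lowest eigenvalues

ODPEVP additionally differentiates the discretized problem with respect to the parameter and solves the resulting inhomogeneous linear systems for the eigenfunction derivatives, which is what yields the coupling matrix elements.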
Design for Run-Time Monitor on Cloud Computing
NASA Astrophysics Data System (ADS)
Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, as well as the underlying infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring the system status change, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design the Run-Time Monitor (RTM), which is system software to monitor application behavior at run-time, analyze the collected information, and optimize resources on cloud computing. RTM monitors application software through library instrumentation, as well as the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
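A heavily simplified sketch of the monitor/analyze/adapt loop, using the third-party psutil package for the hardware side (the thresholds and the adapt hook are assumptions of this illustration, not RTM's actual design):

    import time
    import psutil

    def adapt(overloaded):
        """Placeholder for a reconfiguration action (e.g., request a VM)."""
        print("scale out" if overloaded else "steady state")

    def run_time_monitor(period_s=5.0, cpu_limit=85.0, mem_limit=90.0):
        while True:
            cpu = psutil.cpu_percent(interval=1.0)           # monitor
            mem = psutil.virtual_memory().percent
            overloaded = cpu > cpu_limit or mem > mem_limit  # analyze
            adapt(overloaded)                                # adapt/optimize
            time.sleep(period_s)

    # run_time_monitor()  # uncomment to start the loop

A real monitor would instrument the application's libraries and read hardware performance counters, as described above, rather than sampling OS-level aggregates.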
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 was to demonstrate that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
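The run-time autotuning component can be pictured as a search over launch configurations that times the kernel under each candidate and keeps the fastest. A minimal host-side sketch (the kernel and its configuration space are stand-ins, not part of the environment described above):

    import time
    import numpy as np

    def kernel(a, block):
        """Stand-in compute kernel whose speed depends on a block size."""
        out = np.empty_like(a)
        for i in range(0, a.size, block):
            out[i:i + block] = np.sqrt(a[i:i + block])
        return out

    def autotune(a, candidates=(64, 256, 1024, 4096, 16384)):
        timings = {}
        for block in candidates:          # search the configuration space
            t0 = time.perf_counter()
            kernel(a, block)
            timings[block] = time.perf_counter() - t0
        best = min(timings, key=timings.get)
        return best, timings

    best, timings = autotune(np.random.rand(1_000_000))
    print("best block size:", best)

On a real heterogeneous system the candidates would be GPU launch parameters (thread-block shapes, shared-memory usage, and so on), and the search could be exhaustive, heuristic, or model-guided.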
Nesvizhskii, Alexey I.
2013-01-01
Analysis of protein interaction networks and protein complexes using affinity purification and mass spectrometry (AP/MS) is among the most commonly used and successful applications of proteomics technologies. One of the foremost challenges of AP/MS data is the large number of false positive protein interactions present in unfiltered datasets. Here we review computational and informatics strategies for detecting specific protein interaction partners in AP/MS experiments, with a focus on incomplete (as opposed to genome-wide) interactome mapping studies. These strategies range from standard statistical approaches, to empirical scoring schemes optimized for a particular type of data, to advanced computational frameworks. The common denominator among these methods is the use of label-free quantitative information such as spectral counts or integrated peptide intensities that can be extracted from AP/MS data. We also discuss related issues such as combining multiple biological or technical replicates, and dealing with data generated using different tagging strategies. Computational approaches for benchmarking of scoring methods are discussed, and the need for generation of reference AP/MS datasets is highlighted. Finally, we discuss the possibility of more extended modeling of experimental AP/MS data, including integration with external information such as protein interaction predictions based on functional genomics data. PMID:22611043
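As a concrete example of the label-free quantitative filtering these methods build on, a minimal fold-change score over spectral counts might look like the following (the pseudocount and the interpretation threshold are arbitrary choices for this sketch, not a published scoring scheme):

    import numpy as np

    def fold_change_score(bait_counts, control_counts, pseudo=1.0):
        """Spectral-count enrichment of a prey in bait vs. control runs."""
        bait = np.mean(bait_counts) + pseudo
        ctrl = np.mean(control_counts) + pseudo
        return np.log2(bait / ctrl)

    # prey observed in 3 bait replicates and 3 mock (control) purifications
    score = fold_change_score([12, 9, 15], [0, 1, 0])
    print(f"log2 enrichment = {score:.2f}")  # high score -> likely specific

The empirical and model-based schemes reviewed above refine exactly this idea with replicate variance, prey abundance normalization, and cross-experiment frequency information.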
NASA Astrophysics Data System (ADS)
Santhana Vannan, S. K.; Boyer, A.; Deb, D.; Beaty, T.; Wei, Y.; Wei, Z.
2017-12-01
The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) for biogeochemical dynamics is one of the NASA Earth Observing System Data and Information System (EOSDIS) data centers. ORNL DAAC (https://daac.ornl.gov) is responsible for data archival, product development and distribution, and user support for biogeochemical and ecological data and models. In particular, ORNL DAAC has been providing data management support for NASA's terrestrial ecology field campaign programs for the last several decades. Field campaigns combine ground, aircraft, and satellite-based measurements in specific ecosystems over multi-year time periods. The data collected during NASA field campaigns are archived at the ORNL DAAC (https://daac.ornl.gov/get_data/). This paper describes the effort of the ORNL DAAC team to rescue a First ISLSCP Field Experiment (FIFE) dataset containing airborne and satellite observations from the 1980s. The data collected during the FIFE campaign include high resolution aerial imagery collected over Kansas. The data rescue workflow was designed to test for successful recovery of the data from a CD-ROM and to ensure that the data are usable and preserved for the future. The imagery contains spectral reflectance data that can be used as a historical benchmark to examine climatological and ecological changes in the Kansas region since the 1980s. The key steps taken to convert the files to modern standards were: (1) decompress the imagery using the custom compression software provided with the data (the compression algorithm, created for MS-DOS in the 1980s, had to be set up to run on modern computer systems); (2) geo-reference the decompressed files using metadata stored in separate compressed header files; (3) apply standardized file names (file names and details were described in separate readme documents); (4) convert the image files to GeoTIFF format with embedded georeferencing information; and (5) leverage Open Geospatial Consortium (OGC) Web services to provide dynamic data transformation and visualization. We will describe the steps in detail and share lessons learned during the AGU session.
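Step (4), embedding the recovered georeferencing into a GeoTIFF, can be sketched with the rasterio library. The file names, pixel size, origin, and CRS below are placeholders; in the actual rescue the values came from the recovered header files:

    import numpy as np
    import rasterio
    from rasterio.transform import from_origin

    band = np.load("decompressed_scene.npy")   # hypothetical rescued image
    # (west longitude, north latitude, pixel width, pixel height) - assumed
    transform = from_origin(-96.60, 39.10, 0.0001, 0.0001)

    with rasterio.open(
        "scene.tif", "w", driver="GTiff",
        height=band.shape[0], width=band.shape[1],
        count=1, dtype=str(band.dtype),
        crs="EPSG:4326", transform=transform,
    ) as dst:
        dst.write(band, 1)                      # band indexes are 1-based

Once written this way, the file carries its spatial reference internally, which is what allows OGC Web services to serve and reproject it dynamically in step (5).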
Gary Garland
2015-04-15
This is a study of whether the brine formulations we were using in our testing were stable over time. The data include charts, as well as all of the original data from the ICP-MS runs used to complete this study.
Nonlinear Analysis of a Bolted Marine Riser Connector Using NASTRAN Substructuring
NASA Technical Reports Server (NTRS)
Fox, G. L.
1984-01-01
Results of an investigation of the behavior of a bolted, flange-type marine riser connector are reported. The method used to account for the nonlinear effect of connector separation due to bolt preload and axial tension load is described. The automated multilevel substructuring capability of COSMIC/NASTRAN was employed at considerable savings in computer run time. Simplified formulas for computer resources, i.e., computer run times for the modules SDCOMP, FBS, and MPYAD, as well as disk storage space, are presented. Actual run time data on a VAX-11/780 are compared with the formulas presented.
Barefoot running and hip kinematics: good news for the knee?
McCarthy, Colm; Fleming, Neil; Donne, Bernard; Blanksby, Brian
2015-05-01
Patellofemoral pain and iliotibial band syndromes are common running injuries. Excessive hip adduction (HADD), hip internal rotation (HIR), and contralateral pelvic drop (CLPD) during running have been suggested as causes of injury in female runners. This study compared these kinematic variables during barefoot and shod running. Three-dimensional gait analyses were performed on 23 habitually shod, uninjured female recreational athletes running at 3.33 m·s(-1) while shod and barefoot. Spatiotemporal and kinematic data at initial contact (IC), 10% of stance (corresponding to the vertical impact peak), and peak angles were collected from each participant for HADD, HIR, and CLPD, and differences were compared across footwear conditions. Step rates when running barefoot were 178 ± 13 versus 172 ± 11 steps per minute when shod (P < 0.001). Foot-strike patterns changed from a group mean heel-toe latency indicating a rear-foot strike (20.8 ms) when shod, to one indicating a forefoot strike (-1.1 ms) when barefoot (P < 0.001). HADD was lower at IC and at 10% of stance when running barefoot (2.3° ± 3.6° vs. 3.9° ± 4.0°, P < 0.001 and 2.8° ± 3.5° vs. 3.8° ± 3.7°, P < 0.01), as was HIR (7.9° ± 6.1° vs. 10.8° ± 6.1°, P < 0.001 and 4.1° ± 6.3° vs. 7.0° ± 5.8°, P < 0.01) and CLPD (0.4° ± 2.4° vs. -0.4° ± 2.3°, P < 0.01 and 0.8° ± 2.7° vs. 0.3° ± 2.5°, P < 0.05). There were no significant differences detected in peak data for hip kinematics. Barefoot running resulted in lower HADD, HIR, and CLPD when compared to being shod at both IC and 10% of stance, where the body's kinetic energy is absorbed by the lower limb. Because excessive HADD, HIR, and CLPD have been associated with knee injuries in female runners, barefoot running could have potential for injury prevention or treatment in this cohort.
Woods, J W; Jones, R R; Schoultz, T W; Kuenz, M; Moore, R L
1988-08-01
In late 1984, the "General Professional Education of the Physician" (GPEP) report recommended, among other things, that medical curricula be revised to rely less on lectures and more on independent study and problem solving. We seem to have anticipated, in 1980, the findings of the GPEP panel by formulating and starting to test the hypothesis that certain "core" information in medical curricula can be as effectively delivered by technology-based self-study means as by lecture or formal laboratory. We began, at that time, to prepare a series of self-study materials using, at first, videotape and then computer-controlled optical videodiscs. The content area selected for study was basic microscopic pathology. The series was planned to cover the following areas of study: cellular alterations and adaptations, cell injury, acute inflammation, chronic inflammation and wound healing, cellular accumulations, circulatory disturbances, necrosis, and neoplasia. All are intended to provide learning experiences in basic pathology. The first two programs were released for testing in 1983 as a two-sided videodisc accompanied by computer-driven pretests, study modules, and posttests that used Apple computers and Pioneer (DiscoVision) videodisc players. An MS DOS (eg, IBM) version of the computer programs was released in 1984. The first two programs are now used in 57 US, Canadian, European, and Philippine health professions schools, and over 1300 student and faculty evaluations have been received. Student and faculty evaluations of these first two programs were very positive, and, as a result, the others are in production and will be completed in 1988. Only when a critical mass of curriculum is available can we really test our stated hypothesis. In the meantime, it is worthwhile to report the evaluation of the first two programs.
Interactive cutting path analysis programs
NASA Technical Reports Server (NTRS)
Weiner, J. M.; Williams, D. S.; Colley, S. R.
1975-01-01
The operation of numerically controlled machine tools is interactively simulated. Four programs were developed to graphically display the cutting paths for a Monarch lathe, Cintimatic mill, and Strippit sheet metal punch, and the wiring path for a Standard wire wrap machine. These programs run on an IMLAC PDS-1D graphic display system under the DOS-3 disk operating system. The cutting path analysis programs accept input via both paper tape and disk file.
Scalable computing for evolutionary genomics.
Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert
2012-01-01
Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster and pipeline in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages of interest to evolutionary biology are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Alongside the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives on creating and building such images.
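The poor man's parallelization described here, running whole programs side by side as separate processes, needs no more than the standard library. A minimal sketch that fans a BLAST-style command out over input files (the command and file names are placeholders):

    import subprocess
    from multiprocessing import Pool

    def run_job(fasta):
        """Run one whole program as an independent process."""
        out = fasta + ".hits"
        cmd = ["blastn", "-query", fasta, "-db", "nt", "-out", out]
        subprocess.run(cmd, check=True)
        return out

    if __name__ == "__main__":
        inputs = ["chunk01.fa", "chunk02.fa", "chunk03.fa", "chunk04.fa"]
        with Pool(processes=4) as pool:   # one OS process per running program
            results = pool.map(run_job, inputs)
        print(results)

On a cluster or a set of BioNode VMs, a job scheduler plays the role of the Pool, dispatching the same independent commands across machines instead of local cores.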
Fingerprinting Communication and Computation on HPC Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean
2010-06-02
How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight and resolution on the code being run. In this paper, we present a method for "fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine if the code is consistent with certain other types of codes, what a user usually runs, or what the user requested an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.
Adler, Georg; Lembach, Yvonne
2015-08-01
Cognitive impairments may have a severe impact on everyday functioning and quality of life of patients with multiple sclerosis (MS). However, there are some methodological problems in the assessment, and only a few studies allow a representative estimate of the prevalence and severity of cognitive impairments in MS patients. We applied a computer-based method, the memory and attention test (MAT), in 531 outpatients with MS, who were assessed at nine neurological practices or specialized outpatient clinics. The findings were compared with those obtained in an age-, sex- and education-matched control group of 84 healthy subjects. Episodic short-term memory was substantially decreased in the MS patients. About 20% of them scored more than two standard deviations below the mean of the control group. The episodic short-term memory score was negatively correlated with the EDSS score. Minor but also significant impairments in the MS patients were found for verbal short-term memory, episodic working memory, and selective attention. The computer-based MAT was found to be useful for routine assessment of cognition in MS outpatients.
Sun, Xiaojun; Lin, Lei; Liu, Xinyue; Zhang, Fuming; Chi, Lianli; Xia, Qiangwei; Linhardt, Robert J
2016-02-02
Heparins, highly sulfated, linear polysaccharides also known as glycosaminoglycans, are among the most challenging biopolymers to analyze. Hyphenated techniques in conjunction with mass spectrometry (MS) offer rapid analysis of complex glycosaminoglycan mixtures, providing detailed structural and quantitative data. Previous analytical approaches have often relied on liquid chromatography (LC)-MS, and some have limitations including long separation times, low resolution of oligosaccharide mixtures, incompatibility of eluents, and often require oligosaccharide derivatization. This study examines the analysis of glycosaminoglycan oligosaccharides using a novel electrokinetic pump-based capillary electrophoresis (CE)-MS interface. CE separation and electrospray were optimized using a volatile ammonium bicarbonate electrolyte and a methanol-formic acid sheath fluid. The online analyses of highly sulfated heparin oligosaccharides, ranging from disaccharides to low molecular weight heparins, were performed within a 10 min time frame, offering an opportunity for higher-throughput analysis. Disaccharide compositional analysis as well as top-down analysis of low molecular weight heparin was demonstrated. Using normal polarity CE separation and positive-ion electrospray ionization MS, excellent run-to-run reproducibility (relative standard deviation of 3.6-5.1% for peak area and 0.2-0.4% for peak migration time) and sensitivity (limit of quantification of 2.0-5.9 ng/mL and limit of detection of 0.6-1.8 ng/mL) could be achieved.
Quantitative T2(*) assessment of knee joint cartilage after running a marathon.
Hesper, Tobias; Miese, Falk R; Hosalkar, Harish S; Behringer, Michael; Zilkens, Christoph; Antoch, Gerald; Krauspe, Rüdiger; Bittersohl, Bernd
2015-02-01
To study the effect of repetitive joint loading on the T2(*) assessment of knee joint cartilage. T2(*) mapping was performed in 10 non-professional marathon runners (mean age: 28.7±3.97 years) with no morphologically evident cartilage damage within 48h prior to and following the marathon and after a period of approximately four weeks. Bulk and zonal T2(*) values at the medial and lateral tibiofemoral compartment and the patellofemoral compartment were assessed by means of region of interest analysis. Pre- and post-marathon values were compared. There was a small increase in the T2(*) after running the marathon (30.47±5.16ms versus 29.84±4.97ms, P<0.05) while the T2(*) values before the marathon and those after the period of convalescence were similar (29.84±4.97ms versus 29.81±5.17ms, P=0.855). Regional analyses revealed lower T2(*) values in the medial tibial plateau (P<0.001). It appears that repetitive joint loading has a transient influence on the T2(*) values. However, this effect is small and probably not clinically relevant. The low T2(*) values in the medial tibial plateau may be related to functional demand or early cartilage degeneration. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Retention time alignment of LC/MS data by a divide-and-conquer algorithm.
Zhang, Zhongqi
2012-04-01
Liquid chromatography-mass spectrometry (LC/MS) has become the method of choice for characterizing complex mixtures. These analyses often involve quantitative comparison of components in multiple samples. To achieve automated sample comparison, the components of interest must be detected and identified, and their retention times aligned and peak areas calculated. This article describes a simple pairwise iterative retention time alignment algorithm, based on the divide-and-conquer approach, for alignment of ion features detected in LC/MS experiments. In this iterative algorithm, ion features in the sample run are first aligned with features in the reference run by applying a single constant shift of retention time. The sample chromatogram is then divided into two shorter chromatograms, which are aligned to the reference chromatogram the same way. Each shorter chromatogram is further divided into even shorter chromatograms. This process continues until each chromatogram is sufficiently narrow so that ion features within it have a similar retention time shift. In six pairwise LC/MS alignment examples containing a total of 6507 confirmed true corresponding feature pairs with retention time shifts up to five peak widths, the algorithm successfully aligned these features with an error rate of 0.2%. The alignment algorithm is demonstrated to be fast, robust, fully automatic, and superior to other algorithms. After alignment and gap-filling of detected ion features, their abundances can be tabulated for direct comparison between samples.
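The algorithm as described maps directly onto a short recursive routine: fit one constant retention-time shift for the current segment, split the segment in two, and recurse until segments are narrow. A compact numpy rendering of that logic (the tolerance, shift grid, and stopping span are invented defaults; the published implementation may differ in details such as the matching score):

    import numpy as np

    def best_shift(sample_rt, ref_rt, max_shift=2.0, step=0.02, tol=0.15):
        """Constant shift (min) matching the most sample features to
        reference features within +/- tol minutes."""
        shifts = np.arange(-max_shift, max_shift + step, step)
        def n_matched(s):
            d = np.abs((sample_rt[:, None] + s) - ref_rt[None, :])
            return int((d.min(axis=1) <= tol).sum())
        return max(shifts, key=n_matched)

    def align(sample_rt, ref_rt, min_span=1.0, **kw):
        """Divide-and-conquer: shift the whole segment, split it in two,
        and align each half recursively until segments are narrow."""
        sample_rt = np.asarray(sample_rt, float)
        if len(sample_rt) == 0:
            return sample_rt
        shifted = sample_rt + best_shift(sample_rt, ref_rt, **kw)
        if shifted.max() - shifted.min() <= min_span or len(shifted) < 4:
            return shifted
        mid = 0.5 * (shifted.min() + shifted.max())
        left, right = shifted <= mid, shifted > mid
        out = np.empty_like(shifted)
        out[left] = align(shifted[left], ref_rt, min_span=min_span, **kw)
        out[right] = align(shifted[right], ref_rt, min_span=min_span, **kw)
        return out

Each level of recursion halves the segment width, so a feature's total correction is a sum of progressively more local constant shifts, which is how a single simple model accommodates retention-time drift that varies along the gradient.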
Cronly, Mark; Behan, P; Foley, B; Malone, E; Earley, S; Gallagher, M; Shearan, P; Regan, L
2010-12-01
A confirmatory method has been developed to allow for the analysis of fourteen prohibited medicinal additives in pig and poultry compound feed. These compounds are prohibited for use as feed additives, although some are still authorised for use in medicated feed. Feed samples are extracted with acetonitrile with addition of sodium sulfate. The extracts undergo a hexane wash to aid sample purification. The extracts are then evaporated to dryness and reconstituted in initial mobile phase. The samples undergo an ultracentrifugation step prior to injection onto the LC-MS/MS system and are analysed in a run time of 26 min. The LC-MS/MS system is run in MRM mode with both positive and negative electrospray ionisation. The method was validated over three days and is capable of quantitatively analysing for metronidazole, dimetridazole, ronidazole, ipronidazole, chloramphenicol, sulfamethazine, dinitolmide, ethopabate, carbadox and clopidol. The method is also capable of qualitatively analysing for sulfadiazine, tylosin, virginiamycin and avilamycin. A level of 100 microg kg(-1) was used for validation purposes and the method is capable of analysing to this level for all the compounds. Validation criteria of trueness, precision, repeatability and reproducibility, along with measurement uncertainty, were calculated for all analytes. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Sun, Wei; Ho, Stacy; Fang, Xiaojun Rick; O'Shea, Thomas; Liu, Hanlan
2018-05-10
An ultra-high pressure liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method was successfully developed and qualified for the simultaneous determination of triamcinolone hexacetonide (TAH) and triamcinolone acetonide (TAA, the active metabolite of TAH) in rabbit plasma. To prevent the hydrolysis of TAH to TAA ex vivo during sample collection and processing, we evaluated the effectiveness of several esterase inhibitors to stabilize TAH in plasma. Phenylmethanesulfonyl fluoride (PMSF) at 2.0 mM was chosen to stabilize TAH in rabbit plasma. The developed method is highly sensitive with a lower limit of quantitation of 10.0 pg/mL for both TAA and TAH using a 300 μL plasma aliquot. The method demonstrated good linearity, accuracy, precision, sensitivity, selectivity, recovery, matrix effects, dilution integrity, carryover, and stability. Linearity was obtained over the range of 10-2500 pg/mL. Both intra- and inter-run coefficients of variation were less than 9.1% and accuracies across the assay range were all within 100 ± 8.4%. The run time is under 5 minutes. The method was successfully implemented to support a rabbit pharmacokinetic study of TAH and TAA following a single intra-articular administration of TAH (Aristospan®). Copyright © 2018 Elsevier B.V. All rights reserved.
Wahlen, Raimund
2004-04-01
A high-performance liquid chromatography-inductively coupled plasma-mass spectrometry (HPLC-ICP-MS) method has been developed for the fast and accurate analysis of arsenobetaine (AsB) in fish samples extracted by accelerated solvent extraction. The combined extraction and analysis approach is validated using certified reference materials for AsB in fish and during a European intercomparison exercise with a blind sample. Up to six species of arsenic (As) can be separated and quantitated in the extracts within a 10-min isocratic elution. The method is optimized so as to minimize time-consuming sample preparation steps and allow for automated extraction and analysis of large sample batches. A comparison of standard addition and external calibration shows no significant difference in the results obtained, which indicates that the HPLC-ICP-MS method is not influenced by severe matrix effects. The extraction procedure can process up to 24 samples in an automated manner, and the robustness of the developed HPLC-ICP-MS approach is highlighted by the capability to run more than 50 injections per sequence, which equates to a total run-time of more than 12 h. The method can therefore be used to rapidly and accurately assess the proportion of nontoxic AsB in fish samples with high total As content during toxicological screening studies.
Klausz, Gabriella; Keller, Éva; Sára, Zoltán; Székely-Körmöczy, Péter; Laczay, Péter; Ary, Kornélia; Sótonyi, Péter; Róna, Kálmán
2015-12-01
A liquid chromatography-electrospray-mass spectrometry (LC/MS) method has been developed and validated for determination of praziquantel (PZQ), pyrantel (PYR), febantel (FBT), and the active metabolites fenbendazole (FEN) and oxfendazole (OXF), in dog plasma, using mebendazole as internal standard (IS). The method consists of solid-phase extractions on Strata-X polymeric cartridges. Chromatographic separation was carried out on a Phenomenex Gemini C6-Phenyl column using binary gradient elution containing methanol and 50 mM ammonium formate (pH 3). The method was linear (r(2) ≥ 0.990) over concentration ranges of 3-250 ng/mL for PYR and FBT, 5-250 ng/mL for OXF and FEN, and 24-1000 ng/mL for PZQ. The mean precisions were 1.3-10.6% (within-run) and 2.5-9.1% (between-run), and mean accuracies were 90.7-109.4% (within-run) and 91.6-108.2% (between-run). The relative standard deviations (RSD) were <9.1%. The mean recoveries of the five targeted compounds from dog plasma ranged from 77 to 94%. The new LC/MS method described herein was fully validated and successfully applied to bioequivalence studies of different anthelmintic formulations, such as tablets containing PZQ, PYR embonate and FBT, in dogs after oral administration. Copyright © 2015 John Wiley & Sons, Ltd.
Shi, Ping; Hu, Sijung; Yu, Hongliu
2018-02-01
The aim of this study was to analyze the recovery of heart rate variability (HRV) after treadmill exercise and to investigate the autonomic nervous system response after exercise. Frequency domain indices, i.e., LF (ms²), HF (ms²), LF (n.u.), HF (n.u.) and LF/HF, and lagged Poincaré plot width (SD1m) and length (SD2m) were introduced for comparison between the baseline period (Pre-E) before treadmill running and two periods after treadmill running (Post-E1 and Post-E2). The correlations between lagged Poincaré plot indices and frequency domain indices were applied to reveal the long-range correlation between linear and nonlinear indices during the recovery of HRV. The results suggested entirely attenuated autonomic nervous activity to the heart following the treadmill exercise. After the treadmill running, the sympathetic nerves achieved dominance and the parasympathetic activity was suppressed, which lasted for more than 4 min. The correlation coefficients between lagged Poincaré plot indices and spectral power indices could separate not only Pre-E and the two sessions after the treadmill running, but also the two sessions in the recovery period, i.e., Post-E1 and Post-E2. The lagged Poincaré plot, as an innovative nonlinear method, showed better performance than linear frequency domain analysis and the conventional nonlinear Poincaré plot.
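For readers unfamiliar with the index, a lagged Poincaré plot scatters each RR interval against the interval m beats later; SD1m and SD2m are the dispersions across and along the line of identity. A minimal Python sketch (the 1/√2 rotation is the standard construction; the sample data are invented):

```python
import numpy as np

def lagged_poincare(rr_ms, m=1):
    """Width (SD1m) and length (SD2m) of the Poincare plot of RR[i] vs RR[i+m]."""
    x, y = rr_ms[:-m], rr_ms[m:]
    sd1 = np.std((y - x) / np.sqrt(2.0), ddof=1)  # dispersion across identity line
    sd2 = np.std((y + x) / np.sqrt(2.0), ddof=1)  # dispersion along identity line
    return sd1, sd2

rr = 800.0 + 40.0 * np.random.default_rng(1).standard_normal(300)  # synthetic RR series (ms)
for m in (1, 2, 5):
    print(m, lagged_poincare(rr, m))
```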
Storage and retrieval of digital images in dermatology.
Bittorf, A; Krejci-Papa, N C; Diepgen, T L
1995-11-01
Differential diagnosis in dermatology relies on the interpretation of visual information in the form of clinical and histopathological images. Up until now, reference images have had to be retrieved from textbooks and/or appropriate journals. To overcome the inherent limitations of those storage media with respect to the number of images stored, display, and search parameters available, we designed a computer-based database of digitized dermatologic images. Images were taken from the photo archive of the Dermatological Clinic of the University of Erlangen. A database was designed using the Entity-Relationship approach. It was implemented on a PC-Windows platform using MS Access and MS Visual Basic. A Sparc 10 workstation running the CERN Hypertext Transfer Protocol Daemon (httpd) 3.0 pre 6 software served as the WWW server. For compressed storage on a hard drive, a quality factor of 60 allowed on-screen differential diagnosis and corresponded to a compression factor of 1:35 for clinical images and 1:40 for histopathological images. Hierarchical keys of clinical or histopathological criteria permitted multi-criteria searches. A script using the Common Gateway Interface (CGI) enabled remote search and image retrieval via the World-Wide-Web (W3). A dermatologic image database featuring clinical and histopathological images was constructed which allows for multi-parameter searches and world-wide remote access.
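The quality-factor/compression trade-off the authors report is easy to reproduce with any JPEG encoder. A small Python sketch using Pillow (the file names are hypothetical, and the achieved ratio will vary with image content rather than matching the 1:35 reported):

```python
import os
from PIL import Image

img = Image.open("clinical_photo.png")           # hypothetical source image
img.convert("RGB").save("clinical_q60.jpg", "JPEG", quality=60)

raw_bytes = img.width * img.height * 3           # uncompressed 24-bit size
jpg_bytes = os.path.getsize("clinical_q60.jpg")
print(f"compression ratio ~ 1:{raw_bytes / jpg_bytes:.0f}")
```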
The running coupling of the minimal sextet composite Higgs model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fodor, Zoltan; Holland, Kieran; Kuti, Julius
We compute the renormalized running coupling of SU(3) gauge theory coupled to N_f = 2 flavors of massless Dirac fermions in the 2-index-symmetric (sextet) representation. This model is of particular interest as a minimal realization of the strongly interacting composite Higgs scenario. A recently proposed finite volume gradient flow scheme is used. The calculations are performed at several lattice spacings with two different implementations of the gradient flow, allowing for a controlled continuum extrapolation, and particular attention is paid to estimating the systematic uncertainties. For small values of the renormalized coupling our results for the β-function agree with perturbation theory. For moderate couplings we observe a downward deviation relative to the 2-loop β-function, but in the coupling range where the continuum extrapolation is fully under control we do not observe an infrared fixed point. The explored range includes the locations of the zeros of the 3-loop and 4-loop β-functions in the $\overline{MS}$ scheme. The absence of a non-trivial zero in the β-function in the explored range of the coupling is consistent with our earlier findings based on hadronic observables, the chiral condensate and the GMOR relation. The present work is the first to report continuum non-perturbative results for the sextet model.
Develop a solution for protecting and securing enterprise networks from malicious attacks
NASA Astrophysics Data System (ADS)
Kamuru, Harshitha; Nijim, Mais
2014-05-01
In the world of computer and network security, there are myriad ways to launch an attack, which, from the perspective of a network, can usually be defined as "traffic that has malicious intent." A firewall acts as one measure to secure a device from incoming unauthorized data. There are countless computer attacks that no firewall can prevent, such as those executed locally on the machine by a malicious user; from the network's perspective, however, the attacks that degrade the effectiveness of data can be grouped into two types: brute force and precision. Juniper firewalls have the capability to protect against both types of attack. Denial of Service (DoS) attacks are among the most well-known network security threats in the brute force category, largely due to the high-profile way in which they can affect networks; over the years, some of the largest, most respected Internet sites have been effectively taken offline by DoS attacks. A DoS attack typically has a singular focus, namely, to cause the services running on a particular host or network to become unavailable. Some DoS attacks exploit vulnerabilities in an operating system and cause it to crash, such as the infamous WinNuke attack; others submerge a network or device with traffic until no resources remain to handle legitimate traffic. Precision attacks typically involve multiple phases, from reconnaissance to machine ownership, and often involve more thought than brute force attacks. Before a precision attack is launched, information about the victim must be gathered, typically through various types of scans that determine available hosts, networks, and ports: the hosts available on a network can be determined by ping sweeps, and the open ports on a machine can be located by port scans. Screens cover a wide variety of attack traffic, as they are configured on a per-zone basis; depending on the type of screen being configured, there may be additional settings beyond simply blocking the traffic. Attack prevention is also a native function of any firewall. A Juniper firewall handles traffic on a per-flow basis, so flows or sessions can be used to determine whether traffic attempting to traverse the firewall is legitimate. The state-checking components resident in a Juniper firewall are controlled by configuring "flow" settings, which enable state checking for various conditions on the device; flow settings can be used to protect against TCP hijacking and to generally ensure that the firewall performs full state processing when desired. We take a case study of an attack on a network and study the detection of the malicious packets on a NetScreen firewall. A new solution for securing enterprise networks is developed here.
The effects of perceived USB-delay for sensor and embedded system development.
Du, J; Kade, D; Gerdtman, C; Ozcan, O; Linden, M
2016-08-01
Perceiving delay in computer input devices is a problem that becomes even more pronounced in healthcare applications and/or in small, embedded systems. Therefore, this paper investigates the amount of delay found acceptable when using computer input devices. A device was developed to perform a benchmark test for the perception of delay; it can insert a delay of 0 to 999 milliseconds (ms) between a receiving computer and an attached USB device, such as a mouse, a keyboard, or another type of USB-connected input device. Feedback from user tests with 36 people forms the basis for determining time limits for USB data processing in microprocessors and embedded systems that keep users from noticing the delay. For this paper, tests were performed with a personal computer and a common computer mouse, testing the perception of delays between 0 and 500 ms. The results of our user tests show that perceived delays up to 150 ms were acceptable, and delays larger than 300 ms were not acceptable at all.
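A toy version of such a perception test can be scripted on a desktop: echo the pointer position only after an adjustable delay and note when the lag becomes noticeable. A minimal Python/tkinter sketch, not the authors' hardware; DELAY_MS is an illustrative knob:

```python
import tkinter as tk

DELAY_MS = 150  # injected delay; the study found ~150 ms still acceptable

root = tk.Tk()
root.title(f"Cursor echo with {DELAY_MS} ms delay")
canvas = tk.Canvas(root, width=480, height=320, bg="white")
canvas.pack()
dot = canvas.create_oval(0, 0, 10, 10, fill="red", outline="")

def on_motion(event):
    # redraw the dot at the pointer position only after DELAY_MS elapses
    canvas.after(DELAY_MS, lambda x=event.x, y=event.y:
                 canvas.coords(dot, x - 5, y - 5, x + 5, y + 5))

canvas.bind("<Motion>", on_motion)
root.mainloop()
```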
Robot computer problem solving system
NASA Technical Reports Server (NTRS)
Becker, J. D.; Merriam, E. W.
1974-01-01
The conceptual, experimental, and practical phases of developing a robot computer problem solving system are outlined. Robot intelligence, conversion of the programming language SAIL to run under the TENEX monitor, and the use of the network to run several cooperating jobs at different sites are discussed.
NASA Technical Reports Server (NTRS)
Davarian, F.
1994-01-01
The LOOP computer program was written to simulate the Automatic Frequency Control (AFC) subsystem of a Differential Minimum Shift Keying (DMSK) receiver with a bit rate of 2400 baud. The AFC simulated by LOOP is a first order loop configuration with a first order R-C filter. NASA has been investigating the concept of mobile communications based on low-cost, low-power terminals linked via geostationary satellites. Studies have indicated that low bit rate transmission is suitable for this application, particularly from the frequency and power conservation point of view. A bit rate of 2400 BPS is attractive due to its applicability to the linear predictive coding of speech. Input to LOOP includes the following: 1) the initial frequency error; 2) the double-sided loop noise bandwidth; 3) the filter time constants; 4) the amount of intersymbol interference; and 5) the bit energy to noise spectral density. LOOP output includes: 1) the bit number and the frequency error of that bit; 2) the computed mean of the frequency error; and 3) the standard deviation of the frequency error. LOOP is written in MS SuperSoft FORTRAN 77 for interactive execution and has been implemented on an IBM PC operating under PC DOS with a memory requirement of approximately 40K of 8 bit bytes. This program was developed in 1986.
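The structure LOOP simulates (a frequency discriminator, a one-pole R-C loop filter, and a first-order correction applied once per bit) can be sketched in a few lines. The Python below is an illustrative stand-in, not a port of the FORTRAN: the gain, time constant, and noise level are invented, and only the outputs mirror LOOP's reported mean and standard deviation of frequency error.

```python
import numpy as np

def simulate_afc(f0_hz=500.0, bit_rate=2400, n_bits=5000,
                 tau=5e-3, gain=0.2, noise_std=50.0, seed=0):
    """First-order AFC loop with a one-pole R-C filter (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / bit_rate                # one update per bit
    alpha = dt / (tau + dt)            # one-pole R-C smoothing coefficient
    f_err, filt = f0_hz, 0.0
    errs = np.empty(n_bits)
    for n in range(n_bits):
        meas = f_err + noise_std * rng.standard_normal()  # noisy discriminator (Hz)
        filt += alpha * (meas - filt)                     # R-C loop filter
        f_err -= gain * filt                              # first-order correction
        errs[n] = f_err
    return errs.mean(), errs.std()

print(simulate_afc())  # (mean frequency error, standard deviation), as LOOP reports
```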
Public Risk Assessment Program
NASA Technical Reports Server (NTRS)
Mendeck, Gavin
2010-01-01
The Public Entry Risk Assessment (PERA) program addresses risk to the public from shuttle or other spacecraft re-entry trajectories. Managing public risk to acceptable levels is a major component of safe spacecraft operation. PERA is given scenario inputs of vehicle trajectory, probability of failure along that trajectory, the resulting debris characteristics, and field size and distribution, and returns risk metrics that quantify the individual and collective risk posed by that scenario. Due to the large volume of data required to perform such a risk analysis, PERA was designed to streamline the analysis process by using innovative mathematical analysis of the risk assessment equations. Real-time analysis in the event of a shuttle contingency operation, such as damage to the Orbiter, is possible because PERA allows for a change to the probability of failure models, therefore providing a much quicker estimation of public risk. PERA also provides the ability to generate movie files showing how the entry risk changes as the entry develops. PERA was designed to streamline the computation of the enormous amounts of data needed for this type of risk assessment by using an average distribution of debris on the ground, rather than pinpointing the impact point of every piece of debris. This has reduced the amount of computational time significantly without reducing the accuracy of the results. PERA was written in MATLAB; a compiled version can run from a DOS or UNIX prompt.
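The averaging idea can be illustrated with a generic expected-casualty integral: instead of tracking each fragment's impact point, multiply a smoothed debris density by population density and a per-fragment casualty area over a ground grid. This Python sketch is a textbook form of that calculation, not PERA's actual equations; all numbers are invented.

```python
import numpy as np

def collective_risk(debris_per_km2, people_per_km2, casualty_area_km2, cell_km2):
    """E[casualties] = sum over grid cells of fragments * people hit per fragment."""
    fragments = debris_per_km2 * cell_km2            # expected fragments per cell
    return float(np.sum(fragments * people_per_km2 * casualty_area_km2))

rng = np.random.default_rng(0)
debris = rng.exponential(0.5, size=(50, 200))    # fragments/km^2 along a ground track
people = rng.uniform(0.0, 30.0, size=(50, 200))  # population density, people/km^2
print(collective_risk(debris, people, casualty_area_km2=5e-7, cell_km2=25.0))
```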
LENMODEL: A forward model for calculating length distributions and fission-track ages in apatite
NASA Astrophysics Data System (ADS)
Crowley, Kevin D.
1993-05-01
The program LENMODEL is a forward model for annealing of fission tracks in apatite. It provides estimates of the track-length distribution, fission-track age, and areal track density for any user-supplied thermal history. The program approximates the thermal history, in which temperature is represented as a continuous function of time, by a series of isothermal steps of various durations. Equations describing the production of tracks as a function of time and annealing of tracks as a function of time and temperature are solved for each step. The step calculations are summed to obtain estimates for the entire thermal history. Computational efficiency is maximized by performing the step calculations backwards in model time. The program incorporates an intuitive and easy-to-use graphical interface. Thermal history is input to the program using a mouse. Model options are specified by selecting context-sensitive commands from a bar menu. The program allows for considerable selection of equations and parameters used in the calculations. The program was written for PC-compatible computers running DOS 3.0 and above (and Windows 3.0 or above) with VGA or SVGA graphics and a Microsoft-compatible mouse. Single copies of a runtime version of the program are available from the author by written request as explained in the last section of this paper.
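The step-summation scheme can be sketched generically: approximate the thermal history as isothermal steps, and carry each track generation through later steps via an equivalent-time substitution. The Python below is a naive forward-in-time illustration with an invented one-parameter annealing law; LENMODEL offers several published laws and, for efficiency, runs the summation backwards in model time.

```python
import math

A, B = 5.0e6, 1.5e4   # placeholder kinetic constants (illustrative only)

def reduced_length(t_hr, T_K):
    """Placeholder annealing law r = 1 - A*t*exp(-B/T), clipped to [0, 1]."""
    return max(0.0, 1.0 - A * t_hr * math.exp(-B / T_K))

def t_equiv(r, T_K):
    """Isothermal time at T_K that would produce reduced length r under the same law."""
    return (1.0 - r) / (A * math.exp(-B / T_K))

def forward_model(history):
    """history: list of (duration_hr, temp_K) isothermal steps, oldest first.
    Returns the present-day reduced length of tracks formed at the start of each step."""
    final = []
    for i in range(len(history)):
        r = 1.0
        for dt, T in history[i:]:                  # anneal this generation onward
            r = reduced_length(t_equiv(r, T) + dt, T)
            if r == 0.0:
                break                              # track fully annealed
        final.append(r)
    return final

# two 1-Myr isothermal steps (durations in hours), cooling from 380 K to 340 K
print(forward_model([(8.76e9, 380.0), (8.76e9, 340.0)]))
```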
Gordon, C J; Phillips, P M; Johnstone, A F M
2016-01-01
Chronic exercise is considered as one of the most effective means of countering symptoms of the metabolic syndrome (MS) such as obesity and hyperglycemia. Rodent models of forced or voluntary exercise are often used to study the mechanisms of MS and type 2 diabetes. However, there is little known on the impact of genetic strain on the metabolic response to exercise. We studied the effects of housing rats with running wheels (RW) for 65 days compared to sedentary (SED) housing in five female rat strains: Sprague-Dawley (SD), Long-Evans (LE), Wistar (WIS), spontaneously hypertensive (SHR), and Wistar-Kyoto (WKY). Key parameters measured were total distance run, body composition, food consumption, motor activity, ventilatory responses by plethysmography, and resting metabolic rate (MR). WKY and SHR ran significantly more than the WIS, LE, and SD strains. Running-induced reduction in body fat was affected by strain but not by distance run. LE's lost 6% fat after 21 d of running whereas WKY's lost 2% fat but ran 40% more than LE's. LE and WIS lost body weight while the SHR and WKY strains gained weight during running. Food intake with RW was markedly increased in SHR, WIS, and WKY while LE and SD showed modest increases. Exploratory motor activity was reduced sharply by RW in all but the SD strain. Ventilatory parameters were primarily altered by RW in the SHR, WKY, and WIS strains. MR was unaffected by RW. In an overall ranking of physiological and behavioral responses to RW, the SD strain was considered the least responsive whereas the WIS was scored as most responsive. In terms of RW-induced fat loss, the LE strain appears to be the most ideal. These results should be useful in the future selection of rat models to study benefits of volitional exercise. Published by Elsevier Inc.
Xiong, Yeping; Zhao, Yuan-Yuan; Goruk, Sue; Oilund, Kirsten; Field, Catherine J; Jacobs, René L; Curtis, Jonathan M
2012-12-12
A hydrophilic interaction liquid chromatography-tandem mass spectrometry (HILIC LC-MS/MS) method was developed and validated to simultaneously quantify six aqueous choline-related compounds and eight major phospholipids classes in a single run. HILIC chromatography was coupled to positive ion electrospray mass spectrometry. A combination of multiple scan modes including precursor ion scan, neutral loss scan and multiple reaction monitoring was optimized for the determination of each compound or class in a single LC/MS run. This work developed a simplified extraction scheme in which both free choline and related compounds along with phospholipids were extracted into a homogenized phase using chloroform/methanol/water (1:2:0.8) and diluted into methanol for the analysis of target compounds in a variety of sample matrices. The analyte recoveries were evaluated by spiking tissues and food samples with two isotope-labeled internal standards, PC-d(3) and Cho-d(3). Recoveries of between 90% and 115% were obtained by spiking a range of sample matrices with authentic standards containing all 14 of the target analytes. The precision of the analysis ranged from 1.6% to 13%. Accuracy and precision was comparable to that obtained by quantification of selected phospholipid classes using (31)P NMR. A variety of sample matrices including egg yolks, human diets and animal tissues were analyzed using the validated method. The measurements of total choline in selected foods were found to be in good agreement with values obtained from the USDA choline database. Copyright © 2012 Elsevier B.V. All rights reserved.
Athar Masood, M; Veenstra, Timothy D
2017-08-26
Urine Drug Testing (UDT) is an important analytical/bioanalytical technique that has become an integral and vital part of testing programs for diagnostic purposes. This manuscript presents the development and validation of a tailor-made LC-MS/MS quantitative assay for a custom group of 33 pain panel drugs and their metabolites belonging to different classes (opiates, opioids, benzodiazepines, illicit drugs, amphetamines, etc.) that are prescribed in pain management and depressant therapies. The LC-MS/MS method incorporates two experiments to enhance the sensitivity of the assay, requires no prior purification of the samples, and has a run time of about 7 min at a flow rate of 0.7 mL/min. The method also includes the second-stage metabolites of some drugs that belong to different classes but share similar first-stage metabolic pathways, enabling correct identification of the drug taken or flagging of drugs that might indicate specimen tampering. Some real case examples and peak-picking difficulties are described for some of the analytes in subject samples. Finally, the method was evaluated with randomly selected de-identified clinical subject samples, and the data from "direct dilute and shoot analysis" were compared with those obtained after "glucuronide hydrolysis." This method is now used routinely to run more than 100 clinical subject samples on a daily basis. This article is protected by copyright. All rights reserved.
Rodríguez-Gómez, R; Zafra-Gómez, A; Camino-Sánchez, F J; Ballesteros, O; Navalón, A
2014-07-04
In the present work, two specific, accurate and sensitive methods for the determination of endocrine disrupting chemicals (EDCs) in human breast milk are developed and validated. Bisphenol A and its main chlorinated derivatives, five benzophenone-UV filters and four parabens were selected as target analytes. The method involves a stir-bar sorptive extraction (SBSE) procedure followed by solvent desorption prior to GC-MS/MS or UHPLC-MS/MS analysis. A derivatization step is also necessary when GC analysis is performed. The GC column used was a capillary HP-5MS with a run time of 26 min. For UHPLC analysis, the stationary phase was a non-polar Acquity UPLC(®) BEH C18 column and the run time was 10 min. In both cases, the analytes were detected and quantified using a triple quadrupole mass spectrometer (QqQ). Quality parameters such as linearity, accuracy (trueness and precision), sensitivity and selectivity were examined and yielded good results. The limits of quantification (LOQs) ranged from 0.3 to 5.0 ng mL(-1) for GC and from 0.2 to 1.0 ng mL(-1) for LC. The relative standard deviation (RSD) was lower than 15% and the recoveries ranged from 92 to 114% in all cases, with the LC results being slightly less favorable. The methods were satisfactorily applied to the determination of the target compounds in human milk samples from 10 randomly selected women. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Switzar, Linda; Nicolardi, Simone; Rutten, Julie W.; Oberstein, Saskia A. J. Lesnik; Aartsma-Rus, Annemieke; van der Burgt, Yuri E. M.
2016-01-01
Disulfide bonds are an important class of protein post-translational modifications, yet this structurally crucial modification type is commonly overlooked in mass spectrometry (MS)-based proteomics approaches. Recently, the benefits of online electrochemistry-assisted reduction of protein S-S bonds prior to MS analysis were exemplified by successful characterization of disulfide bonds in peptides and small proteins. In the current study, we have combined liquid chromatography (LC) with electrochemistry (EC) and mass analysis by Fourier transform ion cyclotron resonance (FTICR) MS in an online LC-EC-MS platform to characterize protein disulfide bonds in a bottom-up proteomics workflow. A key advantage of a LC-based strategy is the use of the retention time in identifying both intra- and interpeptide disulfide bonds. This is demonstrated by performing two sequential analyses of a certain protein digest, once without and once with electrochemical reduction. In this way, the "parent" disulfide-linked peptide detected in the first run has a retention time-based correlation with the EC-reduced peptides detected in the second run, thus simplifying disulfide bond mapping. Using this platform, both inter- and intra-disulfide-linked peptides were characterized in two different proteins, β-lactoglobulin and ribonuclease B. In order to prevent disulfide reshuffling during the digestion process, proteins were digested at a relatively low pH, using (a combination of) the high specificity proteases trypsin and Glu-C. With this approach, disulfide bonds in β-lactoglobulin and ribonuclease B were comprehensively identified and localized, showing that online LC-EC-MS is a useful tool for the characterization of protein disulfide bonds.
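The retention-time correlation lends itself to a simple matching rule: reducing one S-S bond adds two hydrogens, so a parent feature from the first run should pair with reduced features from the second run that elute nearby and satisfy the mass arithmetic. A Python sketch with invented tolerances and example masses:

```python
from itertools import combinations

H2 = 2 * 1.00783  # two hydrogen atoms gained on reducing one S-S bond (Da)

def match_disulfides(parents, reduced, rt_tol=0.5, da_tol=0.02):
    """parents: (mass, rt) features seen only without EC reduction;
    reduced: (mass, rt) features appearing after EC reduction."""
    hits = []
    for pm, prt in parents:
        near = [(m, rt) for m, rt in reduced if abs(rt - prt) <= rt_tol]
        for m, _ in near:                              # intrapeptide S-S: same peptide, +2H
            if abs(m - H2 - pm) <= da_tol:
                hits.append(("intra", pm, m))
        for (m1, _), (m2, _) in combinations(near, 2):  # interpeptide S-S: two peptides
            if abs(m1 + m2 - H2 - pm) <= da_tol:
                hits.append(("inter", pm, m1, m2))
    return hits

parents = [(2402.11, 18.4)]                       # hypothetical parent feature
reduced = [(1200.06, 18.2), (1204.07, 18.6)]      # hypothetical reduced features
print(match_disulfides(parents, reduced))          # reports the interpeptide pair
```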
Kim, Sung-Woo; Abd El-Aty, A M; Choi, Jeong-Heui; Lee, Young-Jun; Lieu, Truong T B; Chung, Hyung Suk; Rahman, Md Musfiqur; Choi, Ok-Ja; Shin, Ho-Chul; Rhee, Gyu-Seek; Chang, Moon-Ik; Kim, Hee Jung; Shim, Jae-Han
2016-06-15
The effects of various washing procedures, including stagnant, running, and stagnant plus running tap water, and the use of washing solutions and additives, namely NaCl (1% and 2%), vinegar (2%, 5%, and 10%), detergent (0.5% and 1%), and charcoal (1% and 2%), on the reduction of diethofencarb residues were estimated in field-incurred crown daisy, a model leafy vegetable, grown in greenhouses located in 3 different areas (Gwangju, Naju, and Muan). The original Quick, Easy, Cheap, Effective, Rugged, and Safe "QuEChERS" method was modified for extraction, and liquid chromatography-tandem mass spectrometry (LC/MS/MS) was used for analysis. The recovery of diethofencarb in unwashed and washed samples was satisfactory and ranged between 84.28% and 115.32% with relative standard deviations (RSDs) of <6%. The residual levels decreased following washing with stagnant, running, and stagnant+running tap water (reductions of 65.08-85.02%, 69.99-86.79%, and 74.75-88.96%, respectively). The percentage reduction increased further, ranging from 77.46% to 91.19%, following washing with the various solutions. Application of 1% detergent was found to be the most effective washing method for reducing the residues in crown daisy. Additionally, washing with stagnant and running tap water, or even stagnant water for 5 min, might reduce the residue levels substantially, making the prepared food safe for human consumption. Copyright © 2016 Elsevier Ltd. All rights reserved.
Active Nodal Task Seeking for High-Performance, Ultra-Dependable Computing
1994-07-01
implementation. Figure 1 shows a hardware organization of ANTS: stand-alone computing nodes interconnected by buses. 2.1 Run Time Partitioning. The … nodes … respond to changing loads [27] or system reconfiguration [26]. Existing techniques are all source-initiated or server-initiated [27]. 5.1 … short-running task segments. The task segments must be short-running in order that processors will become available often enough to satisfy changing …
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes, and the process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed-computing, cross-platform environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
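Because each calibration run is independent, the workload maps directly onto an embarrassingly parallel worker pool. A minimal Python sketch of a 10,000-run sweep; the two-parameter synthetic objective below merely stands in for launching a real natural-resources model:

```python
from multiprocessing import Pool
import random

def run_model(params):
    """Stand-in for one calibration run: returns (params, objective).
    A real run would execute the model with these inputs and score its output."""
    a, b = params
    return params, (a - 1.3) ** 2 + (b - 0.7) ** 2   # synthetic objective

if __name__ == "__main__":
    rng = random.Random(0)
    trials = [(rng.uniform(0, 5), rng.uniform(0, 2)) for _ in range(10000)]
    with Pool() as pool:                  # independent runs: no IPC needed
        results = pool.map(run_model, trials)
    best_params, best_obj = min(results, key=lambda r: r[1])
    print("best parameters:", best_params, "objective:", best_obj)
```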
ERIC Educational Resources Information Center
Pollard, Jim
This report reviews eight IBM-compatible software packages that are available to secondary schools to teach computer-aided drafting (CAD). Software packages to be considered were selected following reviews of CAD periodicals, computers in education periodicals, advertisements, and recommendations of teachers. The packages were then rated by…
The Impact and Promise of Open-Source Computational Material for Physics Teaching
NASA Astrophysics Data System (ADS)
Christian, Wolfgang
2017-01-01
A computer-based modeling approach to teaching must be flexible because students and teachers have different skills and varying levels of preparation. Learning how to run the "software du jour" is not the objective for integrating computational physics material into the curriculum. Learning computational thinking, how to use computation and computer-based visualization to communicate ideas, how to design and build models, and how to use ready-to-run models to foster critical thinking is the objective. Our computational modeling approach to teaching is a research-proven pedagogy that predates computers. It attempts to enhance student achievement through the Modeling Cycle. This approach was pioneered by Robert Karplus and the SCIS Project in the 1960s and 70s and later extended by the Modeling Instruction Program led by Jane Jackson and David Hestenes at Arizona State University. This talk describes a no-cost open-source computational approach aligned with a Modeling Cycle pedagogy. Our tools, curricular material, and ready-to-run examples are freely available from the Open Source Physics Collection hosted on the AAPT-ComPADRE digital library. Examples will be presented.
Colt: an experiment in wormhole run-time reconfiguration
NASA Astrophysics Data System (ADS)
Bittner, Ray; Athanas, Peter M.; Musgrove, Mark
1996-10-01
Wormhole run-time reconfiguration (RTR) is an attempt to create a refined computing paradigm for high performance computational tasks. By combining concepts from field programmable gate array (FPGA) technologies with data flow computing, the Colt/Stallion architecture achieves high utilization of hardware resources, and facilitates rapid run-time reconfiguration. Targeted mainly at DSP-type operations, the Colt integrated circuit -- a prototype wormhole RTR device -- compares favorably to contemporary DSP alternatives in terms of silicon area consumed per unit computation and in computing performance. Although emphasis has been placed on signal processing applications, general purpose computation has not been overlooked. Colt is a prototype that defines an architecture not only at the chip level but also in terms of an overall system design. As this system is realized, the concept of wormhole RTR will be applied to numerical computation and DSP applications including those common to image processing, communications systems, digital filters, acoustic processing, real-time control systems and simulation acceleration.
Virtualization and cloud computing in dentistry.
Chow, Frank; Muftu, Ali; Shorter, Richard
2014-01-01
The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer as a virtual machine (i.e., a virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management), since only one physical computer needs to be running. This virtualization platform is the basis for cloud computing, and it has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage: patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computing needs continue to grow, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article provides some useful information on current uses of cloud computing.
Coulomb gap triptych in a periodic array of metal nanocrystals.
Chen, Tianran; Skinner, Brian; Shklovskii, B I
2012-09-21
The Coulomb gap in the single-particle density of states (DOS) is a universal consequence of electron-electron interaction in disordered systems with localized electron states. Here we show that in arrays of monodisperse metallic nanocrystals, there is not one but three identical adjacent Coulomb gaps, which together form a structure that we call a "Coulomb gap triptych." We calculate the DOS and the conductivity in two- and three-dimensional arrays using a computer simulation. Unlike in the conventional Coulomb glass models, in nanocrystal arrays the DOS has a fixed width in the limit of large disorder. The Coulomb gap triptych can be studied via tunneling experiments.
Gaubert, Alexandra; Jeudy, Jérémy; Rougemont, Blandine; Bordes, Claire; Lemoine, Jérôme; Casabianca, Hervé; Salvador, Arnaud
2016-07-01
In a stricter legislative context, greener detergent formulations are being developed. In this way, synthetic surfactants are frequently replaced by bio-sourced surfactants and/or used at lower concentrations in combination with enzymes. In this paper, a LC-MS/MS method was developed for the identification and quantification of enzymes in laundry detergents. Prior to the LC-MS/MS analyses, a specific sample preparation protocol was developed due to matrix complexity (high surfactant percentages). Then, for each enzyme family mainly used in detergent formulations (protease, amylase, cellulase, and lipase), specific peptides were identified on a high resolution platform. A LC-MS/MS method was then developed in selected reaction monitoring (SRM) MS mode for the light and corresponding heavy peptides. The method was linear over the peptide concentration ranges 25-1000 ng/mL for protease, lipase, and cellulase; 50-1000 ng/mL for amylase; and 5-1000 ng/mL for cellulase in both water and laundry detergent matrices. The application of the developed analytical strategy to real commercial laundry detergents enabled enzyme identification and absolute quantification. For the first time, identification and absolute quantification of enzymes in laundry detergent was realized by LC-MS/MS in a single run. Graphical Abstract: Identification and quantification of enzymes by LC-MS/MS.
Framework for architecture-independent run-time reconfigurable applications
NASA Astrophysics Data System (ADS)
Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.
2000-10-01
Configurable Computing Machines (CCMs) have emerged as a technology with the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing and is known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designer's effort to port applications between different platforms as the architecture, hardware, and software evolve. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.
Keeping Cool: Use of Air Conditioning by Australians with Multiple Sclerosis
Summers, Michael P.; Simmons, Rex D.; Verikios, George
2012-01-01
Despite the known difficulties many people with MS have with high ambient temperatures, there are no reported studies of air conditioning use and MS. This study systematically examined air conditioner use by Australians with MS. A short survey was sent to all participants in the Australian MS Longitudinal Study cohort with a response rate of 76% (n = 2,385). Questions included hours of air-conditioner use, areas cooled, type and age of equipment, and the personal effects of overheating. Air conditioners were used by 81.9% of respondents, with an additional 9.6% who could not afford an air conditioner. Regional and seasonal variation in air conditioning use was reported, with a national annual mean of 1,557 hours running time. 90.7% reported negative effects from overheating including increased fatigue, an increase in other MS symptoms, reduced household and social activities, and reduced work capacity. Households that include people with MS spend between 4 and 12 times more on keeping cool than average Australian households. PMID:22548176
Data reduction of isotope-resolved LC-MS spectra.
Du, Peicheng; Sudha, Rajagopalan; Prystowsky, Michael B; Angeletti, Ruth Hogue
2007-06-01
Data reduction of liquid chromatography-mass spectrometry (LC-MS) spectra can be a challenge due to the inherent complexity of biological samples, noise and non-flat baseline. We present a new algorithm, LCMS-2D, for reliable data reduction of LC-MS proteomics data. LCMS-2D can reliably reduce LC-MS spectra with multiple scans to a list of elution peaks, and subsequently to a list of peptide masses. It is capable of noise removal, and of deconvoluting peaks that overlap in m/z, in retention time, or both, by using a novel iterative peak-picking step, a 'rescue' step, and a modified variable selection method. LCMS-2D performs well with three sets of annotated LC-MS spectra, yielding results that are better than those from PepList, msInspect and the vendor software BioAnalyst. The software LCMS-2D is available under the GNU General Public License from http://www.bioc.aecom.yu.edu/labs/angellab/ as a standalone C program running on Linux.
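LCMS-2D's own iterative peak-picking and 'rescue' steps are more involved, but the basic elution-peak detection stage can be approximated with off-the-shelf tools. A Python sketch using SciPy on one extracted-ion chromatogram; the noise estimate and thresholds are invented heuristics:

```python
import numpy as np
from scipy.signal import find_peaks

def pick_elution_peaks(rt_min, intensity, min_snr=5.0):
    """Detect elution peaks on one extracted-ion chromatogram."""
    # robust noise scale: median absolute deviation of the trace
    noise = np.median(np.abs(intensity - np.median(intensity))) + 1e-12
    idx, _ = find_peaks(intensity, height=min_snr * noise, prominence=min_snr * noise)
    return [(float(rt_min[i]), float(intensity[i])) for i in idx]

rt = np.linspace(0.0, 30.0, 1800)
xic = (1e4 * np.exp(-0.5 * ((rt - 12.3) / 0.08) ** 2)       # synthetic elution peak
       + np.random.default_rng(2).normal(0.0, 2.0, rt.size)) # detector noise
print(pick_elution_peaks(rt, xic))  # should report the peak near 12.3 min
```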
Peng, Minzhi; Liu, Li; Jiang, Minyan; Liang, Cuili; Zhao, Xiaoyuan; Cai, Yanna; Sheng, Huiying; Ou, Zhiying; Luo, Hong
2013-08-01
Measurement of carnitine and acylcarnitines in plasma is important in the diagnosis of fatty acid β-oxidation disorders and organic acidemias. The usual method uses flow injection tandem mass spectrometry (FIA-MS/MS), which has limitations. A rapid and more accurate method was developed for high-risk screening and diagnosis. Carnitine and acylcarnitines were separated by hydrophilic interaction liquid chromatography (HILIC) without derivatization and detected with a QTRAP MS/MS system. Total analysis time was 9.0 min. Within- and between-run imprecision were less than 6% and 17%, respectively. Recoveries were in the range of 85-110% at three concentrations. Some acylcarnitine isomers could be separated, such as dicarboxylic and hydroxyl acylcarnitines. The method could also separate interferents to avoid false positive results. 216 normal samples and 116 patient samples were analyzed with the validated method, and 49 patients were identified with fatty acid oxidation disorders or organic acidemias. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Lewis, B. W.; Brown, K. G.; Wood, G. M., Jr.; Puster, R. L.; Paulin, P. A.; Fishel, C. E.; Ellerbe, D. A.
1986-01-01
Knowledge of test gas composition is important in wind-tunnel experiments measuring aerothermodynamic interactions. This paper describes measurements made by sampling the top of the test section during runs of the Langley 7-Inch High-Temperature Tunnel. The tests were conducted to determine the mixing of gas injected from a flat-plate model into a combustion-heated hypervelocity test stream and to monitor the CO2 produced in the combustion. The Mass Spectrometric (MS) measurements yield the mole fraction of N2 or He and CO2 reaching the sample inlets. The data obtained for several tunnel run conditions are related to the pressures measured in the tunnel test section and at the MS ionizer inlet. The apparent distributions of injected gas species and tunnel gas (CO2) are discussed relative to the sampling techniques. The measurements provided significant real-time data for the distribution of injected gases in the test section. The jet N2 diffused readily from the test stream, but the jet He was mostly entrained. The amounts of CO2 and Ar diffusing upward in the test section for several run conditions indicated the variability of the combustion-gas test-stream composition.
Scilab software package for the study of dynamical systems
NASA Astrophysics Data System (ADS)
Bordeianu, C. C.; Beşliu, C.; Jipa, Al.; Felea, D.; Grossu, I. V.
2008-05-01
This work presents a new software package for the study of chaotic flows and maps. The codes were written using Scilab, a software package for numerical computations providing a powerful open computing environment for engineering and scientific applications. It was found that Scilab provides various functions for ordinary differential equation solving, Fast Fourier Transform, autocorrelation, and excellent 2D and 3D graphical capabilities. The chaotic behaviors of the nonlinear dynamical systems were analyzed using phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropy. Various well known examples are implemented, with the capability for users to insert their own ODEs.
Program summary
Program title: Chaos
Catalogue identifier: AEAP_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAP_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 885
No. of bytes in distributed program, including test data, etc.: 5925
Distribution format: tar.gz
Programming language: Scilab 3.1.1
Computer: PC-compatible running Scilab on MS Windows or Linux
Operating system: Windows XP, Linux
RAM: below 100 Megabytes
Classification: 6.2
Nature of problem: Any physical model containing linear or nonlinear ordinary differential equations (ODEs).
Solution method: Numerical solving of ordinary differential equations. The chaotic behavior of the nonlinear dynamical system is analyzed using Poincaré sections, phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropies.
Restrictions: The package routines are normally able to handle ODE systems of high orders (up to order twelve and possibly higher), depending on the nature of the problem.
Running time: 10 to 20 seconds for problems that do not involve Lyapunov exponent calculation; 60 to 1000 seconds for problems that involve high-order ODEs and Lyapunov exponent calculation.
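As a concrete instance of the kind of analysis the package automates, the largest Lyapunov exponent of a one-dimensional map is just the orbit average of ln|f'(x)|. Shown here in Python rather than Scilab, for the logistic map, where the exact answer at r = 4 is ln 2:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n=100_000, burn=1000):
    """Average log|f'(x)| along the orbit of the logistic map x -> r x (1 - x)."""
    x = x0
    for _ in range(burn):               # discard the transient
        x = r * x * (1.0 - x)
    s = 0.0
    for _ in range(n):
        s += math.log(abs(r * (1.0 - 2.0 * x)))   # |f'(x)| = |r(1 - 2x)|
        x = r * x * (1.0 - x)
    return s / n

print(lyapunov_logistic())   # ~0.693, i.e. ln 2: the orbit is chaotic
```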
Robotic goalie with 3 ms reaction time at 4% CPU load using event-based dynamic vision sensor
Delbruck, Tobi; Lang, Manuel
2013-01-01
Conventional vision-based robotic systems that must operate quickly require high video frame rates and consequently high computational costs. Visual response latencies are lower-bounded by the frame period, e.g., 20 ms for a 50 Hz frame rate. This paper shows how an asynchronous neuromorphic dynamic vision sensor (DVS) silicon retina is used to build a fast self-calibrating robotic goalie, which offers high update rates and low latency at low CPU load. Independent and asynchronous per-pixel illumination change events from the DVS signify moving objects and are used in software to track multiple balls. Motor actions to block the most "threatening" ball are based on measured ball positions and velocities. The goalie also sees its single-axis goalie arm and calibrates the motor output map during idle periods so that it can plan open-loop arm movements to desired visual locations. Blocking capability is about 80% for balls shot from 1 m from the goal even with the fastest shots, and approaches 100% accuracy when the ball does not beat the limits of the servo motor to move the arm to the necessary position in time. Running with standard USB buses under a standard preemptive multitasking operating system (Windows), the goalie robot achieves median update rates of 550 Hz, with latencies of 2.2 ± 2 ms from ball movement to motor command at a peak CPU load of less than 4%. Practical observations and measurements of USB device latency are provided. PMID:24311999
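The per-event update that makes such low latencies possible is tiny: there is no frame to process, and each incoming event only nudges a cluster estimate. A schematic Python version of an event-driven tracker; the mixing factor is illustrative, and the goalie's actual tracker also handles multiple balls and velocity estimation:

```python
class EventTracker:
    """Event-driven cluster tracker: each DVS event nudges the estimate (sketch)."""
    def __init__(self, x0=64.0, y0=64.0, mix=0.05):
        self.x, self.y, self.mix = x0, y0, mix

    def update(self, ex, ey):
        # no frames: the position is refined a small step per event,
        # so estimates can be updated at microsecond event rates
        self.x += self.mix * (ex - self.x)
        self.y += self.mix * (ey - self.y)
        return self.x, self.y

tracker = EventTracker()
for ex, ey in [(80, 60), (82, 61), (81, 59)] * 30:   # synthetic event addresses
    pos = tracker.update(ex, ey)
print(pos)   # converges toward the event cluster around (81, 60)
```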
STS-41 MS Shepherd uses DTO 1206 portable computer on OV-103's middeck
1990-10-10
STS-41 Mission Specialist (MS) William M. Shepherd uses Detailed Test Objective (DTO) Space Station Cursor Control Device Evaluation MACINTOSH portable computer on the middeck of Discovery, Orbiter Vehicle (OV) 103. The computer is velcroed to forward lockers MF71C and MF71E. Surrounding Shepherd are checklists, the field sequential (FS) crew cabin camera, and a lighting fixture.
Progress in Machine Learning Studies for the CMS Computing Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo
Here, computing systems for LHC experiments were developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for LHC experiments and its recent evolutions can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of LHC experiments in Run-1 and Run-2 so far.
Hereford, Richard
2006-01-01
The software described here is used to process and analyze daily weather and surface-water data. The programs are refinements of earlier versions that include minor corrections and routines to calculate frequencies above a threshold on an annual or seasonal basis. Earlier versions of this software were used successfully to analyze historical precipitation patterns of the Mojave Desert and the southern Colorado Plateau regions, ecosystem response to climate variation, and variation of sediment-runoff frequency related to climate (Hereford and others, 2003; 2004; in press; Griffiths and others, 2006). The main program described here (Day_Cli_Ann_v5.3) uses daily data to develop a time series of various statistics for a user-specified accounting period such as a year or season. The statistics include averages and totals, but the emphasis is on the frequency of occurrence in days of relatively rare weather or runoff events. These statistics are indices of climate variation; for a discussion of climate indices, see the Climate Research Unit website of the University of East Anglia (http://www.cru.uea.ac.uk/projects/stardex/) and the Climate Change Indices web site (http://cccma.seos.uvic.ca/ETCCDMI/indices.html). Specifically, the indices computed with this software are the frequency of high intensity 24-hour rainfall, unusually warm temperature, and unusually high runoff. These rare, or extreme, events are those greater than the 90th percentile of precipitation, streamflow, or temperature computed for the period of record of weather or gaging stations. If they cluster in time over several decades, extreme events may produce detectable change in the physical landscape and ecosystem of a given region. Although the software has been tested on a variety of data, as with any software, the user should carefully evaluate the results with their data. The programs were designed for the range of precipitation, temperature, and streamflow measurements expected in the semiarid Southwest United States. The user is encouraged to review the examples provided with the software. The software is written in Fortran 90 with Fortran 95 extensions and was compiled with the Digital Visual Fortran compiler version 6.6. The executables run on Windows 2000 and XP, and they operate in an MS-DOS console window that has only very simple graphical options such as font size and color, background color, and size of the window. Error trapping was not written into the programs. Typically, when an error occurs, the console window closes without a message.
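The core index (days per accounting period exceeding the period-of-record 90th percentile) is easy to restate. A Python sketch with a calendar-year accounting period and synthetic rainfall; the original is Fortran and also supports seasonal periods and other thresholds:

```python
import datetime as dt
import numpy as np

def annual_exceedances(dates, values, pct=90.0):
    """Days per year above the percentile computed over the whole record."""
    values = np.asarray(values, dtype=float)
    threshold = np.nanpercentile(values, pct)      # period-of-record threshold
    years = np.array([d.year for d in dates])
    return {int(y): int(np.sum((years == y) & (values > threshold)))
            for y in np.unique(years)}

days = [dt.date(1950, 1, 1) + dt.timedelta(i) for i in range(20000)]
rain = np.random.default_rng(3).gamma(0.3, 5.0, len(days))  # synthetic daily rainfall
print(list(annual_exceedances(days, rain).items())[:5])
```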
Jones, Drew R; Wu, Zhiping; Chauhan, Dharminder; Anderson, Kenneth C; Peng, Junmin
2014-04-01
Global metabolomics relies on highly reproducible and sensitive detection of a wide range of metabolites in biological samples. Here we report the optimization of metabolome analysis by nanoflow ultraperformance liquid chromatography coupled to high-resolution orbitrap mass spectrometry. Reliable peak features were extracted from the LC-MS runs based on mandatory detection in duplicates and additional noise filtering according to blank injections. The run-to-run variation in peak area showed a median of 14%, and the false discovery rate during a mock comparison was evaluated. To maximize the number of peak features identified, we systematically characterized the effect of sample loading amount, gradient length, and MS resolution. The number of features initially rose and later reached a plateau as a function of sample amount, fitting a hyperbolic curve. Longer gradients improved unique feature detection in part by time-resolving isobaric species. Increasing the MS resolution up to 120000 also aided in the differentiation of near isobaric metabolites, but higher MS resolution reduced the data acquisition rate and conferred no benefits, as predicted from a theoretical simulation of possible metabolites. Moreover, a biphasic LC gradient allowed even distribution of peak features across the elution, yielding markedly more peak features than the linear gradient. Using this robust nUPLC-HRMS platform, we were able to consistently analyze ~6500 metabolite features in a single 60 min gradient from 2 mg of yeast, equivalent to ~50 million cells. We applied this optimized method in a case study of drug (bortezomib) resistant and drug-sensitive multiple myeloma cells. Overall, 18% of metabolite features were matched to KEGG identifiers, enabling pathway enrichment analysis. Principal component analysis and heat map data correctly clustered isogenic phenotypes, highlighting the potential for hundreds of small molecule biomarkers of cancer drug resistance.
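The saturation behavior reported (feature counts rising and then plateauing with sample amount) is conveniently captured by a two-parameter hyperbola. A Python sketch of such a fit; the data points are invented, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(x, fmax, k):
    """Saturating feature count: f(x) = fmax * x / (k + x)."""
    return fmax * x / (k + x)

amount = np.array([0.25, 0.5, 1.0, 2.0, 4.0])        # mg on column (illustrative)
features = np.array([2100, 3600, 5200, 6400, 7000])  # detected features (illustrative)
(fmax, k), _ = curve_fit(hyperbola, amount, features, p0=(8000.0, 1.0))
print(f"plateau ~ {fmax:.0f} features; half-saturation ~ {k:.2f} mg")
```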
Multitasking the code ARC3D. [for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Barton, John T.; Hsiung, Christopher C.
1986-01-01
The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple-CPU computers.
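Amdahl's law, not part of the paper but useful context, shows what a wall-clock speedup above three on four processors implies about the code:

```python
def parallel_fraction(speedup, n_cpus):
    """Invert Amdahl's law S = 1 / ((1 - p) + p / N) for the parallel fraction p."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n_cpus)

print(parallel_fraction(3.0, 4))  # ~0.89: roughly 89% of the work must run in parallel
```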
NASA Technical Reports Server (NTRS)
Eberhardt, D. S.; Baganoff, D.; Stevens, K.
1984-01-01
Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.
A numerical differentiation library exploiting parallel architectures
NASA Astrophysics Data System (ADS)
Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.
2009-08-01
We present a software library for numerically estimating first- and second-order partial derivatives of a function by finite differencing. Various truncation schemes are offered, resulting in corresponding formulas that are accurate to order O(h), O(h²), and O(h⁴), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems, and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores. Program summary Program title: NDL (Numerical Differentiation Library) Catalogue identifier: AEDG_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 73 030 No. of bytes in distributed program, including test data, etc.: 630 876 Distribution format: tar.gz Programming language: ANSI FORTRAN-77, ANSI C, MPI, OPENMP Computer: Distributed systems (clusters), shared memory systems Operating system: Linux, Solaris Has the code been vectorised or parallelized?: Yes RAM: The library uses O(N) internal storage, N being the dimension of the problem Classification: 4.9, 4.14, 6.5 Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. The parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Restrictions: The library uses only double precision arithmetic. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP and 4.2 s for the MPI parallel distribution on 2 processors.
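As a rough illustration of the finite-differencing and step-selection ideas behind NDL, here is a minimal Python sketch; NDL itself is Fortran/C and selects its step more carefully, so treat the step heuristic below as an assumption.

```python
# A minimal sketch of central differencing with a step chosen to balance
# truncation error against round-off error; for an O(h^2) formula the
# near-optimal step in double precision scales like eps**(1/3).
import math

def central_diff(f, x, h=None):
    """First derivative of f at x by the O(h^2) central difference."""
    if h is None:
        eps = math.ulp(1.0)  # machine epsilon for float64
        h = eps ** (1.0 / 3.0) * max(1.0, abs(x))
    return (f(x + h) - f(x - h)) / (2.0 * h)

print(central_diff(math.sin, 1.0), math.cos(1.0))  # estimate vs. exact
```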
MONO FOR CROSS-PLATFORM CONTROL SYSTEM ENVIRONMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishimura, Hiroshi; Timossi, Chris
2006-10-19
Mono is an independent implementation of the .NET Framework by Novell that runs on multiple operating systems (including Windows, Linux, and Macintosh) and allows any .NET compatible application to run unmodified. For instance, Mono can run programs with graphical user interfaces (GUI) developed with the C# language on Windows with Visual Studio (a full port of WinForms for Mono is in progress). We present the results of tests we performed to evaluate the portability of our control-system .NET applications from MS Windows to Linux.
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Mielke, Steven L.; Clarkson, Kenneth L.; Truhlar, Donald G.
2012-08-01
We present a Fortran program package, MSTor, which calculates partition functions and thermodynamic functions of complex molecules involving multiple torsional motions by the recently proposed MS-T method. This method interpolates between the local harmonic approximation in the low-temperature limit and the limit of free internal rotation of all torsions at high temperature. The program can also carry out calculations in the multiple-structure local harmonic approximation. The program package also includes six utility codes that can be used as stand-alone programs to calculate reduced moment of inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Catalogue identifier: AEMF_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 77 434 No. of bytes in distributed program, including test data, etc.: 3 264 737 Distribution format: tar.gz Programming language: Fortran 90, C, and Perl Computer: Itasca (HP Linux cluster, each node has two-socket, quad-core 2.8 GHz Intel Xeon X5560 “Nehalem EP” processors), Calhoun (SGI Altix XE 1300 cluster, each node containing two quad-core 2.66 GHz Intel Xeon “Clovertown”-class processors sharing 16 GB of main memory), Koronis (Altix UV 1000 server with 190 6-core Intel Xeon X7542 “Westmere” processors at 2.66 GHz), Elmo (Sun Fire X4600 Linux cluster with AMD Opteron cores), and Mac Pro (two 2.8 GHz Quad-core Intel Xeon processors) Operating system: Linux/Unix/Mac OS RAM: 2 Mbytes Classification: 16.3, 16.12, 23 Nature of problem: Calculation of the partition functions and thermodynamic functions (standard-state energy, enthalpy, entropy, and free energy as functions of temperature) of complex molecules involving multiple torsional motions. Solution method: The multi-structural approximation with torsional anharmonicity (MS-T). The program also provides results for the multi-structural local harmonic approximation [1]. Restrictions: There is no limit on the number of torsions that can be included in either the Voronoi calculation or the full MS-T calculation. In practice, the range of problems that can be addressed with the present method consists of all multi-torsional problems for which one can afford to calculate all the conformations and their frequencies. Unusual features: The method can be applied to transition states as well as stable molecules. The program package also includes the hull program for the calculation of Voronoi volumes and six utility codes that can be used as stand-alone programs to calculate reduced moment-of-inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method.
Additional comments: The program package includes a manual, installation script, and input and output files for a test suite. Running time: There are 24 test runs. The running time of the test runs on a single processor of the Itasca computer is less than 2 seconds. Reference: [1] J. Zheng, T. Yu, E. Papajak, I.M. Alecu, S.L. Mielke, D.G. Truhlar, Practical methods for including torsional anharmonicity in thermochemical calculations of complex molecules: The internal-coordinate multi-structural approximation, Phys. Chem. Chem. Phys. 13 (2011) 10885-10907.
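To make the interpolation idea concrete, the sketch below uses a simple tanh switching form between the classical harmonic-oscillator and free-rotor partition functions. This is an illustrative approximation of the kind that MS-T refines, not the MSTor source, and all numerical inputs are hypothetical.

```python
# Illustrative sketch: interpolate a hindered-rotor partition function
# between the harmonic-oscillator (low T) and free-rotor (high T) limits
# with the switch q = q_HO * tanh(q_FR / q_HO). Inputs are hypothetical.
import math

KB = 1.380649e-23     # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J s
HBAR = H / (2.0 * math.pi)

def q_torsion(nu_hz, inertia_kg_m2, sigma, t_kelvin):
    q_ho = KB * t_kelvin / (H * nu_hz)  # classical harmonic oscillator
    q_fr = math.sqrt(2.0 * math.pi * inertia_kg_m2 * KB * t_kelvin) / (sigma * HBAR)
    return q_ho * math.tanh(q_fr / q_ho)

# Hypothetical methyl-like torsion: 100 cm^-1 frequency, I = 5e-47 kg m^2
nu = 100 * 2.99792458e10  # cm^-1 -> Hz
print(q_torsion(nu, 5e-47, 3, 300.0))
```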
Tomlinson, David R; Bashir, Yaver; Betts, Timothy R; Rajappan, Kim
2009-05-01
Patients with left ventricular systolic dysfunction and electrocardiographic QRS duration (QRSd) ≥120 ms may obtain symptomatic and prognostic benefits from cardiac resynchronization therapy (CRT). However, clinical trials do not describe the methods used to measure QRSd. We investigated the effect of electrocardiogram (ECG) display format and paper speed on the accuracy of manual QRSd assessment and the concordance of manual QRSd with computer-calculated mean and maximal QRSd. Six cardiologists undertook QRSd measurements on ECGs with computer-calculated mean QRSd close to 120 ms. Display formats were 12-lead, 6-limb, and 6-precordial leads, each at 25 and 50 mm/s. When the computer-calculated mean was used to define QRSd, manual assessment demonstrated 97 and 83% concordance at categorizing QRSd as <120 and ≥120 ms, respectively. Using the computer-calculated maximal QRSd, manual assessment demonstrated 83% concordance when QRSd was <120 ms and 19% concordance when QRSd was ≥120 ms. The six-precordial-lead format demonstrated significantly less intra- and inter-observer variability than the 12-lead format, but this did not improve concordance rates. Manual QRSd assessments demonstrate significant variability, and concordance with computer-calculated measurement depends on whether QRSd is defined as the mean or maximal value. Consensus is required both on the most appropriate definition of QRSd and on its measurement.
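A toy example makes the mean-versus-maximum distinction concrete; the per-lead durations below are invented borderline values, not study data.

```python
# Toy illustration of why defining QRS duration as the mean versus the
# maximum over the 12 leads changes the >=120 ms classification for a
# borderline ECG. All values are hypothetical.
import statistics

qrsd_ms_by_lead = {
    "I": 114, "II": 118, "III": 116, "aVR": 112, "aVL": 120, "aVF": 118,
    "V1": 122, "V2": 126, "V3": 124, "V4": 118, "V5": 116, "V6": 114,
}

mean_qrsd = statistics.mean(qrsd_ms_by_lead.values())
max_qrsd = max(qrsd_ms_by_lead.values())
print(f"mean {mean_qrsd:.0f} ms -> {'>=' if mean_qrsd >= 120 else '<'}120 ms")
print(f"max  {max_qrsd:.0f} ms -> {'>=' if max_qrsd >= 120 else '<'}120 ms")
```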
Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing
Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon
2011-01-01
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications, as well as infrastructure, as services over the Internet. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software that monitors application behavior at run time, analyzes the collected information, and optimizes cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation and the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data. PMID:22163811
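A minimal, stdlib-only Python sketch conveys the instrumentation idea: wrap a call, record wall time and peak memory, and report them. This illustrates the concept only; the authors' RTM instruments libraries and hardware performance counters.

```python
# Sketch of run-time monitoring by library instrumentation: a decorator
# records wall time and peak allocation for each monitored call, the kind
# of data a run-time monitor could feed to a resource optimizer.
import time
import tracemalloc
from functools import wraps

def monitored(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        tracemalloc.start()
        t0 = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - t0
            _, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            print(f"[RTM] {func.__name__}: {elapsed:.4f} s, peak {peak / 1024:.1f} KiB")
    return wrapper

@monitored
def workload(n):
    return sum(i * i for i in range(n))

workload(1_000_000)
```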
Ms2lda.org: web-based topic modelling for substructure discovery in mass spectrometry.
Wandy, Joe; Zhu, Yunfeng; van der Hooft, Justin J J; Daly, Rónán; Barrett, Michael P; Rogers, Simon
2017-09-14
We recently published MS2LDA, a method for the decomposition of sets of molecular fragment data derived from large metabolomics experiments. To make the method more widely available to the community, here we present ms2lda.org, a web application that allows users to upload their data, run MS2LDA analyses and explore the results through interactive visualisations. Ms2lda.org takes tandem mass spectrometry data in many standard formats and allows the user to infer the sets of fragment and neutral loss features that co-occur together (Mass2Motifs). As an alternative workflow, the user can also decompose a dataset onto predefined Mass2Motifs. This is accomplished through the web interface or programmatically from our web service. The website can be found at http://ms2lda.org, while the source code is available at https://github.com/sdrogers/ms2ldaviz under the MIT license. Supplementary data are available at Bioinformatics online.
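The topic-modelling core can be sketched with an off-the-shelf LDA, as below: spectra act as documents, binned fragment/neutral-loss features as words, and topics stand in for Mass2Motifs. The count matrix here is random, and ms2lda.org uses its own MS2LDA implementation rather than scikit-learn.

```python
# Conceptual sketch of MS2LDA-style decomposition with scikit-learn:
# rows are MS2 spectra, columns are discretized fragment/neutral-loss
# bins, and each LDA topic plays the role of a Mass2Motif.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
counts = rng.poisson(0.3, size=(50, 200))  # 50 spectra x 200 fragment bins

lda = LatentDirichletAllocation(n_components=5, random_state=0)
doc_topics = lda.fit_transform(counts)          # spectrum-to-motif loadings
print(doc_topics.shape, lda.components_.shape)  # (50, 5), (5, 200)
```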
1989-12-01
[Fragmented record; table-of-contents residue removed. Recoverable section headings: Interrupt Procedures; Support for a Larger Memory Model; Implementation.] ... describe the programmer's model of the hardware utilized in the microcomputers and interrupt-driven serial communication considerations. Chapter III (Central Processor Unit): the programming model of Table 2.1 is common to the Intel 8088, 8086 and 80x86 series of microprocessors used in the IBM PC/AT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify the key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective, along with guidance on adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.
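In Python terms, the profiling step the authors describe looks like the cProfile run below; the workload function is a stand-in, since EnergyPlus itself would be profiled with native-code tools.

```python
# Example of code profiling to rank subroutines by cumulative run time,
# the same technique applied to EnergyPlus above. The workload here is
# an arbitrary stand-in for a simulation step.
import cProfile
import pstats

def inner():
    return sum(i ** 0.5 for i in range(200_000))

def simulation_step():
    return [inner() for _ in range(10)]

cProfile.run("simulation_step()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)
```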
Krüger, Ralf; Vogeser, Michael; Burghardt, Stephan; Vogelsberger, Rita; Lackner, Karl J
2010-12-01
Posaconazole is a novel antifungal drug for oral application intended especially for therapy of invasive mycoses. Due to variable gastrointestinal absorption, adverse side effects, and suspected drug-drug interactions, therapeutic drug monitoring (TDM) of posaconazole is recommended. A fast ultra performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method for quantification of posaconazole with a run-time <3 min was developed and compared to an LC-MS/MS method and an HPLC method with fluorescence detection. During evaluation of UPLC-MS/MS, two earlier-eluting peaks were observed in the MRM trace of posaconazole. This was only seen in patient samples, but not in spiked calibrator samples. Comparison with LC-MS/MS disclosed a significant bias, with higher concentrations measured by LC-MS/MS, while UPLC-MS/MS showed excellent agreement with the commercially available HPLC method. In the LC-MS/MS procedure, comparably wide and left-shifted peaks were noticed. This could be ascribed to in-source fragmentation of conjugate metabolites during electrospray ionisation. Precursor and product ion scans confirmed the assumption that the additional compounds are posaconazole glucuronides. Reducing the cone voltage led to disappearance of the glucuronide peaks. Slight modification of the LC-MS/MS method enabled separation of the main interference, leading to significantly reduced deviation. These results highlight the necessity to reliably eliminate interference from labile drug metabolites for correct TDM results, either by sufficient separation or selective MS conditions. The presented UPLC-MS/MS method provides a reliable and fast assay for TDM of posaconazole.
Compressed quantum computation using a remote five-qubit quantum computer
NASA Astrophysics Data System (ADS)
Hebenstreit, M.; Alsina, D.; Latorre, J. I.; Kraus, B.
2017-05-01
The notion of compressed quantum computation is employed to simulate the Ising interaction of a one-dimensional chain consisting of n qubits using the universal IBM cloud quantum computer running on log2(n) qubits. The external field parameter that controls the quantum phase transition of this model translates into particular settings of the quantum gates that generate the circuit. We measure the magnetization, which displays the quantum phase transition, on a two-qubit system, which simulates a four-qubit Ising chain, and show its agreement with the theoretical prediction within a certain error. We also discuss the relevant point of how to assess errors when using a cloud quantum computer with a limited amount of runs. As a solution, we propose to use validating circuits, that is, to run independent controlled quantum circuits of similar complexity to the circuit of interest.
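At small n, the physics being simulated can be checked by brute force on a classical machine. The sketch below exactly diagonalizes a 4-qubit transverse-field Ising chain and reports a magnetization; the Hamiltonian conventions are chosen for illustration rather than taken from the paper.

```python
# Exact-diagonalization sketch (numpy only, not the compressed circuit):
# transverse magnetization of a small transverse-field Ising chain,
# H = -sum Z_i Z_{i+1} - g * sum X_i, which sharpens toward a phase
# transition near g = 1 as the chain grows.
import numpy as np

I2 = np.eye(2)
SX = np.array([[0, 1], [1, 0]], dtype=float)
SZ = np.array([[1, 0], [0, -1]], dtype=float)

def op_at(site_op, site, n):
    """Embed a single-site operator at `site` in an n-qubit chain."""
    out = site_op if site == 0 else I2
    for i in range(1, n):
        out = np.kron(out, site_op if i == site else I2)
    return out

def magnetization(n, g):
    h = sum(-op_at(SZ, i, n) @ op_at(SZ, i + 1, n) for i in range(n - 1))
    h = h + sum(-g * op_at(SX, i, n) for i in range(n))
    _, vecs = np.linalg.eigh(h)
    ground = vecs[:, 0]  # eigh returns ascending eigenvalues
    return sum(ground @ op_at(SX, i, n) @ ground for i in range(n)) / n

for g in (0.5, 1.0, 1.5):
    print(f"g = {g}: <sigma_x> = {magnetization(4, g):.3f}")
```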
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tzeng, Nian-Feng; White, Christopher D.; Moreman, Douglas
2012-07-14
The UCoMS research cluster has spearheaded three research areas since August 2004: wireless and sensor networks, Grid computing, and petroleum applications. The primary goals of UCoMS research are three-fold: (1) creating new knowledge to push forward the technology forefront in pertinent research on the computing and monitoring aspects of energy resource management, (2) developing and disseminating software codes and toolkits for the research community and the public, and (3) establishing system prototypes and testbeds for evaluating innovative techniques and methods. Substantial progress and diverse accomplishments have been made by research investigators in their respective areas of expertise, working cooperatively on such topics as sensors and sensor networks, wireless communication and systems, and computational Grids, particularly as relevant to petroleum applications.
A description of shock attenuation for children running.
Mercer, John A; Dufek, Janet S; Mangus, Brent C; Rubley, Mack D; Bhanot, Kunal; Aldridge, Jennifer M
2010-01-01
A growing number of children are participating in organized sport activities, resulting in a concomitant increase in lower extremity injuries. Little is known about the impact generated when children are running or how this impact is attenuated in child runners. To describe shock attenuation characteristics for children running at different speeds on a treadmill and at a single speed over ground. Prospective cohort study. Biomechanics laboratory. Eleven boys (age = 10.5 ± 0.9 years, height = 143.7 ± 8.3 cm, mass = 39.4 ± 10.9 kg) and 7 girls (age = 9.9 ± 1.1 years, height = 136.2 ± 7.7 cm, mass = 35.1 ± 9.6 kg) participated. Participants completed 4 running conditions, including 3 treadmill (TM) running speeds (preferred, fast [0.5 m/s more than preferred], and slow [0.5 m/s less than preferred]) and 1 overground (OG) running speed. We measured leg peak impact acceleration (LgPk), head peak impact acceleration (HdPk), and shock attenuation (ratio of LgPk to HdPk). Shock attenuation (F(2,16) = 4.80, P = .01) was influenced by the interaction of speed and sex. Shock attenuation increased across speeds (slow, preferred, fast) for boys (P < .05) but not for girls (P > .05). Both LgPk (F(1,16) = 5.04, P = .04) and HdPk (F(1,16) = 6.04, P = .03) were different across speeds, and both were greater for girls than for boys. None of the dependent variables were influenced by the interaction of setting (TM, OG) and sex (P ≥ .05). Shock attenuation (F(1,16) = 33.51, P < .001) and LgPk (F(1,16) = 31.54, P < .001) were different between TM and OG, and each was greater when running OG than on the TM, regardless of sex. Shock attenuation was between 66% and 76% for children running under a variety of conditions. Girls had greater peak impact accelerations at the leg and head levels than boys but achieved similar shock attenuation. We do not know how these shock attenuation characteristics are related to overuse injuries.
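The attenuation measure can be expressed in one line; the formulation below (percentage of leg acceleration not transmitted to the head) is one common convention rather than necessarily the study's exact definition, and the accelerations are hypothetical.

```python
# One common formulation of impact shock attenuation from leg and head
# peak accelerations; values here are hypothetical.
def shock_attenuation_pct(leg_peak_g: float, head_peak_g: float) -> float:
    return (1.0 - head_peak_g / leg_peak_g) * 100.0

print(f"{shock_attenuation_pct(10.0, 3.0):.0f}% attenuation")  # -> 70%
```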
A Two Element Laminar Flow Airfoil Optimized for Cruise. M.S. Thesis
NASA Technical Reports Server (NTRS)
Steen, Gregory Glen
1994-01-01
Numerical and experimental results are presented for a new two-element, fixed-geometry natural laminar flow airfoil optimized for cruise Reynolds numbers on the order of three million. The airfoil design consists of a primary element and an independent secondary element with a primary to secondary chord ratio of three to one. The airfoil was designed to improve the cruise lift-to-drag ratio while maintaining an appropriate landing capability when compared to conventional airfoils. The airfoil was numerically developed utilizing the NASA Langley Multi-Component Airfoil Analysis computer code running on a personal computer. Numerical results show a nearly 11.75 percent decrease in overall wing drag with no increase in stall speed at sailplane cruise conditions when compared to a wing based on an efficient single element airfoil. Section surface pressure, wake survey, transition location, and flow visualization results were obtained in the Texas A&M University Low Speed Wind Tunnel. Comparisons between the numerical and experimental data, the effects of the relative position and angle of the two elements, and Reynolds number variations from 8 × 10⁵ to 3 × 10⁶ for the optimum geometry case are presented.
LC coupled to ESI, MALDI and ICP MS - A multiple hyphenation for metalloproteomic studies.
Coufalíková, Kateřina; Benešová, Iva; Vaculovič, Tomáš; Kanický, Viktor; Preisler, Jan
2017-05-22
A new multiple detection arrangement for liquid chromatography (LC) that supplements conventional electrospray ionization (ESI) mass spectrometry (MS) detection with two complementary detection techniques, matrix-assisted laser desorption/ionization (MALDI) MS and substrate-assisted laser desorption inductively coupled plasma (SALD ICP) MS, has been developed. The combination of the molecular and elemental detectors in a single separation run is accomplished by utilizing a commercial MALDI target made of conductive plastic. The proposed platform provides a number of benefits in today's metalloproteomic applications, which are demonstrated by analysis of a metallothionein (MT) mixture. To maintain metallothionein complexes, separation is carried out at a neutral pH. The effluent is split; a major portion is directed to ESI MS while the remaining 1.8% fraction is deposited onto a plastic MALDI target. Dried droplets are overlaid with MALDI matrix and analysed consecutively by MALDI MS and SALD ICP MS. In the ESI MS spectra, the MT isoform complexes with metals and their stoichiometry are determined; the apoforms are revealed in the MALDI MS spectra. Quantitative determination of metallothionein isoforms is performed via determination of metals in the complexes of the individual protein isoforms using SALD ICP MS.
Zhang, Bo; Pirmoradian, Mohammad; Chernobrovkin, Alexey; Zubarev, Roman A.
2014-01-01
Based on the conventional data-dependent acquisition strategy of shotgun proteomics, we present a new workflow, DeMix, which significantly increases the efficiency of peptide identification for in-depth shotgun analysis of complex proteomes. Capitalizing on the high resolution and mass accuracy of Orbitrap-based tandem mass spectrometry, we developed a simple deconvolution method of “cloning” chimeric tandem spectra for cofragmented peptides. In addition to a database search, a simple rescoring scheme utilizes mass accuracy and converts the unwanted cofragmentation events into a surprising advantage of multiplexing. With the combination of cloning and rescoring, we obtained on average nine peptide-spectrum matches per second on a Q-Exactive workbench, whereas the actual MS/MS acquisition rate was close to seven spectra per second. This efficiency boost to 1.24 identified peptides per MS/MS spectrum enabled analysis of over 5000 human proteins in single-dimensional LC-MS/MS shotgun experiments with only a two-hour gradient. These findings suggest a change in the dominant “one MS/MS spectrum - one peptide” paradigm for data acquisition and analysis in shotgun data-dependent proteomics. DeMix also demonstrated higher robustness than conventional approaches in terms of lower variation among the results of consecutive LC-MS/MS runs. PMID:25100859
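At its core, the mass-accuracy rescoring idea reduces to a ppm-tolerance filter like the toy sketch below; the peptides, masses, and tolerance are invented for illustration, and this is not the DeMix code.

```python
# Toy sketch of a mass-accuracy filter of the kind that powers
# DeMix-style rescoring: keep candidate peptide-spectrum matches whose
# precursor mass error falls within a tight ppm tolerance.
def ppm_error(observed_mz: float, theoretical_mz: float) -> float:
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

candidates = [  # (peptide, theoretical m/z) for one chimeric spectrum
    ("PEPTIDER", 478.7312),
    ("SAMPLEK", 478.7401),
]
observed = 478.7315
tolerance_ppm = 5.0

accepted = [(p, m) for p, m in candidates
            if abs(ppm_error(observed, m)) <= tolerance_ppm]
print(accepted)  # only matches consistent with the observed precursor mass
```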