Sample records for CYBER-203 vector processor

  1. Evaluation of the SPAR thermal analyzer on the CYBER-203 computer

    NASA Technical Reports Server (NTRS)

    Robinson, J. C.; Riley, K. M.; Haftka, R. T.

    1982-01-01

    The use of the CYBER 203 vector computer for thermal analysis is investigated. Strengths of the CYBER 203 include the ability to perform, in vector mode using a 64-bit word, 50 million floating point operations per second (MFLOPS) for addition and subtraction, 25 MFLOPS for multiplication, and 12.5 MFLOPS for division. The speed of scalar operation is comparable to that of a CDC 7600 and is some 2 to 3 times faster than Langley's CYBER 175s. The CYBER 203 has 1,048,576 64-bit words of real memory with an 80 nanosecond (nsec) access time. Memory is bit addressable and provides single error correction, double error detection (SECDED) capability. The virtual memory capability handles data in either 512- or 65,536-word pages. The machine has 256 registers with a 40 nsec access time. The weaknesses of the CYBER 203 include the amount of vector operation overhead and some data storage limitations. In vector operations there is considerable startup time before the first result is produced, so that vector calculation is slower than scalar operation for short vectors.
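
    The startup-overhead tradeoff described above can be sketched with a toy timing model. The constants below are illustrative assumptions, not measured CYBER 203 figures: vector execution pays a fixed startup cost plus a fast per-element cost, so there is a crossover length below which scalar code wins.

```python
# Toy model of vector vs. scalar timing (all constants are assumptions
# chosen for illustration, not measured CYBER 203 values).

def vector_time(n, startup=50.0, per_result=1.0):
    """Cycles to process n elements in vector mode: fixed pipe startup
    cost, then one result per `per_result` cycles."""
    return startup + per_result * n

def scalar_time(n, per_result=4.0):
    """Cycles to process n elements one at a time, no startup cost."""
    return per_result * n

def crossover_length(startup=50.0, vec_rate=1.0, scalar_rate=4.0):
    """Smallest vector length for which vector mode is at least as fast
    as scalar mode."""
    n = 1
    while vector_time(n, startup, vec_rate) > scalar_time(n, scalar_rate):
        n += 1
    return n
```

    With these assumed constants the crossover falls at 17 elements; on real hardware the break-even length depends on the operation and on memory traffic.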

  2. Use of CYBER 203 and CYBER 205 computers for three-dimensional transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Melson, N. D.; Keller, J. D.

    1983-01-01

    Experiences are discussed for modifying two three-dimensional transonic flow computer programs (FLO 22 and FLO 27) for use on the CDC CYBER 203 computer system. Both programs were originally written for use on serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine: leaving the program in a scalar form (i.e., serial computation) with compiler software used to optimize and vectorize the program, vectorizing parts of the existing algorithm in the program, and incorporating a vectorizable algorithm (ZEBRA I or ZEBRA II) in the program. Comparison runs of the programs were made on CDC CYBER 175, CYBER 203, and two-pipe CDC CYBER 205 computer systems.

  3. CYBER-205 Devectorizer

    NASA Technical Reports Server (NTRS)

    Lakeotes, Christopher D.

    1990-01-01

    DEVECT (CYBER-205 Devectorizer) is a CYBER-205 FORTRAN source-language preprocessor that reduces vector statements to standard FORTRAN. In addition, DEVECT has many other standard and optional features that simplify conversion of CYBER 200 vector-processor programs to other computers. It is written in FORTRAN IV.

  4. Experiences in using the CYBER 203 for three-dimensional transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Melson, N. D.; Keller, J. D.

    1982-01-01

    In this paper, the authors report on some of their experiences modifying two three-dimensional transonic flow programs (FLO22 and FLO27) for use on the NASA Langley Research Center CYBER 203. Both of the programs discussed were originally written for use on serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine, including: (1) leaving the program in a scalar form (i.e., serial computation) with compiler software used to optimize and vectorize the program, (2) vectorizing parts of the existing algorithm in the program, and (3) incorporating a new vectorizable algorithm (ZEBRA I or ZEBRA II) in the program.

  5. Navier-Stokes solution on the CYBER-203 by a pseudospectral technique

    NASA Technical Reports Server (NTRS)

    Lambiotte, J. J.; Hussaini, M. Y.; Bokhari, S.; Orszag, S. A.

    1983-01-01

    A three-level, time-split, mixed spectral/finite difference method for the numerical solution of the three-dimensional, compressible Navier-Stokes equations has been developed and implemented on the Control Data Corporation (CDC) CYBER-203. This method uses a spectral representation for the flow variables in the streamwise and spanwise coordinates, and central differences in the normal direction. The five dependent variables are interleaved one horizontal plane at a time and the array of their values at the grid points of each horizontal plane is a typical vector in the computation. The code is organized so as to require, per time step, a single forward-backward pass through the entire data base. The one- and two-dimensional Fast Fourier Transforms are performed using software especially developed for the CYBER-203.

  6. Algorithms for solving large sparse systems of simultaneous linear equations on vector processors

    NASA Technical Reports Server (NTRS)

    David, R. E.

    1984-01-01

    Very efficient algorithms for solving large sparse systems of simultaneous linear equations have been developed for serial processing computers. These involve a reordering of matrix rows and columns in order to obtain a near triangular pattern of nonzero elements. Then an LU factorization is developed to represent the matrix inverse in terms of a sequence of elementary Gaussian eliminations, or pivots. In this paper it is shown how these algorithms are adapted for efficient implementation on vector processors. Results obtained on the CYBER 200 Model 205 are presented for a series of large test problems which show the comparative advantages of the triangularization and vector processing algorithms.
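
    As a reminder of the kernel being vectorized, the "sequence of elementary Gaussian eliminations, or pivots" amounts to an LU factorization like the following dense sketch (pivot reordering omitted for brevity; pure Python for clarity, not CYBER 200 vector code):

```python
def lu_factor(A):
    """In-place Doolittle LU factorization without row interchanges;
    assumes the prior reordering has produced a stably factorable matrix.
    On exit, the strict lower triangle holds the elimination multipliers
    and the upper triangle holds U."""
    n = len(A)
    for k in range(n):
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]                 # multiplier for this pivot
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]   # elementary elimination
    return A

def lu_solve(LU, b):
    """Solve LU x = b via forward then back substitution."""
    n = len(LU)
    y = b[:]
    for i in range(n):                  # forward substitution (unit lower)
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    x = y[:]
    for i in reversed(range(n)):        # back substitution
        for j in range(i + 1, n):
            x[i] -= LU[i][j] * x[j]
        x[i] /= LU[i][i]
    return x
```

    In the sparse setting the paper describes, the same eliminations are applied only where the reordered nonzero pattern requires them, which is what makes the near-triangular ordering pay off.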

  7. Numerical algorithms for finite element computations on concurrent processors

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1986-01-01

    The work of several graduate students related to the NASA grant is briefly summarized. One student has worked on a detailed analysis of the so-called ijk forms of Gaussian elimination and Cholesky factorization on concurrent processors. Another student has worked on the vectorization of the incomplete Cholesky conjugate gradient method on the CYBER 205. Two more students implemented various versions of Gaussian elimination and Cholesky factorization on the FLEX/32.

  8. The seasonal-cycle climate model

    NASA Technical Reports Server (NTRS)

    Marx, L.; Randall, D. A.

    1981-01-01

    The seasonal cycle run which will become the control run for the comparison with runs utilizing codes and parameterizations developed by outside investigators is discussed. The climate model currently exists in two parallel versions: one running on the Amdahl and the other running on the CYBER 203. These two versions are as nearly identical as machine capability and the requirement for high speed performance will allow. Developmental changes are made on the Amdahl/CMS version for ease of testing and rapidity of turnaround. The changes are subsequently incorporated into the CYBER 203 version using vectorization techniques where speed improvement can be realized. The 400 day seasonal cycle run serves as a control run for both medium and long range climate forecasts and sensitivity studies.

  9. Transition to turbulence in plane channel flows

    NASA Technical Reports Server (NTRS)

    Biringen, S.

    1984-01-01

    Results obtained from a numerical simulation of the final stages of transition to turbulence in plane channel flow are described. Three dimensional, incompressible Navier-Stokes equations are numerically integrated to obtain the time evolution of two and three dimensional finite amplitude disturbances. Computations are performed on the CYBER-203 vector processor for a 32x51x32 grid. Results are presented for no-slip boundary conditions at the solid walls as well as for periodic suction blowing to simulate active control of transition by mass transfer. Solutions indicate that the method is capable of simulating the complex character of vorticity dynamics during the various stages of transition and final breakdown. In particular, evidence points to the formation of a lambda-shape vortex and the subsequent system of horseshoe vortices inclined to the main flow direction as the main elements of transition. Calculations involving periodic suction-blowing indicate that interference with a wave of suitable phase and amplitude reduces the disturbance growth rates.

  10. Experiences with explicit finite-difference schemes for complex fluid dynamics problems on STAR-100 and CYBER-203 computers

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.

    1982-01-01

    Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations which were then solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and data base structure for three-dimensional computer codes, which eliminate or reduce page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data in each step. As a result, in-core grid points were increased in number by 50% to 150,000, with a 10% execution time increase. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute rate limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.

  11. Survey of new vector computers: The CRAY 1S from CRAY research; the CYBER 205 from CDC and the parallel computer from ICL - architecture and programming

    NASA Technical Reports Server (NTRS)

    Gentzsch, W.

    1982-01-01

    Problems which can arise with vector and parallel computers are discussed in a user-oriented context. Emphasis is placed on the algorithms used and the programming techniques adopted. Three recently developed supercomputers are examined and typical application examples are given in CRAY FORTRAN, CYBER 205 FORTRAN and DAP (distributed array processor) FORTRAN. The systems' performance is compared. The addition of parts of two N x N arrays is considered. The influence of the architecture on the algorithms and programming language is demonstrated. Numerical analysis of magnetohydrodynamic differential equations by an explicit difference method is illustrated, showing very good results for all three systems. The prognosis for supercomputer development is assessed.

  12. Fundamental organometallic reactions: Applications on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Rappe, A. K.

    1984-01-01

    Two of the most challenging problems of Organometallic chemistry (loosely defined) are pollution control with the large space velocities needed and nitrogen fixation, a process so capably done by nature and so relatively poorly done by man (industry). For a computational chemist these problems are on the fringe of what is possible with conventional computers (large models needed and accurate energetics required). A summary of the algorithmic modification needed to address these problems on a vector processor such as the CYBER 205 and a sketch of findings to date on deNOx catalysis and nitrogen fixation are presented.

  13. An M-step preconditioned conjugate gradient method for parallel computation

    NASA Technical Reports Server (NTRS)

    Adams, L.

    1983-01-01

    This paper describes a preconditioned conjugate gradient method that can be effectively implemented on both vector machines and parallel arrays to solve sparse symmetric and positive definite systems of linear equations. The implementation on the CYBER 203/205 and on the Finite Element Machine is discussed and results obtained using the method on these machines are given.
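
    The iteration in question is standard preconditioned conjugate gradient. The sketch below uses a simple Jacobi (diagonal) preconditioner as a stand-in for the paper's m-step preconditioner, since both plug into the same loop structure:

```python
# Minimal preconditioned conjugate gradient in pure Python. The Jacobi
# (diagonal) preconditioner here is an illustrative stand-in for the
# m-step preconditioner of the paper; only the preconditioner solve
# (the z = M^{-1} r step) would differ.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def pcg(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive definite A."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                     # residual b - A x0
    M_inv = [1.0 / A[i][i] for i in range(n)]    # Jacobi preconditioner
    z = [M_inv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = dot(r, z)
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

    Every statement in the loop is a vector operation or a dot product, which is why the method maps well onto both vector machines and parallel arrays.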

  14. Comparison of the MPP with other supercomputers for LANDSAT data processing

    NASA Technical Reports Server (NTRS)

    Ozga, Martin

    1987-01-01

    The Massively Parallel Processor (MPP) is compared to the CRAY X-MP and the CYBER-205 for LANDSAT data processing. The maximum likelihood classification algorithm is the basis for comparison since this algorithm is simple to implement and vectorizes very well. The algorithm was implemented on all three machines and tested by classifying the same full scene of LANDSAT multispectral scanner data. Timings are compared as well as features of the machines and available software.

  15. A method for computation of inviscid three-dimensional flow over blunt bodies having large embedded subsonic regions

    NASA Technical Reports Server (NTRS)

    Weilmuenster, K. J.; Hamilton, H. H., II

    1981-01-01

    A computational technique for computing the three-dimensional inviscid flow over blunt bodies having large regions of embedded subsonic flow is detailed. Results, which were obtained using the CDC Cyber 203 vector processing computer, are presented for several analytic shapes with some comparison to experimental data. Finally, windward surface pressure computations over the first third of the Space Shuttle vehicle are compared with experimental data for angles of attack between 25 and 45 degrees.

  16. Vectorized schemes for conical potential flow using the artificial density method

    NASA Technical Reports Server (NTRS)

    Bradley, P. F.; Dwoyer, D. L.; South, J. C., Jr.; Keen, J. M.

    1984-01-01

    A method is developed to determine solutions to the full-potential equation for steady supersonic conical flow using the artificial density method. Various update schemes used generally for transonic potential solutions are investigated. The schemes are compared for speed and robustness. All versions of the computer code have been vectorized and are currently running on the CYBER-203 computer. The update schemes are vectorized, where possible, either fully (explicit schemes) or partially (implicit schemes). Since each version of the code differs only by the update scheme and elements other than the update scheme are completely vectorizable, comparisons of computational effort and convergence rate among schemes are a measure of the specific scheme's performance. Results are presented for circular and elliptical cones at angle of attack for subcritical and supercritical crossflows.

  17. Preliminary study of the use of the STAR-100 computer for transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Keller, J. D.; Jameson, A.

    1977-01-01

    An explicit method for solving the transonic small-disturbance potential equation is presented. This algorithm, which is suitable for the new vector-processor computers such as the CDC STAR-100, is compared to successive line over-relaxation (SLOR) on a simple test problem. The convergence rate of the explicit scheme is slower than that of SLOR; however, the efficiency of the explicit scheme on the STAR-100 computer is sufficient to overcome the slower convergence rate and allow an overall speedup compared to SLOR on the CYBER 175 computer.

  18. Research in computer science

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1984-01-01

    Several short summaries of the work performed during this reporting period are presented. Topics discussed in this document include: (1) resilient seeded errors via simple techniques; (2) knowledge representation for engineering design; (3) analysis of faults in a multiversion software experiment; (4) implementation of a parallel programming environment; (5) symbolic execution of concurrent programs; (6) two computer graphics systems for visualization of pressure distribution and convective density particles; (7) design of a source code management system; (8) vectorizing incomplete conjugate gradient on the Cyber 203/205; (9) extensions of domain testing theory; and (10) a performance analyzer for the PISCES system.

  19. Computation of transonic potential flow about 3 dimensional inlets, ducts, and bodies

    NASA Technical Reports Server (NTRS)

    Reyhner, T. A.

    1982-01-01

    An analysis was developed and a computer code, P465 Version A, written for the prediction of transonic potential flow about three dimensional objects including inlet, duct, and body geometries. Finite differences and line relaxation are used to solve the complete potential flow equation. The coordinate system used for the calculations is independent of body geometry. Cylindrical coordinates are used for the computer code. The analysis is programmed in extended FORTRAN 4 for the CYBER 203 vector computer. The programming of the analysis is oriented toward taking advantage of the vector processing capabilities of this computer. Comparisons of computed results with experimental measurements are presented to verify the analysis. Descriptions of program input and output formats are also presented.

  20. User's guide for NASCRIN: A vectorized code for calculating two-dimensional supersonic internal flow fields

    NASA Technical Reports Server (NTRS)

    Kumar, A.

    1984-01-01

    A computer program NASCRIN has been developed for analyzing two-dimensional flow fields in high-speed inlets. It solves the two-dimensional Euler or Navier-Stokes equations in conservation form by an explicit, two-step finite-difference method. An explicit-implicit method can also be used at the user's discretion for viscous flow calculations. For turbulent flow, an algebraic, two-layer eddy-viscosity model is used. The code is operational on the CDC CYBER 203 computer system and is highly vectorized to take full advantage of the vector-processing capability of the system. It is highly user oriented and is structured in such a way that for most supersonic flow problems, the user has to make only a few changes. Although the code is primarily written for supersonic internal flow, it can be used with suitable changes in the boundary conditions for a variety of other problems.
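
    For illustration of the "explicit, two-step finite-difference method", here is a MacCormack-type predictor-corrector step applied to the linear advection equation on a periodic grid. This is an assumed stand-in for exposition; NASCRIN's actual discretization of the Euler/Navier-Stokes equations is defined in the user's guide itself.

```python
# Sketch of an explicit two-step (predictor-corrector) scheme of the
# MacCormack type, applied to u_t + a u_x = 0 on a periodic grid.
# Illustrates the two-step structure only, not NASCRIN's actual scheme.

def maccormack_step(u, cfl):
    """Advance one time step; cfl = a * dt / dx."""
    n = len(u)
    # Predictor: forward difference
    up = [u[i] - cfl * (u[(i + 1) % n] - u[i]) for i in range(n)]
    # Corrector: backward difference on the predicted values, then average
    return [0.5 * (u[i] + up[i] - cfl * (up[i] - up[(i - 1) % n]))
            for i in range(n)]
```

    Both sweeps are independent over grid points, so on a vector machine each one maps to a handful of long vector operations over the whole field.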

  1. Radiation of sound from unflanged cylindrical ducts

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Bayliss, A.

    1983-01-01

    Calculations of sound radiated from unflanged cylindrical ducts are presented. The numerical simulation models the problem of an aero-engine inlet. The time dependent linearized Euler equations are solved from a state of rest until a harmonic solution is attained. A fourth order accurate finite difference scheme is used and solutions are obtained from a fully vectorized Cyber-203 computer program. Cases of both plane waves and spin modes are treated. Spin modes model the sound generated by a turbofan engine. Boundary conditions for both plane waves and spin modes are treated. Solutions obtained are compared with experiments conducted at NASA Langley Research Center.

  2. Comparisons of some large scientific computers

    NASA Technical Reports Server (NTRS)

    Credeur, K. R.

    1981-01-01

    In 1975, the National Aeronautics and Space Administration (NASA) began studies to assess the technical and economic feasibility of developing a computer having sustained computational speed of one billion floating point operations per second and a working memory of at least 240 million words. Such a powerful computer would allow computational aerodynamics to play a major role in aeronautical design and advanced fluid dynamics research. Based on favorable results from these studies, NASA proceeded with developmental plans. The computer was named the Numerical Aerodynamic Simulator (NAS). To help ensure that the estimated cost, schedule, and technical scope were realistic, a brief study was made of past large scientific computers. Large discrepancies between inception and operation in scope, cost, or schedule were studied so that they could be minimized with NASA's proposed new computer. The main computers studied were the ILLIAC IV, STAR 100, Parallel Element Processor Ensemble (PEPE), and Shuttle Mission Simulator (SMS) computer. Comparison data on memory and speed were also obtained on the IBM 650, 704, 7090, 360-50, 360-67, 360-91, and 370-195; the CDC 6400, 6600, 7600, CYBER 203, and CYBER 205; CRAY 1; and the Advanced Scientific Computer (ASC). A few lessons learned conclude the report.

  3. Efficient sparse matrix multiplication scheme for the CYBER 203

    NASA Technical Reports Server (NTRS)

    Lambiotte, J. J., Jr.

    1984-01-01

    This work has been directed toward the development of an efficient algorithm for performing sparse matrix multiplication on the CYBER-203. The desire to provide software which gives the user the choice between the often conflicting goals of minimizing central processing unit (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of three types of storage is selected for each diagonal. For each storage type, an initialization subroutine estimates the CPU and storage requirements based upon results from previously performed numerical experimentation. These requirements are adjusted by weights provided by the user which reflect the relative importance the user places on the resources. The three storage types employed were chosen to be efficient on the CYBER-203 for diagonals which are sparse, moderately sparse, or dense; however, for many densities, no diagonal type is most efficient with respect to both resource requirements. The user-supplied weights dictate the choice.
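
    The selection logic can be sketched as follows. The per-scheme cost constants below are invented for illustration (the paper derives its estimates from timing experiments on the CYBER-203), but the structure of weighting CPU against storage and taking the cheapest scheme per diagonal is the same:

```python
# Hypothetical cost model for choosing a storage scheme per diagonal.
# All constants are illustrative assumptions, not the paper's measured
# CYBER-203 costs.

def choose_storage(density, cpu_weight=1.0, mem_weight=1.0, n=10000):
    """Pick the storage scheme minimizing the user-weighted sum of
    estimated CPU and storage costs for a diagonal of length n with the
    given nonzero density."""
    nnz = density * n
    # (cpu_cost, storage_cost) per scheme, in arbitrary units
    schemes = {
        "sparse":   (5.0 * nnz, 2.0 * nnz),   # index list: compact, gather cost
        "moderate": (2.0 * nnz + 0.1 * n,     # bit map plus packed values
                     1.0 * nnz + 0.1 * n),
        "dense":    (1.0 * n, 1.0 * n),       # full diagonal: fast, fixed size
    }
    def weighted(costs):
        cpu, mem = costs
        return cpu_weight * cpu + mem_weight * mem
    return min(schemes, key=lambda k: weighted(schemes[k]))
```

    Raising `cpu_weight` relative to `mem_weight` shifts the break-even densities toward the denser, faster schemes, which is exactly the tradeoff the user-supplied weights control.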

  4. Treecode with a Special-Purpose Processor

    NASA Astrophysics Data System (ADS)

    Makino, Junichiro

    1991-08-01

    We describe an implementation of the modified Barnes-Hut tree algorithm for a gravitational N-body calculation on a GRAPE (GRAvity PipE) backend processor. GRAPE is a special-purpose computer for N-body calculations. It receives the positions and masses of particles from a host computer and then calculates the gravitational force at each coordinate specified by the host. To use this GRAPE processor with the hierarchical tree algorithm, the host computer must maintain a list of all nodes that exert force on a particle. If we create this list for each particle of the system at each timestep, the number of floating-point operations on the host and that on GRAPE would become comparable, and the increased speed obtained by using GRAPE would be small. In our modified algorithm, we create a list of nodes for many particles. Thus, the amount of the work required of the host is significantly reduced. This algorithm was originally developed by Barnes in order to vectorize the force calculation on a Cyber 205. With this algorithm, the computing time of the force calculation becomes comparable to that of the tree construction, if the GRAPE backend processor is sufficiently fast. The obtained speed-up factor is 30 to 50 for a RISC-based host computer and GRAPE-1A with a peak speed of 240 Mflops.

  5. R&D100 Finalist: Neuromorphic Cyber Microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Follett, David; Naegle, John; Suppona, Roger

    The Neuromorphic Cyber Microscope provides security analysts with unprecedented visibility of their network, computer and storage assets. This processor is the world's first practical implementation of neuromorphic technology to a major computer science mission. Working with Lewis Rhodes Labs, engineers at Sandia National Laboratories have created a device that is orders of magnitude faster at analyzing data to identify cyber-attacks.

  6. High-performance ultra-low power VLSI analog processor for data compression

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1996-01-01

    An apparatus for data compression employing a parallel analog processor. The apparatus includes an array of processor cells with N columns and M rows wherein the processor cells have an input device, memory device, and processor device. The input device is used for inputting a series of input vectors. Each input vector is simultaneously input into each column of the array of processor cells in a pre-determined sequential order. An input vector is made up of M components, ones of which are input into ones of M processor cells making up a column of the array. The memory device is used for providing ones of M components of a codebook vector to ones of the processor cells making up a column of the array. A different codebook vector is provided to each of the N columns of the array. The processor device is used for simultaneously comparing the components of each input vector to corresponding components of each codebook vector, and for outputting a signal representative of the closeness between the compared vector components. A combination device is used to combine the signal output from each processor cell in each column of the array and to output a combined signal. A closeness determination device is then used for determining which codebook vector is closest to an input vector from the combined signals, and for outputting a codebook vector index indicating which of the N codebook vectors was the closest to each input vector input into the array.
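
    The digital equivalent of the comparison the analog array performs is a nearest-codebook search. A minimal sketch (function name and data shapes are my own, not from the patent):

```python
def nearest_codebook_index(x, codebook):
    """Return the index of the codebook vector closest (squared Euclidean
    distance) to input vector x. The analog array of the patent performs
    the per-column comparisons in parallel; this loop does them serially."""
    best, best_d2 = 0, float("inf")
    for j, c in enumerate(codebook):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        if d2 < best_d2:
            best, best_d2 = j, d2
    return best
```

    Emitting only the winning index per input vector, rather than the vector itself, is what provides the compression.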

  7. Flow solution on a dual-block grid around an airplane

    NASA Technical Reports Server (NTRS)

    Eriksson, Lars-Erik

    1987-01-01

    The compressible flow around a complex fighter-aircraft configuration (fuselage, cranked delta wing, canard, and inlet) is simulated numerically using a novel grid scheme and a finite-volume Euler solver. The patched dual-block grid is generated by an algebraic procedure based on transfinite interpolation, and the explicit Runge-Kutta time-stepping Euler solver is implemented with a high degree of vectorization on a Cyber 205 processor. Results are presented in extensive graphs and diagrams and characterized in detail. The concentration of grid points near the wing apex in the present scheme is shown to facilitate capture of the vortex generated by the leading edge at high angles of attack and modeling of its interaction with the canard wake.

  8. Strategies for vectorizing the sparse matrix vector product on the CRAY XMP, CRAY 2, and CYBER 205

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Partridge, Harry

    1987-01-01

    Large, randomly sparse matrix vector products are important in a number of applications in computational chemistry, such as matrix diagonalization and the solution of simultaneous equations. Vectorization of this process is considered for the CRAY XMP, CRAY 2, and CYBER 205, using a matrix of dimension of 20,000 with from 1 percent to 6 percent nonzeros. Efficient scatter/gather capabilities add coding flexibility and yield significant improvements in performance. For the CYBER 205, it is shown that minor changes in the IO can reduce the CPU time by a factor of 50. Similar changes in the CRAY codes make a far smaller improvement.
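
    The kernel in question is a sparse matrix-vector product whose inner loop gathers irregularly indexed elements of x. In compressed-sparse-row form (one common layout, used here for illustration) it looks like:

```python
def sparse_matvec(values, col_index, row_ptr, x):
    """CSR sparse matrix-vector product y = A x. The indexed load
    x[col_index[k]] is the gather operation that dedicated scatter/gather
    hardware accelerates on vector machines."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_index[k]]   # gather from x
    return y
```

    Without hardware gather support, each of these indexed loads breaks the vector pipeline, which is why the abstract reports such large swings in performance from small coding changes.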

  9. Is It Time for a US Cyber Force?

    DTIC Science & Technology

    2015-02-17

    network of information technology (IT) and resident data, including the Internet, telecommunications networks, computer systems, and embedded processors ... and controllers. JP 3-12 further goes on to explain cyberspace in terms of three layers: physical network, logical network, and cyber-persona. ... (zero day) vulnerabilities against Microsoft operating system code using trusted hardware vendor certificates to cloak their presence. Though not

  10. Theoretical studies of Resonance Enhanced Stimulated Raman Scattering (RESRS) of frequency doubled Alexandrite laser wavelength in cesium vapor

    NASA Technical Reports Server (NTRS)

    Lawandy, Nabil M.

    1987-01-01

    The third phase of research will focus on the propagation and energy extraction of the pump and SERS beams in a variety of configurations including oscillator structures. In order to address these questions a numerical code capable of allowing for saturation and full transverse beam evolution is required. The method proposed is based on a discretized propagation energy extraction model which uses a Kirchoff integral propagator coupled to the three level Raman model already developed. The model will have the resolution required by diffraction limits and will use the previous density matrix results in the adiabatic following limit. Owing to its large computational requirements, such a code must be implemented on a vector array processor. One code on the Cyber is being tested by using previously understood two-level laser models as guidelines for interpreting the results. Two tests were implemented: the evolution of modes in a passive resonator and the evolution of a stable state of the adiabatically eliminated laser equations. These results show mode shapes and diffraction losses for the first case and relaxation oscillations for the second one. Finally, in order to clarify the computing methodology used to exploit the Cyber's computational speed, the time required to run both of the computations previously mentioned on the Cyber and on the VAX 730 must be measured. Also included is a short description of the current laser model (CAVITY.FOR) and a flow chart of the test computations.

  11. Multithreading in vector processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evangelinos, Constantinos; Kim, Changhoan; Nair, Ravi

    In one embodiment, a system includes a processor having a vector processing mode and a multithreading mode. The processor is configured to operate on one thread per cycle in the multithreading mode. The processor includes a program counter register having a plurality of program counters, and the program counter register is vectorized. Each program counter in the program counter register represents a distinct corresponding thread of a plurality of threads. The processor is configured to execute the plurality of threads by activating the plurality of program counters in a round robin cycle.
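
    A toy software model of the idea (a sketch of the abstract's description, not the patented design): one program counter per thread, with the issue slot rotating round robin each cycle.

```python
class RoundRobinCore:
    """Toy model of a core holding one program counter per thread and
    issuing from one thread per cycle in round-robin order."""

    def __init__(self, programs):
        self.programs = programs            # one instruction list per thread
        self.pcs = [0] * len(programs)      # vectorized program-counter register
        self.trace = []                     # (thread, instruction) issue log

    def step(self, cycle):
        t = cycle % len(self.programs)      # round-robin thread select
        if self.pcs[t] < len(self.programs[t]):
            self.trace.append((t, self.programs[t][self.pcs[t]]))
            self.pcs[t] += 1                # advance only that thread's PC

    def run(self, cycles):
        for cyc in range(cycles):
            self.step(cyc)
        return self.trace
```

    The key structural point mirrored here is that the program-counter register is itself a vector, with one element per active thread.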

  12. Vectorization of transport and diffusion computations on the CDC Cyber 205

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Shumays, I.K.

    1986-01-01

    The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
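
    Odd-even cyclic reduction repeatedly eliminates the odd-numbered unknowns, halving the system each sweep; all eliminations within a sweep are independent, which is what makes the method vectorizable. A pure-Python sketch for systems of size 2^k - 1 (illustrative only, not the report's Cyber 205 code):

```python
def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by odd-even cyclic reduction.
    a: sub-diagonal, b: diagonal, c: super-diagonal, d: right-hand side,
    all of length n with n = 2**k - 1; a[0] and c[-1] must be 0."""
    n = len(b)
    k = n.bit_length()                      # n == 2**k - 1
    a, b, c, d = a[:], b[:], c[:], d[:]
    s = 1
    for _ in range(k - 1):                  # forward: eliminate odd rows
        for i in range(2 * s - 1, n, 2 * s):    # independent -> vectorizable
            al = -a[i] / b[i - s]
            be = -c[i] / b[i + s]
            b[i] += al * c[i - s] + be * a[i + s]
            d[i] += al * d[i - s] + be * d[i + s]
            a[i] = al * a[i - s]            # couples to x[i - 2s]
            c[i] = be * c[i + s]            # couples to x[i + 2s]
        s *= 2
    x = [0.0] * n
    mid = n // 2
    x[mid] = d[mid] / b[mid]                # single remaining equation
    while s > 1:                            # back substitution, level by level
        s //= 2
        for i in range(s - 1, n, 2 * s):
            xl = x[i - s] if i - s >= 0 else 0.0
            xr = x[i + s] if i + s < n else 0.0
            x[i] = (d[i] - a[i] * xl - c[i] * xr) / b[i]
    return x
```

    Serial recursions such as the Thomas algorithm do the same job in fewer operations, but each step depends on the previous one; cyclic reduction trades extra arithmetic for sweeps of independent eliminations, which is the tradeoff that favors it on a vector machine.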

  13. Resiliency in Future Cyber Combat

    DTIC Science & Technology

    2016-04-04

    including the Internet, telecommunications networks, computer systems, and embedded processors and controllers." One important point emerging from the ... definition is that while the Internet is part of cyberspace, it is not all of cyberspace. Any computer processor capable of communicating with a ... central processor on a modern car are all part of cyberspace, although only some of them are routinely connected to the Internet. Most modern

  14. Hypercluster - Parallel processing for computational mechanics

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.

    1988-01-01

    An account is given of the development status, performance capabilities and implications for further development of NASA-Lewis' testbed 'hypercluster' parallel computer network, in which multiple processors communicate through a shared memory. Processors have local as well as shared memory; the hypercluster is expanded in the same manner as the hypercube, with processor clusters replacing the normal single processor node. The NASA-Lewis machine has three nodes with a vector personality and one node with a scalar personality. Each of the vector nodes uses four board-level vector processors, while the scalar node uses four general-purpose microcomputer boards.

  15. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  16. Development of a distributed-parameter mathematical model for simulation of cryogenic wind tunnels

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1983-01-01

    A one-dimensional distributed-parameter dynamic model of a cryogenic wind tunnel was developed which accounts for internal and external heat transfer, viscous momentum losses, and slotted-test-section dynamics. Boundary conditions imposed by liquid-nitrogen injection, gas venting, and the tunnel fan were included. A time-dependent numerical solution to the resultant set of partial differential equations was obtained on a CDC CYBER 203 vector-processing digital computer at a usable computational rate. Preliminary computational studies were performed by using parameters of the Langley 0.3-Meter Transonic Cryogenic Tunnel. Studies were performed by using parameters from the National Transonic Facility (NTF). The NTF wind-tunnel model was used in the design of control loops for Mach number, total temperature, and total pressure and for determining interactions between the control loops. It was employed in the application of optimal linear-regulator theory and eigenvalue-placement techniques to develop Mach number control laws.

  17. Adapting iterative algorithms for solving large sparse linear systems for efficient use on the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Kincaid, D. R.; Young, D. M.

    1984-01-01

    Adapting and designing mathematical software to achieve optimum performance on the CYBER 205 is discussed. Comments and observations are made in light of recent work done on modifying the ITPACK software package and on writing new software for vector supercomputers. The goal was to develop very efficient vector algorithms and software for solving large sparse linear systems using iterative methods.
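The kind of vector-friendly kernel this work targets can be illustrated with a Jacobi sweep over a matrix stored by diagonals, where each iteration is a few long element-wise operations rather than a scalar loop over rows (a generic sketch, not ITPACK code):

```python
import numpy as np

def jacobi(diag, off_lo, off_hi, rhs, iters=200):
    """Jacobi sweeps for a tridiagonal system stored by diagonals
    (off_lo[0] and off_hi[-1] are unused).  Storage by diagonals is
    the classic layout for vector machines: every sweep below is a
    handful of long vector operations."""
    x = np.zeros_like(rhs)
    for _ in range(iters):
        ax_off = np.zeros_like(rhs)
        ax_off[1:] += off_lo[1:] * x[:-1]    # sub-diagonal contribution
        ax_off[:-1] += off_hi[:-1] * x[1:]   # super-diagonal contribution
        x = (rhs - ax_off) / diag            # one vectorized update
    return x
```

The same diagonal-wise pattern extends to the five- and seven-diagonal matrices arising from 2-D and 3-D finite-difference grids.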

  18. Development of an Optimum Interpolation Analysis Method for the CYBER 205

    NASA Technical Reports Server (NTRS)

    Nestler, M. S.; Woollen, J.; Brin, Y.

    1985-01-01

    A state-of-the-art technique to assimilate the diverse observational database obtained during FGGE, and thus create initial conditions for numerical forecasts is described. The GLA optimum interpolation (OI) analysis method analyzes pressure, winds, and temperature at sea level, mixing ratio at six mandatory pressure levels up to 300 mb, and heights and winds at twelve levels up to 50 mb. Conversion to the CYBER 205 required a major re-write of the Amdahl OI code to take advantage of the CYBER vector processing capabilities. Structured programming methods were used to write the programs and this has resulted in a modular, understandable code. Among the contributors to the increased speed of the CYBER code are a vectorized covariance-calculation routine, an extremely fast matrix equation solver, and an innovative data search and sort technique.

  19. Efficient packet forwarding using cyber-security aware policies

    DOEpatents

    Ros-Giralt, Jordi

    2017-04-04

    For balancing load, a forwarder can selectively direct data from the forwarder to a processor according to a loading parameter. The selective direction includes forwarding the data to the processor for processing, transforming and/or forwarding the data to another node, and dropping the data. The forwarder can also adjust the loading parameter based on, at least in part, feedback received from the processor. One or more processing elements can store values associated with one or more flows into a structure without locking the structure. The stored values can be used to determine how to direct the flows, e.g., whether to process a flow or to drop it. The structure can be used within an information channel providing feedback to a processor.
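A toy sketch of the load-shedding control loop described in the abstract; the class name, thresholds, and multiplicative adjustment rule are all illustrative assumptions, not the patented mechanism:

```python
import random

class Forwarder:
    """Toy sketch of the idea above: direct a packet to the processor
    with probability given by a loading parameter, and adjust that
    parameter from processor feedback.  The control rule here (halve
    on congestion, grow 10% when idle) is an illustrative assumption."""

    def __init__(self, capacity=100):
        self.loading = 1.0           # fraction of traffic sent onward
        self.capacity = capacity     # processor queue capacity

    def direct(self, packet):
        if random.random() <= self.loading:
            return "process"         # hand the packet to the processor
        return "drop"                # shed load

    def feedback(self, queue_depth):
        # back off when the processor's queue fills, recover as it drains
        if queue_depth > 0.8 * self.capacity:
            self.loading = max(0.1, self.loading * 0.5)
        elif queue_depth < 0.2 * self.capacity:
            self.loading = min(1.0, self.loading * 1.1)
```

In the patent the feedback channel is lock-free shared state rather than a method call, but the forward/transform/drop decision driven by a loading parameter is the same shape.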

  20. Efficient packet forwarding using cyber-security aware policies

    DOEpatents

    Ros-Giralt, Jordi

    2017-10-25

    For balancing load, a forwarder can selectively direct data from the forwarder to a processor according to a loading parameter. The selective direction includes forwarding the data to the processor for processing, transforming and/or forwarding the data to another node, and dropping the data. The forwarder can also adjust the loading parameter based on, at least in part, feedback received from the processor. One or more processing elements can store values associated with one or more flows into a structure without locking the structure. The stored values can be used to determine how to direct the flows, e.g., whether to process a flow or to drop it. The structure can be used within an information channel providing feedback to a processor.

  1. Implicit, nonswitching, vector-oriented algorithm for steady transonic flow

    NASA Technical Reports Server (NTRS)

    Lottati, I.

    1983-01-01

Many areas of aerodynamic technology require the rapid computation of a sequence of transonic flow solutions. Low-cost vector array processors make such calculations economically feasible. However, to fully utilize the new hardware, algorithms must take advantage of the special characteristics of the vector array processor. The objective of the present investigation is to develop an efficient algorithm for solving transonic flow problems governed by mixed partial differential equations on an array processor.

  2. Implementation and Assessment of Advanced Analog Vector-Matrix Processor

    NASA Technical Reports Server (NTRS)

    Gary, Charles K.; Bualat, Maria G.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

This paper discusses the design and implementation of an analog optical vector-matrix coprocessor with a throughput of 128 Mops for a personal computer. Vector matrix calculations are inherently parallel, providing a promising domain for the use of optical calculators. However, to date, digital optical systems have proven too cumbersome to replace electronics, and analog processors have not demonstrated sufficient accuracy in large scale systems. The goal of the work described in this paper is to demonstrate a viable optical coprocessor for linear operations. The analog optical processor presented has been integrated with a personal computer to provide full functionality and is the first demonstration of an optical linear algebra processor with a throughput greater than 100 Mops. The optical vector matrix processor consists of a laser diode source, an acoustooptical modulator array to input the vector information, a liquid crystal spatial light modulator to input the matrix information, an avalanche photodiode array to read out the result vector of the vector matrix multiplication, as well as transport optics and the electronics necessary to drive the optical modulators and interface to the computer. The intent of this research is to provide a low cost, highly energy efficient coprocessor for linear operations. Measurements of the analog accuracy of the processor performing 128 Mops are presented along with an assessment of the implications for future systems. A range of noise sources, including cross-talk, source amplitude fluctuations, shot noise at the detector, and non-linearities of the optoelectronic components are measured and compared to determine the most significant source of error. The possibilities for reducing these sources of error are discussed. Also, the total error is compared with that expected from a statistical analysis of the individual components and their relation to the vector-matrix operation.
The sufficiency of the measured accuracy of the processor is compared with that required for a range of typical problems. Calculations resolving alloy concentrations from spectral plume data of rocket engines are implemented on the optical processor, demonstrating its sufficiency for this problem. We also show how this technology can be easily extended to a 100 x 100 10 MHz (200 Gops) processor.

  3. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
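The core of Monte Carlo vectorization is replacing the one-history-at-a-time loop with batch operations over many particles at once. A minimal NumPy illustration for a pure absorber, where the analytic answer is known (this shows the general idea only, not MCV's event-based physics):

```python
import numpy as np

def transmission(sigma_t, thickness, n_particles, seed=0):
    """Batch Monte Carlo sketch: instead of following one history at a
    time, every particle's flight distance is sampled in a single
    vector operation -- the essential move behind vectorized Monte
    Carlo on machines like the CYBER-205.  Pure absorber, so the
    analytic answer exp(-sigma_t * thickness) is available to check."""
    rng = np.random.default_rng(seed)
    flights = rng.exponential(1.0 / sigma_t, n_particles)  # one vector op
    return np.mean(flights > thickness)                    # survivors
```

Real codes must also handle scattering, which is where the specialized routines mentioned above come in: surviving particles are compacted into dense vectors between events so the pipelines stay full.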

  4. Documentation of the GLAS fourth order general calculation model. Volume 3: Vectorized code for the Cyber 205

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Balgovind, R.; Chao, W.; Edelmann, D.; Pfaendtner, J.; Takacs, L.; Takano, K.

    1983-01-01

Volume 3 of a three-volume technical memorandum documenting the GLAS fourth-order general circulation model is presented. The volume contains the CYBER 205 scalar and vector codes of the model, lists of variables, and cross-references. A dictionary of FORTRAN variables used in the scalar version, and listings of the FORTRAN code compiled with the C-option, are included. Cross-reference maps of local variables are included for each subroutine.

  5. A GaAs vector processor based on parallel RISC microprocessors

    NASA Astrophysics Data System (ADS)

    Misko, Tim A.; Rasset, Terry L.

    A vector processor architecture based on the development of a 32-bit microprocessor using gallium arsenide (GaAs) technology has been developed. The McDonnell Douglas vector processor (MVP) will be fabricated completely from GaAs digital integrated circuits. The MVP architecture includes a vector memory of 1 megabyte, a parallel bus architecture with eight processing elements connected in parallel, and a control processor. The processing elements consist of a reduced instruction set CPU (RISC) with four floating-point coprocessor units and necessary memory interface functions. This architecture has been simulated for several benchmark programs including complex fast Fourier transform (FFT), complex inner product, trigonometric functions, and sort-merge routine. The results of this study indicate that the MVP can process a 1024-point complex FFT at a speed of 112 microsec (389 megaflops) while consuming approximately 618 W of power in a volume of approximately 0.1 ft-cubed.

  6. System balance analysis for vector computers

    NASA Technical Reports Server (NTRS)

    Knight, J. C.; Poole, W. G., Jr.; Voight, R. G.

    1975-01-01

The availability of vector processors capable of sustaining computing rates of 10 to the 8th power arithmetic results per second raised the question of whether peripheral storage devices representing current technology can keep such processors supplied with data. By examining the solution of a large banded linear system on these computers, it was found that, even under ideal conditions, the processors will frequently be waiting for problem data.

  7. Compute Server Performance Results

    NASA Technical Reports Server (NTRS)

    Stockdale, I. E.; Barton, John; Woodrow, Thomas (Technical Monitor)

    1994-01-01

Parallel-vector supercomputers have been the workhorses of high performance computing. As expectations of future computing needs have risen faster than projected vector supercomputer performance, much work has been done investigating the feasibility of using Massively Parallel Processor systems as supercomputers. An even more recent development is the availability of high performance workstations which have the potential, when clustered together, to replace parallel-vector systems. We present a systematic comparison of floating point performance and price-performance for various compute server systems. A suite of highly vectorized programs was run on systems including traditional vector systems such as the Cray C90, and RISC workstations such as the IBM RS/6000 590 and the SGI R8000. The C90 system delivers 460 million floating point operations per second (FLOPS), the highest single-processor rate of any vendor. However, if the price-performance ratio (PPR) is considered to be most important, then the IBM and SGI processors are superior to the C90 processors. Even without code tuning, the IBM and SGI PPRs of 260 and 220 FLOPS per dollar exceed the C90 PPR of 160 FLOPS per dollar when running our highly vectorized suite.
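The PPR metric is simply delivered performance divided by system price. A one-line helper, with a price back-calculated from the quoted C90 figures purely for illustration (the actual system prices are not given in the abstract):

```python
def price_performance(flops, price_dollars):
    """Price-performance ratio in FLOPS per dollar, the figure of
    merit used above.  The C90 price below is back-calculated from
    the quoted numbers (460 MFLOPS, ~160 FLOPS/$), as an assumption
    for illustration only."""
    return flops / price_dollars

# C90: 460 MFLOPS at an assumed ~$2.9M yields the quoted ~160 FLOPS/$
c90_ppr = price_performance(460e6, 2.875e6)
```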

  8. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.
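The pattern-matching kernel such a chip accelerates is the full-search distance computation over the codebook, which reduces to a couple of vector operations (an illustrative sketch of full-search VQ, not the chip's architecture):

```python
import numpy as np

def codebook_search(codebook, x):
    """Full-search vector quantization: return the index of the
    codeword nearest (in squared Euclidean distance) to the input
    vector x.  codebook has shape (n_codewords, dim)."""
    dists = np.sum((codebook - x) ** 2, axis=1)  # distance to every codeword
    return int(np.argmin(dists))
```

Each input frame is replaced by the index returned here; the decoder holds the same codebook and looks the vector back up, which is what makes the low bit rates above possible.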

  9. Rational calculation accuracy in acousto-optical matrix-vector processor

    NASA Astrophysics Data System (ADS)

    Oparin, V. V.; Tigin, Dmitry V.

    1994-01-01

    The high speed of parallel computations for a comparatively small-size processor and acceptable power consumption makes the usage of acousto-optic matrix-vector multiplier (AOMVM) attractive for processing of large amounts of information in real time. The limited accuracy of computations is an essential disadvantage of such a processor. The reduced accuracy requirements allow for considerable simplification of the AOMVM architecture and the reduction of the demands on its components.

  10. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed-memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. Block-skyline storage scheme along with vector-unrolling techniques are used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.

  11. Optical systolic array processor using residue arithmetic

    NASA Technical Reports Server (NTRS)

    Jackson, J.; Casasent, D.

    1983-01-01

    The use of residue arithmetic to increase the accuracy and reduce the dynamic range requirements of optical matrix-vector processors is evaluated. It is determined that matrix-vector operations and iterative algorithms can be performed totally in residue notation. A new parallel residue quantizer circuit is developed which significantly improves the performance of the systolic array feedback processor. Results are presented of a computer simulation of this system used to solve a set of three simultaneous equations.

  12. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted

    1990-01-01

    Techniques are discussed for the implementation and improvement of vectorization and concurrency in nonlinear explicit structural finite element codes. In explicit integration methods, the computation of the element internal force vector consumes the bulk of the computer time. The program can be efficiently vectorized by subdividing the elements into blocks and executing all computations in vector mode. The structuring of elements into blocks also provides a convenient way to implement concurrency by creating tasks which can be assigned to available processors for evaluation. The techniques were implemented in a 3-D nonlinear program with one-point quadrature shell elements. Concurrency and vectorization were first implemented in a single time step version of the program. Techniques were developed to minimize processor idle time and to select the optimal vector length. A comparison of run times between the program executed in scalar, serial mode and the fully vectorized code executed concurrently using eight processors shows speed-ups of over 25. Conjugate gradient methods for solving nonlinear algebraic equations are also readily adapted to a parallel environment. A new technique for improving convergence properties of conjugate gradients in nonlinear problems is developed in conjunction with other techniques such as diagonal scaling. A significant reduction in the number of iterations required for convergence is shown for a statically loaded rigid bar suspended by three equally spaced springs.
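The element-blocking strategy can be sketched for a chain of 2-node bar elements: gather each block's nodal values, evaluate the whole block with array operations, then scatter-assemble (the element type and block size here are illustrative; the paper used one-point-quadrature shell elements):

```python
import numpy as np

def internal_force_blocked(k_el, u, conn, block=64):
    """Assembled internal force f = sum_e k_e @ u_e for 2-node bar
    elements, processed block-by-block so each block is evaluated in
    vector mode.  k_el: (2, 2) element stiffness, u: nodal
    displacements, conn: (n_el, 2) connectivity array."""
    f = np.zeros_like(u)
    for start in range(0, len(conn), block):
        c = conn[start:start + block]   # one block of elements
        u_e = u[c]                      # gather: (n_block, 2)
        f_e = u_e @ k_el.T              # whole block in one vector op
        np.add.at(f, c, f_e)            # unbuffered scatter-assemble
    return f
```

`np.add.at` plays the role of the gather/scatter hardware on a vector machine: it accumulates correctly even when several elements in a block share a node.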

  13. Real-time optical laboratory solution of parabolic differential equations

    NASA Technical Reports Server (NTRS)

    Casasent, David; Jackson, James

    1988-01-01

    An optical laboratory matrix-vector processor is used to solve parabolic differential equations (the transient diffusion equation with two space variables and time) by an explicit algorithm. This includes optical matrix-vector nonbase-2 encoded laboratory data, the combination of nonbase-2 and frequency-multiplexed data on such processors, a high-accuracy optical laboratory solution of a partial differential equation, new data partitioning techniques, and a discussion of a multiprocessor optical matrix-vector architecture.
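An explicit step for the transient diffusion equation is exactly one matrix-vector product per time level, which is what makes the problem a natural fit for a matrix-vector processor. A 1-D sketch of that step (the paper treats two space variables, and its encodings are not reproduced here):

```python
import numpy as np

def explicit_diffusion_step(u, r):
    """One explicit (forward-Euler) step of the 1-D diffusion
    equation written as the matrix-vector product u <- M u, with
    r = alpha*dt/dx**2 (stable for r <= 0.5) and fixed boundary
    values."""
    n = len(u)
    M = (1 - 2 * r) * np.eye(n) + r * (np.eye(n, k=1) + np.eye(n, k=-1))
    u_new = M @ u
    u_new[0], u_new[-1] = u[0], u[-1]   # hold boundaries fixed
    return u_new
```

Iterating this update marches the solution forward in time; on the optical processor each iteration is one pass of light through the matrix mask.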

  14. Vectorization with SIMD extensions speeds up reconstruction in electron tomography.

    PubMed

    Agulleiro, J I; Garzón, E M; García, I; Fernández, J J

    2010-06-01

Electron tomography allows structural studies of cellular structures at molecular detail. Large 3D reconstructions are needed to meet the resolution requirements. The processing time to compute these large volumes may be considerable, and so high-performance computing techniques have traditionally been used. This work presents a vector approach to tomographic reconstruction that relies on the exploitation of the SIMD extensions available in modern processors in combination with other single-processor optimization techniques. This approach succeeds in producing full resolution tomograms with an important reduction in processing time, as evaluated with the most common reconstruction algorithms, namely WBP and SIRT. The main advantage stems from the fact that this approach is to be run on standard computers without the need of specialized hardware, which facilitates the development, use and management of programs. Future trends in processor design open excellent opportunities for vector processing with processor's SIMD extensions in the field of 3D electron microscopy.

  15. Matrix-vector multiplication using digital partitioning for more accurate optical computing

    NASA Technical Reports Server (NTRS)

    Gary, C. K.

    1992-01-01

Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
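Digital partitioning can be sketched as splitting each operand into low-precision digits, forming every digit-pair partial product at the accuracy an analog multiplier can deliver, and recombining with powers of the base. The base and digit count below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def partitioned_matvec(A, x, base=16, digits=4):
    """Matrix-vector product by digital partitioning: each integer
    operand is split into base-`base` digits (small dynamic range, so
    an analog multiplier could form each partial product), and the
    partial products are recombined exactly with powers of the base."""
    y = np.zeros(A.shape[0], dtype=np.int64)
    for i in range(digits):
        Ai = (A // base**i) % base              # i-th digit plane of A
        for j in range(digits):
            xj = (x // base**j) % base          # j-th digit plane of x
            y += (Ai @ xj) * base**(i + j)      # low-precision product
    return y
```

The digit planes need only log2(base) bits of analog accuracy each, yet the recombined result is exact, which is how the technique buys digital accuracy from an analog engine.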

  16. A Parallel Framework with Block Matrices of a Discrete Fourier Transform for Vector-Valued Discrete-Time Signals.

    PubMed

    Soto-Quiros, Pablo

    2015-01-01

This paper presents a parallel implementation of a kind of discrete Fourier transform (DFT): the vector-valued DFT. The vector-valued DFT is a novel tool to analyze the spectra of vector-valued discrete-time signals. This parallel implementation is developed in terms of a mathematical framework with a set of block matrix operations. These block matrix operations contribute to analysis, design, and implementation of parallel algorithms in multicore processors. In this work, an implementation and experimental investigation of the mathematical framework are performed using MATLAB with the Parallel Computing Toolbox. We found that there is an advantage to using multicore processors and a parallel computing environment to reduce the high execution time. Additionally, speedup increases when the number of logical processors and the length of the signal increase.
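Because the vector-valued DFT has the block structure (F_N ⊗ I_m) on the stacked signal, applying it reduces to an ordinary DFT along the time axis of the component matrix. The sketch below shows that reduction (it is not the paper's MATLAB framework):

```python
import numpy as np

def vector_valued_dft(S):
    """DFT of a vector-valued signal S of shape (N, m): row n is the
    m-component sample at time n.  In block-matrix form this is
    kron(F_N, I_m) applied to the stacked signal; exploiting the
    Kronecker structure, it equals F_N applied to each component."""
    N = S.shape[0]
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N)   # N-point DFT matrix
    # kron(F, I_m) @ S.reshape(-1) flattened == (F @ S) flattened
    return F @ S
```

The m independent column transforms are exactly the units one would hand to different cores, which is the parallelism the block-matrix framework organizes.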

  17. A high-accuracy optical linear algebra processor for finite element applications

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Taylor, B. K.

    1984-01-01

Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables) which can exploit the speed of optical processors, require the 32 bit accuracy obtainable from digital machines. To obtain this required 32 bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.

  18. Floating point only SIMD instruction set architecture including compare, select, Boolean, and alignment operations

    DOEpatents

    Gschwind, Michael K [Chappaqua, NY

    2011-03-01

    Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.

  19. Effective Vectorization with OpenMP 4.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, Joseph N.; Hernandez, Oscar R.; Lopez, Matthew Graham

This paper describes how the Single Instruction Multiple Data (SIMD) model and its extensions in OpenMP work, and how these are implemented in different compilers. Modern processors are highly parallel computational machines which often include multiple processors capable of executing several instructions in parallel. Understanding SIMD and executing instructions in parallel allows the processor to achieve higher performance without increasing the power required to run it. SIMD instructions can significantly reduce the runtime of code by executing a single operation on large groups of data. The SIMD model is so integral to the processor's potential performance that, if SIMD is not utilized, less than half of the processor is ever actually used. Unfortunately, using SIMD instructions is a challenge in higher level languages because most programming languages do not have a way to describe them. Most compilers are capable of vectorizing code by using the SIMD instructions, but there are many code features important for SIMD vectorization that the compiler cannot determine at compile time. OpenMP attempts to solve this by extending the C++/C and Fortran programming languages with compiler directives that express SIMD parallelism. OpenMP is used to pass hints to the compiler about the code to be executed in SIMD. This is a key resource for making optimized code, but it does not change whether or not the code can use SIMD operations. However, in many cases critical functions are limited by a poor understanding of how SIMD instructions are actually implemented, as SIMD can be implemented through vector instructions or simultaneous multi-threading (SMT). We have found that it is often the case that code cannot be vectorized, or is vectorized poorly, because the programmer does not have sufficient knowledge of how SIMD instructions work.

  20. Reduction of solar vector magnetograph data using a microMSP array processor

    NASA Technical Reports Server (NTRS)

    Kineke, Jack

    1990-01-01

    The processing of raw data obtained by the solar vector magnetograph at NASA-Marshall requires extensive arithmetic operations on large arrays of real numbers. The objectives of this summer faculty fellowship study are to: (1) learn the programming language of the MicroMSP Array Processor and adapt some existing data reduction routines to exploit its capabilities; and (2) identify other applications and/or existing programs which lend themselves to array processor utilization which can be developed by undergraduate student programmers under the provisions of project JOVE.

  1. Memory efficient solution of the primitive equations for numerical weather prediction on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Tuccillo, J. J.

    1984-01-01

Numerical Weather Prediction (NWP), for both operational and research purposes, requires not only fast computational speed but also large memory. A technique for solving the Primitive Equations for atmospheric motion on the CYBER 205, as implemented in the Mesoscale Atmospheric Simulation System, which is fully vectorized and requires substantially less memory than other techniques such as the Leapfrog or Adams-Bashforth schemes, is discussed. The technique presented uses the Euler-backward time-marching scheme. Also discussed are several techniques for reducing the computational time of the model by replacing slow intrinsic routines with faster algorithms that use only hardware vector instructions.
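The memory advantage of the Euler-backward (Matsuno) scheme is that it is a two-level method: only the current state is carried, whereas leapfrog must also keep the previous time level. A 1-D advection sketch of the predictor-corrector step (an illustration of the scheme, not the Mesoscale Atmospheric Simulation System code):

```python
import numpy as np

def matsuno_step(u, courant):
    """One Euler-backward (Matsuno) step for 1-D advection on a
    periodic grid: a forward-Euler predictor, then a corrector that
    evaluates the tendency at the predicted state.  courant is
    c*dt/dx.  Only the current level u is held in memory, unlike
    leapfrog's three-level u(n-1), u(n), u(n+1)."""
    def tendency(v):
        # centered-difference advection tendency, periodic boundaries
        return -courant * 0.5 * (np.roll(v, -1) - np.roll(v, 1))
    u_pred = u + tendency(u)        # predictor (forward Euler)
    return u + tendency(u_pred)     # corrector (backward evaluation)
```

The centered difference conserves the grid sum exactly on a periodic domain, a useful sanity check for any reimplementation.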

  2. Ultra-Reliable Digital Avionics (URDA) processor

    NASA Astrophysics Data System (ADS)

    Branstetter, Reagan; Ruszczyk, William; Miville, Frank

    1994-10-01

Texas Instruments Incorporated (TI) developed the URDA processor design under contract with the U.S. Air Force Wright Laboratory and the U.S. Army Night Vision and Electro-Sensors Directorate. TI's approach couples advanced packaging solutions with advanced integrated circuit (IC) technology to provide a high-performance (200 MIPS/800 MFLOPS) modular avionics processor module for a wide range of avionics applications. TI's processor design integrates two Ada-programmable, URDA basic processor modules (BPM's) with a JIAWG-compatible PiBus and TMBus on a single F-22 common integrated processor-compatible form-factor SEM-E avionics card. A separate, high-speed (25-MWord/second 32-bit word) input/output bus is provided for sensor data. Each BPM provides a peak throughput of 100 MIPS scalar concurrent with 400-MFLOPS vector processing in a removable multichip module (MCM) mounted to a liquid-flowthrough (LFT) core and interfacing to a processor interface module printed wiring board (PWB). Commercial RISC technology coupled with TI's advanced bipolar complementary metal oxide semiconductor (BiCMOS) application specific integrated circuit (ASIC) and silicon-on-silicon packaging technologies are used to achieve the high performance in a miniaturized package. A MIPS R4000-family reduced instruction set computer (RISC) processor and a TI 100-MHz BiCMOS vector coprocessor (VCP) ASIC provide, respectively, the 100 MIPS of scalar processor throughput and 400 MFLOPS of vector processing throughput for each BPM. The TI Aladdin ASIC chipset was developed on the TI Aladdin Program under contract with the U.S. Army Communications and Electronics Command and was sponsored by the Advanced Research Projects Agency with technical direction from the U.S. Army Night Vision and Electro-Sensors Directorate.

  3. NASCRIN - NUMERICAL ANALYSIS OF SCRAMJET INLET

    NASA Technical Reports Server (NTRS)

    Kumar, A.

    1994-01-01

    The NASCRIN program was developed for analyzing two-dimensional flow fields in supersonic combustion ramjet (scramjet) inlets. NASCRIN solves the two-dimensional Euler or Navier-Stokes equations in conservative form by an unsplit, explicit, two-step finite-difference method. A more recent explicit-implicit, two-step scheme has also been incorporated in the code for viscous flow analysis. An algebraic, two-layer eddy-viscosity model is used for the turbulent flow calculations. NASCRIN can analyze both inviscid and viscous flows with no struts, one strut, or multiple struts embedded in the flow field. NASCRIN can be used in a quasi-three-dimensional sense for some scramjet inlets under certain simplifying assumptions. Although developed for supersonic internal flow, NASCRIN may be adapted to a variety of other flow problems. In particular, it should be readily adaptable to subsonic inflow with supersonic outflow, supersonic inflow with subsonic outflow, or fully subsonic flow. The NASCRIN program is available for batch execution on the CDC CYBER 203. The vectorized FORTRAN version was developed in 1983. NASCRIN has a central memory requirement of approximately 300K words for a grid size of about 3,000 points.

  4. Development of Effective Aerobic Cometabolic Systems for the In Situ Transformation of Problematic Chlorinated Solvent Mixtures

    DTIC Science & Technology

    2005-02-01

    vector contained the ampicillin resistance gene, only transformed colonies should grow on the plates and since the inserted DNA should have knocked...flava 340 209 203 69 2422 Nitrogen-fixing bacterium MI753 340 209 203 69 2571 Pseudomonas spinosa 340 209 203 217 2458 Xylophilus ampelinus 340 209...277 203 198 2767 Unidentified bacterium 340 277 203 198 2674 Type 0803 filamentous bacterium 340 277 203 198 2581 Xylophilus ampelinus 340 209 203 217

  5. Vector processing efficiency of plasma MHD codes by use of the FACOM 230-75 APU

    NASA Astrophysics Data System (ADS)

    Matsuura, T.; Tanaka, Y.; Naraoka, K.; Takizuka, T.; Tsunematsu, T.; Tokuda, S.; Azumi, M.; Kurita, G.; Takeda, T.

    1982-06-01

    In the framework of pipelined vector architecture, the efficiency of vector processing is assessed with respect to plasma MHD codes in nuclear fusion research. By using a vector processor, the FACOM 230-75 APU, the limit of the enhancement factor due to parallelism of current vector machines is examined for three numerical codes based on a fluid model. Reasonable speed-up factors of approximately 6, 6, and 4 times faster than the highly optimized scalar version are obtained for ERATO (linear stability code), AEOLUS-R1 (nonlinear stability code) and APOLLO (1-1/2D transport code), respectively. Problems of the pipelined vector processors are discussed from the viewpoint of restructuring, optimization and choice of algorithms. In conclusion, the important concept of "concurrency within pipelined parallelism" is emphasized.

  6. Nuclear Power Plant Cyber Security Discrete Dynamic Event Tree Analysis (LDRD 17-0958) FY17 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheeler, Timothy A.; Denman, Matthew R.; Williams, R. A.

    Instrumentation and control of nuclear power is transforming from analog to modern digital assets. These control systems perform key safety and security functions. This transformation is occurring in new plant designs as well as in the existing fleet of plants as the operation of those plants is extended to 60 years. This transformation introduces new and unknown issues involving both digital asset induced safety issues and security issues. Traditional nuclear power risk assessment tools and cyber security assessment methods have not been modified or developed to address the unique nature of cyber failure modes and of cyber security threat vulnerabilities. This Lab-Directed Research and Development project has developed a dynamic cyber-risk informed tool to facilitate the analysis of unique cyber failure modes and the time sequencing of cyber faults, both malicious and non-malicious, and impose those cyber exploits and cyber faults onto a nuclear power plant accident sequence simulator code to assess how cyber exploits and cyber faults could interact with a plant's digital instrumentation and control (DI&C) system and defeat or circumvent a plant's cyber security controls. This was achieved by coupling an existing Sandia National Laboratories nuclear accident dynamic simulator code with a cyber emulytics code to demonstrate real-time simulation of cyber exploits and their impact on automatic DI&C responses. Studying such potential time-sequenced cyber-attacks and their risks (i.e., the associated impact and the associated degree of difficulty to achieve the attack vector) on accident management establishes a technical risk informed framework for developing effective cyber security controls for nuclear power.

  7. Development of a CRAY 1 version of the SINDA program. [thermo-structural analyzer program

    NASA Technical Reports Server (NTRS)

    Juba, S. M.; Fogerson, P. E.

    1982-01-01

    The SINDA thermal analyzer program was transferred from the UNIVAC 1110 computer to a CYBER and then to a CRAY 1. Significant changes to the code of the program were required in order to execute efficiently on the CYBER and CRAY. The program was tested on the CRAY using a thermal math model of the shuttle which was too large to run on either the UNIVAC or CYBER. An effort was then begun to further modify the code of SINDA in order to make effective use of the vector capabilities of the CRAY.

  8. Navier-Stokes Simulation of Homogeneous Turbulence on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Wu, C. T.; Ferziger, J. H.; Chapman, D. R.; Rogallo, R. S.

    1984-01-01

    A computer code which solves the Navier-Stokes equations for three-dimensional, time-dependent, homogeneous turbulence has been written for the CYBER 205. The code has options for both 64-bit and 32-bit arithmetic. With 32-bit computation, mesh sizes up to 64 x 64 x 64 are contained within core of a 2 million 64-bit word memory. Timing runs were made for various vector lengths up to 6144. With this code, speeds a little over 100 MFLOPS have been achieved on a 2-pipe CYBER 205. Several problems encountered in the coding are discussed.

  9. Monte Carlo simulation of Ising models by multispin coding on a vector computer

    NASA Astrophysics Data System (ADS)

    Wansleben, Stephan; Zabolitzky, John G.; Kalle, Claus

    1984-11-01

    Rebbi's efficient multispin coding algorithm for Ising models is combined with the use of the vector computer CDC Cyber 205. A speed of 21.2 million updates per second is reached. This is comparable to that obtained by special-purpose computers.
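
    The essence of multispin coding is to pack one spin per bit of a machine word so that bitwise operations act on many spins at once. The following is a minimal illustration for a 1-D periodic Ising chain; Rebbi's actual update scheme and the Cyber 205 lattice layout are not reproduced here.

```python
def pack_spins(spins):
    """Pack a sequence of Ising spins (+1/-1) into the bits of one integer."""
    word = 0
    for i, s in enumerate(spins):
        if s == 1:
            word |= 1 << i
    return word

def chain_energy(word, n):
    """Energy E = -sum_i s_i * s_(i+1) of a periodic n-spin chain packed in
    `word`. XOR with the cyclically shifted word flags antiparallel bonds,
    so a single popcount evaluates all n bonds at once."""
    shifted = (word >> 1) | ((word & 1) << (n - 1))  # cyclic shift by one site
    anti = (word ^ shifted) & ((1 << n) - 1)         # set bits = antiparallel bonds
    n_anti = bin(anti).count("1")
    return -((n - n_anti) - n_anti)                  # parallel minus antiparallel

e_mixed = chain_energy(pack_spins([1, 1, -1, 1, -1, -1, 1, 1]), 8)
e_ferro = chain_energy(pack_spins([1] * 8), 8)       # fully aligned chain
```

On a vector machine the same XOR-and-count acts on long arrays of words, which is where the reported update rate comes from.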

  10. Vectorized program architectures for supercomputer-aided circuit design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rizzoli, V.; Ferlito, M.; Neri, A.

    1986-01-01

    Vector processors (supercomputers) can be effectively employed in MIC or MMIC applications to solve problems of large numerical size such as broad-band nonlinear design or statistical design (yield optimization). In order to fully exploit the capabilities of a vector hardware, any program architecture must be structured accordingly. This paper presents a possible approach to the ''semantic'' vectorization of microwave circuit design software. Speed-up factors of the order of 50 can be obtained on a typical vector processor (Cray X-MP), with respect to the most powerful scalar computers (CDC 7600), with cost reductions of more than one order of magnitude. This could broaden the horizon of microwave CAD techniques to include problems that are practically out of the reach of conventional systems.

  11. Automatic differentiation for design sensitivity analysis of structural systems using multiple processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi

    1994-01-01

    An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.

  12. Vector generator scan converter

    DOEpatents

    Moore, James M.; Leighton, James F.

    1990-01-01

    High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O (input/output) channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold.

  13. Vector generator scan converter

    DOEpatents

    Moore, J.M.; Leighton, J.F.

    1988-02-05

    High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold. 7 figs.

  14. Vectorized multigrid Poisson solver for the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Barkai, D.; Brandt, M. A.

    1984-01-01

    The full multigrid (FMG) method is applied to the two dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing of the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.
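
    The cycle underlying FMG can be sketched in a few lines for the 1-D analogue of the problem: smooth, restrict the residual to a coarser grid, solve there recursively, interpolate the correction back, and smooth again. The following is a serial Python sketch (NumPy assumed); the paper's 2-D problem and CYBER 205 data-structure choices are not reproduced.

```python
import numpy as np

def residual(u, f, h):
    """r = f - A u for the 3-point discretization of -u'' = f with zero
    Dirichlet boundaries (u holds interior values only)."""
    up = np.pad(u, 1)
    return f - (2.0 * u - up[:-2] - up[2:]) / (h * h)

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Damped Jacobi relaxation; on a vector machine each sweep is a few
    long vector operations over the whole grid."""
    for _ in range(sweeps):
        up = np.pad(u, 1)
        u = (1.0 - w) * u + w * 0.5 * (up[:-2] + up[2:] + h * h * f)
    return u

def v_cycle(u, f, h):
    """One recursive multigrid cycle with full-weighting restriction and
    linear interpolation."""
    u = smooth(u, f, h)
    r = residual(u, f, h)
    rc = 0.25 * r[:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]
    if rc.size > 1:
        ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
    else:
        ec = rc * (2.0 * h) ** 2 / 2.0        # exact solve, one unknown
    e = np.zeros_like(u)
    e[1::2] = ec                              # coarse points coincide with odd fine points
    ecp = np.pad(ec, 1)
    e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])      # interpolate between them
    return smooth(u + e, f, h)

# Solve -u'' = pi^2 sin(pi x) on (0,1); exact solution sin(pi x)
n = 64
h = 1.0 / n
x = np.arange(1, n) * h
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros_like(f)
for _ in range(15):
    u = v_cycle(u, f, h)
```

Every line here is a whole-grid array operation, which is exactly the property that makes the method attractive on a vector machine.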

  15. M-step preconditioned conjugate gradient methods

    NASA Technical Reports Server (NTRS)

    Adams, L.

    1983-01-01

    Preconditioned conjugate gradient methods for solving sparse symmetric positive definite systems of linear equations are described. Necessary and sufficient conditions are given for when these preconditioners can be used and an analysis of their effectiveness is given. Efficient computer implementations of these methods are discussed and results on the CYBER 203 and the Finite Element Machine under construction at NASA Langley Research Center are included.
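
    The m-step idea is to apply m sweeps of a basic iterative method as the preconditioner inside conjugate gradients. The sketch below (NumPy assumed) uses Jacobi as the underlying splitting; the paper treats more general splittings and gives the precise conditions under which the resulting preconditioner is admissible, which this sketch does not reproduce.

```python
import numpy as np

def m_step_jacobi(A, r, m=3):
    """Approximate A^{-1} r by m Jacobi sweeps on A z = r starting from
    z = 0 (m = 1 is plain diagonal scaling). For odd m this operator is
    symmetric positive definite, so it is a valid CG preconditioner."""
    d = np.diag(A)
    z = r / d
    for _ in range(m - 1):
        z += (r - A @ z) / d
    return z

def pcg(A, b, m=3, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradient iteration for SPD A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = m_step_jacobi(A, r, m)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = m_step_jacobi(A, r, m)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1-D Laplacian test problem
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.sin(np.linspace(0.1, 3.0, n))
x = pcg(A, b)
```

Larger m trades more (fully vectorizable) work per iteration for fewer CG iterations, which is the trade-off of interest on a vector machine.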

  16. Vectorization of a classical trajectory code on a Floating Point Systems, Inc. Model 164 attached processor.

    PubMed

    Kraus, Wayne A; Wagner, Albert F

    1986-04-01

    A triatomic classical trajectory code has been modified by extensive vectorization of the algorithms to achieve much improved performance on an FPS 164 attached processor. Extensive timings on both the FPS 164 and a VAX 11/780 with floating point accelerator are presented as a function of the number of trajectories simultaneously run. The timing tests involve a potential energy surface of the LEPS variety and trajectories with 1000 time steps. The results indicate that vectorization results in timing improvements on both the VAX and the FPS. For larger numbers of trajectories run simultaneously, up to a factor of 25 improvement in speed occurs between VAX and FPS vectorized code. Copyright © 1986 John Wiley & Sons, Inc.
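
    The key restructuring is to run many trajectories simultaneously, so each arithmetic operation becomes a vector operation across the whole batch. A minimal sketch (NumPy assumed), with a harmonic potential standing in for the LEPS surface used in the timing tests:

```python
import numpy as np

def run_batch(x0, v0, dt=0.01, steps=1000, k=1.0, mass=1.0):
    """Velocity-Verlet integration of many independent trajectories at
    once. x0 and v0 hold one entry per trajectory, so every line below
    operates on the full batch in a single vector operation."""
    x, v = x0.copy(), v0.copy()
    a = -k * x / mass
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = -k * x / mass          # force evaluation, vectorized
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

x0 = np.linspace(0.5, 1.5, 64)         # 64 trajectories, varied initial positions
x, v = run_batch(x0, np.zeros_like(x0))
```

The per-step cost grows slowly with batch size on vector hardware, which is why the reported speed-up improves as more trajectories are run simultaneously.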

  17. A hybrid algorithm for parallel molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Mangiardi, Chris M.; Meyer, R.

    2017-10-01

    This article describes algorithms for the hybrid parallelization and SIMD vectorization of molecular dynamics simulations with short-range forces. The parallelization method combines domain decomposition with a thread-based parallelization approach. The goal of the work is to enable efficient simulations of very large (tens of millions of atoms) and inhomogeneous systems on many-core processors with hundreds or thousands of cores and SIMD units with large vector sizes. In order to test the efficiency of the method, simulations of a variety of configurations with up to 74 million atoms have been performed. Results are shown that were obtained on multi-core systems with Sandy Bridge and Haswell processors as well as systems with Xeon Phi many-core processors.

  18. Citizen ’Cyber’ Airmen: Maintaining Ready and Proficient Cyberspace Operators in the Reserve Components

    DTIC Science & Technology

    2016-02-01

    Internet service providers and global supply chains, over which DOD has no direct authority to mitigate risk effectively. The global technology supply...cyberspace. CO are composed of the military, intelligence, and ordinary business operations of DOD in and through cyberspace. Cyberspace, while a global ...infrastructures, including the Internet , telecommunications networks, computer systems, and embedded processors and controllers, and the content that flows across

  19. Inner-product array processor for retrieval of stored images represented by bipolar binary (+1,-1) pixels using partial input trinary pixels represented by (+1,0,-1)

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor); Awwal, Abdul A. S. (Inventor); Karim, Mohammad A. (Inventor)

    1993-01-01

    An inner-product array processor is provided with thresholding of the inner product during each iteration to make more significant the inner product employed in estimating a vector to be used as the input vector for the next iteration. While stored vectors and estimated vectors are represented in bipolar binary (+1,-1), only those elements of an initial partial input vector that are believed to be common with those of a stored vector are represented in bipolar binary; the remaining elements of a partial input vector are set to 0. This mode of representation, in which the known elements of a partial input vector are in bipolar binary form and the remaining elements are set equal to 0, is referred to as trinary representation. The initial inner products corresponding to the partial input vector will then be equal to the number of known elements. Inner-product thresholding is applied to accelerate convergence and to avoid convergence to a negative inner product.
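
    One plausible reading of this iteration can be sketched with NumPy (the stored patterns below are illustrative, not from the patent): compute inner products with every stored vector, suppress the negative ones by thresholding, and form the next estimate from the weighted sum of stored vectors.

```python
import numpy as np

def recall(stored, x, iters=5):
    """Iterative associative recall by thresholded inner products. Known
    elements of the partial input are +1/-1, unknown elements 0 (the
    trinary representation described above)."""
    for _ in range(iters):
        p = stored @ x                    # inner product with each stored vector
        w = np.where(p > 0, p, 0)         # threshold: drop negative products
        x = np.sign(stored.T @ w)         # new bipolar estimate
        x[x == 0] = 1                     # break ties toward +1
    return x

stored = np.array([[ 1,  1, -1, -1,  1, -1],
                   [-1,  1,  1, -1, -1,  1],
                   [ 1, -1,  1,  1, -1, -1]])
partial = np.array([1, 1, -1, 0, 0, 0])   # only the first three pixels known
out = recall(stored, partial)
```

As the abstract notes, the initial inner product with the matching stored vector equals the number of known elements (here 3), so the correct pattern dominates from the first iteration.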

  20. Obstacle Recognition Based on Machine Learning for On-Chip LiDAR Sensors in a Cyber-Physical System

    PubMed Central

    Beruvides, Gerardo

    2017-01-01

    Collision avoidance is an important feature in advanced driver-assistance systems, aimed at providing correct, timely and reliable warnings before an imminent collision (with objects, vehicles, pedestrians, etc.). The obstacle recognition library is designed and implemented to address the design and evaluation of obstacle detection in a transportation cyber-physical system. The library is integrated into a co-simulation framework that is supported on the interaction between SCANeR software and Matlab/Simulink. To the best of the authors’ knowledge, two main contributions are reported in this paper. Firstly, the modelling and simulation of virtual on-chip light detection and ranging sensors in a cyber-physical system, for traffic scenarios, is presented. The cyber-physical system is designed and implemented in SCANeR. Secondly, three specific artificial intelligence-based methods for obstacle recognition libraries are also designed and applied using a sensory information database provided by SCANeR. The computational library has three methods for obstacle detection: a multi-layer perceptron neural network, a self-organization map and a support vector machine. Finally, a comparison among these methods under different weather conditions is presented, with very promising results in terms of accuracy. The best results are achieved using the multi-layer perceptron in sunny and foggy conditions, the support vector machine in rainy conditions and the self-organized map in snowy conditions. PMID:28906450

  1. Obstacle Recognition Based on Machine Learning for On-Chip LiDAR Sensors in a Cyber-Physical System.

    PubMed

    Castaño, Fernando; Beruvides, Gerardo; Haber, Rodolfo E; Artuñedo, Antonio

    2017-09-14

    Collision avoidance is an important feature in advanced driver-assistance systems, aimed at providing correct, timely and reliable warnings before an imminent collision (with objects, vehicles, pedestrians, etc.). The obstacle recognition library is designed and implemented to address the design and evaluation of obstacle detection in a transportation cyber-physical system. The library is integrated into a co-simulation framework that is supported on the interaction between SCANeR software and Matlab/Simulink. To the best of the authors' knowledge, two main contributions are reported in this paper. Firstly, the modelling and simulation of virtual on-chip light detection and ranging sensors in a cyber-physical system, for traffic scenarios, is presented. The cyber-physical system is designed and implemented in SCANeR. Secondly, three specific artificial intelligence-based methods for obstacle recognition libraries are also designed and applied using a sensory information database provided by SCANeR. The computational library has three methods for obstacle detection: a multi-layer perceptron neural network, a self-organization map and a support vector machine. Finally, a comparison among these methods under different weather conditions is presented, with very promising results in terms of accuracy. The best results are achieved using the multi-layer perceptron in sunny and foggy conditions, the support vector machine in rainy conditions and the self-organized map in snowy conditions.

  2. GaAs Supercomputing: Architecture, Language, And Algorithms For Image Processing

    NASA Astrophysics Data System (ADS)

    Johl, John T.; Baker, Nick C.

    1988-10-01

    The application of high-speed GaAs processors in a parallel system matches the demanding computational requirements of image processing. The architecture of the McDonnell Douglas Astronautics Company (MDAC) vector processor is described along with the algorithms and language translator. Most image and signal processing algorithms can utilize parallel processing and show a significant performance improvement over sequential versions. The parallelization performed by this system is within each vector instruction. Since each vector has many elements, each requiring some computation, useful concurrent arithmetic operations can easily be performed. Balancing the memory bandwidth with the computation rate of the processors is an important design consideration for high efficiency and utilization. The architecture features a bus-based execution unit consisting of four to eight 32-bit GaAs RISC microprocessors running at a 200 MHz clock rate for a peak performance of 1.6 BOPS. The execution unit is connected to a vector memory with three buses capable of transferring two input words and one output word every 10 nsec. The address generators inside the vector memory perform different vector addressing modes and feed the data to the execution unit. The functions discussed in this paper include basic MATRIX OPERATIONS, 2-D SPATIAL CONVOLUTION, HISTOGRAM, and FFT. For each of these algorithms, assembly language programs were run on a behavioral model of the system to obtain performance figures.

  3. Towards a cyber-physical era: soft computing framework based multi-sensor array for water quality monitoring

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Jyotirmoy; Gupta, Karunesh K.; Gupta, Rajiv

    2018-02-01

    New concepts and techniques are replacing traditional methods of water quality parameter measurement systems. This paper introduces a cyber-physical system (CPS) approach for water quality assessment in a distribution network. Cyber-physical systems with embedded sensors, processors and actuators can be designed to sense and interact with the water environment. The proposed CPS comprises a sensing framework, integrating five different water quality parameter sensor nodes, and a soft computing framework for computational modelling. The soft computing framework uses Python for the user interface and fuzzy logic for decision making. Introducing multiple sensors in a water distribution network generates a huge number of data matrices, which are sometimes highly complex, difficult to understand and convoluted for effective decision making. Therefore, the proposed system framework also intends to simplify the complexity of the obtained sensor data matrices and to support decision making for water engineers through the soft computing framework. The target of this proposed research is to provide a simple and efficient method to identify and detect the presence of contamination in a water distribution network using applications of CPS.

  4. You Are What You Read: The Belief Systems of Cyber-Bystanders on Social Networking Sites

    PubMed Central

    Leung, Angel N. M.; Wong, Natalie; Farver, JoAnn M.

    2018-01-01

    The present study tested how exposure to two types of responses to a hypothetical simulated Facebook setting influenced cyber-bystanders’ perceived control and normative beliefs using a 4 cyberbully-victim group (pure cyberbullies, non-involved, pure cyberbullied victims, and cyberbullied-victims) × 2 condition (offend vs. defend) experimental design. 203 Hong Kong Chinese secondary school and university students (132 females, 71 males; 12 to 28; M = 16.70; SD = 3.03 years old) were randomly assigned into one of two conditions. Results showed that participants’ involvement in cyberbullying significantly related to their control beliefs about bully and victim assisting behaviors, while exposure to the two different conditions (offend vs. defend comments) was related to both their control and normative beliefs. In general, the defend condition promoted higher control beliefs to help the victims and promoted higher normative beliefs to help the victims. Regardless of their past involvement in cyberbullying and exposure to offend vs. defend conditions, both cyber-bullies and cyber-victims were more inclined to demonstrate normative beliefs to help victims than to assist bullies. These results have implications for examining environmental influences in predicting bystander behaviors in cyberbullying contexts, and for creating a positive environment to motivate adolescents to become “upstanders” in educational programs to combat cyberbullying. PMID:29740362

  5. You Are What You Read: The Belief Systems of Cyber-Bystanders on Social Networking Sites.

    PubMed

    Leung, Angel N M; Wong, Natalie; Farver, JoAnn M

    2018-01-01

    The present study tested how exposure to two types of responses to a hypothetical simulated Facebook setting influenced cyber-bystanders' perceived control and normative beliefs using a 4 cyberbully-victim group (pure cyberbullies, non-involved, pure cyberbullied victims, and cyberbullied-victims) × 2 condition (offend vs. defend) experimental design. 203 Hong Kong Chinese secondary school and university students (132 females, 71 males; 12 to 28; M = 16.70; SD = 3.03 years old) were randomly assigned into one of two conditions. Results showed that participants' involvement in cyberbullying significantly related to their control beliefs about bully and victim assisting behaviors, while exposure to the two different conditions (offend vs. defend comments) was related to both their control and normative beliefs. In general, the defend condition promoted higher control beliefs to help the victims and promoted higher normative beliefs to help the victims. Regardless of their past involvement in cyberbullying and exposure to offend vs. defend conditions, both cyber-bullies and cyber-victims were more inclined to demonstrate normative beliefs to help victims than to assist bullies. These results have implications for examining environmental influences in predicting bystander behaviors in cyberbullying contexts, and for creating a positive environment to motivate adolescents to become "upstanders" in educational programs to combat cyberbullying.

  6. SAFETY ON UNTRUSTED NETWORK DEVICES (SOUND)

    DTIC Science & Technology

    2017-10-10

    in the Cyber & Communication Technologies Group, but not on the SOUND project, would review the code, design and perform attacks against a live...3.5 Red Team As part of our testing, we planned to conduct Red Team assessments. In these assessments, a group of engineers from BAE who worked...developed under the DARPA CRASH program and SOUND were designed to be companion projects. SAFE focused on the processor and the host, SOUND focused on

  7. A sparse matrix algorithm on the Boolean vector machine

    NASA Technical Reports Server (NTRS)

    Wagner, Robert A.; Patrick, Merrell L.

    1988-01-01

    VLSI technology is being used to implement a prototype Boolean Vector Machine (BVM), which is a large network of very small processors with equally small memories that operate in SIMD mode; these use bit-serial arithmetic and communicate via a cube-connected cycles network. The BVM's bit-serial arithmetic and the small memories of individual processors are noted to compromise the system's effectiveness in large numerical problem applications. Attention is presently given to the implementation of a basic matrix-vector iteration algorithm for sparse matrices on the BVM, in order to generate over 1 billion useful floating-point operations/sec for this iteration algorithm. The algorithm is expressed in a novel language designated 'BVM'.

  8. FPGA wavelet processor design using language for instruction-set architectures (LISA)

    NASA Astrophysics Data System (ADS)

    Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios

    2007-04-01

    The design of a microprocessor is a long, tedious, and error-prone task consisting of typically four design phases: architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for FPGA or cell-based ASIC) and verification. The Language for Instruction-Set Architectures (LISA) allows a microprocessor to be modeled not only from the instruction set but also from the architecture description, including pipelining behavior, which provides design-tool consistency over all levels of the design. To explore the capability of the LISA processor design platform, a.k.a. CoWare Processor Designer, we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor that is typically used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC μPs are usually considered "fast" processors due to design concepts like constant instruction word size, deep pipelines and many general-purpose registers, it turns out that DSP operations consume considerable processing time in a RISC processor. In a second step we used design principles from programmable digital signal processors (PDSPs) to improve the throughput of the DWT processor. A multiply-accumulate operation along with indirect addressing operations were the key to achieving higher throughput. A further improvement is possible with today's FPGA technology. Today's FPGAs offer a large number of embedded array multipliers, and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the substantial improvement a TVP offers compared with traditional RISC or PDSP designs.

  9. Right-Brain/Left-Brain Integrated Associative Processor Employing Convertible Multiple-Instruction-Stream Multiple-Data-Stream Elements

    NASA Astrophysics Data System (ADS)

    Hayakawa, Hitoshi; Ogawa, Makoto; Shibata, Tadashi

    2005-04-01

    A very large scale integrated circuit (VLSI) architecture for a multiple-instruction-stream multiple-data-stream (MIMD) associative processor has been proposed. The processor employs an architecture that enables seamless switching from associative operations to arithmetic operations. The MIMD element is convertible to a regular central processing unit (CPU) while maintaining its high performance as an associative processor. Therefore, the MIMD associative processor can perform not only on-chip perception, i.e., searching for the vector most similar to an input vector throughout the on-chip cache memory, but also arithmetic and logic operations similar to those in ordinary CPUs, both simultaneously in parallel processing. Three key technologies have been developed to generate the MIMD element: associative-operation-and-arithmetic-operation switchable calculation units, a versatile register control scheme within the MIMD element for flexible operations, and a short instruction set for minimizing the memory size for program storage. Key circuit blocks were designed and fabricated using 0.18 μm complementary metal-oxide-semiconductor (CMOS) technology. As a result, the full-featured MIMD element is estimated to be 3 mm², showing the feasibility of an 8-parallel-MIMD-element associative processor in a single chip of 5 mm × 5 mm.

  10. An efficient sparse matrix multiplication scheme for the CYBER 205 computer

    NASA Technical Reports Server (NTRS)

    Lambiotte, Jules J., Jr.

    1988-01-01

    This paper describes the development of an efficient algorithm for computing the product of a matrix and vector on a CYBER 205 vector computer. The desire to provide software which allows the user to choose between the often conflicting goals of minimizing central processing unit (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of four types of storage is selected for each diagonal. The candidate storage types employed were chosen to be efficient on the CYBER 205 for diagonals which have nonzero structure which is dense, moderately sparse, very sparse and short, or very sparse and long; however, for many densities, no diagonal type is most efficient with respect to both resource requirements, and a trade-off must be made. For each diagonal, an initialization subroutine estimates the CPU time and storage required for each storage type based on results from previously performed numerical experimentation. These requirements are adjusted by weights provided by the user which reflect the relative importance the user places on the two resources. The adjusted resource requirements are then compared to select the most efficient storage and computational scheme.
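
    For the dense-diagonal storage type, the product reduces to one long multiply-add per stored diagonal, which is ideal for a vector pipeline. A minimal NumPy sketch of that one storage type (the adaptive selection among four types and the CYBER 205 tuning are not reproduced):

```python
import numpy as np

def dia_matvec(diags, offsets, x):
    """y = A x with A stored by diagonals: diags[k] holds the nonzeros of
    the diagonal at offsets[k]. Each diagonal contributes a single long
    vector multiply-add, the operation a vector pipeline executes best."""
    n = x.size
    y = np.zeros(n)
    for d, off in zip(diags, offsets):
        if off >= 0:                       # superdiagonal: rows 0 .. n-1-off
            y[:n - off] += d * x[off:]
        else:                              # subdiagonal: rows -off .. n-1
            y[-off:] += d * x[:n + off]
    return y

# 1-D Laplacian, n = 5: main diagonal 2, first off-diagonals -1
n = 5
diags = [np.full(n, 2.0), np.full(n - 1, -1.0), np.full(n - 1, -1.0)]
y = dia_matvec(diags, [0, 1, -1], np.ones(n))
```

For a sparse diagonal, storing it densely as above wastes memory on zeros, which is exactly the CPU-time-versus-storage trade-off the initialization subroutine weighs.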

  11. Addressing Modeling Challenges in Cyber-Physical Systems

    DTIC Science & Technology

    2011-03-04

    A. Lee and Eleftherios Matsikoudis. The semantics of dataflow with firing. In Gérard Huet, Gordon Plotkin, Jean-Jacques Lévy, and Yves Bertot...Computer-Aided Design of Integrated Circuits and Systems, 20(3), 2001. [12] Luca P. Carloni, Roberto Passerone, Alessandro Pinto, and Alberto Sangiovanni...gst/fullpage.html?res= 9504EFDA1738F933A2575AC0A9679C8B63. 20 [15] Abhijit Davare, Douglas Densmore, Trevor Meyerowitz, Alessandro Pinto, Alberto

  12. Spatial data analytics on heterogeneous multi- and many-core parallel architectures using python

    USGS Publications Warehouse

    Laura, Jason R.; Rey, Sergio J.

    2017-01-01

    Parallel vector spatial analysis concerns the application of parallel computational methods to facilitate vector-based spatial analysis. The history of parallel computation in spatial analysis is reviewed, and this work is placed into the broader context of high-performance computing (HPC) and parallelization research. The rise of cyber infrastructure and its manifestation in spatial analysis as CyberGIScience is seen as a main driver of renewed interest in parallel computation in the spatial sciences. Key problems in spatial analysis that have been the focus of parallel computing are covered. Chief among these are spatial optimization problems, computational geometric problems including polygonization and spatial contiguity detection, the use of Monte Carlo Markov chain simulation in spatial statistics, and parallel implementations of spatial econometric methods. Future directions for research on parallelization in computational spatial analysis are outlined.

  13. Solving the Cauchy-Riemann equations on parallel computers

    NASA Technical Reports Server (NTRS)

    Fatoohi, Raad A.; Grosch, Chester E.

    1987-01-01

    Discussed is the implementation of a single algorithm on three parallel-vector computers. The algorithm is a relaxation scheme for the solution of the Cauchy-Riemann equations, a set of coupled first-order partial differential equations. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit-serial processors; the FLEX/32, an MIMD machine with 20 processors; and the CRAY/2, an MIMD machine with four vector processors. The machine architectures are briefly described. The implementation of the algorithm is discussed in relation to these architectures, and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Conclusions are presented.

  14. Dual-scale topology optoelectronic processor.

    PubMed

    Marsden, G C; Krishnamoorthy, A V; Esener, S C; Lee, S H

    1991-12-15

    The dual-scale topology optoelectronic processor (D-STOP) is a parallel optoelectronic architecture for matrix algebraic processing. The architecture can be used for matrix-vector multiplication and two types of vector outer product. The computations are performed electronically, which allows multiplication and summation concepts in linear algebra to be generalized to various nonlinear or symbolic operations. This generalization permits the application of D-STOP to many computational problems. The architecture uses a minimum number of optical transmitters, which thereby reduces fabrication requirements while maintaining area-efficient electronics. The necessary optical interconnections are space invariant, minimizing space-bandwidth requirements.
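    The abstract's point that the multiply and sum of linear algebra can be generalized to other operations is easy to make concrete. The sketch below is ours, not the D-STOP implementation: the same matrix-vector routine is run once over the usual (+, ×) pair and once over the (min, +) "tropical" pair used in shortest-path relaxation.

```python
from functools import reduce

def general_matvec(A, x, mul, add, identity):
    """Matrix-vector product over an arbitrary (add, mul) pair:
    y[i] = add-reduction over j of mul(A[i][j], x[j])."""
    return [reduce(add, (mul(a, b) for a, b in zip(row, x)), identity)
            for row in A]

A = [[1, 2], [3, 4]]
x = [10, 20]

# ordinary linear algebra: (+, *) with identity 0
linear = general_matvec(A, x, lambda a, b: a * b, lambda a, b: a + b, 0)
# tropical semiring: (min, +) with identity +inf
tropical = general_matvec(A, x, lambda a, b: a + b, min, float('inf'))
# linear == [50, 110], tropical == [11, 13]
```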

  15. Implementation of kernels on the Maestro processor

    NASA Astrophysics Data System (ADS)

    Suh, Jinwoo; Kang, D. I. D.; Crago, S. P.

    Currently, most microprocessors use multiple cores to increase performance while limiting power usage. Some processors use not just a few cores, but tens of cores or even 100 cores. One such many-core microprocessor is the Maestro processor, which is based on Tilera's TILE64 processor. The Maestro chip is a 49-core, general-purpose, radiation-hardened processor designed for space applications. The Maestro processor, unlike the TILE64, has a floating point unit (FPU) in each core for improved floating-point performance. The Maestro processor runs at a 342 MHz clock frequency. On the Maestro processor, we implemented several widely used kernels: matrix multiplication, vector add, FIR filter, and FFT. We measured and analyzed the performance of these kernels. The achieved performance was up to 5.7 GFLOPS, and the speedup compared to a single tile was up to 49 using 49 tiles.

  16. A high performance linear equation solver on the VPP500 parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakanishi, Makoto; Ina, Hiroshi; Miura, Kenichi

    1994-12-31

    This paper describes the implementation of two high-performance linear equation solvers developed for the Fujitsu VPP500, a distributed memory parallel supercomputer system. The solvers take advantage of the key architectural features of the VPP500: (1) scalability for an arbitrary number of processors up to 222 processors, (2) flexible data transfer among processors provided by a crossbar interconnection network, (3) vector processing capability on each processor, and (4) overlapped computation and transfer. The general linear equation solver based on the blocked LU decomposition method achieves 120.0 GFLOPS performance with 100 processors in the LINPACK Highly Parallel Computing benchmark.
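    The blocked LU decomposition the solver is built on can be sketched in scalar form. This is an illustration under simplifying assumptions (no pivoting, row-major lists), not the VPP500 code: the matrix is factored panel by panel so that most of the work lands in the rank-nb trailing update, a block operation well suited to vector hardware.

```python
def lu_blocked_inplace(A, nb=2):
    """Right-looking blocked LU without pivoting. After the call, A holds
    L (unit lower triangle, ones implicit) and U (upper triangle)."""
    n = len(A)
    for k0 in range(0, n, nb):
        k1 = min(k0 + nb, n)
        # 1) unblocked LU of the current panel (columns k0:k1, all rows below)
        for k in range(k0, k1):
            for i in range(k + 1, n):
                A[i][k] /= A[k][k]
                for j in range(k + 1, k1):
                    A[i][j] -= A[i][k] * A[k][j]
        # 2) triangular solve for the block row: U12 = L11^{-1} * A12
        for j in range(k1, n):
            for i in range(k0, k1):
                for k in range(k0, i):
                    A[i][j] -= A[i][k] * A[k][j]
        # 3) rank-nb update of the trailing submatrix: A22 -= L21 * U12
        for i in range(k1, n):
            for j in range(k1, n):
                A[i][j] -= sum(A[i][k] * A[k][j] for k in range(k0, k1))
    return A
```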

  17. A computer program for estimation from incomplete multinomial data

    NASA Technical Reports Server (NTRS)

    Credeur, K. R.

    1978-01-01

    Coding is given for maximum likelihood and Bayesian estimation of the vector p of multinomial cell probabilities from incomplete data. Also included is coding to calculate and approximate elements of the posterior mean and covariance matrices. The program is written in the FORTRAN IV language for the Control Data CYBER 170 series digital computer system with network operating system (NOS) 1.1. The program requires approximately 44000 octal locations of core storage. A typical case requires from 72 seconds to 92 seconds on CYBER 175, depending on the value of the prior parameter.

  18. Real-Time Symbol Extraction From Grey-Level Images

    NASA Astrophysics Data System (ADS)

    Massen, R.; Simnacher, M.; Rosch, J.; Herre, E.; Wuhrer, H. W.

    1988-04-01

    A VME-bus image pipeline processor for extracting vectorized contours from grey-level images in real time is presented. This processor, performing 3 billion operations per second, uses large-kernel convolvers and new nonlinear neighbourhood-processing algorithms to compute true one-pixel-wide, noise-free contours without thresholding, even from grey-level images with widely varying edge sharpness. The local edge orientation is used as an additional cue to compute a list of vectors describing the closed and open contours in real time and to dump a CAD-like symbolic image description into a symbol memory at pixel clock rate.

  19. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    NASA Technical Reports Server (NTRS)

    Samba, A. S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail, and its performance on a three-parameter model problem is illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is Customized Reduction of Augmented Triangles (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm: it is tailored to the pipeline architecture of the VPS 32 and, as a consequence, is implicitly vectorizable.
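    The point cyclic reduction that VCR adapts can be sketched for a single tridiagonal system. This is our scalar, recursive illustration; the vectorized form operates on whole diagonals at once, exploiting the fact that all eliminations within a level are mutually independent.

```python
def solve_tridiag_cr(a, b, c, d):
    """Cyclic reduction for a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]
    (with a[0] == c[-1] == 0). Each level eliminates the even-indexed
    unknowns, halving the system; the independent per-level eliminations
    are what makes the method vectorizable."""
    n = len(b)
    if n == 1:
        return [d[0] / b[0]]
    A, B, C, D = [], [], [], []
    for i in range(1, n, 2):            # keep the odd-indexed equations
        al = -a[i] / b[i - 1]
        ga = -c[i] / b[i + 1] if i + 1 < n else 0.0
        A.append(al * a[i - 1])
        B.append(b[i] + al * c[i - 1] + (ga * a[i + 1] if i + 1 < n else 0.0))
        C.append(ga * c[i + 1] if i + 1 < n else 0.0)
        D.append(d[i] + al * d[i - 1] + (ga * d[i + 1] if i + 1 < n else 0.0))
    xr = solve_tridiag_cr(A, B, C, D)   # recurse on the half-size system
    x = [0.0] * n
    x[1::2] = xr
    for i in range(0, n, 2):            # back-substitute eliminated unknowns
        s = d[i]
        if i > 0:
            s -= a[i] * x[i - 1]
        if i + 1 < n:
            s -= c[i] * x[i + 1]
        x[i] = s / b[i]
    return x
```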

  20. Generic accelerated sequence alignment in SeqAn using vectorization and multi-threading.

    PubMed

    Rahn, René; Budach, Stefan; Costanza, Pascal; Ehrhardt, Marcel; Hancox, Jonny; Reinert, Knut

    2018-05-03

    Pairwise sequence alignment is undoubtedly a central tool in many bioinformatics analyses. In this paper, we present a generically accelerated module for pairwise sequence alignments applicable for a broad range of applications. In our module, we unified the standard dynamic programming kernel used for pairwise sequence alignments and extended it with a generalized inter-sequence vectorization layout, such that many alignments can be computed simultaneously by exploiting SIMD (Single Instruction Multiple Data) instructions of modern processors. We then extended the module by adding two layers of thread-level parallelization, where we a) distribute many independent alignments on multiple threads and b) inherently parallelize a single alignment computation using a work stealing approach producing a dynamic wavefront progressing along the minor diagonal. We evaluated our alignment vectorization and parallelization on different processors, including the newest Intel® Xeon® (Skylake) and Intel® Xeon Phi™ (KNL) processors, and on several use cases. The instruction set AVX512-BW (Byte and Word), available on Skylake processors, can genuinely improve the performance of vectorized alignments. We could run single alignments 1600 times faster on the Xeon Phi™ and 1400 times faster on the Xeon® than executing them with our previous sequential alignment module. The module is programmed in C++ using the SeqAn (Reinert et al., 2017) library and distributed with version 2.4 under the BSD license. We support SSE4, AVX2 and AVX512 instructions and included UME::SIMD, a SIMD-instruction wrapper library, to extend our module for further instruction sets. We thoroughly test all alignment components with all major C++ compilers on various platforms. Contact: rene.rahn@fu-berlin.de.
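    The inter-sequence layout can be illustrated with a toy version of the idea. The sketch below is ours and uses plain Levenshtein distance rather than SeqAn's scored alignments: every DP cell update is applied across all sequence pairs in lockstep, each Python list position standing in for one SIMD lane.

```python
def batch_edit_distance(pairs):
    """Levenshtein distances for many (s, t) pairs computed in lockstep,
    mirroring inter-sequence SIMD vectorization. For simplicity all first
    strings share one length and all second strings share another; real
    layouts pad or sort sequences by length."""
    m, n = len(pairs[0][0]), len(pairs[0][1])
    lanes = len(pairs)
    prev = [[j] * lanes for j in range(n + 1)]   # DP row i-1, per lane
    for i in range(1, m + 1):
        cur = [[i] * lanes]
        for j in range(1, n + 1):
            cell = [min(prev[j][k] + 1,          # deletion
                        cur[j - 1][k] + 1,       # insertion
                        prev[j - 1][k]           # match / substitution
                        + (pairs[k][0][i - 1] != pairs[k][1][j - 1]))
                    for k in range(lanes)]
            cur.append(cell)
        prev = cur
    return prev[n]
```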

  1. Implementation of an ADI method on parallel computers

    NASA Technical Reports Server (NTRS)

    Fatoohi, Raad A.; Grosch, Chester E.

    1987-01-01

    The implementation of an ADI method for solving the diffusion equation on three parallel/vector computers is discussed. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit serial processors; FLEX/32, an MIMD machine with 20 processors; and CRAY/2, an MIMD machine with four vector processors. The Gaussian elimination algorithm is used to solve a set of tridiagonal systems on the FLEX/32 and CRAY/2 while the cyclic elimination algorithm is used to solve these systems on the MPP. The implementation of the method is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.
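    The serial building block used on the FLEX/32 and CRAY/2, Gaussian elimination on a tridiagonal system (the Thomas algorithm), is short enough to sketch. This is an illustration, not the paper's code; note that its forward recurrence is inherently sequential, which is why the SIMD MPP uses cyclic elimination instead.

```python
def thomas(a, b, c, d):
    """Tridiagonal Gaussian elimination without pivoting:
    a = sub-, b = main, c = super-diagonal, d = right-hand side."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                     # forward sweep (sequential)
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```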

  2. Implementation of an ADI method on parallel computers

    NASA Technical Reports Server (NTRS)

    Fatoohi, Raad A.; Grosch, Chester E.

    1987-01-01

    In this paper the implementation of an ADI method for solving the diffusion equation on three parallel/vector computers is discussed. The computers were chosen so as to encompass a variety of architectures. They are the MPP, an SIMD machine with 16-Kbit serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2, an MIMD machine with four vector processors. The Gaussian elimination algorithm is used to solve a set of tridiagonal systems on the Flex/32 and Cray/2 while the cyclic elimination algorithm is used to solve these systems on the MPP. The implementation of the method is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally conclusions are presented.

  3. Optimizing Performance of Combustion Chemistry Solvers on Intel's Many Integrated Core (MIC) Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, Hariswaran; Grout, Ray W

    This work investigates novel algorithm designs and optimization techniques for restructuring chemistry integrators in zero and multidimensional combustion solvers, which can then be effectively used on the emerging generation of Intel's Many Integrated Core/Xeon Phi processors. These processors offer increased computing performance via a large number of lightweight cores at relatively lower clock speeds compared to traditional processors (e.g. Intel Sandybridge/Ivybridge) used in current supercomputers. This style of processor can be productively used for chemistry integrators that form a costly part of computational combustion codes, in spite of their relatively lower clock speeds. Performance commensurate with traditional processors is achieved here through the combination of careful memory layout, exposing multiple levels of fine-grain parallelism, and extensive use of vendor-supported libraries (Cilk Plus and Math Kernel Libraries). Important optimization techniques for efficient memory usage and vectorization have been identified and quantified. These optimizations resulted in a factor of ~3 speed-up using the Intel 2013 compiler and ~1.5 using the Intel 2017 compiler for large chemical mechanisms compared to the unoptimized version on the Intel Xeon Phi. The strategies, especially with respect to memory usage and vectorization, should also be beneficial for general-purpose computational fluid dynamics codes.

  4. User's guide to PANCOR: A panel method program for interference assessment in slotted-wall wind tunnels

    NASA Technical Reports Server (NTRS)

    Kemp, William B., Jr.

    1990-01-01

    Guidelines are presented for use of the computer program PANCOR to assess the interference due to tunnel walls and model support in a slotted wind tunnel test section at subsonic speeds. Input data requirements are described in detail and program output and general program usage are described. The program is written for effective automatic vectorization on a CDC CYBER 200 class vector processing system.

  5. 77 FR 9724 - Request for Petitions To Modify the Rules of Origin Under the Dominican Republic-Central America...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-17

    ... ministerial-level body responsible for supervising the implementation of the CAFTA-DR, agreed to consider... appropriate. Section 203(o)(3) of the Act authorizes the President to proclaim modifications to the CAFTA-DR...; and (3) the level and breadth of interest that manufacturers, processors, traders, and consumers in...

  6. Iterative color-multiplexed, electro-optical processor.

    PubMed

    Psaltis, D; Casasent, D; Carlotto, M

    1979-11-01

    A noncoherent optical vector-matrix multiplier using a linear LED source array and a linear P-I-N photodiode detector array has been combined with a 1-D adder in a feedback loop. The resultant iterative optical processor and its use in solving simultaneous linear equations are described. Operation on complex data is provided by a novel color-multiplexing system.

  7. On nonlinear finite element analysis in single-, multi- and parallel-processors

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R.; Islam, M.; Salama, M.

    1982-01-01

    Numerical solution of nonlinear equilibrium problems of structures by means of Newton-Raphson type iterations is reviewed. Each step of the iteration is shown to correspond to the solution of a linear problem, thereby establishing the feasibility of the finite element method for nonlinear analysis. Organization and flow of data for various types of digital computers, such as single-processor/single-level-memory, single-processor/two-level-memory, vector-processor/two-level-memory, and parallel-processor machines, with and without substructuring (i.e., partitioning), are given. The effect of the relative costs of computation, memory, and data transfer on substructuring is shown. The idea of assigning comparable-size substructures to parallel processors is exploited. Under Cholesky-type factorization schemes, the efficiency of parallel processing is shown to decrease due to occasionally shared data, just as it does due to shared facilities.

  8. Acoustooptic linear algebra processors - Architectures, algorithms, and applications

    NASA Technical Reports Server (NTRS)

    Casasent, D.

    1984-01-01

    Architectures, algorithms, and applications for systolic processors are described with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices with special structure and matrices of general structure, and the realization of matrix-vector, matrix-matrix, and triple-matrix products and such architectures are described. Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed. These represent the fundamental operations necessary in the implementation of least squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.

  9. Effective traffic features selection algorithm for cyber-attacks samples

    NASA Astrophysics Data System (ADS)

    Li, Yihong; Liu, Fangzheng; Du, Zhenyu

    2018-05-01

    By studying defense schemes against network attacks, this paper proposes an effective traffic-feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack-traffic feature set and a background-traffic feature set by clustering. Then, it calculates the variation in clustering performance after removing a certain feature. Finally, it evaluates the degree of distinctiveness of each feature vector according to the result; a feature vector is considered effective if its degree of distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set, reducing the dimensionality of the features and thereby the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has some advantages over other selection algorithms.
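    The remove-a-feature-and-measure idea can be shown with a toy analogue. The sketch below is our own, not the paper's method: a minimal k-means with k-means++-style seeding, where the "variation of clustering performance" is approximated by the change in inertia (within-cluster sum of squares) when a feature is dropped; all names and constants are illustrative.

```python
import random

def kmeans_inertia(points, k, iters=10, seed=0):
    """Tiny k-means: k-means++-style seeding, a few Lloyd iterations,
    returns the final inertia (within-cluster sum of squared distances)."""
    rng = random.Random(seed)
    dim = len(points[0])
    centers = [list(rng.choice(points))]
    while len(centers) < k:   # new center drawn ~ squared distance to nearest
        d2 = [min(sum((p[j] - c[j]) ** 2 for j in range(dim)) for c in centers)
              for p in points]
        r, acc = rng.random() * sum(d2), 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(list(p))
                break
    for _ in range(iters):    # Lloyd iterations
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: sum((p[j] - centers[i][j]) ** 2
                                                for j in range(dim)))
            groups[i].append(p)
        for i, g in enumerate(groups):
            if g:
                centers[i] = [sum(p[j] for p in g) / len(g) for j in range(dim)]
    return sum(min(sum((p[j] - c[j]) ** 2 for j in range(dim)) for c in centers)
               for p in points)

def feature_scores(points, k=2):
    """Score each feature by how much removing it changes the inertia."""
    base = kmeans_inertia(points, k)
    return {f: abs(kmeans_inertia([[v for j, v in enumerate(p) if j != f]
                                   for p in points], k) - base)
            for f in range(len(points[0]))}
```

Here feature 0 separates the two clusters while feature 1 is constant, so feature 0 earns the higher score.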

  10. A wideband software reconfigurable modem

    NASA Astrophysics Data System (ADS)

    Turner, J. H., Jr.; Vickers, H.

    A wideband modem is described which provides signal processing capability for four Lx-band signals employing QPSK, MSK and PPM waveforms, using a software-reconfigurable architecture for maximum system flexibility and graceful degradation. The current processor uses a 2901 and two 8086 microprocessors per channel and performs acquisition, tracking, and data demodulation for JTIDS, GPS, IFF and TACAN systems. The next-generation processor will be implemented using a VHSIC chip set employing a programmable complex array vector processor module, a GP computer module, customized gate-array modules, and a digital array correlator. This integrated processor is applicable to a wide range of system waveforms and will bring the benefits of VHSIC technology insertion to avionic antijam communications systems.

  11. SPECIAL ISSUE ON OPTICAL PROCESSING OF INFORMATION: Analysis of the precision parameters of an optoelectronic vector-matrix processor of digital information

    NASA Astrophysics Data System (ADS)

    Odinokov, S. B.; Petrov, A. V.

    1995-10-01

    Mathematical models of the components of a vector-matrix optoelectronic multiplier are considered. Perturbing factors influencing a real optoelectronic system (noise and errors of radiation sources and detectors, nonlinearity of an analogue-to-digital converter, nonideal optical systems) are taken into account. Analytic expressions are obtained relating the precision of such a multiplier to the probability of a one-bit error, to the parameters describing the quality of the multiplier components, and to the quality of the optical system of the processor. Various methods of increasing the dynamic range of a multiplier are considered at the technical-systems level.

  12. Beyond core count: a look at new mainstream computing platforms for HEP workloads

    NASA Astrophysics Data System (ADS)

    Szostek, P.; Nowak, A.; Bitzes, G.; Valsan, L.; Jarp, S.; Dotti, A.

    2014-06-01

    As Moore's Law continues to deliver more and more transistors, the mainstream processor industry is preparing to expand its investments in areas other than simple core count. These new interests include deep integration of on-chip components, advanced vector units, memory, cache and interconnect technologies. We examine these moving trends with parallelized and vectorized High Energy Physics workloads in mind. In particular, we report on practical experience resulting from experiments with scalable HEP benchmarks on the Intel "Ivy Bridge-EP" and "Haswell" processor families. In addition, we examine the benefits of the new "Haswell" microarchitecture and its impact on multiple facets of HEP software. Finally, we report on the power efficiency of new systems.

  13. A Parallel Vector Machine for the PM Programming Language

    NASA Astrophysics Data System (ADS)

    Bellerby, Tim

    2016-04-01

    PM is a new programming language which aims to make the writing of computational geoscience models on parallel hardware accessible to scientists who are not themselves expert parallel programmers. It is based around the concept of communicating operators: language constructs that enable variables local to a single invocation of a parallelised loop to be viewed as if they were arrays spanning the entire loop domain. This mechanism enables different loop invocations (which may or may not be executing on different processors) to exchange information in a manner that extends the successful Communicating Sequential Processes idiom from single messages to collective communication. Communicating operators avoid the additional synchronisation mechanisms, such as atomic variables, required when programming using the Partitioned Global Address Space (PGAS) paradigm. Using a single loop invocation as the fundamental unit of concurrency enables PM to uniformly represent different levels of parallelism from vector operations through shared memory systems to distributed grids. This paper describes an implementation of PM based on a vectorised virtual machine. On a single processor node, concurrent operations are implemented using masked vector operations. Virtual machine instructions operate on vectors of values and may be unmasked, masked using a Boolean field, or masked using an array of active vector cell locations. Conditional structures (such as if-then-else or while statement implementations) calculate and apply masks to the operations they control. A shift in mask representation from Boolean to location-list occurs when active locations become sufficiently sparse. Parallel loops unfold data structures (or vectors of data structures for nested loops) into vectors of values that may additionally be distributed over multiple computational nodes and then split into micro-threads compatible with the size of the local cache. 
Inter-node communication is accomplished using standard OpenMP and MPI. Performance analyses of the PM vector machine, demonstrating its scaling properties with respect to domain size and the number of processor nodes will be presented for a range of hardware configurations. The PM software and language definition are being made available under unrestrictive MIT and Creative Commons Attribution licenses respectively: www.pm-lang.org.
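    The masked-execution model described above can be sketched in plain Python. This is our illustration of the general technique, not PM's virtual machine: both branches of a conditional run as whole-vector operations, each committing only the lanes its mask selects, and the mask representation switches from Boolean to a location list once active lanes become sparse.

```python
def vm_if_then_else(cond, then_fn, else_fn, vec):
    """Masked vector conditional: evaluate both branches over every lane,
    then commit each lane from the branch its mask selects (as a masked
    vector machine would, rather than branching per element)."""
    then_vals = [then_fn(v) for v in vec]
    else_vals = [else_fn(v) for v in vec]
    return [t if m else e for m, t, e in zip(cond, then_vals, else_vals)]

def mask_representation(mask, threshold=0.25):
    """Switch from a Boolean mask to an active-location list once the
    fraction of active lanes drops below `threshold` (an illustrative cutoff)."""
    locs = [i for i, m in enumerate(mask) if m]
    if len(locs) < threshold * len(mask):
        return ('locations', locs)
    return ('boolean', mask)
```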

  14. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real-world systems. Applying neural network simulations to real-world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine-grain SIMD computers such as the CM-2 Connection Machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000-processor CM-2 Connection Machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 Connection Machine. Our mapping has virtually no communications overhead, with the exception of the communication required for a global summation across the processors (which has sub-linear runtime growth, on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper considers the simulation of only feed-forward neural networks, although the method is extendable to recurrent networks.
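    The O(log P) global summation mentioned above is a pairwise (tree) reduction; a sketch of the idea (ours, with each pass over the list standing in for one parallel step across processors):

```python
def tree_sum(values):
    """Pairwise tree reduction: halve the number of partial sums per step,
    so P values need only ceil(log2(P)) parallel steps."""
    vals = list(values)
    steps = 0
    while len(vals) > 1:
        vals = [vals[i] + vals[i + 1] if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
        steps += 1
    return vals[0], steps

total, depth = tree_sum(range(8))   # total == 28 after 3 steps
```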

  15. Potential of minicomputer/array-processor system for nonlinear finite-element analysis

    NASA Technical Reports Server (NTRS)

    Strohkorb, G. A.; Noor, A. K.

    1983-01-01

    The potential of using a minicomputer/array-processor system for the efficient solution of large-scale, nonlinear, finite-element problems is studied. A Prime 750 is used as the host computer, and a software simulator residing on the Prime is employed to assess the performance of the Floating Point Systems AP-120B array processor. Major hardware characteristics of the system such as virtual memory and parallel and pipeline processing are reviewed, and the interplay between various hardware components is examined. Effective use of the minicomputer/array-processor system for nonlinear analysis requires the following: (1) proper selection of the computational procedure and the capability to vectorize the numerical algorithms; (2) reduction of input-output operations; and (3) overlapping host and array-processor operations. A detailed discussion is given of techniques to accomplish each of these tasks. Two benchmark problems with 1715 and 3230 degrees of freedom, respectively, are selected to measure the anticipated gain in speed obtained by using the proposed algorithms on the array processor.

  16. Three-dimensional flow over a conical afterbody containing a centered propulsive jet: A numerical simulation

    NASA Technical Reports Server (NTRS)

    Deiwert, G. S.; Rothmund, H.

    1984-01-01

    The supersonic flow field over a body of revolution incident to the free stream is simulated numerically on a large array processor (the CDC CYBER 205). The configuration is composed of a cone-cylinder forebody followed by a conical afterbody from which emanates a centered, supersonic propulsive jet. The free-stream Mach number is 2, the jet-exit Mach number is 2.5, and the jet-to-free-stream static pressure ratio is 3. Both the external flow and the exhaust are ideal air at a common total temperature.

  17. Cyber security evaluation of II&C technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Ken

    The Light Water Reactor Sustainability (LWRS) Program is a research and development program sponsored by the Department of Energy, which is conducted in close collaboration with industry to provide the technical foundations for licensing and managing the long-term, safe and economical operation of current nuclear power plants. The LWRS Program serves to help the US nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. Within the LWRS Program, the Advanced Instrumentation, Information, and Control (II&C) Systems Technologies Pathway conducts targeted research and development (R&D) to address aging and reliability concerns with the legacy instrumentation and control and related information systems of the U.S. operating light water reactor (LWR) fleet. The II&C Pathway is conducted by Idaho National Laboratory (INL). Cyber security is a common concern among nuclear utilities and other nuclear industry stakeholders regarding the digital technologies that are being developed under this program. This concern extends to the point of calling into question whether these types of technologies could ever be deployed in nuclear plants given the possibility that the information in them can be compromised and the technologies themselves can potentially be exploited to serve as attack vectors for adversaries. To this end, a cyber security evaluation has been conducted of these technologies to determine whether they constitute a threat beyond what the nuclear plants already manage within their regulatory-required cyber security programs. Specifically, the evaluation is based on NEI 08-09, which is the industry’s template for cyber security programs and evaluations, accepted by the Nuclear Regulatory Commission (NRC) as responsive to the requirements of the nuclear power plant cyber security regulation found in 10 CFR 73.54. 
The evaluation was conducted by a cyber security team with expertise in nuclear utility cyber security programs and experience in conducting these evaluations. The evaluation has determined that, for the most part, cyber security will not be a limiting factor in the application of these technologies to nuclear power plant applications.

  18. Content modification attacks on consensus seeking multi-agent system with double-integrator dynamics.

    PubMed

    Dong, Yimeng; Gupta, Nirupam; Chopra, Nikhil

    2016-11-01

    In this paper, vulnerability of a distributed consensus seeking multi-agent system (MAS) with double-integrator dynamics against edge-bound content modification cyber attacks is studied. In particular, we define a specific edge-bound content modification cyber attack called malignant content modification attack (MCoMA), which results in unbounded growth of an appropriately defined group disagreement vector. Properties of MCoMA are utilized to design detection and mitigation algorithms so as to impart resilience in the considered MAS against MCoMA. Additionally, the proposed detection mechanism is extended to detect the general edge-bound content modification attacks (not just MCoMA). Finally, the efficacies of the proposed results are illustrated through numerical simulations.

  19. Content modification attacks on consensus seeking multi-agent system with double-integrator dynamics

    NASA Astrophysics Data System (ADS)

    Dong, Yimeng; Gupta, Nirupam; Chopra, Nikhil

    2016-11-01

    In this paper, vulnerability of a distributed consensus seeking multi-agent system (MAS) with double-integrator dynamics against edge-bound content modification cyber attacks is studied. In particular, we define a specific edge-bound content modification cyber attack called malignant content modification attack (MCoMA), which results in unbounded growth of an appropriately defined group disagreement vector. Properties of MCoMA are utilized to design detection and mitigation algorithms so as to impart resilience in the considered MAS against MCoMA. Additionally, the proposed detection mechanism is extended to detect the general edge-bound content modification attacks (not just MCoMA). Finally, the efficacies of the proposed results are illustrated through numerical simulations.

  20. Atmospheric Modeling And Sensor Simulation (AMASS) study

    NASA Technical Reports Server (NTRS)

    Parker, K. G.

    1985-01-01

    A 4800-baud synchronous communications link was established between the Perkin-Elmer (P-E) 3250 Atmospheric Modeling and Sensor Simulation (AMASS) system and the Cyber 205 located at the Goddard Space Flight Center. An extension study of off-the-shelf array processors offering a standard interface to the Perkin-Elmer was conducted to determine which would meet the computational requirements of the division. A Floating Point Systems AP-120B was borrowed from another Marshall Space Flight Center laboratory for evaluation. It was determined that available array processors did not offer significantly more capability than the borrowed unit, although at least three other vendors indicated that standard Perkin-Elmer interfaces would be marketed in the future. Therefore, the recommendation was made to continue to utilize the 120B and to keep monitoring the array processor market. Hardware necessary to support the requirements of the ASD as well as to enhance system performance was specified and procured. Filters were implemented on the Harris/McIDAS system, including two-dimensional lowpass, gradient, Laplacian, and bicubic interpolation routines.

  1. A highly optimized vectorized code for Monte Carlo simulations of SU(3) lattice gauge theories

    NASA Technical Reports Server (NTRS)

    Barkai, D.; Moriarty, K. J. M.; Rebbi, C.

    1984-01-01

    New methods are introduced for improving the performance of the vectorized Monte Carlo SU(3) lattice gauge theory algorithm using the CDC CYBER 205. Structure, algorithm and programming considerations are discussed. The performance achieved for a 16^4 lattice on a 2-pipe system may be phrased in terms of the link update time or the overall MFLOPS rate. For 32-bit arithmetic, it is 36.3 microseconds/link for 8 hits per iteration (40.9 microseconds for 10 hits), or 101.5 MFLOPS.
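
    As a rough consistency check (assuming, as this sketch does, that the 36.3 microsecond/link figure and the 101.5 MFLOPS figure describe the same 8-hit run), the implied floating-point operation count per link update can be recovered:

```python
# Rough arithmetic relating the quoted per-link update time to the MFLOPS
# rate. Assumption: both figures (36.3 us/link, 101.5 MFLOPS) describe the
# same 8-hit, 32-bit run on the 2-pipe CYBER 205.

time_per_link_s = 36.3e-6   # seconds per link update (8 hits)
rate_flops = 101.5e6        # sustained floating-point operations per second

flops_per_link = rate_flops * time_per_link_s
print(round(flops_per_link))  # on the order of 3.7e3 operations per link
```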

  2. Multitasking 3-D forward modeling using high-order finite difference methods on the Cray X-MP/416

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terki-Hassaine, O.; Leiss, E.L.

    1988-01-01

    The CRAY X-MP/416 was used to multitask 3-D forward modeling by the high-order finite difference method. Flowtrace analysis reveals that the most expensive operation in the unitasked program is a matrix vector multiplication. The in-core and out-of-core versions of a reentrant subroutine can perform any fraction of the matrix vector multiplication independently, a pattern compatible with multitasking. The matrix vector multiplication routine can be distributed over two to four processors. The rest of the program utilizes the microtasking feature that lets the system treat independent iterations of DO-loops as subtasks to be performed by any available processor. The availability of the Solid-State Storage Device (SSD) meant the I/O wait time was virtually zero. A performance study determined a theoretical speedup, taking into account the multitasking overhead. Multitasking programs utilizing both macrotasking and microtasking features obtained actual speedups that were approximately 80% of the ideal speedup.
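
    The row-block decomposition described above can be sketched as follows. This is a toy serial illustration of the partitioning only, not the authors' Cray macrotasking code; all names and sizes are hypothetical:

```python
# Toy sketch of the row-partitioned matrix-vector product described above:
# each task owns a contiguous block of rows and computes its slice of y = A x
# independently, which is what makes the operation multitaskable.

def matvec_block(A, x, rows):
    """Compute y[i] = sum_j A[i][j] * x[j] for i in the half-open row range."""
    lo, hi = rows
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(lo, hi)]

def matvec_multitasked(A, x, n_tasks):
    """Split rows into n_tasks blocks; each block could run on its own CPU."""
    n = len(A)
    bounds = [round(k * n / n_tasks) for k in range(n_tasks + 1)]
    y = []
    for k in range(n_tasks):        # sequential here; conceptually parallel
        y.extend(matvec_block(A, x, (bounds[k], bounds[k + 1])))
    return y

A = [[1, 2], [3, 4], [5, 6], [7, 8]]
x = [1, -1]
print(matvec_multitasked(A, x, 2))  # [-1, -1, -1, -1], same as a serial y = A x
```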

  3. A Framework for Event Prioritization in Cyber Network Defense

    DTIC Science & Technology

    2014-07-15

    U.S.," Reuters (US online edition), http://www.reuters.com/article/2013/11/06/net-us-usa-china-hacking-idUSBRE9A51AN20131106, 6 Nov 2013. [2] ...expressed individually, or as a single vector by using the magnitude of these three factors (cell J5 in Table 5).

  4. Selection of optimum median-filter-based ambiguity removal algorithm parameters for NSCAT. [NASA scatterometer

    NASA Technical Reports Server (NTRS)

    Shaffer, Scott; Dunbar, R. Scott; Hsiao, S. Vincent; Long, David G.

    1989-01-01

    The NASA Scatterometer, NSCAT, is an active spaceborne radar designed to measure the normalized radar backscatter coefficient (sigma0) of the ocean surface. These measurements can, in turn, be used to infer the surface vector wind over the ocean using a geophysical model function. Several ambiguous wind vectors result because of the nature of the model function. A median-filter-based ambiguity removal algorithm will be used by the NSCAT ground data processor to select the best wind vector from the set of ambiguous wind vectors. This process is commonly known as dealiasing or ambiguity removal. The baseline NSCAT ambiguity removal algorithm and the method used to select the set of optimum parameter values are described. An extensive simulation of the NSCAT instrument and ground data processor provides a means of testing the resulting tuned algorithm. This simulation generates the ambiguous wind-field vectors expected from the instrument as it orbits over a set of realistic mesoscale wind fields. The ambiguous wind field is then dealiased using the median-based ambiguity removal algorithm. Performance is measured by comparison of the unambiguous wind fields with the true wind fields. Results have shown that the median-filter-based ambiguity removal algorithm satisfies NSCAT mission requirements.
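
    A greatly simplified illustration of the median-filter idea (a hypothetical 1-D, direction-only toy, not the actual NSCAT algorithm): each cell repeatedly re-selects the ambiguity closest to the median of the currently selected directions in its neighborhood:

```python
# Greatly simplified sketch of median-filter ambiguity removal. Hypothetical
# 1-D, direction-only version; the real NSCAT filter works on 2-D wind fields.
# Each cell holds several ambiguous wind directions (degrees); we iteratively
# re-select, per cell, the ambiguity closest to the median of the neighbors'
# currently selected directions.

def median(vals):
    s = sorted(vals)
    return s[len(s) // 2]

def remove_ambiguity(ambiguities, passes=5):
    # Initialize with the first-ranked ambiguity in each cell.
    chosen = [cell[0] for cell in ambiguities]
    for _ in range(passes):
        for i, cell in enumerate(ambiguities):
            nbrs = chosen[max(0, i - 1): i + 2]   # window including self
            m = median(nbrs)
            chosen[i] = min(cell, key=lambda d: abs(d - m))
    return chosen

# Field of ~30 degree winds where one cell starts on a 180-degree alias.
field = [[30, 210], [30, 210], [210, 30], [30, 210]]
print(remove_ambiguity(field))  # the flipped cell is corrected to 30
```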

  5. Optical laboratory solution and error model simulation of a linear time-varying finite element equation

    NASA Technical Reports Server (NTRS)

    Taylor, B. K.; Casasent, D. P.

    1989-01-01

    The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.

  6. Using algebra for massively parallel processor design and utilization

    NASA Technical Reports Server (NTRS)

    Campbell, Lowell; Fellows, Michael R.

    1990-01-01

    This paper summarizes the authors' advances in the design of dense processor networks. It reports a collection of recent constructions of dense symmetric networks that provide the largest known values for the number of nodes that can be placed in a network of a given degree and diameter. The constructions are in the range of current potential engineering significance and are based on groups of automorphisms of finite-dimensional vector spaces.

  7. Parallel and fault-tolerant algorithms for hypercube multiprocessors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aykanat, C.

    1988-01-01

    Several techniques for increasing the performance of parallel algorithms on distributed-memory message-passing multi-processor systems are investigated. These techniques are effectively implemented for the parallelization of the Scaled Conjugate Gradient (SCG) algorithm on a hypercube-connected message-passing multi-processor. Significant performance improvement is achieved by using these techniques. The SCG algorithm is used for the solution phase of an FE modeling system. Almost linear speed-up is achieved, and it is shown that the hypercube topology is scalable for this class of FE problems. The SCG algorithm is also shown to be suitable for vectorization, and near supercomputer performance is achieved on a vector hypercube multiprocessor by exploiting both parallelization and vectorization. Fault-tolerance issues for the parallel SCG algorithm and for the hypercube topology are also addressed.

  8. Multi-color incomplete Cholesky conjugate gradient methods for vector computers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Poole, E. L.

    1986-01-01

    In this research, we are concerned with the solution on vector computers of linear systems of equations, Ax = b, where A is a large, sparse, symmetric positive definite matrix. We solve the system using an iterative method, the incomplete Cholesky conjugate gradient method (ICCG). We apply a multi-color strategy to obtain p-color matrices for which a block-oriented ICCG method is implemented on the CYBER 205. (A p-colored matrix is a matrix which can be partitioned into a p×p block matrix where the diagonal blocks are diagonal matrices.) This algorithm, which is based on a no-fill strategy, achieves O(N/p) length vector operations in both the decomposition of A and in the forward and back solves necessary at each iteration of the method. We discuss the natural ordering of the unknowns as an ordering that minimizes the number of diagonals in the matrix and define multi-color orderings in terms of disjoint sets of the unknowns. We give necessary and sufficient conditions to determine which multi-color orderings of the unknowns correspond to p-color matrices. A performance model is given which is used both to predict execution time for ICCG methods and also to compare an ICCG method to conjugate gradient without preconditioning or to another ICCG method. Results are given from runs on the CYBER 205 at NASA's Langley Research Center for four model problems.
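
    The p-color idea can be sketched for the simplest case, p = 2 (red-black ordering), on a 1-D model problem. This toy check is not taken from the thesis; it only verifies that the permuted matrix has the block structure described above, with diagonal blocks that are themselves diagonal:

```python
# Sketch of the p-color idea for p = 2 (red-black ordering) on the 1-D model
# problem: after permuting the unknowns so all even-indexed ("red") points
# come first, the two diagonal blocks of the permuted matrix are diagonal,
# which is exactly the structure the block ICCG method exploits.

def laplacian_1d(n):
    """Tridiagonal 1-D Laplacian: 2 on the diagonal, -1 off-diagonal."""
    return [[2 if i == j else -1 if abs(i - j) == 1 else 0
             for j in range(n)] for i in range(n)]

def permute(A, order):
    """Symmetric permutation of rows and columns by the given ordering."""
    return [[A[i][j] for j in order] for i in order]

n = 6
red = [i for i in range(n) if i % 2 == 0]    # color 1
black = [i for i in range(n) if i % 2 == 1]  # color 2
P = permute(laplacian_1d(n), red + black)

nr = len(red)
# Neither diagonal block contains an off-diagonal nonzero:
red_block_ok = all(P[i][j] == 0 for i in range(nr) for j in range(nr) if i != j)
black_block_ok = all(P[i][j] == 0 for i in range(nr, n) for j in range(nr, n) if i != j)
print(red_block_ok and black_block_ok)  # True
```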

  9. Cyber threat impact assessment and analysis for space vehicle architectures

    NASA Astrophysics Data System (ADS)

    McGraw, Robert M.; Fowler, Mark J.; Umphress, David; MacDonald, Richard A.

    2014-06-01

    This paper covers research into an assessment of potential impacts and techniques to detect and mitigate cyber attacks that affect the networks and control systems of space vehicles. Such systems, if subverted by malicious insiders, external hackers and/or supply chain threats, can be controlled in a manner to cause physical damage to the space platforms. Similar attacks on Earth-borne cyber physical systems include the Shamoon, Duqu, Flame and Stuxnet exploits. These have been used to bring down foreign power generation and refining systems. This paper discusses the potential impacts of similar cyber attacks on space-based platforms through the use of simulation models, including custom models developed in Python using SimPy and commercial SATCOM analysis tools, for example STK/SOLIS. The paper discusses the architecture and fidelity of the simulation model that has been developed for performing the impact assessment. The paper walks through the application of an attack vector at the subsystem level and how it affects the control and orientation of the space vehicle. SimPy is used to model and extract raw impact data at the bus level, while STK/SOLIS is used to extract raw impact data at the subsystem level and to visually display the effect on the physical plant of the space vehicle.

  10. Characterization of Endogenous Plasmids from Lactobacillus salivarius UCC118▿ †

    PubMed Central

    Fang, Fang; Flynn, Sarah; Li, Yin; Claesson, Marcus J.; van Pijkeren, Jan-Peter; Collins, J. Kevin; van Sinderen, Douwe; O'Toole, Paul W.

    2008-01-01

    The genome of Lactobacillus salivarius UCC118 comprises a 1.83-Mb chromosome, a 242-kb megaplasmid (pMP118), and two smaller plasmids of 20 kb (pSF118-20) and 44 kb (pSF118-44). Annotation and bioinformatic analyses suggest that both of the smaller plasmids replicate by a theta replication mechanism. Furthermore, it appears that they are transmissible, although neither possesses a complete set of conjugation genes. Plasmid pSF118-20 encodes a toxin-antitoxin system composed of pemI and pemK homologs, and this plasmid could be cured when PemI was produced in trans. The minimal replicon of pSF118-20 was determined by deletion analysis. Shuttle vector derivatives of pSF118-20 were generated that included the replication region (pLS203) and the replication region plus mobilization genes (pLS208). The plasmid pLS203 was stably maintained without selection in Lactobacillus plantarum, Lactobacillus fermentum, and the pSF118-20-cured derivative strain of L. salivarius UCC118 (strain LS201). Cloning in pLS203 of genes encoding luciferase and green fluorescent protein, and expression from a constitutive L. salivarius promoter, demonstrated the utility of this vector for the expression of heterologous genes in Lactobacillus. This study thus expands the knowledge base and vector repertoire of probiotic lactobacilli. PMID:18390685

  11. Documentation of the GLAS fourth order general circulation model. Volume 2: Scalar code

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Balgovind, R.; Chao, W.; Edelmann, D.; Pfaendtner, J.; Takacs, L.; Takano, K.

    1983-01-01

    Volume 2 of a three-volume technical memorandum contains detailed documentation of the GLAS fourth-order general circulation model. It presents the CYBER 205 scalar and vector codes of the model, a list of variables, and cross references. A variable-name dictionary for the scalar code and the code listings are included.

  12. Research in computer science

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1984-01-01

    The research efforts of University of Virginia students under a NASA sponsored program are summarized and the status of the program is reported. The research includes: testing method evaluations for N version programming; a representation scheme for modeling three dimensional objects; fault tolerant protocols for real time local area networks; performance investigation of Cyber network; XFEM implementation; and vectorizing incomplete Cholesky conjugate gradients.

  13. Hypercluster Parallel Processor

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Cole, Gary L.; Milner, Edward J.; Quealy, Angela

    1992-01-01

    Hypercluster computer system includes multiple digital processors, operation of which coordinated through specialized software. Configurable according to various parallel-computing architectures of shared-memory or distributed-memory class, including scalar computer, vector computer, reduced-instruction-set computer, and complex-instruction-set computer. Designed as flexible, relatively inexpensive system that provides single programming and operating environment within which one can investigate effects of various parallel-computing architectures and combinations on performance in solution of complicated problems like those of three-dimensional flows in turbomachines. Hypercluster software and architectural concepts are in public domain.

  14. Time-variant analysis of rotorcraft systems dynamics - An exploitation of vector processors

    NASA Technical Reports Server (NTRS)

    Amirouche, F. M. L.; Xie, M.; Shareef, N. H.

    1993-01-01

    In this paper a generalized algorithmic procedure is presented for handling constraints in mechanical transmissions. The latter are treated as multibody systems of interconnected rigid/flexible bodies. The constraint Jacobian matrices are generated automatically and suitably updated in time, depending on the geometrical and kinematical constraint conditions describing the interconnection between shafts or gears. The types of constraints are classified based on the interconnection of the bodies by assuming that one or more points of contact exist between them. The effects due to elastic deformation of the flexible bodies are included by allowing each body element to undergo small deformations. The procedure is based on recursively formulated Kane's dynamical equations of motion and the finite element method, including the concept of geometrical stiffening effects. The method is implemented on an IBM-3090-600j vector processor with pipe-lining capabilities. A significant increase in the speed of execution is achieved by vectorizing the developed code in computationally intensive areas. An example consisting of two meshing disks rotating at high angular velocity is presented. Applications are intended for the study of the dynamic behavior of helicopter transmissions.

  15. Application of graph-based semi-supervised learning for development of cyber COP and network intrusion detection

    NASA Astrophysics Data System (ADS)

    Levchuk, Georgiy; Colonna-Romano, John; Eslami, Mohammed

    2017-05-01

    The United States increasingly relies on cyber-physical systems to conduct military and commercial operations. Attacks on these systems have increased dramatically around the globe. The attackers constantly change their methods, making state-of-the-art commercial and military intrusion detection systems ineffective. In this paper, we present a model to identify functional behavior of network devices from netflow traces. Our model includes two innovations. First, we define novel features for a host IP using detection of application graph patterns in IP's host graph constructed from 5-min aggregated packet flows. Second, we present the first application, to the best of our knowledge, of Graph Semi-Supervised Learning (GSSL) to the space of IP behavior classification. Using a cyber-attack dataset collected from NetFlow packet traces, we show that GSSL trained with only 20% of the data achieves higher attack detection rates than Support Vector Machines (SVM) and Naïve Bayes (NB) classifiers trained with 80% of data points. We also show how to improve detection quality by filtering out web browsing data, and conclude with discussion of future research directions.
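
    A minimal sketch of graph-based semi-supervised learning in the label-propagation style (the toy graph, seed labels, and update rule are illustrative assumptions, not the paper's GSSL formulation):

```python
# Minimal label-propagation sketch of graph-based semi-supervised learning
# (GSSL): a few known labels are clamped, and every unlabeled node repeatedly
# takes the average label score of its neighbors. Graph and labels are toys.

def label_propagation(adj, seeds, iters=100):
    """adj: {node: [neighbors]}; seeds: {node: +1.0 or -1.0} (e.g. benign/attack)."""
    score = {v: seeds.get(v, 0.0) for v in adj}
    for _ in range(iters):
        for v in adj:
            if v in seeds:
                continue                    # labeled nodes stay clamped
            nbrs = adj[v]
            if nbrs:
                score[v] = sum(score[u] for u in nbrs) / len(nbrs)
    return {v: (1 if s >= 0 else -1) for v, s in score.items()}

# Two loosely connected clusters; one labeled node per cluster.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
seeds = {0: 1.0, 5: -1.0}
print(label_propagation(adj, seeds))  # each cluster inherits its seed's label
```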

  16. The Sequential Implementation of Array Processors when there is Directional Uncertainty

    DTIC Science & Technology

    1975-08-01

    University of Washington kindly supplied office space and computing facilities. The author has benefited greatly from discussions with several other... [Glossary of symbols, fragmentary: Q^-1, inverse of Q; L, general observation space; R, general vector of observations; R_K, general observation vector of dimension K; R_i, ith observation; R^m, real vector space of dimension m; R(T), autocorrelation.]

  17. Development and application of the GIM code for the Cyber 203 computer

    NASA Technical Reports Server (NTRS)

    Stainaker, J. F.; Robinson, M. A.; Rawlinson, E. G.; Anderson, P. G.; Mayne, A. W.; Spradley, L. W.

    1982-01-01

    The GIM computer code for fluid dynamics research was developed. Enhancement of the computer code, implicit algorithm development, turbulence model implementation, chemistry model development, interactive input module coding and wing/body flowfield computation are described. The GIM quasi-parabolic code development was completed, and the code was used to compute a number of example cases. Turbulence models, both algebraic and differential-equation types, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme were also added. Development was completed on the interactive module for generating the input data for GIM. Solutions for inviscid hypersonic flow over a wing/body configuration are also presented.

  18. Multiprocessing MCNP on an IBM RS/6000 cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinney, G.W.; West, J.T.

    1993-01-01

    The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major beneficiaries of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's Law: S(f, P) = 1/((1 - f) + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's Law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.
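
    The quoted law can be evaluated directly; a small sketch with illustrative parameter values:

```python
# Amdahl's Law as quoted above: S(f, P) = 1 / ((1 - f) + f / P), where f is
# the multiprocessed fraction of the task time and P the number of processors.

def amdahl_speedup(f, P):
    return 1.0 / ((1.0 - f) + f / P)

# With 90% of the work parallelized, four processors give well under 4x:
print(round(amdahl_speedup(0.9, 4), 2))      # 3.08
# and the limiting speedup as P grows is 1 / (1 - f):
print(round(amdahl_speedup(0.9, 10**9), 2))  # 10.0
```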

  19. Report from the MPP Working Group to the NASA Associate Administrator for Space Science and Applications

    NASA Technical Reports Server (NTRS)

    Fischer, James R.; Grosch, Chester; Mcanulty, Michael; Odonnell, John; Storey, Owen

    1987-01-01

    NASA's Office of Space Science and Applications (OSSA) gave a select group of scientists the opportunity to test and implement their computational algorithms on the Massively Parallel Processor (MPP) located at Goddard Space Flight Center, beginning in late 1985. One year later, the Working Group presented its report, which addressed the following: algorithms, programming languages, architecture, programming environments, how theory relates to practice, and measured performance. The findings point to a number of demonstrated computational techniques for which the MPP architecture is ideally suited. For example, besides executing much faster on the MPP than on conventional computers, systolic VLSI simulation (where distances are short), lattice simulation, neural network simulation, and image problems were found to be easier to program on the MPP's architecture than on a CYBER 205 or even a VAX. The report also makes technical recommendations covering all aspects of MPP use, and recommendations concerning the future of the MPP and machines based on similar architectures, expansion of the Working Group, and study of the role of future parallel processors for space station, EOS, and the Great Observatories era.

  20. Mass storage at NSA

    NASA Technical Reports Server (NTRS)

    Shields, Michael F.

    1993-01-01

    The need to manage large amounts of data on robotically controlled devices has been critical to the mission of this Agency for many years. In many respects this Agency has helped pioneer, with their industry counterparts, the development of a number of products long before these systems became commercially available. Numerous attempts have been made to field both robotically controlled tape and optical disk technology and systems to satisfy our tertiary storage needs. Custom developed products were architected, designed, and developed without vendor partners over the past two decades to field workable systems to handle our ever-increasing storage requirements. Many of the attendees of this symposium are familiar with some of the older products, such as: the Braegen Automated Tape Libraries (ATLs), the IBM 3850, the Ampex TeraStore, just to name a few. In addition, we embarked on an in-house development of a shared disk input/output support processor to manage our ever-increasing tape storage needs. For all intents and purposes, this system was a file server by current definitions, which used CDC Cyber computers as the control processors. It served us well and was just recently removed from production usage.

  1. First experience of vectorizing electromagnetic physics models for detector simulation

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Bianchini, C.; Bitzes, G.; Brun, R.; Canal, P.; Carminati, F.; de Fine Licht, J.; Duhem, L.; Elvira, D.; Gheata, A.; Jun, S. Y.; Lima, G.; Novak, M.; Presbyterian, M.; Shadura, O.; Seghal, R.; Wenzel, S.

    2015-12-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. The GeantV vector prototype for detector simulations has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth, parallelization needed to achieve optimal performance or memory access latency and speed. An additional challenge is to avoid the code duplication often inherent to supporting heterogeneous platforms. In this paper we present the first experience of vectorizing electromagnetic physics models developed for the GeantV project.

  2. WE-DE-BRA-11: A Study of Motion Tracking Accuracy of Robotic Radiosurgery Using a Novel CCD Camera Based End-To-End Test System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, L; M Yang, Y; Nelson, B

    Purpose: A novel end-to-end test system using a CCD camera and a scintillator based phantom (XRV-124, Logos Systems Int’l) capable of measuring the beam-by-beam delivery accuracy of Robotic Radiosurgery (CyberKnife) was developed and reported in our previous work. This work investigates its application in assessing the motion tracking (Synchrony) accuracy for CyberKnife. Methods: A QA plan with Anterior and Lateral beams (with 4 different collimator sizes) was created (Multiplan v5.3) for the XRV-124 phantom. The phantom was placed on a motion platform (superior and inferior movement), and the plans were delivered on the CyberKnife M6 system using four motion patterns: static, sine wave, sine wave with 15° phase shift, and a patient breathing pattern composed of 2 cm maximum motion with a 4 second breathing cycle. Under integral recording mode, the time-averaged beam vectors (X, Y, Z) were measured by the phantom and compared with static delivery. In dynamic recording mode, the beam spots were recorded at a rate of 10 frames/second. The beam vector deviation from average position was evaluated against the various breathing patterns. Results: The average beam positions of the six deliveries with no motion and three deliveries with Synchrony tracking on ideal motion (sine wave without phase shift) all agree within −0.03±0.00 mm, 0.10±0.04 mm, and 0.04±0.03 mm in the X, Y, and Z directions. Radiation beam width (FWHM) variations are within ±0.03 mm. Dynamic video recording showed submillimeter tracking stability for both regular and irregular breathing patterns; however, tracking errors up to 3.5 mm were observed when a 15 degree phase shift was introduced. Conclusion: The XRV-124 system is able to provide 3D and 4D targeting accuracy for CyberKnife delivery with Synchrony. The experimental results showed sub-millimeter delivery in phantom with excellent correlation of target to breathing motion. The accuracy was degraded when irregular motion and a phase shift were introduced.

  3. User's guide to STIPPAN: A panel method program for slotted tunnel interference prediction

    NASA Technical Reports Server (NTRS)

    Kemp, W. B., Jr.

    1985-01-01

    Guidelines are presented for use of the computer program STIPPAN to simulate the subsonic flow in a slotted wind tunnel test section with a known model disturbance. Input data requirements are defined in detail and other aspects of the program usage are discussed in more general terms. The program is written for use in a CDC CYBER 200 class vector processing system.

  4. Research in computer science

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1986-01-01

    Various graduate research activities in the field of computer science are reported. Among the topics discussed are: (1) failure probabilities in multi-version software; (2) Gaussian Elimination on parallel computers; (3) three dimensional Poisson solvers on parallel/vector computers; (4) automated task decomposition for multiple robot arms; (5) multi-color incomplete Cholesky conjugate gradient methods on the Cyber 205; and (6) parallel implementation of iterative methods for solving linear equations.

  5. ms2: A molecular simulation tool for thermodynamic properties

    NASA Astrophysics Data System (ADS)

    Deublein, Stephan; Eckl, Bernhard; Stoll, Jürgen; Lishchuk, Sergey V.; Guevara-Carrion, Gabriela; Glass, Colin W.; Merker, Thorsten; Bernreuther, Martin; Hasse, Hans; Vrabec, Jadran

    2011-11-01

    This work presents the molecular simulation program ms2 that is designed for the calculation of thermodynamic properties of bulk fluids in equilibrium consisting of small electro-neutral molecules. ms2 features the two main molecular simulation techniques, molecular dynamics (MD) and Monte-Carlo. It supports the calculation of vapor-liquid equilibria of pure fluids and multi-component mixtures described by rigid molecular models on the basis of the grand equilibrium method. Furthermore, it is capable of sampling various classical ensembles and yields numerous thermodynamic properties. To evaluate the chemical potential, Widom's test molecule method and gradual insertion are implemented. Transport properties are determined by equilibrium MD simulations following the Green-Kubo formalism. ms2 is designed to meet the requirements of academia and industry, particularly achieving short response times and straightforward handling. It is written in Fortran90 and optimized for a fast execution on a broad range of computer architectures, spanning from single processor PCs over PC-clusters and vector computers to high-end parallel machines. The standard Message Passing Interface (MPI) is used for parallelization and ms2 is therefore easily portable to different computing platforms. Feature tools facilitate the interaction with the code and the interpretation of input and output files. The accuracy and reliability of ms2 has been shown for a large variety of fluids in preceding work.

    Program summary: Program title: ms2; Catalogue identifier: AEJF_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJF_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Special Licence supplied by the authors; No. of lines in distributed program, including test data, etc.: 82 794; No. of bytes in distributed program, including test data, etc.: 793 705; Distribution format: tar.gz; Programming language: Fortran90; Computer: usable on a wide variety of platforms, from single processor machines over PC-clusters and vector computers to vector-parallel architectures (tested with Fortran compilers: gfortran, Intel, PathScale, Portland Group and Sun Studio); Operating system: Unix/Linux, Windows; Has the code been vectorized or parallelized?: Yes, Message Passing Interface (MPI) protocol; Scalability: excellent up to 16 processors for molecular dynamics and >512 processors for Monte-Carlo simulations; RAM: ms2 runs on single processors with 512 MB RAM, and the memory demand rises with increasing number of processors used per node and increasing number of molecules; Classification: 7.7, 7.9, 12; External routines: Message Passing Interface (MPI); Nature of problem: calculation of application oriented thermodynamic properties for rigid electro-neutral molecules, i.e. vapor-liquid equilibria, thermal and caloric data as well as transport properties of pure fluids and multi-component mixtures; Solution method: molecular dynamics, Monte-Carlo, various classical ensembles, grand equilibrium method, Green-Kubo formalism; Restrictions: none, as the system size is user-defined, and typical problems addressed by ms2 can be solved by simulating systems containing 2000 molecules or less; Unusual features: feature tools are available for creating input files, analyzing simulation results and visualizing molecular trajectories; Additional comments: sample makefiles for multiple operation platforms are provided, and documentation is provided with the installation package and is available at http://www.ms-2.de; Running time: depends on the problem set, the system size and the number of processes used in the simulation; running four processes on a "Nehalem" processor, simulations calculating VLE data take between two and twelve hours, calculating transport properties between six and 24 hours.
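
    The Green-Kubo formalism mentioned above obtains transport coefficients from equilibrium time-correlation functions, e.g. the self-diffusion coefficient D = (1/3) ∫⟨v(0)·v(t)⟩ dt. A toy sketch with synthetic velocities (not ms2 code):

```python
# Toy sketch of the Green-Kubo route mentioned above: the self-diffusion
# coefficient is D = (1/3) * integral of the velocity autocorrelation
# function <v(0).v(t)>. The velocity trace below is synthetic; this is a
# pedagogical illustration, not ms2 code.
import math

def vacf(vels, max_lag):
    """Average autocorrelation <v(t0).v(t0+lag)> over all time origins t0."""
    out = []
    for lag in range(max_lag):
        pairs = [sum(a * b for a, b in zip(vels[t], vels[t + lag]))
                 for t in range(len(vels) - lag)]
        out.append(sum(pairs) / len(pairs))
    return out

def diffusion_coefficient(vels, dt, max_lag):
    c = vacf(vels, max_lag)
    # Trapezoidal integration of the VACF, then the 1/3 factor for 3-D.
    integral = dt * (sum(c) - 0.5 * (c[0] + c[-1]))
    return integral / 3.0

# Synthetic, exponentially decorrelating 3-D velocity trace.
vels = [(math.exp(-0.1 * t), 0.0, 0.0) for t in range(200)]
print(diffusion_coefficient(vels, dt=1.0, max_lag=50) > 0)  # True
```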

  6. Protamine sulfate-nanodiamond hybrid nanoparticles as a vector for MiR-203 restoration in esophageal carcinoma cells

    NASA Astrophysics Data System (ADS)

    Cao, Minjun; Deng, Xiongwei; Su, Shishuai; Zhang, Fang; Xiao, Xiangqian; Hu, Qin; Fu, Yongwei; Yang, Burton B.; Wu, Yan; Sheng, Wang; Zeng, Yi

    2013-11-01

    We report an innovative approach for miRNA-203 delivery in esophageal cancer cells using protamine sulphate (PS)-nanodiamond (ND) nanoparticles. The efficient delivery of miR-203 significantly suppressed the proliferation and migration of cancer cells through targeting Ran and ΔNp63, exhibiting a great potential for PS@ND nanoparticles in miRNA-based cancer therapy. Electronic supplementary information (ESI) available: (1) Experimental section; (2) Results: serum stability of miR-203/PS@NDs and miR-203 release curve (Fig. S1). Cytotoxicity assay of PS@NDs to Ec-109 cells (Fig. S2); confocal image and FACS analysis of intracellular uptake of cy3-labeled miR-203 (Fig. S3 and S4); real-time PCR analysis of miR-203 restoration (Fig. S5); Ran and ΔNp63 expression (Fig. S6); the sizes and zeta potentials of miRNA/PS@NDs (Table S1); the sequences of the microRNA mimics and primers (Table S2, S3 and S4). See DOI: 10.1039/c3nr04056a

  7. Performance of a three-dimensional Navier-Stokes code on CYBER 205 for high-speed juncture flows

    NASA Technical Reports Server (NTRS)

    Lakshmanan, B.; Tiwari, S. N.

    1987-01-01

    A vectorized 3D Navier-Stokes code has been implemented on the CYBER 205 for solving the supersonic laminar flow over a swept fin/flat-plate junction. The code extends MacCormack's predictor-corrector finite volume scheme to a generalized coordinate system in a locally one-dimensional, time-split fashion. A systematic parametric study is conducted to examine the effect of fin sweep on the computed flow field. Calculated results for the pressure distribution on the flat plate and fin leading edge are compared with experimental measurements for a right-angle blunt fin/flat-plate junction. The decrease in the extent of the separated flow region and in the peak pressure on the fin leading edge, and the weakening of the two reversed supersonic zones with increasing fin sweep, are clearly observed in the numerical simulation.

  8. Performance of a Bounce-Averaged Global Model of Super-Thermal Electron Transport in the Earth's Magnetic Field

    NASA Technical Reports Server (NTRS)

    McGuire, Tim

    1998-01-01

    In this paper, we report the results of our recent research on the application of a multiprocessor Cray T916 supercomputer to modeling super-thermal electron transport in the earth's magnetic field. In general, this mathematical model requires numerical solution of a system of partial differential equations. The code we use for this model is moderately vectorized. By using Amdahl's law for vector processors, it can be verified that the code is about 60% vectorized on a Cray computer. Speedup factors on the order of 2.5 were obtained compared to the unvectorized code. In the following sections, we discuss the methodology of improving the code. In addition to our goal of optimizing the code for solution on the Cray computer, we had the goal of scalability in mind. Scalability combines the concepts of portability with near-linear speedup. Specifically, a scalable program is one whose performance is portable across many different architectures with differing numbers of processors for many different problem sizes. Though we have access to a Cray at this time, the goal was also to have code which would run well on a variety of architectures.
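The Amdahl's-law estimate quoted above can be checked directly: with a vectorized fraction f and a vector-to-scalar speed ratio s, the overall speedup is S = 1/((1-f) + f/s), and in the limit of a fast vector unit the observed speedup pins down f. A minimal sketch (the function names are illustrative; the only figures taken from the abstract are f ≈ 0.6 and S ≈ 2.5):

```python
def amdahl_speedup(f, s):
    """Overall speedup when a fraction f of the work runs s times faster."""
    return 1.0 / ((1.0 - f) + f / s)

def vectorized_fraction(observed_speedup):
    """Invert Amdahl's law in the fast-vector-unit limit (s -> infinity)."""
    return 1.0 - 1.0 / observed_speedup

# A ~2.5x speedup over scalar code implies the code is ~60% vectorized.
print(vectorized_fraction(2.5))      # → 0.6
print(amdahl_speedup(0.6, 20.0))     # ≈ 2.33, approaching the ideal 2.5
```

Even an infinitely fast vector unit cannot beat 1/(1-f) = 2.5 here, which is why the serial 40% of the code dominates the attainable speedup.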

  9. An implementation of a tree code on a SIMD, parallel computer

    NASA Technical Reports Server (NTRS)

    Olson, Kevin M.; Dorband, John E.

    1994-01-01

    We describe a fast tree algorithm for gravitational N-body simulation on SIMD parallel computers. The tree construction uses fast, parallel sorts. The sorted lists are recursively divided along their x, y and z coordinates. This data structure is a completely balanced tree (i.e., each particle is paired with exactly one other particle) and maintains good spatial locality. An implementation of this tree-building algorithm on a 16k-processor MasPar MP-1 performs well and constitutes only a small fraction (approximately 15%) of the entire cycle of finding the accelerations. Each node in the tree is treated as a monopole. The tree search and the summation of accelerations also perform well. During the tree search, node data that is needed from another processor is simply fetched. Roughly 55% of the tree-search time is spent in communications between processors. We apply the code to two problems of astrophysical interest. The first is a simulation of the close passage of two gravitationally interacting disk galaxies using 65,536 particles. We also simulate the formation of structure in an expanding model universe using 1,048,576 particles. Our code attains speeds comparable to one head of a Cray Y-MP, so single instruction, multiple data (SIMD) computers can be used for these simulations. The cost/performance ratio of SIMD machines like the MasPar MP-1 makes them an extremely attractive alternative to either vector processors or large multiple instruction, multiple data (MIMD) parallel computers. With further optimizations (e.g., more careful load balancing), speeds in excess of today's vector processing computers should be possible.

  10. Multiprocessing MCNP on an IBM RS/6000 cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinney, G.W.; West, J.T.

    1993-01-01

    The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors P and the fraction f of task time that multiprocesses, can be formulated using Amdahl's law: S(f, P) = 1/(1 - f + f/P). However, for most applications, this theoretical limit cannot be achieved because of additional terms (e.g., multitasking overhead, memory overlap, etc.) that are not included in Amdahl's law. Monte Carlo transport is a natural candidate for multiprocessing because the particle tracks are generally independent, and the precision of the result increases as the square root of the number of particles tracked.
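Amdahl's law as stated in the abstract is easy to evaluate. The sketch below uses illustrative values of f and P (not taken from the paper) to show how quickly the overhead-free speedup saturates:

```python
def amdahl(f, p):
    """Theoretical speedup S(f, P) = 1/(1 - f + f/P) for fraction f
    of task time multiprocessed over p processors."""
    return 1.0 / ((1.0 - f) + f / p)

for p in (2, 4, 8, 16):
    print(p, round(amdahl(0.95, p), 2))
# Even with 95% of the task parallelized, 16 processors give only ~9.1x,
# and real codes fall further short because of overhead terms (multitasking
# overhead, memory overlap) that Amdahl's law ignores.
```

The saturating behavior is why the abstract stresses that the theoretical limit is rarely reached in practice: the serial 1 - f term dominates as P grows.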

  11. Multiprocessing MCNP on an IBM RS/6000 cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinney, G.W.; West, J.T.

    1993-03-01

    The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's law: S(f, P) = 1/(1 - f + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.

  12. Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications

    NASA Technical Reports Server (NTRS)

    OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)

    1998-01-01

    This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).

  13. Coding for parallel execution of hardware-in-the-loop millimeter-wave scene generation models on multicore SIMD processor architectures

    NASA Astrophysics Data System (ADS)

    Olson, Richard F.

    2013-05-01

    Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity, emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions, and more portable, high-level parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
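The point-scatterer workload has exactly the shape SIMD hardware favors: a coherent sum of per-scatterer complex returns. As a hedged illustration (hypothetical amplitudes, ranges and wavelength, with NumPy array arithmetic standing in for AVX register arithmetic; this is not the AMRDEC code), the vectorized form keeps the math in bulk array operations rather than per-element loops:

```python
import numpy as np

def coherent_return(amps, ranges, wavelength):
    """Coherent sum of point-scatterer returns: sum_i a_i * exp(-j*4*pi*R_i/lambda).

    One bulk array expression; an AVX build performs the same arithmetic
    in 256-bit registers, several lanes per instruction.
    """
    phase = -4.0 * np.pi * ranges / wavelength   # two-way path phase
    return np.sum(amps * np.exp(1j * phase))

rng = np.random.default_rng(0)
amps = rng.random(10_000)                        # hypothetical scatterer amplitudes
ranges = 1000.0 + rng.random(10_000)             # hypothetical ranges, meters
sig = coherent_return(amps, ranges, wavelength=0.0032)  # ~94 GHz mmW band
print(abs(sig))
```

The single fused expression is also what autovectorizers and OpenMP `simd` pragmas aim to produce from an equivalent C loop: high computing intensity, minimal data movement per arithmetic operation.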

  14. Development of Universal Controller Architecture for SiC Based Power Electronic Building Blocks

    DTIC Science & Technology

    2017-10-30

    time control and control network routing and the other for non-real-time instrumentation and monitoring. The two subsystems are isolated and share...directly to the processor without any software intervention. We use a non-real-time 1 Gb/s Ethernet interface for monitoring and control of the module...

  15. Parallel-vector unsymmetric Eigen-Solver on high performance computers

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Jiangning, Qin

    1993-01-01

    The popular QR algorithm for solving for all eigenvalues of an unsymmetric matrix is reviewed. Among the basic components of the QR algorithm, this study concluded that the reduction of an unsymmetric matrix to Hessenberg form (before applying the QR algorithm itself) can be done effectively by exploiting the vector speed and multiple processors offered by modern high-performance computers. Numerical examples for several test cases indicate that the proposed parallel-vector algorithm for converting a given unsymmetric matrix to Hessenberg form offers computational advantages over the existing algorithm. The time saving obtained by the proposed method increases as the problem size increases.
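The Hessenberg reduction discussed above is a sequence of Householder similarity transforms, which is what makes it rich in vectorizable matrix-vector work. A compact serial NumPy sketch for illustration only (not the authors' parallel-vector algorithm): the result has the same eigenvalues as the input, with zeros below the first subdiagonal, ready for QR iteration.

```python
import numpy as np

def hessenberg(A):
    """Reduce A to upper Hessenberg form by Householder similarity transforms.

    Each step zeros one column below the first subdiagonal; because every
    reflection is applied from both sides, the eigenvalues are preserved.
    """
    H = np.array(A, dtype=float)
    n = H.shape[0]
    for k in range(n - 2):
        x = H[k + 1:, k]
        norm_x = np.linalg.norm(x)
        if norm_x == 0.0:
            continue
        v = x.copy()
        v[0] += np.copysign(norm_x, x[0])    # sign choice avoids cancellation
        v /= np.linalg.norm(v)
        # Apply P = I - 2 v v^T from the left, then from the right.
        H[k + 1:, k:] -= 2.0 * np.outer(v, v @ H[k + 1:, k:])
        H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
    return H

A = np.random.default_rng(1).random((6, 6))
H = hessenberg(A)
print(np.max(np.abs(np.tril(H, -2))))        # ~0: Hessenberg structure achieved
```

The two rank-one updates per step are exactly the dense matrix-vector kernels that map well onto vector registers and can be split across processors, which is the source of the speedup the paper reports.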

  16. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  17. Static assignment of complex stochastic tasks using stochastic majorization

    NASA Technical Reports Server (NTRS)

    Nicol, David; Simha, Rahul; Towsley, Don

    1992-01-01

    We consider the problem of statically assigning many tasks to a (smaller) system of homogeneous processors, where a task's structure is modeled as a branching process, and all tasks are assumed to have identical behavior. We show how the theory of majorization can be used to obtain a partial order among possible task assignments. Our results show that if the vector of numbers of tasks assigned to each processor under one mapping is majorized by that of another mapping, then the former mapping is better than the latter with respect to a large number of objective functions. In particular, we show how measurements of finishing time, resource utilization, and reliability are all captured by the theory. We also show how the theory may be applied to the problem of partitioning a pool of processors for distribution among parallelizable tasks.
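The partial order used here is classical majorization of the task-count vectors. A small sketch (not the authors' code) of the majorization test, with the paper's interpretation that a mapping whose task-count vector is majorized by another's is the better one:

```python
def majorizes(x, y):
    """True if x majorizes y: equal totals, and every prefix sum of the
    descending-sorted x dominates the corresponding prefix sum of y."""
    if sum(x) != sum(y):
        return False                     # majorization needs equal totals
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    px = py = 0
    for a, b in zip(xs, ys):
        px += a
        py += b
        if px < py:
            return False
    return True

# Assigning (4, 1, 1) tasks to three processors majorizes (2, 2, 2), so by
# the paper's result the balanced mapping (2, 2, 2) is preferable with
# respect to finishing time, utilization, and reliability objectives.
print(majorizes((4, 1, 1), (2, 2, 2)))   # → True
print(majorizes((2, 2, 2), (4, 1, 1)))   # → False
```

Intuitively, the more "spread out" vector concentrates work on fewer processors, and the theory shows that every objective function in the studied class penalizes that concentration.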

  18. A Kosloff/Basal method, 3D migration program implemented on the CYBER 205 supercomputer

    NASA Technical Reports Server (NTRS)

    Pyle, L. D.; Wheat, S. R.

    1984-01-01

    Conventional finite difference migration has relied on approximations to the acoustic wave equation which allow energy to propagate only downwards. Although generally reliable, such approaches usually do not yield an accurate migration for geological structures with strong lateral velocity variations or with steeply dipping reflectors. An earlier study by D. Kosloff and E. Baysal (Migration with the Full Acoustic Wave Equation) examined an alternative approach based on the full acoustic wave equation. The 2D, Fourier type algorithm which was developed was tested by Kosloff and Baysal against synthetic data and against physical model data. The results indicated that such a scheme gives accurate migration for complicated structures. This paper describes the development and testing of a vectorized, 3D migration program for the CYBER 205 using the Kosloff/Baysal method. The program can accept as many as 65,536 zero offset (stacked) traces.

  19. Optimal design of structures with multiple design variables per group and multiple loading conditions on the personal computer

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Rogers, J. L., Jr.

    1986-01-01

    A finite element based programming system for minimum weight design of a truss-type structure subjected to displacement, stress, and lower and upper bounds on design variables is presented. The programming system consists of a number of independent processors, each performing a specific task. These processors, however, are interfaced through a well-organized data base, thus making the tasks of modifying, updating, or expanding the programming system much easier in a friendly environment provided by many inexpensive personal computers. The proposed software can be viewed as an important step in achieving a 'dummy' finite element for optimization. The programming system has been implemented on both large and small computers (such as VAX, CYBER, IBM-PC, and APPLE) although the focus is on the latter. Examples are presented to demonstrate the capabilities of the code. The present programming system can be used stand-alone or as part of the multilevel decomposition procedure to obtain optimum design for very large scale structural systems. Furthermore, other related research areas such as developing optimization algorithms (or in the larger level: a structural synthesis program) for future trends in using parallel computers may also benefit from this study.

  20. Geospace simulations on the Cell BE processor

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Raeder, J.; Larson, D.

    2008-12-01

    OpenGGCM (Open Geospace General Circulation Model) is an established numerical code that simulates the Earth's space environment. The most computing intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is limited by computational constraints on grid resolution. We investigate porting of the MHD solver to the Cell BE architecture, a novel inhomogeneous multicore architecture capable of up to 230 GFlops per processor. Realizing this high performance on the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallel approach: On the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the vector/SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We obtained excellent performance numbers, a speed-up of a factor of 25 compared to just using the main processor, while still keeping the numerical implementation details of the code maintainable.

  1. Search for the standard model Higgs boson produced in association with a vector boson and decaying into a tau pair in pp collisions at √s = 8 TeV with the ATLAS detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.; Abbott, B.; Abdallah, J.

    2016-05-17

    A search for the standard model Higgs boson produced in association with a vector boson with the decay H → ττ is presented. The data correspond to 20.3 fb⁻¹ of integrated luminosity from proton-proton collisions at √s = 8 TeV recorded by the ATLAS experiment at the LHC during 2012.

  2. Fuzzy logic particle tracking velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    1993-01-01

    Fuzzy logic has proven to be a simple and robust method for process control. Instead of requiring a complex model of the system, a user-defined rule base is used to control the process. In this paper the principles of fuzzy logic control are applied to Particle Tracking Velocimetry (PTV). Two frames of digitally recorded, single exposure particle imagery are used as input. The fuzzy processor uses the local particle displacement information to determine the correct particle tracks. Fuzzy PTV is an improvement over traditional PTV techniques, which typically require a sequence (greater than 2) of image frames for accurately tracking particles. The fuzzy processor executes in software on a PC without the use of specialized array or fuzzy logic processors. A pair of sample input images with roughly 300 particle images each results in more than 200 velocity vectors in under 8 seconds of processing time.

  3. Construction of a Cyber Attack Model for Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varuttamaseni, Athi; Bari, Robert A.; Youngblood, Robert

    The consideration of how one compromised digital equipment can impact neighboring equipment is critical to understanding the progression of cyber attacks. The degree of influence that one component may have on another depends on a variety of factors, including the sharing of resources such as network bandwidth or processing power, the level of trust between components, and the inclusion of segmentation devices such as firewalls. The interactions among components via mechanisms that are unique to the digital world are not usually considered in traditional PRA. This means potential sequences of events that may occur during an attack may be missed if one were to only look at conventional accident sequences. This paper presents a method where, starting from the initial attack vector, the progression of a cyber attack can be modeled. The propagation of the attack is modeled by considering certain attributes of the digital components in the system. These attributes determine the potential vulnerability of a component to a class of attack and the capability gained by the attackers once they are in control of the equipment. The use of attributes allows similar components (components with the same set of attributes) to be modeled in the same way, thereby reducing the computing resources required for analysis of large systems.

  4. Dataflow Integration and Simulation Techniques for DSP System Design Tools

    DTIC Science & Technology

    2007-01-01

    Lebak, M. Richards, and D. Campbell, "VSIPL: An object-based open standard API for vector, signal, and image processing," in Proceedings of the...Inc., document Version 0.98a. [56] P. Marwedel and G. Goossens, Eds., Code Generation for Embedded Processors. Kluwer Academic Publishers, 1995. [57

  5. Does the Intel Xeon Phi processor fit HEP workloads?

    NASA Astrophysics Data System (ADS)

    Nowak, A.; Bitzes, G.; Dotti, A.; Lazzaro, A.; Jarp, S.; Szostek, P.; Valsan, L.; Botezatu, M.; Leduc, J.

    2014-06-01

    This paper summarizes the five years of CERN openlab's efforts focused on the Intel Xeon Phi co-processor, from the time of its inception to public release. We consider the architecture of the device vis-à-vis the characteristics of HEP software and identify key opportunities for HEP processing, as well as scaling limitations. We report on improvements and speedups linked to parallelization and vectorization on benchmarks involving software frameworks such as Geant4 and ROOT. Finally, we extrapolate current software and hardware trends and project them onto accelerators of the future, with the specifics of offline and online HEP processing in mind.

  6. Vectorization for Molecular Dynamics on Intel Xeon Phi Coprocessors

    NASA Astrophysics Data System (ADS)

    Yi, Hongsuk

    2014-03-01

    Many modern processors are capable of exploiting data-level parallelism through the use of single instruction multiple data (SIMD) execution. The new Intel Xeon Phi coprocessor supports 512-bit vector registers for high performance computing. In this paper, we have developed a hierarchical parallelization scheme for accelerated molecular dynamics simulations with the Tersoff potential for covalent-bond solid crystals on Intel Xeon Phi coprocessor systems. The scheme exploits multi-level parallelism, combining tightly coupled thread-level and task-level parallelism with the 512-bit vector registers. The simulation results show that the parallel performance of SIMD implementations on the Xeon Phi is clearly superior to that on the host x86 CPU architecture.

  7. Problems and Mitigation Strategies for Developing and Validating Statistical Cyber Defenses

    DTIC Science & Technology

    2014-04-01

    Clustering; Support Vector Machine (SVM) classification; Netflow; Twitter; training datasets; trained SVMs; enriched feature state... • Netflow data for TCP connections • E-mail data from SMTP logs • Chat data from XMPP logs • Microtext data (from Twitter message archives)... summary data from Bro and Netflow data captured on the BBN network over the period of 1 month, plus simulated attacks; WHOIS domain name record

  8. The Forest Method as a New Parallel Tree Method with the Sectional Voronoi Tessellation

    NASA Astrophysics Data System (ADS)

    Yahagi, Hideki; Mori, Masao; Yoshii, Yuzuru

    1999-09-01

    We have developed a new parallel tree method which will be called the forest method hereafter. This new method uses the sectional Voronoi tessellation (SVT) for the domain decomposition. The SVT decomposes a whole space into polyhedra and allows their flat borders to move by assigning different weights. The forest method determines these weights based on the load balancing among processors by means of the overload diffusion (OLD). Moreover, since all the borders are flat, before receiving the data from other processors, each processor can collect enough data to calculate the gravity force with precision. Both the SVT and the OLD are coded in a highly vectorizable manner suited to vector-parallel processors. The parallel code based on the forest method with the Message Passing Interface is run on various platforms so that a wide portability is guaranteed. Extensive calculations with 15 processors of Fujitsu VPP300/16R indicate that the code can calculate the gravity force exerted on 10^5 particles each second for some ideal dark halo. This code is found to enable an N-body simulation with 10^7 or more particles for a wide dynamic range and is therefore a very powerful tool for the study of galaxy formation and large-scale structure in the universe.

  9. Software for System for Controlling a Magnetically Levitated Rotor

    NASA Technical Reports Server (NTRS)

    Morrison, Carlos R. (Inventor)

    2004-01-01

    In a rotor assembly having a rotor supported for rotation by magnetic bearings, a processor controlled by software or firmware controls the generation of force vectors that position the rotor relative to its bearings in a 'bounce' mode in which the rotor axis is displaced from the principal axis defined between the bearings and a 'tilt' mode in which the rotor axis is tilted or inclined relative to the principal axis. Waveform driven perturbations are introduced to generate force vectors that excite the rotor in either the 'bounce' or 'tilt' modes.

  10. System for Controlling a Magnetically Levitated Rotor

    NASA Technical Reports Server (NTRS)

    Morrison, Carlos R. (Inventor)

    2006-01-01

    In a rotor assembly having a rotor supported for rotation by magnetic bearings, a processor controlled by software or firmware controls the generation of force vectors that position the rotor relative to its bearings in a "bounce" mode in which the rotor axis is displaced from the principal axis defined between the bearings and a "tilt" mode in which the rotor axis is tilted or inclined relative to the principal axis. Waveform driven perturbations are introduced to generate force vectors that excite the rotor in either the "bounce" or "tilt" modes.

  11. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
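The Choleski-based direct solve at the heart of pvsolve factors the symmetric positive-definite stiffness matrix as K = L·Lᵀ and then performs forward and back substitution; those triangular sweeps are the loops that vectorize and parallelize. A plain serial NumPy sketch of the underlying algorithm (illustrative only, not pvsolve itself; the test matrix is hypothetical):

```python
import numpy as np

def cholesky_solve(K, f):
    """Solve K u = f for symmetric positive-definite K via K = L L^T."""
    L = np.linalg.cholesky(K)                # lower-triangular factor
    n = len(f)
    y = np.zeros(n)
    for i in range(n):                       # forward substitution: L y = f
        y[i] = (f[i] - L[i, :i] @ y[:i]) / L[i, i]
    u = np.zeros(n)
    for i in reversed(range(n)):             # back substitution: L^T u = y
        u[i] = (y[i] - L[i + 1:, i] @ u[i + 1:]) / L[i, i]
    return u

A = np.random.default_rng(2).random((5, 5))
K = A @ A.T + 5.0 * np.eye(5)                # SPD stand-in for a stiffness matrix
f = np.arange(1.0, 6.0)                      # hypothetical load vector
u = cholesky_solve(K, f)
print(np.allclose(K @ u, f))                 # → True
```

In a parallel-vector solver the inner dot products become long vector operations and independent columns of the factorization are distributed across processors, which is where the reported factor-of-14-16 reduction in solution time comes from.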

  12. Special purpose parallel computer architecture for real-time control and simulation in robotic applications

    NASA Technical Reports Server (NTRS)

    Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)

    1993-01-01

    This is a real-time robotic controller and simulator which is a MIMD-SIMD parallel architecture for interfacing with an external host computer and providing a high degree of parallelism in computations for robotic control and simulation. It includes a host processor for receiving instructions from the external host computer and for transmitting answers to the external host computer. There are a plurality of SIMD microprocessors, each SIMD processor being a SIMD parallel processor capable of exploiting fine grain parallelism and further being able to operate asynchronously to form a MIMD architecture. Each SIMD processor comprises a SIMD architecture capable of performing two matrix-vector operations in parallel while fully exploiting parallelism in each operation. There is a system bus connecting the host processor to the plurality of SIMD microprocessors and a common clock providing a continuous sequence of clock pulses. There is also a ring structure interconnecting the plurality of SIMD microprocessors and connected to the clock for providing the clock pulses to the SIMD microprocessors and for providing a path for the flow of data and instructions between the SIMD microprocessors. The host processor includes logic for controlling the RRCS by interpreting instructions sent by the external host computer, decomposing the instructions into a series of computations to be performed by the SIMD microprocessors, using the system bus to distribute associated data among the SIMD microprocessors, and initiating activity of the SIMD microprocessors to perform the computations on the data by procedure call.

  13. Development of software for the MSFC solar vector magnetograph

    NASA Technical Reports Server (NTRS)

    Kineke, Jack

    1996-01-01

    The Marshall Space Flight Center Solar Vector Magnetograph is a special purpose telescope used to measure the vector magnetic field in active areas on the surface of the sun. This instrument measures the linear and circular polarization intensities (the Stokes vectors Q, U and V) produced by the Zeeman effect on a specific spectral line due to the solar magnetic field from which the longitudinal and transverse components of the magnetic field may be determined. Beginning in 1990 as a Summer Faculty Fellow in project JOVE and continuing under NASA Grant NAG8-1042, the author has been developing computer software to perform these computations, first using a DEC MicroVAX system equipped with a high speed array processor, and more recently using a DEC AXP/OSF system. This summer's work is a continuation of this development.

  14. Using a multifrontal sparse solver in a high performance, finite element code

    NASA Technical Reports Server (NTRS)

    King, Scott D.; Lucas, Robert; Raefsky, Arthur

    1990-01-01

    We consider the performance of the finite element method on a vector supercomputer. The computationally intensive parts of the finite element method are typically the individual element forms and the solution of the global stiffness matrix, both of which are vectorized in high performance codes. To further increase throughput, new algorithms are needed. We compare a multifrontal sparse solver to a traditional skyline solver in a finite element code on a vector supercomputer. The multifrontal solver uses the Multiple-Minimum Degree reordering heuristic to reduce the number of operations required to factor a sparse matrix, and full matrix computational kernels (e.g., BLAS3) to enhance vector performance. The net result is an order-of-magnitude reduction in run time for a finite element application on one processor of a Cray X-MP.
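
    The fill-reducing effect of a minimum-degree reordering can be illustrated with SciPy's SuperLU interface (a sketch of the reordering idea only, not the paper's multifrontal solver). An "arrow" matrix is a worst case for natural ordering:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# "Arrow" matrix: dense first row/column plus a dominant diagonal.
# Factoring it in natural order fills in almost completely; a minimum-degree
# ordering eliminates the dense node last and keeps the factors sparse.
n = 200
A = sp.lil_matrix((n, n))
A.setdiag(np.arange(2.0, n + 2.0))
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = float(n)
A = A.tocsc()

lu_nat = splu(A, permc_spec="NATURAL")        # no reordering
lu_mmd = splu(A, permc_spec="MMD_AT_PLUS_A")  # multiple minimum degree
fill_nat = lu_nat.L.nnz + lu_nat.U.nnz
fill_mmd = lu_mmd.L.nnz + lu_mmd.U.nnz
print(fill_nat, fill_mmd)  # the MMD factors are far sparser
```

    Both factorizations solve the same system; only the elimination order, and hence the fill-in and operation count, differs.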

  15. A vector scanning processing technique for pulsed laser velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Edwards, Robert V.

    1989-01-01

    Pulsed-laser-sheet velocimetry yields two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high-precision (1-percent) velocity estimates, but can require hours of processing time on specialized array processors. Sometimes, however, a less accurate (about 5 percent) data-reduction technique which also gives unambiguous velocity vector information is acceptable. Here, a direct space-domain processing technique is described and shown to be far superior to previous methods in achieving these objectives. It uses a novel data coding and reduction technique and has no 180-deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 min on an 80386-based PC, producing a two-dimensional velocity-vector map of the flowfield. Pulsed-laser velocimetry data can thus be reduced quickly and reasonably accurately, without specialized array processing hardware.

  16. Improved Magnetic STAR Methods for Real-Time, Point-by-Point Localization of Unexploded Ordnance and Buried Mines

    DTIC Science & Technology

    2008-09-01

    of magnetic UXO. The prototype STAR Sensor comprises: a) A cubic array of eight fluxgate magnetometers. b) A 24-channel data acquisition/signal...array (shaded boxes) of eight low noise Triaxial Fluxgate Magnetometers (TFM) develops 24 channels of vector B-field data. Processor hardware

  17. Scalable Vector Media-processors for Embedded Systems

    DTIC Science & Technology

    2002-05-01

    Set Architecture for Multimedia “When you do the common things in life in an uncommon way, you will command the attention of the world.” George ...Bibliography [ABHS89] M. August, G. Brost, C. Hsiung, and C. Schiffleger. Cray X-MP: The Birth of a Supercomputer. IEEE Computer, 22(1):45–52, January

  18. Implementation and analysis of a Navier-Stokes algorithm on parallel computers

    NASA Technical Reports Server (NTRS)

    Fatoohi, Raad A.; Grosch, Chester E.

    1988-01-01

    The results of the implementation of a Navier-Stokes algorithm on three parallel/vector computers are presented. The object of this research is to determine how well, or poorly, a single numerical algorithm would map onto three different architectures. The algorithm is a compact difference scheme for the solution of the incompressible, two-dimensional, time-dependent Navier-Stokes equations. The computers were chosen so as to encompass a variety of architectures. They are the following: the MPP, an SIMD machine with 16K bit serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. The basic comparison is among SIMD instruction parallelism on the MPP, MIMD process parallelism on the Flex/32, and vectorization of a serial code on the Cray/2. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.

  19. Parallel/Vector Integration Methods for Dynamical Astronomy

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    1999-01-01

    This paper reviews three recent works on numerical methods to integrate ordinary differential equations (ODE), which are specially designed for parallel, vector, and/or multi-processor-unit (PU) computers. The first is the Picard-Chebyshev method (Fukushima, 1997a). It obtains a global solution of the ODE in the form of a Chebyshev polynomial of large (> 1000) degree by applying the Picard iteration repeatedly. The iteration converges for smooth problems and/or perturbed dynamics. The method runs around 100-1000 times faster in the vector mode than in the scalar mode of a certain computer with vector processors (Fukushima, 1997b). The second is a parallelization of a symplectic integrator (Saha et al., 1997). It regards the implicit midpoint rules covering thousands of timesteps as large-scale nonlinear equations and solves them by fixed-point iteration. The method is applicable to Hamiltonian systems and is expected to lead to an acceleration factor of around 50 on parallel computers with more than 1000 PUs. The last is a parallelization of the extrapolation method (Ito and Fukushima, 1997). It performs trial integrations in parallel. The trial integrations are further accelerated by balancing the computational load among PUs via the technique of folding. The method is all-purpose and achieves an acceleration factor of around 3.5 by using several PUs. Finally, we give a perspective on the parallelization of some implicit integrators which require multiple corrections in solving implicit formulas, like the implicit Hermite integrators (Makino and Aarseth, 1992), (Hut et al., 1995) or the implicit symmetric multistep methods (Fukushima, 1998), (Fukushima, 1999).
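
    The key idea of the second approach, viewing many implicit midpoint steps as a single large fixed-point problem, can be emulated in a few lines. This is an illustrative serial sketch for the test equation y' = -y (not the authors' code); the whole-trajectory update performed in each sweep is what a parallel machine would distribute across its PUs:

```python
import numpy as np

# Implicit midpoint rule for y' = -y, y(0) = 1:
#   y[k+1] = y[k] + h * f((y[k] + y[k+1]) / 2)
# Instead of solving step by step, iterate on the whole trajectory at once.
N, h = 50, 0.02
y = np.ones(N + 1)                # initial guess: constant trajectory
for sweep in range(100):
    mid = 0.5 * (y[:-1] + y[1:])  # midpoints of all N steps
    y_new = y.copy()
    y_new[1:] = y[:-1] - h * mid  # update every step independently
    y = y_new

# Compare with the exact fixed point of the midpoint recurrence
r = (1.0 - h / 2.0) / (1.0 + h / 2.0)
print(abs(y[-1] - r ** N))        # essentially zero after convergence
```

    Each sweep propagates correct information at least one step forward, so the iteration converges after roughly N sweeps; the gain on a parallel machine comes from updating all N steps concurrently.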

  20. Use of Trust Vectors in Support of the CyberCraft Initiative

    DTIC Science & Technology

    2007-03-01

    for determining the trust value of a recommendation path [3]. Xiong and Liu [23] proposed a trust model for peer-to-peer (P2P) eCommerce transactions...Amount of Satisfaction: This is based on the feedback from the other peer in an eCommerce transaction. Feedback is the basis for most trust models in...a P2P eCommerce environment. Number of Transactions: As the name suggests, this is the number of transactions performed by the rated peer in a

  1. Adaptive track scheduling to optimize concurrency and vectorization in GeantV

    DOE PAGES

    Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...

    2015-05-22

    The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics, requiring tuning of the thresholds that switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events to the transport pipeline to boost locality, dynamically adjusting the particle vector size, or switching from vector to single-track mode when vectorization causes only overhead. Lastly, making the scheduler's behaviour self-adapting requires a comprehensive study to optimize these parameters; its initial results are presented here.

  2. Benchmark tests on the digital equipment corporation Alpha AXP 21164-based AlphaServer 8400, including a comparison of optimized vector and superscalar processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wasserman, H.J.

    1996-02-01

    The second generation of the Digital Equipment Corp. (DEC) DECchip Alpha AXP microprocessor is referred to as the 21164. From the viewpoint of numerically-intensive computing, the primary difference between it and its predecessor, the 21064, is that the 21164 has twice the multiply/add throughput per clock period (CP), a maximum of two floating point operations (FLOPS) per CP vs. one for the 21064. The AlphaServer 8400 is a shared-memory multiprocessor server system that can accommodate up to 12 CPUs and up to 14 GB of memory. In this report we will compare single processor performance of the 8400 system with that of the International Business Machines Corp. (IBM) RISC System/6000 POWER-2 microprocessor running at 66 MHz, the Silicon Graphics, Inc. (SGI) MIPS R8000 microprocessor running at 75 MHz, and the Cray Research, Inc. CRAY J90. The performance comparison is based on a set of Fortran benchmark codes that represent a portion of the Los Alamos National Laboratory supercomputer workload. The advantage of using these codes is that they span a wide range of computational characteristics, such as vectorizability, problem size, and memory access pattern. The primary disadvantage of using them is that detailed, quantitative analysis of the performance behavior of all codes on all machines is difficult. One important addition to the benchmark set appears for the first time in this report. Whereas the older version was written for a vector processor, the newer version is more optimized for microprocessor architectures. Therefore, we have, for the first time, an opportunity to measure performance on a single application using implementations that expose the respective strengths of vector and superscalar architecture. All results in this report are from single processors. A subsequent article will explore shared-memory multiprocessing performance of the 8400 system.

  3. An efficient optical architecture for sparsely connected neural networks

    NASA Technical Reports Server (NTRS)

    Hine, Butler P., III; Downie, John D.; Reid, Max B.

    1990-01-01

    An architecture for a general-purpose optical neural network processor is presented in which the interconnections and weights are formed by directing coherent beams holographically. This uses the space-bandwidth product of the recording medium for sparsely interconnected networks more efficiently than the commonly used vector-matrix multiplier, since all of the hologram area is in use. An investigation is made of the use of computer-generated holograms recorded on such updatable media as thermoplastic materials in order to define the interconnections and weights of a neural network processor; attention is given to the limits on interconnection densities, diffraction efficiencies, and weighting accuracies possible with such an updatable thin-film holographic device.

  4. Three-dimensional transonic potential flow about complex 3-dimensional configurations

    NASA Technical Reports Server (NTRS)

    Reyhner, T. A.

    1984-01-01

    An analysis has been developed and a computer code written to predict three-dimensional subsonic or transonic potential flow fields about lifting or nonlifting configurations. Possible configurations include inlets, nacelles, nacelles with ground planes, S-ducts, turboprop nacelles, wings, and wing-pylon-nacelle combinations. The solution of the full partial differential equation for compressible potential flow written in terms of a velocity potential is obtained using finite differences, line relaxation, and multigrid. The analysis uses either a cylindrical or Cartesian coordinate system. The computational mesh is not body fitted. The analysis has been programmed in FORTRAN for both the CDC CYBER 203 and the CRAY-1 computers. Comparisons of computed results with experimental measurement are presented. Descriptions of the program input and output formats are included.

  5. Vectorization of a penalty function algorithm for well scheduling

    NASA Technical Reports Server (NTRS)

    Absar, I.

    1984-01-01

    In petroleum engineering, the oil production profiles of a reservoir can be simulated by using a finite gridded model. This profile is affected by the number and choice of wells, which in turn is a result of various production limits and constraints including, for example, the economic minimum well spacing, the number of drilling rigs available, and the time required to drill and complete a well. After a well is available it may be shut in because of excessive water or gas production. In order to optimize field performance, a penalty function algorithm was developed for scheduling wells. For an example with some 343 wells and 15 different constraints, the scheduling routine vectorized for the CYBER 205 ran on average 560 times faster than the scalar version.

  6. Improved multi-stage neonatal seizure detection using a heuristic classifier and a data-driven post-processor.

    PubMed

    Ansari, A H; Cherian, P J; Dereymaeker, A; Matic, V; Jansen, K; De Wispelaere, L; Dielman, C; Vervisch, J; Swarte, R M; Govaert, P; Naulaers, G; De Vos, M; Van Huffel, S

    2016-09-01

    After identifying the most seizure-relevant characteristics by a previously developed heuristic classifier, a data-driven post-processor using a novel set of features is applied to improve the performance. The main characteristics of the outputs of the heuristic algorithm are extracted by five sets of features including synchronization, evolution, retention, segment, and signal features. Then, a support vector machine and a decision making layer remove the falsely detected segments. Four datasets, including 71 neonates (1023 h, 3493 seizures) recorded in two different university hospitals, are used to train and test the algorithm without removing the dubious seizures. The heuristic method resulted in a false alarm rate of 3.81 per hour and a good detection rate of 88% on the entire test databases. The post-processor effectively reduces the false alarm rate by 34% while the good detection rate decreases by 2%. This post-processing technique improves the performance of the heuristic algorithm. The structure of this post-processor is generic, improves our understanding of the core visually determined EEG features of neonatal seizures, and is applicable to other neonatal seizure detectors. The post-processor significantly decreases the false alarm rate at the expense of a small reduction of the good detection rate. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  7. Optimization of the Brillouin operator on the KNL architecture

    NASA Astrophysics Data System (ADS)

    Dürr, Stephan

    2018-03-01

    Experiences with optimizing the matrix-times-vector application of the Brillouin operator on the Intel KNL processor are reported. Without adjustments to the memory layout, performance figures of 360 Gflop/s in single and 270 Gflop/s in double precision are observed. This is with Nc = 3 colors, Nv = 12 right-hand-sides, Nthr = 256 threads, on lattices of size 32³ × 64, using exclusively OMP pragmas. Interestingly, the same routine performs quite well on Intel Core i7 architectures, too. Some observations on the much harder Wilson fermion matrix-times-vector optimization problem are added.
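
    The benefit of carrying Nv = 12 right-hand sides through the operator at once can be seen with a generic dense matrix (a sketch, not the Brillouin kernel): blocking the sources turns Nv memory-bound matrix-vector products into a single matrix-matrix product that reuses each loaded operator entry.

```python
import numpy as np

rng = np.random.default_rng(0)
n, nv = 64, 12                       # nv right-hand sides, as in the paper
A = rng.standard_normal((n, n))      # stand-in for the (sparse) Brillouin matrix
X = rng.standard_normal((n, nv))

# One matvec per right-hand side: A is streamed from memory nv times
Y_loop = np.column_stack([A @ X[:, j] for j in range(nv)])
# Blocked: one matrix-matrix product; each entry of A is reused nv times
Y_block = A @ X

print(np.allclose(Y_loop, Y_block))  # True: same result, higher arithmetic intensity
```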

  8. Efficacy of Code Optimization on Cache-based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system.
It can be argued that although some of the important computational algorithms employed at NASA Ames require different programming styles on vector machines and cache-based machines, respectively, neither architecture class appeared to be favored by particular algorithms in principle. Practice tells us that the situation is more complicated. This report presents observations and some analysis of performance tuning for cache-based systems. We point out several counterintuitive results that serve as a cautionary reminder that memory accesses are not the only factors that determine performance, and that within the class of cache-based systems, significant differences exist.
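
    The stride argument can be made concrete with a row-major (C-ordered) array: traversing along rows touches memory with unit stride, while traversing down columns skips a full row per element. A sketch of the two access patterns (actual timings depend on the cache hierarchy):

```python
import numpy as np

a = np.arange(1_000_000, dtype=np.float64).reshape(1000, 1000)  # C (row-major) order

# Unit stride: each row is contiguous, so every cache line fetched is fully used
row_order = sum(a[i, :].sum() for i in range(a.shape[0]))
# Stride of 1000 elements: only one value used per cache line fetched
col_order = sum(a[:, j].sum() for j in range(a.shape[1]))

print(row_order == col_order)  # True: same arithmetic, very different memory traffic
```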

  9. An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes

    DOE PAGES

    Vincenti, H.; Lobet, M.; Lehe, R.; ...

    2016-09-19

    In current computer architectures, data movement (from die to network) is by far the most energy consuming part of an algorithm (≈20 pJ/word on-die to ≈10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers tend to use many-core processors on each compute node that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines that are, along with the field gathering routines, among the most time consuming parts of the PIC algorithm. Our new algorithm uses a particular data structure that takes into account memory alignment constraints and avoids gather/scatter instructions that can significantly affect vectorization performance on current CPUs. The new algorithm was successfully implemented in the 3D skeleton PIC code PICSAR and tested on Haswell Xeon processors (AVX2, 256-bit-wide data registers). Results show a factor of ×2 to ×2.5 speed-up in double precision for particle shape factors of orders 1–3. The new algorithm can be applied as is on future KNL (Knights Landing) architectures that will include AVX-512 instruction sets with 512-bit register lengths (8 doubles/16 singles).
Program summary Program Title: vec_deposition Program Files doi:http://dx.doi.org/10.17632/nh77fv9k8c.1 Licensing provisions: BSD 3-Clause Programming language: Fortran 90 External routines/libraries: OpenMP > 4.0 Nature of problem: Exascale architectures will have many-core processors per node with long vector data registers capable of performing one single instruction on multiple data during one clock cycle. Data register lengths are expected to double every four years and this pushes for new portable solutions for efficiently vectorizing Particle-In-Cell codes on these future many-core architectures. One of the main hotspot routines of the PIC algorithm is the current/charge deposition, for which there is no efficient and portable vector algorithm. Solution method: Here we provide an efficient and portable vector algorithm of current/charge deposition routines that uses a new data structure, which significantly reduces gather/scatter operations. Vectorization is controlled using OpenMP 4.0 compiler directives, which ensures portability across different architectures. Restrictions: Here we do not provide the full PIC algorithm with an executable but only vector routines for current/charge deposition. These scalar/vector routines can be used as library routines in your 3D Particle-In-Cell code. However, to get the best performance out of the vector routines you must satisfy the following two requirements: (1) Your code should implement particle tiling (as explained in the manuscript) to allow for maximized cache reuse and reduce memory accesses that can hinder vector performance. The routines can be used directly on each particle tile. (2) You should compile your code with a Fortran 90 compiler (e.g. Intel, GNU or Cray) and provide proper alignment flags and compiler alignment directives (more details in README file).
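
    The gather/scatter hazard the paper addresses shows up even in a toy deposition kernel: several particles can target the same grid node, so a vectorized scatter must accumulate unbuffered. A 1-D cloud-in-cell sketch in NumPy (illustrative only, not the PICSAR Fortran routines):

```python
import numpy as np

nx = 16
x = np.array([2.3, 2.7, 5.5, 14.9])  # particle positions, in cell units
q = np.array([1.0, 1.0, -1.0, 0.5])  # particle charges
i = np.floor(x).astype(int)
w = x - i                            # linear (cloud-in-cell) weights

rho = np.zeros(nx + 1)
np.add.at(rho, i,     q * (1.0 - w))  # unbuffered scatter-add: colliding
np.add.at(rho, i + 1, q * w)          # indices accumulate correctly

bad = np.zeros(nx + 1)
bad[i] += q * (1.0 - w)               # buffered fancy-index add: the two
bad[i + 1] += q * w                   # particles sharing cell 2 collide

print(rho.sum(), bad.sum())           # only rho conserves the total charge
```

    Vectorized deposition codes either restructure the data so colliding writes cannot occur (as in the paper) or pay the cost of atomic/unbuffered accumulation.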

  11. System and method employing a minimum distance and a load feature database to identify electric load types of different electric loads

    DOEpatents

    Lu, Bin; Yang, Yi; Sharma, Santosh K; Zambare, Prachi; Madane, Mayura A

    2014-12-23

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a load feature database of a plurality of different electric load types, each of the different electric load types including a first load feature vector having at least four different load features; sensing a voltage signal and a current signal for each of the different electric loads; determining a second load feature vector comprising at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the different electric loads; and identifying by a processor one of the different electric load types by determining a minimum distance of the second load feature vector to the first load feature vector of the different electric load types of the load feature database.
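
    The identification step reduces to a nearest-neighbour search in feature space. A minimal sketch (the load types, feature names and values below are invented for illustration; the patent's actual features and database differ):

```python
import numpy as np

# Hypothetical 4-feature load signatures derived from sensed V/I waveforms
load_db = {
    "resistive":   np.array([1.00, 0.02, 0.01, 0.00]),
    "motor":       np.array([0.85, 0.40, 0.10, 0.05]),
    "electronics": np.array([0.60, 0.05, 0.45, 0.30]),
}

def identify(features):
    """Return the stored load type at minimum Euclidean distance."""
    return min(load_db, key=lambda name: np.linalg.norm(load_db[name] - features))

measured = np.array([0.62, 0.06, 0.44, 0.28])  # second load feature vector
print(identify(measured))  # electronics
```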

  12. System and method employing a self-organizing map load feature database to identify electric load types of different electric loads

    DOEpatents

    Lu, Bin; Harley, Ronald G.; Du, Liang; Yang, Yi; Sharma, Santosh K.; Zambare, Prachi; Madane, Mayura A.

    2014-06-17

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a self-organizing map load feature database of a plurality of different electric load types and a plurality of neurons, each of the load types corresponding to a number of the neurons; employing a weight vector for each of the neurons; sensing a voltage signal and a current signal for each of the loads; determining a load feature vector including at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the loads; and identifying by a processor one of the load types by relating the load feature vector to the neurons of the database by identifying the weight vector of one of the neurons corresponding to the one of the load types that is a minimal distance to the load feature vector.

  13. Checking the Goldbach conjecture up to 4·10^11

    NASA Astrophysics Data System (ADS)

    Sinisalo, Matti K.

    1993-10-01

    One of the most studied problems in additive number theory, Goldbach's conjecture, states that every even integer greater than or equal to 4 can be expressed as a sum of two primes. In this paper, verification of this conjecture up to 4·10^11 on an IBM 3083 mainframe with a vector processor is reported.
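
    A toy version of the check is a few lines of Python over a much smaller range (reaching 4·10^11 required a sieve vectorized for the IBM 3083):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes; returns a byte table of primality."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sieve

def goldbach_holds(n, sieve):
    """True if even n >= 4 is a sum of two primes."""
    return any(sieve[p] and sieve[n - p] for p in range(2, n // 2 + 1))

LIMIT = 100_000  # the paper pushed this bound to 4e11
sieve = primes_up_to(LIMIT)
all_hold = all(goldbach_holds(n, sieve) for n in range(4, LIMIT + 1, 2))
print(all_hold)  # True
```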

  14. Unobtrusive Software and System Health Management with R2U2 on a Parallel MIMD Coprocessor

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Moosbrugger, Patrick

    2017-01-01

    Dynamic monitoring of software and system health of a complex cyber-physical system requires observers that continuously monitor variables of the embedded software in order to detect anomalies and reason about root causes. There exists a variety of techniques for code instrumentation, but instrumentation might change runtime behavior and could require costly software re-certification. In this paper, we present R2U2E, a novel realization of our real-time, Realizable, Responsive, and Unobtrusive Unit (R2U2). The R2U2E observers are executed in parallel on a dedicated 16-core EPIPHANY co-processor, thereby avoiding additional computational overhead to the system under observation. A DMA-based shared memory access architecture allows R2U2E to operate without any code instrumentation or program interference.

  15. Cyber Forensics Ontology for Cyber Criminal Investigation

    NASA Astrophysics Data System (ADS)

    Park, Heum; Cho, Sunho; Kwon, Hyuk-Chul

    We developed Cyber Forensics Ontology for the criminal investigation in cyber space. Cyber crime is classified into cyber terror and general cyber crime, and those two classes are connected with each other. The investigation of cyber terror requires high technology, system environment and experts, and general cyber crime is connected with general crime by evidence from digital data and cyber space. Accordingly, it is difficult to determine relational crime types and collect evidence. Therefore, we considered the classifications of cyber crime, the collection of evidence in cyber space and the application of laws to cyber crime. In order to efficiently investigate cyber crime, it is necessary to integrate those concepts for each cyber crime-case. Thus, we constructed a cyber forensics domain ontology for criminal investigation in cyber space, according to the categories of cyber crime, laws, evidence and information of criminals. This ontology can be used in the process of investigating cyber crime-cases, and for data mining of cyber crime: classification, clustering, association and detection of crime types, crime cases, evidence and criminals.

  16. Feasibility study for a numerical aerodynamic simulation facility. Volume 3: FMP language specification/user manual

    NASA Technical Reports Server (NTRS)

    Kenner, B. G.; Lincoln, N. R.

    1979-01-01

    The manual is intended to show the revisions and additions to the current STAR FORTRAN. The changes are made to incorporate an FMP (Flow Model Processor) for use in the Numerical Aerodynamic Simulation Facility (NASF) for the purpose of simulating fluid flow over three-dimensional bodies in wind tunnel environments and in free space. The FORTRAN programming language for the STAR-100 computer contains both CDC and unique STAR extensions to the standard FORTRAN. Several of the STAR FORTRAN extensions to standard FORTRAN allow the FORTRAN user to exploit the vector processing capabilities of the STAR computer. In STAR FORTRAN, vectors can be expressed with an explicit notation, functions are provided that return vector results, and special call statements enable access to any machine instruction.

  17. PS3 CELL Development for Scientific Computation and Research

    NASA Astrophysics Data System (ADS)

    Christiansen, M.; Sevre, E.; Wang, S. M.; Yuen, D. A.; Liu, S.; Lyness, M. D.; Broten, M.

    2007-12-01

    The Cell processor is one of the most powerful processors on the market, and researchers in the earth sciences may find its parallel architecture to be very useful. A Cell processor, with 7 cores, can easily be obtained for experimentation by purchasing a PlayStation 3 (PS3) and installing Linux and the IBM SDK. Each core of the PS3 is capable of 25 GFLOPS, giving a potential limit of 150 GFLOPS when using all 6 SPUs (synergistic processing units) with vectorized algorithms. We have used the Cell's computational power to create a program which takes simulated tsunami datasets, parses them, and returns a colorized height field image using ray casting techniques. As expected, the time required to create an image is inversely proportional to the number of SPUs used. We believe that this trend will continue when multiple PS3s are chained using OpenMP functionality and are in the process of researching this. By using the Cell to visualize tsunami data, we have found that its greatest feature is its power. This fact aligns well with the needs of the scientific community, where the limiting factor is time. Any algorithm, such as the heat equation, that can be subdivided into multiple parts can take advantage of the PS3 Cell's ability to split the computations across the 6 SPUs, reducing the required run time to one sixth. Further vectorization of the code allows 4 simultaneous floating point operations by using the SIMD (single instruction multiple data) capabilities of the SPU, increasing efficiency 24 times.
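
    The abstract's claim that any subdividable algorithm, such as the heat equation, can be split across the 6 SPUs can be illustrated with a chunked explicit update. The sketch below is purely illustrative (plain Python work units standing in for SPUs), not the authors' Cell code:

```python
# Hypothetical sketch: splitting one explicit 1-D heat-equation step into
# chunks, mimicking how work could be divided across the Cell's 6 SPUs.
def heat_step(u, alpha=0.1):
    """One explicit finite-difference step on the interior points of u."""
    return [u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1])
            for i in range(1, len(u) - 1)]

def heat_step_chunked(u, n_workers=6, alpha=0.1):
    """Same update, computed chunk by chunk as independent work units."""
    interior = len(u) - 2
    out = []
    for w in range(n_workers):
        lo = 1 + w * interior // n_workers
        hi = 1 + (w + 1) * interior // n_workers
        # each "SPU" needs only its chunk plus one halo point on each side
        out.extend(u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1])
                   for i in range(lo, hi))
    return out

u = [0.0] * 8 + [1.0] + [0.0] * 8   # spike initial condition
assert heat_step(u) == heat_step_chunked(u)
```

    Because the chunks share only read-only halo points, each can be dispatched to a separate processing unit without synchronization within a step.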

  18. Navier-Stokes calculations for DFVLR F5-wing in wind tunnel using Runge-Kutta time-stepping scheme

    NASA Technical Reports Server (NTRS)

    Vatsa, V. N.; Wedan, B. W.

    1988-01-01

    A three-dimensional Navier-Stokes code using an explicit multistage Runge-Kutta type of time-stepping scheme is used for solving the transonic flow past a finite wing mounted inside a wind tunnel. Flow past the same wing in free air was also computed to assess the effect of wind-tunnel walls on such flows. Numerical efficiency is enhanced through vectorization of the computer code. A Cyber 205 computer with 32 million words of internal memory was used for these computations.

  19. Preliminary results in implementing a model of the world economy on the CYBER 205: A case of large sparse nonsymmetric linear equations

    NASA Technical Reports Server (NTRS)

    Szyld, D. B.

    1984-01-01

    A brief description of the Model of the World Economy implemented at the Institute for Economic Analysis is presented, together with our experience in converting the software to vector code. For each time period, the model is reduced to a linear system of over 2000 variables. The matrix of coefficients has a bordered block diagonal structure, and we show how some of the matrix operations can be carried out on all diagonal blocks at once.
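
    The idea of carrying out matrix operations on all diagonal blocks at once can be sketched as a batched solve. This is only an illustration (numpy standing in for the CYBER 205 vector hardware; the block sizes are hypothetical, not those of the world-economy model):

```python
import numpy as np

# Illustrative sketch: for a block diagonal matrix, operating "on all
# diagonal blocks at once" amounts to one batched operation over a 3-D
# stack of blocks rather than a loop over them.
rng = np.random.default_rng(0)
n_blocks, k = 5, 4                      # hypothetical sizes
blocks = rng.standard_normal((n_blocks, k, k)) + 3 * np.eye(k)
rhs = rng.standard_normal((n_blocks, k))

# one vectorized call solves every diagonal block simultaneously
x_batched = np.linalg.solve(blocks, rhs[..., None])[..., 0]

# loop version for comparison
x_loop = np.stack([np.linalg.solve(blocks[i], rhs[i])
                   for i in range(n_blocks)])
assert np.allclose(x_batched, x_loop)
```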

  20. Equation solvers for distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    A large number of scientific and engineering problems require the rapid solution of large systems of simultaneous equations. The performance of parallel computers in this area now dwarfs traditional vector computers by nearly an order of magnitude. This talk describes the major issues involved in parallel equation solvers with particular emphasis on the Intel Paragon, IBM SP-1 and SP-2 processors.

  1. A low-cost vector processor boosting compute-intensive image processing operations

    NASA Technical Reports Server (NTRS)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP-boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation is presented of the standard Tarasko-Richardson-Lucy restoration algorithm on an Intel i860-based VP-board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
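
    For illustration, here is a minimal 1-D sketch of the Richardson-Lucy iteration the abstract refers to. The actual implementation ran on an i860 VP-board against 2-D HST images; this toy version assumes a known, normalized point-spread function:

```python
import numpy as np

# Illustrative 1-D Richardson-Lucy deconvolution (minimal numpy sketch,
# not the i860 vector-board implementation from the abstract).
def richardson_lucy(observed, psf, n_iter=50):
    psf = psf / psf.sum()
    psf_flip = psf[::-1]                 # adjoint of convolution
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate

# blur a spike, then recover it
truth = np.zeros(31); truth[15] = 1.0
psf = np.array([0.25, 0.5, 0.25])
data = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(data, psf)
assert restored.argmax() == 15
```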

  2. A CPU benchmark for protein crystallographic refinement.

    PubMed

    Bourne, P E; Hendrickson, W A

    1990-01-01

    The CPU time required to complete a cycle of restrained least-squares refinement of a protein structure from X-ray crystallographic data using the FORTRAN codes PROTIN and PROLSQ is reported for 48 different processors, ranging from single-user workstations to supercomputers. Sequential, vector, VLIW, multiprocessor, and RISC hardware architectures are compared using both a small and a large protein structure. Representative compile times for each hardware type are also given, and the improvement in run-time when coding for a specific hardware architecture is considered. The benchmarks involve scalar integer and vector floating point arithmetic and are representative of the calculations performed in many scientific disciplines.

  3. Using domain decomposition in the multigrid NAS parallel benchmark on the Fujitsu VPP500

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J.C.H.; Lung, H.; Katsumata, Y.

    1995-12-01

    In this paper, we demonstrate how domain decomposition can be applied to the multigrid algorithm to convert the code for MPP architectures. We also discuss the performance and scalability of this implementation on the new product line of Fujitsu's vector parallel computers, the VPP500. This computer has Fujitsu's well-known vector processor as the PE, each rated at 1.6 GFLOPS. The high speed crossbar network rated at 800 MB/s provides the inter-PE communication. The results show that physical domain decomposition is the best way to solve multigrid problems on the VPP500.

  4. Optimizing the inner loop of the gravitational force interaction on modern processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, Michael S

    2010-12-08

    We have achieved superior performance on multiple generations of the fastest supercomputers in the world with our hashed oct-tree N-body code (HOT), spanning almost two decades and garnering multiple Gordon Bell Prizes for significant achievement in parallel processing. Execution time for our N-body code is largely influenced by the force calculation in the inner loop. Improvements to the inner loop using SSE3 instructions have enabled the calculation of over 200 million gravitational interactions per second per processor on a 2.6 GHz Opteron, for a computational rate of over 7 Gflops in single precision (70% of peak). We obtain optimal performance on some processors (including the Cell) by decomposing the reciprocal square root function required for a gravitational interaction into a table lookup, Chebychev polynomial interpolation, and Newton-Raphson iteration, using the algorithm of Karp. By unrolling the loop by a factor of six, and using SPU intrinsics to compute on vectors, we obtain performance of over 16 Gflops on a single Cell SPE. Aggregated over the 8 SPEs on a Cell processor, the overall performance is roughly 130 Gflops. In comparison, the ordinary C version of our inner loop only obtains 1.6 Gflops per SPE with the spuxlc compiler.
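
    The Newton-Raphson stage of the decomposition described above is the standard reciprocal-square-root iteration. The sketch below uses a crude exponent-halving seed as a stand-in for the table-lookup and Chebychev stages (the seed is an assumption for illustration, not Karp's algorithm):

```python
import math

# Sketch of the Newton-Raphson stage for y = 1/sqrt(a): each iteration
# y <- y * (1.5 - 0.5 * a * y * y) roughly doubles the number of correct
# bits. The crude seed below stands in for the table lookup + Chebychev
# interpolation stages described in the abstract.
def rsqrt(a, n_iter=5):
    m, e = math.frexp(a)          # a = m * 2**e, with 0.5 <= m < 1
    y = math.ldexp(1.0, -e // 2)  # crude seed: roughly halve the exponent
    for _ in range(n_iter):
        y = y * (1.5 - 0.5 * a * y * y)
    return y

assert abs(rsqrt(2.0) - 1 / math.sqrt(2.0)) < 1e-9
```

    The division-free form of the iteration is what makes it attractive on hardware where divide and square-root units are slow relative to multiply-add.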

  5. Acceleration of spiking neural network based pattern recognition on NVIDIA graphics processors.

    PubMed

    Han, Bing; Taha, Tarek M

    2010-04-01

    There is currently a strong push in the research community to develop biological scale implementations of neuron based vision models. Systems at this scale are computationally demanding and generally utilize more accurate neuron models, such as the Izhikevich and the Hodgkin-Huxley models, in place of the more popular integrate and fire model. We examine the feasibility of using graphics processing units (GPUs) to accelerate a spiking neural network based character recognition network to enable such large scale systems. Two versions of the network utilizing the Izhikevich and Hodgkin-Huxley models are implemented. Three NVIDIA general-purpose (GP) GPU platforms are examined, including the GeForce 9800 GX2, the Tesla C1060, and the Tesla S1070. Our results show that the GPGPUs can provide significant speedup over conventional processors. In particular, the fastest GPGPU utilized, the Tesla S1070, provided a speedup of 5.6 and 84.4 over highly optimized implementations on the fastest central processing unit (CPU) tested, a quadcore 2.67 GHz Xeon processor, for the Izhikevich and the Hodgkin-Huxley models, respectively. The CPU implementation utilized all four cores and the vector data parallelism offered by the processor. The results indicate that GPUs are well suited for this application domain.
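
    The Izhikevich model benchmarked above follows a well-known pair of update equations; a minimal single-neuron Euler sketch is shown below (standard regular-spiking parameters from the published model; this is not the authors' GPU code):

```python
# Minimal single-neuron sketch of the Izhikevich model (standard
# regular-spiking parameters; dt in ms, I is a constant input current).
def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, steps=2000):
    v, u = -65.0, b * -65.0           # membrane potential and recovery variable
    spikes = 0
    for _ in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike: reset membrane state
            v, u = c, u + d
            spikes += 1
    return spikes

assert izhikevich() > 0               # constant drive produces repeated spiking
```

    The per-neuron update is just a handful of multiply-adds, which is why thousands of such neurons map well onto GPU threads.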

  6. Gain in computational efficiency by vectorization in the dynamic simulation of multi-body systems

    NASA Technical Reports Server (NTRS)

    Amirouche, F. M. L.; Shareef, N. H.

    1991-01-01

    An improved technique for the identification and extraction of the exact quantities associated with the degrees of freedom at the element as well as the flexible body level is presented. It is implemented in the dynamic equations of motion based on the recursive formulation of Kane et al. (1987) and presented in a matrix form, integrating the concepts of strain energy, the finite-element approach, modal analysis, and reduction of equations. This technique eliminates the CPU intensive matrix multiplication operations in the code's hot spots for the dynamic simulation of the interconnected rigid and flexible bodies. A study of a simple robot with flexible links is presented by comparing the execution times on a scalar machine and a vector-processor with and without vector options. Performance figures demonstrating the substantial gains achieved by the technique are plotted.

  7. An efficient parallel algorithm for matrix-vector multiplication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrickson, B.; Leland, R.; Plimpton, S.

    The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/[radical]p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
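
    The O(n/sqrt(p) + log(p)) communication bound comes from a two-dimensional block decomposition of the matrix. The serial sketch below mimics what a sqrt(p) x sqrt(p) processor grid computes; the partitioning is illustrative, not the authors' hypercube code:

```python
import numpy as np

# Serial sketch of a 2-D block decomposition: with p processors arranged in
# a q x q grid (q = sqrt(p)), each owns one block of A, multiplies it by its
# slice of x, and partial results are summed across each processor row.
def blocked_matvec(A, x, q):
    n = A.shape[0]
    assert n % q == 0
    b = n // q
    y = np.zeros(n)
    for i in range(q):                  # processor row
        for j in range(q):              # processor column
            # partial product computed by processor (i, j)
            y[i*b:(i+1)*b] += A[i*b:(i+1)*b, j*b:(j+1)*b] @ x[j*b:(j+1)*b]
    return y

rng = np.random.default_rng(1)
A = rng.standard_normal((12, 12))
x = rng.standard_normal(12)
assert np.allclose(blocked_matvec(A, x, 3), A @ x)
```

    On a real machine, the inner accumulation becomes a reduction across each processor row, which is where the log(p) term in the communication cost arises.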

  8. CyberShake-derived ground-motion prediction models for the Los Angeles region with application to earthquake early warning

    USGS Publications Warehouse

    Bose, Maren; Graves, Robert; Gill, David; Callaghan, Scott; Maechling, Phillip J.

    2014-01-01

    Real-time applications such as earthquake early warning (EEW) typically use empirical ground-motion prediction equations (GMPEs) along with event magnitude and source-to-site distances to estimate expected shaking levels. In this simplified approach, effects due to finite-fault geometry, directivity and site and basin response are often generalized, which may lead to a significant under- or overestimation of shaking from large earthquakes (M > 6.5) in some locations. For enhanced site-specific ground-motion predictions considering 3-D wave-propagation effects, we develop support vector regression (SVR) models from the SCEC CyberShake low-frequency (<0.5 Hz) and broad-band (0–10 Hz) data sets. CyberShake encompasses 3-D wave-propagation simulations of >415 000 finite-fault rupture scenarios (6.5 ≤ M ≤ 8.5) for southern California defined in UCERF 2.0. We use CyberShake to demonstrate the application of synthetic waveform data to EEW as a ‘proof of concept’, being aware that these simulations are not yet fully validated and might not appropriately sample the range of rupture uncertainty. Our regression models predict the maximum and the temporal evolution of instrumental intensity (MMI) at 71 selected test sites using only the hypocentre, magnitude and rupture ratio, which characterizes uni- and bilateral rupture propagation. Our regression approach is completely data-driven (where here the CyberShake simulations are considered data) and does not enforce pre-defined functional forms or dependencies among input parameters. The models were established from a subset (∼20 per cent) of CyberShake simulations, but can explain MMI values of all >400 k rupture scenarios with a standard deviation of about 0.4 intensity units. We apply our models to determine threshold magnitudes (and warning times) for various active faults in southern California that earthquakes need to exceed to cause at least ‘moderate’, ‘strong’ or ‘very strong’ shaking in the Los Angeles (LA) basin. 
These thresholds are used to construct a simple and robust EEW algorithm: to declare a warning, the algorithm only needs to locate the earthquake and to verify that the corresponding magnitude threshold is exceeded. The models predict that a relatively moderate M6.5–7 earthquake along the Palos Verdes, Newport-Inglewood/Rose Canyon, Elsinore or San Jacinto faults with a rupture propagating towards LA could cause ‘very strong’ to ‘severe’ shaking in the LA basin; however, warning times for these events could exceed 30 s.

  9. A vector scanning processing technique for pulsed laser velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Edwards, Robert V.

    1989-01-01

    Pulsed laser sheet velocimetry yields nonintrusive measurements of two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high precision (1 pct) velocity estimates, but can require several hours of processing time on specialized array processors. Under some circumstances, a simple, fast, less accurate (approx. 5 pct), data reduction technique which also gives unambiguous velocity vector information is acceptable. A direct space domain processing technique was examined. The direct space domain processing technique was found to be far superior to any other techniques known, in achieving the objectives listed above. It employs a new data coding and reduction technique, where the particle time history information is used directly. Further, it has no 180 deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 minutes on an 80386 based PC, producing a 2-D velocity vector map of the flow field. Hence, using this new space domain vector scanning (VS) technique, pulsed laser velocimetry data can be reduced quickly and reasonably accurately, without specialized array processing hardware.

  10. Scaling Support Vector Machines On Modern HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Fu, Haohuan; Song, Shuaiwen

    2015-02-01

    We designed and implemented MIC-SVM, a highly efficient parallel SVM for x86-based multicore and many-core architectures, such as Intel Ivy Bridge CPUs and the Intel Xeon Phi co-processor (MIC). We propose various novel analysis methods and optimization techniques to fully utilize the multilevel parallelism provided by these architectures; these techniques can also serve as general optimization methods for other machine learning tools.

  11. Numerical simulation of unsteady viscous flows

    NASA Technical Reports Server (NTRS)

    Hankey, Wilbur L.

    1987-01-01

    Most unsteady viscous flows may be grouped into two categories, i.e., forced and self-sustained oscillations. Examples of forced oscillations occur in turbomachinery and in internal combustion engines while self-sustained oscillations prevail in vortex shedding, inlet buzz, and wing flutter. Numerical simulation of these phenomena was achieved due to the advancement of vector processor computers. Recent progress in the simulation of unsteady viscous flows is addressed.

  12. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  14. The Vector, Signal, and Image Processing Library (VSIPL): an Open Standard for Astronomical Data Processing

    NASA Astrophysics Data System (ADS)

    Kepner, J. V.; Janka, R. S.; Lebak, J.; Richards, M. A.

    1999-12-01

    The Vector/Signal/Image Processing Library (VSIPL) is a DARPA-initiated effort made up of industry, government and academic representatives who have defined an industry standard API for vector, signal, and image processing primitives for real-time signal processing on high performance systems. VSIPL supports a wide range of data types (int, float, complex, ...) and layouts (vectors, matrices and tensors) and is ideal for astronomical data processing. The VSIPL API is intended to serve as an open, vendor-neutral, industry standard interface. The object-based VSIPL API abstracts the memory architecture of the underlying machine by using the concept of memory blocks and views. Early experiments with VSIPL code conversions have been carried out by the High Performance Computing Program team at UCSD. Commercially, several major vendors of signal processors are actively developing implementations. VSIPL has also been explicitly required as part of a recent Rome Labs teraflop procurement. This poster presents the VSIPL API, its functionality and the status of various implementations.

  15. Test aspects of the JPL Viterbi decoder

    NASA Technical Reports Server (NTRS)

    Breuer, M. A.

    1989-01-01

    The generation of test vectors and design-for-test aspects of the Jet Propulsion Laboratory (JPL) Very Large Scale Integration (VLSI) Viterbi decoder chip is discussed. Each processor integrated circuit (IC) contains over 20,000 gates. To achieve a high degree of testability, a scan architecture is employed. The logic has been partitioned so that very few test vectors are required to test the entire chip. In addition, since several blocks of logic are replicated numerous times on this chip, test vectors need only be generated for each block, rather than for the entire circuit. These unique blocks of logic have been identified and test sets generated for them. The approach employed for testing was to use pseudo-exhaustive test vectors whenever feasible. That is, each cone of logic is tested exhaustively. Using this approach, no detailed logic design or fault model is required. All faults which modify the function of a block of combinational logic are detected, such as all irredundant single and multiple stuck-at faults.

  16. Multi-color incomplete Cholesky conjugate gradient methods for vector computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poole, E.L.

    1986-01-01

    This research is concerned with the solution on vector computers of linear systems of equations Ax = b, where A is a large, sparse symmetric positive definite matrix with non-zero elements lying only along a few diagonals of the matrix. The system is solved using the incomplete Cholesky conjugate gradient method (ICCG). Multi-color orderings of the unknowns in the linear system are used to obtain p-color matrices for which a no-fill block ICCG method is implemented on the CYBER 205 with O(N/p) length vector operations in both the decomposition of A and, more importantly, in the forward and back solves necessary at each iteration of the method (N is the number of unknowns and p is a small constant). A p-colored matrix is a matrix that can be partitioned into a p x p block matrix where the diagonal blocks are diagonal matrices. The matrix is stored by diagonals, and matrix multiplication by diagonals is used to carry out the decomposition of A and the forward and back solves. Additionally, if the vectors across adjacent blocks line up, then some of the overhead associated with vector startups can be eliminated in the matrix-vector multiplication necessary at each conjugate gradient iteration. Necessary and sufficient conditions are given to determine which multi-color orderings of the unknowns correspond to p-color matrices, and a process is indicated for choosing multi-color orderings.
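
    The defining property of a p-color matrix can be seen in the classic red-black (p = 2) ordering of a 1-D Poisson matrix. The sketch below is illustrative only (not the paper's CYBER 205 code):

```python
import numpy as np

# Sketch of a 2-color (red-black) ordering for the 1-D Poisson matrix:
# permuting even-indexed unknowns before odd-indexed ones yields a 2 x 2
# block matrix whose diagonal blocks are themselves diagonal, which is the
# defining property of a p-color matrix.
n = 8
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Poisson matrix
perm = list(range(0, n, 2)) + list(range(1, n, 2))   # reds first, then blacks
P = np.eye(n)[perm]                                  # permutation matrix
B = P @ A @ P.T                                      # reordered system

h = n // 2
red_block, black_block = B[:h, :h], B[h:, h:]
assert np.allclose(red_block, np.diag(np.diag(red_block)))
assert np.allclose(black_block, np.diag(np.diag(black_block)))
```

    With diagonal diagonal-blocks, the forward and back triangular solves reduce to long vector operations over each color, which is the source of the O(N/p) vector lengths quoted in the abstract.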

  17. Normative beliefs about aggression and cyber aggression among young adults: a longitudinal investigation.

    PubMed

    Wright, Michelle F; Li, Yan

    2013-01-01

    This longitudinal study examined normative beliefs about aggression (e.g., face-to-face, cyber) in relation to the engagement in cyber aggression 6 months later among 126 (69 women) young adults. Participants completed electronically administered measures assessing their normative beliefs, face-to-face and cyber aggression at Time 1, and cyber aggression 6 months later (Time 2). We found that men reported more cyber relational and verbal aggression when compared to women. After controlling for each other, Time 1 face-to-face relational aggression was positively related to Time 2 cyber relational aggression, whereas Time 1 face-to-face verbal aggression was positively related to Time 2 cyber verbal aggression. Normative beliefs regarding cyber aggression were positively related to both forms of cyber aggression 6 months later, after controlling for normative beliefs about face-to-face aggression. Furthermore, a significant two-way interaction between Time 1 cyber relational aggression and normative beliefs about cyber relational aggression was found. Follow-up analysis showed that Time 1 cyber relational aggression was more strongly related to Time 2 cyber relational aggression when young adults held higher normative beliefs about cyber relational aggression. A similar two-way interaction was found for cyber verbal aggression such that the association between Time 1 and Time 2 cyber verbal aggression was stronger at higher levels of normative beliefs about cyber verbal aggression. Results are discussed in terms of the social cognitive and behavioral mechanisms associated with the engagement in cyber aggression. © 2013 Wiley Periodicals, Inc.

  18. Using Intel's Knight Landing Processor to Accelerate Global Nested Air Quality Prediction Modeling System (GNAQPMS) Model

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, H.; Chen, X.; Wu, Q.; Wang, Z.

    2016-12-01

    The Global Nested Air Quality Prediction Modeling System for Hg (GNAQPMS-Hg) is a global chemical transport model coupled with a mercury transport module to investigate mercury pollution. In this study, we present our work of porting the GNAQPMS model to the Intel Xeon Phi processor, Knights Landing (KNL), to accelerate the model. KNL is the second-generation product adopting the Many Integrated Core (MIC) architecture. Compared with the first-generation Knights Corner (KNC), KNL has new hardware features; notably, it can be used as a standalone processor as well as a coprocessor alongside another CPU. Using the Vtune tool, the high-overhead modules in the GNAQPMS model were identified, including the CBMZ gas chemistry, the advection and convection module, and the wet deposition module. These high-overhead modules were accelerated by optimizing the code and exploiting new features of KNL. The following optimization measures were taken: 1) changing the pure MPI parallel mode to a hybrid parallel mode with MPI and OpenMP; 2) vectorizing the code to use the 512-bit-wide vector computation unit; 3) reducing unnecessary memory access and calculation; 4) reducing Thread Local Storage (TLS) for common variables within each OpenMP thread in CBMZ; 5) changing the global communication from file writing and reading to MPI functions. After optimization, the performance of GNAQPMS is greatly increased on both the CPU and KNL platforms; single-node tests showed that the optimized version has a 2.6x speedup on a two-socket CPU platform and a 3.3x speedup on a one-socket KNL platform compared with the baseline code, which means KNL has a 1.29x speedup over the two-socket CPU platform.

  19. N-body simulation for self-gravitating collisional systems with a new SIMD instruction set extension to the x86 architecture, Advanced Vector eXtensions

    NASA Astrophysics Data System (ADS)

    Tanikawa, Ataru; Yoshikawa, Kohji; Okamoto, Takashi; Nitadori, Keigo

    2012-02-01

    We present a high-performance N-body code for self-gravitating collisional systems accelerated with the aid of a new SIMD instruction set extension of the x86 architecture: Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). With one processor core of an Intel Core i7-2600 processor (8 MB cache and 3.40 GHz) based on the Sandy Bridge micro-architecture, we implemented a fourth-order Hermite scheme with individual timesteps (Makino and Aarseth, 1992), and achieved a performance of ~20 giga floating point operations per second (GFLOPS) for double-precision accuracy, which is two times and five times higher than that of the previously developed code implemented with the SSE instructions (Nitadori et al., 2006b), and that of a code implemented without any explicit use of SIMD instructions on the same processor core, respectively. We have parallelized the code by using the so-called NINJA scheme (Nitadori et al., 2006a), and achieved ~90 GFLOPS for a system containing more than N = 8192 particles with 8 MPI processes on four cores. We expect to achieve about 10 tera FLOPS (TFLOPS) for a self-gravitating collisional system with N ~ 10^5 on massively parallel systems with at most 800 cores with the Sandy Bridge micro-architecture. This performance will be comparable to that of Graphic Processing Unit (GPU) cluster systems, such as the one with about 200 Tesla C1070 GPUs (Spurzem et al., 2010). This paper offers an alternative to collisional N-body simulations with GRAPEs and GPUs.
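
    The gain from SIMD vectorization of the pairwise force loop can be sketched with numpy broadcasting standing in for the AVX lanes. This is illustrative only; the paper's code uses hand-written AVX intrinsics within a Hermite integrator:

```python
import numpy as np

# Sketch contrasting a scalar pairwise-force loop with a vectorized one, in
# the spirit of the AVX optimization (numpy broadcasting stands in for the
# SIMD lanes; softening eps makes the self-interaction term vanish safely).
def accel_scalar(pos, mass, eps=1e-3):
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            d = pos[j] - pos[i]
            r2 = d @ d + eps*eps
            acc[i] += mass[j] * d / r2**1.5
    return acc

def accel_vector(pos, mass, eps=1e-3):
    d = pos[None, :, :] - pos[:, None, :]          # all pairwise separations
    r2 = (d**2).sum(-1) + eps*eps
    return (mass[None, :, None] * d / r2[..., None]**1.5).sum(1)

rng = np.random.default_rng(2)
pos = rng.standard_normal((16, 3))
mass = rng.uniform(0.5, 1.0, 16)
assert np.allclose(accel_scalar(pos, mass), accel_vector(pos, mass))
```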

  20. Parallel implementation of an adaptive and parameter-free N-body integrator

    NASA Astrophysics Data System (ADS)

    Pruett, C. David; Ingham, William H.; Herman, Ralph D.

    2011-05-01

    Previously, Pruett et al. (2003) [3] described an N-body integrator of arbitrarily high order M with an asymptotic operation count of O(MN). The algorithm's structure lends itself readily to data parallelization, which we document and demonstrate here in the integration of point-mass systems subject to Newtonian gravitation. High order is shown to benefit parallel efficiency. The resulting N-body integrator is robust, parameter-free, highly accurate, and adaptive in both time-step and order. Moreover, it exhibits linear speedup on distributed parallel processors, provided that each processor is assigned at least a handful of bodies. Program summary: Program title: PNB.f90; Catalogue identifier: AEIK_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIK_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 3052; No. of bytes in distributed program, including test data, etc.: 68 600; Distribution format: tar.gz; Programming language: Fortran 90 and OpenMPI; Computer: All shared or distributed memory parallel processors; Operating system: Unix/Linux; Has the code been vectorized or parallelized?: The code has been parallelized but has not been explicitly vectorized; RAM: Dependent upon N; Classification: 4.3, 4.12, 6.5; Nature of problem: High accuracy numerical evaluation of trajectories of N point masses each subject to Newtonian gravitation. Solution method: Parallel and adaptive extrapolation in time via power series of arbitrary degree. Running time: 5.1 s for the demo program supplied with the package.

  1. Interaction and Impact Studies for Distributed Energy Resource, Transactive Energy, and Electric Grid, using High Performance Computing-based Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelley, B. M.

    The electric utility industry is undergoing significant transformations in its operation model, including a greater emphasis on automation, monitoring technologies, and distributed energy resource management systems (DERMS). While these changes and new technologies drive greater efficiency and reliability, the new models may introduce new vectors of cyber attack. The appropriate cybersecurity controls to address and mitigate these newly introduced attack vectors and potential vulnerabilities are still widely unknown, and the performance of the controls is difficult to vet. This proposal argues that modeling and simulation (M&S) is a necessary tool to address and better understand the problems introduced by emerging technologies for the grid. M&S will provide electric utilities a platform to model their transmission and distribution systems and run various simulations against the model to better understand the operational impact and performance of cybersecurity controls.

  2. Missile signal processing common computer architecture for rapid technology upgrade

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidths grow and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain consists of two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to respond rapidly to new threats. A new design approach is made possible by three developments: Moore's Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements.
This application may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. The approach enables the use of standardized development tools and third-party software upgrades, as well as rapid upgrades of processing components as improved algorithms are developed. The resulting weapon system will have superior processing capability over a custom approach at the time of deployment, as a result of shorter development cycles and the use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system and, because modification is simple, can migrate between weapon system variants. This paper presents a reference design using the new approach that utilizes an AltiVec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS) and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of an interceptor algorithm operating on this real-time platform are provided.
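    As an illustration of the front-end stage mentioned above, a two-point non-uniformity correction applies a per-pixel gain and offset calibrated from reference frames so the focal-plane array responds uniformly. The sketch below is a generic toy, not the paper's implementation; all pixel values and calibration tables are invented:

```python
def nuc(frame, gain, offset):
    """Two-point non-uniformity correction: per-pixel gain and offset,
    calibrated from reference (cold/hot) frames, are applied to each
    raw pixel of the incoming video frame."""
    return [[gain[r][c] * (frame[r][c] - offset[r][c])
             for c in range(len(frame[0]))]
            for r in range(len(frame))]

# Invented 2x2 raw frame and calibration tables, for illustration only.
raw    = [[110, 90], [105, 95]]
gain   = [[1.0, 1.0], [0.5, 2.0]]
offset = [[10, 10], [5, 15]]
corrected = nuc(raw, gain, offset)
```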

  3. Electromagnetic Physics Models for Parallel Computing Architectures

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next-generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and the multi-threading capabilities of coprocessors, including NVIDIA GPUs and the Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and the type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as part of the GeantV project. Results of a preliminary performance evaluation and physics validation are presented as well.

  4. Kicking the digital dog: a longitudinal investigation of young adults' victimization and cyber-displaced aggression.

    PubMed

    Wright, Michelle F; Li, Yan

    2012-09-01

    Using the general strain theory as a theoretical framework, the present longitudinal study investigated both face-to-face and cyber victimization in relation to cyber-displaced aggression. Longitudinal data were collected from 130 (70 women) young adults who completed measures assessing their victimization (face-to-face and cyber), cyber aggression, and both face-to-face and cyber-displaced aggression. Findings indicated that victimization in both social contexts (face-to-face and cyber) contributed to cyber-displaced aggression 6 months later (Time 2), after controlling for gender, cyber aggression, face-to-face displaced aggression, and cyber-displaced aggression at Time 1. A significant two-way interaction revealed that Time 1 cyber victimization was more strongly related to Time 2 cyber-displaced aggression when young adults had higher levels of face-to-face victimization at Time 1. Implications of these findings are discussed as well as a call for more research investigating displaced aggression in the cyber context.

  5. [Cyber-bullying in adolescents: associated psychosocial problems and comparison with school bullying].

    PubMed

    Kubiszewski, V; Fontaine, R; Huré, K; Rusch, E

    2013-04-01

    The aim of this study was to determine the prevalence of adolescents engaged in cyber-bullying and then to identify whether students involved in cyber-bullying and school bullying present the same characteristics of internalizing problems (insomnia, perceived social disintegration, psychological distress) and externalizing problems (general aggressiveness, antisocial behavior). Semi-structured interviews were conducted with 738 adolescents from a high school and a middle school (mean age = 14.8 ± 2.7). The Electronic Bullying Questionnaire and the Olweus Bully/Victim Questionnaire were used to identify profiles of cyber-bullying (cyber-victim, cyber-bully, cyber-bully/victim and cyber-neutral) and school bullying (victim, bully, bully/victim and neutral). Internalizing problems were investigated using the Athens Insomnia Scale, a Perceived Social Disintegration Scale and a Psychological Distress Scale. Externalizing problems were assessed using a General Aggressiveness Scale and an Antisocial Behavior Scale. Almost one student in four was involved in cyber-bullying (16.4% as cyber-victim, 4.9% as cyber-bully and 5.6% as cyber-bully/victim); 14% of our sample was engaged in school bullying as a victim, 7.2% as a bully and 2.8% as a bully/victim. The majority of adolescents involved in cyber-bullying were not involved in school bullying. With regard to the problems associated with school bullying, internalizing problems were more prevalent in victims and bully/victims, whereas externalizing problems were more common in bullies and bully/victims. A similar pattern was found in cyber-bullying, where internalizing problems were characteristic of cyber-victims and cyber-bully/victims. Insomnia was also elevated in the cyber-bully group, a finding specific to cyber-bullying. General aggressiveness and antisocial behavior were more prevalent in cyber-bullies and cyber-bully/victims.
Looking at the differences between types of bullying, victims of "school only" and "school and cyber" bullying had higher scores for insomnia and perceived social disintegration than victims of "cyber only" bullying or students "non-involved". Higher general aggressiveness scores were observed for "school only" bullies and "school and cyber" bullies than for bullies in "cyber only" bullying or students "non-involved". Regarding antisocial behavior, "school only" bullies, "cyber only" bullies, "school and cyber" bullies had higher scores than students "non-involved". This study highlights the importance of investigating both school and cyber-bullying as many psychosocial problems are linked to these two specific and highly prevalent forms of bullying. Copyright © 2012 L’Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.

  6. Predictors of willingness to use cyber counseling for college students with disabilities.

    PubMed

    Lan, Chu-Mei

    2016-04-01

    Cyber counseling is a new method for assisting people in coping with distress. People in Taiwan are more familiar with face-to-face counseling than with cyber counseling. Using computers is the most popular activity among college students with disabilities, and cyber counseling is effective in lessening client disturbance; cyber counseling is therefore an alternative to face-to-face counseling. This study measured the willingness of college students with disabilities to use cyber counseling to meet their mental health needs. In addition, the predictors of the willingness to use cyber counseling were explored. The subjects were college students with disabilities recruited from universities in Southern Taiwan through the Internet and in college counseling centers. A total of 214 structured questionnaires were collected and analyzed using SPSS Version 18.0; stepwise regression was used to identify the crucial predictors of the students' willingness to use cyber counseling. The crucial predictors were cyber-counseling needs, the need to use various cyber-counseling methods, a help-seeking attitude in cyber counseling, a hearing disorder, the need for cyber counseling for academic achievement, and grade level. The explained proportion of variance was 65.4%. The willingness of college students with disabilities to use cyber counseling was thus explained by cyber-counseling needs, cyber-counseling attitudes, disability type, and grade level. Based on the results, this study offers specific suggestions and future directions for research on cyber counseling for college students with disabilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to analyze only the numeric properties of an algorithm and distribute those elements across multiple processors. The software described here advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements, analyzes them for their decision properties, and converts them to a form that allows them to be executed automatically in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems, and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so that it decomposes naturally across parallel architectures.
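    The report gives no source, but the transformation it describes — rewriting one compound decision as a disjunctive set of component Boolean relations that are evaluated concurrently — can be sketched as follows (Python, with threads standing in for parallel processors; the predicates and state variables are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# A compound decision such as
#   (a > 0 and b < 5) or (c == 3 and b >= 5)
# becomes a disjunctive set of component relations, each a
# self-contained predicate that can run on its own processor.
disjuncts = [
    lambda s: s["a"] > 0 and s["b"] < 5,
    lambda s: s["c"] == 3 and s["b"] >= 5,
]

def decide(state):
    # Evaluate every disjunct concurrently; the decision is their OR.
    with ThreadPoolExecutor(max_workers=len(disjuncts)) as pool:
        return any(pool.map(lambda d: d(state), disjuncts))

result = decide({"a": 1, "b": 7, "c": 3})  # second disjunct holds
```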

  8. Tomo3D 2.0--exploitation of advanced vector extensions (AVX) for 3D reconstruction.

    PubMed

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-02-01

    Tomo3D is a program for fast tomographic reconstruction on multicore computers. Its high speed stems from code optimization, vectorization with Streaming SIMD Extensions (SSE), multithreading and optimization of disk access. Recently, Advanced Vector eXtensions (AVX) have been introduced in the x86 processor architecture. Compared to SSE, AVX doubles the number of simultaneous operations, thus pointing to a potential twofold gain in speed. In practice, however, achieving this potential is extremely difficult. Here, we provide a technical description and an assessment of the optimizations included in Tomo3D to take advantage of AVX instructions. Tomo3D 2.0 allows huge reconstructions to be calculated on standard computers in a matter of minutes. Thus, it will be a valuable tool for electron tomography studies with increasing resolution needs. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. 2005 6th Annual Science and Engineering Technology Conference

    DTIC Science & Technology

    2005-04-21

    BioFAC VBAIDS Hybrid: PCR/Immuno Fast PCR Fast Immunoassay Mass Spec (Pyrolysis) SIBS UV -LIF IR Fluorochrome Charge Detect. BioCADS Trigger Advanced...Weights Beam forming Signal Processing mapped to GPU architecture Vector Processor STAP (STAP-BOY) GaN High Frequency Transistor (WBG-RF) UV Laser...Service anti- counterfeiting • Embedded security strips Technology Limitations and Barriers • Training and cost (training intensive) Land Borders North Land

  10. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers, based on a small number of powerful vector processors, and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  11. A compositional reservoir simulator on distributed memory parallel computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rame, M.; Delshad, M.

    1995-12-31

    This paper presents the application of distributed memory parallel computers to field-scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes porting to new parallel platforms straightforward. Results of the distributed memory computing performance of the parallel simulator are presented for field-scale applications such as tracer floods and polymer floods. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.
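    The set-up routine described above — an even data layout plus extension of each subdomain so adjacent processors can share data for stencil computation — can be sketched in one dimension (a toy, not UTCHEM's routine; one ghost cell per interior boundary is an assumption):

```python
def decompose(cells, nproc):
    """Split a 1-D grid into near-equal subdomains and extend each
    with one ghost cell per interior boundary, so neighbors have the
    data needed for stencil computation at subdomain edges."""
    n = len(cells)
    base, extra = divmod(n, nproc)
    subdomains, start = [], 0
    for p in range(nproc):
        size = base + (1 if p < extra else 0)
        lo = max(start - 1, 0)           # left ghost cell (if interior)
        hi = min(start + size + 1, n)    # right ghost cell (if interior)
        subdomains.append(cells[lo:hi])
        start += size
    return subdomains

# 10 cells over 3 processors: sizes 4/3/3, plus shared ghost cells.
parts = decompose(list(range(10)), 3)
```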

  12. For the Common Defense of Cyberspace: Implications of a US Cyber Militia on Department of Defense Cyber Operations

    DTIC Science & Technology

    2015-06-12

    the Common Defense of Cyberspace: Implications of a US Cyber Militia on Department of Defense Cyber Operations 5a. CONTRACT NUMBER 5b. GRANT ...20130423/ NEWS/304230016/Navy-wants-1-000-more-cyber-warriors. 33 Edward Cardon , “Army Cyber Capabilities” (Lecture, Advanced Operations Course...Finally, once a cyber security professional is trained, many argue, to include the head of Army’s Cyber Command, Lieutenant General Edward Cardon

  13. Lessons Learned in Over a Decade of Technical Support for U.S. Nuclear Cyber Security Programmes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glantz, Clifford S.; Landine, Guy P.; Craig, Philip A.

    Pacific Northwest National Laboratory's (PNNL) nuclear cyber security team has been providing technical support to the U.S. Nuclear Regulatory Commission (NRC) since 2002. This team has provided cyber security technical expertise in conducting cyber security inspections, developing regulatory rules and guidance, reviewing facility cyber security plans, developing inspection guidance, and training NRC inspectors to conduct cyber security assessments. The extensive experience the PNNL team has gathered has allowed them to compile a lengthy list of recommendations on how to improve cyber security programs and conduct assessments. A selected set of recommendations is presented, including the need to: integrate an array of defensive strategies into a facility's cyber security program, coordinate physical and cyber security activities, train physical security forces to resist a cyber-enabled physical attack, improve estimates of the consequences of a cyber attack, properly resource cyber security assessments, appropriately account for insider threats, routinely monitor security devices for potential attacks, supplement compliance-based requirements with risk-based decision making, and introduce the concept of resilience into cyber security programs.

  14. Behavior-based network management: a unique model-based approach to implementing cyber superiority

    NASA Astrophysics Data System (ADS)

    Seng, Jocelyn M.

    2016-05-01

    Behavior-Based Network Management (BBNM) is a technological and strategic approach to mastering the identification and assessment of network behavior, whether human-driven or machine-generated. Recognizing that all five U.S. Air Force (USAF) mission areas rely on the cyber domain to support, enhance and execute their tasks, BBNM is designed to elevate awareness and improve the ability to better understand the degree of reliance placed upon a digital capability and the operational risk. Thus, the objective of BBNM is to provide a holistic view of the digital battle space to better assess the effects of security, monitoring, provisioning, utilization management, allocation to support mission sustainment and change control. Leveraging advances in conceptual modeling made possible by a novel advancement in software design and implementation known as Vector Relational Data Modeling (VRDM™), the BBNM approach entails creating a network simulation in which meaning can be inferred and used to manage network behavior according to policy, such as quickly detecting and countering malicious behavior. Initial research configurations have yielded executable BBNM models as combinations of conceptualized behavior within a network management simulation that includes only concepts of threats and definitions of "good" behavior. A proof-of-concept assessment called "Lab Rat" was designed to demonstrate the simplicity of network modeling and the ability to perform adaptation. The model was tested on real world threat data and demonstrated adaptive and inferential learning behavior. Preliminary results indicate this is a viable approach towards achieving cyber superiority in today's volatile, uncertain, complex and ambiguous (VUCA) environment.

  15. Evaluation of the Xeon phi processor as a technology for the acceleration of real-time control in high-order adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah; Vick, Andy; Schnetler, Hermine

    2014-08-01

    We present wavefront reconstruction acceleration of high-order AO systems using an Intel Xeon Phi processor. The Xeon Phi is a coprocessor providing many integrated cores and designed for accelerating compute-intensive numerical codes. Unlike other accelerator technologies, it allows virtually unchanged C/C++ to be recompiled to run on the Xeon Phi, giving the potential of making development, upgrade and maintenance faster and less complex. We benchmark the Xeon Phi in the context of AO real-time control by running a matrix vector multiply (MVM) algorithm. We investigate variability in execution time and demonstrate a substantial speed-up in loop frequency. We examine the integration of a Xeon Phi into an existing RTC system and show that performance improvements can be achieved with limited development effort.
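    The MVM benchmark at the heart of this evaluation is simple to state: actuator commands are the product of a precomputed control matrix with the measured slope vector. A toy sketch follows (sizes are illustrative; real high-order systems multiply matrices with on the order of 10^4 rows at kHz frame rates, which is the workload being accelerated):

```python
def reconstruct(control_matrix, slopes):
    """AO-style wavefront reconstruction as a matrix-vector multiply:
    each actuator command is the dot product of one row of the
    (precomputed) control matrix with the measured slope vector."""
    return [sum(m * s for m, s in zip(row, slopes))
            for row in control_matrix]

# Invented 2x2 control matrix and slope vector, for illustration only.
commands = reconstruct([[1.0, 0.0], [0.5, 0.5]], [2.0, 4.0])
```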

  16. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare them with Ivy Bridge using several low-level benchmarks, including a subset of HPCC and HPCG, and four full-scale scientific and engineering applications. We also present a model that predicts the performance of HPCG and Cart3D to within 5%, and of Overflow to within 10%.

  17. Cyber Warfare: China’s Strategy to Dominate in Cyber Space

    DTIC Science & Technology

    2011-06-10

    CYBER WARFARE : CHINA‘S STRATEGY TO DOMINATE IN CYBER SPACE A thesis presented to the Faculty of the U.S. Army Command and...warfare supports the use of cyber warfare in future conflict. The IW militia unit organization provides each Chinese military region commander with...China, Strategy, Cyber Warfare , Cyber Space, Information Warfare, Electronic Warfare 16. SECURITY CLASSIFICATION OF: 17. LIMITATION OF ABSTRACT 18

  18. U.S. Command Relationships in the Conduct of Cyber Warfare: Establishment, Exercise, and Institutionalization of Cyber Coordinating Authority

    DTIC Science & Technology

    2010-05-03

    FINAL 3. DATES COVERED (From - To) 4. TITLE AND SUBTITLE U.S. Command Relationships in the Conduct of Cyber Warfare : Establishment...U.S. Command Relationships in the Conduct of Cyber Warfare : Establishment, Exercise, and Institutionalization of Cyber Coordinating Authority...Relationships in the Conduct of Cyber Warfare : Establishment, Exercise, and Institutionalization of Cyber Coordinating Authority The character of

  19. Cyber-Terrorism and Cyber-Crime: There Is a Difference

    DTIC Science & Technology

    The terms cyber-terrorism and cyber-crime have many varying definitions depending on who is defining them. For example, individuals with expertise in...considerations and, when investigating a cyber-attack, procedural considerations. By examining the strengths and weaknesses of several definitions offered by...national security, law enforcement, industry, law, and scholars, this research constructs a list of parameters to consider when formulating definitions for cyber-terrorism and cyber-crime.

  20. Self-Development for Cyber Warriors

    DTIC Science & Technology

    2011-11-10

    Aggressive self-development is a critical task for the cyber warfare professional. No matter the quality, formal training and education programs age...Books and Science Fiction); Technology and Cyber-Related Magazines and Blogs; Specific Cyber Warfare Journal and Magazine Articles; Key Documents on...the strengths and weaknesses of the major donor career fields to the cyber workforce, and a Self-Assessment of Cyber Domain Expertise for readers who wish to assess their own cyber warfare expertise.

  1. Impact of Alleged Russian Cyber Attacks

    DTIC Science & Technology

    2009-05-01

    security. 15. SUBJECT TERMS Cyber Security, Cyber Warfare , Estonia, Georgia, Russian Federation Cyber Strategy, Convention on Cybercrime, NATO Center...Federation ......................................................................................... 33  X.  The Future of Russian Cyber Warfare ................................................................... 39...Issue 15.09); Binoy Kampmark, Cyber Warfare Between Estonia And Russia, (Contemporary Review: Autumn, 2003), p 288-293; Jaak Aaviksoo, Address by the

  2. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.

  3. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE PAGES

    Romano, Paul K.; Siegel, Andrew R.

    2017-07-01

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.
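    A toy version of such an efficiency model (not the authors' model; the fixed survival probability and the draining rule below are assumptions) shows why the particle bank must be much larger than the vector width: once the surviving bank shrinks toward the vector width, partially filled SIMD groups waste lanes.

```python
import math

def vector_efficiency(bank_size, vector_width, survival=0.9):
    """Toy model of the event-based algorithm: each event-iteration
    processes the surviving bank in SIMD groups of `vector_width`,
    and the last, partially filled group wastes lanes. Particles
    survive an event with probability `survival`, so the bank drains.
    Efficiency = useful lane-operations / total lane-operations."""
    useful = wasted = 0.0
    bank = float(bank_size)
    while bank >= 1.0:
        groups = math.ceil(bank / vector_width)
        useful += bank
        wasted += groups * vector_width - bank
        bank *= survival  # bank shrinks as histories terminate
    return useful / (useful + wasted)

# A bank much larger than the vector width keeps the lanes full longer.
small = vector_efficiency(bank_size=64, vector_width=16)
large = vector_efficiency(bank_size=64 * 20, vector_width=16)
```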

  4. RASSP signal processing architectures

    NASA Astrophysics Data System (ADS)

    Shirley, Fred; Bassett, Bob; Letellier, J. P.

    1995-06-01

    The rapid prototyping of application specific signal processors (RASSP) program is an ARPA/tri-service effort to dramatically improve the process by which complex digital systems, particularly embedded signal processors, are specified, designed, documented, manufactured, and supported. The domain of embedded signal processing was chosen because it is important to a variety of military and commercial applications as well as for the challenge it presents in terms of complexity and performance demands. The principal effort is being performed by two major contractors, Lockheed Sanders (Nashua, NH) and Martin Marietta (Camden, NJ). For both, improvements in methodology are to be exercised and refined through the performance of individual 'Demonstration' efforts. The Lockheed Sanders' Demonstration effort is to develop an infrared search and track (IRST) processor. In addition, both contractors' results are being measured by a series of externally administered (by Lincoln Labs) six-month Benchmark programs that measure process improvement as a function of time. The first two Benchmark programs are designing and implementing a synthetic aperture radar (SAR) processor. Our demonstration team is using commercially available VME modules from Mercury Computer to assemble a multiprocessor system scalable from one to hundreds of Intel i860 microprocessors. Custom modules for the sensor interface and display driver are also being developed. This system implements either proprietary or Navy owned algorithms to perform the compute-intensive IRST function in real time in an avionics environment. Our Benchmark team is designing custom modules using commercially available processor chip sets, communication submodules, and reconfigurable logic devices. One of the modules contains multiple vector processors optimized for fast Fourier transform processing.
Another module is a fiber-optic interface that accepts high-rate input data from the sensors and provides video-rate output data to a display. This paper discusses the impact of simulation on choosing signal processing algorithms and architectures, drawing from the experiences of the Demonstration and Benchmark inter-company teams at Lockheed Sanders, Motorola, Hughes, and ISX.

  5. Using R in Taverna: RShell v1.2

    PubMed Central

    Wassink, Ingo; Rauwerda, Han; Neerincx, Pieter BT; Vet, Paul E van der; Breit, Timo M; Leunissen, Jack AM; Nijholt, Anton

    2009-01-01

    Background: R is the statistical language commonly used by many life scientists in (omics) data analysis. At the same time, these complex analyses benefit from a workflow approach, such as used by the open source workflow management system Taverna. However, Taverna had limited support for R, because it supported just a few data types and only a single output. Also, there was no support for graphical output and persistent sessions. Altogether this made using R in Taverna impractical.
    Findings: We have developed an R plugin for Taverna: RShell, which provides R functionality within workflows designed in Taverna. In order to fully support the R language, our RShell plugin directly uses the R interpreter. The RShell plugin consists of a Taverna processor for R scripts and an RShell Session Manager that communicates with the R server. We made the RShell processor highly configurable, allowing the user to define multiple inputs and outputs. Also, various data types are supported, such as strings, numeric data and images. To limit data transport between multiple RShell processors, the RShell plugin also supports persistent sessions. Here, we will describe the architecture of RShell and the new features that are introduced in version 1.2, i.e.: i) Support for R up to and including R version 2.9; ii) Support for persistent sessions to limit data transfer; iii) Support for vector graphics output through PDF; iv) Syntax highlighting of the R code; v) Improved usability through fewer port types. Our new RShell processor is backwards compatible with workflows that use older versions of the RShell processor. We demonstrate the value of the RShell processor by a use-case workflow that maps oligonucleotide probes designed with DNA sequence information from Vega onto the Ensembl genome assembly.
    Conclusion: Our RShell plugin enables Taverna users to employ R scripts within their workflows in a highly configurable way. PMID:19607662

  6. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless test compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
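    The vector-quantization ingredient named above can be illustrated with a toy quantizer; this is an illustrative sketch, not the MPP implementation, and the block size, codebook size, and seeding strategy are arbitrary assumptions.

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each image block to the index of its nearest codeword (L2 distance)."""
    # distances: (n_blocks, n_codewords)
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct blocks from codeword indices (lossy)."""
    return codebook[indices]

# Toy example: 2x2 image blocks flattened to length-4 vectors.
rng = np.random.default_rng(0)
blocks = rng.random((100, 4))
codebook = blocks[rng.choice(100, size=8, replace=False)]  # seed codebook from the data

idx = vq_encode(blocks, codebook)
approx = vq_decode(idx, codebook)
# Each block is replaced by one of 8 codewords, i.e. 3 bits per block instead of 4 floats.
```

    In a parallel setting, the distance computation for every (block, codeword) pair is independent, which is what makes the scheme attractive for a massively parallel machine.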

  7. Reconfiguration Schemes for Fault-Tolerant Processor Arrays

    DTIC Science & Technology

    1992-10-15

    The notion of linear schedule is easily related to similar models and concepts used in [1]-[13]... and several other works. Computations are indexed by a partially ordered subset of a multidimensional integer lattice (called the index set). The points of this lattice correspond to (i.e., are the indices of) computations, and the... of all computations of the algorithm is to be minimized. There are... These data dependencies are represented as vectors that connect points of the lattice. If a

  8. Security Informatics Research Challenges for Mitigating Cyber Friendly Fire

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carroll, Thomas E.; Greitzer, Frank L.; Roberts, Adam D.

    This paper addresses cognitive implications and research needs surrounding the problem of cyber friendly fire (FF). We define cyber FF as intentional offensive or defensive cyber/electronic actions intended to protect cyber systems against enemy forces or to attack enemy cyber systems, which unintentionally harm the mission effectiveness of friendly or neutral forces. We describe examples of cyber FF and discuss how it fits within a general conceptual framework for cyber security failures. Because it involves human failure, cyber FF may be considered to belong to a sub-class of cyber security failures characterized as unintentional insider threats. Cyber FF is closely related to combat friendly fire in that maintaining situation awareness (SA) is paramount to avoiding unintended consequences. Cyber SA concerns knowledge of a system's topology (connectedness and relationships of the nodes in a system), and critical knowledge elements such as the characteristics and vulnerabilities of the components that comprise the system and its nodes, the nature of the activities or work performed, and the available defensive and offensive countermeasures that may be applied to thwart network attacks. We describe a test bed designed to support empirical research on factors affecting cyber FF. Finally, we discuss mitigation strategies to combat cyber FF, including both training concepts and suggestions for decision aids and visualization approaches.

  9. Electromagnetic physics models for parallel computing architectures

    DOE PAGES

    Amadio, G.; Ananya, A.; Apostolakis, J.; ...

    2016-11-21

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and the multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.

  10. Code Samples Used for Complexity and Control

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  11. 78 FR 9951 - Excepted Service

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-12

    ...) Not to exceed 3000 positions that require unique cyber security skills and knowledge to perform cyber..., distributed control systems security, cyber incident response, cyber exercise facilitation and management, cyber vulnerability detection and assessment, network and systems engineering, enterprise architecture...

  12. Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; VanderWijngaart, Rob

    2003-01-01

    The growing gap between sustained and peak performance for scientific applications has become a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to bridge this gap for a significant number of computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines a full spectrum of low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks using some simple optimizations. Finally, we evaluate the performance of several numerical codes from key scientific computing domains. Overall results demonstrate that the SX6 achieves high performance on a large fraction of our application suite and in many cases significantly outperforms the RISC-based architectures. However, certain classes of applications are not easily amenable to vectorization and would likely require extensive reengineering of both algorithm and implementation to utilize the SX6 effectively.
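    The scalar-versus-vector contrast that such microbenchmarks measure can be mimicked in plain Python versus NumPy; the daxpy kernel and problem size below are arbitrary illustrative choices, not the paper's benchmark suite.

```python
import time
import numpy as np

def scalar_daxpy(a, x, y):
    """Element-by-element loop: models scalar (non-vectorized) execution."""
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def vector_daxpy(a, x, y):
    """Whole-array operation: models vector/SIMD execution."""
    return a * x + y

n = 100_000
x = np.arange(n, dtype=float)
y = np.ones(n)

t0 = time.perf_counter(); s = scalar_daxpy(2.0, x, y); t_scalar = time.perf_counter() - t0
t0 = time.perf_counter(); v = vector_daxpy(2.0, x, y); t_vector = time.perf_counter() - t0
# Same arithmetic, very different cost per element: the whole-array form amortizes
# per-operation overhead, analogous to the vector-unit advantage on long vectors.
```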

  13. Binarized cross-approximate entropy in crowdsensing environment.

    PubMed

    Skoric, Tamara; Mohamoud, Omer; Milovanovic, Branislav; Japundzic-Zigon, Nina; Bajic, Dragana

    2017-01-01

    Personalised monitoring in health applications has been recognised as part of the mobile crowdsensing concept, where subjects equipped with sensors extract information and share it for personal or common benefit. Limited transmission resources impose the use of a local analysis methodology, but this approach is incompatible with analytical tools that require stationary and artefact-free data. This paper proposes a computationally efficient binarised cross-approximate entropy, referred to as (X)BinEn, for unsupervised cardiovascular signal processing in environments where energy and processor resources are limited. The proposed method is a descendant of the cross-approximate entropy ((X)ApEn). It operates on binary, differentially encoded data series split into m-sized vectors. The Hamming distance is used as a distance measure, while a search for similarities is performed on the vector sets. The procedure is tested on rats under shaker and restraint stress, and compared to the existing (X)ApEn results. The number of processing operations is reduced. (X)BinEn captures entropy changes in a similar manner to (X)ApEn. The coding coarseness yields an adverse effect of reduced sensitivity, but it attenuates parameter inconsistency and binary bias. A special case of (X)BinEn is equivalent to Shannon's entropy. A binary conditional entropy for m = 1 vectors is embedded into the (X)BinEn procedure. (X)BinEn can be applied to a single time series as an auto-entropy method, or to a pair of time series as a cross-entropy method. Its low processing requirements make it suitable for mobile, battery-operated, self-attached sensing devices with limited power and processor resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
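    A minimal sketch of the ingredients described above (differential binary encoding, m-sized vectors, Hamming-distance matching) might look as follows; the entropy estimator here is a simplified stand-in for illustration, not the published (X)BinEn definition.

```python
import numpy as np

def binarize_diff(x):
    """Differential binary encoding: 1 where the series increases, else 0."""
    return (np.diff(x) > 0).astype(np.uint8)

def bin_entropy(x, m=2):
    """Toy auto-entropy on binarized increments, in the spirit of (X)BinEn:
    count exact matches (Hamming distance 0) of m-length binary templates."""
    b = binarize_diff(x)
    n = len(b) - m + 1
    templates = np.array([b[i:i + m] for i in range(n)])
    # Hamming distance between every pair of m-bit templates
    dist = (templates[:, None, :] != templates[None, :, :]).sum(axis=2)
    match_frac = (dist == 0).mean(axis=1)   # fraction of exact matches per template
    return float(-np.log(match_frac).mean())

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 20, 200))   # predictable signal: long runs of 0s/1s
noisy = rng.random(200)                     # irregular signal: templates nearly uniform
e_regular = bin_entropy(regular)
e_noisy = bin_entropy(noisy)
# The irregular series scores higher than the strongly regular one.
```

    Because the data are reduced to bits, the template comparison is a Hamming distance over short binary words, which is exactly the kind of operation that stays cheap on a low-power processor.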

  14. An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes

    NASA Astrophysics Data System (ADS)

    Vincenti, H.; Lobet, M.; Lehe, R.; Sasanka, R.; Vay, J.-L.

    2017-01-01

    In current computer architectures, data movement (from die to network) is by far the most energy-consuming part of an algorithm (≈ 20 pJ/word on-die to ≈ 10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers tend to use many-core processors on each compute node that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines that are, along with the field gathering routines, among the most time consuming parts of the PIC algorithm. Our new algorithm uses a particular data structure that takes into account memory alignment constraints and avoids gather/scatter instructions that can significantly affect vectorization performance on current CPUs. The new algorithm was successfully implemented in the 3D skeleton PIC code PICSAR and tested on Haswell Xeon processors (AVX2, 256-bit wide data registers). Results show a factor of ×2 to ×2.5 speed-up in double precision for particle shape factors of orders 1-3. The new algorithm can be applied as is on future KNL (Knights Landing) architectures that will include AVX-512 instruction sets with 512-bit register lengths (8 doubles/16 singles).
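    The idea of restructuring deposition so that indirect, conflicting writes disappear can be sketched in NumPy; the lane-private-copy scheme below is a simplified analogue of conflict avoidance, not the PICSAR data structure itself.

```python
import numpy as np

def deposit_scatter(charges, cells, ngrid):
    """Naive deposition: indirect writes (scatter), hard to vectorize because
    two particles in the same SIMD batch may target the same cell."""
    rho = np.zeros(ngrid)
    for q, c in zip(charges, cells):
        rho[c] += q
    return rho

def deposit_private_copies(charges, cells, ngrid, nlanes=8):
    """Vectorization-friendly variant (a sketch): each of `nlanes` lanes
    accumulates into a private copy of the grid, so lanes never write to the
    same memory; the copies are reduced at the end."""
    priv = np.zeros((nlanes, ngrid))
    for lane in range(nlanes):
        q = charges[lane::nlanes]
        c = cells[lane::nlanes]
        np.add.at(priv[lane], c, q)   # per-lane accumulation, duplicates handled
    return priv.sum(axis=0)           # reduction over private copies

rng = np.random.default_rng(2)
charges = rng.random(1000)
cells = rng.integers(0, 64, size=1000)
a = deposit_scatter(charges, cells, 64)
b = deposit_private_copies(charges, cells, 64)
# Both layouts produce the same charge density; only the write pattern differs.
```

    The trade-off is extra memory (one grid copy per lane) and a final reduction, exchanged for the removal of write conflicts inside a SIMD batch.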

  15. Cyber Power in the 21st Century

    DTIC Science & Technology

    2008-12-01

    Cyber Warfare ... 86; V. Conclusions and Recommendations ... 40; 2 – Asymmetric Effects of Cyber Warfare ... 41; 1 CYBER POWER... cyber warfare capabilities with other elements of national power, as evidenced by the concept of “informationization” (xinxihua) put forward in

  16. Psychological Needs as a Predictor of Cyber Bullying: A Preliminary Report on College Students

    ERIC Educational Resources Information Center

    Dilmac, Bulent

    2009-01-01

    Recent surveys show that cyber bullying is a pervasive problem in North America. Many news stories have reported cyber bullying incidents around the world. Reports on the prevalence of cyber bullying and victimization as a result of cyber bullying increase yearly. Although we know what cyber bullying is, it is important that we learn more about the…

  17. Problematic alcohol use as a risk factor for cyber aggression within romantic relationships.

    PubMed

    Crane, Cory A; Umehira, Nicole; Berbary, Cassandra; Easton, Caroline J

    2018-06-06

    Cyber aggression has emerged as a modern form of intimate partner violence which has yet to undergo sufficient research necessary to identify risk factors that may increase the likelihood or severity of cyber aggressive behavior toward a relationship partner. Prior research offers contradictory findings pertaining to the relationship between problematic alcohol use and cyber aggression. We recruited 100 (40 female) adult participants through online crowdsourcing to complete a series of questionnaires assessing traditional partner violence, cyber aggression, and problematic alcohol use. Forty-two percent of the sample reported perpetrating cyber relational aggression and 35% reported perpetrating cyber privacy invasion during the year prior to study participation. Traditional partner violence was associated with both forms of cyber aggression. Problematic alcohol use was only associated with privacy invasion after accounting for demographic factors and traditional partner violence. Cyber aggression was prevalent among the current adult sample. Results suggest that problematic alcohol use is a risk factor for cyber privacy invasion but not cyber relational aggression. Findings add to and clarify the nascent, conflicting results that have emerged from prior research on alcohol-related cyber aggression. (Am J Addict 2018;XX:1-7). © 2018 American Academy of Addiction Psychiatry.

  18. The cyber threat, trophy information and the fortress mentality.

    PubMed

    Scully, Tim

    2011-10-01

    'It won't happen to me' is a prevalent mindset among senior executives in the private and public sectors when considering targeted cyber intrusions. This is exacerbated by the long-term adoption of a 'fortress mentality' towards cyber security, and by the attitude of many of our cyber-security professionals, who speak a different language when it comes to communicating cyber-security events to senior executives. The prevailing approaches to cyber security have clearly failed. Almost every week another serious, targeted cyber intrusion is reported, but reported intrusions are only the tip of the iceberg. Why have we got it so wrong? It must be acknowledged that cyber security is no longer the domain of cyber-security experts alone. Many more of us at various levels of leadership must understand, and be more deeply engaged in, the cyber-security challenge if we are to deal with the threat holistically and effectively. Governments cannot combat the cyber threat alone, particularly the so-called advanced persistent threat; they must work closely with industry as trusted partners. Industry will be the 'boots on the ground' in cyber security, but there are challenges to building this relationship, which must be based on sound principles.

  19. The association between cyber victimization and subsequent cyber aggression: the moderating effect of peer rejection.

    PubMed

    Wright, Michelle F; Li, Yan

    2013-05-01

    Adolescents experience various forms of strain in their lives that may contribute jointly to their engagement in cyber aggression. However, little attention has been given to this idea. To address this gap in the literature, the present longitudinal study examined the moderating influence of peer rejection on the relationship between cyber victimization at Time 1 (T1) and subsequent cyber aggression at Time 2 (T2; 6 months later) among 261 (150 girls) 6th, 7th, and 8th graders. Our findings indicated that both peer rejection and cyber victimization were related to T2 peer-nominated and self-reported cyber aggression, both relational and verbal, after controlling for gender and T1 cyber aggression. Furthermore, T1 cyber victimization was related more strongly to T2 peer-nominated and self-reported cyber aggression at higher levels of T1 peer rejection. These results extend previous findings regarding the relationship between peer rejection and face-to-face aggressive behaviors to the cyber context. In addition, our findings underscore the importance of utilizing multiple methods, such as peer-nomination and self-report, to assess cyber aggression in a school setting.

  20. The Temporal Association Between Traditional and Cyber Dating Abuse Among Adolescents.

    PubMed

    Temple, Jeff R; Choi, Hye Jeong; Brem, Meagan; Wolford-Clevenger, Caitlin; Stuart, Gregory L; Peskin, Melissa Fleschler; Elmquist, JoAnna

    2016-02-01

    While research has explored adolescents' use of technology to perpetrate dating violence, little is known about how traditional in-person and cyber abuse are linked, and no studies have examined their relationship over time. Using our sample of 780 diverse adolescents (58 % female), we found that traditional and cyber abuse were positively associated, and cyber abuse perpetration and victimization were correlated at each time point. Cyber abuse perpetration in the previous year (spring 2013) predicted cyber abuse perpetration 1 year later (spring 2014), while controlling for traditional abuse and demographic variables. In addition, physical violence victimization and cyber abuse perpetration and victimization predicted cyber abuse victimization the following year. These findings highlight the reciprocal nature of cyber abuse and suggest that victims may experience abuse in multiple contexts.

  1. The Temporal Association between Traditional and Cyber Dating Abuse among Adolescents

    PubMed Central

    Temple, Jeff R.; Choi, Hye Jeong; Brem, Meagan; Wolford-Clevenger, Caitlin; Stuart, Gregory L.; Peskin, Melissa Fleschler; Elmquist, JoAnna

    2015-01-01

    While research has explored adolescents’ use of technology to perpetrate dating violence, little is known about how traditional in-person and cyber abuse are linked, and no studies have examined their relationship over time. Using our sample of 780 diverse adolescents (58% female), we found that traditional and cyber abuse were positively associated, and cyber abuse perpetration and victimization were correlated at each time point. Cyber abuse perpetration in the previous year (spring 2013) predicted cyber abuse perpetration one year later (spring 2014), while controlling for traditional abuse and demographic variables. In addition, physical violence victimization and cyber abuse perpetration and victimization predicted cyber abuse victimization the following year. These findings highlight the reciprocal nature of cyber abuse and suggest that victims may experience abuse in multiple contexts. PMID:26525389

  2. Human Capital Development - Resilient Cyber Physical Systems

    DTIC Science & Technology

    2017-09-29

    Human Capital Development – Resilient Cyber Physical Systems. Technical Report SERC-2017-TR-113, September 29, 2017. Principal Investigator... 4.2.2 Cyber Attack Taxonomy for Cyber Physical Systems ... 43; 4.2.3 Cyber-physical System Attack Taxonomy ... 44; 4.2.4

  3. Offensive Cyber Capability: Can it Reduce Cyberterrorism

    DTIC Science & Technology

    2010-12-02

    33 Lech J. Janczewski, and Andrew M. Colarik, eds., Cyber Warfare and Cyber Terrorism (New York: Information Science Reference, 2008...Science and Business Media, 2008. Janczewski, Lech , J. and Andrew M. Colarik, eds., Cyber Warfare and Cyber Terrorism. New York: Information Science

  4. Traditional and cyber aggressors and victims: a comparison of psychosocial characteristics.

    PubMed

    Sontag, Lisa M; Clemans, Katherine H; Graber, Julia A; Lyndon, Sarah T

    2011-04-01

    To date, relatively little is known about differences between perpetrators and victims of cyber and traditional forms of aggression. Hence, this study investigated differences among traditional and cyber aggressors and victims on psychosocial characteristics typically examined in research on traditional aggression and victimization, specifically effortful control, manipulativeness, remorselessness, proactive and reactive aggression, and anxious/depressive symptoms. Participants (N = 300; 63.2% female; M age = 12.89, SD = .95; 52% Caucasian, 27% African American, 11% Latino, and 10% other) were categorized based on aggressor type (non/low aggressor, traditional-only, cyber-only, and combined traditional and cyber) and victim type (non-victim, traditional-only, cyber-only, and combined traditional and cyber). Cyber aggressors reported lower levels of reactive aggression compared to traditional-only and combined aggressors. Combined aggressors demonstrated the poorest psychosocial profile compared to all other aggressor groups. For victimization, cyber-only and combined victims reported higher levels of reactive aggression and were more likely to be cyber aggressors themselves compared to traditional-only victims and non-victims. Findings suggest that there may be unique aspects about cyber aggression and victimization that warrant further investigation.

  5. Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava

    2017-01-01

    For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as Graphical Processing Units (GPU), ARM CPUs, and Intel MICs. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem at the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward the understanding of these processors and the new developments to port the Kalman filter to NVIDIA GPUs.
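    The Kalman filter cycle at the heart of such track building and fitting can be written compactly; this is a generic textbook predict/update step on a toy 1-D constant-velocity track, not the parallelized code discussed in the paper.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter, the building block
    of Kalman-based track fitting."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy 1-D track: state = [position, velocity], measuring position only.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity propagation
H = np.array([[1.0, 0.0]])               # position measurement
Q = 1e-4 * np.eye(2)                     # small process noise
R = np.array([[0.25]])                   # measurement noise covariance

x, P = np.array([0.0, 0.0]), np.eye(2)
for t in range(1, 21):                   # true track: position = 2*t, velocity = 2
    z = np.array([2.0 * t])
    x, P = kalman_step(x, P, z, F, H, Q, R)
# After 20 hits the state estimate has locked onto the true track.
```

    In track building, many such filters run over independent track candidates, which is what makes the problem a target for SIMD and GPU parallelization.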

  6. Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs

    NASA Astrophysics Data System (ADS)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; Masciovecchio, Mario; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2017-08-01

    For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as Graphical Processing Units (GPU), ARM CPUs, and Intel MICs. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem at the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward the understanding of these processors and the new developments to port the Kalman filter to NVIDIA GPUs.

  7. Cyber Warfare: An Evolution in Warfare not Just War Theory

    DTIC Science & Technology

    2013-04-05

    cyber warfare is greatly debated. While some argue that Just War Theory is irrelevant to cyber warfare, a careful analysis demonstrates that it is a... useful tool for considering the morality of cyber warfare. This paper examines the application of Just War Theory to cyber warfare and contends that... Just War Theory is a useful tool for considering the morality of cyber warfare.

  8. Moving Target Techniques: Cyber Resilience through Randomization, Diversity, and Dynamism

    DTIC Science & Technology

    2017-03-03

    Moving Target Techniques: Cyber Resilience through Randomization, Diversity, and Dynamism Hamed Okhravi and Howard Shrobe Overview: The static...nature of computer systems makes them vulnerable to cyber attacks. Consider a situation where an attacker wants to compromise a remote system running... cyber resilience that attempts to rebalance the cyber landscape is known as cyber moving target (MT) (or just moving target) techniques. Moving target

  9. The economics of data acquisition computers for ST and MST radars

    NASA Technical Reports Server (NTRS)

    Watkins, B. J.

    1983-01-01

    Some low cost options for data acquisition computers for ST (stratosphere, troposphere) and MST (mesosphere, stratosphere, troposphere) radars are presented. The particular equipment discussed reflects choices made by the University of Alaska group, but of course many other options exist. The low cost microprocessor and array processor approach presented here has several advantages because of its modularity. An inexpensive system may be configured for a minimum performance ST radar, whereas a multiprocessor and/or a multiarray processor system may be used for a higher performance MST radar. This modularity is important for a network of radars because the initial cost is minimized while future upgrades will still be possible at minimal expense. This modularity also aids in lowering the cost of software development because system expansions should require few software changes. The functions of the radar computer will be to obtain Doppler spectra in near real time with some minor analysis such as vector wind determination.
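    The "Doppler spectra in near real time" computation mentioned above reduces to segmenting coherent I/Q samples, transforming each segment, and averaging the power spectra. A sketch follows; the signal parameters (Doppler shift, noise level, FFT length) are invented for illustration.

```python
import numpy as np

def doppler_spectrum(iq, nfft=64):
    """Averaged Doppler power spectrum from coherent I/Q samples: segment,
    FFT each segment, and average incoherently to lower the variance."""
    segs = iq[: len(iq) // nfft * nfft].reshape(-1, nfft)
    spec = np.abs(np.fft.fftshift(np.fft.fft(segs, axis=1), axes=1)) ** 2
    return spec.mean(axis=0)

# Simulated echo: a scatterer with Doppler shift 0.125 cycles/sample plus noise.
rng = np.random.default_rng(3)
n = 4096
t = np.arange(n)
iq = (np.exp(2j * np.pi * 0.125 * t)
      + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

spec = doppler_spectrum(iq)
freqs = np.fft.fftshift(np.fft.fftfreq(64))
peak = freqs[spec.argmax()]   # estimated Doppler shift, which maps to radial velocity
```

    Vector winds are then obtained by combining such radial-velocity estimates from several beam directions.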

  10. Cyber attack analysis on cyber-physical systems: Detectability, severity, and attenuation strategy

    NASA Astrophysics Data System (ADS)

    Kwon, Cheolhyeon

    Security of Cyber-Physical Systems (CPS) against malicious cyber attacks is an important yet challenging problem. Since most cyber attacks happen in erratic ways, it is usually intractable to describe and diagnose them systematically. Motivated by such difficulties, this thesis presents a set of theories and algorithms for a cyber-secure architecture of the CPS within the control theoretic perspective. Here, instead of identifying a specific cyber attack model, we are focused on analyzing the system's response during cyber attacks. Firstly, we investigate the detectability of the cyber attacks from the system's behavior under cyber attacks. Specifically, we conduct a study on the vulnerabilities in the CPS's monitoring system against the stealthy cyber attack that is carefully designed to avoid being detected by its detection scheme. After classifying three kinds of cyber attacks according to the attacker's ability to compromise the system, we derive the necessary and sufficient conditions under which such stealthy cyber attacks can be designed to cause the unbounded estimation error while not being detected. Then, the analytical design method of the optimal stealthy cyber attack that maximizes the estimation error is developed. The proposed stealthy cyber attack analysis is demonstrated with illustrative examples on Air Traffic Control (ATC) system and Unmanned Aerial Vehicle (UAV) navigation system applications. Secondly, in an attempt to study the CPSs' vulnerabilities in more detail, we further discuss a methodology to identify potential cyber threats inherent in the given CPSs and quantify the attack severity accordingly. We then develop an analytical algorithm to test the behavior of the CPS under various cyber attack combinations. 
    Compared to a numerical approach, the analytical algorithm enables the prediction of the most effective cyber attack combinations without computing the severity of all possible attack combinations, thereby greatly reducing the computational cost. The proposed algorithm is validated through a linearized longitudinal motion of a UAV example. Finally, we propose an attack attenuation strategy via controller design for CPSs that is robust to various types of cyber attacks. While previous studies have investigated secure control by assuming a specific attack strategy, in this research we propose a hybrid robust control scheme that contains multiple sub-controllers, each matched to a specific type of cyber attack. The system can then adapt to various cyber attacks (including those that are not assumed for sub-controller design) by switching its sub-controllers to achieve the best performance. A method for designing a secure switching logic to counter all possible cyber attacks is proposed, and the system's performance and stability under this logic are verified mathematically. The performance of the proposed control scheme is demonstrated by an example with a hybrid H2/H-infinity controller applied to a UAV.
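    The notion of a stealthy attack that evades a residual-based detection scheme can be illustrated with a toy threshold detector; the 3-sigma test and the bias magnitudes below are assumptions for illustration, not the thesis's constructions.

```python
import numpy as np

def residual_detector(residuals, threshold):
    """Flag time steps whose innovation magnitude exceeds a fixed threshold --
    the kind of detection scheme a stealthy attack is designed to evade."""
    return np.abs(residuals) > threshold

rng = np.random.default_rng(4)
n = 500
nominal = rng.standard_normal(n)     # innovations under no attack, ~N(0, 1)
crude_attack = nominal + 5.0         # large injected bias: easily detected
stealthy_attack = nominal + 0.2      # bias kept inside the detector's threshold

thr = 3.0                            # ~3-sigma test
alarms_crude = residual_detector(crude_attack, thr).mean()
alarms_stealthy = residual_detector(stealthy_attack, thr).mean()
# The stealthy bias almost never trips the alarm, yet its effect on the state
# estimate accumulates over time (0.2 per step here) -- the unbounded-error
# phenomenon the detectability analysis is concerned with.
```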

  11. Simulating cyber warfare and cyber defenses: information value considerations

    NASA Astrophysics Data System (ADS)

    Stytz, Martin R.; Banks, Sheila B.

    2011-06-01

    Simulating cyber warfare is critical to the preparation of decision-makers for the challenges posed by cyber attacks. Simulation is the only means we have to prepare decision-makers for the inevitable cyber attacks upon the information they will need for decision-making and to develop cyber warfare strategies and tactics. Currently, there is no theory regarding the strategies that should be used to achieve objectives in offensive or defensive cyber warfare, and cyber warfare occurs too rarely to use real-world experience to develop effective strategies. To simulate cyber warfare by affecting the information used for decision-making, we modify the information content of the rings that are compromised in a decision-making context. The number of rings affected and the value of the information that is altered (i.e., the closeness of the ring to the center) is determined by the expertise of the decision-maker and the learning outcome(s) for the simulation exercise. We determine which information rings are compromised using the probability that the simulated cyber defenses that protect each ring can be compromised. These probabilities are based upon prior cyber attack activity in the simulation exercise as well as similar real-world cyber attacks. To determine which information in a compromised "ring" to alter, the simulation environment maintains a record of the cyber attacks that have succeeded in the simulation environment as well as the decision-making context. These two pieces of information are used to compute an estimate of the likelihood that the cyber attack can alter, destroy, or falsify each piece of information in a compromised ring. The unpredictability of information alteration in our approach adds greater realism to the cyber event.
This paper suggests a new technique that can be used for cyber warfare simulation, the ring approach for modeling context-dependent information value, and our means for considering information value when assigning cyber resources to information protection tasks. The first section of the paper introduces the cyber warfare simulation challenge and the reasons for its importance. The second section contains background information related to our research. The third section contains a discussion of the information ring technique and its use for simulating cyber attacks. The fourth section contains a summary and suggestions for research.
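The ring mechanics described in this abstract can be sketched as a small simulation. This is an illustrative toy, not the authors' implementation: the ring names, information values, and defense-failure probabilities below are invented for the example. A ring is compromised when its simulated defenses fail, and items inside a compromised ring are altered with a probability scaled by the ring's value.

```python
import random

# Hypothetical ring model: higher-value information sits closer to the
# center; the values and defense-failure probabilities are illustrative only.
rings = [
    {"name": "core",   "value": 1.0, "p_defense_fails": 0.05},
    {"name": "middle", "value": 0.6, "p_defense_fails": 0.20},
    {"name": "outer",  "value": 0.3, "p_defense_fails": 0.50},
]

def simulate_attack(rings, items_per_ring, rng):
    """Return the items altered by one simulated cyber attack.

    A ring is compromised when its defenses fail; within a compromised
    ring, each item is altered with a probability scaled by ring value.
    """
    altered = []
    for ring in rings:
        if rng.random() < ring["p_defense_fails"]:
            for item in range(items_per_ring):
                if rng.random() < 0.5 * ring["value"]:
                    altered.append((ring["name"], item))
    return altered

altered = simulate_attack(rings, items_per_ring=4, rng=random.Random(42))
print(altered)
```

In a fuller version, the defense-failure probabilities would be updated from prior attack activity in the exercise, as the abstract describes.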

  12. Cyber Warfare/Cyber Terrorism

    DTIC Science & Technology

    2004-03-19

Section 1 of this paper provides an overview of cyber warfare as an element of information warfare, starting with the general background of the...alternative form of conflict, reviews the traditional principles of warfare and why they may or may not apply to cyber warfare, and proposes new principles of...warfare that may be needed to conduct cyber warfare. Section 1 concludes with a review of offensive and defensive cyber warfare concepts. Section 2

  13. Refocusing Cyber Warfare Thought

    DTIC Science & Technology

    2013-02-01

January–February 2013 Air & Space Power Journal | 44 Feature: Cyber Focus Refocusing Cyber Warfare Thought Maj Sean C. Butler, USAF In September 2007... characterized by the use of electronics and the

  14. Nodes and Codes: The Reality of Cyber Warfare

    DTIC Science & Technology

    2012-05-17

    Nodes and Codes explores the reality of cyber warfare through the story of Stuxnet, a string of weaponized code that reached through a domain...nodes. Stuxnet served as a proof-of-concept for cyber weapons and provided a comparative laboratory to study the reality of cyber warfare from the...military powers most often associated with advanced, offensive cyber attack capabilities. The reality of cyber warfare holds significant operational

  15. 76 FR 22409 - Nationwide Cyber Security Review (NCSR) Assessment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-21

    ... DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2011-0012] Nationwide Cyber Security Review (NCSR...), National Cyber Security Division (NCSD), Cyber Security Evaluation Program (CSEP), will submit the... for all levels of government to complete a cyber network security assessment so that a full measure of...

  16. Kalman Filter Tracking on Parallel Architectures

    NASA Astrophysics Data System (ADS)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2016-11-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment.
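The core of the vectorization strategy this abstract describes is processing many track candidates at once, so each Kalman update step becomes an array operation over all tracks rather than a per-track loop. The sketch below is illustrative, not the authors' code: each "track" is reduced to a one-dimensional state (position) so the batched update fits in a few lines.

```python
import numpy as np

def batch_kalman_update(x, P, z, R):
    """One Kalman measurement update applied to N tracks simultaneously.

    x, P : state estimates and variances, shape (N,)
    z, R : measurements and measurement variances, shape (N,)
    """
    K = P / (P + R)          # Kalman gain for every track at once
    x_new = x + K * (z - x)  # innovation-weighted correction
    P_new = (1.0 - K) * P    # updated variance
    return x_new, P_new

x = np.zeros(4)              # four tracks, all starting at 0
P = np.full(4, 2.0)
z = np.array([1.0, -1.0, 0.5, 2.0])
R = np.full(4, 2.0)
x_new, P_new = batch_kalman_update(x, P, z, R)
print(x_new)  # each track moves halfway toward its measurement
```

With equal prior and measurement variances the gain is 0.5 for every track, and all four updates execute as single vector operations, which is exactly the property that maps onto wide vector units such as those on Xeon Phi.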

  17. Efficient Iterative Methods Applied to the Solution of Transonic Flows

    NASA Astrophysics Data System (ADS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Chronopoulos, Anthony T.

    1996-02-01

We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems.
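The Newton-iterative structure described here can be sketched in a few lines. This is a minimal illustration, not the paper's solver: the nonlinear system below is invented (not the transonic small disturbance equation), and `np.linalg.solve` stands in for the preconditioned Krylov solver (GMRES/OSOmin with ILU) that would handle the inner linear system in the real method. The exact analytical Jacobian drives each Newton step.

```python
import numpy as np

def f(x):
    # Illustrative nonlinear system with solution x0 = x1 = sqrt(2).
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

def jacobian(x):
    # Exact analytical Jacobian of f.
    return np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

x = np.array([1.0, 1.0])
for _ in range(20):
    r = f(x)
    if np.linalg.norm(r) < 1e-12:
        break
    # Inner linear solve J * dx = -r; a preconditioned Krylov
    # solver would replace this direct solve in the actual method.
    dx = np.linalg.solve(jacobian(x), -r)
    x = x + dx

print(x)  # both components converge to sqrt(2)
```

The "inexact" variant solves the inner system only approximately, trading per-iteration cost against the number of Newton steps.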

  18. A communication-avoiding, hybrid-parallel, rank-revealing orthogonalization method.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoemmen, Mark

    2010-11-01

Orthogonalization consumes much of the run time of many iterative methods for solving sparse linear systems and eigenvalue problems. Commonly used algorithms, such as variants of Gram-Schmidt or Householder QR, have performance dominated by communication. Here, 'communication' includes both data movement between the CPU and memory, and messages between processors in parallel. Our Tall Skinny QR (TSQR) family of algorithms requires asymptotically fewer messages between processors and data movement between CPU and memory than typical orthogonalization methods, yet achieves the same accuracy as Householder QR factorization. Furthermore, in block orthogonalizations, TSQR is faster and more accurate than existing approaches for orthogonalizing the vectors within each block ('normalization'). TSQR's rank-revealing capability also makes it useful for detecting deflation in block iterative methods, for which existing approaches sacrifice performance, accuracy, or both. We have implemented a version of TSQR that exploits both distributed-memory and shared-memory parallelism, and supports real and complex arithmetic. Our implementation is optimized for the case of orthogonalizing a small number (5-20) of very long vectors. The shared-memory parallel component uses Intel's Threading Building Blocks, though its modular design supports other shared-memory programming models as well, including computation on the GPU. Our implementation achieves speedups of 2 times or more over competing orthogonalizations. It is available now in the development branch of the Trilinos software package, and will be included in the 10.8 release.
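The communication-avoiding idea behind TSQR can be sketched with a flat, one-level reduction tree. This is a hedged illustration, not the Trilinos implementation: each row block is factored independently (one per processor), and the small R factors are combined with a second QR. Only the R factor is reconstructed here; the full TSQR also composes the implicit Q factors across the tree.

```python
import numpy as np

def tsqr_r_factor(A, num_blocks):
    """Compute the R factor of tall-skinny A via block-wise QR."""
    blocks = np.array_split(A, num_blocks, axis=0)
    # Local QR on each block; each could run on a separate processor
    # with no communication until the combine step.
    local_rs = [np.linalg.qr(b)[1] for b in blocks]
    # Combine: stack the small R factors and factor once more.
    _, R = np.linalg.qr(np.vstack(local_rs))
    return R

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 5))   # 1000 rows, 5 columns: tall and skinny
R = tsqr_r_factor(A, num_blocks=4)
# R agrees with the direct QR's R up to the sign of each row.
R_direct = np.linalg.qr(A)[1]
print(np.allclose(np.abs(R), np.abs(R_direct)))
```

The combine step exchanges only the small n-by-n R factors (here 5x5) rather than the long vectors themselves, which is where the asymptotic savings in messages and data movement come from.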

  19. Chinese National Strategy of Total War

    DTIC Science & Technology

    2008-06-01

Cyber Warfare Michael J. Good, BA Major, USA June 2008 APPROVED FOR PUBLIC RELEASE; DISTRIBUTION IS...who allowed me into the Cyber Warfare, my fellow students in the Cyber Warfare IDE program who have been great friends and mentors, and my fellow...Hackers and Other Cyber Criminals .............................................................................41 PLA Cyber Warfare

  20. Cyber Victimization and Depressive Symptoms in Sexual Minority College Students

    ERIC Educational Resources Information Center

    Ramsey, Jaimi L.; DiLalla, Lisabeth F.; McCrary, Megan K.

    2016-01-01

    This study investigated the relations between sexual orientation, cyber victimization, and depressive symptoms in college students. Study aims were to determine whether sexual minority college students are at greater risk for cyber victimization and to examine whether recent cyber victimization (self-reported cyber victimization over the last…

  1. Quantifying and measuring cyber resiliency

    NASA Astrophysics Data System (ADS)

    Cybenko, George

    2016-05-01

Cyber resiliency has become an increasingly attractive research and operational concept in cyber security. While several metrics have been proposed for quantifying cyber resiliency, a considerable gap remains between those metrics and operationally measurable and meaningful concepts that can be empirically determined in a scientific manner. This paper describes a concrete notion of cyber resiliency that can be tailored to meet the specific needs of organizations that seek to introduce resiliency into their assessment of their cyber security posture.

  2. The Cyber Defense Review. Volume 1, Number 1, Spring 2016

    DTIC Science & Technology

    2016-04-20

    in the Land and Cyber Domains Lieutenant General Edward C. Cardon The U.S. Navy’s Evolving Cyber/ Cybersecurity Story Rear Admiral Nancy Norton...Olav Lysne Cyber Situational Awareness Maj. Gen. Earl D. Matthews, USAF, Ret Dr. Harold J. Arata III Mr. Brian L. Hale Is There a Cybersecurity ...Kallberg The Decision to Attack: Military and Intelligence Cyber Decision-Making by Dr. Aaron F. Brantly The Cyber Defense Review

  3. Strategic Impact of Cyber Warfare Rules for the United States

    DTIC Science & Technology

    2010-03-01

Despite the growing complexities of cyberspace and the significant strategic challenge cyber warfare poses to the United States' vital interests, few...specific rules for cyber warfare exist. The United States should seek to develop and maintain cyber warfare rules in order to establish...exemplify the need for multilaterally prepared cyber warfare rules that will reduce the negative influence cyber warfare presently has on the United States' national interests.

  4. On Cyber Warfare Command and Control Systems

    DTIC Science & Technology

    2004-06-01

longer adequate to rely solely on the now traditional defense-in-depth strategy. We must recognize that we are engaged in a form of warfare, cyber warfare, and... warfare. This causes security devices to be used ineffectively and responses to be untimely. Cyber warfare then becomes a one-sided battle where the... cyber warfare strategy and tactics requires a cyber warfare command and control system. Responses to cyber attacks do not require offensive measures

  5. Cyber resilience: a review of critical national infrastructure and cyber security protection measures applied in the UK and USA.

    PubMed

    Harrop, Wayne; Matteson, Ashley

This paper presents cyber resilience as a key strand of national security. It establishes the importance of critical national infrastructure protection and the growing vicarious nature of remote, well-planned, and well-executed cyber attacks on critical infrastructures. Examples of well-known historical cyber attacks are presented, and the emergence of the 'internet of things' as a cyber vulnerability issue yet to be tackled is explored. The paper identifies key steps being undertaken by those responsible for detecting, deterring, and disrupting cyber attacks on critical national infrastructure in the United Kingdom and the USA.

  6. Building organisational cyber resilience: A strategic knowledge-based view of cyber security management.

    PubMed

    Ferdinand, Jason

    The concept of cyber resilience has emerged in recent years in response to the recognition that cyber security is more than just risk management. Cyber resilience is the goal of organisations, institutions and governments across the world and yet the emerging literature is somewhat fragmented due to the lack of a common approach to the subject. This limits the possibility of effective collaboration across public, private and governmental actors in their efforts to build and maintain cyber resilience. In response to this limitation, and to calls for a more strategically focused approach, this paper offers a knowledge-based view of cyber security management that explains how an organisation can build, assess, and maintain cyber resilience.

  7. CyberArc: a non-coplanar-arc optimization algorithm for CyberKnife

    NASA Astrophysics Data System (ADS)

    Kearney, Vasant; Cheung, Joey P.; McGuinness, Christopher; Solberg, Timothy D.

    2017-07-01

    The goal of this study is to demonstrate the feasibility of a novel non-coplanar-arc optimization algorithm (CyberArc). This method aims to reduce the delivery time of conventional CyberKnife treatments by allowing for continuous beam delivery. CyberArc uses a 4 step optimization strategy, in which nodes, beams, and collimator sizes are determined, source trajectories are calculated, intermediate radiation models are generated, and final monitor units are calculated, for the continuous radiation source model. The dosimetric results as well as the time reduction factors for CyberArc are presented for 7 prostate and 2 brain cases. The dosimetric quality of the CyberArc plans are evaluated using conformity index, heterogeneity index, local confined normalized-mutual-information, and various clinically relevant dosimetric parameters. The results indicate that the CyberArc algorithm dramatically reduces the treatment time of CyberKnife plans while simultaneously preserving the dosimetric quality of the original plans.

  8. CyberArc: a non-coplanar-arc optimization algorithm for CyberKnife.

    PubMed

    Kearney, Vasant; Cheung, Joey P; McGuinness, Christopher; Solberg, Timothy D

    2017-06-26

    The goal of this study is to demonstrate the feasibility of a novel non-coplanar-arc optimization algorithm (CyberArc). This method aims to reduce the delivery time of conventional CyberKnife treatments by allowing for continuous beam delivery. CyberArc uses a 4 step optimization strategy, in which nodes, beams, and collimator sizes are determined, source trajectories are calculated, intermediate radiation models are generated, and final monitor units are calculated, for the continuous radiation source model. The dosimetric results as well as the time reduction factors for CyberArc are presented for 7 prostate and 2 brain cases. The dosimetric quality of the CyberArc plans are evaluated using conformity index, heterogeneity index, local confined normalized-mutual-information, and various clinically relevant dosimetric parameters. The results indicate that the CyberArc algorithm dramatically reduces the treatment time of CyberKnife plans while simultaneously preserving the dosimetric quality of the original plans.

  9. Cyber Friendly Fire: Research Challenges for Security Informatics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greitzer, Frank L.; Carroll, Thomas E.; Roberts, Adam D.

This paper addresses cognitive implications and research needs surrounding the problem of cyber friendly fire (FF). We define cyber FF as intentional offensive or defensive cyber/electronic actions intended to protect cyber systems against enemy forces or to attack enemy cyber systems, which unintentionally harm the mission effectiveness of friendly or neutral forces. Just as with combat friendly fire, maintaining situation awareness (SA) is paramount to avoiding cyber FF incidents. Cyber SA concerns knowledge of a system's topology (connectedness and relationships of the nodes in a system), and critical knowledge elements such as the characteristics and vulnerabilities of the components that comprise the system and its nodes, the nature of the activities or work performed, and the available defensive and offensive countermeasures that may be applied to thwart network attacks. Mitigation strategies to combat cyber FF, including both training concepts and suggestions for decision aids and visualization approaches, are discussed.

  10. The Cyber War: Maintaining and Controlling the Key Cyber Terrain of the Cyberspace Domain

    DTIC Science & Technology

    2016-06-26

    solution strategy to assess options that will enable the commander to realize the Air Force’s cyber mission. Recommendations will be made that will...will present a solution to assist the JFC in achieving cyberspace dominance. Background In the modern world of advanced technology, control of...the solutions are: 1) timely identification of key cyber terrain, 2) accurate mapping of the cyber terrain, 3) defense of key cyber terrain, and 4

  11. Sources of Occupational Stress and Prevalence of Burnout and Clinical Distress Among U.S. Air Force Cyber Warfare Operators

    DTIC Science & Technology

    2013-01-01

distress within the cyber warfare community. This study involved cyber warfare operators including active duty (n = 376) and civilian contractor and...revealed that when compared to civilian cyber warfare operators, active duty cyber warfare operators are more likely to suffer from the facets of...' write-in responses revealed cyber warfare operators attributed shift work, shift changes, and hours worked as the primary sources of high occupational

  12. International Cyber Incident Repository System: Information Sharing on a Global Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joyce, Amanda L.; Evans, PhD, Nathaniel; Tanzman, Edward A.

According to the 2016 Internet Security Threat Report, the largest number of cyber attacks were recorded last year (2015), reaching a total of 430 million incidents throughout the world. As the number of cyber incidents increases, the need for information and intelligence sharing increases, as well. This fairly large increase in cyber incidents is driving the need for an international cyber incident data reporting system. The goal of the cyber incident reporting system is to make available shared and collected information about cyber events among participating international parties. In its 2014 report on the outcomes of a working session on cyber insurance, Insurance Industry Working Session Readout Report: Insurance for Cyber-Related Critical Infrastructure Loss: Key Issues, the U.S. Department of Homeland Security observed that "many participants cited the need for a secure method through which organizations could pool and share cyber incident information" and noted that one underwriter emphasized the importance of internationally harmonized data taxonomies. This cyber incident data reporting system could benefit all nations that take part in reporting incidents to provide a more common operating picture. In addition, this reporting system could allow for trending and anticipation of attacks and could potentially benefit participating members by enabling them to get in front of potential attacks. The purpose of this paper is to identify options for consideration for such a system in fostering cooperative cyber defense.

  13. Vectorization of a particle code used in the simulation of rarefied hypersonic flow

    NASA Technical Reports Server (NTRS)

    Baganoff, D.

    1990-01-01

A limitation of the direct simulation Monte Carlo (DSMC) method is that it does not allow efficient use of vector architectures that predominate in current supercomputers. Consequently, the problems that can be handled are limited to those of one- and two-dimensional flows. This work focuses on a reformulation of the DSMC method with the objective of designing a procedure that is optimized to the vector architectures found on machines such as the Cray-2. In addition, it focuses on finding a better balance between algorithmic complexity and the total number of particles employed in a simulation so that the overall performance of a particle simulation scheme can be greatly improved. Simulations of the flow about a 3D blunt body are performed with 10^7 particles and 4 x 10^5 mesh cells. Good statistics are obtained with time averaging over 800 time steps using 4.5 h of Cray-2 single-processor CPU time.
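The reformulation idea above can be sketched with a structure-of-arrays particle layout. This is illustrative, not Baganoff's code: storing particle data in large contiguous arrays turns the move and cell-indexing steps into a handful of long vector operations, which is what vector hardware like the Cray-2 executes efficiently (the mesh size and particle count below are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
pos = rng.uniform(0.0, 1.0, size=(n, 3))   # particle positions in a unit box
vel = rng.standard_normal((n, 3))          # particle velocities
dt = 1e-3

# Vectorized move step: one fused array operation over all particles,
# replacing the per-particle loop of a scalar DSMC implementation.
pos += vel * dt

# Vectorized cell indexing for a 50^3 uniform mesh (periodic wrap),
# again a single long vector operation rather than a loop.
cells = np.floor((pos % 1.0) * 50).astype(np.int64)
print(cells.shape)  # → (100000, 3)
```

The harder part of vectorizing DSMC, gathering particles by cell for the collision step, follows the same principle: sort-based index arrays replace per-cell linked lists so the inner loops run at vector speed.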

  14. The Performance of the NAS HSPs in 1st Half of 1994

    NASA Technical Reports Server (NTRS)

    Bergeron, Robert J.; Walter, Howard (Technical Monitor)

    1995-01-01

During the first six months of 1994, the NAS (Numerical Aerodynamic Simulation) 16-CPU Y-MP C90 Von Neumann (VN) delivered an average throughput of 4.045 GFLOPS while the ACSF (Aeronautics Consolidated Supercomputer Facility) 8-CPU Y-MP C90 Eagle averaged 1.658 GFLOPS. The VN rate represents a machine efficiency of 26.3% whereas the Eagle rate corresponds to a machine efficiency of 21.6%. VN displayed a greater efficiency than Eagle primarily because the stronger workload demand for its CPU cycles allowed it to devote more time to user programs and less time to idle. An additional factor increasing VN efficiency was the ability of the UNICOS 8.0 Operating System to deliver a larger fraction of CPU time to user programs. Although measurements indicate increasing vector length for both workloads, insufficient vector lengths continue to hinder HSP (High Speed Processor) performance. To improve HSP performance, NAS should continue to encourage the HSP users to modify their codes to increase program vector length.

  15. Lanczos eigensolution method for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1991-01-01

The theory, computational analysis, and applications are presented of a Lanczos algorithm on high performance computers. The computationally intensive steps of the algorithm are identified as: the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high performance computers. The savings in computational time from applying optimization techniques such as variable band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large scale structural analysis applications are described: the buckling of a composite blade stiffened panel with a cutout, and the vibration analysis of a high speed civil transport. The sequential computational time for the panel problem executed on a CONVEX computer of 181.6 seconds was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem with 17,000 degrees of freedom was on the Cray Y-MP using an average of 3.63 processors.
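The bare Lanczos recurrence at the heart of the method can be sketched as follows. This is a minimal illustration under stated assumptions: full eigensolvers add shifts, factorizations, and reorthogonalization, and the test matrix here is a random symmetric matrix, not a structural stiffness matrix. The matrix-vector product dominating each step is exactly the operation that vectorizes well.

```python
import numpy as np

def lanczos(A, v0, m):
    """Build an m-step symmetric tridiagonal approximation T of A."""
    n = A.shape[0]
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros(n)
    b = 0.0
    for j in range(m):
        w = A @ v - b * v_prev          # matrix-vector multiply dominates
        alpha[j] = v @ w                # diagonal entry of T
        w -= alpha[j] * v
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b                 # off-diagonal entry of T
            v_prev, v = v, w / b
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(3)
M = rng.standard_normal((50, 50))
A = M + M.T                              # symmetric test matrix
T = lanczos(A, rng.standard_normal(50), m=20)
# Extreme eigenvalues of the small T approximate those of the large A.
err = abs(np.linalg.eigvalsh(T)[-1] - np.linalg.eigvalsh(A)[-1])
print(err)
```

The payoff is that eigenvalues of the 20x20 tridiagonal T stand in for the extreme eigenvalues of the full matrix, so the expensive work reduces to a short sequence of vectorizable matrix-vector products.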

  16. Situational Awareness as a Measure of Performance in Cyber Security Collaborative Work

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malviya, Ashish; Fink, Glenn A.; Sego, Landon H.

Cyber defense competitions arising from U.S. service academy exercises offer a platform for collecting data that can inform research that ranges from characterizing the ideal cyber warrior to describing behaviors during certain challenging cyber defense situations. This knowledge in turn could lead to better preparation of cyber defenders in both military and civilian settings. We conducted proof of concept experimentation to collect data during the Pacific-rim Regional Collegiate Cyber Defense Competition (PRCCDC) and analyzed it to study the behavior of cyber defenders. We propose that situational awareness predicts performance of cyber security professionals, and in this paper we focus on our collection and analysis of competition data to determine whether it supports our hypothesis. In addition to normal cyber data, we collected situational awareness and workload data and compared it against the performance of cyber defenders as indicated by their competition score. We conclude that there is a weak correlation between our measure of situational awareness and performance that we hope to exploit in further studies.

  17. FFTs in external or hierarchical memory

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1989-01-01

    A description is given of advanced techniques for computing an ordered FFT on a computer with external or hierarchical memory. These algorithms (1) require as few as two passes through the external data set, (2) use strictly unit stride, long vector transfers between main memory and external storage, (3) require only a modest amount of scratch space in main memory, and (4) are well suited for vector and parallel computation. Performance figures are included for implementations of some of these algorithms on Cray supercomputers. Of interest is the fact that a main memory version outperforms the current Cray library FFT routines on the Cray-2, the Cray X-MP, and the Cray Y-MP systems. Using all eight processors on the Cray Y-MP, this main memory routine runs at nearly 2 Gflops.
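The external-memory FFTs described here build on the classic "four-step" factorization, which the sketch below illustrates (this is the textbook decomposition, not Bailey's exact out-of-core code): treat a length-N sequence as an N1 x N2 array, FFT the columns, multiply by twiddle factors, FFT the rows, and read out transposed. Each pass streams through the data with long unit-stride vector operations, which is what makes the scheme suit both external storage and vector hardware.

```python
import numpy as np

def four_step_fft(x, n1, n2):
    """Length-(n1*n2) FFT via the four-step (matrix) factorization."""
    n = n1 * n2
    assert x.size == n
    a = x.reshape(n1, n2)
    a = np.fft.fft(a, axis=0)                      # step 1: FFT the columns
    j = np.arange(n1)[:, None] * np.arange(n2)[None, :]
    a = a * np.exp(-2j * np.pi * j / n)            # step 2: twiddle factors
    a = np.fft.fft(a, axis=1)                      # step 3: FFT the rows
    return a.T.reshape(n)                          # step 4: transpose out

rng = np.random.default_rng(7)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)
ok = np.allclose(four_step_fft(x, 8, 8), np.fft.fft(x))
print(ok)
```

In the out-of-core setting, each "column pass" and "row pass" becomes one sweep through the external data set, which is how the two-pass property in the abstract arises.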

  18. USAF Cyber Capability Development: A Vision for Future Cyber Warfare & a Concept for Education of Cyberspace Leaders

    DTIC Science & Technology

    2009-04-01

    Significant and interrelated problems are hindering the Air Force’s development of cyber warfare capabilities. The first is a lack of awareness about...why the AF has chosen to take cyber warfare on as a core capability on par with air and space. The second stems from the lack of a commonly...the cyber capabilities needed in the future? The contributions of this research include a strategic vision for future cyber warfare capabilities that

  19. The Spectrum of Cyber Conflict from Hacking to Information Warfare: What is Law Enforcement’s Role?

    DTIC Science & Technology

    2001-04-01

defined as cyber crime or "hacker". Although this category of hacker includes many kinds of cyber criminals, from a DOD perspective, the motivation of a...perpetrator. 6 In his book, "Fighting Computer Crime", Wiley identifies several types of cyber criminals. They range from pranksters who perpetrate...usually financial11 This first group of cyber criminals or "hackers" can be categorized as Unintentional Cyber actors. Although they have a variety of

  20. Feasibility Study on Applying Radiophotoluminescent Glass Dosimeters for CyberKnife SRS Dose Verification

    PubMed Central

    Hsu, Shih-Ming; Hung, Chao-Hsiung; Liao, Yi-Jen; Fu, Hsiao-Mei; Tsai, Jo-Ting

    2017-01-01

CyberKnife is one of multiple modalities for stereotactic radiosurgery (SRS). Due to the nature of CyberKnife and the characteristics of SRS, dose evaluation of the CyberKnife procedure is critical. A radiophotoluminescent glass dosimeter was used to verify the dose accuracy for the CyberKnife procedure and validate a viable dose verification system for CyberKnife treatment. A radiophotoluminescent glass dosimeter, thermoluminescent dosimeter, and Kodak EDR2 film were used to measure the lateral dose profile and percent depth dose of CyberKnife. A Monte Carlo simulation for dose verification was performed using BEAMnrc to verify the measured results. This study also used a radiophotoluminescent glass dosimeter coupled with an anthropomorphic phantom to evaluate the accuracy of the dose given by CyberKnife. Measurements from the radiophotoluminescent glass dosimeter were compared with the results of a thermoluminescent dosimeter and EDR2 film, and the differences found were less than 5%. The radiophotoluminescent glass dosimeter has some advantages for CyberKnife dose measurements, such as repeatability, stability, and small effective size. These advantages make radiophotoluminescent glass dosimeters a potential candidate dosimeter for the CyberKnife procedure. This study concludes that radiophotoluminescent glass dosimeters are a promising and reliable dosimeter for CyberKnife dose verification with clinically acceptable accuracy within 5%. PMID:28046056

  1. Implementation of a new fuzzy vector control of induction motor.

    PubMed

    Rafa, Souad; Larabi, Abdelkader; Barazane, Linda; Manceur, Malik; Essounbouli, Najib; Hamzaoui, Abdelaziz

    2014-05-01

The aim of this paper is to present a new approach to controlling an induction motor using type-1 fuzzy logic. The induction motor has a nonlinear, uncertain, and strongly coupled model. The vector control technique, which is based on the inverse model of the induction motor, solves the coupling problem. Unfortunately, in practice this is not guaranteed because of model uncertainties. Indeed, the presence of these uncertainties led us to draw on human expertise through fuzzy logic techniques. In order to maintain the decoupling and to overcome the problem of sensitivity to parametric variations, the field-oriented control is replaced by a new control block. The simulation results show that both control schemes provide, in their basic configuration, comparable performance regarding the decoupling. However, the fuzzy vector control provides insensitivity to parametric variations compared to the classical one. The fuzzy vector control scheme is successfully implemented in real time using a digital signal processor board dSPACE 1104. The efficiency of this technique is also verified experimentally under different dynamic operating conditions such as sudden load changes, parameter variations, and speed changes. The fuzzy vector control is found to be the best control for application in an induction motor. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
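The type-1 fuzzy inference this abstract relies on can be illustrated with a toy controller. The membership functions, rule set, and output gains below are invented for the example, not the authors' design: triangular memberships on the speed error, three rules, and weighted-average (centroid) defuzzification producing a torque-command correction.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_torque(error):
    """Map a speed error to a torque correction via three fuzzy rules."""
    # Rule firing strengths: error is Negative / Zero / Positive.
    mu = np.array([
        tri(error, -2.0, -1.0, 0.0),   # Negative -> decrease torque
        tri(error, -1.0,  0.0, 1.0),   # Zero     -> hold
        tri(error,  0.0,  1.0, 2.0),   # Positive -> increase torque
    ])
    outputs = np.array([-1.0, 0.0, 1.0])   # singleton rule consequents
    return float(mu @ outputs / mu.sum())  # weighted-average defuzzification

print(fuzzy_torque(0.5))   # → 0.5
```

An error of 0.5 fires the "Zero" and "Positive" rules equally, so the defuzzified output lands halfway between "hold" and "increase". In the paper's scheme such an inference block replaces the model-based inverse that classical field-oriented control depends on, which is what buys insensitivity to parameter variations.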

  2. The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware

    NASA Astrophysics Data System (ADS)

    Kathiara, Jainik

There has been increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores have floating point operations. Due to the complexity and expense of floating point hardware, these algorithms are usually converted to fixed point operations or implemented using floating-point emulation in software. As the technology advances, more and more homogeneous computational resources and fixed function embedded blocks are added to FPGAs and hence implementation of floating point hardware becomes a feasible option. In this research we have implemented a high performance, autonomous floating point vector Coprocessor (FPVC) that works independently within an embedded processor system. We have presented a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The hybrid vector/SIMD computational model of FPVC results in greater overall performance for most applications along with improved peak performance compared to other approaches. By parameterizing vector length and the number of vector lanes, we can design an application-specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also begun designing a software library of computational kernels, each of which adapts FPVC's configuration and provides maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decompositions. We have demonstrated the operation of FPVC on a Xilinx Virtex 5 using the embedded PowerPC.

  3. Protecting Networks Via Automated Defense of Cyber Systems

    DTIC Science & Technology

    2016-09-01

    autonomics, and artificial intelligence. Our conclusion is that automation is the future of cyber defense, and that advances are being made in each of...SUBJECT TERMS Internet of Things, autonomics, sensors, artificial intelligence, cyber defense, active cyber defense, automated indicator sharing...called Automated Defense of Cyber Systems, built upon three core technological components: sensors, autonomics, and artificial intelligence. Our

  4. "Making Kind Cool": Parents' Suggestions for Preventing Cyber Bullying and Fostering Cyber Kindness

    ERIC Educational Resources Information Center

    Cassidy, Wanda; Brown, Karen; Jackson, Margaret

    2012-01-01

    Cyber bullying among youth is rapidly becoming a global phenomenon, as educators, parents and policymakers grapple with trying to curtail this negative and sometimes devastating behavior. Since most cyber bullying emanates from the home computer, parents can play an important role in preventing cyber bullying and in fostering a kinder online…

  5. Cyberbully and Victim Experiences of Pre-Service Teachers

    ERIC Educational Resources Information Center

    Tosun, Nilgün

    2016-01-01

    The aim of this study was to determine the prevalence of different types of cyber bullying, the ways in which cyber bullying occurred, whether the identity of cyber bullies was known, and reactions to being cyber bullied among pre-service teachers. Relationships between gender and likelihood of being a cyber bully/victim were also investigated.…

  6. A meta-analysis of sex differences in cyber-bullying behavior: the moderating role of age.

    PubMed

    Barlett, Christopher; Coyne, Sarah M

    2014-01-01

    The current research used meta-analysis to determine whether (a) sex differences emerged in cyber-bullying frequency, (b) if age moderated any sex effect, and (c) if any additional moderators (e.g., publication year and status, country and continent of data collection) influenced the sex effect. Theoretically, if cyber-bullying is considered a form of traditional bullying and aggression, males are likely to cyber-bully more than females. Conversely, if cyber-bullying is considered relational/indirect aggression, females will be slightly more likely to cyber-bully than males. Results from 122 effect size estimates showed that males were slightly more likely to cyber-bully than females; however, age moderated the overall effect. Specifically, females were more likely to report cyber-bullying during early to mid-adolescence than males, while males showed higher levels of cyber-bullying during later adolescence than females. Publication status and year and continent and country of data collection also moderated the overall effect. © 2014 Wiley Periodicals, Inc.

  7. Development of Single-Molecule DNA Sequencing Platform Based on Single-Molecule Electrical Conductance

    DTIC Science & Technology

    2015-05-25

    nanoparticles, Nature Nanotechnology 7, 197-203. 11. Dreaden, E. C., Alkilany, A. M., Huang, X. H., Murphy, C. J., and El-Sayed, M. A. (2012) The...13840-13851. 14. Llevot, A., and Astruc, D. (2012) Applications of vectorized gold nanoparticles to the diagnosis and therapy of cancer, Chem. Soc. Rev...caused by the injection of gold nanoparticles, Nanotechnology 21, 485102. 25. Dykman, L. A., Matora, L. Y., and Bogatyrev, V. A. (1996) Use of

  8. Minimizing inner product data dependencies in conjugate gradient iteration

    NASA Technical Reports Server (NTRS)

    Vanrosendale, J.

    1983-01-01

    The amount of concurrency available in conjugate gradient iteration is limited by the summations required in the inner product computations. The inner product of two vectors of length N requires time c log(N), if N or more processors are available. This paper describes an algebraic restructuring of the conjugate gradient algorithm which minimizes data dependencies due to inner product calculations. After an initial start-up, the new algorithm can perform a conjugate gradient iteration in time c log(log(N)).
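For reference, the textbook conjugate gradient iteration below marks the two inner products per iteration that this abstract identifies as the concurrency bottleneck; each is a global summation whose parallel depth grows as log(N). This is the standard algorithm, not the paper's restructured variant.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Standard CG for a symmetric positive definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r                    # inner product #1: a global reduction
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)     # inner product #2: another reduction
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x
```

Each iteration must complete both reductions before the next update can start, which is exactly the data dependency the paper's restructuring targets.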

  9. User interface concerns

    NASA Technical Reports Server (NTRS)

    Redhed, D. D.

    1978-01-01

    Three possible goals for the Numerical Aerodynamic Simulation Facility (NASF) are: (1) a computational fluid dynamics (as opposed to aerodynamics) algorithm development tool; (2) a specialized research laboratory facility for nearly intractable aerodynamics problems that industry encounters; and (3) a facility for industry to use in its normal aerodynamics design work that requires high computing rates. The central system issue for industry use of such a computer is the quality of the user interface as implemented in some kind of a front end to the vector processor.

  10. Modulated error diffusion CGHs for neural nets

    NASA Astrophysics Data System (ADS)

    Vermeulen, Pieter J. E.; Casasent, David P.

    1990-05-01

    New modulated error diffusion CGHs (computer generated holograms) for optical computing are considered. Specific attention is given to their use in optical matrix-vector, associative processor, neural net and optical interconnection architectures. We consider lensless CGH systems (many CGHs use an external Fourier transform (FT) lens), the Fresnel sampling requirements, the effects of finite CGH apertures (sample and hold inputs), dot size correction (for laser recorders), and new applications for this novel encoding method (that devotes attention to quantization noise effects).

  11. SC'11 Poster: A Highly Efficient MGPT Implementation for LAMMPS; with Strong Scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oppelstrup, T; Stukowski, A; Marian, J

    2011-12-07

    The MGPT potential has been implemented as a drop-in package for the general molecular dynamics code LAMMPS. We implement an improved communication scheme that shrinks the communication layer thickness and improves load balancing. This results in unprecedented strong scaling, with speedup continuing beyond 1/8 atom/core. In addition, we have optimized the small-matrix linear algebra with generic blocking (for all processors) and with specific SIMD intrinsics for vectorization on Intel, AMD, and BlueGene CPUs.
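The "generic blocking" of small-matrix linear algebra mentioned in the poster can be illustrated with a standard cache-blocking loop structure; the block size and the NumPy formulation below are illustrative assumptions, not the actual MGPT/LAMMPS kernels.

```python
import numpy as np

def blocked_matmul(A, B, block=4):
    """Cache-blocked matrix multiply for square n x n matrices."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i0 in range(0, n, block):
        for k0 in range(0, n, block):
            for j0 in range(0, n, block):
                # Each small block product stays resident in cache/registers,
                # which is also where SIMD intrinsics would be applied.
                C[i0:i0+block, j0:j0+block] += (
                    A[i0:i0+block, k0:k0+block] @ B[k0:k0+block, j0:j0+block]
                )
    return C
```

The blocking changes only the traversal order, not the result, so it can be validated directly against an unblocked multiply.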

  12. Securing Cyberspace: Approaches to Developing an Effective Cyber-Security Strategy

    DTIC Science & Technology

    2011-05-15

    attackers, cyber-criminals or even teenage hackers. Protecting cyberspace is a national security priority. President Obama’s National Security...prefers to engage international law enforcement to investigate and catch cyber-criminals.40 International cooperation could resolve jurisdictional...sheltered them. Similarly, a state that fails to prosecute cyber-criminals, or who gives safe haven to individuals or groups that conduct cyber-attacks

  13. Cyber Bullying: Overview and Strategies for School Counsellors, Guidance Officers, and All School Personnel

    ERIC Educational Resources Information Center

    Bhat, Christine Suniti

    2008-01-01

    Cyber bullying or bullying via information and communications technology tools such as the internet and mobile phones is a problem of growing concern with school-aged students. Cyber bullying actions may not take place on school premises, but detrimental effects are experienced by victims of cyber bullying in schools. Tools used by cyber bullies…

  14. User's guide for the computer code COLTS for calculating the coupled laminar and turbulent flow over a Jovian entry probe

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Graeves, R. A.

    1980-01-01

    A user's guide is provided for the computer code COLTS (Coupled Laminar and Turbulent Solutions), which calculates laminar and turbulent hypersonic flows with radiation and coupled ablation injection past a Jovian entry probe. Time-dependent viscous-shock-layer equations are used to describe the flow field. These equations are solved by an explicit, two-step, time-asymptotic finite-difference method. Eddy viscosity in the turbulent flow is approximated by a two-layer model. In all, 19 chemical species are used to describe the injection of carbon-phenolic ablator into the hydrogen-helium gas mixture. The equilibrium composition of the mixture is determined by a free-energy minimization technique. A detailed frequency dependence of the absorption coefficient for various species is considered to obtain the radiative flux. The code is written for a CDC CYBER 203 computer and is also capable of providing solutions for ablated probe shapes.

  15. Rumination mediates the association between cyber-victimization and depressive symptoms.

    PubMed

    Feinstein, Brian A; Bhatia, Vickie; Davila, Joanne

    2014-06-01

    The current study examined the 3-week prospective associations between cyber-victimization and both depressive symptoms and rumination. In addition, a mediation model was tested, wherein rumination mediated the association between cyber-victimization and depressive symptoms. Participants (N = 565 college-age young adults) completed online surveys at two time points 3 weeks apart. Results indicated that cyber-victimization was associated with increases in both depressive symptoms and rumination over time. Furthermore, results of the path analysis indicated that cyber-victimization was associated with increases in rumination over time, which were then associated with greater depressive symptoms, providing support for the proposed mediation effect for women, but not men. Findings extend previous correlational findings by demonstrating that cyber-victimization is associated with increases in symptomatology over time. Findings also suggest that the negative consequences of cyber-victimization extend beyond mental health problems to maladaptive emotion regulation. In fact, rumination may be a mechanism through which cyber-victimization influences mental health problems, at least for women. Mental health professionals are encouraged to assess cyber-victimization as part of standard victimization assessments and to consider targeting maladaptive emotion regulation in addition to mental health problems in clients who have experienced cyber-victimization.

  16. The Role of Technologies, Behaviors, Gender, and Gender Stereotype Traits in Adolescents' Cyber Aggression.

    PubMed

    Wright, Michelle F

    2017-03-01

    The present study focused on the impact of gender and gender stereotype traits (i.e., masculinity, femininity) on cyber aggression perpetration utilizing different technologies (i.e., social-networking sites, gaming consoles, mobile phones) and behaviors (i.e., cyber relational aggression, cyber verbal aggression, hacking). Participants included 233 eighth graders (108 female; M age = 13.26, SD = 0.36) from two middle schools in the Midwestern United States. Adolescents completed questionnaires on their endorsement of masculinity and femininity traits as well as how often they engaged in cyber aggression perpetration (i.e., cyber relational aggression, cyber verbal aggression, hacking) through mobile phones, social-networking sites, and gaming consoles. Findings indicated that boys and girls with more feminine traits engaged in more cyber relational aggression through social-networking sites and mobile phones, while boys and girls who endorsed more masculine traits perpetrated this behavior and cyber verbal aggression more often through online gaming. In addition, these boys and girls engaged in more hacking through all technologies when compared with girls and boys who reported more feminine traits. Results of this study indicate the importance of delineating gender stereotype traits, behaviors, and technologies when examining cyber aggression perpetration.

  17. Longitudinal associations between cyber-bullying perpetration and victimization and problem behavior and mental health problems in young Australians.

    PubMed

    Hemphill, Sheryl A; Kotevski, Aneta; Heerde, Jessica A

    2015-02-01

    To investigate associations between Grade 9 and 10 cyber-bullying perpetration and victimization and Grade 11 problem behavior and mental health problems after controlling for risk factors for these outcomes in the analyses. The sample comprised 927 students from Victoria, Australia who completed a modified version of the self-report Communities That Care Youth Survey in Grades 9-11 to report on risk factors, traditional and cyber-bullying perpetration and victimization, problem behavior, and mental health. Complete data on over 650 participants were analyzed. Five per cent of Grade 9 and 10 students reported cyber-bullying perpetration only, 6-8% reported victimization only, and 8-9% both cyber-bullied others and were cyber-bullied. Results showed that cyber-bullying others in Grade 10 was associated with theft in Grade 11, cyber-victimization in Grade 10 was linked with Grade 11 depressive symptoms, and Grade 10 cyber-bullying perpetration and victimization combined predicted Grade 11 school suspension and binge drinking. Prevention approaches that target traditional and cyber-bullying, and established risk factors are necessary. Such multi-faceted programs may also reduce problem behavior and mental health problems.

  18. Scalable NIC-based reduction on large-scale clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, A.; Fernández, J. C.; Petrini, F.

    2003-01-01

    Many parallel algorithms require efficient support for reduction collectives. Over the years, researchers have developed optimal reduction algorithms by taking into account system size, data size, and the complexities of reduction operations. However, all of these algorithms have assumed that the reduction processing takes place on the host CPU. Modern Network Interface Cards (NICs) sport programmable processors with substantial memory and thus introduce a fresh variable into the equation. This raises the following interesting challenge: can we take advantage of modern NICs to implement fast reduction operations? In this paper, we take on this challenge in the context of large-scale clusters. Through experiments on the 960-node, 1920-processor ASCI Linux Cluster (ALC) located at the Lawrence Livermore National Laboratory, we show that NIC-based reductions indeed perform with reduced latency and improved consistency over host-based algorithms for the common case, and that these benefits scale as the system grows. In the largest configuration tested (1812 processors), our NIC-based algorithm can sum a single-element vector in 73 microseconds with 32-bit integers and in 118 microseconds with 64-bit floating-point numbers. These results represent an improvement, respectively, of 121% and 39% with respect to the production-level MPI library.
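Whether it runs on the host CPU or on NIC processors, a reduction collective is typically organized as a binary tree, so P partial values combine in about log2(P) steps. A sequential simulation of that structure, as a sketch:

```python
# Binary-tree summation: the structure behind reduction collectives.
# Simulated sequentially here; in the paper the pairwise combine steps are
# offloaded to NIC processors, taking the host CPU out of the critical path.

def tree_reduce(values):
    """Pairwise summation; returns (total, number of tree levels)."""
    level = list(values)
    steps = 0
    while len(level) > 1:
        paired = [level[i] + level[i + 1] for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:            # an odd element carries to the next level
            paired.append(level[-1])
        level = paired
        steps += 1
    return level[0], steps

total, depth = tree_reduce(range(16))   # 16 inputs combine in 4 levels
```

The depth, not the total element count, is what bounds latency when the combine steps run in parallel across nodes.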

  19. Development of JSDF Cyber Warfare Defense Critical Capability

    DTIC Science & Technology

    2010-03-01

    attack identification capability is essential for a nation to defend her vital infrastructures against offensive cyber warfare. Although the necessity of...cyber-attack identification capability is quite clear, Japan's preparation against cyber warfare is quite limited.

  20. Some of Indonesian Cyber Law Problems

    NASA Astrophysics Data System (ADS)

    Machmuddin, D. D.; Pratama, B.

    2017-01-01

    Cyber regulation is very important for controlling human interaction within the Internet network in cyber space. On the surface, innovation and development in science and technology facilitate human activity. But underneath, innovation is controlled by new business models. In cyber business, commercial activities mingle with individual protection. Under these conditions, the law should keep the activities in balance. Cyber law problems are not the concern of any particular country; they are a global concern. This is a good opportunity for developing countries to catch up with developed countries, and it makes people talented in both law and technology a necessity. This paper tries to describe cyber law in Indonesia. As a product of a developing country, it has some weaknesses that can be explained. The terminology and territory of cyber space are interesting to discuss, because these two problems can give a broad view of cyber law in Indonesia.

  1. Cyber Attacks and Terrorism: A Twenty-First Century Conundrum.

    PubMed

    Albahar, Marwan

    2017-01-05

    In recent years, an alarming rise in the incidence of cyber attacks has made cyber security a major concern for nations across the globe. Given the current volatile socio-political environment and the massive increase in the incidence of terrorism, it is imperative that government agencies rapidly realize the possibility of cyber space exploitation by terrorist organizations and state players to disrupt the normal way of life. The threat level of cyber terrorism has never been as high as it is today, and this has created a lot of insecurity and fear. This study has focused on different aspects of cyber attacks and explored the reasons behind their increasing popularity among terrorist organizations and state players. This study proposes an empirical model that can be used to estimate the risk levels associated with different types of cyber attacks and thereby provide a road map to conceptualize and formulate highly effective countermeasures and cyber security policies.

  2. GNAQPMS v1.1: accelerating the Global Nested Air Quality Prediction Modeling System (GNAQPMS) on Intel Xeon Phi processors

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa

    2017-08-01

    The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512 bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance. The same optimisations also work well for the Intel Xeon Broadwell processor, specifically E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51 × faster on KNL and 2.77 × faster on the CPU. Moreover, the optimised version ran at 26 % lower average power on KNL than on the CPU. 
With the combined performance and energy improvement, the KNL platform was 37.5 % more efficient on power consumption compared with the CPU platform. The optimisations also enabled much further parallel scalability on both the CPU cluster and the KNL cluster scaled to 40 CPU nodes and 30 KNL nodes, with a parallel efficiency of 70.4 and 42.2 %, respectively.
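Optimisation (2), exploiting the 512-bit VPUs, amounts to replacing element-at-a-time loops with whole-array operations that the compiler (or here, NumPy) can vectorise. The decay-style update below is a made-up stand-in for the chemistry code, not actual GNAQPMS source:

```python
import numpy as np

# Toy concentration update illustrating scalar vs. vectorised formulations.
# The physics here is invented for illustration; only the loop structure
# matters: the vectorised form maps onto wide SIMD units.

def update_scalar(conc, rate, dt):
    out = np.empty_like(conc)
    for i in range(conc.size):          # one element per iteration
        out[i] = conc[i] * np.exp(-rate[i] * dt)
    return out

def update_vectorized(conc, rate, dt):
    return conc * np.exp(-rate * dt)    # whole array per operation
```

On a 512-bit VPU the vectorised form can process eight double-precision elements per instruction instead of one, which is the effect the paper targets on KNL.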

  3. Department of Defense's Enhanced Requirement for Offensive Cyber Warfare Operations

    DTIC Science & Technology

    2010-04-01

    The Department of Defense (DoD) needs to further develop its offensive cyber warfare capabilities at all levels. In an asymmetric environment...battlefields. If the DoD does not prosecute offensive cyber warfare tactics then the DoD has effectively allowed a significant advantage to be given...offensive cyber warfare operations. These states utilize their cyber warfare capabilities to support their national, operational and strategic

  4. Cyber Defense Management

    DTIC Science & Technology

    2016-09-01

    manage cyber security is often a very manual and labor intensive process. When a crisis hits, DoD responses range from highly automated and instrumented...DSB Task Force Report on Cyber Defense Management September 2016 (U) This page intentionally blank REPORT OF THE DEFENSE SCIENCE BOARD STUDY ON Cyber...DEFENSE FOR ACQUISITION, TECHNOLOGY & LOGISTICS SUBJECT: Final Report of the Defense Science Board (DSB) Task Force on Cyber Defense Management I am

  5. Reconstruction of Cyber and Physical Software Using Novel Spread Method

    NASA Astrophysics Data System (ADS)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    Cyber and physical software has been a subject of concern since 2010. Indeed, many researchers would disagree with the deployment of the traditional Spread Method for the reconstruction of cyber and physical software, which embodies the key principles of the reconstruction of cyber-physical systems. NSM (novel spread method), our new methodology for the reconstruction of cyber and physical software, is the solution to these challenges.

  6. A Responsive Cyber Risk Ecosystem

    DTIC Science & Technology

    2017-01-19

    UNCLASSIFIED - Distribution A: Approved for public release; distribution unlimited AIR FORCE CYBERWORX REPORT 16-003: A RESPONSIVE CYBER RISK...right problem to solve and find meaningful solutions by exploring a wide range of possible answers to the design problem. For the Responsive Cyber ...Risk Dashboard Design Project, CyberWorx brought together a design team of 25 participants from UASFA and Industry to explore how cyber risk to AF

  7. Cyber Threat and Vulnerability Analysis of the U.S. Electric Sector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glenn, Colleen; Sterbentz, Dane; Wright, Aaron

    With utilities in the U.S. and around the world increasingly moving toward smart grid technology and other upgrades with inherent cyber vulnerabilities, correlative threats from malicious cyber attacks on the North American electric grid continue to grow in frequency and sophistication. The potential for malicious actors to access and adversely affect physical electricity assets of U.S. electricity generation, transmission, or distribution systems via cyber means is a primary concern for utilities contributing to the bulk electric system. This paper seeks to illustrate the current cyber-physical landscape of the U.S. electric sector in the context of its vulnerabilities to cyber attacks, the likelihood of cyber attacks, and the impacts cyber events and threat actors can achieve on the power grid. In addition, this paper highlights utility perspectives, perceived challenges, and requests for assistance in addressing cyber threats to the electric sector. There have been no reported targeted cyber attacks carried out against utilities in the U.S. that have resulted in permanent or long term damage to power system operations thus far, yet electric utilities throughout the U.S. have seen a steady rise in cyber and physical security related events that continue to raise concern. Asset owners and operators understand that the effects of a coordinated cyber and physical attack on a utility’s operations would threaten electric system reliability, and potentially result in large scale power outages. Utilities are routinely faced with new challenges for dealing with these cyber threats to the grid and consequently maintain a set of best practices to keep systems secure and up to date. Among the greatest challenges is a lack of knowledge or strategy to mitigate new risks that emerge as a result of an exponential rise in complexity of modern control systems. 
This paper compiles an open-source analysis of cyber threats and risks to the electric grid, utility best practices for prevention and response to cyber threats, and utility suggestions about how the federal government can aid utilities in combating and mitigating risks.

  8. List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor

    NASA Astrophysics Data System (ADS)

    Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.

    2014-03-01

    List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple data threads per core with each thread having a 512-bit single instruction, multiple data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi coprocessor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with Xeon Phi co-processor card and top of the range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. 
A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging applications.
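The multi-threaded structure the study describes (pthreads, later OpenMP) can be sketched as: split the event list into chunks, let each thread accumulate a private back-projection buffer, then sum the buffers. The projector below is a trivial stand-in for the real ray tracing through a 3D array.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def backproject_chunk(events, image_size):
    """Accumulate one thread's private buffer (stand-in for ray tracing)."""
    partial = np.zeros(image_size)
    for voxel, weight in events:        # one "ray" per list-mode event
        partial[voxel] += weight
    return partial

def parallel_backproject(events, image_size, n_threads=4):
    """Partition the event list, back-project chunks in parallel, sum buffers."""
    chunks = np.array_split(np.arange(len(events)), n_threads)
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        parts = pool.map(
            lambda idx: backproject_chunk([events[i] for i in idx], image_size),
            chunks,
        )
    return sum(parts)
```

Because each thread owns its buffer, no locking is needed during accumulation; the only synchronization point is the final sum, which mirrors the private-accumulator pattern common in OpenMP ports.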

  9. Taxonomies of Cyber Adversaries and Attacks: A Survey of Incidents and Approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyers, C A; Powers, S S; Faissol, D M

    In this paper we construct taxonomies of cyber adversaries and methods of attack, drawing from a survey of the literature in the area of cyber crime. We begin by addressing the scope of cyber crime, noting its prevalence and effects on the US economy. We then survey the literature on cyber adversaries, presenting a taxonomy of the different types of adversaries and their corresponding methods, motivations, maliciousness, and skill levels. Subsequently we survey the literature on cyber attacks, giving a taxonomy of the different classes of attacks, subtypes, and threat descriptions. The goal of this paper is to inform future studies of cyber security on the shape and characteristics of the risk space and its associated adversaries.

  10. Using agility to combat cyber attacks.

    PubMed

    Anderson, Kerry

    2017-06-01

    Some incident response practitioners feel that they have been locked in a battle with cyber criminals since the popular adoption of the internet. Initially, organisations made great inroads in preventing and containing cyber attacks. In the last few years, however, cyber criminals have become adept at eluding defence security technologies and rapidly modifying their exploit strategies for financial or political gains. Similar to changes in military combat tactics, cyber criminals utilise distributed attack cells, real-time communications, and rapidly mutating exploits to minimise the potential for detection. Cyber criminals have changed their attack paradigm. This paper describes a new incident response paradigm aimed at combating the new model of cyber attacks with an emphasis on agility to increase the organisation's ability to respond rapidly to these new challenges.

  11. Cyber warfare: Armageddon in a Teacup?

    DTIC Science & Technology

    2009-12-11

    Security concerns over the growing capability of Cyber Warfare are at the forefront of national policy and security discussions. In order to enable a...realistic discussion of the topic this thesis seeks to analyze demonstrated Cyber Warfare capability and its ability to achieve strategic political...objectives. This study examines Cyber Warfare conducted against Estonia in 2007, Georgia in 2008, and Israel in 2008. In all three cases Cyber Warfare did

  12. The Characterization and Measurement of Cyber Warfare, Spring 2008 - Project 08-01

    DTIC Science & Technology

    2008-05-01

    Global Innovation and Strategy Center The Characterization and Measurement of Cyber Warfare Spring 2008 – Project 08-01 May 2008...and Measurement of Cyber Warfare 08-01 Dobitz, Kyle; Haas, Brad; Holtje, Michael; Jokerst, Amanda; Ochsner, Geoff; Silva, Stephanie...research team as critical for purposes of cyber act characterization: Motivation, Intent, Target, Effects, and Actors. cyberspace, cyber warfare, targets

  13. At the Crossroads of Cyber Warfare: Signposts for the Royal Australian Air Force

    DTIC Science & Technology

    2011-06-01

    At the Crossroads of Cyber Warfare: Signposts for the Royal Australian Air Force by Craig Stallard, Squadron Leader, Royal...in the conduct of cyber warfare. The 2009 Defence White Paper provided some clarity by identifying cyber warfare as critical to the maintenance...of national security, but left open the most important issue: should cyber warfare be a joint engagement or a service oriented fight? The RAAF

  14. Cyber Warfare: New Character with Strategic Results

    DTIC Science & Technology

    2013-03-01

    The advent of cyber warfare has sparked a debate amongst theorists as to whether timeless Clausewitzian principles remain true in the 21st century...Violence, uncertainty, and rationality still accurately depict the nature of cyber warfare; however, its many defining attributes and means by which...this style of warfare is conducted has definitively changed the character of war. Although cyber warfare is contested in the cyber domain, it often

  15. Medical Differential Diagnosis (MDD) as the Architectural Framework for a Knowledge Model: A Vulnerability Detection and Threat Identification Methodology for Cyber-Crime and Cyber-Terrorism

    ERIC Educational Resources Information Center

    Conley-Ware, Lakita D.

    2010-01-01

    This research addresses a real world cyberspace problem, where currently no cross industry standard methodology exists. The goal is to develop a model for identification and detection of vulnerabilities and threats of cyber-crime or cyber-terrorism where cyber-technology is the vehicle to commit the criminal or terrorist act (CVCT). This goal was…

  16. Defense Science Board Task Force Report on Cyber Defense Management

    DTIC Science & Technology

    2016-09-01

    manage cyber security is often a very manual and labor intensive process. When a crisis hits, DoD responses range from highly automated and instrumented...DSB Task Force Report on Cyber Defense Management September 2016 (U) This page intentionally blank REPORT OF THE DEFENSE SCIENCE BOARD STUDY ON Cyber...DEFENSE FOR ACQUISITION, TECHNOLOGY & LOGISTICS SUBJECT: Final Report of the Defense Science Board (DSB) Task Force on Cyber Defense Management I am

  17. Literature Review on Modeling Cyber Networks and Evaluating Cyber Risks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelic, Andjelka; Campbell, Philip L

    The National Infrastructure Simulations and Analysis Center (NISAC) conducted a literature review on modeling cyber networks and evaluating cyber risks. The literature review explores where modeling is used in the cyber regime and ways that consequence and risk are evaluated. The relevant literature clusters in three different spaces: network security, cyber-physical, and mission assurance. In all approaches, some form of modeling is utilized at varying levels of detail, while the ability to understand consequence varies, as do interpretations of risk. This document summarizes the different literature viewpoints and explores their applicability to securing enterprise networks.

  18. Bullying prevalence across contexts: a meta-analysis measuring cyber and traditional bullying.

    PubMed

    Modecki, Kathryn L; Minchin, Jeannie; Harbaugh, Allen G; Guerra, Nancy G; Runions, Kevin C

    2014-11-01

    Bullying involvement in any form can have lasting physical and emotional consequences for adolescents. For programs and policies to best safeguard youth, it is important to understand prevalence of bullying across cyber and traditional contexts. We conducted a thorough review of the literature and identified 80 studies that reported corresponding prevalence rates for cyber and traditional bullying and/or aggression in adolescents. Weighted mean effect sizes were calculated, and measurement features were entered as moderators to explain variation in prevalence rates and in traditional-cyber correlations within the sample of studies. Prevalence rates for cyber bullying were lower than for traditional bullying, and cyber and traditional bullying were highly correlated. A number of measurement features moderated variability in bullying prevalence; whereas a focus on traditional relational aggression increased correlations between cyber and traditional aggressions. In our meta-analytic review, traditional bullying was twice as common as cyber bullying. Cyber and traditional bullying were also highly correlated, suggesting that polyaggression involvement should be a primary target for interventions and policy. Results of moderation analyses highlight the need for greater consensus in measurement approaches for both cyber and traditional bullying. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
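
    The pooled prevalence estimates described above rest on inverse-variance weighting of study-level effect sizes. As a minimal illustration of that pooling step (with made-up numbers, not the review's data), a fixed-effect weighted mean can be computed as:

```python
import math

def weighted_mean_effect(effects, variances):
    """Fixed-effect meta-analytic pooling: inverse-variance weighted mean."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))   # standard error of the pooled estimate
    return pooled, se

# Hypothetical effect sizes and variances from three studies (illustrative only)
effects = [0.40, 0.55, 0.35]
variances = [0.010, 0.020, 0.015]
wm, se = weighted_mean_effect(effects, variances)
```

    A random-effects model would add a between-study variance component to each weight; the moderator analyses mentioned in the abstract are layered on top of this basic pooling step.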

  19. 78 FR 46997 - Agency Information Collection Activities: Submission for Review; Information Collection Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ... (DHS), Science and Technology, CyberForensics Electronic Technology Clearinghouse (CyberFETCH) Program... public to comment on data collection forms for the CyberForensics Electronic Technology Clearinghouse... for providing a collaborative environment for cyber forensics practitioners from law enforcement...

  20. Space and Cyber: Shared Challenges, Shared Opportunities

    DTIC Science & Technology

    2011-11-15

    adversaries to have effective capabilities against networks and computer systems, unlike those anywhere else—here, cyber criminals, proxies for hire, and...or unintentional, conditions can impact our ability to use space and cyber capabilities. As the tools and techniques developed by cyber criminals continue

  1. 76 FR 43696 - Nationwide Cyber Security Review (NCSR) Assessment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-21

    ... DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2011-0012] Nationwide Cyber Security Review (NCSR... Protection and Programs Directorate (NPPD), Office of Cybersecurity and Communications (CS&C), National Cyber Security Division (NCSD), Cyber Security Evaluation Program (CSEP), will submit the following Information...

  2. Examining Cyber Command Structures

    DTIC Science & Technology

    2015-03-01

    domains, cyber, command and control, USCYBERCOM, combatant command, cyber force...USCYBERCOM, argue for the creation of a stand-alone cyber force. They claim that the military’s tradition-oriented and inelastic nature makes the

  3. The vectorization of a ray tracing program for image generation

    NASA Technical Reports Server (NTRS)

    Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.

    1984-01-01

    Ray tracing is a widely used method for producing realistic computer-generated images. Ray tracing involves firing an imaginary ray from a view point, through a point on an image plane, into a three-dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at that point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersection of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high-quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion explains how the ray tracing process was vectorized and gives examples of the images obtained.
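
    The speedup described above comes from replacing the one-ray-at-a-time loop with arithmetic over whole arrays of rays. A minimal sketch of that idea, using NumPy as a stand-in for the CYBER 205's vector instructions (the sphere test and ray data below are illustrative, not from the paper):

```python
import numpy as np

def intersect_sphere(origins, dirs, center, radius):
    """Vectorized ray-sphere test: one quadratic solve covers every ray.

    origins, dirs: (N, 3) arrays, dirs of unit length.
    Returns distance t to the nearest forward hit, or np.inf for a miss.
    """
    oc = origins - center                       # ray origins relative to center
    b = np.einsum('ij,ij->i', oc, dirs)         # per-ray dot(oc, d)
    c = np.einsum('ij,ij->i', oc, oc) - radius ** 2
    disc = b * b - c                            # discriminant of t^2 + 2bt + c = 0
    t = np.where(disc >= 0.0, -b - np.sqrt(np.maximum(disc, 0.0)), np.inf)
    return np.where(t > 0.0, t, np.inf)         # hits behind the origin are misses

# Two rays from the origin: one aimed at a unit sphere centered at z = 5, one aimed away
origins = np.zeros((2, 3))
dirs = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
t = intersect_sphere(origins, dirs, np.array([0.0, 0.0, 5.0]), 1.0)
```

    Every ray is tested in one pass of array arithmetic, which is exactly the pattern that long-vector hardware (or SIMD units today) rewards.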

  4. Will electrical cyber-physical interdependent networks undergo first-order transition under random attacks?

    NASA Astrophysics Data System (ADS)

    Ji, Xingpei; Wang, Bo; Liu, Dichen; Dong, Zhaoyang; Chen, Guo; Zhu, Zhenshan; Zhu, Xuedong; Wang, Xunting

    2016-10-01

    Whether realistic electrical cyber-physical interdependent networks will undergo a first-order transition under random failures remains an open question. To reflect the reality of the Chinese electrical cyber-physical system, a "partial one-to-one correspondence" interdependent networks model is proposed, and the connectivity vulnerabilities of three realistic electrical cyber-physical interdependent networks are analyzed. The simulation results show that, due to the service demands of the power system, the topologies of the power grid and its cyber network are highly inter-similar, which can effectively avoid the first-order transition. By comparing the vulnerability curves of electrical cyber-physical interdependent networks with those of the single-layer network, we find that complex network theory is still useful in the vulnerability analysis of electrical cyber-physical interdependent networks.
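
    The connectivity analysis described above reduces to a percolation experiment: remove a random fraction of nodes and measure the surviving giant component. The sketch below illustrates that generic random-attack experiment on a toy ring-with-chords graph; it does not implement the paper's "partial one-to-one correspondence" interdependence model:

```python
import random
from collections import deque

def largest_component(nodes, adj):
    """Size of the largest connected component among `nodes`, via BFS."""
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def random_attack(n, adj, frac, rng):
    """Remove a random `frac` of nodes; return the giant component's share of n."""
    survivors = set(rng.sample(range(n), int(n * (1 - frac))))
    return largest_component(survivors, adj) / n

# Toy topology: a 200-node ring with 40 random chords (a stand-in power grid)
rng = random.Random(1)
n = 200
adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
for _ in range(40):
    a, b = rng.randrange(n), rng.randrange(n)
    if a != b:
        adj[a].add(b)
        adj[b].add(a)

g30 = random_attack(n, adj, 0.3, rng)  # mild attack
g70 = random_attack(n, adj, 0.7, rng)  # severe attack
```

    A first-order (discontinuous) transition would show up as an abrupt collapse of this surviving fraction at a critical removal rate; the paper's finding is that topological inter-similarity between the coupled layers keeps the realistic system away from that regime.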

  5. A cognitive and economic decision theory for examining cyber defense strategies.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bier, Asmeret Brooke

    Cyber attacks pose a major threat to modern organizations. Little is known about the social aspects of decision making among organizations that face cyber threats, nor do we have empirically-grounded models of the dynamics of cooperative behavior among vulnerable organizations. The effectiveness of cyber defense can likely be enhanced if information and resources are shared among organizations that face similar threats. Three models were created to begin to understand the cognitive and social aspects of cyber cooperation. The first simulated a cooperative cyber security program between two organizations. The second focused on a cyber security training program in which participants interact (and potentially cooperate) to solve problems. The third builds upon the first two models and simulates cooperation between organizations in an information-sharing program.

  6. Adolescent predictors of young adult cyber-bullying perpetration and victimization among Australian youth

    PubMed Central

    Hemphill, Sheryl A.; Heerde, Jessica A.

    2014-01-01

    Purpose The purpose of the current paper was to examine the adolescent risk and protective factors (at the individual, peer group, and family level) for young adult cyber-bullying perpetration and victimization. Methods Data from 2006 (Grade 9) to 2010 (young adulthood) were analyzed from a community sample of 927 Victorian students originally recruited as a state-wide representative sample in Grade 5 (age 10–11 years) in 2002 and followed up to age 18–19 years in 2010 (N = 809). Participants completed a self-report survey on adolescent risk and protective factors and traditional and cyber-bullying perpetration and victimization, and young adult cyber-bullying perpetration and victimization. Results As young adults, 5.1% self-reported cyber-bullying perpetration only, 5.0% cyber-bullying victimization only, and 9.5% reported both cyber-bullying perpetration and victimization. In fully adjusted logistic regression analyses, the adolescent predictors of cyber-bullying perpetration only were traditional bullying perpetration, traditional bullying perpetration and victimization, and poor family management. For young adulthood cyber-bullying victimization only, the adolescent predictor was emotion control. The adolescent predictors for young adult cyber-bullying perpetration and victimization were traditional bullying perpetration and cyber-bullying perpetration and victimization. Conclusions Based on the results of this study, possible targets for prevention and early intervention are reducing adolescent involvement in (traditional or cyber-) bullying through the development of social skills and conflict resolution skills. In addition, another important prevention target is to support families with adolescents to ensure they set clear rules and monitor adolescents’ behavior. Universal programs that assist adolescents to develop skills in emotion control are warranted. PMID:24939014

  7. A continuous arc delivery optimization algorithm for CyberKnife m6.

    PubMed

    Kearney, Vasant; Descovich, Martina; Sudhyadhom, Atchar; Cheung, Joey P; McGuinness, Christopher; Solberg, Timothy D

    2018-06-01

    This study aims to reduce the delivery time of CyberKnife m6 treatments by allowing for noncoplanar continuous arc delivery. To achieve this, a novel noncoplanar continuous arc delivery optimization algorithm was developed for the CyberKnife m6 treatment system (CyberArc-m6). CyberArc-m6 uses a five-step overarching strategy, in which an initial set of beam geometries is determined, the robotic delivery path is calculated, direct aperture optimization is conducted, intermediate MLC configurations are extracted, and the final beam weights are computed for the continuous arc radiation source model. This algorithm was implemented on five prostate and three brain patients, previously planned using a conventional step-and-shoot CyberKnife m6 delivery technique. The dosimetric quality of the CyberArc-m6 plans was assessed using locally confined mutual information (LCMI), conformity index (CI), heterogeneity index (HI), and a variety of common clinical dosimetric objectives. Using conservative optimization tuning parameters, CyberArc-m6 plans were able to achieve an average CI difference of 0.036 ± 0.025, an average HI difference of 0.046 ± 0.038, and an average LCMI of 0.920 ± 0.030 compared with the original CyberKnife m6 plans. Including a 5 s per minute image alignment time and a 5-min setup time, conservative CyberArc-m6 plans achieved an average treatment delivery speed up of 1.545x ± 0.305x compared with step-and-shoot plans. The CyberArc-m6 algorithm was able to achieve dosimetrically similar plans compared to their step-and-shoot CyberKnife m6 counterparts, while simultaneously reducing treatment delivery times. © 2018 American Association of Physicists in Medicine.

  8. Recommendations for Model Driven Paradigms for Integrated Approaches to Cyber Defense

    DTIC Science & Technology

    2017-03-06

    analogy (e.g., Susceptible, Infected, Recovered [SIR]) • Abstract wargaming: game-theoretic model of cyber conflict without modeling the underlying...malware. 3.7 Abstract Wargaming Here, a game-theoretic process is modeled with moves and effects inspired by cyber conflict but without modeling the...underlying processes of cyber attack and defense. Examples in the literature include the following: • Cho J-H, Gao J. Cyber war game in temporal networks

  9. CYBER WARFARE GOVERNANCE: EVALUATION OF CURRENT INTERNATIONAL AGREEMENTS ON THE OFFENSIVE USE OF CYBER

    DTIC Science & Technology

    2015-10-01

    AIR COMMAND AND STAFF COLLEGE DISTANCE LEARNING AIR UNIVERSITY CYBER WARFARE GOVERNANCE: EVALUATION OF CURRENT INTERNATIONAL AGREEMENTS ON THE...order to prevent catastrophic second- and third-order effects. Rule 43 “prohibits means or methods of cyber warfare that are indiscriminate by nature...Means and methods of cyber warfare are indiscriminate by nature if they cannot be: directed at a specific military objective, or limited in their

  10. Initial Report of the Deans Cyber Warfare Ad Hoc Committee

    DTIC Science & Technology

    2011-12-22

    in a cyber warfare environment. Among the more notable recent developments has been the establishment of a new Cyber Warfare Command (USCYBERCOM) at...information-warfare-centric organization. Clearly, future Naval Academy graduates will be expected to know more about cyber warfare than those we have...graduated in the past. The Academic Dean and Provost tasked an ad hoc committee, the Cyber Warfare Ad Hoc Committee, to examine how USNA can best ensure that

  11. Anonymous As a Cyber Tribe: A New Model for Complex, Non-State Cyber Actors

    DTIC Science & Technology

    2015-05-01

    personas. Only then can cyber strategists exercise the required amount of cultural relativism needed to influence complex, and sometimes disturbing...that runs counter to their professional ethic? When cyber tribes employ atrocity to create cultural barriers, how will planners remain focused on...as a cyber actor’s motivation? Meeting these challenges requires new levels of cultural relativism: the understanding of a “culture or a cultural

  12. Tactical Cyber: Building A Strategy For Cyber Support To Corps And Below

    DTIC Science & Technology

    Future U.S. Army cyber operations will need to be conducted jointly and at all echelons and must include both defensive and offensive components...The Army is now developing doctrine, concepts, and capabilities to conduct and support tactical cyber operations. We propose the following vision...statement: The Army will be able to employ organic cyber capabilities at the tactical echelon with dedicated personnel in support of tactical units while

  13. CYBER: CHANGING THE LANDSCAPE OF FOREIGN INTERVENTION IN DEMOCRATIC ELECTIONS

    DTIC Science & Technology

    than the costs); and 3) the methods and tools for intervening are readily available. Cyber has had a significant impact on all three of the components...significant domestic actor and can act unilaterally to influence public opinion. Second, cyber lowers both the actual and perceived costs of...intervention. Finally, cyber methods and tools are available to almost anyone at any level. Cyber has lowered the barrier to electoral intervention. As a

  14. Structural Causes and Cyber Effects: A Response to Our Critics

    DTIC Science & Technology

    2015-01-01

    the incident, saying “North Korea’s attack on [Sony] reaffirms that cyber threats pose one of the gravest national security dangers to the United...around the world to strengthen cyber security, promote norms of acceptable state behavior, uphold freedom of expression, and ensure that the Internet...cyber working group that made progress toward “international cyberspace rules, and measures to boost dialogue and cooperation on cyber security.”

  15. SutraPlot, a graphical post-processor for SUTRA, a model for ground-water flow with solute or energy transport

    USGS Publications Warehouse

    Souza, W.R.

    1999-01-01

    This report documents a graphical display post-processor (SutraPlot) for the U.S. Geological Survey Saturated-Unsaturated flow and solute or energy TRAnsport simulation model SUTRA, Version 2D3D.1. This version of SutraPlot is an upgrade to SutraPlot for the 2D-only SUTRA model (Souza, 1987). It has been modified to add 3D functionality, a graphical user interface (GUI), and enhanced graphic output options. Graphical options for 2D SUTRA (2-dimension) simulations include: drawing the 2D finite-element mesh, mesh boundary, and velocity vectors; plots of contours for pressure, saturation, concentration, and temperature within the model region; 2D finite-element based gridding and interpolation; and 2D gridded data export files. Graphical options for 3D SUTRA (3-dimension) simulations include: drawing the 3D finite-element mesh; plots of contours for pressure, saturation, concentration, and temperature in 2D sections of the 3D model region; 3D finite-element based gridding and interpolation; drawing selected regions of velocity vectors (projected on principal coordinate planes); and 3D gridded data export files. Installation instructions and a description of all graphic options are presented. A sample SUTRA problem is described and three step-by-step SutraPlot applications are provided. In addition, the methodology and numerical algorithms for the 2D and 3D finite-element based gridding and interpolation, developed for SutraPlot, are described.

  16. Efficient iterative methods applied to the solution of transonic flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wissink, A.M.; Lyrintzis, A.S.; Chronopoulos, A.T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian, with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems. 38 refs., 14 figs., 7 tabs.
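
    The outer/inner structure described above, Newton's method with an analytic Jacobian and a Krylov solver for each linear system, can be sketched on a much simpler nonlinear problem. The example below uses the 1D Bratu equation and plain conjugate gradient in place of OSOmin/GMRES (an assumption for brevity; the paper's transonic equation and ILU-preconditioned solvers are more involved):

```python
import numpy as np

def cg(A, b, tol=1e-10, maxit=200):
    """Plain conjugate gradient for SPD A: the inner (inexact) linear solve."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def newton_bratu(n=50, lam=1.0, iters=20):
    """Newton-Krylov for the 1D Bratu problem -u'' = lam*exp(u), u(0)=u(1)=0."""
    h = 1.0 / (n + 1)
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2   # SPD discrete -d^2/dx^2
    u = np.zeros(n)
    for _ in range(iters):
        F = A @ u - lam * np.exp(u)              # nonlinear residual
        if np.linalg.norm(F) < 1e-10:
            break
        J = A - lam * np.diag(np.exp(u))         # exact analytic Jacobian
        u += cg(J, -F)                           # inner Krylov solve of J du = -F
    return u

u = newton_bratu()
```

    The trade-off studied in the abstract is precisely how well this inner Krylov loop vectorizes and parallelizes compared with a traditional AF/ADI sweep.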

  17. Confronting the Pedagogical Challenge of Cyber Safety

    ERIC Educational Resources Information Center

    Hanewald, Ria

    2008-01-01

    Cyber violence and the antidote of cyber safety are fast becoming a global concern for governments, educational authorities, teachers, parents and children alike. Despite substantial funding for information dissemination on preventative strategies and the development of electronic responses to hinder perpetrators, the phenomenon of cyber violence…

  18. Establishing a Cyber Warrior Force

    DTIC Science & Technology

    2004-09-01

    Cyber Warfare is widely touted to be the next generation of warfare. As America’s reliance on automated systems and information technology increases...so too does the potential vulnerability to cyber attack. Nation and non-nation states are developing the capability to wage cyber warfare. Historically

  19. The Cyber Aggression in Relationships Scale: A New Multidimensional Measure of Technology-Based Intimate Partner Aggression.

    PubMed

    Watkins, Laura E; Maldonado, Rosalita C; DiLillo, David

    2018-07-01

    The purpose of this study was to develop and provide initial validation for a measure of adult cyber intimate partner aggression (IPA): the Cyber Aggression in Relationships Scale (CARS). Drawing on recent conceptual models of cyber IPA, items from previous research exploring general cyber aggression and cyber IPA were modified and new items were generated for inclusion in the CARS. Two samples of adults 18 years or older were recruited online. We used item factor analysis to test the factor structure, model fit, and invariance of the measure structure across women and men. Results confirmed that three-factor models for both perpetration and victimization demonstrated good model fit, and that, in general, the CARS measures partner cyber aggression similarly for women and men. The CARS also demonstrated validity through significant associations with in-person IPA, trait anger, and jealousy. Findings suggest the CARS is a useful tool for assessing cyber IPA in both research and clinical settings.

  20. Improved parallel data partitioning by nested dissection with applications to information retrieval.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Michael M.; Chevalier, Cedric; Boman, Erik Gunnar

    The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show partitioning time can be substantially reduced by using the SCOTCH software, and quality improves in some cases, too.
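
    The objective these partitioners minimize, the total communication volume of a sparse matrix-vector product, is easy to state concretely. A small sketch for a 1D row partition (the paper's 2D nested dissection method itself is not reproduced here; the matrix and partition below are illustrative):

```python
def communication_volume(owner, nnz):
    """Total communication volume of y = A*x under a 1D row partition.

    owner: dict mapping index i to the processor owning row i and entry x_i
    (x is partitioned conformally with the rows).
    nnz: iterable of (i, j) coordinates of the nonzeros of A.
    Every processor with a nonzero in column j, other than the owner of
    x_j, must receive x_j once, so it contributes one word of volume.
    """
    touching = {}
    for i, j in nnz:
        touching.setdefault(j, set()).add(owner[i])
    volume = 0
    for j, procs in touching.items():
        procs.discard(owner[j])          # the owner already holds x_j locally
        volume += len(procs)
    return volume

# 4x4 arrowhead matrix split across two processors: rows 0-1 on p0, rows 2-3 on p1
owner = {0: 0, 1: 0, 2: 1, 3: 1}
nnz = [(0, 0), (1, 1), (2, 2), (3, 3),   # diagonal
       (3, 0), (3, 1), (3, 2), (0, 3)]   # arrowhead couplings
vol = communication_volume(owner, nnz)
```

    A 2D method like the nested dissection partitioner splits rows, columns, and vector entries jointly, which can drive this count well below what any 1D partition achieves while keeping the nonzeros balanced.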

  1. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing consists of finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods to solve these problems are based on projection techniques onto appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix-by-vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method, are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one-processor CRAY 2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
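
    The key property noted above, that projection methods touch the matrix only through matrix-vector products, can be shown with a small Lanczos sketch. This is a generic textbook version with full reorthogonalization and a toy diagonal operator, not the paper's supercomputer implementation:

```python
import numpy as np

def lanczos_extreme(matvec, n, k=60, seed=0):
    """Ritz values of a symmetric operator seen only through `matvec`,
    from a k-step Lanczos recurrence with full reorthogonalization."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q, alphas, betas = [q], [], []
    beta, q_prev = 0.0, np.zeros(n)
    for _ in range(k):
        w = matvec(q) - beta * q_prev    # the only access to the matrix
        alpha = q @ w
        alphas.append(alpha)
        w -= alpha * q
        for v in Q:                      # reorthogonalize (a toy-scale luxury)
            w -= (v @ w) * v
        beta = np.linalg.norm(w)
        if beta < 1e-12:                 # invariant subspace found
            break
        betas.append(beta)
        q_prev, q = q, w / beta
        Q.append(q)
    m = len(alphas)
    T = (np.diag(alphas)                 # tridiagonal projection of the operator
         + np.diag(betas[: m - 1], 1)
         + np.diag(betas[: m - 1], -1))
    return np.linalg.eigvalsh(T)

# A "sparse" operator used only through its matvec: diag(1, 2, ..., 200)
n = 200
d = np.arange(1, n + 1, dtype=float)
ritz = lanczos_extreme(lambda v: d * v, n)
```

    The extreme Ritz values converge to the extreme eigenvalues long before k reaches n, which is why the per-iteration sparse matvec dominates the cost and is the operation worth vectorizing.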

  2. An Examination of the Partner Cyber Abuse Questionnaire in a College Student Sample.

    PubMed

    Wolford-Clevenger, Caitlin; Zapor, Heather; Brasfield, Hope; Febres, Jeniimarie; Elmquist, JoAnna; Brem, Meagan; Shorey, Ryan C; Stuart, Gregory L

    2016-01-01

    To examine the factor structure and convergent validity of a newly developed measure of an understudied form of partner abuse, cyber abuse, and to examine the prevalence of, and gender differences in, victimization by cyber abuse. College students in a dating relationship (N = 502) completed the Partner Cyber Abuse Questionnaire (Hamby, 2013), as well as measures of partner abuse victimization and depression. Using exploratory factor analysis, we determined that a one-factor solution was the statistically and conceptually best-fitting model. The cyber abuse victimization factor was correlated with depressive symptoms and physical, psychological, and sexual partner abuse victimization, supporting the convergent validity of the measure. The overall prevalence of victimization by cyber abuse was 40%, with victimization by specific acts ranging from 2-31%. Men and women did not differ in their victimization by cyber abuse. Cyber abuse is prevalent among college students and occurs concurrently with other partner abuse forms and depressive symptoms. Given the interrelated nature of partner abuse forms, prevention and intervention programs should address partner abuse occurring in-person and through technology. Cyber abuse should also be considered in the conceptualization and measurement of partner abuse to more fully understand this social problem.

  3. MULTI-CORE AND OPTICAL PROCESSOR RELATED APPLICATIONS RESEARCH AT OAK RIDGE NATIONAL LABORATORY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barhen, Jacob; Kerekes, Ryan A; ST Charles, Jesse Lee

    2008-01-01

    High-speed parallelization of common tasks holds great promise as a low-risk approach to achieving the significant increases in signal processing and computational performance required for next generation innovations in reconfigurable radio systems. Researchers at the Oak Ridge National Laboratory have been working on exploiting the parallelization offered by this emerging technology and applying it to a variety of problems. This paper will highlight recent experience with four different parallel processors applied to signal processing tasks that are directly relevant to signal processing required for SDR/CR waveforms. The first is the EnLight Optical Core Processor applied to matched filter (MF) correlation processing via fast Fourier transform (FFT) of broadband Doppler-sensitive waveforms (DSW) using active sonar arrays for target tracking. The second is the IBM CELL Broadband Engine applied to a 2-D discrete Fourier transform (DFT) kernel for image processing and frequency domain processing. And the third is the NVIDIA graphical processor applied to document feature clustering. EnLight Optical Core Processor. Optical processing is inherently capable of high parallelism that can be translated to very high performance, low power dissipation computing. The EnLight 256 is a small form factor signal processing chip (5x5 cm²) with a digital optical core that is being developed by an Israeli startup company. As part of its evaluation of foreign technology, ORNL's Center for Engineering Science Advanced Research (CESAR) had access to precursor EnLight 64 Alpha hardware for a preliminary assessment of capabilities in terms of large Fourier transforms for matched filter banks and on applications related to Doppler-sensitive waveforms. This processor is optimized for array operations, which it performs in fixed-point arithmetic at the rate of 16 TeraOPS at 8-bit precision. This is approximately 1000 times faster than the fastest DSP available today.
The optical core performs the matrix-vector multiplications, where the nominal matrix size is 256x256. The system clock is 125 MHz. At each clock cycle, 128K multiply-and-add operations are carried out, which yields a peak performance of 16 TeraOPS. IBM Cell Broadband Engine. The Cell processor is the product of 5 years of sustained, intensive R&D collaboration (involving over $400M investment) between IBM, Sony, and Toshiba. Its architecture comprises one multithreaded 64-bit PowerPC processor element (PPE) with VMX capabilities and two levels of globally coherent cache, and 8 synergistic processor elements (SPEs). Each SPE consists of a processor (SPU) designed for streaming workloads, local memory, and a globally coherent direct memory access (DMA) engine. Computations are performed in 128-bit wide single instruction multiple data streams (SIMD). An integrated high-bandwidth element interconnect bus (EIB) connects the nine processors and their ports to external memory and to system I/O. The Applied Software Engineering Research (ASER) Group at ORNL is applying the Cell to a variety of text and image analysis applications. Research on Cell-equipped PlayStation3 (PS3) consoles has led to the development of a correlation-based image recognition engine that enables a single PS3 to process images at more than 10X the speed of state-of-the-art single-core processors. NVIDIA Graphics Processing Units. The ASER group is also employing the latest NVIDIA graphical processing units (GPUs) to accelerate clustering of thousands of text documents using recently developed clustering algorithms such as document flocking and affinity propagation.

  4. Cyber Warfare as an Operational Fire

    DTIC Science & Technology

    2010-04-03

    This paper explores cyber warfare as an option for creating operational fires effects. Initially, cyberspace is defined and explained from the...fires are defined and the advantages of their use are explained. From there, discussion focuses on how cyber warfare fulfills the purposes of...operational fires. Finally, the paper draws conclusions about the viability of cyber warfare as an operational fire and makes recommendations about how to prioritize the activities of the newly approved U.S. Cyber Command.

  5. Addressing Human Factors Gaps in Cyber Defense

    DTIC Science & Technology

    2016-09-23

    Factors Gaps in Cyber Defense (Contract FA8650-14-D-6501-0009)...Alex...Cyber security is a high-ranking national priority that is only likely to grow as we become more dependent on cyber systems. From a research perspective...currently available work often focuses solely on technological aspects of cyber, acknowledging the human in passing, if at all. In recent years, the

  6. WHEN NORMS FAIL: NORTH KOREA AND CYBER AS AN ELEMENT OF STATECRAFT

    DTIC Science & Technology

    2017-04-06

    use these increased cyber capabilities in attempts to wield greater influence in statecraft, cyber tools have not lived up to the hype of the great... influenced by, the cyber domain. A 2017 U.S. Defense Science Board report recognizes the significant economic, social, and military advantages the U.S. gains...reliant upon or connected to the internet, look to cyber to provide them a disproportionate level of influence in international affairs. The lack of

  7. Interoceptive sensitivity as a proxy for emotional intensity and its relationship with perseverative cognition

    PubMed Central

    Lugo, Ricardo G; Helkala, Kirsi; Knox, Benjamin J; Jøsok, Øyvind; Lande, Natalie M; Sütterlin, Stefan

    2018-01-01

    Background Technical advancement in military cyber defense poses increased cognitive demands on cyber officers. In the cyber domain, the influence of emotion on decision-making is rarely investigated. The purpose of this study was to assess psychophysiological correlation with perseverative cognitions during emotionally intensive/stressful situations in cyber military personnel. In line with parallel research on clinical samples high on perseverative cognition, we expected a decreased interoceptive sensitivity in officers with high levels of perseverative cognition. Method We investigated this association in a sample of 27 cyber officer cadets. Results Contrary to our hypothesis, there was no relationship between the factors. Discussion Cyber officers might display characteristics not otherwise found in general populations. The cyber domain may lead to a selection process that attracts different profiles of cognitive and emotional processing. PMID:29296103

  8. The Fortran-P Translator: Towards Automatic Translation of Fortran 77 Programs for Massively Parallel Processors

    DOE PAGES

    O'keefe, Matthew; Parr, Terence; Edgar, B. Kevin; ...

    1995-01-01

    Massively parallel processors (MPPs) hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how application codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.
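
    The self-similar property described above can be illustrated with a small sketch (in Python rather than Fortran-P, and with a hypothetical three-point smoothing stencil): the same stencil routine is applied both to the global domain and, with one-cell halos, to each subdomain, and the stitched-together subdomain results agree with the global result.

```python
import numpy as np

def smooth(u):
    # Three-point averaging stencil applied to the interior of u;
    # the two endpoint values act as fixed boundary/halo cells.
    v = u.copy()
    v[1:-1] = (u[:-2] + u[1:-1] + u[2:]) / 3.0
    return v

# Global domain with fixed boundary values.
rng = np.random.default_rng(0)
u = rng.random(64)
global_result = smooth(u)

# Self-similar decomposition: split the domain into two subdomains,
# extend each with a one-cell halo copied from its neighbor, and
# apply the *same* stencil routine to every subdomain.
half = len(u) // 2
left  = smooth(u[:half + 1])[:-1]   # halo cell on the right, dropped after the sweep
right = smooth(u[half - 1:])[1:]    # halo cell on the left, dropped after the sweep
pieces = np.concatenate([left, right])

assert np.allclose(pieces, global_result)
```

    Because the halo cells supply exactly the neighbor values the stencil needs, the per-subdomain sweeps reproduce the global sweep, which is what makes this style mechanically translatable to distributed subdomains.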

  9. The density matrix renormalization group algorithm on kilo-processor architectures: Implementation and trade-offs

    NASA Astrophysics Data System (ADS)

    Nemes, Csaba; Barcza, Gergely; Nagy, Zoltán; Legeza, Örs; Szolgay, Péter

    2014-06-01

    In the numerical analysis of strongly correlated quantum lattice models one of the leading algorithms developed to balance the size of the effective Hilbert space and the accuracy of the simulation is the density matrix renormalization group (DMRG) algorithm, in which the run-time is dominated by the iterative diagonalization of the Hamilton operator. As the most time-dominant step of the diagonalization can be expressed as a list of dense matrix operations, the DMRG is an appealing candidate to fully utilize the computing power residing in novel kilo-processor architectures. In the paper a smart hybrid CPU-GPU implementation is presented, which exploits the power of both CPU and GPU and tolerates problems exceeding the GPU memory size. Furthermore, a new CUDA kernel has been designed for asymmetric matrix-vector multiplication to accelerate the rest of the diagonalization. Besides the evaluation of the GPU implementation, the practical limits of an FPGA implementation are also discussed.

  10. Multitasking a three-dimensional Navier-Stokes algorithm on the Cray-2

    NASA Technical Reports Server (NTRS)

    Swisshelm, Julie M.

    1989-01-01

    A three-dimensional computational aerodynamics algorithm has been multitasked for efficient parallel execution on the Cray-2. It provides a means for examining the multitasking performance of a complete CFD application code. An embedded zonal multigrid scheme is used to solve the Reynolds-averaged Navier-Stokes equations for an internal flow model problem. The explicit nature of each component of the method allows a spatial partitioning of the computational domain to achieve a well-balanced task load for MIMD computers with vector-processing capability. Experiments have been conducted with both two- and three-dimensional multitasked cases. The best speedup attained by an individual task group was 3.54 on four processors of the Cray-2, while the entire solver yielded a speedup of 2.67 on four processors for the three-dimensional case. The multiprocessing efficiency of various types of computational tasks is examined, performance on two Cray-2s with different memory access speeds is compared, and extrapolation to larger problems is discussed.
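
    As a rough illustration (not part of the original study), the reported speedups can be read through Amdahl's law to estimate the serial fraction they imply:

```python
def amdahl_speedup(serial_fraction, p):
    """Amdahl's law: speedup on p processors for a given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

def implied_serial_fraction(speedup, p):
    """Invert Amdahl's law to get the serial fraction implied by a measured speedup."""
    return (p / speedup - 1.0) / (p - 1.0)

# Reported results on four Cray-2 processors:
f_task  = implied_serial_fraction(3.54, 4)   # best single task group
f_whole = implied_serial_fraction(2.67, 4)   # entire three-dimensional solver
print(f"implied serial fraction, task group:  {f_task:.3f}")
print(f"implied serial fraction, whole solver: {f_whole:.3f}")
```

    Under this (simplistic) model the whole solver behaves as if roughly 17% of its work were serialized, versus about 4% for the best task group, which is consistent with the paper's focus on balancing the task load.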

  11. Quasi-Newton parallel geometry optimization methods

    NASA Astrophysics Data System (ADS)

    Burger, Steven K.; Ayers, Paul W.

    2010-07-01

    Algorithms for parallel unconstrained minimization of molecular systems are examined. The overall framework of minimization is the same except for the choice of directions for updating the quasi-Newton Hessian. Ideally these directions are chosen so that the updated Hessian gives steps that are the same as those of the Newton method. Three approaches to determining the update directions are presented: the straightforward approach of simply cycling through the Cartesian unit vectors (finite difference), a concurrent set of minimizations, and the Lanczos method. We show the importance of using preconditioning and a multiple secant update in these approaches. For the Lanczos algorithm, an initial set of directions is required to start the method, and a number of possibilities are explored. To test the methods we used the standard 50-dimensional analytic Rosenbrock function. Results are also reported for the histidine dipeptide, the isoleucine tripeptide, and cyclic adenosine monophosphate. All of these systems show a significant speed-up as the number of processors is increased, up to about eight processors.
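
    The n-dimensional Rosenbrock function used as the test problem above is standard; a minimal NumPy sketch of the function and its analytic gradient (the kind of quantities a quasi-Newton minimizer consumes) might look like:

```python
import numpy as np

def rosenbrock(x):
    # f(x) = sum_i 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def rosenbrock_grad(x):
    # Analytic gradient, assembled from each term's two contributions.
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g

x_star = np.ones(50)                  # global minimum at (1, ..., 1)
assert rosenbrock(x_star) == 0.0
assert np.allclose(rosenbrock_grad(x_star), 0.0)

# Central-difference check of one gradient component at a random point.
rng = np.random.default_rng(1)
x = rng.standard_normal(50)
h, i = 1e-6, 7
fd = (rosenbrock(x + h * np.eye(50)[i]) - rosenbrock(x - h * np.eye(50)[i])) / (2 * h)
assert abs(fd - rosenbrock_grad(x)[i]) < 1e-3
```

    The finite-difference check mirrors the paper's simplest update-direction strategy: each Cartesian unit vector probes one coordinate of the gradient (or, applied to gradients, one column of the Hessian).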

  12. HO2 rovibrational eigenvalue studies for nonzero angular momentum

    NASA Astrophysics Data System (ADS)

    Wu, Xudong T.; Hayes, Edward F.

    1997-08-01

    An efficient parallel algorithm is reported for determining all bound rovibrational energy levels for the HO2 molecule for nonzero angular momentum values, J=1, 2, and 3. Performance tests on the CRAY T3D indicate that the algorithm scales almost linearly when up to 128 processors are used. Sustained performance levels of up to 3.8 Gflops have been achieved using 128 processors for J=3. The algorithm uses a direct product discrete variable representation (DVR) basis and the implicitly restarted Lanczos method (IRLM) of Sorensen to compute the eigenvalues of the polyatomic Hamiltonian. Since the IRLM is an iterative method, it does not require storage of the full Hamiltonian matrix—it only requires the multiplication of the Hamiltonian matrix by a vector. When the IRLM is combined with a formulation such as DVR, which produces a very sparse matrix, both memory and computation times can be reduced dramatically. This algorithm has the potential to achieve even higher performance levels for larger values of the total angular momentum.
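
    The key point above, that an iterative eigensolver needs only a matrix-vector product rather than the stored matrix, can be sketched with a basic Lanczos iteration (a simplified illustration with full reorthogonalization, not Sorensen's IRLM, applied to a hypothetical matrix-free diagonal operator):

```python
import numpy as np

def lanczos(matvec, n, k, rng):
    """Build a k-step Lanczos tridiagonalization of a symmetric operator,
    given only its matrix-vector product, and return the Ritz values."""
    alpha, beta = [], []
    q_prev = np.zeros(n)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q = [q]
    b = 0.0
    for _ in range(k):
        w = matvec(q) - b * q_prev
        a = q @ w
        w -= a * q
        # Full reorthogonalization keeps the basis numerically orthogonal.
        for qi in Q:
            w -= (qi @ w) * qi
        b = np.linalg.norm(w)
        alpha.append(a)
        beta.append(b)
        if b < 1e-12:          # invariant subspace found; stop early
            break
        q_prev, q = q, w / b
        Q.append(q)
    T = (np.diag(alpha)
         + np.diag(beta[:len(alpha) - 1], 1)
         + np.diag(beta[:len(alpha) - 1], -1))
    return np.linalg.eigvalsh(T)   # Ritz values, ascending

# Matrix-free test operator: diag(1, 2, ..., n), never stored as a matrix.
n = 100
d = np.arange(1.0, n + 1)
ritz = lanczos(lambda v: d * v, n, 60, np.random.default_rng(0))
print(ritz[0], ritz[-1])   # extreme Ritz values, near 1 and 100
```

    As with the DVR Hamiltonian in the abstract, the operator here is applied on the fly, so memory scales with the vector length rather than the matrix size, and the extreme eigenvalues converge long before the Krylov space spans the full problem.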

  13. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve a vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting the performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
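
    The constant-execution-time case can be illustrated with a toy lane-utilization model (an assumption for illustration, not the paper's actual model): if a bank of N particles is processed in vector sweeps of width W, lanes sit idle only in the final partial sweep, so efficiency is N / (W * ceil(N/W)). A bank of at least 20 vector widths then keeps even the worst-case efficiency above about 95%, consistent with the 90% figure quoted above.

```python
import math

def vector_efficiency(bank_size, vector_width):
    """Fraction of vector lanes doing useful work when bank_size items
    are processed in full-width sweeps plus one final partial sweep."""
    sweeps = math.ceil(bank_size / vector_width)
    return bank_size / (vector_width * sweeps)

# Worst case over bank sizes in [20*W, 21*W): one extra particle forces
# a nearly empty final sweep, so efficiency bottoms out at (20*W + 1)/(21*W).
W = 64
worst_at_20x = min(vector_efficiency(n, W) for n in range(20 * W, 21 * W))
print(f"worst-case lane efficiency near bank = 20x width: {worst_at_20x:.3f}")
```

    The real algorithm is messier, since the bank drains unevenly as particles terminate, but the toy model shows why the bank-to-width ratio is the natural control parameter.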

  14. Adapting Unconventional Warfare Doctrine to Cyberspace Operations: An Examination of Hacktivist Based Insurgencies

    DTIC Science & Technology

    2015-06-12

    Unconventional Warfare, Cyberspace Operations, Cyber Warfare, Hacktivism, China, Russia, Georgia, Estonia, Umbrella Revolution, UW, Cyber, Guerilla, Hacktivist... Cyber Warfare... Internet, and cyber warfare, the nature of the human element in cyberspace exhibits only a scientific advancement in the evolution of warfare, not a

  15. Developing a Hybrid Virtualization Platform Design for Cyber Warfare Training and Education

    DTIC Science & Technology

    2010-06-01

    CYBER WARFARE TRAINING AND EDUCATION THESIS Kyle E. Stewart 2nd... Government. AFIT/GCE/ENG/10-06 DEVELOPING A HYBRID VIRTUALIZATION PLATFORM DESIGN FOR CYBER WARFARE TRAINING... APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED.

  16. LESSONS FROM THE FRONT: A CASE STUDY OF RUSSIAN CYBER WARFARE

    DTIC Science & Technology

    2015-12-01

    Lessons From The Front: A Case Study Of Russian Cyber Warfare looks to capitalize on the lessons learned from the alleged Russian cyber-offensive on... through the careful analysis and comparison of two disparate conflicts related by their collision with Russian cyber-warfare. Following case study

  17. Coming Soon: More Cyber Careers?

    Science.gov Websites

    exploring the possibility of creating a cyber career field for Army civilians," Lt. Gen. Edward C Programs and Posture," April 14. Establishing a cyber career management field for civilians may be working to implement a cyber career management field for enlisted personnel that will encompass accessions

  18. 78 FR 79411 - Announcement of Competition Under the America COMPETES Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-30

    ... announces the Cyber Grand Challenge (CGC), a prize competition under 15 U.S.C. 3719, the America COMPETES... cyber defense systems. The CGC seeks to engender a new generation of autonomous cyber defense... experts. FOR FURTHER INFORMATION CONTACT: All questions regarding the competition may be sent to Cyber...

  19. 75 FR 2187 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-14

    ... Review: New collection. Title: Authorized Cyber Assistant Host Application. Form: Not yet assigned. Description: The form is used by a business to apply to become an Authorized Cyber Assistant Host. Information... become a Cyber Assistant Host. Cyber Assistant is a software program that assists in the preparation of...

  20. 78 FR 6807 - Critical Infrastructure Protection and Cyber Security Trade Mission to Saudi Arabia and Kuwait...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-31

    ... Cyber Security Trade Mission to Saudi Arabia and Kuwait, September 28-October 1, 2013 AGENCY... coordinating and sponsoring an executive-led Critical Infrastructure Protection and Cyber Security mission to... on the cyber security, critical infrastructure protection, and emergency management, ports of entry...

  1. Cyber High School Students' Transition to a Traditional University

    ERIC Educational Resources Information Center

    Gracey, Dorothy M.

    2010-01-01

    This mixed-method study identifies cyber high school graduates' perceptions of the effect of a cyber high school education on successful transition to a traditional university. The study examined students' perceptions of the advantages and disadvantages their cyber education experience contributed to their academic and social transition to…

  2. Cyber-Threat Assessment for the Air Traffic Management System: A Network Controls Approach

    NASA Technical Reports Server (NTRS)

    Roy, Sandip; Sridhar, Banavar

    2016-01-01

    Air transportation networks are being disrupted with increasing frequency by failures in their cyber- (computing, communication, control) systems. Whether these cyber- failures arise due to deliberate attacks or incidental errors, they can have far-reaching impact on the performance of the air traffic control and management systems. For instance, a computer failure in the Washington DC Air Route Traffic Control Center (ZDC) on August 15, 2015, caused nearly complete closure of the Center's airspace for several hours. This closure had a propagative impact across the United States National Airspace System, causing changed congestion patterns and requiring placement of a suite of traffic management initiatives to address the capacity reduction and congestion. A snapshot of traffic on that day clearly shows the closure of the ZDC airspace and the resulting congestion at its boundary, which required augmented traffic management at multiple locations. Cyber- events also have important ramifications for private stakeholders, particularly the airlines. During the last few months, computer-system issues have caused several airlines' fleets to be grounded for significant periods of time: these include United Airlines (twice), LOT Polish Airlines, and American Airlines. Delays and regional stoppages due to cyber- events are even more common, and may have myriad causes (e.g., failure of the Department of Homeland Security systems needed for security check of passengers, see [3]). The growing frequency of cyber- disruptions in the air transportation system reflects a much broader trend in modern society: cyber- failures and threats are becoming increasingly pervasive, varied, and impactful. In consequence, an intense effort is underway to develop secure and resilient cyber- systems that can protect against, detect, and remove threats, see e.g. and its many citations.
The outcomes of this wide effort on cyber- security are applicable to the air transportation infrastructure, and indeed security solutions are being implemented in the current system. While these security solutions are important, they only provide a piecemeal solution. Particular computers or communication channels are protected from particular attacks, without a holistic view of the air transportation infrastructure. On the other hand, the above-listed incidents highlight that a holistic approach is needed, for several reasons. First, the air transportation infrastructure is a large scale cyber-physical system with multiple stakeholders and diverse legacy assets. It is impractical to protect every cyber- asset from known and unknown disruptions, and instead a strategic view of security is needed. Second, disruptions to the cyber- system can incur complex propagative impacts across the air transportation network, including its physical and human assets. Also, these implications of cyber- events are exacerbated or modulated by other disruptions and operational specifics, e.g. severe weather, operator fatigue or error, etc. These characteristics motivate a holistic and strategic perspective on protecting the air transportation infrastructure from cyber- events. The analysis of cyber- threats to the air traffic system is also inextricably tied to the integration of new autonomy into the airspace. The replacement of human operators with cyber functions leaves the network open to new cyber threats, which must be modeled and managed. Paradoxically, the mitigation of cyber events in the airspace will also likely require additional autonomy, given the fast time scale and myriad pathways of cyber-attacks which must be managed. The assessment of new vulnerabilities upon integration of new autonomy is also a key motivation for a holistic perspective on cyber threats.

  3. Cyber ACTS/SAASS: A Second Year of Command and Staff College for the Future Leaders of Our Cyber Forces

    DTIC Science & Technology

    2009-01-01

    objectives. The Air Force is struggling to determine the best way of developing offensive and defensive capabilities for cyber warfare. Our warfighting... education (IDE) cyber warfare program at the Air Force Institute of Technology (AFIT), located at Wright-Patterson AFB, Ohio. I propose that the Air... Force create a two-year professional military education (PME) path consisting of ACSC followed by AFIT’s cyber warfare program, paralleling the current path of ACSC followed by SAASS.

  4. Leadership of Cyber Warriors: Enduring Principles and New Directions

    DTIC Science & Technology

    2011-07-11

    cyber warfare threat against the United States, the creation of United States Cyber Command and the designation of cyberspace as a warfighting domain now necessitate study of the attributes of successful cyber warfare leaders and the leadership techniques required to successfully lead cyber warriors. In particular, we must develop an understanding of where traditional kinetic leadership paradigms succeed, where they fail, and where new techniques must be adopted. Leadership is not a one-size-fits-all endeavor. The capabilities and characteristics of

  5. CyberPetri at CDX 2016: Real-time Network Situation Awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arendt, Dustin L.; Best, Daniel M.; Burtner, Edwin R.

    CyberPetri is a novel visualization technique that provides a flexible map of the network based on available characteristics, such as IP address, operating system, or service. Previous work introduced CyberPetri as a visualization feature in Ocelot, a network defense tool that helped security analysts understand and respond to an active defense scenario. In this paper we present a case study in which we use the CyberPetri visualization technique to support real-time situation awareness during the 2016 Cyber Defense Exercise.

  6. Development and Validation of the Air Force Cyber Intruder Alert Testbed (CIAT)

    DTIC Science & Technology

    2016-07-27

    Validation of the Air Force Cyber Intruder Alert Testbed (CIAT) (contract FA8650-16-C-6722)... network analysts. Therefore, a new cyber STE focused on network analysts called the Air Force Cyber Intruder Alert Testbed (CIAT) was developed. This... Development and Validation of the Air Force Cyber Intruder Alert Testbed (CIAT) Gregory Funke, Gregory Dye, Brett Borghetti

  7. Recruiting the Cyber Leader: An Evaluation of the Human Resource Model Used for Recruiting the Army’s Cyber Operations Officer

    DTIC Science & Technology

    2017-09-01

    For the first time since the creation of the Special Forces branch in 1987, the Army authorized the creation of a new branch, the Cyber branch. With...management model. The purpose of our research is to evaluate the effectiveness of that model to recruit Cyber Operations Officers and to examine the...performance (MOPs) and measures of effectiveness (MOEs) based on data collected from: Army institutions; a survey of the Cyber Branch population; and the

  8. Assessment of NOAA Processed OceanSat-2 Scatterometer Ocean Surface Vector Wind Products

    NASA Astrophysics Data System (ADS)

    Chang, P.; Jelenak, Z.; Soisuvarn, S.

    2011-12-01

    The Indian Space Research Organization (ISRO) launched the Oceansat-2 satellite on 23 September 2009. Oceansat-2 carries a radar scatterometer instrument (OSCAT) capable of measuring ocean surface vector winds (OSVW) and an ocean color monitor (OCM), which will retrieve sea spectral reflectance. Oceansat-2 is ISRO's second in a series of satellites dedicated to ocean research. It will provide continuity to the services and applications of the Oceansat-1 OCM data along with additional data from a Ku-band pencil beam scatterometer. Oceansat-2 is a three-axis, body-stabilized spacecraft placed into a near circular sun-synchronous orbit, at an altitude of 720 kilometers (km), with an equatorial crossing time of around 1200 hours. ISRO, the National Oceanic and Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA) and the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT) share the common goal of optimizing the quality and maximizing the utility of the Oceansat-2 data for the benefit of future global and regional scientific and operational applications. NOAA, NASA and EUMETSAT have been collaboratively working with ISRO on the assessment and analysis of OSCAT data to help facilitate continuation of QuikSCAT's decade-long Ku-band scatterometer data record. NOAA's interests are focused on the utilization of OSCAT data to support operational weather forecasting and warning in the marine environment. OSCAT has the potential to significantly mitigate the loss of NASA's QuikSCAT, which has negatively impacted NOAA's marine forecasting and warning services. Since March 2011 NOAA has been receiving near real time OSCAT measurements via EUMETSAT. NOAA has developed its own OSCAT wind processor. This processor produces ocean surface vector winds with a resolution of 25 km. The performance of the NOAA OSCAT product and its availability to the larger user community will be presented and discussed.

  9. Cyber security issues in online games

    NASA Astrophysics Data System (ADS)

    Zhao, Chen

    2018-04-01

    With the rapid development of the Internet, online gaming has become a form of entertainment for many young people in the modern era. However, in recent years, cyber security issues in online games have emerged in an endless stream, which have also drawn great attention from many game operators. Common cyber security problems in games include information disclosure and cyber-attacks. These problems will directly or indirectly cause economic losses to gamers. Many gaming companies are enhancing the stability and security of their network or gaming systems in order to enhance the gaming user experience. This article investigates cyber security issues in online games by introducing the background and some common cyber security threats, and by proposing potential solutions. Finally, it speculates on future research directions for the cyber security issues of online games in the hope of providing feasible solutions and useful information for game operators.

  10. Mission Assurance Modeling and Simulation: A Cyber Security Roadmap

    NASA Technical Reports Server (NTRS)

    Gendron, Gerald; Roberts, David; Poole, Donold; Aquino, Anna

    2012-01-01

    This paper proposes a cyber security modeling and simulation roadmap to enhance mission assurance governance and establish risk reduction processes within constrained budgets. The term mission assurance stems from risk management work by Carnegie Mellon's Software Engineering Institute in the late 1990s. By 2010, the Defense Information Systems Agency revised its cyber strategy and established the Program Executive Officer-Mission Assurance. This highlights a shift from simply protecting data to balancing risk and begins a necessary dialogue to establish a cyber security roadmap. The Military Operations Research Society has recommended a cyber community of practice, recognizing there are too few professionals having both cyber and analytic experience. The authors characterize the limited body of knowledge in this symbiotic relationship. This paper identifies operational and research requirements for mission assurance M&S supporting defense and homeland security. M&S techniques are needed for enterprise oversight of cyber investments, test and evaluation, policy, training, and analysis.

  11. The relationship between young adults' beliefs about anonymity and subsequent cyber aggression.

    PubMed

    Wright, Michelle F

    2013-12-01

    Anonymity is considered a key motivator for cyber aggression, but few investigations have focused on the connection between anonymity and the subsequent engagement in aggression through the cyber context. The present longitudinal study utilized structural equation modeling to reveal indirect associations between two types of anonymity (i.e., punishment by authority figures and retaliation from the target) and later cyber aggression among 130 young adults. These relationships were examined through the influence of beliefs about not getting caught and not believing in the permanency of online content. Findings indicated that both forms of anonymity were related to cyber aggression 6 months later through two explanatory mechanisms (i.e., confidence with not getting caught and believing online content is not permanent), after controlling for gender and cyber aggression at Time 1. The implications of these findings are discussed, and an appeal for additional research investigating cyber aggression among young adults is given.

  12. Joint Command and Control of Cyber Operations: The Joint Force Cyber Component Command (JFCCC)

    DTIC Science & Technology

    2012-05-04

    relies so heavily on complex command and control systems and interconnectivity in general, cyber warfare has become a serious topic of interest at the... defensive cyber warfare into current and future operations and plans. In particular, Joint Task Force (JTF) Commanders must develop an optimum method to

  13. Creating Cyber Libraries: An Instructional Guide for School Library Media Specialists. School Librarianship Series.

    ERIC Educational Resources Information Center

    Craver, Kathleen W.

    This book provides school library media specialists (SLMSs) with practical guidelines and suggested Internet sites for designing cyber libraries that are reflective of their school's research, curriculum, and recreational reading needs. An introduction provides SLMSs with a rationale for designing cyber libraries. Chapter 1, Cyber Library…

  14. Cyberprints: Identifying Cyber Attackers by Feature Analysis

    ERIC Educational Resources Information Center

    Blakely, Benjamin A.

    2012-01-01

    The problem of attributing cyber attacks is one of increasing importance. Without a solid method of demonstrating the origin of a cyber attack, any attempts to deter would-be cyber attackers are wasted. Existing methods of attribution make unfounded assumptions about the environment in which they will operate: omniscience (the ability to gather,…

  15. Command and Control of the Department of Defense in Cyberspace

    DTIC Science & Technology

    2011-03-24

    superiority are also unclassified and constantly probed by intruders and cyber criminals.5 To secure and defend our nation from cyber attacks and conduct...USCYBERCOM to use both offensive and defensive cyber weapons and the tools necessary to hunt down cyber criminals based on rule of law and the legal

  16. 75 FR 18819 - Second DRAFT NIST Interagency Report (NISTIR) 7628, Smart Grid Cyber Security Strategy and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-13

    ...-0143-01] Second DRAFT NIST Interagency Report (NISTIR) 7628, Smart Grid Cyber Security Strategy and... (NIST) seeks comments on the second draft of NISTIR 7628, Smart Grid Cyber Security Strategy and..., vulnerability categories, bottom-up analysis, individual logical interface diagrams, and the cyber security...

  17. Cyber Children: What Parents Need to Know

    ERIC Educational Resources Information Center

    Roberts, Kevin J.

    2010-01-01

    Parents need to be aware of the dangers and the opportunities the cyber world offers. Video games are being used in the classroom. Commerce is increasingly taking place online and computers are indispensable in the workplace. A cyber-oriented child possesses some great advantages. The author urges parents to become experts in the cyber world so…

  18. An Analysis of Pennsylvania's Cyber Charter Schools. Issue Brief

    ERIC Educational Resources Information Center

    Jack, James; Sludden, John; Schott, Adam

    2013-01-01

    Pennsylvania's first cyber charter school opened in 1998, enrolling 44 full-time students. From this modest beginning, Pennsylvania's cyber charter sector has grown to 16 schools enrolling 35,000 students from all but one school district in the Commonwealth. Pennsylvania has one of the nation's most extensive cyber charter sectors, and six…

  19. Uncovering the Structure of and Gender and Developmental Differences in Cyber Bullying

    ERIC Educational Resources Information Center

    Griezel, Lucy; Finger, Linda R.; Bodkin-Andrews, Gawaian H.; Craven, Rhonda G.; Yeung, Alexander Seeshing

    2012-01-01

    Although literature on traditional bullying is abundant, a limited body of sound empirical research exists regarding its newest form: cyber bullying. The sample comprised Australian secondary students (N = 803) and aimed to identify the underlying structure of cyber bullying, and differences in traditional and cyber bullying behaviors across…

  20. Cyber-Physical System Security of a Power Grid: State-of-the-Art

    DOE PAGES

    Sun, Chih -Che; Liu, Chen -Ching; Xie, Jing

    2016-07-14

    Here, as part of the smart grid development, more and more technologies are developed and deployed on the power grid to enhance the system reliability. A primary purpose of the smart grid is to significantly increase the capability of computer-based remote control and automation. As a result, the level of connectivity has become much higher, and cyber security also becomes a potential threat to the cyber-physical systems (CPSs). In this paper, a survey of the state-of-the-art is conducted on the cyber security of the power grid concerning issues of: the structure of CPSs in a smart grid; cyber vulnerability assessment; cyber protection systems; and testbeds of a CPS. At Washington State University (WSU), the Smart City Testbed (SCT) has been developed to provide a platform to test, analyze and validate defense mechanisms against potential cyber intrusions. A test case is provided in this paper to demonstrate how a testbed helps the study of cyber security and the anomaly detection system (ADS) for substations.

  1. Towards a Research Agenda for Cyber Friendly Fire

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greitzer, Frank L.; Clements, Samuel L.; Carroll, Thomas E.

    Historical assessments of combat fratricide reveal principal contributing factors in the effects of stress, degradation of skills due to continuous operations or sleep deprivation, poor situation awareness, and lack of training and discipline in offensive/defense response selection. While these problems are typically addressed in R&D focusing on traditional ground-based combat, there is also an emerging need for improving situation awareness and decision making on defensive/offensive response options in the cyber defense arena, where a mistaken response to an actual or perceived cyber attack could lead to destruction or compromise of friendly cyber assets. The purpose of this report is to examine cognitive factors that may affect cyber situation awareness and describe possible research needs to reduce the likelihood and effects of "friendly cyber fire" on cyber defenses, information infrastructures, and data. The approach is to examine concepts and methods that have been described in research applied to the more traditional problem of mitigating the occurrence of combat identification and fratricide. Application domains of interest include cyber security defense against external or internal (insider) threats.

  2. Cyber-Physical System Security of a Power Grid: State-of-the-Art

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Chih -Che; Liu, Chen -Ching; Xie, Jing

    Here, as part of the smart grid development, more and more technologies are developed and deployed on the power grid to enhance the system reliability. A primary purpose of the smart grid is to significantly increase the capability of computer-based remote control and automation. As a result, the level of connectivity has become much higher, and cyber security also becomes a potential threat to the cyber-physical systems (CPSs). In this paper, a survey of the state-of-the-art is conducted on the cyber security of the power grid concerning issues of: the structure of CPSs in a smart grid; cyber vulnerability assessment; cyber protection systems; and testbeds of a CPS. At Washington State University (WSU), the Smart City Testbed (SCT) has been developed to provide a platform to test, analyze and validate defense mechanisms against potential cyber intrusions. A test case is provided in this paper to demonstrate how a testbed helps the study of cyber security and the anomaly detection system (ADS) for substations.

  3. Differences in predictors of traditional and cyber-bullying: a 2-year longitudinal study in Korean school children.

    PubMed

    Yang, Su-Jin; Stewart, Robert; Kim, Jae-Min; Kim, Sung-Wan; Shin, Il-Seon; Dewey, Michael E; Maskey, Sean; Yoon, Jin-Sang

    2013-05-01

    Traditional bullying has received considerable research attention, but the emerging phenomenon of cyber-bullying much less so. Our study aims to investigate environmental and psychological factors associated with traditional and cyber-bullying. In a school-based 2-year prospective survey, information was collected on 1,344 children aged 10, including bullying behavior/experience, depression, anxiety, coping strategies, self-esteem, and psychopathology. Parents reported demographic data, general health, and attention-deficit hyperactivity disorder (ADHD) symptoms. These were investigated in relation to traditional and cyber-bullying perpetration and victimization at age 12. Male gender and depressive symptoms were associated with all types of bullying behavior and experience. Living with a single parent was associated with perpetration of traditional bullying, while higher ADHD symptoms were associated with victimization from it. Lower academic achievement and lower self-esteem were associated with cyber-bullying perpetration and victimization, and anxiety symptoms with cyber-bullying perpetration. After adjustment, previous bullying perpetration was associated with victimization from cyber-bullying but not with other outcomes. Cyber-bullying has different predictors from traditional bullying, and intervention programmes need to take these into consideration.

  4. An Examination of the Partner Cyber Abuse Questionnaire in a College Student Sample

    PubMed Central

    Wolford-Clevenger, Caitlin; Zapor, Heather; Brasfield, Hope; Febres, Jeniimarie; Elmquist, JoAnna; Brem, Meagan; Shorey, Ryan C.; Stuart, Gregory L.

    2015-01-01

    Objective To examine the factor structure and convergent validity of a newly developed measure of an understudied form of partner abuse, cyber abuse, and to examine the prevalence of, and gender differences in, victimization by cyber abuse. Method College students in a dating relationship (N = 502) completed the Partner Cyber Abuse Questionnaire (Hamby, 2013), as well as measures of partner abuse victimization and depression. Results Using exploratory factor analysis, we determined that a one-factor solution was the best fitting model, both statistically and conceptually. The cyber abuse victimization factor was correlated with depressive symptoms and with physical, psychological, and sexual partner abuse victimization, supporting the convergent validity of the measure. The overall prevalence of victimization by cyber abuse was 40%, with victimization by specific acts ranging from 2–31%. Men and women did not differ in their victimization by cyber abuse. Conclusions Cyber abuse is prevalent among college students and occurs concurrently with other forms of partner abuse and depressive symptoms. Given the interrelated nature of partner abuse forms, prevention and intervention programs should address partner abuse occurring in person and through technology. Cyber abuse should also be considered in the conceptualization and measurement of partner abuse to more fully understand this social problem. PMID:27014498

  5. The rate of cyber dating abuse among teens and how it relates to other forms of teen dating violence.

    PubMed

    Zweig, Janine M; Dank, Meredith; Yahner, Jennifer; Lachman, Pamela

    2013-07-01

    To date, little research has documented how teens might misuse technology to harass, control, and abuse their dating partners. This study examined the extent of cyber dating abuse-abuse via technology and new media-in youth relationships and how it relates to other forms of teen dating violence. A total of 5,647 youth from ten schools in three northeastern states participated in the survey, of whom 3,745 reported currently being in a dating relationship or having been in one during the prior year (52% were female; 74% White). Just over a quarter of youth in a current or recent relationship said that they experienced some form of cyber dating abuse victimization in the prior year, with females reporting more cyber dating abuse victimization than males (particularly sexual cyber dating abuse). One out of ten youth said that they had perpetrated cyber dating abuse, with females reporting greater levels of non-sexual cyber dating abuse perpetration than males; by contrast, male youth were significantly more likely to report perpetrating sexual cyber dating abuse. Victims of sexual cyber dating abuse were seven times more likely to have also experienced sexual coercion (55 vs. 8%) than were non-victims, and perpetrators of sexual cyber dating abuse were 17 times more likely to have also perpetrated sexual coercion (34 vs. 2%) than were non-perpetrators. Implications for practice and future research are discussed.
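
    The headline multipliers above follow directly from the percentages reported; a quick sketch (pure Python, with the figures taken from the abstract) reproduces them:

```python
# Ratios of prevalence among the exposed vs. comparison groups,
# using the percentages quoted in the abstract.
def prevalence_ratio(pct_exposed_group, pct_comparison_group):
    return pct_exposed_group / pct_comparison_group

victim_ratio = prevalence_ratio(55, 8)       # sexual coercion: victims vs. non-victims
perpetrator_ratio = prevalence_ratio(34, 2)  # sexual coercion: perpetrators vs. non-perpetrators
```

    `victim_ratio` comes out near 6.9, which the abstract rounds to "seven times"; `perpetrator_ratio` is exactly 17.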

  6. A cyber-event correlation framework and metrics

    NASA Astrophysics Data System (ADS)

    Kang, Myong H.; Mayfield, Terry

    2003-08-01

    In this paper, we propose a cyber-event fusion, correlation, and situation assessment framework that, when instantiated, will allow cyber defenders to better understand the local, regional, and global cyber-situation. This framework, with associated metrics, can be used to guide assessment of our existing cyber-defense capabilities, and to help evaluate the state of cyber-event correlation research and where we must focus our future cyber-event correlation research. The framework, based on the cyber-event gathering activities and analysis functions, consists of five operational steps, each of which provides a richer set of contextual information to support greater situational understanding. The first three steps are categorically depicted as increasingly richer and broader-scoped contexts achieved through correlation activity, while in the final two steps, these richer contexts are achieved through analytical activities (situation assessment, and threat analysis & prediction). Category 1 Correlation focuses on the detection of suspicious activities and the correlation of events from a single cyber-event source. Category 2 Correlation clusters the same or similar events from multiple detectors that are located at close proximity and prioritizes them. Finally, the events from different time periods and event sources at different location/regions are correlated at Category 3 to recognize the relationship among different events. This is the category that focuses on the detection of large-scale and coordinated attacks. The situation assessment step (Category 4) focuses on the assessment of cyber asset damage and the analysis of the impact on missions. The threat analysis and prediction step (Category 5) analyzes attacks based on attack traces and predicts the next steps. Metrics that can distinguish correlation and cyber-situation assessment tools for each category are also proposed.
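
    The clustering step of Category 2 can be illustrated with a minimal sketch (this is not the authors' implementation; the event fields and the 5-second window are illustrative assumptions):

```python
from collections import defaultdict

def cluster_events(events, window=5.0):
    """Category 2-style correlation sketch: group events that share a
    signature and arrive within `window` seconds of each other across
    multiple nearby detectors, then prioritize multi-detector clusters."""
    by_signature = defaultdict(list)
    for ev in sorted(events, key=lambda e: e["time"]):
        by_signature[ev["signature"]].append(ev)

    clusters = []
    for evs in by_signature.values():
        current = [evs[0]]
        for ev in evs[1:]:
            if ev["time"] - current[-1]["time"] <= window:
                current.append(ev)   # same burst of activity
            else:
                clusters.append(current)
                current = [ev]
        clusters.append(current)

    # Clusters confirmed by more distinct detectors rank higher.
    clusters.sort(key=lambda c: len({e["detector"] for e in c}), reverse=True)
    return clusters

events = [
    {"time": 0.0,  "detector": "ids-A", "signature": "port-scan"},
    {"time": 2.0,  "detector": "ids-B", "signature": "port-scan"},
    {"time": 3.5,  "detector": "ids-A", "signature": "port-scan"},
    {"time": 60.0, "detector": "ids-A", "signature": "login-fail"},
]
clusters = cluster_events(events)
```

    The three port-scan reports collapse into one prioritized cluster seen by two detectors, while the isolated login failure stays separate.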

  7. Cyber / Physical Security Vulnerability Assessment Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonald, Douglas G.; Simpkins, Bret E.

    Both the physical protection and cyber security domains offer solutions for the discovery of vulnerabilities through the use of various assessment processes and software tools. Each vulnerability assessment (VA) methodology provides the ability to identify and categorize vulnerabilities and quantifies the risks within its own area of expertise. Neither approach fully represents the true potential security risk to a site and/or facility, nor comprehensively assesses the overall security posture. The technical approach to solving this problem was to identify methodologies and processes that blend the physical and cyber security assessments, and to develop tools to accurately quantify the unaccounted-for risk. SMEs from both the physical and cyber security domains developed the blending methodologies and cross-trained each other on the various aspects of the physical and cyber security assessment processes. A local critical infrastructure entity volunteered to host a proof-of-concept physical/cyber security assessment, and the lessons learned have been leveraged by this effort. The four potential modes of attack an adversary can use in approaching a target are: Physical Only Attack, Cyber Only Attack, Physical Enabled Cyber Attack, and Cyber Enabled Physical Attack. The Physical Only and Cyber Only pathway analyses are the two most widely analyzed attack modes. The pathway from an off-site location to the desired target location is dissected to ensure adversarial activity can be detected and neutralized by the protection strategy prior to completion of a predefined task. This methodology typically explores a one-way attack from the public space (or common area) inward toward the target. The Physical Enabled Cyber Attack and the Cyber Enabled Physical Attack are much more intricate. Both scenarios involve beginning in one domain to effect change in the other, then backing outward to take advantage of the reduced system effectiveness before penetrating further into the defenses. The proper identification and assessment of the overlapping areas (and the interaction between them) in the VA process is necessary to accurately assess the true risk.

  8. 1988 IEEE Aerospace Applications Conference, Park City, UT, Feb. 7-12, 1988, Digest

    NASA Astrophysics Data System (ADS)

    The conference presents papers on microwave applications, data and signal processing applications, related aerospace applications, and advanced microelectronic products for the aerospace industry. Topics include a high-performance antenna measurement system, microwave power beaming from earth to space, the digital enhancement of microwave component performance, and a GaAs vector processor based on parallel RISC microprocessors. Consideration is also given to unique techniques for reliable SBNR architectures, a linear analysis subsystem for CSSL-IV, and a structured singular value approach to missile autopilot analysis.

  9. Multitasking the INS3D-LU code on the Cray Y-MP

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod; Yoon, Seokkwan

    1991-01-01

    This paper presents the results of multitasking the INS3D-LU code on eight processors. The code is a full Navier-Stokes solver for incompressible fluid in three dimensional generalized coordinates using a lower-upper symmetric-Gauss-Seidel implicit scheme. This code has been fully vectorized on oblique planes of sweep and parallelized using autotasking with some directives and minor modifications. The timing results for five grid sizes are presented and analyzed. The code has achieved a processing rate of over one Gflops.

  10. Acousto-Optical Vector Matrix Product Processor: Implementation Issues

    DTIC Science & Technology

    1989-04-25

    power by a factor of 3.8. The acoustic velocity in longitudinal TeO2 is 4200 m/s, almost the same as the 4100 m/s acoustic velocity in dense flint glass ... field via an Interaction Model AOD150 dense flint glass Bragg cell. The cell's specifications are listed in the table below. BRAGG CELL SPECIFICATIONS ... 39 ns intervals). Since the speed of sound in dense flint glass is 4100 m/s, the acoustic field generated in a 10 μs interval is distributed over a 4.1 ...

  11. Detection Performance of Horizontal Linear Hydrophone Arrays in Shallow Water.

    DTIC Science & Technology

    1980-12-15

    ... random phase; G gain; θ angle interval; P covariance matrix; h processor vector; H matrix matched filter; generalized beamformer; I unity matrix ... SACLANTCEN SR ... the gain of the optimum linear processor (OLP) relative to an omnidirectional sensor is G = (h* P_s h) / (h* Q h) [Eq. 47]. The following two sections evaluate a few examples of application of the OLP. ... At broadside the signal covariance matrix reduces to a dyadic, P_s = s s*; therefore, the gain (e.g., Eq. 37) becomes tr(H* P_s H) ...

  12. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In this tutorial, we will discuss the top five current RISC microprocessors: the IBM Power2, which is used in the IBM RS6000/590 workstation and in the IBM SP2 parallel supercomputer; the DEC Alpha, which is used in the DEC Alpha workstation and in the Cray T3D; the MIPS R8000, which is used in the SGI Power Challenge; the HP PA-RISC 7100, which is used in the HP 700 series workstations and in the Convex Exemplar; and the Cray proprietary processor, which is used in the new Cray J916. The architecture of these microprocessors will first be presented. The effective performance of these processors will then be compared, both by citing standard benchmarks and in the context of implementing real applications. In the process, different programming models such as data parallel (CM Fortran and HPF) and message passing (PVM and MPI) will be introduced and compared. The latest NAS Parallel Benchmark (NPB) absolute performance and performance per dollar figures will be presented. The next generation of the NPB will also be described. The tutorial will conclude with a discussion of general trends in the field of high performance computing, including likely future developments in hardware and software technology, and the relative roles of vector supercomputers, tightly coupled parallel computers, and clusters of workstations. This tutorial will provide a unique cross-machine comparison not available elsewhere.

  13. Cyber Analogies

    DTIC Science & Technology

    2014-02-28

    distribution is unlimited. This anthology of cyber analogies will resonate with readers whose duties call for fresh insights. THE CASE FOR ANALOGIES: All of us on the cyber analogies team hope that this anthology will resonate with readers whose duties call for them.

  14. DETERMINING ELECTRONIC AND CYBER ATTACK RISK LEVEL FOR UNMANNED AIRCRAFT IN A CONTESTED ENVIRONMENT

    DTIC Science & Technology

    2016-08-01

    Air Command and Staff College, Air University: Determining Electronic and Cyber Attack Risk Level for Unmanned Aircraft in a Contested Environment. During operations in a contested air environment, adversary electronic warfare (EW) and cyber-attack capability will pose a high ... Russian Federation Electronic Warfare Systems ... Chinese Cyber Warfare Program

  15. An Examination of the Relationship between Self-Control and Cyber Victimization in Adolescents

    ERIC Educational Resources Information Center

    Peker, Adem

    2017-01-01

    Purpose: Cyber bullying is a new phenomenon which adversely affects young people. Exposure to cyber bullying can negatively affect mental health. The aim of this study is to examine the predictive effect of self-control on cyber victimization in adolescents. Research Methods: The study group was composed of 353 Turkish secondary school…

  16. Defining Cyber and Focusing the Military’s Role in Cyberspace

    DTIC Science & Technology

    2013-03-01

    Service (USSS) and U.S. Immigration and Customs Enforcement (ICE) to investigate cyber criminals. DoD's role is not only to defend its own ... identify trends, tactics, techniques, and procedures that cyber criminals use so that if the same type of events meet the definition of cyber war, DoD ...

  17. Discussing Cyber Ethics with Students Is Critical

    ERIC Educational Resources Information Center

    Kruger, Robert

    2003-01-01

    As computers become a larger part of the curriculum, educators everywhere are being asked to take a stand for cyber ethics, the right and wrong of computer and Internet use. Teachers cannot always depend on parents to instill cyber ethics. Parents may not know or follow the rules, either. Once students understand cyber ethics, they may have a…

  18. Interview with a Cyber-Student: A Look behind Online Cheating

    ERIC Educational Resources Information Center

    Davis, Julia

    2014-01-01

    This case study offers insights into the motivation and experiences of a cyber-student, an individual who completes all or portions of an online class for the registered student. The cyber-student shares information on the inner workings of online companies specializing in matching cyber-students with potential clients. A portrait of both a…

  19. Middle School Students' Perceptions of and Responses to Cyber Bullying

    ERIC Educational Resources Information Center

    Holfeld, Brett; Grabe, Mark

    2012-01-01

    This study explored the nature and extent of middle school students' (n = 665) experiences with cyber bullying. Approximately one in five students reported being cyber bullied in the past year, with 55% of those students being repeatedly victimized within the past 30 days. Female students were more likely to be involved in cyber bullying (victim,…

  20. Cyber Asynchronous versus Blended Cyber Approach in Distance English Learning

    ERIC Educational Resources Information Center

    Ge, Zi-Gang

    2012-01-01

    This study aims to compare the single cyber asynchronous learning approach with the blended cyber learning approach in distance English education. Two classes of 70 students participated in this study, which lasted one semester of about four months, with one class using the blended approach for their English study and the other only using the…

  1. Cyber Victimization and Perceived Stress: Linkages to Late Adolescents' Cyber Aggression and Psychological Functioning

    ERIC Educational Resources Information Center

    Wright, Michelle F.

    2015-01-01

    The present study examined multiple sources of strain, in particular cyber victimization and perceived stress from parents, peers, and academics, in relation to late adolescents' (ages 16-18; N = 423) cyber aggression, anxiety, and depression, each assessed 1 year later (Time 2). Three-way interactions revealed that the relationship between Time 1…

  2. Psychological Impact of Cyber-Bullying: Implications for School Counsellors

    ERIC Educational Resources Information Center

    Nordahl, Jennifer; Beran, Tanya; Dittrick, Crystal J.

    2013-01-01

    Cyber-bullying is a significant problem for children today. This study provides evidence of the psychological impact of cyber-bullying among victimized children ages 10 to 17 years (M = 12.48, SD = 1.79) from 23 urban schools in a western province of Canada (N = 239). Students who were cyber-bullied reported high levels of anxious,…

  3. The Prevalence of Cyber Bullying Victimization and Its Relationship to Academic, Social, and Emotional Adjustment among College Students

    ERIC Educational Resources Information Center

    Beebe, Jennifer Elizabeth

    2010-01-01

    The current study investigated the prevalence and frequency of cyber bullying victimization and examined the impact of cyber bullying on academic, social, and emotional college adjustment. Participants were recruited from two universities in the United States. Participants completed the Revised Cyber Bullying Survey (Kowalski & Limber, 2007)…

  4. Software Acquisition in the Age of Cyber Warfare

    DTIC Science & Technology

    2011-05-01

    School of Systems and Logistics (education, service, research): Software Acquisition in the Age of Cyber Warfare, Maj ... May 2011 ... AFIT Cyber 200/300 Courses; Cyber Warfare IDE Program ... Special emphasis on: Enterprise Integration (Active Directory, PKI), Security ...

  5. Cyber-Informed Engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert S.; Benjamin, Jacob; Wright, Virginia L.

    A continuing challenge for engineers who utilize digital systems is to understand the impact of cyber-attacks across the entire product and program lifecycle. This is a challenge because of the evolving nature of cyber threats, which may impact the design, development, deployment, and operational phases of all systems. Cyber-Informed Engineering is the process by which engineers are made aware of how to use their engineering knowledge to improve cyber security, both in the processes by which they architect and design components and in the services and security of the components themselves.

  6. Gamification for Measuring Cyber Security Situational Awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fink, Glenn A.; Best, Daniel M.; Manz, David O.

    Cyber defense competitions arising from U.S. service academy exercises offer a platform for collecting data that can inform research ranging from characterizing the ideal cyber warrior to describing behaviors during challenging cyber defense situations. This knowledge could lead to better preparation of cyber defenders in both military and civilian settings. This paper describes how one regional competition, the PRCCDC, a participant in the national CCDC program, conducted proof-of-concept experimentation to collect data during the annual competition for later analysis. The intent is to create an ongoing research agenda that expands on this work and incorporates augmented cognition and gamification methods for measuring cybersecurity situational awareness under the stress of cyber attack.

  7. Design of Hack-Resistant Diabetes Devices and Disclosure of Their Cyber Safety.

    PubMed

    Sackner-Bernstein, Jonathan

    2017-03-01

    The focus of the medical device industry and regulatory bodies on cyber security parallels that in other industries, centering primarily on risk assessment and user education, as well as the recognition of and response to infiltration. However, transparency about the safety of marketed devices is lacking, and developers are not embracing optimal design practices for new devices. Achieving cyber safe diabetes devices: Improving clinicians' and patients' understanding of cyber safety, and informing decisions about device use practices, requires that device manufacturers disclose the results of their cyber security testing. Furthermore, developers should immediately shift their design processes to deliver better cyber safety, exemplified by the use of state-of-the-art encryption, secure operating systems, and memory protections against malware.

  8. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

    It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works without building an analytical mathematical model of the diagnostic object, so it is a practical approach to diagnostic problems in complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier consists of a dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network, and a Hidden Markov Model. First, the dynamic observation vector in measuring space is processed by DTW to obtain an error vector containing the fault features of the system under test. A SOFM network is then used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the HMM classifier. The use of dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aeroengines, making it possible to diagnose such systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend and that the fault-pattern classifier is efficient and convenient for detecting and diagnosing new faults.
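
    The DTW stage can be sketched in a few lines; this is the textbook algorithm rather than the authors' code, and the sequences are illustrative:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences:
    minimum cumulative |a_i - b_j| cost over all monotone alignments."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match
    return D[n][m]

reference = [0, 0, 1, 2, 1, 0, 0]   # nominal dynamic response
shifted   = [0, 1, 2, 1, 0, 0, 0]   # same pattern, time-shifted
faulty    = [0, 2, 0, 2, 0, 2, 0]   # different (fault) pattern
```

    The time-shifted copy warps onto the reference at zero cost, while the fault pattern keeps a large distance; this tolerance to time distortion is what makes DTW useful for dynamic process vectors.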

  9. Automated vector selection of SIVQ and parallel computing integration MATLAB™: Innovations supporting large-scale and high-throughput image analysis studies.

    PubMed

    Cheng, Jerome; Hipp, Jason; Monaco, James; Lucas, David R; Madabhushi, Anant; Balis, Ulysses J

    2011-01-01

    Spatially invariant vector quantization (SIVQ) is a texture and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for the exploration of its deployment on high-throughput computing platforms, with the hypothesis that such an exercise will result in performance gains that scale linearly with increasing processor count. An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the use of the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. The automated vector selection process was demonstrated in two use cases: first, identifying malignant colonic epithelium, and second, identifying soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. 
Finally, as an additional effort directed toward attaining high-throughput capability for the SIVQ algorithm, we demonstrated its successful incorporation with the MATrix LABoratory (MATLAB™) application interface. The SIVQ algorithm is suitable for automated vector selection settings and high-throughput computation.
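
    The AUC figure of merit used to rank candidate vectors can be computed directly from matching scores. A minimal sketch (the equivalence of AUC to the Mann-Whitney statistic is standard; the scores themselves are made up):

```python
def roc_auc(positive_scores, negative_scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive example outscores a randomly chosen
    negative one (ties count one half)."""
    wins = 0.0
    for p in positive_scores:
        for n in negative_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positive_scores) * len(negative_scores))

# Hypothetical match scores of one candidate vector against the
# user-marked positive and negative ground-truth regions.
auc = roc_auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2])
```

    A perfectly discriminating vector scores 1.0; here the single inversion (0.4 below 0.5) gives 8/9.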

  10. ORNL Cray X1 evaluation status report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, P.K.; Alexander, R.A.; Apra, E.

    2004-05-01

    On August 15, 2002 the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials and astrophysics. ''This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership,'' said Dr. Raymond L. Orbach, director of the department's Office of Science. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, memory subsystem, scalability of the architecture, and software environment, and to predict the expected sustained performance on key DOE application codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations. The best results are shown on large problems, where it is not possible to fit the entire problem into the cache of the processors. These large problems are exactly the types of problems that are important for the DOE and ultra-scale simulation. Application performance is found to be markedly improved by this architecture: - Large-scale simulations of high-temperature superconductors run 25 times faster than on an IBM Power4 cluster using the same number of processors. - Best performance of the parallel ocean program (POP v1.4.3) is 50 percent higher than on Japan's Earth Simulator and 5 times higher than on an IBM Power4 cluster. - A fusion application, global GYRO transport, was found to be 16 times faster on the X1 than on an IBM Power3. The increased performance allowed simulations to fully resolve questions raised by a prior study. - The transport kernel in the AGILE-BOLTZTRAN astrophysics code runs 15 times faster than on an IBM Power4 cluster using the same number of processors. 
- Molecular dynamics simulations related to the phenomenon of photon echo run 8 times faster than previously achieved. Even at 256 processors, the Cray X1 system is already outperforming other supercomputers with thousands of processors for a certain class of applications, such as climate modeling and some fusion applications. This evaluation is the outcome of a number of meetings with both high-performance computing (HPC) system vendors and application experts over the past 9 months and has received broad-based support from the scientific community and other agencies.

  11. Adolescents' Involvement in Cyber Bullying and Perceptions of School: The Importance of Perceived Peer Acceptance for Female Adolescents.

    PubMed

    Betts, Lucy R; Spenser, Karin A; Gardner, Sarah E

    2017-01-01

    Young people are spending increasing amounts of time using digital technology and, as such, are at great risk of being involved in cyber bullying as a victim, bully, or bully/victim. Despite cyber bullying typically occurring outside the school environment, the impact of being involved in cyber bullying is likely to spill over to school. Fully 285 11- to 15-year-olds (125 male and 160 female, M age = 12.19 years, SD = 1.03) completed measures of cyber bullying involvement, self-esteem, trust, perceived peer acceptance, and perceptions of the value of learning and the importance of school. For young women, involvement in cyber bullying as a victim, bully, or bully/victim negatively predicted perceptions of learning and school, and perceived peer acceptance mediated this relationship. The results indicated that involvement in cyber bullying negatively predicted perceived peer acceptance which, in turn, positively predicted perceptions of learning and school. For young men, fulfilling the bully/victim role negatively predicted perceptions of learning and school. Consequently, for young women in particular, involvement in cyber bullying spills over to impact perceptions of learning. The findings of the current study highlight how stressors external to the school environment can adversely impact young women's perceptions of school and also have implications for the development of interventions designed to ameliorate the effects of cyber bullying.

  12. Computer program for post-flight evaluation of the control surface response for an attitude controlled missile

    NASA Technical Reports Server (NTRS)

    Knauber, R. N.

    1982-01-01

    A FORTRAN IV coded computer program is presented for post-flight analysis of a missile's control surface response. It includes preprocessing of digitized telemetry data for time lags, biases, non-linear calibration changes and filtering. Measurements include autopilot attitude rate and displacement gyro output and four control surface deflections. Simple first order lags are assumed for the pitch, yaw and roll axes of control. Each actuator is also assumed to be represented by a first order lag. Mixing of pitch, yaw and roll commands to four control surfaces is assumed. A pseudo-inverse technique is used to obtain the pitch, yaw and roll components from the four measured deflections. This program has been used for over 10 years on the NASA/SCOUT launch vehicle for post-flight analysis and was helpful in detecting incipient actuator stall due to excessive hinge moments. The program is currently set up for a CDC CYBER 175 computer system. It requires 34K words of memory and contains 675 cards. A sample problem presented herein including the optional plotting requires eleven (11) seconds of central processor time.
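
    The pseudo-inverse recovery of three axis components from four measured deflections can be sketched with the normal equations; the 4x3 mixing matrix below is a hypothetical elevon-style mix for illustration, not the program's actual calibration:

```python
def solve3(a, b):
    """Solve a 3x3 system a x = b by Gaussian elimination with
    partial pivoting (inputs are copied, not modified)."""
    a = [row[:] for row in a]
    b = b[:]
    n = 3
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

def pseudo_inverse_fit(M, d):
    """Least-squares x minimizing |M x - d|: solve (M^T M) x = M^T d."""
    rows, cols = len(M), len(M[0])
    MtM = [[sum(M[r][i] * M[r][j] for r in range(rows)) for j in range(cols)]
           for i in range(cols)]
    Mtd = [sum(M[r][i] * d[r] for r in range(rows)) for i in range(cols)]
    return solve3(MtM, Mtd)

# Hypothetical mixing of (pitch, yaw, roll) commands onto four surfaces.
M = [[1,  1,  1],
     [1, -1,  1],
     [1,  1, -1],
     [1, -1, -1]]
deflections = [1.5, 3.5, 0.5, 2.5]   # four measured surface deflections
pitch, yaw, roll = pseudo_inverse_fit(M, deflections)
```

    Because the system is overdetermined (four measurements, three unknowns), the pseudo-inverse gives the least-squares estimate of the pitch, yaw, and roll components.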

  13. DNS of incompressible turbulence in a periodic box with up to 4096^3 grid points

    NASA Astrophysics Data System (ADS)

    Kaneda, Yukio

    2007-11-01

    Turbulence of an incompressible fluid obeying the Navier-Stokes (NS) equations under periodic boundary conditions is one of the simplest dynamical systems that keeps the essence of turbulence dynamics, and it is well suited to the study of high-Reynolds-number (Re) turbulence by direct numerical simulation (DNS). This talk presents a review of DNS of such a system with the number N^3 of grid points up to 4096^3, performed on the Earth Simulator (ES). The ES consists of 640 processor nodes (= 5120 arithmetic processors) with 10 TB of main memory and a peak performance of 40 Tflops. The DNSs are based on a spectral method free from alias error. The convolution sums in wave-vector space were evaluated by radix-4 Fast Fourier Transforms with double-precision arithmetic. Sustained performance of 16.4 Tflops was achieved on the 2048^3 DNS using 512 processor nodes of the ES. The DNSs consist of two series: one with kmaxη ≈ 1 (Series 1) and the other with kmaxη ≈ 2 (Series 2), where kmax is the highest wavenumber in each simulation and η is the Kolmogorov length scale. In the 4096^3 DNS, the Taylor-scale Reynolds number is Rλ ≈ 1130 (675), and the ratio L/η of the integral length scale L to η is approximately 2133 (1040), in Series 1 (Series 2). Such DNS data are expected to shed some light on basic questions in turbulence research, including (i) the normalized mean rate of energy dissipation in the high-Re limit, (ii) the universality of the energy spectrum at small scales, (iii) scale- and Re-dependence of the statistics, and (iv) intermittency. We have constructed a database consisting of (a) animations and figures of turbulent fields, (b) statistics including those associated with (i)-(iv) noted above, and (c) snapshot data of the velocity fields. The data size of (c) can be very large for large N. For example, one snapshot of single-precision data of the velocity vector field of the 4096^3 DNS requires approximately 0.8 TB.
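    The alias-free evaluation of convolution sums via FFTs can be illustrated in one dimension with the classical 3/2 zero-padding rule. This is a sketch of the general dealiasing idea only; the ES runs cited above used their own alias-free 3-D spectral implementation.

```python
import numpy as np

def dealiased_product(u_hat, v_hat):
    """Alias-free spectral coefficients of the pointwise product u*v,
    computed with the 3/2 zero-padding rule (1-D illustration).
    Conventions follow numpy.fft: u = ifft(u_hat) on n points."""
    n = u_hat.size
    m = 3 * n // 2                      # padded grid size
    def pad(a):
        b = np.zeros(m, dtype=complex)  # insert zeros at the high wavenumbers
        b[:n // 2] = a[:n // 2]
        b[-(n // 2):] = a[-(n // 2):]
        return b
    # Transform to the fine grid (scale so physical values are unchanged),
    # multiply pointwise, transform back, then truncate to the original modes.
    u = np.fft.ifft(pad(u_hat)) * m / n
    v = np.fft.ifft(pad(v_hat)) * m / n
    w = np.fft.fft(u * v) * n / m
    out = np.zeros(n, dtype=complex)
    out[:n // 2] = w[:n // 2]
    out[-(n // 2):] = w[-(n // 2):]
    return out
```

    Aliased contributions from the product land outside the retained |k| < n/2 band on the 3n/2 grid, so the truncated result is exact for the resolved modes.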

  14. Computer-implemented method and apparatus for autonomous position determination using magnetic field data

    NASA Technical Reports Server (NTRS)

    Ketchum, Eleanor A. (Inventor)

    2000-01-01

    A computer-implemented method and apparatus for determining the position of a vehicle to within 100 km, autonomously from magnetic field measurements and attitude data, without a priori knowledge of position. Two possible position solutions for each measurement of magnetic field data are deterministically calculated by a program-controlled processor solving the inverted first-order spherical harmonic representation of the geomagnetic field for two unit position vectors 180 degrees apart and a vehicle distance from the center of the earth. Correction schemes such as successive substitutions and a Newton-Raphson method are applied to each dipole solution. The two position solutions for each measurement are saved separately. Velocity vectors for the position solutions are calculated so that a total energy difference for each of the two resultant position paths is computed. The position path with the smaller absolute total energy difference is chosen as the true position path of the vehicle.
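    One ingredient of such a scheme, solving the first-order (dipole) field-magnitude relation for the geocentric distance by Newton-Raphson, can be sketched as follows. The dipole formula |B| = (B0 R^3 / r^3) sqrt(1 + 3 sin^2 λ) and the nominal constants are textbook values; this is an illustration of the iteration, not the patented algorithm.

```python
import math

B0 = 3.12e-5      # T, nominal equatorial dipole field strength (assumed)
R = 6371.2e3      # m, mean Earth radius (assumed)

def radius_from_field(B_meas, mag_lat, r0=7.0e6, tol=1e-6, max_iter=50):
    """Newton-Raphson solve of B_meas = k / r^3 for r, where
    k = B0 * R^3 * sqrt(1 + 3 sin^2(mag_lat)); mag_lat in radians."""
    k = B0 * R**3 * math.sqrt(1.0 + 3.0 * math.sin(mag_lat)**2)
    r = r0
    for _ in range(max_iter):
        f = k / r**3 - B_meas          # residual
        fp = -3.0 * k / r**4           # analytic derivative
        step = f / fp
        r -= step
        if abs(step) < tol * r:        # relative convergence test
            break
    return r
```

    Because k/r^3 is monotone in r, the iteration converges quadratically from any reasonable orbital-altitude starting guess.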

  15. Implicit flux-split schemes for the Euler equations

    NASA Technical Reports Server (NTRS)

    Thomas, J. L.; Walters, R. W.; Van Leer, B.

    1985-01-01

    Recent progress in the development of implicit algorithms for the Euler equations using the flux-vector splitting method is described. Comparisons of the relative efficiency of relaxation and spatially-split approximately factored methods on a vector processor for two-dimensional flows are made. For transonic flows, the higher convergence rate per iteration of the Gauss-Seidel relaxation algorithms, which are only partially vectorizable, is amply compensated for by the faster computational rate per iteration of the approximately factored algorithm. For supersonic flows, the fully-upwind line-relaxation method is more efficient since the numerical domain of dependence is more closely matched to the physical domain of dependence. A hybrid three-dimensional algorithm using relaxation in one coordinate direction and approximate factorization in the cross-flow plane is developed and applied to a forebody shape at supersonic speeds and a swept, tapered wing at transonic speeds.
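    The flux-vector splitting named above has a standard van Leer form; as a minimal sketch, the subsonic mass-flux splitting for the 1-D Euler equations is shown below (the momentum and energy splittings are built from the same polynomial factors). This illustrates the splitting formula only, not the paper's implicit algorithms.

```python
def van_leer_mass_flux(rho, u, a):
    """Van Leer split mass flux for 1-D Euler flow.
    Returns (f_plus, f_minus) with f_plus + f_minus == rho * u,
    f_plus carrying right-running and f_minus left-running information."""
    M = u / a                                   # Mach number
    if M >= 1.0:                                # supersonic right-running
        return rho * u, 0.0
    if M <= -1.0:                               # supersonic left-running
        return 0.0, rho * u
    # Subsonic: smooth polynomial splitting in the Mach number.
    f_plus = rho * a * 0.25 * (M + 1.0) ** 2
    f_minus = -rho * a * 0.25 * (M - 1.0) ** 2
    return f_plus, f_minus
```

    The split fluxes have the one-sided dependence that makes fully-upwind relaxation attractive in supersonic regions, as the abstract notes.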

  16. Time variant analysis of large scale constrained rotorcraft systems dynamics - An exploitation of IBM-3090 vector-processor's pipe-lining feature

    NASA Technical Reports Server (NTRS)

    Amirouche, F. M. L.; Shareef, N. H.; Xie, M.

    1991-01-01

    A generalized algorithmic procedure is presented for handling the constraints in transmissions, which are treated as a multibody system of interconnected rigid/flexible bodies. The types of constraints are classified based on the interconnection of the bodies, assuming one or more points of contact to exist between them. The method is explained through flow charts and configuration/interaction tables. A significant increase in speed of execution is achieved by vectorizing the developed code in computationally intensive areas. The study of an example consisting of two meshing disks rotating at high angular velocity is carried out. The dynamic behavior of the constraint forces associated with the generalized coordinates of the system is plotted for various selected modes. Applications are intended for the study of the dynamics, and the subsequent prediction, of constraint forces at the gear-teeth contact points in helicopter transmissions, with the aim of improving performance dependability.

  17. High-Speed Current dq PI Controller for Vector Controlled PMSM Drive

    PubMed Central

    Reaz, Mamun Bin Ibne; Rahman, Labonnah Farzana; Chang, Tae Gyu

    2014-01-01

    A high-speed current controller for a vector-controlled permanent magnet synchronous motor (PMSM) is presented. The controller is developed based on a modular design for faster calculation and uses a fixed-point proportional-integral (PI) method for improved accuracy. The current dq controller is usually implemented in a digital signal processor (DSP) based computer. However, DSP-based solutions are reaching their physical limits, which are a few microseconds. Besides, digital solutions suffer from high implementation cost. In this research, the overall controller is realized in a field programmable gate array (FPGA). FPGA implementation of the overall control algorithm trims down the execution time significantly to guarantee the steadiness of the motor. An Agilent 16821A Logic Analyzer is employed to validate the result of the implemented design in the FPGA. Experimental results indicate that the proposed current dq PI controller needs only 50 ns of execution time at a 40 MHz clock, the lowest computational cycle reported to date. PMID:24574913
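    The kind of integer-only PI update such an FPGA controller performs each cycle can be sketched in software. The Q16.16 format, gains, and saturation limit below are arbitrary illustrative values, not the paper's design.

```python
# Toy fixed-point PI step in Q16.16 arithmetic. All controller state is
# integer, mimicking FPGA datapath behavior; >> is an arithmetic shift.
FRAC = 16                      # fractional bits (Q16.16, assumed format)
ONE = 1 << FRAC

def to_fix(x):   return int(round(x * ONE))
def to_float(q): return q / ONE

KP = to_fix(0.8)               # illustrative proportional gain
KI = to_fix(0.05)              # illustrative integral gain
LIMIT = to_fix(10.0)           # output/integrator saturation

def pi_step(ref, meas, integ):
    """One controller cycle on fixed-point inputs. Returns (output, integ)."""
    err = ref - meas
    integ += (KI * err) >> FRAC              # integral accumulation
    integ = max(-LIMIT, min(LIMIT, integ))   # anti-windup clamp
    out = ((KP * err) >> FRAC) + integ
    return max(-LIMIT, min(LIMIT, out)), integ
```

    In hardware the multiply-and-shift becomes a single-cycle multiplier and wire selection, which is why the FPGA version can finish in tens of nanoseconds.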

  18. The Nature and Frequency of Cyber Bullying Behaviors and Victimization Experiences in Young Canadian Children

    ERIC Educational Resources Information Center

    Holfeld, Brett; Leadbeater, Bonnie J.

    2015-01-01

    As access to technology is increasing in children and adolescents, there are growing concerns over the dangers of cyber bullying. It remains unclear what cyber bullying looks like among young Canadian children and how common these experiences are. In this study, we examine the psychometric properties of a measure of cyber bullying behaviors and…

  19. Cyber Operations: The New Balance

    DTIC Science & Technology

    2009-01-01

    compelling evidence to suggest that enlightenment, rather than retrenchment, is the path for cyber New Balance. The economic calamity of the Great...<www.guardian.co.uk/technology/2008/oct/02/interviews.internet>. 16 Langevin, 11. 17 James Lewis, “Cyber Security Recommendations for the Next...Administration,” testimony before House Subcommittee on Emerging Threats, Cyber Security, and Science and Technology, Washington, DC, September 16

  20. Pages - U.S. Fleet Cyber Command

    Science.gov Websites

    Since its establishment on Jan. 29, 2010, U.S. Fleet Cyber Command (FCC)/U.S. TENTH Fleet (C10F) … civilians organized into 26 active commands, 40 Cyber Mission Force units, and 27 reserve commands around

  1. Cyber Insurance - Managing Cyber Risk

    DTIC Science & Technology

    2015-04-01

    license under the clause at DFARS 252.227-7013 (a)(16) [Jun 2013]. Cyber Insurance – Managing Cyber Risk Data breaches involving...significant personal information losses and financial impact are becoming increasingly common. Whether the data breach has financial implications for...hundreds of millions of dollars depending on the type and size of the breach. Most states have some type of data breach law requiring notification

  2. Co-Simulation Platform For Characterizing Cyber Attacks in Cyber Physical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadi, Mohammad A. H.; Ali, Mohammad Hassan; Dasgupta, Dipankar

    Smart grid is a complex cyber-physical system containing a large number and variety of sources, devices, controllers, and loads. The communication/information infrastructure is the backbone of the smart grid system, as different grid components are connected with each other through this structure. Therefore, the drawbacks of information-technology-related issues are also becoming a part of the smart grid. Further, the smart grid is also vulnerable to grid-related disturbances. For such a dynamic system, disturbance and intrusion detection is a paramount issue. This paper presents a Simulink and OPNET based co-simulated test bed to carry out a cyber-intrusion in a cyber-network for modern power systems and smart grid. The effect of the cyber intrusion on the physical power system is also presented. The IEEE 30 bus power system model is used to demonstrate the effectiveness of the simulated testbed. The experiments were performed by disturbing the circuit breakers' reclosing time through a cyber-attack in the cyber network. Different disturbance situations in the proposed test system are considered, and the results indicate the effectiveness of the proposed co-simulated scheme.

  3. Simulations in Cyber-Security: A Review of Cognitive Modeling of Network Attackers, Defenders, and Users.

    PubMed

    Veksler, Vladislav D; Buchler, Norbou; Hoffman, Blaine E; Cassenti, Daniel N; Sample, Char; Sugrim, Shridat

    2018-01-01

    Computational models of cognitive processes may be employed in cyber-security tools, experiments, and simulations to address human agency and effective decision-making in keeping computational networks secure. Cognitive modeling can address multi-disciplinary cyber-security challenges requiring cross-cutting approaches over the human and computational sciences, such as the following: (a) adversarial reasoning and behavioral game theory to predict attacker subjective utilities and decision likelihood distributions, (b) human factors of cyber tools to address human-system integration challenges, estimation of defender cognitive states, and opportunities for automation, (c) dynamic simulations involving attacker, defender, and user models to enhance studies of cyber epidemiology and cyber hygiene, and (d) training effectiveness research and training scenarios to address human cyber-security performance, maturation of cyber-security skill sets, and effective decision-making. Models may be initially constructed at the group level based on mean tendencies of each subject's subgroup, using known statistics such as specific skill proficiencies, demographic characteristics, and cultural factors. For more precise and accurate predictions, cognitive models may be fine-tuned to each individual attacker, defender, or user profile, and updated over time (based on recorded behavior) via techniques such as model tracing and dynamic parameter fitting.
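    Parameter fitting of the kind mentioned above can be sketched minimally: re-estimating one cognitive-model parameter (here a softmax choice temperature, a common choice; the utilities, grid, and data are all made up for illustration) from recorded behavior by maximum likelihood.

```python
import math

def softmax_p(u_chosen, u_other, temp):
    """Probability of the observed choice under a two-option softmax rule."""
    return 1.0 / (1.0 + math.exp(-(u_chosen - u_other) / temp))

def fit_temperature(observations, grid=None):
    """observations: list of (utility_of_chosen, utility_of_other) pairs.
    Returns the maximum-likelihood temperature over a coarse grid
    (a stand-in for a proper optimizer)."""
    grid = grid or [0.1 * i for i in range(1, 51)]
    def loglik(t):
        return sum(math.log(softmax_p(uc, uo, t)) for uc, uo in observations)
    return max(grid, key=loglik)
```

    Refitting after each new batch of observations gives the "updated over time" behavior the abstract describes: a deterministic chooser drives the estimate toward low temperature, while noisy choices push it upward.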

  4. Timing of cyber conflict.

    PubMed

    Axelrod, Robert; Iliev, Rumen

    2014-01-28

    Nations are accumulating cyber resources in the form of stockpiles of zero-day exploits as well as other novel methods of engaging in future cyber conflict against selected targets. This paper analyzes the optimal timing for the use of such cyber resources. A simple mathematical model is offered to clarify how the timing of such a choice can depend on the stakes involved in the present situation, as well as on the characteristics of the resource for exploitation. The model deals with the question of when the resource should be used, given that its use today may well prevent it from being available for use later. The analysis provides concepts, theory, applications, and distinctions to promote the understanding of strategic aspects of cyber conflict. Case studies include the Stuxnet attack on Iran's nuclear program, the Iranian cyber attack on the energy firm Saudi Aramco, the persistent cyber espionage carried out by the Chinese military, and an analogous case of economic coercion by China in a dispute with Japan. The effects of the rapidly expanding market for zero-day exploits are also analyzed. The goal of the paper is to promote understanding of this domain of cyber conflict, to mitigate the harm it can do, and to harness the capabilities it can provide.
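    The trade-off can be illustrated with a toy Monte-Carlo model (my own simplification, not the paper's exact formulation): a resource survives its own use with probability `stealth` and survives an idle period with probability `persistence`, and a threshold policy uses it only when the current period's stakes are high enough.

```python
import random

def expected_payoff(threshold, stealth, persistence,
                    horizon=50, trials=4000, seed=0):
    """Mean total payoff of a threshold policy: use the resource when the
    period's stakes (uniform on [0,1]) reach `threshold`. All parameter
    names and values are illustrative assumptions."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        alive = True
        for _ in range(horizon):
            if not alive:
                break
            stakes = rng.random()
            if stakes >= threshold:
                total += stakes                  # gain from using it now
                alive = rng.random() < stealth   # may survive its own use
            else:
                alive = rng.random() < persistence  # may decay while idle
    return total / trials
```

    The toy model reproduces the qualitative result described above: a fragile (low-persistence) resource rewards a low threshold, i.e. early use, while a persistent one-shot resource rewards waiting for high stakes.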

  5. Cyber Victimization, Psychological Intimate Partner Violence, and Problematic Mental Health Outcomes Among First-Year College Students.

    PubMed

    Sargent, Kelli S; Krauss, Alison; Jouriles, Ernest N; McDonald, Renee

    2016-09-01

    Both cyber victimization and psychological intimate partner violence (IPV) have been associated with negative mental health outcomes among adolescents and young adults. The present study examined relations among cyber victimization, psychological IPV, and mental health outcomes (depressive symptoms, antisocial behavior) among first-year college students. Consistent with polyvictimization theory, we hypothesized that cyber victimization and psychological IPV would be related to each other. We also hypothesized that each would uniquely contribute to depressive symptoms and antisocial behavior, after accounting for the other. Participants (N = 342, M age = 18.33 years; 50% male) completed questionnaires during a single lab visit. Results indicated that cyber victimization and psychological IPV were related to each other, and both contributed uniquely to depressive symptoms, but only cyber victimization contributed uniquely to antisocial behavior. Exploratory analyses indicated that experiencing both cyber victimization and psychological IPV was necessary for increased depressive symptoms and antisocial behavior. This study is the first to establish a unique relation between cyber victimization and mental health problems, after accounting for psychological IPV. The findings also suggest a need to consider multiple forms of victimization when considering relations between specific types of victimization and mental health problems.

  6. Search for a Charged Higgs Boson Produced in the Vector-Boson Fusion Mode with Decay H(±)→W(±)Z using pp Collisions at √s=8  TeV with the ATLAS Experiment.

    PubMed

    Aad, G; Abbott, B; Abdallah, J; Abdinov, O; Aben, R; Abolins, M; AbouZeid, O S; Abramowicz, H; Abreu, H; Abreu, R; Abulaiti, Y; Acharya, B S; Adamczyk, L; Adams, D L; Adelman, J; Adomeit, S; Adye, T; Affolder, A A; Agatonovic-Jovin, T; Aguilar-Saavedra, J A; Ahlen, S P; Ahmadov, F; Aielli, G; Akerstedt, H; Åkesson, T P A; Akimoto, G; Akimov, A V; Alberghi, G L; Albert, J; Albrand, S; Alconada Verzini, M J; Aleksa, M; Aleksandrov, I N; Alexa, C; Alexander, G; Alexopoulos, T; Alhroob, M; Alimonti, G; Alio, L; Alison, J; Alkire, S P; Allbrooke, B M M; Allport, P P; Aloisio, A; Alonso, A; Alonso, F; Alpigiani, C; Altheimer, A; Alvarez Gonzalez, B; Álvarez Piqueras, D; Alviggi, M G; Amadio, B T; Amako, K; Amaral Coutinho, Y; Amelung, C; Amidei, D; Amor Dos Santos, S P; Amorim, A; Amoroso, S; Amram, N; Amundsen, G; Anastopoulos, C; Ancu, L S; Andari, N; Andeen, T; Anders, C F; Anders, G; Anders, J K; Anderson, K J; Andreazza, A; Andrei, V; Angelidakis, S; Angelozzi, I; Anger, P; Angerami, A; Anghinolfi, F; Anisenkov, A V; Anjos, N; Annovi, A; Antonelli, M; Antonov, A; Antos, J; Anulli, F; Aoki, M; Aperio Bella, L; Arabidze, G; Arai, Y; Araque, J P; Arce, A T H; Arduh, F A; Arguin, J-F; Argyropoulos, S; Arik, M; Armbruster, A J; Arnaez, O; Arnal, V; Arnold, H; Arratia, M; Arslan, O; Artamonov, A; Artoni, G; Asai, S; Asbah, N; Ashkenazi, A; Åsman, B; Asquith, L; Assamagan, K; Astalos, R; Atkinson, M; Atlay, N B; Auerbach, B; Augsten, K; Aurousseau, M; Avolio, G; Axen, B; Ayoub, M K; Azuelos, G; Baak, M A; Baas, A E; Bacci, C; Bachacou, H; Bachas, K; Backes, M; Backhaus, M; Badescu, E; Bagiacchi, P; Bagnaia, P; Bai, Y; Bain, T; Baines, J T; Baker, O K; Balek, P; Balestri, T; Balli, F; Banas, E; Banerjee, Sw; Bannoura, A A E; Bansil, H S; Barak, L; Baranov, S P; Barberio, E L; Barberis, D; Barbero, M; Barillari, T; Barisonzi, M; Barklow, T; Barlow, N; Barnes, S L; Barnett, B M; Barnett, R M; Barnovska, Z; Baroncelli, A; Barone, G; Barr, A J; Barreiro, F; Barreiro 
Guimarães da Costa, J; Bartoldus, R; Barton, A E; Bartos, P; Bassalat, A; Basye, A; Bates, R L; Batista, S J; Batley, J R; Battaglia, M; Bauce, M; Bauer, F; Bawa, H S; Beacham, J B; Beattie, M D; Beau, T; Beauchemin, P H; Beccherle, R; Bechtle, P; Beck, H P; Becker, K; Becker, M; Becker, S; Beckingham, M; Becot, C; Beddall, A J; Beddall, A; Bednyakov, V A; Bee, C P; Beemster, L J; Beermann, T A; Begel, M; Behr, J K; Belanger-Champagne, C; Bell, P J; Bell, W H; Bella, G; Bellagamba, L; Bellerive, A; Bellomo, M; Belotskiy, K; Beltramello, O; Benary, O; Benchekroun, D; Bender, M; Bendtz, K; Benekos, N; Benhammou, Y; Benhar Noccioli, E; Benitez Garcia, J A; Benjamin, D P; Bensinger, J R; Bentvelsen, S; Beresford, L; Beretta, M; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Beringer, J; Bernard, C; Bernard, N R; Bernius, C; Bernlochner, F U; Berry, T; Berta, P; Bertella, C; Bertoli, G; Bertolucci, F; Bertsche, C; Bertsche, D; Besana, M I; Besjes, G J; Bessidskaia Bylund, O; Bessner, M; Besson, N; Betancourt, C; Bethke, S; Bevan, A J; Bhimji, W; Bianchi, R M; Bianchini, L; Bianco, M; Biebel, O; Bieniek, S P; Biglietti, M; Bilbao De Mendizabal, J; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Black, C W; Black, J E; Black, K M; Blackburn, D; Blair, R E; Blanchard, J-B; Blanco, J E; Blazek, T; Bloch, I; Blocker, C; Blum, W; Blumenschein, U; Bobbink, G J; Bobrovnikov, V S; Bocchetta, S S; Bocci, A; Bock, C; Boehler, M; Bogaerts, J A; Bogdanchikov, A G; Bohm, C; Boisvert, V; Bold, T; Boldea, V; Boldyrev, A S; Bomben, M; Bona, M; Boonekamp, M; Borisov, A; Borissov, G; Borroni, S; Bortfeldt, J; Bortolotto, V; Bos, K; Boscherini, D; Bosman, M; Boudreau, J; Bouffard, J; Bouhova-Thacker, E V; Boumediene, D; Bourdarios, C; Bousson, N; Boveia, A; Boyd, J; Boyko, I R; Bozic, I; Bracinik, J; Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; Braun, H M; Brazzale, S F; Brendlinger, K; Brennan, A J; Brenner, L; Brenner, R; Bressler, S; Bristow, K; 
Bristow, T M; Britton, D; Britzger, D; Brochu, F M; Brock, I; Brock, R; Bronner, J; Brooijmans, G; Brooks, T; Brooks, W K; Brosamer, J; Brost, E; Brown, J; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Bruni, A; Bruni, G; Bruschi, M; Bryngemark, L; Buanes, T; Buat, Q; Buchholz, P; Buckley, A G; Buda, S I; Budagov, I A; Buehrer, F; Bugge, L; Bugge, M K; Bulekov, O; Burckhart, H; Burdin, S; Burghgrave, B; Burke, S; Burmeister, I; Busato, E; Büscher, D; Büscher, V; Bussey, P; Buszello, C P; Butler, J M; Butt, A I; Buttar, C M; Butterworth, J M; Butti, P; Buttinger, W; Buzatu, A; Buzykaev, R; Cabrera Urbán, S; Caforio, D; Cairo, V M; Cakir, O; Calafiura, P; Calandri, A; Calderini, G; Calfayan, P; Caloba, L P; Calvet, D; Calvet, S; Camacho Toro, R; Camarda, S; Camarri, P; Cameron, D; Caminada, L M; Caminal Armadans, R; Campana, S; Campanelli, M; Campoverde, A; Canale, V; Canepa, A; Cano Bret, M; Cantero, J; Cantrill, R; Cao, T; Capeans Garrido, M D M; Caprini, I; Caprini, M; Capua, M; Caputo, R; Cardarelli, R; Carli, T; Carlino, G; Carminati, L; Caron, S; Carquin, E; Carrillo-Montoya, G D; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Casolino, M; Castaneda-Miranda, E; Castelli, A; Castillo Gimenez, V; Castro, N F; Catastini, P; Catinaccio, A; Catmore, J R; Cattai, A; Caudron, J; Cavaliere, V; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Ceradini, F; Cerio, B C; Cerny, K; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cerv, M; Cervelli, A; Cetin, S A; Chafaq, A; Chakraborty, D; Chalupkova, I; Chang, P; Chapleau, B; Chapman, J D; Charlton, D G; Chau, C C; Chavez Barajas, C A; Cheatham, S; Chegwidden, A; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chelstowska, M A; Chen, C; Chen, H; Chen, K; Chen, L; Chen, S; Chen, X; Chen, Y; Cheng, H C; Cheng, Y; Cheplakov, A; Cheremushkina, E; Cherkaoui El Moursli, R; Chernyatin, V; Cheu, E; Chevalier, L; Chiarella, V; Childers, J T; Chiodini, G; Chisholm, A S; Chislett, R T; Chitan, A; Chizhov, M V; Choi, K; Chouridou, 
S; Chow, B K B; Christodoulou, V; Chromek-Burckhart, D; Chu, M L; Chudoba, J; Chuinard, A J; Chwastowski, J J; Chytka, L; Ciapetti, G; Ciftci, A K; Cinca, D; Cindro, V; Cioara, I A; Ciocio, A; Citron, Z H; Ciubancan, M; Clark, A; Clark, B L; Clark, P J; Clarke, R N; Cleland, W; Clement, C; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coffey, L; Cogan, J G; Cole, B; Cole, S; Colijn, A P; Collot, J; Colombo, T; Compostella, G; Conde Muiño, P; Coniavitis, E; Connell, S H; Connelly, I A; Consonni, S M; Consorti, V; Constantinescu, S; Conta, C; Conti, G; Conventi, F; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Cornelissen, T; Corradi, M; Corriveau, F; Corso-Radu, A; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Côté, D; Cottin, G; Cowan, G; Cox, B E; Cranmer, K; Cree, G; Crépé-Renaudin, S; Crescioli, F; Cribbs, W A; Crispin Ortuzar, M; Cristinziani, M; Croft, V; Crosetti, G; Cuhadar Donszelmann, T; Cummings, J; Curatolo, M; Cuthbert, C; Czirr, H; Czodrowski, P; D'Auria, S; D'Onofrio, M; Da Cunha Sargedas De Sousa, M J; Da Via, C; Dabrowski, W; Dafinca, A; Dai, T; Dale, O; Dallaire, F; Dallapiccola, C; Dam, M; Dandoy, J R; Dang, N P; Daniells, A C; Danninger, M; Dano Hoffmann, M; Dao, V; Darbo, G; Darmora, S; Dassoulas, J; Dattagupta, A; Davey, W; David, C; Davidek, T; Davies, E; Davies, M; Davison, P; Davygora, Y; Dawe, E; Dawson, I; Daya-Ishmukhametova, R K; De, K; de Asmundis, R; De Castro, S; De Cecco, S; De Groot, N; de Jong, P; De la Torre, H; De Lorenzi, F; De Nooij, L; De Pedis, D; De Salvo, A; De Sanctis, U; De Santo, A; De Vivie De Regie, J B; Dearnaley, W J; Debbe, R; Debenedetti, C; Dedovich, D V; Deigaard, I; Del Peso, J; Del Prete, T; Delgove, D; Deliot, F; Delitzsch, C M; Deliyergiyev, M; Dell'Acqua, A; Dell'Asta, L; Dell'Orso, M; Della Pietra, M; della Volpe, D; Delmastro, M; Delsart, P A; Deluca, C; DeMarco, D A; Demers, S; Demichev, M; Demilly, A; Denisov, S P; Derendarz, D; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Deterre, C; 
Deviveiros, P O; Dewhurst, A; Dhaliwal, S; Di Ciaccio, A; Di Ciaccio, L; Di Domenico, A; Di Donato, C; Di Girolamo, A; Di Girolamo, B; Di Mattia, A; Di Micco, B; Di Nardo, R; Di Simone, A; Di Sipio, R; Di Valentino, D; Diaconu, C; Diamond, M; Dias, F A; Diaz, M A; Diehl, E B; Dietrich, J; Diglio, S; Dimitrievska, A; Dingfelder, J; Dittus, F; Djama, F; Djobava, T; Djuvsland, J I; do Vale, M A B; Dobos, D; Dobre, M; Doglioni, C; Dohmae, T; Dolejsi, J; Dolezal, Z; Dolgoshein, B A; Donadelli, M; Donati, S; Dondero, P; Donini, J; Dopke, J; Doria, A; Dova, M T; Doyle, A T; Drechsler, E; Dris, M; Dubreuil, E; Duchovni, E; Duckeck, G; Ducu, O A; Duda, D; Dudarev, A; Duflot, L; Duguid, L; Dührssen, M; Dunford, M; Duran Yildiz, H; Düren, M; Durglishvili, A; Duschinger, D; Dyndal, M; Eckardt, C; Ecker, K M; Edgar, R C; Edson, W; Edwards, N C; Ehrenfeld, W; Eifert, T; Eigen, G; Einsweiler, K; Ekelof, T; El Kacimi, M; Ellert, M; Elles, S; Ellinghaus, F; Elliot, A A; Ellis, N; Elmsheuser, J; Elsing, M; Emeliyanov, D; Enari, Y; Endner, O C; Endo, M; Engelmann, R; Erdmann, J; Ereditato, A; Ernis, G; Ernst, J; Ernst, M; Errede, S; Ertel, E; Escalier, M; Esch, H; Escobar, C; Esposito, B; Etienvre, A I; Etzion, E; Evans, H; Ezhilov, A; Fabbri, L; Facini, G; Fakhrutdinov, R M; Falciano, S; Falla, R J; Faltova, J; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farooque, T; Farrell, S; Farrington, S M; Farthouat, P; Fassi, F; Fassnacht, P; Fassouliotis, D; Faucci Giannelli, M; Favareto, A; Fayard, L; Federic, P; Fedin, O L; Fedorko, W; Feigl, S; Feligioni, L; Feng, C; Feng, E J; Feng, H; Fenyuk, A B; Fernandez Martinez, P; Fernandez Perez, S; Ferrag, S; Ferrando, J; Ferrari, A; Ferrari, P; Ferrari, R; Ferreira de Lima, D E; Ferrer, A; Ferrere, D; Ferretti, C; Ferretto Parodi, A; Fiascaris, M; Fiedler, F; Filipčič, A; Filipuzzi, M; Filthaut, F; Fincke-Keeler, M; Finelli, K D; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, A; Fischer, C; Fischer, J; Fisher, W C; Fitzgerald, E A; Flechl, M; 
Fleck, I; Fleischmann, P; Fleischmann, S; Fletcher, G T; Fletcher, G; Flick, T; Floderus, A; Flores Castillo, L R; Flowerdew, M J; Formica, A; Forti, A; Fournier, D; Fox, H; Fracchia, S; Francavilla, P; Franchini, M; Francis, D; Franconi, L; Franklin, M; Fraternali, M; Freeborn, D; French, S T; Friedrich, F; Froidevaux, D; Frost, J A; Fukunaga, C; Fullana Torregrosa, E; Fulsom, B G; Fuster, J; Gabaldon, C; Gabizon, O; Gabrielli, A; Gabrielli, A; Gadatsch, S; Gadomski, S; Gagliardi, G; Gagnon, P; Galea, C; Galhardo, B; Gallas, E J; Gallop, B J; Gallus, P; Galster, G; Gan, K K; Gao, J; Gao, Y; Gao, Y S; Garay Walls, F M; Garberson, F; García, C; García Navarro, J E; Garcia-Sciveres, M; Gardner, R W; Garelli, N; Garonne, V; Gatti, C; Gaudiello, A; Gaudio, G; Gaur, B; Gauthier, L; Gauzzi, P; Gavrilenko, I L; Gay, C; Gaycken, G; Gazis, E N; Ge, P; Gecse, Z; Gee, C N P; Geerts, D A A; Geich-Gimbel, Ch; Geisler, M P; Gemme, C; Genest, M H; Gentile, S; George, M; George, S; Gerbaudo, D; Gershon, A; Ghazlane, H; Giacobbe, B; Giagu, S; Giangiobbe, V; Giannetti, P; Gibbard, B; Gibson, S M; Gilchriese, M; Gillam, T P S; Gillberg, D; Gilles, G; Gingrich, D M; Giokaris, N; Giordani, M P; Giorgi, F M; Giorgi, F M; Giraud, P F; Giromini, P; Giugni, D; Giuliani, C; Giulini, M; Gjelsten, B K; Gkaitatzis, S; Gkialas, I; Gkougkousis, E L; Gladilin, L K; Glasman, C; Glatzer, J; Glaysher, P C F; Glazov, A; Goblirsch-Kolb, M; Goddard, J R; Godlewski, J; Goldfarb, S; Golling, T; Golubkov, D; Gomes, A; Gonçalo, R; Goncalves Pinto Firmino Da Costa, J; Gonella, L; González de la Hoz, S; Gonzalez Parra, G; Gonzalez-Sevilla, S; Goossens, L; Gorbounov, P A; Gordon, H A; Gorelov, I; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Goshaw, A T; Gössling, C; Gostkin, M I; Goujdami, D; Goussiou, A G; Govender, N; Grabas, H M X; Graber, L; Grabowska-Bold, I; Grafström, P; Grahn, K-J; Gramling, J; Gramstad, E; Grancagnolo, S; Grassi, V; Gratchev, V; Gray, H M; Graziani, E; Greenwood, Z D; Gregersen, K; 
Gregor, I M; Grenier, P; Griffiths, J; Grillo, A A; Grimm, K; Grinstein, S; Gris, Ph; Grivaz, J-F; Grohs, J P; Grohsjean, A; Gross, E; Grosse-Knetter, J; Grossi, G C; Grout, Z J; Guan, L; Guenther, J; Guescini, F; Guest, D; Gueta, O; Guido, E; Guillemin, T; Guindon, S; Gul, U; Gumpert, C; Guo, J; Gupta, S; Gutierrez, P; Gutierrez Ortiz, N G; Gutschow, C; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haber, C; Hadavand, H K; Haddad, N; Haefner, P; Hageböck, S; Hajduk, Z; Hakobyan, H; Haleem, M; Haley, J; Hall, D; Halladjian, G; Hallewell, G D; Hamacher, K; Hamal, P; Hamano, K; Hamer, M; Hamilton, A; Hamilton, S; Hamity, G N; Hamnett, P G; Han, L; Hanagaki, K; Hanawa, K; Hance, M; Hanke, P; Hanna, R; Hansen, J B; Hansen, J D; Hansen, M C; Hansen, P H; Hara, K; Hard, A S; Harenberg, T; Hariri, F; Harkusha, S; Harrington, R D; Harrison, P F; Hartjes, F; Hasegawa, M; Hasegawa, S; Hasegawa, Y; Hasib, A; Hassani, S; Haug, S; Hauser, R; Hauswald, L; Havranek, M; Hawkes, C M; Hawkings, R J; Hawkins, A D; Hayashi, T; Hayden, D; Hays, C P; Hays, J M; Hayward, H S; Haywood, S J; Head, S J; Heck, T; Hedberg, V; Heelan, L; Heim, S; Heim, T; Heinemann, B; Heinrich, L; Hejbal, J; Helary, L; Hellman, S; Hellmich, D; Helsens, C; Henderson, J; Henderson, R C W; Heng, Y; Hengler, C; Henrichs, A; Henriques Correia, A M; Henrot-Versille, S; Herbert, G H; Hernández Jiménez, Y; Herrberg-Schubert, R; Herten, G; Hertenberger, R; Hervas, L; Hesketh, G G; Hessey, N P; Hetherly, J W; Hickling, R; Higón-Rodriguez, E; Hill, E; Hill, J C; Hiller, K H; Hillier, S J; Hinchliffe, I; Hines, E; Hinman, R R; Hirose, M; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; Hoeferkamp, M R; Hoenig, F; Hohlfeld, M; Hohn, D; Holmes, T R; Hong, T M; Hooft van Huysduynen, L; Hopkins, W H; Horii, Y; Horton, A J; Hostachy, J-Y; Hou, S; Hoummada, A; Howard, J; Howarth, J; Hrabovsky, M; Hristova, I; Hrivnac, J; Hryn'ova, T; Hrynevich, A; Hsu, C; Hsu, P J; Hsu, S-C; Hu, D; Hu, Q; Hu, X; 
Huang, Y; Hubacek, Z; Hubaut, F; Huegging, F; Huffman, T B; Hughes, E W; Hughes, G; Huhtinen, M; Hülsing, T A; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibragimov, I; Iconomidou-Fayard, L; Ideal, E; Idrissi, Z; Iengo, P; Igonkina, O; Iizawa, T; Ikegami, Y; Ikematsu, K; Ikeno, M; Ilchenko, Y; Iliadis, D; Ilic, N; Inamaru, Y; Ince, T; Ioannou, P; Iodice, M; Iordanidou, K; Ippolito, V; Irles Quiles, A; Isaksson, C; Ishino, M; Ishitsuka, M; Ishmukhametov, R; Issever, C; Istin, S; Iturbe Ponce, J M; Iuppa, R; Ivarsson, J; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jabbar, S; Jackson, B; Jackson, M; Jackson, P; Jaekel, M R; Jain, V; Jakobs, K; Jakobsen, S; Jakoubek, T; Jakubek, J; Jamin, D O; Jana, D K; Jansen, E; Jansky, R W; Janssen, J; Janus, M; Jarlskog, G; Javadov, N; Javůrek, T; Jeanty, L; Jejelava, J; Jeng, G-Y; Jennens, D; Jenni, P; Jentzsch, J; Jeske, C; Jézéquel, S; Ji, H; Jia, J; Jiang, Y; Jiggins, S; Jimenez Pena, J; Jin, S; Jinaru, A; Jinnouchi, O; Joergensen, M D; Johansson, P; Johns, K A; Jon-And, K; Jones, G; Jones, R W L; Jones, T J; Jongmanns, J; Jorge, P M; Joshi, K D; Jovicevic, J; Ju, X; Jung, C A; Jussel, P; Juste Rozas, A; Kaci, M; Kaczmarska, A; Kado, M; Kagan, H; Kagan, M; Kahn, S J; Kajomovitz, E; Kalderon, C W; Kama, S; Kamenshchikov, A; Kanaya, N; Kaneda, M; Kaneti, S; Kantserov, V A; Kanzaki, J; Kaplan, B; Kapliy, A; Kar, D; Karakostas, K; Karamaoun, A; Karastathis, N; Kareem, M J; Karnevskiy, M; Karpov, S N; Karpova, Z M; Karthik, K; Kartvelishvili, V; Karyukhin, A N; Kashif, L; Kass, R D; Kastanas, A; Kataoka, Y; Katre, A; Katzy, J; Kawagoe, K; Kawamoto, T; Kawamura, G; Kazama, S; Kazanin, V F; Kazarinov, M Y; Keeler, R; Kehoe, R; Keller, J S; Kempster, J J; Keoshkerian, H; Kepka, O; Kerševan, B P; Kersten, S; Keyes, R A; Khalil-zada, F; Khandanyan, H; Khanov, A; Kharlamov, A G; Khoo, T J; Khovanskiy, V; Khramov, E; Khubua, J; Kim, H Y; Kim, H; Kim, S H; Kim, Y; Kimura, N; Kind, O M; King, B T; King, M; King, R S B; 
King, S B; Kirk, J; Kiryunin, A E; Kishimoto, T; Kisielewska, D; Kiss, F; Kiuchi, K; Kivernyk, O; Kladiva, E; Klein, M H; Klein, M; Klein, U; Kleinknecht, K; Klimek, P; Klimentov, A; Klingenberg, R; Klinger, J A; Klioutchnikova, T; Klok, P F; Kluge, E-E; Kluit, P; Kluth, S; Kneringer, E; Knoops, E B F G; Knue, A; Kobayashi, A; Kobayashi, D; Kobayashi, T; Kobel, M; Kocian, M; Kodys, P; Koffas, T; Koffeman, E; Kogan, L A; Kohlmann, S; Kohout, Z; Kohriki, T; Koi, T; Kolanoski, H; Koletsou, I; Komar, A A; Komori, Y; Kondo, T; Kondrashova, N; Köneke, K; König, A C; König, S; Kono, T; Konoplich, R; Konstantinidis, N; Kopeliansky, R; Koperny, S; Köpke, L; Kopp, A K; Korcyl, K; Kordas, K; Korn, A; Korol, A A; Korolkov, I; Korolkova, E V; Kortner, O; Kortner, S; Kosek, T; Kostyukhin, V V; Kotov, V M; Kotwal, A; Kourkoumeli-Charalampidi, A; Kourkoumelis, C; Kouskoura, V; Koutsman, A; Kowalewski, R; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kramarenko, V A; Kramberger, G; Krasnopevtsev, D; Krasny, M W; Krasznahorkay, A; Kraus, J K; Kravchenko, A; Kreiss, S; Kretz, M; Kretzschmar, J; Kreutzfeldt, K; Krieger, P; Krizka, K; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; Krstic, J; Kruchonak, U; Krüger, H; Krumnack, N; Krumshteyn, Z V; Kruse, A; Kruse, M C; Kruskal, M; Kubota, T; Kucuk, H; Kuday, S; Kuehn, S; Kugel, A; Kuger, F; Kuhl, A; Kuhl, T; Kukhtin, V; Kulchitsky, Y; Kuleshov, S; Kuna, M; Kunigo, T; Kupco, A; Kurashige, H; Kurochkin, Y A; Kurumida, R; Kus, V; Kuwertz, E S; Kuze, M; Kvita, J; Kwan, T; Kyriazopoulos, D; La Rosa, A; La Rosa Navarro, J L; La Rotonda, L; Lacasta, C; Lacava, F; Lacey, J; Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Lambourne, L; Lammers, S; Lampen, C L; Lampl, W; Lançon, E; Landgraf, U; Landon, M P J; Lang, V S; Lange, J C; Lankford, A J; Lanni, F; Lantzsch, K; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Lasagni Manghi, F; Lassnig, M; Laurelli, P; Lavrijsen, W; Law, A T; Laycock, P; Le Dortz, O; 
Le Guirriec, E; Le Menedeu, E; LeBlanc, M; LeCompte, T; Ledroit-Guillon, F; Lee, C A; Lee, S C; Lee, L; Lefebvre, G; Lefebvre, M; Legger, F; Leggett, C; Lehan, A; Lehmann Miotto, G; Lei, X; Leight, W A; Leisos, A; Leister, A G; Leite, M A L; Leitner, R; Lellouch, D; Lemmer, B; Leney, K J C; Lenz, T; Lenzi, B; Leone, R; Leone, S; Leonidopoulos, C; Leontsinis, S; Leroy, C; Lester, C G; Levchenko, M; Levêque, J; Levin, D; Levinson, L J; Levy, M; Lewis, A; Leyko, A M; Leyton, M; Li, B; Li, H; Li, H L; Li, L; Li, L; Li, S; Li, Y; Liang, Z; Liao, H; Liberti, B; Liblong, A; Lichard, P; Lie, K; Liebal, J; Liebig, W; Limbach, C; Limosani, A; Lin, S C; Lin, T H; Linde, F; Lindquist, B E; Linnemann, J T; Lipeles, E; Lipniacka, A; Lisovyi, M; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, B; Liu, D; Liu, J; Liu, J B; Liu, K; Liu, L; Liu, M; Liu, M; Liu, Y; Livan, M; Lleres, A; Llorente Merino, J; Lloyd, S L; Lo Sterzo, F; Lobodzinska, E; Loch, P; Lockman, W S; Loebinger, F K; Loevschall-Jensen, A E; Loginov, A; Lohse, T; Lohwasser, K; Lokajicek, M; Long, B A; Long, J D; Long, R E; Looper, K A; Lopes, L; Lopez Mateos, D; Lopez Paredes, B; Lopez Paz, I; Lorenz, J; Lorenzo Martinez, N; Losada, M; Loscutoff, P; Lösel, P J; Lou, X; Lounis, A; Love, J; Love, P A; Lu, N; Lubatti, H J; Luci, C; Lucotte, A; Luehring, F; Lukas, W; Luminari, L; Lundberg, O; Lund-Jensen, B; Lynn, D; Lysak, R; Lytken, E; Ma, H; Ma, L L; Maccarrone, G; Macchiolo, A; Macdonald, C M; Machado Miguens, J; Macina, D; Madaffari, D; Madar, R; Maddocks, H J; Mader, W F; Madsen, A; Maeland, S; Maeno, T; Maevskiy, A; Magradze, E; Mahboubi, K; Mahlstedt, J; Maiani, C; Maidantchik, C; Maier, A A; Maier, T; Maio, A; Majewski, S; Makida, Y; Makovec, N; Malaescu, B; Malecki, Pa; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyshev, V M; Malyukov, S; Mamuzic, J; Mancini, G; Mandelli, B; Mandelli, L; Mandić, I; Mandrysch, R; Maneira, J; Manfredini, A; Manhaes de Andrade Filho, L; Manjarres 
Ramos, J; Mann, A; Manning, P M; Manousakis-Katsikakis, A; Mansoulie, B; Mantifel, R; Mantoani, M; Mapelli, L; March, L; Marchiori, G; Marcisovsky, M; Marino, C P; Marjanovic, M; Marroquim, F; Marsden, S P; Marshall, Z; Marti, L F; Marti-Garcia, S; Martin, B; Martin, T A; Martin, V J; Martin dit Latour, B; Martinez, M; Martin-Haugh, S; Martoiu, V S; Martyniuk, A C; Marx, M; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massa, L; Massol, N; Mastrandrea, P; Mastroberardino, A; Masubuchi, T; Mättig, P; Mattmann, J; Maurer, J; Maxfield, S J; Maximov, D A; Mazini, R; Mazza, S M; Mazzaferro, L; Mc Goldrick, G; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McCubbin, N A; McFarlane, K W; Mcfayden, J A; Mchedlidze, G; McMahon, S J; McPherson, R A; Medinnis, M; Meehan, S; Mehlhase, S; Mehta, A; Meier, K; Meineck, C; Meirose, B; Mellado Garcia, B R; Meloni, F; Mengarelli, A; Menke, S; Meoni, E; Mercurio, K M; Mergelmeyer, S; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Middleton, R P; Miglioranzi, S; Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Milesi, M; Milic, A; Miller, D W; Mills, C; Milov, A; Milstead, D A; Minaenko, A A; Minami, Y; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mitani, T; Mitrevski, J; Mitsou, V A; Miucci, A; Miyagawa, P S; Mjörnmark, J U; Moa, T; Mochizuki, K; Mohapatra, S; Mohr, W; Molander, S; Moles-Valls, R; Mönig, K; Monini, C; Monk, J; Monnier, E; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Morange, N; Moreno, D; Moreno Llácer, M; Morettini, P; Morgenstern, M; Morii, M; Morinaga, M; Morisbak, V; Moritz, S; Morley, A K; Mornacchi, G; Morris, J D; Mortensen, S S; Morton, A; Morvaj, L; Moser, H G; Mosidze, M; Moss, J; Motohashi, K; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Muanza, S; Mudd, R D; Mueller, F; Mueller, J; Mueller, K; Mueller, R S P; 
Mueller, T; Muenstermann, D; Mullen, P; Munwes, Y; Murillo Quijada, J A; Murray, W J; Musheghyan, H; Musto, E; Myagkov, A G; Myska, M; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagai, Y; Nagano, K; Nagarkar, A; Nagasaka, Y; Nagata, K; Nagel, M; Nagy, E; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Namasivayam, H; Naranjo Garcia, R F; Narayan, R; Naumann, T; Navarro, G; Nayyar, R; Neal, H A; Nechaeva, P Yu; Neep, T J; Nef, P D; Negri, A; Negrini, M; Nektarijevic, S; Nellist, C; Nelson, A; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neves, R M; Nevski, P; Newman, P R; Nguyen, D H; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Nielsen, J; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolopoulos, K; Nilsen, J K; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nomachi, M; Nomidis, I; Nooney, T; Norberg, S; Nordberg, M; Novgorodova, O; Nowak, S; Nozaki, M; Nozka, L; Ntekas, K; Nunes Hanninger, G; Nunnemann, T; Nurse, E; Nuti, F; O'Brien, B J; O'grady, F; O'Neil, D C; O'Shea, V; Oakham, F G; Oberlack, H; Obermann, T; Ocariz, J; Ochi, A; Ochoa, I; Oda, S; Odaka, S; Ogren, H; Oh, A; Oh, S H; Ohm, C C; Ohman, H; Oide, H; Okamura, W; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olivares Pino, S A; Oliveira Damazio, D; Oliver Garcia, E; Olszewski, A; Olszowska, J; Onofre, A; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Oropeza Barrera, C; Orr, R S; Osculati, B; Ospanov, R; Otero y Garzon, G; Otono, H; Ouchrif, M; Ouellette, E A; Ould-Saada, F; Ouraou, A; Oussoren, K P; Ouyang, Q; Ovcharova, A; Owen, M; Owen, R E; Ozcan, V E; Ozturk, N; Pachal, K; Pacheco Pages, A; Padilla Aranda, C; Pagáčová, M; Pagan Griso, S; Paganis, E; Pahl, C; Paige, F; Pais, P; Pajchel, K; Palacino, G; Palestini, S; Palka, M; Pallin, D; Palma, A; Pan, Y B; Panagiotopoulou, E; Pandini, C E; Panduro Vazquez, J G; Pani, P; Panitkin, S; Paolozzi, L; Papadopoulou, Th D; Papageorgiou, K; Paramonov, A; 
Paredes Hernandez, D; Parker, M A; Parker, K A; Parodi, F; Parsons, J A; Parzefall, U; Pasqualucci, E; Passaggio, S; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N D; Pater, J R; Pauly, T; Pearce, J; Pearson, B; Pedersen, L E; Pedersen, M; Pedraza Lopez, S; Pedro, R; Peleganchuk, S V; Pelikan, D; Peng, H; Penning, B; Penwell, J; Perepelitsa, D V; Perez Codina, E; Pérez García-Estañ, M T; Perini, L; Pernegger, H; Perrella, S; Peschke, R; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petrolo, E; Petrucci, F; Pettersson, N E; Pezoa, R; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Pickering, M A; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinfold, J L; Pingel, A; Pinto, B; Pires, S; Pitt, M; Pizio, C; Plazak, L; Pleier, M-A; Pleskot, V; Plotnikova, E; Plucinski, P; Pluth, D; Poettgen, R; Poggioli, L; Pohl, D; Polesello, G; Policicchio, A; Polifka, R; Polini, A; Pollard, C S; Polychronakos, V; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Pospisil, S; Potamianos, K; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Pralavorio, P; Pranko, A; Prasad, S; Prell, S; Price, D; Price, L E; Primavera, M; Prince, S; Proissl, M; Prokofiev, K; Prokoshin, F; Protopapadaki, E; Protopopescu, S; Proudfoot, J; Przybycien, M; Ptacek, E; Puddu, D; Pueschel, E; Puldon, D; Purohit, M; Puzo, P; Qian, J; Qin, G; Qin, Y; Quadt, A; Quarrie, D R; Quayle, W B; Queitsch-Maitland, M; Quilty, D; Raddum, S; Radeka, V; Radescu, V; Radhakrishnan, S K; Radloff, P; Rados, P; Ragusa, F; Rahal, G; Rajagopalan, S; Rammensee, M; Rangel-Smith, C; Rauscher, F; Rave, S; Ravenscroft, T; Raymond, M; Read, A L; Readioff, N P; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Rehnisch, L; Reisin, H; Relich, M; Rembser, C; Ren, H; Renaud, A; Rescigno, M; Resconi, S; Rezanova, O L; Reznicek, P; 
Rezvani, R; Richter, R; Richter, S; Richter-Was, E; Ricken, O; Ridel, M; Rieck, P; Riegel, C J; Rieger, J; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Ristić, B; Ritsch, E; Riu, I; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Roda, C; Roe, S; Røhne, O; Rolli, S; Romaniouk, A; Romano, M; Romano Saez, S M; Romero Adam, E; Rompotis, N; Ronzani, M; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, P; Rosendahl, P L; Rosenthal, O; Rossetti, V; Rossi, E; Rossi, L P; Rosten, R; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Rud, V I; Rudolph, C; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rurikova, Z; Rusakovich, N A; Ruschke, A; Russell, H L; Rutherfoord, J P; Ruthmann, N; Ryabov, Y F; Rybar, M; Rybkin, G; Ryder, N C; Saavedra, A F; Sabato, G; Sacerdoti, S; Saddique, A; Sadrozinski, H F-W; Sadykov, R; Safai Tehrani, F; Saimpert, M; Sakamoto, H; Sakurai, Y; Salamanna, G; Salamon, A; Saleem, M; Salek, D; Sales De Bruin, P H; Salihagic, D; Salnikov, A; Salt, J; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Sanchez, A; Sánchez, J; Sanchez Martinez, V; Sandaker, H; Sandbach, R L; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, C; Sandstroem, R; Sankey, D P C; Sannino, M; Sansoni, A; Santoni, C; Santonico, R; Santos, H; Santoyo Castillo, I; Sapp, K; Sapronov, A; Saraiva, J G; Sarrazin, B; Sasaki, O; Sasaki, Y; Sato, K; Sauvage, G; Sauvan, E; Savage, G; Savard, P; Sawyer, C; Sawyer, L; Saxon, J; Sbarra, C; Sbrizzi, A; Scanlon, T; Scannicchio, D A; Scarcella, M; Scarfone, V; Schaarschmidt, J; Schacht, P; Schaefer, D; Schaefer, R; Schaeffer, J; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Schiavi, C; Schillo, C; Schioppa, M; Schlenker, S; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, S; Schmitt, S; Schneider, B; Schnellbach, Y J; Schnoor, U; 
Schoeffel, L; Schoening, A; Schoenrock, B D; Schopf, E; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schramm, S; Schreyer, M; Schroeder, C; Schuh, N; Schultens, M J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwanenberger, C; Schwartzman, A; Schwarz, T A; Schwegler, Ph; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Schwoerer, M; Sciacca, F G; Scifo, E; Sciolla, G; Scuri, F; Scutti, F; Searcy, J; Sedov, G; Sedykh, E; Seema, P; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekhon, K; Sekula, S J; Selbach, K E; Seliverstov, D M; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Sessa, M; Seuster, R; Severini, H; Sfiligoj, T; Sforza, F; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shang, R; Shank, J T; Shapiro, M; Shatalov, P B; Shaw, K; Shaw, S M; Shcherbakova, A; Shehu, C Y; Sherwood, P; Shi, L; Shimizu, S; Shimmin, C O; Shimojima, M; Shiyakova, M; Shmeleva, A; Shoaleh Saadi, D; Shochet, M J; Shojaii, S; Shrestha, S; Shulga, E; Shupe, M A; Shushkevich, S; Sicho, P; Sidiropoulou, O; Sidorov, D; Sidoti, A; Siegert, F; Sijacki, Dj; Silva, J; Silver, Y; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simon, D; Simoniello, R; Sinervo, P; Sinev, N B; Siragusa, G; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skinner, M B; Skottowe, H P; Skubic, P; Slater, M; Slavicek, T; Slawinska, M; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, M N K; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snyder, S; Sobie, R; Socher, F; Soffer, A; Soh, D A; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solodkov, A A; Soloshenko, A; Solovyanov, O V; Solovyev, V; Sommer, P; Song, H Y; Soni, N; Sood, A; Sopczak, A; Sopko, B; Sopko, V; Sorin, V; Sosa, D; Sosebee, M; Sotiropoulou, C L; Soualah, R; Soueid, P; Soukharev, A M; South, D; Spagnolo, S; Spalla, M; Spanò, F; Spearman, 
W R; Spettel, F; Spighi, R; Spigo, G; Spiller, L A; Spousta, M; Spreitzer, T; St Denis, R D; Staerz, S; Stahlman, J; Stamen, R; Stamm, S; Stanecka, E; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Stavina, P; Steinberg, P; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoicea, G; Stolte, P; Stonjek, S; Stradling, A R; Straessner, A; Stramaglia, M E; Strandberg, J; Strandberg, S; Strandlie, A; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Stroynowski, R; Strubig, A; Stucci, S A; Stugu, B; Styles, N A; Su, D; Su, J; Subramaniam, R; Succurro, A; Sugaya, Y; Suhr, C; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, S; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, S; Suzuki, Y; Svatos, M; Swedish, S; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Taccini, C; Tackmann, K; Taenzer, J; Taffard, A; Tafirout, R; Taiblum, N; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A A; Tam, J Y C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tannenwald, B B; Tannoury, N; Tapprogge, S; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tashiro, T; Tassi, E; Tavares Delgado, A; Tayalati, Y; Taylor, F E; Taylor, G N; Taylor, W; Teischinger, F A; Teixeira Dias Castanheira, M; Teixeira-Dias, P; Temming, K K; Ten Kate, H; Teng, P K; Teoh, J J; Tepel, F; Terada, S; Terashi, K; Terron, J; Terzo, S; Testa, M; Teuscher, R J; Therhaag, J; Theveneaux-Pelzer, T; Thomas, J P; Thomas-Wilsker, J; Thompson, E N; Thompson, P D; Thompson, R J; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thun, R P; Tibbetts, M J; Ticse Torres, R E; Tikhomirov, V O; Tikhonov, Yu A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todorov, T; Todorova-Nova, S; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tolley, E; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; 
Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; True, P; Truong, L; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsirintanis, N; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsuno, S; Tsybychev, D; Tudorache, A; Tudorache, V; Tuna, A N; Tupputi, S A; Turchikhin, S; Turecek, D; Turra, R; Turvey, A J; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Ueda, I; Ueno, R; Ughetto, M; Ugland, M; Uhlenbrock, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Unverdorben, C; Urban, J; Urquijo, P; Urrejola, P; Usai, G; Usanova, A; Vacavant, L; Vacek, V; Vachon, B; Valderanis, C; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; Valls Ferrer, J A; Van Den Wollenberg, W; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; Van Der Leeuw, R; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; van Woerden, M C; Vanadia, M; Vandelli, W; Vanguri, R; Vaniachine, A; Vannucci, F; Vardanyan, G; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vazeille, F; Vazquez Schroeder, T; Veatch, J; Veloso, F; Velz, T; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Venturini, A; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Viazlo, O; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Vigne, R; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinogradov, V B; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, M; Vokac, P; Volpi, G; Volpi, M; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorobev, K; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vuillermet, R; Vukotic, 
I; Vykydal, Z; Wagner, P; Wagner, W; Wahlberg, H; Wahrmund, S; Wakabayashi, J; Walder, J; Walker, R; Walkowiak, W; Wang, C; Wang, F; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, X; Wanotayaroj, C; Warburton, A; Ward, C P; Wardrope, D R; Warsinsky, M; Washbrook, A; Wasicki, C; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, B M; Webb, S; Weber, M S; Weber, S W; Webster, J S; Weidberg, A R; Weinert, B; Weingarten, J; Weiser, C; Weits, H; Wells, P S; Wenaus, T; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Wessels, M; Wetter, J; Whalen, K; Wharton, A M; White, A; White, M J; White, R; White, S; Whiteson, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wildauer, A; Wilkens, H G; Williams, H H; Williams, S; Willis, C; Willocq, S; Wilson, A; Wilson, J A; Wingerter-Seez, I; Winklmeier, F; Winter, B T; Wittgen, M; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wu, M; Wu, M; Wu, S L; Wu, X; Wu, Y; Wyatt, T R; Wynne, B M; Xella, S; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yakabe, R; Yamada, M; Yamaguchi, Y; Yamamoto, A; Yamamoto, S; Yamanaka, T; Yamauchi, K; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, Y; Yao, L; Yao, W-M; Yasu, Y; Yatsenko, E; Yau Wong, K H; Ye, J; Ye, S; Yeletskikh, I; Yen, A L; Yildirim, E; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D R; Yu, J; Yu, J M; Yu, J; Yuan, L; Yurkewicz, A; Yusuff, I; Zabinski, B; Zaidan, R; Zaitsev, A M; Zalieckas, J; Zaman, A; Zambito, S; Zanello, L; Zanzi, D; Zeitnitz, C; Zeman, M; Zemla, A; Zengel, K; Zenin, O; Ženiš, T; Zerwas, D; Zhang, D; Zhang, F; Zhang, J; Zhang, L; Zhang, R; Zhang, X; Zhang, Z; Zhao, X; Zhao, Y; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, C; Zhou, L; Zhou, L; Zhou, N; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhukov, K; Zibell, A; Zieminska, D; Zimine, N I; Zimmermann, C; Zimmermann, 
S; Zinonos, Z; Zinser, M; Ziolkowski, M; Živković, L; Zobernig, G; Zoccoli, A; zur Nedden, M; Zurzolo, G; Zwalinski, L

    2015-06-12

    A search for a charged Higgs boson, H(±), decaying to a W(±) boson and a Z boson is presented. The search is based on 20.3 fb(-1) of proton-proton collision data at a center-of-mass energy of 8 TeV recorded with the ATLAS detector at the LHC. The H(±) boson is assumed to be produced via vector-boson fusion and the decays W(±)→qq' and Z→e(+)e(-)/μ(+)μ(-) are considered. The search is performed in a range of charged Higgs boson masses from 200 to 1000 GeV. No evidence for the production of an H(±) boson is observed. Upper limits of 31-1020 fb at 95% C.L. are placed on the cross section for vector-boson fusion production of an H(±) boson times its branching fraction to W(±)Z. The limits are compared with predictions from the Georgi-Machacek Higgs triplet model.

  7. What good cyber resilience looks like.

    PubMed

    Hult, Fredrik; Sivanesan, Giri

    In January 2012, the World Economic Forum ranked cyber attacks as its fourth top global risk. In the 2013 risk report, cyber attacks were noted to be an even higher risk in absolute terms. The reliance of critical infrastructure on cyber working has never been higher; the frequency, intensity, impact and sophistication of attacks are growing. This trend looks likely to continue. It can be argued that it is no longer a question of whether an organisation will be successfully hacked, but of how long it will take to detect. In the ever-changing cyber environment, traditional protection techniques and reliance on preventive controls are not enough. A more agile approach is required to give assurance of a sufficiently secure digital society. Are we faced with a paradigm shift or a storm in a digital teacup? This paper offers an introduction to why cyber is important, a wider taxonomy on the topic, some historical context on how the discipline of cyber security has evolved, and an interpretation of what this means in the new normal of today.

  8. Metaphors for cyber security.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Judy Hennessey; Parrott, Lori K.; Karas, Thomas H.

    2008-08-01

    This report is based upon a workshop, called 'CyberFest', held at Sandia National Laboratories on May 27-30, 2008. Participants in the workshop came from organizations both outside and inside Sandia. The premise of the workshop was that thinking about cyber security from a metaphorical perspective could lead to a deeper understanding of current approaches to cyber defense and perhaps to some creative new approaches. A wide range of metaphors was considered, including those relating to military and other types of conflict, biology, health care, markets, three-dimensional space, and physical asset protection. These in turn led to consideration of a variety of possible approaches for improving cyber security in the future. From the proposed approaches, three were formulated for further discussion. These approaches were labeled 'Heterogeneity' (drawing primarily on the metaphor of biological diversity), 'Motivating Secure Behavior' (taking a market perspective on the adoption of cyber security measures) and 'Cyber Wellness' (exploring analogies with efforts to improve individual and public health).

  9. Towards Resilient Critical Infrastructures: Application of Type-2 Fuzzy Logic in Embedded Network Security Cyber Sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ondrej Linda; Todd Vollmer; Jim Alves-Foss

    2011-08-01

    Resiliency and cyber security of modern critical infrastructures are becoming increasingly important with the growing number of threats in the cyber-environment. This paper proposes an extension to a previously developed fuzzy logic based anomaly detection network security cyber sensor via incorporating Type-2 Fuzzy Logic (T2 FL). In general, fuzzy logic provides a framework for system modeling in linguistic form capable of coping with imprecise and vague meanings of words. T2 FL is an extension of Type-1 FL, which has proved successful in modeling and minimizing the effects of various kinds of dynamic uncertainties. In this paper, T2 FL provides a basis for robust anomaly detection and cyber security state awareness. In addition, the proposed algorithm was specifically developed to comply with the constrained computational requirements of low-cost embedded network security cyber sensors. The performance of the system was evaluated on a set of network data recorded from an experimental cyber-security test-bed.
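
    The interval Type-2 idea the abstract describes can be illustrated with a minimal sketch: instead of a single Type-1 membership value, each observation gets a [lower, upper] membership band (the footprint of uncertainty), and an observation is flagged anomalous when even the optimistic upper bound of its membership in the "normal traffic" set is small. All parameters, thresholds, and function names below are hypothetical illustrations, not the paper's actual sensor implementation.

```python
# Minimal sketch of interval Type-2 fuzzy anomaly scoring.
# Parameters (mean, sigmas, threshold) are illustrative assumptions.
import math

def gaussian(x, mean, sigma):
    """Standard Gaussian membership value in [0, 1]."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def it2_membership(x, mean, sigma_lower, sigma_upper):
    """Interval Type-2 membership: a [lower, upper] band (the footprint
    of uncertainty) rather than a single Type-1 value."""
    lo = gaussian(x, mean, sigma_lower)   # narrower spread -> lower bound
    hi = gaussian(x, mean, sigma_upper)   # wider spread    -> upper bound
    return lo, hi

def is_anomalous(x, mean=100.0, sigma_lower=5.0, sigma_upper=10.0,
                 threshold=0.1):
    """Flag x as anomalous when even the optimistic (upper) membership
    in the 'normal traffic' set falls below the threshold."""
    _, hi = it2_membership(x, mean, sigma_lower, sigma_upper)
    return hi < threshold

print(is_anomalous(102.0))  # near the learned mean -> False
print(is_anomalous(160.0))  # far outside the footprint -> True
```

    Because the scoring reduces to a couple of exponentials per feature, a sketch like this is compatible with the constrained computational budget of an embedded sensor that the abstract emphasizes.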

  10. Introducing cyber.

    PubMed

    Hult, Fredrik; Sivanesan, Giri

    In January 2012, the World Economic Forum ranked cyber attacks as its fourth top global risk. In the 2013 risk report, cyber attacks were noted to be an even higher risk in absolute terms. The reliance of critical infrastructure on cyber working has never been higher; the frequency, intensity, impact and sophistication of attacks are growing. This trend looks likely to continue. It can be argued that it is no longer a question of whether an organisation will be successfully hacked, but of how long it will take to detect. In the ever-changing cyber environment, traditional protection techniques and reliance on preventive controls are not enough. A more agile approach is required to give assurance of a sufficiently secure digital society. Are we faced with a paradigm shift or a storm in a digital teacup? This paper offers an introduction to why cyber is important, a wider taxonomy on the topic, some historical context on how the discipline of cyber security has evolved, and an interpretation of what this means in the new normal of today.

  11. Perceived Parenting and Adolescent Cyber-Bullying: Examining the Intervening Role of Autonomy and Relatedness Need Satisfaction, Empathic Concern and Recognition of Humanness.

    PubMed

    Fousiani, Kyriaki; Dimitropoulou, Panagiota; Michaelides, Michalis P; Van Petegem, Stijn

    Due to the progress in information technology, cyber-bullying is becoming one of the most common forms of interpersonal harm, especially among teenagers. The present study (N = 548) aimed to investigate the relation between perceived parenting style (in terms of autonomy support and psychological control) and cyber-bullying in adolescence. Thereby, the study tested for the intervening role of adolescent need satisfaction (i.e., autonomy and relatedness), empathic concern towards others, and adolescents' recognition of full humanness to cyber-bullying offenders and victims. Findings revealed both a direct and an indirect relation between parenting and cyber-bullying. More specifically, parental psychological control directly predicted cyber-bullying, whereas parental autonomy support related to less cyber-bullying indirectly, as it was associated with the satisfaction of adolescents' need for autonomy, which predicted more empathic concern towards others, which in turn differentially related to recognition of humanness to victims and bullies. The discussion focuses on the implications of the current findings.

  12. Design of Hack-Resistant Diabetes Devices and Disclosure of Their Cyber Safety

    PubMed Central

    Sackner-Bernstein, Jonathan

    2017-01-01

    Background: The focus of the medical device industry and regulatory bodies on cyber security parallels that in other industries, primarily on risk assessment and user education as well as the recognition of and response to infiltration. However, transparency about the safety of marketed devices is lacking, and developers are not embracing optimal design practices with new devices. Achieving cyber safe diabetes devices: Improving clinicians' and patients' understanding of cyber safety, and informing decision making on medical device use practices, requires disclosure by device manufacturers of the results of their cyber security testing. Furthermore, developers should immediately shift their design processes to deliver better cyber safety, exemplified by use of state-of-the-art encryption, secure operating systems, and memory protections from malware. PMID:27837161

  13. Network Intrusion Detection and Visualization using Aggregations in a Cyber Security Data Warehouse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czejdo, Bogdan; Ferragut, Erik M; Goodall, John R

    2012-01-01

    The challenge of achieving situational understanding is a limiting factor in effective, timely, and adaptive cyber-security analysis. Anomaly detection fills a critical role in network assessment and trend analysis, both of which underlie the establishment of comprehensive situational understanding. To that end, we propose a cyber security data warehouse implemented as a hierarchical graph of aggregations that captures anomalies at multiple scales. Each node of our proposed graph is a summarization table of cyber event aggregations, and the edges are aggregation operators. The cyber security data warehouse enables domain experts to quickly and systematically traverse a multi-scale aggregation space. We describe the architecture of a test bed system and a summary of results on the IEEE VAST 2012 Cyber Forensics data.
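
    The hierarchical graph of aggregations described above can be sketched in a few lines: the finest node is a summary table keyed on all event fields, and each edge is a group-by operator that drops keys to produce a coarser node. The event fields and data below are illustrative assumptions, not the paper's actual schema.

```python
# Hedged sketch of a hierarchical aggregation graph over cyber events:
# each node is a summarization table, each edge an aggregation (group-by).
# Field names and sample events are hypothetical.
from collections import Counter

events = [
    {"src": "10.0.0.1", "dst": "10.0.0.9", "hour": 14},
    {"src": "10.0.0.1", "dst": "10.0.0.9", "hour": 14},
    {"src": "10.0.0.2", "dst": "10.0.0.9", "hour": 15},
]

def aggregate(rows, keys):
    """One edge of the graph: roll rows up into a coarser summary table
    counted over the given key tuple."""
    return Counter(tuple(r[k] for k in keys) for r in rows)

# Finest node: counts per (src, dst, hour); coarser nodes drop keys,
# letting an analyst traverse the multi-scale aggregation space.
fine   = aggregate(events, ("src", "dst", "hour"))
by_src = aggregate(events, ("src",))

print(fine[("10.0.0.1", "10.0.0.9", 14)])  # 2
print(by_src[("10.0.0.2",)])               # 1
```

    An anomaly that is invisible in the coarse per-source node (normal total volume) can still stand out in the fine node (all traffic concentrated in one hour to one destination), which is the multi-scale benefit the abstract claims.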

  14. Cyber-physical security of Wide-Area Monitoring, Protection and Control in a smart grid environment

    PubMed Central

    Ashok, Aditya; Hahn, Adam; Govindarasu, Manimaran

    2013-01-01

    Smart grid initiatives will produce a grid that is increasingly dependent on its cyber infrastructure in order to support the numerous power applications necessary to provide improved grid monitoring and control capabilities. However, recent findings documented in government reports and other literature indicate the growing threat of cyber-based attacks, in numbers and sophistication, targeting the nation’s electric grid and other critical infrastructures. Specifically, this paper discusses cyber-physical security of Wide-Area Monitoring, Protection and Control (WAMPAC) from a coordinated cyber attack perspective and introduces a game-theoretic approach to address the issue. Finally, the paper briefly describes how cyber-physical testbeds can be used to evaluate the security research and perform realistic attack-defense studies for smart grid type environments. PMID:25685516

  15. Cyber-physical security of Wide-Area Monitoring, Protection and Control in a smart grid environment.

    PubMed

    Ashok, Aditya; Hahn, Adam; Govindarasu, Manimaran

    2014-07-01

    Smart grid initiatives will produce a grid that is increasingly dependent on its cyber infrastructure in order to support the numerous power applications necessary to provide improved grid monitoring and control capabilities. However, recent findings documented in government reports and other literature indicate the growing threat of cyber-based attacks, in numbers and sophistication, targeting the nation's electric grid and other critical infrastructures. Specifically, this paper discusses cyber-physical security of Wide-Area Monitoring, Protection and Control (WAMPAC) from a coordinated cyber attack perspective and introduces a game-theoretic approach to address the issue. Finally, the paper briefly describes how cyber-physical testbeds can be used to evaluate the security research and perform realistic attack-defense studies for smart grid type environments.

  16. Cyber Workforce Retention

    DTIC Science & Technology

    2016-10-01

    movement to focus on cybersecurity in the private sector. The company has shared intelligence and resources on cyber threats, even going as far as...personnel and 1NX intelligence personnel as well as 14N intelligence officers and the 17D/S cyber operations officers who lead and manage Air Force...threat of cyber incidents, the burgeoning cost of doing business due to cybersecurity infiltrations, and corporate America’s / senior executives

  17. Electronic Warfare for Cyber Warriors

    DTIC Science & Technology

    2008-06-01

    This research paper provides complete course content for the AFIT EENG 509, Electronic Warfare class. It is intended as a replacement for the existing course and designed for Intermediate Developmental Education (IDE) students in the Cyber Warfare degree program. This course provides relevant academic courseware and study material to give cyber warriors an academic and operational perspective on electronic warfare and its integration in the cyber domain.

  18. Beyond Mission Command: Maneuver Warfare for Cyber Command and Control

    DTIC Science & Technology

    2015-05-18

    operation in an A2AD environment. 15. SUBJECT TERMS command and control; maneuver warfare; cyberspace; cyberspace operations; cyber warfare, mission...Some Principles of Cyber Warfare (NWC 2160) (U.S. Naval War College, Joint Military Operations Department, Newport, RI: U.S. Naval War College...research/ innovationleadership.pdf. Crowell, Richard M. Some Principles of Cyber Warfare (NWC 2160). U.S. Naval War College, Joint Military Operations

  19. Cognitive and Non-cognitive Predictors of Career Intentions within Cyber Jobs

    DTIC Science & Technology

    2016-04-01

    in Anaheim, California, April 14-16, 2016. 15. SUBJECT TERMS Cyber, career intentions, retention, cognitive ability, job fit, normative commitment...Predictors of Cyber Job Career Intentions 1 The views, opinions, and findings contained in this article are solely those of the authors and should...other documentation. Cognitive and Non-cognitive Predictors of Career Intentions within Cyber Jobs Kristophor Canali U. S. Army

  20. Special Operations And Cyber Warfare

    DTIC Science & Technology

    2016-12-01

    with the high level of Soldier competency in the 95th for CA Soldiers to retrain and fulfill the cyber requirement. With the reorganization of the...NAVAL POSTGRADUATE SCHOOL MONTEREY, CALIFORNIA THESIS Approved for public release. Distribution is unlimited. SPECIAL OPERATIONS AND CYBER...OPERATIONS AND CYBER WARFARE 5. FUNDING NUMBERS 6. AUTHOR(S) Jason C. Tebedo 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) Naval Postgraduate School

  1. Cyber-Herding and Cyber Activism: Countering Qutbists on the Internet

    DTIC Science & Technology

    2007-12-01

    13 f. Phase 6, Concentrate Web Sites ..........14 g. Phase 7, Develop Darknet ................16 B. CYBER ACTIVISM...continues in Phase 3 with the introduction of web sites owned by the cyber herding program and later on with the introduction of Darknets. The...own doppelganger.) Create several content-rich Darknet environments—a private virtual network where users connect only to people they trust—that

  2. 28 CFR 81.12 - Submission of reports to the “Cyber Tipline” at the National Center for Missing and Exploited...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Submission of reports to the “Cyber... “Cyber Tipline” at the National Center for Missing and Exploited Children. (a) When a provider of... circumstances to the “Cyber Tipline” at the National Center for Missing and Exploited Children Web site (http...

  3. 28 CFR 81.12 - Submission of reports to the “Cyber Tipline” at the National Center for Missing and Exploited...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Submission of reports to the “Cyber... “Cyber Tipline” at the National Center for Missing and Exploited Children. (a) When a provider of... circumstances to the “Cyber Tipline” at the National Center for Missing and Exploited Children Web site (http...

  4. 28 CFR 81.12 - Submission of reports to the “Cyber Tipline” at the National Center for Missing and Exploited...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Submission of reports to the “Cyber... “Cyber Tipline” at the National Center for Missing and Exploited Children. (a) When a provider of... circumstances to the “Cyber Tipline” at the National Center for Missing and Exploited Children Web site (http...

  5. 28 CFR 81.12 - Submission of reports to the “Cyber Tipline” at the National Center for Missing and Exploited...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Submission of reports to the “Cyber... “Cyber Tipline” at the National Center for Missing and Exploited Children. (a) When a provider of... circumstances to the “Cyber Tipline” at the National Center for Missing and Exploited Children Web site (http...

  6. A Case Study on the Development and Implementation of Cyber Capabilities in the United States

    ERIC Educational Resources Information Center

    Walton, Marquetta

    2016-01-01

    The effectiveness of U.S. cyber-capabilities can have a serious effect on the cyber-security stance of the US and significantly impact how well U.S. critical infrastructures are protected. The problem is that the state of the U.S. cyber-security could be negatively impacted by the dependency that the US displays in its use of defensive…

  7. 28 CFR 81.12 - Submission of reports to the “Cyber Tipline” at the National Center for Missing and Exploited...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Submission of reports to the “Cyber... “Cyber Tipline” at the National Center for Missing and Exploited Children. (a) When a provider of... circumstances to the “Cyber Tipline” at the National Center for Missing and Exploited Children Web site (http...

  8. Optimizing CyberShake Seismic Hazard Workflows for Large HPC Resources

    NASA Astrophysics Data System (ADS)

    Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.

    2014-12-01

    The CyberShake computational platform is a well-integrated collection of scientific software and middleware that calculates 3D simulation-based probabilistic seismic hazard curves and hazard maps for the Los Angeles region. Currently each CyberShake model comprises about 235 million synthetic seismograms from about 415,000 rupture variations computed at 286 sites. CyberShake integrates large-scale parallel and high-throughput serial seismological research codes into a processing framework in which early stages produce files used as inputs by later stages. Scientific workflow tools are used to manage the jobs, data, and metadata. The Southern California Earthquake Center (SCEC) developed the CyberShake platform using USC High Performance Computing and Communications systems and open-science NSF resources. CyberShake calculations were migrated to the NSF Track 1 system NCSA Blue Waters when it became operational in 2013, via an interdisciplinary team approach including domain scientists, computer scientists, and middleware developers. Due to the excellent performance of Blue Waters and CyberShake software optimizations, we reduced the makespan (a measure of wallclock time-to-solution) of a CyberShake study from 1467 to 342 hours. We will describe the technical enhancements behind this improvement, including judicious introduction of new GPU software, improved scientific software components, increased workflow-based automation, and Blue Waters-specific workflow optimizations. Our CyberShake performance improvements highlight the benefits of scientific workflow tools. The CyberShake workflow software stack includes the Pegasus Workflow Management System (Pegasus-WMS, which includes Condor DAGMan), HTCondor, and Globus GRAM, with Pegasus-mpi-cluster managing the high-throughput tasks on the HPC resources.
The workflow tools handle data management, automatically transferring about 13 TB back to SCEC storage. We will present performance metrics from the most recent CyberShake study, executed on Blue Waters. We will compare the performance of CPU and GPU versions of our large-scale parallel wave propagation code, AWP-ODC-SGT. Finally, we will discuss how these enhancements have enabled SCEC to move forward with plans to increase the CyberShake simulation frequency to 1.0 Hz.

  9. Using CyberShake Workflows to Manage Big Seismic Hazard Data on Large-Scale Open-Science HPC Resources

    NASA Astrophysics Data System (ADS)

    Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.

    2015-12-01

    The CyberShake computational platform, developed by the Southern California Earthquake Center (SCEC), is an integrated collection of scientific software and middleware that performs 3D physics-based probabilistic seismic hazard analysis (PSHA) for Southern California. CyberShake integrates large-scale and high-throughput research codes to produce probabilistic seismic hazard curves for individual locations of interest and hazard maps for an entire region. A recent CyberShake calculation produced about 500,000 two-component seismograms for each of 336 locations, resulting in over 300 million synthetic seismograms in a Los Angeles-area probabilistic seismic hazard model. CyberShake calculations require a series of scientific software programs. Early computational stages produce data used as inputs by later stages, so we describe CyberShake calculations using a workflow definition language. Scientific workflow tools automate and manage the input and output data and enable remote job execution on large-scale HPC systems. To satisfy the requests of broad impact users of CyberShake data, such as seismologists, utility companies, and building code engineers, we successfully completed CyberShake Study 15.4 in April and May 2015, calculating a 1 Hz urban seismic hazard map for Los Angeles. We distributed the calculation between the NSF Track 1 system NCSA Blue Waters, the DOE Leadership-class system OLCF Titan, and USC's Center for High Performance Computing. This study ran for over 5 weeks, burning about 1.1 million node-hours and producing over half a petabyte of data. The CyberShake Study 15.4 results doubled the maximum simulated seismic frequency from 0.5 Hz to 1.0 Hz as compared to previous studies, representing a factor of 16 increase in computational complexity. We will describe how our workflow tools supported splitting the calculation across multiple systems. 
We will explain how we modified CyberShake software components, including GPU implementations and migrating from file-based communication to MPI messaging, to greatly reduce the I/O demands and node-hour requirements of CyberShake. We will also present performance metrics from CyberShake Study 15.4, and discuss challenges that producers of Big Data on open-science HPC resources face moving forward.
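The stated factor-of-16 increase in computational complexity follows directly from doubling the maximum simulated frequency: a finite-difference wave simulation must refine the grid in all three spatial dimensions and shrink the time step proportionally, so cost grows roughly as the fourth power of the frequency ratio. A quick sketch of that arithmetic (an illustrative scaling model, not CyberShake code):

```python
def wave_sim_cost_factor(freq_ratio: float) -> float:
    """Relative cost of a finite-difference wave simulation when the
    maximum resolved frequency is scaled by freq_ratio: the grid is
    refined in x, y, z and the time step shrinks proportionally."""
    spatial = freq_ratio ** 3   # finer grid spacing in three dimensions
    temporal = freq_ratio       # proportionally smaller time step
    return spatial * temporal

print(wave_sim_cost_factor(1.0 / 0.5))  # 0.5 Hz -> 1.0 Hz
```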

  10. Edge smoothing for real-time simulation of a polygon face object system as viewed by a moving observer

    NASA Technical Reports Server (NTRS)

    Lotz, Robert W. (Inventor); Westerman, David J. (Inventor)

    1980-01-01

    The visual system within an aircraft flight simulation system receives flight data and terrain data which is formatted into a buffer memory. The image data is forwarded to an image processor which translates the image data into face vertex vectors Vf, defining the position relationship between the vertices of each terrain object and the aircraft. The image processor then rotates, clips, and projects the image data into two-dimensional display vectors (Vd). A display generator receives the Vd faces, and other image data to provide analog inputs to CRT devices which provide the window displays for the simulated aircraft. The video signal to the CRT devices passes through an edge smoothing device which prolongs the rise time (and fall time) of the video data inversely as the slope of the edge being smoothed. An operational amplifier within the edge smoothing device has a plurality of independently selectable feedback capacitors each having a different value. The capacitor values form a binary-weighted series, each twice the value of the previous. Each feedback capacitor has a fast switch responsive to the corresponding bit of a digital binary control word for selecting (1) or not selecting (0) that capacitor. The control word is determined by the slope of each edge. The resulting actual feedback capacitance for each edge is the sum of all the selected capacitors and is directly proportional to the value of the binary control word. The output rise time (or fall time) is a function of the feedback capacitance, and is controlled by the slope through the binary control word.
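The binary-weighted capacitor selection described above reduces to simple arithmetic: each bit of the control word switches in a capacitor twice the value of the previous one, so the total feedback capacitance is directly proportional to the word's value. A minimal sketch of that arithmetic (an illustrative model, not the patented circuit):

```python
def feedback_capacitance(control_word: int, c_unit: float = 1.0, bits: int = 8) -> float:
    """Total feedback capacitance of a binary-weighted bank: bit k of the
    control word switches in a capacitor of value c_unit * 2**k, so the
    total equals c_unit times the control word's value."""
    return sum(c_unit * (1 << k) for k in range(bits) if (control_word >> k) & 1)

# A larger control word (shallower edge slope) selects more capacitance,
# giving a longer rise/fall time at the smoothing amplifier's output.
print(feedback_capacitance(0b1011))  # bits 0, 1, 3 selected: 1 + 2 + 8
```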

  11. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    DOE PAGES

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; ...

    2017-06-01

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMMᵀ by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
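The SpMM kernel named above multiplies a sparse matrix by a block of dense vectors at once. A minimal pure-Python sketch over the common CSR layout (the paper's optimized kernels use the CSB format instead, which this sketch does not implement):

```python
def spmm_csr(indptr, indices, data, X):
    """Compute Y = A @ X, where A is an n x n sparse matrix in CSR form
    (indptr, indices, data) and X is a dense n x m block of vectors,
    given as a list of rows. Returns Y as a list of rows."""
    n, m = len(indptr) - 1, len(X[0])
    Y = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for p in range(indptr[i], indptr[i + 1]):   # nonzeros of row i
            j, a = indices[p], data[p]
            for k in range(m):                       # one load of a serves m FMAs
                Y[i][k] += a * X[j][k]
    return Y
```

Blocking the vectors is the point: each sparse matrix element is loaded once and reused across all m vectors, which is why SpMM outperforms m separate matrix-vector products on memory-bound hardware.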

  12. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMMᵀ by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.

  13. Human-Technology Centric In Cyber Security Maintenance For Digital Transformation Era

    NASA Astrophysics Data System (ADS)

    Ali, Firkhan Ali Bin Hamid; Zalisham Jali, Mohd, Dr

    2018-05-01

    The digital transformation of organizations continues to expand, driven by active demand for ICT services across both government agencies and the private sector. While digital transformation has led manufacturers to incorporate sensors and software analytics into their offerings, the same innovation has also brought pressure to offer clients more accommodating appliance deployment options. Cyber infrastructure and equipment therefore need to be deployed according to a well-considered plan, and cyber security plays an important role in ensuring that ICT components and infrastructures operate well in support of the organization's business. This paper presents a study of security management models to guide security maintenance of existing cyber infrastructures, combining security workforces and security processes to derive the maintenance requirements. The assessment focuses on cyber security maintenance within security models for cyber infrastructures and presents an approach for theoretical and practical analysis based on the selected security management models. The proposed model then performs an evaluation of this analysis, which can be used to obtain insights into a configuration and to specify desired and undesired configurations. Cyber security maintenance within the security management model was implemented in a prototype and evaluated for practical and theoretical scenarios. Furthermore, a framework model is presented that allows configuration changes in agile and dynamic cyber infrastructure environments to be evaluated with regard to properties such as vulnerabilities or expected availability. From a security perspective, this evaluation can be used to monitor the security level of a configuration over its lifetime and to indicate degradations.

  14. Modeling Cyber Conflicts Using an Extended Petri Net Formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakrzewska, Anita N; Ferragut, Erik M

    2011-01-01

    When threatened by automated attacks, critical systems that require human-controlled responses have difficulty making optimal responses and adapting protections in real-time and may therefore be overwhelmed. Consequently, experts have called for the development of automatic real-time reaction capabilities. However, a technical gap exists in the modeling and analysis of cyber conflicts to automatically understand the repercussions of responses. There is a need for modeling cyber assets that accounts for concurrent behavior, incomplete information, and payoff functions. We address this need by extending the Petri net formalism to allow real-time cyber conflicts to be modeled in a way that is expressive and concise. This formalism includes transitions controlled by players as well as firing rates attached to transitions. This allows us to model both player actions and factors that are beyond the control of players in real-time. We show that our formalism is able to represent situational awareness, concurrent actions, incomplete information and objective functions. These factors make it well-suited to modeling cyber conflicts in a way that allows for useful analysis. MITRE has compiled the Common Attack Pattern Enumeration and Classification (CAPEC), an extensive list of cyber attacks at various levels of abstraction. CAPEC includes factors such as attack prerequisites, possible countermeasures, and attack goals. These elements are vital to understanding cyber attacks and to generating the corresponding real-time responses. We demonstrate that the formalism can be used to extract precise models of cyber attacks from CAPEC. Several case studies show that our Petri net formalism is more expressive than other models, such as attack graphs, for modeling cyber conflicts and that it is amenable to exploring cyber strategies.
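A plain Petri net (places holding tokens, transitions that consume and produce them) can be sketched in a few lines; the paper's extended formalism additionally attaches controlling players and firing rates to transitions, which this hypothetical toy omits. The place and transition names below are illustrative, not drawn from the paper or CAPEC:

```python
class PetriNet:
    """Minimal Petri net: a marking (place -> token count) plus transitions
    that consume tokens from input places and produce tokens in outputs."""

    def __init__(self, marking):
        self.marking = dict(marking)
        self.transitions = {}  # name -> (inputs, outputs), each place -> count

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (dict(inputs), dict(outputs))

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Toy attack step: exploiting requires a vulnerability and access, and
# yields a compromised host; firing consumes the enabling tokens.
net = PetriNet({"vuln": 1, "access": 1})
net.add_transition("exploit", {"vuln": 1, "access": 1}, {"compromised": 1})
net.fire("exploit")
print(net.marking)
```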

  15. Department of Homeland Security

    MedlinePlus

    ... Release Joint Technical Alerts on Malicious North Korean Cyber Activity Today, DHS and FBI released a pair ... María Provide Feedback to DHS Protect Myself from Cyber Attacks Report Cyber Incidents Prepare My Family for ...

  16. Longitudinal Associations in Youth Involvement as Victimized, Bullying, or Witnessing Cyberbullying.

    PubMed

    Holfeld, Brett; Mishna, Faye

    2018-04-01

    Although cyberbullying has been linked to cyber victimization, it is unknown whether witnessing cyberbullying impacts and is impacted by experiences of cyberbullying and victimization. In the current study, we examine the frequency of youth involved as victimized, bullying, and witnessing cyberbullying and how these experiences are associated across three academic years. Participants comprised 670 Canadian students who began the longitudinal study in grades 4, 7, or 10 at Time 1 (T1). Cyber witnessing represented the largest role of youth involvement in cyberbullying. Cyber witnessing was positively associated with both cyberbullying and victimization. Cyber victimization at T1 was positively associated with cyber witnessing at T2, which was positively related to both cyberbullying and victimization at T3. Findings highlight the significance of addressing the role of cyber witnesses in cyberbullying prevention and intervention efforts.

  17. Cyber-Physical System Security of Smart Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dagle, Jeffery E.

    2012-01-31

    Abstract—This panel presentation will provide perspectives of cyber-physical system security of smart grids. As smart grid technologies are deployed, the interconnected nature of these systems is becoming more prevalent and more complex, and the cyber component of this cyber-physical system is increasing in importance. Studying system behavior in the face of failures (e.g., cyber attacks) allows a characterization of the systems’ response to failure scenarios, loss of communications, and other changes in system environment (such as the need for emergent updates and rapid reconfiguration). The impact of such failures on the availability of the system can be assessed and mitigation strategies considered. Scenarios associated with confidentiality, integrity, and availability are considered. The cyber security implications associated with the American Recovery and Reinvestment Act of 2009 in the United States are discussed.

  18. Remodeling Air Force Cyber Command and Control

    DTIC Science & Technology

    2017-10-10

    AIR FORCE CYBERWORX REPORT: REMODELING AIR FORCE CYBER COMMAND & CONTROL COURSE DESIGN PROJECT CONDUCTED 5 Jan – 5 May 17 Produced...For the Air Force Cyber Command and Control (C2) Design Project, CyberWorx brought together 25 cadets from the United States Air Force Academy...warfighting based upon the findings of the design teams. Participants The design course was attended by a diverse group of civilians from industry

  19. The U.S. Needs International Cyber Treaties

    DTIC Science & Technology

    2010-03-01

    formed to deal with issues surrounding cyber warfare. However, no major treaties between nations exist regarding this form of combat. Examining...the history of cyber warfare, the inadequate international response, the obstacles to international agreement, and poor U.S. readiness demonstrates...the current need for the U.S. to lead the effort to codify treaties. First, a brief history of cyber warfare helps to shed light on the international

  20. Cyberspace as a Complex Adaptive System and the Policy and Operational Implications for Cyber Warfare

    DTIC Science & Technology

    2014-05-22

    CYBERSPACE AS A COMPLEX ADAPTIVE SYSTEM AND THE POLICY AND OPERATIONAL IMPLICATIONS FOR CYBER WARFARE A Monograph by Major Albert O. Olagbemiro...serves the US, especially in regards to the protection of the 15. SUBJECT TERMS Complex Adaptive System, Cyberspace, Infosphere, Cyber Warfare...System and the Policy and Operational Implications for Cyber Warfare Approved by: __________________________________, Monograph Director Jeffrey

  1. Cyber Capabilities for Global Strike in 2035

    DTIC Science & Technology

    2012-02-15

    operations force, by treating cyber warfare capabilities in the same manner as it treats its other weapon systems. It argues that despite preconceptions of...As such, while automation is required, cyber warfare will be much more manpower intensive than is currently understood, and will require a force that...constantly keeping cyber warfare capabilities in pace with the technologies of the environment.This paper reaches these conclusions by first providing a

  2. Cyber Security: A Road Map for Turkey

    DTIC Science & Technology

    2012-03-19

    Cyber warfare is a form of information warfare, sometimes seen as analogous to conventional warfare, among a range of potential actors, including...nation states, non-state groups, and a complex hybrid of conflict involving both state and non-state actors. Cyber warfare is a tool of national power...An entire nation's ability to operate and fight in the information age is vital toward survival. Nowadays, cyber warfare is mostly focused on

  3. Cyber Capabilities for Global Strike in 2035

    DTIC Science & Technology

    2012-02-15

    operations force, by treating cyber warfare capabilities in the same manner as it treats its other weapon systems. It argues that despite preconceptions of...As such, while automation is required, cyber warfare will be much more manpower intensive than is currently understood, and will require a force...constantly keeping cyber warfare capabilities in pace with the technologies of the environment. This paper reaches these conclusions by first providing a

  4. Multinational Experiment 7. Outcome 3 - Cyber Domain Objective 3.4: Cyber Situational Awareness Standard Operating Procedure

    DTIC Science & Technology

    2012-12-01

    and activity coordination (for example, SOC management). 10. In Reference D the information sharing framework represents a hub & node model in... management, vulnerabilities, critical assets, threats, impacts on operations etc. PART 3 - CYBER SITUATIONAL AWARENESS...limit the effect of cyber incidents. 23. Tasks of the SOC include: • System maintenance and management including applying the directed security

  5. FORMAL MODELING, MONITORING, AND CONTROL OF EMERGENCE IN DISTRIBUTED CYBER PHYSICAL SYSTEMS

    DTIC Science & Technology

    2018-02-23

    FORMAL MODELING, MONITORING, AND CONTROL OF EMERGENCE IN DISTRIBUTED CYBER-PHYSICAL SYSTEMS UNIVERSITY OF TEXAS AT ARLINGTON FEBRUARY 2018 FINAL...COVERED (From - To) APR 2015 – APR 2017 4. TITLE AND SUBTITLE FORMAL MODELING, MONITORING, AND CONTROL OF EMERGENCE IN DISTRIBUTED CYBER-PHYSICAL...dated 16 Jan 09 13. SUPPLEMENTARY NOTES 14. ABSTRACT This project studied emergent behavior in distributed cyber-physical systems (DCPS). Emergent

  6. Defending the New Domain: Cyberspace

    DTIC Science & Technology

    2011-03-21

    and time consuming, the FBI has proven technology, techniques, and procedures to hunt down and capture cyber criminals, despite the anonymity of the...help stop cyber crime and make it harder for cyber criminals to commit crime on the internet. MILITARY The U.S. military relies on DoD...one country and attacking another. Cyber criminals are using servers and proxies in one country to execute their criminal activities in another

  7. Information System Incidents: The Development of a Damage Assessment Model

    DTIC Science & Technology

    1999-12-01

    Cyber criminals use creativity, knowledge, software, and hardware to attack and infiltrate information systems (IS) in order to copy, delete, or...the Internet led to an increase in cyber criminals and a variety of cyber crimes such as attacks, intrusions, introduction of viruses, and data theft...organizations on information systems is contributing to the increased number of cyber criminals. Additionally, the growing sophistication and availability of

  8. A Game Theoretic Model Of Strategic Conflict In Cyberspace

    DTIC Science & Technology

    2012-01-01

    EXECUTIVE SUMMARY Conflict in cyberspace is difficult to analyze; methods developed for other dimensions of conflict, such as land warfare, war at sea...and missile warfare, do not adequately address cyber conflict. A characteristic that distinguishes cyber conflict is that actors do not know the...strategic and policy guidance. To analyze the strategic decisions involved in cyber conflict, we use a game theoretic framework—we view cyber warfare as a

  9. Building An Adaptive Cyber Strategy

    DTIC Science & Technology

    2016-06-01

    forces. The primary mission of the military in any domain, including cyber, should be readiness to exert force if needed during crisis. AU/ACSC/SMITH...of crisis. The military must be able to manipulate the cyber environment, but should avoid direct use of force against...operations focus on maintaining a manageable threat level. Cyberspace is a continually evolving domain, and nations throughout the world can join in cyber

  10. Defense Civil Support: DOD Needs to Identify National Guard's Cyber Capabilities and Address Challenges in Its Exercises

    DTIC Science & Technology

    2016-09-01

    Congress. Consequently, as prepared now, this report does not help DOD leaders identify assets that could be used in a cyber crisis scenario...Guidance. GAO-13-128. Washington, D.C.: October 24, 2012. Defense Cyber Efforts: Management Improvements Needed to Enhance Programs Protecting the...DEFENSE CIVIL SUPPORT DOD Needs to Identify National Guard’s Cyber Capabilities and Address Challenges in Its

  11. Defense Science Board (DSB) Task Force on Cyber Deterrence

    DTIC Science & Technology

    2017-02-01

    the 2015 cyber heist from the Office of Personnel Management of some 18 million records containing personal information as so egregious as to warrant...major war, including through misperception and inadvertent escalation. The dynamics of cyber offensive weapons will increase challenges to crisis...may have hoped to gain. − Proposed norms for the conduct of offensive cyber operations, in crisis and conflict. These norms will provide boundaries

  12. Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data

    DTIC Science & Technology

    2017-03-02

    AFRL-AFOSR-UK-TR-2017-0020 Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data Philip Walther UNIVERSITÄT WIEN Final...REPORT TYPE Final 3. DATES COVERED (From - To) 15 Oct 2015 to 31 Dec 2016 4. TITLE AND SUBTITLE Quantum-Enhanced Cyber Security: Experimental Computation...FORM SF 298 Final Report for FA9550-1-6-1-0004 Quantum-enhanced cyber security: Experimental quantum computation with quantum-encrypted data

  13. The Current Status and Future Directions in the Development of the Cyber Home Learning System in Korea

    ERIC Educational Resources Information Center

    Kang, Myunghee; Kim, Seyoung; Yoon, Seonghye; Chung, Warren

    2017-01-01

    The purpose of this study was to set future directions of the Cyber Home Learning System in Korea based on its current status. The Cyber Home Learning System has been designed and used by K-12 students to study voluntarily at home using online lessons. The development process of the Cyber Home Learning System was composed of the following four…

  14. Functional Mission Analysis (FMA) for the Air Force Cyber Squadron Initiative (CS I)

    DTIC Science & Technology

    2017-03-17

    Analysis for the Air Force Cyber Squadron Initiative DESIGN PROJECT CONDUCTED 13 FEB – 17 FEB 17 Produced with input from numerous units...Success The Air Force’s base-level Communications Squadrons are engaged in a cultural and technological transformation through the Cyber Squadron...sharpening their focus to include active cyber defense and mission assurance as core competencies to enable operational advantages and out-maneuver our

  15. Timing of cyber conflict

    PubMed Central

    Axelrod, Robert; Iliev, Rumen

    2014-01-01

    Nations are accumulating cyber resources in the form of stockpiles of zero-day exploits as well as other novel methods of engaging in future cyber conflict against selected targets. This paper analyzes the optimal timing for the use of such cyber resources. A simple mathematical model is offered to clarify how the timing of such a choice can depend on the stakes involved in the present situation, as well as the characteristics of the resource for exploitation. The model deals with the question of when the resource should be used given that its use today may well prevent it from being available for use later. The analysis provides concepts, theory, applications, and distinctions to promote the understanding of strategic aspects of cyber conflict. Case studies include the Stuxnet attack on Iran’s nuclear program, the Iranian cyber attack on the energy firm Saudi Aramco, the persistent cyber espionage carried out by the Chinese military, and an analogous case of economic coercion by China in a dispute with Japan. The effects of the rapidly expanding market for zero-day exploits are also analyzed. The goal of the paper is to promote the understanding of this domain of cyber conflict to mitigate the harm it can do, and harness the capabilities it can provide. PMID:24474752
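The timing trade-off the abstract describes (use the resource now and risk burning it, or hold it and risk it expiring) can be illustrated with a toy expected-value comparison. This is a simplified sketch, not the paper's actual model; the parameter names (stealth, persistence) follow the abstract's informal description:

```python
def use_now(stakes_now, expected_future_stakes, stealth, persistence):
    """Toy timing rule: use the cyber resource now iff the immediate payoff,
    plus whatever survives for later use, beats holding the resource.

    stealth     -- probability the resource remains usable after being used
    persistence -- probability an unused resource is still usable later
    """
    value_of_using = stakes_now + stealth * expected_future_stakes
    value_of_waiting = persistence * expected_future_stakes
    return value_of_using >= value_of_waiting
```

In this toy, a fragile-when-used but long-lived resource (low stealth, high persistence) is worth saving unless current stakes are high, which matches the abstract's intuition that timing depends on both the stakes and the resource's characteristics.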

  16. Bullying in school and cyberspace: Associations with depressive symptoms in Swiss and Australian adolescents

    PubMed Central

    2010-01-01

    Background Cyber-bullying (i.e., bullying via electronic means) has emerged as a new form of bullying that presents unique challenges to those victimised. Recent studies have demonstrated that there is a significant conceptual and practical overlap between both types of bullying such that most young people who are cyber-bullied also tend to be bullied by more traditional methods. Despite the overlap between traditional and cyber forms of bullying, it remains unclear if being a victim of cyber-bullying has the same negative consequences as being a victim of traditional bullying. Method The current study investigated associations between cyber versus traditional bullying and depressive symptoms in 374 and 1320 students from Switzerland and Australia respectively (52% female; Age: M = 13.8, SD = 1.0). All participants completed a bullying questionnaire (assessing perpetration and victimisation of traditional and cyber forms of bullying behaviour) in addition to scales on depressive symptoms. Results Across both samples, traditional victims and bully-victims reported more depressive symptoms than bullies and non-involved children. Importantly, victims of cyber-bullying reported significantly higher levels of depressive symptoms, even when controlling for the involvement in traditional bullying/victimisation. Conclusions Overall, cyber-victimisation emerged as an additional risk factor for depressive symptoms in adolescents involved in bullying. PMID:21092266

  17. Protecting water and wastewater infrastructure from cyber attacks

    NASA Astrophysics Data System (ADS)

    Panguluri, Srinivas; Phillips, William; Cusimano, John

    2011-12-01

    Multiple organizations over the years have collected and analyzed data on cyber attacks, and they all agree on one conclusion: cyber attacks are real and can cause significant damage. This paper presents some recent statistics on cyber attacks and the resulting damage. Water and wastewater utilities must adopt countermeasures to prevent or minimize the damage in case of such attacks. The water and wastewater industry faces many unique challenges in selecting and implementing security countermeasures; the key challenges are: 1) the increasing interconnection of their business and control system networks, 2) the large variety of proprietary industrial control equipment in use, 3) the multitude of cross-sector cyber-security standards, and 4) the differences in equipment vendors' approaches to meeting these security standards. Utilities can meet these challenges by voluntarily selecting and adopting security standards, conducting a gap analysis, performing vulnerability/risk analysis, and undertaking the countermeasures that best meet their security and organizational requirements. Utilities should optimally use their limited resources to prepare and implement programs designed to increase cyber-security over the years. Implementing cyber security does not have to be expensive; substantial improvements can be accomplished through policy, procedure, training, and awareness. Utilities can also get creative and allocate more funding through annual budgets, reducing dependence on capital improvement programs to achieve improvements in cyber-security.
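    A common first pass in the vulnerability/risk analysis the paper recommends is to score each threat as likelihood times impact and rank countermeasure work accordingly. The sketch below illustrates that pattern; the threat names and the 1-5 scales are illustrative assumptions, not examples from the paper.

```python
def prioritize(threats):
    """Rank threats by risk score (likelihood x impact), highest first."""
    return sorted(threats, key=lambda t: t["likelihood"] * t["impact"], reverse=True)

# Hypothetical findings from a utility's gap analysis (illustrative only).
threats = [
    {"name": "unpatched HMI workstation",      "likelihood": 4, "impact": 5},
    {"name": "flat business/control network",  "likelihood": 3, "impact": 5},
    {"name": "default vendor passwords",       "likelihood": 5, "impact": 3},
]

ranked = prioritize(threats)   # highest-risk item first
```

    Scoring is deliberately crude at this stage; its value is in forcing an explicit, comparable ranking before limited budget is allocated.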

  18. Impact modeling and prediction of attacks on cyber targets

    NASA Astrophysics Data System (ADS)

    Khalili, Aram; Michalk, Brian; Alford, Lee; Henney, Chris; Gilbert, Logan

    2010-04-01

    In most organizations, IT (information technology) infrastructure exists to support the organization's mission. The threat of cyber attacks poses risks to this mission. Current network security research focuses on the threat of cyber attacks to the organization's IT infrastructure; however, the risks to the overall mission are rarely analyzed or formalized. This connection of IT infrastructure to the organization's mission is often neglected or carried out ad-hoc. Our work bridges this gap and introduces analyses and formalisms to help organizations understand the mission risks they face from cyber attacks. Modeling an organization's mission vulnerability to cyber attacks requires a description of the IT infrastructure (network model), the organization mission (business model), and how the mission relies on IT resources (correlation model). With this information, proper analysis can show which cyber resources are of tactical importance in a cyber attack, i.e., controlling them enables a large range of cyber attacks. Such analysis also reveals which IT resources contribute most to the organization's mission, i.e., lack of control over them gravely affects the mission. These results can then be used to formulate IT security strategies and explore their trade-offs, which leads to better incident response. This paper presents our methodology for encoding IT infrastructure, organization mission and correlations, our analysis framework, as well as initial experimental results and conclusions.
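    The correlation model the paper describes (how the mission relies on IT resources) can be represented minimally as a mapping from mission functions to the resources they need, from which one can count how many functions fail without each resource. This is a deliberately crude stand-in for the paper's analysis; the mission functions and resource names below are invented for illustration.

```python
# Correlation model: mission function -> IT resources it depends on
# (all names are hypothetical).
correlation = {
    "payroll":      {"db-server", "auth-server"},
    "email":        {"mail-server", "auth-server"},
    "web-presence": {"web-server"},
}

def mission_criticality(correlation):
    """For each IT resource, count the mission functions that fail
    without it -- a first cut at 'which resources contribute most
    to the organization's mission'."""
    counts = {}
    for function, resources in correlation.items():
        for resource in resources:
            counts[resource] = counts.get(resource, 0) + 1
    return counts

counts = mission_criticality(correlation)
# Here "auth-server" supports two mission functions, so its compromise
# would be tactically important in the sense the paper describes.
```

    A fuller treatment would weight mission functions by importance and model transitive dependencies between IT resources, but even this counting view makes the IT-to-mission link explicit rather than ad-hoc.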

  19. Simulations in Cyber-Security: A Review of Cognitive Modeling of Network Attackers, Defenders, and Users

    PubMed Central

    Veksler, Vladislav D.; Buchler, Norbou; Hoffman, Blaine E.; Cassenti, Daniel N.; Sample, Char; Sugrim, Shridat

    2018-01-01

    Computational models of cognitive processes may be employed in cyber-security tools, experiments, and simulations to address human agency and effective decision-making in keeping computational networks secure. Cognitive modeling can address multi-disciplinary cyber-security challenges requiring cross-cutting approaches over the human and computational sciences, such as the following: (a) adversarial reasoning and behavioral game theory to predict attacker subjective utilities and decision likelihood distributions, (b) human factors of cyber tools to address human system integration challenges, estimation of defender cognitive states, and opportunities for automation, (c) dynamic simulations involving attacker, defender, and user models to enhance studies of cyber epidemiology and cyber hygiene, and (d) training effectiveness research and training scenarios to address human cyber-security performance, maturation of cyber-security skill sets, and effective decision-making. Models may initially be constructed at the group level based on the mean tendencies of each subject's subgroup, using known statistics such as specific skill proficiencies, demographic characteristics, and cultural factors. For more precise and accurate predictions, cognitive models may be fine-tuned to each individual attacker, defender, or user profile, and updated over time (based on recorded behavior) via techniques such as model tracing and dynamic parameter fitting. PMID:29867661
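    The dynamic parameter fitting the abstract mentions can be illustrated with the simplest possible update rule: start a model parameter at the group-level mean, then nudge it toward each newly recorded behavior. This is a generic sketch of the idea, not the specific fitting procedure used in any of the reviewed models; the parameter and its coding are assumptions.

```python
def update_parameter(estimate, observation, learning_rate=0.2):
    """One step of dynamic parameter fitting: move a per-individual
    model parameter a fraction of the way toward the observed behavior."""
    return estimate + learning_rate * (observation - estimate)

# Start from a group-level prior (say, an attacker's risk tolerance on
# a 0..1 scale), then personalize as behavior is recorded.
estimate = 0.5
for observed in [0.9, 0.8, 1.0]:   # hypothetical risky choices, coded 0..1
    estimate = update_parameter(estimate, observed)
# The estimate drifts upward toward this individual's observed tendency.
```

    Real cognitive architectures fit richer parameter sets (e.g. via model tracing against step-by-step behavior), but the group-prior-then-individual-update structure is the same.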

  20. Dataset of anomalies and malicious acts in a cyber-physical subsystem.

    PubMed

    Laso, Pedro Merino; Brosset, David; Puentes, John

    2017-10-01

    This article presents a dataset produced to investigate how data and information quality estimations make it possible to detect anomalies and malicious acts in cyber-physical systems. Data were acquired using a cyber-physical subsystem consisting of liquid containers for fuel or water, along with its automated control and data acquisition infrastructure. The described data consist of temporal series representing five operational scenarios - normal, anomalies, breakdown, sabotages, and cyber-attacks - corresponding to 15 different real situations. The dataset is publicly available in the .zip file published with the article, to investigate and compare faulty-operation detection and characterization methods for cyber-physical systems.
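    A baseline detector one might benchmark against such temporal series flags points that deviate sharply from a trailing window of recent values. This is a generic sketch for illustration only, not the quality-estimation method the article investigates; the window size and threshold are arbitrary assumptions.

```python
import statistics

def label_anomalies(series, window=5, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the
    mean of the preceding `window` values -- a simple baseline for
    anomaly detection in a sensor time series."""
    flags = []
    for i, x in enumerate(series):
        past = series[max(0, i - window):i]
        if len(past) >= 2:
            mu = statistics.mean(past)
            sd = statistics.pstdev(past) or 1e-9  # guard a flat window
            flags.append(abs(x - mu) / sd > threshold)
        else:
            flags.append(False)  # not enough history yet
    return flags

# A steady level reading followed by a sudden jump (illustrative values):
flags = label_anomalies([10.0, 10.0, 10.0, 10.0, 10.0, 50.0])
# Only the jump is flagged.
```

    Such a baseline is exactly what labeled scenarios (normal vs. sabotage vs. cyber-attack) let researchers compare richer detection methods against.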
