Fringe Capacitance of a Parallel-Plate Capacitor.
ERIC Educational Resources Information Center
Hale, D. P.
1978-01-01
Describes an experiment designed to measure the forces between charged parallel plates, and determines the relationship among the effective electrode area, the measured capacitance values, and the electrode spacing of a parallel plate capacitor. (GA)
The Perspective Structure of Visual Space
2015-01-01
Luneburg’s model has been the reference for experimental studies of visual space for almost seventy years. His claim for a curved visual space has been a source of inspiration for visual scientists as well as philosophers. The conclusion of many experimental studies has been that Luneburg’s model does not describe visual space in various tasks and conditions. Remarkably, no alternative model has been suggested. The current study explores perspective transformations of Euclidean space as a model for visual space. Computations show that the geometry of perspective spaces is considerably different from that of Euclidean space. Collinearity but not parallelism is preserved in perspective space and angles are not invariant under translation and rotation. Similar relationships have been shown to be properties of visual space. Alley experiments performed early in the twentieth century have been instrumental in hypothesizing curved visual spaces. Alleys were computed in perspective space and compared with reconstructed alleys of Blumenfeld. Parallel alleys were accurately described by perspective geometry. Accurate distance alleys were derived from parallel alleys by adjusting the interstimulus distances according to the size-distance invariance hypothesis. Agreement between computed and experimental alleys and accommodation of experimental results that rejected Luneburg’s model show that perspective space is an appropriate model for how we perceive orientations and angles. The model is also appropriate for perceived distance ratios between stimuli but fails to predict perceived distances. PMID:27648222
Study of solid rocket motor for space shuttle booster. Volume 4: Cost
NASA Technical Reports Server (NTRS)
1972-01-01
The cost data for solid propellant rocket engines for use with the space shuttle are presented. The data are based on the selected 156 inch parallel and series burn configurations. Summary cost data are provided for the production of the 120 inch and 260 inch configurations. Graphs depicting parametric cost estimating relationships are included.
Low heat transfer, high strength window materials
Berlad, Abraham L.; Salzano, Francis J.; Batey, John E.
1978-01-01
A multi-pane window with improved insulating qualities, comprising a plurality of transparent or translucent panes held in an essentially parallel, spaced-apart relationship by a frame. Between at least one pair of panes is a convection defeating means comprising an array of parallel slats or cells so designed as to prevent convection currents from developing in the space between the two panes. The convection defeating structures may have reflective surfaces so as to improve the collection and transmittance of the incident radiant energy. These same means may be used to control (increase or decrease) the transmittance of solar energy as well as to decouple the radiative transfer between the interior surfaces of the transparent panes.
Data analytics and parallel-coordinate materials property charts
NASA Astrophysics Data System (ADS)
Rickman, Jeffrey M.
2018-01-01
It is often advantageous to display material property relationships in the form of charts that highlight important correlations and thereby enhance our understanding of materials behavior and facilitate materials selection. Unfortunately, in many cases these correlations are highly multidimensional in nature, and one typically employs low-dimensional cross-sections of the property space to convey some aspects of these relationships. To overcome some of these difficulties, in this work we employ methods of data analytics in conjunction with a visualization strategy known as parallel coordinates to better represent multidimensional materials data and to extract useful relationships among properties. We illustrate the utility of this approach by the construction and systematic analysis of multidimensional materials property charts for metallic and ceramic systems. These charts simplify the description of high-dimensional geometry, enable dimensional reduction and the identification of significant property correlations, and underline distinctions among different materials classes.
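The kind of chart described above can be sketched in a few lines. The following illustrative example, with placeholder property columns and values rather than data from the paper, normalizes each property and draws a parallel-coordinate plot with pandas; normalizing each axis to [0, 1] is one simple way to make properties with very different units comparable on a common set of parallel axes.

```python
# Minimal sketch of a parallel-coordinate materials property chart.
# The property columns and values are illustrative placeholders.
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

materials = pd.DataFrame({
    "class":        ["metal", "metal", "ceramic", "ceramic"],
    "density":      [2.7, 7.8, 3.2, 3.9],     # g/cm^3
    "modulus":      [70, 200, 310, 380],      # GPa
    "strength":     [300, 500, 400, 600],     # MPa
    "conductivity": [235, 50, 30, 25],        # W/(m K)
})

# Normalize each property to [0, 1] so axes with different units are comparable.
props = materials.columns.drop("class")
materials[props] = (materials[props] - materials[props].min()) / (
    materials[props].max() - materials[props].min())

parallel_coordinates(materials, class_column="class", colormap="viridis")
plt.title("Parallel-coordinate materials property chart (illustrative)")
plt.show()
```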
Learning Quantitative Sequence-Function Relationships from Massively Parallel Experiments
NASA Astrophysics Data System (ADS)
Atwal, Gurinder S.; Kinney, Justin B.
2016-03-01
A fundamental aspect of biological information processing is the ubiquity of sequence-function relationships—functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence-function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such "massively parallel" experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence-function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of "diffeomorphic modes"—directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence-function relationships.
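As a toy illustration of the mutual-information-based inference discussed above, the sketch below fits a linear sequence-function model to synthetic data by maximizing a histogram estimate of mutual information; the data generation, binning, and crude random search are illustrative assumptions, not the authors' method.

```python
# Toy sketch: infer linear model parameters by maximizing mutual information
# between model predictions and binned measurements (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
L, N = 3, 5000                                          # features, sequences
seqs = rng.integers(0, 2, size=(N, L)).astype(float)    # toy binary "sequences"
theta_true = np.array([1.0, -2.0, 0.5])
activity = seqs @ theta_true + 0.5 * rng.normal(size=N) # noisy measurements
bins = np.digitize(activity, np.quantile(activity, [0.25, 0.5, 0.75]))  # 4 bins

def mutual_information(pred, bin_labels, n_pred_bins=20):
    """Crude histogram estimate of I(prediction; measurement bin), in nats."""
    joint, _, _ = np.histogram2d(pred, bin_labels,
                                 bins=(n_pred_bins, bin_labels.max() + 1))
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

# Crude random search over unit-norm parameter vectors: MI is blind to the
# overall scale (and sign) of theta, a simple example of a "diffeomorphic mode".
best_theta, best_mi = None, -np.inf
for _ in range(2000):
    theta = rng.normal(size=L)
    theta /= np.linalg.norm(theta)
    mi = mutual_information(seqs @ theta, bins)
    if mi > best_mi:
        best_theta, best_mi = theta, mi

cos_sim = abs(best_theta @ theta_true) / np.linalg.norm(theta_true)
print(f"best MI = {best_mi:.3f} nats, |cosine similarity to true theta| = {cos_sim:.3f}")
```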
Inertial energy storage device
Knight, Jr., Charles E.; Kelly, James J.; Pollard, Roy E.
1978-01-01
The inertial energy storage device of the present invention comprises a composite ring formed of circumferentially wound resin-impregnated filament material, a flanged hollow metal hub concentrically disposed in the ring, and a plurality of discrete filament bandsets coupling the hub to the ring. Each bandset is formed of a pair of parallel bands affixed to the hub in a spaced apart relationship with the axis of rotation of the hub being disposed between the bands and with each band being in the configuration of a hoop extending about the ring along a chordal plane thereof. The bandsets are disposed in an angular relationship with one another so as to encircle the ring at spaced-apart circumferential locations while being disposed in an overlapping relationship on the flanges of the hub. The energy storage device of the present invention has the capability of substantial energy storage due to the relationship of the filament bands to the ring and the flanged hub.
Wilds, R.B.; Ames, J.R.
1957-09-24
The line-above-ground attenuator provides a continuously variable microwave attenuator for a coaxial line that is capable of high attenuation and low insertion loss. The device consists of a short section of the line-above-ground plane type transmission line, a pair of identical rectangular slabs of lossy material like polytron, whose longitudinal axes are parallel to and identically spaced away from either side of the line, and a geared mechanism to adjust and maintain this spaced relationship. This device permits optimum fineness and accuracy of attenuator control which heretofore has been difficult to achieve.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, J.; Chen, S. Y., E-mail: sychen531@163.com; Tang, C. J.
2014-01-15
The physical mechanism of the synergy current driven by lower hybrid wave (LHW) and electron cyclotron wave (ECW) in tokamaks is investigated using theoretical analysis and simulation methods in the present paper. Research shows that the synergy relationship between the two waves in velocity space strongly depends on the frequency ω and parallel refractive index N∥ of ECW. For a given spectrum of LHW, the parameter range of ECW, in which the synergy current exists, can be predicted by theoretical analysis, and these results are consistent with the simulation results. It is shown that the synergy effect is mainly caused by the electrons accelerated by both ECW and LHW, and the acceleration of these electrons requires that there is overlap of the resonance regions of the two waves in velocity space.
Yang, Chifu; Zhao, Jinsong; Li, Liyi; Agrawal, Sunil K
2018-01-01
A robotic spine brace based on a parallel-actuated robotic system is a new device for the treatment and sensing of scoliosis; however, the strong dynamic coupling and anisotropy of parallel manipulators cause a loss of accuracy in rehabilitation force control, including large errors in both the direction and magnitude of the applied force. A novel active force control strategy, modal space force control, is proposed to solve these problems. Considering the electrically driven system and the contact environment, the mathematical model of the spatial parallel manipulator is built. The strong dynamic coupling problem in the force field is characterized experimentally, as is the anisotropy of the workspace of parallel manipulators. The effects of dynamic coupling on control design and performance are discussed, and the influence of anisotropy on accuracy is also addressed. Using the mass/inertia matrix and stiffness matrix of the parallel manipulator, a modal matrix can be calculated by eigenvalue decomposition. Exploiting the orthogonality of the modal matrix with respect to the mass matrix, the strongly coupled dynamic equations expressed in the work space or joint space of the parallel manipulator can be transformed into decoupled equations formulated in modal space. Because each force control channel is then independent of the others, we propose the modal space force control concept, in which the force controller is designed in modal space. A modal space active force control scheme is designed and implemented, with a simple PID controller employed as an example control method to show the differences, uniqueness, and benefits of modal space force control. Simulation and experimental results show that the proposed approach effectively overcomes the effects of the strong dynamic coupling and anisotropy present in physical space; modal space force control is thus a useful control framework that outperforms current joint space and work space control.
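The decoupling step at the heart of modal space force control can be illustrated with a generalized eigendecomposition: the modal matrix simultaneously diagonalizes the mass and stiffness matrices, so force errors split into independent channels. The matrices and force values below are placeholders, not the spine-brace model.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative coupled mass/inertia and stiffness matrices (placeholders).
M = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.5, 0.2],
              [0.1, 0.2, 1.0]])
K = np.array([[400.0, 50.0, 20.0],
              [50.0, 300.0, 30.0],
              [20.0, 30.0, 200.0]])

# Generalized eigenproblem K*phi = lambda*M*phi. scipy returns eigenvectors
# normalized so that Phi.T @ M @ Phi = I, hence Phi.T @ K @ Phi = diag(lambda):
# in modal coordinates both matrices are diagonal and the dynamics decouple.
eigvals, Phi = eigh(K, M)
print(np.allclose(Phi.T @ M @ Phi, np.eye(3)))          # True: decoupled inertia
print(np.allclose(Phi.T @ K @ Phi, np.diag(eigvals)))   # True: decoupled stiffness

# A force error measured in task/joint space maps into independent modal
# channels; each channel can then be closed with its own SISO (e.g. PID)
# force controller, which is the modal space force control idea.
f_task = np.array([5.0, -2.0, 1.0])
f_modal = Phi.T @ f_task
print(f_modal)
```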
NASA Astrophysics Data System (ADS)
Hartmann, Alfred; Redfield, Steve
1989-04-01
This paper discusses the design of large-scale (1000 x 1000) optical crossbar switching networks for use in parallel processing supercomputers. Alternative design sketches for an optical crossbar switching network are presented using free-space optical transmission with either a beam spreading/masking model or a beam steering model for internodal communications. The performance of alternative multiple-access channel communications protocols (unslotted and slotted ALOHA and carrier sense multiple access, CSMA) is compared with the performance of the classic arbitrated-bus crossbar of conventional electronic parallel computing. These comparisons indicate an almost inverse relationship between ease of implementation and speed of operation. Practical issues of optical system design are addressed, and an optically addressed, composite spatial light modulator design is presented for fabrication to arbitrarily large scale. The wide range of switch architecture, communications protocol, optical systems design, device fabrication, and system performance problems presented by these design sketches poses a serious challenge to practical exploitation of highly parallel optical interconnects in advanced computer designs.
Degtiarenko, Pavel V.
2003-08-12
A heat exchange apparatus comprising a coolant conduit or heat sink having attached to its surface a first radial array of spaced-apart parallel plate fins or needles and a second radial array of spaced-apart parallel plate fins or needles thermally coupled to a body to be cooled and meshed with, but not contacting the first radial array of spaced-apart parallel plate fins or needles.
Cooled particle accelerator target
Degtiarenko, Pavel V.
2005-06-14
A novel particle beam target comprising: a rotating target disc mounted on a retainer and thermally coupled to a first array of spaced-apart parallel plate fins that extend radially inwardly from the retainer and mesh without physical contact with a second array of spaced-apart parallel plate fins that extend radially outwardly from and are thermally coupled to a cooling mechanism capable of removing heat from said second array of spaced-apart fins and located within the first array of spaced-apart parallel fins. Radiant thermal exchange between the two arrays of parallel plate fins provides removal of heat from the rotating disc. A method of cooling the rotating target is also described.
Combining points and lines in rectifying satellite images
NASA Astrophysics Data System (ADS)
Elaksher, Ahmed F.
2017-09-01
The rapid advance in remote sensing technologies has established the potential to gather accurate and reliable information about the Earth's surface using high resolution satellite images. Remote sensing satellite images with a pixel size of less than one meter are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between image coordinates and ground coordinates. These equations require knowledge of the exterior and interior orientation parameters of the image, which might not be available. On the other hand, the parallel projection transformation could be used to represent the mathematical relationship between the image-space and object-space coordinate systems, and it provides the required accuracy for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images of different resolutions. The point-based parallel projection transformation model and its extended form are presented, and the corresponding line-based forms are developed. Results showed that the RMS errors computed using the point- and line-based transformation models are equivalent and satisfy the requirement for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based transformation models are insignificant. The results also showed a high correlation between the differences in ground elevation and the RMS error.
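A minimal sketch of the point-based parallel projection idea follows, assuming the common 8-parameter affine form x = A1 X + A2 Y + A3 Z + A4, y = A5 X + A6 Y + A7 Z + A8 and synthetic control points; the article's extended and line-based forms are not reproduced here.

```python
# Fit the point-based parallel (affine) projection model to ground control
# points by least squares. Control points and parameters are synthetic.
import numpy as np

rng = np.random.default_rng(1)
ground = rng.uniform([0, 0, 100], [5000, 5000, 400], size=(20, 3))  # X, Y, Z (m)
A_true = np.array([[1e-3, 2e-5, 5e-5, 10.0],
                   [-3e-5, 1e-3, 4e-5, 20.0]])                      # 8 parameters
image = ground @ A_true[:, :3].T + A_true[:, 3] + rng.normal(0, 1e-3, (20, 2))

# Each control point contributes two linear equations in the 8 unknowns.
G = np.hstack([ground, np.ones((len(ground), 1))])      # rows [X Y Z 1]
A_est_x, *_ = np.linalg.lstsq(G, image[:, 0], rcond=None)
A_est_y, *_ = np.linalg.lstsq(G, image[:, 1], rcond=None)

residuals = image - np.column_stack([G @ A_est_x, G @ A_est_y])
rms = np.sqrt(np.mean(residuals**2))
print("RMS of control-point residuals:", rms)
```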
Vortex-induced vibration of two parallel risers: Experimental test and numerical simulation
NASA Astrophysics Data System (ADS)
Huang, Weiping; Zhou, Yang; Chen, Haiming
2016-04-01
The vortex-induced vibration of two identical rigidly mounted risers in a parallel arrangement was studied using ANSYS CFX and model tests. The vortex shedding and force were recorded to determine the effect of spacing on the two-degree-of-freedom oscillation of the risers. CFX was used to study a single riser and two parallel risers at spacings of 2D to 8D, considering the coupling effect. Because of the limited width of the water channel, only three riser spacings, 2D, 3D, and 4D, were tested to validate the characteristics of the two parallel risers by comparison with the numerical simulation. The results indicate that the lift force changes significantly with increasing spacing, and at 3D spacing the lift force on the two parallel risers reaches its maximum. The vortex shedding of the risers at 3D spacing shows that a variable velocity field with the same frequency as the vortex shedding is generated in the overlapped area, making the period of the drag force equal to that of the lift force. It can be concluded that the interaction between the two parallel risers is significant when the risers are brought close together, because the trajectory of a riser changes from an oval to a figure eight as the spacing is increased. The phase difference of the lift force between the two risers also changes with the spacing.
Townsend, James T; Eidels, Ami
2011-08-01
Increasing the number of available sources of information may impair or facilitate performance, depending on the capacity of the processing system. Tests performed on response time distributions are proving to be useful tools in determining the workload capacity (as well as other properties) of cognitive systems. In this article, we develop a framework and relevant mathematical formulae that represent different capacity assays (Miller's race model bound, Grice's bound, and Townsend's capacity coefficient) in the same space. The new space allows a direct comparison between the distinct bounds and the capacity coefficient values and helps explicate the relationships among the different measures. An analogous common space is proposed for the AND paradigm, relating the capacity index to the Colonius-Vorberg bounds. We illustrate the effectiveness of the unified spaces by presenting data from two simulated models (standard parallel, coactive) and a prototypical visual detection experiment. A conversion table for the unified spaces is provided.
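For reference, the sketch below computes the three OR-paradigm capacity assays named above (Miller's race model bound, Grice's bound, and the capacity coefficient) from simulated response times; the simulated distributions are placeholders, not the article's data.

```python
import numpy as np

rng = np.random.default_rng(2)
rt_A = rng.exponential(0.4, 5000) + 0.2        # single-source response times (s)
rt_B = rng.exponential(0.5, 5000) + 0.2
rt_AB = np.minimum(rng.exponential(0.4, 5000),
                   rng.exponential(0.5, 5000)) + 0.2   # independent parallel race

t = np.linspace(0.25, 2.0, 200)
ecdf = lambda rt: np.searchsorted(np.sort(rt), t, side="right") / rt.size

F_A, F_B, F_AB = ecdf(rt_A), ecdf(rt_B), ecdf(rt_AB)
miller = np.minimum(F_A + F_B, 1.0)            # race-model (Miller) bound on F_AB
grice = np.maximum(F_A, F_B)                   # Grice bound on F_AB

# Townsend's capacity coefficient, C(t) = H_AB(t) / (H_A(t) + H_B(t)), with
# cumulative hazard H(t) = -log(1 - F(t)); C is near 1 for an
# unlimited-capacity independent parallel (race) model.
H = lambda F: -np.log(np.clip(1.0 - F, 1e-12, None))
C = H(F_AB) / (H(F_A) + H(F_B))

print("C(t) at t = 1 s:", round(float(C[np.argmin(np.abs(t - 1.0))]), 3))
print("max(F_AB - Miller bound):", round(float(np.max(F_AB - miller)), 4),
      "(<= 0 up to sampling noise for a race model)")
print("min(F_AB - Grice bound):", round(float(np.min(F_AB - grice)), 4),
      "(>= 0 up to sampling noise)")
```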
Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation
NASA Technical Reports Server (NTRS)
Kouatchou, Jules; Halem, Milton (Technical Monitor)
2000-01-01
We combine a high order compact finite difference approximation and collocation techniques to numerically solve the two dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyun Jung; McDonnell, Kevin T.; Zelenyuk, Alla
2014-03-01
Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS) where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our MDS plots also exhibit similar visual relationships as the method of parallel coordinates which is often used alongside to visualize the high-dimensional data in raw form. We then cast our metric into a bi-scale framework which distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reimberg, Paulo; Bernardeau, Francis; Pitrou, Cyril, E-mail: paulo.flose-reimberg@cea.fr, E-mail: francis.bernardeau@cea.fr, E-mail: pitrou@iap.fr
Redshift-space distortions are generally considered in the plane parallel limit, where the angular separation between the two sources can be neglected. Given that galaxy catalogues now cover large fractions of the sky, it becomes necessary to consider them in a formalism which takes into account the wide angle separations. In this article we derive an operational formula for the matter correlators in the Newtonian limit to be used in actual data sets. In order to describe the geometrical nature of the wide angle RSD effect on Fourier space, we extend the formalism developed in configuration space to Fourier space without relying on a plane-parallel approximation, but under the extra assumption of no bias evolution. We then recover the plane-parallel limit not only in configuration space where the geometry is simpler, but also in Fourier space, and we exhibit the first corrections that should be included in large surveys as a perturbative expansion over the plane-parallel results. We finally compare our results to existing literature, and show explicitly how they are related.
A parallel variable metric optimization algorithm
NASA Technical Reports Server (NTRS)
Straeter, T. A.
1973-01-01
An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
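A serial sketch of one such cycle is given below for a small quadratic test problem; the symmetric rank-one correction and the grid line search are stand-ins chosen for brevity and are not necessarily the report's exact formulas. With p equal to the dimension, the metric built from the p corrections reproduces the inverse Hessian of the quadratic, so the subsequent Newton-like step reaches the minimizer in one cycle, consistent with the convergence property stated above.

```python
# One cycle of a parallel-variable-metric-style iteration, illustrated serially.
import numpy as np

A = np.diag([1.0, 4.0, 9.0, 16.0])       # Hessian of the quadratic test functional

def f(x):
    return 0.5 * x @ A @ x

def grad(x):
    return A @ x

def pvm_cycle(x, H, p=4, step=1e-2):
    # 1) function and gradient at p displaced points (independent, parallelizable)
    directions = np.eye(len(x))[:p]
    points = [x + step * d for d in directions]
    grads = [grad(pt) for pt in points]
    g0 = grad(x)
    # 2) p rank-one (SR1) corrections to H, an estimate of the inverse Hessian
    for pt, gi in zip(points, grads):
        s, y = pt - x, gi - g0
        r = s - H @ y
        denom = r @ y
        if abs(denom) > 1e-12:           # skip degenerate corrections
            H = H + np.outer(r, r) / denom
    # 3) single univariate minimization along the Newton-like direction
    d = -H @ g0
    alphas = np.linspace(0.0, 2.0, 201)
    alpha = alphas[np.argmin([f(x + a * d) for a in alphas])]
    return x + alpha * d, H

x, H = np.array([1.0, 1.0, 1.0, 1.0]), np.eye(4)
for _ in range(3):
    x, H = pvm_cycle(x, H)
print(x, f(x))   # converges to the minimizer at the origin in the first cycle
```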
Increasing the perceptual salience of relationships in parallel coordinate plots.
Harter, Jonathan M; Wu, Xunlei; Alabi, Oluwafemi S; Phadke, Madhura; Pinto, Lifford; Dougherty, Daniel; Petersen, Hannah; Bass, Steffen; Taylor, Russell M
2012-01-01
We present three extensions to parallel coordinates that increase the perceptual salience of relationships between axes in multivariate data sets: (1) luminance modulation maintains the ability to preattentively detect patterns in the presence of overplotting, (2) adding a one-vs.-all variable display highlights relationships between one variable and all others, and (3) adding a scatter plot within the parallel-coordinates display preattentively highlights clusters and spatial layouts without strongly interfering with the parallel-coordinates display. These techniques can be combined with one another and with existing extensions to parallel coordinates, and two of them generalize beyond cases with known-important axes. We applied these techniques to two real-world data sets (relativistic heavy-ion collision hydrodynamics and weather observations with statistical principal component analysis) as well as the popular car data set. We present relationships discovered in the data sets using these methods.
Self-calibrated correlation imaging with k-space variant correlation functions.
Li, Yu; Edalati, Masoud; Du, Xingfu; Wang, Hui; Cao, Jie J
2018-03-01
Correlation imaging is a previously developed high-speed MRI framework that converts parallel imaging reconstruction into the estimation of correlation functions. The presented work aims to demonstrate that this framework can provide a speed gain over parallel imaging by estimating k-space variant correlation functions. Because of Fourier encoding with gradients, outer k-space data contain higher spatial-frequency image components arising primarily from tissue boundaries. As a result of tissue-boundary sparsity in the human anatomy, neighboring k-space data correlation varies from the central to the outer k-space. By estimating k-space variant correlation functions with an iterative self-calibration method, correlation imaging can benefit from neighboring k-space data correlation associated with both coil sensitivity encoding and tissue-boundary sparsity, thereby providing a speed gain over parallel imaging that relies only on coil sensitivity encoding. This new approach is investigated in brain imaging and free-breathing neonatal cardiac imaging. Correlation imaging performs better than existing parallel imaging techniques in simulated brain imaging acceleration experiments. The higher speed enables real-time data acquisition for neonatal cardiac imaging in which physiological motion is fast and non-periodic. With k-space variant correlation functions, correlation imaging gives a higher speed than parallel imaging and offers the potential to image physiological motion in real-time. Magn Reson Med 79:1483-1494, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Research of Surface Roughness Anisotropy
NASA Astrophysics Data System (ADS)
Bulaha, N.; Rudzitis, J.; Lungevics, J.; Linins, O.; Krizbergs, J.
2017-04-01
The authors of the paper have investigated surfaces with irregular roughness for the purpose of determining the roughness spacing parameters perpendicular to the machining traces (RSm1) and parallel to them (RSm2), as well as checking the relationship between the surface anisotropy coefficient c and the surface aspect ratio Str from the standard LVS EN ISO 25178-2. Roughness measurements on 11 surfaces show that the values of the mean spacing of profile irregularities in the longitudinal direction reported by the measuring equipment are not reliable, owing to the divergence of the surface mean plane and the roughness profile mean line. Additional calculations showed that the parameter Str can be used to determine the parameter RSm2 and to evaluate roughness anisotropy for ground, polished, friction, and other surfaces with similar characteristics.
NASA Technical Reports Server (NTRS)
Shay, Rick; Swieringa, Kurt A.; Baxley, Brian T.
2012-01-01
Flight-deck-based Interval Management (FIM) applications using ADS-B are being developed to improve both the safety and capacity of the National Airspace System (NAS). FIM is expected to improve the safety and efficiency of the NAS by giving pilots the technology and procedures to precisely achieve an interval behind the preceding aircraft by a specific point. Concurrently but independently, Optimized Profile Descents (OPD) are being developed to help reduce fuel consumption and noise; however, the range of speeds available when flying an OPD results in a decrease in the delivery precision of aircraft to the runway. This requires the addition of a spacing buffer between aircraft, reducing system throughput. FIM addresses this problem by providing pilots with speed guidance to achieve a precise interval behind another aircraft, even while flying optimized descents. The Interval Management with Spacing to Parallel Dependent Runways (IMSPiDR) human-in-the-loop experiment employed 24 commercial pilots to explore the use of FIM equipment to conduct spacing operations behind two aircraft arriving to parallel runways, while flying an OPD during high-density operations. This paper describes the impact of variations in pilot operations; in particular, how pilots configured the aircraft, their compliance with FIM operating procedures, and their response to changes of the FIM speed. An example of the displayed FIM speeds being used incorrectly by a pilot is also discussed. Finally, this paper examines the relationship between achieving airline operational goals for individual aircraft and the need for ATC to deliver aircraft to the runway with greater precision. The results show that aircraft can fly an OPD and conduct FIM operations to dependent parallel runways, enabling operational goals to be achieved efficiently while maintaining system throughput.
Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe
A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. In conclusion, the chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted to date.
Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations
Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe; ...
2017-11-14
A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. In conclusion, the chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted to date.
Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations
NASA Astrophysics Data System (ADS)
Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe; Gagliardi, Laura; de Jong, Wibe A.
2017-11-01
A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. The chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted to date.
Biomimetic shoulder complex based on 3-PSS/S spherical parallel mechanism
NASA Astrophysics Data System (ADS)
Hou, Yulei; Hu, Xinzhe; Zeng, Daxing; Zhou, Yulin
2015-01-01
The application of parallel mechanisms is still limited in the humanoid robot field, and existing parallel humanoid robot joints do not yet fully reflect the characteristics of parallel mechanisms, nor have they effectively solved problems such as the small workspace. From the structural and functional bionic point of view, a three-degree-of-freedom (DOF) spherical parallel mechanism for the shoulder complex of a humanoid robot is presented. Based on an analysis of the structure and kinematic characteristics of the human shoulder complex, the 3-PSS/S configuration (P for prismatic pair, S for spherical pair) is chosen as the original configuration for the shoulder complex. Using a genetic algorithm, the 3-PSS/S spherical parallel mechanism is optimized, and the orientation workspace of the prototype mechanism is enlarged considerably. Taking into account the practical structural characteristics of the human shoulder complex, an offset output mode is proposed, in which the output rod of the mechanism can turn in any direction about a point located a certain distance from the rotation center of the mechanism; this makes it possible for the workspace of the mechanism to match the actual motion space of the human shoulder joint. The relationship between the attitude angles in the different coordinate systems is derived, which establishes the foundation for describing motion under different conditions and for controller development. The 3-PSS/S spherical parallel mechanism is thus proposed for the shoulder complex, and consistency between the workspace of the mechanism and that of the human shoulder complex is achieved through structural parameter optimization and the offset output design.
A Concept for Airborne Precision Spacing for Dependent Parallel Approaches
NASA Technical Reports Server (NTRS)
Barmore, Bryan E.; Baxley, Brian T.; Abbott, Terence S.; Capron, William R.; Smith, Colin L.; Shay, Richard F.; Hubbs, Clay
2012-01-01
The Airborne Precision Spacing concept of operations has been previously developed to support the precise delivery of aircraft landing successively on the same runway. The high-precision and consistent delivery of inter-aircraft spacing allows for increased runway throughput and the use of energy-efficient arrivals routes such as Continuous Descent Arrivals and Optimized Profile Descents. This paper describes an extension to the Airborne Precision Spacing concept to enable dependent parallel approach operations where the spacing aircraft must manage their in-trail spacing from a leading aircraft on approach to the same runway and spacing from an aircraft on approach to a parallel runway. Functionality for supporting automation is discussed as well as procedures for pilots and controllers. An analysis is performed to identify the required information and a new ADS-B report is proposed to support these information needs. Finally, several scenarios are described in detail.
Data communications in a parallel active messaging interface of a parallel computer
Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.
2014-09-02
Eager send data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints that specify a client, a context, and a task, including receiving an eager send data communications instruction with transfer data disposed in a send buffer characterized by a read/write send buffer memory address in a read/write virtual address space of the origin endpoint; determining for the send buffer a read-only send buffer memory address in a read-only virtual address space, the read-only virtual address space shared by both the origin endpoint and the target endpoint, with all frames of physical memory mapped to pages of virtual memory in the read-only virtual address space; and communicating by the origin endpoint to the target endpoint an eager send message header that includes the read-only send buffer memory address.
Data communications in a parallel active messaging interface of a parallel computer
Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.
2014-09-16
Eager send data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints that specify a client, a context, and a task, including receiving an eager send data communications instruction with transfer data disposed in a send buffer characterized by a read/write send buffer memory address in a read/write virtual address space of the origin endpoint; determining for the send buffer a read-only send buffer memory address in a read-only virtual address space, the read-only virtual address space shared by both the origin endpoint and the target endpoint, with all frames of physical memory mapped to pages of virtual memory in the read-only virtual address space; and communicating by the origin endpoint to the target endpoint an eager send message header that includes the read-only send buffer memory address.
Parallel-wire grid assembly with method and apparatus for construction thereof
Lewandowski, E.F.; Vrabec, J.
1981-10-26
Disclosed is a parallel wire grid and an apparatus and method for making the same. The grid consists of a generally coplanar array of parallel spaced-apart wires secured between metallic frame members by an electrically conductive epoxy. The method consists of continuously winding a wire about a novel winding apparatus comprising a plurality of spaced-apart generally parallel spindles. Each spindle is threaded with a number of predeterminedly spaced-apart grooves which receive and accurately position the wire at predetermined positions along the spindle. Overlying frame members coated with electrically conductive epoxy are then placed on either side of the wire array and are drawn together. After the epoxy hardens, portions of the wire array lying outside the frame members are trimmed away.
Parallel-wire grid assembly with method and apparatus for construction thereof
Lewandowski, Edward F.; Vrabec, John
1984-01-01
Disclosed is a parallel wire grid and an apparatus and method for making the same. The grid consists of a generally coplanar array of parallel spaced-apart wires secured between metallic frame members by an electrically conductive epoxy. The method consists of continuously winding a wire about a novel winding apparatus comprising a plurality of spaced-apart generally parallel spindles. Each spindle is threaded with a number of predeterminedly spaced-apart grooves which receive and accurately position the wire at predetermined positions along the spindle. Overlying frame members coated with electrically conductive epoxy are then placed on either side of the wire array and are drawn together. After the epoxy hardens, portions of the wire array lying outside the frame members are trimmed away.
36 CFR Appendix D to Part 1191 - Technical
Code of Federal Regulations, 2014 CFR
2014-07-01
... inch (13 mm) high shall be ramped, and shall comply with 405 or 406. 304 Turning Space 304.1 General... ground space allows a parallel approach to an element and the side reach is unobstructed, the high side... .2 Obstructed High Reach. Where a clear floor or ground space allows a parallel approach to an element and the...
Principle and analysis of a rotational motion Fourier transform infrared spectrometer
NASA Astrophysics Data System (ADS)
Cai, Qisheng; Min, Huang; Han, Wei; Liu, Yixuan; Qian, Lulu; Lu, Xiangning
2017-09-01
Fourier transform infrared spectroscopy is an important technique for studying molecular energy levels, analyzing material compositions, and detecting environmental pollutants. A novel rotational motion Fourier transform infrared spectrometer with high stability and ultra-rapid scanning characteristics is proposed in this paper. The basic principle, the optical path difference (OPD) calculations, and some tolerance analyses are elaborated. The OPD of this spectrometer is obtained by the continuous rotational motion of a pair of parallel mirrors instead of the translational motion of a traditional Michelson interferometer. Because of the rotational motion, it avoids the tilt problems that occur in the translational motion Michelson interferometer. There is a cosine function relationship between the OPD and the rotation angle of the parallel mirrors. An optical model is set up in the non-sequential mode of the ZEMAX software, and the interferogram of a monochromatic light source is simulated using the ray tracing method. The simulated interferogram is consistent with the theoretically calculated interferogram. As the rotating mirrors are the only moving elements in this spectrometer, the parallelism of the rotating mirrors and the vibration during the scan are analyzed. The vibration of the parallel mirrors is the main error source during rotation. This high-stability, ultra-rapid-scanning Fourier transform infrared spectrometer is a suitable candidate for airborne and space-borne remote sensing spectrometers.
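The cosine OPD relationship can be checked numerically with a short simulation; the maximum OPD and wavenumber below are placeholders, and the resampling step simply mimics the uniform-OPD sampling that a reference channel would provide in hardware.

```python
# Simulate the interferogram of a monochromatic line for an OPD that varies
# as the cosine of the mirror rotation angle, then recover the line by FFT.
import numpy as np

sigma = 1500.0            # wavenumber of the monochromatic line, cm^-1 (placeholder)
D_max = 0.05              # assumed maximum optical path difference, cm

theta = np.linspace(0.0, np.pi, 20001)           # rotation angle of the mirror pair
opd = D_max * np.cos(theta)                      # cosine OPD-angle relationship
interferogram = 0.5 * (1.0 + np.cos(2.0 * np.pi * sigma * opd))

# The angular scan samples OPD nonuniformly, so resample onto a uniform grid
# before the FFT.
opd_uniform = np.linspace(-D_max, D_max, 4096)
ig_uniform = np.interp(opd_uniform, opd[::-1], interferogram[::-1])

spectrum = np.abs(np.fft.rfft(ig_uniform - ig_uniform.mean()))
wavenumbers = np.fft.rfftfreq(ig_uniform.size, d=opd_uniform[1] - opd_uniform[0])
print("recovered line near", wavenumbers[spectrum.argmax()], "cm^-1")
```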
Babelay, E.F.
1962-02-13
A flexible shaft coupling for operation at speeds in excess of 14,000 rpm is designed which requires no lubrication. A driving sleeve member and a driven sleeve member are placed in concentric spaced relationship. A torque force is transmitted to the driven member from the driving member through a plurality of nylon balls symmetrically disposed between the spaced sleeves. The balls extend into races and recesses within the respective sleeve members. The sleeve members have a suitable clearance therebetween and the balls have a suitable radial clearance during operation of the coupling to provide a relatively loose coupling. These clearances accommodate both parallel and angular misalignments and avoid metal-to-metal contact between the sleeve members during operation. Thus, no lubrication is needed, and a minimum of vibration is transmitted between the sleeve members. (AEC)
Detecting opportunities for parallel observations on the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Lucks, Michael
1992-01-01
The presence of multiple scientific instruments aboard the Hubble Space Telescope provides opportunities for parallel science, i.e., the simultaneous use of different instruments for different observations. Determining whether candidate observations are suitable for parallel execution depends on numerous criteria (some involving quantitative tradeoffs) that may change frequently. A knowledge based approach is presented for constructing a scoring function to rank candidate pairs of observations for parallel science. In the Parallel Observation Matching System (POMS), spacecraft knowledge and schedulers' preferences are represented using a uniform set of mappings, or knowledge functions. Assessment of parallel science opportunities is achieved via composition of the knowledge functions in a prescribed manner. The knowledge acquisition, and explanation facilities of the system are presented. The methodology is applicable to many other multiple criteria assessment problems.
Linearly exact parallel closures for slab geometry
NASA Astrophysics Data System (ADS)
Ji, Jeong-Young; Held, Eric D.; Jhang, Hogun
2013-08-01
Parallel closures are obtained by solving a linearized kinetic equation with a model collision operator using the Fourier transform method. The closures expressed in wave number space are exact for time-dependent linear problems to within the limits of the model collision operator. In the adiabatic, collisionless limit, an inverse Fourier transform is performed to obtain integral (nonlocal) parallel closures in real space; parallel heat flow and viscosity closures for density, temperature, and flow velocity equations replace Braginskii's parallel closure relations, and parallel flow velocity and heat flow closures for density and temperature equations replace Spitzer's parallel transport relations. It is verified that the closures reproduce the exact linear response function of Hammett and Perkins [Phys. Rev. Lett. 64, 3019 (1990)] for Landau damping given a temperature gradient. In contrast to their approximate closures where the vanishing viscosity coefficient numerically gives an exact response, our closures relate the heat flow and nonvanishing viscosity to temperature and flow velocity (gradients).
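The nonlocal character of such closures can be illustrated by applying a Hammett-Perkins-type k-space factor, proportional to -i k/|k|, to a localized temperature perturbation; the closure coefficient below is a placeholder, and only the structure of the operator, not the paper's exact closure, is shown.

```python
# Apply a k-space closure of the form q_k ∝ -i sign(k) T_k and observe that
# the resulting real-space heat flow is nonlocal (finite far from the bump).
import numpy as np

N, L = 1024, 2.0 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
T = np.exp(-((x - np.pi) / 0.3) ** 2)          # localized temperature perturbation

k = np.fft.fftfreq(N, d=L / N) * 2.0 * np.pi
chi = 1.0                                       # placeholder closure coefficient

T_k = np.fft.fft(T)
q_k = -chi * 1j * np.sign(k) * T_k              # closure applied in wavenumber space
q = np.real(np.fft.ifft(q_k))

i_far = N // 4                                  # a point far from the temperature bump
# T is essentially zero there, yet q is finite: the closure is an integral
# (nonlocal) operator, unlike a local Braginskii-type q ∝ -dT/dx.
print("T =", T[i_far], " q =", q[i_far])
```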
Using parallel computing for the display and simulation of the space debris environment
NASA Astrophysics Data System (ADS)
Möckel, M.; Wiedemann, C.; Flegel, S.; Gelhaus, J.; Vörsmann, P.; Klinkrad, H.; Krag, H.
2011-07-01
Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.
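The per-object independence described above is easy to see in a vectorized analytical propagation step; the sketch below propagates many objects through two-body Kepler motion with placeholder orbital elements and no perturbations, the same embarrassingly parallel pattern that maps naturally onto GPU threads or OpenCL work items.

```python
# Analytical (two-body Kepler) propagation vectorized over many debris objects.
import numpy as np

MU = 398600.4418              # Earth's gravitational parameter, km^3/s^2
rng = np.random.default_rng(3)
n_obj = 100_000

a = rng.uniform(6800.0, 42000.0, n_obj)      # semi-major axis, km (placeholder)
e = rng.uniform(0.0, 0.3, n_obj)             # eccentricity
M0 = rng.uniform(0.0, 2.0 * np.pi, n_obj)    # mean anomaly at epoch, rad

def propagate(t):
    """In-plane position of every object t seconds after epoch."""
    n = np.sqrt(MU / a**3)                   # mean motion
    M = (M0 + n * t) % (2.0 * np.pi)
    E = M.copy()                             # Newton iterations for Kepler's equation
    for _ in range(8):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))   # true anomaly
    r = a * (1.0 - e * np.cos(E))
    return r * np.cos(nu), r * np.sin(nu)    # perifocal x, y (km)

x, y = propagate(3600.0)                     # all objects, one hour after epoch
print(x[:3], y[:3])
```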
Using parallel computing for the display and simulation of the space debris environment
NASA Astrophysics Data System (ADS)
Moeckel, Marek; Wiedemann, Carsten; Flegel, Sven Kevin; Gelhaus, Johannes; Klinkrad, Heiner; Krag, Holger; Voersmann, Peter
Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction of OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.
Increasing airport capacity with modified IFR approach procedures for close-spaced parallel runways
DOT National Transportation Integrated Search
2001-01-01
Because of wake turbulence considerations, current instrument approach procedures treat close-spaced (i.e., less than 2,500 feet apart) parallel runways as a single runway. This restriction is designed to assure safety for all aircraft types u...
Fast Time and Space Parallel Algorithms for Solution of Parabolic Partial Differential Equations
NASA Technical Reports Server (NTRS)
Fijany, Amir
1993-01-01
In this paper, fast time- and space-parallel algorithms for the solution of linear parabolic PDEs are developed. It is shown that the seemingly strictly serial iterations of the time-stepping procedure for the solution of the problem can be completely decoupled.
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
2011-01-01
This paper presents an overview of an algorithm specifically designed to support NASA's Airborne Precision Spacing concept. This airborne self-spacing concept is trajectory-based, allowing for spacing operations prior to the aircraft being on a common path. This implementation provides the ability to manage spacing against two traffic aircraft, with one of these aircraft operating to a parallel dependent runway. Because this algorithm is trajectory-based, it also has the inherent ability to support required-time-of-arrival (RTA) operations.
Nana, Roger; Hu, Xiaoping
2010-01-01
k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.
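A toy one-dimensional illustration of the data consistency error defined above is sketched below: a shift-invariant kernel is calibrated on a central block, the missing lines are filled, and the acquired lines are then re-estimated from the filled ones; the synthetic coils and two-neighbor kernel support are assumptions, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(4)
n_coils, n_k = 4, 256
x = np.linspace(-1.0, 1.0, n_k)
obj = np.exp(-x**2 / 0.1)                                   # simple 1D object
coil = np.arange(n_coils)[:, None]
sens = (1.0 + 0.3 * coil * x) * np.exp(1j * 3.0 * coil * x) # toy coil sensitivities
kspace = np.fft.fft(sens * obj, axis=1)                     # fully sampled reference

acq = np.arange(0, n_k, 2)                 # acquired (even) lines, acceleration R = 2
mis = np.arange(1, n_k - 1, 2)             # missing (odd) lines to reconstruct

def neighbors(data, idx):
    """Source samples: left/right neighbours of each target line, all coils."""
    return np.concatenate([data[:, idx - 1], data[:, idx + 1]], axis=0).T

# Calibrate a shift-invariant kernel W on a fully sampled central block.
calib = mis[(mis > 96) & (mis < 160)]
W, *_ = np.linalg.lstsq(neighbors(kspace, calib), kspace[:, calib].T, rcond=None)

# Reconstruction: zero the missing lines, then fill them with the kernel.
recon = kspace.copy()
recon[:, 1::2] = 0.0
recon[:, mis] = (neighbors(kspace, mis) @ W).T

# Data consistency error: re-estimate the acquired lines from the
# *reconstructed* missing neighbours and compare with what was acquired.
chk = acq[(acq > 0) & (acq < n_k - 2)]
dce = np.sum(np.abs(neighbors(recon, chk) @ W - kspace[:, chk].T) ** 2)
print("data consistency error:", float(dce))
```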
Computational methods and software systems for dynamics and control of large space structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.
1990-01-01
Two key areas of crucial importance to the computer-based simulation of large space structures are discussed. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area involves massively parallel computers.
Liu, L L; Liu, M J; Ma, M
2015-09-28
The central task of this study was to mine the gene-to-medium relationship. Adequate knowledge of this relationship could potentially improve the accuracy of differentially expressed gene mining. One of the approaches to differentially expressed gene mining uses conventional clustering algorithms to identify the gene-to-medium relationship. Compared to conventional clustering algorithms, self-organization maps (SOMs) identify the nonlinear aspects of the gene-to-medium relationships by mapping the input space into another higher dimensional feature space. However, SOMs are not suitable for huge datasets consisting of millions of samples. Therefore, a new computational model, the Function Clustering Self-Organization Maps (FCSOMs), was developed. FCSOMs take advantage of the theory of granular computing as well as advanced statistical learning methodologies, and are built specifically for each information granule (a function cluster of genes), which are intelligently partitioned by the clustering algorithm provided by the DAVID_6.7 software platform. However, only the gene functions, and not their expression values, are considered in the fuzzy clustering algorithm of DAVID. Compared to the clustering algorithm of DAVID, these experimental results show a marked improvement in the accuracy of classification with the application of FCSOMs. FCSOMs can handle huge datasets and their complex classification problems, as each FCSOM (modeled for each function cluster) can be easily parallelized.
Telemetry downlink interfaces and level-zero processing
NASA Technical Reports Server (NTRS)
Horan, S.; Pfeiffer, J.; Taylor, J.
1991-01-01
The technical areas being investigated are as follows: (1) processing of space to ground data frames; (2) parallel architecture performance studies; and (3) parallel programming techniques. Additionally, the University administrative details and the technical liaison between New Mexico State University and Goddard Space Flight Center are addressed.
Parallel CE/SE Computations via Domain Decomposition
NASA Technical Reports Server (NTRS)
Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung
2000-01-01
This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.
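The decomposition strategy can be illustrated independently of the CE/SE scheme itself; in the sketch below a simple upwind advection update stands in for the CE/SE update, and one-cell halos are exchanged between strips before every explicit step.

```python
# Domain decomposition for an explicit time-marching scheme: split the grid
# into strips, exchange one-cell halos each step, update each strip locally.
import numpy as np

nx, n_domains, c, dt, dx = 400, 4, 1.0, 0.5, 1.0   # CFL = c*dt/dx = 0.5
u_global = np.exp(-((np.arange(nx) - 100.0) / 10.0) ** 2)

# Partition into strips with one ghost cell on each side.
chunk = nx // n_domains
strips = [np.zeros(chunk + 2) for _ in range(n_domains)]
for d in range(n_domains):
    strips[d][1:-1] = u_global[d * chunk:(d + 1) * chunk]

def exchange_halos(strips):
    """Copy edge cells between neighbouring strips (periodic domain)."""
    for d in range(len(strips)):
        left, right = strips[d - 1], strips[(d + 1) % len(strips)]
        strips[d][0] = left[-2]      # left ghost <- neighbour's last interior cell
        strips[d][-1] = right[1]     # right ghost <- neighbour's first interior cell

for _ in range(200):                 # explicit time marching
    exchange_halos(strips)
    for s in strips:                 # each strip updates independently (parallelizable)
        s[1:-1] -= c * dt / dx * (s[1:-1] - s[:-2])   # first-order upwind stand-in

u = np.concatenate([s[1:-1] for s in strips])
print("pulse centre moved to index", int(u.argmax()))   # ~100 + 200*0.5 = 200
```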
14 CFR 23.393 - Loads parallel to hinge line.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) K=24 for vertical surfaces; (2) K=12 for horizontal surfaces; and (3) W=weight of the movable... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Loads parallel to hinge line. 23.393 Section 23.393 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION...
14 CFR 23.393 - Loads parallel to hinge line.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) K=24 for vertical surfaces; (2) K=12 for horizontal surfaces; and (3) W=weight of the movable... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Loads parallel to hinge line. 23.393 Section 23.393 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION...
14 CFR 23.393 - Loads parallel to hinge line.
Code of Federal Regulations, 2014 CFR
2014-01-01
...) K=24 for vertical surfaces; (2) K=12 for horizontal surfaces; and (3) W=weight of the movable... 14 Aeronautics and Space, FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION, Section 23.393, Loads parallel to hinge line...
14 CFR 23.393 - Loads parallel to hinge line.
Code of Federal Regulations, 2012 CFR
2012-01-01
...) K=24 for vertical surfaces; (2) K=12 for horizontal surfaces; and (3) W=weight of the movable... 14 Aeronautics and Space, FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION, Section 23.393, Loads parallel to hinge line...
14 CFR 23.393 - Loads parallel to hinge line.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) K=24 for vertical surfaces; (2) K=12 for horizontal surfaces; and (3) W=weight of the movable... 14 Aeronautics and Space, FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION, Section 23.393, Loads parallel to hinge line...
Access and visualization using clusters and other parallel computers
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Bergou, Attila; Berriman, Bruce; Block, Gary; Collier, Jim; Curkendall, Dave; Good, John; Husman, Laura; Jacob, Joe; Laity, Anastasia;
2003-01-01
JPL's Parallel Applications Technologies Group has been exploring the issues of data access and visualization of very large data sets over the past 10 or so years. This work has used a number of types of parallel computers, and today includes the use of commodity clusters. This talk will highlight some of the applications and tools we have developed, including how they use parallel computing resources, and specifically how we are using modern clusters. Our applications focus on NASA's needs; thus our data sets are usually related to Earth and Space Science, including data delivered from instruments in space, and data produced by telescopes on the ground.
Operation of high power converters in parallel
NASA Technical Reports Server (NTRS)
Decker, D. K.; Inouye, L. Y.
1993-01-01
High power converters that are used in space power subsystems are limited in power handling capability due to component and thermal limitations. For applications, such as Space Station Freedom, where multi-kilowatts of power must be delivered to user loads, parallel operation of converters becomes an attractive option when considering overall power subsystem topologies. TRW developed three different unequal power sharing approaches for parallel operation of converters. These approaches, known as droop, master-slave, and proportional adjustment, are discussed and test results are presented.
A Parallel Trade Study Architecture for Design Optimization of Complex Systems
NASA Technical Reports Server (NTRS)
Kim, Hongman; Mullins, James; Ragon, Scott; Soremekun, Grant; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
Design of a successful product requires evaluating many design alternatives in a limited design cycle time. This can be achieved through leveraging design space exploration tools and available computing resources on the network. This paper presents a parallel trade study architecture to integrate trade study clients and computing resources on a network using Web services. The parallel trade study solution is demonstrated to accelerate design of experiments, genetic algorithm optimization, and a cost as an independent variable (CAIV) study for a space system application.
NASA Astrophysics Data System (ADS)
Zuza, A. V.; Yin, A.; Lin, J. C.
2015-12-01
Parallel evenly-spaced strike-slip faults are prominent in the southern San Andreas fault system, as well as other settings along plate boundaries (e.g., the Alpine fault) and within continental interiors (e.g., the North Anatolian, central Asian, and northern Tibetan faults). In southern California, the parallel San Jacinto, Elsinore, Rose Canyon, and San Clemente faults to the west of the San Andreas are regularly spaced at ~40 km. In the Eastern California Shear Zone, east of the San Andreas, faults are spaced at ~15 km. These characteristic spacings provide unique mechanical constraints on how the faults interact. Despite the common occurrence of parallel strike-slip faults, the fundamental questions of how and why these fault systems form remain unanswered. We address this issue by using the stress shadow concept of Lachenbruch (1961)—developed to explain extensional joints by using the stress-free condition on the crack surface—to present a mechanical analysis of the formation of parallel strike-slip faults that relates fault spacing and brittle-crust thickness to fault strength, crustal strength, and the crustal stress state. We discuss three independent models: (1) a fracture mechanics model, (2) an empirical stress-rise function model embedded in a plastic medium, and (3) an elastic-plate model. The assumptions and predictions of these models are quantitatively tested using scaled analogue sandbox experiments that show that strike-slip fault spacing is linearly related to the brittle-crust thickness. We derive constraints on the mechanical properties of the southern San Andreas strike-slip faults and fault-bounded crust (e.g., local fault strength and crustal/regional stress) given the observed fault spacing and brittle-crust thickness, which is obtained by defining the base of the seismogenic zone with high-resolution earthquake data. Our models allow direct comparison of the parallel faults in the southern San Andreas system with other similar strike-slip fault systems, both on Earth and throughout the solar system (e.g., the Tiger Stripe Fractures on Enceladus).
NASA Astrophysics Data System (ADS)
Sun, Degui; Wang, Na-Xin; He, Li-Ming; Weng, Zhao-Heng; Wang, Daheng; Chen, Ray T.
1996-06-01
A space-position-logic-encoding scheme is proposed and demonstrated. This encoding scheme not only makes the best use of the convenience of binary logic operation, but is also suitable for the trinary property of modified signed-digit (MSD) numbers. Based on the space-position-logic-encoding scheme, a fully parallel modified signed-digit adder and subtractor is built using optoelectronic switch technologies in conjunction with fiber-multistage 3D optoelectronic interconnects. Thus an effective combination of a parallel algorithm and a parallel architecture is implemented. In addition, the performance of the optoelectronic switches used in this system is experimentally studied and verified. Both the 3-bit experimental model and the experimental results of a parallel addition and a parallel subtraction are provided and discussed. Finally, the speed ratio between the MSD adder and binary adders is discussed and the advantage of the MSD in operating speed is demonstrated.
Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density
NASA Astrophysics Data System (ADS)
Hohl, A.; Delmelle, E. M.; Tang, W.
2015-07-01
Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data-points that are within the spatial and temporal kernel bandwidths. Then, we quantify computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
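As a point of reference for the quantity being parallelized, here is a minimal, serial sketch of a space-time kernel density estimate. The kernels, bandwidths, and toy data are illustrative assumptions, not those of the study.

```python
# Minimal sketch of a space-time kernel density estimate (STKDE) at a set of
# evaluation points, using Epanechnikov kernels; all values are illustrative.
import numpy as np

def stkde(events, eval_pts, hs, ht):
    """events, eval_pts: arrays of (x, y, t); hs, ht: spatial/temporal bandwidths."""
    xy_e, t_e = events[:, :2], events[:, 2]
    xy_q, t_q = eval_pts[:, :2], eval_pts[:, 2]
    # Pairwise spatial and temporal distances, scaled by the bandwidths.
    ds = np.linalg.norm(xy_q[:, None, :] - xy_e[None, :, :], axis=2) / hs
    dt = np.abs(t_q[:, None] - t_e[None, :]) / ht
    ks = np.where(ds < 1, 0.75 * (1 - ds ** 2), 0.0)   # Epanechnikov, space
    kt = np.where(dt < 1, 0.75 * (1 - dt ** 2), 0.0)   # Epanechnikov, time
    return (ks * kt).sum(axis=1) / (len(events) * hs ** 2 * ht)

rng = np.random.default_rng(1)
events = rng.random((1000, 3)) * [10.0, 10.0, 365.0]     # x, y, day-of-year
grid = rng.random((5, 3)) * [10.0, 10.0, 365.0]
print(stkde(events, grid, hs=1.5, ht=14.0))
```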
A Parallel Saturation Algorithm on Shared Memory Architectures
NASA Technical Reports Server (NTRS)
Ezekiel, Jonathan; Siminiceanu
2007-01-01
Symbolic state-space generators are notoriously hard to parallelize. However, the Saturation algorithm implemented in the SMART verification tool differs from other sequential symbolic state-space generators in that it exploits the locality of firing events in asynchronous system models. This paper explores whether event locality can be utilized to efficiently parallelize Saturation on shared-memory architectures. Conceptually, we propose to parallelize the firing of events within a decision diagram node, which is technically realized via a thread pool. We discuss the challenges involved in our parallel design and conduct experimental studies on its prototypical implementation. On a dual-processor dual core PC, our studies show speed-ups for several example models, e.g., of up to 50% for a Kanban model, when compared to running our algorithm only on a single core.
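SMART is a C++ tool, and none of its decision-diagram machinery is shown here. Purely to illustrate the thread-pool idea described above, the toy sketch below submits the (hypothetical) firing of each enabled event at a node to a pool and merges the successor states; it omits the fixpoint iteration that Saturation performs.

```python
# Minimal sketch of the general idea only: dispatch the firing of the events
# enabled at a node to a thread pool and merge the successors.
from concurrent.futures import ThreadPoolExecutor

def fire_event(node_state, event):
    """Hypothetical helper: successor states that 'event' produces from node_state."""
    return {tuple(sorted(set(node_state) | {event}))}

def fire_in_parallel(node_state, enabled_events, pool):
    reached = {tuple(node_state)}
    futures = [pool.submit(fire_event, node_state, e) for e in enabled_events]
    for f in futures:                     # collect successors computed in parallel
        reached |= f.result()
    return reached

with ThreadPoolExecutor(max_workers=4) as pool:
    print(fire_in_parallel((1, 2), enabled_events=[3, 4, 5], pool=pool))
```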
NASA Astrophysics Data System (ADS)
Fehr, M.; Navarro, V.; Martin, L.; Fletcher, E.
2013-08-01
Space Situational Awareness[8] (SSA) is defined as the comprehensive knowledge, understanding and maintained awareness of the population of space objects, the space environment and existing threats and risks. As ESA's SSA Conjunction Prediction Service (CPS) requires the repetitive application of a processing algorithm against a data set of man-made space objects, it is crucial to exploit the highly parallelizable nature of this problem. Currently the CPS system makes use of OpenMP[7] for parallelization purposes using CPU threads, but only a GPU with its hundreds of cores can fully benefit from such high levels of parallelism. This paper presents the adaptation of several core algorithms[5] of the CPS for general-purpose computing on graphics processing units (GPGPU) using NVIDIA's Compute Unified Device Architecture (CUDA).
DICE/ColDICE: 6D collisionless phase space hydrodynamics using a lagrangian tesselation
NASA Astrophysics Data System (ADS)
Sousbie, Thierry
2018-01-01
DICE is a C++ template library designed to solve collisionless fluid dynamics in 6D phase space using massively parallel supercomputers via a hybrid OpenMP/MPI parallelization. ColDICE, based on DICE, implements a cosmological and physical Vlasov-Poisson solver for cold systems such as cold dark matter (CDM) dynamics.
Solid oxide fuel cell having compound cross flow gas patterns
Fraioli, A.V.
1983-10-12
A core construction for a fuel cell is disclosed having both parallel and cross flow passageways for the fuel and the oxidant gases. Each core passageway is defined by electrolyte and interconnect walls. Each electrolyte wall consists of cathode and anode materials sandwiching an electrolyte material. Each interconnect wall is formed as a sheet of inert support material having therein spaced small plugs of interconnect material, where cathode and anode materials are formed as layers on opposite sides of each sheet and are electrically connected together by the interconnect material plugs. Each interconnect wall in a wavy shape is connected along spaced generally parallel line-like contact areas between corresponding spaced pairs of generally parallel electrolyte walls, operable to define one tier of generally parallel flow passageways for the fuel and oxidant gases. Alternate tiers are arranged to have the passageways disposed normal to one another. Solid mechanical connection of the interconnect walls of adjacent tiers to the opposite sides of the common electrolyte wall therebetween is only at spaced point-like contact areas, 90 where the previously mentioned line-like contact areas cross one another.
Solid oxide fuel cell having compound cross flow gas patterns
Fraioli, Anthony V.
1985-01-01
A core construction for a fuel cell is disclosed having both parallel and cross flow passageways for the fuel and the oxidant gases. Each core passageway is defined by electrolyte and interconnect walls. Each electrolyte wall consists of cathode and anode materials sandwiching an electrolyte material. Each interconnect wall is formed as a sheet of inert support material having therein spaced small plugs of interconnect material, where cathode and anode materials are formed as layers on opposite sides of each sheet and are electrically connected together by the interconnect material plugs. Each interconnect wall in a wavy shape is connected along spaced generally parallel line-like contact areas between corresponding spaced pairs of generally parallel electrolyte walls, operable to define one tier of generally parallel flow passageways for the fuel and oxidant gases. Alternate tiers are arranged to have the passageways disposed normal to one another. Solid mechanical connection of the interconnect walls of adjacent tiers to the opposite sides of the common electrolyte wall therebetween is only at spaced point-like contact areas, 90 where the previously mentioned line-like contact areas cross one another.
NASA Technical Reports Server (NTRS)
Waller, Marvin C.; Scanlon, Charles H.
1999-01-01
A number of our nation's airports depend on closely spaced parallel runway operations to handle their normal traffic throughput when weather conditions are favorable. For safety these operations are curtailed in Instrument Meteorological Conditions (IMC) when the ceiling or visibility deteriorates and operations in many cases are limited to the equivalent of a single runway. Where parallel runway spacing is less than 2500 feet, capacity loss in IMC is on the order of 50 percent for these runways. Clearly, these capacity losses result in landing delays, inconveniences to the public, increased operational cost to the airlines, and general interruption of commerce. This document presents a description and the results of a fixed-base simulation study to evaluate an initial concept that includes a set of procedures for conducting safe flight in closely spaced parallel runway operations in IMC. Consideration of flight-deck information technology and displays to support the procedures is also included in the discussions. The procedures and supporting technology rely heavily on airborne capabilities operating in conjunction with the air traffic control system.
Non-Cartesian Parallel Imaging Reconstruction
Wright, Katherine L.; Hamilton, Jesse I.; Griswold, Mark A.; Gulani, Vikas; Seiberlich, Nicole
2014-01-01
Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be employed to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the non-homogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian GRAPPA, and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499
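As a concrete but simplified illustration of the iterative reconstruction family discussed above, the sketch below solves the SENSE normal equations with plain conjugate gradients on a Cartesian grid with synthetic coil maps; the non-Cartesian case additionally requires gridding/NUFFT operators, which are omitted. All data, coil maps, and the sampling mask are fabricated for illustration and do not come from the review.

```python
# Minimal sketch of CG-SENSE on a Cartesian grid with synthetic coil
# sensitivities (non-Cartesian variants add gridding/NUFFT steps omitted here).
import numpy as np

def sense_forward(x, sens, mask):
    """Image -> undersampled multi-coil k-space."""
    return mask * np.fft.fft2(sens * x, axes=(-2, -1))

def sense_adjoint(y, sens, mask):
    """Undersampled multi-coil k-space -> image."""
    return np.sum(np.conj(sens) * np.fft.ifft2(mask * y, axes=(-2, -1)), axis=0)

def cg_sense(y, sens, mask, n_iter=30):
    """Solve A^H A x = A^H y with plain conjugate gradients (Hermitian system)."""
    x = np.zeros(y.shape[-2:], dtype=complex)
    r = sense_adjoint(y, sens, mask)          # residual = A^H y with x = 0
    p, rs = r.copy(), np.vdot(r, r)
    for _ in range(n_iter):
        Ap = sense_adjoint(sense_forward(p, sens, mask), sens, mask)
        alpha = rs / np.vdot(p, Ap)
        x, r = x + alpha * p, r - alpha * Ap
        rs_new = np.vdot(r, r)
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(2)
img = rng.random((64, 64))                           # toy "true" image
sens = np.exp(2j * np.pi * rng.random((4, 64, 64)))  # 4 synthetic coil maps
mask = (rng.random((64, 64)) < 0.4).astype(float)    # random 2.5x undersampling
y = sense_forward(img, sens, mask)
recon = cg_sense(y, sens, mask)
print(np.abs(recon).mean())
```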
The Goddard Space Flight Center Program to develop parallel image processing systems
NASA Technical Reports Server (NTRS)
Schaefer, D. H.
1972-01-01
Parallel image processing, which is defined as image processing where all points of an image are operated upon simultaneously, is discussed. Coherent optical, noncoherent optical, and electronic methods are considered as parallel image processing techniques.
NASA Astrophysics Data System (ADS)
Xu, Wen-Sheng; Zhang, Wen-Zheng
2018-01-01
A new orientation relationship (OR) is found between Widmanstätten cementite precipitates and the austenite matrix in a 1.3C-14Mn steel. The associated habit plane (HP) and the dislocations in the HP have been investigated with transmission electron microscopy. The HP is parallel to ? in cementite, and it is parallel to ? in austenite. Three groups of interfacial dislocations are observed in the HP, with limited quantitative experimental data. The line directions, the spacing and the Burgers vectors of two sets of dislocations have been calculated based on a misfit analysis, which combines the CSL/DSC/O-lattice theories, row matching and good matching site (GMS) mappings. The calculated results are in reasonable agreement with the experimental results. The dislocations 'Coarse 1' and 'Fine 1' are in the same direction as the matching rows, i.e. ?. 'Coarse 1' dislocations are secondary dislocations with a Burgers vector of ?, and 'Fine 1' dislocations are pseudo-primary dislocations with a plausible Burgers vector of ?. The reason why the fraction of the new OR is much less than that of the dominant Pitsch OR has been discussed in terms of the degree of matching in the HPs.
Parallels between a Collaborative Research Process and the Middle Level Philosophy
ERIC Educational Resources Information Center
Dever, Robin; Ross, Diane; Miller, Jennifer; White, Paula; Jones, Karen
2014-01-01
The characteristics of the middle level philosophy as described in This We Believe closely parallel the collaborative research process. The journey of one research team is described in relationship to these characteristics. The collaborative process includes strengths such as professional relationships, professional development, courageous…
Relationship of Individual and Group Change: Ontogeny and Phylogeny in Biology.
ERIC Educational Resources Information Center
Gould, Steven Jay
1984-01-01
Considers the issue of parallels between ontogeny and phylogeny from an historical perspective. Discusses such parallels in relationship to two ontogenetic principles concerning recapitulation and sequence of stages. Differentiates between Piaget's use of the idea of recapitulation and Haeckel's biogenetic law. (Author/RH)
Mechanical stratigraphic controls on natural fracture spacing and penetration
NASA Astrophysics Data System (ADS)
McGinnis, Ronald N.; Ferrill, David A.; Morris, Alan P.; Smart, Kevin J.; Lehrmann, Daniel
2017-02-01
Fine-grained low permeability sedimentary rocks, such as shale and mudrock, have drawn attention as unconventional hydrocarbon reservoirs. Fracturing - both natural and induced - is extremely important for increasing permeability in otherwise low-permeability rock. We analyze natural extension fracture networks within a complete measured outcrop section of the Ernst Member of the Boquillas Formation in Big Bend National Park, west Texas. Results of bed-center, dip-parallel scanline surveys demonstrate nearly identical fracture strikes and slight variation in dip between mudrock, chalk, and limestone beds. Fracture spacing tends to increase proportional to bed thickness in limestone and chalk beds; however, dramatic differences in fracture spacing are observed in mudrock. A direct relationship is observed between fracture spacing/thickness ratio and rock competence. Vertical fracture penetrations measured from the middle of chalk and limestone beds generally extend to and often beyond bed boundaries into the vertically adjacent mudrock beds. In contrast, fractures in the mudrock beds rarely penetrate beyond the bed boundaries into the adjacent carbonate beds. Consequently, natural bed-perpendicular fracture connectivity through the mechanically layered sequence generally is poor. Fracture connectivity strongly influences permeability architecture, and fracture prediction should consider thin bed-scale control on fracture heights and the strong lithologic control on fracture spacing.
Skeletal changes during and after spaceflight.
Vico, Laurence; Hargens, Alan
2018-03-21
Space sojourns are challenging for life. The ability of the human body to adapt to these extreme conditions has been noted since the beginning of human space travel. Skeletal alterations that occur during spaceflight are now better understood owing to tools such as dual-energy X-ray densitometry and high-resolution peripheral quantitative CT, and murine models help researchers to understand cellular and matrix changes that occur in bone and that are difficult to measure in humans. However, questions remain with regard to bone adaptation and osteocyte fate, as well as to interactions of the skeleton with fluid shifts towards the head and with the vascular system. Further investigations into the relationships between the musculoskeletal system, energy metabolism and sensory motor acclimatisation are needed. In this regard, an integrated intervention is required that will address multiple systems simultaneously. Importantly, radiation and isolation-related stresses are gaining increased attention as the prospect of human exploration into deep space draws nearer. Although space is a unique environment, clear parallels exist between the effects of spaceflight, periods of immobilization and ageing, with possibly irreversible features. Space travel offers an opportunity to establish integrated deconditioning and ageing interventions that combine nutritional, physical and pharmaceutical strategies.
NASA Astrophysics Data System (ADS)
Loring, B.; Karimabadi, H.; Rortershteyn, V.
2015-10-01
The surface line integral convolution (LIC) visualization technique produces dense visualization of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
Accelerating the discovery of space-time patterns of infectious diseases using parallel computing.
Hohl, Alexander; Delmelle, Eric; Tang, Wenwu; Casas, Irene
2016-11-01
Infectious diseases have complex transmission cycles, and effective public health responses require the ability to monitor outbreaks in a timely manner. Space-time statistics facilitate the discovery of disease dynamics including rate of spread and seasonal cyclic patterns, but are computationally demanding, especially for datasets of increasing size, diversity and availability. High-performance computing reduces the effort required to identify these patterns, however heterogeneity in the data must be accounted for. We develop an adaptive space-time domain decomposition approach for parallel computation of the space-time kernel density. We apply our methodology to individual reported dengue cases from 2010 to 2011 in the city of Cali, Colombia. The parallel implementation reaches significant speedup compared to sequential counterparts. Density values are visualized in an interactive 3D environment, which facilitates the identification and communication of uneven space-time distribution of disease events. Our framework has the potential to enhance the timely monitoring of infectious diseases.
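One step of the decomposition described above is assigning to each space-time subdomain both the events it owns and a buffer of neighbouring events within one spatial and one temporal bandwidth, so that densities near subdomain edges are not biased. A minimal sketch of that selection step follows; the function name, bounds, and data are illustrative assumptions.

```python
# Minimal sketch: events owned by a space-time subdomain plus a bandwidth-sized
# buffer around it (buffer events are read during density estimation but not
# written back by this subdomain).
import numpy as np

def subdomain_with_buffer(events, xmin, xmax, ymin, ymax, tmin, tmax, hs, ht):
    """events: array of (x, y, t). Returns (owned_mask, included_mask)."""
    x, y, t = events.T
    owned = ((x >= xmin) & (x < xmax) & (y >= ymin) & (y < ymax) &
             (t >= tmin) & (t < tmax))
    included = ((x >= xmin - hs) & (x < xmax + hs) &
                (y >= ymin - hs) & (y < ymax + hs) &
                (t >= tmin - ht) & (t < tmax + ht))
    return owned, included

rng = np.random.default_rng(3)
events = rng.random((2000, 3)) * [100.0, 100.0, 365.0]
owned, included = subdomain_with_buffer(events, 0, 50, 0, 50, 0, 180, hs=2.0, ht=7.0)
print(owned.sum(), included.sum())
```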
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loring, Burlen; Karimabadi, Homa; Rortershteyn, Vadim
2014-07-01
The surface line integral convolution (LIC) visualization technique produces dense visualization of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
NASA Technical Reports Server (NTRS)
Pritchett, Amy R.; Hansman, R. John
1997-01-01
Efforts to increase airport capacity include studies of aircraft systems that would enable simultaneous approaches to closely spaced parallel runways in Instrument Meteorological Conditions (IMC). The time-critical nature of a parallel approach results in key design issues for current and future collision avoidance systems. Two part-task flight simulator studies have examined the procedural and display issues inherent in such a time-critical task, the interaction of the pilot with a collision avoidance system, and the alerting criteria and avoidance maneuvers preferred by subjects.
Sanders, David M.; Decker, Derek E.
1999-01-01
Optical patterns and lithographic techniques are used as part of a process to embed parallel and evenly spaced conductors in the non-planar surfaces of an insulator to produce high gradient insulators. The approach extends the size to which high gradient insulating structures can be fabricated and improves the performance of those insulators by reducing the scale of the alternating parallel lines of insulator and conductor along the surface. This fabrication approach also substantially decreases the cost required to produce high gradient insulators.
Checkpoint-Restart in User Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
CRUISE implements a user-space file system that stores data in main memory and transparently spills over to other storage, like local flash memory or the parallel file system, as needed. CRUISE also exposes file contents for remote direct memory access, allowing external tools to copy files to the parallel file system in the background with reduced CPU interruption.
An Analysis of the Role of ATC in the AILS Concept
NASA Technical Reports Server (NTRS)
Waller, Marvin C.; Doyle, Thomas M.; McGee, Frank G.
2000-01-01
Airborne information for lateral spacing (AILS) is a concept for making approaches to closely spaced parallel runways in instrument meteorological conditions (IMC). Under the concept, each equipped aircraft will assume responsibility for accurately managing its flight path along the approach course and maintaining separation from aircraft on the parallel approach. This document presents the results of an analysis of the AILS concept from an Air Traffic Control (ATC) perspective. The process has been examined in a step-by-step manner to determine ATC system support necessary to safely conduct closely spaced parallel approaches using the AILS concept. The analysis identified a number of issues related to integrating the process into the airspace system, and operating procedures are proposed.
A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems
NASA Astrophysics Data System (ADS)
Pawlicki, Ted
1988-03-01
Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel distributed, connectionist, neural networks have been shown to have appealing content addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist, neural network. The emergent properties of content addressability and resistance to noise are exploited to perform indexing of the appropriate object centered model from image centered primitives. The system consists of three network modules each of which represent information relative to a different frame of reference. The model memory network is a large state space vector where fields in the vector correspond to ordered component objects and relative, object based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object based and the image based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition by component. It also seems to support Marr's notions of hierarchical indexing (i.e., the specificity, adjunct, and parent indices). It supports the notion that multiple canonical views of an object may have to be stored in memory to enable its efficient identification. The use of variable fields in the state space vectors appears to keep the number of required nodes in the network down to a tractable number while imposing a semantic value on different areas of the state space. This semantic imposition supports an interface between the analogical aspects of neural networks and the propositional paradigms of symbolic processing.
Peng, Wei; Crouse, Julia
2013-06-01
Although multiplayer modes are common among contemporary video games, the bulk of game research focuses on the single-player mode. To fill the gap in the literature, the current study investigated the effects of different multiplayer modes on enjoyment, future play motivation, and the actual physical activity intensity in an active video game. One hundred sixty-two participants took part in a one-factor between-subjects laboratory experiment with three conditions: (a) single player: play against self pretest score; (b) cooperation with another player in the same physical space; (c) parallel competition with another player in separated physical spaces. We found that parallel competition in separate physical spaces was the optimal mode, since it resulted in both high enjoyment and future play motivation and high physical intensity. Implications for future research on multiplayer mode and play space as well as active video game-based physical activity interventions are discussed.
NASA Astrophysics Data System (ADS)
Tian, Fang; Cao, Xianyong; Dallmeyer, Anne; Zhao, Yan; Ni, Jian; Herzschuh, Ulrike
2017-01-01
Temporal and spatial stability of the vegetation-climate relationship is a basic ecological assumption for pollen-based quantitative inferences of past climate change and for predicting future vegetation. We explore this assumption for the Holocene in eastern continental Asia (China, Mongolia). Boosted regression trees (BRT) between fossil pollen taxa percentages (Abies, Artemisia, Betula, Chenopodiaceae, Cyperaceae, Ephedra, Picea, Pinus, Poaceae and Quercus) and climate model outputs of mean annual precipitation (Pann) and mean temperature of the warmest month (Mtwa) for 9 and 6 ka (ka = thousand years before present) were set up and results compared to those obtained from relating modern pollen to modern climate. Overall, our results reveal only slight temporal differences in the pollen-climate relationships. Our analyses suggest that the importance of Pann compared with Mtwa for taxa distribution is higher today than it was at 6 ka and 9 ka. In particular, the relevance of Pann for Picea and Pinus increases and has become the main determinant. This change in the climate-tree pollen relationship parallels a widespread tree pollen decrease in north-central China and the eastern Tibetan Plateau. We assume that this is at least partly related to vegetation-climate disequilibrium originating from human impact. Increased atmospheric CO2 concentration may have permitted the expansion of moisture-loving herb taxa (Cyperaceae and Poaceae) during the late Holocene into arid/semi-arid areas. We furthermore find that the pollen-climate relationship between north-central China and the eastern Tibetan Plateau is generally similar, but that regional differences are larger than temporal differences. In summary, vegetation-climate relationships in China are generally stable in space and time, and pollen-based climate reconstructions can be applied to the Holocene. Regional differences imply the calibration-set should be restricted spatially.
Chromatin organization and global regulation of Hox gene clusters
Montavon, Thomas; Duboule, Denis
2013-01-01
During development, a properly coordinated expression of Hox genes, within their different genomic clusters is critical for patterning the body plans of many animals with a bilateral symmetry. The fascinating correspondence between the topological organization of Hox clusters and their transcriptional activation in space and time has served as a paradigm for understanding the relationships between genome structure and function. Here, we review some recent observations, which revealed highly dynamic changes in the structure of chromatin at Hox clusters, in parallel with their activation during embryonic development. We discuss the relevance of these findings for our understanding of large-scale gene regulation. PMID:23650639
Multigrid methods with space–time concurrency
Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...
2017-10-06
Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.
Multigrid methods with space–time concurrency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.
Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.
Improving parallel I/O autotuning with performance modeling
Behzad, Babak; Byna, Surendra; Wild, Stefan M.; ...
2014-01-01
Various layers of the parallel I/O subsystem offer tunable parameters for improving I/O performance on large-scale computers. However, searching through a large parameter space is challenging. We are working towards an autotuning framework for determining the parallel I/O parameters that can achieve good I/O performance for different data write patterns. In this paper, we characterize parallel I/O and discuss the development of predictive models for use in effectively reducing the parameter space. Furthermore, applying our technique on tuning an I/O kernel derived from a large-scale simulation code shows that the search time can be reduced from 12 hours to 2 hours, while achieving 54X I/O performance speedup.
Querying databases of trajectories of differential equations 2: Index functions
NASA Technical Reports Server (NTRS)
Grossman, Robert
1991-01-01
Suppose that a large number of parameterized trajectories γ of a dynamical system evolving in R^N are stored in a database. Let η ⊂ R^N denote a parameterized path in Euclidean space, and let ‖·‖ denote a norm on the space of paths. Data structures and indices for trajectories are defined and algorithms are given to answer queries of the following forms: Query 1. Given a path η, determine whether η occurs as a subtrajectory of any trajectory γ from the database. If so, return the trajectory; otherwise, return null. Query 2. Given a path η, return the trajectory γ from the database which minimizes the norm ‖η − γ‖.
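For Query 1 on discretely sampled paths, a brute-force check (ignoring the paper's index structures entirely) can slide η along each stored γ and compare under a sup norm of the pointwise differences. The tolerance, sampling, and data below are illustrative assumptions.

```python
# Minimal sketch of Query 1 for sampled paths: slide the query path eta along
# each stored trajectory gamma and test sup_i ||eta_i - gamma_{start+i}||
# against a tolerance. Index structures from the paper are not shown.
import numpy as np

def find_subtrajectory(eta, trajectories, tol):
    """eta: (m, N) path samples; trajectories: list of (n_i, N) arrays."""
    m = len(eta)
    for gamma in trajectories:
        for start in range(len(gamma) - m + 1):
            window = gamma[start:start + m]
            if np.max(np.linalg.norm(window - eta, axis=1)) <= tol:
                return gamma
    return None

rng = np.random.default_rng(4)
gammas = [np.cumsum(rng.standard_normal((200, 3)), axis=0) for _ in range(5)]
eta = gammas[2][50:80] + 0.01 * rng.standard_normal((30, 3))   # noisy segment copy
print(find_subtrajectory(eta, gammas, tol=0.1) is not None)
```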
Pilot Non-Conformance to Alerting System Commands During Closely Spaced Parallel Approaches
NASA Technical Reports Server (NTRS)
Pritchett, Amy R.; Hansman, R. John
1997-01-01
Pilot non-conformance to alerting system commands has been noted in general and to a TCAS-like collision avoidance system in a previous experiment. This paper details two experiments studying collision avoidance during closely-spaced parallel approaches in instrument meteorological conditions (IMC), and specifically examining possible causal factors of, and design solutions to, pilot non-conformance.
Active illuminated space object imaging and tracking simulation
NASA Astrophysics Data System (ADS)
Yue, Yufang; Xie, Xiaogang; Luo, Wen; Zhang, Feizhou; An, Jianzhu
2016-10-01
Optical earth imaging simulation of a space target in orbit, and its extraction under laser illumination, are discussed. Based on the orbit and corresponding attitude of a satellite, a 3D imaging rendering was built. A general simulation platform was developed that adapts to different 3D satellite models and to the relative position relationship between the satellite and the earth detector system. A unified parallel projection technique is proposed in this paper. Furthermore, we note that the random optical distribution under laser illumination is a challenge for object discrimination, the strong randomness of the active laser illumination speckle being the primary factor. The combined effects of a multi-frame accumulation process and tracking methods such as Meanshift tracking, contour poid, and filter deconvolution were simulated. Comparison of the results shows that the combination of multi-frame accumulation and contour poid is preferable for actively laser-illuminated images, offering high tracking precision and stability over multiple object attitudes.
Wigner, E.P.; Ohlinger, L.E.; Young, G.J.; Weinberg, A.M.
1959-02-17
Radiation shield construction is described for a nuclear reactor. The shield is comprised of a plurality of steel plates arranged in parallel spaced relationship within a peripheral shell. Reactor coolant inlet tubes extend at right angles through the plates and baffles are arranged between the plates at right angles thereto and extend between the tubes to create a series of zigzag channels between the plates for the circulation of coolant fluid through the shield. The shield may be divided into two main sections; an inner section adjacent the reactor container and an outer section spaced therefrom. Coolant through the first section may be circulated at a faster rate than coolant circulated through the outer section since the area closest to the reactor container is at a higher temperature and is more radioactive. The two sections may have separate cooling systems to prevent the coolant in the outer section from mixing with the more contaminated coolant in the inner section.
"What Do You Think We Should Do?": Relationship and Reflexivity in Participant Observation.
Elliot, Michelle L
2015-07-01
This article uses three concepts as a framework by which to examine how the interrelational elements of ethnographic approaches to qualitative inquiry reflect dimensions of therapeutic engagement. Participant observation, reflexivity, and context are all widely and routinely included within research methods; however, they are less frequently attended to directly in their experiential capacity through the lens of the researcher, clinician turned investigator. A unique study design will be profiled to reflect the complicated juxtaposition between methods, questions, sample population, time, space, and identity. Studying occupational therapy students traveling abroad for a short-term immersion experience, this narrative study called on a necessary and attentive awareness of locality as the researcher traveled with the group. Conducting ethnographic research where the researcher's therapeutic skills aided and constrained relationships resulted in rich, guarded, and relevant insights that parallel the therapeutic use of self in occupational therapy practice.
Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.
Palkowski, Marek; Bielecki, Wlodzimierz
2018-01-15
RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improving code locality for this kind of algorithm is one of the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques based on the transitive closure of dependence graphs to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques are within the iteration space slicing framework - the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is defining a proper tile size and tile dimension which impact parallelism degree and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (parameters are variables defining tile size). For this purpose, we first generate two nonparametric tiled codes with different fixed tile sizes but with the same code structure and then derive a general affine model, which describes all integer factors available in expressions of those codes. Using this model and known integer factors present in the mentioned expressions (they define the left-hand side of the model), we find unknown integers in this model for each integer factor available in the same fixed tiled code position and replace in this code expressions, including integer factors, with those including parameters. Then we use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover in a given search space the best tile size and tile dimension maximizing target code performance. For a given search space, the presented approach allows us to choose the best tile size and tile dimension in parallel tiled code implementing Nussinov's RNA folding. Experimental results, obtained on modern Intel multi-core processors, demonstrate that this code outperforms known closely related implementations when the length of RNA strands is greater than 2500.
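For readers unfamiliar with the underlying recurrence, here is a plain serial sketch of Nussinov's dynamic program, i.e., the affine loop nest that the tiling described above transforms. It contains none of the paper's tiling or parallelization, and the base-pair set and test sequence are only illustrative.

```python
# Minimal serial sketch of Nussinov's recurrence (the loop nest that polyhedral
# tiling operates on); no tiling or parallelism here.
import numpy as np

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq):
    n = len(seq)
    N = np.zeros((n, n), dtype=int)
    for length in range(1, n):                 # diagonal by diagonal
        for i in range(n - length):
            j = i + length
            best = N[i + 1][j - 1] + (1 if (seq[i], seq[j]) in PAIRS else 0)
            best = max(best, N[i + 1][j], N[i][j - 1])
            for k in range(i + 1, j):          # bifurcation term
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]

print(nussinov("GGGAAAUCC"))   # maximum number of base pairs for the toy sequence
```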
Parallel computing method for simulating hydrological processesof large rivers under climate change
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.
2016-12-01
Climate change is one of the most widely recognized global environmental problems. It has altered the distribution of watershed hydrological processes in time and space, especially in the world's large rivers. Watershed hydrological process simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a large amount of computation, especially for large rivers, and therefore requires computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. To address this problem, existing parallel methods mostly parallelize the computation over the space and time dimensions: they process the natural features of the distributed hydrological model grid by grid (unit or basin), in order from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the space-time runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility, meaning it can make full use of available computing and storage resources even when they are limited, and its computing efficiency improves linearly as computing resources increase. This method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
Parallel family trees for transfer matrices in the Potts model
NASA Astrophysics Data System (ADS)
Navarro, Cristobal A.; Canfora, Fabrizio; Hitschfeld, Nancy; Navarro, Gonzalo
2015-02-01
The computational cost of transfer matrix methods for the Potts model is related to the question: in how many ways can two layers of a lattice be connected? Answering the question leads to the generation of a combinatorial set of lattice configurations. This set defines the configuration space of the problem, and the smaller it is, the faster the transfer matrix can be computed. The configuration space of generic (q, v) transfer matrix methods for strips is in the order of the Catalan numbers, which grows asymptotically as O(4^m) where m is the width of the strip. Other transfer matrix methods with a smaller configuration space indeed exist but they make assumptions on the temperature, number of spin states, or restrict the structure of the lattice. In this paper we propose a parallel algorithm that uses a sub-Catalan configuration space of O(3^m) to build the generic (q, v) transfer matrix in a compressed form. The improvement is achieved by grouping the original set of Catalan configurations into a forest of family trees, in such a way that the solution to the problem is now computed by solving the root node of each family. As a result, the algorithm becomes exponentially faster than the Catalan approach while still highly parallel. The resulting matrix is stored in a compressed form using O(3^m × 4^m) of space, making numerical evaluation and decompression faster than evaluating the matrix in its O(4^m × 4^m) uncompressed form. Experimental results for different sizes of strip lattices show that the parallel family trees (PFT) strategy indeed runs exponentially faster than the Catalan Parallel Method (CPM), especially when dealing with dense transfer matrices. In terms of parallel performance, we report strong-scaling speedups of up to 5.7× when running on an 8-core shared memory machine and 28× for a 32-core cluster. The best balance of speedup and efficiency for the multi-core machine was achieved when using p = 4 processors, while for the cluster scenario it was in the range p ∈ [8, 10]. Because of the parallel capabilities of the algorithm, a large-scale execution of the parallel family trees strategy in a supercomputer could contribute to the study of wider strip lattices.
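A quick numeric comparison of the configuration-space sizes mentioned above (Catalan numbers versus 3^m and 4^m) makes the growth rates concrete; the strip widths chosen are arbitrary.

```python
# Quick look at the configuration-space sizes discussed above: Catalan numbers
# (which grow like 4^m asymptotically) versus the 3^m family-tree space.
from math import comb

def catalan(m):
    return comb(2 * m, m) // (m + 1)

for m in (4, 8, 12, 16):
    print(f"m={m:2d}  Catalan={catalan(m):>10d}  3^m={3**m:>10d}  4^m={4**m:>12d}")
```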
Steering law for parallel mounted double-gimbaled control moment gyros
NASA Technical Reports Server (NTRS)
Kennel, H. F.
1975-01-01
Parallel mounting of double-gimbaled control moment gyros (DG CMG) is discussed in terms of simplification of the steering law. The steering law/parallel mounted DG CMG is considered to be a 'CMG kit' applicable to any space vehicle where the need for DG CMG's has been established.
Fels, S S; Hinton, G E
1998-01-01
Glove-TalkII is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a ContactGlove, a three-space tracker, and a foot pedal), a parallel formant speech synthesizer, and three neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed user-defined relationship between hand position and vowel sound and does not require any training examples from the user. Volume, fundamental frequency, and stop consonants are produced with a fixed mapping from the input devices. One subject has trained to speak intelligibly with Glove-TalkII. He speaks slowly but with far more natural sounding pitch variations than a text-to-speech synthesizer.
Vision-Based Navigation and Parallel Computing
1990-08-01
5.8. Behzad Kamgar-Parsi and Behrooz Kamgar-Parsi, "On Problem Solving with Hopfield Neural Networks", CAR-TR-462, CS-TR-... Second, the hypercube connections support logarithmic implementations of fundamental parallel algorithms, such as grid permutations and scan ... the pose space. It also uses a set of virtual processors to represent an orthogonal projection grid, and projections of the six-dimensional pose space
A parallel orbital-updating based plane-wave basis method for electronic structure calculations
NASA Astrophysics Data System (ADS)
Pan, Yan; Dai, Xiaoying; de Gironcoli, Stefano; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui
2017-11-01
Motivated by the recently proposed parallel orbital-updating approach in the real-space method [1], we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to the traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large scale calculations on modern supercomputers.
NASA Astrophysics Data System (ADS)
Allphin, Devin
Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. These forces were determined using parallel surrogate and exact approximation methods, demonstrating the comparative benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design space filling across four (4) independent design variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+ with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
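As a compact illustration of the surrogate side of this workflow (not the thesis code), the sketch below draws a Latin hypercube sample over four design variables, evaluates a stand-in analytic "response" in place of the converged STAR-CCM+ force results, and fits a Kriging-type Gaussian-process interpolator to it. All function names, sample sizes, and the response itself are assumptions.

```python
# Minimal sketch of the surrogate workflow: Latin hypercube sampling over four
# design variables, a stand-in "expensive" response, and a Kriging-type fit.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_response(X):
    """Stand-in for a converged CFD force result at each design point."""
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2] * X[:, 3]

sampler = qmc.LatinHypercube(d=4, seed=0)
X_train = sampler.random(n=40)                      # 40 design sites in [0, 1]^4
y_train = expensive_response(X_train)

kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
kriging.fit(X_train, y_train)

X_test = sampler.random(n=5)
y_pred, y_std = kriging.predict(X_test, return_std=True)
print(np.c_[expensive_response(X_test), y_pred, y_std])   # truth, prediction, uncertainty
```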
ERIC Educational Resources Information Center
Saglam, Murat
2015-01-01
This study explored the relationship between accuracy of and confidence in performance of 114 prospective primary school teachers in answering diagnostic questions on potential difference in parallel electric circuits. The participants were required to indicate their confidence in their answers for each question. Bias and calibration indices were…
NASA Technical Reports Server (NTRS)
Wilson, T. G.; Lee, F. C. Y.; Burns, W. W., III; Owen, H. A., Jr.
1975-01-01
It recently has been shown in the literature that many dc-to-square-wave parallel inverters which are widely used in power-conditioning applications can be grouped into one of two families. Each family is characterized by an equivalent RLC network. Based on this approach, a classification procedure is presented for self-oscillating parallel inverters which makes evident natural relationships which exist between various inverter configurations. By utilizing concepts from the basic theory of negative resistance oscillators and the principle of duality as applied to nonlinear networks, a chain of relationships is established which enables a methodical transfer of knowledge gained about one family of inverters to any of the other families in the classification array.
A nonrecursive order N preconditioned conjugate gradient: Range space formulation of MDOF dynamics
NASA Technical Reports Server (NTRS)
Kurdila, Andrew J.
1990-01-01
While excellent progress has been made in deriving algorithms that are efficient for certain combinations of system topologies and concurrent multiprocessing hardware, several issues must be resolved to incorporate transient simulation in the control design process for large space structures. Specifically, strategies must be developed that are applicable to systems with numerous degrees of freedom. In addition, the algorithms must have a growth potential in that they must also be amenable to implementation on forthcoming parallel system architectures. For mechanical system simulation, this fact implies that algorithms are required that induce parallelism on a fine scale, suitable for the emerging class of highly parallel processors; and transient simulation methods must be automatically load balancing for a wider collection of system topologies and hardware configurations. These problems are addressed by employing a combination range space/preconditioned conjugate gradient formulation of multi-degree-of-freedom dynamics. The method described has several advantages. In a sequential computing environment, the method has the features that: by employing regular ordering of the system connectivity graph, an extremely efficient preconditioner can be derived from the 'range space metric', as opposed to the system coefficient matrix; because of the effectiveness of the preconditioner, preliminary studies indicate that the method can achieve performance rates that depend linearly upon the number of substructures, hence the title 'Order N'; and the method is non-assembling. Furthermore, the approach is promising as a potential parallel processing algorithm in that the method exhibits a fine parallel granularity suitable for a wide collection of combinations of physical system topologies/computer architectures; and the method is easily load balanced among processors, and does not rely upon system topology to induce parallelism.
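The report's range-space metric preconditioner is not reproduced here, but the preconditioned conjugate gradient loop it builds on can be illustrated with a minimal sketch for a generic symmetric positive-definite system, using a simple diagonal (Jacobi) preconditioner as a stand-in.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for SPD A; M_inv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system with a Jacobi (diagonal) preconditioner.
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)
b = rng.standard_normal(50)
diag = np.diag(A)
x = pcg(A, b, lambda r: r / diag)
print("residual norm:", np.linalg.norm(A @ x - b))
```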
NASA Astrophysics Data System (ADS)
Forbes, Richard G.
2008-10-01
This paper reports (a) a simple dimensionless equation relating to field-emitted vacuum space charge (FEVSC) in parallel-plane geometry, namely 9ζ²θ² − 3θ − 4ζ + 3 = 0, where ζ is the FEVSC "strength" and θ is the reduction in emitter surface field (θ = field-with/field-without FEVSC), and (b) the formula j = 9θ²ζ/4, where j is the ratio of emitted current density J_P to that predicted by Child's law. These equations apply to any charged particle, positive or negative, emitted with near-zero kinetic energy. They yield existing and additional basic formulas in planar FEVSC theory. The first equation also yields the well-known cubic equation describing the relationship between J_P and applied voltage; a method of analytical solution is described. Illustrative FEVSC effects in a liquid metal ion source and in field electron emission are discussed. For Fowler-Nordheim plots, a "turn-over" effect is predicted in the high-FEVSC limit. The higher the voltage-to-local-field conversion factor for the emitter concerned, the higher the field at which turn-over occurs. Past experiments have not found complete turn-over; possible reasons are noted. For real field emitters, planar theory is a worst-case limit; however, adjusting ζ on the basis of Monte Carlo calculations might yield formulae adequate for real situations.
Design of k-Space Channel Combination Kernels and Integration with Parallel Imaging
Beatty, Philip J.; Chang, Shaorong; Holmes, James H.; Wang, Kang; Brau, Anja C. S.; Reeder, Scott B.; Brittain, Jean H.
2014-01-01
Purpose In this work, a new method is described for producing local k-space channel combination kernels using a small amount of low-resolution multichannel calibration data. Additionally, this work describes how these channel combination kernels can be combined with local k-space unaliasing kernels produced by the calibration phase of parallel imaging methods such as GRAPPA, PARS and ARC. Methods Experiments were conducted to evaluate both the image quality and computational efficiency of the proposed method compared to a channel-by-channel parallel imaging approach with image-space sum-of-squares channel combination. Results Results indicate comparable image quality overall, with some very minor differences seen in reduced field-of-view imaging. It was demonstrated that this method enables a speed up in computation time on the order of 3–16X for 32-channel data sets. Conclusion The proposed method enables high quality channel combination to occur earlier in the reconstruction pipeline, reducing computational and memory requirements for image reconstruction. PMID:23943602
Twelve Channel Optical Fiber Connector Assembly: From Commercial Off the Shelf to Space Flight Use
NASA Technical Reports Server (NTRS)
Ott, Melaine N.
1998-01-01
The commercial off the shelf (COTS) twelve channel optical fiber MTP array connector and ribbon cable assembly is being validated for space flight use and the results of this study to date are presented here. The interconnection system implemented for the Parallel Fiber Optic Data Bus (PFODB) physical layer will include a 100/140 micron diameter optical fiber in the cable configuration among other enhancements. As part of this investigation, the COTS 62.5/125 microns optical fiber cable assembly has been characterized for space environment performance as a baseline for improving the performance of the 100/140 micron diameter ribbon cable for the Parallel FODB application. Presented here are the testing and results of random vibration and thermal environmental characterization of this commercial off the shelf (COTS) MTP twelve channel ribbon cable assembly. This paper is the first in a series of papers which will characterize and document the performance of Parallel FODB's physical layer from COTS to space flight worthy.
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David
1987-01-01
The capability was developed of rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets via the implementation of computer graphics modeling techniques on the Massively Parallel Processor (MPP) by employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.
Interval Management with Spacing to Parallel Dependent Runways (IMSPIDR) Experiment and Results
NASA Technical Reports Server (NTRS)
Baxley, Brian T.; Swieringa, Kurt A.; Capron, William R.
2012-01-01
An area in aviation operations that may offer an increase in efficiency is the use of continuous descent arrivals (CDA), especially during dependent parallel runway operations. However, variations in aircraft descent angle and speed can cause inaccuracies in estimated time of arrival calculations, requiring an increase in the size of the buffer between aircraft. This in turn reduces airport throughput and limits the use of CDAs during high-density operations, particularly to dependent parallel runways. The Interval Management with Spacing to Parallel Dependent Runways (IMSPiDR) concept uses a trajectory-based spacing tool onboard the aircraft to achieve, by the runway, an air traffic control assigned spacing interval behind the previous aircraft. This paper describes the first-ever experiment on this concept at NASA Langley and its results. Pilots flew CDAs to the Dallas-Fort Worth airport using airspeed calculations from the spacing tool to achieve either a Required Time of Arrival (RTA) or Interval Management (IM) spacing interval at the runway threshold. Results indicate flight crews were able to land aircraft on the runway within a mean of 2 seconds, and with less than 4 seconds standard deviation, of the air traffic control assigned time, even in the presence of forecast wind error and large time delay. Statistically significant differences in delivery precision and number of speed changes as a function of stream position were observed; however, there was no trend to the differences, and the error did not increase during the operation. Two areas the flight crews indicated as not acceptable were the additional number of speed changes required during the wind shear event and the issuing of an IM clearance via data link while at low altitude. A number of refinements and future spacing algorithm capabilities were also identified.
Unilateral distalization of a maxillary molar with sliding mechanics: a case report.
Keles, Ahmet
2002-06-01
A unilateral Class II relationship could arise due to early loss of an upper second deciduous molar on one side during the mixed dentition period. This would allow the mesial drift of the molars, which may block the eruption of the second premolar. A 15-year 8-month-old male patient presented with a Class II molar relationship on the right, and a Class I canine and molar relationship on the left side. His E was extracted when he was 5 years old. The 54 were impacted and the 3 was ectopically positioned due to the space loss from the mesial migration of the 76. In addition, 21 1 were in cross-bite. Skeletally, he had a Class III tendency with a low MMPA. He presented with a straight profile and retruded upper lip. For maxillary molar distalization, a newly developed 'Keles Slider' was used. The appliance was composed of one premolar and two molar bands, and the anchorage unit was composed of a wide Nance button. 46 were connected to the Nance button and, therefore, included in the anchorage unit. The point of distal force application was close to the centre of resistance of the 6 and parallel to the occlusal plane. Ni-Ti coil springs were used and 200 g of distal force was applied. Seven months later the space required for eruption of the permanent premolars and canine was regained, and the anterior cross-bite corrected. The appliance was removed and final alignment of the teeth was achieved with fixed appliances. At the end of the second-phase treatment, a Class I molar and canine relationship was achieved on both sides, the anterior cross-bite was corrected, the inter-incisal angle was improved, and an ideal overbite and overjet relationship was achieved. The active treatment time was 27 months.
Parallel algorithms for mapping pipelined and parallel computations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1988-01-01
Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
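The improved algorithms themselves are not reproduced in the abstract; as a hedged illustration of the underlying mapping problem, the sketch below uses a generic dynamic program that contiguously assigns m module weights to n processors so as to minimize the bottleneck (maximum per-processor load), one standard formulation of pipelined mapping.

```python
def map_modules(weights, n_proc):
    """Contiguously partition module weights onto n_proc processors,
    minimizing the maximum per-processor load (the pipeline bottleneck)."""
    m = len(weights)
    prefix = [0.0]
    for w in weights:
        prefix.append(prefix[-1] + w)
    INF = float("inf")
    # cost[j][i]: best bottleneck when the first i modules use j processors
    cost = [[INF] * (m + 1) for _ in range(n_proc + 1)]
    cut = [[0] * (m + 1) for _ in range(n_proc + 1)]
    cost[0][0] = 0.0
    for j in range(1, n_proc + 1):
        for i in range(1, m + 1):
            for k in range(i):                      # last processor gets modules k..i-1
                load = prefix[i] - prefix[k]
                bottleneck = max(cost[j - 1][k], load)
                if bottleneck < cost[j][i]:
                    cost[j][i], cut[j][i] = bottleneck, k
    # Recover the partition boundaries.
    bounds, i = [], m
    for j in range(n_proc, 0, -1):
        bounds.append((cut[j][i], i))
        i = cut[j][i]
    return cost[n_proc][m], list(reversed(bounds))

bottleneck, parts = map_modules([4, 2, 7, 1, 3, 6, 2], 3)
print(bottleneck, parts)   # 11.0 [(0, 2), (2, 4), (4, 7)]
```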
Work stealing for GPU-accelerated parallel programs in a global address space framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram
Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a function of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.
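The distributed CPU-GPU design itself is not reproduced here; as a toy, CPU-only illustration of the work-stealing idea, the sketch below has each worker pop tasks from its own deque and, when idle, steal from the opposite end of a randomly chosen victim's deque.

```python
import threading, random, time
from collections import deque

NUM_WORKERS = 4
deques = [deque() for _ in range(NUM_WORKERS)]
results = [0] * NUM_WORKERS

def worker(wid):
    while True:
        try:
            task = deques[wid].pop()                 # newest local task (LIFO)
        except IndexError:
            victim = random.randrange(NUM_WORKERS)
            try:
                task = deques[victim].popleft()      # steal oldest task (FIFO)
            except IndexError:
                if all(len(d) == 0 for d in deques):
                    return                           # quiescence: no work anywhere
                time.sleep(0.001)
                continue
        results[wid] += task                         # "execute" the task

# Seed an intentionally unbalanced initial distribution of 1000 unit tasks.
for _ in range(1000):
    deques[0].append(1)

threads = [threading.Thread(target=worker, args=(w,)) for w in range(NUM_WORKERS)]
for t in threads: t.start()
for t in threads: t.join()
print("tasks executed per worker:", results, "total:", sum(results))
```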
Klystron having electrostatic quadrupole focusing arrangement
Maschke, Alfred W.
1983-08-30
A klystron includes a source for emitting at least one electron beam, and an accelerator for accelerating the beam in a given direction through a number of drift tube sections successively aligned relative to one another in the direction of the beam. A number of electrostatic quadrupole arrays are successively aligned relative to one another along at least one of the drift tube sections in the beam direction for focusing the electron beam. Each of the electrostatic quadrupole arrays forms a different quadrupole for each electron beam. Two or more electron beams can be maintained in parallel relationship by the quadrupole arrays, thereby enabling space charge limitations encountered with conventional single beam klystrons to be overcome.
Klystron having electrostatic quadrupole focusing arrangement
Maschke, A.W.
1983-08-30
A klystron includes a source for emitting at least one electron beam, and an accelerator for accelerating the beam in a given direction through a number of drift tube sections successively aligned relative to one another in the direction of the beam. A number of electrostatic quadrupole arrays are successively aligned relative to one another along at least one of the drift tube sections in the beam direction for focusing the electron beam. Each of the electrostatic quadrupole arrays forms a different quadrupole for each electron beam. Two or more electron beams can be maintained in parallel relationship by the quadrupole arrays, thereby enabling space charge limitations encountered with conventional single beam klystrons to be overcome. 4 figs.
Parallelized reliability estimation of reconfigurable computer networks
NASA Technical Reports Server (NTRS)
Nicol, David M.; Das, Subhendu; Palumbo, Dan
1990-01-01
A parallelized system, ASSURE, for computing the reliability of embedded avionics flight control systems which are able to reconfigure themselves in the event of failure is described. ASSURE accepts a grammar that describes a reliability semi-Markov state-space. From this it creates a parallel program that simultaneously generates and analyzes the state-space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860, and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance.
Parallel Reconstruction Using Null Operations (PRUNO)
Zhang, Jian; Liu, Chunlei; Moseley, Michael E.
2011-01-01
A novel iterative k-space data-driven technique, namely Parallel Reconstruction Using Null Operations (PRUNO), is presented for parallel imaging reconstruction. In PRUNO, both data calibration and image reconstruction are formulated into linear algebra problems based on a generalized system model. An optimal data calibration strategy is demonstrated using singular value decomposition (SVD), and an iterative conjugate-gradient approach is proposed to efficiently solve for missing k-space samples during reconstruction. With its generalized formulation and precise mathematical model, PRUNO reconstruction yields good accuracy, flexibility, and stability. Both computer simulations and in vivo studies have shown that PRUNO produces much better reconstruction quality than autocalibrating partially parallel acquisition (GRAPPA), especially at high acceleration rates. With the aid of PRUNO reconstruction, ultra-high-acceleration parallel imaging can be performed with decent image quality. For example, we have performed successful PRUNO reconstruction at a reduction factor of 6 (effective factor of 4.44) with 8 coils and only a few autocalibration signal (ACS) lines. PMID:21604290
Evaluation of fault-tolerant parallel-processor architectures over long space missions
NASA Technical Reports Server (NTRS)
Johnson, Sally C.
1989-01-01
The impact of a five year space mission environment on fault-tolerant parallel processor architectures is examined. The target application is a Strategic Defense Initiative (SDI) satellite requiring 256 parallel processors to provide the computation throughput. The reliability requirements are that the system still be operational after five years with 0.99 probability and that the probability of system failure during one-half hour of full operation be less than 10⁻⁷. The fault tolerance features an architecture must possess to meet these reliability requirements are presented, many potential architectures are briefly evaluated, and one candidate architecture, the Charles Stark Draper Laboratory's Fault-Tolerant Parallel Processor (FTPP) is evaluated in detail. A methodology for designing a preliminary system configuration to meet the reliability and performance requirements of the mission is then presented and demonstrated by designing an FTPP configuration.
NASA Technical Reports Server (NTRS)
1994-01-01
CESDIS, the Center of Excellence in Space Data and Information Sciences was developed jointly by NASA, Universities Space Research Association (USRA), and the University of Maryland in 1988 to focus on the design of advanced computing techniques and data systems to support NASA Earth and space science research programs. CESDIS is operated by USRA under contract to NASA. The Director, Associate Director, Staff Scientists, and administrative staff are located on-site at NASA's Goddard Space Flight Center in Greenbelt, Maryland. The primary CESDIS mission is to increase the connection between computer science and engineering research programs at colleges and universities and NASA groups working with computer applications in Earth and space science. Research areas of primary interest at CESDIS include: 1) High performance computing, especially software design and performance evaluation for massively parallel machines; 2) Parallel input/output and data storage systems for high performance parallel computers; 3) Data base and intelligent data management systems for parallel computers; 4) Image processing; 5) Digital libraries; and 6) Data compression. CESDIS funds multiyear projects at U. S. universities and colleges. Proposals are accepted in response to calls for proposals and are selected on the basis of peer reviews. Funds are provided to support faculty and graduate students working at their home institutions. Project personnel visit Goddard during academic recess periods to attend workshops, present seminars, and collaborate with NASA scientists on research projects. Additionally, CESDIS takes on specific research tasks of shorter duration for computer science research requested by NASA Goddard scientists.
ERIC Educational Resources Information Center
Stanford Univ., CA. School Mathematics Study Group.
The first chapter of the seventh unit in this SMSG series discusses perpendiculars and parallels; topics covered include the relationship between parallelism and perpendicularity, rectangles, transversals, parallelograms, general triangles, and measurement of the circumference of the earth. The second chapter, on similarity, discusses scale…
Inflated speedups in parallel simulations via malloc()
NASA Technical Reports Server (NTRS)
Nicol, David M.
1990-01-01
Discrete-event simulation programs make heavy use of dynamic memory allocation in order to support simulation's very dynamic space requirements. When programming in C one is likely to use the malloc() routine. However, a parallel simulation which uses the standard Unix System V malloc() implementation may achieve an overly optimistic speedup, possibly superlinear. An alternate implementation provided on some (but not all) systems can avoid the speedup anomaly, but at the price of significantly reduced available free space. This is especially severe on most parallel architectures, which tend not to support virtual memory. It is shown how a simply implemented, user-constructed interface to malloc() can both avoid artificially inflated speedups and make efficient use of the dynamic memory space. The interface simply caches blocks on the basis of their size. The problem is demonstrated empirically, and the effectiveness of the solution is shown both empirically and analytically.
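The paper's C-level malloc() wrapper is not reproduced here; the sketch below is a hedged, language-neutral illustration of the same idea in Python: freed blocks are cached in per-size free lists so repeated allocations of the same size are recycled locally instead of going back to the global allocator.

```python
from collections import defaultdict

class SizeBinnedPool:
    """Cache freed blocks by size so a simulation's frequent same-size
    allocations are recycled locally instead of hitting the system allocator."""
    def __init__(self):
        self.free_lists = defaultdict(list)    # size -> list of reusable blocks
        self.hits = self.misses = 0

    def alloc(self, size):
        bucket = self.free_lists[size]
        if bucket:
            self.hits += 1
            return bucket.pop()                 # reuse a cached block
        self.misses += 1
        return bytearray(size)                  # fall back to the system allocator

    def free(self, size, block):
        self.free_lists[size].append(block)     # keep it for the next same-size alloc

pool = SizeBinnedPool()
events = [pool.alloc(64) for _ in range(3)]     # e.g. three pending simulation events
for ev in events:
    pool.free(64, ev)
pool.alloc(64)
print("hits:", pool.hits, "misses:", pool.misses)   # hits: 1  misses: 3
```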
Maximum entropy models of ecosystem functioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertram, Jason, E-mail: jason.bertram@anu.edu.au
2014-12-05
Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes' broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
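A minimal numerical MaxEnt example, unrelated to the savanna model itself, can make the algorithm concrete: maximize Shannon entropy over a discrete set of states subject to a fixed mean, solved here with SciPy. The states and target mean are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

states = np.arange(1, 7)          # six discrete states (like die faces)
target_mean = 4.5                 # observed mean that the distribution must reproduce

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))  # minimize -H(p)

constraints = (
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
    {"type": "eq", "fun": lambda p: np.dot(p, states) - target_mean},
)
p0 = np.full(len(states), 1.0 / len(states))
res = minimize(neg_entropy, p0, bounds=[(0, 1)] * len(states),
               constraints=constraints, method="SLSQP")
print("MaxEnt distribution:", np.round(res.x, 4))
# The result takes the exponential (Gibbs) form p_i ∝ exp(lambda * x_i),
# the same structure MaxEnt assigns in statistical mechanics.
```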
Fast parallel algorithm for slicing STL based on pipeline
NASA Astrophysics Data System (ADS)
Ma, Xulong; Lin, Feng; Yao, Bo
2016-05-01
In the additive manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages; however, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that thread count and layer count are two significant factors in the speedup ratio. The trend of speedup versus thread count shows a positive relationship that agrees closely with Amdahl's law, and the trend of speedup versus layer count also shows a positive relationship, in agreement with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another algorithm based on data parallelism is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study shows the strong performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process; compared with the data-parallel slicing algorithm, the pipeline parallel model achieves a much higher speedup ratio and efficiency.
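The appeal to Amdahl's and Gustafson's laws can be made concrete with a short calculation; the serial fraction used below is an assumption for illustration, not a measured value from the paper.

```python
def amdahl(serial_fraction, n_threads):
    """Fixed-size speedup: S = 1 / (s + (1 - s) / N)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_threads)

def gustafson(serial_fraction, n_threads):
    """Scaled-size speedup: S = s + (1 - s) * N."""
    return serial_fraction + (1.0 - serial_fraction) * n_threads

s = 0.05   # assumed serial fraction of the slicing pipeline
for n in (2, 4, 8, 16):
    print(f"{n:2d} threads: Amdahl {amdahl(s, n):5.2f}x, Gustafson {gustafson(s, n):5.2f}x")
```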
NASA Astrophysics Data System (ADS)
Lin, Mingpei; Xu, Ming; Fu, Xiaoyu
2017-05-01
Currently, a tremendous amount of space debris in Earth's orbit imperils operational spacecraft. It is essential to undertake risk assessments of collisions and predict dangerous encounters in space. However, collision predictions for an enormous amount of space debris give rise to large-scale computations. In this paper, a parallel algorithm is established on the Compute Unified Device Architecture (CUDA) platform of NVIDIA Corporation for collision prediction. According to the parallel structure of NVIDIA graphics processors, a block decomposition strategy is adopted in the algorithm. Space debris is divided into batches, and the computation and data transfer operations of adjacent batches overlap. As a consequence, the latency to access shared memory during the entire computing process is significantly reduced, and a higher computing speed is reached. Theoretically, a simulation of collision prediction for space debris of any amount and for any time span can be executed. To verify this algorithm, a simulation example including 1382 pieces of debris, whose operational time scales vary from 1 min to 3 days, is conducted on Tesla C2075 of NVIDIA. The simulation results demonstrate that with the same computational accuracy as that of a CPU, the computing speed of the parallel algorithm on a GPU is 30 times that on a CPU. Based on this algorithm, collision prediction of over 150 Chinese spacecraft for a time span of 3 days can be completed in less than 3 h on a single computer, which meets the timeliness requirement of the initial screening task. Furthermore, the algorithm can be adapted for multiple tasks, including particle filtration, constellation design, and Monte-Carlo simulation of an orbital computation.
Simulation Exploration through Immersive Parallel Planes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M
We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates' mapping of the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest: a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
Simulation Exploration through Immersive Parallel Planes: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny
We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates' mapping of the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest: a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
Structure and decomposition of the silver formate Ag(HCO₂)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puzan, Anna N., E-mail: anna_puzan@mail.ru; Baumer, Vyacheslav N.; Mateychenko, Pavel V.
The crystal structure of the silver formate Ag(HCO₂) has been determined (orthorhombic, sp. gr. Pccn, a=7.1199(5), b=10.3737(4), c=6.4701(3) Å, V=477.88(4) Å³, Z=8). The structure contains isolated formate ions and Ag₂²⁺ pairs which form layers in the (001) planes (the shortest Ag–Ag distance is 2.919 Å within the pair, and 3.421 and 3.716 Å between the nearest Ag atoms of adjacent pairs). Silver formate is an unstable compound which decomposes spontaneously over time. Decomposition was studied using Rietveld analysis of the powder diffraction patterns. It was concluded that the diffusion of Ag atoms leads to the formation of plate-like metal particles as nuclei in the (100) planes, which settle parallel to the (001) planes of the silver formate matrix. - Highlights: • Silver formate Ag(HCO₂) was synthesized and characterized. • Layered packing of Ag-Ag pairs in the structure was found. • Decomposition of Ag(HCO₂) and formation of the metal phase were studied. • Rietveld-refined micro-structural characteristics during decomposition reveal the space relationship between the matrix structure and the forming Ag phase.
NASA Technical Reports Server (NTRS)
Manners, B.; Gholdston, E. W.; Karimi, K.; Lee, F. C.; Rajagopalan, J.; Panov, Y.
1996-01-01
As space direct current (dc) power systems continue to grow in size, switching power converters are playing an ever larger role in power conditioning and control. When designing a large dc system using power converters of this type, special attention must be placed on the electrical stability of the system and of the individual loads on the system. In the design of the electric power system (EPS) of the International Space Station (ISS), the National Aeronautics and Space Administration (NASA) and its contractor team led by Boeing Defense & Space Group have placed a great deal of emphasis on designing for system and load stability. To achieve this goal, the team has expended considerable effort deriving a clear concept for defining system stability in both a general sense and specifically with respect to the space station. The ISS power system presents numerous challenges with respect to system stability, such as high power, complex sources and undefined loads. To complicate these issues, source and load components have been designed in parallel by three major subcontractors (Boeing, Rocketdyne, and McDonnell Douglas) with interfaces to both sources and loads being designed in different countries (Russia, Japan, Canada, Europe, etc.). These issues, coupled with the program goal of limiting costs, have proven a significant challenge to the program. As a result, the program has derived an impedance specification approach for system stability. This approach is based on the significant relationship between source and load impedances and the effect of this relationship on system stability. This approach is limited in its applicability by the theoretical and practical limits on component designs as presented by each system segment. As a result, the overall approach to system stability implemented by the ISS program consists of specific hardware requirements coupled with extensive system analysis and hardware testing. Following this approach, the ISS program plans to begin construction of the world's largest orbiting power system in 1997.
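The impedance-comparison idea behind such a specification can be illustrated with a small, hedged numerical sketch (a Middlebrook-style check that the source output impedance stays well below the load input impedance across frequency). The filter and load values below are placeholders, not ISS hardware parameters.

```python
import numpy as np

f = np.logspace(1, 5, 400)                 # 10 Hz .. 100 kHz
w = 2j * np.pi * f

# Source modeled as a damped LC output filter (placeholder values).
L, C, R_damp = 100e-6, 470e-6, 0.05
Z_source = (w * L + R_damp) / (1 + (w * L + R_damp) * w * C)

# Constant-power load linearized as a negative incremental resistance.
P_load, V_bus = 500.0, 120.0
Z_load = -V_bus**2 / P_load                # approx. -28.8 ohm

margin_db = 20 * np.log10(np.abs(Z_load) / np.abs(Z_source))
print(f"worst-case impedance margin: {margin_db.min():.1f} dB "
      f"at {f[margin_db.argmin()]:.0f} Hz")   # a margin of several dB is a common design target
```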
NASA Technical Reports Server (NTRS)
Toomarian, N.; Fijany, A.; Barhen, J.
1993-01-01
Evolutionary partial differential equations are usually solved by discretization in time and space, and by applying a marching-in-time procedure to data and algorithms potentially parallelized in the spatial domain.
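A minimal example of this standard approach, for the 1-D heat equation with an explicit time march; each spatial update depends only on neighboring values at the previous step and is therefore parallelizable across the grid. The grid and step sizes are illustrative.

```python
import numpy as np

nx, dx, dt, alpha = 101, 0.01, 2.5e-5, 1.0   # grid, step sizes, diffusivity
u = np.exp(-((np.linspace(0, 1, nx) - 0.5) ** 2) / 0.01)   # initial Gaussian profile

for _ in range(400):                         # march in time
    # Each interior point depends only on its neighbors at the previous step,
    # so this update is data-parallel over the spatial domain.
    u[1:-1] = u[1:-1] + alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0                       # Dirichlet boundaries

print("peak value after marching:", u.max())
```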
2012-05-22
Dimension reduction uses the Rate-Controlled Constrained-Equilibrium (RCCE) method, and tabulation of the reduced space is performed using the In Situ Adaptive Tabulation (ISAT) algorithm. In addition, x2f_mpi, a Fortran library for parallel vector-valued function evaluation (used with ISAT in this context), is used to efficiently redistribute the chemistry workload among the processors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, L.; Li, Y.
2015-02-03
This paper analyzes the longitudinal space charge impedances of a round uniform beam inside rectangular and parallel-plate chambers using the image charge method. This analysis is valid for arbitrary wavelengths, and the calculations converge rapidly. The research shows that only a few image beams are needed to obtain a relative error of less than 0.1%. The beam offset effect is also discussed in the analysis.
Computational methods and software systems for dynamics and control of large space structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.
1990-01-01
This final report on computational methods and software systems for dynamics and control of large space structures covers progress to date, projected developments in the final months of the grant, and conclusions. Pertinent reports and papers that have not appeared in scientific journals (or have not yet appeared in final form) are enclosed. The grant has supported research in two key areas of crucial importance to the computer-based simulation of large space structure. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area, as reported here, involves massively parallel computers.
Achilles tendon shape and echogenicity on ultrasound among active badminton players.
Malliaras, P; Voss, C; Garau, G; Richards, P; Maffulli, N
2012-04-01
The relationship between Achilles tendon ultrasound abnormalities, including a spindle shape and heterogeneous echogenicity, is unclear. This study investigated the relationship between these abnormalities, tendon thickness, Doppler flow and pain. Sixty-one badminton players (122 tendons, 36 men, and 25 women) were recruited. Achilles tendon thickness, shape (spindle, parallel), echogenicity (heterogeneous, homogeneous) and Doppler flow (present or absent) were measured bilaterally with ultrasound. Achilles tendon pain (during or after activity over the last week) and pain and function [Victorian Institute of Sport Achilles Assessment (VISA-A)] were measured. Sixty-eight (56%) tendons were parallel with homogeneous echogenicity (normal), 22 (18%) were spindle shaped with homogeneous echogenicity, 16 (13%) were parallel with heterogeneous echogenicity and 16 (13%) were spindle shaped with heterogeneous echogenicity. Spindle shape was associated with self-reported pain (P<0.05). Heterogeneous echogenicity was associated with lower VISA-A scores than normal tendon (P<0.05). There was an ordinal relationship from normal tendons, to parallel tendons with heterogeneous echogenicity, to spindle-shaped tendons with heterogeneous echogenicity, with regard to increasing thickness and likelihood of Doppler flow. Heterogeneous echogenicity with a parallel shape may be a physiological phase and may develop into heterogeneous echogenicity with a spindle shape that is more likely to be pathological. © 2010 John Wiley & Sons A/S.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lou, Jialin; Xia, Yidong; Luo, Lixiang
2016-09-01
In this study, we use a combination of modeling techniques to describe the relationship between the fracture radius that might be accomplished in a hypothetical enhanced geothermal system (EGS) and the drilling distance required to create and access those fractures. We use a combination of commonly applied analytical solutions for heat transport in parallel fractures and 3D finite-element method models of more realistic heat extraction geometries. For a conceptual model involving multiple parallel fractures developed perpendicular to an inclined or horizontal borehole, calculations demonstrate that EGS will likely require very large fractures, of greater than 300 m radius, to keep interfracture drilling distances to ~10 km or less. As drilling distances are generally inversely proportional to the square of fracture radius, drilling costs quickly escalate as the fracture radius decreases. It is important to know, however, whether fracture spacing will be dictated by thermal or mechanical considerations, as the relationship between drilling distance and number of fractures is quite different in each case. Information about the likelihood of hydraulically creating very large fractures comes primarily from petroleum recovery industry data describing hydraulic fractures in shale. Those data suggest that fractures with radii on the order of several hundred meters may, indeed, be possible. The results of this study demonstrate that relatively simple calculations can be used to estimate primary design constraints on a system, particularly regarding the relationship between generated fracture radius and the total length of drilling needed in the fracture creation zone. Comparison of the numerical simulations of more realistic geometries than addressed in the analytical solutions suggests that simple proportionalities can readily be derived to relate a particular flow field.
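The stated inverse-square relationship between drilling length and fracture radius can be shown with a short scaling calculation; the reference radius below is arbitrary and the numbers are relative, not the study's values.

```python
# Rough scaling only: drilled length in the fracture-creation zone scales as
# N * spacing with the number of fractures N proportional to 1 / r^2,
# so doubling the fracture radius cuts the required drilling by roughly 4x.
def relative_drilling(r, r_ref=300.0):
    return (r_ref / r) ** 2

for r in (150.0, 300.0, 600.0):
    print(f"r = {r:4.0f} m: {relative_drilling(r):.2f}x the drilling of the 300 m case")
```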
Wake Encounter Analysis for a Closely Spaced Parallel Runway Paired Approach Simulation
NASA Technical Reports Server (NTRS)
Mckissick,Burnell T.; Rico-Cusi, Fernando J.; Murdoch, Jennifer; Oseguera-Lohr, Rosa M.; Stough, Harry P, III; O'Connor, Cornelius J.; Syed, Hazari I.
2009-01-01
A Monte Carlo simulation of simultaneous approaches performed by two transport category aircraft from the final approach fix to a pair of closely spaced parallel runways was conducted to explore the aft boundary of the safe zone in which separation assurance and wake avoidance are provided. The simulation included variations in runway centerline separation, initial longitudinal spacing of the aircraft, crosswind speed, and aircraft speed during the approach. The data from the simulation showed that the majority of the wake encounters occurred near or over the runway and the aft boundaries of the safe zones were identified for all simulation conditions.
Memory access in shared virtual memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berrendorf, R.
1992-01-01
Shared virtual memory (SVM) is a virtual memory layer with a single address space on top of a distributed real memory on parallel computers. We examine the behavior and performance of SVM running a parallel program with medium-grained, loop-level parallelism on top of it. A simulator for the underlying parallel architecture can be used to examine the behavior of SVM more deeply. The influence of several parameters, such as the number of processors, page size, cold or warm start, and restricted page replication, is studied.
1999-01-01
[Table-of-contents excerpt: view-factor configurations] … Cylinder and Another Interior Shell of Different Length (Reid and Tennant 1973); C.1.19 View Factors between Two Infinitely-Long Parallel and Opposed …; C.1.20 … by Another Parallel Cylinder of Different Radius; C.1.21 View Factor between Two Parallel and Opposed Cylinders of Unequal Radii and Equal Length (Juul 1982); C.1.22 View Factor between Two Parallel Cylindrical Sections at Different Levels and of Different Length; C.2 Calculation of …
Wake turbulence limits on paired approaches to parallel runways
DOT National Transportation Integrated Search
2002-07-01
Wake turbulence considerations currently restrict the use of parallel runways less than 2500 ft (762 m) apart. : However, wake turbulence is not a factor if there are appropriate limits on allowed longitudinal pair spacings : and/or allowed crosswind...
[CMACPAR: a modified parallel neuro-controller for control processes].
Ramos, E; Surós, R
1999-01-01
CMACPAR is a parallel neurocontroller oriented to real-time systems such as control processes. Its main characteristics are a fast learning algorithm, a reduced number of calculations, great generalization capacity, local learning, and intrinsic parallelism. This type of neurocontroller is used in real-time applications required by refineries, hydroelectric plants, factories, etc. In this work we present the analysis and the parallel implementation of a modified scheme of the cerebellar model CMAC for n-dimensional space projection using a medium-granularity parallel neurocontroller. The proposed memory management allows for a significant reduction in training time and required memory size.
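A minimal, single-input CMAC (tile-coding) sketch illustrates the properties the abstract lists, in particular local learning: each input activates only one tile per tiling, so an update touches a handful of weights. The tiling sizes and learning rate are illustrative, and this is not the CMACPAR scheme itself.

```python
import numpy as np

class CMAC1D:
    """Minimal CMAC: several offset tilings over [0, 1); each input activates
    one tile per tiling, and training updates only those few weights."""
    def __init__(self, n_tilings=8, n_tiles=16, lr=0.2):
        self.n_tilings, self.n_tiles, self.lr = n_tilings, n_tiles, lr
        self.w = np.zeros((n_tilings, n_tiles + 1))
        self.offsets = np.arange(n_tilings) / (n_tilings * n_tiles)

    def _active(self, x):
        return np.minimum(((x + self.offsets) * self.n_tiles).astype(int), self.n_tiles)

    def predict(self, x):
        return self.w[np.arange(self.n_tilings), self._active(x)].sum()

    def train(self, x, target):
        idx = self._active(x)
        error = target - self.predict(x)
        self.w[np.arange(self.n_tilings), idx] += self.lr * error / self.n_tilings

# Learn a toy plant response y = sin(2*pi*x) from random samples.
rng = np.random.default_rng(0)
net = CMAC1D()
for _ in range(5000):
    x = rng.random()
    net.train(x, np.sin(2 * np.pi * x))
print("prediction at 0.25:", round(net.predict(0.25), 3))   # close to 1.0
```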
Parallel computations and control of adaptive structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Alvin, Kenneth F.; Belvin, W. Keith; Chong, K. P. (Editor); Liu, S. C. (Editor); Li, J. C. (Editor)
1991-01-01
The equations of motion for structures with adaptive elements for vibration control are presented for parallel computations to be used as a software package for real-time control of flexible space structures. A brief introduction of the state-of-the-art parallel computational capability is also presented. Time marching strategies are developed for an effective use of massive parallel mapping, partitioning, and the necessary arithmetic operations. An example is offered for the simulation of control-structure interaction on a parallel computer and the impact of the approach presented for applications in other disciplines than aerospace industry is assessed.
Enabling CSPA Operations Through Pilot Involvement in Longitudinal Approach Spacing
NASA Technical Reports Server (NTRS)
Battiste, Vernol (Technical Monitor); Pritchett, Amy
2003-01-01
Several major airports around the United States have, or plan to have, closely-spaced parallel runways. This project complemented current and previous research by examining the pilots' ability to control their position longitudinally within their approach stream. This project's results considered spacing for separation from potential positions of wake vortices from the parallel approach. This preventive function could enable CSPA operations to very closely spaced runways. This work also considered how pilot involvement in longitudinal spacing could allow for more efficient traffic flow, by allowing pilots to keep their aircraft within tighter arrival slots than air traffic control (ATC) might be able to establish, and by maintaining space within the arrival stream for corresponding departure slots. To this end, this project conducted several research studies providing an analytic and computational basis for calculating appropriate aircraft spacings, experimental results from a piloted flight simulator test, and an experimental testbed for future simulator tests. The following sections summarize the results of these three efforts.
Module Six: Parallel Circuits; Basic Electricity and Electronics Individualized Learning System.
ERIC Educational Resources Information Center
Bureau of Naval Personnel, Washington, DC.
In this module the student will learn the rules that govern the characteristics of parallel circuits; the relationships between voltage, current, resistance and power; and the results of common troubles in parallel circuits. The module is divided into four lessons: rules of voltage and current, rules for resistance and power, variational analysis,…
A portable MPI-based parallel vector template library
NASA Technical Reports Server (NTRS)
Sheffler, Thomas J.
1995-01-01
This paper discusses the design and implementation of a polymorphic collection library for distributed address-space parallel computers. The library provides a data-parallel programming model for C++ by providing three main components: a single generic collection class, generic algorithms over collections, and generic algebraic combining functions. Collection elements are the fourth component of a program written using the library and may be either of the built-in types of C or of user-defined types. Many ideas are borrowed from the Standard Template Library (STL) of C++, although a restricted programming model is proposed because of the distributed address-space memory model assumed. Whereas the STL provides standard collections and implementations of algorithms for uniprocessors, this paper advocates standardizing interfaces that may be customized for different parallel computers. Just as the STL attempts to increase programmer productivity through code reuse, a similar standard for parallel computers could provide programmers with a standard set of algorithms portable across many different architectures. The efficacy of this approach is verified by examining performance data collected from an initial implementation of the library running on an IBM SP-2 and an Intel Paragon.
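The library described is a C++ template library; as a hedged, Python-flavored illustration of the same data-parallel collection idea, the sketch below uses mpi4py to distribute a collection's elements across ranks, apply a local algorithm, and combine results with a generic algebraic reduction.

```python
# Run with e.g.: mpiexec -n 4 python collection_demo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# A "distributed collection": each rank owns a contiguous block of the elements.
n_total = 1_000_000
counts = [n_total // size + (1 if r < n_total % size else 0) for r in range(size)]
start = sum(counts[:rank])
local = np.arange(start, start + counts[rank], dtype=np.float64)

# Generic algorithm over the collection (square every element) ...
local_sq = local ** 2
# ... followed by a generic algebraic combining function (a sum reduction).
total = comm.allreduce(local_sq.sum(), op=MPI.SUM)

if rank == 0:
    print("sum of squares:", total)
```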
ERIC Educational Resources Information Center
Allen, Frank B.; And Others
This is part two of a two-part SMSG mathematics text for high school students. Chapter topics include: (1) perpendicular lines and planes in space; (2) parallel lines in a plane; (3) parallel lines in space; (4) areas of polygonal regions: (5) similarity; (6) circles and spheres; (7) constructions; (8) the area of a circle and related topics; and…
FUEL ASSEMBLY FOR A NEUTRONIC REACTOR
Wigner, E.P.
1958-04-29
A fuel assembly for a nuclear reactor of the type wherein liquid coolant is circulated through the core of the reactor in contact with the external surface of the fuel elements is described. In this design a plurality of parallel plates containing fissionable material are spaced about one-tenth of an inch apart and are supported between a pair of spaced parallel side members generally perpendicular to the plates. The plates all have a small continuous and equal curvature in the same direction between the side members.
Geometry of matrix product states: Metric, parallel transport, and curvature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haegeman, Jutho, E-mail: jutho.haegeman@gmail.com; Verstraete, Frank; Faculty of Physics and Astronomy, University of Ghent, Krijgslaan 281 S9, 9000 Gent
2014-02-15
We study the geometric properties of the manifold of states described as (uniform) matrix product states. Due to the parameter redundancy in the matrix product state representation, matrix product states have the mathematical structure of a (principal) fiber bundle. The total space or bundle space corresponds to the parameter space, i.e., the space of tensors associated to every physical site. The base manifold is embedded in Hilbert space and can be given the structure of a Kähler manifold by inducing the Hilbert space metric. Our main interest is in the states living in the tangent space to the base manifold, which have recently been shown to be interesting in relation to time dependence and elementary excitations. By lifting these tangent vectors to the tangent space of the bundle space using a well-chosen prescription (a principal bundle connection), we can define and efficiently compute an inverse metric, and introduce differential geometric concepts such as parallel transport (related to the Levi-Civita connection) and the Riemann curvature tensor.
A k-space method for acoustic propagation using coupled first-order equations in three dimensions.
Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C
2009-09-01
A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
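The spectral evaluation of spatial derivatives at the heart of the k-space method can be illustrated with a minimal 1-D sketch: differentiate by multiplying the FFT of the field by ik; the 3-D method applies the same operation along each axis. The grid and test field below are illustrative.

```python
import numpy as np

n, L = 256, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
p = np.sin(3 * x)                                   # sample pressure field

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)          # angular wavenumbers for this grid
dp_dx = np.real(np.fft.ifft(1j * k * np.fft.fft(p)))

print("max error vs analytic derivative:",
      np.abs(dp_dx - 3 * np.cos(3 * x)).max())      # ~1e-12 for this smooth field
```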
Gust Acoustics Computation with a Space-Time CE/SE Parallel 3D Solver
NASA Technical Reports Server (NTRS)
Wang, X. Y.; Himansu, A.; Chang, S. C.; Jorgenson, P. C. E.; Reddy, D. R. (Technical Monitor)
2002-01-01
The benchmark Problem 2 in Category 3 of the Third Computational Aero-Acoustics (CAA) Workshop is solved using the space-time conservation element and solution element (CE/SE) method. This problem concerns the unsteady response of an isolated finite-span swept flat-plate airfoil bounded by two parallel walls to an incident gust. The acoustic field generated by the interaction of the gust with the flat-plate airfoil is computed by solving the 3D (three-dimensional) Euler equations in the time domain using a parallel version of a 3D CE/SE solver. The effect of the gust orientation on the far-field directivity is studied. Numerical solutions are presented and compared with analytical solutions, showing a reasonable agreement.
GPU-completeness: theory and implications
NASA Astrophysics Data System (ADS)
Lin, I.-Jong
2011-01-01
This paper formalizes a major insight into a class of algorithms that relate parallelism and performance. The purpose of this paper is to define a class of algorithms that trades off parallelism for quality of result (e.g. visual quality, compression rate), and we propose a similar method for algorithmic classification based on NP-Completeness techniques, applied toward parallel acceleration. We define this class of algorithm as "GPU-Complete" and postulate the necessary properties of the algorithms for admission into this class. We also formally relate this algorithmic space to the space of imaging algorithms. This concept is based upon our experience in the print production area, where GPUs (Graphics Processing Units) have shown a substantial cost/performance advantage within the context of HP-delivered enterprise services and commercial printing infrastructure. While CPUs and GPUs are converging in their underlying hardware and functional blocks, their system behaviors are clearly distinct in many ways: memory system design, programming paradigms, and massively parallel SIMD architecture. There are applications that are clearly suited to each architecture: for the CPU, language compilation, word processing, operating systems, and other applications that are highly sequential in nature; for the GPU, video rendering, particle simulation, pixel color conversion, and other problems clearly amenable to massive parallelization. As GPUs establish themselves as a second computing architecture distinct from CPUs, their end-to-end system cost/performance advantage in certain parts of computation informs the structure of algorithms and their efficient parallel implementations. While GPUs are merely one type of architecture for parallelization, we show that their introduction into the design space of printing systems demonstrates the trade-offs against competing multi-core, FPGA, and ASIC architectures. While each architecture has its own optimal application, we believe that the selection of architecture can be defined in terms of properties of GPU-Completeness. For a well-defined subset of algorithms, GPU-Completeness is intended to connect parallelism, algorithms, and efficient architectures into a unified framework, to show that multiple layers of parallel implementation are guided by the same underlying trade-off.
Linked exploratory visualizations for uncertain MR spectroscopy data
NASA Astrophysics Data System (ADS)
Feng, David; Kwock, Lester; Lee, Yueh; Taylor, Russell M., II
2010-01-01
We present a system for visualizing magnetic resonance spectroscopy (MRS) data sets. Using MRS, radiologists generate multiple 3D scalar fields of metabolite concentrations within the brain and compare them to anatomical magnetic resonance imaging. By understanding the relationship between metabolic makeup and anatomical structure, radiologists hope to better diagnose and treat tumors and lesions. Our system consists of three linked visualizations: a spatial glyph-based technique we call Scaled Data-Driven Spheres, a parallel coordinates visualization augmented to incorporate uncertainty in the data, and a slice plane for accurate data value extraction. The parallel coordinates visualization uses specialized brush interactions designed to help users identify nontrivial linear relationships between scalar fields. We describe two novel contributions to parallel coordinates visualizations: linear function brushing and new axis construction. Users have discovered significant relationships among metabolites and anatomy by linking interactions between the three visualizations.
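A parallel-coordinates view of multiple co-registered scalar fields, as described above, can be sketched with standard tools. The snippet below uses pandas' built-in parallel-coordinates plot on synthetic metabolite columns; the metabolite names, values, and the "region" labeling rule are invented for illustration and do not reflect the study's data or its specialized brushing interactions.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Synthetic voxel table: each row is a voxel, columns are metabolite levels
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "NAA":      rng.normal(1.0, 0.20, n),
    "Choline":  rng.normal(0.8, 0.20, n),
    "Creatine": rng.normal(0.9, 0.15, n),
    "Lactate":  rng.normal(0.3, 0.10, n),
})
# Illustrative class column, e.g. voxels flagged by a simple linear rule
df["region"] = np.where(df["Choline"] - df["NAA"] > 0, "suspect", "normal")

parallel_coordinates(df, "region", alpha=0.3)
plt.ylabel("normalized concentration")
plt.show()
```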
Carbon Nanotube-based Sensor and Method for Continually Sensing Changes in a Structure
NASA Technical Reports Server (NTRS)
Jordan, Jeffry D. (Inventor); Watkins, Anthony Neal (Inventor); Oglesby, Donald M. (Inventor); Ingram, JoAnne L. (Inventor)
2007-01-01
A sensor has a plurality of carbon nanotube (CNT)-based conductors operatively positioned on a substrate. The conductors are arranged side-by-side, such as in a substantially parallel relationship to one another. At least one pair of spaced-apart electrodes is coupled to opposing ends of the conductors. A portion of each of the conductors spanning between each pair of electrodes comprises a plurality of carbon nanotubes arranged end-to-end and substantially aligned along an axis. Because a direct correlation exists between resistance of a carbon nanotube and carbon nanotube strain, changes experienced by the portion of the structure to which the sensor is coupled induce a change in electrical properties of the conductors.
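The patent abstract states only that resistance correlates directly with carbon nanotube strain. A minimal sketch of that idea is the generic strain-gauge relation ΔR/R₀ = GF·ε; the gauge factor value below is purely illustrative (a real CNT conductor would be calibrated experimentally), and the function name is not from the patent.

```python
def strain_from_resistance(r, r0, gauge_factor=2.9):
    """Estimate strain from a measured resistance change using the generic
    strain-gauge relation dR/R0 = GF * strain (gauge factor is illustrative)."""
    return (r - r0) / (r0 * gauge_factor)

# Example: a 1.5% resistance rise with GF = 2.9 implies roughly 0.52% strain
print(strain_from_resistance(r=101.5, r0=100.0))
```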
A Study of the Crystal Structure of Co40Fe40B20 Epitaxial Films on a Bi2Te3 Topological Insulator
NASA Astrophysics Data System (ADS)
Kaveev, A. K.; Suturin, S. M.; Sokolov, N. S.; Kokh, K. A.; Tereshchenko, O. E.
2018-03-01
Laser molecular-beam epitaxy has been used to form Co40Fe40B20 layers on Bi2Te3 topological insulator substrates, and their growth conditions have been studied. The possibility of growing epitaxial ferromagnetic layers on the surface of a topological insulator is demonstrated for the first time. The CoFeB layers have a body-centered cubic crystal structure with the (111) crystal plane parallel to the (0001) plane of Bi2Te3. 3D mapping in the reciprocal space of high-energy electron-diffraction patterns made it possible to determine the epitaxial relationships between the film and the substrate.
Phase space simulation of collisionless stellar systems on the massively parallel processor
NASA Technical Reports Server (NTRS)
White, Richard L.
1987-01-01
A numerical technique for solving the collisionless Boltzmann equation describing the time evolution of a self-gravitating fluid in phase space was implemented on the Massively Parallel Processor (MPP). The code performs calculations for a two-dimensional phase space grid (with one space and one velocity dimension). Some results from calculations are presented. The execution speed of the code is comparable to the speed of a single processor of a Cray-XMP. Advantages and disadvantages of the MPP architecture for this type of problem are discussed. The nearest neighbor connectivity of the MPP array does not pose a significant obstacle. Future MPP-like machines should have much more local memory and easier access to staging memory and disks in order to be effective for this type of problem.
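A common way to evolve a distribution function on such a one-space, one-velocity phase-space grid is split-step (semi-Lagrangian) advection. The sketch below shows only that generic splitting idea in NumPy; it is not the MPP code, the self-consistent gravitational solve is omitted, and the fixed restoring acceleration and grid sizes are assumptions for illustration.

```python
import numpy as np

def advect_x(f, v, dx, dt):
    """Free-streaming half step: f(x, v) <- f(x - v*dt, v), periodic in x."""
    nx = f.shape[0]
    x = np.arange(nx) * dx
    length = nx * dx
    out = np.empty_like(f)
    for j, vj in enumerate(v):
        out[:, j] = np.interp((x - vj * dt) % length, x, f[:, j], period=length)
    return out

def advect_v(f, a, v, dt):
    """Acceleration half step: f(x, v) <- f(x, v - a(x)*dt), zero outside the v-grid."""
    out = np.empty_like(f)
    for i, ai in enumerate(a):
        out[i, :] = np.interp(v - ai * dt, v, f[i, :], left=0.0, right=0.0)
    return out

# Toy 1D+1V phase-space grid with a fixed (non-self-consistent) acceleration
nx, nv = 128, 129
dx, dt = 1.0 / nx, 0.01
x = np.arange(nx) * dx
v = np.linspace(-3.0, 3.0, nv)
f = np.exp(-0.5 * ((x[:, None] - 0.5) / 0.05) ** 2 - 0.5 * v[None, :] ** 2)
for _ in range(100):
    f = advect_x(f, v, dx, dt)
    f = advect_v(f, -10.0 * (x - 0.5), v, dt)
```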
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, M.; Grimshaw, A.
1996-12-31
The Legion project at the University of Virginia is an architecture for designing and building system services that provide the illusion of a single virtual machine to users, a virtual machine that provides secure shared object and shared name spaces, application-adjustable fault-tolerance, improved response time, and greater throughput. Legion targets wide area assemblies of workstations, supercomputers, and parallel supercomputers. Legion tackles problems not solved by existing workstation-based parallel processing tools; the system will enable fault-tolerance, wide area parallel processing, inter-operability, heterogeneity, a single global name space, protection, security, efficient scheduling, and comprehensive resource management. This paper describes the core Legion object model, which specifies the composition and functionality of Legion's core objects - those objects that cooperate to create, locate, manage, and remove objects in the Legion system. The object model facilitates a flexible, extensible implementation, provides a single global name space, grants site autonomy to participating organizations, and scales to millions of sites and trillions of objects.
Hides, Julie; Lambrecht, Gunda; Ramdharry, Gita; Cusack, Rebecca; Bloomberg, Jacob; Stokes, Maria
2017-01-01
Exposure to the microgravity environment induces physiological changes in the cardiovascular, musculoskeletal and sensorimotor systems in healthy astronauts. As space agencies prepare for extended duration missions, it is difficult to predict the extent of the effects that prolonged exposure to microgravity will have on astronauts. Prolonged bed rest is a model used by space agencies to simulate the effects of spaceflight on the human body, and bed rest studies have provided some insights into the effects of immobilisation and inactivity. Whilst microgravity exposure is confined to a relatively small population, on return to Earth, the physiological changes seen in astronauts parallel many changes routinely seen by physiotherapists on Earth in people with low back pain (LBP), muscle wasting diseases, exposure to prolonged bed rest, elite athletes and critically ill patients in intensive care. The medical operations team at the European Space Agency are currently involved in preparing astronauts for spaceflight, advising on exercises whilst astronauts are on the International Space Station, and reconditioning astronauts following their return. There are a number of parallels between this role and contemporary roles performed by physiotherapists working with elite athletes and muscle wasting conditions. This clinical commentary will draw parallels between changes which occur to the neuromuscular system in the absence of gravity and conditions which occur on Earth. Implications for physiotherapy management of astronauts and terrestrial patients will be discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bedez, Mathieu; Belhachmi, Zakaria; Haeberlé, Olivier; Greget, Renaud; Moussaoui, Saliha; Bouteiller, Jean-Marie; Bischoff, Serge
2016-01-15
The resolution of a model describing the electrical activity of neural tissue and its propagation within this tissue is highly demanding in terms of computing time and requires strong computing power to achieve good results. In this study, we present a method to solve a model describing electrical propagation in neuronal tissue, using the parareal algorithm coupled with spatial parallelization using CUDA on a graphics processing unit (GPU). We applied the method of resolution to different dimensions of the geometry of our model (1-D, 2-D and 3-D). The GPU results are compared with simulations from a multi-core processor cluster, using the message-passing interface (MPI), where the spatial scale was parallelized in order to reach a calculation time comparable to that of the presented GPU method. A gain of a factor of 100 in terms of computational time between sequential results and those obtained using the GPU has been obtained in the case of 3-D geometry. Given the structure of the GPU, this factor increases with the fineness of the geometry used in the computation. To the best of our knowledge, it is the first time such a method has been used, even in the case of neuroscience. Parareal parallelization in time coupled with GPU parallelization in space allows for drastically reducing computational time with a fine resolution of the model describing the propagation of the electrical signal in neuronal tissue. Copyright © 2015 Elsevier B.V. All rights reserved.
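The parareal iteration at the heart of this approach combines a cheap coarse propagator with expensive fine propagations that are independent across time slices (and hence parallelizable, e.g. on GPUs or MPI ranks). The sketch below shows the generic parareal update on a scalar ODE, not the neural-tissue model; the propagators and slice counts are illustrative.

```python
import numpy as np

def parareal(u0, t0, t1, n_slices, coarse, fine, n_iter=3):
    """Generic parareal iteration. coarse(u, ta, tb) and fine(u, ta, tb)
    propagate a state from ta to tb; the fine calls within one iteration are
    independent and are what a real implementation runs in parallel."""
    ts = np.linspace(t0, t1, n_slices + 1)
    u = [u0]                                       # initial coarse-only sweep
    for k in range(n_slices):
        u.append(coarse(u[-1], ts[k], ts[k + 1]))
    for _ in range(n_iter):
        fine_old = [fine(u[k], ts[k], ts[k + 1]) for k in range(n_slices)]     # parallelizable
        coarse_old = [coarse(u[k], ts[k], ts[k + 1]) for k in range(n_slices)]
        new_u = [u0]
        for k in range(n_slices):
            g_new = coarse(new_u[-1], ts[k], ts[k + 1])
            new_u.append(g_new + fine_old[k] - coarse_old[k])                  # parareal correction
        u = new_u
    return u

# Example: du/dt = -u; coarse = 1 Euler step per slice, fine = 100 Euler steps
def euler(u, ta, tb, n):
    dt = (tb - ta) / n
    for _ in range(n):
        u = u + dt * (-u)
    return u

u = parareal(1.0, 0.0, 2.0, n_slices=10,
             coarse=lambda u, a, b: euler(u, a, b, 1),
             fine=lambda u, a, b: euler(u, a, b, 100))
print(u[-1], np.exp(-2.0))   # parareal result vs exact solution
```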
Parallel Ray Tracing Using the Message Passing Interface
2007-09-01
Cameron, Senior Member, IEEE. Ray-tracing software is available for lens design and for general optical systems modeling. It tends to be designed to run on a single processor and can be very... Index terms: National Aeronautics and Space Administration (NASA), optical ray tracing, parallel computing, parallel processing, prime numbers, ray tracing.
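Ray tracing parallelizes naturally because each ray (or pixel) can be traced independently. The mpi4py sketch below shows only that decomposition-and-gather pattern; trace_ray is a placeholder, and the image dimensions and static row decomposition are assumptions rather than details of the report above.

```python
import numpy as np
from mpi4py import MPI

def trace_ray(px, py):
    """Placeholder per-pixel work; a real tracer would intersect optical surfaces."""
    return np.sin(0.1 * px) * np.cos(0.1 * py)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

width, height = 640, 480
rows = np.array_split(np.arange(height), size)[rank]    # static row decomposition

local = np.array([[trace_ray(x, y) for x in range(width)] for y in rows])
pieces = comm.gather(local, root=0)                      # collect partial images

if rank == 0:
    image = np.vstack(pieces)
    print("assembled image", image.shape)
```

Run, for example, with `mpiexec -n 4 python trace.py`; load balancing and scene distribution are the parts a production code would add.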
Space shuttle system program definition. Volume 4: Cost and schedule report
NASA Technical Reports Server (NTRS)
1972-01-01
The supporting cost and schedule data for the second half of the Space Shuttle System Phase B Extension Study is summarized. The major objective for this period was to address the cost/schedule differences affecting final selection of the HO orbiter space shuttle system. The contending options under study included the following booster launch configurations: (1) series burn ballistic recoverable booster (BRB), (2) parallel burn ballistic recoverable booster (BRB), (3) series burn solid rocket motors (SRM's), and (4) parallel burn solid rocket motors (SRM's). The implications of varying payload bay sizes for the orbiter, engine type for the ballistics recoverable booster, and SRM motors for the solid booster were examined.
Synergia: an accelerator modeling tool with 3-D space charge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amundson, James F.; Spentzouris, P.; /Fermilab
2004-07-01
High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.
NASA Research For Instrument Approaches To Closely Spaced Parallel Runways
NASA Technical Reports Server (NTRS)
Elliott, Dawn M.; Perry, R. Brad
2000-01-01
Within the NASA Aviation Systems Capacity Program, the Terminal Area Productivity (TAP) Project is addressing airport capacity enhancements during instrument meteorological condition (IMC). The Airborne Information for Lateral Spacing (AILS) research within TAP has focused on an airborne centered approach for independent instrument approaches to closely spaced parallel runways using Differential Global Positioning System (DGPS) and Automatic Dependent Surveillance-Broadcast (ADS-B) technologies. NASA Langley Research Center (LaRC), working in partnership with Honeywell, Inc., completed in AILS simulation study, flight test, and demonstration in 1999 examining normal approaches and potential collision scenarios to runways with separation distances of 3,400 and 2,500 feet. The results of the flight test and demonstration validate the simulation study.
Airborne Precision Spacing (APS) Dependent Parallel Arrivals (DPA)
NASA Technical Reports Server (NTRS)
Smith, Colin L.
2012-01-01
The Airborne Precision Spacing (APS) team at the NASA Langley Research Center (LaRC) has been developing a concept of operations to extend the current APS concept to support dependent approaches to parallel or converging runways along with the required pilot and controller procedures and pilot interfaces. A staggered operations capability for the Airborne Spacing for Terminal Arrival Routes (ASTAR) tool was developed and designated as ASTAR10. ASTAR10 has reached a sufficient level of maturity to be validated and tested through a fast-time simulation. The purpose of the experiment was to identify and resolve any remaining issues in the ASTAR10 algorithm, as well as put the concept of operations through a practical test.
Very fast motion planning for highly dexterous-articulated robots
NASA Technical Reports Server (NTRS)
Challou, Daniel J.; Gini, Maria; Kumar, Vipin
1994-01-01
Due to the inherent danger of space exploration, the need for greater use of teleoperated and autonomous robotic systems in space-based applications has long been apparent. Autonomous and semi-autonomous robotic devices have been proposed for carrying out routine functions associated with scientific experiments aboard the shuttle and space station. Finally, research into the use of such devices for planetary exploration continues. To accomplish their assigned tasks, all such autonomous and semi-autonomous devices will require the ability to move themselves through space without hitting themselves or the objects which surround them. In space it is important to execute the necessary motions correctly when they are first attempted because repositioning is expensive in terms of both time and resources (e.g., fuel). Finally, such devices will have to function in a variety of different environments. Given these constraints, a means for fast motion planning to insure the correct movement of robotic devices would be ideal. Unfortunately, motion planning algorithms are rarely used in practice because of their computational complexity. Fast methods have been developed for detecting imminent collisions, but the more general problem of motion planning remains computationally intractable. However, in this paper we show how the use of multicomputers and appropriate parallel algorithms can substantially reduce the time required to synthesize paths for dexterous articulated robots with a large number of joints. We have developed a parallel formulation of the Randomized Path Planner proposed by Barraquand and Latombe. We have shown that our parallel formulation is capable of formulating plans in a few seconds or less on various parallel architectures including: the nCUBE2 multicomputer with up to 1024 processors (nCUBE2 is a registered trademark of the nCUBE corporation), and a network of workstations.
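The Randomized Path Planner family of methods alternates greedy descent toward the goal with random walks to escape local minima, and parallel speedup comes from running many randomized searches concurrently. The toy below is only a schematic stand-in for that pattern (a 2-D grid, a single rectangular obstacle, multiprocessing workers with different seeds); it is not the authors' formulation and all parameters are invented.

```python
import numpy as np
from multiprocessing import Pool

GRID = np.zeros((50, 50), dtype=bool)
GRID[20:30, 10:40] = True                    # rectangular obstacle
START, GOAL = (5, 5), (45, 45)

def in_bounds(p):
    return 0 <= p[0] < GRID.shape[0] and 0 <= p[1] < GRID.shape[1]

def randomized_search(seed, max_steps=50000):
    """Greedy steps toward the goal; short random walks when blocked."""
    rng = np.random.default_rng(seed)
    pos, path, steps = START, [START], 0
    while steps < max_steps:
        if pos == GOAL:
            return seed, path
        greedy = (pos[0] + int(np.sign(GOAL[0] - pos[0])),
                  pos[1] + int(np.sign(GOAL[1] - pos[1])))
        if in_bounds(greedy) and not GRID[greedy]:
            pos = greedy
        else:
            for _ in range(int(rng.integers(10, 50))):   # escape local minimum
                step = rng.integers(-1, 2, size=2)
                nxt = (int(pos[0] + step[0]), int(pos[1] + step[1]))
                if in_bounds(nxt) and not GRID[nxt]:
                    pos = nxt
                path.append(pos)
                steps += 1
        path.append(pos)
        steps += 1
    return seed, None

if __name__ == "__main__":
    with Pool(4) as pool:                    # independent randomized searches in parallel
        for seed, path in pool.imap_unordered(randomized_search, range(8)):
            if path is not None:
                print(f"seed {seed} found a path of {len(path)} steps")
                break
```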
EXPOSE-E: an ESA astrobiology mission 1.5 years in space.
Rabbow, Elke; Rettberg, Petra; Barczyk, Simon; Bohmeier, Maria; Parpart, André; Panitz, Corinna; Horneck, Gerda; von Heise-Rotenburg, Ralf; Hoppenbrouwers, Tom; Willnecker, Rainer; Baglioni, Pietro; Demets, René; Dettmann, Jan; Reitz, Guenther
2012-05-01
The multi-user facility EXPOSE-E was designed by the European Space Agency to enable astrobiology research in space (low-Earth orbit). On 7 February 2008, EXPOSE-E was carried to the International Space Station (ISS) on the European Technology Exposure Facility (EuTEF) platform in the cargo bay of Space Shuttle STS-122 Atlantis. The facility was installed at the starboard cone of the Columbus module by extravehicular activity, where it remained in space for 1.5 years. EXPOSE-E was returned to Earth with STS-128 Discovery on 12 September 2009 for subsequent sample analysis. EXPOSE-E provided accommodation in three exposure trays for a variety of astrobiological test samples that were exposed to selected space conditions: either to space vacuum, solar electromagnetic radiation at >110 nm and cosmic radiation (trays 1 and 3) or to simulated martian surface conditions (tray 2). Data on UV radiation, cosmic radiation, and temperature were measured every 10 s and downlinked by telemetry. A parallel mission ground reference (MGR) experiment was performed on ground with a parallel set of hardware and samples under simulated space conditions. EXPOSE-E performed a successful 1.5-year mission in space.
The Fight Deck Perspective of the NASA Langley AILS Concept
NASA Technical Reports Server (NTRS)
Rine, Laura L.; Abbott, Terence S.; Lohr, Gary W.; Elliott, Dawn M.; Waller, Marvin C.; Perry, R. Brad
2000-01-01
Many US airports depend on parallel runway operations to meet the growing demand for day to day operations. In the current airspace system, Instrument Meteorological Conditions (IMC) reduce the capacity of close parallel runway operations; that is, runways spaced closer than 4300 ft. These capacity losses can result in landing delays causing inconveniences to the traveling public, interruptions in commerce, and increased operating costs to the airlines. This document presents the flight deck perspective component of the Airborne Information for Lateral Spacing (AILS) approaches to close parallel runways in IMC. It represents the ideas the NASA Langley Research Center (LaRC) AILS Development Team envisions to integrate a number of components and procedures into a workable system for conducting close parallel runway approaches. An initial documentation of the aspects of this concept was sponsored by LaRC and completed in 1996. Since that time a number of the aspects have evolved to a more mature state. This paper is an update of the earlier documentation.
Atomic-scale electronic structure of the cuprate d-symmetry form factor density wave state
M. H. Hamidian; Kim, Chung Koo; Edkins, S. D.; ...
2015-10-26
Research on high-temperature superconducting cuprates is at present focused on identifying the relationship between the classic ‘pseudogap’ phenomenon 1, 2 and the more recently investigated density wave state 3–13. This state is generally characterized by a wavevector Q parallel to the planar Cu–O–Cu bonds 4–13 along with a predominantly d-symmetry form factor 14–17 (dFF-DW). To identify the microscopic mechanism giving rise to this state 18–30, one must identify the momentum-space states contributing to the dFF-DW spectral weight, determine their particle–hole phase relationship about the Fermi energy, establish whether they exhibit a characteristic energy gap, and understand the evolution of all these phenomena throughout the phase diagram. Here we use energy-resolved sublattice visualization 14 of electronic structure and reveal that the characteristic energy of the dFF-DW modulations is actually the ‘pseudogap’ energy Δ 1. Moreover, we demonstrate that the dFF-DW modulations at E = –Δ 1 (filled states) occur with relative phase π compared to those at E = Δ 1 (empty states). Lastly, we show that the conventionally defined dFF-DW Q corresponds to scattering between the ‘hot frontier’ regions of momentum-space beyond which Bogoliubov quasiparticles cease to exist 30–32. These data indicate that the cuprate dFF-DW state involves particle–hole interactions focused at the pseudogap energy scale and between the four pairs of ‘hot frontier’ regions in momentum space where the pseudogap opens.
Distensibility and pressure-flow relationship of the pulmonary circulation. II. Multibranched model.
Bshouty, Z; Younes, M
1990-04-01
The contribution of distensibility and recruitment to the distinctive behavior of the pulmonary circulation is not known. To examine this question we developed a multibranched model in which an arterial vascular bed bifurcates sequentially into up to 8 parallel channels that converge and reunite at the venous side to end in the left atrium. Eight resistors representing the capillary bed separate the arterial and venous beds. The elastic behavior of capillaries and extra-alveolar vessels was modeled after Fung and Sobin (Circ. Res. 30: 451-490, 1972) and Smith and Mitzner (J. Appl. Physiol. 48: 450-467, 1980), respectively. Forces acting on each component are modified and calculated individually, thus enabling the user to explore the effects of parallel and longitudinal heterogeneities in applied forces (e.g., gravity, vasomotor tone). Model predictions indicate that the contribution of distensibility to nonlinearities in the pressure-flow (P-F) and atrial-pulmonary arterial pressure (Pla-Ppa) relationships is substantial, whereas gravity-related recruitment contributes very little to these relationships. In addition, Pla-Ppa relationships, obtained at a constant flow, have no discriminating ability in identifying the presence or absence of a waterfall along the circulation. The P-F relationship is routinely shifted in a parallel fashion, within the physiological flow range, whenever extra forces (e.g., lung volume, tone) are applied uniformly at one or more branching levels, regardless of whether a waterfall is created. For a given applied force, the magnitude of the parallel shift varies with the proportion of the circulation subjected to the added force and with Pla.
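The basic circuit of such a model is a series arterial resistance, a bank of parallel capillary resistors, and a series venous resistance. The sketch below shows how distensible (pressure-dependent) capillary resistances make the pressure-flow relation nonlinear; the resistance law R = R0/(1 + αP) and every parameter value are illustrative assumptions, not the paper's equations.

```python
import numpy as np

def parallel(rs):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / np.sum(1.0 / np.asarray(rs))

def ppa_for_flow(flow, pla, r_art, r_ven, cap_r0, alpha, n_cap=8, n_iter=50):
    """Arterial pressure needed to drive `flow` against left atrial pressure
    `pla` through n_cap distensible capillaries in parallel (fixed-point
    iteration on the capillary resistance; all values illustrative)."""
    r_cap = cap_r0 / n_cap                               # rigid first guess
    for _ in range(n_iter):
        ppa = pla + flow * (r_art + r_cap + r_ven)
        p_mid = pla + flow * (r_ven + 0.5 * r_cap)       # mid-capillary pressure
        r_cap = parallel([cap_r0 / (1.0 + alpha * p_mid)] * n_cap)
    return ppa

flows = np.linspace(0.5, 6.0, 12)                        # illustrative flow range
rigid = [ppa_for_flow(q, 5.0, 1.0, 0.5, 8.0, alpha=0.0) for q in flows]   # linear P-F
disten = [ppa_for_flow(q, 5.0, 1.0, 0.5, 8.0, alpha=0.05) for q in flows] # curvilinear P-F
```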
Parallel processing and expert systems
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Lau, Sonie
1991-01-01
Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert system. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.
Coupling between structure and liquids in a parallel stage space shuttle design
NASA Technical Reports Server (NTRS)
Kana, D. D.; Ko, W. L.; Francis, P. H.; Nagy, A.
1972-01-01
A study was conducted to determine the influence of liquid propellants on the dynamic loads for space shuttle vehicles. A parallel-stage configuration model was designed and tested to determine the influence of liquid propellants on coupled natural modes. A forty degree-of-freedom analytical model was also developed for predicting these modes. Currently available analytical models were used to represent the liquid contributions, even though coupled longitudinal and lateral motions are present in such a complex structure. Agreement between the results was found in the lower few modes.
A Queue Simulation Tool for a High Performance Scientific Computing Center
NASA Technical Reports Server (NTRS)
Spear, Carrie; McGalliard, James
2007-01-01
The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high performance highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
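A discrete-event queue simulation of this kind advances a clock from event to event rather than time-stepping. The sketch below is a minimal first-come-first-served batch queue on a fixed CPU pool, using only the standard library; the job mix, node counts, and FCFS-without-backfill policy are invented for illustration and are not the NCCS tool.

```python
import heapq

def simulate(jobs, total_cpus):
    """jobs: list of (submit_time, cpus, runtime), FCFS order by submit time.
    Returns the mean wait time (start - submit)."""
    jobs = sorted(jobs)
    events = []                              # heap of (finish_time, cpus_released)
    free, now, waits = total_cpus, 0.0, []
    for submit, cpus, runtime in jobs:
        assert cpus <= total_cpus, "job cannot exceed the machine size"
        now = max(now, submit)
        while free < cpus:                   # advance time until enough CPUs free
            finish, released = heapq.heappop(events)
            now = max(now, finish)
            free += released
        waits.append(now - submit)
        free -= cpus
        heapq.heappush(events, (now + runtime, cpus))
    return sum(waits) / len(waits)

# Illustrative workload: (submit hour, CPUs, runtime hours) on a 768-CPU pool
jobs = [(0, 256, 8.0), (1, 64, 2.0), (2, 512, 12.0), (3, 128, 4.0)]
print("mean wait (hours):", simulate(jobs, total_cpus=768))
```

Alternative queue structures or allocation policies can then be compared by swapping the scheduling rule while keeping the same workload trace.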
Projective flatness in the quantisation of bosons and fermions
NASA Astrophysics Data System (ADS)
Wu, Siye
2015-07-01
We compare the quantisation of linear systems of bosons and fermions. We recall the appearance of projectively flat connection and results on parallel transport in the quantisation of bosons. We then discuss pre-quantisation and quantisation of fermions using the calculus of fermionic variables. We define a natural connection on the bundle of Hilbert spaces and show that it is projectively flat. This identifies, up to a phase, equivalent spinor representations constructed by various polarisations. We introduce the concept of metaplectic correction for fermions and show that the bundle of corrected Hilbert spaces is naturally flat. We then show that the parallel transport in the bundle of Hilbert spaces along a geodesic is a rescaled projection provided that the geodesic lies within the complement of a cut locus. Finally, we study the bundle of Hilbert spaces when there is a symmetry.
ColDICE: A parallel Vlasov–Poisson solver using moving adaptive simplicial tessellation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousbie, Thierry, E-mail: tsousbie@gmail.com; Department of Physics, The University of Tokyo, Tokyo 113-0033; Research Center for the Early Universe, School of Science, The University of Tokyo, Tokyo 113-0033
2016-09-15
Resolving numerically Vlasov–Poisson equations for initially cold systems can be reduced to following the evolution of a three-dimensional sheet evolving in six-dimensional phase-space. We describe a public parallel numerical algorithm consisting in representing the phase-space sheet with a conforming, self-adaptive simplicial tessellation of which the vertices follow the Lagrangian equations of motion. The algorithm is implemented both in six- and four-dimensional phase-space. Refinement of the tessellation mesh is performed using the bisection method and a local representation of the phase-space sheet at second order relying on additional tracers created when needed at runtime. In order to preserve in the best way the Hamiltonian nature of the system, refinement is anisotropic and constrained by measurements of local Poincaré invariants. Resolution of the Poisson equation is performed using the fast Fourier method on a regular rectangular grid, similarly to particle-in-cell codes. To compute the density projected onto this grid, the intersection of the tessellation and the grid is calculated using the method of Franklin and Kankanhalli [65–67] generalised to linear order. As preliminary tests of the code, we study in four-dimensional phase-space the evolution of an initially small patch in a chaotic potential and the cosmological collapse of a fluctuation composed of two sinusoidal waves. We also perform a “warm” dark matter simulation in six-dimensional phase-space that we use to check the parallel scaling of the code.
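The grid-based Poisson solve mentioned above (as in particle-in-cell codes) amounts to dividing by -k² in Fourier space. The NumPy sketch below shows that step alone on a periodic cubic grid; it is not ColDICE's tessellation-to-grid projection, and the sign/units convention (∇²φ = 4πGρ) and grid are assumptions for illustration.

```python
import numpy as np

def poisson_fft(density, box_size, g_const=1.0):
    """Solve nabla^2 phi = 4*pi*G*(rho - rho_mean) on a periodic grid by FFT."""
    n = density.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    rho_hat = np.fft.fftn(density - density.mean())
    phi_hat = np.zeros_like(rho_hat)
    nonzero = k2 > 0
    phi_hat[nonzero] = -4.0 * np.pi * g_const * rho_hat[nonzero] / k2[nonzero]
    return np.real(np.fft.ifftn(phi_hat))

# Example: potential of a random overdensity field on a 64^3 periodic box
rho = np.random.default_rng(0).random((64, 64, 64))
phi = poisson_fft(rho, box_size=1.0)
```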
On the relationship between collisionless shock structure and energetic particle acceleration
NASA Technical Reports Server (NTRS)
Kennel, C. F.
1983-01-01
Recent experimental research on bow shock structure and theoretical studies of quasi-parallel shock structure and shock acceleration of energetic particles were reviewed, to point out the relationship between structure and particle acceleration. The phenomenological distinction between quasi-parallel and quasi-perpendicular shocks that has emerged from bow shock research; present efforts to extend this work to interplanetary shocks; theories of particle acceleration by shocks; and the relation of particle acceleration to shock structure using multiple fluid models were discussed.
NASA Astrophysics Data System (ADS)
Hayano, Akira; Ishii, Eiichi
2016-10-01
This study investigates the mechanical relationship between bedding-parallel and bedding-oblique faults in a Neogene massive siliceous mudstone at the site of the Horonobe Underground Research Laboratory (URL) in Hokkaido, Japan, on the basis of observations of drill-core recovered from pilot boreholes and fracture mapping on shaft and gallery walls. Four bedding-parallel faults with visible fault gouge, named respectively the MM Fault, the Last MM Fault, the S1 Fault, and the S2 Fault (stratigraphically, from the highest to the lowest), were observed in two pilot boreholes (PB-V01 and SAB-1). The distribution of the bedding-parallel faults at 350 m depth in the Horonobe URL indicates that these faults are spread over at least several tens of meters in parallel along a bedding plane. The observation that the bedding-oblique fault displaces the Last MM fault is consistent with the previous interpretation that the bedding-oblique faults formed after the bedding-parallel faults. In addition, the bedding-oblique faults terminate near the MM and S1 faults, indicating that the bedding-parallel faults with visible fault gouge act to terminate the propagation of younger bedding-oblique faults. In particular, the MM and S1 faults, which have a relatively thick fault gouge, appear to have had a stronger control on the propagation of bedding-oblique faults than did the Last MM fault, which has a relatively thin fault gouge.
Scaling device for photographic images
NASA Technical Reports Server (NTRS)
Rivera, Jorge E. (Inventor); Youngquist, Robert C. (Inventor); Cox, Robert B. (Inventor); Haskell, William D. (Inventor); Stevenson, Charles G. (Inventor)
2005-01-01
A scaling device projects a known optical pattern into the field of view of a camera, which can be employed as a reference scale in a resulting photograph of a remote object, for example. The device comprises an optical beam projector that projects two or more spaced, parallel optical beams onto a surface of a remotely located object to be photographed. The resulting beam spots or lines on the object are spaced from one another by a known, predetermined distance. As a result, the size of other objects or features in the photograph can be determined through comparison of their size to the known distance between the beam spots. Preferably, the device is a small, battery-powered device that can be attached to a camera and employs one or more laser light sources and associated optics to generate the parallel light beams. In a first embodiment of the invention, a single laser light source is employed, but multiple parallel beams are generated thereby through use of beam splitting optics. In another embodiment, multiple individual laser light sources are employed that are mounted in the device parallel to one another to generate the multiple parallel beams.
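The scaling principle is simple proportion: the known spacing between the projected beam spots sets pixels-per-meter in the image plane of the object. The snippet below is a worked illustration of that arithmetic (all numbers invented), assuming the feature of interest lies at roughly the same distance from the camera as the beam spots.

```python
def object_size(pixels_feature, pixels_between_spots, beam_spacing):
    """Physical size of a feature, given the known spacing of the parallel beams.

    pixels_feature       -- measured pixel extent of the feature of interest
    pixels_between_spots -- measured pixel distance between the two beam spots
    beam_spacing         -- known physical distance between the parallel beams
    """
    return pixels_feature * beam_spacing / pixels_between_spots

# Example: spots 10 cm apart span 180 px; a crack spanning 450 px is about 0.25 m long
print(object_size(450, 180, 0.10), "m")
```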
NASA Technical Reports Server (NTRS)
Bekey, I.; Mayer, H. L.; Wolfe, M. G.
1976-01-01
The likely system concepts which might be representative of NASA and DoD space programs in the 1980-2000 time period were studied along with the programs' likely needs for major space transportation vehicles, orbital support vehicles, and technology developments which could be shared by the military and civilian space establishments in that time period. Such needs could then be used by NASA as an input in determining the nature of its long-range development plan. The approach used was to develop a list of possible space system concepts (initiatives) in parallel with a list of needs based on consideration of the likely environments and goals of the future. The two lists thus obtained represented what could be done, regardless of need; and what should be done, regardless of capability, respectively. A set of development program plans for space application concepts was then assembled, matching needs against capabilities, and the requirements of the space concepts for support vehicles, transportation, and technology were extracted. The process was pursued in parallel for likely military and civilian programs, and the common support needs thus identified.
A Two-dimensional Version of the Niblett-Bostick Transformation for Magnetotelluric Interpretations
NASA Astrophysics Data System (ADS)
Esparza, F.
2005-05-01
An imaging technique for two-dimensional magnetotelluric interpretations is developed following the well known Niblett-Bostick transformation for one-dimensional profiles. The algorithm uses a Hopfield artificial neural network to process series and parallel magnetotelluric impedances along with their analytical influence functions. The adaptive, weighted average approximation preserves part of the nonlinearity of the original problem. No initial model in the usual sense is required for the recovery of a functional model. Rather, the built-in relationship between model and data considers automatically, all at the same time, many half spaces whose electrical conductivities vary according to the data. The use of series and parallel impedances, a self-contained pair of invariants of the impedance tensor, avoids the need to decide on best angles of rotation for TE and TM separations. Field data from a given profile can thus be fed directly into the algorithm without much processing. The solutions offered by the Hopfield neural network correspond to spatial averages computed through rectangular windows that can be chosen at will. Applications of the algorithm to simple synthetic models and to the COPROD2 data set illustrate the performance of the approximation.
NASA Technical Reports Server (NTRS)
Demakes, P. T.; Hirsch, G. N.; Stewart, W. A.; Glatt, C. R.
1976-01-01
The use of a recoverable liquid rocket booster (LRB) system to replace the existing solid rocket booster (SRB) system for the shuttle was studied. Historical weight estimating relationships were developed for the LRB using Saturn technology and modified as required. Mission performance was computed using February 1975 shuttle configuration groundrules to allow reasonable comparison of the existing shuttle with the study designs. The launch trajectory was constrained to pass through both the RTLS/AOA and main engine cut off points of the shuttle reference mission 1. Performance analysis is based on a point design trajectory model which optimizes initial tilt rate and exoatmospheric pitch profile. A gravity turn was employed during the boost phase in place of the shuttle angle of attack profile. Engine throttling and/or shutdown was used to constrain dynamic pressure and/or longitudinal acceleration where necessary. Four basic configurations were investigated: a parallel burn vehicle with an F-1 engine powered LRB; a parallel burn vehicle with a high pressure engine powered LRB; a series burn vehicle with a high pressure engine powered LRB. The relative sizes of the LRB and the ET are optimized to minimize GLOW in most cases.
NASA Astrophysics Data System (ADS)
Zhao, Feng; Frietman, Edward E. E.; Han, Zhong; Chen, Ray T.
1999-04-01
A characteristic feature of a conventional von Neumann computer is that computing power is delivered by a single processing unit. Although increasing the clock frequency improves the performance of the computer, the switching speed of the semiconductor devices and the finite speed at which electrical signals propagate along the bus set the limits. Architectures containing large numbers of nodes can solve this performance dilemma, although the main obstacles in designing such systems lie in finding solutions that guarantee efficient communication among the nodes. Exchanging data becomes a real bottleneck when all nodes are connected by a shared resource. Only optics, due to its inherent parallelism, can remove that bottleneck. Here, we explore a multi-faceted free-space image distributor to be used in optical interconnects for massively parallel processing. In this paper, physical and optical models of the image distributor are developed, from the diffraction theory of light waves to optical simulations. The general features and the performance of the image distributor are also described, and a new structure for the image distributor and the corresponding simulations are discussed. The digital simulations and experiments show that the multi-faceted free-space image distribution technique is well suited to free-space optical interconnection in massively parallel processing and that the new structure of the multi-faceted free-space image distributor performs better.
A Parallel Process Growth Model of Avoidant Personality Disorder Symptoms and Personality Traits
Wright, Aidan G. C.; Pincus, Aaron L.; Lenzenweger, Mark F.
2012-01-01
Background Avoidant personality disorder (AVPD), like other personality disorders, has historically been construed as a highly stable disorder. However, results from a number of longitudinal studies have found that the symptoms of AVPD demonstrate marked change over time. Little is known about which other psychological systems are related to this change. Although cross-sectional research suggests a strong relationship between AVPD and personality traits, no work has examined the relationship of their change trajectories. The current study sought to establish the longitudinal relationship between AVPD and basic personality traits using parallel process growth curve modeling. Methods Parallel process growth curve modeling was applied to the trajectories of AVPD and basic personality traits from the Longitudinal Study of Personality Disorders (Lenzenweger, 2006), a naturalistic, prospective, multiwave, longitudinal study of personality disorder, temperament, and normal personality. The focus of these analyses is on the relationship between the rates of change in both AVPD symptoms and basic personality traits. Results AVPD symptom trajectories demonstrated significant negative relationships with the trajectories of interpersonal dominance and affiliation, and a significant positive relationship to rates of change in neuroticism. Conclusions These results provide some of the first compelling evidence that trajectories of change in PD symptoms and personality traits are linked. These results have important implications for the ways in which temporal stability is conceptualized in AVPD specifically, and PD in general. PMID:22506627
Canaliculi in the tessellated skeleton of cartilaginous fishes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dean, M.N.; Socha, J.J.; Hall, B.K.
2010-08-04
The endoskeletal elements of sharks and rays are comprised of an uncalcified, hyaline cartilage-like core overlain by a thin fibro-ceramic layer of mineralized hexagonal tiles (tesserae) adjoined by intertesseral fibers. The basic spatial relationships of the constituent tissues (unmineralized cartilage, mineralized cartilage, fibrous tissue) are well-known - endoskeletal tessellation is a long-recognized synapomorphy of elasmobranch fishes - but a high-resolution and three-dimensional (3D) understanding of their interactions has been hampered by difficulties in sample preparation and lack of technologies adequate for visualizing microstructure and microassociations. We used cryo-electron microscopy and synchrotron radiation tomography to investigate tessellated skeleton ultrastructure but without damage to the delicate relationships between constituent tissues or to the tesserae themselves. The combination of these techniques allowed visualization of never before appreciated internal structures, namely passages connecting the lacunar spaces within tesserae. These intratesseral 'canaliculi' link consecutive lacunar spaces into long lacunar strings, radiating outward from the center of tesserae. The continuity of extracellular matrix throughout the canalicular network may explain how chondrocytes in tesserae remain vital despite encasement in mineral. Extracellular fluid exchange may also permit transmission of nutrients, and mechanical and mineralization signals among chondrocytes, in a manner similar to the canalicular network in bone. These co-adapted mechanisms for the facilitated exchange of extracellular material suggest a level of parallelism in early chondrocyte and osteocyte evolution.
Optimizing the Four-Index Integral Transform Using Data Movement Lower Bounds Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; Rastello, Fabrice; Kowalski, Karol
The four-index integral transform is a fundamental and computationally demanding calculation used in many computational chemistry suites such as NWChem. It transforms a four-dimensional tensor from an atomic basis to a molecular basis. This transformation is most efficiently implemented as a sequence of four tensor contractions that each contract a four-dimensional tensor with a two-dimensional transformation matrix. Differing degrees of permutation symmetry in the intermediate and final tensors in the sequence of contractions cause intermediate tensors to be much larger than the final tensor and limit the number of electronic states in the modeled systems. Loop fusion, in conjunction with tiling, can be very effective in reducing the total space requirement, as well as data movement. However, the large number of possible choices for loop fusion and tiling, and data/computation distribution across a parallel system, make it challenging to develop an optimized parallel implementation for the four-index integral transform. We develop a novel approach to address this problem, using lower bounds modeling of data movement complexity. We establish relationships between available aggregate physical memory in a parallel computer system and ineffective fusion configurations, enabling their pruning and consequent identification of effective choices and a characterization of optimality criteria. This work has resulted in the development of a significantly improved implementation of the four-index transform that enables higher performance and the ability to model larger electronic systems than the current implementation in the NWChem quantum chemistry software suite.
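The "sequence of four tensor contractions" can be written compactly with einsum. The sketch below is a dense, unsymmetrized reference version only; the optimized implementation discussed above layers permutation symmetry, loop fusion, tiling, and distribution on top of this same contraction sequence, none of which is shown here.

```python
import numpy as np

def four_index_transform(ao_eri, C):
    """Transform a four-index tensor from the atomic-orbital (AO) basis to the
    molecular-orbital (MO) basis as four successive contractions with C."""
    t1 = np.einsum('pqrs,pi->iqrs', ao_eri, C)
    t2 = np.einsum('iqrs,qj->ijrs', t1, C)
    t3 = np.einsum('ijrs,rk->ijks', t2, C)
    return np.einsum('ijks,sl->ijkl', t3, C)

n = 6
ao = np.random.rand(n, n, n, n)
C = np.random.rand(n, n)
mo = four_index_transform(ao, C)
# The single naive contraction below costs O(n^8); the staged version is O(n^5).
naive = np.einsum('pqrs,pi,qj,rk,sl->ijkl', ao, C, C, C, C, optimize=False)
print(np.allclose(mo, naive))
```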
Circuit topology of self-interacting chains: implications for folding and unfolding dynamics.
Mugler, Andrew; Tans, Sander J; Mashaghi, Alireza
2014-11-07
Understanding the relationship between molecular structure and folding is a central problem in disciplines ranging from biology to polymer physics and DNA origami. Topology can be a powerful tool to address this question. For a folded linear chain, the arrangement of intra-chain contacts is a topological property because rearranging the contacts requires discontinuous deformations. Conversely, the topology is preserved when continuously stretching the chain while maintaining the contact arrangement. Here we investigate how the folding and unfolding of linear chains with binary contacts is guided by the topology of contact arrangements. We formalize the topology by describing the relations between any two contacts in the structure, which for a linear chain can either be in parallel, in series, or crossing each other. We show that even when other determinants of folding rate such as contact order and size are kept constant, this 'circuit' topology determines folding kinetics. In particular, we find that the folding rate increases with the fractions of parallel and crossed relations. Moreover, we show how circuit topology constrains the conformational phase space explored during folding and unfolding: the number of forbidden unfolding transitions is found to increase with the fraction of parallel relations and to decrease with the fraction of series relations. Finally, we find that circuit topology influences whether distinct intermediate states are present, with crossed contacts being the key factor. The approach presented here can be more generally applied to questions on molecular dynamics, evolutionary biology, molecular engineering, and single-molecule biophysics.
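The pairwise relations that define circuit topology can be computed directly from contact indices: two contacts along the chain are in series when their intervals are disjoint, in parallel when one is nested inside the other, and crossed when they interleave. The small sketch below classifies contact pairs under that convention; the example contacts are invented.

```python
def relation(c1, c2):
    """Circuit-topology relation between two intra-chain contacts (i, j), i < j:
    disjoint -> 'series', nested -> 'parallel', interleaved -> 'cross'."""
    (a, b), (c, d) = sorted([tuple(sorted(c1)), tuple(sorted(c2))])
    if b <= c:                      # ...a---b...c---d...  disjoint
        return "series"
    if d <= b:                      # a...c---d...b        nested
        return "parallel"
    return "cross"                  # a...c...b...d        interleaved

contacts = [(2, 30), (5, 12), (20, 40), (45, 60)]
for i in range(len(contacts)):
    for j in range(i + 1, len(contacts)):
        print(contacts[i], contacts[j], relation(contacts[i], contacts[j]))
```

Counting the fractions of series, parallel, and crossed pairs over all contacts of a structure gives the topological descriptors whose link to folding kinetics is discussed above.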
NASA Technical Reports Server (NTRS)
Miquel, J. (Editor); Economos, A. C. (Editor)
1982-01-01
Presentations are given which address the effects of space flght on the older person, the parallels between the physiological responses to weightlessness and the aging process, and experimental possibilities afforded by the weightless environment to fundamental research in gerontology and geriatrics.
Zero-Adjective Contrast in Much-less Ellipsis: The Advantage for Parallel Syntax.
Carlson, Katy; Harris, Jesse A
2018-01-01
This paper explores the processing of sentences with a much less coordinator (I don't own a pink hat, much less a red one). This understudied ellipsis sentence, one of several focus-sensitive coordination structures, imposes syntactic and semantic conditions on the relationship between the correlate (a pink hat) and remnant (a red one). We present the case of zero-adjective contrast, in which an NP remnant introduces an adjective without an overt counterpart in the correlate (I don't own a hat, much less a red one). Although zero-adjective contrast could in principle ease comprehension by limiting the possible relationships between the remnant and correlate to entailment, we find that zero-adjective contrast is avoided in production and taxing in online processing. Results from several studies support a processing model in which syntactic parallelism is the primary guide for determining contrast in ellipsis structures, even when violating parallelism would assist in computing semantic relationships.
Issues of planning trajectory of parallel robots taking into account zones of singularity
NASA Astrophysics Data System (ADS)
Rybak, L. A.; Khalapyan, S. Y.; Gaponenko, E. V.
2018-03-01
A method for determining the design characteristics of a parallel robot necessary to provide specified parameters of its working space that satisfy the controllability requirement is developed. The experimental verification of the proposed method was carried out using an approximate planar 3-RPR mechanism.
Parallel State Space Construction for a Model Checking Based on Maximality Semantics
NASA Astrophysics Data System (ADS)
El Abidine Bouneb, Zine; Saīdouni, Djamel Eddine
2009-03-01
The main limiting factor of the model checker integrated in the concurrency verification environment FOCOVE [1, 2], which uses the maximality based labeled transition system (denoted MLTS) as a true concurrency model [3, 4], is currently the amount of available physical memory. Many techniques have been developed to reduce the size of a state space. An interesting technique among them is the alpha equivalence reduction. A distributed memory execution environment offers yet another choice. The main contribution of the paper is to show that the parallel state space construction algorithm proposed in [5], which is based on interleaving semantics using LTS as the semantic model, may be adapted easily to a distributed implementation of the alpha equivalence reduction for maximality based labeled transition systems.
Planned development of a 3D computer based on free-space optical interconnects
NASA Astrophysics Data System (ADS)
Neff, John A.; Guarino, David R.
1994-05-01
Free-space optical interconnection has the potential to provide upwards of a million data channels between planes of electronic circuits. This may result in the planar board and backplane structures of today giving way to 3-D stacks of wafers or multi-chip modules interconnected via channels running perpendicular to the processor planes, thereby eliminating much of the packaging overhead. Three-dimensional packaging is very appealing for tightly coupled fine-grained parallel computing where the need for massive numbers of interconnections is severely taxing the capabilities of the planar structures. This paper describes a coordinated effort by four research organizations to demonstrate an operational fine-grained parallel computer that achieves global connectivity through the use of free space optical interconnects.
CONTAMINANT TRANSPORT IN PARALLEL FRACTURED MEDIA: SUDICKY AND FRIND REVISITED
This paper is concerned with a modified, nondimensional form of the parallel fracture, contaminant transport model of Sudicky and Frind (1982). The modifications include the boundary condition at the fracture wall, expressed by a parameter, and the power-law relationship between...
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-07-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.
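Temperature replica-exchange MD (T-REMD), one of the REMD variants listed above, periodically attempts to swap configurations between replicas at neighboring temperatures using a Metropolis criterion. The snippet below illustrates only that exchange test with the standard acceptance formula min(1, exp[(β_i − β_j)(E_i − E_j)]); it is not GENESIS code, and the unit choice and energies are assumptions.

```python
import numpy as np

def try_swap(beta_i, beta_j, energy_i, energy_j, rng):
    """Metropolis exchange test between two replicas: accept with
    probability min(1, exp[(beta_i - beta_j)(E_i - E_j)])."""
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0 or rng.random() < np.exp(delta)

rng = np.random.default_rng(1)
kB = 0.0019872041                                  # kcal/(mol K), assumed units
betas = 1.0 / (kB * np.array([300.0, 320.0]))      # two neighboring replicas
print(try_swap(betas[0], betas[1], energy_i=-1200.0, energy_j=-1195.0, rng=rng))
```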
NASA Astrophysics Data System (ADS)
Caputo, Riccardo; Hancock, Paul L.
1998-11-01
It is well accepted and documented that faulting is produced by the cyclic behaviour of a stress field. Some extension fractures, such as veins characterised by the crack-seal mechanism, have also been presumed to result from repeated stress cycles. In the present note, some commonly observed field phenomena and relationships, such as hackle marks and vein and joint spacing, are employed to argue that a stress field can also display cyclic behaviour during extensional fracturing. Indeed, the requirement of critical stress conditions for the occurrence of extensional failure events does not accord with the presence of contemporaneously open nearby parallel fractures. Therefore, because after each fracture event there is stress release within the surrounding volume of rock, high density sets of parallel extensional fractures also strongly support the idea that rocks undergo stress cyclicity during jointing and veining. A comparison with seismological data from earthquakes with dipole mechanical solutions confirms that this process presently occurs at depth in the Earth's crust. Furthermore, in order to explain dense sets of hair-like closely spaced microveins, a crack-jump mechanism is introduced here as an alternative to the crack-seal mechanism. We also propose that as a consequence of medium-scale stress cyclicity during brittle deformation, the re-fracturing of a rock mass occurs in either one or the other of these two possible ways depending on the ratio between the elastic parameters of the sealing material and those of the host rock. The crack-jump mechanism occurs when the former is stronger.
Project MANTIS: A MANTle Induction Simulator for coupling geodynamic and electromagnetic modeling
NASA Astrophysics Data System (ADS)
Weiss, C. J.
2009-12-01
A key component to testing geodynamic hypotheses resulting from the 3D mantle convection simulations is the ability to easily translate the predicted physiochemical state to the model space relevant for an independent geophysical observation, such as earth's seismic, geodetic or electromagnetic response. In this contribution a new parallel code for simulating low-frequency, global-scale electromagnetic induction phenomena is introduced that has the same Earth discretization as the popular CitcomS mantle convection code. Hence, projection of the CitcomS model into the model space of electrical conductivity is greatly simplified, and focuses solely on the node-to-node, physics-based relationship between these Earth parameters without the need for "upscaling", "downscaling", averaging or harmonizing with some other model basis such as spherical harmonics. Preliminary performance tests of the MANTIS code on shared and distributed memory parallel compute platforms show favorable scaling (>70% efficiency) for up to 500 processors. As with CitcomS, an OpenDX visualization widget (VISMAN) is also provided for 3D rendering and interactive interrogation of model results. Details of the MANTIS code will be briefly discussed here, focusing on compatibility with CitcomS modeling, as will be preliminary results in which the electromagnetic response of a CitcomS model is evaluated.
Figure caption: VISMAN rendering of an electrical tomography-derived electrical conductivity model overlain by a 1x1 deg crustal conductivity map. Grey scale represents the log_10 magnitude of conductivity [S/m]. Arrows are horizontal components of a hypothetical magnetospheric source field used to electromagnetically excite the conductivity model.
Massively parallel information processing systems for space applications
NASA Technical Reports Server (NTRS)
Schaefer, D. H.
1979-01-01
NASA is developing massively parallel systems for ultra high speed processing of digital image data collected by satellite borne instrumentation. Such systems contain thousands of processing elements. Work is underway on the design and fabrication of the 'Massively Parallel Processor', a ground computer containing 16,384 processing elements arranged in a 128 x 128 array. This computer uses existing technology. Advanced work includes the development of semiconductor chips containing thousands of feedthrough paths. Massively parallel image analog to digital conversion technology is also being developed. The goal is to provide compact computers suitable for real-time onboard processing of images.
Tile-based Level of Detail for the Parallel Age
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niski, K; Cohen, J D
Today's PCs incorporate multiple CPUs and GPUs and are easily arranged in clusters for high-performance, interactive graphics. We present an approach based on hierarchical, screen-space tiles to parallelizing rendering with level of detail. Adapt tiles, render tiles, and machine tiles are associated with CPUs, GPUs, and PCs, respectively, to efficiently parallelize the workload with good resource utilization. Adaptive tile sizes provide load balancing while our level of detail system allows total and independent management of the load on CPUs and GPUs. We demonstrate our approach on parallel configurations consisting of both single PCs and a cluster of PCs.
Carbon nanotube-based sensor and method for detection of crack growth in a structure
NASA Technical Reports Server (NTRS)
Smits, Jan M. (Inventor); Moore, Thomas C. (Inventor); Kite, Marlen T. (Inventor); Wincheski, Russell A. (Inventor); Ingram, JoAnne L. (Inventor); Watkins, Anthony N. (Inventor); Williams, Phillip A. (Inventor)
2007-01-01
A sensor has a plurality of carbon nanotube (CNT)-based conductors operatively positioned on a substrate. The conductors are arranged side-by-side, such as in a substantially parallel relationship to one another. At least one pair of spaced-apart electrodes is coupled to opposing ends of the conductors. A portion of each of the conductors spanning between each pair of electrodes comprises a plurality of carbon nanotubes arranged end-to-end and substantially aligned along an axis. Because a direct correlation exists between the resistance of a carbon nanotube and its strain, changes experienced by the portion of the structure to which the sensor is coupled induce a corresponding change in the electrical properties of the conductors, thereby enabling detection of crack growth in the structure.
A neural network for controlling the configuration of frame structure with elastic members
NASA Technical Reports Server (NTRS)
Tsutsumi, Kazuyoshi
1989-01-01
A neural network for controlling the configuration of frame structure with elastic members is proposed. In the present network, the structure is modeled not by using the relative angles of the members but by using the distances between the joint locations alone. The relationship between the environment and the joints is also defined by their mutual distances. The analog neural network attains the reaching motion of the manipulator as a minimization problem of the energy constructed by the distances between the joints, the target, and the obstacles. The network can generate not only the final but also the transient configurations and the trajectory. This framework with flexibility and parallelism is very suitable for controlling the Space Telerobotic systems with many degrees of freedom.
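To illustrate the distance-based formulation described above, here is a rough sketch, not the paper's network, in which joint coordinates are relaxed by gradient descent on an energy built only from distances to a target, to an obstacle, and between adjacent joints; all geometry and weights are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: minimize an energy constructed from distances between joints,
# the target, and an obstacle, as the abstract describes (values are placeholders).
def energy(joints, target, obstacle, link_len=1.0):
    E = np.linalg.norm(joints[-1] - target) ** 2                                      # reach the target
    E += np.sum((np.linalg.norm(np.diff(joints, axis=0), axis=1) - link_len) ** 2)    # keep member lengths
    d_obs = np.linalg.norm(joints - obstacle, axis=1)
    E += np.sum(np.exp(-5.0 * d_obs))                                                 # repel from the obstacle
    return E

def relax(joints, target, obstacle, lr=0.05, steps=2000, eps=1e-5):
    for _ in range(steps):
        grad = np.zeros_like(joints)
        for i in range(joints.shape[0]):
            for j in range(2):                         # central-difference gradient, for clarity
                p = joints.copy(); p[i, j] += eps
                m = joints.copy(); m[i, j] -= eps
                grad[i, j] = (energy(p, target, obstacle) - energy(m, target, obstacle)) / (2 * eps)
        grad[0] = 0.0                                  # base joint is fixed
        joints = joints - lr * grad
    return joints

chain = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
final = relax(chain, target=np.array([2.0, 2.0]), obstacle=np.array([1.5, 1.0]))
print(final.round(2))   # transient and final configurations emerge from the relaxation
```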
NASA Astrophysics Data System (ADS)
Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie
2016-05-01
This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data are moderately randomly undersampled at the central k-space navigator locations, but highly undersampled in the outer k-space for each temporal frame. In reconstruction, the navigator data are reconstructed from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the partially separable model is used to obtain partial k-t data. Then the parallel imaging method is used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, when the conventional PS method fails.
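A minimal sketch of one ingredient named above, under simplifying assumptions (single coil, Cartesian sampling, toy data): recovering an undersampled navigator (Casorati) matrix by iterative rank truncation with data consistency, in the spirit of low-rank matrix completion. This is an illustration of the concept, not the paper's reconstruction.

```python
import numpy as np

def complete_low_rank(M_obs, mask, rank=4, iters=200):
    """Fill in unobserved entries of a low-rank matrix by SVD truncation + data consistency."""
    X = np.where(mask, M_obs, 0.0 + 0.0j)
    for _ in range(iters):
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                      # enforce the low-rank (PS) assumption
        X = (U * s) @ Vh
        X[mask] = M_obs[mask]               # keep the acquired navigator samples
    return X

# Toy rank-2 "navigator" data: k-space locations x time frames.
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 2)) + 1j * rng.standard_normal((32, 2))
B = rng.standard_normal((2, 40)) + 1j * rng.standard_normal((2, 40))
truth = A @ B
mask = rng.random(truth.shape) < 0.4        # 40% of navigator samples acquired
recovered = complete_low_rank(np.where(mask, truth, 0), mask, rank=2)
print(np.linalg.norm(recovered - truth) / np.linalg.norm(truth))   # small relative error
```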
NASA Technical Reports Server (NTRS)
Brandstetter, J. Robert; Reck, Gregory M.
1973-01-01
Combustion tests of two V-gutter types were conducted in a 19.25-in. diameter duct using vitiated air. Fuel spraybars were mounted in line with the V-gutters. Combustor length was set by flame-quench water sprays which were part of a calorimeter for measuring combustion efficiency. Although the levels of performance of the parallel and circular array afterburners were different, the trends with geometry variations were consistent. Therefore, parallel arrays can be used for evaluating V-gutter geometry effects on combustion performance. For both arrays, the highest inlet temperature produced combustion efficiencies near 100 percent. A 5-in. spraybar-to-V-gutter spacing gave higher efficiency and better lean blowout performance than a spacing twice as large. Gutter durability was good.
NASA Technical Reports Server (NTRS)
Morris, Robert A.
1990-01-01
The emphasis is on defining a set of communicating processes for intelligent spacecraft secondary power distribution and control. The computer hardware and software implementation platform for this work is that of the ADEPTS project at the Johnson Space Center (JSC). The electrical power system design which was used as the basis for this research is that of Space Station Freedom, although the functionality of the processes defined here generalizes to any permanent manned space power control application. First, the Space Station Electrical Power Subsystem (EPS) hardware to be monitored is described, followed by a set of scenarios describing typical monitor and control activity. Then, the parallel distributed problem solving approach to knowledge engineering is introduced. There follows a two-step presentation of the intelligent software design for secondary power control. The first step decomposes the problem of monitoring and control into three primary functions. Each of the primary functions is described in detail. Suggestions for refinements and embellishments in design specifications are given.
New 2D diffraction model and its applications to terahertz parallel-plate waveguide power splitters
Zhang, Fan; Song, Kaijun; Fan, Yong
2017-01-01
A two-dimensional (2D) diffraction model for the calculation of the diffraction field in 2D space, and its applications to terahertz parallel-plate waveguide power splitters, are proposed in this paper. Compared with the Huygens-Fresnel principle in three-dimensional (3D) space, the proposed model provides an approximate analytical expression for calculating the diffraction field in 2D space. The diffraction field is regarded as a superposition integral in 2D space. The calculated results obtained from the proposed diffraction model agree well with those given by the HFSS software, which is based on the finite element method (FEM). Based on the proposed 2D diffraction model, two parallel-plate waveguide power splitters are presented. The splitters consist of a transmitting horn antenna, reflectors, and a receiving antenna array. The reflector is cylindrical parabolic with superimposed surface relief to efficiently couple the transmitted wave into the receiving antenna array. The reflectors are applied as computer-generated holograms to match the transformed field to the receiving antenna aperture field. The power splitters were optimized by a modified real-coded genetic algorithm. The computed results for the splitters agree well with those obtained by HFSS, verifying the novel design method for power splitters and showing the good application prospects of the proposed 2D diffraction model. PMID:28181514
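The following is a rough numerical sketch of treating a 2D diffracted field as a superposition integral over an aperture, with each aperture sample radiating as a 2D line source (Hankel-function Green's function). The geometry, wavelength, and source model are illustrative assumptions, not the closed-form expression derived in the paper.

```python
import numpy as np
from scipy.special import hankel1

def diffracted_field_2d(aperture_x, aperture_field, obs_points, k):
    """Sum 2D free-space line-source contributions from aperture samples at observation points."""
    field = np.zeros(len(obs_points), dtype=complex)
    dx = aperture_x[1] - aperture_x[0]
    for x0, u0 in zip(aperture_x, aperture_field):
        r = np.hypot(obs_points[:, 0] - x0, obs_points[:, 1])   # aperture lies on y = 0
        field += u0 * (1j / 4.0) * hankel1(0, k * r) * dx       # 2D Helmholtz Green's function
    return field

wavelength = 1.0e-3                      # roughly 0.3 THz in free space (assumed)
k = 2 * np.pi / wavelength
ap_x = np.linspace(-5e-3, 5e-3, 200)     # 10 mm slit, uniformly illuminated
obs = np.stack([np.linspace(-20e-3, 20e-3, 101), np.full(101, 50e-3)], axis=1)
u = diffracted_field_2d(ap_x, np.ones_like(ap_x), obs, k)
print(np.abs(u).max())
```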
Transportation systems analyses: Volume 1: Executive Summary
NASA Astrophysics Data System (ADS)
1993-05-01
The principal objective of this study is to accomplish a systems engineering assessment of the nation's space transportation infrastructure. This analysis addresses the necessary elements to perform man delivery and return, cargo transfer, cargo delivery, payload servicing, and the exploration of the Moon and Mars. Specific elements analyzed, but not limited to, include the Space Exploration Initiative (SEI), the National Launch System (NLS), the current expendable launch vehicle (ELV) fleet, ground facilities, the Space Station Freedom (SSF), and other civil, military and commercial payloads. The performance of this study entails maintaining a broad perspective on the large number of transportation elements that could potentially comprise the U.S. space infrastructure over the next several decades. To perform this systems evaluation, top-level trade studies are conducted to enhance our understanding of the relationships between elements of the infrastructure. This broad 'infrastructure-level perspective' permits the identification of preferred infrastructures. Sensitivity analyses are performed to assure the credibility and usefulness of study results. This executive summary of the transportation systems analyses (TSM) semi-annual report addresses the SSF logistics resupply. Our analysis parallels the ongoing NASA SSF redesign effort. Therefore, there could be no SSF design to drive our logistics analysis. Consequently, the analysis attempted to bound the reasonable SSF design possibilities (and the subsequent transportation implications). No other strategy really exists until after a final decision is rendered on the SSF configuration.
Epitaxial relationship of semipolar s-plane (1101) InN grown on r-plane sapphire
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimitrakopulos, G. P.
2012-07-02
The heteroepitaxy of semipolar s-plane (1101) InN grown directly on r-plane sapphire by plasma-assisted molecular beam epitaxy is studied using transmission electron microscopy techniques. The epitaxial relationship is determined to be (1101)_InN ∥ (1102)_Al2O3, [1120]_InN ∥ [2021]_Al2O3, and [1102]_InN approximately ∥ [0221]_Al2O3, which ensures a 0.7% misfit along [1120]_InN. Two orientation variants are identified. Proposed geometrical factors contributing to the high density of basal stacking faults, partial dislocations, and sphalerite cubic pockets include the misfit accommodation and reduction, as well as the accommodation of lattice twist.
NASA Astrophysics Data System (ADS)
Alves Júnior, A. A.; Sokoloff, M. D.
2017-10-01
MCBooster is a header-only, C++11-compliant library that provides routines to generate and perform calculations on large samples of phase space Monte Carlo events. To achieve superior performance, MCBooster is capable of performing most of its calculations in parallel using CUDA- and OpenMP-enabled devices. MCBooster is built on top of the Thrust library and runs on Linux systems. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of performance in a variety of environments.
Optical Interconnection Via Computer-Generated Holograms
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang; Zhou, Shaomin
1995-01-01
Method of free-space optical interconnection developed for data-processing applications like parallel optical computing, neural-network computing, and switching in optical communication networks. In method, multiple optical connections between multiple sources of light in one array and multiple photodetectors in another array made via computer-generated holograms in electrically addressed spatial light modulators (ESLMs). Offers potential advantages of massive parallelism, high space-bandwidth product, high time-bandwidth product, low power consumption, low cross talk, and low time skew. Also offers advantage of programmability with flexibility of reconfiguration, including variation of strengths of optical connections in real time.
Sequential color video to parallel color video converter
NASA Technical Reports Server (NTRS)
1975-01-01
The engineering design, development, breadboard fabrication, test, and delivery of a breadboard field sequential color video to parallel color video converter is described. The converter was designed for use onboard a manned space vehicle to eliminate a flickering TV display picture and to reduce the weight and bulk of previous ground conversion systems.
Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A
2008-10-01
Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
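To illustrate the replica idea described above, here is a minimal sketch: synthesize many noise-only acquisitions with a measured receiver covariance, push each through the same linear reconstruction, and read off a pixel-wise noise map (from which SNR and g-factor follow). The tiny coil-combined FFT reconstruction below is a stand-in for SENSE/GRAPPA, used for illustration only.

```python
import numpy as np

def pseudo_multiple_replica(recon, kspace_shape, noise_cov, n_rep=256, rng=None):
    """Monte Carlo pixel-wise noise map for any linear reconstruction 'recon'."""
    rng = np.random.default_rng(rng)
    L = np.linalg.cholesky(noise_cov)                  # coil noise correlation from a prescan
    n_coils = noise_cov.shape[0]
    replicas = []
    for _ in range(n_rep):
        white = (rng.standard_normal((n_coils,) + kspace_shape)
                 + 1j * rng.standard_normal((n_coils,) + kspace_shape)) / np.sqrt(2)
        corr = np.tensordot(L, white, axes=1)          # correlated coil noise
        replicas.append(recon(corr))
    return np.std(np.stack(replicas), axis=0)          # pixel-wise noise estimate

def toy_recon(kspace):
    """Stand-in linear reconstruction: coil-combined inverse FFT (assumed, not SENSE/GRAPPA)."""
    imgs = np.fft.ifft2(kspace, axes=(-2, -1))
    return np.sqrt((np.abs(imgs) ** 2).sum(axis=0))

cov = np.array([[1.0, 0.3], [0.3, 1.0]])               # hypothetical 2-coil covariance
noise_map = pseudo_multiple_replica(toy_recon, (32, 32), cov, n_rep=128)
print(noise_map.mean())
```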
Third Conference on Artificial Intelligence for Space Applications, part 1
NASA Technical Reports Server (NTRS)
Denton, Judith S. (Compiler); Freeman, Michael S. (Compiler); Vereen, Mary (Compiler)
1987-01-01
The application of artificial intelligence to spacecraft and aerospace systems is discussed. Expert systems, robotics, space station automation, fault diagnostics, parallel processing, knowledge representation, scheduling, man-machine interfaces and neural nets are among the topics discussed.
Multi-aircraft dynamics, navigation and operation
NASA Astrophysics Data System (ADS)
Houck, Sharon Wester
Air traffic control stands on the brink of a revolution. Fifty years from now, we will look back and marvel that we ever flew by radio beacons and radar alone, much as we now marvel that early aviation pioneers flew by chronometer and compass alone. The microprocessor, satellite navigation systems, and air-to-air data links are the technical keys to this revolution. Many airports are near or at capacity now for at least portions of the day, making it clear that major increases in airport capacity will be required in order to support the projected growth in air traffic. This can be accomplished by adding airports, adding runways at existing airports, or increasing the capacity of the existing runways. Technology that allows use of ultra closely spaced (750 ft to 2500 ft) parallel approaches would greatly reduce the environmental impact of airport capacity increases. This research tackles the problem of multi-aircraft dynamics, navigation, and operation, specifically in the terminal area, and presents new findings on how ultra closely spaced parallel approaches may be accomplished. The underlying approach considers how multiple aircraft are flown in visual conditions, where spacing criteria are much less stringent, and then uses these data to study the critical parameters for collision avoidance during an ultra closely spaced parallel approach. Also included are experimental and analytical investigations of advanced guidance systems that are critical components of precision approaches. Together, these investigations form a novel approach to the design and analysis of parallel approaches for runways spaced less than 2500 ft apart. This research has concluded that it is technically feasible to reduce the required runway spacing during simultaneous instrument approaches to less than the current minimum of 3400 ft with the use of advanced navigation systems while maintaining the currently accepted levels of safety. On a smooth day with both pilots flying a tunnel-in-the-sky display and being guided by a Category I LAAS, it is technically feasible to reduce the runway spacing to 1100 ft. If a Category I LAAS and an "intelligent auto-pilot" that executes both the approach and emergency escape maneuver are used, the technically achievable required runway spacing is reduced to 750 ft. Both statements presume full aircraft state information, including position, velocity, and attitude, is being reliably passed between aircraft at a rate equal to or greater than one Hz.
Applications of Parallel Process HiMAP for Large Scale Multidisciplinary Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Potsdam, Mark; Rodriguez, David; Kwak, Dochay (Technical Monitor)
2000-01-01
HiMAP is a three level parallel middleware that can be interfaced to a large scale global design environment for code independent, multidisciplinary analysis using high fidelity equations. Aerospace technology needs are rapidly changing. Computational tools compatible with the requirements of national programs such as space transportation are needed. Conventional computational tools are inadequate for modern aerospace design needs. Advanced, modular computational tools are needed, such as those that incorporate the technology of massively parallel processors (MPP).
Options for Parallelizing a Planning and Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin D.
2011-01-01
Space missions have a growing interest in putting multi-core processors onboard spacecraft. For many missions processing power significantly slows operations. We investigate how continual planning and scheduling algorithms can exploit multi-core processing and outline different potential design decisions for a parallelized planning architecture. This organization of choices and challenges helps us with an initial design for parallelizing the CASPER planning system for a mesh multi-core processor. This work extends that presented at another workshop with some preliminary results.
Limpanuparb, Taweetham; Milthorpe, Josh; Rendell, Alistair P
2014-10-30
Use of the modern parallel programming language X10 for computing long-range Coulomb and exchange interactions is presented. By using X10, a partitioned global address space language with support for task parallelism and the explicit representation of data locality, the resolution of the Ewald operator can be parallelized in a straightforward manner, including use of both intranode and internode parallelism. We evaluate four different schemes for dynamic load balancing of the integral calculation using X10's work-stealing runtime, and report performance results for long-range HF energy calculations of a large molecule with a high-quality basis set running on up to 1024 cores of a high-performance cluster machine. Copyright © 2014 Wiley Periodicals, Inc.
Parallel-In-Time For Moving Meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falgout, R. D.; Manteuffel, T. A.; Southworth, B.
2016-02-04
With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.
Airborne Precision Spacing for Dependent Parallel Operations Interface Study
NASA Technical Reports Server (NTRS)
Volk, Paul M.; Takallu, M. A.; Hoffler, Keith D.; Weiser, Jarold; Turner, Dexter
2012-01-01
This paper describes a usability study of proposed cockpit interfaces to support Airborne Precision Spacing (APS) operations for aircraft performing dependent parallel approaches (DPA). NASA has proposed an airborne system called Pair Dependent Speed (PDS) which uses their Airborne Spacing for Terminal Arrival Routes (ASTAR) algorithm to manage spacing intervals. Interface elements were designed to facilitate the input of APS-DPA spacing parameters to ASTAR, and to convey PDS system information to the crew deemed necessary and/or helpful to conduct the operation, including: target speed, guidance mode, target aircraft depiction, and spacing trend indication. In the study, subject pilots observed recorded simulations using the proposed interface elements in which the ownship managed assigned spacing intervals from two other arriving aircraft. Simulations were recorded using the Aircraft Simulation for Traffic Operations Research (ASTOR) platform, a medium-fidelity simulator based on a modern Boeing commercial glass cockpit. Various combinations of the interface elements were presented to subject pilots, and feedback was collected via structured questionnaires. The results of subject pilot evaluations show that the proposed design elements were acceptable, and that preferable combinations exist within this set of elements. The results also point to potential improvements to be considered for implementation in future experiments.
The extent of visual space inferred from perspective angles
Erkelens, Casper J.
2015-01-01
Retinal images are perspective projections of the visual environment. Perspective projections do not explain why we perceive perspective in 3-D space. Analysis of underlying spatial transformations shows that visual space is a perspective transformation of physical space if parallel lines in physical space vanish at finite distance in visual space. Perspective angles, i.e., the angle perceived between parallel lines in physical space, were estimated for rails of a straight railway track. Perspective angles were also estimated from pictures taken from the same point of view. Perspective angles between rails ranged from 27% to 83% of their angular size in the retinal image. Perspective angles prescribe the distance of vanishing points of visual space. All computed distances were shorter than 6 m. The shallow depth of a hypothetical space inferred from perspective angles does not match the depth of visual space, as it is perceived. Incongruity between the perceived shape of a railway line on the one hand and the experienced ratio between width and length of the line on the other hand is huge, but apparently so unobtrusive that it has remained unnoticed. The incompatibility between perspective angles and perceived distances casts doubt on evidence for a curved visual space that has been presented in the literature and was obtained from combining judgments of distances and angles with physical positions. PMID:26034567
Manyscale Computing for Sensor Processing in Support of Space Situational Awareness
NASA Astrophysics Data System (ADS)
Schmalz, M.; Chapman, W.; Hayden, E.; Sahni, S.; Ranka, S.
2014-09-01
Increasing image and signal data burden associated with sensor data processing in support of space situational awareness implies continuing computational throughput growth beyond the petascale regime. In addition to growing applications data burden and diversity, the breadth, diversity and scalability of high performance computing architectures and their various organizations challenge the development of a single, unifying, practicable model of parallel computation. Therefore, models for scalable parallel processing have exploited architectural and structural idiosyncrasies, yielding potential misapplications when legacy programs are ported among such architectures. In response to this challenge, we have developed a concise, efficient computational paradigm and software called Manyscale Computing to facilitate efficient mapping of annotated application codes to heterogeneous parallel architectures. Our theory, algorithms, software, and experimental results support partitioning and scheduling of application codes for envisioned parallel architectures, in terms of work atoms that are mapped (for example) to threads or thread blocks on computational hardware. Because of the rigor, completeness, conciseness, and layered design of our manyscale approach, application-to-architecture mapping is feasible and scalable for architectures at petascales, exascales, and above. Further, our methodology is simple, relying primarily on a small set of primitive mapping operations and support routines that are readily implemented on modern parallel processors such as graphics processing units (GPUs) and hybrid multi-processors (HMPs). In this paper, we overview the opportunities and challenges of manyscale computing for image and signal processing in support of space situational awareness applications. We discuss applications in terms of a layered hardware architecture (laboratory > supercomputer > rack > processor > component hierarchy). Demonstration applications include performance analysis and results in terms of execution time as well as storage, power, and energy consumption for bus-connected and/or networked architectures. The feasibility of the manyscale paradigm is demonstrated by addressing four principal challenges: (1) architectural/structural diversity, parallelism, and locality, (2) masking of I/O and memory latencies, (3) scalability of design as well as implementation, and (4) efficient representation/expression of parallel applications. Examples will demonstrate how manyscale computing helps solve these challenges efficiently on real-world computing systems.
Gao, Zhengguang; Liu, Hongzhan; Ma, Xiaoping; Lu, Wei
2016-11-10
Multi-hop parallel relaying is considered in a free-space optical (FSO) communication system deploying binary phase-shift keying (BPSK) modulation under the combined effects of a gamma-gamma (GG) distribution and misalignment fading. Based on the best path selection criterion, the cumulative distribution function (CDF) of this cooperative random variable is derived. Then the performance of this optical mesh network is analyzed in detail. A Monte Carlo simulation is also conducted to demonstrate the effectiveness of the results for the average bit error rate (ABER) and outage probability. The numerical results show that a smaller average transmitted optical power is needed to achieve the same ABER and outage probability when the multi-hop parallel network is used in FSO links. Furthermore, using a larger number of hops and cooperative paths improves the quality of the communication.
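A rough Monte Carlo sketch of best-path selection over multi-hop parallel FSO links with gamma-gamma turbulence appears below; pointing errors are omitted for brevity, and the modulation model (BPSK-SIM with BER = 0.5·erfc(√(SNR/2))), bottleneck-hop assumption, and all parameters are illustrative, not the paper's closed-form CDF analysis.

```python
import numpy as np
from math import erfc

def gamma_gamma(alpha, beta, size, rng):
    """Unit-mean gamma-gamma irradiance samples (product of two gamma variates)."""
    return rng.gamma(alpha, 1.0 / alpha, size) * rng.gamma(beta, 1.0 / beta, size)

def aber_best_path(snr0_db, n_paths=3, n_hops=2, alpha=4.2, beta=1.4, trials=20000, seed=1):
    rng = np.random.default_rng(seed)
    snr0 = 10 ** (snr0_db / 10)
    # Per-hop irradiances; a decode-and-forward path is limited by its weakest hop.
    I = gamma_gamma(alpha, beta, (trials, n_paths, n_hops), rng)
    path_snr = snr0 * (I.min(axis=2) ** 2)        # square-law (IM/DD) detection assumed
    best = path_snr.max(axis=1)                   # best path selection
    ber = np.array([0.5 * erfc(np.sqrt(s / 2)) for s in best])
    return ber.mean()

for snr_db in (10, 20, 30):
    print(snr_db, aber_best_path(snr_db))         # ABER drops as more power is transmitted
```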
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hepburn, I.; De Schutter, E., E-mail: erik@oist.jp; Theoretical Neurobiology & Neuroengineering, University of Antwerp, Antwerp 2610
Spatial stochastic molecular simulations in biology are limited by the intense computation required to track molecules in space either in a discrete time or discrete space framework, which has led to the development of parallel methods that can take advantage of the power of modern supercomputers in recent years. We systematically test suggested components of stochastic reaction-diffusion operator splitting in the literature and discuss their effects on accuracy. We introduce an operator splitting implementation for irregular meshes that enhances accuracy with minimal performance cost. We test a range of models in small-scale MPI simulations, from simple diffusion models to realistic biological models, and find that multi-dimensional geometry partitioning is an important consideration for optimum performance. We demonstrate performance gains of 1-3 orders of magnitude in the parallel implementation, with peak performance strongly dependent on model specification.
Toward Millions of File System IOPS on Low-Cost, Commodity Hardware
Zheng, Da; Burns, Randal; Szalay, Alexander S.
2013-01-01
We describe a storage system that removes I/O bottlenecks to achieve more than one million IOPS based on a user-space file abstraction for arrays of commodity SSDs. The file abstraction refactors I/O scheduling and placement for extreme parallelism and non-uniform memory and I/O. The system includes a set-associative, parallel page cache in the user space. We redesign page caching to eliminate CPU overhead and lock-contention in non-uniform memory architecture machines. We evaluate our design on a 32 core NUMA machine with four, eight-core processors. Experiments show that our design delivers 1.23 million 512-byte read IOPS. The page cache realizes the scalable IOPS of Linux asynchronous I/O (AIO) and increases user-perceived I/O performance linearly with cache hit rates. The parallel, set-associative cache matches the cache hit rates of the global Linux page cache under real workloads. PMID:24402052
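To illustrate the set-associative, per-set-locked page cache idea described above, here is a minimal sketch: pages hash to a small set, and each set carries its own lock and LRU order, so threads contend only within a set rather than on one global lock. Class and parameter names are illustrative assumptions, not the paper's implementation.

```python
import threading
from collections import OrderedDict

class SetAssociativeCache:
    """Toy set-associative page cache with per-set locking and per-set LRU eviction."""
    def __init__(self, n_sets=1024, ways=8):
        self.n_sets, self.ways = n_sets, ways
        self.sets = [OrderedDict() for _ in range(n_sets)]
        self.locks = [threading.Lock() for _ in range(n_sets)]

    def get(self, page_id, loader):
        s = hash(page_id) % self.n_sets
        with self.locks[s]:
            bucket = self.sets[s]
            if page_id in bucket:
                bucket.move_to_end(page_id)      # LRU hit
                return bucket[page_id]
            data = loader(page_id)                # miss: read the page from an SSD
            if len(bucket) >= self.ways:
                bucket.popitem(last=False)        # evict the LRU page in this set only
            bucket[page_id] = data
            return data

cache = SetAssociativeCache()
print(cache.get(42, lambda p: b"x" * 512)[:4])
```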
Use of gamma ray radiation to parallel the plates of a Fabry-Perot interferometer
NASA Technical Reports Server (NTRS)
Skinner, Wilbert R.; Hays, Paul B.; Anderson, Sally M.
1987-01-01
The use of gamma radiation to parallel the plates of a Fabry-Perot etalon is examined. The method for determining the etalon parallelism and the procedure for irradiating the posts are described. Changes in the effective gap of the etalon over its surface are utilized to measure the parallelism of the Fabry-Perot etalon. An example is given in which this technique is applied to an etalon with fused silica plates, 132 mm in diameter and coated with zinc sulfide and cryolite, with Zerodur spacers 2 cm in length. The effect of the irradiation of the posts on the thermal performance of the etalon is investigated.
Haptic spatial matching in near peripersonal space.
Kaas, Amanda L; Mier, Hanneke I van
2006-04-01
Research has shown that haptic spatial matching at intermanual distances over 60 cm is prone to large systematic errors. The error pattern has been explained by the use of reference frames intermediate between egocentric and allocentric coding. This study investigated haptic performance in near peripersonal space, i.e. at intermanual distances of 60 cm and less. Twelve blindfolded participants (six males and six females) were presented with two turn bars at equal distances from the midsagittal plane, 30 or 60 cm apart. Different orientations (vertical/horizontal or oblique) of the left bar had to be matched by adjusting the right bar to either a mirror symmetric (/ \\) or parallel (/ /) position. The mirror symmetry task can in principle be performed accurately in both an egocentric and an allocentric reference frame, whereas the parallel task requires an allocentric representation. Results showed that parallel matching induced large systematic errors which increased with distance. Overall error was significantly smaller in the mirror task. The task difference also held for the vertical orientation at 60 cm distance, even though this orientation required the same response in both tasks, showing a marked effect of task instruction. In addition, men outperformed women on the parallel task. Finally, contrary to our expectations, systematic errors were found in the mirror task, predominantly at 30 cm distance. Based on these findings, we suggest that haptic performance in near peripersonal space might be dominated by different mechanisms than those which come into play at distances over 60 cm. Moreover, our results indicate that both inter-individual differences and task demands affect task performance in haptic spatial matching. Therefore, we conclude that the study of haptic spatial matching in near peripersonal space might reveal important additional constraints for the specification of adequate models of haptic spatial performance.
Yu, Dongjun; Wu, Xiaowei; Shen, Hongbin; Yang, Jian; Tang, Zhenmin; Qi, Yong; Yang, Jingyu
2012-12-01
Membrane proteins are encoded by ~30% of the genome and play important roles in living organisms. Previous studies have revealed that membrane proteins' structures and functions show obvious cell organelle-specific properties. Hence, it is highly desirable to predict a membrane protein's subcellular location from the primary sequence, considering the extreme difficulties of membrane protein wet-lab studies. Although many models have been developed for predicting protein subcellular locations, only a few are specific to membrane proteins. Existing prediction approaches were constructed based on statistical machine learning algorithms with serial combination of multi-view features, i.e., different feature vectors are simply serially combined to form a super feature vector. However, such simple combination of features simultaneously increases the information redundancy, which can in turn deteriorate the final prediction accuracy. That is why prediction success rates in the serial super space were often found to be even lower than those in a single-view space. The purpose of this paper is to investigate a proper method for fusing multiple multi-view protein sequential features for subcellular location prediction. Instead of the serial strategy, we propose a novel parallel framework for fusing multiple membrane protein multi-view attributes that represents protein samples in complex spaces. We also propose generalized principal component analysis (GPCA) for feature reduction in the complex geometry. All the experimental results, obtained with different machine learning algorithms on benchmark membrane protein subcellular localization datasets, demonstrate that the newly proposed parallel strategy outperforms the traditional serial approach. We also demonstrate the efficacy of the parallel strategy on a soluble protein subcellular localization dataset, indicating that the parallel technique is flexible enough to suit other computational biology problems. The software and datasets are available at: http://www.csbio.sjtu.edu.cn/bioinf/mpsp.
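A rough sketch of the parallel-fusion idea follows: two feature views of each protein are combined as the real and imaginary parts of one complex vector (after zero-padding to equal length), and dimensionality reduction is then performed in the complex space. This illustrates the strategy only; the feature views, sizes, and the plain complex PCA below are assumptions, not the paper's GPCA code.

```python
import numpy as np

def parallel_fuse(view_a, view_b):
    """Combine two feature views as real + i*imaginary parts of one complex vector."""
    d = max(view_a.shape[1], view_b.shape[1])
    a = np.pad(view_a, ((0, 0), (0, d - view_a.shape[1])))
    b = np.pad(view_b, ((0, 0), (0, d - view_b.shape[1])))
    return a + 1j * b

def complex_pca(Z, n_components=10):
    """Project complex-valued samples onto the top eigenvectors of the Hermitian covariance."""
    Zc = Z - Z.mean(axis=0)
    cov = Zc.conj().T @ Zc / (Z.shape[0] - 1)
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return Zc @ top

rng = np.random.default_rng(0)
amino_comp = rng.random((100, 20))       # e.g., an amino-acid composition view (hypothetical)
pssm_feats = rng.random((100, 40))       # e.g., an evolutionary-profile view (hypothetical)
Z = parallel_fuse(amino_comp, pssm_feats)
print(complex_pca(Z, 5).shape)
```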
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-01-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310–323. doi: 10.1002/wcms.1220 PMID:26753008
Geometry of the perceptual space
NASA Astrophysics Data System (ADS)
Assadi, Amir H.; Palmer, Stephen; Eghbalnia, Hamid; Carew, John
1999-09-01
The concept of space and geometry varies across the subjects. Following Poincare, we consider the construction of the perceptual space as a continuum equipped with a notion of magnitude. The study of the relationships of objects in the perceptual space gives rise to what we may call perceptual geometry. Computational modeling of objects and investigation of their deeper perceptual geometrical properties (beyond qualitative arguments) require a mathematical representation of the perceptual space. Within the realm of such a mathematical/computational representation, visual perception can be studied as in the well-understood logic-based geometry. This, however, does not mean that one could reduce all problems of visual perception to their geometric counterparts. Rather, visual perception as reported by a human observer, has a subjective factor that could be analytically quantified only through statistical reasoning and in the course of repetitive experiments. Thus, the desire to experimentally verify the statements in perceptual geometry leads to an additional probabilistic structure imposed on the perceptual space, whose amplitudes are measured through intervention by human observers. We propose a model for the perceptual space and the case of perception of textured surfaces as a starting point for object recognition. To rigorously present these ideas and propose computational simulations for testing the theory, we present the model of the perceptual geometry of surfaces through an amplification of theory of Riemannian foliation in differential topology, augmented by statistical learning theory. When we refer to the perceptual geometry of a human observer, the theory takes into account the Bayesian formulation of the prior state of the knowledge of the observer and Hebbian learning. We use a Parallel Distributed Connectionist paradigm for computational modeling and experimental verification of our theory.
Functional and space programming.
Hayward, C
1988-01-01
In this article, the author expands the earlier stated case for functional and space programming based on objective evidence of user needs. It provides an in-depth examination of the logic and processes of programming as a continuum which precedes, then parallels, architectural design.
Canadian Space Launch: Exploiting Northern Latitudes For Efficient Space Launch
2015-04-01
... taxation and legislation that make Canada an attractive destination for commercial space companies. ... launches from sites north of the 35th parallel. There are 3 US-based launch facilities that conduct launch operations north ...
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry
1998-01-01
This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
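As a toy version of the kind of model described above, the sketch below predicts the wall-clock time of a directive-parallelized code as serial work plus parallel work divided by the processor count, plus fork/join and data-locality overhead terms. The overhead constants are placeholders, not values measured on the Origin2000 in the paper.

```python
def predicted_time(t_serial, parallel_fraction, n_procs,
                   fork_join_us=50e-6, remote_access_penalty=0.10):
    """Amdahl-like time model with directive fork/join cost and a ccNUMA locality penalty."""
    t_par = t_serial * parallel_fraction / n_procs
    t_seq = t_serial * (1.0 - parallel_fraction)
    overhead = n_procs * fork_join_us + remote_access_penalty * t_par
    return t_seq + t_par + overhead

for p in (1, 2, 4, 8, 16, 32):
    t = predicted_time(t_serial=10.0, parallel_fraction=0.95, n_procs=p)
    print(p, round(10.0 / t, 2))   # predicted speedup over the sequential run
```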
Families and nurses: building partnerships for growth and health.
Grace, J T
1995-05-01
The newborn's social and emotional development depends on a relationship in which attention, affect, and information are shared with the mother. Young, disadvantaged families may encounter characteristic difficulties in maintaining these social relationships. Partnerships for health between nurses and disadvantaged clients require sharing similar relationships, and they present parallel challenges.
NASA Technical Reports Server (NTRS)
Hooey, Becky Lee; Gore, Brian Francis; Mahlstedt, Eric; Foyle, David C.
2013-01-01
The objectives of the current research were to develop valid human performance models (HPMs) of approach and land operations; use these models to evaluate the impact of NextGen Closely Spaced Parallel Operations (CSPO) on pilot performance; and draw conclusions regarding flight deck display design and pilot-ATC roles and responsibilities for NextGen CSPO concepts. This document presents guidelines and implications for flight deck display designs and candidate roles and responsibilities. A companion document (Gore, Hooey, Mahlstedt, & Foyle, 2013) provides complete scenario descriptions and results including predictions of pilot workload, visual attention and time to detect off-nominal events.
Effects of ATC automation on precision approaches to closely spaced parallel runways
NASA Technical Reports Server (NTRS)
Slattery, R.; Lee, K.; Sanford, B.
1995-01-01
Improved navigational technology (such as the Microwave Landing System and the Global Positioning System) installed in modern aircraft will enable air traffic controllers to better utilize available airspace. Consequently, arrival traffic can fly approaches to parallel runways separated by smaller distances than are currently allowed. Previous simulation studies of advanced navigation approaches have found that controller workload is increased when there is a combination of aircraft that are capable of following advanced navigation routes and aircraft that are not. Research into Air Traffic Control automation at Ames Research Center has led to the development of the Center-TRACON Automation System (CTAS). The Final Approach Spacing Tool (FAST) is the component of the CTAS used in the TRACON area. The work in this paper examines, via simulation, the effects of FAST used for aircraft landing on closely spaced parallel runways. The simulation contained various combinations of aircraft, equipped and unequipped with advanced navigation systems. A set of simulations was run both manually and with an augmented set of FAST advisories to sequence aircraft, assign runways, and avoid conflicts. The results of the simulations are analyzed, measuring the airport throughput, aircraft delay, loss of separation, and controller workload.
Advances in locally constrained k-space-based parallel MRI.
Samsonov, Alexey A; Block, Walter F; Arunachalam, Arjun; Field, Aaron S
2006-02-01
In this article, several theoretical and methodological developments regarding k-space-based, locally constrained parallel MRI (pMRI) reconstruction are presented. A connection between Parallel MRI with Adaptive Radius in k-Space (PARS) and GRAPPA methods is demonstrated. The analysis provides a basis for unified treatment of both methods. Additionally, a weighted PARS reconstruction is proposed, which may absorb different weighting strategies for improved image reconstruction. Next, a fast and efficient method for pMRI reconstruction of data sampled on non-Cartesian trajectories is described. In the new technique, the computational burden associated with the numerous matrix inversions in the original PARS method is drastically reduced by limiting direct calculation of reconstruction coefficients to only a few reference points. The rest of the coefficients are found by interpolating between the reference sets, which is possible due to the similar configuration of points participating in reconstruction for highly symmetric trajectories, such as radial and spirals. As a result, the time requirements are drastically reduced, which makes it practical to use pMRI with non-Cartesian trajectories in many applications. The new technique was demonstrated with simulated and actual data sampled on radial trajectories. Copyright 2006 Wiley-Liss, Inc.
I/O Parallelization for the Goddard Earth Observing System Data Assimilation System (GEOS DAS)
NASA Technical Reports Server (NTRS)
Lucchesi, Rob; Sawyer, W.; Takacs, L. L.; Lyster, P.; Zero, J.
1998-01-01
The National Aeronautics and Space Administration (NASA) Data Assimilation Office (DAO) at the Goddard Space Flight Center (GSFC) has developed the GEOS DAS, a data assimilation system that provides production support for NASA missions and will support NASA's Earth Observing System (EOS) in the coming years. The GEOS DAS will be used to provide background fields of meteorological quantities to EOS satellite instrument teams for use in their data algorithms as well as providing assimilated data sets for climate studies on decadal time scales. The DAO has been involved in prototyping parallel implementations of the GEOS DAS for a number of years and is now embarking on an effort to convert the production version from shared-memory parallelism to distributed-memory parallelism using the portable Message-Passing Interface (MPI). The GEOS DAS consists of two main components, an atmospheric General Circulation Model (GCM) and a Physical-space Statistical Analysis System (PSAS). The GCM operates on data that are stored on a regular grid while PSAS works with observational data that are scattered irregularly throughout the atmosphere. As a result, the two components have different data decompositions. The GCM is decomposed horizontally as a checkerboard with all vertical levels of each box existing on the same processing element (PE). The dynamical core of the GCM can also operate on a rotated grid, which requires communication-intensive grid transformations during GCM integration. PSAS groups observations on PEs in a more irregular and dynamic fashion.
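A small sketch of the checkerboard decomposition described for the GCM: the horizontal (lat, lon) grid is split into a 2D grid of boxes, one box per PE, and every box keeps all vertical levels locally. Grid and PE-grid sizes below are illustrative assumptions, not the GEOS DAS configuration.

```python
import numpy as np

def checkerboard_decompose(n_lat, n_lon, pe_rows, pe_cols):
    """Return, for each PE rank, the (lat, lon, level) slices it owns."""
    lat_edges = np.linspace(0, n_lat, pe_rows + 1, dtype=int)
    lon_edges = np.linspace(0, n_lon, pe_cols + 1, dtype=int)
    layout = {}
    for i in range(pe_rows):
        for j in range(pe_cols):
            pe = i * pe_cols + j
            layout[pe] = (slice(lat_edges[i], lat_edges[i + 1]),
                          slice(lon_edges[j], lon_edges[j + 1]),
                          slice(None))              # all vertical levels stay on this PE
    return layout

layout = checkerboard_decompose(n_lat=91, n_lon=144, pe_rows=4, pe_cols=8)
field = np.zeros((91, 144, 70))                     # lat x lon x level (toy sizes)
print(len(layout), field[layout[0]].shape)          # 32 PEs, one local box each
```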
NASA Astrophysics Data System (ADS)
Naseralavi, S. S.; Salajegheh, E.; Fadaee, M. J.; Salajegheh, J.
2014-06-01
This paper presents a technique for damage detection in structures under unknown periodic excitations using the transient displacement response. The method is capable of identifying the damage parameters without finding the input excitations. We first define the concept of displacement space as a linear space in which each point represents displacements of structure under an excitation and initial condition. Roughly speaking, the method is based on the fact that structural displacements under free and forced vibrations are associated with two parallel subspaces in the displacement space. Considering this novel geometrical viewpoint, an equation called kernel parallelization equation (KPE) is derived for damage detection under unknown periodic excitations and a sensitivity-based algorithm for solving KPE is proposed accordingly. The method is evaluated via three case studies under periodic excitations, which confirm the efficiency of the proposed method.
ERIC Educational Resources Information Center
Laszlo, Sarah; Plaut, David C.
2012-01-01
The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between…
Mountain Plains Learning Experience Guide: Radio and T.V. Repair. Course: A.C. Circuits.
ERIC Educational Resources Information Center
Hoggatt, P.; And Others
One of four individualized courses included in a radio and television repair curriculum, this course focuses on alternating current relationships and computations, transformers, power supplies, series and parallel resistive-reactive circuits, and series and parallel resonance. The course is comprised of eight units: (1) Introduction to Alternating…
Wegener, Mai
2009-01-01
The article traces the rise and fall of "psychophysical parallelism" - which was the most advanced scientific formulation of the mind/body relationship in the second half of the 19th century - through an interdisciplinary and broad geographical spectrum. It sheds light on the extremely different positions that rallied round this discursive figure, ranging from Fechner, Hering, Mach, Wundt, Bain, Hughlings Jackson, and Taine to Freud and Saussure. The article develops the thesis that psychophysical parallelism functioned as a 'hot zone' within and a symptom of the changes in the order of sciences at that time. Against that background, the criticism of psychophysical parallelism which became prominent around 1900 (Stumpf, Busse, Bergson, Mauthner et al.) indicates the cooling of this 'hot zone' and the establishment of a new order within the scientific disciplines. The article pays particular attention to the position of this figure in contemporaneous language theories. Its basic assumption is that the relationship between the body and the psyche is itself constituted by language.
The formation of quasi-parallel shocks. [in space, solar and astrophysical plasmas
NASA Technical Reports Server (NTRS)
Cargill, Peter J.
1991-01-01
In a collisionless plasma, the coupling between a piston and the plasma must take place through either laminar or turbulent electromagnetic fields. Of the three types of coupling (laminar, Larmor and turbulent), shock formation in the parallel regime is dominated by the latter and in the quasi-parallel regime by a combination of all three, depending on the piston. In the quasi-perpendicular regime, there is usually a good separation between piston and shock. This is not true in the quasi-parallel and parallel regime. Hybrid numerical simulations for hot plasma pistons indicate that when the electrons are hot, a shock forms, but does not cleanly decouple from the piston. For hot ion pistons, no shock forms in the parallel limit: in the quasi-parallel case, a shock forms, but there is severe contamination from hot piston ions. These results suggest that the properties of solar and astrophysical shocks, such as particle acceleration, cannot be readily separated from their driving mechanism.
Air Traffic and Operational Data on Selected US Airports with Parallel Runways
NASA Technical Reports Server (NTRS)
Doyle, Thomas M.; McGee, Frank G.
1998-01-01
This report presents information on a number of airports in the country with parallel runways and focuses on those that have at least one pair of parallel runways closer than 4300 ft. Information contained in the report describes the airport's current operational activity as obtained through contact with the facility and from FAA air traffic tower activity data for FY 1997. The primary reason for this document is to provide a single source of information for research to determine airports where Airborne Information for Lateral Spacing (AILS) technology may be applicable.
Summary results from long-term wake turbulence measurements at San Francisco International Airport
DOT National Transportation Integrated Search
2004-07-01
This report summarizes the results of an extensive wake turbulence data collection program at the San Francisco International Airport (SFO). Most of the landings at SFO are conducted on closely spaced parallel runways that are spaced 750 feet bet...
From chemotaxis to the cognitive map: The function of olfaction
Jacobs, Lucia F.
2012-01-01
A paradox of vertebrate brain evolution is the unexplained variability in the size of the olfactory bulb (OB), in contrast to other brain regions, which scale predictably with brain size. Such variability appears to be the result of selection for olfactory function, yet there is no obvious concordance that would predict the causal relationship between OB size and behavior. This discordance may derive from assuming the primary function of olfaction is odorant discrimination and acuity. If instead the primary function of olfaction is navigation, i.e., predicting odorant distributions in time and space, variability in absolute OB size could be ascribed and explained by variability in navigational demand. This olfactory spatial hypothesis offers a single functional explanation to account for patterns of olfactory system scaling in vertebrates, the primacy of olfaction in spatial navigation, even in visual specialists, and proposes an evolutionary scenario to account for the convergence in olfactory structure and function across protostomes and deuterostomes. In addition, the unique percepts of olfaction may organize odorant information in a parallel map structure. This could have served as a scaffold for the evolution of the parallel map structure of the mammalian hippocampus, and possibly the arthropod mushroom body, and offers an explanation for similar flexible spatial navigation strategies in arthropods and vertebrates. PMID:22723365
Parallel optoelectronic trinary signed-digit division
NASA Astrophysics Data System (ADS)
Alam, Mohammad S.
1999-03-01
The trinary signed-digit (TSD) number system has been found to be very useful for parallel addition and subtraction of any arbitrary length operands in constant time. Using the TSD addition and multiplication modules as the basic building blocks, we develop an efficient algorithm for performing parallel TSD division in constant time. The proposed division technique uses one TSD subtraction and two TSD multiplication steps. An optoelectronic correlator based architecture is suggested for implementation of the proposed TSD division algorithm, which fully exploits the parallelism and high processing speed of optics. An efficient spatial encoding scheme is used to ensure better utilization of space bandwidth product of the spatial light modulators used in the optoelectronic implementation.
Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Hao-Qiang; anMey, Dieter; Hatay, Ferhat F.
2003-01-01
Clusters of SMP (Symmetric Multi-Processors) nodes provide support for a wide range of parallel programming paradigms. The shared address space within each node is suitable for OpenMP parallelization. Message passing can be employed within and across the nodes of a cluster. Multiple levels of parallelism can be achieved by combining message passing and OpenMP parallelization. Which programming paradigm is the best will depend on the nature of the given problem, the hardware components of the cluster, the network, and the available software. In this study we compare the performance of different implementations of the same CFD benchmark application, using the same numerical algorithm but employing different programming paradigms.
Fajardo, Alex
2016-05-01
The study of scaling examines the relative dimensions of diverse organismal traits. Understanding whether global scaling patterns are paralleled within species is key to identify causal factors of universal scaling. I examined whether the foliage-stem (Corner's rules), the leaf size-number, and the leaf mass-leaf area scaling relationships remained invariant and isometric with elevation in a wide-distributed treeline species in the southern Chilean Andes. Mean leaf area, leaf mass, leafing intensity, and twig cross-sectional area were determined for 1-2 twigs of 8-15 Nothofagus pumilio individuals across four elevations (including treeline elevation) and four locations (from central Chile at 36°S to Tierra del Fuego at 54°S). Mixed effects models were fitted to test whether the interaction term between traits and elevation was nonsignificant (invariant). The leaf-twig cross-sectional area and the leaf mass-leaf area scaling relationships were isometric (slope = 1) and remained invariant with elevation, whereas the leaf size-number (i.e., leafing intensity) scaling was allometric (slope ≠ -1) and showed no variation with elevation. Leaf area and leaf number were consistently negatively correlated across elevation. The scaling relationships examined in the current study parallel those seen across species. It is plausible that the explanation of intraspecific scaling relationships, as trait combinations favored by natural selection, is the same as those invoked to explain across species patterns. Thus, it is very likely that the global interspecific Corner's rules and other leaf-leaf scaling relationships emerge as the aggregate of largely parallel intraspecific patterns. © 2016 Botanical Society of America.
DSPCP: A Data Scalable Approach for Identifying Relationships in Parallel Coordinates.
Nguyen, Hoa; Rosen, Paul
2018-03-01
Parallel coordinates plots (PCPs) are a well-studied technique for exploring multi-attribute datasets. In many situations, users find them a flexible method to analyze and interact with data. Unfortunately, using PCPs becomes challenging as the number of data items grows large or multiple trends within the data mix in the visualization. The resulting overdraw can obscure important features. A number of modifications to PCPs have been proposed, including using color, opacity, smooth curves, frequency, density, and animation to mitigate this problem. However, these modified PCPs tend to have their own limitations in the kinds of relationships they emphasize. We propose a new data scalable design for representing and exploring data relationships in PCPs. The approach exploits the point/line duality property of PCPs and a local linear assumption of data to extract and represent relationship summarizations. This approach simultaneously shows relationships in the data and the consistency of those relationships. Our approach supports various visualization tasks, including mixed linear and nonlinear pattern identification, noise detection, and outlier detection, all in large data. We demonstrate these tasks on multiple synthetic and real-world datasets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
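To make the two decompositions named above concrete, here is a toy serial Python sketch of the bookkeeping only: particles are bucketed into spatial subdomains, each subdomain's particles are split across workers, and particles that cross subdomain boundaries are "communicated" afterwards. The structure and names are illustrative assumptions, not MONACO's message-passing implementation.

```python
import random

NUM_DOMAINS = 4          # spatial subdomains along x in [0, 1)
WORKERS_PER_DOMAIN = 2   # extra workers sharing one subdomain's particles

def domain_of(x: float) -> int:
    return min(int(x * NUM_DOMAINS), NUM_DOMAINS - 1)

particles = [{"x": random.random(), "vx": random.uniform(-0.05, 0.05)}
             for _ in range(1000)]

# Spatial decomposition: bucket particles by subdomain.
domains = [[] for _ in range(NUM_DOMAINS)]
for p in particles:
    domains[domain_of(p["x"])].append(p)

for step in range(10):
    outgoing = [[] for _ in range(NUM_DOMAINS)]   # "messages" between subdomains
    for d, plist in enumerate(domains):
        # Particle decomposition: round-robin split across workers in this domain.
        shards = [plist[w::WORKERS_PER_DOMAIN] for w in range(WORKERS_PER_DOMAIN)]
        for shard in shards:
            for p in shard:
                p["x"] = (p["x"] + p["vx"]) % 1.0  # advance with periodic boundary
        # Particles that left this subdomain must be sent elsewhere.
        stay, leave = [], []
        for p in plist:
            (stay if domain_of(p["x"]) == d else leave).append(p)
        domains[d] = stay
        outgoing[d] = leave
    # Deliver migrated particles; which messages exist depends on the particle
    # trajectories, i.e. the communication pattern is non-deterministic.
    for leave in outgoing:
        for p in leave:
            domains[domain_of(p["x"])].append(p)
```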
Similarity of the Multidimensional Space Defined by Parallel Forms of a Mathematics Test.
ERIC Educational Resources Information Center
Reckase, Mark D.; And Others
The purpose of the paper is to determine whether test forms of the Mathematics Usage Test (AAP Math) of the American College Testing Program are parallel in a multidimensional sense. The AAP Math is an achievement test of mathematics concepts acquired by high school students by the end of their third year. To determine the dimensionality of the…
Bellucci, Michael A; Coker, David F
2011-07-28
We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower level genetic programs are used to optimize coevolving populations in parallel while the higher level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increase the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency are tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in the gas phase and in protic solvent. © 2011 American Institute of Physics.
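The two-level idea can be sketched in a few lines of Python: several lower-level populations evolve toward a target while an upper level adapts each population's mutation setting from its recent fitness gain. This is a toy illustration under our own assumptions (a simple least-squares fitness and Gaussian mutation), not the PMLGP's empirical valence bond fitting.

```python
import random

TARGET = [0.3, -1.2, 2.5]                     # stand-in for "ab initio" data

def fitness(x):                               # lower is better
    return sum((a - b) ** 2 for a, b in zip(x, TARGET))

def evolve(pop, sigma):
    """One generation: truncation selection + Gaussian mutation."""
    scored = sorted(pop, key=fitness)
    parents = scored[: len(pop) // 2]
    children = [[g + random.gauss(0.0, sigma) for g in random.choice(parents)]
                for _ in range(len(pop) - len(parents))]
    return parents + children

populations = [[[random.uniform(-5, 5) for _ in TARGET] for _ in range(30)]
               for _ in range(4)]
sigmas = [0.5] * len(populations)

for generation in range(200):
    gains = []
    for i, pop in enumerate(populations):
        before = min(fitness(x) for x in pop)
        populations[i] = evolve(pop, sigmas[i])
        after = min(fitness(x) for x in populations[i])
        gains.append(before - after)
    # Higher level: populations that improved least move their mutation step
    # toward (a jittered copy of) the best-improving population's step.
    best = max(range(len(gains)), key=lambda i: gains[i])
    for i in range(len(sigmas)):
        if i != best:
            sigmas[i] = max(1e-3, 0.5 * (sigmas[i] + sigmas[best]) *
                            random.uniform(0.9, 1.1))

print(min(min(fitness(x) for x in pop) for pop in populations))
```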
An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-03-08
Poisson disk sampling plays an important role in a variety of visual computing applications, owing to its desirable statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on its surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing dart throwing. Rather than following the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
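A serial Python emulation of the priority rule on the flat unit square (the paper works intrinsically on curved surfaces) may help make the conflict resolution concrete: a candidate is accepted only when no accepted sample conflicts with it and no conflicting, still-undecided candidate has a higher priority. The radius and candidate count below are arbitrary assumptions.

```python
import random

R = 0.05                                   # Poisson disk radius
candidates = [(random.random(), random.random(), random.random())
              for _ in range(1000)]        # (x, y, priority)

def conflict(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 < R * R

accepted, pending = [], candidates
while pending:
    next_pending = []
    for c in pending:
        if any(conflict(c, a) for a in accepted):
            continue                       # rejected: overlaps an accepted sample
        higher = [o for o in pending
                  if o is not c and conflict(c, o) and o[2] > c[2]]
        if higher:
            next_pending.append(c)         # defer: a higher-priority rival is undecided
        else:
            accepted.append(c)             # locally highest priority: accept
    pending = next_pending

print(len(accepted), "samples accepted")
```

In the parallel setting each sweep over the pending candidates is done by many threads at once; the random priorities guarantee that the outcome is independent of processing order.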
Efficient Thread Labeling for Monitoring Programs with Nested Parallelism
NASA Astrophysics Data System (ADS)
Ha, Ok-Kyoon; Kim, Sun-Sook; Jun, Yong-Kee
It is difficult and cumbersome to detect data races that occur in an execution of parallel programs. Any on-the-fly race detection technique using Lamport's happened-before relation needs a thread labeling scheme that generates unique identifiers maintaining logical concurrency information for the parallel threads. NR labeling is an efficient thread labeling scheme for the fork-join program model with nested parallelism, because its efficiency depends only on the nesting depth for every fork and join operation. This paper presents an improved NR labeling, called e-NR labeling, in which every thread generates its label by inheriting the pointer to its ancestor list from the parent threads or by updating the pointer in a constant amount of time and space. This labeling is more efficient than NR labeling, because its efficiency does not depend on the nesting depth for every fork and join operation. Some experiments were performed with OpenMP programs having nesting depths of three or four and maximum parallelism varying from 10,000 to 1,000,000. The results show that e-NR is 5 times faster than NR labeling and 4.3 times faster than OS labeling in the average time for creating and maintaining the thread labels. In the average space required for labeling, it is 3.5 times smaller than NR labeling and 3 times smaller than OS labeling.
Plains tectonism on Venus: The deformation belts of Lavinia Planitia
NASA Technical Reports Server (NTRS)
Squyres, Steven W.; Jankowski, David G.; Simons, Mark; Solomon, Sean C.; Hager, Bradford H.; Mcgill, George E.
1993-01-01
High-resolution radar images from the Magellan spacecraft have revealed the first details of the morphology of the Lavinia Planitia region of Venus. A number of geologic units can be distinguished, including volcanic plains units with a range of ages. Transecting these plains over much of the Lavinia region are two types of generally orthogonal features that we interpret to be compressional wrinkle ridges and extensional grooves. The dominant tectonic features of Lavinia are broad elevated belts of intense deformation that transect the plains with complex geometry. They are many tens to a few hundred kilometers wide, as much as 1000 km long, and elevated hundreds of meters above the surrounding plains. Two classes of deformation belts are seen in the Lavinia region. 'Ridge belts' are composed of parallel ridges, each a few hundred meters in elevation, that we interpret to be folds. Typical fold spacings are 5-10 km. 'Fracture belts' are dominated instead by intense faulting, with faults in some instances paired to form narrow grabens. There is also some evidence for modest amounts of horizontal shear distributed across both ridge and fracture belts. Crosscutting relationships among the belts show there to be a range in belt ages. In western Lavinia, in particular, many ridge and fracture belts appear to bear a relationship to the much smaller wrinkle ridges and grooves on the surrounding plains: ridge morphology tends to dominate belts that lie more nearly parallel to local plains wrinkle ridges, and fracture morphology tends to dominate belts that lie more nearly parallel to local plains grooves. We use simple models to explore the formation of ridge and fracture belts. We show that convective motions in the mantle can couple to the crust to cause horizontal stresses of a magnitude sufficient to induce the formation of deformation belts like those observed in Lavinia. We also use the small-scale wavelengths of deformation observed within individual ridge belts to place an approximate lower limit on the venusian thermal gradient in the Lavinia region at the time of deformation.
Search and Determine Integrated Environment (SADIE)
NASA Astrophysics Data System (ADS)
Sabol, C.; Schumacher, P.; Segerman, A.; Coffey, S.; Hoskins, A.
2012-09-01
A new and integrated high performance computing software applications package called the Search and Determine Integrated Environment (SADIE) is being jointly developed and refined by the Air Force and Naval Research Laboratories (AFRL and NRL) to automatically resolve uncorrelated tracks (UCTs) and build a more complete space object catalog for improved Space Situational Awareness (SSA). The motivation for SADIE is to respond to very challenging needs identified and guidance received from Air Force Space Command (AFSPC) and other senior leaders to develop this technology to support the evolving Joint Space Operations Center (JSpOC) and Alternate Space Control Center (ASC2)-Dahlgren. The JSpOC and JMS SSA mission requirements and threads flow down from the United States Strategic Command (USSTRATCOM). The SADIE suite includes modification and integration of legacy applications and software components that include Search And Determine (SAD), Satellite Identification (SID), and Parallel Catalog (Parcat), as well as other utilities and scripts to enable end-to-end catalog building and maintenance in a parallel processing environment. SADIE is being developed to handle large catalog building challenges in all orbit regimes and includes the automatic processing of radar, fence, and optical data. Real data results are provided for the processing of Air Force Space Surveillance System fence observations and for the processing of Space Surveillance Telescope optical data.
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Labarta, Jesus; Gimenez, Judit
2004-01-01
With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors, parallel programming techniques have evolved that support parallelism beyond a single level. When comparing the performance of applications based on different programming paradigms, it is important to differentiate between the influence of the programming model itself and other factors, such as implementation-specific behavior of the operating system (OS) or architectural issues. Rewriting a large scientific application in order to employ a new programming paradigm is usually a time consuming and error prone task. Before embarking on such an endeavor it is important to determine that there is really a gain that would not be possible with the current implementation. A detailed performance analysis is crucial to clarify these issues. The multilevel programming paradigms considered in this study are hybrid MPI/OpenMP, MLP, and nested OpenMP. The hybrid MPI/OpenMP approach is based on using MPI [7] for the coarse grained parallelization and OpenMP [9] for fine grained loop level parallelism. The MPI programming paradigm assumes a private address space for each process. Data is transferred by explicitly exchanging messages via calls to the MPI library. This model was originally designed for distributed memory architectures but is also suitable for shared memory systems. The second paradigm under consideration is MLP, which was developed by Taft. The approach is similar to MPI/OpenMP, using a mix of coarse grain process level parallelization and loop level OpenMP parallelization. As is the case with MPI, a private address space is assumed for each process. The MLP approach was developed for ccNUMA architectures and explicitly takes advantage of the availability of shared memory. A shared memory arena which is accessible by all processes is required. Communication is done by reading from and writing to the shared memory.
Parallel consistent labeling algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samal, A.; Henderson, T.
Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms. Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order space complexity as the earlier algorithms. In this paper, they give parallel algorithms for solving node and arc consistency. They show that any parallel algorithm for enforcing arc consistency in the worst case must have O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. They give several parallel algorithms to do arc consistency. It is also shown that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented.
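For readers unfamiliar with what "enforcing arc consistency" means, the following sequential AC-3-style Python sketch illustrates the operation (the paper's algorithms are AC-4-based and parallel; this is not them). The example problem is our own.

```python
from collections import deque

# Sequential AC-3-style arc consistency. domains maps variable -> set of labels,
# constraints maps (x, y) -> predicate that must hold between their labels.

def ac3(domains, constraints):
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # Remove labels of x that have no supporting label in y's domain.
        removed = {a for a in domains[x]
                   if not any(pred(a, b) for b in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False               # wipe-out: no consistent labeling
            # Revisit arcs pointing at x, since x's domain shrank.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True

# Example: three variables that must all take different values from {1, 2}.
doms = {"A": {1, 2}, "B": {1, 2}, "C": {1, 2}}
neq = lambda a, b: a != b
cons = {(p, q): neq for p in doms for q in doms if p != q}
print(ac3(doms, cons), doms)   # arc consistent, even though no global solution exists
```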
ERIC Educational Resources Information Center
Kerrigan, Monica Reid
2014-01-01
This convergent parallel design mixed methods case study of four community colleges explores the relationship between organizational capacity and implementation of data-driven decision making (DDDM). The article also illustrates purposive sampling using replication logic for cross-case analysis and the strengths and weaknesses of quantitizing…
ERIC Educational Resources Information Center
Bevan, Samantha J.; Chan, Cecilia W. L.; Tanner, Julian A.
2014-01-01
Although there is increasing evidence for a relationship between courses that emphasize student engagement and achievement of student deep learning, there is a paucity of quantitative comparative studies in a biochemistry and molecular biology context. Here, we present a pedagogical study in two contrasting parallel biochemistry introductory…
An efficient decoding for low density parity check codes
NASA Astrophysics Data System (ADS)
Zhao, Ling; Zhang, Xiaolin; Zhu, Manjie
2009-12-01
Low density parity check (LDPC) codes are a class of forward-error-correction codes. They are among the best-known codes capable of achieving low bit error rates (BER) approaching Shannon's capacity limit. Recently, LDPC codes have been adopted by the European Digital Video Broadcasting (DVB-S2) standard, and have also been proposed for the emerging IEEE 802.16 fixed and mobile broadband wireless-access standard. The Consultative Committee for Space Data Systems (CCSDS) has also recommended using LDPC codes in deep-space and near-Earth communications. It is obvious that LDPC codes will be widely used in wired and wireless communication, magnetic recording, optical networking, DVB, and other fields in the near future. Efficient hardware implementation of LDPC codes is of great interest since LDPC codes are being considered for a wide range of applications. This paper presents an efficient partially parallel decoder architecture suited for quasi-cyclic (QC) LDPC codes, using the belief propagation algorithm for decoding. Algorithmic transformation and architectural level optimization are incorporated to reduce the critical path. First, the check matrix of the LDPC code is analyzed to find the relationship between the row weight and the column weight. The sharing level of the check node updating units (CNU) and the variable node updating units (VNU) is then determined according to this relationship. Next, the CNU and the VNU are rearranged and divided into several smaller parts; with the help of some assistant logic circuitry, these smaller parts can be grouped into CNUs during check node update processing and into VNUs during variable node update processing. These smaller parts are called node update kernel units (NKU) and the assistant logic circuits are called node update auxiliary units (NAU). With the NAUs' help, the two steps of the iteration are completed by the NKUs, which brings a great reduction in hardware resources. Meanwhile, efficient techniques have been developed to reduce the computation delay of the node processing units and to minimize hardware overhead for parallel processing. This method may be applied not only to regular LDPC codes, but also to irregular ones. Based on the proposed architectures, a (7493, 6096) irregular QC-LDPC code decoder is described using the Verilog hardware description language and implemented on an Altera field programmable gate array (FPGA) StratixII EP2S130. The implementation results show that over 20% of logic core size can be saved compared with conventional partially parallel decoder architectures, without any performance degradation. If the decoding clock is 100 MHz, the proposed decoder can achieve a maximum (source data) decoding throughput of 133 Mb/s at 18 iterations.
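To make the check-node / variable-node roles concrete, here is a much simpler software bit-flipping decoder over a tiny, made-up parity-check matrix. It is only an illustration of how checks and variables interact; it is not the partially parallel belief-propagation hardware architecture described above, and the matrix is not the (7493, 6096) code.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],          # toy parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=int)

def bit_flip_decode(r, H, max_iter=20):
    x = r.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2               # check-node side: which checks fail
        if not syndrome.any():
            return x                        # all parity checks satisfied
        # Variable-node side: count failed checks touching each bit,
        # then flip the bit(s) involved in the most failures.
        failures = syndrome @ H
        x[failures == failures.max()] ^= 1
    return x

codeword = np.zeros(6, dtype=int)           # the all-zero word is always a codeword
received = codeword.copy()
received[2] ^= 1                             # inject a single bit error
print(bit_flip_decode(received, H))          # recovers the all-zero codeword
```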
Advanced missions safety. Volume 3: Appendices. Part 1: Space shuttle rescue capability
NASA Technical Reports Server (NTRS)
1972-01-01
The space shuttle rescue capability is analyzed as a part of the advanced mission safety study. The subjects discussed are: (1) mission evaluation, (2) shuttle configurations and performance, (3) performance of shuttle-launched tug system, (4) multiple pass grazing reentry from lunar orbit, (5) ground launched ascent and rendezvous time, (6) cost estimates, and (7) parallel-burn space shuttle configuration.
Design of a 6-DOF upper limb rehabilitation exoskeleton with parallel actuated joints.
Chen, Yanyan; Li, Ge; Zhu, Yanhe; Zhao, Jie; Cai, Hegao
2014-01-01
In this paper, a 6-DOF wearable upper limb exoskeleton with parallel actuated joints which perfectly mimics human motions is proposed. The upper limb exoskeleton assists the movement of physically weak people. Compared with the existing upper limb exoskeletons which are mostly designed using a serial structure with large movement space but small stiffness and poor wearable ability, a prototype for motion assistance based on human anatomy structure has been developed in our design. Moreover, the design adopts balls instead of bearings to save space, which simplifies the structure and reduces the cost of the mechanism. The proposed design also employs deceleration processes to ensure that the transmission ratio of each joint is coincident.
14 CFR 1214.802 - Relationship to Shuttle policy.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Relationship to Shuttle policy. 1214.802 Section 1214.802 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION SPACE FLIGHT Reimbursement for Spacelab Services § 1214.802 Relationship to Shuttle policy. Except as specifically noted, the...
14 CFR 1214.802 - Relationship to Shuttle policy.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Relationship to Shuttle policy. 1214.802 Section 1214.802 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION SPACE FLIGHT Reimbursement for Spacelab Services § 1214.802 Relationship to Shuttle policy. Except as specifically noted, the...
SPIRiT: Iterative Self-consistent Parallel Imaging Reconstruction from Arbitrary k-Space
Lustig, Michael; Pauly, John M.
2010-01-01
A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction is presented. It is a generalized reconstruction framework based on self-consistency. The reconstruction problem is formulated as an optimization that yields the solution most consistent with the calibration and acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off-resonance correction and regularization terms that appear in compressed sensing. Several iterative strategies to solve the posed reconstruction problem in both the image and k-space domains are presented. These are based on projection onto convex sets (POCS) and conjugate gradient (CG) algorithms. Phantom and in-vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and nonlinear ℓ1-wavelet regularization are also demonstrated. PMID:20665790
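The generic POCS skeleton referred to above can be sketched in a few lines of NumPy. The toy below alternates between re-imposing the acquired k-space samples and a simple image-domain constraint (real-valued, non-negative images); it does not use SPIRiT's calibration-kernel consistency operator, coil data, or priors, and the phantom and sampling mask are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_img = np.zeros((64, 64)); true_img[20:44, 20:44] = 1.0   # toy phantom
full_kspace = np.fft.fft2(true_img)

mask = rng.random((64, 64)) < 0.4            # keep ~40% of k-space samples
acquired = full_kspace * mask

img = np.zeros((64, 64), dtype=complex)      # initial estimate
for _ in range(100):
    # Projection 1: data consistency -- overwrite known k-space locations.
    k = np.fft.fft2(img)
    k[mask] = acquired[mask]
    img = np.fft.ifft2(k)
    # Projection 2: image constraint -- real and non-negative.
    img = np.clip(img.real, 0.0, None).astype(complex)

print(np.abs(img.real - true_img).mean())    # residual error of the toy recon
```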
NASA Astrophysics Data System (ADS)
Simioni, M.; Bedin, L. R.; Aparicio, A.; Piotto, G.; Milone, A. P.; Nardiello, D.; Anderson, J.; Bellini, A.; Brown, T. M.; Cassisi, S.; Cunial, A.; Granata, V.; Ortolani, S.; van der Marel, R. P.; Vesperini, E.
2018-05-01
As part of the Hubble Space Telescope UV Legacy Survey of Galactic globular clusters, 110 parallel fields were observed with the Wide Field Channel of the Advanced Camera for Surveys, in the outskirts of 48 globular clusters, plus the open cluster NGC 6791. Totalling about 0.3 deg2 of observed sky, this is the largest homogeneous Hubble Space Telescope photometric survey of the outskirts of Galactic globular clusters to date. In particular, two distinct pointings have been obtained for each target on average, all centred at about 6.5 arcmin from the cluster centre, thus covering a mean area of about 23 arcmin2 for each globular cluster. For each field, at least one exposure in both F475W and F814W filters was collected. In this work, we publicly release the astrometric and photometric catalogues and the astrometrized atlases for each of these fields.
Evaluation of the Intel iWarp parallel processor for space flight applications
NASA Technical Reports Server (NTRS)
Hine, Butler P., III; Fong, Terrence W.
1993-01-01
The potential of a DARPA-sponsored advanced processor, the Intel iWarp, for use in future SSF Data Management Systems (DMS) upgrades is evaluated through integration into the Ames DMS testbed and applications testing. The iWarp is a distributed, parallel computing system well suited for high performance computing applications such as matrix operations and image processing. The system architecture is modular, supports systolic and message-based computation, and is capable of providing massive computational power in a low-cost, low-power package. As a consequence, the iWarp offers significant potential for advanced space-based computing. This research seeks to determine the iWarp's suitability as a processing device for space missions. In particular, the project focuses on evaluating the ease of integrating the iWarp into the SSF DMS baseline architecture and the iWarp's ability to support computationally stressing applications representative of SSF tasks.
NASA Technical Reports Server (NTRS)
Jeffries, K. S.; Renz, D. D.
1984-01-01
A parametric analysis was performed of transmission cables for transmitting electrical power at high voltage (up to 1000 V) and high frequency (10 to 30 kHz) for high power (100 kW or more) space missions. Large diameter (5 to 30 mm) hollow conductors were considered in closely spaced coaxial configurations and in parallel lines. Formulas were derived to calculate inductance and resistance for these conductors. Curves of cable conductance, mass, inductance, capacitance, resistance, power loss, and temperature were plotted for various conductor diameters, conductor thicknesses, and alternating current frequencies. An example 5 mm diameter coaxial cable with 0.5 mm conductor thickness was calculated to transmit 100 kW at 1000 Vac over 50 m with a power loss of 1900 W, an inductance of 1.45 μH, and a capacitance of 0.07 μF. The computer programs written for this analysis are listed in the appendix.
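As a rough cross-check of the kind of per-unit-length parameters tabulated in the report, the Python sketch below evaluates the standard textbook coaxial-line expressions (external inductance and capacitance per metre, plus a skin-effect resistance estimate). These are generic formulas, not necessarily the report's own derivations for large hollow conductors, and the example dimensions are assumptions.

```python
import math

MU0 = 4e-7 * math.pi          # H/m
EPS0 = 8.854e-12              # F/m
SIGMA_CU = 5.8e7              # S/m, copper conductivity

def coax_per_metre(a, b, f):
    """a: inner conductor outer radius, b: shield inner radius, f: frequency (SI units)."""
    L = MU0 / (2 * math.pi) * math.log(b / a)                # H/m (external inductance)
    C = 2 * math.pi * EPS0 / math.log(b / a)                 # F/m
    delta = 1.0 / math.sqrt(math.pi * f * MU0 * SIGMA_CU)    # skin depth, m
    R = (1 / a + 1 / b) / (2 * math.pi * SIGMA_CU * delta)   # ohm/m, skin-effect estimate
    return L, C, R

L, C, R = coax_per_metre(a=2.5e-3, b=3.0e-3, f=20e3)         # ~5 mm coax at 20 kHz
print(f"L = {L*1e9:.1f} nH/m, C = {C*1e12:.1f} pF/m, R = {R*1e3:.2f} mOhm/m")
```

The skin-effect term assumes the conductor wall is thick compared with the skin depth, which is only marginally true for a 0.5 mm wall at these frequencies, so it should be read as an order-of-magnitude estimate.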
An equivalent viscoelastic model for rock mass with parallel joints
NASA Astrophysics Data System (ADS)
Li, Jianchun; Ma, Guowei; Zhao, Jian
2010-03-01
An equivalent viscoelastic medium model is proposed for rock mass with parallel joints. A concept of "virtual wave source (VWS)" is proposed to take into account the wave reflections between the joints. The equivalent model can be effectively applied to analyze longitudinal wave propagation through discontinuous media with parallel joints. Parameters in the equivalent viscoelastic model are derived analytically based on longitudinal wave propagation across a single rock joint. The proposed model is then verified by applying identical incident waves to the discontinuous and equivalent viscoelastic media at one end to compare the output waves at the other end. When the wavelength of the incident wave is sufficiently long compared to the joint spacing, the effect of the VWS on wave propagation in rock mass is prominent. The results from the equivalent viscoelastic medium model are very similar to those determined from the displacement discontinuity method. Frequency dependence and joint spacing effect on the equivalent viscoelastic model and the VWS method are discussed.
Multiple wavelength X-ray monochromators
Steinmeyer, P.A.
1992-11-17
An improved apparatus and method is provided for separating input x-ray radiation containing first and second x-ray wavelengths into spatially separate first and second output radiation which contain the first and second x-ray wavelengths, respectively. The apparatus includes a crystalline diffractor which includes a first set of parallel crystal planes, where each of the planes is spaced a predetermined first distance from one another. The crystalline diffractor also includes a second set of parallel crystal planes inclined at an angle with respect to the first set of crystal planes where each of the planes of the second set of parallel crystal planes is spaced a predetermined second distance from one another. In one embodiment, the crystalline diffractor is comprised of a single crystal. In a second embodiment, the crystalline diffractor is comprised of a stack of two crystals. In a third embodiment, the crystalline diffractor includes a single crystal that is bent for focusing the separate first and second output x-ray radiation wavelengths into separate focal points. 3 figs.
Use of Parallel Micro-Platform for the Simulation the Space Exploration
NASA Astrophysics Data System (ADS)
Velasco Herrera, Victor Manuel; Velasco Herrera, Graciela; Rosano, Felipe Lara; Rodriguez Lozano, Salvador; Lucero Roldan Serrato, Karen
The purpose of this work is to create a parallel micro-platform that simulates the virtual movements of space exploration in 3D. One of the innovations presented in this design is the application of a lever mechanism for transmitting the movement. The development of such a robot is a challenging task, very different from that of industrial manipulators, because of a totally different set of system requirements. This work presents the computer-aided study and simulation of the movement of this parallel manipulator. The model was developed using the Unigraphics computer-aided design platform, in which the geometric modeling of each component and of the final assembly (CAD), the generation of files for the computer-aided manufacture (CAM) of each piece, and the kinematic simulation of the system under different driving schemes were carried out. We used the MATLAB aerospace toolbox and created an adaptive control module to simulate the system.
The Simplified Aircraft-Based Paired Approach With the ALAS Alerting Algorithm
NASA Technical Reports Server (NTRS)
Perry, Raleigh B.; Madden, Michael M.; Torres-Pomales, Wilfredo; Butler, Ricky W.
2013-01-01
This paper presents the results of an investigation of a proposed concept for closely spaced parallel runways called the Simplified Aircraft-based Paired Approach (SAPA). This procedure depends upon a new alerting algorithm called the Adjacent Landing Alerting System (ALAS). This study used both low fidelity and high fidelity simulations to validate the SAPA procedure and test the performance of the new alerting algorithm. The low fidelity simulation enabled a determination of minimum approach distance for the worst case over millions of scenarios. The high fidelity simulation enabled an accurate determination of timings and minimum approach distance in the presence of realistic trajectories, communication latencies, and total system error for 108 test cases. The SAPA procedure and the ALAS alerting algorithm were applied to the 750-ft parallel spacing (e.g., SFO 28L/28R) approach problem. With the SAPA procedure as defined in this paper, this study concludes that a 750-ft application does not appear to be feasible, but preliminary results for 1000-ft parallel runways look promising.
Precision Parameter Estimation and Machine Learning
NASA Astrophysics Data System (ADS)
Wandelt, Benjamin D.
2008-12-01
I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe) that can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-Chain Monte Carlo techniques to efficiently explore a likelihood function, posterior distribution or χ2-surface. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo; and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the CosmologyatHome project.
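A toy Python sketch of the strategy (not PICo or RICO): evaluate an "expensive" log-likelihood in parallel at scattered parameter points, learn a cheap surrogate from those evaluations, and then drive a simple Metropolis sampler with the surrogate alone. The likelihood, the quadratic surrogate, and all names here are illustrative assumptions.

```python
import numpy as np
from multiprocessing import Pool

def expensive_loglike(theta):
    a, b = theta                              # stand-in for a costly simulation
    return -0.5 * ((a - 1.0) ** 2 / 0.04 + (b + 2.0) ** 2 / 0.25)

def fit_quadratic(X, y):
    """Least-squares fit of a quadratic surface as the learned surrogate."""
    a, b = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(a), a, b, a * a, b * b, a * b])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda t: coef @ np.array([1.0, t[0], t[1], t[0]**2, t[1]**2, t[0]*t[1]])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    samples = rng.uniform([-1, -4], [3, 0], size=(400, 2))   # parameter box
    with Pool() as pool:                       # trivially parallel precomputation
        values = np.array(pool.map(expensive_loglike, samples))
    surrogate = fit_quadratic(samples, values)

    # Cheap Metropolis chain that never calls the expensive function again.
    theta = np.array([0.0, -1.0])
    logp = surrogate(theta)
    chain = []
    for _ in range(5000):
        prop = theta + rng.normal(0, 0.1, size=2)
        lp = surrogate(prop)
        if np.log(rng.random()) < lp - logp:
            theta, logp = prop, lp
        chain.append(theta.copy())
    print(np.mean(chain, axis=0))              # close to [1.0, -2.0]
```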
2016-03-07
Peering deep into the early Universe, this picturesque parallel field observation from the NASA/ESA Hubble Space Telescope reveals thousands of colourful galaxies swimming in the inky blackness of space. A few foreground stars from our own galaxy, the Milky Way, are also visible. In October 2013 Hubble’s Wide Field Camera 3 (WFC3) and Advanced Camera for Surveys (ACS) began observing this portion of sky as part of the Frontier Fields programme. This spectacular skyscape was captured during the study of the giant galaxy cluster Abell 2744, otherwise known as Pandora’s Box. While one of Hubble’s cameras concentrated on Abell 2744, the other camera viewed this adjacent patch of sky near to the cluster. Containing countless galaxies of various ages, shapes and sizes, this parallel field observation is nearly as deep as the Hubble Ultra-Deep Field. In addition to showcasing the stunning beauty of the deep Universe in incredible detail, this parallel field — when compared to other deep fields — will help astronomers understand how similar the Universe looks in different directions
Study on Parallel 2-DOF Rotation Mechanism in Radar
NASA Astrophysics Data System (ADS)
Jiang, Ming; Hu, Xuelong; Liu, Lei; Yu, Yunfei
The spherical parallel machine has become an academic and industrial focus in recent years because of its simple and economical manufacture and its structural compactness, which make it especially suitable for applications where the spatial orientation changes. This paper reviews its current research and development at home and abroad. The newer machine (RGRR-II) can rotate around the z axis within 360° and around the y1 axis from -90° to +90°. It has advantages such as fewer moving parts (only 3 parts), a larger ratio of workspace to machine size, zero mechanical coupling, and no singularities. Constructing a rotation machine with the spherical parallel 2-DOF rotation joint (RGRR-II) can realize hemispherical movement with no dead points and an extended range. A control card (PA8000NT Series CNC) is installed in the computer and runs the corresponding software that controls the radar movement. The machine meets the needs of airborne and satellite radars, which require a larger detection range, lighter weight, and a more compact structure.
Multiple wavelength X-ray monochromators
Steinmeyer, Peter A.
1992-11-17
An improved apparatus and method is provided for separating input x-ray radiation containing first and second x-ray wavelengths into spatially separate first and second output radiation which contain the first and second x-ray wavelengths, respectively. The apparatus includes a crystalline diffractor which includes a first set of parallel crystal planes, where each of the planes is spaced a predetermined first distance from one another. The crystalline diffractor also includes a second set of parallel crystal planes inclined at an angle with respect to the first set of crystal planes where each of the planes of the second set of parallel crystal planes is spaced a predetermined second distance from one another. In one embodiment, the crystalline diffractor is comprised of a single crystal. In a second embodiment, the crystalline diffractor is comprised of a stack of two crystals. In a third embodiment, the crystalline diffractor includes a single crystal that is bent for focussing the separate first and second output x-ray radiation wavelengths into separate focal points.
Study of solid rocket motors for a space shuttle booster. Volume 4: Mass properties report
NASA Technical Reports Server (NTRS)
Vonderesch, A. H.
1972-01-01
Mass properties data for the 156 inch diameter, parallel burn, solid propellant rocket engine for the space shuttle booster are presented. Design ground rules and assumptions applicable to generation of the mass properties data are described, together with pertinent data sources.
Study of solid rocket motor for a space shuttle booster
NASA Technical Reports Server (NTRS)
1972-01-01
The study of solid rocket motors for a space shuttle booster was directed toward definition of a parallel-burn shuttle booster using two 156-in.-dia solid rocket motors. The study effort was organized into the following major task areas: system studies, preliminary design, program planning, and program costing.
Automated design of spacecraft systems power subsystems
NASA Technical Reports Server (NTRS)
Terrile, Richard J.; Kordon, Mark; Mandutianu, Dan; Salcedo, Jose; Wood, Eric; Hashemi, Mona
2006-01-01
This paper discusses the application of evolutionary computing to a dynamic space vehicle power subsystem resource and performance simulation in a parallel processing environment. Our objective is to demonstrate the feasibility, application and advantage of using evolutionary computation techniques for the early design search and optimization of space systems.
Hydrogen-assisted stable crack growth in iron-3 wt% silicon steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marrow, T.J.; Prangnell, P.; Aindow, M.
1996-08-01
Observations of internal hydrogen cleavage in Fe-3Si are reported. Hydrogen-assisted stable crack growth (H-SCG) is associated with cleavage striations of a 300 nm spacing, observed using scanning electron microscopy (SEM) and atomic force microscopy (AFM). High resolution SEM revealed finer striations, previously undetected, with a spacing of approximately 30 nm. These were parallel to the coarser striations. Scanning tunneling microscopy (STM) also showed the fine striation spacing, and gave a striation height of approximately 15 nm. The crack front was not parallel to the striations. Transmission electron microscopy (TEM) of crack tip plastic zones showed {112} and {110} slip, with a high dislocation density (around 10^14 m^-2). The slip plane spacing was approximately 15-30 nm. Parallel arrays of high dislocation density were observed in the wake of the hydrogen cleavage crack. It is concluded that H-SCG in Fe-3Si occurs by periodic brittle cleavage on the {001} planes. This is preceded by dislocation emission. The coarse striations are produced by crack tip blunting and the fine striations by dislocations attracted by image forces to the fracture surface after cleavage. The effects of temperature, pressure and yield strength on the kinetics of H-SCG can be predicted using a model for diffusion of hydrogen through the plastic zone.
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1989-01-01
The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.
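The Connection Machine ranking primitive is not available here, but the bookkeeping it provides can be sketched with NumPy: give each particle a rank that places particles from the same cell contiguously, so per-cell information can be accessed directly. The sizes and names below are illustrative assumptions, not the original algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_particles = 8, 32
cell = rng.integers(0, n_cells, size=n_particles)     # cell index per particle

counts = np.bincount(cell, minlength=n_cells)          # particles per cell
cell_start = np.concatenate(([0], np.cumsum(counts)[:-1]))  # prefix sum

order = np.argsort(cell, kind="stable")                # the global "rank" array
# order[k] is the particle that ends up in slot k; particles of cell c occupy
# slots cell_start[c] : cell_start[c] + counts[c].
for c in range(n_cells):
    s = slice(cell_start[c], cell_start[c] + counts[c])
    assert np.all(cell[order[s]] == c)
```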
Power, Patriarchy, and Punishment in Shakespeare's "Othello."
ERIC Educational Resources Information Center
Lynch, Kimberly
An informal survey revealed that graduate students presented with Shakespeare's works felt academically unfit and powerless. These student-teacher-text power relationships parallel the power relationships between the dominant patriarchy and the female characters in "Othello"--Desdemona, Emilia, and Bianca. However, "Antony and…
Issues in Language Proficiency Assessment.
ERIC Educational Resources Information Center
Sanchez, Rosaura; And Others
Three papers on assessment and planning in bilingual education are presented. In "Language Theory Bases," Rosaura Sanchez advocates an approach toward child bilingual education that takes into account the relationship between the parallel domains of language development and cognitive development. An awareness of this relationship is…
Data Partitioning and Load Balancing in Parallel Disk Systems
NASA Technical Reports Server (NTRS)
Scheuermann, Peter; Weikum, Gerhard; Zabback, Peter
1997-01-01
Parallel disk systems provide opportunities for exploiting I/O parallelism in two possible ways, namely via inter-request and intra-request parallelism. In this paper we discuss the main issues in performance tuning of such systems, namely striping and load balancing, and show their relationship to response time and throughput. We outline the main components of an intelligent, self-reliant file system that aims to optimize striping by taking into account the requirements of the applications, and performs load balancing by judicious file allocation and dynamic redistribution of the data when access patterns change. Our system uses simple but effective heuristics that incur only little overhead. We present performance experiments based on synthetic workloads and real-life traces.
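The two tuning knobs named above can be illustrated with a very small Python sketch: a file's blocks are striped round-robin over a chosen stripe width, and each new file starts at the currently least-loaded disk. These are illustrative heuristics of our own, not the paper's file system.

```python
NUM_DISKS = 6
load = [0] * NUM_DISKS                  # crude per-disk load counter

def allocate(file_blocks: int, stripe_width: int):
    """Place one file's blocks round-robin over `stripe_width` disks,
    starting at the currently least-loaded disk. Returns {disk: [blocks]}."""
    start = min(range(NUM_DISKS), key=lambda d: load[d])
    placement = {}
    for block in range(file_blocks):
        disk = (start + block % stripe_width) % NUM_DISKS
        placement.setdefault(disk, []).append(block)
        load[disk] += 1
    return placement

print(allocate(file_blocks=10, stripe_width=4))   # intra-request parallelism
print(allocate(file_blocks=3, stripe_width=1))    # whole file on one (lightly loaded) disk
print(load)
```

A wide stripe favours the response time of a single large request, while placing small files whole on lightly loaded disks favours throughput across many concurrent requests, which is the trade-off the abstract refers to.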
Massively parallel GPU-accelerated minimization of classical density functional theory
NASA Astrophysics Data System (ADS)
Stopper, Daniel; Roth, Roland
2017-08-01
In this paper, we discuss the ability to numerically minimize the grand potential of hard disks in two-dimensional and of hard spheres in three-dimensional space within the framework of classical density functional and fundamental measure theory on modern graphics cards. Our main finding is that a massively parallel minimization leads to an enormous performance gain in comparison to standard sequential minimization schemes. Furthermore, the results indicate that in complex multi-dimensional situations, a heavy parallel minimization of the grand potential seems to be mandatory in order to reach a reasonable balance between accuracy and computational cost.
NASA Astrophysics Data System (ADS)
Sheynin, Yuriy; Shutenko, Felix; Suvorova, Elena; Yablokov, Evgenej
2008-04-01
High-rate interconnections are important subsystems in modern data processing and control systems of many classes. They are especially important in prospective embedded and on-board systems, which tend to be multicomponent systems with parallel or distributed architectures [1]. Modular systems of previous generations were based on parallel busses that were widely used and standardised: VME, PCI, CompactPCI, etc. Bus evolution proceeded by improving protocol efficiency (burst transactions, split transactions, etc.) and increasing operating frequencies. However, because of the multi-drop nature of busses and multi-wire skew problems, the speedup achievable with parallel bussing became more and more limited. For embedded and on-board systems, an additional reason for this trend was the weight, size, and power constraints on an interconnection and its components. Parallel interfaces have become technologically more challenging as their clock frequencies have increased to keep pace with the bandwidth requirements of their attached storage devices. Since each interface uses a data clock to gate and validate the parallel data (which is normally 8 or 16 bits wide), the clock frequency need only be equivalent to the byte or word rate being transmitted. In other words, for a given transmission frequency, the wider the data bus, the slower the clock. As the clock frequency increases, more high-frequency energy is present in each of the data lines, and a portion of this energy is dissipated as radiation. Each data line not only transmits this energy but also receives some from its neighbours. This form of mutual interference is commonly called "cross-talk," and the signal distortion it produces can become another major contributor to loss of data integrity unless compensated by appropriate cable designs. Other transmission problems, such as frequency-dependent attenuation and signal reflections, while also applicable to serial interfaces, are more troublesome in parallel interfaces because of the number of additional cable conductors involved. To compensate for these drawbacks, higher quality cables, shorter cable runs, and fewer devices on the bus have been the norm. Finally, the physical bulk of parallel cables makes them more difficult to route inside an enclosure, hinders cooling airflow, and is incompatible with the trend toward smaller form-factor devices. Parallel busses have served systems over the past 20 years, but the accumulated problems dictate the need for change, and the technology is available to spur the transition. The general trend in high-rate interconnections has turned from parallel bussing to scalable interconnections with a network architecture and high-rate point-to-point links. Analysis showed that data links with serial information transfer can achieve higher throughput and efficiency, and this has been confirmed in various research and practical designs. Serial interfaces offer improvements over older parallel interfaces: better performance, better scalability, better reliability, and others, since parallel interfaces are at their limits of speed for reliable data transfer. The trend is reflected in the evolution of major standards families: e.g., from PCI/PCI-X parallel bussing to the PCI Express interconnection architecture with serial lines, and from the CompactPCI parallel bus to the ATCA (Advanced Telecommunications Computing Architecture) specification with serial links and network topologies, etc.
In this article we consider a general set of characteristics and features of serial interconnections and give a brief overview of serial interconnection specifications. In more detail we present the SpaceWire interconnection technology. Having been developed for space on-board system applications, SpaceWire has important features and characteristics that make it a prospective interconnection for a wide range of embedded systems.
Heilweil, Victor M.; Benoit, Jerome; Healy, Richard W.
2015-01-01
Spreading-basin methods have resulted in more than 130 million cubic meters of recharge to the unconfined Navajo Sandstone of southern Utah in the past decade, but infiltration rates have slowed in recent years because of reduced hydraulic gradients and clogging. Trench infiltration is a promising alternative technique for increasing recharge and minimizing evaporation. This paper uses a variably saturated flow model to further investigate the relative importance of the following variables on rates of trench infiltration to unconfined aquifers: saturated hydraulic conductivity, trench spacing and dimensions, initial water-table depth, alternate wet/dry periods, and number of parallel trenches. Modeling results showed (1) increased infiltration with higher hydraulic conductivity, deeper initial water tables, and larger spacing between parallel trenches, (2) deeper or wider trenches do not substantially increase infiltration, (3) alternating wet/dry periods result in less overall infiltration than keeping the trenches continuously full, and (4) larger numbers of parallel trenches within a fixed area increases infiltration but with a diminishing effect as trench spacing becomes tighter. An empirical equation for estimating expected trench infiltration rates as a function of hydraulic conductivity and initial water-table depth was derived and can be used for evaluating feasibility of trench infiltration in other hydrogeologic settings
Runtime Detection of C-Style Errors in UPC Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pirkelbauer, P; Liao, C; Panas, T
2011-09-29
Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions for each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.
NASA Astrophysics Data System (ADS)
Destefano, Anthony; Heerikhuisen, Jacob
2015-04-01
Fully 3D particle simulations can be a computationally and memory expensive task, especially when high resolution grid cells are required. The problem becomes further complicated when parallelization is needed. In this work we focus on computational methods to address these difficulties. Hilbert curves are used to map the 3D particle space to the 1D contiguous memory space. This method of organization allows for minimized cache misses on the GPU as well as a sorted structure that is equivalent to an octree data structure. This type of sorted structure is attractive for use in adaptive mesh implementations due to the logarithmic search time. Implementations using the Message Passing Interface (MPI) library and NVIDIA's parallel computing platform CUDA will be compared, as MPI is commonly used on server nodes with many CPUs. We will also compare static grid structures with adaptive mesh structures. The physical test bed will be the simulation of heavy interstellar atoms interacting with a background plasma, the heliosphere, obtained from a fully consistent coupled MHD/kinetic particle code. It is known that charge exchange is an important factor in space plasmas; specifically, it modifies the structure of the heliosphere itself. We would like to thank the Alabama Supercomputer Authority for the use of their computational resources.
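A full 3D Hilbert index takes more code than fits here, so the sketch below uses Morton (Z-order) encoding, a simpler space-filling curve with the same "nearby in space, nearby in memory" intent, to sort particles into a contiguous, tree-friendly order. This is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def part1by2(v: np.ndarray) -> np.ndarray:
    """Spread the low 10 bits of v so there are two zero bits between each."""
    v = v.astype(np.uint64) & 0x3FF
    v = (v | (v << 16)) & 0x030000FF
    v = (v | (v << 8)) & 0x0300F00F
    v = (v | (v << 4)) & 0x030C30C3
    v = (v | (v << 2)) & 0x09249249
    return v

def morton3(ix, iy, iz):
    """Interleave the bits of three 10-bit cell coordinates into one code."""
    return part1by2(ix) | (part1by2(iy) << 1) | (part1by2(iz) << 2)

rng = np.random.default_rng(0)
pos = rng.random((10000, 3))                                # particles in the unit cube
cells = np.minimum((pos * 1024).astype(np.uint64), 1023)    # 10-bit grid per axis
codes = morton3(cells[:, 0], cells[:, 1], cells[:, 2])
order = np.argsort(codes)                                   # locality-preserving order
pos_sorted = pos[order]                                     # contiguous, cache-friendly layout
```

Sorting by such a code is also what makes the layout equivalent to a tree traversal: particles sharing a coarse spatial node share a code prefix, so octree-style range queries reduce to binary searches over the sorted codes.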
Solid propellant rocket motor internal ballistics performance variation analysis, phase 3
NASA Technical Reports Server (NTRS)
Sforzini, R. H.; Foster, W. A., Jr.; Murph, J. E.; Adams, G. W., Jr.
1977-01-01
Results of research aimed at improving the predictability of off-nominal internal ballistics performance of solid propellant rocket motors (SRMs), including thrust imbalance between two SRMs firing in parallel, are reported. The potential effects of nozzle throat erosion on internal ballistic performance were studied and a propellant burning rate law postulated. The propellant burning rate model, when coupled with the grain deformation model, permits an excellent match between theoretical results and test data for the Titan IIIC, TU455.02, and the first Space Shuttle SRM (DM-1). Analysis of star grain deformation using an experimental model and a finite element model shows the star grain deformation effects for the Space Shuttle to be small in comparison to those of the circular perforated grain. An alternative technique was developed for predicting thrust imbalance without recourse to the Monte Carlo computer program. A scaling relationship used to relate theoretical results to test results may be applied to the alternative technique of predicting thrust imbalance or to the Monte Carlo evaluation. Extended investigation into the effect of strain rate on propellant burning rate leads to the conclusion that the thermoelastic effect is generally negligible for both steadily increasing pressure loads and oscillatory loads.
NASA Astrophysics Data System (ADS)
Azatov, Mikheil; Sun, Xiaoyu; Suberi, Alexandra; Fourkas, John T.; Upadhyaya, Arpita
2017-12-01
Cells can sense and adapt to mechanical properties of their environment. The local geometry of the extracellular matrix, such as its topography, has been shown to modulate cell morphology, migration, and proliferation. Here we investigate the effect of micro/nanotopography on the morphology and cytoskeletal dynamics of human pancreatic tumor-associated fibroblast cells (TAFs). We use arrays of parallel nanoridges with variable spacings on a subcellular scale to investigate the response of TAFs to the topography of their environment. We find that cell shape and stress fiber organization both align along the direction of the nanoridges. Our analysis reveals a strong bimodal relationship between the degree of alignment and the spacing of the nanoridges. Furthermore, focal adhesions align along ridges and form preferentially on top of the ridges. Tracking actin stress fiber movement reveals enhanced dynamics of stress fibers on topographically patterned surfaces. We find that components of the actin cytoskeleton move preferentially along the ridges with a significantly higher velocity along the ridges than on a flat surface. Our results suggest that a complex interplay between the actin cytoskeleton and focal adhesions coordinates the cellular response to micro/nanotopography.
Output-increasing, protective cover for a solar cell
Hammerbacher, Milfred D.
1995-11-21
A flexible cover (14) for a flexible solar cell (12) protects the cell from the ambient environment and increases the cell's efficiency. The cell (12) includes silicon spheres (16) held in a flexible aluminum sheet matrix (20,22). The cover (14) is a flexible, protective layer (60) of light-transparent material having a relatively flat upper, free surface (64) and an irregular opposed surface (66). The irregular surface (66) includes first portions (68) which conform to the polar regions (31R) of the spheres (16) and second convex (72) or concave (90) portions which define spaces (78) in conjunction with the reflective surface (20T) of one aluminum sheet (20). Without the cover (14), light (50) falling on the surface (20T) between the spheres (16) is wasted, that is, it does not fall on a sphere (16). The surfaces of the second portions are non-parallel to the direction of the otherwise wasted light (50), which fact, together with a selected relationship between the refractive indices of the cover and the spaces, results in sufficient diffraction of the otherwise wasted light (50) so that about 25% of it is reflected from the surface (20T) onto a sphere (16).
Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility
NASA Technical Reports Server (NTRS)
Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer
2009-01-01
Johnson Space Center's Mission Control Center is a space vehicle, space program agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-90's. In an effort to streamline the support costs of the mission critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes and technology into legacy operations. The general push in the IT industry has been trending towards a data-centric computer infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing, thick-client distributed computing model and network architecture with a data center model utilizing virtualization to provide the MCC Infrastructure as a Service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization while expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. This paper discusses the benefits and difficulties that a migration to cloud-based computing philosophies has uncovered when compared to the legacy Mission Control Center architecture. The team consists of system and software engineers with extensive experience with the MCC infrastructure and software currently used to support the International Space Station (ISS) and Space Shuttle program (SSP).
NASA Astrophysics Data System (ADS)
Bonilla Sierra, V.; Donze, F. V.; Duriez, J.; Klinger, Y.; Scholtes, L.
2016-12-01
At the very early stages of a pure strike-slip fault zone formation, shear displacement along a deep buried parent fault produces a characteristic set of "evenly-spaced" strike-slip faults at the surface, e.g. Southern San Andreas, North Anatolian, Central Asian, and Northern Tibetan fault systems. This mode III fracture propagation is initiated by the rotation of the local principal stress at the tip of the parent discontinuity, generating twisted fractures with a helicoidal shape. In sandbox or clay-cake experiments used to reproduce these structures, it has been observed that the spacing and possibly the characteristic length of the fractures appearing at the surface are proportional to the overburden thickness of the deformed layer. Based on a Discrete Element Method (YADE DEM-Open Source), we have investigated the conditions controlling the linear relationships between the spacing of the surface "evenly-spaced" strike-slip discontinuities and the thickness of the deformed layer. Increasing the basement displacement of the model, a diffused shear zone appears first at the tip of the basal parent discontinuity. From this mist zone, localized and strongly interacting shear fractures start to propagate. This interaction process can generate complex internal structures: some fractures will propagate faster than their neighbors, modifying their close surrounding stress environment. Some propagating fractures can stop growing and asymmetrical fracture sets can be observed. This resulting hierarchical bifurcation process leads to a set of "en echelon" discontinuities appearing at the surface (Figure 1). In a pure strike-slip mode, fracture spacing is proportional to the thickness, with a ratio and a bifurcation mode controlled by the cohesion value at the first order. Depending on the Poisson's ratio value, which mainly controls the orientation of the discontinuities, this ratio can be affected at a lower degree. In presence of mixed-mode (transpression or transtension), these linear relationships disappear. Figure 1: Effects of the cohesion C and the thickness T of the deformed layer on the surface discontinuity pattern (a) T = Tref and C = Cref (b) T = Tref and C= 10×Cref (c) T = 2×Tref and C = Cref (d) T = 2×Tref and 10×Cref. The color code corresponds to the instantaneous velocity in the Y direction.
NASA Astrophysics Data System (ADS)
Wu, W.; Zhu, J. B.; Zhao, J.
2013-02-01
The purpose of this study is to further investigate the seismic response of a set of parallel rock fractures filled with viscoelastic materials, following the work by Zhu et al. Dry quartz sands are used to represent the viscoelastic materials. The split Hopkinson rock bar (SHRB) technique is modified to simulate 1-D P-wave propagation across the sand-filled parallel fractures. At first, the displacement and stress discontinuity model (DSDM) describes the seismic response of a sand-filled single fracture. The modified recursive method (MRM) then predicts the seismic response of the sand-filled parallel fractures. The SHRB tests verify the theoretical predictions by DSDM for the sand-filled single fracture and by MRM for the sand-filled parallel fractures. The filling sands cause stress discontinuity across the fractures and promote displacement discontinuity. The wave transmission coefficient for the sand-filled parallel fractures depends on wave superposition between the fractures, which is similar to the effect of fracture spacing on the wave transmission coefficient for the non-filled parallel fractures.
Sawyer, William C.
1995-01-01
An apparatus for supporting a heating element in a channel formed in a heater base is disclosed. A preferred embodiment includes a substantially U-shaped tantalum member. The U-shape is characterized by two substantially parallel portions of tantalum that each have an end connected to opposite ends of a base portion of tantalum. The parallel portions are each substantially perpendicular to the base portion and spaced apart a distance not larger than a width of the channel and not smaller than a width of a graphite heating element. The parallel portions each have a hole therein, and the centers of the holes define an axis that is substantially parallel to the base portion. An aluminum oxide ceramic retaining pin extends through the holes in the parallel portions and into a hole in a wall of the channel to retain the U-shaped member in the channel and to support the graphite heating element. The graphite heating element is confined by the parallel portions of tantalum, the base portion of tantalum, and the retaining pin. A tantalum tube surrounds the retaining pin between the parallel portions of tantalum.
Sawyer, W.C.
1995-08-15
An apparatus for supporting a heating element in a channel formed in a heater base is disclosed. A preferred embodiment includes a substantially U-shaped tantalum member. The U-shape is characterized by two substantially parallel portions of tantalum that each have an end connected to opposite ends of a base portion of tantalum. The parallel portions are each substantially perpendicular to the base portion and spaced apart a distance not larger than a width of the channel and not smaller than a width of a graphite heating element. The parallel portions each have a hole therein, and the centers of the holes define an axis that is substantially parallel to the base portion. An aluminum oxide ceramic retaining pin extends through the holes in the parallel portions and into a hole in a wall of the channel to retain the U-shaped member in the channel and to support the graphite heating element. The graphite heating element is confined by the parallel portions of tantalum, the base portion of tantalum, and the retaining pin. A tantalum tube surrounds the retaining pin between the parallel portions of tantalum. 6 figs.
The astrobiological mission EXPOSE-R on board of the International Space Station
NASA Astrophysics Data System (ADS)
Rabbow, Elke; Rettberg, Petra; Barczyk, Simon; Bohmeier, Maria; Parpart, Andre; Panitz, Corinna; Horneck, Gerda; Burfeindt, Jürgen; Molter, Ferdinand; Jaramillo, Esther; Pereira, Carlos; Weiß, Peter; Willnecker, Rainer; Demets, René; Dettmann, Jan
2015-01-01
EXPOSE-R flew as the second of the European Space Agency (ESA) EXPOSE multi-user facilities on the International Space Station. During the mission on the external URM-D platform of the Zvezda service module, samples of eight international astrobiology experiments selected by ESA and one Russian guest experiment were exposed to low Earth orbit space parameters from March 10th, 2009 to January 21st, 2011. EXPOSE-R accommodated a total of 1220 samples for exposure to selected space conditions and combinations thereof, including space vacuum, temperature cycles through 273 K, cosmic radiation, and solar electromagnetic radiation at >110, >170 or >200 nm at various fluences up to GJ m-2. Samples ranged from chemical compounds via unicellular organisms, multicellular mosquito larvae and seeds to passive radiation dosimeters. Additionally, one active radiation measurement instrument was accommodated on EXPOSE-R and commanded from the ground in accordance with the facility itself. Data on ultraviolet radiation, cosmic radiation and temperature were measured every 10 s and downlinked by telemetry and data carrier every few months. The EXPOSE-R trays and samples returned to Earth on March 9th, 2011 with Shuttle flight Space Transportation System (STS)-133/ULF 5, Discovery, after a successful total mission duration of 27 months in space. The samples were analysed in the individual investigators' laboratories. A parallel Mission Ground Reference experiment was performed on the ground with a parallel set of hardware and samples under simulated space conditions, following the data transmitted from the flight mission.
Mechanisms of the passage of dark currents through Cd(Zn)Te semi-insulating crystals
NASA Astrophysics Data System (ADS)
Sklyarchuk, V.; Fochuk, P.; Rarenko, I.; Zakharuk, Z.; Sklyarchuk, O.; Nykoniuk, Ye.; Rybka, A.; Kutny, V.; Bolotnikov, A. E.; James, R. B.
2014-09-01
We investigated the passage of dark currents through semi-insulating Cd(Zn)Te crystals with weak n-type conductivity, which are widely used as detectors of ionizing radiation. The crystals were grown from a tellurium solution melt at 800 °C by the zone-melting method, in which a polycrystalline rod in a quartz ampoule was moved through a zone heater at a rate of 2 mm per day. The synthesis of the rod was carried out at ~1150 °C. We determined the important electro-physical parameters of this semiconductor using techniques based on a parallel study of the temperature dependence of the current-voltage characteristics in both the ohmic and the space-charge-limited current regions. We established the relationship between the energy levels and concentrations of the deep-level impurity states responsible for dark conductivity in these crystals and their usefulness as detectors.
Protein Science by DNA Sequencing: How Advances in Molecular Biology Are Accelerating Biochemistry.
Higgins, Sean A; Savage, David F
2018-01-09
A fundamental goal of protein biochemistry is to determine the sequence-function relationship, but the vastness of sequence space makes comprehensive evaluation of this landscape difficult. Advances in DNA synthesis and sequencing now allow researchers to assess the functional impact of every single mutation in many proteins, yet challenges remain in library construction and in the development of general assays applicable to a diverse range of protein functions. This Perspective briefly outlines the technical innovations in DNA manipulation that allow massively parallel protein biochemistry and then summarizes the methods currently available for library construction and the functional assays of protein variants. Areas in need of future innovation are highlighted, with a particular focus on assay development and the use of computational analysis with machine learning to effectively traverse the sequence-function landscape. Finally, applications in the fundamentals of protein biochemistry, disease prediction, and protein engineering are presented.
DISCRN: A Distributed Storytelling Framework for Intelligence Analysis.
Shukla, Manu; Dos Santos, Raimundo; Chen, Feng; Lu, Chang-Tien
2017-09-01
Storytelling connects entities (people, organizations) using their observed relationships to establish meaningful storylines. This can be extended to spatiotemporal storytelling that incorporates locations, time, and graph computations to enhance coherence and meaning. But when performed sequentially, these computations become a bottleneck because the massive number of entities makes the space and time complexity untenable. This article presents DISCRN, or distributed spatiotemporal ConceptSearch-based storytelling, a distributed framework for performing spatiotemporal storytelling. The framework extracts entities from microblogs and event data and links these entities using a novel ConceptSearch to derive storylines in a distributed fashion utilizing the key-value pair paradigm. Performing these operations at scale allows deeper and broader analysis of storylines. The novel parallelization techniques speed up the generation and filtering of storylines on massive datasets. Experiments with microblog posts such as Twitter data and Global Database of Events, Language, and Tone events show the efficiency of the techniques in DISCRN.
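A toy sketch of the key-value pair paradigm mentioned above: each post is mapped to (entity-pair, post-id) pairs, and a reduce-by-key step groups the supporting posts for each candidate storyline edge. The data, function names, and use of a process pool are illustrative only and are not the DISCRN/ConceptSearch implementation.

```python
from collections import defaultdict
from itertools import combinations
from multiprocessing import Pool

# Toy "posts": each has an id, a location tag, and a set of extracted entities.
POSTS = [
    {"id": 1, "loc": "austin", "entities": {"acme corp", "j. doe"}},
    {"id": 2, "loc": "austin", "entities": {"j. doe", "city council"}},
    {"id": 3, "loc": "dallas", "entities": {"acme corp", "city council"}},
]

def map_post(post):
    """Map step: emit (entity-pair, post-id) key-value pairs for one post."""
    return [((a, b), post["id"])
            for a, b in combinations(sorted(post["entities"]), 2)]

def link_entities(posts):
    """Reduce step: group post ids by entity pair to form candidate storyline edges."""
    edges = defaultdict(list)
    with Pool() as pool:                      # the map step runs in parallel over posts
        for kv_list in pool.map(map_post, posts):
            for key, post_id in kv_list:
                edges[key].append(post_id)
    return dict(edges)

if __name__ == "__main__":
    for (a, b), support in link_entities(POSTS).items():
        print(f"{a} -- {b}: supported by posts {support}")
```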
Dupas, Laura; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre; Boulant, Nicolas
2015-06-01
The spokes method combined with parallel transmission is a promising technique to mitigate the B1(+) inhomogeneity at ultra-high field in 2D imaging. To date, however, spokes placement optimization combined with magnitude least squares pulse design has never been done in direct conjunction with the explicit Specific Absorption Rate (SAR) and hardware constraints. In this work, the joint optimization of 2-spoke trajectories and RF subpulse weights is performed under these constraints explicitly and in the small tip angle regime. The problem is first considerably simplified by observing that only the vector between the 2 spokes is relevant in the magnitude least squares cost function, thereby reducing the size of the parameter space and allowing a more exhaustive search. The algorithm starts from a set of initial k-space candidates and, for all of them in parallel, optimizes the RF subpulse weights and the k-space locations simultaneously, under explicit SAR and power constraints, using an active-set algorithm. The dimensionality of the spoke placement parameter space being low, the RF pulse performance is computed for every location in k-space to study the robustness of the proposed approach with respect to initialization, by looking at the probability of converging towards a possible global minimum. Moreover, the optimization of the spoke placement is repeated with an increased pulse bandwidth in order to investigate the impact of the constraints on the result. Bloch simulations and in vivo T2(∗)-weighted images acquired at 7 T validate the approach. The algorithm returns simulated normalized root mean square errors systematically smaller than 5% in 10 s. Copyright © 2015 Elsevier Inc. All rights reserved.
Sachetto Oliveira, Rafael; Martins Rocha, Bernardo; Burgarelli, Denise; Meira, Wagner; Constantinides, Christakis; Weber Dos Santos, Rodrigo
2018-02-01
The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance nowadays. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, two different techniques have been traditionally exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore and graphics processing units (GPUs) and a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU; multicore, GPU and space adaptivity; multicore, GPU, space adaptivity and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs, 3D simulations on a ventricular mouse mesh, i.e., complex geometry, sinus-rhythm, and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by an approximate factor of 33×, whereas the speedups attained by the space-time adaptive algorithms were approximately 48×. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165× and 498×. The tested methods were able to reduce the execution time of a simulation by more than 498× for a complex cellular model in a slab geometry and by 165× in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in feasible time with no significant loss of accuracy. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Yin, An; Pappalardo, Robert T.
2015-11-01
Despite a decade of intense research the mechanical origin of the tiger-stripe fractures (TSF) and their geologic relationship to the hosting South Polar Terrain (SPT) of Enceladus remain poorly understood. Here we show via systematic photo-geological mapping that the semi-squared SPT is bounded by right-slip, left-slip, extensional, and contractional zones on its four edges. Discrete deformation along the edges in turn accommodates translation of the SPT as a single sheet with its transport direction parallel to the regional topographic gradient. This parallel relationship implies that the gradient of gravitational potential energy drove the SPT motion. In map view, internal deformation of the SPT is expressed by distributed right-slip shear parallel to the SPT transport direction. The broad right-slip shear across the whole SPT was facilitated by left-slip bookshelf faulting along the parallel TSF. We suggest that the flow-like tectonics, to the first approximation across the SPT on Enceladus, is best explained by the occurrence of a transient thermal event, which allowed the release of gravitational potential energy via lateral viscous flow within the thermally weakened ice shell.
Tunable gas adsorption in graphene oxide framework
NASA Astrophysics Data System (ADS)
Razmkhah, Mohammad; Moosavi, Fatemeh; Taghi Hamed Mosavian, Mohammad; Ahmadpour, Ali
2018-06-01
The effect of the linker inter-space length on the CO2 adsorption capacity of a graphene oxide framework (GOF) was studied. Linker inter-spaces of 14, 11, and 8 Å were considered. The 11 Å linker inter-space showed the highest CO2 adsorption capacity. A dual-site Langmuir model described the adsorption of CO2 and CH4 in the GOF. According to the radial distribution function (RDF), the facial and central atoms of the linker are the dual sites predicted by the Langmuir model. Two distinguishable adsorption sites and the parallel orientation of CO2 are the main reasons for the high adsorption capacity at the 11 Å linker inter-space. The gas-adsorbent affinity determines the orientation of CO2 near the linker. The affinity at the 11 Å linker inter-space is the highest; thus, it forces CO2 to lie parallel and to orient in a more localized manner than in the other GOFs. In addition, CH4 showed a higher working capacity than CO2 at 14 Å, which occurs because the gas-adsorbent affinity changes with pressure. An entrance adsorption occurs outside the pore of the GOF; this adsorption is not as stable as deep adsorption.
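For reference, a dual-site Langmuir isotherm of the kind invoked here is commonly written as follows (standard form, not quoted from the paper; q_sat,i and b_i denote the saturation capacity and affinity of site i, and P the pressure):

```latex
q(P) \;=\; \frac{q_{\mathrm{sat},1}\, b_1 P}{1 + b_1 P}
        \;+\; \frac{q_{\mathrm{sat},2}\, b_2 P}{1 + b_2 P}.
```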
A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.
Catarinucci, Luca; Tarricone, Luciano
2009-01-01
The finite difference time domain (FDTD) method is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high spatial resolution is adopted, so that strong memory and central processing unit power requirements have to be satisfied. To better manage the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.
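As a hedged illustration of what "graded mesh" means in practice, the sketch below builds a 1-D geometric grading of cell sizes away from a refined region; this is a generic construction of my own, not the authors' GM-FDTD scheme, and the parameter names are illustrative.

```python
import numpy as np

def graded_spacing(d_min, d_max, ratio, span):
    """Cell sizes that grow geometrically from d_min toward d_max until `span` is filled.

    This mimics the basic idea of a graded mesh: fine cells near a detailed object
    (e.g. an antenna or a body model) and progressively coarser cells away from it,
    instead of a hard fine/coarse interface as in classical subgridding.
    """
    sizes = []
    d = d_min
    while sum(sizes) < span:
        sizes.append(min(d, d_max))   # cap the growth at the coarsest allowed cell
        d *= ratio                    # neighbouring cells differ by at most `ratio`
    return np.array(sizes)

# Example: 1 mm cells near the source, growing by 20% per cell, capped at 10 mm,
# filling a 0.5 m region on one side of the refined zone.
cells = graded_spacing(d_min=1e-3, d_max=1e-2, ratio=1.2, span=0.5)
print(len(cells), "cells, coarsest =", cells.max())
```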
On the dimensionally correct kinetic theory of turbulence for parallel propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaelzer, R.; Ziebell, L. F.; Yoon, P. H. (E-mail: rudi.gaelzer@ufrgs.br, luiz.ziebell@ufrgs.br, yoonp@umd.edu, 007gasun@khu.ac.kr)
2015-03-15
Yoon and Fang [Phys. Plasmas 15, 122312 (2008)] formulated a second-order nonlinear kinetic theory that describes the turbulence propagating in directions parallel/anti-parallel to the ambient magnetic field. Their theory also includes discrete-particle effects, or the effects due to spontaneously emitted thermal fluctuations. However, terms associated with the spontaneous fluctuations in particle and wave kinetic equations in their theory contain proper dimensionality only for an artificial one-dimensional situation. The present paper extends the analysis and re-derives the dimensionally correct kinetic equations for three-dimensional case. The new formalism properly describes the effects of spontaneous fluctuations emitted in three-dimensional space, while the collectively emitted turbulence propagates predominantly in directions parallel/anti-parallel to the ambient magnetic field. As a first step, the present investigation focuses on linear wave-particle interaction terms only. A subsequent paper will include the dimensionally correct nonlinear wave-particle interaction terms.
USDA-ARS?s Scientific Manuscript database
Rill networks have been a focus of study for many decades but we still lack a complete understanding of what variables control the spacing of rills and the geometry of rill networks (e.g. parallel or dendritic) on hillslopes. In this paper we investigate the controls on the spacing and geometry of ...
Terminal Area Procedures for Paired Runways
NASA Technical Reports Server (NTRS)
Lozito, Sandra; Verma, Savita Arora
2011-01-01
Parallel runway operations have been found to increase capacity within the National Airspace System, but poor visibility conditions reduce the use of these operations. The NextGen and SESAR programs have identified the capacity benefits from increased use of closely spaced parallel runways. Previous research examined the concepts and procedures related to parallel runways; however, there has been no investigation of the procedures associated with the strategic and tactical pairing of aircraft for these operations. This simulation study developed and examined the pilot and controller procedures and information requirements for creating aircraft pairs for parallel runway operations. The goal was to achieve aircraft pairing with a temporal separation of 15 s (+/- 10 s error) at a coupling point that was about 12 nmi from the runway threshold. Two variables were explored for the pilot participants: two levels of flight deck automation (current-day flight deck automation and auto speed control future automation) as well as two flight deck displays that assisted in pilot conformance monitoring. The controllers were also provided with automation to help create and maintain aircraft pairs. Results show the operations in this study were acceptable and safe. Subjective workload, when using the pairing procedures and tools, was generally low for both controllers and pilots, and situation awareness was typically moderate to high. Pilot workload was influenced by display type and automation condition. Further research on pairing and off-nominal conditions is required; however, this investigation identified promising findings about the feasibility of closely spaced parallel runway operations.
CREATING A "NEST" OF EMOTIONAL SAFETY: REFLECTIVE SUPERVISION IN A CHILD-PARENT PSYCHOTHERAPY CASE.
Many, Michele M; Kronenberg, Mindy E; Dickson, Amy B
2016-11-01
Reflective supervision is considered a key practice component for any infant mental health provider to work effectively with young children and their families. This article will provide a brief history and discussion of reflective supervision followed by a case study demonstrating the importance of reflective supervision in the context of child-parent psychotherapy (CPP; A.F. Lieberman, C. Ghosh Ippen, & P. Van Horn; A.F. Lieberman & P. Van Horn, 2008). Given that CPP leverages the caregiver-child relationship as the mechanism for change in young children who have been impacted by stressors and traumas, primary objectives of CPP include assisting caregivers as they understand the meaning of their child's distress and improving the caregiver-child relationship to make it a safe and supportive space in which the child can heal. As this case will demonstrate, when a clinician is emotionally triggered by a family's negative intergenerational patterns of relating, reflective supervision supports a parallel process in which the psychotherapist feels understood and contained by the supervisor so that she or he is able to support the caregiver's efforts to understand and contain the child. © 2016 Michigan Association for Infant Mental Health.
Parallel program debugging with flowback analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Jongdeok.
1989-01-01
This thesis describes the design and implementation of an integrated debugging system for parallel programs running on shared memory multi-processors. The goal of the debugging system is to present to the programmer a graphical view of the dynamic program dependences while keeping the execution-time overhead low. The author first describes the use of flowback analysis to provide information on causal relationships between events in a program's execution without re-executing the program for debugging. Execution-time overhead is kept low by recording only a small amount of trace during a program's execution. He uses semantic analysis and a technique called incremental tracing to keep the time and space overhead low. As part of the semantic analysis, he uses a static program dependence graph structure that reduces the amount of work done at compile time and takes advantage of the dynamic information produced during execution time. The cornerstone of the incremental tracing concept is to generate a coarse trace during execution and to fill incrementally, during the interactive portion of the debugging session, the gap between the information gathered in the coarse trace and the information needed to do the flowback analysis using the coarse trace. Then, he describes how to extend the flowback analysis to parallel programs. The flowback analysis can span process boundaries; i.e., the most recent modification to a shared variable might be traced to a different process than the one that contains the current reference. The static and dynamic program dependence graphs of the individual processes are tied together with synchronization and data dependence information to form complete graphs that represent the entire program.
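A toy illustration of the cross-process dependence that flowback analysis must resolve: given fine-grained read/write events, the most recent write to a shared variable before a given read may belong to a different process than the reader. The trace format and helper below are hypothetical and stand in for the thesis' coarse/incremental tracing machinery.

```python
# Toy trace: (logical_time, process_id, op, variable). In the thesis the fine-grained
# events are reconstructed on demand from a coarse execution-time trace; here we
# simply assume they are already available.
TRACE = [
    (1, "P0", "write", "x"),
    (2, "P1", "write", "x"),
    (3, "P0", "read",  "x"),
    (4, "P1", "read",  "x"),
]

def flowback(trace, read_event):
    """Return the most recent write to the variable read by `read_event`.

    The returned write may belong to a different process than the reader,
    which is the cross-process case flowback analysis has to handle.
    """
    t_read, _, op, var = read_event
    assert op == "read"
    writes = [e for e in trace if e[2] == "write" and e[3] == var and e[0] < t_read]
    return max(writes, key=lambda e: e[0]) if writes else None

print(flowback(TRACE, TRACE[2]))   # -> (2, 'P1', 'write', 'x'): P0's read depends on P1's write
```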
NASA Technical Reports Server (NTRS)
Eliason, J. T. (Inventor)
1976-01-01
A photovoltaic cell array consisting of parallel columns of silicon filaments is described. Each fiber is doped to produce an inner region of one polarity type and an outer region of an opposite polarity type, thereby forming a continuous radial semiconductor junction. Spaced rows of electrical contacts alternately connect to the inner and outer regions to provide a plurality of electrical outputs which may be combined in parallel or in series.
Study of solid rocket motors for a space shuttle booster. Volume 2, book 3: Cost estimating data
NASA Technical Reports Server (NTRS)
Vanderesch, A. H.
1972-01-01
Cost estimating data for the 156 inch diameter, parallel burn solid rocket propellant engine selected for the space shuttle booster are presented. The costing aspects on the baseline motor are initially considered. From the baseline, sufficient data is obtained to provide cost estimates of alternate approaches.
Evolutionary computing for the design search and optimization of space vehicle power subsystems
NASA Technical Reports Server (NTRS)
Kordon, M.; Klimeck, G.; Hanks, D.
2004-01-01
Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment.
Earth/Space Science Course No. 2001310. [Student Guide and] Teacher's Guide.
ERIC Educational Resources Information Center
Atkinson, Missy
These documents contain instructional materials for the Earth/Space Science curriculum designed by the Florida Department of Education. The student guide is adapted for students with disabilities or diverse learning needs. The content of Parallel Alternative Strategies for Students (PASS) differs from standard textbooks with its simplified text,…
First CLIPS Conference Proceedings, volume 2
NASA Technical Reports Server (NTRS)
1990-01-01
The topics of volume 2 of the First CLIPS Conference are associated with the following applications: quality control; intelligent databases and networks; Space Station Freedom; Space Shuttle and satellite; user interface; artificial neural systems and fuzzy logic; parallel and distributed processing; enhancements to CLIPS; aerospace; simulation and defense; advisory systems and tutors; and intelligent control.
Reliability models applicable to space telescope solar array assembly system
NASA Technical Reports Server (NTRS)
Patil, S. A.
1986-01-01
A complex system may consist of a number of subsystems with several components in series, parallel, or a combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series, and combination systems. The models are developed by assuming the failure rates of the components to be functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) system. The STSA consists of 20 identical solar panel assemblies (SPAs). The reliabilities of the SPAs are determined by the reliabilities of solar cell strings, interconnects, and diodes. The estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
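A minimal numerical sketch of the k-failures-out-of-n idea, assuming independent, identical components with a fixed failure probability q (the report itself uses time-dependent failure rates, and whether its k convention matches the one below is my assumption: here the subsystem survives as long as fewer than k components have failed).

```python
from math import comb

def system_reliability(n, k, q):
    """Probability that fewer than k of n independent components (each failing
    with probability q) have failed, i.e. the subsystem tolerates up to k-1 failures."""
    return sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k))

q = 0.05
print(system_reliability(5, 1, q))   # k = 1: any failure is fatal -> series behaviour, (1-q)**5
print(system_reliability(5, 5, q))   # k = n: all must fail        -> parallel behaviour, 1 - q**5
print((1 - q)**5, 1 - q**5)          # closed forms for comparison
```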
An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-09-01
Poisson disk sampling has excellent spatial and spectral properties and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported with regard to the problem of generating Poisson disks on surfaces due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to the conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
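A minimal sketch of the priority-based conflict resolution, simplified to Euclidean 2-D dart throwing in the unit square. The actual algorithm works intrinsically on surfaces with geodesic distances; the serial loop below computes the same sample set that a parallel, priority-based pass would converge to, since a candidate survives exactly when it does not conflict with any surviving candidate of higher priority.

```python
import math
import random

def poisson_disk_by_priority(n_candidates, r, seed=0):
    """Euclidean 2-D sketch of priority-based Poisson disk selection in the unit square.

    Each candidate receives a random, unique priority; processing candidates in
    decreasing priority order and accepting greedily reproduces the result a
    parallel implementation reaches by resolving conflicts through priority checks.
    """
    rng = random.Random(seed)
    candidates = [(rng.random(), (rng.random(), rng.random())) for _ in range(n_candidates)]
    candidates.sort(reverse=True)          # highest priority first
    accepted = []
    for _, (x, y) in candidates:
        if all(math.hypot(x - ax, y - ay) >= r for ax, ay in accepted):
            accepted.append((x, y))
    return accepted

samples = poisson_disk_by_priority(n_candidates=5000, r=0.05)
print(len(samples), "samples kept")
```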
Role of the Controller in an Integrated Pilot-Controller Study for Parallel Approaches
NASA Technical Reports Server (NTRS)
Verma, Savvy; Kozon, Thomas; Ballinger, Debbi; Lozito, Sandra; Subramanian, Shobana
2011-01-01
Closely spaced parallel runway operations have been found to increase capacity within the National Airspace System, but poor visibility conditions reduce the use of these operations [1]. Previous research examined the concepts and procedures related to parallel runways [2][4][5]. However, there has been no investigation of the procedures associated with the strategic and tactical pairing of aircraft for these operations. This study developed and examined the pilots' and controllers' procedures and information requirements for creating aircraft pairs for closely spaced parallel runway operations. The goal was to achieve aircraft pairing with a temporal separation of 15 s (+/- 10 s error) at a coupling point that was 12 nmi from the runway threshold. In this paper, the role of the controller, as examined in an integrated study of controllers and pilots, is presented. The controllers utilized a pairing scheduler and new pairing interfaces to help create and maintain aircraft pairs, in a high-fidelity, human-in-the-loop simulation experiment. Results show that the controllers worked as a team to achieve pairing between aircraft and that the level of inter-controller coordination increased when the aircraft in the pair belonged to different sectors. Controller feedback did not reveal over-reliance on the automation or complacency with the pairing automation or pairing procedures.
GW/Bethe-Salpeter calculations for charged and model systems from real-space DFT
NASA Astrophysics Data System (ADS)
Strubbe, David A.
GW and Bethe-Salpeter (GW/BSE) calculations use mean-field input from density-functional theory (DFT) calculations to compute excited states of a condensed-matter system. Many parts of a GW/BSE calculation are efficiently performed in a plane-wave basis, and extensive effort has gone into optimizing and parallelizing plane-wave GW/BSE codes for large-scale computations. Most straightforwardly, plane-wave DFT can be used as a starting point, but real-space DFT is also an attractive starting point: it is systematically convergeable like plane waves, can take advantage of efficient domain parallelization for large systems, and is well suited physically for finite and especially charged systems. The flexibility of a real-space grid also allows convenient calculations on non-atomic model systems. I will discuss the interfacing of a real-space (TD)DFT code (Octopus, www.tddft.org/programs/octopus) with a plane-wave GW/BSE code (BerkeleyGW, www.berkeleygw.org), consider performance issues and accuracy, and present some applications to simple and paradigmatic systems that illuminate fundamental properties of these approximations in many-body perturbation theory.
NASA Technical Reports Server (NTRS)
Fischer, James R.; Grosch, Chester; Mcanulty, Michael; Odonnell, John; Storey, Owen
1987-01-01
NASA's Office of Space Science and Applications (OSSA) gave a select group of scientists the opportunity to test and implement their computational algorithms on the Massively Parallel Processor (MPP) located at Goddard Space Flight Center, beginning in late 1985. One year later, the Working Group presented its report, which addressed the following: algorithms, programming languages, architecture, programming environments, the way theory relates, and performance measured. The findings point to a number of demonstrated computational techniques for which the MPP architecture is ideally suited. For example, besides executing much faster on the MPP than on conventional computers, systolic VLSI simulation (where distances are short), lattice simulation, neural network simulation, and image problems were found to be easier to program on the MPP's architecture than on a CYBER 205 or even a VAX. The report also makes technical recommendations covering all aspects of MPP use, and recommendations concerning the future of the MPP and machines based on similar architectures, expansion of the Working Group, and study of the role of future parallel processors for space station, EOS, and the Great Observatories era.
Towards a large-scale scalable adaptive heart model using shallow tree meshes
NASA Astrophysics Data System (ADS)
Krause, Dorian; Dickopf, Thomas; Potse, Mark; Krause, Rolf
2015-10-01
Electrophysiological heart models are sophisticated computational tools that place high demands on the computing hardware due to the high spatial resolution required to capture the steep depolarization front. To address this challenge, we present a novel adaptive scheme for resolving the depolarization front accurately using adaptivity in space. Our adaptive scheme is based on locally structured meshes. These tensor meshes in space are organized in a parallel forest of trees, which allows us to resolve complicated geometries and to realize high variations in the local mesh sizes with a minimal memory footprint in the adaptive scheme. We discuss both a non-conforming mortar element approximation and a conforming finite element space and present an efficient technique for the assembly of the respective stiffness matrices using matrix representations of the inclusion operators into the product space on the so-called shallow tree meshes. We analyzed the parallel performance and scalability for a two-dimensional ventricle slice as well as for a full large-scale heart model. Our results demonstrate that the method has good performance and high accuracy.
High-Frequency Subband Compressed Sensing MRI Using Quadruplet Sampling
Sung, Kyunghyun; Hargreaves, Brian A
2013-01-01
Purpose: To present and validate a new method that formalizes a direct link between k-space and wavelet domains in order to apply separate undersampling and reconstruction to high- and low-spatial-frequency k-space data. Theory and Methods: High- and low-spatial-frequency regions are defined in k-space based on the separation of wavelet subbands, and the conventional compressed sensing (CS) problem is transformed into one of localized k-space estimation. To better exploit wavelet-domain sparsity, CS can be used for high-spatial-frequency regions while parallel imaging can be used for low-spatial-frequency regions. Fourier undersampling is also customized to better accommodate each reconstruction method: random undersampling for CS and regular undersampling for parallel imaging. Results: Examples using the proposed method demonstrate successful reconstruction of both low-spatial-frequency content and fine structures in high-resolution 3D breast imaging with a net acceleration of 11 to 12. Conclusion: The proposed method improves the reconstruction accuracy of high-spatial-frequency signal content and avoids incoherent artifacts in low-spatial-frequency regions. This new formulation also reduces the reconstruction time due to the smaller problem size. PMID:23280540
NASA Astrophysics Data System (ADS)
Glas, Frank
2003-06-01
We give a fully analytical solution for the displacement and strain fields generated by the coherent elastic relaxation of a type of misfitting inclusions with uniform dilatational eigenstrain lying in a half space, assuming linear isotropic elasticity. The inclusion considered is an infinitely long circular cylinder having an axis parallel to the free surface and truncated by two arbitrarily positioned planes parallel to this surface. These calculations apply in particular to strained semiconductor quantum wires. The calculations are illustrated by examples showing quantitatively that, depending on the depth of the wire under the free surface, the latter may significantly affect the magnitude and the distribution of the various strain components inside the inclusion as well as in the surrounding matrix.
2nd-Order CESE Results For C1.4: Vortex Transport by Uniform Flow
NASA Technical Reports Server (NTRS)
Friedlander, David J.
2015-01-01
The Conservation Element and Solution Element (CESE) method was used as implemented in the NASA research code ez4d. The CESE method is a time-accurate formulation with flux conservation in both space and time. The method treats the discretized derivatives of space and time identically; while higher-order versions exist, the 2nd-order accurate version was used here. The ez4d code is an unstructured Navier-Stokes solver coded in C++ with serial and parallel versions available. As part of its architecture, ez4d can utilize multi-threading and the Message Passing Interface (MPI) for parallel runs.
A proposed experimental search for chameleons using asymmetric parallel plates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burrage, Clare; Copeland, Edmund J.; Stevenson, James A., E-mail: Clare.Burrage@nottingham.ac.uk, E-mail: ed.copeland@nottingham.ac.uk, E-mail: james.stevenson@nottingham.ac.uk
2016-08-01
Light scalar fields coupled to matter are a common consequence of theories of dark energy and attempts to solve the cosmological constant problem. The chameleon screening mechanism is commonly invoked in order to suppress the fifth forces mediated by these scalars, sufficiently to avoid current experimental constraints, without fine tuning. The force is suppressed dynamically by allowing the mass of the scalar to vary with the local density. Recently it has been shown that near future cold atoms experiments using atom-interferometry have the ability to access a large proportion of the chameleon parameter space. In this work we demonstrate how experiments utilising asymmetric parallel plates can push deeper into the remaining parameter space available to the chameleon.
Space Biotechnology and Commercial Applications University of Florida
NASA Technical Reports Server (NTRS)
Phillips, Winfred; Evanich, Peggy L.
2004-01-01
The Space Biotechnology and Commercial Applications grant was funded by NASA's Kennedy Space Center in FY 2002 to provide dedicated biotechnology and agricultural research focused on the regeneration of space flight environments with direct parallels in Earth-based applications for solving problems in the environment, advances in agricultural science, and other human support issues amenable to targeted biotechnology solutions. This grant had three project areas, each with multiple tasks. They are: 1) Space Agriculture and Biotechnology Research and Education, 2) Integrated Smart Nanosensors for Space Biotechnology Applications, and 3) Commercial Applications. The Space Agriculture and Biotechnology Research and Education (SABRE) Center emphasized the fundamental biology of organisms involved in space flight applications, including those involved in advanced life support environments because of their critical role in the long-term exploration of space. The SABRE Center supports research at the University of Florida and at the Space Life Sciences Laboratory (SLSL) at the Kennedy Space Center. The Integrated Smart Nanosensors for Space Biotechnology Applications component focused on developing and applying sensor technologies to space environments and agricultural systems. The research activities in nanosensors were coordinated with the SABRE portions of this grant and with the research sponsored by the NASA Environmental Systems Commercial Space Technology Center located in the Department of Environmental Engineering Sciences. Initial sensor efforts have focused on air and water quality monitoring essential to humans for living and working permanently in space, an important goal identified in NASA's strategic plan. The closed environment of a spacecraft or planetary base accentuates cause and effect relationships and environmental impacts. The limited available air and water resources emphasize the need for reuse, recycling, and system monitoring. It is essential to collect real-time information from these systems to ensure crew safety. This new class of nanosensors will be critical to monitoring the space flight environment in future NASA space systems. The Commercial Applications component of this program pursued industry partnerships to develop products for terrestrial use of NASA sponsored technologies, and in turn to stimulate growth in the biotechnology industry. For technologies demonstrating near term commercial potential, the objective is to include industry partners on or about the time of proof of concept that will not only co-invest in the technology but also take the resultant technology to the commercial market.
Social Conflict: The Negative Aspect of Social Relations.
ERIC Educational Resources Information Center
Abbey, Antonia; Rovine, Michael
Interpersonal relationships can be nonsupportive as well as supportive. A study was conducted to investigate the negative aspects of social relations which parallel two positive components of social relations, esteem support and affirmative support. If social support represents the positive aspects of interpersonal relationships, social conflict…
Zhao, Jinsong; Wang, Zhipeng; Zhang, Chuanbi; Yang, Chifu; Bai, Wenjie; Zhao, Zining
2018-06-01
The shaking table based on an electro-hydraulic servo parallel mechanism has the advantage of a strong carrying capacity. However, the strong coupling caused by an eccentric load not only degrades the control precision in degree-of-freedom space but also complicates the system control. A novel decoupling control strategy based on modal space is proposed to solve the coupling problem for a parallel mechanism with an eccentric load. The phenomenon of strong dynamic coupling among the degrees of freedom is demonstrated experimentally, and its influence on control design is discussed. Considering the particularity of plane motion, the dynamic model is built by the Lagrangian method to avoid complex calculations. The dynamic equations of the coupled physical space are transformed into the dynamic equations of the decoupled modal space by using the weighted orthogonality of the modal mode shapes with respect to the mass and stiffness matrices. In the modal space, the adjustments of the modal channels are independent of each other. Moreover, to give the modal channels identical closed-loop dynamic characteristics and thereby realize decoupling in degree-of-freedom space, a modal-space three-state feedback control is proposed to expand the frequency bandwidth of each modal channel and ensure near-identical responses over a larger frequency range. Experimental results show that the modal-space three-state feedback control proposed in this paper effectively reduces the strong coupling between degree-of-freedom channels, verifying the effectiveness of the proposed modal-space state feedback control strategy for improving the control performance of the electro-hydraulic servo plane redundant driving mechanism. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
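A minimal numerical sketch of the modal-space decoupling idea, using an illustrative 2-DOF mass/stiffness pair of my own choosing rather than the shaking-table model: the generalized eigenvectors are orthogonal with respect to both M and K, so the transformed equations split into independent single-DOF channels that can be controlled separately.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical coupled 2-DOF system (the off-diagonal terms mimic the coupling
# introduced by an eccentric load).
M = np.array([[2.0, 0.3],
              [0.3, 1.0]])
K = np.array([[5.0e4, -1.2e4],
              [-1.2e4, 3.0e4]])

# Generalized eigenproblem K·phi = lambda·M·phi gives the modal matrix Phi.
w2, Phi = eigh(K, M)

# Weighted orthogonality: Phi^T M Phi and Phi^T K Phi are (numerically) diagonal,
# so each modal channel can be shaped independently before transforming back.
Mm = Phi.T @ M @ Phi
Km = Phi.T @ K @ Phi
print("modal mass matrix:\n", np.round(Mm, 6))
print("modal stiffness matrix:\n", np.round(Km, 2))
print("modal frequencies [Hz]:", np.sqrt(w2) / (2 * np.pi))
```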
High resolution solar observations in the context of space weather prediction
NASA Astrophysics Data System (ADS)
Yang, Guo
Space weather has a great impact on the Earth and human life. It is important to study and monitor active regions on the solar surface and ultimately to predict space weather based on the Sun's activity. In this study, a system that uses the full power of speckle masking imaging, with parallel processing, to obtain high-spatial-resolution images of the solar surface in near real time has been developed and built. The application of this system greatly improves the ability to monitor the evolution of solar active regions and to predict the adverse effects of space weather. The data obtained by this system have also been used to study fine structures on the solar surface and their effects on the upper solar atmosphere. A solar active region has been studied using high-resolution data obtained by speckle masking imaging. The evolution of a pore in an active region is presented, the formation of a rudimentary penumbra is studied, and the effects of the changing magnetic fields on the upper-level atmosphere are discussed. Coronal Mass Ejections (CMEs) have a great impact on space weather. To study the relationship between CMEs and filament disappearance, a list of 431 filament and prominence disappearance events has been compiled. Comparison of this list with CME data obtained by satellite has shown that most filament disappearances seem to have no corresponding CME events. Even for the limb events, only thirty percent of filament disappearances are associated with CMEs. A CME event that was observed on March 20, 2000 has been studied in detail. This event did not show the three-part structure of typical CMEs. The kinematical and morphological properties of this event were examined.
Parallel grid generation algorithm for distributed memory computers
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Moitra, Anutosh
1994-01-01
A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple instruction multiple data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.
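As a hedged illustration of the algebraic idea, the sketch below blends two boundary curves with a simple linear homotopy and a power-law stretching for spacing control near the inner boundary; the paper's actual homotopic formulation and its iPSC/860 decomposition are not reproduced, and all names and parameters are illustrative.

```python
import numpy as np

def homotopic_grid(inner, outer, n_levels, stretch=1.0):
    """Algebraic grid by homotopic (linear) blending between two boundary curves.

    inner, outer : (n_pts, 2) arrays of corresponding boundary points.
    n_levels     : number of grid surfaces between the boundaries.
    stretch      : > 1 clusters grid surfaces toward the inner boundary,
                   giving simple spacing control near a solid wall.
    Each blending level is independent of the others, which is what makes the
    scheme easy to split across processors with minimal communication.
    """
    s = np.linspace(0.0, 1.0, n_levels)[:, None, None] ** stretch
    return (1.0 - s) * inner[None] + s * outer[None]   # shape (n_levels, n_pts, 2)

# Example: blend from a unit circle (body) to a larger ellipse (outer boundary).
theta = np.linspace(0, 2 * np.pi, 73)
inner = np.stack([np.cos(theta), np.sin(theta)], axis=1)
outer = np.stack([4 * np.cos(theta), 3 * np.sin(theta)], axis=1)
grid = homotopic_grid(inner, outer, n_levels=33, stretch=1.5)
print(grid.shape)
```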
14 CFR 1260.137 - Property trust relationship.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Property trust relationship. 1260.137 Section 1260.137 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION GRANTS AND... Property trust relationship. Real property, equipment, intangible property and debt instruments that are...
Anderson, J.B.
1960-01-01
A reactor is described which comprises a tank, a plurality of coaxial steel sleeves in the tank, a mass of water in the tank, and wire grids in abutting relationship within a plurality of elongated parallel channels within the steel sleeves, the wire being provided with a plurality of bends in the same plane forming adjacent parallel sections between bends, and the sections of adjacent grids being normally disposed relative to each other.
ERIC Educational Resources Information Center
Robinson, Clyde C.; Anderson, Genan T.; Porter, Christin L.; Hart, Craig, H.; Wouden-Miller, Melissa
2003-01-01
Explored the simultaneous sequential transition patterns of preschoolers' social play within classroom settings. Found that the proportion of social-play states did not vary during play episodes even when accounting for type of activity center, gender, and SES. Found a reciprocal relationship between parallel-aware and other social-play states…
A Tracker for Broken and Closely-Spaced Lines
1997-10-01
to combine the current level flow estimate and the previous level flow estimate. However, the result is still not good enough for some reasons. First...geometric attributes are not good enough to discriminate line segments, when they are crowded, parallel and closely-spaced to each other. On the other...level information [10]. Still, it is not good at dealing with closely-spaced line segments. Because it requires a proper size of square neighborhood to
NASA Astrophysics Data System (ADS)
Vnukov, A. A.; Shershnev, M. B.
2018-01-01
The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of algorithms and to study the relationship between system performance, algorithm execution time and the degree of parallelization of computations. Three methods of interpolation were studied, formalized and adapted to scale images. The result of the work is a program for scaling images by different methods. Comparison of the quality of scaling by different methods is given.
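A minimal sketch of the kind of parallelization studied here, using nearest-neighbour interpolation and splitting the output rows across worker processes. The original application implements three interpolation methods and a Windows GUI, which are not reproduced; the function names and parameters below are illustrative.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def scale_rows_nearest(args):
    """Nearest-neighbour interpolation for a horizontal band of the output image."""
    src, rows, scale = args
    h, w = src.shape
    out = np.empty((len(rows), int(w * scale)), dtype=src.dtype)
    for i, r in enumerate(rows):
        sr = min(int(r / scale), h - 1)          # nearest source row
        for c in range(out.shape[1]):
            sc = min(int(c / scale), w - 1)      # nearest source column
            out[i, c] = src[sr, sc]
    return out

def scale_image(src, scale, workers=4):
    """Split the output rows into bands and process the bands in parallel."""
    out_rows = np.arange(int(src.shape[0] * scale))
    bands = np.array_split(out_rows, workers)
    with ProcessPoolExecutor(max_workers=workers) as ex:
        parts = list(ex.map(scale_rows_nearest, [(src, band, scale) for band in bands]))
    return np.vstack(parts)

if __name__ == "__main__":
    img = (np.random.rand(64, 64) * 255).astype(np.uint8)
    big = scale_image(img, scale=2.0)
    print(img.shape, "->", big.shape)
```

Timing this routine with different `workers` values is one way to study the relationship between the degree of parallelization and execution time that the abstract describes.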
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsugane, Keisuke; Boku, Taisuke; Murai, Hitoshi
Recently, the Partitioned Global Address Space (PGAS) parallel programming model has emerged as a usable distributed memory programming model. XcalableMP (XMP) is a PGAS parallel programming language that extends base languages such as C and Fortran with directives in OpenMP-like style. XMP supports a global-view model that allows programmers to define global data and to map them to a set of processors, which execute the distributed global data as a single thread. In XMP, the concept of a coarray is also employed for local-view programming. In this study, we port Gyrokinetic Toroidal Code - Princeton (GTC-P), which is a three-dimensional gyrokinetic PIC code developed at Princeton University to study the microturbulence phenomenon in magnetically confined fusion plasmas, to XMP as an example of hybrid memory model coding with the global-view and local-view programming models. In local-view programming, the coarray notation is simple and intuitive compared with Message Passing Interface (MPI) programming while the performance is comparable to that of the MPI version. Thus, because the global-view programming model is suitable for expressing the data parallelism for a field of grid space data, we implement a hybrid-view version using a global-view programming model to compute the field and a local-view programming model to compute the movement of particles. Finally, the performance is degraded by 20% compared with the original MPI version, but the hybrid-view version facilitates more natural data expression for static grid space data (in the global-view model) and dynamic particle data (in the local-view model), and it also increases the readability of the code for higher productivity.
On a model of three-dimensional bursting and its parallel implementation
NASA Astrophysics Data System (ADS)
Tabik, S.; Romero, L. F.; Garzón, E. M.; Ramos, J. I.
2008-04-01
A mathematical model for the simulation of three-dimensional bursting phenomena and its parallel implementation are presented. The model consists of four nonlinearly coupled partial differential equations that include fast and slow variables and exhibits bursting in the absence of diffusion. The differential equations have been discretized by means of a linearly-implicit finite difference method, second-order accurate in both space and time, on equally-spaced grids. The resulting system of linear algebraic equations at each time level has been solved by means of the Preconditioned Conjugate Gradient (PCG) method. Three different parallel implementations of the proposed mathematical model have been developed; two of these implementations, i.e., the MPI and the PETSc codes, are based on a message passing paradigm, while the third one, i.e., the OpenMP code, is based on a shared address space paradigm. These three implementations are evaluated on two current high-performance parallel architectures, i.e., a dual-processor cluster and a Shared Distributed Memory (SDM) system. A novel representation of the results that emphasizes the most relevant factors affecting the performance of the parallel implementations is proposed. The comparative analysis of the computational results shows that the MPI and the OpenMP implementations are about twice as efficient as the PETSc code on the SDM system. It is also shown that, for the conditions reported here, the nonlinear dynamics of the three-dimensional bursting phenomena exhibits three stages characterized by asynchronous, synchronous and then asynchronous oscillations before a quiescent state is reached. It is also shown that the fast system reaches steady state in much less time than the slow variables.
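A minimal sketch of the numerical core described above, reduced to a single reaction-diffusion variable in 1-D (the actual model couples four variables in 3-D, and the reaction term below is my own placeholder): a linearly-implicit step with a lagged nonlinear term leads to a symmetric positive-definite sparse system, solved here with conjugate gradients.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n, L, D, dt, steps = 200, 1.0, 1e-3, 0.01, 100
dx = L / (n - 1)

# Standard second-difference Laplacian on a uniform 1-D grid.
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr") / dx**2

# Linearly-implicit step: (I - dt*D*Lap) u^{n+1} = u^n + dt*f(u^n).
A = sp.identity(n, format="csr") - dt * D * lap

u = np.exp(-((np.linspace(0, L, n) - 0.5 * L) ** 2) / 0.01)   # initial bump
for _ in range(steps):
    rhs = u + dt * (u * (1.0 - u))       # explicit (lagged) nonlinear reaction term
    u, info = cg(A, rhs, x0=u)           # conjugate gradients on the SPD system
    assert info == 0
print(u.min(), u.max())
```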
Scalable direct Vlasov solver with discontinuous Galerkin method on unstructured mesh.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J.; Ostroumov, P. N.; Mustapha, B.
2010-12-01
This paper presents the development of parallel direct Vlasov solvers with the discontinuous Galerkin (DG) method for beam and plasma simulations in four dimensions. Both physical and velocity spaces are two-dimensional (2P2V) with unstructured mesh. Contrary to the standard particle-in-cell (PIC) approach for kinetic space plasma simulations, i.e., solving the Vlasov-Maxwell equations, a direct method has been used in this paper. There are several benefits to solving a Vlasov equation directly, such as avoiding noise associated with a finite number of particles and the capability to capture fine structure in the plasma. The most challenging part of a direct Vlasov solver comes from the higher dimensionality, as the computational cost increases as N^{2d}, where d is the dimension of the physical space. Recently, due to the fast development of supercomputers, the possibility has become more realistic. Many efforts have been made to solve Vlasov equations in low dimensions before; now more interest has focused on higher dimensions. Different numerical methods have been tried so far, such as the finite difference method, the Fourier spectral method, the finite volume method, and the spectral element method. This paper is based on our previous efforts to use the DG method. The DG method has been proven to be very successful in solving Maxwell equations, and this paper is our first effort in applying the DG method to Vlasov equations. DG has shown several advantages, such as a local mass matrix, strong stability, and easy parallelization. These are particularly suitable for Vlasov equations. Domain decomposition in high dimensions has been used for parallelization; this includes a highly scalable parallel two-dimensional Poisson solver. Benchmark results have been shown and simulation results will be reported.
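For orientation, the electrostatic Vlasov-Poisson system that such a direct solver discretizes can be written in the standard textbook form below (given here for context, not quoted from the report; in the 2P2V setting, x and v each range over R²):

```latex
\frac{\partial f}{\partial t}
  + \mathbf{v}\cdot\nabla_{\mathbf{x}} f
  + \frac{q}{m}\,\mathbf{E}\cdot\nabla_{\mathbf{v}} f = 0,
\qquad
\nabla_{\mathbf{x}}\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0},
\qquad
\rho(\mathbf{x},t) = q\!\int f(\mathbf{x},\mathbf{v},t)\,\mathrm{d}\mathbf{v}.
```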
A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging
Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.
2012-01-01
Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine these two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, “CS+GRAPPA,” to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets, reconstructed each subset using CS, and averaged the results to obtain a final CS k-space reconstruction. We used both a standard CS reconstruction and an edge- and joint-sparsity-guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge- and joint-sparsity-guided CS with two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique allowed the same Case-PDM scores as standard GRAPPA, using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065
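The decomposition step described above can be outlined as follows; the function names are hypothetical and `cs_reconstruct` is a placeholder for any CS solver, so this is only a sketch of the subset-and-average idea, not the published CS+GRAPPA code.

```python
import numpy as np

def decompose_equidistant_lines(sampled_lines, n_subsets=2, seed=0):
    """Randomly split a set of equidistantly sampled k-space line indices
    into n_subsets disjoint subsets, so each subset looks incoherent."""
    rng = np.random.default_rng(seed)
    lines = np.array(sampled_lines)
    rng.shuffle(lines)
    return np.array_split(lines, n_subsets)

def averaged_cs_kspace(kspace, sampled_lines, cs_reconstruct, n_subsets=2):
    """Reconstruct each random subset with CS and average the resulting
    k-space estimates; cs_reconstruct(kspace, mask) stands in for any CS routine."""
    recons = []
    for subset in decompose_equidistant_lines(sampled_lines, n_subsets):
        mask = np.zeros(kspace.shape[0], dtype=bool)
        mask[subset] = True
        recons.append(cs_reconstruct(kspace, mask))
    return np.mean(recons, axis=0)
```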
Occlusal traits of deciduous dentition of preschool Indian children
Bahadure, Rakesh N.; Thosar, Nilima; Gaikwad, Rahul
2012-01-01
Objectives: To assess the occlusal relationship, canine relationship, crowding, primate spaces, and anterior spacing in both maxillary and mandibular arches of the primary dentition of Indian children of Wardha District, and also to study age-wise differences in occlusal characteristics. Materials and Methods: A total of 1053 (609 males and 444 females) children of the 3-5 year age group with complete primary dentition were examined for occlusal relationship, canine relationship, crowding, primate spaces, and anterior spacing in both maxillary and mandibular arches. Results: The data after evaluation showed significant values for all parameters except mandibular anterior spacing, which was 47.6%. Mild crowding was prevalent in the 5 year age group and moderate crowding was common in the 3 year age group. Conclusion: Evaluated parameters such as terminal molar relationship and canine relationship were predominantly progressing toward normal, but contacts and crowding status contributed almost equally to physiologic anterior spacing. The five-year age group showed higher values with respect to all the parameters. PMID:23633806
42 CFR 456.714 - DUR/surveillance and utilization review relationship.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 4 2013-10-01 2013-10-01 false DUR/surveillance and utilization review.../surveillance and utilization review relationship. (a) The retrospective DUR requirements in this subpart parallel a portion of the surveillance and utilization review (SUR) requirements in subpart A of this part...
NASA Technical Reports Server (NTRS)
Shen, Bo-Wen; Cheung, Samson; Li, Jui-Lin F.; Wu, Yu-ling
2013-01-01
In this study, we discuss the performance of the parallel ensemble empirical mode decomposition (EMD) in the analysis of tropical waves that are associated with tropical cyclone (TC) formation. To efficiently analyze high-resolution, global, multiple-dimensional data sets, we first implement multilevel parallelism into the ensemble EMD (EEMD) and obtain a parallel speedup of 720 using 200 eight-core processors. We then apply the parallel EEMD (PEEMD) to extract the intrinsic mode functions (IMFs) from preselected data sets that represent (1) idealized tropical waves and (2) large-scale environmental flows associated with Hurricane Sandy (2012). Results indicate that the PEEMD is efficient and effective in revealing the major wave characteristics of the data, such as wavelengths and periods, by sifting out the dominant (wave) components. This approach has potential for hurricane climate studies by examining the statistical relationship between tropical waves and TC formation.
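A minimal sketch of the ensemble-level parallelism in EEMD is given below; the `emd` routine here is a crude placeholder to be replaced by a real sifting implementation, so this only illustrates the embarrassingly parallel structure of the ensemble, not the multilevel scheme used in the paper.

```python
import numpy as np
from multiprocessing import Pool

def emd(signal, n_imfs=4):
    """Crude placeholder for a real EMD routine (e.g., from an external
    library): repeatedly splits off the residual of a moving-average smooth.
    Replace with a proper sifting implementation for real use."""
    imfs, residue = [], signal.astype(float)
    for _ in range(n_imfs):
        smooth = np.convolve(residue, np.ones(11) / 11.0, mode="same")
        imfs.append(residue - smooth)
        residue = smooth
    return imfs + [residue]

def _one_trial(args):
    """Decompose one noise-added copy of the signal (one ensemble member)."""
    signal, noise_std, seed = args
    rng = np.random.default_rng(seed)
    return emd(signal + rng.normal(0.0, noise_std, signal.size))

def parallel_eemd(signal, n_trials=100, noise_std=0.2, n_workers=8):
    """Ensemble EMD: the noise-added trials are independent, so they are
    decomposed in parallel and the IMFs are averaged across the ensemble."""
    tasks = [(signal, noise_std, k) for k in range(n_trials)]
    with Pool(n_workers) as pool:
        all_imfs = pool.map(_one_trial, tasks)
    n_imfs = min(len(imfs) for imfs in all_imfs)
    return [np.mean([imfs[i] for imfs in all_imfs], axis=0) for i in range(n_imfs)]
```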
NASA Technical Reports Server (NTRS)
Juhasz, A. J.; Bloomfield, H. S.
1985-01-01
A combinatorial reliability approach is used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis is also performed, specifically for a 100 kWe nuclear Brayton power conversion system with parallel redundancy. Although this study is done for a reactor outlet temperature of 1100 K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.
Productive High Performance Parallel Programming with Auto-tuned Domain-Specific Embedded Languages
2013-01-02
NASA Astrophysics Data System (ADS)
Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao
2017-01-01
Multi-scale high-resolution modeling of the rock failure process is a powerful means in modern rock mechanics studies to reveal the complex failure mechanism and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation and damage to failure, places high demands on the design, implementation scheme and computational capacity of the numerical software system. This study is aimed at developing the parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator that is capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties and to represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of each representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows - Linux interactive platform. A numerical model is built to test the parallel performance of the FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, and field-scale net fracture spacing and engineering-scale rock slope examples, respectively. The simulation results indicate that relatively high speedup and computational efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In the laboratory-scale simulation, well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In the field-scale simulation, the formation process of net fracture spacing from initiation and propagation to saturation can be revealed completely. In the engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.
NASA Astrophysics Data System (ADS)
Furuichi, M.; Nishiura, D.
2015-12-01
Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are suitable for problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flow and tectonic processes, for example, tsunami with free surfaces and floating bodies, magma intrusion with fracture of rock, and shear zone pattern generation of granular deformation. In order to investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for realistic simulation. Parallel computing is therefore important for handling such huge computational cost. An efficient parallel implementation of SPH and DEM methods is, however, known to be difficult, especially for distributed-memory architectures. Lagrangian methods inherently show a workload imbalance problem when parallelized with domains fixed in space, because particles move around and workloads change during the simulation. Therefore dynamic load balancing is a key technique for performing large scale SPH and DEM simulations. In this work, we present a parallel implementation technique for SPH and DEM methods utilizing dynamic load balancing algorithms toward high resolution simulation over large domains on massively parallel supercomputer systems. Our method uses the imbalances in the execution times of the MPI processes as the nonlinear term of the parallel domain decomposition and minimizes them with a Newton-like iteration method. In order to perform flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our approach is suitable for solving particles with different calculation costs (e.g. boundary particles) as well as for heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (the K computer, Earth Simulator 3, etc.).
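The following sketch illustrates one ingredient of such dynamic load balancing: shifting one-dimensional slice boundaries so that the measured per-rank work is equalized. It assumes the cost per unit length is uniform inside each current slice and is only an illustration, not the authors' Newton-type scheme.

```python
import numpy as np

def rebalance_slices(boundaries, measured_times):
    """One rebalancing step for a 1D slice-grid domain decomposition.
    boundaries: length n_ranks+1 array of current slice edges.
    measured_times: wall-clock time each rank spent in the last interval.
    New edges are chosen so the estimated work per rank is equal, assuming
    the cost per unit length is uniform inside each current slice."""
    boundaries = np.asarray(boundaries, dtype=float)
    cum_work = np.concatenate(([0.0], np.cumsum(measured_times)))
    targets = np.linspace(0.0, cum_work[-1], len(boundaries))
    new_edges = np.interp(targets, cum_work, boundaries)  # invert the piecewise-linear work profile
    new_edges[0], new_edges[-1] = boundaries[0], boundaries[-1]
    return new_edges

# Example: rank 2 is overloaded, so its slice shrinks after one step.
print(rebalance_slices([0, 25, 50, 75, 100], [1.0, 1.0, 3.0, 1.0]))
```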
14 CFR 1250.112 - Relationship with other officials.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Relationship with other officials. 1250.112 Section 1250.112 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION NONDISCRIMINATION IN... Relationship with other officials. NASA officials, in performing the functions assigned to them by this part...
Message Passing and Shared Address Space Parallelism on an SMP Cluster
NASA Technical Reports Server (NTRS)
Shan, Hongzhang; Singh, Jaswinder P.; Oliker, Leonid; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2002-01-01
Currently, message passing (MP) and shared address space (SAS) are the two leading parallel programming paradigms. MP has been standardized with MPI, and is the more common and mature approach; however, code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of and the programming effort required for six applications under both programming models on a 32-processor PC-SMP cluster, a platform that is becoming increasingly attractive for high-end scientific computing. Our application suite consists of codes that typically do not exhibit scalable performance under shared-memory programming due to their high communication-to-computation ratios and/or complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications, while being competitive for the others. A hybrid MPI+SAS strategy shows only a small performance advantage over pure MPI in some cases. Finally, improved implementations of two MPI collective operations on PC-SMP clusters are presented.
A quasilinear kinetic model for solar wind electrons and protons instabilities
NASA Astrophysics Data System (ADS)
Sarfraz, M.; Yoon, P. H.
2017-12-01
In situ measurements confirm the anisotropic behavior of the temperatures of solar wind species. These anisotropies associated with charged particles are observed to be relaxed. In the collisionless limit, kinetic instabilities play a significant role in reshaping the particle distributions. The linear analysis results are encapsulated in an inverse relationship between anisotropy and plasma beta, based on fits to observations, simulation methods, or solutions of the linearized Vlasov equation. Here a macroscopic quasilinear technique is adopted to confirm this inverse relationship through solutions of a set of self-consistent kinetic equations. First, for a homogeneous and collisionless medium, the quasilinear kinetic model is employed to display the asymptotic variations of core and halo electron temperatures and the saturation of wave energy densities for the electromagnetic electron cyclotron (EMEC) instability, driven by T⊥ > T∥. It is shown that, in (β∥, T⊥/T∥) phase space, the saturation stages of the anisotropies associated with core and halo electrons line up on their respective marginal stability curves. Second, for the case of the electron firehose instability, ignited by excessive parallel temperature, i.e., T∥ > T⊥, both electrons and protons are allowed to evolve dynamically in time. It is also observed that the trajectories of protons and electrons at the saturation stages in the phase space of anisotropy and plasma beta correspond to the proton cyclotron and firehose marginal stability curves, respectively. Next, the outstanding issue that most of the observed proton data resides in a nearly isotropic state in phase space is interpreted. Here, in the quasilinear framework of an inhomogeneous solar wind system, a set of self-consistent quasilinear equations is formulated to show the dynamical variation of temperatures with spatial distribution. For different choices of initial parameters, it is shown that the interplay of electron and proton instabilities provides a counter-balancing force to slow down the protons away from the marginal stability states. As we treat both protons and electrons in a radially expanding solar wind plasma, our present approach may eventually be incorporated into global kinetic models of the solar wind species.
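For readers unfamiliar with the anisotropy-beta diagrams mentioned above, the snippet below evaluates the fitting form commonly used to describe marginal-stability curves in that phase space; the parameter values are purely illustrative, not fits from this study.

```python
import numpy as np

def anisotropy_threshold(beta_par, a, b, beta0=0.0):
    """Commonly used fitting form for marginal-stability curves in
    (beta_parallel, T_perp/T_par) space: A = 1 + a / (beta_par - beta0)**b.
    The parameters a, b and beta0 are instability-specific fit constants."""
    return 1.0 + a / (beta_par - beta0) ** b

# Illustrative (not fitted) parameters for a cyclotron-like threshold curve.
beta = np.logspace(-2, 1, 50)
curve = anisotropy_threshold(beta, a=0.4, b=0.5)
```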
14 CFR 1214.306 - Payload specialist relationship with sponsoring institutions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Payload specialist relationship with sponsoring institutions. 1214.306 Section 1214.306 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION SPACE FLIGHT Payload Specialists for Space Transportation System (STS) Missions § 1214.306 Payload...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seal, Sudip K; Perumalla, Kalyan S; Hirshman, Steven Paul
2013-01-01
Simulations that require solutions of block tridiagonal systems of equations rely on fast parallel solvers for runtime efficiency. Leading parallel solvers that are highly effective for general systems of equations, dense or sparse, are limited in scalability when applied to block tridiagonal systems. This paper presents scalability results as well as detailed analyses of two parallel solvers that exploit the special structure of block tridiagonal matrices to deliver superior performance, often by orders of magnitude. A rigorous analysis of their relative parallel runtimes is shown to reveal the existence of a critical block size that separates the parameter space spanned by the number of block rows, the block size and the processor count, into distinct regions that favor one or the other of the two solvers. Dependence of this critical block size on the above parameters as well as on machine-specific constants is established. These formal insights are supported by empirical results on up to 2,048 cores of a Cray XT4 system. To the best of our knowledge, this is the highest reported scalability for parallel block tridiagonal solvers to date.
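For reference, a sequential block-Thomas elimination, the textbook baseline that parallel block tridiagonal solvers improve upon, can be sketched as follows; it is not either of the two solvers analyzed in the paper.

```python
import numpy as np

def block_thomas(A_sub, A_diag, A_sup, rhs):
    """Sequential block-Thomas elimination for a block tridiagonal system.
    A_sub, A_diag, A_sup: arrays of shape (n, b, b) holding the sub-, main-
    and super-diagonal blocks (A_sub[0] and A_sup[-1] are ignored).
    rhs: array of shape (n, b).  Returns the solution of shape (n, b)."""
    n = A_diag.shape[0]
    diag = A_diag.copy()
    r = rhs.copy()
    # Forward elimination of the sub-diagonal blocks.
    for i in range(1, n):
        factor = A_sub[i] @ np.linalg.inv(diag[i - 1])
        diag[i] = diag[i] - factor @ A_sup[i - 1]
        r[i] = r[i] - factor @ r[i - 1]
    # Back substitution.
    x = np.empty_like(r)
    x[-1] = np.linalg.solve(diag[-1], r[-1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(diag[i], r[i] - A_sup[i] @ x[i + 1])
    return x
```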
NASA Technical Reports Server (NTRS)
Falls, L. W.; Crutcher, H. L.
1976-01-01
Transformation of statistics from one dimensional set to another involves linear functions of the original set of statistics. Similarly, linear functions will transform statistics within a dimensional set such that the new statistics are relevant to a new set of coordinate axes. A restricted case of the latter is the rotation of axes in a coordinate system involving any two correlated random variables. A special case is the transformation for horizontal wind distributions. Wind statistics are usually provided in terms of wind speed and direction (measured clockwise from north) or in east-west and north-south components. A direct application of this technique allows the determination of appropriate wind statistics parallel and normal to any preselected flight path of a space vehicle. Among the constraints for launching space vehicles are critical values selected from the distribution of the expected winds parallel to and normal to the flight path. These procedures are applied to space vehicle launches at Cape Kennedy, Florida.
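A small sketch of the special case mentioned above, resolving an (east, north) wind vector into components parallel and normal to a flight path of given azimuth, follows; the function name and example values are illustrative only, and second-moment statistics (variances, covariances) would require the corresponding quadratic transformation.

```python
import numpy as np

def along_cross_track_wind(u_east, v_north, flight_azimuth_deg):
    """Resolve wind components (east, north) into components parallel and
    normal to a flight path whose azimuth is measured clockwise from north."""
    az = np.radians(flight_azimuth_deg)
    along = u_east * np.sin(az) + v_north * np.cos(az)   # parallel to the flight path
    cross = u_east * np.cos(az) - v_north * np.sin(az)   # normal to the flight path
    return along, cross

# Example: a 10 m/s wind blowing toward the east on a flight path with
# azimuth 90 degrees is entirely along-track.
print(along_cross_track_wind(10.0, 0.0, 90.0))
```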
Operational Use of GPS Navigation for Space Shuttle Entry
NASA Technical Reports Server (NTRS)
Goodman, John L.; Propst, Carolyn A.
2008-01-01
The STS-118 flight of the Space Shuttle Endeavour was the first shuttle mission flown with three Global Positioning System (GPS) receivers in place of the three legacy Tactical Air Navigation (TACAN) units. This marked the conclusion of a 15 year effort involving procurement, missionization, integration, and flight testing of a GPS receiver and a parallel effort to formulate and implement shuttle computer software changes to support GPS. The use of GPS data from a single receiver in parallel with TACAN during entry was successfully demonstrated by the orbiters Discovery and Atlantis during four shuttle missions in 2006 and 2007. This provided the confidence needed before flying the first all GPS, no TACAN flight with Endeavour. A significant number of lessons were learned concerning the integration of a software intensive navigation unit into a legacy avionics system. These lessons have been taken into consideration during vehicle design by other flight programs, including the vehicle that will replace the Space Shuttle, Orion.
Arkas: Rapid reproducible RNAseq analysis
Colombo, Anthony R.; J. Triche Jr, Timothy; Ramsingh, Giridharan
2017-01-01
The recently introduced Kallisto pseudoaligner has radically simplified the quantification of transcripts in RNA-sequencing experiments. We offer cloud-scale RNAseq pipelines, Arkas-Quantification and Arkas-Analysis, available within Illumina's BaseSpace cloud application platform, which expedite Kallisto preparatory routines, reliably calculate differential expression, and perform gene-set enrichment of REACTOME pathways. Due to inherent inefficiencies of scale, Illumina's BaseSpace computing platform offers a massively parallel distributive environment improving data management services and data importing. Arkas-Quantification deploys Kallisto for parallel cloud computations and is conveniently integrated downstream from the BaseSpace Sequence Read Archive (SRA) import/conversion application titled SRA Import. Arkas-Analysis annotates the Kallisto results by extracting structured information directly from source FASTA files with per-contig metadata, and calculates the differential expression and gene-set enrichment analysis on both coding genes and transcripts. The Arkas cloud pipeline supports ENSEMBL transcriptomes and can be used downstream from SRA Import, facilitating raw sequencing importing, SRA FASTQ conversion, RNA quantification and analysis steps. PMID:28868134
Space charge in nanostructure resonances
NASA Astrophysics Data System (ADS)
Price, Peter J.
1996-10-01
In quantum ballistic propagation of electrons through a variety of nanostructures, a resonance in the energy-dependent transmission and reflection probabilities is generically associated with (1) a quasi-level with a decay lifetime, and (2) a bulge in electron density within the structure. It can be shown that, to a good approximation, a simple formula in all cases connects the density of states for the latter to the energy dependence of the phase angles of the eigenvalues of the S-matrix governing the propagation. For both the Lorentzian resonances (normal or inverted) and for the Fano-type resonances, as a consequence of this eigenvalue formula, the space charge due to filled states over the energy range of a resonance is just equal (for each spin state) to one electron charge. The Coulomb interaction within this space charge is known to 'distort' the electrical characteristics of resonant nanostructures. In these systems, however, the exchange effect should effectively cancel the interaction between states with parallel spins, leaving only the anti-parallel spin contribution.
1972-03-07
This early chart conceptualizes the use of two parallel Solid Rocket Motor Boosters in conjunction with three main engines to launch the proposed Space Shuttle to orbit. At approximately twenty-five miles altitude, the boosters would detach from the Orbiter and parachute back to Earth where they would be recovered and refurbished for future use. The Shuttle was designed as NASA's first reusable space vehicle, launching vertically like a spacecraft and landing on runways like conventional aircraft. Marshall Space Flight Center had management responsibility for the Shuttle's propulsion elements, including the Solid Rocket Boosters.
Wigner, E.P.
1960-11-22
A nuclear reactor is described wherein horizontal rods of thermal- neutron-fissionable material are disposed in a body of heavy water and extend through and are supported by spaced parallel walls of graphite.
Parallel processing and expert systems
NASA Technical Reports Server (NTRS)
Lau, Sonie; Yan, Jerry C.
1991-01-01
Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.
NASA Astrophysics Data System (ADS)
Hobson, T.; Clarkson, V.
2012-09-01
As a result of continual space activity since the 1950s, there are now a large number of man-made Resident Space Objects (RSOs) orbiting the Earth. Because of the large number of items and their relative speeds, the possibility of destructive collisions involving important space assets is now of significant concern to users and operators of space-borne technologies. As a result, a growing number of international agencies are researching methods for improving techniques to maintain Space Situational Awareness (SSA). Computer simulation is a method commonly used by many countries to validate competing methodologies prior to full scale adoption. The use of supercomputing and/or reduced scale testing is often necessary to effectively simulate such a complex problem on today's computers. Recently the authors presented a simulation aimed at reducing the computational burden by selecting the minimum level of fidelity necessary for contrasting methodologies and by utilising multi-core CPU parallelism for increased computational efficiency. The resulting simulation runs on a single PC while maintaining the ability to effectively evaluate competing methodologies. Nonetheless, the ability to control the scale and expand upon the computational demands of the sensor management system is limited. In this paper, we examine the advantages of increasing the parallelism of the simulation by means of General Purpose computing on Graphics Processing Units (GPGPU). As many sub-processes pertaining to SSA management are independent, we demonstrate how parallelisation via GPGPU has the potential to significantly enhance not only research into techniques for maintaining SSA, but also to enhance the level of sophistication of existing space surveillance sensors and sensor management systems. Nonetheless, the use of GPGPU imposes certain limitations and adds to the implementation complexity, both of which require consideration to achieve an effective system. We discuss these challenges and how they can be overcome. We further describe an application of the parallelised system where visibility prediction is used to enhance sensor management. This facilitates significant improvement in maximum catalogue error when RSOs become temporarily unobservable. The objective is to demonstrate the enhanced scalability and increased computational capability of the system.
Parallel MR imaging: a user's guide.
Glockner, James F; Hu, Houchun H; Stanley, David W; Angelos, Lisa; King, Kevin
2005-01-01
Parallel imaging is a recently developed family of techniques that take advantage of the spatial information inherent in phased-array radiofrequency coils to reduce acquisition times in magnetic resonance imaging. In parallel imaging, the number of sampled k-space lines is reduced, often by a factor of two or greater, thereby significantly shortening the acquisition time. Parallel imaging techniques have only recently become commercially available, and the wide range of clinical applications is just beginning to be explored. The potential clinical applications primarily involve reduction in acquisition time, improved spatial resolution, or a combination of the two. Improvements in image quality can be achieved by reducing the echo train lengths of fast spin-echo and single-shot fast spin-echo sequences. Parallel imaging is particularly attractive for cardiac and vascular applications and will likely prove valuable as 3-T body and cardiovascular imaging becomes part of standard clinical practice. Limitations of parallel imaging include reduced signal-to-noise ratio and reconstruction artifacts. It is important to consider these limitations when deciding when to use these techniques. (c) RSNA, 2005.
Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.
1996-01-01
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.
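A minimal Jacobian-free Newton-Krylov loop in the spirit of the method described above is sketched below; it uses finite-difference Jacobian-vector products and GMRES, and omits the Schwarz preconditioner (which would be supplied to GMRES through its preconditioner argument), so it is an illustration rather than the NKS implementation studied in the paper.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_krylov(F, x0, newton_tol=1e-8, max_newton=20, fd_eps=1e-7):
    """Inexact (Jacobian-free) Newton-Krylov sketch: each Newton correction
    is obtained with GMRES, and Jacobian-vector products are approximated by
    finite differences of the nonlinear residual F."""
    x = x0.copy()
    for _ in range(max_newton):
        Fx = F(x)
        if np.linalg.norm(Fx) < newton_tol:
            break
        def jv(v, x=x, Fx=Fx):
            # First-order finite-difference approximation of J(x) @ v.
            return (F(x + fd_eps * v) - Fx) / fd_eps
        J = LinearOperator((x.size, x.size), matvec=jv)
        dx, _ = gmres(J, -Fx)
        x = x + dx
    return x

# Toy usage on a small nonlinear system with solution (1, 2).
F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])
print(newton_krylov(F, np.array([1.0, 1.0])))
```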
Bale, S D; Mozer, F S
2007-05-18
Large parallel (
Zhao, Juanjuan; Chen, Shengbin; Jiang, Bo; Ren, Yin; Wang, Hua; Vause, Jonathan; Yu, Haidong
2013-01-01
Irrespective of which side is taken in the densification-sprawl debate, insights into the relationship between urban green space coverage and urbanization have been recognized as essential for guiding sustainable urban development. However, knowledge of the relationships between socio-economic variables of urbanization and long-term green space change is still limited. In this paper, using simple regression, hierarchical partitioning and multi-regression, the temporal trend in green space coverage and its relationship with urbanization were investigated using data from 286 cities between 1989 and 2009, covering all provinces in mainland China with the exception of Tibet. We found that: [1] average green space coverage of cities investigated increased steadily from 17.0% in 1989 to 37.3% in 2009; [2] cities with higher recent green space coverage also had relatively higher green space coverage historically; [3] cities in the same region exhibited similar long-term trends in green space coverage; [4] eight of the nine variables characterizing urbanization showed a significant positive linear relationship with green space coverage, with 'per capita GDP' having the highest independent contribution (24.2%); [5] among the climatic and geographic factors investigated, only mean elevation showed a significant effect; and [6] using the seven largest contributing individual factors, a linear model to predict variance in green space coverage was constructed. Here, we demonstrated that green space coverage in built-up areas tended to reflect the effects of urbanization rather than those of climatic or geographic factors. Quantification of the urbanization effects and the characteristics of green space development in China may provide a valuable reference for research into the processes of urban sprawl and its relationship with green space change. Copyright © 2012 Elsevier B.V. All rights reserved.
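The multi-regression step described above can be illustrated with an ordinary least squares sketch like the following; the predictors and variable names are placeholders, not the study's data.

```python
import numpy as np

def fit_linear_model(X, y):
    """Ordinary least squares fit y ≈ intercept + X @ beta, returning the
    coefficients and the R^2 of the fit.  X holds one column per predictor
    (e.g., per-capita GDP and other urbanization variables), y the green
    space coverage; these names are placeholders for illustration."""
    A = np.column_stack([np.ones(len(y)), X])          # prepend intercept column
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coeffs
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return coeffs, r2
```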
Vectoring of parallel synthetic jets: A parametric study
NASA Astrophysics Data System (ADS)
Berk, Tim; Gomit, Guillaume; Ganapathisubramani, Bharathram
2016-11-01
The vectoring of a pair of parallel synthetic jets can be described using five dimensionless parameters: the aspect ratio of the slots, the Strouhal number, the Reynolds number, the phase difference between the jets and the spacing between the slots. In the present study, the influence of the latter four on the vectoring behaviour of the jets is examined experimentally using particle image velocimetry. Time-averaged velocity maps are used to study the variations in vectoring behaviour for a parametric sweep of each of the four parameters independently. A topological map is constructed for the full four-dimensional parameter space. The vectoring behaviour is described both qualitatively and quantitatively. A vectoring mechanism is proposed, based on measured vortex positions. We acknowledge the financial support from the European Research Council (ERC Grant Agreement No. 277472).
Use of PZT's for adaptive control of Fabry-Perot etalon plate figure
NASA Technical Reports Server (NTRS)
Skinner, Wilbert; Niciejewski, R.
2005-01-01
A Fabry-Perot etalon, consisting of two spaced and reflective glass flats, provides the mechanism by which high resolution spectroscopy may be performed over narrow spectral regions. Space-based applications include direct measurements of Doppler shifts of airglow absorption and emission features and the Doppler broadening of spectral lines. The technique requires a high degree of parallelism between the two flats to be maintained through harsh launch conditions. Monitoring and adjusting the plate figure by illuminating the Fabry-Perot interferometer with a suitable monochromatic source may be performed on orbit to actively control the parallelism of the flats. This report describes the use of such a technique in a laboratory environment applied to a piezo-electric stack attached to the center of a Fabry-Perot etalon.
Applications of massively parallel computers in telemetry processing
NASA Technical Reports Server (NTRS)
El-Ghazawi, Tarek A.; Pritchard, Jim; Knoble, Gordon
1994-01-01
Telemetry processing refers to the reconstruction of full resolution raw instrumentation data, with artifacts of space and ground recording and transmission removed. Being the first processing phase of satellite data, this process is also referred to as level-zero processing. This study is aimed at investigating the use of massively parallel computing technology in providing level-zero processing to spaceflights that adhere to the recommendations of the Consultative Committee on Space Data Systems (CCSDS). The workload characteristics of level-zero processing are used to identify processing requirements in high-performance computing systems. An example of level-zero functions on a SIMD MPP, such as the MasPar, is discussed. The requirements in this paper are based in part on the Earth Observing System (EOS) Data and Operation System (EDOS).
Direct Images, Fields of Hilbert Spaces, and Geometric Quantization
NASA Astrophysics Data System (ADS)
Lempert, László; Szőke, Róbert
2014-04-01
Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H s of Hilbert spaces, and the question arises if the spaces H s are canonically isomorphic. Axelrod et al. (J. Diff. Geo. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H s as fibers of a Hilbert bundle H, introduce a connection on H, and use parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map . We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M—but not all—the direct image is even flat, which means that in those cases quantization is unique.
Observing with HST V: Improvements to the Scheduling of HST Parallel Observations
NASA Astrophysics Data System (ADS)
Taylor, D. K.; Vanorsow, D.; Lucks, M.; Henry, R.; Ratnatunga, K.; Patterson, A.
1994-12-01
Recent improvements to the Hubble Space Telescope (HST) ground system have significantly increased the frequency of pure parallel observations, i.e. the simultaneous use of multiple HST instruments by different observers. Opportunities for parallel observations are limited by a variety of timing, hardware, and scientific constraints. Formerly, such opportunities were heuristically predicted prior to the construction of the primary schedule (or calendar), and lack of complete information resulted in high rates of scheduling failures and missed opportunities. In the current process the search for parallel opportunities is delayed until the primary schedule is complete, at which point new software tools are employed to identify places where parallel observations are supported. The result has been a considerable increase in parallel throughput. A new technique, known as ``parallel crafting,'' is currently under development to streamline further the parallel scheduling process. This radically new method will replace the standard exposure logsheet with a set of abstract rules from which observation parameters will be constructed ``on the fly'' to best match the constraints of the parallel opportunity. Currently, parallel observers must specify a huge (and highly redundant) set of exposure types in order to cover all possible types of parallel opportunities. Crafting rules permit the observer to express timing, filter, and splitting preferences in a far more succinct manner. The issue of coordinated parallel observations (same PI using different instruments simultaneously), long a troublesome aspect of the ground system, is also being addressed. For Cycle 5, the Phase II Proposal Instructions now have an exposure-level PAR WITH special requirement. While only the primary's alignment will be scheduled on the calendar, new commanding will provide for parallel exposures with both instruments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aoki, Kenji
A read/write head for a magnetic tape includes an elongated chip assembly and a tape running surface formed in the longitudinal direction of the chip assembly. A pair of substantially spaced parallel read/write gap lines for supporting read/write elements extend longitudinally along the tape running surface of the chip assembly. Also, at least one groove is formed on the tape running surface on both sides of each of the read/write gap lines and extends substantially parallel to the read/write gap lines.
Accelerating semantic graph databases on commodity clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morari, Alessandro; Castellana, Vito G.; Haglin, David J.
We are developing a full software system for accelerating semantic graph databases on commodity clusters that scales to hundreds of nodes while maintaining constant query throughput. Our framework comprises a SPARQL to C++ compiler, a library of parallel graph methods and a custom multithreaded runtime layer, which provides a Partitioned Global Address Space (PGAS) programming model with fork/join parallelism and automatic load balancing over commodity clusters. We present preliminary results for the compiler and for the runtime.
Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.
Dematté, Lorenzo
2012-01-01
Space is a very important aspect in the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space has become more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transport phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation can be time consuming, especially if we want to capture the system's behavior in a reliable way using stochastic methods in conjunction with a high spatial resolution. In order to deliver on the promise made by systems biology to understand a system as a whole, we need to scale up the size of the models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely used algorithm for stochastic simulation of chemical reactions with spatial resolution and single molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of Graphics Processing Units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular, and bimolecular reactions, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel for each molecule of the system. The implementation offers good speed-ups and real-time, high-quality graphics output.
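The per-molecule diffusion update that dominates such simulations is illustrated below in vectorized form; mapping each molecule to a GPU thread (or replacing NumPy with CuPy) gives the massively parallel version, so this is a schematic of the idea rather than the Smoldyn GPU code.

```python
import numpy as np

def brownian_step(positions, D, dt, rng):
    """Advance every molecule by one Brownian-dynamics step.  Each molecule
    moves independently, so the update is trivially data-parallel: on a GPU
    each molecule maps to one thread; here the same update is vectorized
    over all molecules with NumPy."""
    sigma = np.sqrt(2.0 * D * dt)                       # rms displacement per axis
    return positions + rng.normal(0.0, sigma, positions.shape)

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, size=(100000, 3))           # 1e5 molecules in a unit box
pos = brownian_step(pos, D=1e-3, dt=1e-4, rng=rng)
```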
ERIC Educational Resources Information Center
Beckmann, Sybilla; Izsák, Andrew
2015-01-01
In this article, we present a mathematical analysis that distinguishes two distinct quantitative perspectives on ratios and proportional relationships: variable number of fixed quantities and fixed numbers of variable parts. This parallels the distinction between measurement and partitive meanings for division and between two meanings for…
Warren G. Harding and the Press.
ERIC Educational Resources Information Center
Whitaker, W. Richard
There are many parallels between the Richard M. Nixon administration and Warren G. Harding's term: both Republicans, both touched by scandal, and both having a unique relationship with the press. But in Harding's case the relationship was a positive one. One of Harding's first official acts as president was to restore the regular White House news…
NASA Technical Reports Server (NTRS)
Erickson, Jon D. (Editor)
1992-01-01
The present volume on cooperative intelligent robotics in space discusses sensing and perception, Space Station Freedom robotics, cooperative human/intelligent robot teams, and intelligent space robotics. Attention is given to space robotics reasoning and control, ground-based space applications, intelligent space robotics architectures, free-flying orbital space robotics, and cooperative intelligent robotics in space exploration. Topics addressed include proportional proximity sensing for telerobots using coherent laser radar, ground operation of the mobile servicing system on Space Station Freedom, teleprogramming a cooperative space robotic workcell for space stations, and knowledge-based task planning for the special-purpose dextrous manipulator. Also discussed are dimensions of complexity in learning from interactive instruction, an overview of the dynamic predictive architecture for robotic assistants, recent developments at the Goddard engineering testbed, and parallel fault-tolerant robot control.
Che, Dexin; Hu, Jianping; Zhen, Shuangju; Yu, Chengfu; Li, Bin; Chang, Xi; Zhang, Wei
2017-01-01
This study tested a parallel two-mediator model in which the relationship between dimensions of emotional intelligence and online gaming addiction are mediated by perceived helplessness and perceived self-efficacy, respectively. The sample included 931 male adolescents (mean age = 16.18 years, SD = 0.95) from southern China. Data on emotional intelligence (four dimensions, including self-management of emotion, social skills, empathy and utilization of emotions), perceived stress (two facets, including perceived self-efficacy and perceived helplessness) and online gaming addiction were collected, and bootstrap methods were used to test this parallel two-mediator model. Our findings revealed that perceived self-efficacy mediated the relationship between three dimensions of emotional intelligence (i.e., self-management, social skills, and empathy) and online gaming addiction, and perceived helplessness mediated the relationship between two dimensions of emotional intelligence (i.e., self-management and emotion utilization) and online gaming addiction. These findings underscore the importance of separating the four dimensions of emotional intelligence and two facets of perceived stress to understand the complex relationship between these factors and online gaming addiction. PMID:28751876
A mechanism for efficient debugging of parallel programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, B.P.; Choi, J.D.
1988-01-01
This paper addresses the design and implementation of an integrated debugging system for parallel programs running on shared memory multi-processors (SMMP). The authors describe the use of flowback analysis to provide information on causal relationships between events in a program's execution without re-executing the program for debugging. The authors introduce a mechanism called incremental tracing that, by using semantic analyses of the debugged program, makes the flowback analysis practical with only a small amount of trace generated during execution. They extend flowback analysis to apply to parallel programs and describe a method to detect race conditions in the interactions of the cooperating processes.
Binocular optical axis parallelism detection precision analysis based on Monte Carlo method
NASA Astrophysics Data System (ADS)
Ying, Jiaju; Liu, Bingqi
2018-02-01
According to the working principle of the binocular photoelectric instrument optical axis parallelism digital calibration instrument, and in view of all components of the instrument, the various factors that affect the system precision are analyzed, and a precision analysis model is then established. Based on the error distributions, the Monte Carlo method is used to analyze the relationship between the comprehensive error and the change of the center coordinate of the circular target image. The method can further guide the error distribution, optimize and control the factors that have a greater influence on the comprehensive error, and improve the measurement accuracy of the optical axis parallelism digital calibration instrument.
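A simple hedged sketch of such a Monte Carlo error analysis is given below; it assumes independent, zero-mean Gaussian component errors that combine linearly, which is an illustrative stand-in for the instrument's actual error model.

```python
import numpy as np

def monte_carlo_error(component_sigmas, n_samples=100000, seed=0):
    """Monte Carlo estimate of the comprehensive error when several
    independent, zero-mean Gaussian component errors add linearly.
    component_sigmas lists each component's standard deviation; the
    calibration instrument's real error model would replace the simple sum."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(0.0, component_sigmas, size=(n_samples, len(component_sigmas)))
    total = draws.sum(axis=1)                 # comprehensive error per trial
    return total.std(), np.percentile(np.abs(total), 95)

# Example: three error sources; the MC estimate approaches the root-sum-square value.
print(monte_carlo_error([0.02, 0.05, 0.01]))
```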
NASA Technical Reports Server (NTRS)
Cohen, Jarrett
1999-01-01
Parallel computers built out of mass-market parts are cost-effectively performing data processing and simulation tasks. The Supercomputing (now known as "SC") series of conferences celebrated its 10th anniversary last November. While vendors have come and gone, the dominant paradigm for tackling big problems still is a shared-resource, commercial supercomputer. Growing numbers of users needing a cheaper or dedicated-access alternative are building their own supercomputers out of mass-market parts. Such machines are generally called Beowulf-class systems after the 11th century epic. This modern-day Beowulf story began in 1994 at NASA's Goddard Space Flight Center. A laboratory for the Earth and space sciences, computing managers there threw down a gauntlet to develop a $50,000 gigaFLOPS workstation for processing satellite data sets. Soon, Thomas Sterling and Don Becker were working on the Beowulf concept at the University Space Research Association (USRA)-run Center of Excellence in Space Data and Information Sciences (CESDIS). Beowulf clusters mix three primary ingredients: commodity personal computers or workstations, low-cost Ethernet networks, and the open-source Linux operating system. One of the larger Beowulfs is Goddard's Highly-parallel Integrated Virtual Environment, or HIVE for short.
Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity
Gordiz, Kiarash; Singh, David J.; Henry, Asegun
2015-01-29
In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times and exhibits similar overall computational effort.
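The parallel character of ensemble sampling can be sketched as follows; `run_member` is a placeholder for one short, independently initialized trajectory, so this shows only the embarrassingly parallel averaging structure, not the molecular dynamics itself.

```python
import numpy as np
from multiprocessing import Pool

def run_member(seed, n_steps=10000):
    """Placeholder for one short MD trajectory started from an independently
    sampled (e.g., velocity-rescaled) phase-space point; returns the quantity
    to be averaged (random data stands in for the real observable)."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=n_steps).mean()

def ensemble_average(n_members=32, n_workers=8):
    """Ensemble sampling: members are independent, so they run in parallel;
    time sampling would instead average over one long sequential trajectory."""
    with Pool(n_workers) as pool:
        values = pool.map(run_member, range(n_members))
    return np.mean(values)

if __name__ == "__main__":
    print(ensemble_average())
```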
Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI
NASA Astrophysics Data System (ADS)
Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan
2016-10-01
Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully 4th order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also overview simulation results of basic plasma wave instabilities relevant to laser plasma interaction, which have been obtained using the code.
Decentralized Interleaving of Paralleled Dc-Dc Buck Converters: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Rodriguez, Miguel; Sinha, Mohit
We present a decentralized control strategy that yields switch interleaving among parallel connected dc-dc buck converters without communication. The proposed method is based on the digital implementation of the dynamics of a nonlinear oscillator circuit as the controller. Each controller is fully decentralized, i.e., it only requires the locally measured output current to synthesize the pulse width modulation (PWM) carrier waveform. By virtue of the intrinsic electrical coupling between converters, the nonlinear oscillator-based controllers converge to an interleaved state with uniform phase-spacing across PWM carriers. To the knowledge of the authors, this work represents the first fully decentralized strategy for switch interleaving of paralleled dc-dc buck converters.
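As background to why interleaving is desirable, the sketch below compares the idealized total input-current ripple of synchronized versus interleaved parallel buck converters; it illustrates ripple cancellation only and is not the oscillator-based controller proposed in the preprint.

```python
import numpy as np

def summed_input_current(n_conv, duty, interleaved, n_pts=1000):
    """Idealized total input current of n_conv parallel buck converters over
    one switching period: each converter draws a unit current while its switch
    is on.  With interleaving the on-intervals are shifted by 1/n_conv of a period."""
    t = np.linspace(0.0, 1.0, n_pts, endpoint=False)
    total = np.zeros(n_pts)
    for k in range(n_conv):
        phase = k / n_conv if interleaved else 0.0
        total += ((t - phase) % 1.0) < duty           # 1 while the switch is on
    return total

for mode in (False, True):
    i = summed_input_current(n_conv=4, duty=0.3, interleaved=mode)
    print("interleaved" if mode else "synchronized", "ripple (pk-pk):", i.max() - i.min())
```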
Patent data mining method and apparatus
Boyack, Kevin W.; Grafe, V. Gerald; Johnson, David K.; Wylie, Brian N.
2002-01-01
A method of data mining represents related patents in a multidimensional space. Distance between patents in the multidimensional space corresponds to the extent of relationship between the patents. The relationship between pairings of patents can be expressed based on weighted combinations of several predicates. The user can select portions of the space to perceive. The user also can interact with and control the communication of the space, focusing attention on aspects of the space of most interest. The multidimensional spatial representation allows more ready comprehension of the structure of the relationships among the patents.
Molecular-dynamics simulations of self-assembled monolayers (SAM) on parallel computers
NASA Astrophysics Data System (ADS)
Vemparala, Satyavani
The purpose of this dissertation is to investigate the properties of self-assembled monolayers, particularly alkanethiols and Poly (ethylene glycol) terminated alkanethiols. These simulations are based on realistic interatomic potentials and require scalable and portable multiresolution algorithms implemented on parallel computers. Large-scale molecular dynamics simulations of self-assembled alkanethiol monolayer systems have been carried out using an all-atom model involving a million atoms to investigate their structural properties as a function of temperature, lattice spacing and molecular chain-length. Results show that the alkanethiol chains tilt from the surface normal by a collective angle of 25° along next-nearest neighbor direction at 300K. At 350K the system transforms to a disordered phase characterized by small tilt angle, flexible tilt direction, and random distribution of backbone planes. With increasing lattice spacing, a, the tilt angle increases rapidly from a nearly zero value at a = 4.7A to as high as 34° at a = 5.3A at 300K. We also studied the effect of end groups on the tilt structure of SAM films. We characterized the system with respect to temperature, the alkane chain length, lattice spacing, and the length of the end group. We found that the gauche defects were predominant only in the tails, and the gauche defects increased with the temperature and number of EG units. Effect of electric field on the structure of poly (ethylene glycol) (PEG) terminated alkanethiol self assembled monolayer (SAM) on gold has been studied using parallel molecular dynamics method. An applied electric field triggers a conformational transition from all-trans to a mostly gauche conformation. The polarity of the electric field has a significant effect on the surface structure of PEG leading to a profound effect on the hydrophilicity of the surface. The electric field applied anti-parallel to the surface normal causes a reversible transition to an ordered state in which the oxygen atoms are exposed. On the other hand, an electric field applied in a direction parallel to the surface normal introduces considerable disorder in the system and the oxygen atoms are buried inside.
NASA Technical Reports Server (NTRS)
Mclyman, W. T.
1981-01-01
Transformer transmits power and digital data across rotating interface. Array has many parallel data channels, each with a potential 1-megabaud data rate. Ferrite-cored transformers are spaced along rotor; airgap between them reduces crosstalk.
Breden, C.R.; Schultz, A.B.
1961-06-01
A reactor core formed of bundles of parallel fuel elements in the form of ribbons is patented. The fuel ribbons are twisted about their axes so as to have contact with one another at regions spaced lengthwise of the ribbons and to be out of contact with one another at locations between these spaced regions. The contact between the ribbons is sufficient to allow them to be held together in a stable bundle in a containing tube without intermediate support, while permitting enough space between the ribbons for coolant flow.
Lightweight High Efficiency Electric Motors for Space Applications
NASA Technical Reports Server (NTRS)
Robertson, Glen A.; Tyler, Tony R.; Piper, P. J.
2011-01-01
Lightweight high efficiency electric motors are needed across a wide range of space applications, from thrust vector actuator control for launch and flight applications, to general vehicle, base camp habitat and experiment control for various mechanisms, to robotics for various stationary and mobile space exploration missions. QM Power's Parallel Path Magnetic Technology Motors have slowly proven themselves to be a leading motor technology in this area, winning a NASA Phase II for "Lightweight High Efficiency Electric Motors and Actuators for Low Temperature Mobility and Robotics Applications", a US Army Phase II SBIR for "Improved Robot Actuator Motors for Medical Applications", an NSF Phase II SBIR for "Novel Low-Cost Electric Motors for Variable Speed Applications" and a DOE SBIR Phase I for "High Efficiency Commercial Refrigeration Motors". Parallel Path Magnetic Technology obtains the benefits of using permanent magnets while minimizing the historical trade-offs/limitations found in conventional permanent magnet designs. The resulting devices are smaller, lower weight, lower cost and have higher efficiency than competitive permanent magnet and non-permanent magnet designs. QM Power's motors have been extensively tested and successfully validated by multiple commercial and aerospace customers and partners such as Boeing Research and Technology. Prototypes have been made between 0.1 and 10 HP. They are also in the process of scaling motors to over 100 kW with their development partners. In this paper, Parallel Path Magnetic Technology Motors will be discussed; specifically addressing their higher efficiency, higher power density, lighter weight, smaller physical size, higher low end torque, wider power zone, cooler temperatures, and greater reliability with lower cost and significant environmental benefit for the same peak output power compared to typical motors. A further discussion on the inherent redundancy of these motors for space applications will be provided.
Models of Wake-Vortex Spreading Mechanisms and Their Estimated Uncertainties
NASA Technical Reports Server (NTRS)
Rossow, Vernon J.; Hardy, Gordon H.; Meyn, Larry A.
2006-01-01
One of the primary constraints on the capacity of the nation's air transportation system is the landing capacity at its busiest airports. Many airports with nearly-simultaneous operations on closely-spaced parallel runways (i.e., as close as 750 ft (246m)) suffer a severe decrease in runway acceptance rate when weather conditions do not allow full utilization. The objective of a research program at NASA Ames Research Center is to develop the technologies needed for traffic management in the airport environment so that operations now allowed on closely-spaced parallel runways under Visual Meteorological Conditions can also be carried out under Instrument Meteorological Conditions. As part of this overall research objective, the study reported here has developed improved models for the various aerodynamic mechanisms that spread and transport wake vortices. The purpose of the study is to continue the development of relationships that increase the accuracy of estimates for the along-trail separation distances available before the vortex wake of a leading aircraft intrudes into the airspace of a following aircraft. Details of the models used and their uncertainties are presented in the appendices to the paper. Suggestions are made as to the theoretical and experimental research needed to increase the accuracy of and confidence level in the models presented, and the instrumentation required for more precise estimates of the motion and spread of vortex wakes. The improved wake models indicate that, if the following aircraft is upwind of the leading aircraft, the vortex wakes of the leading aircraft will not intrude into the airspace of the following aircraft for about 7s (based on pessimistic assumptions) for most atmospheric conditions. The wake-spreading models also indicate that longer time intervals before wake intrusion are available when atmospheric turbulence levels are mild or moderate. However, if the estimates for those time intervals are to be reliable, further study is necessary to develop the instrumentation and procedures needed to accurately define when the more benign atmospheric conditions exist.
Conceptual design of a hybrid parallel mechanism for mask exchanging of TMT
NASA Astrophysics Data System (ADS)
Wang, Jianping; Zhou, Hongfei; Li, Kexuan; Zhou, Zengxiang; Zhai, Chao
2015-10-01
Mask exchange system is an important part of the Multi-Object Broadband Imaging Echellette (MOBIE) on the Thirty Meter Telescope (TMT). To solve the problem of stiffness changing with the gravity vector of the mask exchange system in the MOBIE, the hybrid parallel mechanism design method was introduced into the whole research. By using the characteristics of high stiffness and precision of parallel structure, combined with large moving range of serial structure, a conceptual design of a hybrid parallel mask exchange system based on 3-RPS parallel mechanism was presented. According to the position requirements of the MOBIE, the SolidWorks structure model of the hybrid parallel mask exchange robot was established and the appropriate installation position without interfering with the related components and light path in the MOBIE of TMT was analyzed. Simulation results in SolidWorks suggested that 3-RPS parallel platform had good stiffness property in different gravity vector directions. Furthermore, through the research of the mechanism theory, the inverse kinematics solution of the 3-RPS parallel platform was calculated and the mathematical relationship between the attitude angle of moving platform and the angle of ball-hinges on the moving platform was established, in order to analyze the attitude adjustment ability of the hybrid parallel mask exchange robot. The proposed conceptual design has some guiding significance for the design of mask exchange system of the MOBIE on TMT.
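For readers unfamiliar with parallel-mechanism inverse kinematics, the sketch below shows the generic leg-length calculation for a three-legged parallel platform: each actuated length is the norm of the vector from a fixed base joint to the corresponding moving-platform joint after the pose transform. This is the textbook formulation only, not the authors' specific 3-RPS derivation or the ball-hinge angle relationship; the joint coordinates and pose are invented.

```python
# Generic inverse-kinematics sketch for a parallel platform: given the moving-platform
# pose, recover the actuated leg lengths as l_i = || p + R b_i - a_i ||.
import numpy as np

def rot_rpy(roll, pitch, yaw):
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

base = np.array([[0.5, 0.0, 0], [-0.25, 0.433, 0], [-0.25, -0.433, 0]])   # fixed joints a_i
plat = 0.6 * base                                                          # moving joints b_i

def leg_lengths(p, roll, pitch, yaw):
    R = rot_rpy(roll, pitch, yaw)
    return np.linalg.norm(p + (R @ plat.T).T - base, axis=1)

print(leg_lengths(np.array([0.0, 0.0, 0.8]), 0.05, -0.02, 0.0))
```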
A Study of Parallels Between Antarctica South Pole Traverse Equipment and Lunar/Mars Surface Systems
NASA Technical Reports Server (NTRS)
Mueller, Robert P.; Hoffman, Stephen, J.; Thur, Paul
2010-01-01
The parallels between an actual Antarctica South Pole re-supply traverse conducted by the National Science Foundation (NSF) Office of Polar Programs in 2009 have been studied with respect to the latest mission architecture concepts being generated by the United States National Aeronautics and Space Administration (NASA) for lunar and Mars surface systems scenarios. The challenges faced by both endeavors are similar since they must both deliver equipment and supplies to support operations in an extreme environment with little margin for error in order to be successful. By carefully and closely monitoring the manifesting and operational support equipment lists which will enable this South Pole traverse, functional areas have been identified. The equipment required to support these functions will be listed with relevant properties such as mass, volume, spare parts and maintenance schedules. This equipment will be compared to space systems currently in use and projected to be required to support equivalent and parallel functions in Lunar and Mars missions in order to provide a level of realistic benchmarking. Space operations have historically required significant amounts of support equipment and tools to operate and maintain the space systems that are the primary focus of the mission. By gaining insight and expertise in Antarctic South Pole traverses, space missions can use the experience gained over the last half century of Antarctic operations in order to design for operations, maintenance, dual use, robustness and safety which will result in a more cost effective, user friendly, and lower risk surface system on the Moon and Mars. It is anticipated that the U.S Antarctic Program (USAP) will also realize benefits for this interaction with NASA in at least two areas: an understanding of how NASA plans and carries out its missions and possible improved efficiency through factors such as weight savings, alternative technologies, or modifications in training and operations.
Effects of Distant Green Space on Physical Activity in Sydney, Australia.
Chong, Shanley; Byun, Roy; Mazumdar, Soumya; Bauman, Adrian; Jalaludin, Bin
2017-01-01
The aim was to investigate the association between distant green space and physical activity modified by local green space. Information about physical activity, demographic and socioeconomic background at the individual level was extracted from the New South Wales Population Health Survey. The proportion of a postcode that was parkland was used as a proxy measure for access to parklands and was calculated for each individual. There was a significant relationship between distant green space and engaging in moderate-to-vigorous physical activity (MVPA) at least once a week. No significant relationship was found between adequate physical activity and distant green space. No significant relationships were found between adequate physical activity, engaging in MVPA, and local green space. However, if respondents lived in greater local green space (≥25%), there was a significant relationship between engaging in MVPA at least once a week and distant green space of ≥20%. This study highlights the important effect of distant green space on physical activity. Our findings also suggest that moderate size of local green space together with moderate size of distant green space are important levers for participation in physical activity.
ChemHTPS - A virtual high-throughput screening program suite for the chemical and materials sciences
NASA Astrophysics Data System (ADS)
Afzal, Mohammad Atif Faiz; Evangelista, William; Hachmann, Johannes
The discovery of new compounds, materials, and chemical reactions with exceptional properties is the key for the grand challenges in innovation, energy and sustainability. This process can be dramatically accelerated by means of the virtual high-throughput screening (HTPS) of large-scale candidate libraries. The resulting data can further be used to study the underlying structure-property relationships and thus facilitate rational design capability. This approach has been extensively used for many years in the drug discovery community. However, the lack of openly available virtual HTPS tools is limiting the use of these techniques in various other applications such as photovoltaics, optoelectronics, and catalysis. Thus, we developed ChemHTPS, a general-purpose, comprehensive and user-friendly suite, that will allow users to efficiently perform large in silico modeling studies and high-throughput analyses in these applications. ChemHTPS also includes a massively parallel molecular library generator which offers a multitude of options to customize and restrict the scope of the enumerated chemical space and thus tailor it for the demands of specific applications. To streamline the non-combinatorial exploration of chemical space, we incorporate genetic algorithms into the framework. In addition to implementing smarter algorithms, we also focus on the ease of use, workflow, and code integration to make this technology more accessible to the community.
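To make the genetic-algorithm component concrete, here is a toy sketch of the general idea: candidates are bit strings standing in for building-block choices, and evolution proceeds by selection, crossover and mutation. The fitness function is a mere placeholder (count of set bits), not a property predictor, and none of the parameters reflect ChemHTPS itself.

```python
# Toy genetic-algorithm sketch for non-combinatorial exploration of a candidate library.
# The fitness function and all parameters are placeholders for illustration.
import random

rng = random.Random(0)
N_BITS, POP, GENS = 20, 30, 40

def fitness(c):                 # placeholder for a computed property
    return sum(c)

def crossover(a, b):
    cut = rng.randrange(1, N_BITS)
    return a[:cut] + b[cut:]

def mutate(c, rate=0.05):
    return [bit ^ (rng.random() < rate) for bit in c]

pop = [[rng.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]
    children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print(max(map(fitness, pop)))
```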
Pilot study about dose-effect relationship of ocular injury in argon laser photocoagulation
NASA Astrophysics Data System (ADS)
Chen, P.; Zhang, C. P.; Fu, X. B.; Zhang, T. M.; Wang, C. Z.; Qian, H. W.; San, Q.
2011-03-01
The aim of this article was to study the injury effect of either convergent or parallel argon laser beam on rabbit retina, get the dose-effect relationship for the two types of laser beams, and calculate the damage threshold of argon laser for human retinas. An argon laser therapeutic instrument for ophthalmology was used in this study. A total of 80 rabbit eyes were irradiated for 600 lesions, half of which were treated by convergent laser and the other half were done with parallel laser beam. After irradiation, slit lamp microscope and fundus photography were used to observe the lesions, change and the incidence of injury was processed statistically to get the damage threshold of rabbit retina. Based on results from the experiments on animals and the data from clinical cases of laser treatment, the photocoagulation damage thresholds of human retinas for convergent and parallel argon laser were calculated to be 0.464 and 0.285 mJ respectively. These data provided biological reference for safely operation when employing laser photocoagulation in clinical practice and other fields.
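Dose-effect thresholds of this kind are commonly estimated with probit regression on binary lesion outcomes versus log dose; the sketch below shows the mechanics with fabricated data. It is not the authors' statistical procedure, and the doses, outcomes and fitted ED50 are entirely synthetic.

```python
# Sketch of a standard probit dose-effect fit: binary lesion outcomes vs log10(dose),
# with the ED50 recovered from the fitted coefficients. All data here are fabricated.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

dose_mj = np.array([0.1, 0.15, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0] * 5)
rng = np.random.default_rng(1)
p_true = norm.cdf(4.0 * (np.log10(dose_mj) - np.log10(0.3)))   # assumed "true" curve
lesion = rng.binomial(1, p_true)

X = sm.add_constant(np.log10(dose_mj))
fit = sm.Probit(lesion, X).fit(disp=0)
b0, b1 = fit.params
ed50 = 10 ** (-b0 / b1)            # dose at 50% lesion probability
print(f"estimated ED50 ~ {ed50:.2f} mJ")
```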
2012-01-01
Little research has examined different dimensions of narcissism that may parallel psychopathy facets in criminally-involved individuals. The present study examined the pattern of relationships between grandiose and vulnerable narcissism, assessed using the Narcissistic Personality Inventory-16 and the Hypersensitive Narcissism Scale, respectively, and the four facets of psychopathy (interpersonal, affective, lifestyle, and antisocial) assessed via the Psychopathy Checklist: Screening Version (PCL:SV). As predicted, grandiose and vulnerable narcissism showed differential relationships to psychopathy facets, with grandiose narcissism relating positively to the interpersonal facet of psychopathy and vulnerable narcissism relating positively to the lifestyle facet of psychopathy. Paralleling existing psychopathy research, vulnerable narcissism showed stronger associations than grandiose narcissism to 1) other forms of psychopathology, including internalizing and substance use disorders, and 2) self- and other-directed aggression, measured using the Life History of Aggression and the Forms of Aggression Questionnaire. Grandiose narcissism was nonetheless associated with social dysfunction marked by a manipulative and deceitful interpersonal style and unprovoked aggression. Potentially important implications for uncovering etiological pathways and developing treatment interventions for these disorders in externalizing adults are discussed. PMID:22448731
2014-05-01
fusion, space and astrophysical plasmas, but still the general picture can be presented quite well with the fluid approach [6, 7]. The microscopic...purpose computing CPU for algorithms where processing of large blocks of data is done in parallel. The reason for that is the GPU's highly effective...parallel structure. Most of the image and video processing computations involve heavy matrix and vector operations over large amounts of data and
Heart Fibrillation and Parallel Supercomputers
NASA Technical Reports Server (NTRS)
Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.
1997-01-01
The Luo and Rudy 3 cardiac cell mathematical model is implemented on the parallel supercomputer CRAY - T3D. The splitting algorithm combined with variable time step and an explicit method of integration provide reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.
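A minimal sketch of the operator-splitting idea referenced above, applied to a generic 1D reaction-diffusion cable equation rather than the Luo-Rudy ionic model: diffusion is advanced explicitly, and the reaction step adapts its sub-step size to the local reaction rate. All constants and the cubic reaction term are stand-ins chosen only to keep the example short and stable.

```python
# Minimal splitting sketch (not the Luo-Rudy model): V_t = D V_xx + f(V) advanced by
# alternating an explicit diffusion step with an adaptively sub-stepped reaction step.
import numpy as np

nx, dx, D, dt = 200, 0.5, 1.0, 0.02
V = np.zeros(nx)
V[:10] = 1.0                               # stimulated end of the cable

def f(V):                                  # cubic stand-in for the ionic currents
    return V * (1.0 - V) * (V - 0.1)

def reaction_substeps(V, dt, max_dv=0.05):
    t = 0.0
    while t < dt:
        rate = f(V)
        h = min(dt - t, max_dv / (np.max(np.abs(rate)) + 1e-12))   # variable sub-step
        V = V + h * rate
        t += h
    return V

for _ in range(500):
    lap = (np.roll(V, 1) - 2 * V + np.roll(V, -1)) / dx**2
    V = V + dt * D * lap                   # diffusion (explicit, periodic for brevity)
    V = reaction_substeps(V, dt)           # reaction with adaptive sub-stepping
```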
A parallel coordinates style interface for exploratory volume visualization.
Tory, Melanie; Potts, Simeon; Möller, Torsten
2005-01-01
We present a user interface, based on parallel coordinates, that facilitates exploration of volume data. By explicitly representing the visualization parameter space, the interface provides an overview of rendering options and enables users to easily explore different parameters. Rendered images are stored in an integrated history bar that facilitates backtracking to previous visualization options. Initial usability testing showed clear agreement between users and experts of various backgrounds (usability, graphic design, volume visualization, and medical physics) that the proposed user interface is a valuable data exploration tool.
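For readers who have not used parallel coordinates, the sketch below plots a small, randomly generated rendering-parameter space with pandas' built-in parallel_coordinates helper; each polyline is one parameter combination, and a class column could mark combinations already stored in a history. The column names and the "rendered" flag are invented and do not reflect the authors' interface.

```python
# Sketch of a parallel-coordinates view of a rendering-parameter space (random data).
import numpy as np
import pandas as pd
from pandas.plotting import parallel_coordinates
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
params = pd.DataFrame({
    "opacity":   rng.uniform(0, 1, 30),
    "iso_value": rng.uniform(0, 255, 30),
    "ambient":   rng.uniform(0, 1, 30),
    "specular":  rng.uniform(0, 1, 30),
})
params["rendered"] = np.where(rng.random(30) < 0.3, "in history", "unexplored")

parallel_coordinates(params, class_column="rendered", colormap="coolwarm")
plt.show()
```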
20 kHz main inverter unit. [for space station power supplies
NASA Technical Reports Server (NTRS)
Hussey, S.
1989-01-01
A proof-of-concept main inverter unit has demonstrated the operation of a pulse-width-modulated parallel resonant power stage topology as a 20-kHz ac power source driver, showing simple output regulation, parallel operation, power sharing and short-circuit operation. The use of a two-stage dc input filter controls the electromagnetic compatibility (EMC) characteristics of the dc power bus, and the use of an ac harmonic trap controls the EMC characteristics of the 20-kHz ac power bus.
(2+1)-dimensional spacetimes containing closed timelike curves
NASA Astrophysics Data System (ADS)
Headrick, Matthew P.; Gott, J. Richard, III
1994-12-01
We investigate the global geometries of (2+1)-dimensional spacetimes as characterized by the transformations undergone by tangent spaces upon parallel transport around closed curves. We critically discuss the use of the term ``total energy-momentum'' as a label for such parallel-transport transformations, pointing out several problems with it. We then investigate parallel-transport transformations in the known (2+1)-dimensional spacetimes containing closed timelike curves (CTC's), and introduce a few new such spacetimes. Using the more specific concept of the holonomy of a closed curve, applicable in simply connected spacetimes, we emphasize that Gott's two-particle CTC-containing spacetime does not have a tachyonic geometry. Finally, we prove the following modified version of Kabat's conjecture: if a CTC is deformable to spacelike or null infinity while remaining a CTC, then its parallel-transport transformation cannot be a rotation; therefore its holonomy, if defined, cannot be a rotation other than through a multiple of 2π.
NASA Astrophysics Data System (ADS)
Quan, Zhe; Wu, Lei
2017-09-01
This article investigates the use of parallel computing for solving the disjunctively constrained knapsack problem. The proposed parallel computing model can be viewed as a cooperative algorithm based on a multi-neighbourhood search. The cooperation system is composed of a team manager and a crowd of team members. The team members aim at applying their own search strategies to explore the solution space. The team manager collects the solutions from the members and shares the best one with them. The performance of the proposed method is evaluated on a group of benchmark data sets. The results obtained are compared to those reached by the best methods from the literature. The results show that the proposed method is able to provide the best solutions in most cases. In order to highlight the robustness of the proposed parallel computing model, a new set of large-scale instances is introduced. Encouraging results have been obtained.
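A toy sketch of the manager/members cooperation pattern described above, on a small disjunctively constrained knapsack instance: two "members" each apply their own neighbourhood move to the currently shared best solution, and the "manager" keeps and re-broadcasts the best feasible candidate. The instance data and moves are invented, and the loop runs sequentially here rather than in parallel.

```python
# Toy manager/members cooperative search on a disjunctively constrained knapsack.
import random

rng = random.Random(3)
values  = [10, 7, 6, 9, 4, 8, 5]
weights = [ 4, 3, 2, 4, 1, 3, 2]
CAP, CONFLICTS = 10, {(0, 3), (2, 5)}          # conflicting item pairs

def feasible(sol):
    if sum(w for i, w in enumerate(weights) if sol[i]) > CAP:
        return False
    return all(not (sol[i] and sol[j]) for i, j in CONFLICTS)

def value(sol):
    return sum(v for i, v in enumerate(values) if sol[i]) if feasible(sol) else -1

def flip_move(sol):                      # member 1: flip one random item
    s = sol[:]
    i = rng.randrange(len(s))
    s[i] ^= 1
    return s

def swap_move(sol):                      # member 2: swap one item in, one out
    s = sol[:]
    ins = [i for i, b in enumerate(s) if not b]
    outs = [i for i, b in enumerate(s) if b]
    if ins and outs:
        s[rng.choice(ins)] = 1
        s[rng.choice(outs)] = 0
    return s

best = [0] * len(values)
for _ in range(200):                     # manager loop: broadcast best, collect candidates
    candidates = [flip_move(best), swap_move(best)]
    best = max(candidates + [best], key=value)

print(value(best), best)
```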
Parallel software for lattice N = 4 supersymmetric Yang-Mills theory
NASA Astrophysics Data System (ADS)
Schaich, David; DeGrand, Thomas
2015-05-01
We present new parallel software, SUSY LATTICE, for lattice studies of four-dimensional N = 4 supersymmetric Yang-Mills theory with gauge group SU(N). The lattice action is constructed to exactly preserve a single supersymmetry charge at non-zero lattice spacing, up to additional potential terms included to stabilize numerical simulations. The software evolved from the MILC code for lattice QCD, and retains a similar large-scale framework despite the different target theory. Many routines are adapted from an existing serial code (Catterall and Joseph, 2012), which SUSY LATTICE supersedes. This paper provides an overview of the new parallel software, summarizing the lattice system, describing the applications that are currently provided and explaining their basic workflow for non-experts in lattice gauge theory. We discuss the parallel performance of the code, and highlight some notable aspects of the documentation for those interested in contributing to its future development.
Domain decomposition methods in aerodynamics
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.; Saltz, Joel
1990-01-01
Compressible Euler equations are solved for two-dimensional problems by a preconditioned conjugate gradient-like technique. An approximate Riemann solver is used to compute the numerical fluxes to second order accuracy in space. Two ways to achieve parallelism are tested, one which makes use of parallelism inherent in triangular solves and the other which employs domain decomposition techniques. The vectorization/parallelism in triangular solves is realized by the use of a reordering technique called wavefront ordering. This process involves the interpretation of the triangular matrix as a directed graph and the analysis of the data dependencies. It is noted that the factorization can also be done in parallel with the wavefront ordering. The performances of two ways of partitioning the domain, strips and slabs, are compared. Results on Cray YMP are reported for an inviscid transonic test case. The performances of linear algebra kernels are also reported.
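The sketch below illustrates the wavefront-ordering idea on a tiny, dense-stored lower-triangular system: unknowns are grouped into levels according to their data dependencies, and all unknowns within one level could be solved simultaneously. It is a generic level-scheduling illustration with an invented matrix, not the authors' implementation.

```python
# Wavefront (level-scheduling) ordering for a lower-triangular solve: rows in the same
# level have no dependencies on each other and could be processed in parallel.
import numpy as np

L = np.array([[2., 0., 0., 0., 0.],
              [1., 2., 0., 0., 0.],
              [0., 0., 2., 0., 0.],
              [0., 1., 1., 2., 0.],
              [3., 0., 0., 0., 2.]])
b = np.array([2., 4., 2., 8., 8.])
n = len(b)

level = np.zeros(n, dtype=int)
for i in range(n):
    deps = [j for j in range(i) if L[i, j] != 0.0]
    level[i] = 1 + max((level[j] for j in deps), default=-1)

x = np.zeros(n)
for lev in range(level.max() + 1):
    rows = np.where(level == lev)[0]          # one wavefront: independent unknowns
    for i in rows:                            # this inner loop could run in parallel
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]

print(x, level)
```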
NASA Astrophysics Data System (ADS)
Geng, Qi; Bruland, Amund; Macias, Francisco Javier
2018-01-01
The consumption of TBM disc cutters is influenced by the ground conditions (e.g. intact rock properties, rock mass properties, etc.), the TBM boring parameters (e.g. thrust, RPM, penetration, etc.) and the cutterhead design parameters (e.g. cutterhead shape, cutter layout). Previous researchers have done much work on the influence of the ground conditions and TBM boring parameters on cutter consumption; however, limited research has been found on the relationship between the cutterhead design and cutter consumption. The purpose of the present paper is to study the influence of layout on consumption for the TBM face cutters. Data collected from six tunnels (i.e. the Røssåga Headrace Tunnel in Norway, the Qinling Railway Tunnel in China, tubes 3 and 4 of the Guadarrama Railway Tunnel in Spain, the parallel tubes of the Vigo-Das Maceiras Tunnel in Spain) were used for analysis. The cutter consumption shape curve defined as the fitted function of the normalized cutter consumption versus the cutter position radius is found to be uniquely determined by the cutter layout and was used for analysis. The straightness and smoothness indexes are introduced to evaluate the quality of the shape curves. The analytical results suggest that the spacing of face cutters in the inner and outer parts of cutterhead should be slightly larger and smaller, respectively, than the average spacing, and the difference of the position angles between the neighbouring cutters should be constant among the cutter positions. The 2-spiral layout pattern is found to be better than other layout patterns in view of cutter consumption and cutterhead force balance.
Framing the Dialogue: Strategies, Issues and Opportunities
1993-05-01
issue is the relationship between the declining Federal financing of public works and the Federal interest in providing infrastructure services. Large...programs and projects are significant. Improve Infrastructure Management: Management improvements closely parallel the issues associated with strategic... Relationship Between Standards & Performance (GKY & Associates): Examination of the linkage between standards and the delivery of goods and services from...
Parallel Symmetric Eigenvalue Problem Solvers
2015-05-01
get research, tutoring, and mentoring experience as an undergraduate. Last but not least, I thank my family for their love and support. [Table of contents fragments: 4.6.2 Choice of the Ritz shifts; 4.7 Relationship between...] ...pencil. I will conclude with a discussion of the relationship between Trace-Min and simultaneous iteration. If both methods solve the linear systems
ERIC Educational Resources Information Center
Sezer, Adem; Inel, Yusuf; Seçkin, Ahmet Çagdas; Uluçinar, Ufuk
2017-01-01
This study aimed to detect any relationship that may exist between classroom teacher candidates' class participation and their attention levels. The research method was a convergent parallel design, mixing quantitative and qualitative research techniques, and the study group was composed of 21 freshmen studying in the Classroom Teaching Department…
The Relationship between Online Students' Use of Services and Their Feelings of Mattering
ERIC Educational Resources Information Center
Hart, Tracy L.
2017-01-01
The purpose of this single case study was to examine the relationship between online students' use of support services and their feelings of mattering using a convergent parallel research design to collect quantitative and qualitative data. Students enrolled exclusively in online classes during the academic year 2015-2016 at the University of New…
Marshall Space Flight Center 1960-1985: 25th anniversary report
NASA Technical Reports Server (NTRS)
1985-01-01
The Marshall Space Flight Center marks its 25th anniversary with a record of notable achievements. These accomplishments are the essence of the Marshall Center's history. Behind the scenes of the space launches and missions, however, lies the story of challenges faced and problems solved. The highlights of that story are presented. The story is organized not as a straight chronology but as three parallel reviews of the major assignments: propulsion systems and launch vehicles, space science research and technology, and manned space systems. The general goals were to reach space, to know and understand the space environment, and to inhabit and utilize space for the benefit of mankind. Also included is a chronology of major events, presented as a fold-out chart for ready reference.
NASA Astrophysics Data System (ADS)
Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh
2015-07-01
This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for current class of multicore architecture. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism however, when it comes to 3D data migration, as the data size increases the resource requirement of the algorithm also increases. This challenges its practical implementation even on current generation high performance computing systems. Therefore a smart parallelization approach is essential to handle 3D data for migration. The most compute intensive part of Kirchhoff depth migration algorithm is the calculation of traveltime tables due to its resource requirements such as memory/storage and I/O. In the current research work, we target this area and develop a competent parallel algorithm for post and prestack 3D Kirchhoff depth migration, using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth migrating data in parallel imaging space, using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations, which depends upon the available node memory and the size of data to be migrated during runtime. Furthermore, it minimizes the requirements of storage, I/O and inter-node communication, thus making it advantageous over the conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM series of supercomputers. Optimization, performance and scalability experiment results along with the migration outcome show the effectiveness of the parallel algorithm.
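A loose sketch of the flexi-depth idea outlined above: the number of depth iterations is derived from an assumed per-node memory budget so that each migrated depth slab fits in memory. The grid sizes, memory model and placeholder migration kernel are invented, and the real implementation distributes the work with MPI and OpenMP rather than running sequentially as here.

```python
# Hypothetical flexi-depth chunking: migrate the image volume in depth slabs sized from
# an assumed memory budget. The migration kernel is a placeholder, not a Kirchhoff sum.
import numpy as np

nx, ny, nz = 200, 200, 600                 # image grid (assumed)
bytes_per_sample = 4
node_memory_budget = 64e6                  # bytes available for one image slab (assumed)

slab_nz = max(1, int(node_memory_budget // (nx * ny * bytes_per_sample)))
n_iterations = -(-nz // slab_nz)           # ceil division: number of depth iterations

image = np.zeros((nx, ny, nz), dtype=np.float32)

def migrate_slab(z0, z1):
    """Placeholder for Kirchhoff summation restricted to depths [z0, z1)."""
    return np.ones((nx, ny, z1 - z0), dtype=np.float32)

for it in range(n_iterations):
    z0 = it * slab_nz
    z1 = min(nz, z0 + slab_nz)
    image[:, :, z0:z1] += migrate_slab(z0, z1)   # each slab fits in the memory budget

print(n_iterations, slab_nz)
```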
NASA Technical Reports Server (NTRS)
Fijany, Amir
1993-01-01
In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix in the form of the Schur Complement is derived as M^(-1) = C - B^*A^(-1)B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^(-1). For the closed-chain systems, similar factorizations and O(n) algorithms for computation of the Operational Space Mass Matrix λ and its inverse λ^(-1) are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.
Local and non-local deficits in amblyopia: acuity and spatial interactions.
Bonneh, Yoram S; Sagi, Dov; Polat, Uri
2004-12-01
Amblyopic vision is thought to be limited by abnormal long-range spatial interactions, but their exact mode of action and relationship to the main amblyopic deficit in visual acuity is largely unknown. We studied this relationship in a group (N=59) of anisometropic (N=21) and strabismic (or combined, N=38) subjects, using (1) a single and multi-pattern (crowded) computerized static Tumbling-E test with scaled spacing of two pattern widths (TeVA), in addition to an optotype (ETDRS chart) acuity test (VA) and (2) contrast detection of Gabor patches with lateral flankers (lateral masking) along the horizontal and vertical axes as well as in collinear and parallel configurations. By correlating the different measures of visual acuity and contrast suppression, we found that (1) the VA of the strabismic subjects could be decomposed into two uncorrelated components measured in TeVA: acuity for isolated patterns and acuity reduction due to flanking patterns. The latter comprised over 60% of the VA magnitude, on the average and accounted for over 50% of its variance. In contrast, a slight reduction in acuity was found in the anisometropic subjects, and the acuity for a single pattern could account for 70% of the VA variance. (2) The lateral suppression (contrast threshold elevation) in a parallel configuration along the horizontal axis was correlated with the VA (R2=0.7), as well as with the crowding effect (TeVA elevation, R2=0.5) for the strabismic group. Some correlation with the VA was also found for the collinear configuration in the anisometropic group, but less suppression and no correlation were found for all the vertical configurations in all the groups. The results indicate the existence of a specific non-local component of the strabismic deficit, in addition to the local acuity deficit in all amblyopia types. This deficit might reflect long-range lateral inhibition, or alternatively, an inaccurate and scattered top-down attentional selection mechanism.
Operator assistant to support deep space network link monitor and control
NASA Technical Reports Server (NTRS)
Cooper, Lynne P.; Desai, Rajiv; Martinez, Elmain
1992-01-01
Preparing the Deep Space Network (DSN) stations to support spacecraft missions (referred to as pre-cal, for pre-calibration) is currently an operator and time intensive activity. Operators are responsible for sending and monitoring several hundred operator directives, messages, and warnings. Operator directives are used to configure and calibrate the various subsystems (antenna, receiver, etc.) necessary to establish a spacecraft link. Messages and warnings are issued by the subsystems upon completion of an operation, changes of status, or an anomalous condition. Some parts of pre-cal are logically parallel. Significant time savings could be realized if the existing Link Monitor and Control system (LMC) could support the operator in exploiting the parallelism inherent in pre-cal activities. Currently, operators may work on the individual subsystems in parallel; however, the burden of monitoring these parallel operations resides solely with the operator. Messages, warnings, and directives are all presented as they are received, without being correlated to the event that triggered them. Pre-cal is essentially an overhead activity. During pre-cal, no mission is supported, and no other activity can be performed using the equipment in the link. Therefore, it is highly desirable to reduce pre-cal time as much as possible. One approach to do this, as well as to increase efficiency and reduce errors, is the LMC Operator Assistant (OA). The LMC OA prototype demonstrates an architecture which can be used in concert with the existing LMC to exploit parallelism in pre-cal operations while providing the operators with a true monitoring capability, situational awareness and positive control. This paper presents an overview of the LMC OA architecture and the results from initial prototyping and test activities.
Wide-range radioactive-gas-concentration detector
Anderson, D.F.
1981-11-16
A wide-range radioactive-gas-concentration detector and monitor capable of measuring radioactive-gas concentrations over a range of eight orders of magnitude is described. The device is designed to have an ionization chamber sufficiently small to give a fast response time for measuring radioactive gases but sufficiently large to provide accurate readings at low concentration levels. Closely spaced parallel-plate grids provide a uniform electric field in the active region to improve the accuracy of measurements and reduce ion migration time so as to virtually eliminate errors due to ion recombination. The parallel-plate grids are fabricated with a minimal surface area to reduce the effects of contamination resulting from absorption of contaminating materials on the surface of the grids. Additionally, the ionization-chamber wall is spaced a sufficient distance from the active region of the ionization chamber to minimize contamination effects.
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash
2003-01-01
Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
Treshow, M.
1958-08-19
A nuclear reactor is described of the heterogeneous type and employing replaceable tubular fuel elements and heavy water as a coolant and moderator. A plurality of fuel tubes, having their axes parallel, extend through a tank-type pressure vessel which contains the liquid moderator. The fuel elements are disposed within the fuel tubes in the reactive portion of the pressure vessel during normal operation, and the fuel tubes have removable plug members at each end to permit charging and discharging of the fuel elements. The fuel elements are cylindrical strands of jacketed fissionable material having helical exterior ribs. A bundle of fuel elements are held within each fuel tube with their longitudinal axes parallel, the ribs serving to space them apart along their lengths. Coolant liquid is circulated through the fuel tubes between the spaced fuel elements. Suitable control rod and monitoring means are provided for controlling the reactor.
PVFS 2000: An operational parallel file system for Beowulf
NASA Technical Reports Server (NTRS)
Ligon, Walt
2004-01-01
The approach has been to develop Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code. The architecture comprises separate server and client components. BMI: BMI is the network abstraction layer. It is designed with a common driver and modules for each protocol supported. The interface is non-blocking, and provides mechanisms for optimizations including pinning user buffers. Currently TCP/IP and GM (Myrinet) modules have been implemented. Trove: Trove is the storage abstraction layer. It provides for storing both data spaces and name/value pairs. Trove can also be implemented using different underlying storage mechanisms including native files, raw disk partitions, SQL and other databases. The current implementation uses native files for data spaces and Berkeley DB for name/value pairs.
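To make the storage-abstraction layering concrete, here is a loose Python analogue of a Trove-like interface: byte-stream data spaces backed by native files and name/value pairs backed by a plain dictionary standing in for Berkeley DB. It mirrors the described layering only; the class, method names and handles are invented and are not the PVFS2 API.

```python
# Loose analogue of a Trove-like storage abstraction: file-backed data spaces plus a
# dict-backed name/value store. Illustrative only; not the PVFS2 interface.
import os

class TroveLike:
    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)
        self.keyval = {}                       # stand-in for the Berkeley DB backend

    # --- data-space (bytestream) operations --------------------------------
    def write_dspace(self, handle, offset, data):
        path = os.path.join(self.root, f"{handle:016x}")
        with open(path, "r+b" if os.path.exists(path) else "wb") as f:
            f.seek(offset)
            f.write(data)

    def read_dspace(self, handle, offset, size):
        with open(os.path.join(self.root, f"{handle:016x}"), "rb") as f:
            f.seek(offset)
            return f.read(size)

    # --- name/value (metadata) operations -----------------------------------
    def set_keyval(self, handle, key, value):
        self.keyval[(handle, key)] = value

    def get_keyval(self, handle, key):
        return self.keyval[(handle, key)]

store = TroveLike("trove_demo")
store.write_dspace(0x1, 0, b"hello parallel file system")
store.set_keyval(0x1, "owner", "ligon")
print(store.read_dspace(0x1, 6, 8), store.get_keyval(0x1, "owner"))
```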
NASA Astrophysics Data System (ADS)
Zuluaga, Luisa F.; Fossen, Haakon; Rotevatn, Atle
2014-11-01
Monoclinal fault propagation folds are a common type of structure in orogenic foreland settings, particularly on the Colorado Plateau. We have studied a portion of the San Rafael monocline, Utah, assumed to have formed through pure thrust- or reverse-slip (blind) fault movement, and mapped a particular sequence of subseismic cataclastic deformation structures (deformation bands) that can be related in terms of geometry, density and orientation to the dip of the forelimb or fold interlimb angle. In simple terms, deformation bands parallel to bedding are the first structures to form, increasing exponentially in number as the forelimb gets steeper. At about 30° rotation of the forelimb, bands forming ladder structures start to cross-cut bedding, consolidating themselves into a well-defined and regularly spaced network of deformation band zones that rotate with the layering during further deformation. In summary, we demonstrate a close relationship between limb dip and deformation band density that can be used to predict the distribution and orientation of such subseismic structures in subsurface reservoirs of similar type. Furthermore, given the fact that these cataclastic deformation bands compartmentalize fluid flow, this relationship can be used to predict or model fluid flow across and along comparable fault-propagation folds.
NASA Astrophysics Data System (ADS)
Yang, Nancy; Yee, J.; Zheng, B.; Gaiser, K.; Reynolds, T.; Clemon, L.; Lu, W. Y.; Schoenung, J. M.; Lavernia, E. J.
2017-04-01
We investigate the process-structure-property relationships for 316L stainless steel prototyping utilizing 3-D laser engineered net shaping (LENS), a commercial direct energy deposition additive manufacturing process. The study concluded that the resultant physical metallurgy of 3-D LENS 316L prototypes is dictated by the interactive metallurgical reactions, during instantaneous powder feeding/melting, molten metal flow and liquid metal solidification. The study also showed 3-D LENS manufacturing is capable of building high strength and ductile 316L prototypes due to its fine cellular spacing from fast solidification cooling, and the well-fused epitaxial interfaces at metal flow trails and interpass boundaries. However, without further LENS process control and optimization, the deposits are vulnerable to localized hardness variation attributed to heterogeneous microstructure, i.e., the interpass heat-affected zone (HAZ) from repetitive thermal heating during successive layer depositions. Most significantly, the current deposits exhibit anisotropic tensile behavior, i.e., lower strain and/or premature interpass delamination parallel to build direction (axial). This anisotropic behavior is attributed to the presence of interpass HAZ, which coexists with flying feedstock inclusions and porosity from incomplete molten metal fusion. The current observations and findings contribute to the scientific basis for future process control and optimization necessary for material property control and defect mitigation.
Neural representation of objects in space: a dual coding account.
Humphreys, G W
1998-01-01
I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on: (i) visual neglect; and (ii) reading and counting, reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification. PMID:9770227
Creating pedagogical spaces for developing doctor professional identity.
Clandinin, D Jean; Cave, Marie-Therese
2008-08-01
Working with doctors to develop their identities as technically skilled as well as caring, compassionate and ethical practitioners is a challenge in medical education. One way of resolving this derives from a narrative reflective practice approach to working with residents. We examine the use of such an approach. This paper draws on a 2006 study carried out with four family medicine residents into the potential of writing, sharing and inquiring into parallel charts in order to help develop doctor identity. Each resident wrote 10 parallel charts over 10 weeks. All residents met bi-weekly as a group with two researchers to narratively inquire into the stories told in their charts. One parallel chart and the ensuing group inquiry about the chart are described. In the narrative reflective practice process, one resident tells of working with a patient and, through writing, sharing and inquiry, integrates her practice and how she learned to be a doctor in one cultural setting into another cultural setting; another resident affirms her relational way of practising medicine, and a third resident begins to see the complexity of attending to patients' experiences. The process shows the importance of creating pedagogical spaces to allow doctors to tell and retell, through narrative inquiry, their stories of their experiences. This pedagogical approach creates spaces for doctors to individually develop their own stories by which to live as doctors through narrative reflection on their interwoven personal, professional and cultural stories as they are shaped by, and enacted within, their professional contexts.
Hubble Sees a Legion of Galaxies
2017-12-08
Peering deep into the early universe, this picturesque parallel field observation from the NASA/ESA Hubble Space Telescope reveals thousands of colorful galaxies swimming in the inky blackness of space. A few foreground stars from our own galaxy, the Milky Way, are also visible. In October 2013 Hubble’s Wide Field Camera 3 (WFC3) and Advanced Camera for Surveys (ACS) began observing this portion of sky as part of the Frontier Fields program. This spectacular skyscape was captured during the study of the giant galaxy cluster Abell 2744, otherwise known as Pandora’s Box. While one of Hubble’s cameras concentrated on Abell 2744, the other camera viewed this adjacent patch of sky near to the cluster. Containing countless galaxies of various ages, shapes and sizes, this parallel field observation is nearly as deep as the Hubble Ultra-Deep Field. In addition to showcasing the stunning beauty of the deep universe in incredible detail, this parallel field — when compared to other deep fields — will help astronomers understand how similar the universe looks in different directions. Image credit: NASA, ESA and the HST Frontier Fields team (STScI).
Hyperconnectivity, Attribute-Space Connectivity and Path Openings: Theoretical Relationships
NASA Astrophysics Data System (ADS)
Wilkinson, Michael H. F.
In this paper the relationship of hyperconnected filters with path openings and attribute-space connected filters is studied. Using a recently developed axiomatic framework based on hyperconnectivity operators, which are the hyperconnected equivalents of connectivity openings, it is shown that path openings are a special case of hyperconnected area openings. The new axiomatics also yield insight into the relationship between hyperconnectivity and attribute-space connectivity. It is shown that any hyperconnectivity is an attribute-space connectivity, but that the reverse is not true.
2005-08-01
differences, including longer event durations for SEPs from quasi-parallel shocks due to the longer... Yago & Kamide (2003) have shown that the lognormal plot is... [Reference fragments: Urpo, S. 1999, A&A, 348, 271; ApJ, 598, 1392; Klein, K.-L., & Trottet, G. 2001, Space Sci. Rev., 95, 215; Yago, K., & Kamide, Y. 2003, Space Weather, 1]
NASA Astrophysics Data System (ADS)
Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A.; Oliveira, Micael J. T.; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G.; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A. L.
2012-06-01
Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.
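To illustrate the blocks-of-states strategy mentioned above, the sketch below propagates a block of orbitals with a single matrix-matrix product instead of many matrix-vector products, which is what exposes the data parallelism exploited on GPUs and vector units. The Hamiltonian is a random Hermitian stand-in, not a Kohn-Sham operator, and the propagator construction is deliberately naive.

```python
# Blocks-of-states illustration: one matrix-matrix product replaces many matrix-vector
# products, turning the per-state loop into a BLAS-3 / GPU-friendly operation.
import numpy as np
from scipy.linalg import expm

n_grid, n_states, dt = 400, 64, 0.01
rng = np.random.default_rng(0)
H = rng.standard_normal((n_grid, n_grid))
H = 0.5 * (H + H.T)                              # Hermitian stand-in for H_KS
U = expm(-1j * dt * H)                           # short-time propagator (naive)

psi_block = rng.standard_normal((n_grid, n_states)) + 0j

# One state at a time: n_states separate matrix-vector products.
one_by_one = np.stack([U @ psi_block[:, i] for i in range(n_states)], axis=1)

# Whole block at once: a single matrix-matrix product.
blocked = U @ psi_block

print(np.allclose(one_by_one, blocked))
```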
Sublattice parallel replica dynamics.
Martínez, Enrique; Uberuaga, Blas P; Voter, Arthur F
2014-06-01
Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998)] by combining it with the synchronous sublattice approach of Shim and Amar [Phys. Rev. B 71, 125432 (2005)], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.
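As background for the speedup discussion above, the sketch below checks numerically the generic parallel-replica argument: for a memoryless escape process, the first escape among M independent replicas occurs on average M times sooner in wall-clock time. It is not the sublattice speedup expression derived in the paper; the rate and trial counts are arbitrary.

```python
# Monte Carlo check of the generic parallel-replica speedup for an exponential
# (memoryless) escape process: M replicas reduce the mean wall-clock escape time by ~M.
import numpy as np

rng = np.random.default_rng(42)
k, n_trials = 0.01, 20_000            # escape rate per unit MD time, Monte Carlo trials

for m in (1, 4, 16, 64):
    escape_times = rng.exponential(1.0 / k, size=(n_trials, m))
    wall_clock = escape_times.min(axis=1).mean()     # first replica to escape
    print(f"M = {m:3d}  mean wall-clock escape time = {wall_clock:8.1f}  "
          f"(ideal {1.0 / (k * m):.1f})")
```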
Client's view of a successful helping relationship.
Ribner, David S; Knei-Paz, Cigal
2002-10-01
This study asked clients from multiproblem families to describe a successful helping relationship. The replies were analyzed using narrative research techniques and results are presented in conceptual categories with illustrative quotations from the interviews. The article offers conclusions about client preferences in the areas of working relationship, work styles, and worker characteristics. The results revealed two general domains of the client-worker relationship: factors that provided a sense of equality in the relationship, for example, love, friendship, and a nonjudgmental stance; and the notion that the helping relationship should parallel more normative contacts and include components such as flexibility, chemistry, luck, and going the extra distance.
Safety analysis of urban arterials at the meso level.
Li, Jia; Wang, Xuesong
2017-11-01
Urban arterials form the main structure of street networks. They typically have multiple lanes, high traffic volume, and high crash frequency. Classical crash prediction models investigate the relationship between arterial characteristics and traffic safety by treating road segments and intersections as isolated units. This micro-level analysis does not work when examining urban arterial crashes because signal spacing is typically short for urban arterials, and there are interactions between intersections and road segments that classical models do not accommodate. Signal spacing also has safety effects on both intersections and road segments that classical models cannot fully account for because they allocate crashes separately to intersections and road segments. In addition, classical models do not consider the impact on arterial safety of the immediately surrounding street network pattern. This study proposes a new modeling methodology that will offer an integrated treatment of intersections and road segments by combining signalized intersections and their adjacent road segments into a single unit based on road geometric design characteristics and operational conditions. These are called meso-level units because they offer an analytical approach between micro and macro. The safety effects of signal spacing and street network pattern were estimated for this study based on 118 meso-level units obtained from 21 urban arterials in Shanghai, and were examined using CAR (conditional auto regressive) models that corrected for spatial correlation among the units within individual arterials. Results showed shorter arterial signal spacing was associated with higher total and PDO (property damage only) crashes, while arterials with a greater number of parallel roads were associated with lower total, PDO, and injury crashes. The findings from this study can be used in the traffic safety planning, design, and management of urban arterials.
Zhang, Xinping; Wang, Dexiang; Hao, Hongke; Zhang, Fangfang; Hu, Youning
2017-07-26
In this study, Yan'an City, a typical hilly valley city, was taken as the study area. To explain the relationships of the surface urban heat island (SUHI) with land use/land cover (LULC) types and with the landscape pattern metrics of LULC types, land surface temperature (LST) and remote sensing indexes were retrieved from Landsat data for 1990-2015, and factors contributing to the green space cool island intensity (GSCI) were identified through field measurements of 34 green spaces. The results showed that during 1990-2015, because of local anthropogenic activities, the SUHI was mainly located in areas of lower vegetation cover. There was a significant suburban-urban gradient in the average LST, as well as in its heterogeneity and fluctuations. Six landscape metrics (the fractal dimension index, percentage of landscape, aggregation index, division index, Shannon's diversity index, and expansion intensity) indicated that the spatiotemporal changes of the classified LST paralleled the LULC changes, especially for construction land, over the past 25 years. In the urban area, an index-based built-up index was the key positive factor explaining LST increases, whereas the normalized difference vegetation index and the modified normalized difference water index were crucial factors explaining LST decreases during the study periods. In terms of the heat mitigation performance of green spaces, mixed forest was better than pure forest, and the urban forest configuration had positive effects on the GSCI. The results of this study provide insights into the importance of species choice and the spatial design of green spaces for cooling the environment.
Controls on valley spacing in landscapes subject to rapid base-level fall
McGuire, Luke; Pelletier, John D.
2015-01-01
What controls the architecture of drainage networks is a fundamental question in geomorphology. Recent work has elucidated the mechanisms of drainage network development in steadily uplifting landscapes, but the controls on drainage-network morphology in transient landscapes are relatively unknown. In this paper we exploit natural experiments in drainage network development in incised Plio-Quaternary alluvial fan surfaces in order to understand and quantify drainage network development in highly transient landscapes, i.e. initially unincised low-relief surfaces that experience a pulse of rapid base-level drop followed by relative base-level stasis. Parallel drainage networks formed on incised alluvial-fan surfaces tend to have a drainage spacing that is approximately proportional to the magnitude of the base-level drop. Numerical experiments suggest that this observed relationship between the magnitude of base-level drop and mean drainage spacing is the result of feedbacks among the depth of valley incision, mass wasting and nonlinear increases in the rate of colluvial sediment transport with slope gradient on steep valley side slopes that lead to increasingly wide valleys in cases of larger base-level drop. We identify a threshold magnitude of base-level drop above which side slopes lengthen sufficiently to promote increases in contributing area and fluvial incision rates that lead to branching and encourage drainage networks to transition from systems of first-order valleys to systems of higher-order, branching valleys. The headward growth of these branching tributaries prevents the development of adjacent, ephemeral drainages and promotes a higher mean valley spacing relative to cases in which tributaries do not form. Model results offer additional insights into the response of initially unincised landscapes to rapid base-level drop and provide a preliminary basis for understanding how varying amounts of base-level change influence valley network morphology.
Apodized Pupil Lyot Coronagraphs designs for future segmented space telescopes
NASA Astrophysics Data System (ADS)
St. Laurent, Kathryn; Fogarty, Kevin; Zimmerman, Neil; N’Diaye, Mamadou; Stark, Chris; Sivaramakrishnan, Anand; Pueyo, Laurent; Vanderbei, Robert; Soummer, Remi
2018-01-01
A coronagraphic starlight suppression system situated on a future flagship space observatory offers a promising avenue to image Earth-like exoplanets and search for biomarkers in their atmospheric spectra. One NASA mission concept that could serve as the platform to realize this scientific breakthrough is the Large UV/Optical/IR Surveyor (LUVOIR). Such a mission would also address a broad range of topics in astrophysics with a multi-wavelength suite of instruments. In support of the community’s assessment of the scientific capability of a LUVOIR mission, the Exoplanet Exploration Program (ExEP) has launched a multi-team technical study: Segmented Coronagraph Design and Analysis (SCDA). The goal of this study is to develop viable coronagraph instrument concepts for a LUVOIR-type mission. Results of the SCDA effort will directly inform the mission concept evaluation being carried out by the LUVOIR Science and Technology Definition Team. The apodized pupil Lyot coronagraph (APLC) is one of several coronagraph design families that the SCDA study is assessing. The APLC is a Lyot-style coronagraph that suppresses starlight through a series of amplitude operations on the on-axis field. Given a suite of seven plausible segmented telescope apertures, we have developed an object-oriented software toolkit to automate the exploration of thousands of APLC design parameter combinations. In the course of exploring this parameter space we have established relationships between APLC throughput and telescope aperture geometry, Lyot stop, inner working angle, bandwidth, and contrast level. In parallel with the parameter space exploration, we have investigated several strategies to improve the robustness of APLC designs to fabrication and alignment errors and integrated a Design Reference Mission framework to evaluate designs with scientific yield metrics.
Zhang, Xinping; Hao, Hongke; Zhang, Fangfang; Hu, Youning
2017-01-01
In this study, Yan'an City, a typical hilly valley city, was taken as the study area. To explain the relationships of the surface urban heat island (SUHI) with land use/land cover (LULC) types and with the landscape pattern metrics of LULC types, land surface temperature (LST) and remote sensing indexes were retrieved from Landsat data for 1990-2015, and factors contributing to the green space cool island intensity (GSCI) were identified through field measurements of 34 green spaces. The results showed that during 1990-2015, because of local anthropogenic activities, the SUHI was mainly located in areas of lower vegetation cover. There was a significant suburban-urban gradient in the average LST, as well as in its heterogeneity and fluctuations. Six landscape metrics (the fractal dimension index, percentage of landscape, aggregation index, division index, Shannon's diversity index, and expansion intensity) indicated that the spatiotemporal changes of the classified LST paralleled the LULC changes, especially for construction land, over the past 25 years. In the urban area, an index-based built-up index was the key positive factor explaining LST increases, whereas the normalized difference vegetation index and the modified normalized difference water index were crucial factors explaining LST decreases during the study periods. In terms of the heat mitigation performance of green spaces, mixed forest was better than pure forest, and the urban forest configuration had positive effects on the GSCI. The results of this study provide insights into the importance of species choice and the spatial design of green spaces for cooling the environment. PMID:28933770
SiGN-SSM: open source parallel software for estimating gene networks with state space models.
Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru
2011-04-15
SiGN-SSM is an open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), that is, a statistical dynamic model suitable for analyzing short and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint effective in stabilizing the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting the differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under the GNU Affero General Public License (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. The pre-compiled binaries for some architectures are available in addition to the source code. The pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information of SiGN-SSM are available on our web site. tamada@ims.u-tokyo.ac.jp.
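As a rough illustration of the kind of model SiGN-SSM estimates, the sketch below simulates a linear-Gaussian state space model in which a few hidden regulatory states drive observed expression levels. Dimensions, matrices, and noise levels are made-up illustrative values; the estimation machinery of SiGN-SSM (parameter constraints, permutation tests) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian state space model: hidden "module" dynamics x_t drive
# observed expression levels y_t.  All dimensions and matrices are illustrative.
k, p, T = 2, 5, 20          # hidden states, observed genes, time points
F = np.array([[0.8, 0.1],   # state transition (hidden regulatory dynamics)
              [0.0, 0.7]])
H = rng.normal(size=(p, k)) # observation matrix (gene loadings on hidden states)
Q = 0.05 * np.eye(k)        # system noise covariance
R = 0.10 * np.eye(p)        # observation noise covariance

x = np.zeros(k)
ys = []
for _ in range(T):
    x = F @ x + rng.multivariate_normal(np.zeros(k), Q)
    ys.append(H @ x + rng.multivariate_normal(np.zeros(p), R))
ys = np.array(ys)           # simulated short time-series expression profile
```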
NASA Astrophysics Data System (ADS)
Catarino, I.; Soni, V.; Barreto, J.; Martins, D.; Kar, S.
2017-02-01
The conduction cooling of a 6 T superconducting magnet together with a sample probe in a parallel configuration is addressed in this work. A Gifford-McMahon (GM) cryocooler directly cools the NbTi magnet, which is to be kept at 4 K, while a gas-gap heat switch (GGHS) manages the cooling power diverted to the sample probe, whose temperature may be swept from 4 K up to 300 K. A first prototype of a GGHS was customized and validated for this purpose. A sample probe assembly has been designed and assembled with the existing cryogen-free magnet system. The whole test setup and its components are described, and the preliminary experimental results on the integration are presented and discussed. The magnet was charged up to 3 T with a 4 K sample space and up to 1 T with the sample space temperature sweeping up to 300 K while acting on the GGHS. Despite some thermal insulation problems identified during this first test, the overall results demonstrated the feasibility of the cryogen-free parallel conduction cooling under study.
Hyper-Parallel Tempering Monte Carlo Method and Its Applications
NASA Astrophysics Data System (ADS)
Yan, Qiliang; de Pablo, Juan
2000-03-01
A new generalized hyper-parallel tempering Monte Carlo molecular simulation method is presented for study of complex fluids. The method is particularly useful for simulation of many-molecule complex systems, where rough energy landscapes and inherently long characteristic relaxation times can pose formidable obstacles to effective sampling of relevant regions of configuration space. The method combines several key elements from expanded ensemble formalisms, parallel-tempering, open ensemble simulations, configurational bias techniques, and histogram reweighting analysis of results. It is found to accelerate significantly the diffusion of a complex system through phase-space. In this presentation, we demonstrate the effectiveness of the new method by implementing it in grand canonical ensembles for a Lennard-Jones fluid, for the restricted primitive model of electrolyte solutions (RPM), and for polymer solutions and blends. Our results indicate that the new algorithm is capable of overcoming the large free energy barriers associated with phase transitions, thereby greatly facilitating the simulation of coexistence properties. It is also shown that the method can be orders of magnitude more efficient than previously available techniques. More importantly, the method is relatively simple and can be incorporated into existing simulation codes with minor efforts.
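The core ingredient that parallel-tempering style methods build on is the replica-swap acceptance rule; a minimal sketch is given below. The hyper-parallel method described above combines this with expanded/open ensembles, configurational bias, and histogram reweighting, none of which are shown here.

```python
import math, random

def attempt_swap(beta_i, beta_j, energy_i, energy_j):
    """Standard parallel-tempering swap between two replicas.

    Accept the exchange of configurations with probability
    min(1, exp[(beta_i - beta_j) * (E_i - E_j)]); this detailed-balance rule is
    the building block that the hyper-parallel variant extends.
    """
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0.0 or random.random() < math.exp(delta)
```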
Qiu, Gongzhe
2017-01-01
Due to the symmetry of conventional periodic-permanent-magnet electromagnetic acoustic transducers (PPM EMATs), two shear horizontal (SH) waves are generated and propagate simultaneously in opposite directions, which complicates signal recognition and interpretation. This work therefore presents a new SH-wave PPM EMAT design in which the parallel line sources are rotated to focus the wave beam in a single direction. A theoretical model of the distributed line sources was derived first, and the effects of parameters such as the inner coil width, the spacing between adjacent line sources, and the angle between the parallel line sources on SH-wave focusing and directivity were studied with the help of 3D FEM. Experiments with the proposed PPM EMATs were carried out to verify the reliability of the FEM simulation. The results indicate that rotating the parallel line sources strengthens the wave on the closing side of the line sources, and that decreasing the inner coil width and the adjacent line-source spacing improves the amplitude and directivity of the signals excited by the transducers. Compared with traditional PPM EMATs, both the unidirectional excitation capability and the directivity of the proposed PPM EMATs are improved significantly. PMID:29186790
Song, Xiaochun; Qiu, Gongzhe
2017-11-24
Due to the symmetry of conventional periodic-permanent-magnet electromagnetic acoustic transducers (PPM EMATs), two shear horizontal (SH) waves are generated and propagate simultaneously in opposite directions, which complicates signal recognition and interpretation. This work therefore presents a new SH-wave PPM EMAT design in which the parallel line sources are rotated to focus the wave beam in a single direction. A theoretical model of the distributed line sources was derived first, and the effects of parameters such as the inner coil width, the spacing between adjacent line sources, and the angle between the parallel line sources on SH-wave focusing and directivity were studied with the help of 3D FEM. Experiments with the proposed PPM EMATs were carried out to verify the reliability of the FEM simulation. The results indicate that rotating the parallel line sources strengthens the wave on the closing side of the line sources, and that decreasing the inner coil width and the adjacent line-source spacing improves the amplitude and directivity of the signals excited by the transducers. Compared with traditional PPM EMATs, both the unidirectional excitation capability and the directivity of the proposed PPM EMATs are improved significantly.
Mitochondrial gene rearrangements confirm the parallel evolution of the crab-like form.
Morrison, C L; Harvey, A W; Lavery, S; Tieu, K; Huang, Y; Cunningham, C W
2002-01-01
The repeated appearance of strikingly similar crab-like forms in independent decapod crustacean lineages represents a remarkable case of parallel evolution. Uncertainty surrounding the phylogenetic relationships among crab-like lineages has hampered evolutionary studies. As is often the case, aligned DNA sequences by themselves were unable to fully resolve these relationships. Four nested mitochondrial gene rearrangements--including one of the few reported movements of an arthropod protein-coding gene--are congruent with the DNA phylogeny and help to resolve a crucial node. A phylogenetic analysis of DNA sequences, and gene rearrangements, supported five independent origins of the crab-like form, and suggests that the evolution of the crab-like form may be irreversible. This result supports the utility of mitochondrial gene rearrangements in phylogenetic reconstruction. PMID:11886621
Faster Bit-Parallel Algorithms for Unordered Pseudo-tree Matching and Tree Homeomorphism
NASA Astrophysics Data System (ADS)
Kaneta, Yusaku; Arimura, Hiroki
In this paper, we consider the unordered pseudo-tree matching problem, which is the problem of, given two unordered labeled trees P and T, finding all occurrences of P in T via many-one embeddings that preserve node labels and the parent-child relationship. This problem is closely related to the tree pattern matching problem for XPath queries with the child axis only. If m > w, we present an efficient algorithm that solves the problem in O(nm log(w)/w) time using O(hm/w + m log(w)/w) space and O(m log(w)) preprocessing on a unit-cost arithmetic RAM model with addition, where m is the number of nodes in P, n is the number of nodes in T, h is the height of T, and w is the word length. We also discuss a modification of our algorithm for the unordered tree homeomorphism problem, which corresponds to a tree pattern matching problem for XPath queries with the descendant axis only.
Decentralized Interleaving of Paralleled Dc-Dc Buck Converters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Rodriguez, Miguel; Sinha, Mohit
We present a decentralized control strategy that yields switch interleaving among parallel-connected dc-dc buck converters. The proposed method is based on the digital implementation of the dynamics of a nonlinear oscillator circuit as the controller. Each controller is fully decentralized, i.e., it only requires the locally measured output current to synthesize the pulse width modulation (PWM) carrier waveform and no communication between different controllers is needed. By virtue of the intrinsic electrical coupling between converters, the nonlinear oscillator-based controllers converge to an interleaved state with uniform phase-spacing across PWM carriers. To the knowledge of the authors, this work presents the first fully decentralized strategy for switch interleaving in paralleled dc-dc buck converters.
The structure of the electron diffusion region during asymmetric anti-parallel magnetic reconnection
NASA Astrophysics Data System (ADS)
Swisdak, M.; Drake, J. F.; Price, L.; Burch, J. L.; Cassak, P.
2017-12-01
The structure of the electron diffusion region during asymmetric magnetic reconnection is explored with high-resolution particle-in-cell simulations that focus on a magnetopause event observed by the Magnetospheric Multiscale Mission (MMS). A major surprise is the development of a standing, oblique whistler-like structure with regions of intense positive and negative dissipation. This structure arises from high-speed electrons that flow along the magnetosheath magnetic separatrices, converge in the dissipation region and jet across the x-line into the magnetosphere. The jet produces a region of negative charge and generates intense parallel electric fields that eject the electrons downstream along the magnetospheric separatrices. The ejected electrons produce the parallel velocity-space crescents documented by MMS.
NASA Technical Reports Server (NTRS)
Waller, Marvin C. (Editor); Scanlon, Charles H. (Editor)
1996-01-01
A Government and Industry workshop on Flight-Deck-Centered Parallel Runway Approaches in Instrument Meteorological Conditions (IMC) was conducted October 29, 1996 at the NASA Langley Research Center. This document contains the slides and records of the proceedings of the workshop. The purpose of the workshop was to disclose to the National airspace community the status of ongoing NASA R&D to address the closely spaced parallel runway problem in IMC and to seek advice and input on direction of future work to assure an optimized research approach. The workshop also included a description of a Paired Approach Concept which is being studied at United Airlines for application at the San Francisco International Airport.
Parallel Visualization of Large-Scale Aerodynamics Calculations: A Case Study on the Cray T3E
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Crockett, Thomas W.
1999-01-01
This paper reports the performance of a parallel volume rendering algorithm for visualizing a large-scale, unstructured-grid dataset produced by a three-dimensional aerodynamics simulation. This dataset, containing over 18 million tetrahedra, allows us to extend our performance results to a problem which is more than 30 times larger than the one we examined previously. This high resolution dataset also allows us to see fine, three-dimensional features in the flow field. All our tests were performed on the Silicon Graphics Inc. (SGI)/Cray T3E operated by NASA's Goddard Space Flight Center. Using 511 processors, a rendering rate of almost 9 million tetrahedra/second was achieved with a parallel overhead of 26%.
NASA Technical Reports Server (NTRS)
Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.
1992-01-01
Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.
Fuzzy Logic Based Autonomous Parallel Parking System with Kalman Filtering
NASA Astrophysics Data System (ADS)
Panomruttanarug, Benjamas; Higuchi, Kohji
This paper presents an emulation of fuzzy logic control schemes for an autonomous parallel parking system in a backward maneuver. There are four infrared sensors sending the distance data to a microcontroller for generating an obstacle-free parking path. Two of them mounted on the front and rear wheels on the parking side are used as the inputs to the fuzzy rules to calculate a proper steering angle while backing. The other two attached to the front and rear ends serve for avoiding collision with other cars along the parking space. At the end of parking processes, the vehicle will be in line with other parked cars and positioned in the middle of the free space. Fuzzy rules are designed based upon a wall following process. Performance of the infrared sensors is improved using Kalman filtering. The design method needs extra information from ultrasonic sensors. Starting from modeling the ultrasonic sensor in 1-D state space forms, one makes use of the infrared sensor as a measurement to update the predicted values. Experimental results demonstrate the effectiveness of sensor improvement.
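As a conceptual illustration of the sensor-improvement step, the sketch below runs a scalar Kalman filter on noisy distance readings: a constant-distance prediction model is corrected by each new measurement. The noise variances and readings are illustrative assumptions, not the paper's parameters, and the fuzzy steering rules are not shown.

```python
def kalman_predict(x, p, q):
    """Prediction step for a (nearly) constant-distance model: the state keeps
    its value while the uncertainty grows by the process noise q."""
    return x, p + q

def kalman_update(x_pred, p_pred, z, r):
    """Scalar Kalman measurement update: fuse a predicted distance with a new
    sensor reading z of variance r."""
    k = p_pred / (p_pred + r)           # Kalman gain
    x_new = x_pred + k * (z - x_pred)   # corrected distance estimate
    p_new = (1.0 - k) * p_pred          # reduced uncertainty
    return x_new, p_new

# Illustrative filtering of noisy wall-distance readings (units arbitrary).
x, p = 50.0, 4.0
for z in [48.2, 49.5, 51.0, 50.3]:
    x, p = kalman_predict(x, p, q=0.1)
    x, p = kalman_update(x, p, z, r=2.0)
```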
Artificial dielectric stepped-refractive-index lens for the terahertz region.
Hernandez-Serrano, A I; Mendis, Rajind; Reichel, Kimberly S; Zhang, Wei; Castro-Camus, E; Mittleman, Daniel M
2018-02-05
In this paper we theoretically and experimentally demonstrate a stepped-refractive-index convergent lens made of a parallel stack of metallic plates for terahertz frequencies based on artificial dielectrics. The lens consists of a non-uniformly spaced stack of metallic plates, forming a mirror-symmetric array of parallel-plate waveguides (PPWGs). The operation of the device is based on the TE1 mode of the PPWG. The effective refractive index of the TE1 mode is a function of the frequency of operation and the spacing between the plates of the PPWG. By varying the spacing between the plates, we can modify the local refractive index of the structure in every individual PPWG that constitutes the lens, producing a stepped refractive index profile across the multi-plate stack. The theoretical and experimental results show that this structure is capable of focusing a 1 cm diameter beam to a line focus of less than 4 mm for the design frequency of 0.18 THz. This structure shows that this artificial-dielectric concept is an important technology for the fabrication of next generation terahertz devices.
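The frequency- and spacing-dependence of the TE1 effective index that the lens exploits follows the standard parallel-plate waveguide relation, written out below; the example spacings in the comments are illustrative and not taken from the paper.

```latex
% Effective index of the TE1 mode of an air-filled parallel-plate waveguide
% (standard waveguide result; b is the plate separation, f the frequency,
% c the vacuum speed of light):
\[
  n_{\mathrm{eff}}(f, b) \;=\; \sqrt{1 - \left(\frac{c}{2 b f}\right)^{2}},
  \qquad f > f_c = \frac{c}{2b}.
\]
% Illustrative values: at f = 0.18 THz, a spacing of b = 1 mm gives
% n_eff ~ 0.55, while b = 2 mm gives n_eff ~ 0.91, so grading b across the
% stack grades the local refractive index of the artificial dielectric.
```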
NASA Technical Reports Server (NTRS)
1972-01-01
An analysis and design effort was conducted as part of the study of solid rocket motor for a space shuttle booster. The 156-inch-diameter, parallel burn solid rocket motor was selected as its baseline because it is transportable and is the most cost-effective, reliable system that has been developed and demonstrated. The basic approach was to concentrate on the selected baseline design, and to draw from the baseline sufficient data to describe the alternate approaches also studied. The following conclusions were reached with respect to technical feasibility of the use of solid rocket booster motors for the space shuttle vehicle: (1) The 156-inch, parallel-burn baseline SRM design meets NASA's study requirements while incorporating conservative safety factors. (2) The solid rocket motor booster represents a cost-effective approach. (3) Baseline costs are conservative and are based on a demonstrated design. (4) Recovery and reuse are feasible and offer substantial cost savings. (5) Abort can be accomplished successfully. (6) Ecological effects are acceptable.
Do Monkeys Think in Metaphors? Representations of Space and Time in Monkeys and Humans
ERIC Educational Resources Information Center
Merritt, Dustin J.; Casasanto, Daniel; Brannon, Elizabeth M.
2010-01-01
Research on the relationship between the representation of space and time has produced two contrasting proposals. ATOM posits that space and time are represented via a common magnitude system, suggesting a symmetrical relationship between space and time. According to metaphor theory, however, representations of time depend on representations of…
Otazo, Ricardo; Lin, Fa-Hsuan; Wiggins, Graham; Jordan, Ramiro; Sodickson, Daniel; Posse, Stefan
2009-01-01
Standard parallel magnetic resonance imaging (MRI) techniques suffer from residual aliasing artifacts when the coil sensitivities vary within the image voxel. In this work, a parallel MRI approach known as Superresolution SENSE (SURE-SENSE) is presented in which acceleration is performed by acquiring only the central region of k-space instead of increasing the sampling distance over the complete k-space matrix and reconstruction is explicitly based on intra-voxel coil sensitivity variation. In SURE-SENSE, parallel MRI reconstruction is formulated as a superresolution imaging problem where a collection of low resolution images acquired with multiple receiver coils are combined into a single image with higher spatial resolution using coil sensitivities acquired with high spatial resolution. The effective acceleration of conventional gradient encoding is given by the gain in spatial resolution, which is dictated by the degree of variation of the different coil sensitivity profiles within the low resolution image voxel. Since SURE-SENSE is an ill-posed inverse problem, Tikhonov regularization is employed to control noise amplification. Unlike standard SENSE, for which acceleration is constrained to the phase-encoding dimension/s, SURE-SENSE allows acceleration along all encoding directions — for example, two-dimensional acceleration of a 2D echo-planar acquisition. SURE-SENSE is particularly suitable for low spatial resolution imaging modalities such as spectroscopic imaging and functional imaging with high temporal resolution. Application to echo-planar functional and spectroscopic imaging in human brain is presented using two-dimensional acceleration with a 32-channel receiver coil. PMID:19341804
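The regularized reconstruction step can be summarized by ordinary Tikhonov-regularized least squares, sketched below with a toy ill-conditioned matrix standing in for the combined coil-sensitivity and gradient encoding. Matrix sizes and the regularization weight are illustrative assumptions, not values from the SURE-SENSE implementation.

```python
import numpy as np

def tikhonov_solve(A, y, lam):
    """Solve min_x ||A x - y||^2 + lam * ||x||^2 (regularized least squares)."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ y)

# Toy ill-conditioned encoding matrix; sizes, conditioning, and lambda are
# illustrative only and unrelated to any actual MRI acquisition.
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 32)) @ np.diag(1.0 / np.arange(1, 33))
x_true = rng.normal(size=32)
y = A @ x_true + 0.01 * rng.normal(size=64)
x_hat = tikhonov_solve(A, y, lam=1e-3)   # regularization controls noise amplification
```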
Parallel Monte Carlo Search for Hough Transform
NASA Astrophysics Data System (ADS)
Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.
2017-10-01
We investigate the problem of line detection in digital image processing, in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of a Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of space forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform turns the problem of line detection into one of optimization: finding the peak in a vote-counting process over cells that contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, its effectiveness can be reduced in the presence of noise. Our first contribution is an evaluation of a variation of the Radon Transform as a means of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
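For reference, the vote-counting step at the heart of the Hough Transform is sketched below in the usual (theta, rho) parametrization; the point set, bin counts, and peak-picking are illustrative and unrelated to the parallel and Monte Carlo variants discussed in the paper.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=None):
    """Vote-counting Hough transform for lines in the parametrization
    rho = x*cos(theta) + y*sin(theta).  Returns the accumulator and bin axes."""
    pts = np.asarray(points, dtype=float)
    if rho_max is None:
        rho_max = np.hypot(pts[:, 0], pts[:, 1]).max() + 1.0
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in pts:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), idx] += 1          # one vote per (theta, rho) cell
    return acc, thetas, np.linspace(-rho_max, rho_max, n_rho)

# Points roughly on the line y = 2x + 1 plus noise; the accumulator peak
# identifies the dominant line's (theta, rho) cell.
rng = np.random.default_rng(0)
xs = np.linspace(0, 10, 50)
pts = np.c_[xs, 2 * xs + 1 + 0.05 * rng.normal(size=xs.size)]
acc, thetas, rhos = hough_lines(pts)
i, j = np.unravel_index(acc.argmax(), acc.shape)   # peak cell = detected line
```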
Terminal ballistics of a reduced-mass penetrator. Final report, January 1990--December 1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silsby, G.F.
1996-07-01
This report presents the results of an experimental program to examine the performance of a reduced-mass concept penetrator impacting semi-infinite rolled homogeneous armor (RHA) at normal incidence. The reduced-mass penetrator used in this program is a solid tungsten alloy rod with eight holes equally spaced on a circle, their axes parallel to the rod axis. Its performance was contrasted with baseline data for length-to-diameter ratios (L/D) 4 and 5 solid tungsten alloy penetrators. Striking velocity was nominally 1.6 km/s. A determined effort to reduce the scatter in the data by analysis of collateral data from the US Army Research Laboratory (ARL) and literature sources suggested only a rather weak influence of L/D on penetration even at L/Ds approaching 1 and provided a tentative relationship to remove the influence of target lateral edge effects. It tightened up the holed-out rod data enough to be able to conclude with a moderate degree of certainty that there was no improvement in penetration as suggested by a simplistic density law model. A companion work by Kimsey of ARL examines the performance of this novel penetrator concept computationally, using the Eulerian code CTH. His work explains the possible causes of the reduced performance suggested by the analysis by Zook and Frank of ARL, though with some relative improvement in performance at higher velocities.
Lead-Free Experiment in a Space Environment
NASA Technical Reports Server (NTRS)
Blanche, J. F.; Strickland, S. M.
2012-01-01
This Technical Memorandum addresses the Lead-Free Technology Experiment in Space Environment that flew as part of the seventh Materials International Space Station Experiment outside the International Space Station for approximately 18 months. Its intent was to provide data on the performance of lead-free electronics in an actual space environment. Its postflight condition is compared to the preflight condition as well as to the condition of an identical package operating in parallel in the laboratory. Some tin whisker growth was seen on a flight board but the whiskers were few and short. There were no solder joint failures, no tin pest formation, and no significant intermetallic compound formation or growth on either the flight or ground units.
MMS Observations of Parallel Electric Fields During a Quasi-Perpendicular Bow Shock Crossing
NASA Astrophysics Data System (ADS)
Goodrich, K.; Schwartz, S. J.; Ergun, R.; Wilder, F. D.; Holmes, J.; Burch, J. L.; Gershman, D. J.; Giles, B. L.; Khotyaintsev, Y. V.; Le Contel, O.; Lindqvist, P. A.; Strangeway, R. J.; Russell, C.; Torbert, R. B.
2016-12-01
Previous observations of the terrestrial bow shock have frequently shown large-amplitude fluctuations in the parallel electric field. These parallel electric fields are seen as both nonlinear solitary structures, such as double layers and electron phase-space holes, and short-wavelength waves, which can reach amplitudes greater than 100 mV/m. The Magnetospheric Multi-Scale (MMS) Mission has crossed the Earth's bow shock more than 200 times. The parallel electric field signatures observed in these crossings are seen in very discrete packets and evolve over time scales of less than a second, indicating the presence of a wealth of kinetic-scale activity. The high time resolution of the Fast Particle Instrument (FPI) available on MMS offers greater detail of the kinetic-scale physics that occur at bow shocks than ever before, allowing greater insight into the overall effect of these observed electric fields. We present a characterization of these parallel electric fields found in a single bow shock event and how it reflects the kinetic-scale activity that can occur at the terrestrial bow shock.
NASA Astrophysics Data System (ADS)
Tatsuura, Satoshi; Wada, Osamu; Furuki, Makoto; Tian, Minquan; Sato, Yasuhiro; Iwasa, Izumi; Pu, Lyong Sun
2001-04-01
In this study, we introduce a new concept of all-optical two-dimensional serial-to-parallel pulse converters. Femtosecond optical pulses can be understood as thin plates of light traveling in space. When a femtosecond signal-pulse train and a single gate pulse were fed onto a material with a finite incident angle, each signal-pulse plate met the gate-pulse plate at different locations in the material due to the time-of-flight effect. Meeting points can be made two-dimensional by adding a partial time delay to the gate pulse. By placing a nonlinear optical material at an appropriate position, two-dimensional serial-to-parallel conversion of a signal-pulse train can be achieved with a single gate pulse. We demonstrated the detection of parallel outputs from a 1-Tb/s optical-pulse train through the use of a BaB2O4 crystal. We also succeeded in demonstrating 1-Tb/s serial-to-parallel operation through the use of a novel organic nonlinear optical material, squarylium-dye J-aggregate film, which exhibits ultrafast recovery of bleached absorption.
Green Space, Violence, and Crime: A Systematic Review.
Bogar, Sandra; Beyer, Kirsten M
2016-04-01
To determine the state of evidence on relationships among urban green space, violence, and crime in the United States. Major bibliographic databases were searched for studies meeting inclusion criteria. Additional studies were culled from study references and authors' personal collections. Comparison among studies was limited by variations in study design and measurement and results were mixed. However, more evidence supports the positive impact of green space on violence and crime, indicating great potential for green space to shape health-promoting environments. Numerous factors influence the relationships among green space, crime, and violence. Additional research and standardization among research studies are needed to better understand these relationships. © The Author(s) 2015.
USDA-ARS?s Scientific Manuscript database
Our work in dogs has revealed a U-shaped dose response between selenium status and prostatic DNA damage that remarkably parallels the relationship between dietary selenium and prostate cancer risk in men, suggesting that more selenium is not necessarily better. Herein, we extend this canine work to ...
NASA Astrophysics Data System (ADS)
Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.
2018-05-01
As one of the non-conventional mathematics concepts, Parallel Coordinates has the potential to be learned by pre-service mathematics teachers, giving them experience in constructing richer schemes and carrying out abstraction processes. Unfortunately, studies related to this issue are still limited. This study addresses the research question "to what extent can the abstraction process of pre-service mathematics teachers in learning the concept of Parallel Coordinates indicate their performance in learning Analytic Geometry". It is a case study that is part of a larger study examining the mathematical abstraction of pre-service mathematics teachers learning a non-conventional mathematics concept. Descriptive statistics are used to analyze the scores from three different tests: Cartesian Coordinates, Parallel Coordinates, and Analytic Geometry. The participants were 45 pre-service mathematics teachers. The results show a linear association between the scores on Cartesian Coordinates and Parallel Coordinates, and that higher levels of the abstraction process in learning Parallel Coordinates are associated with higher achievement in Analytic Geometry. These results indicate that the concept of Parallel Coordinates plays a significant role for pre-service mathematics teachers in learning Analytic Geometry.
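For readers new to the concept itself, the sketch below shows the point-line duality that makes Parallel Coordinates non-conventional: a Cartesian point (a, b) becomes a segment joining height a on one vertical axis to height b on the next, and collinear Cartesian points map to segments through a common point. The plotted values are arbitrary illustrations, not data from the study.

```python
import matplotlib.pyplot as plt

# Point-line duality: the Cartesian point (a, b) is drawn as a segment from
# height a on axis X1 (at x = 0) to height b on axis X2 (at x = 1).
points = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]   # collinear in Cartesian space: y = 4 - x

fig, ax = plt.subplots()
for a, b in points:
    ax.plot([0, 1], [a, b], marker="o")
ax.set_xticks([0, 1])
ax.set_xticklabels(["X1", "X2"])
ax.set_title("Cartesian points drawn as segments in parallel coordinates")
plt.show()
```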
A Structure-Toxicity Study of Aβ42 Reveals a New Anti-Parallel Aggregation Pathway
Vignaud, Hélène; Bobo, Claude; Lascu, Ioan; Sörgjerd, Karin Margareta; Zako, Tamotsu; Maeda, Mizuo; Salin, Benedicte; Lecomte, Sophie; Cullin, Christophe
2013-01-01
Amyloid beta (Aβ) peptides produced by APP cleavage are central to the pathology of Alzheimer's disease. Despite widespread interest in this issue, the relationship between the auto-assembly and toxicity of these peptides remains controversial. One intriguing feature stems from their capacity to form anti-parallel β-sheet oligomeric intermediates that can be converted into a parallel topology to allow the formation of protofibrillar and fibrillar Aβ. Here, we present a novel approach to determining the molecular aspects of Aβ assembly that are responsible for its in vivo toxicity. We selected Aβ mutants with varying intracellular toxicities. In vitro, only toxic Aβ (including wild-type Aβ42) formed urea-resistant oligomers. These oligomers were able to assemble into fibrils that are rich in anti-parallel β-sheet structures. Our results support the existence of a new pathway that depends on the folding capacity of Aβ. PMID:24244667
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonachea, Dan; Hargrove, P.
GASNet is a language-independent, low-level networking layer that provides network-independent, high-performance communication primitives tailored for implementing parallel global address space SPMD languages and libraries such as UPC, UPC++, Co-Array Fortran, Legion, Chapel, and many others. The interface is primarily intended as a compilation target and for use by runtime library writers (as opposed to end users), and the primary goals are high performance, interface portability, and expressiveness. GASNet stands for "Global-Address Space Networking".
Study of solid rocket motors for a space shuttle booster. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Vonderesch, A. H.
1972-01-01
The factors affecting the choice of the 156 inch diameter, parallel burn, solid propellant rocket engine for use with the space shuttle booster are presented. Primary considerations leading to the selection are: (1) low booster vehicle cost, (2) the largest proven transportable system, (3) a demonstrated design, (4) recovery/reuse is feasible, (5) abort can be easily accomplished, and (6) ecological effects are minor.
Medendorp, W. P.
2015-01-01
It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability in which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms. PMID:26490289
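The reliability weighting described above corresponds to the standard inverse-variance combination rule of optimal integration, sketched below; the numerical values are illustrative only and not taken from the experiment.

```python
def combine(x_eye, var_eye, x_body, var_body):
    """Reliability-weighted (inverse-variance) combination of two position
    estimates, the standard optimal-integration rule the model builds on."""
    w_eye = 1.0 / var_eye
    w_body = 1.0 / var_body
    x = (w_eye * x_eye + w_body * x_body) / (w_eye + w_body)
    var = 1.0 / (w_eye + w_body)
    return x, var

# Illustrative numbers only: after updating across a body translation, the two
# frames disagree slightly and the body-centered estimate is less precise.
x_hat, var_hat = combine(x_eye=4.8, var_eye=1.0, x_body=5.6, var_body=4.0)
# x_hat = 4.96, var_hat = 0.8 -- more precise than either single-frame estimate
```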
Analytical model for vibration prediction of two parallel tunnels in a full-space
NASA Astrophysics Data System (ADS)
He, Chao; Zhou, Shunhua; Guo, Peijun; Di, Honggui; Zhang, Xiaohui
2018-06-01
This paper presents a three-dimensional analytical model for the prediction of ground vibrations from two parallel tunnels embedded in a full-space. The two tunnels are modelled as cylindrical shells of infinite length, and the surrounding soil is modelled as a full-space with two cylindrical cavities. A virtual interface is introduced to divide the soil into the right layer and the left layer. By transforming the cylindrical waves into plane waves, the solution of wave propagation in the full-space with two cylindrical cavities is obtained. The transformations from plane waves to cylindrical waves are then used to satisfy the boundary conditions on the tunnel-soil interfaces. The proposed model provides a highly efficient tool to predict the ground vibration induced by the underground railway, which accounts for the dynamic interaction between neighbouring tunnels. Analysis of the vibration fields produced over a range of frequencies and soil properties is conducted. When the distance between the two tunnels is smaller than three times the tunnel diameter, the interaction between neighbouring tunnels is highly significant, at times in the order of 20 dB. It is necessary to consider the interaction between neighbouring tunnels for the prediction of ground vibrations induced by underground railways.
Reverse control for humanoid robot task recognition.
Hak, Sovannara; Mansard, Nicolas; Stasse, Olivier; Laumond, Jean Paul
2012-12-01
Efficient methods to perform motion recognition have been developed using statistical tools. Those methods rely on primitive learning in a suitable space, for example, the latent space of the joint angle and/or adequate task spaces. Learned primitives are often sequential: A motion is segmented according to the time axis. When working with a humanoid robot, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition can thus not be limited to one task per consecutive segment of time. The method presented in this paper takes advantage of the knowledge of what tasks the robot is able to do and how the motion is generated from this set of known controllers, to perform a reverse engineering of an observed motion. This analysis is intended to recognize parallel tasks that have been used to generate a motion. The method relies on the task-function formalism and the projection operation into the null space of a task to decouple the controllers. The approach is successfully applied on a real robot to disambiguate motion in different scenarios where two motions look similar but have different purposes.
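The decoupling device the recognition method relies on is the projection into the null space of a task Jacobian; a minimal sketch of that projector, with a toy two-joint example, is given below. The Jacobian and velocities are illustrative assumptions; the paper's actual recognition pipeline is not reproduced.

```python
import numpy as np

def null_space_projector(J):
    """Projector onto the null space of a task Jacobian J: velocities of the
    form P @ qdot do not affect the task, so a secondary task can act through
    P without disturbing the primary one (task-function formalism)."""
    return np.eye(J.shape[1]) - np.linalg.pinv(J) @ J

# Toy 2-DOF example: the primary task involves only the first joint.
J1 = np.array([[1.0, 0.0]])
P1 = null_space_projector(J1)            # [[0, 0], [0, 1]]
qdot_secondary = np.array([0.3, 0.7])
qdot = np.linalg.pinv(J1) @ np.array([0.1]) + P1 @ qdot_secondary
# J1 @ qdot recovers the primary task velocity 0.1 exactly; the secondary
# contribution only moves the second joint.
```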
Target intersection probabilities for parallel-line and continuous-grid types of search
McCammon, R.B.
1977-01-01
The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. The probability of intersection for an elliptically shaped target can be approximated by treating the ellipse as intermediate between a circle and a line. A search conducted along a continuous rectangular grid can be represented as intermediate between a search along parallel lines and along a continuous square grid. On this basis, an upper and lower bound for the probability of intersection of an elliptically shaped target for a continuous rectangular grid can be calculated. Charts have been constructed that permit the values for these probabilities to be obtained graphically. The use of conditional probability allows the explorationist greater flexibility in considering alternate search strategies for locating hidden targets. © 1977 Plenum Publishing Corp.
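Generalization (1) above can be made concrete with the classical result for a line-segment target of length L randomly placed and oriented among parallel lines of spacing d (L < d): the intersection probability is 2L/(pi d). The Monte Carlo sketch below reproduces this under the same uniform-orientation assumption; it is an editor's illustration, not one of the paper's charts or derivations.

```python
import math, random

def intersection_probability(target_length, line_spacing, trials=200_000):
    """Monte Carlo estimate of the chance that a randomly placed and oriented
    line-segment target is crossed by at least one line of a parallel-line
    search of the given spacing (uniform priors on position and orientation)."""
    hits = 0
    for _ in range(trials):
        theta = random.uniform(0.0, math.pi)          # target orientation
        y_center = random.uniform(0.0, line_spacing)  # position between two lines
        half_span = 0.5 * target_length * abs(math.sin(theta))  # extent across lines
        if y_center - half_span < 0.0 or y_center + half_span > line_spacing:
            hits += 1
    return hits / trials

# For a target shorter than the spacing, the classical result is 2L/(pi*d);
# e.g. L = 1, d = 2 gives about 0.318, which the simulation reproduces.
print(intersection_probability(1.0, 2.0))
```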
DOT National Transportation Integrated Search
2005-04-01
The relationship between Iowa's roads and drainage developed when rural roads were originally constructed. The land parallel to roadways was excavated to create road embankments. The resulting ditches provided an outlet for shallow tiles to...
NASA Technical Reports Server (NTRS)
Jarrett, T. W.
1972-01-01
Various space shuttle ascent configurations were tested in a trisonic wind tunnel to determine the aerodynamic characteristics. The ascent configuration consisted of a NASA/MSC 040 orbiter in combination with various HO centerline tank and booster geometries. The aerodynamic interference between components of the space shuttle and the effect on the orbiter aerodynamics was determined. The various aerodynamic configurations tested were: (1) centerline HO tanks T1 and T2, (2) centerline HO tank T3, and (3) centerline HO tank H4.
Space-multiplexed optical scanner.
Riza, Nabeel A; Yaqoob, Zahid
2004-05-01
A low-loss two-dimensional optical beam scanner that is capable of delivering large (e.g., > 10 degrees) angular scans along the elevation as well as the azimuthal direction is presented. The proposed scanner is based on a space-switched parallel-serial architecture that employs a coarse-scanner module and a fine-scanner module that produce an ultrahigh scan space-fill factor, e.g., 900 x 900 distinguishable beams in a 10 degrees (elevation) x 10 degrees (azimuth) scan space. The experimentally demonstrated one-dimensional version of the proposed scanner has a supercontinuous scan, 100 distinguishable beam spots in a 2.29 degrees total scan range, and 1.5-dB optical insertion loss.
Predicting near-ground vortex lifetimes using Weibull density functions
DOT National Transportation Integrated Search
2007-01-08
To mitigate safety hazards posed by near-ground vortex lateral transport, under instrument flight rules (IFR), parallel runway operations must adopt aircraft spacing standards that often reduce capacity. Once the phenomenon of lateral transport i...
Motion of Aircraft Wake Vortices in Ground Effect.
DOT National Transportation Integrated Search
2000-04-01
This report addresses the wake-turbulence separation standards for close-spaced parallel runways. Ground-wind anemometer data collected at Kennedy (landing) and O'Hare (takeoff) airports are analyzed to assess the lateral transport probability for wa...
Setsompop, Kawin; Alagappan, Vijayanand; Gagoski, Borjan; Witzel, Thomas; Polimeni, Jonathan; Potthast, Andreas; Hebrank, Franz; Fontius, Ulrich; Schmitt, Franz; Wald, Lawrence L; Adalsteinsson, Elfar
2008-12-01
Slice-selective RF waveforms that mitigate severe B1+ inhomogeneity at 7 Tesla using parallel excitation were designed and validated in a water phantom and human studies on six subjects using a 16-element degenerate stripline array coil driven with a butler matrix to utilize the eight most favorable birdcage modes. The parallel RF waveform design applied magnitude least-squares (MLS) criteria with an optimized k-space excitation trajectory to significantly improve profile uniformity compared to conventional least-squares (LS) designs. Parallel excitation RF pulses designed to excite a uniform in-plane flip angle (FA) with slice selection in the z-direction were demonstrated and compared with conventional sinc-pulse excitation and RF shimming. In all cases, the parallel RF excitation significantly mitigated the effects of inhomogeneous B1+ on the excitation FA. The optimized parallel RF pulses for human B1+ mitigation were only 67% longer than a conventional sinc-based excitation, but significantly outperformed RF shimming. For example the standard deviations (SDs) of the in-plane FA (averaged over six human studies) were 16.7% for conventional sinc excitation, 13.3% for RF shimming, and 7.6% for parallel excitation. This work demonstrates that excitations with parallel RF systems can provide slice selection with spatially uniform FAs at high field strengths with only a small pulse-duration penalty. (c) 2008 Wiley-Liss, Inc.
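The magnitude least-squares criterion mentioned above is commonly solved by a variable-exchange iteration (alternately fixing the excitation phase and solving an ordinary least-squares problem); a generic sketch is given below. The system matrix, target, and iteration count are illustrative assumptions and do not reproduce the paper's k-space trajectory optimization or any regularization it may use.

```python
import numpy as np

def mls_variable_exchange(A, b, n_iter=50):
    """Magnitude least-squares via variable exchange: approximately minimize
    || |A x| - b || by alternately re-estimating the free target phase and
    solving an ordinary least-squares problem.  Here b holds desired flip-angle
    magnitudes and A a generic (made-up) excitation system matrix."""
    m, n = A.shape
    z = np.ones(m, dtype=complex)               # current phase estimate
    x = np.zeros(n, dtype=complex)
    for _ in range(n_iter):
        x, *_ = np.linalg.lstsq(A, b * z, rcond=None)
        z = np.exp(1j * np.angle(A @ x))        # update the free phase
    return x

# Toy problem with a random complex system matrix (illustrative only).
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10)) + 1j * rng.normal(size=(40, 10))
b = np.abs(rng.normal(size=40))
x = mls_variable_exchange(A, b)
residual = np.linalg.norm(np.abs(A @ x) - b)
```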
Shrimankar, D D; Sathe, S R
2016-01-01
Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes. Programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. OpenMP programs, however, cannot scale beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes, at the cost of internode communication overhead. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that communication overhead is significant even in OpenMP loop execution and increases with the number of participating cores. We also present a communication model to approximate the overhead from communication in OpenMP loops. Our results are striking and hold for a large variety of input data files. We have developed our own load balancing and cache optimization techniques for the message-passing model. Our experimental results show that these techniques give optimal performance of our parallel algorithm for various input parameters, such as sequence size and tile size, on a wide variety of multicore architectures.
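For context, the optimal alignment referred to above is classically computed with dynamic programming, e.g. Needleman-Wunsch for global alignment; a minimal serial sketch is shown below (the O(nm) table fill is what tiled OpenMP/MPI implementations distribute, typically along anti-diagonals). Scoring values are illustrative assumptions, not the paper's.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score by dynamic programming (Needleman-Wunsch).
    The O(len(a)*len(b)) table fill is the part that parallel implementations
    distribute across threads or MPI ranks."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))   # optimal global alignment score
```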
Shrimankar, D. D.; Sathe, S. R.
2016-01-01
Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes. Programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. OpenMP programs, however, cannot scale beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes, at the cost of internode communication overhead. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that communication overhead is significant even in OpenMP loop execution and increases with the number of participating cores. We also present a communication model to approximate the overhead from communication in OpenMP loops. Our results are striking and hold for a large variety of input data files. We have developed our own load balancing and cache optimization techniques for the message-passing model. Our experimental results show that these techniques give optimal performance of our parallel algorithm for various input parameters, such as sequence size and tile size, on a wide variety of multicore architectures. PMID:27932868
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shipman, Galen M.
These are the slides for a presentation on programming models in HPC, at the Los Alamos National Laboratory's Parallel Computing Summer School. The following topics are covered: Flynn's Taxonomy of computer architectures; single instruction single data; single instruction multiple data; multiple instruction multiple data; address space organization; definition of Trinity (Intel Xeon-Phi is a MIMD architecture); single program multiple data; multiple program multiple data; ExMatEx workflow overview; definition of a programming model, programming languages, runtime systems; programming model and environments; MPI (Message Passing Interface); OpenMP; Kokkos (Performance Portable Thread-Parallel Programming Model); Kokkos abstractions, patterns, policies, and spaces; RAJA, a systematic approach to node-level portability and tuning; overview of the Legion Programming Model; mapping tasks and data to hardware resources; interoperability: supporting task-level models; Legion S3D execution and performance details; workflow, integration of external resources into the programming model.
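To make the SPMD idea concrete, here is a minimal MPI example using mpi4py: every rank runs the same program, behavior branches on the rank index, and a reduction combines the partial results. This is an editor's illustration, not material from the slides; it assumes mpi4py is installed and the script is launched with an MPI launcher (e.g. mpiexec -n 4 python spmd_sum.py).

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's index (0 .. size-1)
size = comm.Get_size()      # total number of processes running the same program

# Each rank computes a partial sum over a strided slice; rank 0 gathers the total.
local = sum(range(rank, 1000, size))
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("sum over all ranks:", total)   # 499500 regardless of the number of ranks
```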
Focused terahertz waves generated by a phase velocity gradient in a parallel-plate waveguide.
McKinney, Robert W; Monnai, Yasuaki; Mendis, Rajind; Mittleman, Daniel
2015-10-19
We demonstrate the focusing of a free-space THz beam emerging from a leaky parallel-plate waveguide (PPWG). Focusing is accomplished by grading the launch angle of the leaky wave using a PPWG with gradient plate separation. Inside the PPWG, the phase velocity of the guided TE1 mode exceeds the vacuum light speed, allowing the wave to leak into free space from a slit cut along the top plate. Since the leaky wave angle changes as the plate separation decreases, the beam divergence can be controlled by grading the plate separation along the propagation axis. We experimentally demonstrate focusing of the leaky wave at a selected location at frequencies of 100 GHz and 170 GHz, and compare our measurements with numerical simulations. The proposed concept can be valuable for implementing a flat and wide-aperture beam-former for THz communications systems.
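The grading principle described above follows from the standard leaky-wave relation for the TE1 mode: the wave leaks from the slit at an angle set by the local plate separation b(z). The expression below is the textbook form of that relation, written here for reference; it is not quoted from the paper.

```latex
% Leaky-wave launch angle of the TE1 parallel-plate mode (standard result):
% the wave radiates at angle theta from the waveguide axis, where beta is the
% guided propagation constant, k_0 the free-space wavenumber, b(z) the local
% plate separation, f the frequency, and c the vacuum speed of light.
\[
  \cos\theta(z) \;=\; \frac{\beta(z)}{k_0}
  \;=\; \sqrt{1 - \left(\frac{c}{2\, b(z)\, f}\right)^{2}}
\]
% Grading b(z) along the propagation axis therefore steers the local launch
% angle, so rays leaking at different positions can be made to converge.
```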
Computations on the massively parallel processor at the Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Strong, James P.
1991-01-01
Described are four significant algorithms implemented on the massively parallel processor (MPP) at the Goddard Space Flight Center. Two are in the area of image analysis. Of the other two, one is a mathematical simulation experiment and the other deals with the efficient transfer of data between distantly separated processors in the MPP array. The first algorithm presented is the automatic determination of elevations from stereo pairs. The second algorithm solves mathematical logistic equations capable of producing both ordered and chaotic (or random) solutions. This work can potentially lead to the simulation of artificial life processes. The third algorithm is the automatic segmentation of images into reasonable regions based on some similarity criterion, while the fourth is an implementation of a bitonic sort of data which significantly overcomes the nearest neighbor interconnection constraints on the MPP for transferring data between distant processors.
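Of the four algorithms, the bitonic sort is the most self-contained to illustrate; the serial sketch below shows its fixed compare-exchange pattern, which is what maps well onto mesh-connected machines such as the MPP. This is a generic textbook version, not the MPP implementation, and the input list is an arbitrary example.

```python
def bitonic_sort(a, ascending=True):
    """Recursive bitonic sort (len(a) must be a power of two).  The data-independent
    compare-exchange pattern is what makes the algorithm attractive for
    nearest-neighbor-connected parallel machines."""
    n = len(a)
    if n <= 1:
        return list(a)
    first = bitonic_sort(a[: n // 2], True)    # ascending half
    second = bitonic_sort(a[n // 2 :], False)  # descending half -> bitonic sequence
    return _bitonic_merge(first + second, ascending)

def _bitonic_merge(a, ascending):
    n = len(a)
    if n <= 1:
        return list(a)
    a = list(a)
    half = n // 2
    for i in range(half):                      # compare-exchange across the halves
        if (a[i] > a[i + half]) == ascending:
            a[i], a[i + half] = a[i + half], a[i]
    return _bitonic_merge(a[:half], ascending) + _bitonic_merge(a[half:], ascending)

print(bitonic_sort([7, 3, 6, 1, 8, 2, 5, 4]))   # [1, 2, 3, 4, 5, 6, 7, 8]
```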
Wide range radioactive gas concentration detector
Anderson, David F.
1984-01-01
A wide range radioactive gas concentration detector and monitor which is capable of measuring radioactive gas concentrations over a range of eight orders of magnitude. The device of the present invention is designed to have an ionization chamber which is sufficiently small to give a fast response time for measuring radioactive gases but sufficiently large to provide accurate readings at low concentration levels. Closely spaced parallel plate grids provide a uniform electric field in the active region to improve the accuracy of measurements and reduce ion migration time so as to virtually eliminate errors due to ion recombination. The parallel plate grids are fabricated with a minimal surface area to reduce the effects of contamination resulting from absorption of contaminating materials on the surface of the grids. Additionally, the ionization chamber wall is spaced a sufficient distance from the active region of the ionization chamber to minimize contamination effects.
UPC++ Programmer’s Guide (v1.0 2017.9)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bachan, J.; Baden, S.; Bonachea, D.
UPC++ is a C++11 library that provides Asynchronous Partitioned Global Address Space (APGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The APGAS model is single program, multiple-data (SPMD), with each separate thread of execution (referred to as a rank, a term borrowed from MPI) having access to local memory as it would in C++. However, APGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the ranks. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.
UPC++ Programmer’s Guide, v1.0-2018.3.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bachan, J.; Baden, S.; Bonachea, Dan
UPC++ is a C++11 library that provides Partitioned Global Address Space (PGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The PGAS model is single program, multiple-data (SPMD), with each separate thread of execution (referred to as a rank, a term borrowed from MPI) having access to local memory as it would in C++. However, PGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the ranks. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.
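A minimal sketch of the PGAS style described in the two guide entries above, based on the publicly documented UPC++ API (global_ptr, rput/rget, broadcast); the program is illustrative and not taken from the guide.

// Each rank writes its id into its own slot of an array in rank 0's shared
// segment; remote accesses are explicit and asynchronous (futures).
#include <upcxx/upcxx.hpp>
#include <iostream>

int main() {
    upcxx::init();
    // Rank 0 allocates the array; the global pointer is broadcast so every
    // rank can address the same remote memory.
    upcxx::global_ptr<int> slots = nullptr;
    if (upcxx::rank_me() == 0) slots = upcxx::new_array<int>(upcxx::rank_n());
    slots = upcxx::broadcast(slots, 0).wait();

    // Explicit remote put; completion is awaited on the returned future.
    upcxx::rput((int)upcxx::rank_me(), slots + upcxx::rank_me()).wait();

    upcxx::barrier();
    if (upcxx::rank_me() == 0) {
        for (int r = 0; r < upcxx::rank_n(); ++r)
            std::cout << "slot " << r << " = " << upcxx::rget(slots + r).wait() << "\n";
        upcxx::delete_array(slots);
    }
    upcxx::finalize();
    return 0;
}

Per the public documentation, such a program is built with the upcxx compiler wrapper and launched with upcxx-run.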
Huang, Yongyang; Badar, Mudabbir; Nitkowski, Arthur; Weinroth, Aaron; Tansu, Nelson; Zhou, Chao
2017-01-01
Space-division multiplexing optical coherence tomography (SDM-OCT) is a recently developed parallel OCT imaging method designed to achieve a multi-fold improvement in imaging speed. However, the assembly of the fiber-optic components used in the first prototype system was labor-intensive and susceptible to errors. Here, we demonstrate a high-speed SDM-OCT system using an integrated photonic chip that can be reliably manufactured with high precision and low per-unit cost. A three-layer cascade of 1 × 2 splitters was integrated in the photonic chip to split the incident light into 8 parallel imaging channels with ~3.7 mm optical delay in air between each channel. High-speed imaging (~1 s/volume) of porcine eyes ex vivo and wide-field imaging (~18.0 × 14.3 mm²) of human fingers in vivo were demonstrated with the chip-based SDM-OCT system. PMID:28856055
Yu, Yong-Jie; Wu, Hai-Long; Fu, Hai-Yan; Zhao, Juan; Li, Yuan-Na; Li, Shu-Fang; Kang, Chao; Yu, Ru-Qin
2013-08-09
Chromatographic background drift correction has been an important field of research in chromatographic analysis. In the present work, orthogonal spectral space projection for background drift correction of three-dimensional chromatographic data was described in detail and combined with parallel factor analysis (PARAFAC) to resolve overlapped chromatographic peaks and obtain the second-order advantage. This strategy was verified by simulated chromatographic data and afforded significant improvement in quantitative results. Finally, this strategy was successfully utilized to quantify eleven antibiotics in tap water samples. Compared with the traditional methodology of introducing excessive factors for the PARAFAC model to eliminate the effect of background drift, clear improvement in the quantitative performance of PARAFAC was observed after background drift correction by orthogonal spectral space projection. Copyright © 2013 Elsevier B.V. All rights reserved.
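The core idea of orthogonal spectral-space projection can be sketched as ordinary linear algebra: each measured spectrum is projected onto the orthogonal complement of the subspace spanned by background (drift) spectra before PARAFAC modeling. The sketch below uses the Eigen library with made-up matrices and is only an assumption-laden illustration of that projection step, not the authors' three-way implementation.

// Illustrative orthogonal spectral-space projection (assumed setup): columns of
// B are background/drift spectra; rows of X are measured spectra at successive
// elution times. Requires the Eigen library.
#include <Eigen/Dense>
#include <iostream>

Eigen::MatrixXd remove_background(const Eigen::MatrixXd& X,   // times x wavelengths
                                  const Eigen::MatrixXd& B) { // wavelengths x nbg
    // Orthonormal basis Q for the background spectral subspace (thin QR).
    Eigen::HouseholderQR<Eigen::MatrixXd> qr(B);
    Eigen::MatrixXd Q = qr.householderQ() * Eigen::MatrixXd::Identity(B.rows(), B.cols());
    // Project every measured spectrum onto the orthogonal complement of span(Q):
    // X_corrected = X (I - Q Q^T).
    return X - (X * Q) * Q.transpose();
}

int main() {
    Eigen::MatrixXd B = Eigen::MatrixXd::Random(100, 2);    // two drift spectra
    Eigen::MatrixXd X = Eigen::MatrixXd::Random(50, 100);   // toy chromatogram
    Eigen::MatrixXd Xc = remove_background(X, B);
    std::cout << "residual along background subspace: "
              << (Xc * B).norm() / (X * B).norm() << std::endl;  // ~0
    return 0;
}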
Simulation/Emulation Techniques: Compressing Schedules With Parallel (HW/SW) Development
NASA Technical Reports Server (NTRS)
Mangieri, Mark L.; Hoang, June
2014-01-01
NASA has always been in the business of balancing new technologies and techniques to achieve human space travel objectives. NASA's Kedalion engineering analysis lab has been validating and using many contemporary avionics HW/SW development and integration techniques, which represent new paradigms to NASA's heritage culture. Kedalion has validated many of the Orion HW/SW engineering techniques borrowed from the adjacent commercial aircraft avionics solution space, inserting new techniques and skills into the Multi-Purpose Crew Vehicle (MPCV) Orion program. Using contemporary agile techniques, commercial-off-the-shelf (COTS) products, early rapid prototyping, in-house expertise and tools, and extensive use of simulators and emulators, NASA has achieved cost-effective paradigms that are currently serving the Orion program effectively. Elements of long-lead custom hardware on the Orion program have necessitated early use of simulators and emulators, in advance of deliverable hardware, to achieve parallel design and development on a compressed schedule.
NASA Technical Reports Server (NTRS)
Menon, R. G.; Kurdila, A. J.
1992-01-01
This paper presents a concurrent methodology to simulate the dynamics of flexible multibody systems with a large number of degrees of freedom. A general class of open-loop structures is treated and a redundant coordinate formulation is adopted. A range-space method is used in which the constraint forces are calculated using a preconditioned conjugate gradient method. By using a preconditioner motivated by the regular ordering of the directed graph of the structures, it is shown that the method is order N in the total number of coordinates of the system. The overall formulation has the advantage that it permits fine-grained parallelization and does not rely on system topology to induce concurrency. It can be efficiently implemented on the present generation of parallel computers with large numbers of processors. The method is validated via numerical simulations of space structures incorporating a large number of flexible degrees of freedom.
Comparison of Procedures for Dual and Triple Closely Spaced Parallel Runways
NASA Technical Reports Server (NTRS)
Verma, Savita; Ballinger, Deborah; Subramanian, Shobana; Kozon, Thomas
2012-01-01
A human-in-the-loop, high-fidelity flight simulation experiment was conducted to investigate and compare breakout procedures for Very Closely Spaced Parallel Approaches (VCSPA) with two and three runways. To understand the feasibility, usability, and human factors of two- and three-runway VCSPA, data were collected and analyzed on the dependent variables of breakout cross-track error and pilot workload. Independent variables included number of runways, cause of breakout, and location of breakout. Results indicated larger cross-track error and higher workload with three-runway operations as compared to two-runway operations. Significant interaction effects involving breakout cause and breakout location were also observed. Across all conditions, cross-track error values showed high levels of breakout trajectory accuracy, and pilot workload remained manageable. Results suggest possible avenues for future adaptation of these procedures (e.g., pilot training), while also showing the potential promise of the concept.
NASA Technical Reports Server (NTRS)
Macconochie, Ian O. (Inventor); Mikulas, Martin M., Jr. (Inventor); Pennington, Jack E. (Inventor); Kinkead, Rebecca L. (Inventor); Bryan, Charles F., Jr. (Inventor)
1988-01-01
A space spider crane for the movement, placement, and/or assembly of various components on or in the vicinity of a space structure is described. As permanent space structures are utilized by the space program, a means will be required to transport cargo and perform various repair tasks. A space spider crane comprising a small central body with attached manipulators and legs fulfills this requirement. The manipulators may be equipped with constant-pressure gripping end effectors or tools to accomplish various repair tasks. The legs are also equipped with constant-pressure gripping end effectors to grip the space structure. Control of the space spider crane may be achieved either by computer software or by a remotely situated human operator, who maintains visual contact via television cameras mounted on the space spider crane. One possible walking program is a parallel-motion gait in which the small central body alternately leans forward and backward relative to the end effectors.
The Basal Ganglia and Adaptive Motor Control
NASA Astrophysics Data System (ADS)
Graybiel, Ann M.; Aosaki, Toshihiko; Flaherty, Alice W.; Kimura, Minoru
1994-09-01
The basal ganglia are neural structures within the motor and cognitive control circuits in the mammalian forebrain and are interconnected with the neocortex by multiple loops. Dysfunction in these parallel loops caused by damage to the striatum results in major defects in voluntary movement, exemplified in Parkinson's disease and Huntington's disease. These parallel loops have a distributed modular architecture resembling local expert architectures of computational learning models. During sensorimotor learning, such distributed networks may be coordinated by widely spaced striatal interneurons that acquire response properties on the basis of experienced reward.
The Wang Landau parallel algorithm for the simple grids. Optimizing OpenMPI parallel implementation
NASA Astrophysics Data System (ADS)
Kussainov, A. S.
2017-12-01
The Wang-Landau Monte Carlo algorithm was implemented to calculate the density of states for several simple spin lattices. The energy space was split between the individual threads and balanced according to the expected runtime of the individual processes. A custom spin-clustering mechanism, necessary for overcoming the critical slowdown in certain energy subspaces, was devised. Stable reconstruction of the density of states was of primary importance, and data post-processing techniques were employed to produce the expected smooth density of states.
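As a rough single-threaded illustration of the underlying method (not the author's OpenMPI code, and with all parameters assumed), the sketch below runs Wang-Landau sampling on a small 1D Ising chain with open boundaries: the density-of-states estimate is updated by a shrinking modification factor until the visit histogram is flat.

// Wang-Landau density-of-states sketch for a 1D Ising chain; illustrative only.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int broken_bonds(const std::vector<int>& s) {
    int k = 0;
    for (size_t i = 0; i + 1 < s.size(); ++i) k += (s[i] != s[i + 1]);
    return k;  // energy E = -(N-1) + 2k for an open chain
}

int main() {
    const int N = 12;                       // spins
    const int levels = N;                   // k = 0 .. N-1 broken bonds
    std::vector<double> lng(levels, 0.0);   // ln g(E), up to a constant
    std::vector<long>   hist(levels, 0);
    std::vector<int>    s(N, 1);
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> site(0, N - 1);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    double lnf = 1.0;                       // modification factor, ln f
    int k = broken_bonds(s);
    while (lnf > 1e-6) {
        for (long step = 0; step < 20000; ++step) {
            int i = site(rng);
            s[i] = -s[i];                   // propose a single spin flip
            int k_new = broken_bonds(s);
            // Accept with probability min(1, g(E_old)/g(E_new)).
            if (u(rng) < std::exp(lng[k] - lng[k_new])) k = k_new;
            else s[i] = -s[i];              // reject: undo the flip
            lng[k] += lnf;                  // update DOS estimate
            hist[k] += 1;                   // and the visit histogram
        }
        // Flatness check: every level visited at least 80% of the mean count.
        double mean = std::accumulate(hist.begin(), hist.end(), 0.0) / levels;
        long minv = *std::min_element(hist.begin(), hist.end());
        if (minv > 0.8 * mean) {
            std::fill(hist.begin(), hist.end(), 0L);
            lnf *= 0.5;                     // refine: f -> sqrt(f)
        }
    }
    for (int j = 0; j < levels; ++j)
        std::printf("k=%2d  ln g = %.3f\n", j, lng[j] - lng[0]);
    return 0;
}

In the parallel version described above, each thread would run this loop over its own energy window, with the windows balanced by expected runtime; that distribution is the part the sketch omits.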
NASA Astrophysics Data System (ADS)
Vafin, S.; Schlickeiser, R.; Yoon, P. H.
2016-05-01
The general electromagnetic fluctuation theory for magnetized plasmas is used to calculate the steady-state wave number spectra and total electromagnetic field strength of low-frequency, collective, weakly damped eigenmodes with parallel wavevectors in a Maxwellian electron-proton plasma. These result from the equilibrium of spontaneous emission and collisionless damping, and they represent the minimum electromagnetic fluctuations guaranteed in quiet thermal space plasmas, including the interstellar and interplanetary medium. Depending on the plasma beta, the ratio |δB|/B_0 can be as high as 10^-12.
Parasitic momentum flux in the tokamak core
Stoltzfus-Dueck, T.
2017-03-06
A geometrical correction to the E × B drift causes an outward flux of co-current momentum whenever electrostatic potential energy is transferred to ion parallel flows. The robust, fully nonlinear symmetry breaking follows from the free-energy flow in phase space and does not depend on any assumed linear eigenmode structure. The resulting rotation peaking is counter-current and scales as temperature over plasma current. Lastly, this peaking mechanism can only act when fluctuations are low-frequency enough to excite ion parallel flows, which may explain some recent experimental observations related to rotation reversals.
Zhao, Jing; Zong, Haili
2018-01-01
In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the process of cyclic and parallel iterative methods and propose two mixed iterative algorithms. Our several algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
NASA Astrophysics Data System (ADS)
Sample, J. C.
2006-12-01
Deformation bands occur in an outcrop of a petroleum-bearing, sandstone-rich unit of the Monterey Formation along the active Newport-Inglewood fault zone (NIFZ), near Corona del Mar, California. The deformation bands likely developed in a damage zone associated with a strand of the NIFZ. The bands appear to have formed in poorly lithified sandstone. They are relatively oil-free whereas the matrix sandstone contains oil in pore space. The deformation bands acted as baffles to flow, but continuing deformation likely breached permeability barriers over time. Thus the bands did not completely isolate compartments from oil migration, but similar structures in the subsurface would likely slow the rate of production in reservoirs. The network of bands at Corona del Mar forms a mesh with band intersection lines lying parallel to the trend of the NIFZ (northwest). This geometry formed as continuing deformation in the NIFZ rotated early bands into unfavorable orientations for continuing deformation, and new bands formed at high angles to the first set. Permeability in this setting is likely to have been anisotropic, higher parallel to strike of the NIFZ and lower vertically and perpendicular to the strike of the fault zone. One unique type of deformation band found here formed by dilation and early oil migration along fractures, and consequent carbonate cementation along fracture margins. These are thin, planar zones of oil 1-2 mm thick sandwiched between parallel, carbonate-cemented, positively weathering ribs. These bands appear to represent early oil migration by hydrofracture. Based on crosscutting relationships between structures and cements, there are three distinct phases of oil migration: early migration along discrete hydrofractures; dominant pore migration associated with periodic breaching of deformation bands; and late migration along open fractures, some several centimeters in width. This sequence may be representative of migration histories along the NIFZ in the Los Angeles basin.
A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, K; Seymour, R; Wang, W
2009-02-17
A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data-locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high-complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on a hybrid implementation combining message passing and critical-section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e. petaflops · day of computing) is estimated as NT = 2.14 (e.g. N = 2.14 million atoms for T = 1 microsecond).
Li, Chuan; Petukh, Marharyta; Li, Lin; Alexov, Emil
2013-08-15
Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods, and even standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li, et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculation components of the algorithm. The parallelization scheme utilizes different approaches, such as space-domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in a speedup of several fold. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential and electric field distributions are calculated for the bovine mitochondrial supercomplex, illustrating their complex topology, which cannot be obtained by modeling the supercomplex components alone. Copyright © 2013 Wiley Periodicals, Inc.
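Space-domain parallelization of this kind can be illustrated with a toy direct Coulomb map on a grid, where OpenMP threads each own disjoint blocks of grid points and therefore need no synchronization. This is only a schematic analogy under assumed units and charges; it is not DelPhi's finite-difference Poisson-Boltzmann solver.

// Toy space-domain parallelization: threads each compute the potential on
// their own block of grid points from a small set of point charges.
#include <omp.h>
#include <cmath>
#include <cstdio>
#include <vector>

struct Charge { double x, y, z, q; };

int main() {
    const int nx = 64, ny = 64, nz = 64;
    const double h = 0.5;                        // grid spacing (assumed units)
    std::vector<Charge> charges = { {10, 12, 14, 1.0}, {20, 18, 16, -1.0} };
    std::vector<double> phi(nx * ny * nz, 0.0);

    // The spatial domain is split statically across threads, so each grid
    // point is written by exactly one thread and no locking is required.
    #pragma omp parallel for collapse(2) schedule(static)
    for (int i = 0; i < nx; ++i)
        for (int j = 0; j < ny; ++j)
            for (int k = 0; k < nz; ++k) {
                double p = 0.0;
                for (const Charge& c : charges) {
                    double dx = i * h - c.x, dy = j * h - c.y, dz = k * h - c.z;
                    double r = std::sqrt(dx * dx + dy * dy + dz * dz) + 1e-9;
                    p += c.q / r;                // vacuum Coulomb kernel
                }
                phi[(i * ny + j) * nz + k] = p;
            }
    std::printf("phi at grid origin: %g\n", phi[0]);
    return 0;
}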