DOE Office of Scientific and Technical Information (OSTI.GOV)
BOLOTNIKOV,A.E.; ABDUL-JABBAR, N.M.; BABALOLA, S.
2007-08-21
In the past, various virtual Frisch-grid designs have been proposed for cadmium zinc telluride (CZT) and other compound semiconductor detectors. These include three-terminal, semi-spherical, CAPture, Frisch-ring, capacitive Frisch-grid and pixel devices (along with their modifications). Among them, the Frisch-grid design employing a non-contacting ring extended over the entire side surfaces of parallelepiped-shaped CZT crystals is the most promising. The defect-free parallelepiped-shaped crystals with typical dimensions of 5×5×12 mm³ are easy to produce and can be arranged into large arrays used for imaging and gamma-ray spectroscopy. In this paper, we report on further advances of the virtual Frisch-grid detector design for the parallelepiped-shaped CZT crystals. Both the experimental testing and modeling results are described.
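For background on why these designs work: a Frisch grid, real or virtual, shapes the anode's weighting potential so that the induced signal comes almost entirely from electron motion near the anode, sidestepping the poor hole transport of CZT. A short reference formulation of the underlying Shockley–Ramo relation (standard detector physics, not taken from this record):

```latex
% Shockley–Ramo theorem: charge induced on a sensing electrode by a
% carrier of charge q drifting from x_i to x_f, where \varphi_w is the
% electrode's weighting potential (unit potential on the sensing
% electrode, zero on all others; sign conventions vary by author).
\[
  \Delta Q \;=\; q\,\bigl[\varphi_w(\vec{x}_i) - \varphi_w(\vec{x}_f)\bigr]
\]
% In a (virtual) Frisch-grid detector, \varphi_w \approx 0 between the
% cathode and the grid and rises steeply only in the grid-anode gap, so
% the anode signal is generated almost entirely by electrons and is
% nearly independent of the interaction depth.
```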
Bolotnikov, A E; Ackley, K; Camarda, G S; Cherches, C; Cui, Y; De Geronimo, G; Fried, J; Hodges, D; Hossain, A; Lee, W; Mahler, G; Maritato, M; Petryk, M; Roy, U; Salwen, C; Vernon, E; Yang, G; James, R B
2015-07-01
We developed a robust and low-cost array of virtual Frisch-grid CdZnTe detectors coupled to a front-end readout application-specific integrated circuit (ASIC) for spectroscopy and imaging of gamma rays. The array operates as a self-reliant detector module. It comprises 36 close-packed 6×6×15 mm³ detectors grouped into 3×3 sub-arrays of 2×2 detectors with common cathodes. The front-end analog ASIC accommodates up to 36 anode and 9 cathode inputs. Several detector modules can be integrated into a single- or multi-layer unit operating as a Compton or a coded-aperture camera. We present the results from testing two fully assembled modules and readout electronics. Further enhancement of the arrays' performance and reduction of their cost are possible by using position-sensitive virtual Frisch-grid detectors, which allow for accurate correction of the response non-uniformities caused by crystal defects.
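The channel economy described above (36 anode inputs but only 9 cathode inputs) follows directly from the 2×2 grouping with common cathodes. A minimal sketch of one possible detector-to-channel mapping; the row-major indexing convention is an assumption for illustration, not the module's documented wiring:

```python
def channel_map(det_index):
    """Map a detector index (0..35) in the close-packed 6x6 array to its
    (anode_channel, cathode_channel).  The 36 detectors form a 3x3
    arrangement of 2x2 sub-arrays, each sub-array sharing one common
    cathode, so 36 anode inputs and 9 cathode inputs suffice."""
    row, col = divmod(det_index, 6)          # position in the 6x6 array
    cathode = (row // 2) * 3 + (col // 2)    # which 2x2 sub-array (0..8)
    return det_index, cathode

for i in (0, 1, 6, 7, 35):
    print(i, channel_map(i))
```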
Enhanced R200 with Frisch-Grid CZT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolotnikov, A.
2017-12-01
The goal of this project is to demonstrate an engineering prototype of a gamma-ray spectrometer that uses Cadmium Zinc Telluride (CZT) in a configuration comprised of an array of position-sensitive virtual Frisch-grid (PSVFG) detectors and show its capability to perform functions that would be useful to the IAEA. The detectors should achieve energy resolution of ~2% at 200 keV and <1% at energies above 662 keV, thereby outperforming all hand-held instruments currently in use other than cryogenically cooled germanium. BNL will make every effort to transfer the technology to an industrial partner so that robust, fieldable instruments can be manufactured.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolotnikov, Aleksey; Cui, Yonggang; Vernon, Emerson
This document presents the motivations, goals, and current status of this project: the development (fabrication and performance) of position-sensitive virtual Frisch-grid detectors proposed for nanoRaider, an instrument commonly used by nuclear inspectors; ASIC developments for CZT detectors; and the electronics development for the detector prototype.
NASA Astrophysics Data System (ADS)
Edwards, Nathaniel S.; Conley, Jerrod C.; Reichenberger, Michael A.; Nelson, Kyle A.; Tiner, Christopher N.; Hinson, Niklas J.; Ugorowski, Philip B.; Fronk, Ryan G.; McGregor, Douglas S.
2018-06-01
The propagation of electrons through several linear pore densities of reticulated vitreous carbon (RVC) foam was studied using a Frisch-grid parallel-plate ionization chamber pressurized to 1 psig of P-10 proportional gas. The operating voltages of the electrodes contained within the Frisch-grid parallel-plate ionization chamber were defined by measuring counting curves using a collimated 241Am alpha-particle source with and without a Frisch grid. RVC foam samples with linear pore densities of 5, 10, 20, 30, 45, 80, and 100 pores per linear inch were separately positioned between the cathode and anode. Pulse-height spectra and count rates from a collimated 241Am alpha-particle source positioned between the cathode and each RVC foam sample were measured and compared to a measurement without an RVC foam sample. The Frisch grid was positioned between the RVC foam sample and the anode. The measured pulse-height spectra were indiscernible from background and resulted in negligible net count rates for all RVC foam samples. The Frisch-grid parallel-plate ionization chamber measurement results indicate that electrons do not traverse the bulk of RVC foam and consequently do not produce a pulse.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ocampo Giraldo, L. A.; Bolotnikov, A. E.; Camarda, G. S.
Position-sensitive virtual Frisch-grid (VFG) CdZnTe (CZT) detectors offer a unique capability for correcting the response nonuniformities caused by crystal defects. This has allowed us to achieve high energy resolution while using typical-grade commercial CZT crystals with relaxed requirements on their quality, thus reducing the overall cost of detectors. Another advantage of the VFG detectors is that they can be integrated into arrays and used in small compact hand-held instruments or large-area gamma cameras that will enhance detection capability for many practical applications, including nonproliferation, medical imaging, and gamma-ray astronomy. In this paper, we present the results from testing small array prototypes coupled with a front-end application-specific integrated circuit (ASIC). Each detector in the array is furnished with 5-mm-wide charge-sensing pads placed near the anode. The pad signals are converted into XY coordinates, which, combined with the cathode signals (for the Z coordinates), provide 3D position information for all interaction points. The basic array consists of a number of detectors grouped into 2×2 subarrays, each having a common cathode made by connecting together the cathodes of the individual detectors. These features can significantly improve the performance of detectors while using typical-grade low-cost CZT crystals to reduce the overall cost of the proposed instrument.
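A minimal sketch of the position reconstruction described above, under the assumption of four charge-sensing pads on opposing sides near the anode; the geometry defaults, event values, and function name are illustrative, not the authors' actual code:

```python
import numpy as np

def position_from_signals(pads, cathode, anode, side_mm=6.0, length_mm=15.0):
    """Estimate (x, y, z) in mm for one event.

    pads    : amplitudes (left, right, bottom, top) of the four
              charge-sensing pads near the anode (assumed geometry)
    cathode : cathode amplitude; the cathode-to-anode ratio scales
              roughly with the drift distance of the electron cloud
    """
    left, right, bottom, top = pads
    # Opposite-pad asymmetries give lateral coordinates in [-1, 1].
    x = (right - left) / (right + left) * side_mm / 2.0
    y = (top - bottom) / (top + bottom) * side_mm / 2.0
    # Depth from the cathode-to-anode ratio (0 at the anode, 1 at the cathode).
    z = np.clip(cathode / anode, 0.0, 1.0) * length_mm
    return x, y, z

print(position_from_signals((80.0, 120.0, 95.0, 105.0), 410.0, 650.0))
```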
NASA Technical Reports Server (NTRS)
Moiseev, A.; Bolotnikov, A.; DeGeronimo, G.; Hays, E.; James, R.; Thompson, D.; Vernon, E.
2017-01-01
We will present a concept for a calorimeter based on a novel approach of 3D position-sensitive virtual Frisch-grid CdZnTe (hereafter CZT) detectors. This calorimeter aims to measure photons with energies from approximately 100 keV to 20–50 MeV. The expected energy resolution at 662 keV is better than 1% FWHM, and the photon interaction position-measurement accuracy is better than 1 mm in all 3 dimensions. Each CZT bar is a rectangular prism with typical cross-section from 5×5 to 7×7 mm² and length of 2–4 cm. The bars are arranged in modules of 4×4 bars, and the modules themselves can be assembled into a larger array. The 3D virtual voxel approach solves a long-standing problem with CZT detectors associated with material imperfections that limit the performance and usefulness of relatively thick detectors (i.e., greater than 1 cm). Also, it allows us to use standard (unselected) grade crystals, while achieving the energy resolution of premium detectors and thus substantially reducing the cost of the instrument. Such a calorimeter can be successfully used in space telescopes that use Compton scattering of gamma rays, such as AMEGO, serving as part of its calorimeter and providing the position and energy measurement for Compton-scattered photons (like a focal plane detector in a Compton camera). Also, it could provide suitable energy resolution to allow for spectroscopic measurements of gamma-ray lines from nuclear decays.
Mini Compton Camera Based on an Array of Virtual Frisch-Grid CdZnTe Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Wonho; Bolotnikov, Aleksey; Lee, Taewoong
In this study, we constructed a mini Compton camera based on an array of CdZnTe detectors and assessed its spectral and imaging properties. The entire array consisted of 6×6 Frisch-grid CdZnTe detectors, each with a size of 6×6×15 mm³. Since it is easier and more practical to grow small CdZnTe crystals rather than large monolithic ones, constructing a mosaic array of parallelepiped crystals can be an effective way to build a more efficient, large-volume detector. With the fully operational CdZnTe array, we measured the energy spectra for 133Ba, 137Cs, and 60Co radiation sources; we also located these sources using a Compton imaging approach. Although the Compton camera was small enough to hand-carry, its intrinsic efficiency was several orders of magnitude higher than those reported in previous studies using spatially separated arrays, because our camera measured the interactions inside the CZT detector array, wherein the detector elements were positioned very close to each other. Finally, the performance of our camera was compared with that of a camera based on a pixelated detector.
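The Compton imaging approach mentioned above locates a source on a cone whose opening angle follows from the energies deposited in two interactions. A short sketch of the standard Compton kinematics (textbook physics; the two-site event format is an assumption):

```python
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle(e1_kev, e2_kev):
    """Opening angle (radians) of the Compton cone for a two-site event.

    e1_kev: energy deposited by the Compton scatter at the first site
    e2_kev: energy of the scattered photon absorbed at the second site
    cos(theta) = 1 - me*c^2 * (1/E' - 1/E0), with E' = e2 and E0 = e1 + e2.
    """
    cos_theta = 1.0 - ME_C2_KEV * (1.0 / e2_kev - 1.0 / (e1_kev + e2_kev))
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.acos(cos_theta)

# A 662 keV photon depositing 200 keV in the scattering element:
print(math.degrees(compton_cone_angle(200.0, 462.0)))  # about 48 degrees
```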
Array Detector Modules for Spent Fuel Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolotnikov, Aleksey
Brookhaven National Laboratory (BNL) proposes to evaluate arrays of position-sensitive virtual Frisch-grid (VFG) detectors for passive gamma-ray emission tomography (ET) to verify spent fuel in storage casks before the casks are placed in geological repositories. Our primary objective is to conduct a preliminary analysis of the arrays' capabilities and to perform field measurements to validate the effectiveness of the proposed array modules. The outcome of this proposal will consist of baseline designs for a future ET system, which can ultimately be used together with neutron detectors. This will demonstrate the usage of this technology on spent fuel storage casks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ocampo, Luis
Arrays of position-sensitive virtual Frisch-grid CdZnTe (CZT) detectors with enhanced energy resolution have been proposed for spectroscopy and imaging of gamma-ray sources in different applications. The flexibility of the array design, which can employ CZT crystals with thicknesses up to several centimeters in the direction of electron drift, allows for integration into different kinds of field-portable instruments. These can include small hand-held devices, compact gamma cameras, and large field-of-view imaging systems. In this work, we present results for a small linear array of such detectors optimized for the low-energy region, 50–400 keV gamma rays, which is principally intended for incorporation into hand-held instruments. There are many potential application areas for such instruments, including uranium enrichment measurements, storage monitoring, dosimetry, and other safeguards-related tasks that can benefit from compactness and isotope-identification capability. The array described here provides a relatively large area with a minimum number of readout channels, which potentially allows the developers to avoid using an ASIC-based electronic readout by substituting it with hybrid preamplifiers followed by digitizers. The array prototype consists of six (5×5.7×25 mm³) CZT detectors positioned in a line facing the source to achieve a maximum exposure area (~10 cm²). Each detector is furnished with 5-mm-wide charge-sensing pads placed near the anode. The pad signals are converted into X-Y coordinates for each interaction event, which are combined with the cathode signals (for determining the Z coordinates) to give 3D positional information for all interaction points. This information is used to correct the response non-uniformity caused by material inhomogeneity, which therefore allows the usage of standard-grade (unselected) CZT crystals, while achieving high-resolution spectroscopic performance for the instrument. In this presentation we describe the design of the array, the results from detailed laboratory tests, and preliminary results from measurements taken during a field test.
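A minimal sketch of the response-correction step described above: each event's energy is divided by a per-voxel relative gain measured in calibration. The voxel binning, array shapes, and names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def correct_energy(events, gain_map, det_size_mm=(5.0, 5.7, 25.0)):
    """events: iterable of (x_mm, y_mm, z_mm, energy) tuples;
    gain_map: 3D array of relative gains measured per voxel with a
    calibration line source."""
    nx, ny, nz = gain_map.shape
    out = []
    for x, y, z, e in events:
        ix = min(int(x / det_size_mm[0] * nx), nx - 1)
        iy = min(int(y / det_size_mm[1] * ny), ny - 1)
        iz = min(int(z / det_size_mm[2] * nz), nz - 1)
        out.append(e / gain_map[ix, iy, iz])
    return np.array(out)

# Hypothetical 4x4x10 voxel map with slightly depressed gain in one corner:
gmap = np.ones((4, 4, 10))
gmap[0, 0, :] = 0.97
print(correct_energy([(1.0, 1.0, 20.0, 640.0)], gmap))
```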
High-resolution ionization detector and array of such detectors
McGregor, Douglas S [Ypsilanti, MI; Rojeski, Ronald A [Pleasanton, CA
2001-01-16
A high-resolution ionization detector and an array of such detectors are described which utilize a reference pattern of conductive or semiconductive material to form interaction, pervious and measurement regions in an ionization substrate of, for example, CdZnTe material. The ionization detector is a room temperature semiconductor radiation detector. Various geometries of such a detector and an array of such detectors produce room temperature operated gamma ray spectrometers with relatively high resolution. For example, a 1 cm³ detector is capable of measuring 137Cs 662 keV gamma rays with room temperature energy resolution approaching 2% at FWHM. Two major types of such detectors include a parallel strip semiconductor Frisch grid detector and the geometrically weighted trapezoid prism semiconductor Frisch grid detector. The geometrically weighted detector records room temperature (24 °C) energy resolutions of 2.68% FWHM for 137Cs 662 keV gamma rays and 2.45% FWHM for 60Co 1.332 MeV gamma rays. The detectors perform well without any electronic pulse rejection, correction or compensation techniques. The devices operate at room temperature with simple commercially available NIM bin electronics and do not require special preamplifiers or cooling stages for good spectroscopic results.
Making MUSIC: A multiple sampling ionization chamber
NASA Astrophysics Data System (ADS)
Shumard, B.; Henderson, D. J.; Rehm, K. E.; Tang, X. D.
2007-08-01
A multiple sampling ionization chamber (MUSIC) was developed for use in conjunction with the ATLAS scattering chamber (ATSCAT). This chamber was developed to study the (α, p) reaction with stable and radioactive beams. The gas-filled ionization chamber is used as a target and detector for both particles in the outgoing channel (p + beam particles for elastic scattering, or p + residual nucleus for (α, p) reactions). The MUSIC detector is followed by a Si array to provide a trigger for anode events. The anode events are gated by a gating grid so that only (α, p) reactions where the proton reaches the Si detector result in an anode event. The MUSIC detector is a segmented ionization chamber. The active length of the chamber is 11.95 in. and is divided into 16 equal anode segments (3.5 in. × 0.70 in., with 0.3 in. spacing between pads). The dead area of the chamber was reduced by the addition of a Delrin snout that extends 0.875 in. into the chamber from the front face, to which a mylar window is affixed. A Frisch grid held at ground potential sits 0.5 in. above the anode, and a gating grid sits 0.5 in. above the Frisch grid. The gating grid functions as a drift-electron barrier: setting two sets of alternating wires at differing potentials creates a lateral electric field that traps the drift electrons, halting the collection of anode signals. The chamber also has a reinforced mylar exit window separating the Si array from the target gas. This allows protons from the (α, p) reaction to be detected. The detection of these protons opens the gating grid, allowing the drift electrons released from the ionizing gas during the (α, p) reaction to reach the anode segment below the reaction.
Characterization of an Ionization Readout Tile for nEXO
Jewell, M.; Schubert, A.; Cen, W. R.; ...
2018-01-10
Here, a new design for the anode of a time projection chamber, consisting of a charge-detecting "tile", is investigated for use in large-scale liquid xenon detectors. The tile is produced by depositing 60 orthogonal metal charge-collecting strips, 3 mm wide, on a 10 cm × 10 cm fused-silica wafer. These charge tiles may be employed by large detectors, such as the proposed tonne-scale nEXO experiment to search for neutrinoless double-beta decay. Modular by design, an array of tiles can cover a sizable area. The width of each strip is small compared to the size of the tile, so a Frisch grid is not required. A grid-less, tiled anode design is beneficial for an experiment such as nEXO, where a wire tensioning support structure and Frisch grid might contribute radioactive backgrounds and would have to be designed to accommodate cycling to cryogenic temperatures. The segmented anode also reduces some degeneracies in signal reconstruction that arise in large-area crossed-wire time projection chambers. A prototype tile was tested in a cell containing liquid xenon. Very good agreement is achieved between the measured ionization spectrum of a 207Bi source and simulations that include the microphysics of recombination in xenon and a detailed modeling of the electrostatic field of the detector. An energy resolution σ/E = 5.5% is observed at 570 keV, comparable to the best intrinsic ionization-only resolution reported in the literature for liquid xenon at 936 V/cm.
Advanced crystal growth techniques for thallium bromide semiconductor radiation detectors
NASA Astrophysics Data System (ADS)
Datta, Amlan; Becla, Piotr; Guguschev, Christo; Motakef, Shariar
2018-02-01
Thallium Bromide (TlBr) is a promising room-temperature radiation detector candidate with excellent charge transport properties. Currently, the Travelling Molten Zone (TMZ) technique is widely used for growth of semiconductor-grade TlBr crystals. However, there are several challenges associated with this type of crystal growth process, including low yield, high thermal stress, and low crystal uniformity. To overcome these shortcomings of the current technique, several different crystal growth techniques were implemented in this study: Vertical Bridgman (VB), Physical Vapor Transport (PVT), Edge-defined Film-fed Growth (EFG), and Czochralski Growth (Cz). Techniques based on melt pulling (EFG and Cz) were demonstrated for the first time for semiconductor-grade TlBr material. The viability of each process, along with the associated challenges for TlBr growth, is discussed. The purity of the TlBr crystals along with their crystalline and electronic properties were analyzed and correlated with the growth techniques. Uncorrected 662 keV energy resolutions around 2% were obtained from 5 mm × 5 mm × 10 mm TlBr devices with a virtual Frisch-grid configuration.
NASA Astrophysics Data System (ADS)
Mandal, Krishna C.; Krishna, Ramesh M.; Pak, Rahmi O.; Mannan, Mohammad A.
2014-09-01
CdTe and Cd0.9Zn0.1Te (CZT) crystals have been studied extensively for various applications, including x- and γ-ray imaging and high-energy radiation detectors. The crystals were grown from zone-refined ultra-pure precursor materials using a vertical Bridgman furnace. The growth process has been monitored, controlled, and optimized by a computer simulation and modeling program developed in our laboratory. The grown crystals were thoroughly characterized after cutting wafers from the ingots and processed by chemo-mechanical polishing (CMP). The infrared (IR) transmission images of the post-treated CdTe and CZT crystals showed an average Te inclusion size of ~10 μm for the CdTe and ~8 μm for the CZT crystal. The etch pit density was ≤5×10⁴ cm⁻² for CdTe and ≤3×10⁴ cm⁻² for CZT. Various planar and Frisch-collar detectors were fabricated and evaluated. From the current-voltage measurements, the electrical resistivity was estimated to be ~1.5×10¹⁰ Ω·cm for CdTe and 2–5×10¹¹ Ω·cm for CZT. The Hecht analysis of electron and hole mobility-lifetime products (μτe and μτh) gave μτe = 2×10⁻³ cm²/V (μτh = 8×10⁻⁵ cm²/V) for CdTe and μτe = 3–6×10⁻³ cm²/V (μτh = 4–6×10⁻⁵ cm²/V) for CZT. Detectors in single-pixel, Frisch-collar, and coplanar-grid geometries were fabricated. Detectors in Frisch-grid and guard-ring configurations were found to exhibit energy resolutions of 1.4% and 2.6%, respectively, for 662 keV gamma rays. Assessments of the detector performance were also carried out using 241Am (60 keV), showing an energy resolution of 4.2% FWHM.
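The mobility-lifetime products quoted above come from Hecht analysis, i.e., fitting the charge-collection efficiency as a function of applied bias. For reference, the single-carrier Hecht equation for a planar detector (standard detector physics, not specific to this record):

```latex
% Hecht equation: charge-collection efficiency of a planar detector of
% thickness d under bias V (uniform field E = V/d), for carriers created
% at the opposite electrode and drifting the full thickness:
\[
  \frac{Q}{Q_0} \;=\; \frac{\mu\tau E}{d}
  \left[\, 1 - \exp\!\left(-\frac{d}{\mu\tau E}\right) \right]
\]
% Fitting Q/Q_0 measured as a function of V yields \mu\tau for electrons
% (cathode-side irradiation) or holes (anode-side irradiation).
```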
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolotnikov, Alexey; De Geronimo, GianLuigi; Vernon, Emerson
We present a concept for a calorimeter based on a novel approach of 3D position-sensitive virtual Frisch-grid CZT detectors. This calorimeter aims to measure photons with energies from ~100 keV to 10 (goal 50) MeV. The expected energy resolution at 662 keV is ~1% FWHM, and the photon interaction position-measurement accuracy is ~1 mm in all 3 dimensions. Each CZT bar is a rectangular prism with typical cross-section of 6×6 mm² and length of 2–4 cm. The bars are arranged in modules of 4×4 bars, and the modules themselves can be assembled into a larger array. The 3D virtual voxel approach solves a long-standing problem with CZT detectors associated with material imperfections that limit the performance and usefulness of relatively thick detectors (i.e., >1 cm). Also, it allows us to relax the requirements on the quality of the crystals, maintaining good energy resolution and significantly reducing the instrument cost. Such a calorimeter can be successfully used in space telescopes that use Compton scattering of γ rays, such as AMEGO, serving as part of its calorimeter and providing the position and energy measurement for Compton-scattered photons. Also, it could provide suitable energy resolution to allow for spectroscopic measurements of γ-ray lines from nuclear decays. Another viable option is to use this calorimeter as a focal plane to conduct spectroscopic measurements of cosmic γ-ray events. In combination with a coded-aperture mask, it potentially could provide mapping of the 511-keV radiation from the Galactic Center region.
Improving Spectroscopic Performance of a Coplanar-Anode High-Pressure Xenon Gamma-Ray Spectrometer
NASA Astrophysics Data System (ADS)
Kiff, Scott Douglas; He, Zhong; Tepper, Gary C.
2007-08-01
High-pressure xenon (HPXe) gas is a desirable radiation detection medium for homeland security applications because of its good inherent room-temperature energy resolution, potential for large, efficient devices, and stability over a broad temperature range. Past work in HPXe has produced large-diameter gridded ionization chambers with energy resolution at 662 keV between 3.5 and 4% FWHM. However, one major limitation of these detectors is resolution degradation due to Frisch-grid microphonics. A coplanar-anode HPXe detector has been developed as an alternative to gridded chambers. An investigation of this detector's energy resolution is reported in this submission. A simulation package is used to investigate the contributions of important physical processes to the measured photopeak broadening. Experimental data are presented for pure Xe and Xe + 0.2% H2 mixtures, including an analysis of interaction-location effects on the energy spectrum.
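The coplanar-anode alternative mentioned above relies on subtracting the signals of two interleaved anode grids so that the depth-dependent induced component cancels (the coplanar-grid technique introduced by P. N. Luke). A minimal sketch with toy waveforms; the relative-gain value and array shapes are assumptions:

```python
import numpy as np

def coplanar_difference(collecting, non_collecting, gain=0.8):
    """Single-polarity signal from a coplanar-anode detector.

    Both interleaved grids see nearly the same induced signal while the
    charge cloud drifts far from the anode; only the collecting grid
    keeps rising once carriers arrive.  Subtracting the non-collecting
    grid (scaled by a relative gain that can also compensate electron
    trapping) removes the depth-dependent common-mode component."""
    return np.asarray(collecting) - gain * np.asarray(non_collecting)

# Toy waveforms: a common induced ramp plus a final step on one grid only.
t = np.linspace(0.0, 1.0, 6)
common = 0.3 * t
step = np.where(t >= 0.8, 0.7, 0.0)
print(coplanar_difference(common + step, common))
```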
Conversation, Composition, Cultural Investigations: Max Frisch's "Fragebogen" in the L2 Classroom
ERIC Educational Resources Information Center
Villanueva, Daniel
2005-01-01
In this article, "Fragebogen" (1992) by Max Frisch is shown to contain appropriately challenging linguistic constructions for use in courses as a supplement to traditional anthologies in conversation/composition courses. Frisch's eleven questionnaires on topics ranging from "Heimat," marriage and private property to humor, money and death serve as…
NASA Astrophysics Data System (ADS)
Tovesson, F.; Duke, D.; Geppert-Kleinrath, V.; Manning, B.; Mayorov, D.; Mosby, S.; Schmitt, K.
2018-03-01
Different aspects of the nuclear fission process have been studied at the Los Alamos Neutron Science Center (LANSCE) using various instruments and experimental techniques. Properties of the fragments emitted in fission have been investigated using Frisch-grid ionization chambers, a Time Projection Chamber (TPC), and the SPIDER instrument, which employs the 2v-2E method. These instruments and experimental techniques have been used to determine fission-product mass yields, the energy-dependent total kinetic energy (TKE) release, and anisotropy in neutron-induced fission of U-235, U-238, and Pu-239.
Continued development of room temperature semiconductor nuclear detectors
NASA Astrophysics Data System (ADS)
Kim, Hadong; Cirignano, Leonard; Churilov, Alexei; Ciampi, Guido; Kargar, Alireza; Higgins, William; O'Dougherty, Patrick; Kim, Suyoung; Squillante, Michael R.; Shah, Kanai
2010-08-01
Thallium bromide (TlBr) and the related ternary compounds TlBrI and TlBrCl have been under development for room-temperature gamma-ray spectroscopy due to several promising properties. Thanks to recent advances in material processing, the electron mobility-lifetime product of TlBr is close to that of Cd(Zn)Te, which has allowed us to fabricate large working detectors. We were also able to fabricate and obtain spectroscopic results from a TlBr capacitive Frisch-grid detector and orthogonal-strip detectors. In this paper we report on our recent TlBr and related ternary detector results and preliminary results from cinnabar (HgS) detectors.
Wafer-fused semiconductor radiation detector
Lee, Edwin Y.; James, Ralph B.
2002-01-01
Wafer-fused semiconductor radiation detector useful for gamma-ray and x-ray spectrometers and imaging systems. The detector is fabricated using wafer fusion to insert an electrically conductive grid, typically comprising a metal, between two solid semiconductor pieces, one having a cathode (negative electrode) and the other having an anode (positive electrode). The wafer-fused semiconductor radiation detector functions like the commonly used Frisch-grid radiation detector, in which an electrically conductive grid is inserted in high vacuum between the cathode and the anode. The wafer-fused semiconductor radiation detector can be fabricated using the same or two different semiconductor materials of different sizes and of the same or different thicknesses; and it may utilize a wide range of metals, or other electrically conducting materials, to form the grid, to optimize the detector performance, without being constrained by structural dissimilarity of the individual parts. The wafer-fused detector is formed, for example, by etching spaced grooves across one end of one of two pieces of semiconductor material, partially filling the grooves with a selected electrical conductor which forms a grid electrode, and then fusing the grooved end of the one semiconductor piece to an end of the other semiconductor piece, with a cathode and an anode being formed on opposite ends of the semiconductor pieces.
A comparative analysis of dynamic grids vs. virtual grids using the A3pviGrid framework.
Shankaranarayanan, Avinas; Amaldas, Christine
2010-11-01
With the proliferation of quad/multi-core microprocessors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, could deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consumption of high-performance grid services. Spurious and exponential increases in the size of datasets are constant concerns in the medical and pharmaceutical industries due to the constant discovery and publication of large sequence databases. Traditional algorithms are not designed to handle large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with those from running the same test dataset on a virtual Grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work), all interactions were made local within a team transparently. This paper is a proof of concept of an experimental mini-Grid test-bed compared against running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple-virtual-node setup and present our findings. This paper is an extension of our previous research on improving the BLAST application framework using dynamic Grids on virtualization platforms such as VirtualBox.
Business Case Analysis of the Marine Corps Base Pendleton Virtual Smart Grid
2017-06-01
Metering Infrastructure on DOD installations. An examination of five case studies highlights the costs and benefits of the Virtual Smart Grid (VSG) developed by Space and Naval Warfare Systems Command for use at Marine Corps Base Pendleton.
An Act of Scientific Creativity: Meitner, Frisch, and Nuclear Fission
NASA Astrophysics Data System (ADS)
Stuewer, Roger H.
2002-04-01
The dominant event that lay in the background to Werner Heisenberg's fateful meeting with Niels Bohr in occupied Copenhagen in September 1941 was the discovery and interpretation of nuclear fission three years earlier. Michael Frayn has explored that meeting in his play "Copenhagen" in an act of extraordinary literary creativity. In this talk I will explore Lise Meitner's and Otto Robert Frisch's interpretation of nuclear fission as an act of extraordinary scientific creativity. My aim is to understand historically how it was possible for Meitner and Frisch, and only Meitner and Frisch, to arrive at their interpretation as they talked and walked in the snow in the small Swedish village of Kungälv over the Christmas holidays in December 1938. This will require us to examine the history of the liquid-drop model of the nucleus over the preceding decade, from George Gamow's conception of that model in 1928, through Heisenberg and Carl Friedrich von Weizsäcker's extension of it between 1933 and 1936, and finally through Bohr's use of it in his theory of the compound nucleus between 1936 and 1938. We will see how Meitner and Frisch combined their different knowledge of these developments creatively to arrive at their momentous interpretation of nuclear fission.
A logarithmic detection system suitable for a 4π array
NASA Astrophysics Data System (ADS)
Westfall, G. D.; Yurkon, J. E.; van der Plicht, J.; Koenig, Z. M.; Jacak, B. V.; Fox, R.; Crawley, G. M.; Maier, M. R.; Hasselquist, B. E.; Tickle, R. S.; Horn, D.
1985-08-01
A low-pressure multiwire proportional counter, a Bragg curve counter, and an array of CaF2/plastic scintillator telescopes have been developed in a geometry suitable for close packing into a 4π detector designed to study nucleus-nucleus reactions at 100–200 MeV/nucleon. The multiwire counter is hexagonal in shape and gives X-Y position information using resistive charge division from nichrome-coated stretched polypropylene foils. The Bragg curve counter is a hexagonal pyramid with the charge taken from a Frisch-gridded anode. A field-shaping grid gives the Bragg curve counter a radial field. The scintillator telescopes are shaped as truncated triangular pyramids such that when stacked together they form a truncated hexagonal pyramid. The light signal of the CaF2/plastic combination is read with one phototube using a phoswich technique to separate the ΔE signal from the E signal. The entire system has been tested so far for particles with 1 ≤ Z ≤ 18 and gives good position, charge, and time resolution.
SU-F-T-436: A Method to Evaluate Dosimetric Properties of SFGRT in Eclipse TPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, M; Tobias, R; Pankuch, M
Purpose: The objective was to develop a method for dose-distribution calculation of spatially fractionated GRID radiotherapy (SFGRT) in the Eclipse treatment planning system (TPS). Methods: Patient treatment plans with SFGRT for bulky tumors were generated in Varian Eclipse version 11. A virtual structure based on the GRID pattern was created and registered to a patient CT image dataset. The virtual GRID structure was positioned at the isocenter level together with matching beam geometries to simulate a commercially available GRID block made of brass. This method overcame the difficulty in treatment planning and dose calculation due to the lack of the option to insert a GRID block add-on in the Eclipse TPS. The patient treatment plans displayed GRID effects on the target, critical structures, and dose distribution. The dose calculations were compared to the measurement results in phantom. Results: The GRID block structure was created to follow the beam divergence to the patient CT images. The inserted virtual GRID block made it possible to calculate the dose distributions and profiles at various depths in Eclipse. The virtual GRID block was added as an option to the TPS. The 3D representation of the isodose distribution of the spatially fractionated beam was generated in axial, coronal, and sagittal planes. The physics of GRID fields can differ from that of fields shaped by regular blocks because charged-particle equilibrium cannot be guaranteed for small field openings. Output factor (OF) measurement was required to calculate the MU needed to deliver the prescribed dose. The calculated OF based on the virtual GRID agreed well with the measured OF in phantom. Conclusion: A method to create a virtual GRID block has been proposed for the first time in the Eclipse TPS. The dose distributions and the in-plane and cross-plane profiles in the PTV can be displayed in 3D space. The calculated OFs based on the virtual GRID model compare well with the measured OFs for SFGRT clinical use.
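The output-factor measurement described above feeds the monitor-unit (MU) calculation for GRID delivery. A schematic sketch under simplifying assumptions (a reference output of 1 cGy/MU at the calibration point, with depth and distance corrections folded into the OF; the function and values are illustrative, not Eclipse's API):

```python
def grid_monitor_units(prescribed_dose_cGy, grid_output_factor,
                       ref_output_cGy_per_MU=1.0):
    """Schematic MU calculation for a GRID field: the measured output
    factor (OF) relates the dose at the calibration point with the GRID
    block in place to the open-field reference output."""
    return prescribed_dose_cGy / (ref_output_cGy_per_MU * grid_output_factor)

# 1500 cGy single-fraction GRID prescription with a hypothetical OF of 0.95:
print(round(grid_monitor_units(1500.0, 0.95)))
```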
Grids, virtualization, and clouds at Fermilab
Timm, S.; Chadwick, K.; Garzoglio, G.; ...
2014-06-11
Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). Finally, this work presents the evolution of the Fermilab Campus Grid, virtualization, and cloud computing infrastructure, together with plans for the future.
Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason
2010-01-01
Large-scale in-silico screening is a necessary part of drug discovery and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environments characteristic of a Grid. In our study, we have found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test with in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means for identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for establishing a more complete solution to provide the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables, across a Grid environment composed of different clusters, with and without virtualization. The uniform computer environment provided by virtual machines eliminated inconsistent DOCK VS results caused by heterogeneous clusters, however, the execution time for the DOCK VS increased. In our particular experiments, overhead costs were found to be an average of 41% and 2% in execution time for two different clusters, while the actual magnitudes of the execution time costs were minimal. Despite the increase in overhead, virtual clusters are an ideal solution for Grid heterogeneity. With greater development of virtual cluster technology in Grid environments, the problem of platform heterogeneity may be eliminated through virtualization, allowing greater usage of VS, and will benefit all Grid applications in general.
NASA Astrophysics Data System (ADS)
Duke, D. L.; Tovesson, F.; Brys, T.; Geppert-Kleinrath, V.; Hambsch, F.-J.; Laptev, A.; Meharchand, R.; Manning, B.; Mayorov, D.; Meierbachtol, K.; Mosby, S.; Perdue, B.; Richman, D.; Shields, D.; Vidali, M.
2017-09-01
The average Total Kinetic Energy (TKE) release and fission-fragment yields in neutron-induced fission of 235U and 238U were measured using a Frisch-gridded ionization chamber. These observables are important nuclear data quantities that are relevant to applications and for informing the next generation of fission models. The measurements were performed at the Los Alamos Neutron Science Center and cover En = 200 keV–30 MeV. The double-energy (2E) method was used to determine the fission-fragment yields, and two methods of correcting for prompt-neutron emission were explored. The results of this study are correlated mass and TKE data.
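The double-energy (2E) method mentioned above recovers the two fragment masses from the two measured kinetic energies via mass and momentum conservation. In the pre-neutron-emission approximation (standard fission analysis, not taken from this record):

```latex
% Double-energy (2E) method: for binary fission of a fissioning nucleus
% of mass A_f, momentum conservation (A_1 v_1 = A_2 v_2) combined with
% E_i = \tfrac12 A_i v_i^2 gives A_1 E_1 = A_2 E_2, so with
% A_1 + A_2 = A_f:
\[
  A_1 \;=\; A_f\,\frac{E_2}{E_1 + E_2}, \qquad
  A_2 \;=\; A_f\,\frac{E_1}{E_1 + E_2}, \qquad
  \mathrm{TKE} \;=\; E_1 + E_2 .
\]
% In practice the measured (post-neutron) energies are corrected
% iteratively for prompt-neutron emission, as noted in the abstract.
```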
The HEPiX Virtualisation Working Group: Towards a Grid of Clouds
NASA Astrophysics Data System (ADS)
Cass, Tony
2012-12-01
The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.
Ion mobility spectrometer with virtual aperture grid
Pfeifer, Kent B.; Rumpf, Arthur N.
2010-11-23
An ion mobility spectrometer does not require a physical aperture grid to prevent premature ion detector response. The last electrodes adjacent to the ion collector (typically the last four or five) have an electrode pitch that is less than the width of the ion swarm and each of the adjacent electrodes is connected to a source of free charge, thereby providing a virtual aperture grid at the end of the drift region that shields the ion collector from the mirror current of the approaching ion swarm. The virtual aperture grid is less complex in assembly and function and is less sensitive to vibrations than the physical aperture grid.
AstroGrid: the UK's Virtual Observatory Initiative
NASA Astrophysics Data System (ADS)
Mann, Robert G.; Astrogrid Consortium; Lawrence, Andy; Davenhall, Clive; Mann, Bob; McMahon, Richard; Irwin, Mike; Walton, Nic; Rixon, Guy; Watson, Mike; Osborne, Julian; Page, Clive; Allan, Peter; Giaretta, David; Perry, Chris; Pike, Dave; Sherman, John; Murtagh, Fionn; Harra, Louise; Bentley, Bob; Mason, Keith; Garrington, Simon
AstroGrid is the UK's Virtual Observatory (VO) initiative. It brings together the principal astronomical data centres in the UK, and has been funded to the tune of ~£5M over the next three years, via PPARC, as part of the UK e-science programme. Its twin goals are the provision of the infrastructure and tools for the federation and exploitation of large astronomical (X-ray to radio), solar and space plasma physics datasets, and the delivery of federations of current datasets for its user communities to exploit using those tools. Whilst AstroGrid's work will be centred on existing and future (e.g. VISTA) UK datasets, it will seek solutions to generic VO problems and will contribute to the developing international virtual observatory framework: AstroGrid is a member of the EU-funded Astrophysical Virtual Observatory project, has close links to a second EU Grid initiative, the European Grid of Solar Observations (EGSO), and will seek an active role in the development of the common standards on which the international virtual observatory will rely. In this paper we primarily describe the concrete plans for AstroGrid's one-year Phase A study, which centre on: (i) the definition of detailed science requirements through community consultation; (ii) the undertaking of a "functionality market survey" to test the utility of existing technologies for the VO; and (iii) a pilot programme of database federations, each addressing different aspects of the general database federation problem. Further information can be found on the AstroGrid website.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolotnikov, A. E.; Camarda, G. S.; Cui, Y.
Following our successful demonstration of the position-sensitive virtual Frisch-grid detectors, we investigated the feasibility of using high-granularity position sensing to correct response non-uniformities caused by the crystal defects in CdZnTe (CZT) pixelated detectors. The development of high-granularity detectors able to correct response non-uniformities on a scale comparable to the size of electron clouds opens the opportunity of using unselected off-the-shelf CZT material, whilst still assuring high spectral resolution for the majority of the detectors fabricated from an ingot. Here, we present the results from testing 3D position-sensitive 15×15×10 mm³ pixelated detectors, fabricated with conventional pixel patterns with progressively smaller pixel sizes: 1.4, 0.8, and 0.5 mm. We employed the readout system based on the H3D front-end multi-channel ASIC developed by BNL's Instrumentation Division in collaboration with the University of Michigan. We use the sharing of electron clouds among several adjacent pixels to measure locations of interaction points with sub-pixel resolution. By using the detectors with small pixel sizes and a high probability of charge-sharing events, we were able to improve their spectral resolutions in comparison to the baseline levels, measured for the 1.4-mm pixel size detectors with small fractions of charge-sharing events. These results demonstrate that further enhancement of the performance of CZT pixelated detectors and reduction of costs are possible by using high spatial-resolution position information of interaction points to correct the small-scale response non-uniformities caused by crystal defects present in most devices.
THE VIRTUAL INSTRUMENT: SUPPORT FOR GRID-ENABLED MCELL SIMULATIONS
Casanova, Henri; Berman, Francine; Bartol, Thomas; Gokcay, Erhan; Sejnowski, Terry; Birnbaum, Adam; Dongarra, Jack; Miller, Michelle; Ellisman, Mark; Faerman, Marcio; Obertelli, Graziano; Wolski, Rich; Pomerantz, Stuart; Stiles, Joel
2010-01-01
Ensembles of widely distributed, heterogeneous resources, or Grids, have emerged as popular platforms for large-scale scientific applications. In this paper we present the Virtual Instrument project, which provides an integrated application execution environment that enables end-users to run and interact with running scientific simulations on Grids. This work is performed in the specific context of MCell, a computational biology application. While MCell provides the basis for running simulations, its capabilities are currently limited in terms of scale, ease-of-use, and interactivity. These limitations preclude usage scenarios that are critical for scientific advances. Our goal is to create a scientific “Virtual Instrument” from MCell by allowing its users to transparently access Grid resources while being able to steer running simulations. In this paper, we motivate the Virtual Instrument project and discuss a number of relevant issues and accomplishments in the area of Grid software development and application scheduling. We then describe our software design and report on the current implementation. We verify and evaluate our design via experiments with MCell on a real-world Grid testbed. PMID:20689618
Self-Stabilizing and Efficient Robust Uncertainty Management
2011-10-01
Group decision making in honey bee swarms. American Scientist 94:220-229. Frisch, Karl von (1967). The Dance Language and Orientation of Bees. Cambridge, Mass.: The Belknap Press of Harvard University Press. Thom et al. (21 August 2007). The Scent of the Waggle Dance. PLoS Biology.
Bai, Qifeng; Shao, Yonghua; Pan, Dabo; Zhang, Yang; Liu, Huanxiang; Yao, Xiaojun
2014-01-01
We designed a program called MolGridCal that can be used to screen small-molecule databases in grid computing on the basis of the JPPF grid environment. Based on MolGridCal, we proposed an integrated strategy for virtual screening and binding mode investigation by combining molecular docking, molecular dynamics (MD) simulations and free energy calculations. To test the effectiveness of MolGridCal, we screened potential ligands for the β2 adrenergic receptor (β2AR) from a database containing 50,000 small molecules. MolGridCal can not only send tasks to the grid server automatically, but can also distribute tasks using the screensaver function. As for the results of virtual screening, the known agonist BI-167107 of β2AR is ranked among the top 2% of the screened candidates, indicating that MolGridCal can give reasonable results. To further study the binding mode and refine the results of MolGridCal, more accurate docking and scoring methods were used to estimate the binding affinity for the top three molecules (agonist BI-167107, neutral antagonist alprenolol and inverse agonist ICI 118,551). The results indicate that agonist BI-167107 has the best binding affinity. MD simulation and free energy calculation were employed to investigate the dynamic interaction mechanism between the ligands and β2AR. The results show that the agonist BI-167107 also has the lowest binding free energy. This study provides a new way to perform virtual screening effectively through integrating molecular docking based on grid computing, MD simulations and free energy calculations. The source codes of MolGridCal are freely available at http://molgridcal.codeplex.com. PMID:25229694
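The scatter/gather pattern that such grid-based screening follows can be sketched in a few lines of Python (schematic only; the function names and scoring convention are illustrative, not MolGridCal's actual API):

    import heapq
    from typing import Callable, Iterable

    def chunk(ligands: list, n_tasks: int) -> list:
        """Split a ligand library into roughly equal grid work units."""
        k = max(1, len(ligands) // n_tasks)
        return [ligands[i:i + k] for i in range(0, len(ligands), k)]

    def gather_top(ligands: Iterable, dock_score: Callable, top: int = 1000):
        """Score every ligand and keep the best candidates
        (lower score = stronger predicted binding, as in DOCK-style programs)."""
        scored = ((dock_score(lig), lig) for lig in ligands)
        return heapq.nsmallest(top, scored)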
Neutron-induced fission cross section of 242Pu from 15 MeV to 20 MeV
NASA Astrophysics Data System (ADS)
Jovančević, N.; Salvador-Castineira, P.; Daraban, L.; Vidali, M.; Heyse, J.; Oberstedt, S.; Hambsch, F.-J.; Bonaldi, C.; Geerts, W.
2017-09-01
Accurate nuclear-data needs in the fast-neutron-energy region have recently been addressed for the development of next-generation nuclear power plants (GEN-IV) by the OECD Nuclear Energy Agency (NEA). This sensitivity study has shown that the 242Pu(n,f) cross section is of particular interest for fast reactor systems. Measurements have been performed with quasi-monoenergetic neutrons in the energy range from 15 MeV to 20 MeV produced by the Van de Graaff accelerator of the JRC-Geel. A twin Frisch-grid ionization chamber has been used in a back-to-back configuration as the fission-fragment detector. The 242Pu(n,f) cross section has been normalized to 238U(n,f) cross section data. The results were compared with existing literature data and show acceptable agreement, within 5%.
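The normalization quoted above follows the usual ratio method for back-to-back fission chambers; schematically (the standard relation, not an equation reproduced from the paper):

    % Ratio measurement against the 238U(n,f) reference cross section:
    \sigma_{242}(E_n) = \sigma_{238}(E_n)\,\frac{C_{242}(E_n)}{C_{238}(E_n)}\,\frac{N_{238}}{N_{242}}
    % where C_i are efficiency-corrected fission counts from the two sides of
    % the twin chamber and N_i are the numbers of target atoms in each deposit.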
Thundercloud: Domain specific information security training for the smart grid
NASA Astrophysics Data System (ADS)
Stites, Joseph
In this paper, we describe ThunderCloud, a cloud-based virtual smart-grid test bed intended for domain-specific security training applicable to the smart-grid environment. The test bed consists of virtual machines connected by a virtual internal network. ThunderCloud is remotely accessible, allowing students to undergo educational exercises online. We also describe a series of practical exercises that we have developed for providing the domain-specific training using ThunderCloud. The training exercises and attacks are designed to be realistic and to reflect known vulnerabilities and attacks reported in the smart-grid environment. We were able to use ThunderCloud to offer practical domain-specific security training for the smart-grid environment to computer science students at little or no cost to the department and no risk to any real networks or systems.
Collaboration in a Wireless Grid Innovation Testbed by Virtual Consortium
NASA Astrophysics Data System (ADS)
Treglia, Joseph; Ramnarine-Rieks, Angela; McKnight, Lee
This paper describes the formation of the Wireless Grid Innovation Testbed (WGiT) coordinated by a virtual consortium involving academic and non-academic entities. Syracuse University and Virginia Tech are primary university partners with several other academic, government, and corporate partners. Objectives include: 1) coordinating knowledge sharing, 2) defining key parameters for wireless grids network applications, 3) dynamically connecting wired and wireless devices, content and users, 4) linking to VT-CORNET, Virginia Tech Cognitive Radio Network Testbed, 5) forming ad hoc networks or grids of mobile and fixed devices without a dedicated server, 6) deepening understanding of wireless grid application, device, network, user and market behavior through academic, trade and popular publications including online media, 7) identifying policy that may enable evaluated innovations to enter US and international markets and 8) implementation and evaluation of the international virtual collaborative process.
Virtualizing access to scientific applications with the Application Hosting Environment
NASA Astrophysics Data System (ADS)
Zasada, S. J.; Coveney, P. V.
2009-12-01
The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion.
Program summary:
Program title: Application Hosting Environment 2.0
Catalogue identifier: AEEJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public Licence, Version 2
No. of lines in distributed program, including test data, etc.: not applicable
No. of bytes in distributed program, including test data, etc.: 1 685 603 766
Distribution format: tar.gz
Programming language: Perl (server), Java (client)
Computer: x86
Operating system: Linux (server), Linux/Windows/MacOS (client)
RAM: 134 217 728 (server), 67 108 864 (client) bytes
Classification: 6.5
External routines: VirtualBox (server), Java (client)
Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid.
Solution method: To address the above problem, we have developed AHE, a lightweight interface designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications.
Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox (http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide.
Running time: Not applicable
References:
[1] J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf.
[2] P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406-418.
A fast 1-D detector for imaging and time resolved SAXS experiments
NASA Astrophysics Data System (ADS)
Menk, R. H.; Arfelli, F.; Bernstorff, S.; Pontoni, D.; Sarvestani, A.; Besch, H. J.; Walenta, A. H.
1999-02-01
A one-dimensional test detector based on the principle of a highly segmented ionization chamber with a shielding grid (Frisch grid) was developed to evaluate whether this kind of detector is suitable for advanced small-angle X-ray scattering (SAXS) experiments. At present it consists of 128 pixels which can be read out within 0.2 ms with a noise floor of 2000 e⁻ ENC. A quantum efficiency of 80% for a photon energy of 8 keV was achieved. This leads to DQE values of 80% for photon fluxes above 1000 photons per pixel and integration time. The shielding grid is based on the principles of the recently invented MCAT structure and the GEM structure, which also allows electron amplification in the gas. In the case of the MCAT structure, an energy resolution of 20% at 5.9 keV was observed. The gas amplification mode enables imaging with this integrating detector at a sub-photon noise level with respect to the integration time. Preliminary saturation measurements show that this kind of detector sustains a photon flux density of up to 10¹² photons/(mm² s) while operating linearly. A spatial resolution of at least three line pairs/mm was obtained. All these features show that this type of detector is well suited for time-resolved SAXS experiments as well as high-flux imaging applications.
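DQE here is the standard detective quantum efficiency for X-ray detectors; for reference (a general definition, not specific to this instrument):

    % DQE compares signal-to-noise at output and input:
    \mathrm{DQE} = \frac{\mathrm{SNR}_{\mathrm{out}}^2}{\mathrm{SNR}_{\mathrm{in}}^2}
    % For a Poisson beam of N photons, \mathrm{SNR}_{\mathrm{in}}^2 = N, so an
    % ideal photon counter with quantum efficiency \eta reaches \mathrm{DQE} = \eta;
    % readout noise and gain fluctuations push \mathrm{DQE} below \eta at low flux.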
Development of portable CdZnTe spectrometers for remote sensing of signatures from nuclear materials
NASA Astrophysics Data System (ADS)
Burger, Arnold; Groza, Michael; Cui, Yunlong; Roy, Utpal N.; Hillman, Damian; Guo, Mike; Li, Longxia; Wright, Gomez W.; James, Ralph B.
2005-03-01
Room temperature cadmium zinc telluride (CZT) gamma-ray spectrometers have been under development for a number of years for medical, space and national security applications where high sensitivity, low operating power and compactness are indispensable. The technology has now matured to the point where detectors with large volumes (several cubic centimeters) and high energy resolution (approximately 1% at 662 keV) for gamma photons are becoming available for incorporation into portable systems for remote sensing of signatures from nuclear materials. The straightforward approach of utilizing a planar CZT device has been excluded because incomplete charge collection, arising from the trapping of holes, broadens spectral lines at energies above 80 keV to unacceptable levels of performance. Solutions are being pursued by developing devices that process the signal produced primarily by electrons and are practically insensitive to the contribution of holes; recent progress has been made in the areas of material growth as well as electrode and electronics design. Present materials challenges lie in the growth of CZT boules from which large, oriented single-crystal pieces can be cut to fabricate such sizable detectors. Since virtually all detector-grade CZT boules consist of several grains, the cost of a large, single-crystal section is still high. Co-planar detectors, capacitive Frisch-grid detectors and devices taking advantage of the small-pixel effect are configurations with a range of requirements in crystallinity and defect content, and they involve variable degrees of complexity in fabrication, surface passivation and signal processing. These devices have been demonstrated by several research groups and will be discussed in terms of their sensitivity and availability.
Bolotnikov, A. E.; Camarda, G. S.; Cui, Y.; ...
2015-09-06
Following our successful demonstration of the position-sensitive virtual Frisch-grid detectors, we investigated the feasibility of using high-granularity position sensing to correct response non-uniformities caused by the crystal defects in CdZnTe (CZT) pixelated detectors. The development of high-granularity detectors able to correct response non-uniformities on a scale comparable to the size of electron clouds opens the opportunity of using unselected off-the-shelf CZT material, whilst still assuring high spectral resolution for the majority of the detectors fabricated from an ingot. Here, we present the results from testing 3D position-sensitive 15×15×10 mm³ pixelated detectors, fabricated with conventional pixel patterns with progressively smaller pixel sizes: 1.4, 0.8, and 0.5 mm. We employed the readout system based on the H3D front-end multi-channel ASIC developed by BNL's Instrumentation Division in collaboration with the University of Michigan. We use the sharing of electron clouds among several adjacent pixels to measure locations of interaction points with sub-pixel resolution. By using the detectors with small pixel sizes and a high probability of charge-sharing events, we were able to improve their spectral resolutions in comparison to the baseline levels, measured for the 1.4-mm pixel size detectors with small fractions of charge-sharing events. These results demonstrate that further enhancement of the performance of CZT pixelated detectors and reduction of costs are possible by using high spatial-resolution position information of interaction points to correct the small-scale response non-uniformities caused by crystal defects present in most devices.
DE-FG02-04ER25606 Identity Federation and Policy Management Guide: Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphrey, Marty, A
The goal of this 3-year project was to facilitate a more productive dynamic matching between resource providers and resource consumers in Grid environments by explicitly specifying policies. Broadly, two problems were addressed by this project. First, there was a lack of an Open Grid Services Architecture (OGSA)-compliant mechanism for expressing, storing and retrieving user policies and Virtual Organization (VO) policies. Second, there was a lack of tools to resolve and enforce policies in the Open Services Grid Architecture. To address these problems, our overall approach in this project was to make all policies explicit (e.g., virtual organization policies, resource provider policies, resource consumer policies), thereby facilitating policy matching and policy negotiation. Policies defined on a per-user basis were created, held, and updated in MyPolMan, allowing a Grid user to centralize (where appropriate) and manage his/her policies. Organizationally, the corresponding service was VOPolMan, in which the policies of the Virtual Organization are expressed, managed, and dynamically consulted. Overall, we successfully defined, prototyped, and evaluated policy-based resource management and access control for OGSA-based Grids. This DOE project partially supported 17 peer-reviewed publications on a number of different topics: general security for Grids, credential management, Web services/OGSA/OGSI, policy-based grid authorization (for remote execution and for access to information), policy-directed Grid data movement/placement, policies for large-scale virtual organizations, and large-scale policy-aware grid architectures. In addition to supporting the PI, this project partially supported the training of 5 PhD students.
Studies of Neutron-Induced Fission of 235U, 238U, and 239Pu
NASA Astrophysics Data System (ADS)
Duke, Dana; TKE Team
2014-09-01
A Frisch-gridded ionization chamber and the double-energy (2E) analysis method were used to study mass yield distributions and average total kinetic energy (TKE) release from neutron-induced fission of 235U, 238U, and 239Pu. Despite decades of fission research, little or no TKE data exist for high incident neutron energies. Additional average TKE information at incident neutron energies relevant to defense- and energy-related applications will provide a valuable observable for benchmarking simulations. The data can also be used as inputs to theoretical fission models. The Los Alamos Neutron Science Center - Weapons Neutron Research (LANSCE-WNR) facility provides a neutron beam from thermal energies to hundreds of MeV, well suited for filling in the gaps in existing data and for exploring fission behavior in the fast-neutron region. The results of the studies on 238U, 235U, and 239Pu will be presented. LA-UR-14-24921.
A practical approach to virtualization in HEP
NASA Astrophysics Data System (ADS)
Buncic, P.; Aguado Sánchez, C.; Blomer, J.; Harutyunyan, A.; Mudrinic, M.
2011-01-01
In the attempt to solve the problem of processing data coming from LHC experiments at CERN at a rate of 15 PB per year, for almost a decade the High Energy Physics (HEP) community has focused its efforts on the development of the Worldwide LHC Computing Grid. This generated large interest and expectations, promising to revolutionize computing. Meanwhile, having initially taken part in the Grid standardization process, industry has moved in a different direction and started promoting the Cloud Computing paradigm, which aims to solve problems on a similar scale and in an equally seamless way as was expected in the idealized Grid approach. A key enabling technology behind Cloud computing is server virtualization. In early 2008, an R&D project was established in the PH-SFT group at CERN to investigate how virtualization technology could be used to improve and simplify the daily interaction of physicists with experiment software frameworks and the Grid infrastructure. In this article we first briefly compare the Grid and Cloud computing paradigms and then summarize the results of the R&D activity, pointing out where and how virtualization technology can be effectively used in our field in order to maximize practical benefits whilst avoiding potential pitfalls.
New trends in the virtualization of hospitals--tools for global e-Health.
Graschew, Georgi; Roelofs, Theo A; Rakowsky, Stefan; Schlag, Peter M; Heinzlreiter, Paul; Kranzlmüller, Dieter; Volkert, Jens
2006-01-01
The development of virtual hospitals and digital medicine helps to bridge the digital divide between different regions of the world and enables equal access to high-level medical care. Pre-operative planning, intra-operative navigation and minimally-invasive surgery require a digital and virtual environment supporting the perception of the physician. As data and computing resources in a virtual hospital are distributed over many sites, the concept of the Grid should be integrated with other communication networks and platforms. A promising approach is the implementation of service-oriented architectures for an invisible grid, hiding complexity from both application developers and end-users. Examples of promising medical applications of Grid technology are the real-time 3D visualization and manipulation of patient data for individualized treatment planning and the creation of distributed intelligent databases of medical images.
The architecture of a virtual grid GIS server
NASA Astrophysics Data System (ADS)
Wu, Pengfei; Fang, Yu; Chen, Bin; Wu, Xi; Tian, Xiaoting
2008-10-01
The grid computing technology provides a service-oriented architecture for distributed applications. The virtual Grid GIS server is a distributed and interoperable enterprise GIS architecture running in the grid environment, which integrates heterogeneous GIS platforms. All sorts of legacy GIS platforms join the grid as members of a GIS virtual organization. Based on a microkernel, we design the ESB and portal GIS service layer, which together compose Microkernel GIS. Through web portals, portal GIS services and the mediation of the service bus, following the principle of separation of concerns (SoC), we separate business logic from implementation logic. Microkernel GIS greatly reduces the degree of coupling between applications and GIS platforms. The enterprise applications are independent of particular GIS platforms, allowing application developers to concentrate on the business logic. Via configuration and orchestration of a set of fine-grained services, the system creates a GIS Business, which acts as a whole WebGIS request when activated. In this way, the system satisfies a business workflow directly and simply, with little or no new code.
Formation of Virtual Organizations in Grids: A Game-Theoretic Approach
NASA Astrophysics Data System (ADS)
Carroll, Thomas E.; Grosu, Daniel
The execution of large scale grid applications requires the use of several computational resources owned by various Grid Service Providers (GSPs). GSPs must form Virtual Organizations (VOs) to be able to provide the composite resource to these applications. We consider grids as self-organizing systems composed of autonomous, self-interested GSPs that will organize themselves into VOs with every GSP having the objective of maximizing its profit. We formulate the resource composition among GSPs as a coalition formation problem and propose a game-theoretic framework based on cooperation structures to model it. Using this framework, we design a resource management system that supports the VO formation among GSPs in a grid computing system.
Virtual Oscillator Controls | Grid Modernization | NREL
NREL is developing virtual oscillator controls with partners including Santa-Barbara and SunPower. Publications: Synthesizing Virtual Oscillators To Control Islanded Inverters; Synchronization of Parallel Single-Phase Inverters Using Virtual Oscillator Control, IEEE Transactions on Power
Evaluation of grid generation technologies from an applied perspective
NASA Technical Reports Server (NTRS)
Hufford, Gary S.; Harrand, Vincent J.; Patel, Bhavin C.; Mitchell, Curtis R.
1995-01-01
An analysis of the grid generation process from the point of view of an applied CFD engineer is given. Issues addressed include geometric modeling, structured grid generation, unstructured grid generation, hybrid grid generation, and the use of virtual parts libraries in large parametric analysis projects. The analysis is geared towards comparing the effective turnaround time for specific grid generation and CFD projects. The conclusion was reached that a single grid generation methodology is not universally suited for all CFD applications, due to limitations in both grid generation and flow solver technology. A new geometric modeling and grid generation tool, CFD-GEOM, is introduced to effectively integrate the geometric modeling process with the various grid generation methodologies, including structured, unstructured, and hybrid procedures. The full integration of geometric modeling and grid generation allows the implementation of extremely efficient updating procedures, a necessary requirement for large parametric analysis projects. The concept of using virtual parts libraries in conjunction with hybrid grids for large parametric analysis projects is also introduced to improve the efficiency of the applied CFD engineer.
ERIC Educational Resources Information Center
Brooks, Tyson T.
2013-01-01
This thesis comprises three essays which contribute to the foundational understanding of the vulnerabilities and risks of potentially implementing wireless grid Edgeware technology in a virtualized cloud environment. Since communication networks and devices are subject to becoming the target of exploitation by hackers (e.g. individuals who…
Smart-Grid Backbone Network Real-Time Delay Reduction via Integer Programming.
Pagadrai, Sasikanth; Yilmaz, Muhittin; Valluri, Pratyush
2016-08-01
This research investigates an optimal delay-based virtual topology design using integer linear programming (ILP), applied to current backbone networks such as smart-grid real-time communication systems. A network traffic matrix is applied, and the corresponding virtual topology problem is solved using ILP formulations that include a network delay-dependent objective function and lightpath routing, wavelength assignment, wavelength continuity, flow routing, and traffic loss constraints. The proposed optimization approach provides an efficient deterministic integration of intelligent sensing and decision making with network learning features for superior smart-grid operations, adaptively responding to time-varying network traffic data as well as operational constraints to maintain optimal virtual topologies. A representative optical backbone network has been utilized to demonstrate the proposed optimization framework; simulation results indicate that superior smart-grid network performance can be achieved using commercial networks and integer programming.
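As a schematic of the kind of delay-minimizing ILP described here (an illustrative form under simplifying assumptions, not the paper's exact formulation):

    % Minimize total delay accumulated by routed traffic over lightpaths:
    \min \sum_{(i,j)} \sum_{(s,d)} d_{ij}\, \lambda_{ij}^{sd}
    % subject to per-pair flow conservation, wavelength continuity along each
    % lightpath, and capacity limits \sum_{(s,d)} \lambda_{ij}^{sd} \le C\, b_{ij},
    % where b_{ij} \in \{0,1\} marks lightpath (i,j) as established, d_{ij} is
    % its delay, and \lambda_{ij}^{sd} is the (s,d) traffic routed over it.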
Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds
NASA Astrophysics Data System (ADS)
Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano
Grid computing has evolved widely over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific grid open-source or industrial products; rather, it comprises a set of capabilities embedded virtually within any kind of software to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active application field of grid computing is the full virtualization of scientific instruments in order to increase their availability and decrease operational and maintenance costs. Computational and information grids allow real-world objects to be managed in a service-oriented way using widespread industrial standards.
Simulation fidelity of a virtual environment display
NASA Technical Reports Server (NTRS)
Nemire, Kenneth; Jacoby, Richard H.; Ellis, Stephen R.
1994-01-01
We assessed the degree to which a virtual environment system produced a faithful simulation of three-dimensional space by investigating the influence of a pitched optic array on the perception of gravity-referenced eye level (GREL). We compared the results with those obtained in a physical environment. In a within-subjects factorial design, 12 subjects indicated GREL while viewing virtual three-dimensional arrays at different static orientations. A physical pitched array biased GREL more than did a geometrically identical virtual pitched array. However, adding two sets of orthogonal parallel lines (a grid) to the virtual pitched array resulted in as large a bias as that obtained with the physical pitched array. The increased bias was caused by the longitudinal, but not the transverse, components of the grid. We discuss the implications of our results for spatial orientation models and for the design of virtual displays.
Spatial cell firing during virtual navigation of open arenas by head-restrained mice.
Chen, Guifen; King, John Andrew; Lu, Yi; Cacucci, Francesca; Burgess, Neil
2018-06-18
We present a mouse virtual reality (VR) system which restrains head movements to horizontal rotations, compatible with multi-photon imaging. This system allows expression of the spatial navigation and neuronal firing patterns characteristic of real open arenas (R). Comparing VR to R: place and grid, but not head-direction, cell firing had broader spatial tuning; place, but not grid, cell firing was more directional; theta frequency increased less with running speed; whereas increases in firing rates with running speed and place and grid cells' theta phase precession were similar. These results suggest that the omni-directional place cell firing in R may require local cues unavailable in VR, and that the scale of grid and place cell firing patterns, and theta frequency, reflect translational motion inferred from both virtual (visual and proprioceptive) and real (vestibular translation and extra-maze) cues. By contrast, firing rates and theta phase precession appear to reflect visual and proprioceptive cues alone. © 2018, Chen et al.
Blanchard, Ray
2007-12-01
Frisch and Hviid (2006) recently reported a study of variables that predicted heterosexual and homosexual marriage in a national cohort of Danish men and women. They found no evidence that older brothers increase the probability that a man will legally marry another man. They concluded that their data raise questions about the universality of the widely confirmed finding that older brothers increase the probability that a man will be sexually oriented towards other men (the fraternal birth order effect). In the present article, Frisch and Hviid's data were reanalyzed using one of the procedures that have been used in prior studies of fraternal birth order. The results showed that the sex ratio of older brothers to older sisters was significantly higher than the expected value of 106 in all four of their study groups (heterosexually married men, homosexually married men, heterosexually married women, and homosexually married women). In contrast, the sex ratio of younger brothers to younger sisters approximated 106 in all four groups. According to this analysis, the only group whose data resembled data from previous studies was the homosexually married males. The writer concluded that one cannot interpret findings about the correlates of heterosexual and homosexual marriage as if they were findings about the correlates of heterosexual and homosexual orientation, and that this is underscored by comparing the markedly different older-sibling sex ratios obtained from heterosexually married persons (in the Danish study) and those obtained from heterosexually oriented persons (in previous studies). It is unclear what implications, if any, Frisch and Hviid's findings have for the study of sexual orientation in general.
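The sex-ratio comparison described above can be made concrete with a small Python sketch (the counts are invented for illustration; the expected ratio of 106 boys per 100 girls corresponds to a binomial proportion of 106/206):

    from scipy.stats import binomtest

    def ratio_per_100(brothers: int, sisters: int) -> float:
        """Older-sibling sex ratio expressed as brothers per 100 sisters."""
        return 100.0 * brothers / sisters

    # Invented counts, not Frisch and Hviid's data: test whether the share of
    # older brothers exceeds the population expectation p = 106/206.
    b, s = 1200, 1000
    print(f"observed ratio: {ratio_per_100(b, s):.0f} per 100")
    print(binomtest(b, b + s, p=106/206, alternative="greater"))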
Energy-aware virtual network embedding in flexi-grid optical networks
NASA Astrophysics Data System (ADS)
Lin, Rongping; Luo, Shan; Wang, Haoran; Wang, Sheng; Chen, Bin
2018-01-01
The virtual network embedding (VNE) problem is to map multiple heterogeneous virtual networks (VNs) onto a shared substrate network, which mitigates the ossification of the substrate network. Meanwhile, energy efficiency has been widely considered in network design. In this paper, we aim to solve the energy-aware VNE problem in flexi-grid optical networks. We provide an integer linear programming (ILP) formulation to minimize the power increment of each arriving VN request. We also propose a polynomial-time heuristic algorithm in which virtual links are embedded sequentially to keep a reasonable acceptance ratio while maintaining low energy consumption. Numerical results show the functionality of the heuristic algorithm in a 24-node network.
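The power-increment objective named above can be written schematically as follows (an illustrative form; the paper's ILP adds flexi-grid spectrum-assignment constraints):

    % Power increment of hosting an arriving VN request:
    \min \Delta P = \sum_{v \in N_s} P_{\mathrm{node}}\, x_v + \sum_{e \in E_s} P_{\mathrm{link}}\, y_e
    % where x_v, y_e \in \{0,1\} indicate substrate nodes and links that must be
    % newly powered on to host the request; already-active resources add no cost,
    % which biases the embedding toward consolidation.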
Studies of fission fragment properties at the Los Alamos Neutron Science Center (LANSCE)
NASA Astrophysics Data System (ADS)
Tovesson, Fredrik; Mayorov, Dmitriy; Duke, Dana; Manning, Brett; Geppert-Kleinrath, Verena
2017-09-01
Nuclear data related to the fission process are needed for a wide variety of research areas, including fundamental science, nuclear energy and non-proliferation. While some of the relevant data have been measured to the required accuracies, there are still many aspects of fission that need further investigation. One such aspect is how Total Kinetic Energy (TKE), fragment yields, angular distributions and other fission observables depend on the excitation energy of the fissioning system. Another question is the correlation between mass, charge and energy of fission fragments. At the Los Alamos Neutron Science Center (LANSCE) we are studying neutron-induced fission at incident energies from thermal up to hundreds of MeV using the Lujan Center and Weapons Neutron Research (WNR) facilities. Advanced instruments such as SPIDER (a time-of-flight and kinetic energy spectrometer), the NIFFTE Time Projection Chamber (TPC), and Frisch-grid Ionization Chambers (FGIC) are used to investigate the properties of fission fragments, and some important results for the major actinides have been obtained.
Extending Measurements to En=30 MeV and Beyond
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duke, Dana Lynn
The majority of the energy release in the fission process is due to the kinetic energy of the fission fragments. Average Total Kinetic Energy (⟨TKE⟩) measurements for the major actinides over a wide range of incident neutron energies were performed at LANSCE using a Frisch-gridded ionization chamber. The experiments and results for 238U(n,f) and 235U(n,f) will be presented, including ⟨TKE⟩(En), ⟨TKE⟩(A), and mass yield distributions as a function of neutron energy. A preliminary ⟨TKE⟩(En) for 239Pu(n,f) will also be shown. The ⟨TKE⟩(En) shows a clear structure at multichance fission thresholds for all the reactions that we studied. The fragment masses are determined using the iterative double-energy (2E) method, with a resolution of ΔA = 4-5 amu. The correction for the prompt fission neutrons is the main source of uncertainty, especially at high incident neutron energies, since the behavior of ν̄(A,En) is largely unknown. Different correction methods will be discussed.
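The prompt-neutron correction referred to here is usually applied in the following textbook form (a standard approximation, not necessarily this work's exact prescription):

    % Neutron emission is roughly isotropic in the fragment frame, so the
    % energy per nucleon is preserved and the pre-neutron energy is recovered as
    E^{pre} \simeq E^{post}\,\frac{m^{pre}}{m^{pre} - \bar{\nu}(A, E_n)}
    % with \bar{\nu}(A, E_n) the average prompt-neutron multiplicity of the
    % fragment; because \bar{\nu}(A, E_n) is poorly known at high E_n, this
    % correction dominates the uncertainty, as noted above.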
Studies of fission fragment properties at the Los Alamos Neutron Science Center (LANSCE)
Tovesson, Fredrik; Mayorov, Dmitriy; Duke, Dana; ...
2017-09-13
Nuclear data related to the fission process are needed for a wide variety of research areas, including fundamental science, nuclear energy and non-proliferation. While some of the relevant data have been measured to the required accuracies, there are still many aspects of fission that need further investigation. One such aspect is how Total Kinetic Energy (TKE), fragment yields, angular distributions and other fission observables depend on the excitation energy of the fissioning system. Another question is the correlation between mass, charge and energy of fission fragments. At the Los Alamos Neutron Science Center (LANSCE) we are studying neutron-induced fission at incident energies from thermal up to hundreds of MeV using the Lujan Center and Weapons Neutron Research (WNR) facilities. Advanced instruments such as SPIDER (a time-of-flight and kinetic energy spectrometer), the NIFFTE Time Projection Chamber (TPC), and Frisch-grid Ionization Chambers (FGIC) are used to investigate the properties of fission fragments, and some important results for the major actinides have been obtained.
Studies of fission fragment properties at the Los Alamos Neutron Science Center (LANSCE)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tovesson, Fredrik; Mayorov, Dmitriy; Duke, Dana
Nuclear data related to the fission process are needed for a wide variety of research areas, including fundamental science, nuclear energy and non-proliferation. While some of the relevant data have been measured to the required accuracies, there are still many aspects of fission that need further investigation. One such aspect is how Total Kinetic Energy (TKE), fragment yields, angular distributions and other fission observables depend on the excitation energy of the fissioning system. Another question is the correlation between mass, charge and energy of fission fragments. At the Los Alamos Neutron Science Center (LANSCE) we are studying neutron-induced fission at incident energies from thermal up to hundreds of MeV using the Lujan Center and Weapons Neutron Research (WNR) facilities. Advanced instruments such as SPIDER (a time-of-flight and kinetic energy spectrometer), the NIFFTE Time Projection Chamber (TPC), and Frisch-grid Ionization Chambers (FGIC) are used to investigate the properties of fission fragments, and some important results for the major actinides have been obtained.
Grid Enabled Geospatial Catalogue Web Service
NASA Technical Reports Server (NTRS)
Chen, Ai-Jun; Di, Li-Ping; Wei, Ya-Xing; Liu, Yang; Bui, Yu-Qi; Hu, Chau-Min; Mehrotra, Piyush
2004-01-01
Geospatial Catalogue Web Service is a vital service for sharing and interoperating volumes of distributed heterogeneous geospatial resources, such as data, services, applications, and their replicas over the web. Based on Grid technology and the Open Geospatial Consortium (OGC) Catalogue Service - Web Information Model, this paper proposes a new information model for Geospatial Catalogue Web Service, named GCWS, which securely provides Grid-based publishing, managing and querying of geospatial data and services, and transparent access to replica data and related services under the Grid environment. This information model integrates the information model of the Grid Replica Location Service (RLS)/Monitoring & Discovery Service (MDS) with the information model of the OGC Catalogue Service (CSW), and refers to the geospatial data metadata standards from ISO 19115, FGDC and the NASA EOS Core System, and service metadata standards from ISO 19119, to extend itself for expressing geospatial resources. Using GCWS, any valid geospatial user who belongs to an authorized Virtual Organization (VO) can securely publish and manage geospatial resources, especially query on-demand data in the virtual community and retrieve it through the data-related services, which provide functions such as subsetting, reformatting, reprojection, etc. This work facilitates geospatial resource sharing and interoperation under the Grid environment, making geospatial resources Grid-enabled and Grid technologies geospatial-enabled. It also allows researchers to focus on science, and not on issues with computing capacity, data location, processing and management. GCWS is also a key component for workflow-based virtual geospatial data production.
Setting Up a Grid-CERT: Experiences of an Academic CSIRT
ERIC Educational Resources Information Center
Moller, Klaus
2007-01-01
Purpose: Grid computing has often been heralded as the next logical step after the worldwide web. Users of grids can access dynamic resources such as computer storage and use the computing resources of computers under the umbrella of a virtual organisation. Although grid computing is often compared to the worldwide web, it is vastly more complex…
Grids, Clouds, and Virtualization
NASA Astrophysics Data System (ADS)
Cafaro, Massimo; Aloisio, Giovanni
This chapter introduces and puts in context Grids, Clouds, and Virtualization. Grids promised to deliver computing power on demand. However, despite a decade of active research, no viable commercial grid computing provider has emerged. On the other hand, it is widely believed - especially in the Business World - that HPC will eventually become a commodity. Just as some commercial consumers of electricity have mission requirements that necessitate they generate their own power, some consumers of computational resources will continue to need to provision their own supercomputers. Clouds are a recent business-oriented development with the potential to render this eventually as rare as organizations that generate their own electricity today, even among institutions who currently consider themselves the unassailable elite of the HPC business. Finally, Virtualization is one of the key technologies enabling many different Clouds. We begin with a brief history in order to put them in context, and recall the basic principles and concepts underlying and clearly differentiating them. A thorough overview and survey of existing technologies provides the basis to delve into details as the reader progresses through the book.
AstroGrid: Taverna in the Virtual Observatory.
NASA Astrophysics Data System (ADS)
Benson, K. M.; Walton, N. A.
This paper reports on the implementation of the Taverna workbench by AstroGrid, a tool for designing and executing workflows of tasks in the Virtual Observatory. The workflow approach helps astronomers perform complex task sequences with little technical effort. The visual approach to workflow construction streamlines highly complex analyses over public and private data, using computational resources as minimal as a desktop computer. Some integration issues and future work are discussed in this article.
Unbalanced voltage control of virtual synchronous generator in isolated micro-grid
NASA Astrophysics Data System (ADS)
Cao, Y. Z.; Wang, H. N.; Chen, B.
2017-06-01
Virtual synchronous generator (VSG) control is recommended to stabilize the voltage and frequency in an isolated micro-grid. However, common VSG control is challenged by widely used unbalanced loads, and the associated unbalanced-voltage problem worsens the power quality of the micro-grid. In this paper, the mathematical model of the VSG is presented. Based on an analysis of the positive- and negative-sequence equivalent circuits of the VSG, an approach is proposed to eliminate the negative-sequence voltage of the VSG under unbalanced loads. A delay cancellation method and PI controllers are utilized to identify and suppress the negative-sequence voltages. Simulation results verify the feasibility of the proposed control strategy.
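The sequence extraction underlying such schemes follows the standard symmetrical-components relation (a general sketch, not the paper's specific implementation):

    % With a = e^{j 2\pi/3}, the positive- and negative-sequence voltages are
    V^{+} = \tfrac{1}{3}\,(V_a + a V_b + a^2 V_c), \qquad
    V^{-} = \tfrac{1}{3}\,(V_a + a^2 V_b + a V_c)
    % An unbalanced load makes V^{-} \neq 0; the controller drives V^{-} \to 0
    % (e.g. via PI regulation in the negative-sequence reference frame) while
    % the VSG loop regulates the positive-sequence voltage and frequency.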
A methodology toward manufacturing grid-based virtual enterprise operation platform
NASA Astrophysics Data System (ADS)
Tan, Wenan; Xu, Yicheng; Xu, Wei; Xu, Lida; Zhao, Xianhua; Wang, Li; Fu, Liuliu
2010-08-01
Virtual enterprises (VEs) have become one of the main types of organisations in the manufacturing sector, through which consortium companies organise their manufacturing activities. To be competitive, a VE relies on the complementary core competences of its members through resource sharing and agile manufacturing capacity. The manufacturing grid (M-Grid) is a platform on which production resources can be shared. In this article, an M-Grid-based VE operation platform (MGVEOP) is presented, as it enables the sharing of production resources among geographically distributed enterprises. The performance management system of the MGVEOP is based on the balanced scorecard and has the capacity for self-learning. The study shows that the MGVEOP can make a semi-automated process possible for a VE, and that the proposed MGVEOP is efficient and agile.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yocum, D.R.; Berman, E.; Canal, P.
2007-05-01
As one of the founding members of the Open Science Grid Consortium (OSG), Fermilab enables coherent access to its production resources through the Grid infrastructure system called FermiGrid. This system successfully provides for centrally managed grid services, opportunistic resource access, development of OSG Interfaces for Fermilab, and an interface to the Fermilab dCache system. FermiGrid supports virtual organizations (VOs) including high energy physics experiments (USCMS, MINOS, D0, CDF, ILC), astrophysics experiments (SDSS, Auger, DES), biology experiments (GADU, Nanohub) and educational activities.
Virtual reality and the unfolding of higher dimensions
NASA Astrophysics Data System (ADS)
Aguilera, Julieta C.
2006-02-01
As virtual/augmented reality evolves, the need for spaces that are responsive to structures independent of three-dimensional spatial constraints becomes apparent. The visual medium of computer graphics may also challenge these self-imposed constraints. If one can get used to how projections affect 3D objects in two dimensions, it may also be possible to compose a situation in which to get used to the variations that occur while moving through higher dimensions. The presented application is an enveloping landscape of concave and convex forms, which are determined by the orientation and displacement of the user in relation to a grid made of tesseracts (cubes in four dimensions). The interface accepts input from three-dimensional and four-dimensional transformations, and smoothly displays such interactions in real time. The motion of the user becomes the graphic element, while the higher-dimensional grid references his/her position relative to it. The user learns how motion inputs affect the grid, recognizing a correlation between the input and the transformations. Mapping information onto complex grids in virtual reality is valuable for engineers, artists and users in general, because navigation can be internalized like a dance pattern, further engaging us to maneuver space in order to know and experience it.
Towards a Global Service Registry for the World-Wide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Field, Laurence; Alandes Pradillo, Maria; Di Girolamo, Alessandro
2014-06-01
The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems; from central portals such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in a duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows both information on registered resources (i.e. what should be there) and on available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisations' own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of the information provided. Although internally the GLUE 2.0 information model is used for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisations' configuration databases to be decoupled from the underlying information systems in a transparent way, and hence to simplify any possible future migration due to the evolution of those systems. This paper presents the Global Service Registry architecture, its advantages compared to the current situation, and how it can support the evolution of information systems.
The StratusLab cloud distribution: Use-cases and support for scientific applications
NASA Astrophysics Data System (ADS)
Floros, E.
2012-04-01
The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely computing (life-cycle management of virtual machines), storage, appliance management and networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites; a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. Towards this direction, we have developed and currently provide support for setting up general-purpose computing solutions like Hadoop, MPI and Torque clusters. As far as scientific applications are concerned, the project is collaborating closely with the bioinformatics community in order to prepare VM appliances and deploy optimized services for bioinformatics applications. In a similar manner, additional scientific disciplines like Earth Science can take advantage of StratusLab cloud solutions. Interested users are welcome to join StratusLab's user community by getting access to the reference cloud services deployed by the project and offered to the public.
Energy-aware virtual network embedding in flexi-grid networks.
Lin, Rongping; Luo, Shan; Wang, Haoran; Wang, Sheng
2017-11-27
Network virtualization technology has been proposed to allow multiple heterogeneous virtual networks (VNs) to coexist on a shared substrate network, which increases the utilization of the substrate network. Efficiently mapping VNs onto the substrate network is a major challenge, known as the VN embedding (VNE) problem. Meanwhile, energy efficiency has been widely considered in network design, in terms of both operating expenses and ecological awareness. In this paper, we aim to solve the energy-aware VNE problem in flexi-grid optical networks. We provide an integer linear programming (ILP) formulation to minimize the electricity cost of each arriving VN request. We also propose a polynomial-time heuristic algorithm in which virtual links are embedded sequentially to keep a reasonable acceptance ratio while maintaining a low electricity cost. Numerical results show that the heuristic algorithm performs closely to the ILP for a small network, and we also demonstrate its applicability to larger networks.
NASA Astrophysics Data System (ADS)
Küchler, N.; Kneifel, S.; Kollias, P.; Loehnert, U.
2017-12-01
Cumulus and stratocumulus clouds strongly affect the Earth's radiation budget and are a major source of uncertainty in weather and climate prediction models. To improve and evaluate models, a comprehensive understanding of cloud processes is necessary, and references are needed. Active and passive microwave remote sensing of clouds can be used to derive cloud properties such as liquid water path (LWP) and liquid water content (LWC), which can serve as a reference for model evaluation. However, both the measurements and the assumptions made when retrieving physical quantities from the measurements involve sources of uncertainty. Frisch et al. (1998) combined radar and radiometer observations to derive LWC profiles. Even if their assumptions hold, uncertainties remain that depend on the measurement setup. We investigate how varying beam width, temporal and vertical resolutions, frequency combinations, and the beam overlap of and between the two instruments influence the retrieval of LWC profiles. In particular, we discuss the benefit of combining vertically highly resolved radar and radiometer measurements using the same antenna, i.e. having ideal beam overlap. Frisch, A. S., G. Feingold, C. W. Fairall, T. Uttal, and J. B. Snider, 1998: On cloud radar and microwave radiometer measurements of stratus cloud liquid water profiles. J. Geophys. Res.: Atmos., 103(D18), 23,195-23,197.
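The Frisch et al. (1998) retrieval referenced above scales the radar reflectivity profile by the radiometer liquid water path; a sketch of the published relation (under its assumptions of a height-constant droplet number concentration and a lognormal size distribution, which imply LWC ∝ √Z):

    % Distribute the radiometer LWP along the radar profile in proportion to \sqrt{Z}:
    LWC(z_i) = LWP \cdot \frac{\sqrt{Z_i}}{\sum_j \sqrt{Z_j}\,\Delta z_j}
    % so that \sum_i LWC(z_i)\,\Delta z_i = LWP by construction, while the radar
    % reflectivity profile Z_i fixes the vertical shape of the LWC profile.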
Space-based Science Operations Grid Prototype
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Welch, Clara L.; Redman, Sandra
2004-01-01
Grid technology is an emerging technology that enables widely disparate services to be offered to users economically and conveniently, services that are otherwise not widely available. Under the Grid concept, disparate organizations, generally defined as "virtual organizations," can share services, i.e., discipline-specific computer applications required to accomplish specific scientific and engineering organizational goals and objectives. Grid technology has been enabled by the evolution of increasingly high-speed networking; without that evolution, Grid technology would not have emerged. NASA/Marshall Space Flight Center's (MSFC) Flight Projects Directorate, Ground Systems Department is developing a Space-based Science Operations Grid prototype to provide scientists and engineers the tools necessary to operate space-based science payloads/experiments and for scientists to conduct public and educational outreach. In addition, Grid technology can provide new services not currently available to users. These services include mission voice and video, application sharing, telemetry management and display, payload and experiment commanding, data mining, high-order data processing, discipline-specific application sharing and data storage, all from a single Grid portal. The Prototype will provide most of these services in a first-step demonstration of integrated Grid and space-based science operations technologies. It will initially be based on the International Space Station science operational services located at the Payload Operations Integration Center at MSFC, but it can be applied to many NASA projects, including free-flying satellites and future projects. The Prototype will use the Internet2 Abilene Research and Education Network, currently a 10 Gb backbone network, to reach the University of Alabama in Huntsville and several other, as yet unidentified, Space Station based science experimenters. There is an international aspect to the Grid involving the America's Pathway (AMPath) network, the Chilean REUNA Research and Education Network, and the University of Chile in Santiago, which will further demonstrate how extensively these services can be used. From the user's perspective, the Prototype will provide a single interface and logon to these varied services without the complexity of knowing the wheres and hows of each service. There is a separate and deliberate emphasis on security, which will be addressed by specifically outlining the different approaches and tools used. Grid technology, unlike the Internet, is being designed with security in mind. In addition, we will show the locations, configurations, and network paths associated with each service and virtual organization. We will discuss the separate virtual organizations that we define for the varied user communities. These will include certain, as yet undetermined, space-based science functions and/or processes, and will include specific virtual organizations required for public and educational outreach and for science and engineering collaboration. We will also discuss the Grid Prototype's performance and the potential for further Grid applications in both space-based and ground-based projects and processes. In this paper and presentation we will detail each service and how they are integrated using Grid technology.
NASA Astrophysics Data System (ADS)
Tudose, Alexandru; Terstyansky, Gabor; Kacsuk, Peter; Winter, Stephen
Grid Application Repositories vary greatly in terms of access interface, security system, implementation technology, communication protocols, and repository model. This diversity has become a significant limitation on interoperability and inter-repository access. This paper presents the Grid Application Meta-Repository System (GAMRS) as a solution that offers better options for the management of Grid applications. GAMRS proposes a generic repository architecture that allows any Grid Application Repository (GAR) to be connected to the system independently of its underlying technology. It also presents applications in a uniform manner and makes applications from all connected repositories visible to web search engines, OGSI/WSRF Grid Services, and other OAI (Open Archive Initiative)-compliant repositories. GAMRS can also function as a repository in its own right and can store applications under a new repository model. With the help of this model, applications can be presented as embedded in virtual machines (VMs); they can therefore be run in their native environments and easily deployed on virtualized infrastructures, allowing interoperability with new-generation technologies such as cloud computing, application-on-demand, automatic service/application deployment, and automatic VM generation.
Recent Progress in Thallium Bromide Gamma-Ray Spectrometer Development
NASA Astrophysics Data System (ADS)
Kim, Hadong; Kargar, Alireza; Cirignano, Leonard; Churilov, Alexei; Ciampi, Guido; Higgins, William; Olschner, Fred; Shah, Kanai
2012-02-01
In recent years, progress in processing and crystal growth methods has led to a significant increase in the mobility-lifetime product of electrons in thallium bromide (TlBr). This has enabled single-carrier-collection devices with thicknesses greater than 1 cm to be fabricated. In this paper we report on our latest results from pixelated devices with depth correction, as well as our initial results with Frisch collar devices. After applying depth corrections, an energy resolution of approximately 2% (FWHM at 662 keV) was obtained from a 13-mm thick TlBr array operated at -18°C under continuous bias and irradiation for more than one month. An energy resolution of 2.4% was obtained at room temperature with an 8.4-mm thick TlBr Frisch collar device.
Grid-cell representations in mental simulation
Bellmund, Jacob LS; Deuker, Lorena; Navarro Schröder, Tobias; Doeller, Christian F
2016-01-01
Anticipating the future is a key motif of the brain, possibly supported by mental simulation of upcoming events. Rodent single-cell recordings suggest the ability of spatially tuned cells to represent subsequent locations. Grid-like representations have been observed in the human entorhinal cortex during virtual and imagined navigation. However, it has remained unknown whether grid-like representations contribute to mental simulation in the absence of imagined movement. Participants imagined directions between building locations in a large-scale virtual-reality city while undergoing fMRI, without re-exposure to the environment. Using multi-voxel pattern analysis, we provide evidence for representations of absolute imagined direction at a resolution of 30° in the parahippocampal gyrus, consistent with the head-direction system. Furthermore, we capitalize on the six-fold rotational symmetry of grid-cell firing to demonstrate a 60° periodic pattern-similarity structure in the entorhinal cortex. Our findings imply a role of the entorhinal grid system in mental simulation and future thinking beyond spatial navigation. DOI: http://dx.doi.org/10.7554/eLife.17089.001 PMID:27572056
Magnetohydrodynamic cellular automata
NASA Technical Reports Server (NTRS)
Montgomery, David; Doolen, Gary D.
1987-01-01
A generalization of the hexagonal lattice gas model of Frisch, Hasslacher and Pomeau is shown to lead to two-dimensional magnetohydrodynamics. The method relies on the ideal point-wise conservation law for vector potential.
NASA Astrophysics Data System (ADS)
Aktas, Mehmet; Aydin, Galip; Donnellan, Andrea; Fox, Geoffrey; Granat, Robert; Grant, Lisa; Lyzenga, Greg; McLeod, Dennis; Pallickara, Shrideep; Parker, Jay; Pierce, Marlon; Rundle, John; Sayar, Ahmet; Tullis, Terry
2006-12-01
We describe the goals and initial implementation of the International Solid Earth Virtual Observatory (iSERVO). This system is built using a Web Services approach to Grid computing infrastructure and is accessed via a component-based Web portal user interface. We describe our implementations of services used by this system, including Geographical Information System (GIS)-based data grid services for accessing remote data repositories and job management services for controlling multiple execution steps. iSERVO is an example of a larger trend to build globally scalable scientific computing infrastructures using the Service Oriented Architecture approach. Adoption of this approach raises a number of research challenges in millisecond-latency message systems suitable for internet-enabled scientific applications. We review our research in these areas.
NASA Astrophysics Data System (ADS)
Lim, Jaechang; Choi, Sunghwan; Kim, Jaewook; Kim, Woo Youn
2016-12-01
To assess the performance of multi-configuration methods using exact exchange Kohn-Sham (KS) orbitals, we implemented configuration interaction singles and doubles (CISD) in a real-space numerical grid code. We obtained KS orbitals with the exchange-only optimized effective potential under the Krieger-Li-Iafrate (KLI) approximation. Thanks to the distinctive features of KLI orbitals compared with Hartree-Fock (HF), such as bound virtual orbitals with compact shapes and orbital energy gaps similar to excitation energies, KLI-CISD for small molecules shows much faster convergence as a function of simulation box size and active space (i.e., the number of virtual orbitals) than HF-CISD. The former also gives more accurate excitation energies with a few dominant configurations than the latter, even when the latter uses many more configurations. The systematic control of basis set errors is straightforward in grid bases. Therefore, grid-based multi-configuration methods using exact exchange KS orbitals provide a promising new way to make accurate electronic structure calculations.
Integration of virtualized worker nodes in standard batch systems
NASA Astrophysics Data System (ADS)
Büge, Volker; Hessling, Hermann; Kemp, Yves; Kunze, Marcel; Oberst, Oliver; Quast, Günter; Scheurer, Armin; Synge, Owen
2010-04-01
Current experiments in HEP use only a limited number of operating system flavours, and their software might be validated on only a single OS platform. Resource providers might prefer other operating systems for the installation of the batch infrastructure, especially if a cluster is shared with other communities or with communities that have stricter security requirements. One solution would be to statically divide the cluster into separate sub-clusters; in such a scenario, no opportunistic distribution of the load can be achieved, resulting in poor overall utilization efficiency. Another approach is to make the batch system aware of virtualization and to provide each community with its favoured operating system in a virtual machine. Here, the scheduler has full flexibility, resulting in better overall efficiency of the resources. In our contribution, we present a lightweight concept for the integration of virtual worker nodes into standard batch systems. The virtual machines are started on the worker nodes just before jobs are executed there; no meta-scheduling is introduced. We demonstrate two prototype implementations, one based on the Sun Grid Engine (SGE), the other using Maui/Torque as a batch system. Both solutions support local as well as Grid job submission. The hypervisors currently used are Xen and KVM; a port to another system is easily envisageable. To better handle the different virtual machines on a physical host, the management solution VmImageManager has been developed. We present first experience from running the two prototype implementations. In a last part, we show the potential future use of this lightweight concept when integrated into high-level (i.e., Grid) workflows.
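As a rough illustration of the "VM starts just before the job" step, the sketch below uses the libvirt Python bindings to power on a predefined guest on the worker node and wait for it. The domain name, URI, and timeout are placeholders; the actual prototypes' prolog logic is not described in this abstract.

```python
# Minimal sketch: boot a predefined VM on the worker node before job execution.
import time
import libvirt

def boot_job_vm(name='worker-vm', uri='qemu:///system', timeout=120):
    conn = libvirt.open(uri)
    try:
        dom = conn.lookupByName(name)    # guest is assumed to be predefined
        if not dom.isActive():
            dom.create()                 # power the guest on
        deadline = time.time() + timeout
        while time.time() < deadline:    # crude wait; real code would poll the guest OS
            if dom.isActive():
                return True
            time.sleep(2)
        return False
    finally:
        conn.close()
```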
Managing virtual machines with Vac and Vcycle
NASA Astrophysics Data System (ADS)
McNab, A.; Love, P.; MacMahon, E.
2015-12-01
We compare the Vac and Vcycle virtual machine lifecycle managers and our experiences in providing production job execution services for ATLAS, CMS, LHCb, and the GridPP VO at sites in the UK, France, and at CERN. In both the Vac and Vcycle systems, the virtual machines are created outside of the experiment's job submission and pilot framework. In the case of Vac, a daemon runs on each physical host and manages a pool of virtual machines on that host, and a peer-to-peer UDP protocol is used to achieve the desired target shares between experiments across the site. In the case of Vcycle, a daemon manages a pool of virtual machines on an Infrastructure-as-a-Service cloud system such as OpenStack, and has within itself enough information to create the types of virtual machines that achieve the desired target shares. Both systems allow unused shares of one experiment to be temporarily taken up by other experiments with work to be done. The virtual machine lifecycle is managed with a minimum of information, gathered from the virtual machine creation mechanism (such as libvirt or OpenStack) and using the proposed Machine/Job Features API from WLCG. We demonstrate that the same virtual machine designs can be used to run production jobs on Vac and Vcycle/OpenStack sites for ATLAS, CMS, LHCb, and GridPP, and that these technologies allow sites to be operated in a reliable and robust way.
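The target-share logic both daemons implement can be pictured with a small sketch: start the next VM for the experiment that is furthest below its share among experiments that currently have work, which automatically lets idle experiments' shares flow to busy ones. This is an illustrative reduction, not Vac's or Vcycle's actual code; all names are hypothetical.

```python
# Pick which experiment's VM to start next, given target shares and the
# currently running VM counts; has_work(e) says whether experiment e has jobs.
def next_vm_type(target_shares, running, has_work):
    total = max(sum(running.values()), 1)
    candidates = [e for e in target_shares if has_work(e)]
    if not candidates:
        return None                      # nothing to do: start no VM
    # Smallest ratio of actual share to target share = furthest below target.
    return min(candidates,
               key=lambda e: (running.get(e, 0) / total) / target_shares[e])

print(next_vm_type({'atlas': 0.5, 'lhcb': 0.3, 'cms': 0.2},
                   {'atlas': 6, 'lhcb': 1, 'cms': 3},
                   has_work=lambda e: e != 'cms'))   # -> 'lhcb'
```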
Training Capability Data for Dismounted Soldier Training System
2015-06-01
[Garbled extraction; recoverable fragments follow.] Related evaluations cited include "An Assessment of V-IMTS" (2004), "Evaluation of the Virtual Squad Training System" (2007), and "Perceived Usefulness of TTES: A Second Look" (1995). Glossary fragments: V-IMTS, Virtual Integrated MOUT (Military Operation in Urban Terrain) Training System; VIRTSIM, Virtual... The recoverable abstract text notes that position is reported as a military grid reference system coordinate and that there is currently no indication or capability to determine the distance traveled (e.g., pace count...
Virtual Control Systems Environment (VCSE)
Atkins, Will
2018-02-14
Will Atkins, a Sandia National Laboratories computer engineer, discusses cybersecurity research for process control systems. He explains his work on the Virtual Control Systems Environment project, which is developing a modeling and simulation framework of the U.S. electric grid in order to study and mitigate possible cyberattacks on infrastructure.
iRODS: A Distributed Data Management Cyberinfrastructure for Observatories
NASA Astrophysics Data System (ADS)
Rajasekar, A.; Moore, R.; Vernon, F.
2007-12-01
Large-scale and long-term preservation of both observational and synthesized data requires a system that virtualizes data management concepts. A methodology is needed that can work across long distances in space (distribution) and long periods in time (preservation). The system needs to manage data stored on multiple types of storage systems, including new systems that become available in the future. This concept is called infrastructure independence, and it is typically implemented through virtualization mechanisms. Data grids are built upon the concepts of data and trust virtualization. These concepts enable the management of collections of data that are distributed across multiple institutions, stored on multiple types of storage systems, and accessed by multiple types of clients. Data virtualization ensures that the name spaces used to identify files, users, and storage systems are persistent, even when files are migrated onto future technology. This is required to preserve authenticity, the link between the record and its descriptive and provenance metadata. Trust virtualization ensures that access controls remain invariant as files are moved within the data grid. This is required to track the chain of custody of records over time. The Storage Resource Broker (http://www.sdsc.edu/srb) is one such data grid, used in a wide variety of applications in earth and space sciences such as ROADNet (roadnet.ucsd.edu), SEEK (seek.ecoinformatics.org), GEON (www.geongrid.org) and NOAO (www.noao.edu). Recent extensions to data grids provide one more level of virtualization: policy, or management, virtualization. Management virtualization ensures that the execution of management policies can be automated, and that rules can be created that verify assertions about the shared collections of data. When dealing with distributed large-scale data over long periods of time, the policies used to manage the data and provide assurances about its authenticity become paramount. The integrated Rule-Oriented Data System (iRODS) (http://irods.sdsc.edu) provides the mechanisms needed not only to describe management policies, but also to track how the policies are applied and their execution results. The iRODS data grid maps management policies to rules that control the execution of remote micro-services. As an example, a rule can be created that automatically creates a replica whenever a file is added to a specific collection, or that extracts its metadata automatically and registers it in a searchable catalog. For the replication operation, the persistent state information consists of the replica location, the creation date, the owner, the replica size, etc. The mechanism used by iRODS for providing policy virtualization is based on well-defined functions, called micro-services, which are chained into alternative workflows using rules. A rule engine based on the event-condition-action paradigm executes the rule-based workflows after an event. Rules can be deferred to a pre-determined time or executed on a periodic basis. As the data management policies evolve, the iRODS system can implement new rules, new micro-services, and the new state information (metadata content) needed to manage the new policies. Each sub-collection can be managed using a different set of policies. The discussion of the concepts of rule-based policy virtualization and their application to long-term and large-scale data management for observatories such as ORION and NEON forms the basis of this paper.
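The event-condition-action pattern described above can be sketched generically in a few lines. This is plain Python illustrating the paradigm, not the iRODS rule language; the event names, paths, and actions are invented:

```python
# Tiny event-condition-action engine: on a "put" into a monitored collection,
# replicate the object and register its metadata, as in the abstract's example.
RULES = []

def rule(event, condition):
    def register(action):
        RULES.append((event, condition, action))
        return action
    return register

@rule('put', condition=lambda ctx: ctx['path'].startswith('/observatory/raw/'))
def replicate_and_catalog(ctx):
    print(f"replicate {ctx['path']} to archive resource")      # action 1
    print(f"register metadata for {ctx['path']} in catalog")   # action 2

def fire(event, ctx):
    for ev, cond, action in RULES:
        if ev == event and cond(ctx):
            action(ctx)

fire('put', {'path': '/observatory/raw/run42.dat'})
```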
A Security-façade Library for Virtual-observatory Software
NASA Astrophysics Data System (ADS)
Rixon, G.
2009-09-01
The security-façade library implements IVOA's security standards for Java. It supports the authentication mechanisms for SOAP and REST web services, the sign-on mechanisms (with MyProxy, the AstroGrid Accounts protocol, or local credential caches), the delegation protocol, and RFC3820-enabled HTTPS for Apache Tomcat. Using the façade, a developer who is not a security specialist can easily add access control to a virtual-observatory service and call secured services from an application. The library has been an internal part of AstroGrid software for some time, and it is now offered for use by other developers.
Fully developed turbulence and complex time singularities
NASA Astrophysics Data System (ADS)
Dombre, T.; Gagne, Y.; Hopfinger, E.
The hypothesis of Frisch and Morf (1981), relating intermittent bursts observed in high-pass-filtered turbulent-flow data to complex time singularities in the solution of the Navier-Stokes equations, is tested experimentally. Velocity signals filtered at a high-pass frequency of 1 kHz and a low-pass frequency of 6 kHz are recorded for 7 min at a sampling frequency of 20 kHz in a flow of mean velocity 6.1 m/s, with mesh length d = 7.5 cm, observation point x/d = 40, R_λ = 67, dissipation length η = 0.5 mm, and Kolmogorov frequency f_K ≈ 2 kHz. The results are presented in graphs, and it is shown that the exponential behavior of the energy spectrum settles well before f_K, the spectra of individual bursts having exponential behavior and δ* values consistent with the Frisch-Morf hypothesis, at least for high-amplitude events.
Neutron-multiplicity experiments for enhanced fission modelling
NASA Astrophysics Data System (ADS)
Al-Adili, Ali; Tarrío, Diego; Hambsch, Franz-Josef; Göök, Alf; Jansson, Kaj; Solders, Andreas; Rakapoulos, Vasileios; Gustavsson, Cecilia; Lantz, Mattias; Mattera, Andrea; Oberstedt, Stephan; Prokofiev, Alexander V.; Sundén, Erik A.; Vidali, Marzio; Österlund, Michael; Pomp, Stephan
2017-09-01
The nuclear de-excitation process of fission fragments (FF) provides fundamental information for the understanding of nuclear fission and of nuclear structure in neutron-rich isotopes. The variation of the prompt-neutron multiplicity, ν(A), as a function of the incident neutron energy (En) is one of many open questions. It leads to significantly different treatments in various fission models and implies that experimental data are analyzed based on contradicting assumptions. One critical question is whether the additional excitation energy (Eexc) is manifested through an increase of ν(A) for all fragments or for the heavy ones only. A systematic investigation of ν(A) as a function of En has been initiated. Correlations between prompt-fission neutrons and fission fragments are obtained by using liquid scintillators in conjunction with a Frisch-grid ionization chamber. The proof of principle has been achieved on the reaction 235U(nth,f) at the Van de Graaff (VdG) accelerator of the JRC-Geel using a fully digital data acquisition system. Neutrons from 252Cf(sf) were measured separately to quantify the neutron-scattering component due to surrounding shielding material and to determine the intrinsic detector efficiency. Preliminary results on ν(A) and on the neutron spectrum in correlation with FF properties are presented.
Neutron-Induced Charged Particle Studies at LANSCE
NASA Astrophysics Data System (ADS)
Lee, Hye Young; Haight, Robert C.
2014-09-01
Direct measurements of neutron-induced charged-particle reactions are of interest for nuclear astrophysics and applied nuclear energy. LANSCE (Los Alamos Neutron Science Center) produces neutrons with energies from thermal to several hundred MeV. There has been an effort at LANSCE to upgrade the neutron-induced charged-particle detection technique, which follows on (n,z) measurements made previously here and will have improved capabilities, including larger solid angles, higher efficiency, and better signal-to-background ratios. For studying cross sections of low-energy neutron-induced alpha reactions, a Frisch-gridded ionization chamber has been designed with segmented anodes to improve the signal-to-noise ratio near reaction thresholds. Since double-differential cross sections of (n,p) and (n,α) reactions up to tens of MeV provide important information for deducing nuclear level densities, the ionization chamber will be coupled with double-sided silicon strip detectors (DSSDs) in order to stop energetic charged particles. In this paper, we present the status of this development, including progress on detector design, calibrations, and Monte Carlo simulations. This work is funded by the US Department of Energy - Los Alamos National Security, LLC under Contract DE-AC52-06NA25396.
Establishment of key grid-connected performance index system for integrated PV-ES system
NASA Astrophysics Data System (ADS)
Li, Q.; Yuan, X. D.; Qi, Q.; Liu, H. M.
2016-08-01
In order to further promote integrated, optimized operation of distributed new energy, energy storage, and active load, this paper studies an integrated photovoltaic-energy storage (PV-ES) system connected to the distribution network, and analyzes typical structures and configuration selection for the integrated PV-ES generation system. By combining practical grid-connected characteristics requirements with the technology standards for photovoltaic generation systems, and taking full account of the energy storage system, this paper proposes several new grid-connected performance indexes, such as paralleled current-sharing characteristic, parallel response consistency, adjusting characteristic, virtual moment-of-inertia characteristic, and on-grid/off-grid switching characteristic. A comprehensive and feasible grid-connected performance index system is then established to support grid-connected performance testing of integrated PV-ES systems.
The Dudley Challenges: Virtual Balloon Journeys and Real Learning
ERIC Educational Resources Information Center
Hackett, Shirley; Davies, John; Tibble, Eric
2005-01-01
The Dudley Challenges were developed to celebrate the millennium and the first year of the Dudley Grid for Learning. The three Challenges (Challenge 2000, the original resource; Challenge Europa; and Junior Trek) are based on virtual balloon journeys visiting a series of interesting cultural centres. Access to each centre visited is by solving a series…
A Virtual World of Visualization
NASA Technical Reports Server (NTRS)
1998-01-01
In 1990, Sterling Software, Inc., developed the Flow Analysis Software Toolkit (FAST) for NASA Ames on contract. FAST is a workstation-based modular analysis and visualization tool. It is used to visualize and animate grids and grid-oriented data, typically generated by finite difference, finite element, and other analytical methods. FAST is now available through COSMIC, NASA's software storehouse.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McJunkin, Timothy; Epiney, Aaron; Rabiti, Cristian
2017-06-01
This report summarizes the effort in the Nuclear-Renewable Hybrid Energy System (N-R HES) project on the Level 4 milestone: integrating existing electric grid models, which operate on shorter time intervals, into the optimization factors used by the Risk Analysis Virtual Environment (RAVEN) and Modelica [1] optimizations and economic analyses that have been the focus of the project to date.
Multi-agent grid system Agent-GRID with dynamic load balancing of cluster nodes
NASA Astrophysics Data System (ADS)
Satymbekov, M. N.; Pak, I. T.; Naizabayeva, L.; Nurzhanov, Ch. A.
2017-12-01
This study presents a system designed for automated load balancing of the cluster: it analyses the load of compute nodes and subsequently migrates virtual machines from loaded nodes to less loaded ones. The system increases the performance of cluster nodes and helps in the timely processing of data. The grid system balances the work of cluster nodes; the relevance of the system lies in applying multi-agent balancing to problems of this kind.
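A minimal sketch of the migration step described above: repeatedly move the cheapest VM from the most loaded node to the least loaded one, stopping when the spread is small or a move would no longer shrink it. The data structures and threshold are assumptions for illustration, not the paper's algorithm.

```python
# nodes: {node: [vm, ...]}; vm_loads: {vm: load fraction}; returns moves made.
def rebalance(nodes, vm_loads, threshold=0.2):
    def load(n):
        return sum(vm_loads[vm] for vm in nodes[n])
    migrations = []
    while True:
        hot = max(nodes, key=load)
        cold = min(nodes, key=load)
        gap = load(hot) - load(cold)
        if gap <= threshold or not nodes[hot]:
            return migrations
        vm = min(nodes[hot], key=lambda v: vm_loads[v])  # cheapest move first
        if not 0 < vm_loads[vm] < gap:   # a move would not shrink the spread
            return migrations
        nodes[hot].remove(vm)
        nodes[cold].append(vm)
        migrations.append((vm, hot, cold))

print(rebalance({'n1': ['a', 'b', 'c'], 'n2': ['d']},
                {'a': 0.4, 'b': 0.3, 'c': 0.2, 'd': 0.1}))
```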
Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz
2016-01-01
This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We use only partially segmented image data instead of a full segmentation and thus circumvent the need for surface or volume mesh models. Haptic interaction with the virtual patient is provided during virtual palpation, ultrasound probing, and needle insertion. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated through implementation in CUDA and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To achieve shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed, and the deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results, with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.
Virtual Inertia: Current Trends and Future Directions
Tamrakar, Ujjwol; Shrestha, Dipesh; Maharjan, Manisha; ...
2017-06-26
The modern power system is progressing from a synchronous machine-based system towards an inverter-dominated system, with large-scale penetration of renewable energy sources (RESs) like wind and photovoltaics. RES units today represent a major share of the generation, and the traditional approach of integrating them as grid-following units can lead to frequency instability. Many researchers have pointed towards using inverters with virtual inertia control algorithms so that they appear as synchronous generators to the grid, maintaining and enhancing system stability. Our paper presents a literature review of the current state of the art of virtual inertia implementation techniques, and explores potential research directions and challenges. The major virtual inertia topologies are compared and classified. Through literature review and simulations of selected topologies, it is shown that similar inertial response can be achieved by relating the parameters of these topologies through time constants and inertia constants, although the exact frequency dynamics may vary slightly. The suitability of a topology depends on the system control architecture and the desired level of detail in replicating the dynamics of synchronous generators. We present a discussion of the challenges and research directions, which points out several research needs, especially for system-level integration of virtual inertia systems.
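The role of the emulated inertia constant can be seen in a per-unit swing-equation toy model, 2H d(Δf)/dt = ΔP - D·Δf: a larger H slows the rate of change of frequency after a load step. This is a generic textbook sketch, not one of the surveyed topologies; all parameter values are illustrative.

```python
# Forward-Euler integration of 2H * d(df)/dt = dP - D*df (all per unit).
import numpy as np

def freq_response(H, D=1.0, dP=-0.1, f0=50.0, T=10.0, dt=1e-3):
    n = int(T / dt)
    df = np.zeros(n)                       # per-unit frequency deviation
    for k in range(n - 1):
        df[k + 1] = df[k] + dt * (dP - D * df[k]) / (2.0 * H)
    return f0 * (1.0 + df)                 # back to Hz

low, high = freq_response(H=2.0), freq_response(H=6.0)
print(low.min(), high.min())  # more (virtual) inertia: slower, shallower dip
```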
A virtual data language and system for scientific workflow management in data grid environments
NASA Astrophysics Data System (ADS)
Zhao, Yong
With advances in scientific instrumentation and simulation, scientific data are growing fast in both size and analysis complexity. So-called Data Grids aim to provide a high-performance, distributed data-analysis infrastructure for data-intensive sciences, where scientists distributed worldwide need to extract information from large collections of data and to share both data products and the resources needed to produce and store them. However, the description, composition, and execution of even logically simple scientific workflows are often complicated by the need to deal with "messy" issues like heterogeneous storage formats and ad-hoc file system structures. We show how these difficulties can be overcome via a typed workflow notation called the virtual data language, within which issues of physical representation are cleanly separated from logical typing, and by the implementation of this notation within the context of a powerful virtual data system that supports distributed execution. The resulting language and system are capable of expressing complex workflows in a simple compact form, enacting those workflows in distributed environments, monitoring and recording the execution processes, and tracing the derivation history of data products. We describe the motivation, design, implementation, and evaluation of the virtual data language and system, and the application of the virtual data paradigm in various science disciplines, including astronomy and cognitive neuroscience.
2018-02-13
NETL's Advanced Virtual Energy Simulation Training and Research (AVESTAR) Center is designed to promote operational excellence for the nation's energy systems, from smart power plants to the smart grid. The AVESTAR Center brings together advanced dynamic simulation and control technologies, state-of-the-art simulation-based training facilities, and leading industry experts to focus on the optimal operation of clean energy plants in the smart grid era.
An Adaptive Reputation-Based Algorithm for Grid Virtual Organization Formation
NASA Astrophysics Data System (ADS)
Cui, Yongrui; Li, Mingchu; Ren, Yizhi; Sakurai, Kouichi
A novel adaptive reputation-based algorithm for virtual organization (VO) formation is proposed. It restrains bad performers effectively, based on the global experience of the evaluator, and evaluates the direct trust relation between two grid nodes accurately by rationally consulting the previous trust value. It also improves the reputation evaluation process of the PathTrust model by taking account of the inter-organizational trust relationship and combining it with direct and recommended trust in a weighted way, which makes the algorithm more robust against collusion attacks. Additionally, the proposed algorithm considers the perspective of the VO creator and takes the required VO services as one of the most important fine-grained evaluation criteria, which makes the algorithm more suitable for constructing VOs in grid environments that include autonomous organizations. Simulation results show that our algorithm restrains bad performers and resists fake-transaction attacks and bad-mouthing attacks effectively. It provides a clear advantage in the design of a VO infrastructure.
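The weighted combination the abstract refers to reduces, in the simplest reading, to a convex blend of the three trust sources. Below is a toy sketch with invented weights and an exponentially smoothed direct-trust update; both are assumptions, not the paper's exact formulas.

```python
# Blend direct, recommended, and inter-organizational trust (all in [0, 1]).
def combined_trust(direct, recommended, inter_org, w=(0.5, 0.3, 0.2)):
    return w[0] * direct + w[1] * recommended + w[2] * inter_org

# "Consult the previous trust value rationally": exponential smoothing.
def updated_direct_trust(previous, new_rating, alpha=0.7):
    return alpha * previous + (1 - alpha) * new_rating

print(combined_trust(0.9, 0.6, 0.8))    # 0.79
print(updated_direct_trust(0.79, 0.2))  # a bad transaction drags trust down
```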
Lattice gas simulations of dynamical geometry in two dimensions.
Klales, Anna; Cianci, Donato; Needell, Zachary; Meyer, David A; Love, Peter J
2010-10-01
We present a hydrodynamic lattice gas model for two-dimensional flows on curved surfaces with dynamical geometry. This model is an extension to two dimensions of the dynamical geometry lattice gas model previously studied in one dimension. We expand upon a variation of the two-dimensional flat-space Frisch-Hasslacher-Pomeau (FHP) model created by Frisch [Phys. Rev. Lett. 56, 1505 (1986)] and independently by Wolfram, and modified by Boghosian [Philos. Trans. R. Soc. London, Ser. A 360, 333 (2002)]. We define a hydrodynamic lattice gas model on an arbitrary triangulation whose flat-space limit is the FHP model. Rules that change the geometry are constructed using the Pachner moves, which alter the triangulation but not the topology. We present results on the growth of the number of triangles as a function of time. Simulations show that the number of triangles grows with time as t^(1/3), in agreement with a mean-field prediction. We also present preliminary results on the distribution of curvature for a typical triangulation in these simulations.
Membrane potential dynamics of grid cells
Domnisoru, Cristina; Kinkhabwala, Amina A.; Tank, David W.
2014-01-01
During navigation, grid cells increase their spike rates in firing fields arranged on a strikingly regular triangular lattice, while their spike timing is often modulated by theta oscillations. Oscillatory interference models of grid cells predict theta amplitude modulations of membrane potential during firing field traversals, while competing attractor network models predict slow depolarizing ramps. Here, using in vivo whole-cell recordings, we tested these models by directly measuring grid cell intracellular potentials in mice running along linear tracks in virtual reality. Grid cells had large and reproducible ramps of membrane potential depolarization that were tightly correlated with firing fields, forming their characteristic signature. Grid cells also exhibited intracellular theta oscillations that influenced their spike timing. However, the properties of the theta amplitude modulations were not consistent with the view that they determine firing field locations. Our results support cellular and network mechanisms in which grid fields are produced by slow ramps, as in attractor models, while theta oscillations control spike timing. PMID:23395984
Design and implementation of spatial knowledge grid for integrated spatial analysis
NASA Astrophysics Data System (ADS)
Liu, Xiangnan; Guan, Li; Wang, Ping
2006-10-01
Supported by a spatial information grid (SIG), the spatial knowledge grid (SKG) for integrated spatial analysis utilizes middleware technology to construct the SIG computation environment and spatial information service system, develops spatial-entity-oriented data organization technology, and carries out deep computation of spatial structure and spatial process patterns on the basis of the Grid GIS infrastructure, the spatial data grid, and the spatial information grid (in its specialized definition). At the same time, it realizes complex spatial pattern expression and spatial function process simulation by taking spatial intelligent agents as the core of proactive spatial computation. Moreover, through the establishment of an interactive, immersive virtual geographical environment, complex spatial modeling, networked cooperative work, and knowledge-driven spatial community decision-making are achieved. The framework of SKG is discussed systematically in this paper, and its implementation flow and key technologies are presented with overlay-analysis examples.
The Input-Interface of Webcam Applied in 3D Virtual Reality Systems
ERIC Educational Resources Information Center
Sun, Huey-Min; Cheng, Wen-Lin
2009-01-01
Our research explores a virtual reality application based on a Web camera (webcam) input interface. The interface can replace the mouse, inferring the user's intended direction by the method of frame difference. We divide each frame from the webcam into nine grid cells and make use of background registration to compute the moving object. In order to…
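The frame-difference interface can be sketched with OpenCV: subtract consecutive grayscale frames, threshold, and report which of the nine grid cells contains the most motion as the intended direction. The threshold and the cell-to-direction mapping are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: locate motion in a 3x3 grid over webcam frame differences.
import cv2
import numpy as np

def motion_cell(prev_gray, gray, thresh=25):
    diff = cv2.absdiff(gray, prev_gray)                 # frame difference
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    h, w = mask.shape
    counts = {(r, c): np.count_nonzero(
                  mask[r*h//3:(r+1)*h//3, c*w//3:(c+1)*w//3])
              for r in range(3) for c in range(3)}
    return max(counts, key=counts.get)                  # e.g. (0, 1) = "up"

cap = cv2.VideoCapture(0)
_, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, frame = cap.read()
print(motion_cell(prev, cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
cap.release()
```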
ERIC Educational Resources Information Center
Liu, Chang; Franklin, Teresa; Shelor, Roger; Ozercan, Sertac; Reuter, Jarrod; Ye, En; Moriarty, Scott
2011-01-01
Game-like three-dimensional (3D) virtual worlds have become popular venues for youth to explore and interact with friends. To bring vital financial literacy education to them in places they frequent, a multi-disciplinary team of computer scientists, educators, and financial experts developed a youth-oriented financial literacy education game in…
Column generation algorithms for virtual network embedding in flexi-grid optical networks.
Lin, Rongping; Luo, Shan; Zhou, Jingwei; Wang, Sheng; Chen, Bin; Zhang, Xiaoning; Cai, Anliang; Zhong, Wen-De; Zukerman, Moshe
2018-04-16
Network virtualization provides the means for efficient management of network resources by embedding multiple virtual networks (VNs) that efficiently share the same substrate network. Such virtual network embedding (VNE) gives rise to the challenging problem of how to optimize resource allocation to VNs while guaranteeing their performance requirements. In this paper, we provide VNE algorithms for efficient management of flexi-grid optical networks. We present an exact algorithm aiming to minimize the total embedding cost, in terms of spectrum cost and computation cost, for a single VN request. Then, to achieve scalability, we also develop a heuristic algorithm for the same problem. We apply these two algorithms to a dynamic traffic scenario in which many VN requests arrive one by one. We first demonstrate by simulations, for the case of a six-node network, that the heuristic algorithm obtains blocking probabilities very close to those of the exact algorithm (about 0.2% higher). Then, for a network of realistic size (namely, USnet), we demonstrate that the blocking probability of our new heuristic algorithm is about one order of magnitude lower than that of a simpler heuristic algorithm, which was a component of an earlier published algorithm.
Vision-Based Navigation and Parallel Computing
1990-08-01
[Garbled extraction; recoverable fragments follow.] Cited reports include Behzad Kamgar-Parsi and Behrooz Kamgar-Parsi, "On Problem Solving with Hopfield Neural Networks," CAR-TR-462, CS-TR-... The recoverable text notes that hypercube connections support logarithmic implementations of fundamental parallel algorithms, such as grid permutations and scans, over the pose space, and that a set of virtual processors is used to represent an orthogonal projection grid and projections of the six-dimensional pose space.
The functional micro-organization of grid cells revealed by cellular-resolution imaging
Heys, James G.; Rangarajan, Krsna V.; Dombeck, Daniel A.
2015-01-01
Establishing how grid cells are anatomically arranged, on a microscopic scale, in relation to their firing patterns in the environment would facilitate a greater micro-circuit-level understanding of the brain's representation of space. However, all previous grid cell recordings used electrode techniques that provide limited descriptions of fine-scale organization. We therefore developed a technique for cellular-resolution functional imaging of medial entorhinal cortex (MEC) neurons in mice navigating a virtual linear track, enabling a new experimental approach to study MEC. Using these methods, we show that grid cells are physically clustered in MEC compared to non-grid cells. Additionally, we demonstrate that grid cells are functionally micro-organized: the similarity between the environment firing locations of grid cell pairs varies as a function of the distance between them according to a "Mexican hat"-shaped profile. This suggests that, on average, nearby grid cells have more similar spatial firing phases than those further apart. PMID:25467986
The Integration of CloudStack and OCCI/OpenNebula with DIRAC
NASA Astrophysics Data System (ADS)
Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan
2012-12-01
The increasing availability of Cloud resources is emerging as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing offers an easy way to access resources from both systems efficiently. This paper explains the integration of DIRAC with two open-source Cloud managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools that manage the complexity and heterogeneity of distributed data center infrastructures, allowing virtual clusters to be created on demand, including public, private, and hybrid clouds. This approach required developing an extension to the previous DIRAC Virtual Machine engine, originally developed for Amazon EC2, to allow connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack, the infrastructure has been kept more general, permitting other Virtual Machine sources and operating systems to be used. In both cases, the CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, like the DIRAC Web Portal. The main purpose of this integration is to obtain a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC towards a new conceptual designation as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach against the existing Grid solution.
ERIC Educational Resources Information Center
Maidana, Nora L.; da Fonseca, Monaliza; Barros, Suelen F.; Vanin, Vito R.
2016-01-01
The Virtual Laboratory was created as a complementary educational activity, with the aim of approaching abstract concepts from an experimental point of view. In this work, the motion of a ring rolling and slipping in front of a grid-printed panel was recorded. The frames separated from this video received a time code, and the resulting set of images…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele
The Fermilab Grid and Cloud Computing Department and the KISTI Global Science experimental Data hub Center propose a joint project. The goals are to enable the scientific workflows of stakeholders to run on multiple cloud resources by means of (a) virtual infrastructure automation and provisioning, (b) interoperability and federation of cloud resources, and (c) high-throughput fabric virtualization. This is a matching-fund project in which Fermilab and KISTI will contribute equal resources.
Synergy Between Archives, VO, and the Grid at ESAC
NASA Astrophysics Data System (ADS)
Arviset, C.; Alvarez, R.; Gabriel, C.; Osuna, P.; Ott, S.
2011-07-01
Over the years, in support of the Science Operations Centers at ESAC, we have set up two Grid infrastructures. These have been built: 1) to facilitate daily research for scientists at ESAC; 2) to provide high computing capabilities for project data-processing pipelines (e.g., Herschel); and 3) to support science operations activities (e.g., calibration monitoring). Furthermore, closer collaboration between the science archives, the Virtual Observatory (VO), and data processing activities has led to another Grid use case: the Remote Interface to XMM-Newton SAS Analysis (RISA). This web-service-based system allows users to launch SAS tasks transparently on the Grid, save results on HTTP-based storage, and visualize them through VO tools. This paper presents real, operational Grid use cases in these contexts.
Interfacing HTCondor-CE with OpenStack
NASA Astrophysics Data System (ADS)
Bockelman, B.; Caballero Bejar, J.; Hover, J.
2017-10-01
Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface external Grid users with local batch systems. These new Compute Elements allow for better handling of job requirements and more precise management of diverse local resources. However, despite this level of maturity, the Grid Computing world lacks diversity in local execution platforms. As Grid Computing technologies have historically been driven by the needs of the High Energy Physics community, most resource providers run the platform (operating system version and architecture) that best suits the needs of their particular users. In parallel, the development of virtualization and cloud technologies has accelerated recently, making available a variety of solutions, both commercial and academic, proprietary and open source. Virtualization facilitates performing computational tasks on platforms not available at most computing sites. This work attempts to join the technologies, allowing users to interact with computing sites through one of the standard Compute Elements, HTCondor-CE, but running their jobs within VMs on a local cloud platform, OpenStack, when needed. The system re-routes, in a transparent way, end-user jobs into dynamically launched VM worker nodes when the jobs have requirements that cannot be satisfied by the static local batch system nodes. Also, once the automated mechanisms are in place, it becomes straightforward to allow an end user to invoke a custom Virtual Machine at the site. This allows cloud resources to be used without requiring the user to establish a separate account. Both scenarios are described in this work.
ERIC Educational Resources Information Center
Jacobson, Bonnie
Certain major thinkers regard both change and anger as inevitable aspects of human living. A system called Re-Creative Psychology (originally conceived by Paul Frisch) addresses one way to frame anger in order to create constructive change. It is an organized system of behavior management which deals with three broad categories:…
Stepping Outside the Normed Sample: Implications for Validity
ERIC Educational Resources Information Center
Hays, Danica G.; Wood, Chris
2017-01-01
We present considerations for validity when a population outside of a normed sample is assessed and those data are interpreted. Using a career group counseling example exploring life satisfaction changes as evidenced by the Quality of Life Inventory (Frisch, 1994), we showcase qualitative and quantitative approaches to explore how normative data…
Total Kinetic Energy and Fragment Mass Distribution of Neutron-Induced Fission of U-233
DOE Office of Scientific and Technical Information (OSTI.GOV)
Higgins, Daniel James; Schmitt, Kyle Thomas; Mosby, Shea Morgan
Properties of fission in U-233 were studied at the Los Alamos Neutron Science Center (LANSCE) at incident neutron energies from thermal to 40 MeV, at both the Lujan Neutron Scattering Center flight path 12 (FP12) and the WNR flight path 90-Left (90L), from Dec 2016 to Jan 2017. Fission fragments are observed in coincidence using a twin ionization chamber with Frisch grids. The average total kinetic energy (TKE) released in fission and the fragment mass distributions are calculated from the energy deposited in the detector together with conservation of mass and momentum. Accurate experimental measurements of these parameters are necessary to better understand the fission process and to obtain data needed for calculating criticality. The average TKE released in fission has been well characterized for several isotopes at thermal neutron energy; however, few measurements have been made at fast neutron energies. This experiment expands on previous successful experiments that used an ionization chamber to measure TKE and fragment mass distributions of U-235, U-238, and Pu-239. The experiment requires the full spectrum of neutron energies and can therefore only be performed at a small number of facilities in the world. The required full neutron energy spectrum is obtained by combining measurements from WNR 90L and Lujan FP12 at LANSCE.
NASA Astrophysics Data System (ADS)
Khryachkov, Vitaly; Goverdovskii, Andrei; Ketlerov, Vladimir; Mitrofanov, Vecheslav; Sergachev, Alexei
2018-03-01
Binary fission of 232Th and 238U induced by fast neutrons has been under intensive investigation at the IPPE in recent years. These measurements were performed with a twin ionization chamber with Frisch grids. Signals from the detector were digitized for further processing with specially developed software, yielding the kinetic energies, masses, directions, and Bragg curves of the registered fission fragments. Total statistics of a few million fission events were collected in each experiment. It was discovered that for several combinations of fission-fragment masses, the total kinetic energy was very close to the total free energy of the fissioning system. The probability of such fission events in fast-neutron-induced fission was found to be much higher than in spontaneous fission of 252Cf or thermal-neutron-induced fission of 235U. For the experiments with the 238U target, the incident neutron energies were 5 MeV and 6.5 MeV. Close analysis of the dependence of the fission-fragment distribution on the compound-nucleus excitation energy suggests an explanation of the phenomenon: a process in the highly excited compound nucleus may lead the fissioning system from the scission point into the fusion valley with high probability.
10B(n,α)7Li and 10B(n,α1γ)7Li cross section data up to 3 MeV incident neutron energy
NASA Astrophysics Data System (ADS)
Bevilacqua, Riccardo; Hambsch, Franz-Josef; Vidali, Marzio; Ruskov, Ivan; Lamia, Livio
2017-09-01
The 10B(n,α) reaction cross section is a well-established neutron cross-section standard for incident neutron energies up to 1 MeV. However, above this energy limit only scarce direct (n,α) measurements are available, and these few experimental data show large inconsistencies with each other. These discrepancies are reflected in the evaluated data libraries: ENDF/B-VII.1, JEFF-3.1.2, and JENDL-4.0 are in excellent agreement up to 100 keV incident neutron energy, whereas the 10B(n,α) data in the different libraries show large differences in the MeV region. To address these inconsistencies, we have measured the cross sections of the two branches of the 10B(n,α) reaction for incident neutron energies up to 3 MeV. We present here the 10B(n,α) and 10B(n,α1γ) reaction cross-section data, their branching ratio, and the total 10B(n,α) reaction cross section. The measurements were conducted with a dedicated Frisch-grid ionization chamber installed at the GELINA pulsed neutron source of the EC-JRC. We compare our results with existing experimental data and evaluations.
Extensible Interest Management for Scalable Persistent Distributed Virtual Environments
1999-12-01
[Garbled extraction; recoverable fragments follow.] The cited interest-management scheme (Calvin, Cebula et al., 1995; Morse, Bic et al., 2000) uses two grids, with each grid cell having two multicast addresses, through which an entity expresses interest... A figure on entity distribution for experimental runs is unrecoverable. References include "Multiple Users and Shared Applications with VRML," VRML 97, Monterey, CA, pp. 33-40, and Calvin, J. O., D. P. Cebula, et al. (1995), "Data Subscription..."
Telemedical applications and grid technology
NASA Astrophysics Data System (ADS)
Graschew, Georgi; Roelofs, Theo A.; Rakowsky, Stefan; Schlag, Peter M.; Kaiser, Silvan; Albayrak, Sahin
2005-11-01
Based on experience from the exploitation of previous European telemedicine projects, an open Euro-Mediterranean consortium proposes the Virtual Euro-Mediterranean Hospital (VEMH) initiative. Providing the same advanced technologies to the European and Mediterranean countries should contribute to a better dialogue for integration. VEMH aims to facilitate the interconnection of various services through real integration, which must take into account the social, human, and cultural dimensions. VEMH will provide a platform consisting of satellite and terrestrial links for medical e-learning, real-time telemedicine, and medical assistance. The methodologies for the VEMH are medical-needs-driven instead of technology-driven. They supply new management tools for virtual medical communities and allow the management of clinical outcomes for the implementation of evidence-based medicine. Because of the distributed character of the VEMH, Grid technology becomes inevitable for the successful deployment of its services. Existing Grid engines provide the basic computing power needed by today's medical analysis tasks but lack other capabilities needed for the envisioned communication and knowledge-sharing services. When it comes to heterogeneous systems shared by different institutions, the high-level system management areas in particular are still unsupported. Therefore, a Metagrid Engine is needed that provides a superset of functionalities across different Grid engines and manages strong privacy and quality-of-service constraints at this comprehensive level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Happenny, Sean F.
The United States’ power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power distribution networks utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security and the networks protecting them are becoming easier to breach. Providing a virtual power substation network to each student team at the National Collegiate Cyber Defense Competition, thereby supporting the education of future cyber security professionals, is another way PNNL is helping to strengthen the security of the nation’s power infrastructure.
Virtual sensors for robust on-line monitoring (OLM) and Diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, Ramakrishna; Lerchen, Megan E.; Ramuhalli, Pradeep
Unscheduled shutdown of nuclear power facilities for recalibration and replacement of faulty sensors can be expensive and disruptive to grid management. In this work, we present virtual (software) sensors that can replace a faulty physical sensor for a short duration, allowing recalibration to be safely deferred to a later time. The virtual sensor uses a Gaussian process model to process input data from redundant and other nearby sensors. Predictions include uncertainty bounds accounting for spatial association uncertainty as well as measurement noise and error. Using data from an instrumented cooling water flow loop testbed, the virtual sensor model has predicted correct sensor measurements, and the associated error, corresponding to a faulty sensor.
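A minimal sketch of the Gaussian-process idea, assuming scikit-learn and synthetic stand-in data (three hypothetical neighbour sensors; none of the names or numbers come from the report):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Training period: readings from redundant/nearby sensors (inputs) and the
# target sensor (output) while it was still healthy. Synthetic stand-in data.
X_train = rng.uniform(20.0, 80.0, size=(200, 3))            # three neighbour sensors
y_train = X_train @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.5, 200)

# RBF kernel captures smooth spatial association; WhiteKernel models sensor noise.
gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(0.25), normalize_y=True)
gp.fit(X_train, y_train)

# After the physical sensor is declared faulty, the GP substitutes for it.
X_now = rng.uniform(20.0, 80.0, size=(5, 3))
y_hat, y_std = gp.predict(X_now, return_std=True)           # value + uncertainty bound
print(np.round(y_hat, 2), np.round(y_std, 2))
```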
A web system of virtual morphometric globes for Mars and the Moon
NASA Astrophysics Data System (ADS)
Florinsky, I. V.; Garov, A. S.; Karachevtseva, I. P.
2018-09-01
We developed a web system of virtual morphometric globes for Mars and the Moon. As the initial data, we used 15-arc-minute gridded global digital elevation models (DEMs) extracted from the Mars Orbiter Laser Altimeter (MOLA) and the Lunar Orbiter Laser Altimeter (LOLA) gridded archives. We derived global digital models of sixteen morphometric variables including horizontal, vertical, minimal, and maximal curvatures, as well as catchment area and topographic index. The morphometric models were integrated into the web system, developed as a distributed application consisting of a client front-end and a server back-end. The following main functions are implemented in the system: (1) selection of a morphometric variable; (2) two-dimensional visualization of a calculated global morphometric model; (3) 3D visualization of a calculated global morphometric model on the sphere surface; (4) change of globe scale; and (5) globe rotation by an arbitrary angle. Free, real-time web access to the system is provided. The web system of virtual morphometric globes can be used for geological and geomorphological studies of Mars and the Moon at the global, continental, and regional scales.
Grist: grid-based data mining for astronomy
NASA Technical Reports Server (NTRS)
Jacob, Joseph C.; Katz, Daniel S.; Miller, Craig D.; Walia, Harshpreet; Williams, Roy; Djorgovski, S. George; Graham, Matthew J.; Mahabal, Ashish; Babu, Jogesh; Berk, Daniel E. Vanden;
2004-01-01
The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.
Grist: Grid-based Data Mining for Astronomy
NASA Astrophysics Data System (ADS)
Jacob, J. C.; Katz, D. S.; Miller, C. D.; Walia, H.; Williams, R. D.; Djorgovski, S. G.; Graham, M. J.; Mahabal, A. A.; Babu, G. J.; vanden Berk, D. E.; Nichol, R.
2005-12-01
The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the ``hyperatlas'' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.
NASA Astrophysics Data System (ADS)
Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam
2018-03-01
We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera, and the point cloud model is reconstructed virtually. Because each point of the point cloud is assigned precisely to the coordinates of its layer, the points can be classified into grids according to their depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
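A rough sketch of the layer-wise idea, assuming the standard angular-spectrum propagator (all optical parameters below are invented for illustration; the paper's exact diffraction kernel and sampling are not specified here):

```python
import numpy as np

wavelength = 532e-9                      # assumed laser wavelength (m)
N, pitch = 512, 8e-6                     # hologram samples and pixel pitch (m)

rng = np.random.default_rng(1)
points = rng.uniform([-1e-3, -1e-3, 0.10], [1e-3, 1e-3, 0.11], size=(2000, 3))

# 1) Grid the point cloud by depth: points collapse onto a few discrete layers.
layers = np.round(points[:, 2], 3)       # 1 mm depth quantization -> ~11 layers

# Angular-spectrum transfer-function support (one set of grids for all layers).
fx = np.fft.fftfreq(N, d=pitch)
FX, FY = np.meshgrid(fx, fx)
kz = 2 * np.pi * np.sqrt(np.maximum(0.0, wavelength**-2 - FX**2 - FY**2))

hologram = np.zeros((N, N), dtype=complex)
for z in np.unique(layers):
    # 2) Rasterize this layer's points into a complex amplitude grid.
    u = np.zeros((N, N), dtype=complex)
    pts = points[layers == z]
    ix = (pts[:, 0] / pitch + N // 2).astype(int)
    iy = (pts[:, 1] / pitch + N // 2).astype(int)
    u[iy, ix] = 1.0
    # 3) One FFT-based propagation per layer instead of one summation per point.
    hologram += np.fft.ifft2(np.fft.fft2(u) * np.exp(1j * kz * z))

print(hologram.shape, float(np.abs(hologram).max()))
```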
Performance of a Heterogeneous Grid Partitioner for N-body Applications
NASA Technical Reports Server (NTRS)
Harvey, Daniel J.; Das, Sajal K.; Biswas, Rupak
2003-01-01
An important characteristic of distributed grids is that they allow geographically separated multicomputers to be tied together in a transparent virtual environment to solve large-scale computational problems. However, many of these applications require effective runtime load balancing for the resulting solutions to be viable. Recently, we developed a latency tolerant partitioner, called MinEX, specifically for use in distributed grid environments. This paper compares the performance of MinEX to that of METIS, a popular multilevel family of partitioners, using simulated heterogeneous grid configurations. A solver for the classical N-body problem is implemented to provide a framework for the comparisons. Experimental results show that MinEX provides superior quality partitions while being competitive to METIS in speed of execution.
The Perceived Quality of Life among School District Superintendents in Illinois Public Schools
ERIC Educational Resources Information Center
Heffernan, Debra J.
2012-01-01
The purpose of this study was to determine the perception of quality of life among Illinois male and female superintendents, and to determine demographic differences. Frisch's Quality of Life Inventory (QOLI) was used, which measured perceived levels of importance, satisfaction and weighted satisfaction (importance and satisfaction) in sixteen…
Lithium Borides - High Energy Materials
2000-02-28
1993, 99, 7983. (32) Pulay, P.; Hamilton, T. P. J. Chem. Phys. 1988, 88, 4926. (33) Frisch, M. J.; Trucks, G. W.; Schlegel, H. B.; Gill, P. M. W... [25] P.V. Sudhakar, K. Lammertsma, J. Chem. Phys. 99 (1993) 7929. [26] M.J. van der Woerd, K. Lammertsma, B.J. Duke, H.F. Schaefer, III, J
Ban, Tomohiro; Ohue, Masahito; Akiyama, Yutaka
2018-04-01
The identification of comprehensive drug-target interactions is important in drug discovery. Although numerous computational methods have been developed over the years, a gold standard technique has not been established. Computational ligand docking and structure-based drug design allow researchers to predict the binding affinity between a compound and a target protein, and thus, they are often used to virtually screen compound libraries. In addition, docking techniques have also been applied to the virtual screening of target proteins (inverse docking) to predict target proteins of a drug candidate. Nevertheless, a more accurate docking method is currently required. In this study, we proposed a method in which a predicted ligand-binding site is covered by multiple grids, termed multiple grid arrangement. Notably, multiple grid arrangement facilitates the conformational search for grid-based ligand docking software and can be applied to the state-of-the-art commercial docking software Glide (Schrödinger, LLC). We validated the proposed method by re-docking with the Astex diverse benchmark dataset and blind binding site situations, which improved the correct prediction rate of the top-scoring docking pose from 27.1% to 34.1%; however, only a slight improvement in target prediction accuracy was observed with inverse docking scenarios. These findings highlight the limitations and challenges of current scoring functions and the need for more accurate docking methods. The proposed multiple grid arrangement method was implemented in Glide by modifying a cross-docking script for Glide, xglide.py. The script of our method is freely available online at http://www.bi.cs.titech.ac.jp/mga_glide/. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sulakhe, D.; Rodriguez, A.; Wilde, M.
2008-03-01
Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated, scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system, which makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper does not go into the details of the problems involved or the lessons learned in using individual Grid resources, as these have already been published in our paper on the genome analysis research environment (GNARE); it focuses primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.
NASA Astrophysics Data System (ADS)
Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; Rocca, Giuseppe La; Maggi, Giorgio Pietro; Milanesi, Luciano; Vicario, Saverio
This paper depicts the solution proposed by INFN to allow users who do not own a personal digital certificate, and therefore do not belong to any specific Virtual Organization (VO), to access Grid infrastructures via the GENIUS Grid portal enabled with robot certificates. Robot certificates, also known as portal certificates, are associated with a specific application that the user wants to share with the whole Grid community; they have recently been introduced by the EUGridPMA (European Policy Management Authority for Grid Authentication) to perform automated tasks on Grids on behalf of users. They have proven extremely useful for automating grid service monitoring, data processing production, distributed data collection systems, etc. In this paper, robot certificates have been used to allow bioinformaticians involved in the Italian LIBI project to perform large-scale phylogenetic analyses. The distributed environment set up in this work strongly simplifies Grid access for occasional users and represents a valuable step toward widening the community of Grid users.
FermiGrid - experience and future plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chadwick, K.; Berman, E.; Canal, P.
2007-09-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure--the successes and the problems.
Interoperable PKI Data Distribution in Computational Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pala, Massimiliano; Cholia, Shreyas; Rea, Scott A.
One of the most successful working examples of virtual organizations, computational grids need authentication mechanisms that inter-operate across domain boundaries. Public Key Infrastructures (PKIs) provide sufficient flexibility to allow resource managers to securely grant access to their systems in such distributed environments. However, as PKIs grow and services are added to enhance both security and usability, users and applications must struggle to discover available resources-particularly when the Certification Authority (CA) is alien to the relying party. This article presents how to overcome these limitations of the current grid authentication model by integrating the PKI Resource Query Protocol (PRQP) into the Grid Security Infrastructure (GSI).
Testing the seismology-based landquake monitoring system
NASA Astrophysics Data System (ADS)
Chao, Wei-An
2016-04-01
I have developed a real-time landquake monitoring (RLM) system, which monitors large-scale landquake activity in Taiwan using the real-time seismic network of the Broadband Array in Taiwan for Seismology (BATS). The RLM system applies a grid-based general source inversion (GSI) technique to obtain a preliminary source location and force mechanism. A 2-D virtual source grid on the Taiwan Island is created with an interval of 0.2° in both latitude and longitude; the depth of each grid point is fixed on the free-surface topography. A database of synthetics is stored on disk; these are obtained using Green's functions, computed with the propagator-matrix approach for a 1-D average velocity model, at all stations from each virtual source-grid point for nine elementary source components: six elementary moment tensors and three orthogonal (north, east and vertical) single forces. The offline RLM system was run on events detected in previous studies. An important aspect of the RLM system is the implementation of the GSI approach for different source types (e.g., full moment tensor, double-couple faulting, and explosion source): a grid search through the 2-D virtual source grid automatically identifies landquake events based on the improvement in waveform fitness and evaluates the best-fit solution in the monitoring area. With this approach, not only the force mechanism but also the event occurrence time and location can be obtained simultaneously, about 6-8 min after an event occurs. To improve the limited accuracy of the GSI-determined location, I further apply a landquake epicenter determination (LED) method that maximizes the coherency of the high-frequency (1-3 Hz) horizontal envelope functions to determine the final source location. With good knowledge of the source location, I perform a landquake force history (LFH) inversion to investigate the source dynamics (e.g., trajectory) of relatively large landquake events. By providing the aforementioned source information in real time, the government and emergency response agencies have sufficient reaction time for rapid assessment of and response to landquake hazards. The RLM system has operated online since 2016.
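A toy sketch of the grid-search source inversion described above, assuming precomputed synthetics and using variance reduction as the waveform-fitness measure (all arrays are random stand-ins; station geometry, filtering and the origin-time search are omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_elem = 600, 9   # concatenated waveform length; 6 moment-tensor + 3 force terms
n_grid = 50                  # virtual source-grid points (0.2 deg spacing in the real system)

# Synthetic database: for each grid point, the waveforms produced by each of the
# nine elementary sources, concatenated over all stations/components (random here).
G = rng.normal(size=(n_grid, n_samples, n_elem))
d = rng.normal(size=n_samples)               # observed, filtered waveforms (stand-in)

best_vr, best_idx, best_m = -np.inf, None, None
for i in range(n_grid):
    # Least-squares weights of the elementary sources at this grid point.
    m, *_ = np.linalg.lstsq(G[i], d, rcond=None)
    resid = d - G[i] @ m
    vr = 1.0 - (resid @ resid) / (d @ d)     # variance reduction = waveform fitness
    if vr > best_vr:
        best_vr, best_idx, best_m = vr, i, m

print(f"best grid point {best_idx}, variance reduction {best_vr:.3f}")
```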
AliEn—ALICE environment on the GRID
NASA Astrophysics Data System (ADS)
Saiz, P.; Aphecetche, L.; Bunčić, P.; Piskač, R.; Revsbech, J.-E.; Šego, V.; Alice Collaboration
2003-04-01
AliEn ( http://alien.cern.ch) (ALICE Environment) is a Grid framework built on top of the latest Internet standards for information exchange and authentication (SOAP, PKI) and common Open Source components. AliEn provides a virtual file catalogue that allows transparent access to distributed datasets and a number of collaborating Web services which implement the authentication, job execution, file transport, performance monitor and event logging. In the paper we will present the architecture and components of the system.
Uniformity on the grid via a configuration framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Igor V Terekhov et al.
2003-03-11
As Grid permeates modern computing, Grid solutions continue to emerge and take shape. Grid development projects continue to provide higher-level services that evolve in functionality and operate with application-level concepts that are often specific to the virtual organizations using them. Physically, however, grids are comprised of sites whose resources are diverse and seldom project readily onto a grid's set of concepts. In practice, this also creates problems for the site administrators who actually instantiate grid services. In this paper, we present a flexible, uniform framework to configure a grid site and its facilities, and otherwise describe the resources and services it offers. We start from a site configuration and instantiate services for resource advertisement, monitoring and data handling; we also apply our framework to hosting-environment creation. We use our ideas in the Information Management part of the SAM-Grid project, a grid system which will deliver petabyte-scale data to hundreds of users. Our users are High Energy Physics experimenters who are scattered worldwide across dozens of institutions and always use facilities that are shared with other experiments as well as other grids. Our implementation represents information in the XML format and includes tools written in XQuery and XSLT.
[Parallel virtual reality visualization of extremely large medical datasets].
Tang, Min
2010-04-01
On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with the Intranet and commonly configured computers of hospitals. Several kernel techniques are introduced, including the hardware structure, software framework, load balancing, and virtual reality visualization. The Maximum Intensity Projection algorithm is realized in parallel using a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated and cut interactively and conveniently through the control panel built on the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising, real-time results, playing the role of a good assistant in making clinical diagnoses.
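Maximum Intensity Projection parallelizes naturally because max is associative, so each node can project its own slab and the partial results can be merged. A minimal sketch using Python's multiprocessing as a stand-in for the PC cluster:

```python
import numpy as np
from multiprocessing import Pool

def mip_slab(slab):
    # Maximum Intensity Projection of one slab along the viewing (z) axis.
    return slab.max(axis=0)

if __name__ == "__main__":
    volume = np.random.default_rng(3).random((256, 512, 512)).astype(np.float32)
    slabs = np.array_split(volume, 4, axis=0)   # one slab per worker/cluster node
    with Pool(processes=4) as pool:
        partials = pool.map(mip_slab, slabs)    # project each slab independently
    mip = np.maximum.reduce(partials)           # max is associative: merge partials
    assert np.array_equal(mip, volume.max(axis=0))
```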
FermiGrid—experience and future plans
NASA Astrophysics Data System (ADS)
Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Sharma, N.; Timm, S.; Yocum, D. R.
2008-07-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.
Estimating scatter in cone beam CT with striped ratio grids: A preliminary investigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsieh, Scott, E-mail: sshsieh@stanford.edu
2016-09-15
Purpose: To propose a new method for estimating scatter in x-ray imaging. Conventional antiscatter grids reject scatter at an efficiency that is constant or slowly varying over the surface of the grid. A striped ratio antiscatter grid, composed of stripes that alternate between high and low grid ratio, could be used instead. Such a striped ratio grid would reduce scatter-to-primary ratio as a conventional grid would, but more importantly, the signal discontinuities at the boundaries of stripes can be used to estimate local scatter content. Methods: Signal discontinuities provide information on scatter, but are contaminated by variation in primary radiation. A nonlinear image processing algorithm is used to estimate the scatter content in the presence of primary variation. We emulated a striped ratio grid by imaging phantoms with two sequential CT scans, one with and one without a conventional grid. These two scans are processed together to mimic a striped ratio grid. This represents a best case limit of the striped ratio grid, in that the extent of grid ratio modulation is very high and the scatter contrast is maximized. Results: In a uniform cylinder, the striped ratio grid virtually eliminates cupping. Artifacts from scatter are improved in an anthropomorphic phantom. Some banding artifacts are induced by the striped ratio grid. Conclusions: Striped ratio grids could be a simple and effective evolution of conventional antiscatter grids. Construction and validation of a physical prototype remains an important future step.
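The core of the idea is that two adjacent stripes see (nearly) the same primary but very different scatter, giving two equations in two unknowns at each stripe boundary. A toy Python sketch under that idealization (all transmission values are invented; the paper's actual algorithm is a nonlinear image-processing step that also handles primary variation):

```python
import numpy as np

# Assumed grid transmissions (illustrative): primary passes both stripe types
# nearly equally; scatter is rejected much more strongly by high-ratio stripes.
Tp = 0.70                  # primary transmission, both stripes (approximation)
Ts_hi, Ts_lo = 0.05, 0.40  # scatter transmission under high/low grid-ratio stripes

def estimate_scatter(I_hi, I_lo):
    """Solve the 2x2 linear system at a stripe boundary:
       I_hi = Tp*P + Ts_hi*S,   I_lo = Tp*P + Ts_lo*S
    for primary P and scatter S, assuming both are locally smooth."""
    S = (I_lo - I_hi) / (Ts_lo - Ts_hi)
    P = (I_hi - Ts_hi * S) / Tp
    return P, S

# True P = 1000, S = 400 -> signals on either side of a boundary:
I_hi = 0.70 * 1000 + 0.05 * 400
I_lo = 0.70 * 1000 + 0.40 * 400
print(estimate_scatter(I_hi, I_lo))   # recovers (1000.0, 400.0)
```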
Operating a production pilot factory serving several scientific domains
NASA Astrophysics Data System (ADS)
Sfiligoi, I.; Würthwein, F.; Andrews, W.; Dost, J. M.; MacNeill, I.; McCrea, A.; Sheripon, E.; Murphy, C. W.
2011-12-01
Pilot infrastructures are becoming prominent players in the Grid environment. One of the major advantages is represented by the reduced effort required by the user communities (also known as Virtual Organizations or VOs) due to the outsourcing of the Grid interfacing services, i.e. the pilot factory, to Grid experts. One such pilot factory, based on the glideinWMS pilot infrastructure, is being operated by the Open Science Grid at University of California San Diego (UCSD). This pilot factory is serving multiple VOs from several scientific domains. Currently the three major clients are the analysis operations of the HEP experiment CMS, the community VO HCC, which serves mostly math, biology and computer science users, and the structural biology VO NEBioGrid. The UCSD glidein factory allows the served VOs to use Grid resources distributed over 150 sites in North and South America, in Europe, and in Asia. This paper presents the steps taken to create a production quality pilot factory, together with the challenges encountered along the road.
Accessing eSDO Solar Image Processing and Visualization through AstroGrid
NASA Astrophysics Data System (ADS)
Auden, E.; Dalla, S.
2008-08-01
The eSDO project is funded by the UK's Science and Technology Facilities Council (STFC) to integrate Solar Dynamics Observatory (SDO) data, algorithms, and visualization tools with the UK's Virtual Observatory project, AstroGrid. In preparation for the SDO launch in January 2009, the eSDO team has developed nine algorithms covering coronal behaviour, feature recognition, and global / local helioseismology. Each of these algorithms has been deployed as an AstroGrid Common Execution Architecture (CEA) application so that they can be included in complex VO workflows. In addition, the PLASTIC-enabled eSDO "Streaming Tool" online movie application allows users to search multi-instrument solar archives through AstroGrid web services and visualise the image data through galleries, an interactive movie viewing applet, and QuickTime movies generated on-the-fly.
A Nuclear Tech Course = Nuclear Technology in War and Peace: A Study of Issues and Choices.
ERIC Educational Resources Information Center
Shanebrook, J. Richard
A nuclear technology college course for engineering students is outlined and described. The course begins with an historical account of the scientific discoveries leading up to the uranium experiments of Hahn and Strassman in Germany and the subsequent explanation of nuclear fission by Meitner and Frisch. The technological achievements of the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Laszewski, G.; Foster, I.; Gawor, J.
In this paper we report on the features of the Java Commodity Grid Kit. The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus protocols, allowing the Java CoG Kit to communicate also with the C Globus reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise, and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus software. In this paper we also report on the efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure-Java resource management system that enables one to run Globus jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.
A Virtual Study of Grid Resolution on Experiments of a Highly-Resolved Turbulent Plume
NASA Astrophysics Data System (ADS)
Maisto, Pietro M. F.; Marshall, Andre W.; Gollner, Michael J.; Fire Protection Engineering Department Collaboration
2017-11-01
An accurate representation of sub-grid scale turbulent mixing is critical for modeling fire plumes and smoke transport. In this study, PLIF and PIV diagnostics are used with the saltwater modeling technique to provide highly-resolved instantaneous field measurements in unconfined turbulent plumes useful for statistical analysis, physical insight, and model validation. The effect of resolution was investigated employing a virtual interrogation window (of varying size) applied to the high-resolution field measurements. Motivated by LES low-pass filtering concepts, the high-resolution experimental data in this study can be analyzed within the interrogation windows (i.e. statistics at the sub-grid scale) and on interrogation windows (i.e. statistics at the resolved scale). A dimensionless resolution threshold (L/D*) criterion was determined to achieve converged statistics on the filtered measurements. Such a criterion was then used to establish the relative importance between large and small-scale turbulence phenomena while investigating specific scales for the turbulent flow. First order data sets start to collapse at a resolution of 0.3D*, while for second and higher order statistical moments the interrogation window size drops down to 0.2D*.
Privacy protection in HealthGrid: distributing encryption management over the VO.
Torres, Erik; de Alfonso, Carlos; Blanquer, Ignacio; Hernández, Vicente
2006-01-01
Grid technologies have proven to be very successful in tackling challenging problems in which data access and processing is a bottleneck. Notwithstanding the benefits that Grid technologies could bring to Health applications, privacy leakages in current DataGrid technologies, due to the sharing of data in VOs and the use of remote resources, hamper their widespread adoption. Privacy control has thus become a key requirement for the adoption of Grids in the Healthcare sector. Encrypted storage of confidential data effectively reduces the risk of disclosure. A self-enforcing scheme for encrypted data storage can be achieved by combining Grid security systems with distributed key management and classical cryptography techniques. Virtual Organizations, as the main unit of user management in Grid, can provide a way to organize key sharing, access control lists and secure encryption management. This paper provides programming models and discusses the value, costs and behavior of such a system implemented on top of one of the latest Grid middlewares. This work is partially funded by the Spanish Ministry of Science and Technology in the frame of the project Investigación y Desarrollo de Servicios GRID: Aplicación a Modelos Cliente-Servidor, Colaborativos y de Alta Productividad, with reference TIC2003-01318.
Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale
NASA Astrophysics Data System (ADS)
Barrios, M. I.
2013-12-01
Hydrological science requires a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneity and non-linearity of these processes make the development of multiscale conceptualizations difficult, so understanding scaling is a key issue for advancing the science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Compared with field experimentation, numerical simulations can deal with a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations for discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and averaging of the flow at the point scale. Results show numerical stability issues under particular conditions, reveal the complex, non-linear relationships between the models' parameters at the two scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved their ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
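A minimal sketch of the point-scale building block and the averaging step, using the standard Green-Ampt relations (all parameter values are arbitrary; the paper's storage-model counterpart and inverse-simulation step are not reproduced):

```python
import numpy as np

def green_ampt_F(t, K=1e-6, psi=0.2, dtheta=0.3, iters=60):
    """Cumulative infiltration F(t) [m] from the implicit Green-Ampt relation
    F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta)), solved by fixed-point iteration.
    K [m/s], suction head psi [m], moisture deficit dtheta [-] (arbitrary values)."""
    pd = psi * dtheta
    F = max(K * t, 1e-9)                        # initial guess
    for _ in range(iters):
        F = K * t + pd * np.log(1.0 + F / pd)
    return F

def infiltration_rate(F, K=1e-6, psi=0.2, dtheta=0.3):
    """Potential infiltration rate f = K*(1 + psi*dtheta/F) under ponding."""
    return K * (1.0 + psi * dtheta / F)

t = 3600.0                                      # one hour of ponded infiltration
F = green_ampt_F(t)
print(F, infiltration_rate(F))

# Grid-cell flux as the average of point-scale fluxes over a lognormal K field,
# mirroring the averaging linkage between the two scales described above.
Ks = np.random.default_rng(4).lognormal(mean=np.log(1e-6), sigma=1.0, size=1000)
f_cell = np.mean([infiltration_rate(green_ampt_F(t, K=k), K=k) for k in Ks])
print(f_cell)
```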
NASA Astrophysics Data System (ADS)
Schnase, J. L.; Duffy, D. Q.; Tamkin, G. S.; Strong, S.; Ripley, D.; Gill, R.; Sinno, S. S.; Shen, Y.; Carriere, L. E.; Brieger, L.; Moore, R.; Rajasekar, A.; Schroeder, W.; Wan, M.
2011-12-01
Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of specialized virtual climate data servers, repetitive cloud provisioning, image-based deployment and distribution, and virtualization-as-a-service. A virtual climate data server (vCDS) is an OAIS-compliant, iRODS-based data server designed to support a particular type of scientific data collection. iRODS is data grid middleware that provides policy-based control over collection-building, managing, querying, accessing, and preserving large scientific data sets. We have developed prototype vCDSs to manage NetCDF, HDF, and GeoTIFF data products. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into these virtualized resources, multiple vCDSs can use iRODS's federation and realized object capabilities to create an integrated ecosystem of data servers that can scale and adapt to changing requirements. This approach enables platform- or software-as-a-service deployment of the vCDSs and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services, and a path for migrating existing services into the cloud. We have registered MODIS Atmosphere data products in a vCDS that contains 54 million registered files, 630 TB of data, and over 300 million metadata values. We are now assembling IPCC AR5 data into a production vCDS that will provide the platform upon which NCCS's Earth System Grid (ESG) node publishes to the extended science community. In this talk, we describe our approach, experiences, lessons learned, and plans for the future.
A desktop system of virtual morphometric globes for Mars and the Moon
NASA Astrophysics Data System (ADS)
Florinsky, I. V.; Filippov, S. V.
2017-03-01
Global morphometric models can be useful for earth and planetary studies. Virtual globes - programs implementing interactive three-dimensional (3D) models of planets - are increasingly used in geo- and planetary sciences. We describe the development of a desktop system of virtual morphometric globes for Mars and the Moon. As the initial data, we used 15'-gridded global digital elevation models (DEMs) extracted from the Mars Orbiter Laser Altimeter (MOLA) and the Lunar Orbiter Laser Altimeter (LOLA) gridded archives. For two celestial bodies, we derived global digital models of several morphometric attributes, such as horizontal curvature, vertical curvature, minimal curvature, maximal curvature, and catchment area. To develop the system, we used Blender, the free open-source software for 3D modeling and visualization. First, a 3D sphere model was generated. Second, the global morphometric maps were imposed to the sphere surface as textures. Finally, the real-time 3D graphics Blender engine was used to implement rotation and zooming of the globes. The testing of the developed system demonstrated its good performance. Morphometric globes clearly represent peculiarities of planetary topography, according to the physical and mathematical sense of a particular morphometric variable.
Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud
NASA Astrophysics Data System (ADS)
Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde
2014-06-01
The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Laszewski, G.; Gawor, J.; Lane, P.
In this paper we report on the features of the Java Commodity Grid Kit (Java CoG Kit). The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus Toolkit protocols, allowing the Java CoG Kit to also communicate with the services distributed as part of the C Globus Toolkit reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus Toolkit software. In this paper we also report on the efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure-Java resource management system that enables one to run Grid jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.
Development of Armenian-Georgian Virtual Observatory
NASA Astrophysics Data System (ADS)
Mickaelian, Areg; Kochiashvili, Nino; Astsatryan, Hrach; Harutyunian, Haik; Magakyan, Tigran; Chargeishvili, Ketevan; Natsvlishvili, Rezo; Kukhianidze, Vasil; Ramishvili, Giorgi; Sargsyan, Lusine; Sinamyan, Parandzem; Kochiashvili, Ia; Mikayelyan, Gor
2009-10-01
The Armenian-Georgian Virtual Observatory (ArGVO) project is the first initiative in the world to create a regional VO infrastructure based on national VO projects and a regional Grid. The Byurakan and Abastumani Astrophysical Observatories have been scientific partners since 1946, following the establishment of the Byurakan observatory. The Armenian VO project (ArVO) has been under development since 2005 and is a part of the International Virtual Observatory Alliance (IVOA). It is based on the Digitized First Byurakan Survey (DFBS, the digitized version of the famous Markarian survey) and other Armenian archival data. Similarly, the Georgian VO will be created to serve as a research environment for utilizing the digitized Georgian plate archives. Therefore, one of the main goals in creating the regional VO is the digitization of the large number of plates preserved in the plate stacks of these two observatories; in total there are more than 100,000 plates. Observational programs of high importance have been selected, and some 3000 plates will be digitized during the next two years, with priority defined by the usefulness of the material for future science projects, such as searches for new objects, optical identifications of radio, IR, and X-ray sources, and studies of variability and proper motions. Once the digitized material is available in VO standards, a VO database will be active through the regional Grid infrastructure. This partnership is being carried out in the framework of the ISTC project A-1606 "Development of Armenian-Georgian Grid Infrastructure and Applications in the Fields of High Energy Physics, Astrophysics and Quantum Physics".
VOMS/VOMRS utilization patterns and convergence plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ceccanti, A.; /INFN, CNAF; Ciaschini, V.
2010-01-01
The Grid community uses two well-established registration services, which allow users to be authenticated under the auspices of Virtual Organizations (VOs). The Virtual Organization Membership Service (VOMS), developed in the context of the Enabling Grids for E-sciencE (EGEE) project, is an Attribute Authority service that issues attributes expressing membership information of a subject within a VO. VOMS allows partitioning users into groups and assigning them roles and free-form attributes, which are then used to drive authorization decisions. The VOMS administrative application, VOMS-Admin, manages and populates the VOMS database with membership information. The Virtual Organization Management Registration Service (VOMRS), developed at Fermilab, extends the basic registration and management functionalities present in VOMS-Admin. It implements a registration workflow that requires VO usage policy acceptance and membership approval by administrators. VOMRS supports management of multiple grid certificates, and handles users' requests for group and role assignments and membership status. VOMRS is capable of interfacing to local systems with personnel information (e.g. the CERN Human Resource Database) and of pulling relevant member information from them. VOMRS synchronizes the relevant subset of information with VOMS. The recent development of new features in VOMS-Admin raises the possibility of rationalizing the support and converging on a single solution by continuing and extending existing collaborations between EGEE and OSG. Such a strategy is supported by WLCG, OSG, US CMS, US ATLAS, and other stakeholders worldwide. In this paper, we analyze the features in use by major experiments and the use cases for registration addressed by the mature single solution.
VOMS/VOMRS utilization patterns and convergence plan
NASA Astrophysics Data System (ADS)
Ceccanti, A.; Ciaschini, V.; Dimou, M.; Garzoglio, G.; Levshina, T.; Traylen, S.; Venturi, V.
2010-04-01
The Grid community uses two well-established registration services, which allow users to be authenticated under the auspices of Virtual Organizations (VOs). The Virtual Organization Membership Service (VOMS), developed in the context of the Enabling Grids for E-sciencE (EGEE) project, is an Attribute Authority service that issues attributes expressing membership information of a subject within a VO. VOMS allows partitioning users into groups and assigning them roles and free-form attributes, which are then used to drive authorization decisions. The VOMS administrative application, VOMS-Admin, manages and populates the VOMS database with membership information. The Virtual Organization Management Registration Service (VOMRS), developed at Fermilab, extends the basic registration and management functionalities present in VOMS-Admin. It implements a registration workflow that requires VO usage policy acceptance and membership approval by administrators. VOMRS supports management of multiple grid certificates, and handles users' requests for group and role assignments and membership status. VOMRS is capable of interfacing to local systems with personnel information (e.g. the CERN Human Resource Database) and of pulling relevant member information from them. VOMRS synchronizes the relevant subset of information with VOMS. The recent development of new features in VOMS-Admin raises the possibility of rationalizing the support and converging on a single solution by continuing and extending existing collaborations between EGEE and OSG. Such a strategy is supported by WLCG, OSG, US CMS, US ATLAS, and other stakeholders worldwide. In this paper, we analyze the features in use by major experiments and the use cases for registration addressed by the mature single solution.
Neutron induced fission cross section measurements of 240Pu and 242Pu
NASA Astrophysics Data System (ADS)
Belloni, F.; Eykens, R.; Heyse, J.; Matei, C.; Moens, A.; Nolte, R.; Plompen, A. J. M.; Richter, S.; Sibbens, G.; Vanleeuw, D.; Wynants, R.
2017-09-01
Accurate neutron-induced fission cross sections of 240Pu and 242Pu are required to make nuclear technology safer and more efficient, in view of the upcoming needs of the next generation of nuclear power plants (GEN-IV). These reactions figure in the NEA Nuclear Data High Priority Request List [1]. A measurement campaign to determine the neutron-induced fission cross sections of 240Pu and 242Pu at 2.51 MeV and 14.83 MeV was carried out at the 3.7 MV Van de Graaff accelerator of the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig. Two identical Frisch-grid fission chambers, housing back to back a 238U target and a APu target (A = 240 or A = 242), were employed to detect the total fission yield. The targets were molecular-plated on 0.25 mm aluminium foils kept at ground potential, and the counting gas was P10. The neutron fluence was measured with the proton recoil telescope (T1), which is the German primary standard for neutron fluence measurements. The two measurements were related using a De Pangher long counter and the beam charge as monitors. The experimental results have an average uncertainty of 3-4% at 2.51 MeV and 6-8% at 14.83 MeV, and have been compared to the data available in the literature.
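Fission cross sections measured this way are typically evaluated relative to the 238U(n,f) standard via a count-rate ratio. A hedged sketch of that arithmetic (every number below is an invented placeholder, not a PTB result):

```python
# Ratio-method arithmetic against the 238U(n,f) standard; every number below is
# an invented placeholder, not a PTB measurement value.
sigma_U238 = 0.54        # assumed 238U(n,f) standard cross section at 2.51 MeV (b)
C_Pu, C_U = 11800, 9500  # background-corrected fission counts in the two chambers
N_ratio = 1.15           # atom-number ratio N(238U)/N(APu) from the target assay
eff_ratio = 1.01         # ratio of chamber detection efficiencies

sigma_Pu = sigma_U238 * (C_Pu / C_U) * N_ratio * eff_ratio
print(f"sigma(APu(n,f)) ~ {sigma_Pu:.3f} b")
```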
High-Performance Tiled WMS and KML Web Server
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2007-01-01
This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.
Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann
2015-01-01
Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable to dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to efficiently simulate intense, finely detailed fluids such as smoke, where the numbers of vortex filaments and smoke particles grow rapidly. The authors propose a novel vortex-filaments-in-grids scheme in which uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports a trade-off between simulation speed and the scale of details. After the full velocity field is computed, external control can easily be exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of the proposed scheme for visually plausible smoke simulation with macroscopic vortex structures.
2000-09-29
of the birth of new physics and astronomy, and as contribution to obscure rhetoric in speculative quantum physics texts. In fact, not only...Copernican system has to be valid (Mysterium Cosmographicum). (One might, however, with justification doubt that the system presented by Copernicus in his...Kepleri astronomi Opera Omnia, Vol. I. Edidit Christian Frisch. Frankofurti a.M.-Erlangae, Heyder & Zimmer 1858-1871. (Johannes Kepler, Gesammelte Werke
Rupturing the Codes: The Use of Drama and Dramatic Literature in the History Classroom.
ERIC Educational Resources Information Center
Leistler, John D.
This paper discusses plays and companion art pieces suitable for use in the United States history classroom. After a poster from a production of Max Frisch's "Biedermann und die Brandstifter," the paper presents a list of 18 questions ("lenses") for the study of plays with a historical connection; a list of 15 plays for…
Caring for the Caregiver: The Use of Music and Music Therapy in Grief and Trauma
ERIC Educational Resources Information Center
Loewy, Joanne V., Ed.; Hara, Andrea Frisch, Ed.
2002-01-01
A collection of reflections on music therapy interventions provided as a part of the New York City Music Therapy Relief Project, sponsored by AMTA and the Recording Academy after September 11th, 2001. Edited by Joanne V. Loewy and Andrea Frisch Hara. Each chapter is written by a different therapist involved in the project.
A Heliosphere Buffeted by Interstellar Turbulence?
NASA Astrophysics Data System (ADS)
Jokipii, J. R.; Giacalone, J.
2014-12-01
Recent observations from IBEX, combined with previous measurements from other sources, suggest new, local effects of interstellar turbulence. Observations of various interstellar parameters such as the magnetic field, fluid velocity and electron density, over large spatial scales, have revealed a broadband Kolmogorov spectrum of interstellar turbulence which pervades most of interstellar space. The outer scale (or coherence scale) of this turbulence is found to be approximately 10^19 cm, and the inner cutoff scale is less than 1000 km. The root-mean-square relative fluctuation in the fluid and magnetic-field parameters is of order unity. If this turbulence exists at the heliosphere, the root-mean-square relative fluctuation at 100 (heliospheric) AU scales is approximately 0.1. The recently published value for the change in observed velocity direction of the interstellar flow relative to the heliosphere (Frisch et al. 2014) is consistent with this. Similarly, interpreting the width of the IBEX ribbon in terms of a fluctuating magnetic field is also in agreement with this picture. Observations of TeV cosmic rays can also be explained. Potential effects of these fluctuations in the interstellar medium on the heliosphere will be discussed. Reference: Frisch et al., Science, 341, 480.
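The quoted ~0.1 fluctuation level at 100 AU follows from the Kolmogorov one-third-power scaling between the outer scale and the scale of interest; a quick back-of-the-envelope check in Python (order-unity amplitude at the outer scale assumed, as stated above):

```python
# Kolmogorov one-third-power scaling of rms fluctuations between scales,
# delta(L) ~ delta(L_outer) * (L / L_outer)**(1/3).
AU_cm = 1.496e13
L = 100 * AU_cm          # heliospheric scale of interest (cm)
L_outer = 1e19           # outer (coherence) scale of the turbulence (cm)
delta_outer = 1.0        # order-unity fluctuations at the outer scale, as stated

delta = delta_outer * (L / L_outer) ** (1.0 / 3.0)
print(f"rms relative fluctuation at 100 AU ~ {delta:.2f}")   # ~0.05, of order 0.1
```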
Voxel inversion of airborne electromagnetic data for improved model integration
NASA Astrophysics Data System (ADS)
Fiandaca, Gianluca; Auken, Esben; Kirkegaard, Casper; Vest Christiansen, Anders
2014-05-01
Inversion of electromagnetic data has migrated from single-site interpretations to inversions covering entire surveys, using spatial constraints to obtain geologically reasonable results. However, the model space is usually tied to the actual observation points: for airborne electromagnetic (AEM) surveys, the spatial discretization of the model space reflects the flight lines. In contrast, geological and groundwater models most often refer to a regular voxel grid that is not correlated with the geophysical model space, so the geophysical information has to be relocated for integration into (hydro)geological models. We have developed a new geophysical inversion algorithm that works directly on a voxel grid disconnected from the actual measuring points, which allows it to inform geological/hydrogeological models directly. The new voxel model space defines the soil properties (such as resistivity) on a set of nodes, and the distribution of the soil properties is computed everywhere by means of an interpolation function (e.g. inverse distance or kriging). Given this definition of the voxel model space, the 1D forward responses of the AEM data are computed as follows: 1) a 1D model subdivision, in terms of model thicknesses, is defined for each 1D data set, creating "virtual" layers; 2) the "virtual" 1D models at the sounding positions are finalized by interpolating the soil properties (the resistivity) at the centers of the "virtual" layers; 3) the forward response is computed in 1D for each "virtual" model. We tested the new inversion scheme on an AEM survey carried out with the SkyTEM system close to Odder, Denmark. The survey comprises 106054 dual-mode AEM soundings and covers an area of approximately 13 km x 16 km. The voxel inversion was carried out on a structured grid of 260 x 325 x 29 xyz nodes (50 m xy spacing), for a total of 2450500 inversion parameters. A classical spatially constrained inversion (SCI) was carried out on the same data set, using 106054 spatially constrained 1D models with 29 layers; for comparison, the SCI inversion models were gridded on the same grid as the voxel inversion. The new voxel inversion and the classic SCI give similar data fits and inversion models. The voxel inversion decouples the geophysical model from the positions of the acquired data while fitting the data as well as the classic SCI inversion. Compared to the classic approach, the voxel inversion is better suited for directly informing (hydro)geological models and for sequential/joint/coupled (hydro)geological inversion. We believe that this new approach will facilitate the integration of geophysics, geology and hydrology for improved groundwater and environmental management.
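Step 2 of the forward computation above hinges on interpolating node properties to the "virtual" layer centres. A compact sketch of the inverse-distance variant (geometry and resistivity values are synthetic placeholders):

```python
import numpy as np

def idw(nodes_xyz, nodes_logrho, query_xyz, power=2.0, eps=1e-12):
    """Inverse-distance interpolation of node log-resistivities onto arbitrary
    points (here: the centres of the 'virtual' 1D layers under a sounding)."""
    d = np.linalg.norm(nodes_xyz[None, :, :] - query_xyz[:, None, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ nodes_logrho) / w.sum(axis=1)

rng = np.random.default_rng(5)
nodes = rng.uniform([0, 0, 0], [1000, 1000, 100], size=(200, 3))   # voxel-grid nodes (m)
logrho = rng.normal(np.log10(50.0), 0.3, size=200)                 # log10 resistivity

# 'Virtual' layer centres for one sounding position, 29 layers down to 100 m:
z_centres = np.linspace(2.0, 98.0, 29)
layers = np.column_stack([np.full(29, 500.0), np.full(29, 500.0), z_centres])

rho_layers = 10 ** idw(nodes, logrho, layers)   # model fed to the 1D forward response
print(np.round(rho_layers[:5], 1))
```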
NASA Technical Reports Server (NTRS)
Swinbank, Richard; Purser, James
2006-01-01
Recent years have seen a resurgence of interest in a variety of non-standard computational grids for global numerical prediction. The motivation has been to reduce problems associated with the converging meridians and the polar singularities of conventional regular latitude-longitude grids. A further impetus has come from the adoption of massively parallel computers, for which it is necessary to distribute work equitably across the processors; this is more practicable for some non-standard grids. Desirable attributes of a grid for high-order spatial finite differencing are: (i) geometrical regularity; (ii) a homogeneous and approximately isotropic spatial resolution; (iii) a low proportion of the grid points where the numerical procedures require special customization (such as near coordinate singularities or grid edges). One family of grid arrangements which, to our knowledge, has never before been applied to numerical weather prediction, but which appears to offer several technical advantages, are what we shall refer to as "Fibonacci grids". They can be thought of as mathematically ideal generalizations of the patterns occurring naturally in the spiral arrangements of seeds and fruit found in sunflower heads and pineapples (to give two of the many botanical examples). These grids possess virtually uniform and highly isotropic resolution, with an equal area for each grid point. There are only two compact singular regions on a sphere that require customized numerics. We demonstrate the practicality of these grids in shallow water simulations, and discuss the prospects for efficiently using these frameworks in three-dimensional semi-implicit and semi-Lagrangian weather prediction or climate models.
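One common construction of such a grid places point i in an equal-area latitude band with longitudes stepped by the golden angle; a short sketch (this generic "Fibonacci sphere" recipe may differ in detail from the authors' exact formulation):

```python
import numpy as np

def fibonacci_grid(n):
    """n quasi-uniform points on the unit sphere via the golden-angle spiral,
    an equal-area construction in the spirit of the 'Fibonacci grids' above."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))   # ~2.39996 rad
    i = np.arange(n)
    z = 1.0 - (2.0 * i + 1.0) / n                 # equal-area latitude bands
    lon = golden_angle * i
    r = np.sqrt(1.0 - z * z)
    return np.column_stack([r * np.cos(lon), r * np.sin(lon), z])

pts = fibonacci_grid(4000)
# Equal-area property: each point owns ~4*pi/n steradians regardless of latitude.
print(pts.shape, bool(np.allclose(np.linalg.norm(pts, axis=1), 1.0)))
```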
Decentralized control of units in smart grids for the support of renewable energy supply
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonnenschein, Michael, E-mail: Michael.Sonnenschein@Uni-Oldenburg.DE; Lünsdorf, Ontje, E-mail: Ontje.Luensdorf@OFFIS.DE; Bremer, Jörg, E-mail: Joerg.Bremer@Uni-Oldenburg.DE
Due to the significant environmental impact of power production from fossil fuels and nuclear fission, future energy systems will increasingly rely on distributed and renewable energy sources (RES). The electrical feed-in from photovoltaic (PV) systems and wind energy converters (WEC) varies greatly over both short and long time periods (from minutes to seasons), and not only for this reason the supply of electrical power from RES and the demand for electrical power do not match per se. In addition, with a growing share of generation capacity especially in distribution grids, the top-down paradigm of electricity distribution is gradually being replaced by a bottom-up power supply. Altogether, this leads to new problems regarding the safe and reliable operation of power grids. In order to address these challenges, the notion of Smart Grids has been introduced. The inherent flexibilities, i.e. the sets of feasible power schedules, of distributed power units have to be controlled in order to support demand-supply matching as well as stable grid operation. Controllable power units are e.g. combined heat and power plants, power storage systems such as batteries, and flexible power consumers such as heat pumps. By controlling the flexibilities of these units, we are able in particular to optimize the local utilization of RES feed-in in a given power grid by integrating both supply- and demand-management measures with special respect to the electrical infrastructure. In this context, decentralized systems, autonomous agents and the concept of self-organizing systems will become key elements of the ICT-based control of power units. In this contribution, we first show how a decentralized load management system for battery charging/discharging of electric vehicles (EVs) can increase the locally used share of supply from PV systems in a low-voltage grid. For reliable demand-side management of large sets of appliances, dynamic clustering of these appliances into uniformly controlled appliance sets is necessary. We introduce a method for self-organized clustering for this purpose and show how control of such clusters can affect load peaks in distribution grids. Subsequently, we give a short overview of how we intend to expand the idea of self-organized clusters of units into a virtual control center for dynamic virtual power plants (DVPP) offering products on a power market. For an efficient organization of DVPPs, the flexibilities of units have to be represented in a compact and easy-to-use manner. We give an introduction to how the problem of representing a set of possibly 10^100 feasible schedules can be solved by a machine-learning approach (see the sketch below). In summary, this article provides an overall impression of how we use agent-based control techniques and methods of self-organization to support the further integration of distributed and renewable energy sources into power grids and energy markets. Highlights:
• Distributed load management for electric vehicles supports local supply from PV.
• Appliances can self-organize into so-called virtual appliances for load control.
• Dynamic VPPs can be controlled by extensively decentralized control centers.
• Flexibilities of units can efficiently be represented by support-vector descriptions.
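As a hedged sketch of the support-vector idea mentioned above, the following code (hypothetical battery-schedule constraints; scikit-learn's OneClassSVM standing in for whatever learner the authors use) learns a compact description of a feasible-schedule set and tests membership of new schedules:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
T, P_MAX, E_MAX = 24, 5.0, 20.0   # hypothetical battery: kW limit, kWh budget

def sample_feasible(n):
    """Draw feasible schedules: |p_t| <= P_MAX and total throughput <= E_MAX."""
    p = rng.uniform(-P_MAX, P_MAX, size=(n, T))
    e = np.abs(p).sum(axis=1, keepdims=True)
    return p * np.minimum(1.0, E_MAX / e)            # scale into the budget

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(sample_feasible(2000))                     # learn the feasible set

test = np.vstack([sample_feasible(3),
                  rng.uniform(-3 * P_MAX, 3 * P_MAX, (3, T))])  # gross violations
print(model.predict(test))          # +1 ~ feasible, -1 ~ infeasible (approximate)
```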
Nbody Simulations and Weak Gravitational Lensing using new HPC-Grid resources: the PI2S2 project
NASA Astrophysics Data System (ADS)
Becciani, U.; Antonuccio-Delogu, V.; Costa, A.; Comparato, M.
2008-08-01
We present the main project of the new grid infrastructure and the research activities that have already started in Sicily and will be completed by next year. The PI2S2 project of the COMETA consortium is funded by the Italian Ministry of University and Research and will be completed in 2009. Funds come from the European Union Structural Funds for Objective 1 regions. The project, together with a similar project called Trinacria GRID Virtual Laboratory (Trigrid VL), aims to create in Sicily a computational grid for e-science and e-commerce applications with the main goal of increasing the technological innovation of local enterprises and their competitiveness on the global market. The PI2S2 project aims to build and develop an e-Infrastructure in Sicily, based on the grid paradigm, mainly for research activity using the grid environment and high-performance computing systems. As an example, we present the first results of a new grid version of FLY, a tree N-body code developed by the INAF Astrophysical Observatory of Catania and already published in the CPC Program Library, which will be used in the weak gravitational lensing field.
Integrating existing software toolkits into VO system
NASA Astrophysics Data System (ADS)
Cui, Chenzhou; Zhao, Yong-Heng; Wang, Xiaoqian; Sang, Jian; Luo, Ze
2004-09-01
Virtual Observatory (VO) is a collection of interoperating data archives and software tools. Taking advantage of the latest information technologies, it aims to provide a data-intensive online research environment for astronomers all around the world. A large number of high-quality astronomical software packages and libraries are powerful and easy to use, and have been widely used by astronomers for many years. Integrating those toolkits into the VO system is a necessary and important task for VO developers. The VO architecture depends heavily on Grid and Web services, so the general VO integration route is "Java Ready - Grid Ready - VO Ready". In this paper, we discuss the importance of VO integration for existing toolkits and possible solutions. We introduce two efforts in this field from the China-VO project, "gImageMagick" and "Galactic abundance gradients statistical research under grid environment". We also discuss what additional work should be done to convert Grid services to VO services.
[Tumor Data Interacted System Design Based on Grid Platform].
Liu, Ying; Cao, Jiaji; Zhang, Haowei; Zhang, Ke
2016-06-01
In order to satisfy the demands of massive, heterogeneous tumor clinical data processing and of multi-center collaborative diagnosis and treatment of tumor diseases, a Tumor Data Interacted System (TDIS) was established on a grid platform, realizing a virtualized platform for tumor diagnosis services that shares tumor information in real time under standardized management. The system adopts Globus Toolkit 4.0 to build an open grid service framework and encapsulates data resources based on the Web Services Resource Framework (WSRF). It uses middleware technology to provide a unified access interface for heterogeneous data interaction, which optimizes the interactive process with virtualized services to query and call tumor information resources flexibly. For massive amounts of heterogeneous tumor data, a federated storage and multiple-authorization mode is selected as the security mechanism, with real-time monitoring and load balancing. The system can cooperatively manage multi-center heterogeneous tumor data to realize tumor patient data query, sharing and analysis, and can compare and match resources in typical clinical databases or clinical information databases at other service nodes; thus it can assist doctors in consulting similar cases and drawing up multidisciplinary treatment plans for tumors. Consequently, the system can improve the efficiency of tumor diagnosis and treatment, and promote the development of a collaborative tumor diagnosis model.
The Medium and the Message: Oral History, New Media, and a Grassroots History of Working Women
ERIC Educational Resources Information Center
Meyerowitz, Ruth; Zinni, Christine F.
2009-01-01
In the Spring of 2000, Ruth Meyerowitz and Christine Zinni began collaborative efforts--inside and outside of academia--to enhance a course on The History of Working Women at SUNY Buffalo. Videotaping the oral histories of women labor leaders, they later teamed up with Michael Frisch and Randforce Associates--a research group at SUNY at Buffalo's…
Final Report on Contract F49620-85-C-0026. Volume 1.
1987-05-01
[OCR residue from scanned front matter removed; recoverable citation:] V. Yakhot, R. Panda, U. Frisch, and R. H. Kraichnan, Weak Interactions and Local Order in Strong Turbulence, Phys. Rev. Lett. (1986), submitted.
Service-Oriented Architecture for NVO and TeraGrid Computing
NASA Technical Reports Server (NTRS)
Jacob, Joseph; Miller, Craig; Williams, Roy; Steenberg, Conrad; Graham, Matthew
2008-01-01
The National Virtual Observatory (NVO) Extensible Secure Scalable Service Infrastructure (NESSSI) is a Web service architecture and software framework that enables Web-based astronomical data publishing and processing on grid computers such as the National Science Foundation's TeraGrid. Characteristics of this architecture include the following: (1) Services are created, managed, and upgraded by their developers, who are trusted users of computing platforms on which the services are deployed. (2) Service jobs can be initiated by means of Java or Python client programs run on a command line or with Web portals. (3) Access is granted within a graduated security scheme in which the size of a job that can be initiated depends on the level of authentication of the user.
Context-dependent spatially periodic activity in the human entorhinal cortex
Nguyen, T. Peter; Török, Ágoston; Shen, Jason Y.; Briggs, Deborah E.; Modur, Pradeep N.; Buchanan, Robert J.
2017-01-01
The spatially periodic activity of grid cells in the entorhinal cortex (EC) of the rodent, primate, and human provides a coordinate system that, together with the hippocampus, informs an individual of its location relative to the environment and encodes the memory of that location. Among the most defining features of grid-cell activity are the 60° rotational symmetry of grids and preservation of grid scale across environments. Grid cells, however, do display a limited degree of adaptation to environments. It remains unclear if this level of environment invariance generalizes to human grid-cell analogs, where the relative contribution of visual input to the multimodal sensory input of the EC is significantly larger than in rodents. Patients diagnosed with intractable epilepsy who were implanted with entorhinal cortical electrodes and performed virtual navigation tasks to memorized locations enabled us to investigate associations between grid-like patterns and environment. Here, we report that the activity of human entorhinal cortical neurons exhibits adaptive scaling in grid period, grid orientation, and rotational symmetry in close association with changes in environment size, shape, and visual cues, suggesting scale invariance of the frequency, rather than the wavelength, of spatially periodic activity. Our results demonstrate that neurons in the human EC represent space with an enhanced flexibility relative to neurons in rodents because they are endowed with adaptive scalability and context dependency. PMID:28396399
Study on Global GIS architecture and its key technologies
NASA Astrophysics Data System (ADS)
Cheng, Chengqi; Guan, Li; Lv, Xuefeng
2009-09-01
Global GIS (G2IS) is a system that supports massive data processing and direct global manipulation on a global grid based on a spheroid or ellipsoid surface. Based on the global subdivision grid (GSG), a Global GIS architecture is presented in this paper, taking advantage of computer cluster theory, space-time integration technology and virtual reality technology. The Global GIS architecture is composed of five layers: the data storage layer, data representation layer, network and cluster layer, data management layer and data application layer. Within this architecture, a four-level protocol framework and a three-layer data management pattern are designed for the organization, management and publication of spatial information. Three core supporting technologies, namely computer cluster theory, space-time integration technology and virtual reality technology, and their application patterns in Global GIS are introduced in detail. The primary ideas of Global GIS presented in this paper point to an important development direction for GIS.
Earth System Grid II (ESG): Turning Climate Model Datasets Into Community Resources
NASA Astrophysics Data System (ADS)
Williams, D.; Middleton, D.; Foster, I.; Nevedova, V.; Kesselman, C.; Chervenak, A.; Bharathi, S.; Drach, B.; Cinquni, L.; Brown, D.; Strand, G.; Fox, P.; Garcia, J.; Bernholdte, D.; Chanchio, K.; Pouchard, L.; Chen, M.; Shoshani, A.; Sim, A.
2003-12-01
High-resolution, long-duration simulations performed with advanced DOE SciDAC/NCAR climate models will produce tens of petabytes of output. To be useful, this output must be made available to global change impacts researchers nationwide, both at national laboratories and at universities, other research laboratories, and other institutions. To this end, we propose to create a new Earth System Grid, ESG-II - a virtual collaborative environment that links distributed centers, users, models, and data. ESG-II will provide scientists with virtual proximity to the distributed data and resources that they require to perform their research. The creation of this environment will significantly increase the scientific productivity of U.S. climate researchers by turning climate datasets into community resources. In creating ESG-II, we will integrate and extend a range of Grid and collaboratory technologies, including the DODS remote access protocols for environmental data, Globus Toolkit technologies for authentication, resource discovery, and resource access, and Data Grid technologies developed in other projects. We will develop new technologies for (1) creating and operating "filtering servers" capable of performing sophisticated analyses, and (2) delivering results to users. In so doing, we will simultaneously contribute to climate science and advance the state of the art in collaboratory technology. We expect our results to be useful to numerous other DOE projects. The three-year R&D program will be undertaken by a talented and experienced team of computer scientists at five laboratories (ANL, LBNL, LLNL, NCAR, ORNL) and one university (ISI), working in close collaboration with climate scientists at several sites.
Distribution Locational Real-Time Pricing Based Smart Building Control and Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Jun; Dai, Xiaoxiao; Zhang, Yingchen
This paper proposes a real-virtual parallel computing scheme for smart building operations aimed at augmenting overall social welfare. The University of Denver's campus power grid and the Ritchie fitness center are used to demonstrate the proposed approach. An artificial virtual system is built in parallel to the real physical system to evaluate the overall social cost of building operation, based on a social-science-based working-productivity model, a numerical-experiment-based building energy consumption model, and a power-system-based real-time pricing mechanism. Through interactive feedback exchanged between the real and virtual systems, enlarged social welfare, including monetary cost reduction and energy savings as well as working productivity improvements, can be achieved.
Using virtualization to protect the proprietary material science applications in volunteer computing
NASA Astrophysics Data System (ADS)
Khrapov, Nikolay P.; Rozen, Valery V.; Samtsevich, Artem I.; Posypkin, Mikhail A.; Sukhomlin, Vladimir A.; Oganov, Artem R.
2018-04-01
USPEX is world-leading software for computational materials design. In essence, USPEX splits a simulation into a large number of workunits that can be processed independently. This scheme ideally fits the desktop grid architecture. Workunit processing is done by a simulation package aimed at energy minimization. Many such packages are proprietary and should be protected from unauthorized access when running on a volunteer PC. In this paper we present an original approach based on virtualization. In a nutshell, the proprietary code and input files are stored in an encrypted folder and run inside a virtual machine image that is also password protected. The paper describes this approach in detail and discusses its application in the USPEX@home volunteer project.
Turbulence Characteristics in an Elevated Shear Layer over Owens Valley
2010-02-14
[OCR fragments from the report; recoverable content:] Reference: Arnéodo, G. Grasseau, Y. Gagne, E. J. Hopfinger, and U. Frisch, 1989: Wavelet analysis of turbulence reveals the multifractal nature of the Richardson… Body fragments discuss Kelvin-Helmholtz (KH) instability, the turbulence inertial subrange, turbulence intermittency, and cross-scale energy transfer over complex terrain; the transverse (or cross-valley) and the normal (also referred to as along-valley) wind components; and Figure 2, which shows profiles derived from the 1800 UTC…
Wind turbine wake interactions at field scale: An LES study of the SWiFT facility
NASA Astrophysics Data System (ADS)
Yang, Xiaolei; Boomsma, Aaron; Barone, Matthew; Sotiropoulos, Fotis
2014-06-01
The University of Minnesota Virtual Wind Simulator (VWiS) code is employed to simulate turbine/atmosphere interactions in the Scaled Wind Farm Technology (SWiFT) facility developed by Sandia National Laboratories in Lubbock, TX, USA. The facility presently consists of three turbines, and the simulations consider the case of wind blowing from the south, such that two turbines are in the free stream and the third is in the direct wake of one upstream turbine at a separation of 5 rotor diameters. Large-eddy simulation (LES) on two successively finer grids is carried out to examine the sensitivity of the computed solutions to grid refinement. It is found that the details of the break-up of the tip vortices into small-scale turbulence structures can only be resolved on the finer grid. It is also shown that the power coefficient CP of the downwind turbine predicted on the coarse grid is somewhat higher than that obtained on the fine mesh. On the other hand, the rms (root-mean-square) of the CP fluctuations is nearly the same on both grids, although more small-scale turbulence structures are resolved upwind of the downwind turbine on the finer grid.
FAS multigrid calculations of three dimensional flow using non-staggered grids
NASA Technical Reports Server (NTRS)
Matovic, D.; Pollard, A.; Becker, H. A.; Grandmaison, E. W.
1993-01-01
Grid staggering is a well known remedy for the problem of velocity/pressure coupling in incompressible flow calculations. Numerous inconveniences occur, however, when staggered grids are implemented, particularly when a general-purpose code, capable of handling irregular three-dimensional domains, is sought. In several non-staggered grid numerical procedures proposed in the literature, the velocity/pressure coupling is achieved by either pressure or velocity (momentum) averaging. This approach is not convenient for simultaneous (block) solvers that are preferred when using multigrid methods. A new method is introduced in this paper that is based upon non-staggered grid formulation with a set of virtual cell face velocities used for pressure/velocity coupling. Instead of pressure or velocity averaging, a momentum balance at the cell face is used as a link between the momentum and mass balance constraints. The numerical stencil is limited to 9 nodes (in 2D) or 27 nodes (in 3D), both during the smoothing and inter-grid transfer, which is a convenient feature when a block point solver is applied. The results for a lid-driven cavity and a cube in a lid-driven cavity are presented and compared to staggered grid calculations using the same multigrid algorithm. The method is shown to be stable and produce a smooth (wiggle-free) pressure field.
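For context, a widely used relative of such collocated-grid schemes is Rhie-Chow momentum interpolation, in which the face velocity is the cell average plus a pressure-gradient correction. The generic 1D sketch below (explicitly not the paper's cell-face momentum balance; all coefficients are hypothetical) shows how the correction senses the checkerboard pressure mode that plain averaging misses:

```python
import numpy as np

def face_velocity_rhie_chow(u, p, d, dx):
    """Momentum-interpolated face velocities on a 1D collocated grid.
    u, p, d: cell-centered velocity, pressure, and (V/a_P)-type coefficients.
    Plain averaging of u decouples odd/even pressures (checkerboarding);
    the correction re-couples them through the compact face gradient."""
    u_bar = 0.5 * (u[:-1] + u[1:])                    # simple average
    d_f = 0.5 * (d[:-1] + d[1:])
    dpdx_c = np.gradient(p, dx)                       # cell-centered gradient
    dpdx_bar = 0.5 * (dpdx_c[:-1] + dpdx_c[1:])       # averaged to the face
    dpdx_f = (p[1:] - p[:-1]) / dx                    # compact face gradient
    return u_bar + d_f * (dpdx_bar - dpdx_f)

u = np.linspace(0.0, 1.0, 8)
p = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0])  # checkerboard mode
print(face_velocity_rhie_chow(u, p, d=np.full(8, 0.1), dx=1.0))
```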
Virtual hydrology observatory: an immersive visualization of hydrology modeling
NASA Astrophysics Data System (ADS)
Su, Simon; Cruz-Neira, Carolina; Habib, Emad; Gerndt, Andreas
2009-02-01
The Virtual Hydrology Observatory will provide students with the ability to observe an integrated hydrology simulation through an instructional interface, using a desktop-based or immersive virtual reality setup. It is the goal of the Virtual Hydrology Observatory application to facilitate the introduction of field experience and observational skills into hydrology courses through innovative virtual techniques that mimic activities during actual field visits. The simulation part of the application is developed from the integrated atmospheric forecast model, Weather Research and Forecasting (WRF), and the hydrology model, Gridded Surface/Subsurface Hydrologic Analysis (GSSHA). The output from the WRF and GSSHA models is then used to generate the final visualization components of the Virtual Hydrology Observatory. The visualization data processing techniques provided by VTK include 2D Delaunay triangulation and data optimization. Once all the visualization components are generated, they are integrated with the simulation data using VRFlowVis and the VR Juggler software toolkit. VR Juggler is used primarily to provide the application with a fully immersive, real-time 3D interaction experience, while VRFlowVis provides the integration framework for the hydrologic simulation data, graphical objects and user interaction. A six-sided CAVE™-like system is used to run the Virtual Hydrology Observatory to provide students with a fully immersive experience.
A high-order staggered meshless method for elliptic problems
Trask, Nathaniel; Perego, Mauro; Bochev, Pavel Blagoveston
2017-03-21
Here, we present a new meshless method for scalar diffusion equations, which is motivated by their compatible discretizations on primal-dual grids. Unlike the latter though, our approach is truly meshless because it only requires the graph of nearby neighbor connectivity of the discretization points. This graph defines a local primal-dual grid complex with a virtual dual grid, in the sense that specification of the dual metric attributes is implicit in the method's construction. Our method combines a topological gradient operator on the local primal grid with a generalized moving least squares approximation of the divergence on the local dual grid. We show that the resulting approximation of the div-grad operator maintains polynomial reproduction to arbitrary orders and yields a meshless method, which attains $O(h^m)$ convergence in both $L^2$- and $H^1$-norms, similar to mixed finite element methods. We demonstrate this convergence on curvilinear domains using manufactured solutions in two and three dimensions. Application of the new method to problems with discontinuous coefficients reveals solutions that are qualitatively similar to those of compatible mesh-based discretizations.
NASA Astrophysics Data System (ADS)
McNab, A.
2017-10-01
This paper describes GridPP’s Vacuum Platform for managing virtual machines (VMs), which has been used to run production workloads for WLCG and other HEP experiments. The platform provides a uniform interface between VMs and the sites they run at, whether the site is organised as an Infrastructure-as-a-Service cloud system such as OpenStack, or an Infrastructure-as-a-Client system such as Vac. The paper describes our experience in using this platform, in developing and operating VM lifecycle managers Vac and Vcycle, and in interacting with VMs provided by LHCb, ATLAS, ALICE, CMS, and the GridPP DIRAC service to run production workloads.
FROG: Time Series Analysis for the Web Service Era
NASA Astrophysics Data System (ADS)
Allan, A.
2005-12-01
The FROG application is part of the next-generation Starlink{http://www.starlink.ac.uk} software work (Draper et al. 2005) and is released under the GNU Public License{http://www.gnu.org/copyleft/gpl.html} (GPL). Written in Java, it has been designed for the Web and Grid Service era as an extensible, pluggable tool for time series analysis and display. With an integrated SOAP server, the package's functionality is exposed to users for use in their own code, and it can be used remotely over the Grid as part of the Virtual Observatory (VO).
Leigh, J.; Renambot, L.; Johnson, Aaron H.; Jeong, B.; Jagodic, R.; Schwarz, N.; Svistula, D.; Singh, R.; Aguilera, J.; Wang, X.; Vishwanath, V.; Lopez, B.; Sandin, D.; Peterka, T.; Girado, J.; Kooima, R.; Ge, J.; Long, L.; Verlo, A.; DeFanti, T.A.; Brown, M.; Cox, D.; Patterson, R.; Dorn, P.; Wefel, P.; Levy, S.; Talandis, J.; Reitzer, J.; Prudhomme, T.; Coffin, T.; Davis, B.; Wielinga, P.; Stolk, B.; Bum, Koo G.; Kim, J.; Han, S.; Corrie, B.; Zimmerman, T.; Boulanger, P.; Garcia, M.
2006-01-01
The research outlined in this paper marks an initial global cooperative effort between visualization and collaboration researchers to build a persistent virtual visualization facility linked by ultra-high-speed optical networks. The goal is to enable the comprehensive and synergistic research and development of the necessary hardware, software and interaction techniques to realize the next generation of end-user tools for scientists to collaborate on the global Lambda Grid. This paper outlines some of the visualization research projects that were demonstrated at the iGrid 2005 workshop in San Diego, California.
Cost Optimization Model for Business Applications in Virtualized Grid Environments
NASA Astrophysics Data System (ADS)
Strebel, Jörg
The advent of Grid computing gives enterprises an ever-increasing choice of computing options, yet research has so far hardly addressed the problem of mixing the different computing options in a cost-minimal fashion. The following paper presents a comprehensive cost model and a mixed-integer optimization model which can be used to minimize the IT expenditures of an enterprise and to support the decision of when to outsource certain business software applications. A sample scenario is analyzed and promising cost savings are demonstrated. Possible applications of the model to future research questions are outlined.
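The abstract does not give the model's details; as a toy stand-in, the following brute-force sketch (hypothetical application costs and a fixed in-house data-center cost) makes the binary in-house/outsource decision that a mixed-integer model of this kind optimizes at scale:

```python
from itertools import product

# Hypothetical yearly costs per application: (in-house, grid/outsourced)
apps = {"crm": (120_000, 90_000), "erp": (300_000, 340_000), "bi": (80_000, 60_000)}
INHOUSE_FIXED = 50_000   # fixed data-center cost, paid if anything stays in-house

def total_cost(assign):  # assign[name] = 1 -> outsourced to the grid
    run = sum(apps[a][assign[a]] for a in apps)
    fixed = INHOUSE_FIXED if any(v == 0 for v in assign.values()) else 0
    return run + fixed

best = min((dict(zip(apps, bits)) for bits in product((0, 1), repeat=len(apps))),
           key=total_cost)
print(best, total_cost(best))  # cheapest mix of in-house and grid sourcing
```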
Speaking Personally--With John "Pathfinder" Lester
ERIC Educational Resources Information Center
Beaubois, Terry
2013-01-01
John Lester is currently the chief learning officer at ReactionGrid, a software company developing 3-D simulations and multiuser virtual world platforms. Lester's background includes working with Linden Lab on Second Life's education activities and neuroscience research. His primary focus is on collaborative learning and instructional…
NASA Astrophysics Data System (ADS)
Hambsch, F.-J.; Salvador-Castiñeira, P.; Oberstedt, S.; Göök, A.; Billnert, R.
2016-06-01
In recent years JRC-IRMM has been investigating fission cross-sections of 240,242Pu in the fast-neutron energy range relevant for innovative reactor systems and requested in the High Priority Request List (HPRL) of the OECD/Nuclear Energy Agency (NEA). In addition, prompt neutron multiplicities are being investigated for the major isotopes 235U and 239Pu in the neutron-resonance region using a newly developed scintillation detector array (SCINTIA) and an innovative modification of the Frisch-grid ionisation chamber for fission-fragment detection. These data are highly relevant for improved neutron data evaluation and are requested by the OECD/Working Party on Evaluation Cooperation (WPEC). Thirdly, prompt fission γ-ray emission is also investigated using highly efficient lanthanide-halide detectors with superior timing resolution. Again, those data are requested in the HPRL for major actinides to resolve open questions about an under-prediction of decay heat in nuclear reactors. The information on prompt fission neutron and γ-ray emission is crucial for benchmarking nuclear models to study the de-excitation process of neutron-rich fission fragments. Information on γ-ray emission probabilities is also useful in decommissioning exercises at damaged nuclear power plants like Fukushima Daiichi, to which JRC-IRMM is contributing. The results on the 240,242Pu fission cross sections, the 235U prompt neutron multiplicity in the resonance region and its correlations with fission fragments, and prompt γ-ray emission for several isotopes will be presented and put into perspective.
A web-system of virtual morphometric globes
NASA Astrophysics Data System (ADS)
Florinsky, Igor; Garov, Andrei; Karachevtseva, Irina
2017-04-01
Virtual globes — programs implementing interactive three-dimensional (3D) models of planets — are increasingly used in geo- and planetary sciences. We develop a web-system of virtual morphometric globes. As the initial data, we used the following global digital elevation models (DEMs): (1) a DEM of the Earth extracted from the SRTM30_PLUS database; (2) a DEM of Mars extracted from the Mars Orbiter Laser Altimeter (MOLA) gridded data record archive; and (3) a DEM of the Moon extracted from the Lunar Orbiter Laser Altimeter (LOLA) gridded data record archive. From these DEMs, we derived global digital models of the following 16 local, nonlocal, and combined morphometric variables: horizontal curvature, vertical curvature, mean curvature, Gaussian curvature, minimal curvature, maximal curvature, unsphericity curvature, difference curvature, vertical excess curvature, horizontal excess curvature, ring curvature, accumulation curvature, catchment area, dispersive area, topographic index, and stream power index (definitions, formulae, and interpretations can be found elsewhere [1]). To calculate local morphometric variables, we applied a finite-difference method intended for spheroidal equal angular grids [1]. Digital models of nonlocal and combined morphometric variables were derived by a method of Martz and de Jong adapted to spheroidal equal angular grids [1]. DEM processing was performed in the software LandLord [1]. The calculated morphometric models were integrated into the testing version of the system. The following main functions are implemented in the system: (1) selection of a celestial body; (2) selection of a morphometric variable; (3) 2D visualization of a calculated global morphometric model (a map in equirectangular projection); (4) 3D visualization of a calculated global morphometric model on the sphere surface (a globe by itself); (5) change of a globe scale (zooming); and (6) globe rotation by an arbitrary angle. The testing version of the system represents morphometric models with the resolution of 15'. In the final version of the system, we plan to implement a multiscale 3D visualization for models of 17 morphometric variables with the resolution from 15' to 30". The web-system of virtual morphometric globes is designed as a separate unit of a 3D web GIS for storage, processing, and access to planetary data [2], which is currently developed as an extension of an existing 2D web GIS (http://cartsrv.mexlab.ru/geoportal). Free, real-time web access to the system of virtual globes will be provided. The testing version of the system is available at: http://cartsrv.mexlab.ru/virtualglobe. The study is supported by the Russian Foundation for Basic Research, grant 15-07-02484. References 1. Florinsky, I.V., 2016. Digital Terrain Analysis in Soil Science and Geology. 2nd ed. Academic Press, Amsterdam, 486 p. 2. Garov, A.S., Karachevtseva, I.P., Matveev, E.V., Zubarev, A.E., and Florinsky, I.V., 2016. Development of a heterogenic distributed environment for spatial data processing using cloud technologies. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 41(B4): 385-390.
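To illustrate the kind of computation behind such variables, this sketch evaluates mean curvature from a DEM with central differences on a planar square grid; the paper's method works on spheroidal equal angular grids, so this is only the planar analogue, and sign conventions vary between authors:

```python
import numpy as np

def mean_curvature(z, w):
    """Mean curvature of a DEM on a planar square grid of spacing w,
    from central-difference partial derivatives (interior cells only)."""
    p = (z[1:-1, 2:] - z[1:-1, :-2]) / (2 * w)                         # dz/dx
    q = (z[2:, 1:-1] - z[:-2, 1:-1]) / (2 * w)                         # dz/dy
    r = (z[1:-1, 2:] - 2 * z[1:-1, 1:-1] + z[1:-1, :-2]) / w**2        # d2z/dx2
    t = (z[2:, 1:-1] - 2 * z[1:-1, 1:-1] + z[:-2, 1:-1]) / w**2        # d2z/dy2
    s = (z[2:, 2:] - z[2:, :-2] - z[:-2, 2:] + z[:-2, :-2]) / (4 * w**2)
    return -((1 + q**2) * r - 2 * p * q * s + (1 + p**2) * t) \
           / (2 * (1 + p**2 + q**2) ** 1.5)

x, y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
z = np.sqrt(np.maximum(2.0 - x**2 - y**2, 0.0))     # hemisphere, radius sqrt(2)
H = mean_curvature(z, w=2 / 49)
print(H[H.shape[0] // 2, H.shape[1] // 2])          # ~ 1/sqrt(2) near the apex
```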
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jay Tillay
For three years, Sandia National Laboratories, Georgia Institute of Technology, and University of Illinois at Urbana-Champaign investigated a smart grid vision in which renewable-centric Virtual Power Plants (VPPs) provided ancillary services with interoperable distributed energy resources (DER). This team researched, designed, built, and evaluated real-time VPP designs incorporating DER forecasting, stochastic optimization, controls, and cyber security to construct a system capable of delivering reliable ancillary services, which have been traditionally provided by large power plants or other dedicated equipment. VPPs have become possible through an evolving landscape of state and national interconnection standards, which now require DER to include grid-support functionality and communications capabilities. This makes it possible for third party aggregators to provide a range of critical grid services such as voltage regulation, frequency regulation, and contingency reserves to grid operators. This paradigm (a) enables renewable energy, demand response, and energy storage to participate in grid operations and provide grid services, (b) improves grid reliability by providing additional operating reserves for utilities, independent system operators (ISOs), and regional transmission organizations (RTOs), and (c) removes renewable energy high-penetration barriers by providing services with photovoltaics and wind resources that traditionally were the jobs of thermal generators. Therefore, it is believed VPP deployment will have far-reaching positive consequences for grid operations and may provide a robust pathway to high penetrations of renewables on US power systems. In this report, we design VPPs to provide a range of grid-support services and demonstrate one VPP which simultaneously provides bulk-system energy and ancillary reserves.
Grid-based Continual Analysis of Molecular Interior for Drug Discovery, QSAR and QSPR.
Potemkin, Andrey V; Grishina, Maria A; Potemkin, Vladimir A
2017-01-01
In 1979, R. D. Cramer and M. Milne made a first attempt at 3D comparison of molecules by aligning them in space and mapping their molecular fields to a 3D grid. This approach was later developed as DYLOMMS (Dynamic Lattice-Oriented Molecular Modelling System). In 1984, H. Wold and S. Wold proposed the use of partial least squares (PLS) analysis, instead of principal component analysis, to correlate the field values with biological activities. Then, in 1988, the method called CoMFA (Comparative Molecular Field Analysis) was introduced and the appropriate software became commercially available. Since 1988, many 3D QSAR methods, algorithms and their modifications have been introduced for solving virtual drug discovery problems (e.g., CoMSIA, CoMMA, HINT, HASL, GOLPE, GRID, PARM, Raptor, BiS, CiS, ConGO). All the methods can be divided into two groups (classes): 1) methods studying the exterior of molecules; 2) methods studying the interior of molecules. A series of grid-based computational technologies for Continual Molecular Interior analysis (CoMIn) are presented in the current paper. The grid-based analysis is fulfilled by means of a lattice construction, analogously to many other grid-based methods. The further continual elucidation of molecular structure is performed in two ways. (i) In terms of intermolecular interaction potentials, represented as a superposition of Coulomb and Van der Waals interactions and hydrogen bonds. All these potentials are well-known continual functions, and their values can be determined at all lattice points for a molecule. (ii) In terms of quantum functions such as the electron density distribution, the Laplacian and Hamiltonian of the electron density distribution, the potential energy distribution, the distributions of the highest occupied and lowest unoccupied molecular orbitals, and their superposition. To reduce the calculation time of quantum methods based on first principles, an original quantum free-orbital approach, AlteQ, is proposed. All these functions can be calculated using a quantum approach at a sufficient level of theory, and their values can be determined at all lattice points for a molecule. The molecules of a dataset can then be superimposed in the lattice for maximal coincidence (or minimal deviation) of the potentials (i) or the quantum functions (ii). The methods and criteria of the superimposition are discussed, after which a functional relationship between biological activity or property and the characteristics of the potentials (i) or functions (ii) is constructed. The methods of constructing this quantitative relationship are discussed. New approaches for rational virtual drug design based on the intermolecular potentials and quantum functions are presented, giving a wide range of opportunities for virtual drug discovery, virtual screening and ligand-based drug design. All the methods are implemented at the www.chemosophia.com web page.
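As a minimal illustration of the lattice step in (i), the following sketch (hypothetical coordinates and partial charges; real CoMIn also includes Van der Waals, hydrogen-bond and quantum terms) evaluates the Coulomb potential of a molecule's point charges at every node of a regular 3D lattice:

```python
import numpy as np

def coulomb_on_grid(coords, charges, origin, shape, spacing):
    """Evaluate the Coulomb potential of atomic point charges (atomic units)
    at every node of a regular 3D lattice, as grid-based methods do."""
    axes = [origin[k] + spacing * np.arange(shape[k]) for k in range(3)]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    grid = np.stack([gx, gy, gz], axis=-1)               # (nx, ny, nz, 3)
    phi = np.zeros(shape)
    for xyz, q in zip(coords, charges):
        r = np.linalg.norm(grid - xyz, axis=-1)
        phi += q / np.maximum(r, 1e-6)                   # avoid division by zero
    return phi

# Hypothetical two-atom "molecule" with partial charges +0.4 / -0.4 e
coords = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
phi = coulomb_on_grid(coords, [0.4, -0.4], origin=(-3, -3, -3),
                      shape=(25, 25, 25), spacing=0.25)
print(phi.shape, phi.max())
```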
Kasam, Vinod; Salzemann, Jean; Botha, Marli; Dacosta, Ana; Degliesposti, Gianluca; Isea, Raul; Kim, Doman; Maass, Astrid; Kenyon, Colin; Rastelli, Giulio; Hofmann-Apitius, Martin; Breton, Vincent
2009-05-01
Despite continuous efforts by the international community to reduce the impact of malaria on developing countries, no significant progress has been made in recent years, and the discovery of new drugs is more than ever needed. Out of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets for rational drug discovery. Recent years have witnessed the emergence of grids, highly distributed computing infrastructures particularly well suited to embarrassingly parallel computations like docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and ended in the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006, focusing on one well-known target, dihydrofolate reductase (DHFR), and on a new promising one, glutathione-S-transferase. In silico drug design, especially vHTS, is a widely accepted technology for lead identification and lead optimization. This approach therefore builds upon the progress made in computational chemistry, to achieve more accurate in silico docking, and in information technology, to design and operate large-scale grid infrastructures. On the computational side, a sustained infrastructure has been developed: large-scale docking, different strategies for result analysis, on-the-fly storage of the results in MySQL databases, and molecular dynamics refinement with MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising, and based on them, in vitro experiments are underway for all the targets against which screening was performed. The current paper describes this rational drug discovery activity at large scale, in particular molecular docking using the FlexX software on computational grids, in finding hits against three different targets (PfGST, PfDHFR, and PvDHFR in wild-type and mutant forms) implicated in malaria. The grid-enabled virtual screening approach is proposed to produce focused compound libraries for other biological targets relevant to fighting the infectious diseases of the developing world.
DOE Office of Scientific and Technical Information (OSTI.GOV)
TESP combines existing domain simulators in the electric power grid with new transactive agents, growth models and evaluation scripts. The existing domain simulators include GridLAB-D for the distribution grid and single-family residential buildings, MATPOWER for transmission and bulk generation, and EnergyPlus for large buildings. More are planned for subsequent versions of TESP. The new elements are: TEAgents, which simulate market participants and transactive systems for market clearing (some of this functionality was extracted from GridLAB-D and implemented in Python for customization by PNNL and others); a Growth Model, a means of simulating system changes over a multiyear period, including both normal load growth and specific investment decisions, customizable in Python code; and an Evaluation Script, a means of evaluating different transactive systems through customizable post-processing in Python code. TESP provides a method for other researchers and vendors to design transactive systems and test them in a virtual environment. It allows customization of the key components by modifying Python code.
Cellular automaton formulation of passive scalar dynamics
NASA Technical Reports Server (NTRS)
Chen, Hudong; Matthaeus, William H.
1987-01-01
Cellular automata modeling of the advection of a passive scalar in a two-dimensional flow is examined in the context of discrete lattice kinetic theory. It is shown that if the passive scalar is represented by tagging or 'coloring' automaton particles, a passive advection-diffusion equation emerges without use of perturbation expansions. For the specific case of the hydrodynamic lattice gas model of Frisch et al. (1986), the diffusion coefficient is calculated perturbatively.
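A minimal illustration of the tagging idea, though not of the Frisch et al. collision rules: tagged particles hopping at random on a 2D lattice diffuse, and the diffusion coefficient can be read off the mean squared displacement (all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
N, STEPS = 5000, 400
moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])   # 4-neighbor lattice hops

pos = np.zeros((N, 2), dtype=int)       # all tagged particles start at the origin
for t in range(STEPS):
    pos += moves[rng.integers(0, 4, size=N)]           # one random hop each

# For an unbiased 2D walk <r^2> = 4 D t, so D = <r^2> / (4 t)
msd = (pos.astype(float) ** 2).sum(axis=1).mean()
print("estimated D =", msd / (4 * STEPS))   # ~0.25 lattice units^2 per step
```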
Wide-range radioactive-gas-concentration detector
Anderson, D.F.
1981-11-16
A wide-range radioactive-gas-concentration detector and monitor capable of measuring radioactive-gas concentrations over a range of eight orders of magnitude is described. The device is designed to have an ionization chamber sufficiently small to give a fast response time for measuring radioactive gases but sufficiently large to provide accurate readings at low concentration levels. Closely spaced parallel-plate grids provide a uniform electric field in the active region to improve the accuracy of measurements and reduce ion migration time so as to virtually eliminate errors due to ion recombination. The parallel-plate grids are fabricated with a minimal surface area to reduce the effects of contamination resulting from absorption of contaminating materials on the surface of the grids. Additionally, the ionization-chamber wall is spaced a sufficient distance from the active region of the ionization chamber to minimize contamination effects.
ERIC Educational Resources Information Center
da Silveira, Pedro Rodrigo Castro
2014-01-01
This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…
Downstream Benefits of Energy Management Systems
2015-12-01
[OCR fragments from the report; recoverable content:] Acronyms: OAT, Outside Air Temperature; POM, Presidio of Monterey; RCx, Retro-Commissioning; Solar PV, Solar Photovoltaic; VSG, Virtual Smart Grid. Body fragments mention energy efficiency, including some advanced demonstration projects for EMSs, microgrids, extensive solar photovoltaic (PV) generation capacity, and others, and an approach to reducing consumption, maintaining mission assurance, and providing reliable power to critical loads (Deputy Undersecretary of Defense…)
Huang, Ping-Tzan; Jong, Tai-Lang; Li, Chien-Ming; Chen, Wei-Ling; Lin, Chia-Hung
2017-08-01
Blood leakage and blood loss are serious complications during hemodialysis. Hemodialysis survey reports show that these life-threatening events do occur, and they are a major concern for nephrology nurses and for patients themselves. When the venous needle and blood line become disconnected, it takes only a few minutes for an adult patient to lose over 40% of his or her blood, an amount of blood loss sufficient to cause death. Therefore, we propose integrating a flexible sensor and a self-organizing algorithm to design a cloud-computing-based warning device for blood leakage detection. The flexible sensor is fabricated via a screen-printing technique using metallic materials on a soft substrate in an array configuration. The self-organizing algorithm constructs a virtual direct-current grid-based alarm unit in an embedded system. The warning device identifies blood leakage levels via a wireless network and cloud computing. It has been validated experimentally, and the experimental results suggest specifications for commercial designs.
Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks.
Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue
2017-06-06
Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors that guarantee full-view coverage of a given region of interest (ROI). To tackle this issue, we derive the constraint condition on sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed as long as the regular hexagons determined by the virtual grid are seamlessly stitched. We then present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal side length of the virtual grid, we put forward the deployment pattern algorithm (DPA) for deterministic deployment. To reduce redundancy in random deployment, we propose a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions.
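As a sketch of the deterministic-deployment idea, the code below tiles a rectangular ROI with a regular hexagon lattice; the hexagon side is left as a parameter, standing in for the theoretically optimal length that the paper derives from the full-view constraint:

```python
import numpy as np

def hex_centers(width, height, side):
    """Centers of a regular-hexagon tiling (pointy-top) covering a
    width x height ROI; camera groups would be placed per hexagon cell."""
    dx = np.sqrt(3.0) * side          # horizontal pitch between centers
    dy = 1.5 * side                   # vertical pitch between rows
    centers, row, y = [], 0, 0.0
    while y <= height + dy:
        x = 0.0 if row % 2 == 0 else dx / 2.0   # stagger odd rows
        while x <= width + dx:
            centers.append((x, y))
            x += dx
        y += dy
        row += 1
    return np.asarray(centers)

L_OPT = 10.0   # hypothetical optimal hexagon side from the coverage constraint
print(len(hex_centers(100.0, 100.0, L_OPT)), "hexagon cells for a 100 x 100 ROI")
```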
Grid-Based Surface Generalized Born Model for Calculation of Electrostatic Binding Free Energies.
Forouzesh, Negin; Izadi, Saeed; Onufriev, Alexey V
2017-10-23
Fast and accurate calculation of solvation free energies is central to many applications, such as rational drug design. In this study, we present a grid-based molecular surface implementation of the "R6" flavor of the generalized Born (GB) implicit solvent model, named GBNSR6. The speed, accuracy relative to the numerical Poisson-Boltzmann treatment, and sensitivity to grid surface parameters are tested on a set of 15 small protein-ligand complexes and a set of biomolecules in the range of 268 to 25099 atoms. Our results demonstrate that the proposed model provides a relatively successful compromise between the speed and accuracy of computing the polar components of the solvation free energies (ΔG_pol) and binding free energies (ΔΔG_pol). The model tolerates a relatively coarse grid size h = 0.5 Å, where the grid artifact error in computing ΔΔG_pol remains in the range of k_BT ≈ 0.6 kcal/mol. The estimated ΔΔG_pol values are well correlated (r² = 0.97) with the numerical Poisson-Boltzmann reference, while showing virtually no systematic bias and RMSE = 1.43 kcal/mol. The grid-based GBNSR6 model is available in the Amber (AmberTools) package of molecular simulation programs.
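For background, most GB flavors share the Still-type pairwise form sketched below and differ mainly in how the effective Born radii are computed (the "R6" recipe is based on an r^-6 surface integral); this sketch takes the radii as given inputs and uses standard constants:

```python
import numpy as np

def gb_polar_energy(coords, charges, born_radii, eps_in=1.0, eps_out=78.5):
    """Generalized Born polar solvation energy (kcal/mol), Still-type pairwise
    form; the double sum includes the i == j self-terms, where f_GB = R_i."""
    ke = 332.06371                      # Coulomb constant, kcal*A/(mol*e^2)
    pref = -0.5 * ke * (1.0 / eps_in - 1.0 / eps_out)
    n = len(charges)
    e = 0.0
    for i in range(n):
        for j in range(n):
            r2 = np.sum((coords[i] - coords[j]) ** 2)
            RiRj = born_radii[i] * born_radii[j]
            f_gb = np.sqrt(r2 + RiRj * np.exp(-r2 / (4.0 * RiRj)))
            e += pref * charges[i] * charges[j] / f_gb
    return e

coords = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])   # hypothetical atom pair
print(gb_polar_energy(coords, charges=[0.5, -0.5], born_radii=[1.5, 1.8]))
```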
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, X; Liang, X; Penagaricano, J
2015-06-15
Purpose: To present the first clinical applications of Helical Tomotherapy-based spatially fractionated radiotherapy (HT-GRID) for deep-seated tumors and an associated dosimetric study. Methods: Ten previously treated GRID patients were selected (5 HT-GRID and 5 LINAC-GRID using a commercially available GRID block). Each case was re-planned either in HT-GRID or LINAC-GRID, for a total of 10 plans for both techniques, using the same prescribed dose of 20 Gy to the maximum point dose of the GRID GTV. For HT-GRID, a programmable virtual TOMOGRID template mimicking a GRID pattern was generated. Dosimetric parameters compared included: GRID GTV mean dose (Dmean) and equivalent uniform dose (EUD), GRID GTV dose inhomogeneity (Ratio(valley/peak)), normal tissue Dmean and EUD, and other organs-at-risk (OARs) doses. Results: The median tumor volume was 634 cc, ranging from 182 to 4646 cc. The median distance from the skin to the deepest part of the tumor was 22 cm, ranging from 8.9 to 38 cm. The median GRID GTV Dmean and EUD were 10.65 Gy (9.8-12.5 Gy) and 7.62 Gy (4.31-11.06 Gy) for HT-GRID, and 6.73 Gy (4.44-8.44 Gy) and 3.95 Gy (0.14-4.2 Gy) for LINAC-GRID. The median Ratio(valley/peak) was 0.144 (0.05-0.29) for HT-GRID and 0.055 (0.0001-0.14) for LINAC-GRID. For normal tissue in HT-GRID, the median Dmean and EUD were 1.24 Gy (0.34-2.54 Gy) and 5.45 Gy (3.45-6.89 Gy), versus 0.61 Gy (0.11-1.52 Gy) and 6 Gy (4.45-6.82 Gy) for LINAC-GRID. The OAR doses were comparable between HT-GRID and LINAC-GRID. However, in some cases it was not possible to avoid a critical structure with LINAC-GRID, while HT-GRID can spare more dose to certain critical structures. Conclusion: HT-GRID delivers higher GRID GTV Dmean, EUD and Ratio(valley/peak) compared to LINAC-GRID. HT-GRID delivers higher Dmean and lower EUD for normal tissue compared to LINAC-GRID. The TOMOGRID template can be highly patient-specific and allows adjustment of the GRID pattern to different tumor sizes and shapes when tumors are deep-seated and cannot be safely treated with LINAC-GRID.
Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system.
Aronov, Dmitriy; Tank, David W
2014-10-22
Virtual reality (VR) enables precise control of an animal's environment and otherwise impossible experimental manipulations. Neural activity in rodents has been studied on virtual 1D tracks. However, 2D navigation imposes additional requirements, such as the processing of head direction and environment boundaries, and it is unknown whether the neural circuits underlying 2D representations can be sufficiently engaged in VR. We implemented a VR setup for rats, including software and large-scale electrophysiology, that supports 2D navigation by allowing rotation and walking in any direction. The entorhinal-hippocampal circuit, including place, head direction, and grid cells, showed 2D activity patterns similar to those in the real world. Furthermore, border cells were observed, and hippocampal remapping was driven by environment shape, suggesting functional processing of virtual boundaries. These results illustrate that 2D spatial representations can be engaged by visual and rotational vestibular stimuli alone and suggest a novel VR tool for studying rat navigation.
Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system
Aronov, Dmitriy; Tank, David W.
2015-01-01
Virtual reality (VR) enables precise control of an animal's environment and otherwise impossible experimental manipulations. Neural activity in navigating rodents has been studied on virtual linear tracks. However, the spatial navigation system's engagement in complete two-dimensional environments has not been shown. We describe a VR setup for rats, including control software and a large-scale electrophysiology system, which supports 2D navigation by allowing animals to rotate and walk in any direction. The entorhinal-hippocampal circuit, including place cells, grid cells, head direction cells and border cells, showed 2D activity patterns in VR similar to those in the real world. Hippocampal neurons exhibited various remapping responses to changes in the appearance or the shape of the virtual environment, including a novel form in which a VR-induced cue conflict caused remapping to lock to geometry rather than salient cues. These results suggest a general-purpose tool for novel types of experimental manipulations in navigating rats. PMID:25374363
Design of Waste Heat Boiler for Scranton Army Ammunition Plant
1980-08-01
[OCR fragments from the report; recoverable content:] In order to calculate velocity, the flow (stagnation pressure) was measured with a pitot tube and a slant manometer; a velocity equation defining V (ft/sec) is given with reference to Figure A-1, with subscript 1 indicating conditions at the inlet to the pitot tube. A supplier-list fragment names Mexico, Mo. 65265; Frisch Dampers; Octapus Equipment Co., Buffalo, N.Y. 14221; and, as an alternate source of supply, Henry Vogt Machine Co., 1000 W…
Tweeting Napoleon and Friending Clausewitz: Social Media and the Military Strategist
2015-06-01
[Footnote fragments recovered from OCR:] "…Informatics 2012 (June 23, 2012), accessed February 8, 2015, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3799184/"; footnote 27 cites Frisch et al., "Use of Social… Increase Knowledge and Skills of British Columbia Nurses," NI 2012: Proceedings of the 11th International Congress on Nursing Informatics 2012 (June… A body fragment reads: "As she herself aptly describes, the academic naysayers believed the learning process had to be like 'cod-liver oil,' terrible tasting but good for…"
Kielar, Aneta; Meltzer-Asscher, Aya; Thompson, Cynthia
2012-01-01
Sentence comprehension requires processing of argument structure information associated with verbs, i.e. the number and type of arguments that they select. Many individuals with agrammatic aphasia show impaired production of verbs with greater argument structure density. The extent to which these participants also show argument structure deficits during comprehension, however, is unclear. Some studies find normal access to verb arguments, whereas others report impaired ability. The present study investigated verb argument structure processing in agrammatic aphasia by examining event-related potentials associated with argument structure violations in healthy young and older adults as well as aphasic individuals. A semantic violation condition was included to investigate possible differences in sensitivity to semantic and argument structure information during sentence processing. Results for the healthy control participants showed a negativity followed by a positive shift (N400-P600) in the argument structure violation condition, as found in previous ERP studies (Friederici & Frisch, 2000; Frisch, Hahne, & Friederici, 2004). In contrast, individuals with agrammatic aphasia showed a P600, but no N400, response to argument structure mismatches. Additionally, compared to the control groups, the agrammatic participants showed an attenuated, but relatively preserved, N400 response to semantic violations. These data show that agrammatic individuals do not demonstrate normal real-time sensitivity to verb argument structure requirements during sentence processing. PMID:23022079
Mukherjee, Sudipto; Rizzo, Robert C.
2014-01-01
Scoring functions are a critically important component of computer-aided screening methods for the identification of lead compounds during early stages of drug discovery. Here, we present a new multi-grid implementation of the footprint similarity (FPS) scoring function that was recently developed in our laboratory which has proven useful for identification of compounds which bind to a protein on a per-residue basis in a way that resembles a known reference. The grid-based FPS method is much faster than its Cartesian-space counterpart which makes it computationally tractable for on-the-fly docking, virtual screening, or de novo design. In this work, we establish that: (i) relatively few grids can be used to accurately approximate Cartesian space footprint similarity, (ii) the method yields improved success over the standard DOCK energy function for pose identification across a large test set of experimental co-crystal structures, for crossdocking, and for database enrichment, and (iii) grid-based FPS scoring can be used to tailor construction of new molecules to have specific properties, as demonstrated in a series of test cases targeting the viral protein HIVgp41. The method will be made available in the program DOCK6. PMID:23436713
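To make the footprint idea concrete, here is a minimal sketch (not the DOCK6 implementation; the array values and the choice of Euclidean distance as the comparison metric are illustrative assumptions):

```python
import numpy as np

def footprint_similarity(candidate: np.ndarray, reference: np.ndarray) -> float:
    """Distance between two per-residue interaction-energy footprints.

    Each array holds one interaction energy (e.g., van der Waals or
    electrostatic) per protein residue; a smaller value means the candidate
    pose interacts with the protein more like the known reference does.
    """
    return float(np.linalg.norm(candidate - reference))

# Toy example: a reference footprint over 5 residues and two candidate poses.
ref    = np.array([-2.1, -0.3, -4.5, -0.8, -1.2])
pose_a = np.array([-2.0, -0.4, -4.1, -0.9, -1.0])  # resembles the reference
pose_b = np.array([-0.1, -3.9, -0.2, -2.5, -0.3])  # different binding pattern
print(footprint_similarity(pose_a, ref))  # small score -> similar footprint
print(footprint_similarity(pose_b, ref))  # large score -> dissimilar footprint
```

The grid-based variant described above precomputes per-residue grids so that such footprints can be evaluated quickly enough for on-the-fly docking and virtual screening.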
Geometric Stitching Method for Double Cameras with Weak Convergence Geometry
NASA Astrophysics Data System (ADS)
Zhou, N.; He, H.; Bao, Y.; Yue, C.; Xing, K.; Cao, S.
2017-05-01
In this paper, a new geometric stitching method is proposed which utilizes digital elevation model (DEM)-aided block adjustment to solve the relative orientation parameters of a dual camera with weak convergence geometry. A rational function model (RFM) with an affine transformation is chosen as the relative orientation model. To deal with the weak geometry, a reference DEM is used as an additional constraint in the block adjustment, which then only calculates the planimetric coordinates of tie points (TPs). The obtained affine transform coefficients are then used to generate a virtual grid and to update the rational polynomial coefficients (RPCs), completing the geometric stitching. The proposed method was tested on GaoFen-2 (GF-2) dual-camera panchromatic (PAN) images. The test results show that the method achieves an accuracy of better than 0.5 pixel in planimetry and a seamless visual effect. For regions with small relief, replacing the 1 m DEM used as the elevation constraint with a global DEM with 1 km grid spacing, SRTM with 90 m grid spacing, or ASTER GDEM V2 with 30 m grid spacing caused almost no loss of accuracy. These results demonstrate the effectiveness and feasibility of the stitching method.
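A minimal sketch of the image-space affine correction at the heart of such bias-compensated RFM models (the coefficient values and grid dimensions below are placeholders, not GF-2 calibration results):

```python
import numpy as np

def affine_correct(cols, rows, coeffs):
    """Bias-compensation affine transform in image space:
    col' = a0 + a1*col + a2*row,  row' = b0 + b1*col + b2*row."""
    a0, a1, a2, b0, b1, b2 = coeffs
    cols = np.asarray(cols, dtype=float)
    rows = np.asarray(rows, dtype=float)
    return a0 + a1 * cols + a2 * rows, b0 + b1 * cols + b2 * rows

# A virtual grid of image points; applying the estimated affine coefficients
# to such a grid yields the control points used to refit (update) the RPCs
# of the stitched scene.
grid_c, grid_r = np.meshgrid(np.linspace(0, 6000, 7), np.linspace(0, 5000, 6))
cc, rr = affine_correct(grid_c, grid_r, (1.8, 1.0001, 2e-5, -0.6, -1e-5, 0.9999))
```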
Physicists Get INSPIREd: INSPIRE Project and Grid Applications
NASA Astrophysics Data System (ADS)
Klem, Jukka; Iwaszkiewicz, Jan
2011-12-01
INSPIRE is the new high-energy physics scientific information system developed by CERN, DESY, Fermilab and SLAC. INSPIRE combines the curated and trusted contents of the SPIRES database with Invenio digital library technology. INSPIRE contains the entire HEP literature, about one million records, and in addition to becoming the reference HEP scientific information platform, it aims to provide new kinds of data mining services and metrics to assess the impact of articles and authors. Grid and cloud computing provide new opportunities to offer better services in areas that require large CPU and storage resources, including document Optical Character Recognition (OCR) processing, full-text indexing of articles and improved metrics. D4Science-II is a European project that develops and operates an e-Infrastructure supporting Virtual Research Environments (VREs). It develops an enabling technology (gCube) which implements a mechanism for facilitating the interoperation of its e-Infrastructure with other autonomously running data e-Infrastructures, thereby creating the core of an e-Infrastructure ecosystem. INSPIRE is one of the e-Infrastructures participating in the D4Science-II project, in the context of which it makes some of its resources and services available to other members of the resulting ecosystem. Moreover, it benefits from the ecosystem via a dedicated Virtual Organization giving access to an array of resources, ranging from computing and storage resources of grid infrastructures to data and services.
CO2 Mitigation Measures of Power Sector and Its Integrated Optimization in China
Dai, Pan; Chen, Guang; Zhou, Hao; Su, Meirong; Bao, Haixia
2012-01-01
The power sector is responsible for about 40% of total CO2 emissions in the world and plays a leading role in climate change mitigation. In this study, measures that lower CO2 emissions from the supply side, the demand side, and the power grid are discussed, based on which an integrated optimization model of CO2 mitigation (IOCM) is proposed. Virtual energy, referring to the energy-saving capacity of both the demand side and the power grid, is planned jointly with conventional energy on the supply side in IOCM. The optimal plan of energy distribution, considering both economic benefits and mitigation benefits, is then obtained by applying IOCM. The results indicate that development of demand side management (DSM) and the smart grid can make great contributions to CO2 mitigation in China's power sector, reducing CO2 emissions by 10.02% in 2015 and by 12.59% in 2020. PMID:23213305
Wide range radioactive gas concentration detector
Anderson, David F.
1984-01-01
A wide range radioactive gas concentration detector and monitor which is capable of measuring radioactive gas concentrations over a range of eight orders of magnitude. The device of the present invention is designed to have an ionization chamber which is sufficiently small to give a fast response time for measuring radioactive gases but sufficiently large to provide accurate readings at low concentration levels. Closely spaced parallel plate grids provide a uniform electric field in the active region to improve the accuracy of measurements and reduce ion migration time so as to virtually eliminate errors due to ion recombination. The parallel plate grids are fabricated with a minimal surface area to reduce the effects of contamination resulting from absorption of contaminating materials on the surface of the grids. Additionally, the ionization chamber wall is spaced a sufficient distance from the active region of the ionization chamber to minimize contamination effects.
INL and NREL Demonstrate Power Grid Simulation at a Distance | News | NREL
Real-time digital simulators (RTDSs) can form a virtual laboratory that allows multiple laboratories to cooperate on energy integration. The National Renewable Energy Laboratory (NREL) and Idaho National Laboratory (INL) have successfully demonstrated this capability within the DOE national laboratory complex: the two national laboratories were able to connect their power grid simulations at a distance.
Conception and characterization of a virtual coplanar grid for a 11×11 pixelated CZT detector
NASA Astrophysics Data System (ADS)
Espagnet, Romain; Frezza, Andrea; Martin, Jean-Pierre; Hamel, Louis-André; Després, Philippe
2017-07-01
Due to the low mobility of holes in CZT, commercially available detectors with a relatively large volume typically use a pixelated anode structure. They are mostly used in imaging applications and often require a dense electronic readout scheme. These large-volume detectors are also interesting for high-sensitivity applications, and a CZT-based blood gamma counter was developed from a commercially available 20×20×15 mm3 crystal having an 11×11 pixelated readout scheme. A method is proposed here to reduce the number of channels required to use the crystal in a high-sensitivity counting application dedicated to pharmacokinetic modelling in PET and SPECT. Inspired by a classic coplanar anode, a virtual coplanar grid was implemented by connecting the 121 pixels of the detector to form intercalated bands. The layout, the front-end electronics and the characterization of the detector in this 2-channel anode geometry are presented. The coefficients required to compensate for electron trapping in CZT were determined experimentally to improve the performance. The resulting virtual coplanar detector has an intrinsic efficiency of 34% and an energy resolution of 8% at 662 keV. The detector's response was linear between 80 keV and 1372 keV. This suggests that large CZT crystals offer an excellent alternative to scintillation detectors for some applications, especially those where high sensitivity and compactness are required.
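The underlying coplanar-grid subtraction can be sketched in a few lines (a generic illustration of P. N. Luke's coplanar-grid technique, not the authors' electronics; the relative gain value here is a placeholder to be calibrated experimentally):

```python
import numpy as np

def coplanar_signal(s_collecting: np.ndarray,
                    s_noncollecting: np.ndarray,
                    g: float = 0.8) -> np.ndarray:
    """Subtract the induced signals of the two interleaved anode bands.

    With a relative gain g < 1 the difference signal becomes nearly
    independent of the interaction depth, compensating for electron
    trapping; the virtual coplanar grid applies the same subtraction to
    the two groups of interconnected pixels.
    """
    return s_collecting - g * s_noncollecting
```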
1986-05-01
4. Bossi, J. A., Price, G. A., and Winkleblack, S. A., "Flexible Spacecraft Controller Design Using the Integrated Analysis Capability (IAC)," AIAA... P., "Integrated Control System Design Capabilities at the Goddard Space Flight Center," Proceedings of the 2nd IEEE Control Systems Society Symposium on Computer-Aided Control System Design (CACSD), Santa Barbara, California, March 13-15, 1985. 6. Frisch, H. P., "Integrated Analysis Capability"
Modulational stability of periodic solutions of the Kuramoto-Sivashinsky equation
NASA Technical Reports Server (NTRS)
Papageorgiou, Demetrios T.; Papanicolaou, George C.; Smyrlis, Yiorgos S.
1993-01-01
We study the long-wave, modulational, stability of steady periodic solutions of the Kuramoto-Sivashinsky equation. The analysis is fully nonlinear at first, and can in principle be carried out to all orders in the small parameter, which is the ratio of the spatial period to a characteristic length of the envelope perturbations. In the linearized regime, we recover a high-order version of the results of Frisch, She, and Thual, which shows that the periodic waves are much more stable than previously expected.
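For reference, the equation under study, written in one standard normalization (a common convention, not necessarily the exact scaling used in the paper), is

$$ u_t + u\,u_x + u_{xx} + u_{xxxx} = 0, $$

where the backward-diffusion term $u_{xx}$ destabilizes long waves and the hyperdiffusion term $u_{xxxx}$ damps short ones; the steady periodic solutions whose modulational stability is analyzed are the cellular states arising from this balance.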
Too much noise on the dance floor
Schürch, Roger; Couvillon, Margaret J.
2013-01-01
Successful honey bee foragers communicate where they have found a good resource with the waggle dance, a symbolic language that encodes a distance and direction. Both of these components are repeated several times (1 to > 100) within the same dance. Additionally, both these components vary within a dance. Here we discuss some causes and consequences of intra-dance and inter-dance angular variation and advocate revisiting von Frisch and Lindauer’s earlier work to gain a better understanding of honey bee foraging ecology. PMID:23750292
Mass production of extensive air showers for the Pierre Auger Collaboration using Grid Technology
NASA Astrophysics Data System (ADS)
Lozano Bahilo, Julio; Pierre Auger Collaboration
2012-06-01
When ultra-high-energy cosmic rays enter the atmosphere they interact, producing extensive air showers (EAS), which are the objects studied by the Pierre Auger Observatory. The number of particles involved in an EAS at these energies is of the order of billions, and the generation of a single simulated EAS requires many hours of computing time with current processors. In addition, the storage space consumed by the output of one simulated EAS is very high. Therefore we have to make use of Grid resources to be able to generate sufficient quantities of showers for our physics studies in reasonable time periods. We have developed a set of highly automated scripts, written in common scripting languages, to deal with the large number of jobs which we have to submit regularly to the Grid. In spite of the low number of sites supporting our Virtual Organization (VO), we have reached the top spot in CPU consumption among non-LHC (Large Hadron Collider) VOs within EGI (European Grid Infrastructure).
The GridPP DIRAC project - DIRAC for non-LHC communities
NASA Astrophysics Data System (ADS)
Bauer, D.; Colling, D.; Currie, R.; Fayer, S.; Huffman, A.; Martyniak, J.; Rand, D.; Richards, A.
2015-12-01
The GridPP consortium in the UK is currently testing a multi-VO DIRAC service aimed at non-LHC VOs. These VOs (Virtual Organisations) are typically small and generally do not have a dedicated computing support post. The majority of these represent particle physics experiments (e.g. NA62 and COMET), although the scope of the DIRAC service is not limited to this field. A few VOs have designed bespoke tools around the EMI-WMS & LFC, while others have so far eschewed distributed resources as they perceive the overhead for accessing them to be too high. The aim of the GridPP DIRAC project is to provide an easily adaptable toolkit for such VOs in order to lower the threshold for access to distributed resources such as Grid and cloud computing. As well as hosting a centrally run DIRAC service, we will also publish our changes and additions to the upstream DIRAC codebase under an open-source license. We report on the current status of this project and show increasing adoption of DIRAC within the non-LHC communities.
A secure and efficiently searchable health information architecture.
Yasnoff, William A
2016-06-01
Patient-centric repositories of health records are an important component of health information infrastructure. However, patient information in a single repository is potentially vulnerable to loss of the entire dataset from a single unauthorized intrusion. A new health record storage architecture, the personal grid, eliminates this risk by separately storing and encrypting each person's record. The tradeoff for this improved security is that a personal grid repository must be searched sequentially, since each record must be individually accessed and decrypted. To allow reasonable search times for large numbers of records, parallel processing with hundreds (or even thousands) of on-demand virtual servers (now available in cloud computing environments) is used. Estimated search times for a 10 million record personal grid using 500 servers vary from 7 to 33 min depending on the complexity of the query. Since extremely rapid searching is not a critical requirement of health information infrastructure, the personal grid may provide a practical and useful alternative architecture that eliminates the large-scale security vulnerabilities of traditional databases by sacrificing unnecessary searching speed.
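A quick back-of-envelope check of the figures above (assuming the records are divided evenly across the servers, which the architecture implies):

```python
# Each of 500 servers sequentially decrypts and matches its share of the
# 10 million records; the per-record budget follows from the quoted times.
records, servers = 10_000_000, 500
per_server = records // servers               # 20,000 records per server

for total_min in (7, 33):                     # simple vs. complex query
    per_record_ms = total_min * 60 * 1000 / per_server
    print(f"{total_min} min total -> ~{per_record_ms:.0f} ms per record")
# 7 min  -> ~21 ms per record; 33 min -> ~99 ms per record
```

So the quoted search times correspond to a plausible 21-99 ms per record for access, decryption and query evaluation.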
Augmenting the access grid using augmented reality
NASA Astrophysics Data System (ADS)
Li, Ying
2012-01-01
The Access Grid (AG) targets an advanced collaboration environment with which multi-party groups of people from remote sites can collaborate over high-performance networks. However, the current AG still employs VIC (Video Conferencing Tool) to offer only plain video for remote communication, while most AG users expect to collaboratively reference and manipulate 3D geometric models of grid services' results within the live video of an AG session. Augmented Reality (AR) techniques can overcome these deficiencies through their characteristic combination of the virtual and the real, real-time interaction, and 3D registration, so it is natural for the AG to utilize AR to better support the advanced collaboration environment. This paper introduces an effort to augment the AG by adding support for AR capability, which is encapsulated in the node service infrastructure as the Augmented Reality Service (ARS). The ARS can merge 3D geometric models of grid services' results with the real video scene of the AG into one AR environment, and gives distributed AG users the opportunity to participate interactively and collaboratively in the AR environment with a better experience.
System-Level Virtualization Research at Oak Ridge National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Stephen L; Vallee, Geoffroy R; Naughton, III, Thomas J
2010-01-01
System-level virtualization, originally a technique for effectively sharing what were then considered large computing resources, faded from the spotlight as individual workstations gained popularity with a one machine, one user approach; today it is enjoying a rebirth. One reason for this resurgence is that the simple workstation has grown in capability to rival anything available in the past. Thus, computing centers are again looking at the price/performance benefit of sharing a single computing box via server consolidation. However, industry is concentrating only on the benefits of using virtualization for server consolidation (enterprise computing), whereas our interest is in leveraging virtualization to advance high-performance computing (HPC). While these two interests may appear to be orthogonal, one consolidating multiple applications and users on a single machine while the other requires all the power from many machines to be dedicated solely to its purpose, we propose that virtualization does provide attractive capabilities that may be exploited to the benefit of HPC interests. This raises two fundamental questions: is the concept of virtualization (a machine-sharing technology) really suitable for HPC, and if so, how does one go about leveraging these virtualization capabilities for the benefit of HPC? To address these questions, this document presents ongoing studies on the usage of system-level virtualization in an HPC context. These studies include an analysis of the benefits of system-level virtualization for HPC, a presentation of research efforts based on virtualization for system availability, and a presentation of research efforts for the management of virtual systems. The basis for this document was material presented by Stephen L. Scott at the Collaborative and Grid Computing Technologies meeting held in Cancun, Mexico on April 12-14, 2007.
A policy system for Grid Management and Monitoring
NASA Astrophysics Data System (ADS)
Stagni, Federico; Santinelli, Roberto; LHCb Collaboration
2011-12-01
Organizations using a Grid computing model are faced with non-traditional administrative challenges: the heterogeneous nature of the underlying resources requires professionals acting as Grid administrators. Members of a Virtual Organization (VO) can use a subset of the available resources and services in the grid infrastructure, and in an ideal world, the more resources are exploited the better. In the real world, the fewer faulty services, the better: experienced Grid administrators apply procedures for adding and removing services based on their status, as reported by an ever-growing set of monitoring tools. When a procedure is agreed and well-exercised, a formal policy can be derived from it. For this reason, using the DIRAC framework in the LHCb collaboration, we developed a policy system that can enforce management and operational policies in a VO-specific fashion. A single policy makes an assessment of the status of a subject, relative to one or more pieces of monitoring information. Subjects of the policies are monitored entities of an established Grid ontology. The status of a given entity is evaluated against a number of policies, whose results are then combined by a Policy Decision Point. These results are enforced by a Policy Enforcing Point, which provides plug-ins for actions such as raising alarms, sending notifications, and automatic addition and removal of services and resources from the Grid mask. Policy results are shown in the web portal, and site-specific views are also provided. This innovative system provides advantages in terms of procedure automation, information aggregation and problem solving.
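The decision logic can be illustrated with a toy Policy Decision Point (a sketch only; the status names and the worst-result-wins combination rule are simplifying assumptions, and the real DIRAC system supports richer combination policies):

```python
from enum import IntEnum

class Status(IntEnum):
    # Ordered so that max() picks the most severe assessment.
    ACTIVE = 0
    DEGRADED = 1
    BANNED = 2

def decide(policy_results):
    """Combine single-policy assessments of one monitored Grid entity
    (e.g. a site's storage element) into a single status."""
    return max(policy_results, default=Status.ACTIVE)

# e.g. a SAM-test policy reports ACTIVE while a downtime policy reports BANNED:
print(decide([Status.ACTIVE, Status.BANNED]).name)  # BANNED -> remove from mask
```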
Pinthong, Watthanai; Muangruen, Panya
2016-01-01
The development of high-throughput technologies, such as next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experimental data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high performance computer (HPC) is required for the analysis and interpretation. However, an HPC is expensive and difficult to access. Other means have been developed to give researchers the power of an HPC without the need to purchase and maintain one, such as cloud computing services and grid computing systems. In this study, we implemented grid computing in a computer training center environment, using the Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager and combining all desktop computers to virtualize the HPC. Fifty desktop computers were used to set up a grid system during the off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system, and the result and processing time were compared to those from a single desktop computer and an HPC. The estimated durations of a BLAST analysis of 4 million sequence reads on a desktop PC, an HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST via BOINC is an efficient alternative to an HPC for sequence alignment. The grid implementation with BOINC also helped tap unused computing resources during the off-hours and could easily be adapted to other available bioinformatics software. PMID:27547555
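The grid adaptation of BLAST is essentially an embarrassingly parallel split of the input reads into work units; a minimal sketch (the file name and unit size are hypothetical, and the real BOINC integration involves validators and assimilators not shown here):

```python
def fasta_workunits(path, reads_per_unit=10_000):
    """Split a FASTA file of sequencing reads into fixed-size chunks,
    each of which becomes one BOINC work unit running BLAST on a client."""
    with open(path) as fh:
        unit, count = [], 0
        for line in fh:
            if line.startswith(">"):
                if count == reads_per_unit:
                    yield "".join(unit)   # emit a full work unit
                    unit, count = [], 0
                count += 1
            unit.append(line)
        if unit:
            yield "".join(unit)           # final partial unit

# for i, wu in enumerate(fasta_workunits("reads.fasta")): submit(i, wu)
```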
Implementation of a Virtual Microphone Array to Obtain High Resolution Acoustic Images
Izquierdo, Alberto; Suárez, Luis; Suárez, David
2017-01-01
Using arrays of digital MEMS (Micro-Electro-Mechanical System) microphones with FPGA-based (Field Programmable Gate Array) acquisition/processing systems allows building systems with hundreds of sensors at a reduced cost. The problem arises when systems with thousands of sensors are needed. This work analyzes the implementation and performance of a virtual array with 6400 (80 × 80) MEMS microphones. The virtual array is implemented by changing the position of a physical array of 64 (8 × 8) microphones over a grid of 10 × 10 positions, using a 2D positioning system, giving the virtual array a spatial aperture of 1 × 1 m2. Based on the SODAR (SOund Detection And Ranging) principle, the measured beampattern and the focusing capacity of the virtual array have been analyzed; because the dimensions of the array are large in comparison with the distance between the target (a mannequin) and the array, the beamforming algorithms must assume spherical waves. Finally, acoustic images of the mannequin were obtained for different frequency and range values, showing high angular resolution and the possibility of identifying different parts of the mannequin's body. PMID:29295485
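The focusing step can be sketched as near-field delay-and-sum beamforming (an illustrative implementation under the spherical-wave assumption stated above; integer-sample delays and the parameter names are simplifications):

```python
import numpy as np

def focus_power(signals, mic_xyz, fs, focal_point, c=343.0):
    """Steer a (virtual) planar array at one focal point.

    signals: (n_mics, n_samples) array; mic_xyz: (n_mics, 3) positions in m;
    focal_point: 3-vector. Spherical wavefronts are assumed: each channel is
    advanced by its propagation delay from the focal point before summation.
    """
    dists = np.linalg.norm(mic_xyz - focal_point, axis=1)
    shifts = np.round((dists - dists.min()) / c * fs).astype(int)
    n = signals.shape[1] - shifts.max()
    summed = sum(s[k:k + n] for s, k in zip(signals, shifts))
    return float(np.mean(summed ** 2))  # beamformed output power
```

Scanning the focal point over a grid of candidate positions and plotting the returned power yields the acoustic image.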
Initial steps towards a production platform for DNA sequence analysis on the grid.
Luyf, Angela C M; van Schaik, Barbera D C; de Vries, Michel; Baas, Frank; van Kampen, Antoine H C; Olabarriaga, Silvia D
2010-12-14
Bioinformatics is confronted with a new data explosion due to the availability of high-throughput DNA sequencers. Data storage and analysis become a problem on local servers, and it is therefore necessary to switch to other IT infrastructures. Grid and workflow technology can help to handle the data more efficiently, as well as facilitate collaborations. However, interfaces to grids are often unfriendly to novice users. In this study we reused a platform that was developed in the VL-e project for the analysis of medical images. Data transfer, workflow execution and job monitoring are operated from one graphical interface. We developed workflows for two sequence alignment tools (BLAST and BLAT) as a proof of concept, and the analysis time was significantly reduced. All workflows and executables are available to the members of the Dutch Life Science Grid and the VL-e Medical virtual organizations. All components are open source and can be transported to other grid infrastructures. The availability of in-house expertise and tools facilitates the usage of grid resources by new users. Our first results indicate that this is a practical, powerful and scalable solution to address the capacity and collaboration issues raised by the deployment of next-generation sequencers. We currently adopt this methodology on a daily basis for DNA sequencing and other applications. More information and source code is available via http://www.bioinformaticslaboratory.nl/
Data location-aware job scheduling in the grid. Application to the GridWay metascheduler
NASA Astrophysics Data System (ADS)
Delgado Peris, Antonio; Hernandez, Jose; Huedo, Eduardo; Llorente, Ignacio M.
2010-04-01
Grid infrastructures constitute nowadays the core of the computing facilities of the biggest LHC experiments. These experiments produce and manage petabytes of data per year and run thousands of computing jobs every day to process that data. It is the duty of metaschedulers to allocate the tasks to the most appropriate resources at the proper time. Our work reviews the policies that have been proposed for the scheduling of grid jobs in the context of very data-intensive applications. We indicate some of the practical problems that such models will face and describe what we consider essential characteristics of an optimum scheduling system: the aim to minimise not only job turnaround time but also data replication, the flexibility to support different virtual organisation requirements, and the capability to coordinate the tasks of data placement and job allocation while keeping their execution decoupled. These ideas have guided the development of an enhanced prototype for GridWay, a general-purpose metascheduler, part of the Globus Toolkit and a member of EGEE's RESPECT program. GridWay's current scheduling algorithm is unaware of data location. Our prototype makes it possible for job requests to express data needs not only as absolute requirements but also as functions for resource ranking. As our tests show, this makes it more flexible than currently used resource brokers in implementing different data-aware scheduling algorithms.
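The flavour of such a data-aware rank function can be sketched as follows (the field names, weights and linear form are illustrative assumptions, not GridWay's actual ranking syntax):

```python
def rank(resource, job, w_cpu=1.0, w_data=50.0):
    """Prefer resources that already hold replicas of the job's inputs,
    so data location influences ranking instead of being ignored."""
    local = len(job["inputs"] & resource["datasets"])  # replicas on site
    return w_cpu * resource["free_slots"] + w_data * local

site_a = {"free_slots": 120, "datasets": {"run42"}}
site_b = {"free_slots": 150, "datasets": set()}
job = {"inputs": {"run42"}}
best = max((site_a, site_b), key=lambda r: rank(r, job))  # site_a wins
```

A job that instead declares the data as an absolute requirement would simply filter out site_b before ranking.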
Use of containerisation as an alternative to full virtualisation in grid environments.
NASA Astrophysics Data System (ADS)
Long, Robin
2015-12-01
Virtualisation is a key tool on the grid. It can be used to provide varying work environments or as part of a cloud infrastructure. Virtualisation itself carries certain overheads that decrease the performance of the system: extra resources are required to virtualise the software and hardware stack, and CPU cycles are wasted instantiating or destroying virtual machines for each job. With the rise and improvement of containerisation, where only the software stack is kept separate and no hardware or kernel virtualisation is used, there is scope for speed improvements and efficiency increases over standard virtualisation. We compare containerisation and virtualisation, including a comparison against bare-metal machines as a benchmark.
Tomography for two-dimensional gas temperature distribution based on TDLAS
NASA Astrophysics Data System (ADS)
Luo, Can; Wang, Yunchu; Xing, Fei
2018-03-01
Based on tunable diode laser absorption spectroscopy (TDLAS), tomography is used to reconstruct the combustion gas temperature distribution. The effects of the number of rays, the number of grid cells, and the ray spacing on the temperature reconstruction for parallel rays are investigated. The reconstruction quality improves with the number of rays but levels off once the ray number exceeds a certain value. The best quality is achieved when η is between 0.5 and 1. A virtual ray method combined with the reconstruction algorithms is tested, and it is found to be effective in improving the accuracy of the reconstruction compared with the original method. Linear interpolation and cubic spline interpolation are used to improve the calculation accuracy of the virtual ray absorption values; according to the calculation results, cubic spline interpolation is better. Moreover, the temperature distribution of a TBCC combustion chamber is used to validate these conclusions.
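The virtual ray idea (interpolating absorbance values between the measured rays) can be sketched as follows; the ray positions and absorbance values are synthetic placeholders:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Integrated absorbances measured along the real, parallel rays
# (positions in cm along the projection axis).
y_real = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
a_real = np.array([0.02, 0.11, 0.18, 0.12, 0.03])

y_virtual = np.linspace(0.0, 4.0, 17)            # denser set of virtual rays

a_lin = np.interp(y_virtual, y_real, a_real)     # linear interpolation
a_cub = CubicSpline(y_real, a_real)(y_virtual)   # cubic spline interpolation
```

Feeding the augmented (real plus virtual) projections to the reconstruction algorithm is what improves the accuracy; the comparison above is the one in which the cubic spline was found to perform better.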
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
During the last years, several Grid computing centres chose virtualization as a better way to manage diverse use cases with self-consistent environments on the same bare infrastructure. The maturity of control interfaces (such as OpenNebula and OpenStack) opened the possibility of easily changing the amount of resources assigned to each use case by simply turning virtual machines on and off. Some of those private clouds run, in production, copies of the Virtual Analysis Facility, a fully virtualized and self-contained batch analysis cluster capable of expanding and shrinking automatically upon need; however, resource starvation occurs frequently, as expansion has to compete with other virtual machines running long-living batch jobs. Such batch nodes cannot relinquish their resources in a timely fashion: the more jobs they run, the longer it takes to drain and shut them off, while making one-job virtual machines introduces a non-negligible virtualization overhead. By improving several components of the Virtual Analysis Facility we have realized an experimental “Docked” Analysis Facility for ALICE, which leverages containers instead of virtual machines to provide performance and security isolation. We will present the techniques we have used to address practical problems, such as software provisioning through CVMFS, as well as our considerations on the maturity of containers for High Performance Computing. As the abstraction layer is thinner, our Docked Analysis Facilities may feature a more fine-grained sizing, down to single-job node containers: we will show how this approach positively impacts automatic cluster resizing through the deployment of lightweight pilot containers that poll the central queue.
2005 TACOM APBI - Partnering to Reset, Recapitalize and Restructure the Force
2005-10-28
training. ... Force Projection Technology Challenges (cont.): Force Sustainment Systems. Develop smart airdrop systems using Global... UART). General Purpose Electronic Test Equipment (GPETE): transform multiple conventional GPETE instruments into a single Virtual Instrument with a... Consists of tools and equipment to refill and repair carbon dioxide fire extinguishers. Rapid Runway Repair: components include sand grid sections
OASIS: a data and software distribution service for Open Science Grid
NASA Astrophysics Data System (ADS)
Bockelman, B.; Caballero Bejar, J.; De Stefano, J.; Hover, J.; Quick, R.; Teige, S.
2014-06-01
The Open Science Grid encourages the concept of software portability: a user's scientific application should be able to run at as many sites as possible. It is necessary to provide a mechanism for OSG Virtual Organizations to install software at sites. Since its initial release, the OSG Compute Element has provided an application software installation directory to Virtual Organizations, where they can create their own sub-directory, install software into that sub-directory, and have the directory shared on the worker nodes at that site. The current model has shortcomings with regard to permissions, policies, versioning, and the lack of a unified, collective procedure or toolset for deploying software across all sites. Therefore, a new mechanism for distributing data and software is desirable. The architecture of the OSG Application Software Installation Service (OASIS) is a server-client model: the software and data are installed only once, in a single place, and are automatically distributed to all client sites simultaneously. Central file distribution offers other advantages, including server-side authentication and authorization, activity records, quota management, data validation and inspection, and well-defined versioning and deletion policies. The architecture, as well as a complete analysis of the current implementation, is described in this paper.
A Security Monitoring Framework For Virtualization Based HEP Infrastructures
NASA Astrophysics Data System (ADS)
Gomez Ramirez, A.; Martinez Pedreira, M.; Grigoras, C.; Betev, L.; Lara, C.; Kebschull, U.;
2017-10-01
High Energy Physics (HEP) distributed computing infrastructures require automatic tools to monitor, analyze and react to potential security incidents. These tools should collect and inspect data such as resource consumption, logs and sequences of system calls to detect anomalies that indicate the presence of a malicious agent. They should also be able to perform automated reactions to attacks without administrator intervention. We describe a novel framework that meets these requirements, with a proof-of-concept implementation for the ALICE experiment at CERN. We show how we achieve a fully virtualized environment that improves security by isolating services and jobs without a significant performance impact. We also describe a dataset collected for Machine Learning based Intrusion Prevention and Detection Systems on Grid computing. This dataset is composed of resource consumption measurements (such as CPU, RAM and network traffic), log files from operating system services, and system call data collected from production jobs running in an ALICE Grid test site, together with a large set of malware samples collected from security research sites. Based on this dataset, we will proceed to develop Machine Learning algorithms able to detect malicious jobs.
Torralba, Marta; Díaz-Pérez, Lucía C.
2017-01-01
This article presents a self-calibration procedure and the experimental results for the geometrical characterisation of a 2D laser system operating along a large working range (50 mm × 50 mm) with submicrometre uncertainty. Its purpose is to correct the geometric errors of the 2D laser system setup generated when positioning the two laser heads and the plane mirrors used as reflectors. The non-calibrated artefact used in this procedure is a commercial grid encoder that is also a measuring instrument. Therefore, the self-calibration procedure also allows the determination of the geometrical errors of the grid encoder, including its squareness error. The precision of the proposed algorithm is tested using virtual data. Actual measurements are subsequently registered, and the algorithm is applied. Once the laser system is characterised, the error of the grid encoder is calculated along the working range, resulting in an expanded submicrometre calibration uncertainty (k = 2) for the X and Y axes. The results of the grid encoder calibration are comparable to the errors provided by the calibration certificate for its main central axes. It is, therefore, possible to confirm the suitability of the self-calibration methodology proposed in this article. PMID:28858239
Use of Emerging Grid Computing Technologies for the Analysis of LIGO Data
NASA Astrophysics Data System (ADS)
Koranda, Scott
2004-03-01
The LIGO Scientific Collaboration (LSC) today faces the challenge of enabling the analysis of terabytes of LIGO data by hundreds of scientists from institutions all around the world. To meet this challenge the LSC is developing tools, infrastructure, applications, and expertise leveraging Grid Computing technologies available today, and making compute resources at sites across the United States and Europe available to LSC scientists. We use digital credentials for strong and secure authentication and authorization to compute resources and data. Building on top of products from the Globus project for high-speed data transfer and information discovery, we have created the Lightweight Data Replicator (LDR) to securely and robustly replicate data to resource sites. We have deployed at our computing sites the Virtual Data Toolkit (VDT) Server and Client packages, developed in collaboration with our partners in the GriPhyN and iVDGL projects, providing uniform access to distributed resources for users and their applications. Taken together, these Grid Computing technologies and infrastructure have formed the LSC DataGrid: a coherent and uniform environment across two continents for the analysis of gravitational-wave detector data. Much work remains, however, in order to scale current analyses, and recent lessons learned need to be integrated into the next generation of Grid middleware.
VisIVO: A Tool for the Virtual Observatory and Grid Environment
NASA Astrophysics Data System (ADS)
Becciani, U.; Comparato, M.; Costa, A.; Larsson, B.; Gheller, C.; Pasian, F.; Smareglia, R.
2007-10-01
We present the new features of VisIVO, software for the visualization and analysis of astrophysical data that can be retrieved from the Virtual Observatory framework, as well as of cosmological simulations; it runs on both Windows and GNU/Linux platforms. VisIVO is compliant with VO standards and supports the most important astronomical data formats, such as FITS, HDF5 and VOTables. It is free software and can be downloaded from the web site http://visivo.cineca.it. VisIVO can interoperate with other VO-compliant astronomical tools through PLASTIC (PLatform for AStronomical Tool InterConnection). This feature allows VisIVO to share data with many other astronomical packages to further analyze the loaded data.
Interstellar Dust in the Heliosheath: Tentative Discovery of the Magnetic Wall of the Heliosphere
NASA Astrophysics Data System (ADS)
Frisch, P. C.
2005-12-01
The evident identification of interstellar dust grains entrained in the magnetic wall of the heliosphere is reported. It is shown that the distribution of dust grains causing the weak polarization of light from nearby stars is consistent with polarization by small charged interstellar dust grains captured in the heliosphere magnetic wall (Tinbergen 1982, Frisch 2005). There is an offset between the deflected small charged polarizing dust grains, radius less than 0.2 microns, and the undeflected large grain population, radius larger than 0.2 microns. The region of maximum polarization is towards ecliptic coordinates (λ, β) = (295°, 0°), which is offset along the ecliptic longitude by about 35° from the heliosphere nose and extends to low ecliptic latitudes where the heliosphere magnetic wall is expected. An offset is also found between the best-aligned dust grains, near λ = 281° to 220°, and the upwind direction of the undeflected inflow of large grains seen by Ulysses and Galileo. In the aligned-grain region, the polarization strength anti-correlates with ecliptic latitude, indicating that the magnetic wall was predominantly at negative ecliptic latitudes when these data were acquired. These data are consistent with model predictions for an interstellar magnetic field which is tilted by 60° with respect to the ecliptic plane, and parallel to the galactic plane. References: Tinbergen, 1982: A&A, v105, p53; Frisch, 2005: to appear in ApJL.
Cloud services for the Fermilab scientific stakeholders
Timm, S.; Garzoglio, G.; Mhashilkar, P.; ...
2015-12-23
As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest Squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we were also able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. We present in detail the technological improvements that were used to make this work a reality.
A national-scale authentication infrastructure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, R.; Engert, D.; Foster, I.
2000-12-01
Today, individuals and institutions in science and industry are increasingly forming virtual organizations to pool resources and tackle a common goal. Participants in virtual organizations commonly need to share resources such as data archives, computer cycles, and networks: resources usually available only with restrictions based on the requested resource's nature and the user's identity. Thus, any sharing mechanism must have the ability to authenticate the user's identity and determine whether the user is authorized to request the resource. Virtual organizations tend to be fluid, however, so authentication mechanisms must be flexible and lightweight, allowing administrators to quickly establish and change resource-sharing arrangements. However, because virtual organizations complement rather than replace existing institutions, sharing mechanisms cannot change local policies and must allow individual institutions to maintain control over their own resources. Our group has created and deployed an authentication and authorization infrastructure that meets these requirements: the Grid Security Infrastructure. GSI offers secure single sign-ons and preserves site control over access policies and local security. It provides its own versions of common applications, such as FTP and remote login, and a programming interface for creating secure applications.
Self-localization of wireless sensor networks using self-organizing maps
NASA Astrophysics Data System (ADS)
Ertin, Emre; Priddy, Kevin L.
2005-03-01
Recently there has been a renewed interest in the notion of deploying large numbers of networked sensors for applications ranging from environmental monitoring to surveillance. In a typical scenario a number of sensors are distributed in a region of interest, each equipped with sensing, processing and communication capabilities. The information gathered from the sensors can be used to detect, track and classify objects of interest. For many applications the sensors' locations are crucial for interpreting the data collected from them. Scalability requirements dictate sensor nodes that are inexpensive devices without dedicated localization hardware such as GPS. Therefore the network has to rely on information collected within the network to self-localize. In the literature a number of algorithms have been proposed for network localization which use measurements informative of range, angle, or proximity between nodes. Recent work by Patwari and Hero relies on sensor data without explicit range estimates; the assumption is that the correlation structure in the data is a monotone function of the inter-sensor distances. In this paper we propose a new method based on unsupervised learning techniques to extract location information from the sensor data itself. We consider a grid of virtual nodes and fit this grid to the actual sensor network data using self-organizing maps. The known sensor network geometry can then be used to rotate and scale the grid to a global coordinate system. Finally, we illustrate how the virtual nodes' location information can be used to track a target.
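A minimal self-organizing map fit of a virtual grid can be sketched as follows (a toy in which the observations are 2-D points; the actual method feeds sensor data vectors whose correlation structure encodes inter-sensor distance, and all constants here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.uniform(0.0, 1.0, size=(200, 2))        # stand-in sensor observations

side = 8                                          # 8 x 8 virtual nodes
gx, gy = np.meshgrid(np.arange(side), np.arange(side))
grid_idx = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(float)
weights = rng.uniform(0.0, 1.0, size=(side * side, 2))

for t in range(2000):
    x = obs[rng.integers(len(obs))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))    # best-matching unit
    lr = 0.5 * np.exp(-t / 1000)                         # learning-rate decay
    sigma = 3.0 * np.exp(-t / 1000)                      # neighbourhood decay
    d2 = ((grid_idx - grid_idx[bmu]) ** 2).sum(axis=1)   # grid-space distance
    h = np.exp(-d2 / (2 * sigma ** 2))[:, None]          # neighbourhood weight
    weights += lr * h * (x - weights)                    # pull nodes toward x

# 'weights' now holds virtual-node coordinates arranged as a smooth grid
# covering the sensor field; a few anchor nodes with known positions would
# rotate and scale it into the global coordinate system.
```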
Virtual phantom magnetic resonance imaging (ViP MRI) on a clinical MRI platform.
Saint-Jalmes, Hervé; Bordelois, Alejandro; Gambarota, Giulio
2018-01-01
The purpose of this study was to implement Virtual Phantom Magnetic Resonance Imaging (ViP MRI), a technique that allows reference signals to be generated in MR images using radiofrequency (RF) signals, on a clinical MR system, and to test newly designed virtual phantoms. MRI experiments were conducted on a 1.5 T MRI scanner. Electromagnetic modelling of the ViP system was done using the principle of reciprocity. The ViP RF signals were generated using a compact waveform generator (dimensions of 26 cm × 18 cm × 16 cm) connected to a homebuilt 25 mm-diameter RF coil, and were transmitted to the MRI scanner bore simultaneously with the acquisition of the signal from the object of interest. Different types of MRI data acquisition (2D and 3D gradient-echo) as well as different phantoms, including the Shepp-Logan phantom, were tested. Furthermore, a uniquely designed virtual phantom in the shape of a grid was generated; this newly proposed phantom allows investigation of the vendor distortion correction field. High-quality MR images of virtual phantoms were obtained. An excellent agreement was found between the experimental data and the inverse cube law, the functional dependence expected from the electromagnetic modelling of the ViP system. Short-term time stability measurements yielded a coefficient of variation in the signal intensity over time equal to 0.23% and 0.13% for the virtual and physical phantoms, respectively. MR images of the virtual grid-shaped phantom were reconstructed with the vendor distortion correction, which allowed a direct visualization of the vendor distortion correction field. Furthermore, as expected from the electromagnetic modelling of the ViP system, a very compact coil (diameter ~ cm) and very small currents (intensity ~ mA) were sufficient to generate a signal comparable to that of physical phantoms in MRI experiments. The ViP MRI technique was successfully implemented on a clinical MR system. One of the major advantages of ViP MRI over previous approaches is that the generation and transmission of RF signals can be achieved with a self-contained apparatus. As such, the ViP MRI technique is transposable to different platforms (preclinical and clinical) from different vendors. It is also shown here that ViP MRI can be used to generate signals whose characteristics cannot be reproduced by physical objects; this could be exploited to assess MRI system properties, such as the vendor distortion correction field.
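The inverse cube law referred to above is the standard far-field scaling of a small current loop: by reciprocity, the signal a coil of radius $a$ induces at distance $r \gg a$ follows the magnetic dipole dependence

$$ S(r) \propto \frac{1}{r^{3}}, $$

which is the functional form the experimental data were fitted against.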
NASA Astrophysics Data System (ADS)
Pasian, F.
2015-06-01
The origins of the Italian contribution to the international Virtual Observatory (VO) were mainly tied to the definition and implementation of a Data Grid using Grid standards. From there on, by means of a step-wise evolution, activities grew to include the implementation of VO-aware tools and facilities and the production of services accessing data archives in ways compliant with the international VO standards. An important activity the Italian VO community has carried out is the dissemination of the VO capabilities to professionals, students and amateurs: in particular, an important and perhaps unique success has been bringing the VO into classrooms and using it as a powerful tool to teach astronomy at all levels, from junior high school to undergraduate courses. Lately, there has also been direct involvement of the Italian community in the definition of standards and services within the framework of the International Virtual Observatory Alliance (IVOA), and participation and leadership in the IVOA Working Groups. Along this path, the national funding for these activities has been rather low, although essential to carry the activities on. There were no bursts of funding to allow a quick rise in activities leading to the fast realisation of tools and systems; rather, the manpower involved in VObs.it has always been fairly low but steady. In view of managing a national VO initiative on a low budget, strategic choices were made to exploit the available resources and to guarantee a constant background activity, mainly geared towards providing services to the community, development in lower-priority VO areas, dissemination and support.
NASA Astrophysics Data System (ADS)
Hanisch, R. J.
2014-11-01
The concept of the Virtual Observatory arose more-or-less simultaneously in the United States and Europe circa 2000. Ten pages of Astronomy and Astrophysics in the New Millennium: Panel Reports (National Academy Press, Washington, 2001), that is, the detailed recommendations of the Panel on Theory, Computation, and Data Exploration of the 2000 Decadal Survey in Astronomy, are dedicated to describing the motivation for, scientific value of, and major components required in implementing the National Virtual Observatory. European initiatives included the Astrophysical Virtual Observatory at the European Southern Observatory, the AstroGrid project in the United Kingdom, and the Euro-VO (sponsored by the European Union). Organizational/conceptual meetings were held in the US at the California Institute of Technology (Virtual Observatories of the Future, June 13-16, 2000) and at ESO Headquarters in Garching, Germany (Mining the Sky, July 31-August 4, 2000; Toward an International Virtual Observatory, June 10-14, 2002). The nascent US, UK, and European VO projects formed the International Virtual Observatory Alliance (IVOA) at the June 2002 meeting in Garching, with yours truly as the first chair. The IVOA has grown to a membership of twenty-one national projects and programs on six continents, and has developed a broad suite of data access protocols and standards that have been widely implemented. Astronomers can now discover, access, and compare data from hundreds of telescopes and facilities, hosted at hundreds of organizations worldwide, stored in thousands of databases, all with a single query.
Xia, Kelin
2017-12-20
In this paper, a multiscale virtual particle based elastic network model (MVP-ENM) is proposed for the normal mode analysis of large-sized biomolecules. The multiscale virtual particle (MVP) model is proposed for the discretization of biomolecular density data. With this model, large-sized biomolecular structures can be coarse-grained into virtual particles such that a balance between model accuracy and computational cost can be achieved. An elastic network is constructed by assuming "connections" between virtual particles. The connection is described by a special harmonic potential function, which considers the influence of both the mass distributions and the distance relations of the virtual particles. Two independent models, i.e., the multiscale virtual particle based Gaussian network model (MVP-GNM) and the multiscale virtual particle based anisotropic network model (MVP-ANM), are proposed. It has been found that in the Debye-Waller factor (B-factor) prediction, the results from our MVP-GNM at high resolution are as good as the ones from the GNM. Even at low resolutions, our MVP-GNM can still capture the global behavior of the B-factor very well, with mismatches predominantly in regions with large B-factor values. Further, it has been demonstrated that the low-frequency eigenmodes from our MVP-ANM are highly consistent with the ones from the ANM, even at very low resolutions and with a coarse grid. Finally, the great advantage of the MVP-ANM model for large-sized biomolecules has been demonstrated using two poliovirus structures.
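For orientation, the conventional elastic network potential (the standard uniform-spring form, not the paper's exact MVP variant) is

$$ V = \frac{\gamma}{2} \sum_{\substack{i<j \\ \|\mathbf{r}^{0}_{ij}\| < r_c}} \left( \|\mathbf{r}_{ij}\| - \|\mathbf{r}^{0}_{ij}\| \right)^{2}, $$

with uniform spring constant $\gamma$ and cutoff $r_c$; the MVP-ENM described above can be read as replacing the uniform $\gamma$ with a pairwise weight that depends on the virtual particles' masses and separations.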
The Language Grid: supporting intercultural collaboration
NASA Astrophysics Data System (ADS)
Ishida, T.
2018-03-01
A variety of language resources already exist online. Unfortunately, since many language resources have usage restrictions, it is virtually impossible for each user to negotiate with every language resource provider when combining several resources to achieve the intended purpose. To increase the accessibility and usability of language resources (dictionaries, parallel texts, part-of-speech taggers, machine translators, etc.), we proposed the Language Grid [1]; it wraps existing language resources as atomic services and enables users to create new services by combining the atomic services, and reduces the negotiation costs related to intellectual property rights [4]. Our slogan is “language services from language resources.” We believe that modularization with recombination is the key to creating a full range of customized language environments for various user communities.
The Need of Nested Grids for Aerial and Satellite Images and Digital Elevation Models
NASA Astrophysics Data System (ADS)
Villa, G.; Mas, S.; Fernández-Villarino, X.; Martínez-Luceño, J.; Ojeda, J. C.; Pérez-Martín, B.; Tejeiro, J. A.; García-González, C.; López-Romero, E.; Soteres, C.
2016-06-01
Usual workflows for the production, archiving, dissemination and use of Earth observation images (both aerial and from remote sensing satellites) pose big interoperability problems, for example: non-alignment of pixels at the different levels of the pyramids, which makes it impossible to overlay, compare and mosaic different orthoimages without resampling them; and the need to apply multiple resampling and compression-decompression cycles. These problems cause great inefficiencies in production, dissemination through web services and processing in "Big Data" environments. Most of them can be avoided, or at least greatly reduced, with the use of a common "nested grid" for multiresolution production, archiving, dissemination and exploitation of orthoimagery, digital elevation models and other raster data. "Nested grids" are space allocation schemas that organize image footprints, pixel sizes and pixel positions at all pyramid levels, in order to achieve coherent and consistent multiresolution coverage of a whole working area. A "nested grid" must be complemented by an appropriate "tiling schema", ideally based on the "quad-tree" concept. In recent years a "de facto standard" grid and tiling schema has emerged and has been adopted by virtually all major geospatial data providers. It has also been adopted by OGC in its "WMTS Simple Profile" standard. In this paper we explain how the adequate use of this tiling schema as a common nested grid for orthoimagery, DEMs and other types of raster data constitutes the most practical solution to most of the interoperability problems of these types of data.
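For concreteness, the tile addressing of this de facto standard schema (the widely published Web Mercator "slippy map" formula with 256×256-pixel tiles and quad-tree zoom levels) is:

```python
import math

def tile_indices(lat_deg, lon_deg, zoom):
    """Tile column/row of a WGS84 point in the standard Web Mercator
    tiling schema; each zoom level quadruples the number of tiles."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
    return x, y

print(tile_indices(40.4168, -3.7038, 12))   # -> (2005, 1544), Madrid at zoom 12
```

Pixel sizes halve and tile counts quadruple at each level, which is exactly the nested-grid property the paper advocates.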
Obstacle-avoiding navigation system
Borenstein, Johann; Koren, Yoram; Levine, Simon P.
1991-01-01
A system for guiding an autonomous or semi-autonomous vehicle through a field of operation having obstacles to be avoided employs a memory containing data which define an array of grid cells corresponding to respective subfields in the vehicle's field of operation. Each grid cell in the memory contains a value indicative of the likelihood, or probability, that an obstacle is present in the respectively associated subfield. The values in the grid cells are incremented individually in response to each scan of the subfields, and precomputation and use of a look-up table avoid complex trigonometric functions. A further array of grid cells is fixed with respect to the vehicle to form a conceptual active window which overlies the incremented grid cells. Thus, when cells in the active window overlie grid cells having values indicative of the presence of obstacles, those values are used as multipliers of the precomputed vectorial values. In one embodiment of the invention, the resulting plurality of vectorial values are summed vectorially to produce a virtual composite repulsive vector, which is then summed vectorially with a target-directed vector to produce a resultant vector for guiding the vehicle. In an alternative embodiment, a plurality of vectors surrounding the vehicle is computed, each having a value corresponding to obstacle density; in that embodiment, target location information is used to select between alternative directions of travel having low associated obstacle densities.
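The repulsive/attractive vector summation of the first embodiment can be sketched as follows (an illustrative virtual force field computation in the spirit of the patent, not its exact formulation; constants are placeholders and the active window is assumed to lie inside the grid):

```python
import numpy as np

def steering_vector(grid, window, cell_size, vehicle_xy, target_xy, f_rep=1.0):
    """Sum repulsive forces from occupied cells of the active window with a
    unit attractive force toward the target.

    grid: 2-D array of obstacle certainty values; vehicle_xy, target_xy:
    np.array([x, y]) positions in metres; window: odd cell count per side.
    """
    force = np.zeros(2)
    half = window // 2
    cx, cy = (vehicle_xy / cell_size).astype(int)
    for i in range(cx - half, cx + half + 1):
        for j in range(cy - half, cy + half + 1):
            c = grid[i, j]                        # obstacle certainty value
            if c == 0:
                continue
            d = vehicle_xy - (np.array([i, j]) + 0.5) * cell_size
            r = np.linalg.norm(d)
            if r > 0:
                force += f_rep * c * d / r**3     # magnitude c/r^2 along d
    t = target_xy - vehicle_xy
    return force + t / np.linalg.norm(t)          # add target attraction
```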
Cyberinfrastructure for high energy physics in Korea
NASA Astrophysics Data System (ADS)
Cho, Kihyeon; Kim, Hyunwoo; Jeung, Minho; High Energy Physics Team
2010-04-01
We introduce the hierarchy of cyberinfrastructure, which consists of infrastructure (supercomputing and networks), Grid, e-Science, community and physics, from the bottom layer to the top layer. KISTI is the national headquarters of supercomputing, networks, Grid and e-Science in Korea; therefore, KISTI is the best place for high energy physicists to use cyberinfrastructure. We explain this concept for the CDF and ALICE experiments. The goal of e-Science is to study high energy physics anytime and anywhere, even when we are not on site at an accelerator laboratory. The components are data production, data processing and data analysis. Data production means taking both on-line and off-line shifts remotely. Data processing means running jobs anytime, anywhere using Grid farms. Data analysis means working together to publish papers using collaborative environments such as the EVO (Enabling Virtual Organization) system. We also present the global community activities of FKPPL (France-Korea Particle Physics Laboratory) and physics as the top layer.
Grid accounting service: state and future development
NASA Astrophysics Data System (ADS)
Levshina, T.; Sehgal, C.; Bockelman, B.; Weitzel, D.; Guru, A.
2014-06-01
During the last decade, large-scale federated distributed infrastructures have been continually developed and expanded. One of the crucial components of a cyber-infrastructure is an accounting service that collects data related to resource utilization and identity of users using resources. The accounting service is important for verifying pledged resource allocation per particular groups and users, providing reports for funding agencies and resource providers, and understanding hardware provisioning requirements. It can also be used for end-to-end troubleshooting as well as billing purposes. In this work we describe Gratia, a federated accounting service jointly developed at Fermilab and Holland Computing Center at University of Nebraska-Lincoln. The Open Science Grid, Fermilab, HCC, and several other institutions have used Gratia in production for several years. The current development activities include expanding Virtual Machines provisioning information, XSEDE allocation usage accounting, and Campus Grids resource utilization. We also identify the direction of future work: improvement and expansion of Cloud accounting, persistent and elastic storage space allocation, and the incorporation of WAN and LAN network metrics.
Software for Managing Parametric Studies
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; McCann, Karen M.; DeVivo, Adrian
2003-01-01
The Information Power Grid Virtual Laboratory (ILab) is a Practical Extraction and Reporting Language (PERL) graphical-user-interface computer program that generates shell scripts to facilitate parametric studies performed on the Grid. (The Grid denotes a worldwide network of supercomputers used for scientific and engineering computations involving data sets too large to fit on desktop computers.) Heretofore, parametric studies on the Grid have been impeded by the need to create control language scripts and edit input data files, painstaking tasks that are necessary for managing multiple jobs on multiple computers. ILab reflects an object-oriented approach to the automation of these tasks: all data and operations are organized into packages in order to accelerate development and debugging. A container or document object in ILab, called an experiment, contains all the information (data and file paths) necessary to define a complex series of repeated, sequenced, and/or branching processes. For convenience and to enable reuse, this object is serialized to and from disk storage. At run time, the current ILab experiment is used to generate required input files and shell scripts, create directories, copy data files, and then both initiate and monitor the execution of all computational processes.
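The "experiment" pattern the abstract describes can be sketched as follows; this is an illustrative Python rendering (ILab itself is written in PERL), and all class, field, and flag names here are hypothetical, not ILab's actual interfaces.

```python
import json
import pathlib
from itertools import product

class Experiment:
    """One serializable object holding everything needed to regenerate a parametric study."""

    def __init__(self, name, executable, parameters):
        self.name = name
        self.executable = executable
        self.parameters = parameters              # e.g. {"mach": [0.6, 0.8], "alpha": [0, 2]}

    def save(self, path):
        """Serialize the experiment to disk for later reuse."""
        pathlib.Path(path).write_text(json.dumps(self.__dict__))

    def generate_scripts(self, outdir):
        """Emit one shell script per point of the parameter cross-product."""
        outdir = pathlib.Path(outdir)
        outdir.mkdir(exist_ok=True)
        keys = list(self.parameters)
        for values in product(*self.parameters.values()):
            tag = "_".join(f"{k}{v}" for k, v in zip(keys, values))
            args = " ".join(f"--{k}={v}" for k, v in zip(keys, values))
            (outdir / f"run_{tag}.sh").write_text(
                f"#!/bin/sh\nmkdir -p {tag} && cd {tag}\n{self.executable} {args}\n")

exp = Experiment("wing_study", "./solver", {"mach": [0.6, 0.8], "alpha": [0, 2]})
exp.save("wing_study.json")
exp.generate_scripts("scripts")                    # four scripts, one per parameter point
```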
Experience with Multi-Tier Grid MySQL Database Service Resiliency at BNL
NASA Astrophysics Data System (ADS)
Wlodek, Tomasz; Ernst, Michael; Hover, John; Katramatos, Dimitrios; Packard, Jay; Smirnov, Yuri; Yu, Dantong
2011-12-01
We describe the use of F5's BIG-IP smart switch technology (3600 Series and Local Traffic Manager v9.0) to provide load balancing and automatic fail-over to multiple Grid services (GUMS, VOMS) and their associated back-end MySQL databases. This resiliency is introduced in front of the external application servers and also for the back-end database systems, which is what makes it "multi-tier". The combination of solutions chosen to ensure high availability of the services, in particular the database replication and fail-over mechanism, is discussed in detail. The paper explains the design and configuration of the overall system, including virtual servers, machine pools, and health monitors (which govern routing), as well as the master-slave database scheme and fail-over policies and procedures. Pre-deployment planning and stress testing are outlined. Integration of the systems with our Nagios-based facility monitoring and alerting is also described, as are the application characteristics of GUMS and VOMS that enable effective clustering. We then summarize our practical experiences and real-world scenarios resulting from operating a major US Grid center, and assess the applicability of our approach to other Grid services in the future.
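The two ingredients the abstract combines, health monitoring and pool-based routing with fail-over, can be illustrated generically; the sketch below is not F5 BIG-IP configuration, and the pool hostnames are hypothetical.

```python
import socket

# Hypothetical pool of back-end service hosts (host, port) pairs.
POOL = [("gums1.example.org", 3306), ("gums2.example.org", 3306)]

def is_healthy(host, port, timeout=2.0):
    """TCP connect check, the simplest kind of health monitor."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_backend(pool):
    """Route to the first healthy pool member; fail over automatically otherwise."""
    for host, port in pool:
        if is_healthy(host, port):
            return host, port
    raise RuntimeError("no healthy backend in pool")
```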
System-of-Systems Approach for Integrated Energy Systems Modeling and Simulation: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Saurabh; Ruth, Mark; Pratt, Annabelle
Today's electricity grid is the most complex system ever built, and the future grid is likely to be even more complex because it will incorporate distributed energy resources (DERs) such as wind, solar, and various other sources of generation and energy storage. The complexity is further augmented by the possible evolution to new retail market structures that provide incentives to owners of DERs to support the grid. To understand and test new retail market structures and technologies such as DERs, demand-response equipment, and energy management systems while providing reliable electricity to all customers, an Integrated Energy System Model (IESM) is being developed at NREL. The IESM is composed of a power flow simulator (GridLAB-D), home energy management systems implemented using GAMS/Pyomo, a market layer, and hardware-in-the-loop simulation (testing appliances such as HVAC, dishwasher, etc.). The IESM is a system-of-systems (SoS) simulator wherein the constituent systems are brought together in a virtual testbed. We will describe an SoS approach for developing a distributed simulation environment. We will elaborate on the methodology and the control mechanisms used in the co-simulation illustrated by a case study.
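A conceptual sketch of the co-simulation "master" loop such an SoS testbed needs: each constituent simulator advances one time step and boundary variables are exchanged between steps. The three stubs stand in for the power-flow, home-EMS, and market layers; their equations are invented placeholders, not IESM models.

```python
class Simulator:
    def step(self, t, inputs):                 # advance to time t, return outputs
        raise NotImplementedError

class PowerFlow(Simulator):
    def step(self, t, inputs):
        return {"voltage": 1.0 - 0.01 * inputs.get("load", 0.0)}

class HomeEMS(Simulator):
    def step(self, t, inputs):
        price = inputs.get("price", 0.1)
        return {"load": 5.0 if price < 0.15 else 2.0}   # curtail when the price is high

class Market(Simulator):
    def step(self, t, inputs):
        return {"price": 0.1 + 0.02 * inputs.get("load", 0.0)}

def cosimulate(steps, dt=1.0):
    sims = {"grid": PowerFlow(), "ems": HomeEMS(), "market": Market()}
    bus = {}                                    # shared boundary variables
    for k in range(steps):
        for sim in sims.values():
            bus.update(sim.step(k * dt, bus))   # exchange values after every step
        print(k, bus)

cosimulate(3)
```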
Too much noise on the dance floor: Intra- and inter-dance angular error in honey bee waggle dances.
Schürch, Roger; Couvillon, Margaret J
2013-01-01
Successful honey bee foragers communicate where they have found a good resource with the waggle dance, a symbolic language that encodes a distance and a direction. Both components are repeated several times (1 to >100) within the same dance, and both vary from repeat to repeat. Here we discuss some causes and consequences of intra-dance and inter-dance angular variation and advocate revisiting von Frisch and Lindauer's earlier work to gain a better understanding of honey bee foraging ecology.
Evaluation of Lightweight, Relocatable Structures for Use in Theaters of Operations.
1982-05-01
Frisch, M.; Lambert, J.; Ptak, M. (U.S. Army Construction Engineering Research Laboratory, Champaign, IL). [The remainder of this record is OCR residue from the report documentation page; recoverable keywords include prefabricated buildings and portable shelters, and the abstract opens "The U.S. Army Construction ...".]
1990-09-01
[Table-of-contents fragments from a proceedings volume, garbled in extraction. The fragments mention Harry L. Frisch; "Part V: Ionomers/Structure"; "Small-Angle X-Ray Scattering on Poly(ethylene-methacrylic acid) Lead and Lead Sulfide Ionomers"; E. J. Kramer, R. J. Composto, R. S. Stein, T. P. Russell, G. P. Felcher, A. Mansour, and A. Karim; "X-Ray Reflectivity and Fluorescence"; and "Determination of Particle Size of a Dispersed Phase by Small-Angle X-Ray Scattering" by Frank C. Wilson.]
1001 Ways to run AutoDock Vina for virtual screening
NASA Astrophysics Data System (ADS)
Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.
2016-03-01
Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
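Points (1) and (2) can be made concrete with a short driver that runs one single-core Vina process per ligand in parallel and pins the random seed for reproducibility. It assumes a vina binary on the PATH and a conf.txt holding the receptor and search-box settings; the file layout and worker count are illustrative.

```python
import glob
import subprocess
from concurrent.futures import ProcessPoolExecutor

def dock(ligand, seed=42):
    """Dock one ligand with a fixed seed, pinned to a single core."""
    out = ligand.replace(".pdbqt", "_out.pdbqt")
    subprocess.run(["vina", "--config", "conf.txt", "--ligand", ligand,
                    "--out", out, "--cpu", "1", "--seed", str(seed)],
                   check=True)
    return out

if __name__ == "__main__":
    ligands = sorted(glob.glob("ligands/*.pdbqt"))
    # The process pool is the extra level of parallelization on a multi-core machine.
    with ProcessPoolExecutor(max_workers=8) as pool:
        for result in pool.map(dock, ligands):
            print("docked:", result)
```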
1001 Ways to run AutoDock Vina for virtual screening.
Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D
2016-03-01
Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
Efficient Double Auction Mechanisms in the Energy Grid with Connected and Islanded Microgrids
NASA Astrophysics Data System (ADS)
Faqiry, Mohammad Nazif
The future energy grid is expected to operate in a decentralized fashion as a network of autonomous microgrids that are coordinated by a Distribution System Operator (DSO), which should allocate energy to them in an efficient manner. Each microgrid, operating in either islanded or grid-connected mode, may be considered to manage its own resources. This can take place through auctions with individual units of the microgrid as the agents. This research proposes efficient auction mechanisms for the energy grid, with islanded and connected microgrids. The microgrid-level auction is carried out by means of an intermediate agent called an aggregator. The individual consumer and producer units are modeled as selfish agents. With the microgrid in islanded mode, two aggregator-level auction classes are analyzed: (i) price-heterogeneous, and (ii) price-homogeneous. Under the price heterogeneity paradigm, this research extends earlier work on the well-known, single-sided Kelly mechanism to double auctions. As in Kelly auctions, the proposed algorithm implements the bidding without using any agent-level private information (i.e. generation capacity and utility functions). The proposed auction is shown to be an efficient mechanism that maximizes the social welfare, i.e. the sum of the utilities of all the agents. Furthermore, the research considers the situation where a subset of agents act as a coalition to redistribute the allocated energy and price using any other specific fairness criterion. The price-homogeneous double auction algorithm proposed in this research addresses the problem of price anticipation, where each agent tries to influence the equilibrium price of energy by placing strategic bids. As a result of this behavior, the auction's efficiency is lowered. This research proposes a novel approach that is implemented by the aggregator, called virtual bidding, where the efficiency can be asymptotically maximized, even in the presence of price-anticipatory bidders. Next, an auction mechanism for the energy grid with multiple connected microgrids is considered. A globally efficient bi-level auction algorithm is proposed. At the upper level, the algorithm takes into account physical grid constraints in allocating energy to the microgrids. It is implemented by the DSO as a linear objective quadratic constraint problem that allows price heterogeneity across the aggregators. In parallel, each aggregator implements its own lower-level price-homogeneous auction with virtual bidding. The research concludes with a preliminary study on extending the DSO-level auction to multi-period day-ahead scheduling. It takes into account storage units and conventional generators that are present in the grid by formulating the auction as a mixed integer linear programming problem.
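As a point of reference for the mechanism class studied here, a toy uniform-price double auction at the aggregator can be cleared in a few lines; this sketch ignores the thesis' Kelly-style bidding and virtual-bidding refinements and simply matches sorted bids and offers.

```python
def clear(buys, sells):
    """Clear a uniform-price double auction from (price, quantity) bids and offers."""
    buys = sorted(buys, reverse=True)          # highest willingness-to-pay first
    sells = sorted(sells)                      # cheapest offers first
    traded, i, j = 0.0, 0, 0
    while i < len(buys) and j < len(sells) and buys[i][0] >= sells[j][0]:
        q = min(buys[i][1], sells[j][1])       # trade the overlapping quantity
        traded += q
        buys[i] = (buys[i][0], buys[i][1] - q)
        sells[j] = (sells[j][0], sells[j][1] - q)
        if buys[i][1] == 0:
            i += 1
        if sells[j][1] == 0:
            j += 1
    # set the uniform price between the marginal bid and the marginal offer
    price = (buys[min(i, len(buys) - 1)][0] + sells[min(j, len(sells) - 1)][0]) / 2
    return traded, price

print(clear([(0.30, 5), (0.20, 5)], [(0.10, 4), (0.25, 6)]))   # -> (5.0, 0.225)
```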
Discovery of novel inhibitors of the NorA multidrug transporter of Staphylococcus aureus.
Brincat, Jean Pierre; Carosati, Emanuele; Sabatini, Stefano; Manfroni, Giuseppe; Fravolini, Arnaldo; Raygada, Jose L; Patel, Diixa; Kaatz, Glenn W; Cruciani, Gabriele
2011-01-13
Four novel inhibitors of the NorA efflux pump of Staphylococcus aureus, discovered through a virtual screening process, are reported. The four compounds belong to different chemical classes and were tested for their in vitro ability to block the efflux of a well-known NorA substrate, as well as for their ability to potentiate the effect of ciprofloxacin (CPX) on several strains of S. aureus, including a NorA overexpressing strain. Additionally, the MIC values of each of the compounds individually are reported. A structure-activity relationship study was also performed on these novel chemotypes, revealing three new compounds that are also potent NorA inhibitors. The virtual screening procedure employed FLAP, a new methodology based on GRID force field descriptors.
Therapeutic benefits in grid irradiation on Tomotherapy for bulky, radiation-resistant tumors.
Narayanasamy, Ganesh; Zhang, Xin; Meigooni, Ali; Paudel, Nava; Morrill, Steven; Maraboyina, Sanjay; Peacock, Loverd; Penagaricano, Jose
2017-08-01
Spatially fractionated radiation therapy (SFRT or grid therapy) has proven to be effective in the management of bulky tumors. The aim of this project is to study the therapeutic ratio (TR) of helical Tomotherapy (HT)-based grid therapy using a linear-quadratic cell survival model. An HT-based grid (HT-GRID) plan was generated using a patient-specific virtual grid pattern of high-dose cylindrical regions using MLCs. TR was defined as the ratio of the normal tissue surviving fraction (SF) under HT-GRID irradiation to that under an open debulking field of an equivalent dose resulting in the same tumor cell SF. TR was estimated from DVH data on ten HT-GRID patient plans with deep-seated, bulky tumors. The dependence of the TR values on the radiosensitivity of the tumor cells and on the prescription dose was analyzed. The mean ± standard deviation (SD) of TR was 4.0 ± 0.7 (range: 3.1-5.5) for the 10 patients with a single-fraction maximum dose of 20 Gy to the GTV, assuming a tumor cell SF at 2 Gy (SF2t) value of 0.5. In addition, the mean ± SD of TR values for SF2t values of 0.3 and 0.7 were found to be 1 ± 0.1 and 18.0 ± 5.1, respectively. Reducing the prescription dose to 15 and 10 Gy lowered the respective TR values to 2.0 ± 0.2 and 1.2 ± 0.04 for an SF2t value of 0.5. HT-GRID therapy demonstrates a significant therapeutic advantage over a uniform dose from open-field irradiation for the same tumor cell kill. TR increases with the radioresistance of the tumor cells and with the prescription dose.
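A hedged numerical sketch of the linear-quadratic bookkeeping behind TR, with illustrative parameters only: the LQ coefficients are chosen so that SF2 = 0.5 (the paper's central case), and the 25% peak / 75% valley dose pattern is invented, not taken from the patient plans.

```python
import numpy as np

def sf(dose, alpha, beta):
    """Linear-quadratic survival: SF = exp(-alpha*D - beta*D^2)."""
    return np.exp(-alpha * dose - beta * dose**2)

alpha_t, beta_t = 0.289, 0.0289      # tumour LQ parameters chosen so SF2 = 0.5
alpha_n, beta_n = 0.208, 0.0693      # normal tissue, alpha/beta = 3 Gy (assumed)

grid_doses = np.repeat([20.0, 2.0], [2500, 7500])   # peak/valley voxel doses (Gy)
tumour_sf_grid = sf(grid_doses, alpha_t, beta_t).mean()

# Open-field dose giving the same tumour survival: solve beta*D^2 + alpha*D + ln(SF) = 0
d_eq = (-alpha_t + np.sqrt(alpha_t**2 - 4 * beta_t * np.log(tumour_sf_grid))) / (2 * beta_t)

TR = sf(grid_doses, alpha_n, beta_n).mean() / sf(d_eq, alpha_n, beta_n)
print(f"equivalent open-field dose {d_eq:.2f} Gy, TR {TR:.2f}")
```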
Physics-Based Virtual Fly-Outs of Projectiles on Supercomputers
2006-11-01
[Abstract fragments recovered from extraction:] ... moved along with its grid as it flew downrange. The supersonic projectile modeled in this study is an ogive-cylinder-finned configuration (see ...) ... resulting from the unsteady jet interaction flow field is clearly evident (Figure 10). The effect of the jet is stronger, as evidenced by the larger ... little or no effect on the other aerodynamic forces. These results show the potential to gain fundamental understanding of the complex flow ...
1999-05-01
[Snippet fragments recovered from extraction: "... and status. Previously, natural resources and physical labor were broad measurements of the wealth of a business, a corporation, or a state." Table-of-contents entries include "Global Grid" and "Components of ..."; the notes cite John D. Jones and Marc F. Griesbach, eds., Just War Theory in the Nuclear Age (New York, N.Y.: University Press of America, 1985).]
2001-03-01
[Snippet fragments recovered from extraction: "... natural resources and physical labor were broad measurements of the wealth of a business, a corporation, or a state. With the globalization of ..." Table-of-contents entries include "Global Grid" and "Components of Information Superiority"; the notes mention deterrence "... to the point where the source of attacks can be verified" and cite John D. Jones and Marc F. Griesbach, eds., Just War Theory in the Nuclear Age (Lanham ...).]
Cipolletta, Sabrina; Malighetti, Clelia; Serino, Silvia; Riva, Giuseppe; Winter, David
2017-06-01
Anorexia nervosa (AN) is an eating disorder characterized by severe body image disturbances. Recent studies from spatial cognition showed a connection between the experience of body and of space. The objectives of this study were to explore the meanings that characterize AN experience and to deepen the examination of spatiality in relational terms, through the study of how the patient construes herself and her interpersonal world. More specifically this study aimed (1) to verify whether spatial variables and aspects of construing differentiate patients with AN and healthy controls (HCs) and are related to severity of anorexic symptomatology; (2) to explore correlations between impairments in spatial abilities and interpersonal construing. A sample of 12 AN patients and 12 HCs participated in the study. The Eating Disorder Inventory, a virtual reality-based procedure, traditional measures of spatial abilities, and repertory grids were administered. The AN group compared to HCs showed significant impairments in spatial abilities, more unidimensional construing, and more extreme construing of the present self and of the self as seen by others. All these dimensions correlated with the severity of symptomatology. Extreme ways of construing characterized individuals with AN and might represent the interpersonal aspect of impairment in spatial reference frames.
Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities
NASA Astrophysics Data System (ADS)
Garzoglio, Gabriele
2012-12-01
In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.
ALICE HLT Cluster operation during ALICE Run 2
NASA Astrophysics Data System (ADS)
Lehrbach, J.; Krzewicki, M.; Rohr, D.; Engel, H.; Gomez Ramirez, A.; Lindenstruth, V.; Berzano, D.;
2017-10-01
ALICE (A Large Ion Collider Experiment) is one of the four major detectors located at the LHC at CERN, focusing on the study of heavy-ion collisions. The ALICE High Level Trigger (HLT) is a compute cluster which reconstructs the events and compresses the data in real time. The data compression by the HLT is a vital part of data taking, especially during the heavy-ion runs, in order to be able to store the data; this makes the reliability of the whole cluster an important matter. To guarantee a consistent state among all compute nodes of the HLT cluster, we have automated the operation as much as possible. For automatic deployment of the nodes we use Foreman with locally mirrored repositories, and for configuration management of the nodes we use Puppet. Important node parameters such as temperatures, network traffic and CPU load are monitored with Zabbix. During periods without beam, the HLT cluster is used for tests and as one of the WLCG Grid sites to compute offline jobs in order to maximize the usage of our cluster. To prevent interference with normal HLT operations, we separate the virtual machines running the Grid jobs from the normal HLT operation via virtual networks (VLANs). In this paper we give an overview of the ALICE HLT operation in 2016.
A Voice and Mouse Input Interface for 3D Virtual Environments
NASA Technical Reports Server (NTRS)
Kao, David L.; Bryson, Steve T.
2003-01-01
There have been many success stories about how 3D input devices can be fully integrated into an immersive virtual environment. Electromagnetic trackers, optical trackers, gloves, and flying mice are just some of these input devices. Though we can use existing 3D input devices that are commonly used for VR applications, there are several factors that prevent us from choosing these input devices for our applications. One main factor is that most of these tracking devices are not suitable for prolonged use due to the human fatigue associated with using them. A second factor is that many of them would occupy additional office space. Another factor is that many of the 3D input devices are expensive due to the unusual hardware that is required. For our VR applications, we want a user interface that works naturally with standard equipment. In this paper, we demonstrate applications of our proposed multimodal interface using a 3D dome display. We also show that effective data analysis can be achieved while the scientists view their data rendered inside the dome display and perform user interactions simply using mouse and voice input. Though the spherical coordinate grid seems ideal for interaction with a 3D dome display, other non-spherical grids can be used as well.
Virtual patient simulator for distributed collaborative medical education.
Caudell, Thomas P; Summers, Kenneth L; Holten, Jim; Hakamata, Takeshi; Mowafi, Moad; Jacobs, Joshua; Lozanoff, Beth K; Lozanoff, Scott; Wilks, David; Keep, Marcus F; Saiki, Stanley; Alverson, Dale
2003-01-01
Project TOUCH (Telehealth Outreach for Unified Community Health; http://hsc.unm.edu/touch) investigates the feasibility of using advanced technologies to enhance education in an innovative problem-based learning format currently being used in medical school curricula, applying specific clinical case models, and deploying to remote sites/workstations. The University of New Mexico's School of Medicine and the John A. Burns School of Medicine at the University of Hawai'i face similar health care challenges in providing and delivering services and training to remote and rural areas. Recognizing that health care needs are local and require local solutions, both states are committed to improving health care delivery to their unique populations by sharing information and experiences through emerging telehealth technologies by using high-performance computing and communications resources. The purpose of this study is to describe the deployment of a problem-based learning case distributed over the National Computational Science Alliance's Access Grid. Emphasis is placed on the underlying technical components of the TOUCH project, including the virtual reality development tool Flatland, the artificial intelligence-based simulation engine, the Access Grid, high-performance computing platforms, and the software that connects them all. In addition, educational and technical challenges for Project TOUCH are identified.
Investigation of storage options for scientific computing on Grid and Cloud facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele
In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on bare metal nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.
Performance Studies on Distributed Virtual Screening
Krüger, Jens; de la Garza, Luis; Kohlbacher, Oliver; Nagel, Wolfgang E.
2014-01-01
Virtual high-throughput screening (vHTS) is an invaluable method in modern drug discovery. It permits screening large datasets or databases of chemical structures for those that may bind to a drug target. Virtual screening is typically performed by docking code, which often runs sequentially. Processing of huge vHTS datasets can be parallelized by chunking the data, because individual docking runs are independent of each other. The goal of this work is to find an optimal splitting that maximizes the speedup while considering overhead and available cores on Distributed Computing Infrastructures (DCIs). We have conducted thorough performance studies accounting not only for the runtime of the docking itself, but also for structure preparation. Performance studies were conducted via the workflow-enabled science gateway MoSGrid (Molecular Simulation Grid). As input we used benchmark datasets for protein kinases. Our performance studies show that docking workflows can be made to scale almost linearly up to 500 concurrent processes distributed even over large DCIs, thus accelerating vHTS campaigns significantly. PMID:25032219
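A back-of-the-envelope model of the trade-off being optimized: splitting the library into more chunks buys parallelism but pays a fixed per-chunk overhead (submission, staging, structure preparation). All constants below are illustrative assumptions, not measurements from the study.

```python
def makespan(n_ligands, k_chunks, cores, t_dock=30.0, t_overhead=120.0):
    """Total wall time if k_chunks are scheduled in waves across the available cores."""
    per_chunk = (n_ligands / k_chunks) * t_dock + t_overhead   # seconds per chunk
    waves = -(-k_chunks // cores)                              # ceiling division
    return waves * per_chunk

n, cores = 100_000, 500
best = min(range(1, 5001), key=lambda k: makespan(n, k, cores))
speedup = makespan(n, 1, cores) / makespan(n, best, cores)
print(f"best split: {best} chunks, speedup ~{speedup:.0f}x over a single chunk")
```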
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...
2017-09-29
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
NASA Astrophysics Data System (ADS)
Tiede, Dirk; Lang, Stefan
2010-11-01
In this paper we focus on the application of transferable, object-based image analysis algorithms for dwelling extraction in a camp for internally displaced people (IDP) in Darfur, Sudan, along with innovative means for scientific visualisation of the results. Three very high spatial resolution satellite images (QuickBird: 2002, 2004, 2008) were used for: (1) extracting different types of dwellings and (2) calculating and visualizing added-value products such as dwelling density and camp structure. The results were visualized on virtual globes (Google Earth and ArcGIS Explorer), revealing the analysis results (analytical 3D views) transformed into the third dimension (z-value). Data formats depend on the virtual globe software and include KML/KMZ (keyhole mark-up language) and ESRI 3D shapefiles streamed as an ArcGIS Server-based globe service. In addition, means for improving the overall performance of automated extraction of dwelling structures using grid computing techniques are discussed, using examples from a similar study.
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
Neutron-induced fission cross section of 240Pu from 0.5 MeV to 3 MeV
NASA Astrophysics Data System (ADS)
Salvador-Castiñeira, P.; Bryś, T.; Eykens, R.; Hambsch, F.-J.; Göök, A.; Moens, A.; Oberstedt, S.; Sibbens, G.; Vanleeuw, D.; Vidali, M.; Pretel, C.
2015-07-01
240Pu has recently been pointed out by a sensitivity study of the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA) as one of the isotopes whose fission cross section lacks the accuracy needed to meet the upcoming needs of the future generation of nuclear power plants (GEN-IV). In the High Priority Request List (HPRL) of the OECD, it is suggested that the knowledge of the 240Pu(n,f) cross section should be improved to an accuracy within 1-3%, compared to the present 5%. A measurement of the 240Pu cross section has been performed at the Van de Graaff accelerator of the Joint Research Centre (JRC) Institute for Reference Materials and Measurements (IRMM) using quasi-monoenergetic neutrons in the energy range from 0.5 MeV to 3 MeV. A twin Frisch-grid ionization chamber (TFGIC) was used in a back-to-back configuration as the fission fragment detector. The 240Pu(n,f) cross section has been normalized to three different isotopes: 237Np(n,f), 235U(n,f), and 238U(n,f). Additionally, the secondary standard reactions were benchmarked through measurements against the primary standard reaction 235U(n,f) in the same geometry. A comprehensive study of the corrections applied to the data and the associated uncertainties is given. The results obtained are in agreement with previous experimental data in the threshold region. For neutron energies higher than 1 MeV, the results of this experiment are slightly lower than the ENDF/B-VII.1 evaluation, but in agreement with the experiments of Laptev et al. (2004) as well as Staples and Morley (1998).
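Schematically, the ratio normalization works as follows (notation ours, not the paper's; the k_i stand for the efficiency, dead-time and geometry corrections discussed in the uncertainty analysis):

```latex
% Schematic ratio method: the unknown cross section follows from the measured
% fission count rates C, the reference cross section, the areal densities n of
% the two deposits, and a product of correction factors k_i.
\begin{equation}
  \sigma_{^{240}\mathrm{Pu}}(E_n) \,=\, \sigma_{\mathrm{ref}}(E_n)\,
  \frac{C_{^{240}\mathrm{Pu}}(E_n)}{C_{\mathrm{ref}}(E_n)}\,
  \frac{n_{\mathrm{ref}}}{n_{^{240}\mathrm{Pu}}}\,
  \prod_i k_i
\end{equation}
```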
Wood, T J; Avery, G; Balcam, S; Needler, L; Smith, A; Saunderson, J R; Beavis, A W
2015-01-01
Objective: The aim of this study was to investigate via simulation a proposed change to clinical practice for chest radiography. The validity of using a scatter rejection grid across the diagnostic energy range (60–125 kVp), in conjunction with appropriate tube current–time product (mAs) for imaging with a computed radiography (CR) system was investigated. Methods: A digitally reconstructed radiograph algorithm was used, which was capable of simulating CR chest radiographs with various tube voltages, receptor doses and scatter rejection methods. Four experienced image evaluators graded images with a grid (n = 80) at tube voltages across the diagnostic energy range and varying detector air kermas. These were scored against corresponding images reconstructed without a grid, as per current clinical protocol. Results: For all patients, diagnostic image quality improved with the use of a grid, without the need to increase tube mAs (and therefore patient dose), irrespective of the tube voltage used. Increasing tube mAs by an amount determined by the Bucky factor made little difference to image quality. Conclusion: A virtual clinical trial has been performed with simulated chest CR images. Results indicate that the use of a grid improves diagnostic image quality for average adults, without the need to increase tube mAs, even at low tube voltages. Advances in knowledge: Validated with images containing realistic anatomical noise, it is possible to improve image quality by utilizing grids for chest radiography with CR systems without increasing patient exposure. Increasing tube mAs by an amount determined by the Bucky factor is not justified. PMID:25571914
NASA Astrophysics Data System (ADS)
Hey, Tony
2002-08-01
After defining what is meant by the term 'e-Science', this talk will survey the activity on e-Science and Grids in Europe. The two largest initiatives in Europe are the European Commission's portfolio of Grid projects and the UK e-Science program. The EU, under its research Framework Programme, is funding nearly twenty Grid projects in a wide variety of application areas. These projects are in varying stages of maturity, and this talk will focus on the subset that has made the most significant progress. These include the EU DataGrid project led by CERN and two projects - EuroGrid and Grip - that evolved from the German national Unicore project. A summary of the other EU Grid projects will be included. The UK e-Science initiative is a 180M program entirely focused on e-Science applications requiring resource sharing, a virtual organization and a Grid infrastructure. The UK program is unique for three reasons: (1) the program covers all areas of science and engineering; (2) all of the funding is devoted to Grid application and middleware development and not to funding major hardware platforms; and (3) there is an explicit connection with industry to produce robust and secure industrial-strength versions of Grid middleware that could be used in business-critical applications. A part of the funding, around 50M, augmented by an additional 'matching' $30M from industry in collaborative projects, forms the UK e-Science 'Core Program'. It is the responsibility of the Core Program to identify and support a set of generic middleware requirements that have emerged from a requirements analysis of the e-Science application projects. This has led to a much more data-centric vision of 'the Grid' in the UK, in which access to HPC facilities forms only one element. More important for the UK projects are issues such as enabling access to, and federation of, scientific data held in files, relational databases and other archives. Automatic annotation of data generated by high-throughput experiments with XML-based metadata is seen as a key step towards developing higher-level Grid services for information retrieval and knowledge discovery. The talk will conclude with a survey of other Grid initiatives across Europe and a look at possible future European projects.
Expanding the user base beyond HEP for the Ganga distributed analysis user interface
NASA Astrophysics Data System (ADS)
Currie, R.; Egede, U.; Richards, A.; Slater, M.; Williams, M.
2017-10-01
This document presents the results of recent developments within the Ganga[1] project to support users from new communities outside of HEP. In particular, we examine the case of users from the Large Synoptic Survey Telescope (LSST) group looking to use resources provided by the UK-based GridPP[2][3] DIRAC[4][5] instance. An example use case is work performed with users from the LSST Virtual Organisation (VO) to distribute the workflow used for galaxy shape identification analyses. This work highlighted some LSST-specific challenges that could be well addressed by common tools from the HEP community. As a result of this work, the LSST community was able to take advantage of GridPP[2][3] resources to perform large computing tasks within the UK.
Voxel inversion of airborne electromagnetic data
NASA Astrophysics Data System (ADS)
Auken, E.; Fiandaca, G.; Kirkegaard, C.; Vest Christiansen, A.
2013-12-01
Inversion of electromagnetic data usually refers to a model space linked to the actual observation points, and for airborne surveys the spatial discretization of the model space reflects the flight lines. On the contrary, geological and groundwater models most often refer to a regular voxel grid that is not correlated to the geophysical model space. This means that incorporating the geophysical data into the geological and/or hydrological modelling grids involves a spatial relocation of the models, which in itself is a subtle process where valuable information is easily lost. The integration of prior information, e.g. from boreholes, is also difficult when the observation points do not coincide with the position of the prior information, as is the joint inversion of airborne and ground-based surveys. We developed a geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which then allows geological/hydrogeological models to be informed directly, prior information to be incorporated more easily, and different data types to be integrated straightforwardly in joint inversion. The new voxel model space defines the soil properties (like resistivity) on a set of nodes, and the distribution of the properties is computed everywhere by means of an interpolation function f (e.g. inverse distance or kriging). The position of the nodes is fixed during the inversion and is chosen to sample the soil taking into account topography and inversion resolution. Given this definition of the voxel model space, both 1D and 2D/3D forward responses can be computed. The 1D forward responses are computed as follows: A) a 1D model subdivision, in terms of model thicknesses and the direction of the "virtual" horizontal stratification, is defined for each 1D data set; for EM soundings the "virtual" horizontal stratification is set up parallel to the topography at the sounding position. B) the "virtual" 1D models are constructed by interpolating the soil properties at the midpoints of the "virtual" layers. For 2D/3D forward responses the algorithm operates similarly, simply filling the 2D/3D meshes of the forward responses by computing the interpolated values at the centres of the mesh cells. This definition of the voxel model space allows the geophysical information to be incorporated straightforwardly into geological and/or hydrological models, simply by using a voxel (hydro)geological grid to define the geophysical model space. It also simplifies the propagation of the uncertainty of geophysical parameters into the (hydro)geological models. Furthermore, prior information from boreholes, like resistivity logs, can be applied directly to the voxel model space, even if the borehole positions do not coincide with the actual observation points: the prior information is tied to the model parameters through the interpolation function evaluated at the borehole locations. The presented algorithm is a further development of the AarhusInv program package developed at Aarhus University (formerly em1dinv), which manages both large-scale AEM surveys and ground-based data. This work has been carried out as part of the HyGEM project, supported by the Danish Council of Strategic Research under grant number DSF 11-116763.
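The role of the interpolation function f can be sketched in a few lines; here inverse distance weighting (the abstract also mentions kriging) is evaluated at the layer midpoints of a "virtual" 1D model under a sounding. The node positions and resistivities below are synthetic.

```python
import numpy as np

def idw(nodes, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of node values at one query point."""
    d = np.linalg.norm(nodes - query, axis=1)
    if d.min() < eps:                        # query coincides with a node
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

rng = np.random.default_rng(0)
nodes = rng.random((200, 3)) * [1000.0, 1000.0, 100.0]   # x, y, depth (m)
rho = 50.0 + 10.0 * rng.standard_normal(200)             # resistivity (ohm m)

# Midpoints of the "virtual" layers under a sounding at (500, 500)
layer_mids = np.array([[500.0, 500.0, z] for z in (5.0, 15.0, 30.0, 60.0)])
virtual_model = [idw(nodes, rho, q) for q in layer_mids]
print(virtual_model)                          # the 1D model fed to the forward response
```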
The agent-based spatial information semantic grid
NASA Astrophysics Data System (ADS)
Cui, Wei; Zhu, YaQiong; Zhou, Yong; Li, Deren
2006-10-01
Analyzing the characteristics of multi-agent systems and geographic ontology, the concept of the Agent-based Spatial Information Semantic Grid (ASISG) is defined and its architecture is advanced. ASISG is composed of multi-agents and geographic ontology. The multi-agent system comprises User Agents, a General Ontology Agent, Geo-Agents, Broker Agents, Resource Agents, Spatial Data Analysis Agents, Spatial Data Access Agents, Task Execution Agents and Monitor Agents. The architecture of ASISG has three layers: the fabric layer, the grid management layer and the application layer. The fabric layer, which is composed of Data Access Agents, Resource Agents and Geo-Agents, encapsulates the data of spatial information systems and exhibits a conceptual interface to the grid management layer. The grid management layer, which is composed of the General Ontology Agent, Task Execution Agents, Monitor Agents and Data Analysis Agents, uses a hybrid method to manage all resources that are registered with the General Ontology Agent, which is described by a general ontology system. The hybrid method combines resource dissemination and resource discovery: dissemination pushes resources from the Local Ontology Agents to the General Ontology Agent, and discovery pulls resources from the General Ontology Agent to the Local Ontology Agents. A Local Ontology Agent is derived from a specific domain and describes the semantic information of a local GIS. The Local Ontology Agents can be federated to construct a virtual organization that provides a global schema. The virtual organization lightens the burden on users, because they need not search for information site by site manually. The application layer, which is composed of User Agents, Geo-Agents and Task Execution Agents, supplies a corresponding interface to a domain user. The functions that ASISG should provide are: 1) Integration of different spatial information systems on the semantic level: the grid management layer establishes a virtual environment that seamlessly integrates all GIS nodes. 2) Search and query on the semantic level: when the resource management system searches data on different spatial information systems, it transfers the meaning of different Local Ontology Agents rather than accessing data directly. 3) Transparent data access: users can access information from a remote site as if it were on a local disk, because the General Ontology Agent automatically links data via the Data Agents that connect ontology concepts to GIS data. 4) The capability of processing massive spatial data: storing, accessing and managing massive spatial data from TB to PB; efficiently analyzing and processing spatial data to produce models, information and knowledge; and providing 3D and multimedia visualization services. 5) The capability of high-performance computing and processing of spatial information: solving spatial problems with high precision, high quality, and on a large scale; and processing spatial information in real time or on time, with high speed and high efficiency. 6) The capability of sharing spatial resources.
The distributed, heterogeneous spatial information resources are shared, integrated and inter-operated on the semantic level, so as to make the best use of spatial information resources such as computing resources, storage devices, spatial data (integrated from GIS, RS and GPS), spatial applications and services, and GIS platforms. 7) The capability of integrating legacy GIS systems: ASISG can not only be used to construct new advanced spatial application systems, but can also integrate legacy GIS systems, so as to preserve extensibility and inheritance and protect the investment of users. 8) The capability of collaboration: large-scale spatial information applications and services always involve different departments in different geographic places, so remote and uniform services are needed. 9) The capability of supporting integration of heterogeneous systems: large-scale spatial information systems are always synthesized applications, so ASISG should provide interoperation and consistency by adopting open and applied technology standards. 10) The capability of adapting to dynamic changes: business requirements, application patterns, management strategies, and IT products change endlessly for any department, so ASISG should be self-adaptive. Two examples are provided in this paper; they show in detail how to design a semantic grid based on multi-agent systems and ontology. In conclusion, a semantic grid for spatial information systems can improve the integration and interoperability of the spatial information grid.
Electronic-projecting Moire method applying CBR-technology
NASA Astrophysics Data System (ADS)
Kuzyakov, O. N.; Lapteva, U. V.; Andreeva, M. A.
2018-01-01
An electronic-projecting method based on the Moiré effect for examining surface topology is suggested. The conditions for forming Moiré fringes and the dependence of their parameters on the reference parameters of the object and virtual grids are analyzed. The control system structure and the decision-making subsystem are elaborated. The decision-making subsystem is implemented with CBR (case-based reasoning) technology, based on applying a case base. The approach analyses and forms a decision for each separate local area, with subsequent formation of a common topology map.
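For reference, the textbook Moiré relations that govern such fringe parameters (standard results, not the paper's specific derivation): a pitch mismatch between the object grid and the virtual reference grid, or a small mutual rotation, each sets the fringe spacing.

```python
import math

def fringe_spacing_pitch(p1, p2):
    """Fringe spacing for two parallel gratings with different pitches p1, p2."""
    return p1 * p2 / abs(p1 - p2)

def fringe_spacing_rotation(p, theta_rad):
    """Fringe spacing for two gratings of equal pitch p rotated by theta."""
    return p / (2.0 * math.sin(theta_rad / 2.0))

print(fringe_spacing_pitch(1.00, 1.05))                 # 21 mm fringes from 5% mismatch
print(fringe_spacing_rotation(1.00, math.radians(2.0))) # ~28.6 mm fringes from 2 degrees
```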
An Overview of Cloud Computing in Distributed Systems
NASA Astrophysics Data System (ADS)
Divakarla, Usha; Kumari, Geetha
2010-11-01
Cloud computing is an emerging trend in the field of distributed computing; it evolved from grid computing and distributed computing. The cloud plays an important role in large organizations by maintaining huge amounts of data with limited resources. The cloud also helps in resource sharing through specific virtual machines provided by the cloud service provider. This paper gives an overview of cloud organization and some of the basic security issues pertaining to the cloud.
In Silico Simulation of a Clinical Trial Concerning Tumour Response to Radiotherapy
NASA Astrophysics Data System (ADS)
Dionysiou, Dimitra D.; Stamatakos, Georgios S.; Athanaileas, Theodoros E.; Menychtas, Andreas; Kaklamani, Dimitra; Varvarigou, Theodora; Uzunoglu, Nikolaos
2008-11-01
The aim of this paper is to demonstrate how multilevel tumour growth and response to therapeutic treatment models can be used in order to simulate clinical trials, with the long-term intention of both better designing clinical studies and understanding their outcome based on basic biological science. For this purpose, an already developed computer simulation model of glioblastoma multiforme response to radiotherapy has been used and a clinical study concerning glioblastoma multiforme response to radiotherapy has been simulated. In order to facilitate the simulation of such virtual trials, a toolkit enabling the user-friendly execution of the simulations on grid infrastructures has been designed and developed. The results of the conducted virtual trial are in agreement with the outcome of the real clinical study.
NASA Astrophysics Data System (ADS)
Loos, Andreas
Anyone who has ever made pasta themselves knows: fresh pasta can be quite sticky. That is a problem for the pasta industry, because it is not easy to fill 500-gram bags exactly with irregular, sticky clumps of noodles. Some manufacturers therefore use "Teilmengenwaagen" (partial-quantity scales). These have up to a hundred small weighing pans, each of which is filled via a conveyor belt with roughly 50 grams of noodle clumps. Then mathematics comes into play: a computer selects the ten pans whose contents together come to exactly 500 grams and empties them into a bag.
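The selection step is a small subset-sum optimization: choose the pans whose weights add up closest to the target. A brute-force sketch with simulated weights follows; the pool size and pan count are scaled down for the illustration.

```python
import random
from itertools import combinations

random.seed(1)
pans = [random.gauss(50.0, 2.0) for _ in range(14)]   # simulated pan weights (g)

# Exhaustively scan all 10-pan subsets for the total closest to 500 g;
# real machines with ~100 pans would use faster heuristics instead.
best = min(combinations(range(len(pans)), 10),
           key=lambda idx: abs(sum(pans[i] for i in idx) - 500.0))
print(sorted(best), round(sum(pans[i] for i in best), 2), "g")
```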
Hardware-in-the-Loop Co-simulation of Distribution Grid for Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rotger-Griful, Sergi; Chatzivasileiadis, Spyros; Jacobsen, Rune H.
2016-06-20
In modern power systems, co-simulation is proposed as an enabler for analyzing the interactions between disparate systems. This paper introduces the co-simulation platform Virtual Grid Integration Laboratory (VirGIL), including Hardware-in-the-Loop testing, and demonstrates its potential to assess demand response strategies. VirGIL is based on a modular architecture using the Functional Mock-up Interface industrial standard to integrate new simulators. VirGIL combines state-of-the-art simulators in power systems, communications, buildings, and control. In this work, VirGIL is extended with a Hardware-in-the-Loop component to control the ventilation system of a real 12-story building in Denmark. VirGIL capabilities are illustrated in three scenarios: load following, primary reserves and load following aggregation. Experimental results show that the system can track one-minute changing signals and can provide primary reserves for up-regulation. Furthermore, the potential of aggregating several ventilation systems is evaluated considering the impact at distribution grid level and the communications protocol effect.
Grid accounting service: state and future development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levshina, T.; Sehgal, C.; Bockelman, B.
2014-01-01
During the last decade, large-scale federated distributed infrastructures have been continually developed and expanded. One of the crucial components of a cyber-infrastructure is an accounting service that collects data related to resource utilization and identity of users using resources. The accounting service is important for verifying pledged resource allocation per particular groups and users, providing reports for funding agencies and resource providers, and understanding hardware provisioning requirements. It can also be used for end-to-end troubleshooting as well as billing purposes. In this work we describe Gratia, a federated accounting service jointly developed at Fermilab and Holland Computing Center at University of Nebraska-Lincoln. The Open Science Grid, Fermilab, HCC, and several other institutions have used Gratia in production for several years. The current development activities include expanding Virtual Machines provisioning information, XSEDE allocation usage accounting, and Campus Grids resource utilization. We also identify the direction of future work: improvement and expansion of Cloud accounting, persistent and elastic storage space allocation, and the incorporation of WAN and LAN network metrics.
Earth System Grid II, Turning Climate Datasets into Community Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Middleton, Don
2006-08-01
The Earth System Grid (ESG) II project, funded by the Department of Energy's Scientific Discovery through Advanced Computing program, has transformed climate data into community resources. ESG II has accomplished this goal by creating a virtual collaborative environment that links climate centers and users around the world to models and data via a computing Grid, which is based on the Department of Energy's supercomputing resources and the Internet. Our project's success stems from partnerships between climate researchers and computer scientists to advance basic and applied research in the terrestrial, atmospheric, and oceanic sciences. By interfacing with other climate science projects, we have learned that commonly used methods to manage and remotely distribute data among related groups lack infrastructure and under-utilize existing technologies. Knowledge and expertise gained from ESG II have helped the climate community plan strategies to manage a rapidly growing data environment more effectively. Moreover, approaches and technologies developed under the ESG project have impacted data-simulation integration in other disciplines, such as astrophysics, molecular biology and materials science.
The Decay of Forced Turbulent Coflow of He II Past a Grid
NASA Astrophysics Data System (ADS)
Babuin, S.; Varga, E.; Skrbek, L.
2014-04-01
We present an experimental study of the decay of He II turbulence created mechanically, by a bellows-induced flow past a stationary grid in a 7×7 mm2 superfluid wind tunnel. The temporal decay L(t), originating from various steady states of vortex line length per unit volume, L0, has been observed based on measurements of the attenuation of second sound, in the temperature range 1.17 K < T < 1.95 K. Each presented decay curve is the average of up to 150 single decay events. We find that, independently of T and L0, within seconds past the sudden stop of the drive, all the decay curves show a universal behavior lasting up to 200 s, of the form L(t) ∝ (t - t0)^(-3/2), where t0 is the virtual origin time. From this decay process we deduce the effective kinematic viscosity of turbulent He II. We compare our results with the benchmark Oregon towed-grid experiments and, despite our turbulence being non-homogeneous, find strong similarities.
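Extracting the virtual origin time t0 from such data is a fit of the universal form L(t) = A (t - t0)^(-3/2); the sketch below runs on synthetic data with invented constants, and scipy's curve_fit stands in for whatever fitting procedure the authors used.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, A, t0):
    """Universal late-time decay of vortex line density."""
    return A * (t - t0)**(-1.5)

t = np.linspace(5.0, 200.0, 100)                  # seconds after the drive stops
L = decay(t, 5e6, -2.0) * (1 + 0.05 * np.random.randn(t.size))  # noisy synthetic data

# Bound t0 below the first sample so the power-law base stays positive during the fit.
(A, t0), _ = curve_fit(decay, t, L, p0=(1e6, 0.0), bounds=([0.0, -50.0], [np.inf, 4.9]))
print(f"A = {A:.3g}, virtual origin t0 = {t0:.2f} s")
```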
Using OSG Computing Resources with (iLC)Dirac
NASA Astrophysics Data System (ADS)
Sailer, A.; Petric, M.; CLICdp Collaboration
2017-10-01
CPU cycles for small experiments and projects can be scarce; making use of all available resources, whether dedicated or opportunistic, is therefore mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called 'SiteDirectors', which submit directly to the local batch system. This in turn requires additional dedicated effort for small experiments at the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows them to be accessed within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements inside DIRAC, the required wrapper classes were developed. Not only is the usage of these types of computing elements now completely transparent for all DIRAC instances, which makes DIRAC a flexible solution for OSG-based virtual organisations, but it also allows LCG Grid sites to move to the HTCondor-CE software without shutting DIRAC-based VOs out of their site. In these proceedings we detail how we interfaced the DIRAC system to the HTCondor-CE and Globus computing elements, explain the obstacles encountered and the solutions developed, and describe how the linear collider community uses resources in the OSG.
The International Solid Earth Research Virtual Observatory
NASA Astrophysics Data System (ADS)
Fox, G.; Pierce, M.; Rundle, J.; Donnellan, A.; Parker, J.; Granat, R.; Lyzenga, G.; McLeod, D.; Grant, L.
2004-12-01
We describe the architecture and initial implementation of the International Solid Earth Research Virtual Observatory (iSERVO). This has been prototyped within the USA as SERVOGrid, and expansion is planned to Australia, China, Japan and other countries. We base our design on a globally scalable distributed "cyber-infrastructure" or Grid built around a Web Services-based approach consistent with the extended Web Service Interoperability approach. The Solid Earth Science Working Group of NASA has identified several challenges for Earth Science research. In order to investigate these, we need to couple numerical simulation codes and data mining tools to observational data sets. These observational data are now available in internet-accessible forms, and the quantity of data is expected to grow explosively over the next decade. We architect iSERVO as a loosely federated Grid of Grids, with each country involved supporting a national Solid Earth Research Grid. The national Grid operations, possibly with dedicated control centers, are linked together to support iSERVO, where an international Grid control center may eventually be necessary. We address the difficult multi-administrative-domain security and ownership issues by exposing capabilities as services for which the risk of abuse is minimized. We support large-scale simulations within a single domain using service-hosted tools (mesh generation, data repository and sensor access, GIS, visualization). Simulations typically involve sequential or parallel machines in a single domain supported by cross-continent services. We use Web Services to implement a Service Oriented Architecture (SOA), using WSDL for service description and SOAP for message formats. These are augmented by UDDI, WS-Security, WS-Notification/Eventing and WS-ReliableMessaging in the WS-I+ approach. Support for the latter two capabilities will be available over the next 6 months from the NaradaBrokering messaging system. We augment these specifications with the portlet architecture, using WSRP and JSR168 supported by portal containers such as uPortal, WebSphere, and Apache JetSpeed2. The latter portal aggregates component user interfaces for each iSERVO service, allowing flexible customization of the user interface. We exploit the portlets produced by the NSF NMI (Middleware Initiative) OGCE activity. iSERVO also uses specifications from the Open Geographical Information Systems (GIS) Consortium (OGC), which defines a number of standards for modeling earth surface feature data and services for interacting with these data. The data models are expressed in the XML-based Geography Markup Language (GML), and the OGC service framework is being adapted to use the Web Service model. The SERVO prototype includes a GIS Grid that currently provides the core WMS and WFS (Map and Feature) services. We will follow best practice in the Grid and Web Service field and will adapt our technology as appropriate. For example, we expect to support services built on WS-RF when it is finalized and to make use of the OGSA-DAI database interfaces and their WS-I+ versions. Finally, we review advances in Web Service scripting (such as HPSearch) and workflow systems (such as GCF) and their applications to iSERVO.
Dynamically allocated virtual clustering management system
NASA Astrophysics Data System (ADS)
Marcus, Kelvin; Cannata, Jess
2013-05-01
The U.S. Army Research Laboratory (ARL) has built a "Wireless Emulation Lab" to support research in wireless mobile networks. In our current experimentation environment, our researchers need the capability to run clusters of heterogeneous nodes to model emulated wireless tactical networks, where each node could contain a different operating system, application set, and physical hardware. To complicate matters, most experiments require the researcher to have root privileges. Our previous solution of using a single shared cluster of statically deployed virtual machines did not sufficiently separate each user's experiment due to undesirable network crosstalk, so only one experiment could be run at a time. In addition, the cluster did not make efficient use of our servers and physical networks. To address these concerns, we created the Dynamically Allocated Virtual Clustering management system (DAVC). This system leverages existing open-source software to create private clusters of nodes that are either virtual or physical machines. These clusters can be utilized for software development, experimentation, and integration with existing hardware and software. The system uses the Grid Engine job scheduler to efficiently allocate virtual machines to idle systems and networks. The system deploys stateless nodes via network booting. The system uses 802.1Q Virtual LANs (VLANs) to prevent experiment crosstalk and to allow for complex private networks, eliminating the need to map each virtual machine to a specific switch port. The system monitors the health of the clusters and the underlying physical servers, and it maintains cluster usage statistics for historical trends. Users can start private clusters of heterogeneous nodes with root privileges for the duration of the experiment, and they control when to shut down their clusters.
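The VLAN-based isolation can be illustrated with a small bookkeeping sketch: each new experiment cluster is handed an unused 802.1Q VLAN ID so its traffic cannot cross-talk with other experiments. The class name and VLAN range are assumptions for the example, not DAVC internals.

```python
class VlanAllocator:
    """Hand out unused 802.1Q VLAN IDs, one per experiment cluster."""
    def __init__(self, first=100, last=199):
        self.free = set(range(first, last + 1))
        self.by_experiment = {}

    def allocate(self, experiment):
        if experiment in self.by_experiment:          # idempotent per experiment
            return self.by_experiment[experiment]
        if not self.free:
            raise RuntimeError("no free VLANs: too many concurrent experiments")
        vlan = min(self.free)
        self.free.remove(vlan)
        self.by_experiment[experiment] = vlan
        return vlan

    def release(self, experiment):
        # return the VLAN to the pool when the cluster shuts down
        self.free.add(self.by_experiment.pop(experiment))

alloc = VlanAllocator()
print(alloc.allocate("wireless-mesh-A"))   # 100
print(alloc.allocate("tactical-net-B"))    # 101
alloc.release("wireless-mesh-A")
```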
Amin, Waqas; Parwani, Anil V; Schmandt, Linda; Mohanty, Sambit K; Farhat, Ghada; Pople, Andrew K; Winters, Sharon B; Whelan, Nancy B; Schneider, Althea M; Milnes, John T; Valdivieso, Federico A; Feldman, Michael; Pass, Harvey I; Dhir, Rajiv; Melamed, Jonathan; Becich, Michael J
2008-08-13
Advances in translational research have led to the need for well-characterized biospecimens for research. The National Mesothelioma Virtual Bank is an initiative which collects annotated datasets relevant to human mesothelioma to develop an enterprising biospecimen resource that fulfills researchers' needs. The National Mesothelioma Virtual Bank architecture is based on three major components: (a) common data elements (based on the College of American Pathologists protocol and North American Association of Central Cancer Registries standards), (b) clinical and epidemiologic data annotation, and (c) data query tools. These tools work interoperably to standardize the entire process of annotation. The National Mesothelioma Virtual Bank tool is based upon the caTISSUE Clinical Annotation Engine, developed by the University of Pittsburgh in cooperation with the Cancer Biomedical Informatics Grid (caBIG, see http://cabig.nci.nih.gov). This application provides a web-based system for annotating, importing and searching mesothelioma cases. The underlying information model is constructed utilizing Unified Modeling Language class diagrams, hierarchical relationships and Enterprise Architect software. The database provides researchers real-time access to richly annotated specimens and integral information related to mesothelioma. Disclosure of data is tightly regulated by the user's authorization and by the participating institution, subject to its local Institutional Review Board and regulatory committee reviews. The National Mesothelioma Virtual Bank currently has over 600 annotated cases available for researchers, including paraffin-embedded tissues, tissue microarrays, serum and genomic DNA. The National Mesothelioma Virtual Bank is a virtual biospecimen registry with robust translational biomedical informatics support to facilitate basic science, clinical, and translational research. Furthermore, it protects patient privacy by disclosing only de-identified datasets to assure that biospecimens can be made accessible to researchers.
The Montage architecture for grid-enabled science processing of large, distributed datasets
NASA Technical Reports Server (NTRS)
Jacob, Joseph C.; Katz, Daniel S.; Prince, Thomas; Berriman, Bruce G.; Good, John C.; Laity, Anastasia C.; Deelman, Ewa; Singh, Gurmeet; Su, Mei-Hui
2004-01-01
Montage is an Earth Science Technology Office (ESTO) Computational Technologies (CT) Round III Grand Challenge investigation to deploy a portable, compute-intensive, custom astronomical image mosaicking service for the National Virtual Observatory (NVO). Although Montage is developing a compute- and data-intensive service for the astronomy community, we are also helping to address a problem that spans both Earth and space science, namely how to efficiently access and process multi-terabyte, distributed datasets. In both communities, the datasets are massive and are stored in distributed archives that are, in most cases, remote from the available computational resources. Therefore, state-of-the-art computational grid technologies are a key element of the Montage portal architecture. This paper describes the aspects of the Montage design that are applicable to both the Earth and space science communities.
MAGNA (Materially and Geometrically Nonlinear Analysis). Part II. Preprocessor Manual.
1982-12-01
AGRID can accept a virtually arbitrary collection of point coordinates which lie on a surface of interest, and generate a regular grid of mesh points ... in the form of a collection of such patches to be translated into an assemblage of biquadratic surface elements (see Subsection 2.1, Figure 2.2 ... using IMPRESS can be converted for use with the present preprocessor by means of the IMPRINT translator. IMPRINT is a collection of conversion routines
2013-05-01
Chinook salmon (presumably subyearling) was the most prevalent life-history type detected at the Russian Island and Woody Island sites. The number of ... Extend and refine the computational grid: We extended the Virtual Columbia River to include regions upstream of Beaver Army, which previously served as ... the Columbia River above Beaver Army and particularly above the confluence of the Willamette River. That process of calibration is highly iterative
Multilevel Parallelization of AutoDock 4.2.
Norgan, Andrew P; Coffman, Paul K; Kocher, Jean-Pierre A; Katzmann, David J; Sosa, Carlos P
2011-04-28
Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.
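A schematic of the two-level decomposition, not mpAD4's actual code: MPI ranks take disjoint slices of the ligand library, and the receptor grid maps are loaded once per rank and reused across consecutive dockings, mirroring the I/O-reduction step described above. load_grid_maps and dock_one are placeholders for the real AutoDock machinery.

```python
from mpi4py import MPI

def load_grid_maps(receptor):
    return {"receptor": receptor}          # placeholder for parsing the .map files

def dock_one(ligand, maps):
    return f"{ligand}: docked against {maps['receptor']}"  # placeholder docking run

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

ligands = [f"lig_{i:04d}" for i in range(1000)]
maps = load_grid_maps("receptor.maps")     # read once per rank, reused below

# Round-robin slice of the library for this rank; node-level threading
# (OpenMP in mpAD4) would further split each dock_one call.
results = [dock_one(lig, maps) for lig in ligands[rank::size]]
all_results = comm.gather(results, root=0)
if rank == 0:
    print(sum(len(r) for r in all_results), "dockings completed")
```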
A revolution in Distributed Virtual Globes creation with e-CORCE space program
NASA Astrophysics Data System (ADS)
Antikidis, Jean-Pierre
2010-05-01
Space applications today take part in our everyday life in a continuous and mostly invisible way. Meteorology, telecommunications and, more recently, GPS-driven applications are fully woven into our modern and comfortable way of life. A new revolution is now underway in which space remote-sensing technology will make the whole Earth available in digital form. Present requirements for creating a digital Earth at high resolution are pushing space technology to a new technological frontier that could be called the "1 day to 1 week, 1 meter, 1 Earth" challenge. The e-CORCE vision (e-Constellation d'Observation Recurrente Cellulaire) relies on a completely new avenue: creating a full virtual Earth with the help of a small-satellite constellation operated as sensors connected to a powerful Internet-based ground network. To handle this extremely large quantity of information (10,000 billion metric pixels), it contemplates maximum use of psycho-visual compression, over-simplified platforms considered as space IP nodes, and a massive worldwide Grid-based system composed of more than 40 receiving and processing nodes. The presentation will introduce the technological hurdles and the way modern upcoming cyber-infrastructure technologies called WAG (Wide Area Grid) may open a practical and economically sound solution to this never-attempted challenge.
Out-of-Core Streamline Visualization on Large Unstructured Meshes
NASA Technical Reports Server (NTRS)
Ueng, Shyh-Kuang; Sikorski, K.; Ma, Kwan-Liu
1997-01-01
It's advantageous for computational scientists to have the capability to perform interactive visualization on their desktop workstations. For data on large unstructured meshes, this capability is not generally available. In particular, particle tracing on unstructured grids can result in a high percentage of non-contiguous memory accesses and therefore may perform very poorly with virtual memory paging schemes. The alternative of visualizing a lower resolution of the data degrades the original high-resolution calculations. This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored into disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that during the streamline construction only a very small amount of data are brought into the main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from the disk is significantly reduced and good memory performance results. This out-of-core algorithm makes possible interactive streamline visualization of large unstructured-grid data sets on a single mid-range workstation with relatively low main-memory capacity: 5-20 megabytes. Our test results also show that this approach is much more efficient than relying on virtual memory and operating system's paging algorithms.
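A minimal sketch of the out-of-core pattern, under an assumed file layout: octree leaf blocks live in per-block files and enter a small fixed-size LRU cache only when the advancing streamline needs them, so disk I/O happens only on block changes. locate_block and interpolate are placeholders for the octree search and velocity interpolation.

```python
import pickle
from collections import OrderedDict

class BlockCache:
    """Fixed-size LRU cache of octree leaf blocks stored one per file."""
    def __init__(self, capacity=8):                     # a few MB of resident data
        self.capacity, self.cache = capacity, OrderedDict()

    def get(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)            # refresh LRU order
            return self.cache[block_id]
        with open(f"octree/block_{block_id}.pkl", "rb") as f:  # on-demand read
            block = pickle.load(f)
        self.cache[block_id] = block
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)               # evict least recently used
        return block

def trace(seed, locate_block, interpolate, steps=1000, h=0.01):
    """Advance a streamline with simple Euler steps; I/O only on block changes."""
    cache, p, path = BlockCache(), seed, [seed]
    for _ in range(steps):
        block = cache.get(locate_block(p))
        v = interpolate(block, p)                        # velocity at current point
        p = tuple(x + h * vi for x, vi in zip(p, v))
        path.append(p)
    return path
```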
NASA Astrophysics Data System (ADS)
Gross, Kyle; Hayashi, Soichi; Teige, Scott; Quick, Robert
2012-12-01
Large distributed computing collaborations, such as the Worldwide LHC Computing Grid (WLCG), face many issues when it comes to providing a working grid environment for their users. One of these is exchanging tickets between the various ticketing systems in use by grid collaborations. Ticket systems such as Footprints, RT, Remedy, and ServiceNow all have different schemas that must be reconciled in order to provide a reliable exchange of information between support entities and users in different grid environments. To combat this problem, OSG Operations has created a ticket synchronization interface called GOC-TX that relies on web services instead of the error-prone email-parsing methods of the past. Synchronizing tickets between different ticketing systems allows any user or support entity to work on a ticket in their home environment, thus providing a familiar and comfortable place to provide updates without having to learn another ticketing system. The interface is generic enough that it can be customized for nearly any ticketing system with a web-service interface with only minor changes, which allows us to be flexible and to rapidly bring new ticket synchronization online. Synchronization can be triggered by different methods, including mail, a web-services interface, and active messaging. GOC-TX currently interfaces with Global Grid User Support (GGUS) for WLCG, Remedy at Brookhaven National Lab (BNL), and Request Tracker (RT) at the Virtual Data Toolkit (VDT). Work is progressing on the Fermi National Accelerator Laboratory (FNAL) ServiceNow synchronization. This paper will explain the problems faced by OSG and how they led OSG to create and implement this ticket synchronization system, along with the technical details that allow synchronization to be performed at a production level.
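The heart of such an exchange is schema translation: a ticket from one system is mapped onto another system's fields before being pushed over its web-service interface. The field and status names below are hypothetical, not the real RT or GGUS schemas.

```python
# Map one ticketing system's fields and status vocabulary onto another's.
FIELD_MAP_RT_TO_GGUS = {
    "Subject": "ticket_subject",
    "Status":  "ticket_status",
    "Owner":   "assigned_to",
}
STATUS_MAP = {"open": "in progress", "resolved": "solved"}

def translate(rt_ticket):
    ggus = {FIELD_MAP_RT_TO_GGUS[k]: v for k, v in rt_ticket.items()
            if k in FIELD_MAP_RT_TO_GGUS}
    # normalize the status vocabulary; fall back to the original value
    ggus["ticket_status"] = STATUS_MAP.get(ggus["ticket_status"].lower(),
                                           ggus["ticket_status"])
    return ggus

print(translate({"Subject": "CE down at BNL", "Status": "open", "Owner": "ops"}))
```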
The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications
NASA Technical Reports Server (NTRS)
Johnston, William E.
2002-01-01
With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies indicating that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technology infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will come in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources, such as computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.
Economic models for management of resources in peer-to-peer and grid computing
NASA Astrophysics Data System (ADS)
Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David
2001-07-01
The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next-generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments are a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user. They include the commodity market, posted price, tender and auction models. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the infrastructure necessary to realize them. In addition to the normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling, for two different optimization strategies, on the World Wide Grid (WWG) testbed that contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
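One of the strategies mentioned, deadline- and cost-based scheduling, can be illustrated with a toy broker: given per-resource price and throughput, it picks the cheapest mix of resources that still meets the deadline. The resource numbers are invented for the example.

```python
resources = [  # (name, jobs/hour, $/job) -- illustrative figures only
    ("cluster-A", 120, 0.02), ("cluster-B", 300, 0.05), ("p2p-pool", 60, 0.01),
]

def min_cost_within_deadline(n_jobs, deadline_h):
    """Fill capacity cheapest-first until the job count is met before the deadline."""
    done, cost = 0, 0.0
    for name, rate, price in sorted(resources, key=lambda r: r[2]):  # cheapest first
        take = min(n_jobs - done, int(rate * deadline_h))
        done, cost = done + take, cost + take * price
        if done == n_jobs:
            return cost
    raise RuntimeError("deadline infeasible even using all resources")

print(f"cost = ${min_cost_within_deadline(2000, 10):.2f}")
```

A budget-constrained variant of the same loop would sort by throughput instead and stop when the budget is exhausted, which is the second optimization strategy the abstract refers to.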
Parallel inputs to memory in bee colour vision.
Horridge, Adrian
2016-03-01
In the 19th century, it was found that attraction of bees to light was controlled by light intensity irrespective of colour, and a few critical entomologists inferred that the vision of bees foraging on flowers was unlike human colour vision. Therefore, quite justly, Professor Carl von Hess concluded in his book on the Comparative Physiology of Vision (1912) that bees do not distinguish colours in the way that humans enjoy. Immediately, Karl von Frisch, an assistant in the Zoology Department of the same University of Munich, set to work to show that indeed bees have colour vision like humans, thereby initiating a new research tradition and setting off a decade of controversy that ended only at the death of Hess in 1923. Until 1939, several researchers continued the tradition of trying to untangle the mechanism of bee vision by repeatedly testing trained bees, but made little progress, partly because von Frisch and his legacy dominated the scene. The theory of trichromatic colour vision developed further after three types of receptors, sensitive to green, blue, and ultraviolet (UV), were demonstrated in the bee in 1964. Then, until the end of the century, all data were interpreted in terms of trichromatic colour space. Anomalies were nothing new, but eventually, after 1996, they led to the discovery that bees have a previously unknown type of colour vision based on a monochromatic measure and distribution of blue and measures of modulation in green and blue receptor pathways. Meanwhile, in the 20th century, the search for a suitable rationalization, and explorations of sterile culs-de-sac, had filled the literature of bee colour vision, but were based on the wrong theory.
Zhang, Baofeng; D'Erasmo, Michael P; Murelli, Ryan P; Gallicchio, Emilio
2016-09-30
We report the results of a binding free energy-based virtual screening campaign of a library of 77 α-hydroxytropolone derivatives against the challenging RNase H active site of the reverse transcriptase (RT) enzyme of human immunodeficiency virus-1. Multiple protonation states, rotamer states, and binding modalities of each compound were individually evaluated. The work involved more than 300 individual absolute alchemical binding free energy parallel molecular dynamics calculations and over 1 million CPU hours on national computing clusters and a local campus computational grid. The thermodynamic and structural measures obtained in this work rationalize a series of characteristics of this system useful for guiding future synthetic and biochemical efforts. The free energy model identified key ligand-dependent entropic and conformational reorganization processes difficult to capture using standard docking and scoring approaches. Binding free energy-based optimization of the lead compounds emerging from the virtual screen has yielded four compounds with very favorable binding properties, which will be the subject of further experimental investigations. This work is one of the few reported applications of advanced binding free energy models to large-scale virtual screening and optimization projects. It further demonstrates that, with suitable algorithms and automation, advanced binding free energy models can have a useful role in early-stage drug-discovery programs.
Cyber-Physical System Security With Deceptive Virtual Hosts for Industrial Control Networks
Vollmer, Todd; Manic, Milos
2014-05-01
A challenge facing industrial control network administrators is protecting the typically large number of connected assets for which they are responsible. These cyber devices may be tightly coupled with the physical processes they control, and human-induced failures risk dire real-world consequences. Dynamic virtual honeypots are effective tools for observing and attracting network intruder activity. This paper presents a design and implementation for self-configuring honeypots that passively examine control system network traffic and actively adapt to the observed environment. In contrast to prior work in the field, six tools were analyzed for suitability for network entity information gathering. Ettercap, an established network security tool not commonly used in this capacity, outperformed the other tools and was chosen for implementation. Utilizing Ettercap XML output, a novel four-step algorithm was developed for autonomous creation and update of a Honeyd configuration. This algorithm was tested on an existing small campus grid and sensor network by execution of a collaborative usage scenario. Automatically created virtual hosts were deployed in concert with an anomaly behavior (AB) system in an attack scenario. Virtual hosts were automatically configured with unique emulated network stack behaviors for 92% of the targeted devices. The AB system alerted on 100% of the monitored emulated devices.
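A hedged sketch of the XML-to-Honeyd step of such a four-step algorithm. The Ettercap XML layout assumed here is illustrative only; the emitted directives follow standard Honeyd configuration syntax (create/set/add/bind).

```python
import xml.etree.ElementTree as ET

# Assumed (not actual) Ettercap output structure, for illustration.
ETTERCAP_XML = """
<hosts>
  <host ip="10.0.0.5"><os>Linux 2.4.x</os><port number="80"/><port number="22"/></host>
</hosts>
"""

def honeyd_config(xml_text):
    """Emit one Honeyd template per discovered host, mirroring its open ports."""
    lines = []
    for i, host in enumerate(ET.fromstring(xml_text).iter("host")):
        name = f"host{i}"
        lines.append(f"create {name}")
        lines.append(f'set {name} personality "{host.findtext("os", "default")}"')
        for port in host.iter("port"):
            lines.append(f'add {name} tcp port {port.get("number")} open')
        lines.append(f'bind {host.get("ip")} {name}')
    return "\n".join(lines)

print(honeyd_config(ETTERCAP_XML))
```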
Two-stage collaborative global optimization design model of the CHPG microgrid
NASA Astrophysics Data System (ADS)
Liao, Qingfen; Xu, Yeyan; Tang, Fei; Peng, Sicheng; Yang, Zheng
2017-06-01
With technology continuously developing and investment costs falling, the proportion of renewable energy in the power grid is becoming higher and higher because of its clean and environmentally friendly characteristics; this may require larger-capacity energy storage devices, increasing the cost. A two-stage collaborative global optimization design model of the combined-heat-power-and-gas (abbreviated as CHPG) microgrid is proposed in this paper, to minimize the cost by using virtual storage without extending the existing storage system. P2G technology is used as virtual multi-energy storage in CHPG, which can coordinate the operation of the electric energy network and the natural gas network at the same time. Demand response is also a kind of virtual storage, including economic guidance for the DGs and heat pumps on the demand side and priority scheduling of controllable loads. The two kinds of virtual storage coordinate to smooth the high-frequency and low-frequency fluctuations of renewable energy respectively, achieving a lower-cost operation scheme. Finally, the feasibility and advantages of the proposed design model are demonstrated in a simulation of a CHPG microgrid.
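The band-splitting idea can be made concrete with a toy example: a moving average separates the renewable power series into a slow component (assigned to the P2G virtual storage) and a fast residual (assigned to demand response). The window length and the series are invented for illustration.

```python
import numpy as np

def split_bands(power, window=12):          # e.g. 12 five-minute samples = 1 hour
    """Low-pass with a moving average; the residual is the high-frequency band."""
    kernel = np.ones(window) / window
    low = np.convolve(power, kernel, mode="same")   # slow trend -> P2G storage
    return low, power - low                         # fast residual -> demand response

rng = np.random.default_rng(0)
t = np.arange(288)                                   # one day of 5-minute samples
power = 50 + 20 * np.sin(2 * np.pi * t / 288) + 5 * rng.standard_normal(t.size)
p2g_duty, dr_duty = split_bands(power)
print(f"P2G band std {p2g_duty.std():.1f} kW, DR band std {dr_duty.std():.1f} kW")
```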
Java bioinformatics analysis web services for multiple sequence alignment--JABAWS:MSA.
Troshin, Peter V; Procter, James B; Barton, Geoffrey J
2011-07-15
JABAWS is a web services framework that simplifies the deployment of web services for bioinformatics. JABAWS:MSA provides services for five multiple sequence alignment (MSA) methods (Probcons, T-coffee, Muscle, Mafft and ClustalW), and is the system employed by the Jalview multiple sequence analysis workbench since version 2.6. A fully functional, easy-to-set-up server is provided as a Virtual Appliance (VA), which can be run on most operating systems that support a virtualization environment such as VMware or Oracle VirtualBox. JABAWS is also distributed as a Web Application aRchive (WAR) and can be configured to run on a single computer and/or a cluster managed by Grid Engine, LSF or other queuing systems that support DRMAA. JABAWS:MSA provides clients full access to each application's parameters, and allows administrators to specify named parameter preset combinations and execution limits for each application through simple configuration files. The JABAWS command-line client allows integration of JABAWS services into conventional scripts. JABAWS is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws.
den Besten, Matthijs; Thomas, Arthur J; Schroeder, Ralph
2009-04-22
It is often said that the life sciences are transforming into an information science. As laboratory experiments are starting to yield ever increasing amounts of data and the capacity to deal with those data is catching up, an increasing share of scientific activity is seen to be taking place outside the laboratories, sifting through the data and modelling "in silico" the processes observed "in vitro." The transformation of the life sciences and similar developments in other disciplines have inspired a variety of initiatives around the world to create technical infrastructure to support the new scientific practices that are emerging. The e-Science programme in the United Kingdom and the NSF Office for Cyberinfrastructure are examples of these. In Switzerland there have been no such national initiatives. Yet, this has not prevented scientists from exploring the development of similar types of computing infrastructures. In 2004, a group of researchers in Switzerland established a project, SwissBioGrid, to explore whether Grid computing technologies could be successfully deployed within the life sciences. This paper presents their experiences as a case study of how the life sciences are currently operating as an information science and presents the lessons learned about how existing institutional and technical arrangements facilitate or impede this operation. SwissBioGrid gave rise to two pilot projects: one for proteomics data analysis and the other for high-throughput molecular docking ("virtual screening") to find new drugs for neglected diseases (specifically, for dengue fever). The proteomics project was an example of a data management problem, applying many different analysis algorithms to Terabyte-sized datasets from mass spectrometry, involving comparisons with many different reference databases; the virtual screening project was more a purely computational problem, modelling the interactions of millions of small molecules with a limited number of protein targets on the coat of the dengue virus. Both present interesting lessons about how scientific practices are changing when they tackle the problems of large-scale data analysis and data management by means of creating a novel technical infrastructure. In the experience of SwissBioGrid, data intensive discovery has a lot to gain from close collaboration with industry and harnessing distributed computing power. Yet the diversity in life science research implies only a limited role for generic infrastructure; and the transience of support means that researchers need to integrate their efforts with others if they want to sustain the benefits of their success, which are otherwise lost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayanasamy, G; Zhang, X; Paudel, N
Purpose: The aim of this project is to study the therapeutic ratio (TR) for helical Tomotherapy (HT) based spatially fractionated radiotherapy (GRID). Estimation of TR was based on the linear-quadratic cell survival model, comparing the normal cell survival in HT GRID to that of a uniform dose delivered in an open field for the same tumor survival. Methods: The HT GRID plan was generated using a patient-specific virtual GRID block pattern of non-divergent, cylinder-shaped holes using MLCs. TR was defined as the ratio of normal tissue surviving fraction (SF) under HT GRID irradiation to that under an open-field irradiation with an equivalent dose that results in the same tumor cell SF. The ratio was estimated from DVH data on ten patient plans with deep-seated, bulky tumors, approved by the treating radiation oncologist. Dependence of the TR values on the radio-sensitivity of the tumor cells and on the prescription dose was also analyzed. Results: The mean ± standard deviation (SD) of TR was 4.0±0.7 (range: 3.1 to 5.5) for the 10 patients with a single-fraction dose of 20 Gy and a tumor cell SF of 0.5 at 2 Gy. In addition, mean±SD TR values of 1±0.1 and 18.0±5.1 were found for tumors with SF of 0.3 and 0.7, respectively. Reducing the prescription dose to 15 and 10 Gy lowered the TR to 2.0±0.2 and 1.2±0.04 for a tumor cell SF of 0.5 at 2 Gy. In this study, the SF of normal cells was assumed to be 0.5 at 2 Gy. Conclusion: HT GRID displayed a significant therapeutic advantage over uniform-dose open-field irradiation. TR increases with the radioresistance of the tumor cells and with the prescription dose.
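A worked sketch of the TR definition above, using the linear-quadratic model SF(D) = exp(-(αD + βD²)). The voxel doses and α/β ratios below are invented for illustration; the paper derives the dose distributions from patient DVHs.

```python
import numpy as np

def lq_params(sf2, ab_ratio):
    """alpha, beta from the surviving fraction at 2 Gy and an assumed alpha/beta."""
    alpha = -np.log(sf2) / (2 + 4 / ab_ratio)
    return alpha, alpha / ab_ratio

def sf(dose, alpha, beta):
    # population surviving fraction: average cell survival over voxels
    return np.mean(np.exp(-(alpha * dose + beta * dose**2)))

a_t, b_t = lq_params(0.5, 10.0)            # tumor: SF2 = 0.5 (assumed a/b = 10)
a_n, b_n = lq_params(0.5, 3.0)             # normal: SF2 = 0.5 (assumed a/b = 3)

# Toy GRID distribution: ~30% of voxels under the holes get 20 Gy, the rest 2 Gy.
rng = np.random.default_rng(0)
grid_dose = np.where(rng.random(10000) < 0.3, 20.0, 2.0)

sf_tumor = sf(grid_dose, a_t, b_t)
# Uniform open-field dose giving the same tumor SF: solve beta*D^2 + alpha*D + ln(SF) = 0
d_eq = (-a_t + np.sqrt(a_t**2 - 4 * b_t * np.log(sf_tumor))) / (2 * b_t)
tr = sf(grid_dose, a_n, b_n) / sf(np.array([d_eq]), a_n, b_n)
print(f"equivalent uniform dose = {d_eq:.1f} Gy, TR = {tr:.2f}")
```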
Using Virtualization to Integrate Weather, Climate, and Coastal Science Education
NASA Astrophysics Data System (ADS)
Davis, J. R.; Paramygin, V. A.; Figueiredo, R.; Sheng, Y.
2012-12-01
To better understand and communicate the important roles of weather and climate on the coastal environment, a unique publicly available tool is being developed to support research, education, and outreach activities. This tool uses virtualization technologies to facilitate an interactive, hands-on environment in which students, researchers, and the general public can perform their own numerical modeling experiments. While prior efforts have focused solely on the study of the coastal and estuary environments, this effort incorporates the community-supported weather and climate model (WRF-ARW) into the Coastal Science Educational Virtual Appliance (CSEVA), an education tool used to assist in the learning of coastal transport processes; storm surge and inundation; and evacuation modeling. The Weather Research and Forecasting (WRF) Model is a next-generation, community-developed and -supported, mesoscale numerical weather prediction system designed to be used internationally for research, operations, and teaching. It includes two dynamical solvers (ARW - Advanced Research WRF and NMM - Nonhydrostatic Mesoscale Model) as well as a data assimilation system. WRF-ARW is the ARW dynamics solver combined with other components of the WRF system; it was developed primarily at NCAR, with community support provided by the Mesoscale and Microscale Meteorology (MMM) division of the National Center for Atmospheric Research (NCAR). Included with WRF is the WRF Pre-processing System (WPS), a set of programs to prepare input for real-data simulations. The CSEVA is based on the Grid Appliance (GA) framework and is built using virtual machine (VM) and virtual networking technologies. Virtualization supports integration of an operating system, libraries (e.g. Fortran, C, Perl, NetCDF, etc. necessary to build WRF), a web server, numerical models/grids/inputs, pre-/post-processing tools (e.g. WPS / RIP4 or UPS), graphical user interfaces, "Cloud"-computing infrastructure and other tools into a single ready-to-use package. Thus, the previously onerous task of setting up and compiling these tools becomes obsolete, and the researcher, educator, or student can focus on using the tools to study the interactions between weather, climate and the coastal environment. The incorporation of WRF into the CSEVA has been designed to be synergistic with the extensive online tutorials and biannual tutorials hosted by NCAR. Included are working examples of the idealized test simulations provided with WRF (2D sea breeze and squalls, a large eddy simulation, a Held and Suarez simulation, etc.). To demonstrate the integration of weather, climate, and coastal science education, example applications are being developed to demonstrate how the system can be used to couple a coastal and estuarine circulation, transport and storm surge model with downscaled reanalysis weather and future climate predictions. Documentation, tutorials and the enhanced CSEVA itself will be found on the web at: http://cseva.coastal.ufl.edu.
NASA Astrophysics Data System (ADS)
Williams, Mike; Egede, Ulrik; Paterson, Stuart; LHCb Collaboration
2011-12-01
The distributed analysis experience to date at LHCb has been positive: job success rates are high and wait times for high-priority jobs are low. LHCb users access the grid using the GANGA job-management package, while the LHCb virtual organization manages its resources using the DIRAC package. This clear division of labor has benefitted LHCb and its users greatly; it is a major reason why distributed analysis at LHCb has been so successful. The newly formed LHCb distributed analysis support team has also proved to be a success.
Earth Science community support in the EGI-Inspire Project
NASA Astrophysics Data System (ADS)
Schwichtenberg, H.
2012-04-01
The Earth Science Grid community has followed its strategy of propagating Grid technology to the ES disciplines, setting up interactive collaboration among the members of the community, and stimulating the interest of stakeholders at the political level for ten years now. This strategy was described in a roadmap published in the Earth Science Informatics journal. It was applied through different European Grid projects and led to a large Grid Earth Science VRC that covers a variety of ES disciplines, all of which, in the end, were facing the same kinds of ICT problems. The penetration of Grid in the ES community is indicated by the variety of applications, the number of countries in which ES applications are ported, the number of papers in international journals, and the number of related PhDs. Among the six virtual organisations belonging to ES, one, ESR, is generic. Three others (env.see-grid-sci.eu, meteo.see-grid-sci.eu and seismo.see-grid-sci.eu) are thematic and regional (South Eastern Europe), covering environment, meteorology and seismology. The sixth VO, EGEODE, is for the users of the Geocluster software. There are also ES users in national VOs or in VOs related to projects. The services for the ES task in EGI-Inspire concern the data that are a key part of any ES application. The ES community requires several interfaces to access data and metadata outside of the EGI infrastructure, e.g. by using grid-enabled database interfaces. The data centres have also developed service tools for basic research activities such as searching, browsing and downloading these datasets, but these are not accessible from applications executed on the Grid. The ES task in EGI-Inspire aims to make these tools accessible from the Grid. In collaboration with GENESI-DR (Ground European Network for Earth Science Interoperations - Digital Repositories), this task is maintaining and evolving an interface, in response to new requirements, that will allow data in the GENESI-DR infrastructure to be accessed from EGI resources to enable future research activities by this HUC. The international climate community for IPCC has created the Earth System Grid (ESG) to store and share climate data. There is a need to interface ESG with EGI for climate studies - parametric, regional and impact aspects. Critical points concern the interoperability of the security mechanisms between both organisations, data protection policy, data transfer, data storage and data caching. Presenter: Horst Schwichtenberg. Co-Authors: Monique Petitdidier (IPSL), Andre Gemünd (SCAI), Wim Som de Cerff (KNMI), Michael Schnell (SCAI).
NASA Astrophysics Data System (ADS)
Ernst, Gerhard; Hüttemann, Andreas
2010-01-01
List of contributors; 1. Introduction Gerhard Ernst and Andreas Hüttemann; Part I. The Arrows of Time: 2. Does a low-entropy constraint prevent us from influencing the past? Mathias Frisch; 3. The past hypothesis meets gravity Craig Callender; 4. Quantum gravity and the arrow of time Claus Kiefer; Part II. Probability and Chance: 5. The natural-range conception of probability Jacob Rosenthal; 6. Probability in Boltzmannian statistical mechanics Roman Frigg; 7. Humean mechanics versus a metaphysics of powers Michael Esfeld; Part III. Reduction: 8. The crystallisation of Clausius's phenomenological thermodynamics C. Ulises Moulines; 9. Reduction and renormalization Robert W. Batterman; 10. Irreversibility in stochastic dynamics Jos Uffink; Index.
Gamma-Ray Detectors: From Homeland Security to the Cosmos (443rd Brookhaven Lecture)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolotnikov, Aleksey
2008-12-03
Many radiation detectors are first developed for homeland security or industrial applications. Scientists, however, are continuously realizing new roles that these detectors can play in high-energy physics and astrophysics experiments. On Wednesday, December 3, join presenter Aleksey Bolotnikov, a physicist in the Nonproliferation and National Security Department (NNSD) and a co-inventor of the cadmium-zinc-telluride Frisch-ring (CdZnTe) detector, for the 443rd Brookhaven Lecture, entitled Gamma-Ray Detectors: From Homeland Security to the Cosmos. In his lecture, Bolotnikov will highlight two primary radiation-detector technologies: CdZnTe detectors and fluid-xenon (Xe) detectors.
NASA's Participation in the National Computational Grid
NASA Technical Reports Server (NTRS)
Feiereisen, William J.; Zornetzer, Steve F. (Technical Monitor)
1998-01-01
Over the last several years it has become evident that the character of NASA's supercomputing needs has changed. One of the major missions of the agency is to support the design and manufacture of aero- and space-vehicles with technologies that will significantly reduce their cost. It is becoming clear that improvements in the process of aerospace design and manufacturing will require a high-performance information infrastructure that allows geographically dispersed teams to draw upon resources that are broader than traditional supercomputing. A computational grid draws together our information resources into one system. We can foresee the time when a Grid will allow engineers and scientists to use the tools of supercomputers, databases and online experimental devices in a virtual environment to collaborate with distant colleagues. The concept of a computational grid has been spoken of for many years, but several events in recent times are conspiring to allow us to actually build one. In late 1997 the National Science Foundation initiated the Partnerships for Advanced Computational Infrastructure (PACI), which is built around the idea of distributed high-performance computing. The Alliance, led by the National Computational Science Alliance (NCSA), and the National Partnership for Advanced Computational Infrastructure (NPACI), led by the San Diego Supercomputing Center, have been instrumental in drawing together the "Grid Community" to identify the technology bottlenecks and propose a research agenda to address them. During the same period NASA has begun to reformulate parts of two major high-performance computing research programs to concentrate on distributed high-performance computing and has banded together with the PACI centers to address the research agenda in common.
Open Science in the Cloud: Towards a Universal Platform for Scientific and Statistical Computing
NASA Astrophysics Data System (ADS)
Chine, Karim
The UK, through the e-Science program, the US, through the NSF-funded cyberinfrastructure, and the European Union, through the ICT Calls, aimed to provide "the technological solution to the problem of efficiently connecting data, computers, and people with the goal of enabling derivation of novel scientific theories and knowledge".1 The Grid (Foster, 2002; Foster, Kesselman, Nick, & Tuecke, 2002), foreseen as a major accelerator of discovery, did not meet the expectations it had excited at its beginnings and was not adopted by the broad population of research professionals. The Grid is a good tool for particle physicists, and it has allowed them to tackle the tremendous computational challenges inherent to their field. However, as a technology and paradigm for delivering computing on demand, it doesn't work and it can't be fixed. On the one hand, "the abstractions that Grids expose - to the end-user, to the deployers and to application developers - are inappropriate and they need to be higher level" (Jha, Merzky, & Fox), and on the other hand, academic Grids are inherently economically unsustainable. They can't compete with a service outsourced to industry whose quality and price would be driven by market forces. The virtualization technologies and their corollary, the Infrastructure-as-a-Service (IaaS) style cloud, hold the promise to enable what the Grid failed to deliver: a sustainable environment for computational sciences that would lower the barriers to accessing federated computational resources, software tools and data; enable collaboration and resource sharing; and provide the building blocks of a ubiquitous platform for traceable and reproducible computational research.
Monitoring System for the GRID Monte Carlo Mass Production in the H1 Experiment at DESY
NASA Astrophysics Data System (ADS)
Bystritskaya, Elena; Fomenko, Alexander; Gogitidze, Nelly; Lobodzinski, Bogdan
2014-06-01
The H1 Virtual Organization (VO), as one of the small VOs, employs most components of the EMI or gLite middleware. In this framework, a monitoring system is designed for the H1 experiment to identify and recognize within the GRID the resources best suited for execution of CPU-time-consuming Monte Carlo (MC) simulation tasks (jobs). Monitored resources are Computing Elements (CEs), Storage Elements (SEs), WMS servers (WMSs), the CernVM File System (CVMFS) available to the VO HONE, and local GRID User Interfaces (UIs). The general principle of monitoring GRID elements is based on the execution of short test jobs on different CE queues, using submission through various WMSs and directly to the CREAM-CEs as well. Real H1 MC production jobs with a small number of events are used to perform the tests. Test jobs are periodically submitted into GRID queues, the status of these jobs is checked, output files of completed jobs are retrieved, the result of each job is analyzed, and the waiting time and run time are derived. Using this information, the status of the GRID elements is estimated and the most suitable ones are included in the automatically generated configuration files for use in the H1 MC production. The monitoring system allows problems at the GRID sites to be identified and promptly reacted to (for example, by sending GGUS (Global Grid User Support) trouble tickets). The system can easily be adapted to identify the optimal resources for tasks other than MC production, simply by changing to the relevant test jobs. The monitoring system is written mostly in Python and Perl, with a few shell scripts. In addition to the test monitoring system, we use information from real production jobs to monitor the availability and quality of the GRID resources, as sketched below. The monitoring tools register the number of job resubmissions and the percentage of failed and finished jobs relative to all jobs on the CEs, and determine the average waiting and running times for the involved GRID queues. CEs which do not meet the set criteria can be removed from the production chain by including them in an exception table. All of these monitoring actions lead to a more reliable and faster execution of MC requests.
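A simplified sketch of the bookkeeping described above: per-CE success rates from test and production jobs decide which queues enter the auto-generated production configuration and which land in the exception table. The job records and threshold are invented for the example.

```python
from collections import defaultdict

jobs = [  # (computing element, final status) -- invented records
    ("ce1.desy.de", "Done"), ("ce1.desy.de", "Done"), ("ce1.desy.de", "Failed"),
    ("ce2.gridka.de", "Failed"), ("ce2.gridka.de", "Failed"), ("ce2.gridka.de", "Done"),
]

def classify(job_records, min_success=0.6):
    """Split CEs into production-ready and exception-table lists by success rate."""
    counts = defaultdict(lambda: [0, 0])            # CE -> [done, total]
    for ce, status in job_records:
        counts[ce][0] += status == "Done"
        counts[ce][1] += 1
    good, excepted = [], []
    for ce, (done, total) in counts.items():
        (good if done / total >= min_success else excepted).append(ce)
    return good, excepted

usable, exception_table = classify(jobs)
print("production CEs:", usable, "| exception table:", exception_table)
```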
Decentralized energy systems for clean electricity access
NASA Astrophysics Data System (ADS)
Alstone, Peter; Gershenson, Dimitry; Kammen, Daniel M.
2015-04-01
Innovative approaches are needed to address the needs of the 1.3 billion people lacking electricity, while simultaneously transitioning to a decarbonized energy system. With particular focus on the energy needs of the underserved, we present an analytic and conceptual framework that clarifies the heterogeneous continuum of centralized on-grid electricity, autonomous mini- or community grids, and distributed, individual energy services. A historical analysis shows that the present day is a unique moment in the history of electrification where decentralized energy networks are rapidly spreading, based on super-efficient end-use appliances and low-cost photovoltaics. We document how this evolution is supported by critical and widely available information technologies, particularly mobile phones and virtual financial services. These disruptive technology systems can rapidly increase access to basic electricity services and directly inform the emerging Sustainable Development Goals for quality of life, while simultaneously driving action towards low-carbon, Earth-sustaining, inclusive energy systems.
A virtual observatory for photoionized nebulae: the Mexican Million Models database (3MdB).
NASA Astrophysics Data System (ADS)
Morisset, C.; Delgado-Inglada, G.; Flores-Fajardo, N.
2015-04-01
Photoionization models obtained with numerical codes are widely used to study the physics of the interstellar medium (planetary nebulae, HII regions, etc.). Grids of models are computed to understand the effects of the different parameters used to describe these regions on the observables (mainly emission line intensities). Most of the time, only a small part of the computed results of such grids is published, and the results are sometimes hard to obtain in a user-friendly format. We present here the Mexican Million Models dataBase (3MdB), an effort to resolve both of these issues in the form of a database of photoionization models, easily accessible through the MySQL protocol and containing many useful outputs from the models, such as the intensities of 178 emission lines, the ionic fractions of all the ions, etc. Some examples of the use of the 3MdB are also presented.
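Access through the MySQL protocol might look like the following sketch. The host, credentials, and database/table/column names are placeholders illustrating the access pattern, not the actual 3MdB schema.

```python
import pymysql

# Connection details are placeholders; real 3MdB access parameters differ.
conn = pymysql.connect(host="3mdb.example.org", user="guest",
                       password="guest", database="3MdB")
with conn.cursor() as cur:
    # e.g. an [O III]/H-beta line ratio versus ionization parameter for one grid
    cur.execute(
        "SELECT logU, O_3_5007A / H_1_4861A AS o3hb "
        "FROM models WHERE ref_grid = %s LIMIT 10", ("HII_CHIm",))
    for logU, o3hb in cur.fetchall():
        print(f"log U = {logU:+.2f}  [O III]5007/Hbeta = {o3hb:.3f}")
conn.close()
```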
NASA Astrophysics Data System (ADS)
Ji, Junzhong; Song, Xiangjing; Liu, Chunnian; Zhang, Xiuzhen
2013-08-01
Community structure detection in complex networks has been intensively investigated in recent years. In this paper, we propose an adaptive approach based on ant colony clustering to discover communities in a complex network. The focus of the method is the clustering process of an ant colony in a virtual grid, where each ant represents a node in the complex network. During the ant colony search, the method uses a new fitness function to perceive the local environment and employs a pheromone diffusion model as a global information feedback mechanism to realize information exchange among ants. A significant advantage of our method is that the locations in the grid environment and the connections of the complex network structure are simultaneously taken into account in ant movement. Experimental results on computer-generated and real-world networks show the capability of our method to successfully detect community structures.
A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)
2001-01-01
NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise, in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
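A toy sketch of the trade-off MinEX balances: assign cells to processors so the maximum load stays low while reassignments that would move data are penalized. This greedy pass illustrates the objective only; it is not the MinEX algorithm.

```python
def rebalance(loads, current_owner, move_cost=0.2):
    """loads: cell -> work units; current_owner: cell -> processor id."""
    n_proc = max(current_owner.values()) + 1
    proc_load = [0.0] * n_proc
    assignment = {}
    for cell in sorted(loads, key=loads.get, reverse=True):   # heaviest first
        home = current_owner[cell]
        # effective cost: resulting load, plus a penalty if the cell must migrate
        best = min(range(n_proc),
                   key=lambda p: proc_load[p] + loads[cell]
                                 + (move_cost * loads[cell] if p != home else 0))
        assignment[cell] = best
        proc_load[best] += loads[cell]
    return assignment, proc_load

owners = {f"c{i}": i % 2 for i in range(6)}      # current placement on 2 processors
work = {f"c{i}": float(i + 1) for i in range(6)} # per-cell workload
print(rebalance(work, owners))
```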
A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.
Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens
2017-01-01
To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm² in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, and the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data, with the weights used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.
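The inverse-transform-sampling step can be sketched as follows: a tabulated PDF (here a synthetic energy spectrum) is turned into a CDF, uniform random numbers are inverted through it, and samples are jittered within the selected bin. The histogram is illustrative; the paper derives its PDFs from the PSF.

```python
import numpy as np

edges = np.linspace(0.0, 6.0, 61)                    # energy bins [MeV]
centers = 0.5 * (edges[:-1] + edges[1:])
pdf = np.exp(-centers / 2.0)                         # toy spectrum shape
pdf /= pdf.sum()

cdf = np.cumsum(pdf)

def sample_energies(n, rng=np.random.default_rng(0)):
    u = rng.random(n)
    idx = np.searchsorted(cdf, u)                    # invert the tabulated CDF
    # uniform jitter inside the selected bin recovers a continuous distribution
    return edges[idx] + rng.random(n) * np.diff(edges)[idx]

e = sample_energies(100000)
print(f"mean sampled energy = {e.mean():.2f} MeV")
```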
Integrated Access to Solar Observations With EGSO
NASA Astrophysics Data System (ADS)
Csillaghy, A.
2003-12-01
{\\b Co-Authors}: J.Aboudarham (2), E.Antonucci (3), R.D.Bentely (4), L.Ciminiera (5), A.Finkelstein (4), J.B.Gurman(6), F.Hill (7), D.Pike (8), I.Scholl (9), V.Zharkova and the EGSO development team {\\b Institutions}: (2) Observatoire de Paris-Meudon (France); (3) INAF - Istituto Nazionale di Astrofisica (Italy); (4) University College London (U.K.); (5) Politecnico di Torino (Italy), (6) NASA Goddard Space Flight Center (USA); (7) National Solar Observatory (USA); (8) Rutherford Appleton Lab. (U.K.); (9) Institut d'Astrophysique Spatial, Universite de Paris-Sud (France) ; (10) University of Bradford (U.K) {\\b Abstract}: The European Grid of Solar Observations is the European contribution to the deployment of a virtual solar observatory. The project is funded under the Information Society Technologies (IST) thematic programme of the European Commission's Fifth Framework. EGSO started in March 2002 and will last until March 2005. The project is categorized as a computer science effort. Evidently, a fair amount of issues it addresses are general to grid projects. Nevertheless, EGSO is also of benefit to the application domains, including solar physics, space weather, climate physics and astrophysics. With EGSO, researchers as well as the general public can access and combine solar data from distributed archives in an integrated virtual solar resource. Users express queries based on various search parameters. The search possibilities of EGSO extend the search possibilities of traditional data access systems. For instance, users can formulate a query to search for simultaneous observations of a specific solar event in a given number of wavelengths. In other words, users can search for observations on the basis of events and phenomena, rather than just time and location. The software architecture consists of three collaborating components: a consumer, a broker and a provider. The first component, the consumer, organizes the end user interaction and controls requests submitted to the grid. The consumer is thus in charge of tasks such as request handling, request composition, data visualization and data caching. The second component, the provider, is dedicated to data providing and processing. It links the grid to individual data providers and data centers. The third component, the broker, collects information about providers and allows consumers to perform the searches on the grid. Each component can exist in multiple instances. This follows a basic grid concept: The failure or unavailability of a single component will not generate a failure of the whole system, as other systems will take over the processing of requests. The architecture relies on a global data model for the semantics. The data model is in some way the brains of the grid. It provides a description of the information entities available within the grid, as well as a description of their relationships. EGSO is now in the development phase. A demonstration (www.egso.org/demo) is provided to get an idea about how the system will function once the project is completed. The demonstration focuses on retrieving data needed to determine the energy released in the solar atmosphere during the impulsive phase of flares. It allows finding simultaneous observations in the visible, UV, Soft X-rays, hard X-rays, gamma-rays, and radio. 
The types of observations that can be specified are images at high spatial and temporal resolution, as well as integrated emission and spectra from a currently limited set of instruments, including the NASA spacecraft TRACE, SOHO and RHESSI, and the ground-based observatories Phoenix-2 in Switzerland and the Meudon Observatory in France.
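To make the event-based search concrete, here is a purely illustrative sketch of such a query (all field names are hypothetical; EGSO's actual query interface is not reproduced here):

    from dataclasses import dataclass, field

    @dataclass
    class EventQuery:
        """Event-based query: simultaneous observations of one solar event."""
        event_type: str                 # e.g. "flare"
        start: str                      # ISO 8601 time window
        end: str
        wavebands: list = field(default_factory=list)

    query = EventQuery(
        event_type="flare",
        start="2002-07-23T00:00:00",
        end="2002-07-23T06:00:00",
        wavebands=["UV", "soft X-ray", "hard X-ray", "radio"],
    )
    # A broker would match this against provider metadata and return
    # observations from all wavebands overlapping the event interval.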
The ATLAS Tier-3 in Geneva and the Trigger Development Facility
NASA Astrophysics Data System (ADS)
Gadomski, S.; Meunier, Y.; Pasche, P.; Baud, J.-P.; ATLAS Collaboration
2011-12-01
The ATLAS Tier-3 farm at the University of Geneva provides storage and processing power for analysis of ATLAS data. In addition, the facility is used for development, validation and commissioning of the High Level Trigger of ATLAS [1]. The latter purpose leads to additional requirements on the availability of the latest software and data, which will be presented. The farm is also a part of the WLCG [2], and is available to all members of the ATLAS Virtual Organization. The farm currently provides 268 CPU cores and 177 TB of storage space. A grid Storage Element, implemented with the Disk Pool Manager software [3], is available and integrated with the ATLAS Distributed Data Management system [4]. The batch system can be used directly by local users, or with a grid interface provided by the NorduGrid ARC middleware [5]. In this article we will present the use cases that we support, as well as our experience with the software and the hardware we are using. Results of I/O benchmarking tests, which were done for our DPM Storage Element and for the NFS servers we are using, will also be presented.
NASA Cloud-Based Climate Data Services
NASA Astrophysics Data System (ADS)
McInerney, M. A.; Schnase, J. L.; Duffy, D. Q.; Tamkin, G. S.; Strong, S.; Ripley, W. D., III; Thompson, J. H.; Gill, R.; Jasen, J. E.; Samowich, B.; Pobre, Z.; Salmon, E. M.; Rumney, G.; Schardt, T. D.
2012-12-01
Cloud-based scientific data services are becoming an important part of NASA's mission. Our technological response is built around the concept of specialized virtual climate data servers, repetitive cloud provisioning, image-based deployment and distribution, and virtualization-as-a-service (VaaS). A virtual climate data server (vCDS) is an Open Archive Information System (OAIS) compliant, iRODS-based data server designed to support a particular type of scientific data collection. iRODS is data grid middleware that provides policy-based control over collection-building, managing, querying, accessing, and preserving large scientific data sets. We have deployed vCDS Version 1.0 in the Amazon EC2 cloud using S3 object storage and are using the system to deliver a subset of NASA's Intergovernmental Panel on Climate Change (IPCC) data products to the latest CentOS federated version of Earth System Grid Federation (ESGF), which is also running in the Amazon cloud. vCDS-managed objects are exposed to ESGF through FUSE (Filesystem in User Space), which presents a POSIX-compliant filesystem abstraction to applications such as the ESGF server that require such an interface. A vCDS manages data as a distinguished collection for a person, project, lab, or other logical unit. A vCDS can manage a collection across multiple storage resources using rules and microservices to enforce collection policies. And a vCDS can federate with other vCDSs to manage multiple collections over multiple resources, thereby creating what can be thought of as an ecosystem of managed collections. With the vCDS approach, we are trying to enable the full information lifecycle management of scientific data collections and make tractable the task of providing diverse climate data services. In this presentation, we describe our approach, experiences, lessons learned, and plans for the future. [Figures: (A) vCDS/ESG system stack; (B) conceptual architecture for NASA cloud-based data services.]
GeoMapApp, Virtual Ocean, and other Free Data Resources for the 21st Century Classroom
NASA Astrophysics Data System (ADS)
Goodwillie, A. M.; Ryan, W.; Carbotte, S.; Melkonian, A.; Coplan, J.; Arko, R.; Ferrini, V.; O'Hara, S.; Leung, A.; Bonckzowski, J.
2008-12-01
With funding from the U.S. National Science Foundation, the Marine Geoscience Data System (MGDS) (http://www.marine-geo.org/) is developing GeoMapApp (http://www.geomapapp.org) - a computer application that provides wide-ranging map-based visualization and manipulation options for interdisciplinary geosciences research and education. The novelty comes from the use of this visual tool to discover and explore data, with seamless links to further discovery using traditional text-based approaches. Users can generate custom maps and grids and import their own data sets. Built-in functionality allows users to readily explore a broad suite of interactive data sets and interfaces. Examples include multi-resolution global digital models of topography, gravity, sediment thickness, and crustal ages; rock, fluid, biology and sediment sample information; research cruise underway geophysical and multibeam data; earthquake events; submersible dive photos of hydrothermal vents; geochemical analyses; DSDP/ODP core logs; seismic reflection profiles; contouring, shading, profiling of grids; and many more. On-line audio-visual tutorials lead users step-by-step through GeoMapApp functionality (http://www.geomapapp.org/tutorials/). Virtual Ocean (http://www.virtualocean.org/) integrates GeoMapApp with a 3-D earth browser based upon NASA WorldWind, providing yet more powerful capabilities. The searchable MGDS Media Bank (http://media.marine-geo.org/) supports viewing of remarkable images and video from the NSF Ridge 2000 and MARGINS programs. For users familiar with Google Earth (tm), KML files are available for viewing several MGDS data sets (http://www.marine-geo.org/education/kmls.php). Examples of accessing and manipulating a range of geoscience data sets from various NSF-funded programs will be shown. GeoMapApp, Virtual Ocean, the MGDS Media Bank and KML files are free MGDS data resources and work on any type of computer. They are currently used by educators, researchers, school teachers and the general public.
Haidar, Ali N; Zasada, Stefan J; Coveney, Peter V; Abdallah, Ali E; Beckles, Bruce; Jones, Mike A S
2011-06-06
We present applications of audited credential delegation (ACD), a usable security solution for authentication, authorization and auditing in distributed virtual physiological human (VPH) project environments that removes the use of digital certificates from end-users' experience. Current security solutions are based on public key infrastructure (PKI). While PKI offers strong security for VPH projects, it suffers from serious usability shortcomings in terms of end-user acquisition and management of credentials which deter scientists from exploiting distributed VPH environments. By contrast, ACD supports the use of local credentials. Currently, a local ACD username-password combination can be used to access grid-based resources while Shibboleth support is underway. Moreover, ACD provides seamless and secure access to shared patient data, tools and infrastructure, thus supporting the provision of personalized medicine for patients, scientists and clinicians participating in e-health projects from a local to the widest international scale.
Physically Based Virtual Surgery Planning and Simulation Tools for Personal Health Care Systems
NASA Astrophysics Data System (ADS)
Dogan, Firat; Atilgan, Yasemin
The virtual surgery planning and simulation tools have gained a great deal of importance in the last decade as a consequence of increasing capacities in information technology. Modern hardware architectures, large-scale database systems, grid-based computer networks, agile development processes, better 3D visualization and all the other strong aspects of information technology bring the necessary instruments to almost every desk. The special software and sophisticated supercomputer environments of the last decade are now serving individual needs inside “tiny smart boxes” at reasonable prices. However, resistance to learning new computerized environments, insufficient training and other old habits prevent effective utilization of IT resources by the specialists of the health sector. In this paper, all the aspects of the former and current developments in surgery planning and simulation tools are presented, and future directions and expectations are investigated for better electronic health care systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele
The Fermilab Grid and Cloud Computing Department and the KISTI Global Science experimental Data hub Center are working on a multi-year Collaborative Research and Development Agreement. Building on the knowledge developed in the first year on how to provision and manage a federation of virtual machines through Cloud management systems, in this second year we expanded the work on provisioning and federation, increasing both the scale and the diversity of solutions, and we started to build on-demand services on the established fabric, introducing the paradigm of Platform as a Service to assist with the execution of scientific workflows. We have enabled scientific workflows of stakeholders to run on multiple cloud resources at the scale of 1,000 concurrent machines. The demonstrations have been in the areas of (a) Virtual Infrastructure Automation and Provisioning, (b) Interoperability and Federation of Cloud Resources, and (c) On-demand Services for Scientific Workflows.
Advances in Modal Analysis Using a Robust and Multiscale Method
NASA Astrophysics Data System (ADS)
Picard, Cécile; Frisson, Christian; Faure, François; Drettakis, George; Kry, Paul G.
2010-12-01
This paper presents a new approach to modal synthesis for rendering sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization is performed using a sparse regular grid embedding of the object, which permits the construction of plausible lower resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system which allows us to manipulate and tune sounding objects in an appropriate way for games, training simulations, and other interactive virtual environments.
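A minimal sketch of the voxelization-and-tuning idea described above (assumed inputs: a point sampling of the object's material distribution; the names and the fill heuristic are illustrative, not the paper's implementation):

    import numpy as np

    def voxelize(points, cell, origin):
        # Map sample points of a model into a sparse regular grid.
        # Returns {(i, j, k): count}, a stand-in for the material
        # distribution in each cell.
        grid = {}
        for p in points:
            key = tuple(((p - origin) // cell).astype(int))
            grid[key] = grid.get(key, 0) + 1
        return grid

    def tune_elements(grid, full=64):
        # Scale each hexahedral element's stiffness/mass by its fill
        # ratio -- a plausible stand-in for automatic parameter tuning.
        return {key: min(count / full, 1.0) for key, count in grid.items()}

Lower-resolution approximations of the modal model then follow naturally by coarsening the cell size and re-tuning the elements.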
A Cloud-based Approach to Medical NLP
Chard, Kyle; Russell, Michael; Lussier, Yves A.; Mendonça, Eneida A; Silverstein, Jonathan C.
2011-01-01
Natural Language Processing (NLP) enables access to deep content embedded in medical texts. To date, NLP has not fulfilled its promise of enabling robust clinical encoding, clinical use, quality improvement, and research. We submit that this is in part due to poor accessibility, scalability, and flexibility of NLP systems. We describe here an approach and system which leverages cloud-based approaches such as virtual machines and Representational State Transfer (REST) to extract, process, synthesize, mine, compare/contrast, explore, and manage medical text data in a flexibly secure and scalable architecture. Available architectures in which our Smntx (pronounced as semantics) system can be deployed include: virtual machines in a HIPAA-protected hospital environment, brought up to run analysis over bulk data and destroyed in a local cloud; a commercial cloud for a large complex multi-institutional trial; and within other architectures such as caGrid, i2b2, or NHIN. PMID:22195072
Spaceflight Operations Services Grid (SOSG) Prototype Implementation and Feasibility Study
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Thigpen, William W.; Lisotta, Anthony J.; Redman, Sandra
2004-01-01
Science Operations Services Grid is focusing on building a prototype grid-based environment that incorporates existing and new spaceflight services to enable current and future NASA programs with cost savings and new and evolvable methods to conduct science in a distributed environment. The Science Operations Services Grid (SOSG) will provide a distributed environment for widely disparate organizations to conduct their systems and processes in a more efficient and cost-effective manner. These organizations include those that: 1) engage in space-based science and operations, 2) develop space-based systems and processes, and 3) conduct scientific research, bringing together disparate scientific disciplines like geology and oceanography to create new information. In addition, educational outreach will be significantly enhanced by providing schools with the same tools used by NASA and the ability to actively participate on many levels in the science generated by NASA from space and on the ground. The services range from voice, video and telemetry processing and display to data mining, high-level processing and visualization tools, all accessible from a single portal. In this environment, users would not require high-end systems or processes at their home locations to use these services. Also, the user would need to know minimal details about the applications in order to utilize the services. In addition, security at all levels is an underlying goal of the project. The Science Operations Services Grid will focus on four tools that are currently used by the ISS Payload community along with nine more that are new to the community. Under the prototype, four Grid virtual organizations (VOs) will be developed to represent four types of users: a Payload (experimenters) VO, a Flight Controllers VO, an Engineering and Science Collaborators VO and an Education and Public Outreach VO. The User-based services will be implemented to replicate the operational voice, video, telemetry and commanding systems. Once the User-based services are in place, they will be analyzed to establish the feasibility of Grid enabling. If feasible, each User-based service will be Grid-enabled. The remaining non-Grid services, if not already Web-enabled, will be so enabled. In the end, four portals will be developed, one for each VO. Each portal will contain the appropriate User-based services required for that VO to operate.
Visualization of Flows in Packed Beds of Twisted Tapes
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Braun, M. J.; Peloso, D.; Athavale, M. M.; Mullen, R. L.
2002-01-01
A videotape presentation has been assembled of the flow field in a packed bed of 48 twisted tapes, which can be simulated by very thin virtual cylinders. The indices of refraction of the oil and the Lucite twisted tapes were closely matched, and the flow was seeded with magnesium oxide particles. Planar laser light projected the flow field in two dimensions, both along and transverse to the flow axis. The flow field was three dimensional and complex to describe, yet the most prominent finding was the presence of flow threads. It appeared that axial flow spiraled along either within the confines of a virtual cylindrical boundary or within the exterior region, between the tangency points, of the virtual cylinders. Random packing and bed voids created vortices and disrupted the laminar flow but minimized the entrance effects. The flow-pressure drops in the packed bed fell below the Ergun model for porous-media flows. Single-twisted-tape results of Smithberg and Landis (1964) were used to guide the analysis. In appendix A the results of several investigators are scaled to the Ergun model. Further investigations including different geometric configurations, computational fluid dynamic (CFD) gridding, and analysis are required.
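For reference, the Ergun correlation against which the measured pressure drops were compared can be written in its standard porous-media form (epsilon: bed porosity, d_p: effective particle diameter, v: superficial velocity, mu and rho: fluid viscosity and density; this is textbook background, not a result from the paper):

    \frac{\Delta P}{L} = 150\,\frac{\mu\,(1-\varepsilon)^2}{\varepsilon^3 d_p^2}\,v
                       + 1.75\,\frac{\rho\,(1-\varepsilon)}{\varepsilon^3 d_p}\,v^2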
Image-guided laser projection for port placement in minimally invasive surgery.
Marmurek, Jonathan; Wedlake, Chris; Pardasani, Utsav; Eagleson, Roy; Peters, Terry
2006-01-01
We present an application of an augmented reality laser projection system in which procedure-specific optimal incision sites, computed from pre-operative image acquisition, are superimposed on a patient to guide port placement in minimally invasive surgery. Tests were conducted to evaluate the fidelity of computed and measured port configurations, and to validate the accuracy with which a surgical tool-tip can be placed at an identified virtual target. A high resolution volumetric image of a thorax phantom was acquired using helical computed tomography imaging. Oriented within the thorax, a phantom organ with marked targets was visualized in a virtual environment. A graphical interface enabled marking the locations of target anatomy, and calculation of a grid of potential port locations along the intercostal rib lines. Optimal configurations of port positions and tool orientations were determined by an objective measure reflecting image-based indices of surgical dexterity, hand-eye alignment, and collision detection. Intra-operative registration of the computed virtual model and the phantom anatomy was performed using an optical tracking system. Initial trials demonstrated that computed and projected port placement provided direct access to target anatomy with an accuracy of 2 mm.
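A minimal sketch of how such an objective measure might combine the named indices for one candidate port (the weights, index functions and camera axis below are illustrative assumptions, not the paper's actual image-based measures):

    import numpy as np

    def score_port(port, target, weights=(0.4, 0.4, 0.2)):
        # Combine per-port indices into one objective; higher is better.
        # port, target: 3-D positions in mm (numpy arrays).
        w_dex, w_align, w_coll = weights
        reach = np.linalg.norm(target - port)
        dexterity = 1.0 / (1.0 + reach)            # closer targets are easier
        tool_axis = (target - port) / reach
        view_axis = np.array([0.0, 0.0, -1.0])     # assumed camera axis
        alignment = float(tool_axis @ view_axis)   # hand-eye alignment index
        collision = 1.0                            # placeholder: 1 = no rib collision
        return w_dex * dexterity + w_align * alignment + w_coll * collision

    # Evaluate the grid of candidate ports along the intercostal lines and
    # keep the best-scoring configuration:
    # best = max(candidate_ports, key=lambda p: score_port(p, target))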
Research on Collaborative Technology in Distributed Virtual Reality System
NASA Astrophysics Data System (ADS)
Lei, ZhenJiang; Huang, JiJie; Li, Zhao; Wang, Lei; Cui, JiSheng; Tang, Zhi
2018-01-01
Distributed virtual reality technology applied to joint training simulation needs CSCW (Computer Supported Cooperative Work) terminal multicast technology for display and HLA (High Level Architecture) technology to ensure the temporal and spatial consistency of the simulation, in order to achieve collaborative display and collaborative computing. In this paper, the CSCW terminal multicast technology has been used to modify and extend the implementation framework of HLA. During simulation initialization, the HLA declaration and object management service interfaces are used to establish and manage the CSCW network topology, and the HLA data filtering mechanism is used to establish the corresponding Mesh tree for each federate. During the simulation run, a new thread for the RTI and the CSCW real-time multicast interaction technology is added into the RTI, so that the RTI can also use the window message mechanism to notify the application to update the display. Through many applications to submerged simulation training in substations under the operation of a large power grid, the collaborative technology used in distributed virtual reality simulation is shown to achieve a satisfactory training effect.
A source-attractor approach to network detection of radiation sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qishi; Barry, M. L.; Grieme, M.
Radiation source detection using a network of detectors is an active field of research for homeland security and defense applications. We propose the Source-attractor Radiation Detection (SRD) method to aggregate measurements from a network of detectors for radiation source detection. The SRD method models a potential radiation source as a magnet-like attractor that pulls in pre-computed virtual points from the detector locations. A detection decision is made if a sufficient level of attraction, quantified by the increase in the clustering of the shifted virtual points, is observed. Compared with traditional methods, SRD has the following advantages: i) it does not require an accurate estimate of the source location from limited and noise-corrupted sensor readings, unlike the localization-based methods, and ii) its virtual point shifting and clustering calculation involve simple arithmetic operations based on the number of detectors, avoiding the high computational complexity of grid-based likelihood estimation methods. We evaluated its detection performance using canonical datasets from the Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) tests. SRD achieves both a lower false alarm rate and a lower false negative rate compared to three existing algorithms for network source detection.
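A minimal sketch of the virtual-point shifting and clustering test described above (the pull factor, the count normalization and the spread-based clustering measure are illustrative assumptions, not the paper's exact formulation):

    import numpy as np

    def srd_decision(detectors, counts, attractor, pull=0.5, threshold=0.5):
        # detectors: (n, 2) positions; virtual points start at the detectors.
        # counts: per-detector readings; stronger readings pull harder.
        # attractor: candidate source position acting as the "magnet".
        detectors = np.asarray(detectors, dtype=float)
        w = np.asarray(counts, dtype=float)
        w = w / w.max()                               # normalize pull strength
        shifted = detectors + pull * w[:, None] * (attractor - detectors)
        # Detection when shifting increases clustering (reduces spread) enough.
        spread_before = np.mean(np.linalg.norm(detectors - detectors.mean(0), axis=1))
        spread_after = np.mean(np.linalg.norm(shifted - shifted.mean(0), axis=1))
        return (spread_before - spread_after) / spread_before > threshold

Only elementwise arithmetic over the n detectors is involved, which reflects the low computational cost the abstract contrasts with grid-based likelihood estimation.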
Negotiating governance in virtual worlds: grief play, hacktivism, and LeakOps in Second Life®
NASA Astrophysics Data System (ADS)
Bakioğlu, Burcu S.
2012-12-01
The acts of transgression in cyberspace have grown in visibility with grief play and griefing in virtual worlds. Briefly defined, griefing is the intentional harassment of other players. This paper argues that in recent years, griefing has developed from a set of trolling practices that manifests itself as offensive language and tasteless pranks into political initiatives with hacktivist undertones. Because the tactical nature of role-playing and gaming provides the anonymity and the cunningness required for hacktivistic initiatives, griefing bears the potential to take part in the transgressive politics of civil disobedience. Arguing that grief play and griefing are tactical uses of media that lead to transgressive politics, this paper will examine the role of such activities in influencing virtual politics. In order to demonstrate how this transformation has occurred, this paper will discuss the birth of vigilante organizations, specifically, that of Justice League Unlimited (JLU), and the operation conducted against them by The Wrong Hands. The said operation, whose intention was to leak JLU's secret papers, Brainiac Wiki, exposed a grid-wide surveillance operation that the vigilante group was conducting in Second Life®.
NASA Astrophysics Data System (ADS)
Nishioka, S.; Goto, I.; Miyamoto, K.; Hatayama, A.; Fukano, A.
2016-01-01
Recently, in large-scale hydrogen negative ion sources, experimental results have shown that an ion-ion plasma is formed in the vicinity of the extraction hole in the case of surface negative ion production. The purpose of this paper is to clarify the mechanism of the ion-ion plasma formation by our three-dimensional particle-in-cell simulation. In the present model, the electron loss along the magnetic filter field is taken into account by the "√(τ∥/τ⊥) model". The simulation results show that the ion-ion plasma formation is due to the electron loss along the magnetic filter field. Moreover, the potential profile for the ion-ion plasma case has been examined carefully in order to discuss the ion-ion plasma formation. Our present results show that the potential drop of the virtual cathode in front of the plasma grid is large when the ion-ion plasma is formed. This tendency is explained by a relationship between the virtual cathode depth and the net particle flux density at the virtual cathode.
Brandmeir, Nicholas; Sather, Michael
2018-02-20
One of the most effective treatments for epilepsy is resection, but it remains underutilized. Efforts must be made to increase the ease, safety, and efficacy of epilepsy resection to improve utilization. Studies have shown an improved risk profile of stereoelectroencephalography (SEEG) over subdural grids (SDG) for invasive monitoring. One limitation to increased adoption of SEEG at epilepsy centers is the theoretical difficulty of planning a delayed resection once electrodes are removed. Our objective was to develop and present a technique using readily available neuronavigation technology to guide a cortical, non-lesional epilepsy resection with co-registration of imaging during invasive monitoring to imaging in an explanted patient, allowing for virtual visualization of electrodes. An example case taking advantage of the technique described above as an adjunct for an anatomically guided resection is presented with technical details and images. Intraoperative neuronavigation was successfully used to virtually represent previously removed SEEG electrodes and accuracy could be easily verified by examining scars on the scalp, bone, dura and pia. The simple technique presented can be a useful adjunct to resection following SEEG. This may help increase the adoption of SEEG, even when resection is planned.
Architecture for the Next Generation System Management Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallard, Jerome; Lebre, I Adrien; Morin, Christine
2011-01-01
To get more results or greater accuracy, computational scientists execute their applications on distributed computing platforms such as Clusters, Grids and Clouds. These platforms differ in terms of hardware and software resources as well as locality: some span multiple sites and multiple administrative domains whereas others are limited to a single site/domain. As a consequence, in order to scale their applications up, the scientists have to manage technical details for each target platform. From our point of view, this complexity should be hidden from the scientists who, in most cases, would prefer to focus on their research rather than spending time dealing with platform configuration concerns. In this article, we advocate a system management framework that aims to automatically set up the whole run-time environment according to the applications' needs. The main difference with regard to usual approaches is that they generally focus only on the software layer whereas we address both the hardware and the software expectations through a unique system. For each application, scientists describe their requirements through the definition of a Virtual Platform (VP) and a Virtual System Environment (VSE). Relying on the VP/VSE definitions, the framework is in charge of: (i) the configuration of the physical infrastructure to satisfy the VP requirements, (ii) the setup of the VP, and (iii) the customization of the execution environment (VSE) upon the former VP. We propose a new formalism that the system can rely upon to successfully perform each of these three steps without burdening the user with the specifics of the configuration of the physical resources and system management tools. This formalism leverages Goldberg's theory for recursive virtual machines by introducing new concepts based on system virtualization (identity, partitioning, aggregation) and emulation (simple, abstraction). This enables the definition of complex VP/VSE configurations without making assumptions about the hardware and the software resources. For each requirement, the system executes the corresponding operation with the appropriate management tool. As a proof of concept, we implemented a first prototype that currently interacts with several system management tools (e.g., OSCAR, the Grid 5000 toolkit, and XtreemOS) and that can be easily extended to integrate new resource brokers or cloud systems such as Nimbus, OpenNebula or Eucalyptus.
NASA Astrophysics Data System (ADS)
Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.
2012-12-01
This study compares two grid refinement approaches using global variable resolution model and nesting for high-resolution regional climate modeling. The global variable resolution model, Model for Prediction Across Scales (MPAS), and the limited area model, Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context with a focus on the spatial and temporal characteristics of tropical precipitation simulated by the models using the same physics package from the Community Atmosphere Model (CAM4). For MPAS, simulations have been performed with a quasi-uniform resolution global domain at coarse (1 degree) and high (0.25 degree) resolution, and a variable resolution domain with a high-resolution region at 0.25 degree configured inside a coarse resolution global domain at 1 degree resolution. Similarly, WRF has been configured to run on a coarse (1 degree) and high (0.25 degree) resolution tropical channel domain as well as a nested domain with a high-resolution region at 0.25 degree nested two-way inside the coarse resolution (1 degree) tropical channel. The variable resolution or nested simulations are compared against the high-resolution simulations that serve as virtual reality. Both MPAS and WRF simulate 20-day Kelvin waves propagating through the high-resolution domains fairly unaffected by the change in resolution. In addition, both models respond to increased resolution with enhanced precipitation. Grid refinement induces zonal asymmetry in precipitation (heating), accompanied by zonal anomalous Walker like circulations and standing Rossby wave signals. However, there are important differences between the anomalous patterns in MPAS and WRF due to differences in the grid refinement approaches and sensitivity of model physics to grid resolution. This study highlights the need for "scale aware" parameterizations in variable resolution and nested regional models.
A model for the effect of submerged aquatic vegetation on turbulence induced by an oscillating grid
NASA Astrophysics Data System (ADS)
Pujol, Dolors; Colomer, Jordi; Serra, Teresa; Casamitjana, Xavier
2012-12-01
The aim of this study is to model, under controlled laboratory conditions, the effect of submerged aquatic vegetation (SAV) on turbulence generated in a water column by an oscillating grid (oscillating grid turbulence, OGT). Velocity profiles have been measured by an acoustic Doppler velocimeter (MicroADV). Experimental conditions are analysed in two canopy models (rigid and semi-rigid), using nine plant-to-plant distances (ppd), three stem diameters (d), four types of natural SAV (Cladium mariscus, Potamogeton nodosus, Myriophyllum verticillatum and Ruppia maritima) and two grid oscillation frequencies (f). To quantify this response, we have developed a non-dimensional model involving the turbulent kinetic energy (TKE), f, the stroke (s), d, ppd, the distance from the virtual origin to the measurement point (zm) and the space between grid bars (M). The experimental data show that, at zm/zc < 1, the turbulent kinetic energy decays with zm according to the well-known power law zm^-2 and does not depend on the vegetation characteristics. In contrast, at zm/zc > 1, TKE decreases faster with zm, and TKE normalized by (f·s) scales with the non-dimensional plant parameters (stem diameter and plant-to-plant distance). Therefore, at zm/zc > 1 the TKE is affected by the geometric characteristics of the plants (both diameter and plant-to-plant distance), an effect called sheltering. Results from semi-rigid canopies and natural SAV are found to scale with the non-dimensional model proposed for rigid canopies. We also discuss the practical implications for field conditions (wind and natural SAV).
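For context, the classical oscillating-grid scaling behind the zm^-2 decay cited above is the Hopfinger-Toly relation (textbook background for unvegetated OGT, not the paper's fitted model):

    u_{\mathrm{rms}} \propto f\, s^{3/2} M^{1/2}\, z_m^{-1}
    \quad\Longrightarrow\quad
    \mathrm{TKE} \propto u_{\mathrm{rms}}^2 \propto f^2 s^3 M\, z_m^{-2}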
The NASA-GES-DISC Satellite Data/Products Access, Distribution, Services and Dissemination to Users
NASA Technical Reports Server (NTRS)
Vicente, Gilberto A.
2005-01-01
The NASA/GES/DISC/DAAC is a virtual data portal that provides convenient access to Atmospheric, Oceanic and Land datasets and value-added products from various current NASA missions and instruments as well as heritage datasets from AIRS/AMSU/HSB, AVHRR, CZCS, LIMS, MODIS, MSU, OCTS, SeaWiFS, SORCE, SSI, TOMS, TOVS, UARS and TRMM. The GES-DISC-DAAC also provides a variety of services that allow users to analyze and visualize gridded data interactively online without having to download any data.
Synthetic perspective optical flow: Influence on pilot control tasks
NASA Technical Reports Server (NTRS)
Bennett, C. Thomas; Johnson, Walter W.; Perrone, John A.; Phatak, Anil V.
1989-01-01
One approach used to better understand the impact of visual flow on control tasks has been to use synthetic perspective flow patterns. Such patterns are the result of apparent motion across a grid or random dot display. Unfortunately, the optical flow so generated is based on a subset of the flow information that exists in the real world. The danger is that the resulting optical motions may not generate the visual flow patterns useful for actual flight control. Researchers conducted a series of studies directed at understanding the characteristics of synthetic perspective flow that support various pilot tasks. In the first of these, they examined the control of altitude over various perspective grid textures (Johnson et al., 1987). Another set of studies was directed at studying the head tracking of targets moving in a 3-D coordinate system. These studies, parametric in nature, utilized both impoverished and complex virtual worlds represented by simple perspective grids at one extreme, and computer-generated terrain at the other. These studies are part of an applied visual research program directed at understanding the design principles required for the development of instruments displaying spatial orientation information. The experiments also highlight the need for modeling the impact of spatial displays on pilot control tasks.
NASA Astrophysics Data System (ADS)
Bykova, L. E.; Razdymakhina, O. N.
2011-07-01
In this paper the results of investigations of the dynamics of near-Earth asteroids (NEAs) in the vicinity of the 1/2 resonance with Earth are presented. For each of these asteroids the evolution of the probability domains of motion is investigated over several thousand years. The investigations were conducted on the SKIF Cyberia cluster using 128-bit arithmetic. The performance of motion prediction for many real and virtual asteroids on a multiple-processor computing system with extended-precision arithmetic is demonstrated. The performance estimate is made by comparison with a solution on a personal computer using 80-bit arithmetic.
Zhang, Qinjin; Liu, Yancheng; Zhao, Youtao; Wang, Ning
2016-03-01
Multi-mode operation and transient stability are two problems that significantly affect the flexible microgrid (MG). This paper proposes a multi-mode operation control strategy for a flexible MG based on a three-layer hierarchical structure. The proposed structure is composed of autonomous, cooperative, and scheduling controllers. The autonomous controller regulates the performance of the single micro-source inverter. An adaptive sliding-mode direct voltage loop and an improved droop power loop based on virtual negative impedance are presented, respectively, to enhance the system's disturbance-rejection performance and the power sharing accuracy. The cooperative controller, which is composed of secondary voltage/frequency control and phase synchronization control, is designed to eliminate the voltage/frequency deviations produced by the autonomous controller and to prepare for grid connection. The scheduling controller manages the power flow between the MG and the grid. The MG with the improved hierarchical control scheme can achieve seamless transitions from islanded to grid-connected mode and has good transient performance. The presented work also addresses power quality issues and improves the load power sharing accuracy between parallel VSIs. Finally, the transient performance and effectiveness of the proposed control scheme are evaluated by theoretical analysis and simulation results.
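A minimal sketch of the conventional droop power loop underlying such autonomous controllers (textbook P-f/Q-V droop with illustrative gains; the paper's improved loop adds virtual negative impedance on top of this):

    def droop_setpoints(P, Q, f0=50.0, V0=311.0, kp=1e-5, kq=1e-4):
        # Conventional P-f / Q-V droop: each inverter derates its frequency
        # and voltage references with measured active/reactive power so that
        # parallel units share load without communication.
        # P in W, Q in var; returns (frequency in Hz, voltage amplitude in V).
        f_ref = f0 - kp * P     # active power lowers the frequency reference
        v_ref = V0 - kq * Q     # reactive power lowers the voltage reference
        return f_ref, v_ref

    # e.g. an inverter loaded with 10 kW / 2 kvar:
    # f_ref, v_ref = droop_setpoints(10_000, 2_000)   # -> (49.9 Hz, 310.8 V)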
Moser, Richard P.; Hesse, Bradford W.; Shaikh, Abdul R.; Courtney, Paul; Morgan, Glen; Augustson, Erik; Kobrin, Sarah; Levin, Kerry; Helba, Cynthia; Garner, David; Dunn, Marsha; Coa, Kisha
2011-01-01
Scientists are taking advantage of the Internet and collaborative web technology to accelerate discovery in a massively connected, participative environment — a phenomenon referred to by some as Science 2.0. As a new way of doing science, this phenomenon has the potential to push science forward in a more efficient manner than was previously possible. The Grid-Enabled Measures (GEM) database has been conceptualized as an instantiation of Science 2.0 principles by the National Cancer Institute with two overarching goals: (1) promote the use of standardized measures, which are tied to theoretically based constructs; and (2) facilitate the ability to share harmonized data resulting from the use of standardized measures. This is done by creating an online venue connected to the Cancer Biomedical Informatics Grid (caBIG®) where a virtual community of researchers can collaborate and come to consensus on measures by rating, commenting on and viewing metadata about the measures and associated constructs. This paper will describe the Web 2.0 principles on which the GEM database is based, describe its functionality, and discuss some of the important issues involved with creating the GEM database, such as the role of mutually agreed-on ontologies (i.e., knowledge categories and the relationships among them) for data sharing. PMID:21521586
Press Meeting 20 January 2003: First Light for Europe's Virtual Observatory
NASA Astrophysics Data System (ADS)
2002-12-01
Imagine you are an astronomer with instant, fingertip access to all existing observations of a given object and the opportunity to sift through them at will. In just a few moments, you can have all kinds of information about objects from catalogues all over the world, including observations taken at different times. Over the next two years this scenario will become reality as Europe's Astrophysical Virtual Observatory (AVO) develops. Established only a year ago (cf. ESO PR 26/01), the AVO already offers astronomers a unique, prototype research tool that will lead the way to many outstanding new discoveries. Journalists are invited to a live demonstration of the capabilities of this exciting new initiative in astronomy. The demonstration will take place at the Jodrell Bank Observatory in Manchester, in the United Kingdom, on 20 January 2003, starting at 11:00. Sophisticated AVO tools will help scientists find the most distant supernovae - objects that reveal the cosmological makeup of our Universe. The tools are also helping astronomers measure the rate of birth of stars in extremely red and distant galaxies. Journalists will also have the opportunity to discuss the project with leading astronomers from across Europe. The new AVO website has been launched today, explaining the progress being made in this European Commission-funded project: URL: http://www.euro-vo.org/ To register your intention to attend the AVO First Light Demonstration, please provide your name and affiliation by January 13, 2003, to: Ian Morison, Jodrell Bank Observatory (full contact details below). Information on getting to the event is included on the webpage above. Programme for the AVO First Light Demonstration:
11:00 Welcome, Phil Diamond (University of Manchester/Jodrell Bank Observatory)
11:05 Short introduction to Virtual Observatories, Piero Benvenuti (ESA/ST-ECF)
11:15 Q&A
11:20 Short introduction to the Astrophysical Virtual Observatory, Peter Quinn (ESO)
11:30 Q&A
11:35 Screening of Video News Release
11:40 Demonstration of the AVO prototype, Nicholas Walton (University of Cambridge)
12:00 Q&A, including interview possibilities with the scientists
12:30-13:45 Buffet lunch, including individual hands-on demos
14:00 Science Demo (also open to interested journalists)
For more information about Virtual Observatories and the AVO, see the website or the explanation below. Notes to editors: The AVO involves several partner organisations led by the European Southern Observatory (ESO). The other partner organisations are the European Space Agency (ESA), AstroGrid (funded by PPARC as part of the UK's E-Science programme), the CNRS-supported Centre de Données Astronomiques de Strasbourg (CDS), the University Louis Pasteur in Strasbourg, France, the CNRS-supported TERAPIX astronomical data centre at the Institut d'Astrophysique in Paris, France, and the Jodrell Bank Observatory of the Victoria University of Manchester, United Kingdom. Note [1]: This is a joint Press Release issued by the European Southern Observatory (ESO), the Hubble European Space Agency Information Centre, AstroGrid, CDS, TERAPIX/CNRS and the University of Manchester. Science Contacts:
Peter J. Quinn, European Southern Observatory (ESO), Garching, Germany, Tel: +49-89-3200-6509, email: pjq@eso.org
Phil Diamond, University of Manchester/Jodrell Bank Observatory, United Kingdom, Tel: +44-147-757-26-25 (0147 in the United Kingdom), email: pdiamond@jb.man.ac.uk
Press contacts:
Ian Morison, University of Manchester/Jodrell Bank Observatory, United Kingdom, Tel: +44-147-757-26-10 (0147 in the United Kingdom), email: im@jb.man.ac.uk
Lars Lindberg Christensen, Hubble European Space Agency Information Centre, Garching, Germany, Tel: +49-89-3200-6306 (089 in Germany), Cellular (24 hr): +49-173-3872-621 (0173 in Germany), email: lars@eso.org
Richard West, ESO EPR Dept., Garching, Germany, Phone: +49-89-3200-6276, email: rwest@eso.org
Background information: What is a Virtual Observatory? - A short introduction. The Virtual Observatory is an international astronomical community-based initiative. It aims to allow global electronic access to the available astronomical data archives of space and ground-based observatories and sky survey databases. It also aims to enable data analysis techniques through a coordinating entity that will provide common standards, wide-network bandwidth, and state-of-the-art analysis tools. It is now possible to build powerful and expensive new observing facilities at wavelengths from the radio to the X-ray and gamma-ray regions. Together with advanced instrumentation techniques, a vast new array of astronomical data sets will soon be forthcoming at all wavelengths. These very large databases must be archived and made accessible in a systematic and uniform manner to realise the full potential of the new observing facilities. The Virtual Observatory aims to provide the framework for global access to the various data archives by facilitating the standardisation of archiving and data-mining protocols. The AVO will also take advantage of state-of-the-art advances in data-handling software in astronomy and in other fields. The Virtual Observatory initiative is currently aiming at a global collaboration of the astronomical communities in Europe, North and South America, Asia, and Australia under the auspices of the recently formed International Virtual Observatory Alliance. The Astrophysical Virtual Observatory - An Introduction. The breathtaking capabilities and ultrahigh efficiency of new ground and space observatories have led to a 'data explosion', calling for innovative ways to process, explore, and exploit these data. Researchers must now turn to the GRID paradigm of distributed computing and resources to solve complex, front-line research problems. To implement this new IT paradigm, existing astronomical data centres and archives must be joined into a single interoperating unit. This new astronomical data resource will form a Virtual Observatory (VO) so that astronomers can explore the digital Universe in the new archives across the entire spectrum. Just as a real observatory consists of telescopes, each with a collection of unique astronomical instruments, the VO consists of a collection of data centres, each with unique collections of astronomical data, software systems, and processing capabilities. The Astrophysical Virtual Observatory Project (AVO) will conduct a research and demonstration programme on the scientific requirements and technologies necessary to build a VO for European astronomy.
The AVO has been jointly funded by the European Commission (under FP5 - Fifth Framework Programme) with six European organisations participating in a three-year Phase-A work programme, valued at 5 million Euro. The partner organisations are the European Southern Observatory (ESO) in Munich, Germany, the European Space Agency (ESA), AstroGrid (funded by PPARC as part of the UK's E-Science programme), the CNRS-supported Centre de Données Astronomiques de Strasbourg (CDS), the University Louis Pasteur in Strasbourg, France, the CNRS-supported TERAPIX astronomical data centre at the Institut d'Astrophysique in Paris, France, and the Jodrell Bank Observatory of the Victoria University of Manchester, United Kingdom. The Phase A programme will focus its efforts on the following areas:
* A detailed description of the science requirements for the AVO will be constructed, following the experience gained in a smaller-scale science demonstration program called ASTROVIRTEL (Accessing Astronomical Archives as Virtual Telescopes).
* The difficult issue of data and archive interoperability will be addressed by new standards definitions for astronomical data and trial programmes of "joins" between specific target archives within the project team.
* The necessary GRID and database technologies will be assessed and tested for use within a full AVO implementation.
The AVO project is currently working in conjunction with other international VO efforts in the United States and the Asia-Pacific region. This is part of an International Virtual Observatory Alliance effort to define essential new data standards so that the VO concept can have a global dimension. The AVO partners will join with all astronomical data centres in Europe to put forward an FP6 IST (Sixth Framework Programme - Information Society Technologies Programme) Integrated Project proposal to make a European VO fully operational by the end of 2007.
Creating the virtual Eiger North Face
NASA Astrophysics Data System (ADS)
Buchroithner, Manfred
The described activities aim at combining the potentials of photogrammetry, remote sensing, digital cartography and virtual reality/photorealism with the needs of modern spatial information systems for tourism and for alpinism in particular (the latter aspect is, however, not covered in the paper). Since, for slopes steeper than 45°, a digital relief model in nadir projection cannot adequately depict the terrain even in low-angle views, digital Steep Slope Models (SSMs) with a rather vertical reference plane are desirable. This condition very much applies to the Eiger North Face, which has been chosen as a testbed for the realisation of a virtual rock face and which shall later be embedded into a lower-resolution synthetic landscape of the Eiger-Moench-Jungfrau Region generated from a DTM and satellite imagery. Our "SSM approach" seems justified by the fact that, except for the visualisation, commercial software was used, which is very limited in both DTM modelling and texture mapping. For the creation of the actual SSM, a pair of oblique coloured air photos has been used, resulting in both a digital face model of 3.7 m grid size and an orthophoto with a resolution of 0.25 m. To demonstrate the alpinistic potential of the product, climbing routes have been inserted into the face model, thus enabling even non-experienced individuals to enjoy the "virtual reality conquest" of the Eiger North Face and potential climbing candidates to prepare themselves for the actual "real world" enterprise.
NASA Astrophysics Data System (ADS)
Grandi, C.; Italiano, A.; Salomoni, D.; Calabrese Melcarne, A. K.
2011-12-01
WNoDeS, an acronym for Worker Nodes on Demand Service, is software developed at CNAF-Tier1, the National Computing Centre of the Italian Institute for Nuclear Physics (INFN) located in Bologna. WNoDeS provides on demand, integrated access to both Grid and Cloud resources through virtualization technologies. Besides the traditional use of computing resources in batch mode, users need to have interactive and local access to a number of systems. WNoDeS can dynamically select these computers instantiating Virtual Machines, according to the requirements (computing, storage and network resources) of users through either the Open Cloud Computing Interface API, or through a web console. An interactive use is usually limited to activities in user space, i.e. where the machine configuration is not modified. In some other instances the activity concerns development and testing of services and thus implies the modification of the system configuration (and, therefore, root-access to the resource). The former use case is a simple extension of the WNoDeS approach, where the resource is provided in interactive mode. The latter implies saving the virtual image at the end of each user session so that it can be presented to the user at subsequent requests. This work describes how the LHC experiments at INFN-Bologna are testing and making use of these dynamically created ad-hoc machines via WNoDeS to support flexible, interactive analysis and software development at the INFN Tier-1 Computing Centre.
Dynamical Core in Atmospheric Model Does Matter in the Simulation of Arctic Climate
NASA Astrophysics Data System (ADS)
Jun, Sang-Yoon; Choi, Suk-Jin; Kim, Baek-Min
2018-03-01
Climate models using different dynamical cores can simulate significantly different winter Arctic climates even if equipped with virtually the same physics schemes. The current climate simulated by the global climate model using a cubed-sphere grid with the spectral element method (SE core) exhibited significantly warmer Arctic surface air temperature compared to that using a latitude-longitude grid with the finite volume method (FV core). Compared to the FV core, the SE core simulated additional adiabatic warming in the Arctic lower atmosphere, consistent with the eddy-forced secondary circulation. Downward longwave radiation further enhanced Arctic near-surface warming, yielding a surface air temperature higher by about 1.9 K. Furthermore, in the atmospheric response to reduced sea ice conditions with the same physical settings, only the SE core showed a robust cooling response over North America. We emphasize that special attention is needed in selecting the dynamical core of climate models in the simulation of the Arctic climate and associated teleconnection patterns.
Atlasmaker: A Grid-based Implementation of the Hyperatlas
NASA Astrophysics Data System (ADS)
Williams, R.; Djorgovski, S. G.; Feldmann, M. T.; Jacob, J.
2004-07-01
The Atlasmaker project is using Grid technology, in combination with NVO interoperability, to create new knowledge resources in astronomy. The product is a multi-faceted, multi-dimensional, scientifically trusted image atlas of the sky, made by federating many different surveys at different wavelengths, times, resolutions, polarizations, etc. The Atlasmaker software does resampling and mosaicking of image collections, and is well-suited to operate with the Hyperatlas standard. Requests can be satisfied via on-demand computations or by accessing a data cache. Computed data is stored in a distributed virtual file system, such as the Storage Resource Broker (SRB). We expect these atlases to be a new and powerful paradigm for knowledge extraction in astronomy, as well as a magnificent way to build educational resources. The system is being incorporated into the data analysis pipeline of the Palomar-Quest synoptic survey, and is being used to generate all-sky atlases from the 2MASS, SDSS, and DPOSS surveys for joint object detection.
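The elementary operation behind such mosaicking is resampling each survey image onto a shared sky grid; a minimal nearest-neighbour sketch follows (the WCS mapping function and array layout are assumptions for illustration, not Atlasmaker's actual pipeline):

    import numpy as np

    def resample_to_grid(img, wcs_to_pix, grid_ra, grid_dec):
        # Nearest-neighbour resampling of one survey image onto a shared
        # sky grid. img: 2-D source image; wcs_to_pix: function mapping
        # (ra, dec) arrays to fractional (row, col) in the source image;
        # grid_ra/grid_dec: 2-D sky coordinates of the target grid.
        rows, cols = wcs_to_pix(grid_ra, grid_dec)
        rows = np.rint(rows).astype(int)
        cols = np.rint(cols).astype(int)
        out = np.full(grid_ra.shape, np.nan)
        ok = (rows >= 0) & (rows < img.shape[0]) & (cols >= 0) & (cols < img.shape[1])
        out[ok] = img[rows[ok], cols[ok]]
        return out

    # A mosaic then combines the per-survey resampled layers, e.g. by a
    # weighted mean where the layers overlap.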
Spherical ion oscillations in a positive polarity gridded inertial-electrostatic confinement device
NASA Astrophysics Data System (ADS)
Bandara, R.; Khachan, J.
2013-07-01
A pulsed, positive polarity gridded inertial electrostatic confinement device has been investigated experimentally, using a differential emissive probe and potential traces as primary diagnostics. Large amplitude oscillations in the plasma current and plasma potential were observed within a microsecond of the discharge onset, which are indicative of coherent ion oscillations about a temporarily confined excess of recirculating electron space charge. The depth of the potential well in the established virtual cathode was determined using a differential emissive Langmuir probe, and correlated well with the potential well inferred from the ion oscillation frequency for both hydrogen and argon experiments. It was found that the timescale for ion oscillation dispersion is strongly dependent on the neutral gas density, and weakly dependent on the peak anode voltage. The cessation of the oscillations was found to be due to charge exchange processes converting ions to high velocity neutrals, causing the abrupt de-coherence of the oscillations through an avalanche dispersion in phase space.
V-FOR-WaTer - a new virtual research environment for environmental research
NASA Astrophysics Data System (ADS)
Strobl, Marcus; Azmi, Elnaz; Hassler, Sibylle; Mälicke, Mirko; Meyer, Jörg; Zehe, Erwin
2017-04-01
The preparation of heterogeneous datasets for scientific analysis is still a demanding task. Data preprocessing for hydrological models typically involves gathering datasets from different sources, extensive work within geoinformation systems, data transformation, the generation of computational grids and the definition of initial and boundary conditions. V-FOR-WaTer, a standardized and scalable data hub with compatible analysis tools, will ease comprehensive studies and significantly reduce data preparation time. The idea behind V-FOR-WaTer is to bring together various datasets (e.g. point measurements, 2D/3D data, time series data) from different sources (e.g. gathered in research projects, or as part of regular monitoring of state offices) and to provide common as well as innovative scaling tools in space and time to generate a coherent data grid. Each dataset holds detailed standardized metadata to ensure usability of the data, offer a comprehensive search function and provide reference information for appropriate citation of the dataset creators. V-FOR-WaTer includes a basis of data and tools, but its purpose is to grow by users who extend the virtual research environment with their own tools and research data. Researchers who upload new data or tools can receive a digital object identifier, or protect their data and tools from others until publication. Access to data and tools provided from V-FOR-WaTer happens via an easy-to-use web portal. Due to its modular architecture the portal is ready to be extended with new tools and features and also offers interfaces to Matlab, Python and R.
The ALICE Software Release Validation cluster
NASA Astrophysics Data System (ADS)
Berzano, D.; Krzewicki, M.
2015-12-01
One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, and all of them must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service: in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample “golden” dataset, is also necessary for the quality sign-off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, and with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how the Release Validation Cluster deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, permits booting any snapshot of the operating system in time: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future.
Analysis Methodology for Balancing Authority Cooperation in High Penetration of Variable Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Etingov, Pavel V.; Zhou, Ning
2010-02-01
With the rapidly growing penetration level of wind and solar generation, the challenges of managing the variability and uncertainty of intermittent renewable generation become more and more significant. The problem of power variability and uncertainty is exacerbated when each balancing authority (BA) works locally and separately to balance its own subsystem. The virtual BA concept encompasses various forms of collaboration between individual BAs to manage power variability and uncertainty. The virtual BA will have a wide-area control capability for managing its operational balancing requirements in different time frames. This coordination improves the efficiency and reliability of power system operation while facilitating the high-level integration of green, intermittent energy resources. Several strategies for virtual BA implementation, such as ACE diversity interchange (ADI), a wind-only BA, BA consolidation, dynamic scheduling, regulation and load-following sharing, and extreme-event impact studies, are discussed in this report. The objective of such strategies is to allow individual BAs within a large power grid to help each other deal with power variability. Innovative methods have been developed to simulate the balancing operation of BAs. These methods evaluate BA operation through a number of metrics, such as capacity, ramp rate, ramp duration, energy and cycling requirements, to compare the performance of different virtual BA strategies. The report builds a systematic framework for evaluating BA consolidation and coordination. Results for case studies show that significant economic and reliability benefits can be gained. The merits and limitations of each virtual BA strategy are investigated. The report provides guidelines for the power industry to evaluate coordination or consolidation methods. The application of the developed strategies in cooperation with several regional BAs is in progress in several follow-on projects.
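As an illustration of the balancing metrics named above (capacity, ramp rate, energy), the following minimal Python sketch computes them from a time series of net imbalance. The variable names and the exact metric definitions are assumptions for illustration, not the report's actual formulations.

    import numpy as np

    def balancing_metrics(imbalance_mw, dt_min=1.0):
        """Summarize balancing requirements from an imbalance series (MW).

        Capacity: max up/down regulation needed; ramp rate: max MW/min
        change between samples; energy: total absolute energy moved (MWh).
        These mirror the metric categories named in the abstract; the
        report's precise definitions may differ.
        """
        x = np.asarray(imbalance_mw, dtype=float)
        ramps = np.diff(x) / dt_min                # MW per minute
        return {
            "capacity_up_mw": x.max(),
            "capacity_down_mw": x.min(),
            "max_ramp_up_mw_per_min": ramps.max(),
            "max_ramp_down_mw_per_min": ramps.min(),
            "energy_mwh": np.abs(x).sum() * dt_min / 60.0,
        }

    # Example: a BA operating alone vs. two BAs consolidated; with
    # uncorrelated imbalances, the consolidated capacity need is
    # typically smaller than the sum of the individual ones.
    ba1 = np.random.normal(0, 50, 1440)   # synthetic 1-day, 1-min series
    ba2 = np.random.normal(0, 50, 1440)
    print(balancing_metrics(ba1)["capacity_up_mw"],
          balancing_metrics(ba1 + ba2)["capacity_up_mw"])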
NASA Astrophysics Data System (ADS)
Lino, A. C. L.; Dal Fabbro, I. M.
2008-04-01
The conception of a three-dimensional digital model of solid figures and plant organs started from the topographic survey of virtual surfaces [1], followed by the topographic survey of solid figures [2], fruit surface survey [3] and finally the generation of a 3D digital model [4] as presented by [1]. In this research work, i.e. step [4], the tested objects included cylinders, cubes, spheres and fruits. A Ronchi grid named G1 was generated in a PC, from which other grids, referred to as G2, G3 and G4, were set out of phase by 1/4, 1/2 and 3/4 of a period from G1. Grid G1 was then projected onto the sample surface; the projected grid was named Gd. The difference between Gd and G1, followed by filtration, generated the moiré fringes M1; in the same way, the fringes M2, M3 and M4 were obtained from Gd. The fringes are out of phase from each other by 1/4 of a period, and were processed by the Rising Sun Moiré software to produce the packed (wrapped) phase and, further on, the unpacked fringes. The tested object was placed on a goniometer and rotated to survey the topography of four surfaces. These four surveyed surfaces were assembled by means of SCILAB software, yielding a three-column matrix corresponding to the object coordinates xi, with elevation values and coordinates corrected as well. The work includes conclusions on the reliability of the proposed method as well as on the simplicity and low cost of the setup.
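The phase-extraction step described above follows the standard four-step phase-shifting technique. A minimal numpy sketch, assuming the four moiré images M1-M4 are available as arrays and standing in for the Rising Sun Moiré software (whose internals the abstract does not detail), is:

    import numpy as np

    def wrapped_phase(m1, m2, m3, m4):
        # Standard four-step phase-shifting formula for fringe patterns
        # successively out of phase by 1/4 period (pi/2 each):
        #   phi = arctan((I4 - I2) / (I1 - I3)),
        # evaluated with atan2 so the result covers the full (-pi, pi]
        # range (the "packed", i.e. wrapped, phase).
        return np.arctan2(m4 - m2, m1 - m3)

    # Unwrapping gives the continuous phase, which is proportional to
    # surface elevation for a projected grid of known period.
    images = [np.random.rand(480, 640) for _ in range(4)]  # stand-in data
    phi = np.unwrap(wrapped_phase(*images), axis=1)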
Real Time Monitor of Grid job executions
NASA Astrophysics Data System (ADS)
Colling, D. J.; Martyniak, J.; McGough, A. S.; Křenek, A.; Sitera, J.; Mulač, M.; Dvořák, F.
2010-04-01
In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. This is arguably the most popular dissemination tool within the EGEE [1] Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008. The RTM gathers information from EGEE sites hosting Logging and Bookkeeping (LB) services. Information is cached locally at a dedicated server at Imperial College London and made available for clients to use in near real time. The system consists of three main components: the RTM server, the enquirer and an Apache web server which is queried by clients. The RTM server queries the LB servers at fixed time intervals, collecting job-related information and storing it in a local database. Job-related data includes not only job state (i.e. Scheduled, Waiting, Running or Done) along with timing information, but also other attributes such as the Virtual Organization and the Computing Element (CE) queue, if known. The job data stored in the RTM database is read by the enquirer every minute and converted to an XML format which is stored on a web server. This decouples the RTM server database from the clients, removing the bottleneck caused by many clients simultaneously accessing the database. This information can be visualized through either a 2D or 3D Java-based client, with live job data either being overlaid on a two-dimensional map of the world or rendered in three dimensions over a globe using OpenGL.
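A minimal sketch of the enquirer step described above, assuming a hypothetical job table and column names (the abstract does not give the RTM's actual schema):

    import sqlite3
    import xml.etree.ElementTree as ET

    def dump_jobs_to_xml(db_path="rtm.db", out_path="jobs.xml"):
        """Read the job table once a minute and publish a static XML
        snapshot for web clients, decoupling them from the database.
        Table and column names are illustrative only."""
        con = sqlite3.connect(db_path)
        root = ET.Element("jobs")
        for jid, state, vo, ce in con.execute(
                "SELECT id, state, vo, ce_queue FROM jobs"):
            ET.SubElement(root, "job", id=str(jid), state=state,
                          vo=vo or "", ce=ce or "")
        con.close()
        ET.ElementTree(root).write(out_path, encoding="utf-8",
                                   xml_declaration=True)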
Exact Integrations of Polynomials and Symmetric Quadrature Formulas over Arbitrary Polyhedral Grids
NASA Technical Reports Server (NTRS)
Liu, Yen; Vinokur, Marcel
1997-01-01
This paper is concerned with two important elements in the high-order accurate spatial discretization of finite volume equations over arbitrary grids. One element is the integration of basis functions over arbitrary domains, which is used in expressing various spatial integrals in terms of discrete unknowns. The other consists of quadrature approximations to those integrals. Only polynomial basis functions applied to polyhedral and polygonal grids are treated here. Non-triangular polygonal faces are subdivided into a union of planar triangular facets, and the resulting triangulated polyhedron is subdivided into a union of tetrahedra. The straight line segment, triangle, and tetrahedron are thus the fundamental shapes that are the building blocks for all integrations and quadrature approximations. Integrals of products up to the fifth order are derived in a unified manner for the three fundamental shapes in terms of the position vectors of vertices. Results are given both in terms of tensor products and products of Cartesian coordinates. The exact polynomial integrals are used to obtain symmetric quadrature approximations of any degree of precision up to five for arbitrary integrals over the three fundamental domains. Using a coordinate-free formulation, simple and rational procedures are developed to derive virtually all quadrature formulas, including some previously unpublished. Four symmetry groups of quadrature points are introduced to derive Gauss formulas, while their limiting forms are used to derive Lobatto formulas. Representative Gauss and Lobatto formulas are tabulated. The relative efficiency of their application to polyhedral and polygonal grids is detailed. The extension to higher degrees of precision is discussed.
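One classical identity underlying such vertex-based integral formulas is the exact integral of a product of barycentric (volume) coordinates over a simplex; for a tetrahedron of volume V,

    \int_{T} \lambda_1^{a}\,\lambda_2^{b}\,\lambda_3^{c}\,\lambda_4^{d}\, dV
      \;=\; 6V\,\frac{a!\,b!\,c!\,d!}{(a+b+c+d+3)!},

with the analogous triangle result \int \lambda_1^{a}\lambda_2^{b}\lambda_3^{c}\, dA = 2A\, a!\,b!\,c!/(a+b+c+2)!. Cartesian monomials reduce to such products once expressed in terms of the vertex position vectors. This identity is standard; the paper's own unified derivations of products up to fifth order are not reproduced here.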
The International Symposium on Grids and Clouds
NASA Astrophysics Data System (ADS)
The International Symposium on Grids and Clouds (ISGC) 2012 will be held at Academia Sinica in Taipei from 26 February to 2 March 2012, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). 2012 is the tenth anniversary of the ISGC, which over the last decade has tracked the convergence, collaboration and innovation of individual researchers across the Asia Pacific region into a coherent community. With the continuous support and dedication of the delegates, ISGC has provided the primary international distributed computing platform where distinguished researchers and collaboration partners from around the world share their knowledge and experiences. The last decade has seen the wide-scale emergence of e-Infrastructure as a critical asset for the modern e-Scientist. The emergence of large-scale research infrastructures and instruments has produced a torrent of electronic data that is forcing a generational change in the scientific process and the mechanisms used to analyse the resulting data deluge. No longer can the processing of these vast amounts of data and the production of relevant scientific results be undertaken by a single scientist. Virtual Research Communities that span organisations around the world, through an integrated digital infrastructure that connects the trust and administrative domains of multiple resource providers, have become critical in supporting these analyses. Topics covered in ISGC 2012 include: High Energy Physics, Biomedicine & Life Sciences, Earth Science, Environmental Changes and Natural Disaster Mitigation, Humanities & Social Sciences, Operations & Management, Middleware & Interoperability, Security and Networking, Infrastructure Clouds & Virtualisation, Business Models & Sustainability, Data Management, Distributed Volunteer & Desktop Grid Computing, High Throughput Computing, and High Performance, Manycore & GPU Computing.
Building a Virtual Solar Observatory: I Look Around and There's a Petabyte Following Me
NASA Technical Reports Server (NTRS)
Gurman, J. B.; Bogart, R.; Hill, F.; Martens, P.; Oergerle, William (Technical Monitor)
2002-01-01
The 2001 July NASA Senior Review of Sun-Earth Connections missions and data centers directed the Solar Data Analysis Center (SDAC) to proceed in studying and implementing a Virtual Solar Observatory (VSO) to ease the identification of and access to distributed archives of solar data. Any such design (cf. the National Virtual Observatory and NASA's Planetary Data System) consists of three elements: the distributed archives, a "broker" facility that translates metadata from all partner archives into a single standard for searches, and a user interface to allow searching, browsing, and download of data. Three groups are now engaged in a six-month study that will produce a candidate design and implementation roadmap for the VSO. We hope to proceed with the construction of a prototype VSO in US fiscal year 2003, with fuller deployment dependent on community reaction to and use of the capability. We therefore invite the broadest possible public comment and involvement, and invite interested parties to a "birds of a feather" session at this meeting. The VSO is partnered with the European Grid of Solar Observations (EGSO), and if successful, we hope to offer the VSO as the basis for the solar component of a Living With a Star data system.
Comparative dynamic analysis of the full Grossman model.
Ried, W
1998-08-01
The paper applies the method of comparative dynamic analysis to the full Grossman model. For a particular class of solutions, it derives the equations implicitly defining the complete trajectories of the endogenous variables. Relying on the concept of Frisch decision functions, the impact of any parametric change on an endogenous variable can be decomposed into a direct and an indirect effect. The focus of the paper is on marginal changes in the rate of health capital depreciation. It also analyses the impact of either initial financial wealth or the initial stock of health capital. While the direction of most effects remains ambiguous in the full model, the assumption of a zero consumption benefit of health is sufficient to obtain a definite sign for any direct or indirect effect.
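In Frisch-decomposition terms, the direct/indirect split mentioned above takes the following generic form (a sketch, with theta a parameter such as the depreciation rate, x an endogenous variable, and lambda the marginal utility of wealth held fixed in the Frisch decision function; the paper's own notation may differ):

    \frac{dx}{d\theta}
      \;=\;
      \underbrace{\left.\frac{\partial x}{\partial \theta}\right|_{\lambda}}_{\text{direct effect}}
      \;+\;
      \underbrace{\frac{\partial x}{\partial \lambda}\,\frac{d\lambda}{d\theta}}_{\text{indirect effect}}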
The 1973 Nobel Prize for Physiology or Medicine: recognition for behavioral science?
Dewsbury, Donald A
2003-09-01
The Nobel Prize for Physiology or Medicine for 1973 was awarded to 3 ethologists: Karl von Frisch, Konrad Lorenz, and Nikolaas Tinbergen. This was a landmark event in the history of the field of ethology and potentially for the behavioral sciences more broadly. For the first time, the prize was awarded for research of a purely behavioral nature. The language used in making the award emphasized the implications of ethological work for human health and appeared to suggest that more such awards might be forthcoming; few were. The author provides an overview of the 3 men, their work, the events surrounding the award, the controversy that arose, and the significance of the award as viewed in contemporary perspective.
Framework Resources Multiply Computing Power
NASA Technical Reports Server (NTRS)
2010-01-01
As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.
Generalized Sheet Transition Condition FDTD Simulation of Metasurface
NASA Astrophysics Data System (ADS)
Vahabzadeh, Yousef; Chamanara, Nima; Caloz, Christophe
2018-01-01
We propose an FDTD scheme based on Generalized Sheet Transition Conditions (GSTCs) for the simulation of polychromatic, nonlinear and space-time varying metasurfaces. This scheme consists in placing the metasurface at a virtual nodal plane introduced between regular nodes of the staggered Yee grid and inserting the fields determined by the GSTCs at this plane into the standard FDTD algorithm. The resulting update equations are an elegant generalization of the standard FDTD equations. Indeed, in the limiting case of a null surface susceptibility ($\chi_\text{surf}=0$) they reduce to the latter, while in the limiting case of a time-invariant metasurface ($\chi_\text{surf}$ constant in time) they take a correspondingly simplified form.
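For orientation, the staggered Yee update that the GSTC scheme generalizes looks as follows in a minimal 1D vacuum case (normalized units; the metasurface itself would enter as a virtual node between adjacent E and H nodes, which this sketch does not implement):

    import numpy as np

    # Minimal 1D Yee-grid FDTD in normalized units (c = 1, dx = dt = 1,
    # i.e. the "magic" time step). E and H live on interleaved grids and
    # are updated in a leapfrog fashion.
    n, steps = 400, 600
    ez = np.zeros(n)
    hy = np.zeros(n - 1)
    for t in range(steps):
        hy += ez[1:] - ez[:-1]            # H update (staggered half-step)
        ez[1:-1] += hy[1:] - hy[:-1]      # E update
        ez[n // 4] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft source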
A Security Architecture for Grid-enabling OGC Web Services
NASA Astrophysics Data System (ADS)
Angelini, Valerio; Petronzio, Luca
2010-05-01
In the proposed presentation we describe an architectural solution for enabling secure access to Grids and possibly other large-scale on-demand processing infrastructures through OGC (Open Geospatial Consortium) Web Services (OWS). This work has been carried out in the context of the security thread of the G-OWS Working Group. G-OWS (gLite enablement of OGC Web Services) is an international open initiative started in 2008 by the European CYCLOPS, GENESI-DR and DORII Project Consortia in order to collect and coordinate experiences in the enablement of OWSs on top of the gLite Grid middleware. G-OWS investigates the development of Spatial Data and Information Infrastructures (SDI and SII) based on Grid/Cloud capacity in order to enable Earth Science applications and tools. Concerning security, the integration of OWS-compliant infrastructures and gLite Grids needs to address relevant challenges, due to their respective design principles. In fact, OWSs are part of a Web-based architecture that delegates security aspects to other specifications, whereas the gLite middleware implements the Grid paradigm with a strong security model (the gLite Grid Security Infrastructure: GSI). In our work we propose a Security Architectural Framework allowing the seamless use of Grid-enabled OGC Web Services through the federation of existing security systems (mostly web based) with the gLite GSI. This is made possible by mediating between different security realms, whose mutual trust is established in advance during the deployment of the system itself. Our architecture is composed of three different security tiers: the user's security system, a specific G-OWS security system, and the gLite Grid Security Infrastructure. Applying the separation-of-concerns principle, each of these tiers is responsible for controlling access to a well-defined resource set, respectively: the user's organization resources, the geospatial resources and services, and the Grid resources. While the gLite middleware is tied to a consolidated security approach based on X.509 certificates, our system is able to support different kinds of user security infrastructures. Our central component, the G-OWS Security Framework, is based on the OASIS WS-Trust specifications and on the OGC GeoRM architectural framework. This makes it possible to satisfy advanced requirements such as the enforcement of specific geospatial policies and complex secure web service chained requests. The typical use case is represented by a scientist belonging to a given organization who issues a request to a G-OWS Grid-enabled Web Service. The system initially asks the user to authenticate to his/her organization's security system and, after verification of the user's security credentials, it translates the user's digital identity into a G-OWS identity. This identity is linked to a set of attributes describing the user's access rights to the G-OWS services and resources. Inside the G-OWS security system, access restrictions are applied making use of the enhanced geospatial capabilities specified by OGC GeoXACML. If the required action needs to make use of the Grid environment, the system checks whether the user is entitled to access a Grid infrastructure. In that case his/her identity is translated into a temporary Grid security token using the Short Lived Credential Services (IGTF standard).
In our case, for the specific gLite Grid infrastructure, some information (VOMS attributes) is plugged into the Grid security token to grant access to the user's Virtual Organization Grid resources. The resulting token is used to submit the request to the Grid and also by the various gLite middleware elements to verify the user's grants. Based on the presented framework, the G-OWS Security Working Group developed a prototype enabling the execution of OGC Web Services on the EGEE Production Grid through federation with a Shibboleth-based security infrastructure. Future plans aim to integrate other Web authentication services such as OpenID, Kerberos and WS-Federation.
A Grid Metadata Service for Earth and Environmental Sciences
NASA Astrophysics Data System (ADS)
Fiore, Sandro; Negro, Alessandro; Aloisio, Giovanni
2010-05-01
Critical challenges for climate modeling researchers are strongly connected with increasingly complex simulation models and the huge quantities of produced datasets. Future trends in climate modeling will only increase computational and storage requirements. For this reason, the ability to transparently access both computational and data resources for large-scale complex climate simulations must be considered a key requirement for Earth Science and Environmental distributed systems. From the data management perspective, (i) the quantity of data will continuously increase, (ii) data will become more and more distributed and widespread, (iii) data sharing/federation will represent a key challenge among different sites distributed worldwide, and (iv) the potential community of users (large and heterogeneous) will be interested in discovering experimental results, searching metadata, browsing collections of files, comparing different results, displaying output, etc. A key element to carry out data search and discovery, and to manage and access huge and distributed amounts of data, is the metadata handling framework. What we propose for the management of distributed datasets is the GRelC service (a data grid solution focusing on metadata management). In contrast to classical approaches, the proposed data-grid solution is able to address scalability, transparency, security, efficiency and interoperability. The GRelC service we propose is able to provide access to metadata stored in different and widespread data sources (relational databases running on top of MySQL, Oracle, DB2, etc., leveraging SQL as query language, as well as XML databases - XIndice, eXist, and libxml2-based documents, adopting either XPath or XQuery), providing a strong data virtualization layer in a grid environment. Such a technological solution for distributed metadata management (i) leverages well-known, widely adopted standards (W3C, OASIS, etc.); (ii) supports role-based management (based on VOMS), which increases flexibility and scalability; (iii) provides full support for the Grid Security Infrastructure (authorization, mutual authentication, data integrity, data confidentiality and delegation); (iv) is compatible with existing grid middleware such as gLite and Globus; and finally (v) is currently adopted at the Euro-Mediterranean Centre for Climate Change (CMCC - Italy) to manage the entire CMCC data production activity, as well as in the international Climate-G testbed.
NASA Astrophysics Data System (ADS)
Ghonima, M. S.; Yang, H.; Zhong, X.; Ozge, B.; Sahu, D. K.; Kim, C. K.; Babacan, O.; Hanna, R.; Kurtz, B.; Mejia, F. A.; Nguyen, A.; Urquhart, B.; Chow, C. W.; Mathiesen, P.; Bosch, J.; Wang, G.
2015-12-01
One of the main obstacles to high penetrations of solar power is the variable nature of solar power generation. To mitigate variability, grid operators have to schedule additional reliability resources, at considerable expense, to ensure that load requirements are met by generation. Thus, despite the decreasing cost of solar PV, the cost of integrating solar power will increase as the penetration of solar resources onto the electric grid increases. There are three principal tools currently available to mitigate variability impacts: (i) flexible generation, (ii) storage, either virtual (demand response) or physical devices, and (iii) solar forecasting. Storage devices are a powerful tool capable of ensuring smooth power output from renewable resources. However, the high cost of storage is prohibitive, and markets are still being designed to leverage their full potential and mitigate their limitations (e.g. empty storage). Solar forecasting provides valuable information on the daily net load profile and upcoming ramps (increasing or decreasing solar power output), thereby giving the grid advance warning to schedule ancillary generation more accurately, or to curtail solar power output. In order to develop solar forecasting as a tool that can be utilized by grid operators, we identified two focus areas: (i) develop solar forecast technology and improve solar forecast accuracy, and (ii) develop forecasts that can be incorporated within existing grid planning and operation infrastructure. The first issue requires atmospheric science and engineering research, while the second requires detailed knowledge of energy markets and power engineering. Motivated by this background, we will emphasize area (i) in this talk and provide an overview of recent advancements in solar forecasting, especially in two areas: (a) numerical modeling tools for coastal stratocumulus to improve scheduling in the day-ahead California energy market, and (b) development of a sky imager to provide short-term forecasts (0-20 min ahead) to improve optimization and control of equipment on distribution feeders with high penetration of solar. Leveraging such tools that have seen extensive use in the atmospheric sciences supports the development of accurate physics-based solar forecast models. Directions for future research are also provided.
RandomSpot: A web-based tool for systematic random sampling of virtual slides.
Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E
2015-01-01
This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
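The core of systematic random sampling, a regular grid of equidistant points with a single uniformly random offset, can be sketched in a few lines of Python. The sketch below covers a rectangular region only; RandomSpot itself supports arbitrary regions of interest on virtual slides.

    import random

    def systematic_random_points(x0, y0, x1, y1, spacing):
        """Systematic random sampling: a regular grid of equidistant
        points shifted by one uniformly random offset, which is what
        makes the resulting (design-based) stereological sample
        unbiased."""
        ox = random.uniform(0, spacing)
        oy = random.uniform(0, spacing)
        pts = []
        y = y0 + oy
        while y < y1:
            x = x0 + ox
            while x < x1:
                pts.append((x, y))
                x += spacing
            y += spacing
        return pts

    print(len(systematic_random_points(0, 0, 1000, 1000, 100)))  # 100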
Pyglidein - A Simple HTCondor Glidein Service
NASA Astrophysics Data System (ADS)
Schultz, D.; Riedel, B.; Merino, G.
2017-10-01
A major challenge for data processing and analysis at the IceCube Neutrino Observatory presents itself in connecting a large set of individual clusters together to form a computing grid. Most of these clusters do not provide a “standard” grid interface. Using a local account on each submit machine, HTCondor glideins can be submitted to virtually any type of scheduler. The glideins then connect back to a main HTCondor pool, where jobs can run normally with no special syntax. To respond to dynamic load, a simple server advertises the number of idle jobs in the queue and the resources they request. The submit script can query this server to optimize glideins to what is needed, or not submit if there is no demand. Configuring HTCondor dynamic slots in the glideins allows us to efficiently handle varying memory requirements as well as whole-node jobs. One step of the IceCube simulation chain, photon propagation in the ice, heavily relies on GPUs for faster execution. Therefore, one important requirement for any workload management system in IceCube is to handle GPU resources properly. Within the pyglidein system, we have successfully configured HTCondor glideins to use any GPU allocated to it, with jobs using the standard HTCondor GPU syntax to request and use a GPU. This mechanism allows us to seamlessly integrate our local GPU cluster with remote non-Grid GPU clusters, including specially allocated resources at XSEDE supercomputers.
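The demand-driven submission logic described above might look like the following sketch; the /status endpoint and the JSON field names are illustrative assumptions, not pyglidein's actual API.

    import json
    import urllib.request

    def glideins_to_submit(server_url, max_submit=50):
        """Ask the central server how many jobs are idle and what they
        request, and submit at most that many glideins (zero when there
        is no demand), separating GPU from CPU demand."""
        with urllib.request.urlopen(server_url + "/status") as resp:
            status = json.load(resp)
        idle = status.get("idle_jobs", 0)
        need_gpu = status.get("idle_gpu_jobs", 0)
        return {"cpu": max(0, min(idle - need_gpu, max_submit)),
                "gpu": min(need_gpu, max_submit)}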
OpenClimateGIS - A Web Service Providing Climate Model Data in Commonly Used Geospatial Formats
NASA Astrophysics Data System (ADS)
Erickson, T. A.; Koziol, B. W.; Rood, R. B.
2011-12-01
The goal of the OpenClimateGIS project is to make climate model datasets readily available in commonly used, modern geospatial formats used by GIS software, browser-based mapping tools, and virtual globes. The climate modeling community typically stores climate data in multidimensional gridded formats capable of efficiently storing large volumes of data (such as netCDF, grib) while the geospatial community typically uses flexible vector and raster formats that are capable of storing small volumes of data (relative to the multidimensional gridded formats). OpenClimateGIS seeks to address this difference in data formats by clipping climate data to user-specified vector geometries (i.e. areas of interest) and translating the gridded data on-the-fly into multiple vector formats. The OpenClimateGIS system does not store climate data archives locally, but rather works in conjunction with external climate archives that expose climate data via the OPeNDAP protocol. OpenClimateGIS provides a RESTful API web service for accessing climate data resources via HTTP, allowing a wide range of applications to access the climate data. The OpenClimateGIS system has been developed using open source development practices and the source code is publicly available. The project integrates libraries from several other open source projects (including Django, PostGIS, numpy, Shapely, and netcdf4-python). OpenClimateGIS development is supported by a grant from NOAA's Climate Program Office.
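The central clipping operation, masking a gridded field to a user-specified vector geometry, can be sketched with numpy and Shapely (one of the libraries the project lists); the OPeNDAP access and output-format translation layers are omitted here.

    import numpy as np
    from shapely.geometry import Point, Polygon

    def clip_grid_to_polygon(lons, lats, field, polygon):
        """Mask a gridded field to an area of interest: grid cells whose
        centers fall outside the polygon become NaN."""
        lon2d, lat2d = np.meshgrid(lons, lats)
        inside = np.array([polygon.contains(Point(x, y))
                           for x, y in zip(lon2d.ravel(), lat2d.ravel())])
        return np.where(inside.reshape(field.shape), field, np.nan)

    aoi = Polygon([(-105, 35), (-95, 35), (-95, 45), (-105, 45)])
    field = np.random.rand(20, 30)          # stand-in climate field
    masked = clip_grid_to_polygon(np.linspace(-110, -90, 30),
                                  np.linspace(30, 50, 20), field, aoi)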
Integration of advanced technologies to enhance problem-based learning over distance: Project TOUCH.
Jacobs, Joshua; Caudell, Thomas; Wilks, David; Keep, Marcus F; Mitchell, Steven; Buchanan, Holly; Saland, Linda; Rosenheimer, Julie; Lozanoff, Beth K; Lozanoff, Scott; Saiki, Stanley; Alverson, Dale
2003-01-01
Distance education delivery has increased dramatically in recent years as a result of the rapid advancement of communication technology. The National Computational Science Alliance's Access Grid represents a significant advancement in communication technology with potential for distance medical education. The purpose of this study is to provide an overview of the TOUCH project (Telehealth Outreach for Unified Community Health; http://hsc.unm.edu/touch) with special emphasis on the process of problem-based learning case development for distribution over the Access Grid. The objective of the TOUCH project is to use emerging Internet-based technology to overcome geographic barriers for delivery of tutorial sessions to medical students pursuing rotations at remote sites. The TOUCH project also is aimed at developing a patient simulation engine and an immersive virtual reality environment to achieve a realistic health care scenario enhancing the learning experience. A traumatic head injury case is developed and distributed over the Access Grid as a demonstration of the TOUCH system. Project TOUCH serves as an example of a computer-based learning system for developing and implementing problem-based learning cases within the medical curriculum, but this system should be easily applied to other educational environments and disciplines involving functional and clinical anatomy. Future phases will explore PC versions of the TOUCH cases for increased distribution. Copyright 2003 Wiley-Liss, Inc.
International Symposium on Grids and Clouds (ISGC) 2016
NASA Astrophysics Data System (ADS)
The International Symposium on Grids and Clouds (ISGC) 2016 will be held at Academia Sinica in Taipei, Taiwan from 13-18 March 2016, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). The theme of ISGC 2016 is “Ubiquitous e-infrastructures and Applications”. Contemporary research is impossible without a strong IT component - researchers rely on the existence of stable and widely available e-infrastructures and their higher level functions and properties. As a result of these expectations, e-Infrastructures are becoming ubiquitous, providing an environment that supports large scale collaborations that deal with global challenges as well as smaller and temporary research communities focusing on particular scientific problems. To support those diversified communities and their needs, the e-Infrastructures themselves are becoming more layered and multifaceted, supporting larger groups of applications. Following last year's conference, ISGC 2016 continues its aim to bring together users and application developers with those responsible for the development and operation of multi-purpose ubiquitous e-Infrastructures. Topics of discussion include Physics (including HEP) and Engineering Applications, Biomedicine & Life Sciences Applications, Earth & Environmental Sciences & Biodiversity Applications, Humanities, Arts, and Social Sciences (HASS) Applications, Virtual Research Environment (including Middleware, tools, services, workflow, etc.), Data Management, Big Data, Networking & Security, Infrastructure & Operations, Infrastructure Clouds and Virtualisation, Interoperability, Business Models & Sustainability, Highly Distributed Computing Systems, and High Performance & Technical Computing (HPTC), etc.
NASA Astrophysics Data System (ADS)
Turner, M. A.
2015-12-01
Because of a lack of centralized planning and no widely-adopted standards among hydrological modeling research groups, research communities, and the data management teams meant to support research, there is chaos when it comes to data formats, spatio-temporal resolutions, ontologies, and data availability. All this makes true scientific reproducibility and collaborative integrated modeling impossible without some glue to piece it all together. Our Virtual Watershed Integrated Modeling System provides the tools and modeling framework hydrologists need to accelerate and fortify new scientific investigations by tracking provenance and providing adaptors for integrated, collaborative hydrologic modeling and data management. Under global warming trends where water resources are under increasing stress, reproducible hydrological modeling will be increasingly important to improve transparency and understanding of the scientific facts revealed through modeling. The Virtual Watershed Data Engine is capable of ingesting a wide variety of heterogeneous model inputs, outputs, model configurations, and metadata. We will demonstrate one example, starting from real-time raw weather station data packaged with station metadata. Our integrated modeling system will then create gridded input data via geostatistical methods along with error and uncertainty estimates. These gridded data are then used as input to hydrological models, all of which are available as web services wherever feasible. Models may be integrated in a data-centric way where the outputs too are tracked and used as inputs to "downstream" models. This work is part of an ongoing collaborative Tri-state (New Mexico, Nevada, Idaho) NSF EPSCoR Project, WC-WAVE, comprised of researchers from multiple universities in each of the three states. The tools produced and presented here have been developed collaboratively alongside watershed scientists to address specific modeling problems with an eye on the bigger picture of scientific reproducibility and transparency, and data publication and reuse.
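As a stand-in for the gridding step described above, here is a minimal inverse-distance-weighting sketch. The Virtual Watershed's actual geostatistical methods (e.g. kriging) additionally produce the error and uncertainty estimates that plain IDW does not.

    import numpy as np

    def idw_grid(sx, sy, sv, gx, gy, power=2.0):
        """Inverse-distance-weighted interpolation of point station
        values sv at (sx, sy) onto the grid defined by axes gx, gy."""
        xx, yy = np.meshgrid(gx, gy)
        d = np.hypot(xx[..., None] - sx, yy[..., None] - sy)
        w = 1.0 / np.maximum(d, 1e-12) ** power
        return (w * sv).sum(axis=-1) / w.sum(axis=-1)

    # Illustrative station data (not from the project):
    stations_x = np.array([0.1, 0.5, 0.9])
    stations_y = np.array([0.2, 0.8, 0.4])
    temps = np.array([11.2, 9.7, 10.4])
    grid = idw_grid(stations_x, stations_y, temps,
                    np.linspace(0, 1, 50), np.linspace(0, 1, 50))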
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, S.J.Ben; Lauer, Gregory S.
Extreme-science drives the need for distributed exascale processing and communications that are carefully, yet flexibly, managed. Exponential growth of data for scientific simulations, experimental data, collaborative data analyses, remote visualization and GRID computing requirements of scientists in fields as diverse as high energy physics, climate change, genomics, fusion, synchrotron radiation, material science, medicine, and other scientific disciplines cannot be accommodated by simply applying existing transport protocols to faster pipes. Further, scientific challenges today demand diverse research teams, heightening the need for and increasing the complexity of collaboration. To address these issues within the network layer and physical layer, we have performed a number of research activities surrounding effective allocation and management of elastic optical network (EON) resources, particularly focusing on FlexGrid transponders. FlexGrid transponders support the opportunity to build Layer-1 connections at a wide range of bandwidths and to reconfigure them rapidly. The new flexibility supports complex new ways of using the physical layer that must be carefully managed and hidden from the scientist end-users. FlexGrid networks utilize flexible (or elastic) spectral bandwidths for each data link without using fixed wavelength grids. The flexibility in spectrum allocation brings many appealing features to network operations. Current networks are designed for the worst case impairments in transmission performance and the assigned spectrum is over-provisioned. In contrast, the FlexGrid networks can operate with the highest spectral efficiency and minimum bandwidth for the given traffic demand while meeting the minimum quality of transmission (QoT) requirement. Two primary focuses of our research are: (1) resource and spectrum allocation (RSA) for IP traffic over EONs, and (2) RSA for cross-domain optical networks. Previous work concentrates primarily on large file transfers within a single domain. Adding support for IP traffic changes the nature of the RSA problem: instead of choosing to accept or deny each request for network support, IP traffic is inherently elastic and thus lends itself to a bandwidth maximization formulation. We developed a number of algorithms that could be easily deployed within existing and new FlexGrid networks, leading to networks that better support scientific collaboration. Cross-domain RSA research is essential to support large-scale FlexGrid networks, since configuration information is generally not shared or coordinated across domains. The results presented here are in their early stages. They are technically feasible and practical, but still require coordination among organizations and equipment owners and a higher-layer framework for managing network requests.
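As one concrete baseline for the RSA problems discussed above, a first-fit spectrum allocation over a path, enforcing the spectrum-contiguity and spectrum-continuity constraints, can be sketched as follows. This is an illustrative textbook heuristic, not the algorithms developed in the project.

    def first_fit_slots(link_occupancy, path_links, n_slots):
        """Find the lowest block of n_slots contiguous frequency slots
        that is free on every link of the path (continuity across links,
        contiguity within the spectrum); mark it occupied and return the
        starting slot index, or None if the request is blocked."""
        total = len(next(iter(link_occupancy.values())))
        for start in range(total - n_slots + 1):
            window = range(start, start + n_slots)
            if all(not link_occupancy[l][s]
                   for l in path_links for s in window):
                for l in path_links:
                    for s in window:
                        link_occupancy[l][s] = True
                return start
        return None  # blocked

    links = {"A-B": [False] * 320, "B-C": [False] * 320}
    print(first_fit_slots(links, ["A-B", "B-C"], 4))  # -> 0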
Enhancing GIS Capabilities for High Resolution Earth Science Grids
NASA Astrophysics Data System (ADS)
Koziol, B. W.; Oehmke, R.; Li, P.; O'Kuinghttons, R.; Theurich, G.; DeLuca, C.
2017-12-01
Applications for high performance GIS will continue to increase as Earth system models pursue more realistic representations of Earth system processes. Finer spatial resolution model input and output, unstructured or irregular modeling grids, data assimilation, and regional coordinate systems present novel challenges for GIS frameworks operating in the Earth system modeling domain. This presentation provides an overview of two GIS-driven applications that combine high performance software with big geospatial datasets to produce value-added tools for the modeling and geoscientific community. First, a large-scale interpolation experiment using National Hydrography Dataset (NHD) catchments, a high resolution rectilinear CONUS grid, and the Earth System Modeling Framework's (ESMF) conservative interpolation capability will be described. ESMF is a parallel, high-performance software toolkit that provides capabilities (e.g. interpolation) for building and coupling Earth science applications. ESMF is developed primarily by the NOAA Environmental Software Infrastructure and Interoperability (NESII) group. The purpose of this experiment was to test and demonstrate the utility of high performance scientific software in traditional GIS domains. Special attention will be paid to the nuanced requirements for dealing with high resolution, unstructured grids in scientific data formats. Second, a chunked interpolation application using ESMF and OpenClimateGIS (OCGIS) will demonstrate how spatial subsetting can virtually remove computing resource ceilings for very high spatial resolution interpolation operations. OCGIS is a NESII-developed Python software package designed for the geospatial manipulation of high-dimensional scientific datasets. We will give an overview of the data processing workflow, explain why a chunked approach is required, and discuss how the application could be adapted to meet operational requirements. In addition, we will provide a general overview of OCGIS's parallel subsetting capabilities, including challenges in the design and implementation of a scientific data subsetter.
Evaluation of a risk-based environmental hot spot delineation algorithm.
Sinha, Parikhit; Lambert, Michael B; Schew, William A
2007-10-22
Following remedial investigations of hazardous waste sites, remedial strategies may be developed that target the removal of "hot spots," localized areas of elevated contamination. For a given exposure area, a hot spot may be defined as a sub-area that causes risks for the whole exposure area to be unacceptable. The converse of this statement may also apply: when a hot spot is removed from within an exposure area, risks for the exposure area may drop below unacceptable thresholds. The latter is the motivation for a risk-based approach to hot spot delineation, which was evaluated using Monte Carlo simulation. Random samples taken from a virtual site ("true site") were used to create an interpolated site. The latter was gridded and concentrations from the center of each grid box were used to calculate 95% upper confidence limits on the mean site contaminant concentration and corresponding hazard quotients for a potential receptor. Grid cells with the highest concentrations were removed and hazard quotients were recalculated until the site hazard quotient dropped below the threshold of 1. The grid cells removed in this way define the spatial extent of the hot spot. For each of the 100,000 Monte Carlo iterations, the delineated hot spot was compared to the hot spot in the "true site." On average, the algorithm was able to delineate hot spots that were collocated with and equal to or greater in size than the "true hot spot." When delineated hot spots were mapped onto the "true site," setting contaminant concentrations in the mapped area to zero, the hazard quotients for these "remediated true sites" were on average within 5% of the acceptable threshold of 1.
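A greedy sketch of the delineation loop described above; for brevity the hazard quotient is computed from the arithmetic mean, where the paper uses the 95% upper confidence limit on the mean.

    import numpy as np

    def delineate_hot_spot(conc, hq_per_unit, threshold=1.0):
        """Repeatedly remove the highest-concentration grid cell (set it
        to zero, i.e. remediated) until the exposure-area hazard quotient
        drops below the threshold; return the removed-cell mask, which
        defines the spatial extent of the hot spot."""
        conc = conc.astype(float).copy()
        removed = np.zeros(conc.shape, dtype=bool)
        while hq_per_unit * conc.mean() >= threshold:
            i = np.unravel_index(np.argmax(conc), conc.shape)
            conc[i] = 0.0
            removed[i] = True
        return removed

    site = np.random.lognormal(mean=0.0, sigma=1.0, size=(10, 10))
    mask = delineate_hot_spot(site, hq_per_unit=0.8)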
Special Relativity in Week One: 3) Introducing the Lorentz Contraction
NASA Astrophysics Data System (ADS)
Huggins, Elisha
2011-05-01
This is the third of four articles on teaching special relativity in the first week of an introductory physics course.1,2 With Einstein's second postulate that the speed of light is the same to all observers, we could use the light pulse clock to introduce time dilation. But we had difficulty introducing the Lorentz contraction until we saw the movie "Time Dilation, an Experiment with Mu-Mesons" by David Frisch and James Smith.3,4 The movie demonstrates that time dilation and the Lorentz contraction are essentially two sides of the same coin. Here we take the muon's point of view for a more intuitive understanding of the Lorentz contraction, and use the results of the movie to provide an insight into the way we interpret experimental results involving special relativity.
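The complementary relations at work here are the standard ones: lab observers see the muon's clock dilated, while in the muon's frame the mountain's height is contracted by the same factor, so both frames agree on the surviving muon count:

    \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
    \Delta t = \gamma\,\Delta t_{0}, \qquad
    L = \frac{L_{0}}{\gamma}.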
Increasing Capacity Exploitation in Food Supply Chains Using Grid Concepts
NASA Astrophysics Data System (ADS)
Volk, Eugen; Müller, Marcus; Jacob, Ansger; Racz, Peter; Waldburger, Martin
Food supply chains today are characterized by fixed trade relations with long-term contracts established between heterogeneous supply chain companies. The production and logistics capacities of these companies are often utilized in an economically inefficient manner. In addition, increased consumer awareness of food safety issues renders supply chain management even more challenging, since integrated tracking and tracing along the whole food supply chain is needed. Facing these issues of supply chain management complexity and completely documented product quality, this paper proposes a full-lifecycle solution for dynamic capacity markets based on concepts used in the field of Grid computing [1], such as the management of Virtual Organizations (VOs) combined with Service Level Agreements (SLAs). The solution enables the cost-efficient utilization of real-world capacities (e.g., production capacities or logistics facilities) by using a simple, browser-based portal. Users are able to enter into product-specific negotiations with buyers and suppliers of a food supply chain, and to obtain real-time access to product information including SLA evaluation reports. Participating supply chain companies thus gain business opportunities in wider market access, process innovation, and trustworthy food products.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Jacob; Edgar, Thomas W.; Daily, Jeffrey A.
With an ever-evolving power grid, concerns regarding how to maintain system stability, efficiency, and reliability remain constant because of increasing uncertainties and decreasing rotating inertia. To alleviate some of these concerns, demand response represents a viable solution and is virtually an untapped resource in the current power grid. This work describes a hierarchical control framework that allows coordination between distributed energy resources and demand response. This control framework is composed of two control layers: a coordination layer that ensures aggregations of resources are coordinated to achieve system objectives and a device layer that controls individual resources to assure the predetermined power profile is tracked in real time. Large-scale simulations are executed to study the hierarchical control, requiring advancements in simulation capabilities. Technical advancements necessary to investigate and answer control interaction questions, including the Framework for Network Co-Simulation platform and Arion modeling capability, are detailed. Insights into the interdependencies of controls across a complex system and how they must be tuned, as well as validation of the effectiveness of the proposed control framework, are yielded using a large-scale integrated transmission system model coupled with multiple distribution systems.
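A toy sketch of the two-layer idea described above: a coordination layer splits a system-level power target among device aggregations, and a device layer nudges each device toward its share. Proportional allocation and first-order tracking are illustrative assumptions, not the controls developed in the report.

    import numpy as np

    def coordination_layer(system_target_kw, capacities_kw):
        # Allocate the system target across aggregations in proportion
        # to their capacities (one simple coordination policy).
        caps = np.asarray(capacities_kw, dtype=float)
        return system_target_kw * caps / caps.sum()

    def device_layer(power_kw, setpoint_kw, gain=0.5):
        # First-order tracking of the assigned power profile.
        return power_kw + gain * (setpoint_kw - power_kw)

    setpoints = coordination_layer(120.0, [50.0, 30.0, 20.0])
    power = np.zeros(3)
    for _ in range(20):            # real-time tracking loop
        power = device_layer(power, setpoints)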
High Power ECR Ion Thruster Discharge Characterization
NASA Technical Reports Server (NTRS)
Foster, John E.; Kamhawi, Hani; Haag, Thomas; Carpenter, Christian; Williams, George W.
2006-01-01
Electron cyclotron resonance (ECR) based ion thrusters with carbon based ion optics can potentially satisfy lifetime requirements for long duration missions (approximately 10 years) because grid erosion and cathode insert depletion issues are virtually eliminated. Though the ECR plasma discharge has been found to typically operate at slightly higher discharge losses than conventional DC ion thrusters (for high total thruster power applications), the discharge power fraction is small (less than 1 percent at 25 kW). In this regard, the benefits of increased life, low discharge plasma potentials, and reduced complexity are welcome tradeoffs for the associated discharge efficiency decrease. Presented here are results from discharge characterization of a large area ECR plasma source for gridded ion thruster applications. These measurements included load matching efficacy, bulk plasma properties via Langmuir probe, and plasma uniformity as measured using current probes distributed at the exit plane. A high degree of plasma uniformity was observed (flatness greater than 0.9). Additionally, charge state composition was qualitatively evaluated using emission spectroscopy. Plasma induced emission was dominated by xenon ion lines. No doubly charged xenon ions were detected.
Microcontroller based spectrophotometer using compact disc as diffraction grid
NASA Astrophysics Data System (ADS)
Bano, Saleha; Altaf, Talat; Akbar, Sunila
2010-12-01
This paper describes the design and implementation of a portable, inexpensive and cost-effective spectrophotometer. The device combines the use of compact disc (CD) media as a diffraction grid and a 60-watt bulb as a light source. Moreover, it employs a moving slit along with a stepper motor for obtaining monochromatic light, a photocell with spectral sensitivity in the visible region to determine the intensity of light, an amplifier with a very high gain, and an advanced virtual RISC (AVR) microcontroller ATmega32 as a control unit. The device was successfully applied to determine the absorbance and transmittance of KMnO4, and the unknown concentration of KMnO4 with the help of a calibration curve. For comparison purposes a commercial spectrophotometer was used. There are no significant differences between the absorbance and transmittance values estimated by the two instruments. Furthermore, good results are obtained at all visible wavelengths of light. Therefore, the designed instrument offers an economically feasible alternative for spectrophotometric sample analysis in small routine, research and teaching laboratories, because the components used in the design of the device are cheap and easy to acquire.
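The quantities reported above follow from two standard relations, transmittance T = I/I0 and absorbance A = -log10(T), plus a Beer-Lambert calibration line. A small sketch with purely illustrative numbers, not the paper's measurements:

    import numpy as np

    def transmittance(sample_intensity, blank_intensity):
        return sample_intensity / blank_intensity

    def absorbance(sample_intensity, blank_intensity):
        # A = -log10(T) = -log10(I / I0)
        return -np.log10(transmittance(sample_intensity, blank_intensity))

    # Beer-Lambert calibration: absorbance is linear in concentration,
    # so a least-squares line through known standards lets an unknown
    # concentration be read off from its measured absorbance.
    known_conc = np.array([0.5, 1.0, 2.0, 4.0])   # e.g. mmol/L standards
    known_abs = np.array([0.11, 0.21, 0.43, 0.85])
    slope, intercept = np.polyfit(known_conc, known_abs, 1)
    unknown_conc = (0.33 - intercept) / slope     # from measured A = 0.33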
Interoperating Cloud-based Virtual Farms
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Colamaria, F.; Colella, D.; Casula, E.; Elia, D.; Franco, A.; Lusso, S.; Luparello, G.; Masera, M.; Miniello, G.; Mura, D.; Piano, S.; Vallero, S.; Venaruzzo, M.; Vino, G.
2015-12-01
The present work aims at optimizing the use of computing resources available at the Italian grid Tier-2 sites of the ALICE experiment at the CERN LHC by making them accessible to interactive distributed analysis, thanks to modern solutions based on cloud computing. The scalability and elasticity of the computing resources via dynamic (“on-demand”) provisioning is essentially limited by the size of the computing site, reaching the theoretical optimum only in the asymptotic case of infinite resources. The main challenge of the project is to overcome this limitation by federating different sites through a distributed cloud facility. The storage capacities of the participating sites are seen as a single federated storage area, removing the need to mirror data across them: high data access efficiency is guaranteed by location-aware analysis software and storage interfaces, in a way that is transparent from the end-user perspective. Moreover, interactive analysis on the federated cloud reduces the execution time with respect to grid batch jobs. Tests of the investigated solutions for both cloud computing and distributed storage on a wide area network will be presented.
Research in Parallel Algorithms and Software for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Domel, Neal D.
1996-01-01
Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
Cyberhubs: Virtual Research Environments for Astronomy
NASA Astrophysics Data System (ADS)
Herwig, Falk; Andrassy, Robert; Annau, Nic; Clarkson, Ondrea; Côté, Benoit; D’Sa, Aaron; Jones, Sam; Moa, Belaid; O’Connell, Jericho; Porter, David; Ritter, Christian; Woodward, Paul
2018-05-01
Collaborations in astronomy and astrophysics are faced with numerous cyber-infrastructure challenges, such as large data sets, the need to combine heterogeneous data sets, and the challenge to effectively collaborate on those large, heterogeneous data sets with significant processing requirements and complex science software tools. The cyberhubs system is an easy-to-deploy package for small- to medium-sized collaborations based on the Jupyter and Docker technology, which allows web-browser-enabled, remote, interactive analytic access to shared data. It offers an initial step to address these challenges. The features and deployment steps of the system are described, as well as the requirements collection through an account of the different approaches to data structuring, handling, and available analytic tools for the NuGrid and PPMstar collaborations. NuGrid is an international collaboration that creates stellar evolution and explosion physics and nucleosynthesis simulation data. The PPMstar collaboration performs large-scale 3D stellar hydrodynamics simulations of interior convection in the late phases of stellar evolution. Examples of science that is currently performed on cyberhubs, in the areas of 3D stellar hydrodynamic simulations, stellar evolution and nucleosynthesis, and Galactic chemical evolution, are presented.
Towards a centralized Grid Speedometer
NASA Astrophysics Data System (ADS)
Dzhunov, I.; Andreeva, J.; Fajardo, E.; Gutsche, O.; Luyckx, S.; Saiz, P.
2014-06-01
Given the distributed nature of the Worldwide LHC Computing Grid and the way CPU resources are pledged and shared around the globe, Virtual Organizations (VOs) face the challenge of monitoring the use of these resources. For CMS and the operation of centralized workflows, monitoring how many production jobs are running and pending in the Glidein WMS production pools is very important. The Dashboard Site Status Board (SSB) provides a very flexible framework to collect, aggregate and visualize data. The CMS production monitoring team uses the SSB to define the metrics that have to be monitored and the alarms that have to be raised. During the integration of CMS production monitoring into the SSB, several enhancements to the core functionality of the SSB were required; they were implemented in a generic way, so that other VOs using the SSB can exploit them. Alongside these enhancements, there were a number of changes to the core of the SSB framework. This paper presents the details of the implementation and the advantages for current and future usage of the new features in SSB.
Integrating multiple scientific computing needs via a Private Cloud infrastructure
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.
2014-06-01
In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing the downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, as well as several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
Incremental analysis of large elastic deformation of a rotating cylinder
NASA Technical Reports Server (NTRS)
Buchanan, G. R.
1976-01-01
The effect of finite deformation upon a rotating, orthotropic cylinder was investigated using a general incremental theory. The incremental equations of motion are developed using the variational principle. The governing equations are derived using the principle of virtual work for a body with initial stress. The governing equations are reduced to those for the title problem and a numerical solution is obtained using finite difference approximations. Since the problem is defined in terms of one independent space coordinate, the finite difference grid can be modified as the incremental deformation occurs without serious numerical difficulties. The nonlinear problem is solved incrementally by totaling a series of linear solutions.
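The closing idea, totaling a series of linear solutions, can be sketched in a few lines; the load stepping and state-dependent stiffness below are schematic assumptions, not the paper's finite-difference formulation.

```python
# Schematic incremental solve: apply the load in small steps, solve a linear
# system per increment, and total the increments (illustrative only).
import numpy as np

n = 50                              # degrees of freedom
u = np.zeros(n)                     # accumulated (total) displacement
total_load = np.ones(n)
steps = 100

for _ in range(steps):
    K = np.eye(n) * (1.0 + 0.1 * np.linalg.norm(u))   # stiffness updated with state
    du = np.linalg.solve(K, total_load / steps)       # linearized increment
    u += du                                           # the nonlinear answer is the total

print("final displacement norm:", np.linalg.norm(u))
```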
NASA Astrophysics Data System (ADS)
Veselka, T. D.; Poch, L.
2011-12-01
Integrating high penetration levels of wind and solar energy resources into the power grid is a formidable challenge in virtually all interconnected systems, because supply and demand must remain in balance at all times. Since large-scale electricity storage is currently not economically viable, generation must exactly match electricity demand plus energy losses in the system as time unfolds. Therefore, as generation from variable resources such as wind and solar fluctuates, production from generating resources that are easier to control and dispatch needs to compensate for these fluctuations, while at the same time responding to instantaneous changes in load and following daily load profiles. The grid in the Western U.S. is not exempt from the grid integration challenges associated with variable resources. However, one advantage that the power system in the Western U.S. has over many other regional power systems is that its footprint contains an abundance of hydropower resources. Hydropower plants, especially those with reservoir water storage, can physically change electricity production levels very quickly, both via a dispatcher and through automatic generation control. Since hydropower response time is typically much faster than that of other dispatchable resources such as steam or gas turbines, it is well suited to alleviating variable-resource grid integration issues. However, despite an abundance of hydropower resources and the current low penetration of variable resources in the Western U.S., problems have already surfaced. This spring in the Pacific Northwest, wetter-than-normal hydropower conditions in combination with transmission constraints resulted in controversial wind resource shedding. This action was taken because water spilling would have increased dissolved oxygen levels downstream of dams, thereby significantly degrading fish habitats. The extent to which hydropower resources will be able to contribute toward a stable and reliable Western grid is currently being studied. Typically these studies consider the inherent flexibility of hydropower technologies, but tend to fall short on details regarding grid operations, institutional arrangements, and hydropower environmental regulations. This presentation will focus on an analysis that Argonne National Laboratory is conducting in collaboration with the Western Area Power Administration (Western). The analysis evaluates the extent to which Western's hydropower resources may help with grid integration challenges via a proposed Energy Imbalance Market. This market encompasses most of the Western Electricity Coordinating Council footprint. It changes grid operations such that the real-time dispatch would be, in part, based on a 5-minute electricity market. The analysis includes many factors, such as site-specific environmental considerations at each of Western's hydropower facilities, long-term firm purchase agreements, and hydropower operating objectives and goals. Results of the analysis indicate that site-specific details significantly affect the ability of hydropower plants to respond to grid needs in a future with high penetration of variable resources.
SAMPL4 & DOCK3.7: lessons for automated docking procedures
NASA Astrophysics Data System (ADS)
Coleman, Ryan G.; Sterling, Teague; Weiss, Dahlia R.
2014-03-01
The SAMPL4 challenges were used to test current automated methods for solvation energy, virtual screening, pose and affinity prediction in the molecular docking pipeline DOCK 3.7. Additionally, first-order models of binding affinity were proposed as milestones for any method predicting binding affinity. Several important discoveries about the molecular docking software were made during the challenge: (1) solvation energies of ligands were five-fold worse than those of any other method used in SAMPL4, including methods that were similarly fast; (2) HIV integrase is a challenging target, but automated docking on the correct allosteric site performed well in terms of virtual screening and pose prediction (compared to other methods), while affinity prediction, as expected, was very poor; (3) molecular docking grid sizes can be very important: serious errors were discovered with default settings, which have been adjusted for all future work. Overall, lessons from SAMPL4 suggest many changes to molecular docking tools, not just DOCK 3.7, that could improve the state of the art. Future difficulties and projects are also discussed.
VO-KOREL: A Fourier Disentangling Service of the Virtual Observatory
NASA Astrophysics Data System (ADS)
Škoda, Petr; Hadrava, Petr; Fuchs, Jan
2012-04-01
VO-KOREL is a web service exploiting the technology of the Virtual Observatory to provide astronomers with an intuitive graphical front-end and a distributed computing back-end running the most recent version of the Fourier disentangling code KOREL. The system integrates the ideas of the e-shop basket, conserving the privacy of every user through transfer encryption and access authentication, with the features of a laboratory notebook, allowing easy housekeeping of both input parameters and final results, and it explores the newly emerging technology of cloud computing. While the web-based front-end allows the user to submit data and parameter files, edit parameters, manage a job list, resubmit or cancel running jobs and, above all, watch the text and graphical results of the disentangling process, the main part of the back-end is a simple job-queue submission system executing multiple instances of the FORTRAN code KOREL in parallel. This may easily be extended for GRID-based deployment on massively parallel computing clusters. A short introduction to the underlying technologies is given, briefly mentioning the advantages as well as the bottlenecks of the design used.
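A minimal sketch of the back-end pattern described, a queue feeding parallel instances of an external executable; the `korel` binary name and parameter-file arguments are placeholders.

```python
# Toy job-queue submission system: a small worker pool drains a list of
# parameter files, launching one external-code instance per job.
import subprocess
from concurrent.futures import ThreadPoolExecutor

jobs = ["par01.dat", "par02.dat", "par03.dat", "par04.dat"]

def run_job(parfile: str) -> int:
    # Each worker launches one instance of the (placeholder) 'korel' executable.
    return subprocess.run(["korel", parfile]).returncode

with ThreadPoolExecutor(max_workers=2) as pool:   # two concurrent instances
    for parfile, rc in zip(jobs, pool.map(run_job, jobs)):
        print(parfile, "finished with exit code", rc)
```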
NASA Technical Reports Server (NTRS)
1997-01-01
Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network, or to locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.
A Solar Data Model for Use in Virtual Observatories
NASA Astrophysics Data System (ADS)
Reardon, K. P.; Bentley, R. D.; Messerotti, M.; Giordano, S.
2004-05-01
The creation of virtual solar observatories relies heavily on merging the metadata describing different datasets into a common form, so that the metadata can be handled in a standard way for all associated resources. In order to bring together the varied data descriptions that already exist, it is necessary to have a common framework on which all the different datasets can be represented. This framework is defined through a data model, which attempts to provide a simplified but realistic description of the various entities that make up a data set or solar resource. We present the solar data model which has been developed as part of the European Grid of Solar Observations (EGSO) project. This model attempts to include many of the different elements in the field of solar physics, including data producers, data sets, event lists, and data providers. This global picture can then be used to focus on the particular elements required for a specific implementation. We present the different aspects of the model and describe some systems in which portions of this model have been implemented.
The production of hydrogen fuel from renewable sources and its role in grid operations
NASA Astrophysics Data System (ADS)
Barton, John; Gammon, Rupert
Understanding the scale and nature of hydrogen's potential role in the development of low-carbon energy systems requires an examination of the operation of the whole energy system, including the heat, power, industrial and transport sectors, on an hour-by-hour basis. The Future Energy Scenario Assessment (FESA) software model used for this study is unique in providing a holistic, high-resolution, functional analysis, which incorporates variations in supply resulting from weather-dependent renewable energy generators. The outputs of this model, arising from any given user-definable scenario, are year-round supply and demand profiles that can be used to assess the market size and operational regime of energy technologies. FESA was used in this case to assess what - if anything - might be the role for hydrogen in a low-carbon economy future for the UK. In this study, three UK energy supply pathways were considered, all of which reduce greenhouse gas emissions by 80% by 2050, and substantially reduce reliance on oil and gas while maintaining a stable electricity grid and meeting the energy needs of a modern economy. All use more nuclear power and renewable energy of all kinds than today's system. The first of these scenarios relies on substantial amounts of 'clean coal' in combination with intermittent renewable energy sources by the year 2050. The second uses twice as much intermittent renewable energy as the first and virtually no coal. The third uses 2.5 times as much nuclear power as the first and virtually no coal. All scenarios clearly indicate that the use of hydrogen in the transport sector is important in reducing distributed carbon emissions that cannot easily be mitigated by Carbon Capture and Storage (CCS). In the first scenario, this hydrogen derives mainly from steam reforming of fossil fuels (principally coal), whereas in the second and third scenarios, hydrogen is made mainly by electrolysis using variable surpluses of low-carbon electricity. Hydrogen thereby fulfils a double-faceted role: Demand Side Management (DSM) for the electricity grid and the provision of a 'clean' fuel, predominantly for the transport sector. When each of the scenarios was examined without the use of hydrogen as a transport fuel, substantially larger amounts of primary energy were required in the form of imported coal. The FESA model also indicates that the challenge of grid balancing is not a valid reason for limiting the amount of intermittent renewable energy generated. Engineering limitations, economic viability, local environmental considerations and conflicting uses of land and sea may limit the amount of renewable energy available, but there is no practical limit to the conversion of this energy into whatever is required, be it electricity, heat, motive power or chemical feedstocks.
NASA Astrophysics Data System (ADS)
Tijera, Manuel; Maqueda, Gregorio; Cano, José L.; López, Pilar; Yagüe, Carlos
2010-05-01
The wind velocity series of the atmospheric turbulent flow in the planetary boundary layer (PBL), in spite of being highly erratic, present a self-similar structure (Frisch, 1995; Peitgen et al., 2004; Falkovich et al., 2006), so the wind velocity can be seen as a fractal magnitude. We calculate the fractal dimension (Kolmogorov capacity, or box-counting dimension) of the wind perturbation series (u' = u − ⟨u⟩, the departure of the instantaneous velocity from its mean) in physical space (namely velocity-time). The time evolution of the fractal dimension has been studied over different days and at three levels above the ground (5.8 m, 13.5 m, 32 m). The data analysed were recorded in the experimental campaign SABLES-98 (Cuxart et al., 2000) at the Research Centre for the Lower Atmosphere (CIBA), located in Valladolid (Spain). In this work the u, v and w components of the wind velocity series were measured by sonic anemometers (20 Hz sampling rate). The fractal dimension is studied versus the integral length scales of the mean wind series, as well as the influence of different turbulent parameters. A method for estimating these integral scales is developed using the normalized autocorrelation function and a Gaussian fit. Finally, the variation of the fractal dimension versus stability parameters (such as the Richardson number) is analysed in order to explain some of the dominant features that are likely immersed in the fractal nature of these turbulent flows. References: Cuxart J, Yagüe C, Morales G, Terradellas E, Orbe J, Calvo J, Fernández A, Soler MR, Infante C, Buenestado P, Espinalt A, Joergensen HE, Rees JM, Vilá J, Redondo JM, Cantalapiedra IR and Conangla L (2000) Stable atmospheric boundary-layer experiment in Spain (SABLES98): a report. Boundary-Layer Meteorol 96:337-370 - Falkovich G and Sreenivasan KR (2006) Lessons from hydrodynamic turbulence. Physics Today 59:43-49 - Frisch U (1995) Turbulence: The Legacy of A. N. Kolmogorov. Cambridge University Press, 269 pp - Peitgen H, Jürgens H and Saupe D (2004) Chaos and Fractals. Springer-Verlag, 971 pp
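A toy box-counting estimate of the kind described (not the SABLES-98 analysis code; the grid scales and surrogate series are arbitrary choices):

```python
import numpy as np

def box_counting_dimension(series, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting (Kolmogorov capacity) dimension of a 1-D
    signal viewed as a curve in the normalized (time, value) plane."""
    t = np.linspace(0.0, 1.0, series.size)
    x = (series - series.min()) / (series.ptp() or 1.0)
    counts = []
    for n in scales:                                  # n x n boxes of side 1/n
        boxes = set(zip((t * n).astype(int).clip(0, n - 1),
                        (x * n).astype(int).clip(0, n - 1)))
        counts.append(len(boxes))
    # The dimension is the slope of log N(eps) versus log(1/eps).
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

u = np.cumsum(np.random.randn(20_000))   # surrogate "wind perturbation" series
print("box-counting dimension ~", round(box_counting_dimension(u), 2))
```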
NASA Astrophysics Data System (ADS)
Zhu, B.; Lin, J.; Yuan, X.; Li, Y.; Shen, C.
2016-12-01
The role of turbulent acceleration and heating in the fractal magnetic reconnection of solar flares is still not clear, especially at the X-point in the diffusion region. From the numerical-experiment standpoint, it is hard to quantitatively analyze, with classical magnetohydrodynamic methods, the vortex generation, turbulence evolution, and particle acceleration and heating that occur as magnetic islands coalesce in a fractal manner, forming the largest plasmoid and its ejection from the diffusion region. With the development of particle-based numerical methods (the particle-in-cell [PIC] method and the lattice Boltzmann method [LBM]) and of high-performance computing technology over the last two decades, kinetic simulation has become an effective means of exploring the role of magnetic-field and electric-field turbulence in the acceleration and heating of charged particles, since all the physical aspects relating to turbulent reconnection are taken into account. In this paper, the LBM lattice (a DxQy grid) and extended distribution functions are added to the charged-particle-to-grid interpolation of a PIC finite-difference time-domain scheme on a Yee grid; the resulting hybrid PIC-LBM simulation tool is developed to investigate turbulent acceleration on TIANHE-2. Actual solar coronal conditions (L ≈ 10^5 km, B ≈ 50-500 G, T ≈ 5×10^6 K, n ≈ 10^8-10^9, m_i/m_e ≈ 500-1836) are applied to study the turbulent acceleration and heating in the fractal current sheet of a solar flare. At stage I, magnetic islands shrink due to magnetic tension forces; the shrinking halts when the kinetic energy of the accelerated particles is sufficient to stop further collapse, so the particle energy gain is naturally a large fraction of the released magnetic energy. At stages II and III, particles from the energized group come into the center of the diffusion region and stay longer in that area; in contrast, particles from the non-energized group only skim the outer part of the diffusion regions. At stage IV, the reconnection-generated nanoplasmoid (~200 km) stops expanding and carries enough energy to eject particles at constant velocity. Finally, the role of magnetic-field turbulence and electric-field turbulence in electron and ion acceleration in the diffusion regions of the solar-flare fractal current sheet is given.
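For context, the particle-to-grid interpolation step mentioned above typically looks like the first-order (cloud-in-cell) deposition below; this is a generic 1-D sketch, not the hybrid PIC-LBM implementation.

```python
import numpy as np

def deposit_charge(positions, charges, n_cells, dx):
    """First-order (cloud-in-cell) particle-to-grid deposition in 1-D:
    each particle shares its charge between the two nearest grid nodes."""
    rho = np.zeros(n_cells)
    left = np.floor(positions / dx).astype(int) % n_cells   # left node index
    frac = positions / dx - np.floor(positions / dx)        # offset within cell
    np.add.at(rho, left, charges * (1.0 - frac))            # weight to left node
    np.add.at(rho, (left + 1) % n_cells, charges * frac)    # weight to right node
    return rho / dx                                         # charge density

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, 100_000)       # particle positions, periodic box
rho = deposit_charge(pos, np.full(pos.size, 1e-6), n_cells=64, dx=1.0 / 64)
```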
Integrated multidisciplinary CAD/CAE environment for micro-electro-mechanical systems (MEMS)
NASA Astrophysics Data System (ADS)
Przekwas, Andrzej J.
1999-03-01
Computational design of MEMS involves several strongly coupled physical disciplines, including fluid mechanics, heat transfer, stress/deformation dynamics, electronics, electro/magneto statics, calorics, biochemistry and others. CFDRC is developing a new generation of multidisciplinary CAD systems for MEMS using high-fidelity field solvers on unstructured, solution-adaptive grids for a full range of disciplines. The software system, ACE + MEMS, includes all essential CAD tools: geometry/grid generation for multi-discipline, multi-equation solvers; a GUI; tightly coupled, configurable 3D field solvers for FVM, FEM and BEM; and a 3D visualization/animation tool. The flow/heat transfer/calorics/chemistry equations are solved with an unstructured adaptive FVM solver, stress/deformation is computed with a FEM STRESS solver, and a FAST BEM solver is used to solve linear heat transfer, electro/magnetostatics and elastostatics equations on adaptive polygonal surface grids. Tight multidisciplinary coupling and automatic interoperability between the tools were achieved by designing a comprehensive database structure and APIs for complete model definition. The virtual model definition is implemented in the data transfer facility, a publicly available tool described in this paper. The paper presents an overall description of the software architecture and the MEMS design flow in ACE + MEMS. It describes the current status, ongoing effort and future plans for the software. The paper also discusses new concepts of mixed-level and mixed-dimensionality capability, in which 1D microfluidic networks are simulated concurrently with 3D high-fidelity models of discrete components.
International Symposium on Grids and Clouds (ISGC) 2014
NASA Astrophysics Data System (ADS)
The International Symposium on Grids and Clouds (ISGC) 2014 will be held at Academia Sinica in Taipei, Taiwan, from 23 to 28 March 2014, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). “Bringing the data scientist to global e-Infrastructures” is the theme of ISGC 2014. The last decade has seen phenomenal growth in the production of data in all forms by all research communities, producing a deluge of data from which information and knowledge need to be extracted. Key to this success will be the data scientist - educated to use advanced algorithms, applications and infrastructures - collaborating internationally to tackle society’s challenges. ISGC 2014 will bring together researchers working in all aspects of data science from different disciplines around the world to collaborate and educate themselves in the latest achievements and techniques being used to tackle the data deluge. In addition to the regular workshops, technical presentations and plenary keynotes, ISGC this year will focus on how to grow the data science community by considering the educational foundation needed for tomorrow’s data scientist. Topics of discussion include Physics (including HEP) and Engineering Applications; Biomedicine & Life Sciences Applications; Earth & Environmental Sciences & Biodiversity Applications; Humanities & Social Sciences Applications; Virtual Research Environments (including middleware, tools, services, workflows, etc.); Data Management; Big Data; Infrastructure & Operations Management; Infrastructure Clouds and Virtualisation; Interoperability; Business Models & Sustainability; Highly Distributed Computing Systems; and High Performance & Technical Computing (HPTC).
Studies of Shock Wave Interactions with Homogeneous and Isotropic Turbulence
NASA Technical Reports Server (NTRS)
Briassulis, G.; Agui, J.; Watkins, C. B.; Andreopoulos, Y.
1998-01-01
A nearly homogeneous, nearly isotropic compressible turbulent flow interacting with a normal shock wave has been studied experimentally in a large shock tube facility. Spatial resolution of the order of 8 Kolmogorov viscous length scales was achieved in the measurements of turbulence. A variety of turbulence-generating grids provided a wide range of turbulence scales. Integral length scales were found to decrease substantially through the interaction with the shock wave in all investigated cases, with flow Mach numbers ranging from 0.3 to 0.7 and shock Mach numbers from 1.2 to 1.6. The outcome of the interaction depends strongly on the state of compressibility of the incoming turbulence. The length scales in the lateral direction are amplified at small Mach numbers and attenuated at large Mach numbers. Even at large Mach numbers, amplification of lateral length scales has been observed in the case of fine grids. In addition to the interaction with the shock, the present work has documented substantial compressibility effects in the incoming homogeneous and isotropic turbulent flow. The decay of Mach number fluctuations was found to follow a power law similar to that describing the decay of incompressible isotropic turbulence. It was found that the decay coefficient and the decay exponent decrease with increasing Mach number, while the virtual origin increases with increasing Mach number. A mechanism possibly responsible for these effects appears to be the inherently low growth rate of compressible shear layers emanating from the cylindrical rods of the grid.
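For reference, the power-law decay referred to above is conventionally written as follows; the symbols (decay coefficient $A$, decay exponent $n$, virtual origin $x_0$, grid mesh size $M$) follow the standard grid-turbulence convention and are not notation taken from the paper itself.

```latex
% Standard power-law form for the streamwise decay of fluctuations behind
% a grid; A, n and x_0 are the decay coefficient, decay exponent and
% virtual origin discussed in the abstract.
\overline{m'^2} \;=\; A \left( \frac{x - x_0}{M} \right)^{-n}
```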
Boosting CSP Production with Thermal Energy Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, P.; Mehos, M.
2012-06-01
Combining concentrating solar power (CSP) with thermal energy storage shows promise for increasing grid flexibility by providing firm system capacity with a high ramp rate and acceptable part-load operation. When backed by energy storage capability, CSP can supplement photovoltaics by adding generation from solar resources during periods of low solar insolation. The falling cost of solar photovoltaic (PV)-generated electricity has led to a rapid increase in the deployment of PV and projections that PV could play a significant role in the future U.S. electric sector. The solar resource itself is virtually unlimited; however, the actual contribution of PV electricity is limited by several factors related to the current grid. The first is the limited coincidence between the solar resource and normal electricity demand patterns. The second is the limited flexibility of conventional generators to accommodate this highly variable generation resource. At high penetration of solar generation, increased grid flexibility will be needed to fully utilize the variable and uncertain output from PV generation and to shift energy production to periods of high demand or reduced solar output. Energy storage is one way to increase grid flexibility, and many storage options are available or under development. In this article, however, we consider a technology already beginning to be used at scale - thermal energy storage (TES) deployed with concentrating solar power (CSP). PV and CSP are both deployable in areas of high direct normal irradiance such as the U.S. Southwest. The role of these two technologies is dependent on their costs and relative value, including how their value to the grid changes as a function of what percentage of total generation they contribute to the grid, and how they may actually work together to increase overall usefulness of the solar resource. Both PV and CSP use solar energy to generate electricity. A key difference is the ability of CSP to utilize high-efficiency TES, which turns CSP into a partially dispatchable resource. The addition of TES produces additional value by shifting the delivery of solar energy to periods of peak demand, providing firm capacity and ancillary services, and reducing integration challenges. Given the dispatchability of CSP enabled by TES, it is possible that PV and CSP are at least partially complementary. The dispatchability of CSP with TES can enable higher overall penetration of the grid by solar energy by providing solar-generated electricity during periods of cloudy weather or at night, when PV-generated power is unavailable. Such systems also have the potential to improve grid flexibility, thereby enabling greater penetration of PV energy (and other variable generation sources such as wind) than if PV were deployed without CSP.
Johnson, P.R.; Kattan, F.H.; Wooden, J.L.
2001-01-01
The Asir terrane consists of north-trending belts of variably metamorphosed volcanic, sedimentary, and plutonic rocks that are cut by numerous shear zones (Fig. 1). Previous workers interpreted the shear zones as sutures, structures that modify earlier sutures, or structures that define the margins of tectonic belts across which there are significant lithologic differences and along which there may have been major transposition (Frisch and Al-Shanti, 1977; Greenwood et al., 1982; Brown et al., 1989). SHRIMP data from zircons (Table 1) and sense-of-shear data recently acquired from selected shear zones in the terrane help to constrain the minimum ages and kinematics of these shearing events and lead to an overall model of terrane assembly that is more complex than previously proposed.
Exploring the Digital Universe with Europe's Astrophysical Virtual Observatory
NASA Astrophysics Data System (ADS)
2001-12-01
Vast Databanks at the Astronomers' Fingertips. Summary: A new European initiative called the Astrophysical Virtual Observatory (AVO) is being launched to provide astronomers with a breathtaking potential for new discoveries. It will enable them to seamlessly combine the data from both ground- and space-based telescopes which are making observations of the Universe across the whole range of wavelengths - from high-energy gamma rays through the ultraviolet and visible to the infrared and radio. The aim of the Astrophysical Virtual Observatory (AVO) project, which started on 15 November 2001, is to allow astronomers instant access to the vast databanks now being built up by the world's observatories, which are forming what is, in effect, a "digital sky". Using the AVO, astronomers will, for example, be able to retrieve the elusive traces of the passage of an asteroid as it passes near the Earth, and so predict its future path and perhaps warn of a possible impact. When a giant star comes to the end of its life in a cataclysmic explosion called a supernova, they will be able to access the digital sky and pinpoint the star shortly before it exploded, so adding invaluable data to the study of the evolution of stars. Background information on the Astrophysical Virtual Observatory is available in the Appendix. PR Photo 34a/01: The Astrophysical Virtual Observatory - an artist's impression. The rapidly accumulating database: ESO PR Photo 34a/01 shows an artist's impression of the Astrophysical Virtual Observatory. Modern observatories observe the sky continuously and data accumulate remorselessly in the digital archives. The growth rate is impressive, and many hundreds of terabytes of data - corresponding to many thousands of billions of pixels - are already available to scientists. The real sky is being digitally reconstructed in the databanks! The richness and complexity of the data and information available to astronomers is overwhelming. This has created a major problem: how can astronomers manage, distribute and analyse this great wealth of data? The Astrophysical Virtual Observatory (AVO) will allow astronomers to overcome these challenges and enable them to "put the Universe online". AVO is supported by the European Commission: The AVO is a three-year project, funded by the European Commission under its Research and Technological Development (RTD) scheme, to design and implement a virtual observatory for the European astronomical community. The European Commission awarded a contract valued at 4 million Euro for the AVO project, starting 15 November 2001. AVO will provide software tools to enable astronomers to access the multi-wavelength data archives over the Internet and so give them the capability to resolve fundamental questions about the Universe by probing the digital sky. Equivalent searches of the 'real' sky would, in comparison, be both costly and take far too long. Towards a Global Virtual Observatory: The need for virtual observatories has also been recognised by other astronomical communities. The National Science Foundation in the USA has awarded 10 million Dollars (approx. 11.4 million Euro) for a National Virtual Observatory (NVO). The AVO project team has formed a close alliance with the NVO and both teams have representatives on their respective committees.
It is clear to the NVO and AVO communities that there are no intrinsic boundaries to the virtual observatory concept and that all astronomers should be working towards a truly global virtual observatory that will enable new science to be carried out on the wealth of astronomical data held in the growing number of first-class international astronomical archives. The AVO involves six partner organisations led by the European Southern Observatory (ESO) in Munich (Germany). The other partner organisations are the European Space Agency (ESA), the United Kingdom's ASTROGRID consortium, the CNRS-supported Centre de Données Astronomiques de Strasbourg (CDS) at the University Louis Pasteur in Strasbourg (France), the CNRS-supported TERAPIX astronomical data centre at the Institut d'Astrophysique in Paris, and the Jodrell Bank Observatory of the Victoria University of Manchester (UK). Note [1]: This is a joint Press Release issued by the European Southern Observatory (ESO), the Hubble European Space Agency Information Centre, ASTROGRID, CDS, TERAPIX/CNRS and the University of Manchester. A 13-minute background video (broadcast PAL) is available from ESO PR and the Hubble European Space Agency Information Centre (addresses below). This will also be transmitted via satellite on Wednesday 12 December 2001 from 12:00 to 12:15 CET on the "ESA TV Service", cf. http://television.esa.int. An international conference, "Toward an International Virtual Observatory", will take place at ESO (Garching, Germany) on June 10-14, 2002. AVO Contacts: Peter Quinn, European Southern Observatory, Garching, Germany, Tel.: +49-89-3200-6509, email: pjq@eso.org; Piero Benvenuti, Space Telescope-European Coordinating Facility, Garching, Germany, Tel.: +49-89-3200-6290, email: pbenvenu@eso.org; Andy Lawrence (on behalf of The ASTROGRID Consortium), Institute for Astronomy, University of Edinburgh, United Kingdom, Tel.: +44-131-668-8346/56, email: al@roe.ac.uk; Francoise Genova, Centre de Données Astronomiques de Strasbourg (CDS), France, Tel.: +33-390-24-24-76, email: genova@astro.u-strasbg.fr; Yannick Mellier, CNRS, Delegation Paris A (CNRSDR01-Terapix)/IAP/INSU, France, Tel.: +33-1-44-32-81-40, email: mellier@iap.fr; Phil Diamond, University of Manchester/Jodrell Bank Observatory, United Kingdom, Tel.: +44-147-757-2625, email: pdiamond@jb.man.ac.uk. PR Contacts: Richard West, European Southern Observatory, Garching, Germany, Tel.: +49-89-3200-6276, email: rwest@eso.org; Lars Lindberg Christensen, Hubble European Space Agency Information Centre, Garching, Germany, Tel.: +49-89-3200-6306 or +49-173-38-72-621, email: lars@eso.org; Ray Footman, The ASTROGRID Consortium/University of Edinburgh, United Kingdom, Tel.: +44-131-650-2249, email: r.footman@ed.ac.uk; Philippe Chauvin, Terapix/CDS, CNRS, Delegation Paris A, IAP/INSU, France, Tel.: +33 1 44 96 43 36, email: philippe.chauvin@cnrs-dir.fr; Agnes Villanueva, University of Strasbourg, France, Tel.: +33 3 90 24 11 35, email: agnes.villanueva@adm-ulp.u-strasbg.fr; Ian Morison, University of Manchester/Jodrell Bank Observatory, United Kingdom, Tel.: +44 1477 572610, email: im@jb.man.ac.uk. Appendix: Introduction to Europe's Astrophysical Virtual Observatory (AVO). The Digital Data Revolution: Over the past thirty years, astronomers have moved from photographic and analogue techniques towards the use of high-speed, digital instruments connected to specialised telescopes to study the Universe.
Whether these instruments are onboard spacecraft or located at terrestrial observatories, the data they produce are stored digitally on computer systems for later analysis. Two Challenges: This data revolution has created two challenges for astronomers. Firstly, as the capability of digital detector systems has advanced, the volume of digital data that astronomical facilities are producing has expanded greatly. The rate of growth of the volume of stored data far exceeds the rate of increase in the performance of computer systems or storage devices. Secondly, astronomers have realised that many important insights into the deepest secrets of the Universe can come from combining information obtained at many wavelengths into a consistent and comprehensive physical picture. However, because the datasets from different parts of the spectrum come from different observatories using different instruments, the data are not easily combined. To unite data from different observatories, bridges must be built between digital archives to allow them to share data and "interoperate" - an important and challenging task. The Human Factor: These challenges are not only technological. Our brains are not equipped to analyse simultaneously, for instance, the millions and millions of images available. Astronomers must adapt and learn to deal with such diverse and extensive sets of data. The "digital sky" has the potential to become a vital tool with novel and fascinating capabilities that are essential for astronomers to make progress in their understanding of the Cosmos. But astronomers must be able to find the relevant information quickly and efficiently. Currently, the data needed by a particular research program may well be stored in the archives already, but the tools and methods have not yet been developed to extract the relevant information from the flood of images available. A new way of thinking, a new frame of mind and a new approach are needed. The Astrophysical Virtual Observatory: The Astrophysical Virtual Observatory (AVO) will allow astronomers to overcome the challenges and extract data from the digital sky, thus "putting the Universe online". Just as a search engine helps us to find information on the Internet, astronomers need sophisticated "search engines" as well as other tools to find and interpret the information. "We're drowning in information and starving for knowledge", a Yale University librarian once said. Or to paraphrase a popular TV series: "The information is out there, but you have to find it!" Using the latest in computer technology, data storage and analysis techniques, AVO will maximise the potential for new scientific insights from the stored data by making them available in a readily accessible and seamlessly unified form to professional researchers, amateur astronomers and students. Users of AVO will have immense multi-wavelength vistas of the digital Universe at their fingertips and the potential to make breathtaking new discoveries. Virtual observatories signal a new era, where data collected by a multitude of sophisticated telescopes can be used globally and repeatedly to achieve substantial progress in the quest for knowledge. The AVO project, funded by the European Commission, is a three-year study of the design and implementation of a virtual observatory for European astronomy.
A virtual observatory is a collection of connected data archives and software tools that utilise the Internet to form a scientific research environment in which new multi-wavelength astronomical research programs can be conducted. In much the same way as a real observatory consists of telescopes, each with a collection of unique astronomical instruments, the virtual observatory consists of a collection of data centres, each with unique collections of astronomical data, software systems and processing capabilities. The programme will implement and test a prototype virtual observatory, focussing on the key areas of scientific requirements, interoperability and new technologies such as the GRID, needed to link powerful computers to the newly formed large data repositories. The GRID and the Future of the Internet: The technical problems astronomers have to solve are similar to those being worked on by particle physicists, by biologists, and by commercial companies who want to search and fill customer databases across the world. The emerging idea is that of the GRID, where computers collaborate across the Internet. The World Wide Web made words and pictures available to anybody at the click of a mouse. The GRID will do the same for data, and for computer processing power: anybody can have the power of a supercomputer sitting on their desktop. The Astrophysical Virtual Observatory, and GRID projects like the ASTROGRID project in the United Kingdom (funding 5 million UK Pounds or 8 million Euro), are closely linked to these developments.
From GCM grid cell to agricultural plot: scale issues affecting modelling of climate impact
Baron, Christian; Sultan, Benjamin; Balme, Maud; Sarr, Benoit; Traore, Seydou; Lebel, Thierry; Janicot, Serge; Dingkuhn, Michael
2005-01-01
General circulation models (GCM) are increasingly capable of making relevant predictions of seasonal and long-term climate variability, thus improving prospects of predicting impact on crop yields. This is particularly important for semi-arid West Africa, where climate variability and drought threaten food security. Translating GCM outputs into attainable crop yields is difficult because GCM grid boxes are of larger scale than the processes governing yield, which involve the partitioning of rain among runoff, evaporation, transpiration, drainage and storage at the plot scale. This study analyses the bias introduced to crop simulation when climatic data are aggregated spatially or in time, resulting in loss of relevant variation. A detailed case study was conducted using historical weather data for Senegal, applied to the crop model SARRA-H (version for millet). The study was then extended to a 10°N–17°N climatic gradient and a 31-year climate sequence to evaluate yield sensitivity to the variability of solar radiation and rainfall. Finally, a down-scaling model called LGO (Lebel–Guillot–Onibon), generating local rain patterns from grid-cell means, was used to restore the variability lost by aggregation. Results indicate that forcing the crop model with spatially aggregated rainfall causes yield overestimations of 10–50% in dry latitudes, but nearly none in humid zones, due to a biased fraction of rainfall available for crop transpiration. Aggregation of solar radiation data caused significant bias in wetter zones, where radiation was limiting yield. Where climatic gradients are steep, these two situations can occur within the same GCM grid cell. Disaggregation of grid-cell means into a pattern of virtual synoptic stations having high-resolution rainfall distribution removed much of the bias caused by aggregation and gave realistic simulations of yield. It is concluded that coupling of GCM outputs with plot-level crop models can cause large systematic errors due to scale incompatibility. These errors can be avoided by transforming GCM outputs, especially rainfall, to simulate the variability found at plot level. PMID:16433096
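The aggregation bias described here is essentially a nonlinearity effect: when yield responds nonlinearly (here, concavely) to rainfall, the yield computed from the average rainfall is not the average of plot-level yields. A toy illustration with an invented response curve, not SARRA-H:

```python
import numpy as np

# Invented saturating yield response: runoff losses grow with rain intensity.
def yield_response(rain_mm):
    return 100.0 * (1.0 - np.exp(-rain_mm / 40.0))

rng = np.random.default_rng(1)
local_rain = rng.exponential(scale=20.0, size=10_000)   # patchy convective rain

yield_of_mean = yield_response(local_rain.mean())       # forcing with the grid-cell mean
mean_of_yield = yield_response(local_rain).mean()       # plot-scale truth, then averaged
print(f"grid-cell forcing: {yield_of_mean:.1f}, plot-scale mean: {mean_of_yield:.1f}")
```

With these invented numbers the grid-cell forcing overestimates yield by roughly a fifth, the same direction of bias the study reports for dry latitudes.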
The application of color display techniques for the analysis of Nimbus infrared radiation data
NASA Technical Reports Server (NTRS)
Allison, L. J.; Cherrix, G. T.; Ausfresser, H.
1972-01-01
A color enhancement system designed for the Applications Technology Satellite (ATS) spin scan experiment has been adapted for the analysis of Nimbus infrared radiation measurements. For a given scene recorded on magnetic tape by the Nimbus scanning radiometers, a virtually unlimited number of color images can be produced at the ATS Operations Control Center from a color selector paper tape input. Linear image interpolation has produced radiation analyses in which each brightness-color interval has a smooth boundary without any mosaic effects. An annotated latitude-longitude gridding program makes it possible to precisely locate geophysical parameters, which permits accurate interpretation of pertinent meteorological, geological, hydrological, and oceanographic features.
Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center
NASA Astrophysics Data System (ADS)
Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.
2012-12-01
Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies, and the Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM&MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.
Dauguet, Julien; Bock, Davi; Reid, R Clay; Warfield, Simon K
2007-01-01
3D reconstruction from serial 2D microscopy images depends on non-linear alignment of serial sections. For some structures, such as the neuronal circuitry of the brain, very large images at very high resolution are necessary to permit reconstruction. These very large images prevent the direct use of classical registration methods. We propose in this work a method to deal with the non-linear alignment of arbitrarily large 2D images using the finite-support properties of cubic B-splines. After initial affine alignment, each large image is split into a grid of smaller overlapping sub-images, which are individually registered using cubic B-spline transformations. Inside the overlapping regions between neighboring sub-images, the coefficients of the knots controlling the B-spline deformations are blended, to create a virtual large grid of knots for the whole image. The sub-images are resampled individually, using the new coefficients, and assembled together into a final large aligned image. We evaluated the method on a series of large transmission electron microscopy images, and our results indicate significant improvements compared to both manual and affine alignment.
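A minimal numpy sketch of the coefficient-blending idea for two horizontally adjacent sub-images, assuming a simple linear cross-fade over the shared knot columns (the paper does not specify the blending weights):

```python
import numpy as np

def blend_coefficients(left, right, overlap):
    """Merge B-spline knot-coefficient grids of two adjacent sub-images.
    left, right: (rows, cols) coefficient arrays; `overlap` shared columns
    are linearly cross-faded to form one virtual grid of knots."""
    w = np.linspace(1.0, 0.0, overlap)                 # weight for the left image
    blended = w * left[:, -overlap:] + (1.0 - w) * right[:, :overlap]
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])

a = np.random.rand(8, 12)   # toy coefficient grids standing in for two sub-images
b = np.random.rand(8, 12)
merged = blend_coefficients(a, b, overlap=4)   # shape (8, 20)
```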
NASA Astrophysics Data System (ADS)
Tang, S. L.; Antonia, R. A.; Djenidi, L.; Danaila, L.; Zhou, Y.
2016-09-01
The transport equation for the mean scalar dissipation rate $\bar{\varepsilon}_\theta$ is derived by applying the limit at small separations to the generalized form of Yaglom's equation in two types of flows: those dominated mainly by a decay of energy in the streamwise direction and those which are forced, through a continuous injection of energy at large scales. In grid turbulence, the imbalance between the production of $\bar{\varepsilon}_\theta$ due to stretching of the temperature field and the destruction of $\bar{\varepsilon}_\theta$ by the thermal diffusivity is governed by the streamwise advection of $\bar{\varepsilon}_\theta$ by the mean velocity. This imbalance is intrinsically different from that in stationary forced periodic box turbulence (or SFPBT), which is virtually negligible. In essence, the different types of imbalance represent different constraints imposed by the large-scale motion on the relation between the so-called mixed velocity-temperature derivative skewness $S_T$ and the scalar enstrophy destruction coefficient $G_\theta$ in different flows, thus resulting in non-universal approaches of $S_T$ towards a constant value as $Re_\lambda$ increases. The data for $S_T$ collected in grid turbulence and in SFPBT indicate that the magnitude of $S_T$ is bounded, this limit being close to 0.5.
A parallel adaptive mesh refinement algorithm
NASA Technical Reports Server (NTRS)
Quirk, James J.; Hanebutte, Ulf R.
1993-01-01
Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.
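A schematic of the kind of simple message-passing kernel such a parallelization can rest on, sketched here with mpi4py rather than the original routines: a master rank deals grid blocks out to workers.

```python
# Master/worker distribution of AMR blocks via message passing (mpi4py sketch,
# not the original implementation). Run with e.g.: mpiexec -n 4 python amr.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:                                   # master: deal blocks out round-robin
    blocks = [{"id": b, "level": b % 3} for b in range(12)]
    for i, block in enumerate(blocks):
        comm.send(block, dest=1 + i % (comm.Get_size() - 1), tag=1)
    for dest in range(1, comm.Get_size()):
        comm.send(None, dest=dest, tag=0)       # shutdown token
else:                                           # worker: integrate received blocks
    while True:
        block = comm.recv(source=0, tag=MPI.ANY_TAG)
        if block is None:
            break
        # ... advance the solution on this block, exchange ghost cells, ...
```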
Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo
2014-04-21
Software defined networking (SDN) has become a focus of the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. Optical transport networks are also regarded as an important application scenario for SDN, which is adopted as the enabling technology of the data communication network (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large-scale optical networks, which is very important for technology selection in future optical network deployment, has not been evaluated up to now. In this paper we have built a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters, including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time, have been demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as evidence to guide future network deployment.
NASA Technical Reports Server (NTRS)
Bradford, Robert N.
2002-01-01
Currently, and in the past, dedicated communication circuits and "network services" with very stringent performance requirements have been used to support manned and unmanned mission-critical ground operations at GSFC, JSC, MSFC, KSC and other NASA facilities. Because of the evolution of network technology, it is time to investigate other approaches to providing mission services for space ground operations; the current NASA approach has not kept pace with the evolution of network technologies. In the past decade, various research and education networks dedicated to scientific and educational endeavors have emerged, as well as commercial networking providers, that employ advanced networking technologies. These technologies have significantly changed networking in recent years. Significant advances in network routing techniques, various topologies and equipment have made commercial networks very stable and virtually error-free. Advances in Dense Wavelength Division Multiplexing will provide tremendous amounts of bandwidth for the future. The question is: do these networks, which are controlled and managed centrally, provide a level of service that equals the stringent NASA performance requirements? If they do, what are the implication(s) of using them for critical space-based ground operations as they are, without adding high-cost contractual performance requirements? A second question is the feasibility of applying the emerging grid technology to space operations. Is it feasible to develop a Space Operations Grid and/or a Space Science Grid? Since these networks' connectivity is substantial, both nationally and internationally, development of these sorts of grids may be feasible. The concept of research and education networks has evolved to the international community as well. Currently there are international RENs connecting the US in Chicago to and from Europe, South America, Asia and the Pacific Rim, Russia and Canada, and most countries in these areas have their own research and education networks, as do many states in the USA.
NASA Astrophysics Data System (ADS)
Yoon, Do-Kun; Jung, Joo-Young; Suh, Tae Suk
2014-05-01
In order to confirm the possibility of field application of a collimator of a different type from the multileaf collimator (MLC), we constructed a grid-type multi-layer pixel collimator (GTPC) using the Monte Carlo N-Particle eXtended (MCNPX) code. In this research, a number of factors related to the performance of the GTPC were evaluated using simulated output data for a basic MLC model. A layer comprised a 1024-pixel collimator (5.0 × 5.0 mm² pixels) that could operate individually as a grid-type collimator (32 × 32). A 30-layer collimator was constructed so that a specific portal form could be obtained by passing radiation through the opening and closing of each pixel cover. The radiation attenuation level and the leakage were compared between the GTPC modality simulation and the MLC modeling (tungsten, 17.50 g/cm³, 5.0 × 70.0 × 160.0 mm³) currently used for a radiation field. Comparisons of the portal imaging, the lateral dose profile from a virtual water phantom, the dependence of the performance on the number of layers, the radiation intensity modulation verification, and the geometric error between the GTPC and the MLC were made using the MCNPX simulation data. From the simulation data, the intensity modulation of the GTPC showed a faster response than the MLC's (29.6%). In addition, the agreement between the doses that should be delivered to the target region was measured as 97.0%, and the GTPC system had an error below 0.01%, which is identical to that of the MLC. A Monte Carlo simulation of the GTPC could be useful for verification of application possibilities. Because a line artifact is caused by the grid frame and the folded cover, a linear dose transfer type is chosen for the operation of this system. However, the results for the GTPC's performance showed that its methods of effective intensity modulation and specific geometric beam shaping differ from those of the MLC modality.
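For a feel of the attenuation numbers involved, below is a back-of-the-envelope exponential-attenuation estimate through a closed pixel stack; the mass attenuation coefficient is an approximate textbook value and the geometry is simplified, so none of these numbers come from the paper.

```python
import math

# Rough transmission through a stack of closed tungsten pixel covers.
mu_over_rho = 0.066      # cm^2/g, tungsten near 1 MeV (approximate NIST value)
density = 17.50          # g/cm^3, the alloy density quoted in the abstract
layer_thickness = 0.5    # cm, one 5.0 mm layer (simplified geometry)
n_layers = 30

mu = mu_over_rho * density                                  # 1/cm
transmission = math.exp(-mu * layer_thickness * n_layers)   # Beer-Lambert law
print(f"transmission through {n_layers} closed layers: {transmission:.1e}")
```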
NASA Astrophysics Data System (ADS)
De Salvo, A.; Kataoka, M.; Sanchez Pineda, A.; Smirnov, Y.
2015-12-01
The ATLAS Installation System v2 is the evolution of the original system, used since 2003. The original tool has been completely re-designed in terms of database backend and components, adding support for submission to multiple backends, including the original Workload Management Service (WMS) and the new PanDA modules. The database engine has been changed from plain MySQL to Galera/Percona, and the table structure has been optimized to allow a full High-Availability (HA) solution over the Wide Area Network. The servlets, running on each frontend, have also been decoupled from local settings to allow easy scalability of the system, including the possibility of an HA system with multiple sites. The clients can also be run in multiple copies and in different geographical locations, and take care of sending the installation and validation jobs to the target Grid or Cloud sites. Moreover, the Installation Database is used as a source of parameters by the automatic agents running in CVMFS, in order to install the software and distribute it to the sites. The system has been in production for ATLAS since 2013, with the INFN Roma Tier 2 and the CERN Agile Infrastructure as the main HA sites. The Light Job Submission Framework for Installation (LJSFi) v2 engine interfaces directly with PanDA for job management, the ATLAS Grid Information System (AGIS) for the site parameter configurations, and CVMFS for both the core components and the installation of the software itself. LJSFi2 is also able to use other plugins and is essentially Virtual Organization (VO) agnostic, so it can be directly used and extended to cope with the requirements of any Grid- or Cloud-enabled VO. In this work we present the architecture, performance, status and possible evolutions of the system for the LHC Run2 and beyond.
Discovery of Marine Datasets and Geospatial Metadata Visualization
NASA Astrophysics Data System (ADS)
Schwehr, K. D.; Brennan, R. T.; Sellars, J.; Smith, S.
2009-12-01
NOAA's National Geophysical Data Center (NGDC) provides the deep archive of US multibeam sonar hydrographic surveys. NOAA stores the data as Bathymetric Attributed Grids (BAG; http://www.opennavsurf.org/), which are HDF5-formatted files containing gridded bathymetry, gridded uncertainty, and XML metadata. While NGDC provides the deep store and a basic ESRI ArcIMS interface to the data, additional tools need to be created to increase the frequency with which researchers discover hydrographic surveys that might be beneficial for their research. Using Open Source tools, we have created a draft of a Google Earth visualization of NOAA's complete collection of BAG files as of March 2009. Each survey is represented as a bounding box, an optional preview image of the survey data, and a pop-up placemark. The placemark contains a brief summary of the metadata and links to directly download the BAG survey files and the complete metadata file. Each survey is time-tagged so that users can search both in space and time for surveys that meet their needs. By creating this visualization, we aim to make the entire process of data discovery, validation of relevance, and download much more efficient for research scientists who may not be familiar with NOAA's hydrographic survey efforts or the BAG format. In the process of creating this demonstration, we have identified a number of improvements that can be made to the hydrographic survey process in order to make the results easier to use, especially with respect to metadata generation. With the combination of the NGDC deep archiving infrastructure, a Google Earth virtual globe visualization, and GeoRSS feeds of updates, we hope to increase the utilization of these high-quality gridded bathymetry data. This workflow applies equally well to LIDAR topography and bathymetry. Additionally, with proper referencing and geotagging in journal publications, we hope to close the loop and help the community create a true “Geospatial Scholar” infrastructure.
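A minimal sketch of producing one such time-tagged survey placemark and bounding box with the simplekml package; the survey identifier, coordinates, and download URL are invented placeholders.

```python
import simplekml

kml = simplekml.Kml(name="NOAA BAG surveys (demo)")

# Bounding box of the survey as a closed polygon outline (invented extent).
box = kml.newpolygon(name="Survey H12345 extent")
box.outerboundaryis = [(-70.80, 42.30), (-70.60, 42.30),
                       (-70.60, 42.45), (-70.80, 42.45), (-70.80, 42.30)]

# Pop-up placemark with a metadata summary and a direct-download link.
pnt = kml.newpoint(name="H12345", coords=[(-70.70, 42.375)])
pnt.description = ("Gridded bathymetry + uncertainty (BAG). "
                   "<a href='https://example.gov/H12345.bag'>download</a>")
pnt.timestamp.when = "2009-03"   # lets the Google Earth time slider filter surveys

kml.save("bag_surveys.kml")
```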
NASA Astrophysics Data System (ADS)
Lammers, M.
2016-12-01
Advancements in the capabilities of JavaScript frameworks and web browsing technology make online visualization of large geospatial datasets viable. Commonly this is done using static image overlays, pre-rendered animations, or cumbersome geoservers. These methods can limit interactivity and/or place a large burden on server-side post-processing and storage of data. Geospatial data, and satellite data specifically, benefit from being visualized both on and above a three-dimensional surface. The open-source JavaScript framework CesiumJS, developed by Analytical Graphics, Inc., leverages the WebGL protocol to do just that. It has entered the void left by the abandonment of the Google Earth Web API, and it serves as a capable and well-maintained platform upon which data can be displayed. This paper will describe the technology behind the two primary products developed as part of the NASA Precipitation Processing System STORM website: GPM Near Real Time Viewer (GPMNRTView) and STORM Virtual Globe (STORM VG). GPMNRTView reads small post-processed CZML files derived from various Level 1 through 3 near real-time products. For swath-based products, several brightness temperature channels or precipitation-related variables are available for animating in virtual real-time as the satellite observed them on and above the Earth's surface. With grid-based products, only precipitation rates are available, but the grid points are visualized in such a way that they can be interactively examined to explore raw values. STORM VG reads values directly off the HDF5 files, converting the information into JSON on the fly. All data points both on and above the surface can be examined here as well. Both the raw values and, if relevant, elevations are displayed. Surface and above-ground precipitation rates from select Level 2 and 3 products are shown. Examples from both products will be shown, including visuals from high impact events observed by GPM constellation satellites.
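For readers unfamiliar with CZML, here is a minimal sketch of the kind of document a CesiumJS client such as GPMNRTView could load: a JSON array whose first packet declares the document, followed by time-tagged entities. IDs, times, and coordinates are invented for illustration.

```python
# Minimal CZML sketch: a time-animated point above the Earth's surface.
import json

czml = [
    {"id": "document", "name": "demo", "version": "1.0"},
    {
        "id": "swath-sample",
        "availability": "2016-08-01T00:00:00Z/2016-08-01T01:00:00Z",
        "position": {
            "epoch": "2016-08-01T00:00:00Z",
            # repeating groups of: time offset (s), lon (deg), lat (deg), height (m)
            "cartographicDegrees": [0,    -80.0, 25.0, 407000.0,
                                    1800, -78.5, 27.0, 407000.0,
                                    3600, -77.0, 29.0, 407000.0],
        },
        "point": {"pixelSize": 8},
    },
]

with open("sample.czml", "w") as f:
    json.dump(czml, f, indent=2)
```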
NASA Technical Reports Server (NTRS)
Lammers, Matthew
2016-01-01
Advancements in the capabilities of JavaScript frameworks and web browsing technology make online visualization of large geospatial datasets viable. Commonly this is done using static image overlays, pre-rendered animations, or cumbersome geoservers. These methods can limit interactivity and/or place a large burden on server-side post-processing and storage of data. Geospatial data, and satellite data specifically, benefit from being visualized both on and above a three-dimensional surface. The open-source JavaScript framework CesiumJS, developed by Analytical Graphics, Inc., leverages the WebGL protocol to do just that. It has entered the void left by the abandonment of the Google Earth Web API, and it serves as a capable and well-maintained platform upon which data can be displayed. This paper will describe the technology behind the two primary products developed as part of the NASA Precipitation Processing System STORM website: GPM Near Real Time Viewer (GPMNRTView) and STORM Virtual Globe (STORM VG). GPMNRTView reads small post-processed CZML files derived from various Level 1 through 3 near real-time products. For swath-based products, several brightness temperature channels or precipitation-related variables are available for animating in virtual real-time as the satellite observed them on and above the Earth's surface. With grid-based products, only precipitation rates are available, but the grid points are visualized in such a way that they can be interactively examined to explore raw values. STORM VG reads values directly off the HDF5 files, converting the information into JSON on the fly. All data points both on and above the surface can be examined here as well. Both the raw values and, if relevant, elevations are displayed. Surface and above-ground precipitation rates from select Level 2 and 3 products are shown. Examples from both products will be shown, including visuals from high impact events observed by GPM constellation satellites.
Automated detection of epileptic ripples in MEG using beamformer-based virtual sensors
NASA Astrophysics Data System (ADS)
Migliorelli, Carolina; Alonso, Joan F.; Romero, Sergio; Nowak, Rafał; Russi, Antonio; Mañanas, Miguel A.
2017-08-01
Objective. In epilepsy, high-frequency oscillations (HFOs) are expressively linked to the seizure onset zone (SOZ). The detection of HFOs in the noninvasive signals from scalp electroencephalography (EEG) and magnetoencephalography (MEG) is still a challenging task. The aim of this study was to automate the detection of ripples in MEG signals by reducing the high-frequency noise using beamformer-based virtual sensors (VSs) and applying an automatic procedure for exploring the time-frequency content of the detected events. Approach. Two hundred seconds of MEG signal and simultaneous iEEG were selected from nine patients with refractory epilepsy. A two-stage algorithm was implemented. First, beamforming was applied to the whole head to delimit the region of interest (ROI) within a coarse grid of MEG-VS. Second, a beamformer using a finer grid in the ROI was computed. The automatic detection of ripples was performed using the time-frequency response provided by the Stockwell transform. Performance was evaluated through comparisons with simultaneous iEEG signals. Main results. ROIs were located within the seizure-generating lobes in the nine subjects. Precision and sensitivity values were 79.18% and 68.88%, respectively, considering iEEG-detected events as benchmarks. A higher number of ripples were detected inside the ROI compared to the same region in the contralateral lobe. Significance. The evaluation of interictal ripples using non-invasive techniques can help in the delimitation of the epileptogenic zone and guide placement of intracranial electrodes. This is the first study that automatically detects ripples in MEG in the time domain located within the clinically expected epileptic area, taking into account the time-frequency characteristics of the events through the whole signal spectrum. The algorithm was tested against intracranial recordings, the current gold standard. Further studies should explore this approach to enable the localization of noninvasively recorded HFOs to help during pre-surgical planning and to reduce the need for invasive diagnostics.
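The paper's detector works on the Stockwell-transform time-frequency response; as a rough illustration of ripple-band event detection in general (a generic band-pass/envelope scheme, not the authors' algorithm), consider this sketch:

```python
# Generic ripple-band detector sketch: band-pass to the ripple band, take
# the Hilbert envelope, and report supra-threshold segments. Parameters
# (band, threshold, minimum duration) are illustrative defaults.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_ripples(x, fs, band=(80.0, 250.0), n_sd=3.0, min_dur=0.006):
    """Return (start, end) times in seconds of supra-threshold events."""
    b, a = butter(4, [band[0] / (fs / 2.0), band[1] / (fs / 2.0)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, x)))        # ripple-band envelope
    above = env > env.mean() + n_sd * env.std()     # threshold-crossing mask
    d = np.diff(above.astype(int))
    onsets = np.flatnonzero(d == 1) + 1
    offsets = np.flatnonzero(d == -1) + 1
    if above[0]:
        onsets = np.r_[0, onsets]
    if above[-1]:
        offsets = np.r_[offsets, above.size]
    return [(s / fs, e / fs) for s, e in zip(onsets, offsets)
            if (e - s) / fs >= min_dur]
```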
Lebedev, Mikhail A; Pimashkin, Alexey; Ossadtchi, Alexei
2018-01-01
According to the currently prevailing theory, the hippocampal formation constructs and maintains cognitive spatial maps. Most of the experimental evidence for this theory comes from studies on navigation in laboratory rats and mice, typically male animals. While these animals exhibit a rich repertoire of behaviors associated with navigation, including locomotion, head movements, whisking, sniffing, rearing and scent marking, the contribution of these behavioral patterns to hippocampal spatially-selective activity has not been sufficiently studied. Instead, many publications have considered animal position in space as the major variable that affects the firing of hippocampal place cells and entorhinal grid cells. Here we argue that future work should focus on a more detailed examination of the different behaviors exhibited during navigation to better understand the mechanism of spatial tuning in hippocampal neurons. As an inquiry in this direction, we have analyzed data from two datasets, shared online, containing recordings from rats navigating in square and round arenas. Our analyses revealed patchy navigation patterns, evident from the spatial maps of animal position, velocity and acceleration. Moreover, grid cells available in the datasets exhibited periodicity similar to that of the navigation parameters. These findings indicate that activity of grid cells could affect navigation parameters and/or vice versa. Additionally, we speculate that scent marks left by navigating animals could contribute to neuronal responses while rats and mice sniff their environment; the act of sniffing could modulate neuronal discharges even in virtual visual environments. Accordingly, we propose that future experiments should contain additional controls for navigation patterns, whisking, sniffing and maps composed of scent marks.
Alverson, Dale C; Saiki, Stanley M; Jacobs, Joshua; Saland, Linda; Keep, Marcus F; Norenberg, Jeffrey; Baker, Rex; Nakatsu, Curtis; Kalishman, Summers; Lindberg, Marlene; Wax, Diane; Mowafi, Moad; Summers, Kenneth L; Holten, James R; Greenfield, John A; Aalseth, Edward; Nickles, David; Sherstyuk, Andrei; Haines, Karen; Caudell, Thomas P
2004-01-01
Medical knowledge and skills essential for tomorrow's healthcare professionals continue to change faster than ever before, creating new demands in medical education. Project TOUCH (Telehealth Outreach for Unified Community Health) has been developing methods to enhance learning by coupling innovations in medical education with advanced technology in high-performance computing and next-generation Internet2 embedded in virtual reality environments (VRE), artificial intelligence and experiential active learning. Simulations have been used in education and training to allow learners to make mistakes safely in lieu of real-life situations, learn from those mistakes and ultimately improve performance by subsequent avoidance of those mistakes. Distributed virtual interactive environments are used over distance to enable learning and participation in dynamic, problem-based, clinical, artificial intelligence rules-based, virtual simulations. The virtual reality patient is programmed to dynamically change over time and respond to the manipulations by the learner. Participants are fully immersed within the VRE platform using a head-mounted display and tracker system. Navigation, locomotion and handling of objects are accomplished using a joy-wand. Distribution is managed via the Internet2 Access Grid using point-to-point or multicasting connectivity, through which the participants can interact. Medical students in Hawaii and New Mexico (NM) participated collaboratively in problem solving and managing a simulated patient with a closed head injury in the VRE, dividing tasks, handing off objects, and functioning as a team. Students stated that opportunities to make mistakes and repeat actions in the VRE were extremely helpful in learning specific principles. The VRE created higher performance expectations and some anxiety among VRE users. VRE orientation was adequate, but students needed time to adapt and practice in order to improve efficiency. This was also demonstrated successfully between Western Australia and UNM. We successfully demonstrated the ability to fully immerse participants in a distributed virtual environment, independent of distance, for collaborative team interaction in medical simulation designed for education and training. The ability to make mistakes in a safe environment is well received by students and has a positive impact on their understanding, as well as their memory of the principles involved in correcting those mistakes. Bringing people together as virtual teams for interactive experiential learning and collaborative training, independent of distance, provides a platform for distributed "just-in-time" training, performance assessment and credentialing. Further validation is necessary to determine the potential value of the distributed VRE in knowledge transfer and improved future performance, and should entail training participants to competence in using these tools.
NASA Astrophysics Data System (ADS)
Gorpas, Dimitris; Politopoulos, Kostas; Yova, Dido; Andersson-Engels, Stefan
2008-02-01
One of the most challenging problems in medical imaging is to "see" a tumour embedded in tissue, which is a turbid medium, by using fluorescent probes for tumour labeling. This problem, despite the efforts made during recent years, has not yet been fully solved, due to the non-linear nature of the inverse problem and the convergence failures of many optimization techniques. This paper describes a robust solution of the inverse problem, based on data fitting and image fine-tuning techniques. As a forward solver, the coupled radiative transfer equation and diffusion approximation model is proposed and solved via a finite element method, enhanced with adaptive multi-grids for faster and more accurate convergence. A database is constructed by applying the forward model to virtual tumours with known geometry, and thus fluorophore distribution, embedded in simulated tissues. The fitting procedure produces the best match between the real and virtual data, and thus provides the initial estimation of the fluorophore distribution. Using this information, the coupled radiative transfer equation and diffusion approximation model has the required initial values for a computationally reasonable and successful convergence during the image fine-tuning application.
Utility Computing: Reality and Beyond
NASA Astrophysics Data System (ADS)
Ivanov, Ivan I.
Utility Computing is not a new concept. It involves organizing and providing a wide range of computing-related services as public utilities. Much like water, gas, electricity and telecommunications, the concept of computing as a public utility was announced in 1955. Utility Computing remained a concept for nearly 50 years. Now some models and forms of Utility Computing are emerging, such as storage and server virtualization, grid computing, and automated provisioning. Recent trends in Utility Computing as a complex technology involve business procedures that could profoundly transform the nature of companies' IT services, organizational IT strategies and technology infrastructure, and business models. In the ultimate Utility Computing models, organizations will be able to acquire as many IT services as they need, whenever and wherever they need them. Based on networked businesses and new secure online applications, Utility Computing would facilitate "agility-integration" of IT resources and services within and between virtual companies. With the application of Utility Computing there could be concealment of the complexity of IT, reduction of operational expenses, and conversion of IT costs to variable `on-demand' services. How far should technology, business and society go to adopt Utility Computing forms, modes and models?
Managing a tier-2 computer centre with a private cloud infrastructure
NASA Astrophysics Data System (ADS)
Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara
2014-06-01
In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs such as EC2 and OCCI.
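Exposing an EC2-compatible API means standard clients can provision VMs against the private cloud. A hedged sketch using boto3: the endpoint URL, credentials, image ID and instance type are placeholders, not the site's actual values.

```python
# Hedged sketch: boot one VM through an EC2-compatible endpoint of the
# kind a private cloud can expose. All identifiers below are invented.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://cloud.example.infn.it:4567",  # hypothetical endpoint
    aws_access_key_id="ONE_USER",
    aws_secret_access_key="ONE_SECRET",
    region_name="site-local",
)

# Launch one worker-node image; the image ID is illustrative.
resp = ec2.run_instances(ImageId="ami-00000042", MinCount=1, MaxCount=1,
                         InstanceType="m1.small")
print(resp["Instances"][0]["InstanceId"])
```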
Deflection of a flexural cantilever beam
NASA Astrophysics Data System (ADS)
Sherbourne, A. N.; Lu, F.
The behavior of a flexural elastoplastic cantilever beam is investigated in which geometric nonlinearities are considered. The result of an elastica analysis by Frisch-Fay (1962) is extended to include postyield behavior. Although a closed-form solution is not possible, as in the elastic case, simple algebraic equations are derived involving only one unknown variable, which can also be expressed in the standard form of elliptic integrals if so desired. The results, in comparison with those of the small-deflection analyses, indicate that large-deflection analyses are necessary when the depth of the beam is very small relative to its length. The present exact solution can be used as a reference by those who resort to a finite element method for more complicated problems. It can also serve as a building block for other beam problems, such as a simply supported beam or a beam with multiple loads.
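For orientation, these are the classical elastic elastica relations for a tip-loaded cantilever (cf. Frisch-Fay) that the postyield analysis extends; a sketch only, with P the tip load, s the arc length, and θ_t the slope at the free end, not the paper's elastoplastic equations.

```latex
% Elastic elastica for a cantilever with transverse tip load P:
% governing equation, its first integral (using \theta'(L)=0 at the tip),
% and the length condition that fixes \theta_t.
\begin{align}
  EI\,\frac{d^{2}\theta}{ds^{2}} &= -P\cos\theta, \\
  \left(\frac{d\theta}{ds}\right)^{2} &= \frac{2P}{EI}\,\bigl(\sin\theta_t - \sin\theta\bigr), \\
  L &= \sqrt{\frac{EI}{2P}} \int_{0}^{\theta_t} \frac{d\theta}{\sqrt{\sin\theta_t - \sin\theta}} .
\end{align}
```

The last integral reduces to incomplete elliptic integrals of the first kind, which is the source of the closed-form elastic solution mentioned above.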
Flamm, Heinz
2015-08-01
The Viennese surgeon Emerich Ullmann, who had been trained in rabies vaccination by Pasteur personally, began his activity in Vienna on 28 June 1886, vaccinating people from Austria-Hungary who had been bitten by rabid animals. In contrast, Prof. v. Frisch of the other surgical clinic, who had also visited Pasteur, carried out animal experiments that led him to disapprove of Pasteur's human rabies vaccination. Ullmann vaccinated with great success, but soon obstructions appeared in Viennese medical journals, along with hateful discussions in the Austrian Parliament directed against Pasteur and Ullmann. These facts blocked the necessary financial support for Ullmann's self-financed vaccinations and resulted in their interruption. After a mass infection of rabies in the Bukowina in 1891, the Supreme Sanitary Board formed an Epidemiologic Committee, which recommended the establishment of a vaccination unit in an Austrian hospital. In July 1894 the Vaccination Unit was opened in the Viennese hospital Rudolfstiftung, where Emerich Ullmann carried out the rabies vaccinations.
NASA Astrophysics Data System (ADS)
Villone, Barbara; Rampf, Cornelius
2017-12-01
The present paper is a companion to "A contemporary look at Hermann Hankel's 1861 pioneering work on Lagrangian fluid dynamics" by Frisch, Grimberg and Villone [Eur. Phys. J. H 42, 537-556 (2017)]. Here we present the English translation of the 1861 prize manuscript from Göttingen University "Zur allgemeinen Theorie der Bewegung der Flüssigkeiten" (On the general theory of the motion of the fluids) of Hermann Hankel (1839-1873), which was originally submitted in Latin and then translated into German by the author for publication. We also provide the English translation of two important reports on the manuscript, one written by Bernhard Riemann and the other by Wilhelm Eduard Weber, during the assessment process for the prize. Finally, we give a short biography of Hermann Hankel with his complete bibliography.
Network design for telemedicine--e-health using satellite technology.
Graschew, Georgi; Roelofs, Theo A; Rakowsky, Stefan; Schlag, Peter M
2008-01-01
Over the last decade, various international Information and Communications Technology networks have been created for global access to high-level medical care. OP 2000 has designed and validated the high-end interactive video communication system WinVicos especially for telemedical applications, training of physicians in a distributed environment, teleconsultation and second opinion. WinVicos is operated on a workstation (WoTeSa) using standard hardware components and offers a superior image quality at a moderate transmission bandwidth of up to 2 Mbps. WoTeSa / WinVicos have been applied for IP-based communication in different satellite-based telemedical networks. In the DELTASS project, a disaster scenario was analysed and an appropriate telecommunication system for effective rescue measures for the victims was set up and evaluated. In the MEDASHIP project, an integrated system for telemedical services (teleconsultation, tele-electrocardiography, telesonography) on board cruise ships and ferries has been set up. EMISPHER offers equal access for most of the countries of the Euro-Mediterranean area to on-line services for health care in the required quality of service. E-learning applications, real-time telemedicine and shared management of medical assistance have been realized. The innovative developments in ICT, with the aim of realizing ubiquitous access to medical resources for everyone at any time and anywhere (u-Health), bear the risk of creating and amplifying a digital divide in the world. Therefore, we have analyzed how the objective needs of the heterogeneous partners can be joined; the result is that there is a need for real integration of the various platforms and services. A virtual combination of applications serves as the basic idea for the Virtual Hospital. The development of virtual hospitals and digital medicine helps to bridge the digital divide between different regions of the world and enables equal access to high-level medical care. Pre-operative planning, intra-operative navigation and minimally-invasive surgery require a digital and virtual environment supporting the perception of the physician. As data and computing resources in a virtual hospital are distributed over many sites, the concept of the Grid should be integrated with other communication networks and platforms.
NASA Astrophysics Data System (ADS)
Watari, S.; Morikawa, Y.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Kato, H.; Shimojo, S.; Murata, K. T.
2010-12-01
In the Solar-Terrestrial Physics (STP) field, the spatio-temporal resolution of computer simulations is getting higher and higher thanks to the tremendous advancement of supercomputers. A more advanced technology is Grid computing, which integrates distributed computational resources to provide scalable computing power. In simulation research, it is effective for researchers to design their own physical models, perform calculations on a supercomputer, and analyze and visualize the results with familiar methods. A supercomputer, however, is usually far from the analysis and visualization environment. In general, researchers analyze and visualize on workstations (WSs) managed at hand, because installing and operating software on a WS is easy; it is therefore necessary to copy data from the supercomputer to the WS manually, and in practice the time needed to transfer data over a long-delay network hampers high-accuracy simulations. In terms of usefulness, it is important to integrate a supercomputer and an analysis and visualization environment seamlessly with the researcher's familiar methods. NICT has been developing a cloud computing environment (the NICT Space Weather Cloud). In the NICT Space Weather Cloud, disk servers are located near its supercomputer and the WSs used for data analysis and visualization. They are connected to JGN2plus, a high-speed network for research and development. Distributed virtual high-capacity storage is also constructed using Grid Datafarm (Gfarm v2). Large data output from the supercomputer is transferred to the virtual storage through JGN2plus, so a researcher can concentrate on research using familiar methods, regardless of the distance between the supercomputer and the analysis and visualization environment. At present, a total of 16 disk servers are set up at NICT headquarters (Koganei, Tokyo), the JGN2plus NOC (Otemachi, Tokyo), the Okinawa Subtropical Environment Remote-Sensing Center, and the Cybermedia Center, Osaka University. They are connected over JGN2plus and constitute 1 PB (physical size) of virtual storage via Gfarm v2. These disk servers are connected to the supercomputers of NICT and Osaka University, and a system has been built that automatically transfers data output from the supercomputers to the virtual storage. The measured transfer rate is about 50 GB/hr, a performance estimated to be adequate for a certain simulation and analysis for the reconstruction of coronal magnetic fields. This research serves as an experiment with the system, while verification of its practicality proceeds in parallel. Herein we introduce an overview of the space weather cloud system we have developed so far, and we demonstrate several scientific results obtained using it. We also introduce several web applications of the cloud, offered as a service of the space weather cloud named "e-SpaceWeather" (e-SW). e-SW provides a variety of online space weather services from many aspects.
Application of a distributed network in computational fluid dynamic simulations
NASA Technical Reports Server (NTRS)
Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.; Deshpande, Ashish
1994-01-01
A general-purpose 3-D, incompressible Navier-Stokes algorithm is implemented on a network of concurrently operating workstations using parallel virtual machine (PVM) and compared with its performance on a CRAY Y-MP and on an Intel iPSC/860. The problem is relatively computationally intensive and has a communication structure based primarily on nearest-neighbor communication, making it ideally suited to message passing. Such problems are frequently encountered in computational fluid dynamics (CFD), and their solution is increasingly in demand. The communication structure is explicitly coded in the implementation to fully exploit the regularity in message passing in order to produce a near-optimal solution. Results are presented for various grid sizes using up to eight processors.
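The nearest-neighbor pattern maps directly onto message passing. A minimal sketch with mpi4py (a modern stand-in for the PVM calls of the paper), exchanging ghost cells on a 1-D processor decomposition; grid contents are illustrative.

```python
# Nearest-neighbor halo exchange on a 1-D decomposition with mpi4py.
# Run with e.g.: mpiexec -n 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 64                                   # local interior cells
u = np.full(n + 2, float(rank))          # one ghost cell on each side
left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange boundary values with both neighbors (no-op at domain ends).
comm.Sendrecv(u[1:2],     dest=left,  recvbuf=u[n+1:n+2], source=right)
comm.Sendrecv(u[n:n+1],   dest=right, recvbuf=u[0:1],     source=left)
```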
BioAcoustica: a free and open repository and analysis platform for bioacoustics
Baker, Edward; Price, Ben W.; Rycroft, S. D.; Smith, Vincent S.
2015-01-01
We describe an online open repository and analysis platform, BioAcoustica (http://bio.acousti.ca), for recordings of wildlife sounds. Recordings can be annotated using a crowdsourced approach, allowing voice introductions and sections with extraneous noise to be removed from analyses. This system is based on the Scratchpads virtual research environment, the BioVeL portal and the Taverna workflow management tool, which allows for analysis of recordings using a grid computing service. At present the analyses include spectrograms, oscillograms and dominant frequency analysis. Further analyses can be integrated to meet the needs of specific researchers or projects. Researchers can upload and annotate their recordings to supplement traditional publication. Database URL: http://bio.acousti.ca PMID:26055102
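As an illustration of the analyses listed (spectrograms, dominant frequency), a short sketch in Python/SciPy; the platform itself runs these through Taverna workflows on a grid service, and the file name here is invented.

```python
# Sketch: spectrogram and dominant-frequency track for a wildlife recording.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("cricket_song.wav")   # hypothetical recording
if x.ndim > 1:                             # mix down stereo to mono
    x = x.mean(axis=1)

f, t, Sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=512)
dominant = f[np.argmax(Sxx, axis=0)]       # peak frequency per time slice
print("median dominant frequency: %.1f Hz" % np.median(dominant))
```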
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.
2015-12-01
The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated. This feature requires detailed monitoring and accounting of the resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and of applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to ensure the flexibility to choose a different monitoring solution if needed. The heterogeneous accounting information is transferred from the database to the ElasticSearch engine via a custom Logstash plugin. Each use-case is indexed separately in ElasticSearch, and we set up a set of Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through an ad-hoc developed RESTful web service. Moreover, we have developed a billing system for our private Cloud, which relies on the RabbitMQ message queue for asynchronous communication with the database and on the ELK stack for its graphical interface. The Italian Grid accounting framework is also migrating to a similar set-up. Concerning the application level, we used the Root plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BESIII virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools. At present, we are working to define a model for monitoring-as-a-service, based on the tools described above, which the Cloud tenants can easily configure to suit their specific needs.
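A hedged sketch of the basic ELK ingestion step, indexing one accounting record so it can be queried from a Kibana dashboard. The index name and document fields are invented for illustration; the site's actual pipeline goes through MySQL and a custom Logstash plugin rather than writing directly.

```python
# Sketch: ship one accounting record into ElasticSearch.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "tenant": "alice-tier2",          # hypothetical tenant name
    "vcpus": 8,
    "memory_mb": 16384,
    "wallclock_hours": 12.5,
}
es.index(index="cloud-accounting-2015.12", document=record)
```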
Gillett, Brian; Silverberg, Mark; Roblin, Patricia; Adelaine, John; Valesky, Walter; Arquilla, Bonnie
2011-06-01
Emergency preparedness experts generally are based at academic or governmental institutions. A mechanism for experts to remotely facilitate a distant hospital's disaster readiness is lacking. The objective of this study was to develop and examine the feasibility of an Internet-based software tool to assess disaster preparedness for remote hospitals using a long-distance, virtual, tabletop drill. An Internet-based system that remotely acquires information and analyzes disaster preparedness for hospitals at a distance in a virtual, tabletop drill model was piloted. Nine hospitals in Cape Town, South Africa, designated as receiving institutions for the 2010 FIFA World Cup Games, and the event's organizers utilized the system over a 10-week period. At one-week intervals, the system e-mailed each hospital's leadership a description of a stadium disaster and instructed them to log in to the system and answer questions relating to their hospital's state of readiness. A total of 169 questions were posed relating to operational and surge capacities, communication, equipment, major incident planning, public relations, staff safety, hospital supplies, and security in each hospital. The system was used to analyze answers and generate a real-time grid that reflected readiness as a percentage for each hospital in each of the above categories. It also created individualized recommendations of how to improve preparedness for each hospital. To assess the feasibility of such a system, the end users' compliance and response times were examined. Overall, compliance was excellent, with an aggregate response rate of 98%. The mean response interval, defined as the time elapsed between sending a stimulus and receiving a response, was eight days (95% CI = 8-9 days). A web-based data acquisition system using a virtual, tabletop drill to remotely facilitate assessment of disaster preparedness is efficient and feasible. Weekly reinforcement of disaster preparedness resulted in strong compliance.
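A minimal sketch of the readiness-grid computation described, turning per-category answer counts into percentages; the hospitals, categories, and counts are illustrative, not the study's instrument.

```python
# Sketch: per-hospital, per-category readiness percentages.
def readiness(grid):
    """grid: {hospital: {category: (correct, total)}} -> percent table."""
    return {h: {c: round(100.0 * ok / total, 1)
                for c, (ok, total) in cats.items()}
            for h, cats in grid.items()}

answers = {"Hospital A": {"surge capacity": (17, 20), "communication": (9, 12)},
           "Hospital B": {"surge capacity": (14, 20), "communication": (11, 12)}}
print(readiness(answers))
# {'Hospital A': {'surge capacity': 85.0, 'communication': 75.0}, ...}
```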
Design, Results, Evolution and Status of the ATLAS Simulation at Point1 Project
NASA Astrophysics Data System (ADS)
Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Fazio, D.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Sedov, A.; Twomey, M. S.; Wang, F.; Zaytsev, A.
2015-12-01
During the LHC Long Shutdown 1 (LS1) period, which started in 2013, the Simulation at Point1 (Sim@P1) project takes advantage, in an opportunistic way, of the TDAQ (Trigger and Data Acquisition) HLT (High-Level Trigger) farm of the ATLAS experiment. This farm provides more than 1300 compute nodes, which are particularly suited for running event generation and Monte Carlo production jobs that are mostly CPU and not I/O bound. It is capable of running up to 2700 Virtual Machines (VMs), each with 8 CPU cores, for a total of up to 22000 parallel jobs. This contribution gives a review of the design, the results, and the evolution of the Sim@P1 project, operating a large-scale OpenStack-based virtualized platform deployed on top of the ATLAS TDAQ HLT farm computing resources. During LS1, Sim@P1 was one of the most productive ATLAS sites: it delivered more than 33 million CPU-hours and generated more than 1.1 billion Monte Carlo events. The design aspects are presented: the virtualization platform exploited by Sim@P1 avoids interference with TDAQ operations and guarantees the security and usability of the ATLAS private network. The cloud mechanism allows the separation of the needed support on both the infrastructural (hardware, virtualization layer) and logical (Grid site support) levels. This paper focuses on the operational aspects of such a large system during the upcoming LHC Run 2 period: simple, reliable, and efficient tools are needed to quickly switch from Sim@P1 to TDAQ mode and back, to exploit the resources when they are not used for data acquisition, even for short periods. The evolution of the central OpenStack infrastructure is described, as it was upgraded from the Folsom to the Icehouse release, including the scalability issues addressed.
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Lin, Yuh-Lang
2004-01-01
During the research project, sounding datasets were generated for the regions surrounding 9 major airports: Dallas, TX; Boston, MA; New York, NY; Chicago, IL; St. Louis, MO; Atlanta, GA; Miami, FL; San Francisco, CA; and Los Angeles, CA. The numerical simulation of winter and summer environments during which no instrument-flight-rule impact was occurring at these 9 terminals was performed using the most contemporary version of the Terminal Area PBL Prediction System (TAPPS) model, nested from 36 km to 6 km to 1 km horizontal resolution with very detailed vertical resolution in the planetary boundary layer. The soundings from the 1 km model were archived at 30-minute intervals for a 24-hour period, and the vertical dependent variables as well as derived quantities, i.e., 3-dimensional wind components, temperatures, pressures, mixing ratios, turbulence kinetic energy and eddy dissipation rates, were then interpolated to 5 m vertical resolution up to 1000 m elevation above ground level. After partial validation against field experiment datasets for Dallas, as well as larger-scale and much coarser resolution observations at the other 8 airports, these sounding datasets were sent to NASA for use in the Virtual Air Space and Modeling program. These datasets are intended to provide representative airport weather environments for diagnosing the response of simulated wake vortices to realistic atmospheric conditions. The virtual datasets are based on large-scale observed atmospheric initial conditions that are dynamically interpolated in space and time, with the 1 km nested-grid simulations providing a coarse and highly smoothed representation of airport-environment meteorological conditions. Details concerning the airport surface forcing are virtually absent from these simulated datasets, although the simulated fields have been compared to the observed background atmospheric processes and found to accurately replicate the flows surrounding the airports where coarse verification data, as well as airport-scale datasets, were available.
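A sketch of the vertical-interpolation step described above, putting one sounding variable onto a uniform 5 m grid up to 1000 m AGL with NumPy; the model levels and wind values are invented.

```python
# Interpolate a simulated sounding onto a uniform 5 m vertical grid.
import numpy as np

z_model = np.array([10., 50., 120., 300., 600., 1000.])   # model levels (m AGL)
wind_u  = np.array([2.1, 3.4, 4.8, 6.0, 7.2, 8.1])        # u-wind (m/s) at levels

z_out = np.arange(0.0, 1000.0 + 5.0, 5.0)                 # 5 m resolution grid
u_out = np.interp(z_out, z_model, wind_u)                 # linear interpolation
```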
NASA Astrophysics Data System (ADS)
van Hemert, Jano; Vilotte, Jean-Pierre
2010-05-01
Research in earthquake science and seismology addresses fundamental problems in understanding Earth's internal wave sources and structures, and augments applications to societal concerns about natural hazards, energy resources and environmental change. This community is central to the European Plate Observing System (EPOS)—the ESFRI initiative in solid Earth Sciences. Global and regional seismology monitoring systems are continuously operated and are transmitting a growing wealth of data from Europe and from around the world. These tremendous volumes of seismograms, i.e., records of ground motions as a function of time, have a definite multi-use attribute, which puts a great premium on open-access data infrastructures that are integrated globally. In Europe, the earthquake and seismology community is part of the European Integrated Data Archives (EIDA) infrastructure and is structured as "horizontal" data services. On top of this distributed data archive system, the community has recently developed, within the EC project NERIES, advanced SOA-based web services and a unified portal system. Enabling advanced analysis of these data by utilising a data-aware distributed computing environment is instrumental to fully exploit the cornucopia of data and to guarantee optimal operation of the high-cost monitoring facilities. The strategy of VERCE is driven by the needs of data-intensive applications in data mining and modelling and will be illustrated through a set of applications. It aims to provide a comprehensive architecture and framework adapted to the scale and the diversity of these applications, and to integrate the community data infrastructure with Grid and HPC infrastructures. A first novel aspect is a service-oriented architecture that provides well-equipped integrated workbenches, with an efficient communication layer between data and Grid infrastructures, augmented with bridges to the HPC facilities. A second novel aspect is the coupling between Grid data analysis and HPC data modelling applications through workflow and data sharing mechanisms. VERCE will develop important interactions with the European infrastructure initiatives in Grid and HPC computing. The VERCE team: CNRS-France (IPG Paris, LGIT Grenoble), UEDIN (UK), KNMI-ORFEUS (Holland), EMSC, INGV (Italy), LMU (Germany), ULIV (UK), BADW-LRZ (Germany), SCAI (Germany), CINECA (Italy)
VisIVO: A Library and Integrated Tools for Large Astrophysical Dataset Exploration
NASA Astrophysics Data System (ADS)
Becciani, U.; Costa, A.; Ersotelos, N.; Krokos, M.; Massimino, P.; Petta, C.; Vitello, F.
2012-09-01
VisIVO provides an integrated suite of tools and services that can be used in many scientific fields. VisIVO development started in the Virtual Observatory framework. VisIVO allows users to meaningfully visualize highly complex, large-scale datasets and create movies of these visualizations based on distributed infrastructures. VisIVO supports high-performance, multi-dimensional visualization of large-scale astrophysical datasets. Users can rapidly obtain meaningful visualizations while preserving full and intuitive control of the relevant parameters. VisIVO consists of VisIVO Desktop - a stand-alone application for interactive visualization on standard PCs, VisIVO Server - a platform for high-performance visualization, VisIVO Web - a custom-designed web portal, VisIVOSmartphone - an application to exploit the VisIVO Server functionality, and the latest VisIVO feature: the VisIVO Library, which allows a job running on a computational system (grid, HPC, etc.) to produce movies directly from the code's internal data arrays, without the need to produce intermediate files. This is particularly important when running on large computational facilities, where the user wants to look at the results during the data production phase. For example, in grid computing facilities, images can be produced directly in the grid catalogue while the user code is running in a system that cannot be directly accessed by the user (a worker node). The deployment of VisIVO on the DG and gLite is carried out with the support of the EDGI and EGI-Inspire projects. Depending on the structure and size of the datasets under consideration, the data exploration process could take several hours of CPU time for creating customized views, and the production of movies could potentially last several days. For this reason an MPI parallel version of VisIVO could play a fundamental role in increasing performance; e.g., it could be automatically deployed on nodes that are MPI aware. A central concept in our development is thus to produce unified code that can run either on serial nodes or in parallel by using HPC-oriented grid nodes. Another important aspect, to obtain as high performance as possible, is the integration of VisIVO processes with grid nodes where GPUs are available. We have selected CUDA for implementing a range of computationally heavy modules. VisIVO is supported by the EGI-Inspire, EDGI and SCI-BUS projects.
High-resolution daily gridded data sets of air temperature and wind speed for Europe
NASA Astrophysics Data System (ADS)
Brinckmann, Sven; Krähenmann, Stefan; Bissolli, Peter
2016-10-01
New high-resolution data sets for near-surface daily air temperature (minimum, maximum and mean) and daily mean wind speed for Europe (the CORDEX domain) are provided for the period 2001-2010 for the purpose of regional model validation in the framework of DecReg, a sub-project of the German MiKlip project, which aims to develop decadal climate predictions. The main input data sources are SYNOP observations, partly supplemented by station data from the ECA&D data set (http://www.ecad.eu). These data are quality tested to eliminate erroneous data. By spatial interpolation of these station observations, grid data at a resolution of 0.044° (≈ 5 km) are derived.
Global Multi-Resolution Topography (GMRT) Synthesis - Version 2.0
NASA Astrophysics Data System (ADS)
Ferrini, V.; Coplan, J.; Carbotte, S. M.; Ryan, W. B.; O'Hara, S.; Morton, J. J.
2010-12-01
The detailed morphology of the global ocean floor is poorly known, with most areas mapped only at low resolution using satellite-based measurements. Ship-based sonars provide data at resolution sufficient to quantify seafloor features related to the active processes of erosion, sediment flow, volcanism, and faulting. To date, these data have been collected in a small fraction of the global ocean (<10%). The Global Multi-Resolution Topography (GMRT) synthesis makes use of sonar data collected by scientists and institutions worldwide, merging them into a single continuously updated compilation of high-resolution seafloor topography. Several applications, including GeoMapApp (http://www.geomapapp.org) and Virtual Ocean (http://www.virtualocean.org), make use of the GMRT Synthesis and provide direct access to images and the underlying gridded data. Source multibeam files included in the compilation can also be accessed through custom functionality in GeoMapApp. The GMRT Synthesis began in 1992 as the Ridge Multibeam Synthesis. It was subsequently expanded to include bathymetry data from the Southern Ocean, and now includes data from throughout the global oceans. Our design strategy has been to make data available at the full native resolution of shipboard sonar systems, which historically has been ~100 m in the deep sea (Ryan et al., 2009). A new release of the GMRT Synthesis in Fall of 2010 includes several significant improvements over our initial strategy. In addition to increasing the number of cruises included in the compilation by over 25%, we have developed a new protocol for handling multibeam source data, which has improved the overall quality of the compilation. The new tileset also includes a discrete layer of public-domain sonar data gridded to the full resolution of the sonar system, with data gridded at 25 m in some areas. This discrete layer of sonar data has been provided to Google for integration into Google’s default ocean base map. NOAA coastal grids and numerous grids contributed by the international science community are also integrated into the GMRT Synthesis. Finally, terrestrial elevation data from NASA’s ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) global DEM and the USGS National Elevation Dataset have been included in the synthesis, providing resolution of up to 10 m in some areas of the US.
Adaptive and dynamic meshing methods for numerical simulations
NASA Astrophysics Data System (ADS)
Acikgoz, Nazmiye
For the numerical simulation of many problems of engineering interest, it is desirable to have an automated mesh adaption tool capable of producing high quality meshes with an affordably low number of mesh points. This is important especially for problems, which are characterized by anisotropic features of the solution and require mesh clustering in the direction of high gradients. Another significant issue in meshing emerges in the area of unsteady simulations with moving boundaries or interfaces, where the motion of the boundary has to be accommodated by deforming the computational grid. Similarly, there exist problems where current mesh needs to be adapted to get more accurate solutions because either the high gradient regions are initially predicted inaccurately or they change location throughout the simulation. To solve these problems, we propose three novel procedures. For this purpose, in the first part of this work, we present an optimization procedure for three-dimensional anisotropic tetrahedral grids based on metric-driven h-adaptation. The desired anisotropy in the grid is dictated by a metric that defines the size, shape, and orientation of the grid elements throughout the computational domain. Through the use of topological and geometrical operators, the mesh is iteratively adapted until the final mesh minimizes a given objective function. In this work, the objective function measures the distance between the metric of each simplex and a target metric, which can be either user-defined (a-priori) or the result of a-posteriori error analysis. During the adaptation process, one tries to decrease the metric-based objective function until the final mesh is compliant with the target within a given tolerance. However, in regions such as corners and complex face intersections, the compliance condition was found to be very difficult or sometimes impossible to satisfy. In order to address this issue, we propose an optimization process based on an ad-hoc application of the simulated annealing technique, which improves the likelihood of removing poor elements from the grid. Moreover, a local implementation of the simulated annealing is proposed to reduce the computational cost. Many challenging multi-physics and multi-field problems that are unsteady in nature are characterized by moving boundaries and/or interfaces. When the boundary displacements are large, which typically occurs when implicit time marching procedures are used, degenerate elements are easily formed in the grid such that frequent remeshing is required. To deal with this problem, in the second part of this work, we propose a new r-adaptation methodology. The new technique is valid for both simplicial (e.g., triangular, tet) and non-simplicial (e.g., quadrilateral, hex) deforming grids that undergo large imposed displacements at their boundaries. A two- or three-dimensional grid is deformed using a network of linear springs composed of edge springs and a set of virtual springs. The virtual springs are constructed in such a way as to oppose element collapsing. This is accomplished by confining each vertex to its ball through springs that are attached to the vertex and its projection on the ball entities. The resulting linear problem is solved using a preconditioned conjugate gradient method. The new method is compared with the classical spring analogy technique in two- and three-dimensional examples, highlighting the performance improvements achieved by the new method. Meshes are an important part of numerical simulations. 
Depending on the geometry and flow conditions, the most suitable mesh for each particular problem is different. Meshes are usually generated by either using a suitable software package or solving a PDE. In both cases, engineering intuition plays a significant role in deciding where clusterings should take place. In addition, for unsteady problems, the gradients vary for each time step, which requires frequent remeshing during simulations. Therefore, in order to minimize user intervention and prevent frequent remeshings, we conclude this work by defining a novel mesh adaptation technique that integrates metric based target mesh definitions with the ball-vertex mesh deformation method. In this new approach, the entire mesh is deformed based on either an a-priori or an a-posteriori error estimator. In other words, nodal points are repositioned upon application of a force field in order to comply with the target mesh or to get more accurate solutions. The method has been tested for two-dimensional problems of a-priori metric definitions as well as for oblique shock clusterings.
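A minimal sketch of the classical edge-spring analogy that the ball-vertex method described above extends: each edge acts as a linear spring whose stiffness grows as the edge shortens, boundary nodes carry imposed displacements, and the interior displacements come from a conjugate-gradient solve. The virtual (ball-vertex) springs that prevent element collapse are omitted here for brevity.

```python
# Classical spring-analogy mesh deformation (edge springs only).
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import cg

def deform(nodes, edges, fixed, disp):
    """nodes: (n,2) coords; edges: iterable of (i,j) index pairs;
    fixed: list of boundary node ids; disp: {id: (dx, dy)}."""
    n = len(nodes)
    K = lil_matrix((n, n))
    for i, j in edges:                                    # assemble stiffness
        k = 1.0 / np.linalg.norm(nodes[i] - nodes[j])     # stiffer short edges
        K[i, i] += k; K[j, j] += k; K[i, j] -= k; K[j, i] -= k
    K = K.tocsr()
    free = [i for i in range(n) if i not in set(fixed)]
    out = np.asarray(nodes, dtype=float).copy()
    for ax in range(2):                                   # x and y decouple
        ub = np.array([disp[i][ax] for i in fixed])
        Kff = K[free, :][:, free]
        Kfb = K[free, :][:, fixed]
        uf, _ = cg(Kff, -Kfb @ ub)                        # CG solve, as in the text
        out[free, ax] += uf
        out[fixed, ax] += ub
    return out
```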
Lai, Canhai; Xu, Zhijie; Li, Tingwen; ...
2017-08-05
In virtual design and scale-up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber's performance and capture efficiency under various operating conditions. This paper focuses on the detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered sub-grid models to capture the effect of the unresolved details in the coarser mesh, allowing simulations with reasonable accuracy and manageable computational effort. Previously developed filtered models for horizontal-cylinder drag, heat transfer, and reaction kinetics have been modified to derive the 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configurations (i.e., horizontal or vertical tubes) on the adsorber's hydrodynamics and CO2 capture performance are then examined. A one-dimensional three-region process model is briefly introduced for comparison purposes. The CFD model matches reasonably well with the process model while providing additional information about the flow field that is not available with the process model.
Liang, Yujie; Ying, Rendong; Lu, Zhenqi; Liu, Peilin
2014-01-01
In the design phase of sensor arrays during array signal processing, the estimation performance and system cost are largely determined by the array aperture size. In this article, we address the problem of joint direction-of-arrival (DOA) estimation with distributed sparse linear arrays (SLAs) and propose an off-grid synchronous approach based on distributed compressed sensing to obtain a larger array aperture. We focus on the complex source distributions found in practical applications and classify the sources into common and innovation parts according to whether a signal of a source can impinge on all the SLAs or only a specific one. For each SLA, we construct a corresponding virtual uniform linear array (ULA) to create the relationship of a random linear map between the signals respectively observed by these two arrays. The signal ensembles, including the common/innovation sources for different SLAs, are abstracted as a joint spatial sparsity model. We then use minimization of the concatenated atomic norm via semidefinite programming to solve the problem of joint DOA estimation. Joint calculation of the signals observed by all the SLAs exploits their redundancy caused by the common sources and decreases the required array size. The numerical results illustrate the advantages of the proposed approach. PMID:25420150
Localized strain measurements of the intervertebral disc annulus during biaxial tensile testing.
Karakolis, Thomas; Callaghan, Jack P
2015-01-01
Both inter-lamellar and intra-lamellar failures of the annulus have been described as potential modes of disc herniation. Attempts to characterize initial lamellar failure of the annulus have involved tensile testing of small tissue samples. The purpose of this study was to evaluate a method of measuring local surface strains through image analysis of a tensile test conducted on an isolated sample of annular tissue, in order to enhance future studies of intervertebral disc failure. An annulus tissue sample was biaxially strained to 10%. High-resolution images captured the tissue surface throughout testing. Three test conditions were evaluated: submerged, non-submerged and marker. Surface strains were calculated for the two non-marker conditions based on the motion of virtual tracking points. Tracking algorithm parameters (grid resolution and template size) were varied to determine the effect on estimated strains. Accuracy of point tracking was assessed through a comparison of the non-marker conditions to a condition involving markers placed on the tissue surface. Grid resolution had a larger effect on local strain than template size. Average local strain error ranged from 3% to 9.25% and 0.1% to 2.0% for the non-submerged and submerged conditions, respectively. Local strain estimation has a relatively high potential for error. Submerging the tissue provided superior strain estimates.
Orbital stability of compact three-planets systems.
NASA Astrophysics Data System (ADS)
Gavino, Sacha; Lissauer, Jack
2018-04-01
Recent discoveries unveiled a significant number of compact multi-planetary systems, where the orbits of adjacent planets are much closer together than those found in the Solar System. Studying the orbital stability of such compact systems provides information on how they form and how long they survive. We performed a general study of three Earth-like planets orbiting a Sun-mass star in circular, coplanar, prograde orbits. The simulations were performed over a wide range of mutual Hill radii and were conducted for virtual times reaching at most 10 billion years. Both equally spaced and unequally spaced planet systems are investigated. We recover the results of previous studies done for systems of planets spaced uniformly in mutual Hill radius, and we investigate mean motion resonances and test for chaos. We also study systems with different initial spacing between the adjacent inner pair of planets and the outer pair of planets, and we display their lifetimes on grids at different resolutions. Over 45000 simulations have been done. We then characterize isochrones for the lifetimes of systems of equivalent spacing. We find that the stability time increases significantly for values of mutual Hill radii beyond 8. We also study the effects of mean motion resonances, the degree of symmetry in the grid, and tests for chaos.
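The spacing measure used throughout such studies is the separation of adjacent orbits in units of their mutual Hill radius; a worked sketch for Earth-mass planets around a solar-mass star, with illustrative semi-major axes.

```python
# Mutual Hill radius and dimensionless spacing of an adjacent planet pair.
M_EARTH_PER_MSUN = 3.003e-6      # Earth mass in solar masses

def mutual_hill_radius(a1, a2, m1, m2, mstar=1.0):
    """Semi-major axes in AU, masses in solar masses."""
    return ((m1 + m2) / (3.0 * mstar)) ** (1.0 / 3.0) * 0.5 * (a1 + a2)

def spacing_in_hill_radii(a1, a2, m1, m2, mstar=1.0):
    return (a2 - a1) / mutual_hill_radius(a1, a2, m1, m2, mstar)

m = M_EARTH_PER_MSUN
print(spacing_in_hill_radii(1.00, 1.05, m, m))   # ~3.9 for a 5% separation
```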
NASA Astrophysics Data System (ADS)
Crivori, Patrizia; Zamora, Ismael; Speed, Bill; Orrenius, Christian; Poggesi, Italo
2004-03-01
A number of computational approaches are being proposed for early optimization of ADME (absorption, distribution, metabolism and excretion) properties to increase the success rate in drug discovery. The present study describes the development of an in silico model able to estimate, from the three-dimensional structure of a molecule, the stability of a compound with respect to human cytochrome P450 (CYP) 3A4 enzyme activity. Stability data were obtained by measuring the amount of unchanged compound remaining after a standardized incubation with human cDNA-expressed CYP3A4. The computational method transforms the three-dimensional molecular interaction fields (MIFs) generated from the molecular structure into descriptors (VolSurf and Almond procedures). The descriptors were correlated to the experimental metabolic stability classes by a partial least squares discriminant procedure. The model was trained using a set of 1800 compounds from the Pharmacia collection and was validated using two test sets: the first including 825 compounds from the Pharmacia collection and the second consisting of 20 known drugs. This model correctly predicted 75% of the first and 85% of the second test set, and showed a precision above 86% in correctly selecting metabolically stable compounds. The model appears to be a valuable tool in the design of virtual libraries to bias the selection toward more stable compounds. Abbreviations: ADME - absorption, distribution, metabolism and excretion; CYP - cytochrome P450; MIFs - molecular interaction fields; HTS - high throughput screening; DDI - drug-drug interactions; 3D - three-dimensional; PCA - principal components analysis; CPCA - consensus principal components analysis; PLS - partial least squares; PLSD - partial least squares discriminant; GRIND - grid independent descriptors; GRID - software originally created and developed by Professor Peter Goodford.
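A hedged sketch of a PLS discriminant classifier of the general kind described: regress one-hot stability classes on a descriptor matrix with PLS and assign the class with the largest predicted response. The random data here stand in for the VolSurf/Almond MIF descriptors, which are not reproduced.

```python
# PLS-DA sketch with scikit-learn; data are random placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))            # stand-in descriptor matrix
y = rng.integers(0, 2, size=200)          # 0 = unstable, 1 = stable
Y = np.eye(2)[y]                          # one-hot encode the classes

pls = PLSRegression(n_components=5).fit(X, Y)
pred = pls.predict(X).argmax(axis=1)      # PLS-DA class assignment
print("training accuracy: %.2f" % (pred == y).mean())
```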
Data Management System for the National Energy-Water System (NEWS) Assessment Framework
NASA Astrophysics Data System (ADS)
Corsi, F.; Prousevitch, A.; Glidden, S.; Piasecki, M.; Celicourt, P.; Miara, A.; Fekete, B. M.; Vorosmarty, C. J.; Macknick, J.; Cohen, S. M.
2015-12-01
Aiming to provide a comprehensive assessment of the water-energy nexus, the National Energy-Water System (NEWS) project requires the integration of data to support a modeling framework that links climate, hydrological, power production, transmission, and economic models. Large amounts of georeferenced data have to be streamed to the components of the inter-disciplinary model to explore future challenges and tradeoffs in US power production, based on climate scenarios, power plant locations and technologies, available water resources, ecosystem sustainability, and economic demand. We used open-source and in-house-built software components to build a system that addresses two major data challenges: (1) on-the-fly re-projection, re-gridding, interpolation, extrapolation, nodata patching, merging, and temporal and spatial aggregation of static and time-series datasets, in virtually any file format, file structure, and geographic extent, for the models' I/O directly at run time; and (2) comprehensive data management based on metadata cataloguing and discovery in repositories utilizing the MAGIC Table (Manipulation and Geographic Inquiry Control database). This innovative concept allows models to access data on the fly by data ID, irrespective of file path, file structure, and file format, and regardless of its GIS specifications. In addition, a web-based information and computational system is being developed to control the I/O of spatially distributed Earth system, climate, hydrological, power grid, and economic data flow within the NEWS framework. The system allows scenario building, data exploration, visualization, querying, and manipulation of any loaded gridded, point, or vector polygon dataset. The system has demonstrated its potential for applications in other fields of Earth science modeling, education, and outreach. Over time, this implementation of the system will provide near-real-time assessment of various current and future scenarios of the water-energy nexus.
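A toy sketch of the MAGIC Table idea described above; all names and fields below are invented for illustration and do not reflect the actual NEWS schema or software:

```python
# Hypothetical sketch: models request data by ID, and the catalog resolves
# path, format, and GIS metadata at run time.
MAGIC_TABLE = {
    "runoff_monthly": {
        "path": "/data/hydro/runoff_1950_2010.nc",
        "format": "netcdf",
        "projection": "EPSG:4326",
        "resolution_arcmin": 30,
    },
}

def open_dataset(data_id, target_grid):
    """Resolve a dataset by ID and (conceptually) regrid it to the model grid."""
    meta = MAGIC_TABLE[data_id]
    # A real implementation would dispatch on meta["format"], reproject from
    # meta["projection"], and interpolate to target_grid at this point.
    return meta

print(open_dataset("runoff_monthly", target_grid="news_us_grid"))
```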
NASA Astrophysics Data System (ADS)
Yuan, H. Z.; Wang, Y.; Shu, C.
2017-12-01
This paper presents an adaptive mesh refinement-multiphase lattice Boltzmann flux solver (AMR-MLBFS) for the effective simulation of complex binary fluid flows at large density ratios. In this method, an AMR algorithm is proposed by introducing a simple indicator on the root block for grid refinement and two possible statuses for each block. Unlike available block-structured AMR methods, which refine their mesh by spawning or removing four child blocks simultaneously, the present method is able to refine its mesh locally by spawning or removing one to four child blocks independently when the refinement indicator is triggered. As a result, the AMR mesh used in this work can be more closely focused on the flow region near the phase interface, and its size is further reduced. In each mesh block, the recently proposed MLBFS is applied for the solution of the flow field, and the level-set method is used for capturing the fluid interface. Compared with existing AMR lattice Boltzmann models, the present method avoids both spatial and temporal interpolations of density distribution functions, so that converged solutions on different AMR meshes and uniform grids can be obtained. The proposed method has been successfully validated by simulating a static bubble immersed in another fluid, a falling droplet, instabilities of two-layered fluids, a bubble rising in a box, and a droplet splashing on a thin film, at large density ratios and high Reynolds numbers. Good agreement with the theoretical solution, the uniform-grid result, and/or published data has been achieved. Numerical results also show its effectiveness in saving computational time and virtual memory compared with computations on uniform meshes.
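A minimal sketch of the independent child-block refinement idea described above, assuming a simple quadtree and a toy interface indicator; this is not the authors' AMR-MLBFS code:

```python
import math

class Block:
    """One AMR block; children 0..3 are created or removed independently,
    unlike standard quadtree AMR, which spawns/removes all four together."""
    def __init__(self, x0, y0, size):
        self.x0, self.y0, self.size = x0, y0, size
        self.children = {}

    def refine(self, indicator):
        half = self.size / 2.0
        quadrants = [(self.x0, self.y0), (self.x0 + half, self.y0),
                     (self.x0, self.y0 + half), (self.x0 + half, self.y0 + half)]
        for k, (x, y) in enumerate(quadrants):
            if k not in self.children and indicator(x, y, half):
                self.children[k] = Block(x, y, half)   # spawn only where needed
            elif k in self.children and not indicator(x, y, half):
                del self.children[k]                   # coarsen independently

# Toy indicator: refine blocks crossed by a circular interface of radius 0.3
# centered at (0.3, 0.3); only some quadrants trigger it.
def near_interface(x, y, s, r=0.3, cx=0.3, cy=0.3):
    d = math.hypot(x + s / 2 - cx, y + s / 2 - cy)
    return abs(d - r) < s / 2

root = Block(0.0, 0.0, 1.0)
root.refine(near_interface)
print(len(root.children), "of 4 children spawned")   # 3 of 4 in this toy case
```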
Peer-to-peer Cooperative Scheduling Architecture for National Grid Infrastructure
NASA Astrophysics Data System (ADS)
Matyska, Ludek; Ruda, Miroslav; Toth, Simon
For some ten years, the Czech National Grid Infrastructure MetaCentrum has used a single central PBSPro installation to schedule jobs across the country. This centralized approach keeps full track of all the clusters, provides support for jobs spanning several sites, implements the fair-share policy, and gives better overall control of the grid environment. Despite steady progress in stability and resilience to intermittent very short network failures, the growing number of sites and processors makes this architecture, with its single point of failure and scalability limits, obsolete. As a result, a new scheduling architecture is proposed that relies on higher autonomy of clusters. It is based on a peer-to-peer network of semi-independent schedulers for each site or even cluster. Each scheduler accepts jobs for the whole infrastructure, cooperating with the other schedulers on the implementation of global policies such as central job accounting, fair-share, and submission of jobs across several sites. The scheduling system is integrated with the Magrathea system to support scheduling of virtual clusters, including the setup of their internal networks, again potentially spanning several sites. On the other hand, each scheduler is local to one or several clusters and is able to directly control and submit jobs to them even if the connection to the other scheduling peers is lost. In parallel with the change of the overall architecture, the scheduling system itself is being replaced. Instead of PBSPro, chosen originally for its declared support of large-scale distributed environments, the new scheduling architecture is based on the open-source Torque system. The implementation and support of the most desired properties in PBSPro and Torque are discussed, and the necessary modifications to Torque to support the MetaCentrum scheduling architecture are presented.
NASA Astrophysics Data System (ADS)
Tamkin, G.; Schnase, J. L.; Duffy, D.; Li, J.; Strong, S.; Thompson, J. H.
2016-12-01
We are extending climate analytics-as-a-service, including: (1) a high-performance Virtual Real-Time Analytics Testbed supporting six major reanalysis data sets, using advanced technologies such as Cloudera Impala-based SQL and Hadoop-based MapReduce analytics over native NetCDF files; (2) a Reanalysis Ensemble Service (RES) that offers a basic set of commonly used operations over the reanalysis collections, accessible through NASA's climate data analytics Web services and our client-side Climate Data Services Python library, CDSlib; and (3) an Open Geospatial Consortium (OGC) WPS-compliant Web service interface to CDSlib to accommodate ESGF's Web service endpoints. This presentation will report on the overall progress of this effort, with special attention to recent enhancements to the Reanalysis Ensemble Service, including the following: - A CDSlib Python library that supports full temporal, spatial, and grid-based resolution services - A new reanalysis collections reference model to enable operator design and implementation - An enhanced library of sample queries to demonstrate and develop use-case scenarios - Extended operators that enable single- and multiple-reanalysis area averages, vertical averages, re-gridding, and trend, climatology, and anomaly computations - Full support for the MERRA-2 reanalysis and the initial integration of two additional reanalyses - A prototype Jupyter notebook-based distribution mechanism that combines CDSlib documentation with interactive use-case scenarios and personalized project management - Prototyped uncertainty quantification services that combine ensemble products with comparative observational products - Convenient, one-stop shopping for commonly used data products from multiple reanalyses, including basic subsetting and arithmetic operations over the data and extraction of trends, climatologies, and anomalies - The ability to compute and visualize multiple-reanalysis intercomparisons
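As a generic illustration of two of the operators listed above (climatology and anomaly), a minimal NumPy sketch on synthetic monthly data; this is not CDSlib itself, whose actual API is not shown here:

```python
import numpy as np

# Synthetic (time, lat, lon) reanalysis variable: 20 years of monthly fields.
rng = np.random.default_rng(1)
data = rng.normal(size=(240, 90, 180))

# Monthly climatology: the mean field for each calendar month.
months = np.arange(data.shape[0]) % 12
climatology = np.stack([data[months == m].mean(axis=0) for m in range(12)])

# Anomaly: deviation of each month from its mean seasonal cycle.
anomaly = data - climatology[months]

print(climatology.shape, anomaly.shape)   # (12, 90, 180) (240, 90, 180)
```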
TOPCAT: Tool for OPerations on Catalogues And Tables
NASA Astrophysics Data System (ADS)
Taylor, Mark
2011-01-01
TOPCAT is an interactive graphical viewer and editor for tabular data. Its aim is to provide most of the facilities that astronomers need for analysis and manipulation of source catalogues and other tables, though it can be used for non-astronomical data as well. It understands a number of different astronomically important formats (including FITS and VOTable), and more formats can be added. It offers a variety of ways to view and analyse tables, including a browser for the cell data themselves, viewers for table and column metadata, and facilities for 1-, 2-, 3- and higher-dimensional visualisation, calculating statistics, and joining tables using flexible matching algorithms. Using a powerful and extensible Java-based expression language, new columns can be defined and row subsets selected for separate analysis. Table data and metadata can be edited, and the resulting modified table can be written out in a wide range of output formats. It is a stand-alone application which works quite happily with no network connection. However, because it uses Virtual Observatory (VO) standards, it can cooperate smoothly with other tools in the VO world and beyond, such as VODesktop, Aladin and ds9. Between 2006 and 2009 TOPCAT was developed within the AstroGrid project, and it is offered as part of a standard suite of applications on the AstroGrid web site, where information on several other VO tools can be found. The program is written in pure Java and available under the GNU General Public Licence. It has been developed in the UK within the Starlink and AstroGrid projects, and under PPARC and STFC grants. Its underlying table processing facilities are provided by STIL.
Public storage for the Open Science Grid
NASA Astrophysics Data System (ADS)
Levshina, T.; Guru, A.
2014-06-01
The Open Science Grid infrastructure doesn't provide efficient means to manage the public storage offered by participating sites. A Virtual Organization (VO) that relies on opportunistic storage has difficulties finding appropriate storage, verifying its availability, and monitoring its utilization. The involvement of the production manager, site administrators, and VO support personnel is required to allocate or rescind storage space. One of the main requirements for a Public Storage implementation is that it should use the SRM or GridFTP protocols to access the Storage Elements provided by the OSG sites and not put any additional burden on sites: by policy, no new services related to Public Storage can be installed and run on OSG sites. Opportunistic users also have difficulties in accessing the OSG Storage Elements during the execution of jobs. A typical user's data management workflow includes pre-staging common data on sites before a job's execution, then storing the output data produced by a job on a worker node for subsequent download to the local institution. When the amount of data is significant, the only means to temporarily store the data is to upload it to one of the Storage Elements. In order to do that, a user's job should be aware of the storage location, availability, and free space. After a successful data upload, users must somehow keep track of the data's location for future access. In this presentation we propose solutions for storage management and data handling issues in the OSG. We are investigating the feasibility of using the integrated Rule-Oriented Data System (iRODS) developed at RENCI as a front-end service to the OSG Storage Elements. The current architecture, state of deployment, and performance test results will be discussed. We will also provide examples of current usage of the system by beta users.
Wright, Bernice; Watson, Kimberly A; McGuffin, Liam J; Lovegrove, Julie A; Gibbins, Jonathan M
2015-11-01
Flavonoids reduce cardiovascular disease risk through anti-inflammatory, anti-coagulant and anti-platelet actions. One key flavonoid inhibitory mechanism is blocking kinase activity that drives these processes. Flavonoids attenuate activities of kinases including phosphoinositide-3-kinase, Fyn, Lyn, Src, Syk, PKC, PIM1/2, ERK, JNK and PKA. X-ray crystallographic analyses of kinase-flavonoid complexes show that flavonoid ring systems and their hydroxyl substitutions are important structural features for their binding to kinases. A clearer understanding of structural interactions of flavonoids with kinases is necessary to allow construction of more potent and selective counterparts. We examined flavonoid (quercetin, apigenin and catechin) interactions with Src family kinases (Lyn, Fyn and Hck) applying the Sybyl docking algorithm and GRID. A homology model (Lyn) was used in our analyses to demonstrate that high-quality predicted kinase structures are suitable for flavonoid computational studies. Our docking results revealed potential hydrogen bond contacts between flavonoid hydroxyls and kinase catalytic site residues. Identification of plausible contacts indicated that quercetin formed the most energetically stable interactions, apigenin lacked hydroxyl groups necessary for important contacts and the non-planar structure of catechin could not support predicted hydrogen bonding patterns. GRID analysis using a hydroxyl functional group supported docking results. Based on these findings, we predicted that quercetin would inhibit activities of Src family kinases with greater potency than apigenin and catechin. We validated this prediction using in vitro kinase assays. We conclude that our study can be used as a basis to construct virtual flavonoid interaction libraries to guide drug discovery using these compounds as molecular templates.
A new method for combining live action and computer graphics in stereoscopic 3D
NASA Astrophysics Data System (ADS)
Rupkalvis, John A.; Gillen, Ron
2008-02-01
A primary requirement when elements are to be combined stereoscopically is that homologous points in each eye view of each element have identical parallax separation at any point of interaction. If this is not done, the image parts of one element will appear to be at a different distance from the corresponding or associated parts of the other element, resulting in a visual discontinuity that appears very unnatural. For example, if a live actor were to appear to "shake hands" with a cartoon character, the juncture may appear perfectly natural when seen in 2-D, but their hands may appear to miss when seen in 3-D. Previous efforts to compensate for or correct these errors have involved painstaking, time-consuming trial-and-error tests. In the area of pure animation, efforts were made to make cartoon characters appear more realistic, and a "motion tracking" technique was developed: an actor wears a special suit with indicator marks at various points on the body and walks through the scene, and the animator then tracks the points using motion-capture software. Because live action and CG elements can interact or change at several different points and levels within a scene, additional requirements must also be addressed. "Occlusions" occur when one object passes in front of another: a particular tracking point may appear in one eye-view and not the other. When Z-axis differentials are to be considered in the live action as well as the CG elements, and both are to interact with each other, both eye-views must be tracked, especially at points of occlusion. A new approach is to generate a three-dimensional grid within which the action is to take place. This grid can be projected onto the stage where the live-action part is to take place. When differential occlusions occur, the grid may be seen and CG elements plotted in reference to it. Because points in a digital image can be located precisely, a pixel-accurate virtual model of both the actual and the virtual scene may be matched with extreme accuracy. The metrology of the grid may also be easily changed at any time, not only as to the pitch of the lines but also through the introduction of intentional distortions, such as when a forced perspective is desired. This approach would also include a special parallax indicator, which may be used as a physical generator, such as a bar-generator light, and actually carried in the scene. Parallax indicators can provide instantaneous "readouts" of the parallax at any point on the animator's monitor: as the cursor is moved around the screen, customized software displays the exact parallax at the indicated pixel immediately adjacent to that point. Preferences would allow the choice of keying the point to the left-eye image, the right-eye image, or a point midway in between.
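A minimal sketch of the parallax bookkeeping such an indicator would perform, with made-up pixel coordinates:

```python
def parallax(x_left, x_right):
    """Signed screen parallax in pixels for one homologous point
    (positive = behind the screen plane, negative = in front of it)."""
    return x_right - x_left

# Example: a live actor's hand and a CG character's hand that should touch.
actor_hand = parallax(x_left=812.0, x_right=800.0)   # -12 px, in front of screen
cg_hand = parallax(x_left=805.0, x_right=799.0)      # -6 px, farther back

# A non-zero mismatch means the hands "miss" in depth when seen in 3-D,
# even if they align perfectly in either 2-D eye view.
print("parallax mismatch:", cg_hand - actor_hand, "px")
```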
Lessons Learned during the Development and Operation of Virtual Observatory
NASA Astrophysics Data System (ADS)
Ohishi, M.; Shirasaki, Y.; Komiya, Y.; Mizumoto, Y.; Yasuda, N.; Tanaka, M.
2010-12-01
In the last few years, several Virtual Observatory (VO) projects have moved from the research and development phase to the operations phase. The VO projects include AstroGrid (UK), the Virtual Astronomical Observatory (formerly the National Virtual Observatory, USA), EURO-VO (EU), the Japanese Virtual Observatory (Japan), and so on. This successful transition from the development phase to the operations phase owes primarily to the concerted action to develop standard interfaces among the VO projects of the world, conducted in the International Virtual Observatory Alliance (IVOA). The registry interface has been one of the most important keys to sharing observed data and catalog data among the VO projects and data centers (data providers). Data access protocols and languages (SIAP, SSAP, ADQL) and the common data format (VOTable) are other keys. Consequently, scientific papers based on VO tools can now be found in the literature. However, we faced several issues during the implementation process, as follows:
A Virtual Science Data Environment for Carbon Dioxide Observations
NASA Astrophysics Data System (ADS)
Verma, R.; Goodale, C. E.; Hart, A. F.; Law, E.; Crichton, D. J.; Mattmann, C. A.; Gunson, M. R.; Braverman, A. J.; Nguyen, H. M.; Eldering, A.; Castano, R.; Osterman, G. B.
2011-12-01
Climate science data are often distributed cross-institutionally and made available through heterogeneous interfaces. With respect to observational carbon dioxide (CO2) records, these data span national and international institutions and are typically distributed using a variety of data standards. Such an arrangement can yield challenges from a research perspective, as users often need to independently aggregate datasets as well as address the issue of data quality. To tackle this dispersion and heterogeneity of data, we have developed the CO2 Virtual Science Data Environment - a comprehensive approach to virtually integrating CO2 data and metadata from multiple missions and providing a suite of computational services that facilitate analysis, comparison, and transformation of that data. The Virtual Science Environment provides climate scientists with a unified web-based destination for discovering relevant observational data in context, and supports a growing range of online tools and services for analyzing and transforming the available data to suit individual research needs. It includes web-based tools to geographically and interactively search for CO2 observations collected from multiple airborne, space-based, and terrestrial platforms. Moreover, the data analysis services it provides over the Internet, including techniques such as bias estimation and spatial re-gridding, move computation closer to the data and reduce the complexity of performing these operations repeatedly and at scale. The key to enabling these services, as well as consolidating the disparate data into a unified resource, has been to focus on leveraging metadata descriptors as the foundation of our data environment. This metadata-centric architecture, which leverages the Dublin Core standard, forgoes the need to replicate remote datasets locally. Instead, the system relies upon an extensive, metadata-rich virtual data catalog allowing on-demand browsing and retrieval of CO2 records from multiple missions. In other words, key metadata about remote CO2 records is stored locally while the data itself is preserved at its respective archive of origin. This strategy has been made possible by our method of encapsulating the heterogeneous sources of data using a common set of web-based services, including services provided by the Jet Propulsion Laboratory's Climate Data Exchange (CDX). Furthermore, this strategy has enabled us to scale across missions and to provide access to a broad array of CO2 observational data. Coupled with on-demand computational services and an intuitive web-portal interface, the CO2 Virtual Science Data Environment effectively transforms heterogeneous CO2 records from multiple sources into a unified resource for scientific discovery.
Developing science gateways for drug discovery in a grid environment.
Pérez-Sánchez, Horacio; Rezaei, Vahid; Mezhuyev, Vitaliy; Man, Duhu; Peña-García, Jorge; den-Haan, Helena; Gesing, Sandra
2016-01-01
Methods for in silico screening of large databases of molecules increasingly complement and replace experimental techniques to discover novel compounds to combat diseases. As these techniques become more complex and computationally costly, we face an increasing need to provide the life-sciences research community with a convenient tool for high-throughput virtual screening on distributed computing resources. To this end, we recently integrated the biophysics-based drug-screening program FlexScreen into a service, applicable for large-scale parallel screening and reusable in the context of scientific workflows. Our implementation is based on Pipeline Pilot and the Simple Object Access Protocol (SOAP) and provides an easy-to-use graphical user interface to construct complex workflows, which can be executed on distributed computing resources, thus accelerating the throughput by several orders of magnitude.
Generalized Aggregation and Coordination of Residential Loads in a Smart Community
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, He; Somani, Abhishek; Lian, Jianming
2015-11-02
Flexibility from residential loads presents an enormous potential to provide various services to the smart grid. In this paper, we propose a unified hierarchical framework for the aggregation and coordination of various residential loads in a smart community, such as Thermostatically Controlled Loads (TCLs), Distributed Energy Storages (DESs), residential Pool Pumps (PPs), and Electric Vehicles (EVs). A central idea of this framework is a virtual battery model, which provides a simple and intuitive tool to aggregate the flexibility of distributed loads. Moreover, a multi-stage Nash-bargaining-based coordination strategy is proposed to coordinate different aggregations of residential loads for demand response. Case studies are provided to demonstrate the efficacy of our proposed framework and coordination strategy in managing peak power demand in a smart residential community.
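A minimal sketch of a first-order virtual battery model of the kind referred to above, with illustrative parameter values (not the paper's model or data): the aggregate state of charge x dissipates at rate a and responds to a regulation power u, subject to power and energy limits that summarize the aggregation's flexibility.

```python
import numpy as np

a, dt = 0.05, 1.0 / 60.0        # dissipation rate [1/h], 1-minute time steps
C, P = 10.0, 4.0                # energy capacity [kWh], power limit [kW]

x = 0.0                          # aggregate state of charge [kWh]
for t in range(120):
    u = P * np.sin(2 * np.pi * t / 60.0)   # requested regulation signal
    u = np.clip(u, -P, P)                  # enforce the power limit
    x = np.clip((1 - a * dt) * x + dt * u, -C, C)   # evolve, enforce energy limit

print("final state of charge:", round(float(x), 3), "kWh")
```

A load aggregation that can track any u within these limits behaves, from the grid's point of view, like one battery with parameters (a, C, P).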
2-Dimensional beamsteering using dispersive deflectors and wavelength tuning.
Chan, Trevor; Myslivets, Evgeny; Ford, Joseph E
2008-09-15
We introduce a 2D beamscanner controlled by wavelength tuning. Two passive dispersive devices are aligned orthogonally to deflect the optical beam in two dimensions. We provide a proof-of-principle demonstration by combining an arrayed waveguide grating with a free-space optical grating and using various input sources to characterize the beamscanner. This achieved a discrete 10.3 degree by 11 degree output field of view, with attainable angles on an 8 by 6 grid of directions. The entire range was reached by scanning over a 40 nm wavelength range. We also analyze an improved system combining a virtually imaged phased array with a diffraction grating. This device is much more compact and produces an output scan that is continuous in one direction while remaining discrete in the other.
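As an illustration of how wavelength tuning maps to deflection angle on the free-space grating axis, a sketch using the standard grating equation; the grating parameters below are arbitrary and are not those of the demonstrated system:

```python
import numpy as np

d = 1e-3 / 600.0                       # groove spacing [m] (600 lines/mm)
theta_i = np.deg2rad(30.0)             # incidence angle
wavelengths = np.linspace(1530e-9, 1570e-9, 5)   # a 40 nm sweep in the C-band

# First-order grating equation: sin(theta_m) = m * lambda / d - sin(theta_i)
theta_m = np.arcsin(wavelengths / d - np.sin(theta_i))

print(np.rad2deg(theta_m))             # deflection angle sweeps with wavelength
```

With these numbers, the 40 nm sweep steers the first-order beam by roughly 1.5 degrees; cascading a second, orthogonal disperser turns a single wavelength sweep into a raster over two angular axes.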
Distributed Trust Management for Validating SLA Choreographies
NASA Astrophysics Data System (ADS)
Haq, Irfan Ul; Alnemr, Rehab; Paschke, Adrian; Schikuta, Erich; Boley, Harold; Meinel, Christoph
For business workflow automation in a service-enriched environment such as a grid or a cloud, services scattered across heterogeneous Virtual Organizations (VOs) can be aggregated in a producer-consumer manner, building hierarchical structures of added value. In order to preserve the supply chain, the Service Level Agreements (SLAs) corresponding to the underlying choreography of services should also be incrementally aggregated. This cross-VO hierarchical SLA aggregation requires validation, for which a distributed trust system becomes a prerequisite. Elaborating on our previous work on rule-based SLA validation, we propose a hybrid distributed trust model. The new model is based on Public Key Infrastructure (PKI) and reputation-based trust systems. It helps prevent SLA violations by identifying violation-prone services at the service selection stage, and it actively contributes to breach management at the time of penalty enforcement.
NASA Astrophysics Data System (ADS)
Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.
2012-12-01
Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render, and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer, and site status monitoring, and by showing how they have been ported to different virtual organisations and technologies.
Infrastructures for Distributed Computing: the case of BESIII
NASA Astrophysics Data System (ADS)
Pellegrino, J.
2018-05-01
BESIII is an electron-positron collision experiment hosted at BEPCII in Beijing and aimed at investigating tau-charm physics. BESIII has now been running for several years and has gathered more than 1 PB of raw data. In order to analyze these data and perform massive Monte Carlo simulations, a large amount of computing and storage resources is needed. The distributed computing system is based upon DIRAC and has been in production since 2012. It integrates computing and storage resources from different institutes and a variety of resource types, such as cluster, grid, cloud, and volunteer computing. About 15 sites of the BESIII Collaboration from all over the world have joined this distributed computing infrastructure, giving a significant contribution to the IHEP computing facility. Nowadays cloud computing is playing a key role in the HEP computing field, due to its scalability and elasticity. Cloud infrastructures take advantage of several tools, such as VMDirac, to manage virtual machines through cloud managers according to the job requirements. With the virtually unlimited resources of commercial clouds, the computing capacity can scale accordingly to deal with any burst demand. General computing models are addressed herewith, with particular focus on the BESIII infrastructure; new computing tools and upcoming infrastructures are also discussed.
Joint Video Stitching and Stabilization from Moving Cameras.
Guo, Heng; Liu, Shuaicheng; He, Tong; Zhu, Shuyuan; Zeng, Bing; Gabbouj, Moncef
2016-09-08
In this paper, we extend image stitching to video stitching for videos that are captured of the same scene simultaneously by multiple moving cameras. In practice, videos captured under this circumstance often appear shaky. Directly applying image stitching methods to shaky videos often suffers from strong spatial and temporal artifacts. To solve this problem, we propose a unified framework in which video stitching and stabilization are performed jointly. Specifically, our system takes several overlapping videos as inputs. We estimate both inter motions (between different videos) and intra motions (between neighboring frames within a video). Then, we solve for an optimal virtual 2D camera path from all original paths. An enlarged field of view along the virtual path is finally obtained by a space-temporal optimization that takes both inter and intra motions into consideration. Two important components of this optimization are that (1) a grid-based tracking method is designed for improved robustness, producing features that are distributed evenly within and across multiple views, and (2) a mesh-based motion model is adopted for the handling of scene parallax. Experimental results demonstrate the effectiveness of our approach on various consumer-level videos, and a plugin named "Video Stitcher" has been developed for Adobe After Effects CC 2015 to show the processed videos.
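A toy, one-dimensional sketch of the smooth-virtual-path idea described above; the paper's space-temporal optimization is richer and couples multiple views, and the regularization weight and path below are illustrative only:

```python
import numpy as np

# Solve for a smooth virtual camera path P from a shaky original path C by
# minimizing ||P - C||^2 + lam * ||D2 P||^2, where D2 takes second differences
# (penalizing acceleration of the virtual camera).
rng = np.random.default_rng(2)
n = 200
C = np.cumsum(rng.normal(size=n))       # shaky original 1-D translation path

lam = 100.0
D2 = np.zeros((n - 2, n))
for i in range(n - 2):
    D2[i, i:i + 3] = [1.0, -2.0, 1.0]

# Normal equations of the quadratic objective: (I + lam * D2^T D2) P = C
P = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, C)

# Ratio of second-difference energy before/after: how much shake was removed.
print("shake reduction factor:", np.std(np.diff(C, 2)) / np.std(np.diff(P, 2)))
```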
On the (a)symmetry between the perception of time and space in large-scale environments.
Riemer, Martin; Shine, Jonathan P; Wolbers, Thomas
2018-04-23
Cross-dimensional interference between spatial and temporal processing is well documented in humans, but the direction of these interactions remains unclear. The theory of metaphoric structuring states that space is the dominant concept influencing time perception, whereas time has little effect upon the perception of space. In contrast, theories proposing a common neuronal mechanism representing magnitudes argue for a symmetric interaction between space and time perception. Here, we investigated space-time interactions in realistic, large-scale virtual environments. Our results demonstrate a symmetric relationship between the perception of temporal intervals in the supra-second range and room size (experiment 1), but an asymmetric relationship between the perception of travel time and traveled distance (experiment 2). While the perception of time was influenced by the size of virtual rooms and by the distance traveled within these rooms, time itself affected only the perception of room size, but had no influence on the perception of traveled distance. These results are discussed in the context of recent evidence from rodent studies suggesting that subsets of hippocampal place and entorhinal grid cells can simultaneously code for space and time, providing a potential neuronal basis for the interactions between these domains.
The Added Value of Water Footprint Assessment for National Water Policy: A Case Study for Morocco
Schyns, Joep F.; Hoekstra, Arjen Y.
2014-01-01
A Water Footprint Assessment is carried out for Morocco, mapping the water footprint of different activities at river basin and monthly scale, distinguishing between surface- and groundwater. The paper aims to demonstrate the added value of detailed analysis of the human water footprint within a country and thorough assessment of the virtual water flows leaving and entering a country for formulating national water policy. Green, blue and grey water footprint estimates and virtual water flows are mainly derived from a previous grid-based (5×5 arc minute) global study for the period 1996–2005. These estimates are placed in the context of monthly natural runoff and waste assimilation capacity per river basin derived from Moroccan data sources. The study finds that: (i) evaporation from storage reservoirs is the second largest form of blue water consumption in Morocco, after irrigated crop production; (ii) Morocco’s water and land resources are mainly used to produce relatively low-value (in US$/m3 and US$/ha) crops such as cereals, olives and almonds; (iii) most of the virtual water export from Morocco relates to the export of products with a relatively low economic water productivity (in US$/m3); (iv) blue water scarcity on a monthly scale is severe in all river basins and pressure on groundwater resources by abstractions and nitrate pollution is considerable in most basins; (v) the estimated potential water savings by partial relocation of crops to basins where they consume less water and by reducing water footprints of crops down to benchmark levels are significant compared to demand reducing and supply increasing measures considered in Morocco’s national water strategy.
Visualizing astronomy data using VRML
NASA Astrophysics Data System (ADS)
Beeson, Brett; Lancaster, Michael; Barnes, David G.; Bourke, Paul D.; Rixon, Guy T.
2004-09-01
Visualisation is a powerful tool for understanding the large data sets typical of astronomical surveys and can reveal unsuspected relationships and anomalous regions of parameter space which may be difficult to find programmatically. Visualisation is a classic information technology for optimising scientific return. We are developing a number of generic on-line visualisation tools as a component of the Australian Virtual Observatory project. The tools will be deployed within the framework of the International Virtual Observatory Alliance (IVOA), and follow agreed-upon standards to make them accessible by other programs and people. We and our IVOA partners plan to utilise new information technologies (such as grid computing and web services) to advance the scientific return of existing and future instrumentation. Here we present a new tool - VOlume - which visualises point data. Visualisation of astronomical data normally requires the local installation of complex software, the downloading of potentially large datasets, and very often time-consuming and tedious data format conversions. VOlume enables the astronomer to visualise data using just a web browser and plug-in. This is achieved using IVOA standards which allow us to pass data between Web Services, Java Servlet Technology and Common Gateway Interface programs. Data from a catalogue server can be streamed in eXtensible Mark-up Language format to a servlet which produces Virtual Reality Modeling Language output. The user selects elements of the catalogue to map to geometry and then visualises the result in a browser plug-in such as Cortona or FreeWRL. Other than requiring an input VOTable format file, VOlume is very general. While its major use will likely be to display and explore astronomical source catalogues, it can easily render other important parameter fields such as the sky and redshift coverage of proposed surveys or the sampling of the visibility plane by a rotation-synthesis interferometer.
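A rough sketch of the catalogue-to-geometry step, mapping three made-up table rows to a minimal VRML97 PointSet; the actual servlet consumes VOTable streams rather than Python lists:

```python
# Each row supplies the three columns the user has mapped to x, y, z
# (e.g. two sky coordinates and a redshift-derived distance).
rows = [(0.1, 0.2, 0.3), (0.4, 0.5, 0.6), (0.7, 0.8, 0.9)]

coords = ", ".join("%g %g %g" % r for r in rows)
vrml = (
    "#VRML V2.0 utf8\n"
    "Shape { geometry PointSet { coord Coordinate { point [ %s ] } } }\n"
    % coords
)
print(vrml)   # viewable in a VRML browser plug-in such as FreeWRL
```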
NASA Astrophysics Data System (ADS)
Timm, S.; Cooper, G.; Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Grassano, D.; Tiradani, A.; Krishnamurthy, R.; Vinayagam, S.; Raicu, I.; Wu, H.; Ren, S.; Noh, S.-Y.
2017-10-01
The Fermilab HEPCloud Facility Project has as its goal to extend the current Fermilab facility interface to provide transparent access to disparate resources, including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full-scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics experiments, CMS and NOνA, at the scale of 58000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, the code caching system, and the data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontier-squid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load-testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEPCloud Facility.
RoboLab and virtual environments
NASA Technical Reports Server (NTRS)
Giarratano, Joseph C.
1994-01-01
A useful adjunct to the manned space station would be a self-contained, free-flying laboratory (RoboLab) with a robot operated under telepresence from the space station or the ground. Long-duration experiments aboard RoboLab could be performed by astronauts or scientists using telepresence to operate equipment and perform experiments. Operating the lab by telepresence would eliminate the need for life support such as food, water, and air. The robot would be capable of motion in three dimensions, have binocular-vision TV cameras, and two arms with manipulators to simulate hands. The robot would move along a two-dimensional grid and have a rotating, telescoping periscope section for extension in the third dimension. The remote operator would wear a virtual reality-type headset to allow the superposition of computer displays over real-time video of the lab. The operators would wear exoskeleton-type arms to facilitate the movement of objects and the operation of equipment. The combination of video displays, motion, and the exoskeleton arms would provide a high degree of telepresence, especially for novice users such as scientists doing short-term experiments. The RoboLab could be resupplied, and samples removed, on other space shuttle flights. A self-contained RoboLab module would be designed to fit within the cargo bay of the space shuttle. Different modules could be designed for specific applications, i.e., crystal growing, medicine, life sciences, chemistry, etc. This paper describes a RoboLab simulation using virtual reality (VR). VR provides an ideal simulation of telepresence before the actual robot and laboratory modules are constructed. The easy simulation of different telepresence designs will produce a highly optimized design before construction, rather than requiring more expensive and time-consuming hardware changes afterwards.
Eikonal Tomography of the Southern California Plate Boundary Region
NASA Astrophysics Data System (ADS)
Qiu, H.; Ben-Zion, Y.; Zigone, D.; Lin, F. C.
2016-12-01
We use eikonal tomography to derive directionally-dependent phase velocities of surface waves for the plate boundary region in southern California, sensitive to the approximate depth range 1-20 km. Seismic noise data recorded by 346 stations in the area provide spatial coverage with 5-25 km typical station spacing over the period range 1-20 s. Noise cross-correlations are calculated for vertical-component data recorded in 2014. Rayleigh-wave group and phase travel times between 2 and 13 s period are derived for each station pair using frequency-time analysis. For each common station, all available phase travel time measurements with sufficient signal-to-noise ratio and envelope peak amplitude are used to construct a travel time map for a virtual source at the common station location. By solving the eikonal equation, both phase velocity and propagation direction are evaluated at each location for each virtual source. Isotropic phase velocities and 2-psi azimuthal anisotropy, with their uncertainties, are determined statistically using measurements from different virtual sources. Following the method of Barmin et al. (2001), group velocities are also inverted using all group travel times that pass quality criteria. The obtained group and phase dispersions of Rayleigh waves are then inverted on a 6 × 6 km2 grid for local 1D piecewise shear wave velocity structures using the procedure of Herrmann (2013). The results agree well with previous observations of Zigone et al. (2015) in the overlapping area. Clear velocity contrasts and low-velocity zones are seen for the San Andreas, San Jacinto, Elsinore, and Garlock faults. We also find 2-psi azimuthal anisotropy with fast directions parallel to geometrically simple fault sections. Details and updated results will be presented at the meeting.
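A minimal sketch of the core eikonal step described above, on a synthetic travel-time grid; the grid spacing and velocity are made up and this is not the study's data or code:

```python
import numpy as np

# The eikonal equation |grad T| = 1/c gives the apparent phase speed and the
# propagation direction directly from gradients of the travel-time surface
# measured for one virtual source.
dx = 2.0                                      # grid spacing [km]
x, y = np.meshgrid(np.arange(0, 200, dx), np.arange(0, 200, dx))
T = np.hypot(x - 100, y - 100) / 3.0          # travel time for uniform 3 km/s

Ty, Tx = np.gradient(T, dx)                   # numerical travel-time gradients
slowness = np.hypot(Tx, Ty)
c = 1.0 / np.maximum(slowness, 1e-9)          # apparent phase velocity [km/s]
direction = np.arctan2(Ty, Tx)                # local propagation azimuth

print(np.median(c))                           # ~3 km/s away from the source
```

Repeating this for every virtual source and collecting the velocities measured at each grid node along different azimuths is what allows the isotropic speed and 2-psi anisotropy to be estimated statistically.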
Borrego, Adrián; Latorre, Jorge; Alcañiz, Mariano; Llorens, Roberto
2018-06-01
The latest generation of head-mounted displays (HMDs) provides built-in head tracking, which enables estimating position in a room-size setting. This feature allows users to explore, navigate, and move within real-size virtual environments, such as kitchens, supermarket aisles, or streets. Previously, these actions were commonly facilitated by external peripherals and interaction metaphors. The objective of this study was to compare the Oculus Rift and the HTC Vive in terms of the working range of the head tracking and the working area, accuracy, and jitter in a room-size environment, and to determine their feasibility for serious games, rehabilitation, and health-related applications. The position of the HMDs was registered on a 10 × 10 grid covering an area of 25 m2 at sitting (1.3 m) and standing (1.7 m) heights. Accuracy and jitter were estimated from positional data. The working range was estimated by moving the HMDs away from the cameras until no data were obtained. The HTC Vive provided a working area (24.87 m2) twice as large as that of the Oculus Rift. Both devices showed excellent and comparable performance at sitting height (accuracy up to 1 cm and jitter <0.35 mm), and the HTC Vive presented worse, but still excellent, accuracy and jitter at standing height (accuracy up to 1.5 cm and jitter <0.5 mm). The HTC Vive presented a larger working range (7 m) than did the Oculus Rift (4.25 m). Our results support the use of these devices for real navigation, exploration, exergaming, and motor rehabilitation in virtual reality environments.
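A sketch of how accuracy and jitter can be estimated from repeated position samples at one grid point; the numbers are synthetic and the study's exact pipeline may differ:

```python
import numpy as np

rng = np.random.default_rng(3)
reference = np.array([1.0, 1.3, 2.0])                # known grid position [m]
samples = reference + rng.normal(scale=2e-4, size=(500, 3))   # tracker readings

mean_pos = samples.mean(axis=0)
accuracy = np.linalg.norm(mean_pos - reference)      # systematic offset [m]
jitter = np.linalg.norm(samples - mean_pos, axis=1).std()   # noise spread [m]

print(round(accuracy * 1e3, 3), "mm accuracy;",
      round(jitter * 1e3, 3), "mm jitter")
```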
The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunderam, Vaidy S.
2012-03-20
The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: first, a virtualized command toolkit for application building, deployment, and execution that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and second, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance the portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end user. We have applied this approach to a scientific production code (GAMESS-US) on the Cray XT5 machine. The second facet, termed Unibus, aims to facilitate the provisioning and aggregation of multifaceted resources from the resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems, and computing clouds.
NASA Astrophysics Data System (ADS)
Ritter, Kenneth August, III
Industry has a continuing need to train its workforce on recent engineering developments, but many engineering products and processes are hard to explain because of limitations of size, visibility, time scale, cost, and safety. The product or process might be difficult to see because it is either very large or very small, because it is enclosed within an opaque container, or because it happens very fast or very slowly. Some engineering products and processes are also costly or unsafe to use for training purposes, and sometimes the domain expert is not physically available at the training location. All these limitations can potentially be addressed using advanced visualization techniques such as virtual reality. This dissertation describes the development of an immersive virtual reality application using the Six Sigma DMADV process to explain the main equipment and processes used in a concentrating solar power plant. The virtual solar energy center (VEC) application was initially developed and tested in a Cave Automatic Virtual Environment (CAVE) during 2013 and 2014. The software programs used for development were SolidWorks, 3ds Max Design, and Unity 3D. Current hardware and software technologies that could complement this research were analyzed. The NVIDIA GRID Visual Computing Appliance (VCA) was chosen as the rendering solution for animating complex CAD models in this application. The MiddleVR software toolkit was selected for VR interactions and CAVE display. A non-immersive 3D version of the VEC application was tested and shown to be an effective training tool in late 2015. An immersive networked version of the VEC allows the user to receive live instruction from a trainer projected via depth-camera imagery from a remote location. Four comparative analysis studies were performed. These studies used the average normalized gain from pre-test scores to determine the effectiveness of the various training methods. With the DMADV approach, solutions were identified and verified during each iteration of the development, which saved valuable time and produced better results in each revision of the application; the final version had 88% positive responses and the same effectiveness as the other methods assessed.
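The abstract does not spell out the statistic, but the average normalized gain is conventionally the Hake gain, the fraction of the possible pre-to-post improvement actually achieved; a minimal sketch under that assumption:

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake-style normalized gain: achieved improvement divided by the
    improvement that was still possible given the pre-test score."""
    return (post - pre) / (max_score - pre)

# e.g. a group moving from 40% to 76% realizes 60% of its possible gain
print(normalized_gain(pre=40.0, post=76.0))   # 0.6
```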
Monitoring of IaaS and scientific applications on the Cloud using the Elasticsearch ecosystem
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.
2015-05-01
The private Cloud at the Torino INFN computing centre offers IaaS services to different scientific computing applications. The infrastructure is managed with the OpenNebula cloud controller. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at the LHC, an interactive analysis facility for the same experiment, and a grid Tier-2 site for the BES-III collaboration, plus an increasing number of other small tenants. Besides keeping track of usage, the automation of dynamic allocation of resources to tenants requires detailed monitoring and accounting of resource usage. As a first step towards this, we set up a monitoring system to inspect the site's activities both in terms of IaaS and of the applications running on the hosted virtual instances. For this purpose we used the Elasticsearch, Logstash and Kibana (ELK) stack. In the current implementation, the heterogeneous accounting information is fed to different MySQL databases and sent to Elasticsearch via a custom Logstash plugin. For IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through a purpose-built RESTful web service, which is also used for other accounting purposes. Concerning the application level, we used the ROOT plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BES-III virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. Each of these three cases is indexed separately in Elasticsearch. We are now considering dismissing the intermediate level provided by the SQL databases and evaluating a NoSQL option as a unique central database for all the monitoring information. We set up Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools.
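As an illustration of the kind of per-VM accounting record involved, a sketch using the official Elasticsearch Python client (8.x API assumed); the site actually ingests via a custom Logstash plugin, and the index name and fields below are invented:

```python
from datetime import datetime, timezone
from elasticsearch import Elasticsearch   # official Python client

es = Elasticsearch("http://localhost:9200")

# One IaaS accounting record per virtual machine (all fields illustrative).
record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "tenant": "alice-tier2",
    "vm_id": "one-4242",
    "cpu_hours": 12.5,
    "memory_gb": 8,
}
es.index(index="iaas-accounting", document=record)
```

Once indexed, such records can be aggregated and charted directly from a Kibana dashboard without any intermediate SQL layer.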
SU-E-T-278: Realization of Dose Verification Tool for IMRT Plan Based On DPM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Jinfeng; Cao, Ruifen; Dai, Yumei
Purpose: To build a Monte Carlo dose verification tool for IMRT plans by implementing an irradiation source model in the DPM code, extending DPM's ability to handle arbitrary incident angles and irregular, inhomogeneous fields. Methods: The virtual source and the energy spectrum unfolded from accelerator measurement data were combined with optimized intensity maps to calculate the dose distribution of irregular, inhomogeneous irradiation fields. The irradiation source model of the accelerator was replaced by a grid-based surface source. The contour and the intensity distribution of the surface source were optimized by the ARTS (Accurate/Advanced Radiotherapy System) optimization module based on the tumor configuration. The weight of each emitter was determined by the grid intensity, and its direction by the combination of the virtual source position and the emitter's emitting position. The photon energy spectrum unfolded from the accelerator measurement data was adjusted by compensating for the contaminating electron source. For verification, measured data and a realistic clinical IMRT plan were compared with DPM dose calculations. Results: The regular field was verified by comparison with the measured data, and the differences were acceptable (<2% inside the field, 2-3 mm in the penumbra). The dose calculation of an irregular field by DPM simulation was also compared with that of FSPB (Finite-Size Pencil Beam), and the gamma-analysis passing rate was 95.1% for a peripheral lung cancer case. The regular field and the irregular rotational field were both within the permitted error range. The computing time for regular fields was less than 2 h, and the peripheral lung cancer test took 160 min. Through parallel processing, the adapted DPM could complete the calculation of an IMRT plan within half an hour. Conclusion: The adapted, parallelized DPM code with the irradiation source model is faster than classic Monte Carlo codes. Its computational accuracy and speed satisfy clinical requirements, and it is expected to become a Monte Carlo dose verification tool for IMRT plans. Funding: Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000); National Natural Science Foundation of China (81101132).
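A minimal sketch of the grid-based surface source idea described in the Methods, not the actual DPM/ARTS code; the geometry and intensity map below are made up:

```python
import numpy as np

rng = np.random.default_rng(4)
intensity = rng.random((10, 10))               # optimized intensity map
prob = intensity.ravel() / intensity.sum()

# Emitter start positions are drawn in proportion to the grid intensity,
# so a cell's weight fixes how often particles start there.
cells = rng.choice(intensity.size, size=100_000, p=prob)
iy, ix = np.unravel_index(cells, intensity.shape)

# Each emitter's direction points from the virtual source through its
# start position on the surface-source plane (positions in arbitrary units).
virtual_source = np.array([5.0, 5.0, -100.0])
starts = np.column_stack([ix + 0.5, iy + 0.5, np.zeros_like(ix, float)])
dirs = starts - virtual_source
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

print(dirs[:2])                                # unit direction vectors
```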
Scaled particle theory for bulk and confined fluids: A review
NASA Astrophysics Data System (ADS)
Dong, Wei; Chen, XiaoSong
2018-07-01
More than half a century after its first formulation by Reiss, Frisch and Lebowitz in 1959, scaled particle theory (SPT) has proven its immense usefulness and has become one of the most successful theories in liquid physics. In recent years, we have strived to extend SPT to fluids confined in a variety of random porous matrices. In this article, we present a timely review of these developments. We have endeavored to present a formulation that is pedagogically more accessible than those presented in various original papers, and we hope this benefits newcomers in their research work. We also use more consistent notations for different cases. In addition, we discuss issues that have been scarcely considered in the literature, e.g., the one-fluid structure of SPT due to the isomorphism between the equation of state for a multicomponent fluid and that for a one-component fluid or the pure-confinement scaling relation that provides a connection between a confined and a bulk fluid.
Flow Equation Approach to the Statistics of Nonlinear Dynamical Systems
NASA Astrophysics Data System (ADS)
Marston, J. B.; Hastings, M. B.
2005-03-01
The probability distribution function of non-linear dynamical systems is governed by a linear framework that resembles quantum many-body theory, in which stochastic forcing and/or averaging over initial conditions play the role of a non-zero ℏ. Besides the well-known Fokker-Planck approach, there is a related Hopf functional method [Uriel Frisch, Turbulence: The Legacy of A. N. Kolmogorov (Cambridge University Press, 1995), ch. 9.5]; in both formalisms, zero modes of linear operators describe the stationary non-equilibrium statistics. To access the statistics, we investigate the method of continuous unitary transformations [S. D. Glazek and K. G. Wilson, Phys. Rev. D 48, 5863 (1993); Phys. Rev. D 49, 4214 (1994)], also known as the flow equation approach [F. Wegner, Ann. Phys. 3, 77 (1994)], suitably generalized to the diagonalization of non-Hermitian matrices. Comparison to the more traditional cumulant expansion method is illustrated with low-dimensional attractors. The treatment of high-dimensional dynamical systems is also discussed.
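For concreteness, the flow equation in question (standard form, as in the Wegner reference above) evolves the Hamiltonian by a continuous transformation generated by η(ℓ); the non-Hermitian case discussed in the abstract relaxes the unitarity of this flow.

```latex
% Wegner flow equation with the canonical generator, where
% H_d(\ell) denotes the diagonal part of H(\ell):
\frac{dH(\ell)}{d\ell} = [\eta(\ell), H(\ell)],
\qquad
\eta(\ell) = [H_{\mathrm{d}}(\ell), H(\ell)].
```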
Li, Jinjing; Sologon, Denisa Maria
2014-01-01
This paper advances a structural inter-temporal model of labour supply that is able to simulate the dynamics of labour supply in a continuous setting and addresses two main drawbacks of most existing models. The first is the inability to incorporate individual heterogeneity, as every agent shares the same utility-function parameters. The second is the strong assumption that individuals make decisions in a world of perfect certainty. Essentially, this paper offers an extension of marginal-utility-of-wealth-constant labour supply functions, known as “Frisch functions”, under certainty and uncertainty with homogeneous and heterogeneous preferences. The lifetime models based on fixed-effect vector decomposition yield the most stable simulation results under both certain and uncertain future wage assumptions. Due to its improved accuracy and stability, this lifetime labour supply model is particularly suitable for enhancing the performance of life-cycle simulation models, thus providing a better reference for policymaking. PMID:25391021
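For readers unfamiliar with the term, a Frisch function (textbook definition, not quoted from the paper) is the labour supply obtained by holding the marginal utility of wealth λ fixed in the worker's first-order condition:

```latex
% With period utility u(c_t, h_t), wage w_t, and multiplier \lambda on
% the lifetime budget constraint, optimal hours satisfy
-\,u_{h}(c_t, h_t) = \lambda\, w_t
\quad\Longrightarrow\quad
h_t = h^{F}(w_t, \lambda),
% the \lambda-constant ("Frisch") labour supply function.
```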
Schluessel, V; Rick, I P; Plischke, K
2014-11-01
Despite convincing data collected by microspectrophotometry and molecular biology suggesting that sharks are colourblind cone monochromats, the question of whether sharks can perceive colour had not been conclusively resolved in the absence of behavioural experiments compensating for the confounding factor of brightness. The present study tested the ability of juvenile grey bamboo sharks to perceive colour in an experimental design based on a paradigm established by Karl von Frisch, using colours in combination with grey distractor stimuli of equal brightness. Results showed that contrasts, but not colours, could be discriminated. Blue and yellow stimuli were not distinguished from a grey distractor stimulus of equal brightness but could be distinguished from distractor stimuli of varying brightness. In addition, different grey stimuli were distinguished from one another significantly above chance level. In conclusion, the behavioural results support the previously collected physiological data on bamboo sharks, which together show that the grey bamboo shark, like several marine mammals, is a cone monochromat and colourblind.
NPSS on NASA's IPG: Using CORBA and Globus to Coordinate Multidisciplinary Aeroscience Applications
NASA Technical Reports Server (NTRS)
Lopez, Isaac; Follen, Gregory J.; Gutierrez, Richard; Naiman, Cynthia G.; Foster, Ian; Ginsburg, Brian; Larsson, Olle; Martin, Stuart; Tuecke, Steven; Woodford, David
2000-01-01
Within NASA's High Performance Computing and Communication (HPCC) program, the NASA Glenn Research Center is developing an environment for the analysis/design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. To this end, NPSS integrates multiple disciplines such as aerodynamics, structures, and heat transfer and supports "numerical zooming" between 0-dimensional and 1-, 2-, and 3-dimensional component engine codes. In order to facilitate the timely and cost-effective capture of complex physical processes, NPSS uses object-oriented technologies such as C++ objects to encapsulate individual engine components and CORBA ORBs for object communication and deployment across heterogeneous computing platforms. Recently, the HPCC program has initiated a concept called the Information Power Grid (IPG), a virtual computing environment that integrates computers and other resources at different sites. IPG implements a range of Grid services such as resource discovery, scheduling, security, instrumentation, and data access, many of which are provided by the Globus toolkit. IPG facilities have the potential to benefit NPSS considerably. For example, NPSS should in principle be able to use Grid services to discover dynamically and then co-schedule the resources required for a particular engine simulation, rather than relying on manual placement of ORBs as at present. Grid services can also be used to initiate simulation components on parallel computers (MPPs) and to address inter-site security issues that currently hinder the coupling of components across multiple sites. These considerations led NASA Glenn and Globus project personnel to formulate a collaborative project designed to evaluate whether and how benefits such as those just listed can be achieved in practice. This project involves, first, the development of the basic techniques required to achieve co-existence of commodity object technologies and Grid technologies; and second, the evaluation of these techniques in the context of NPSS-oriented challenge problems. The work on basic techniques seeks to understand how "commodity" technologies (CORBA, DCOM, Excel, etc.) can be used in concert with specialized "Grid" technologies (for security, MPP scheduling, etc.). In principle, this coordinated use should be straightforward because of the Globus and IPG philosophy of providing low-level Grid mechanisms that can be used to implement a wide variety of application-level programming models. (Globus technologies have previously been used to implement Grid-enabled message-passing libraries, collaborative environments, and parameter study tools, among others.) Results obtained to date are encouraging: we have successfully demonstrated a CORBA-to-Globus resource manager gateway that allows the use of CORBA RPCs to control submission and execution of programs on workstations and MPPs; a gateway from the CORBA Trader service to the Grid information service; and a preliminary integration of CORBA and Grid security mechanisms. The two challenge problems that we consider are the following: 1) Desktop-controlled parameter study. Here, an Excel spreadsheet is used to define and control a CFD parameter study, via a CORBA interface to a high-throughput broker that runs individual cases on different IPG resources. 2) Aviation safety.
Here, about 100 NPSS jobs need to be submitted and run, with data returned in near real time. Evaluation will address such issues as time to port, execution time, potential scalability of the simulation, and reliability of resources. The full paper will present the following information: 1. A detailed analysis of the requirements that NPSS applications place on IPG. 2. A description of the techniques used to meet these requirements via the coordinated use of CORBA and Globus. 3. A description of results obtained to date on the two challenge problems.
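The first challenge problem amounts to farming independent cases out through a high-throughput broker. The sketch below is a generic stand-in, assuming a hypothetical `run_case` in place of the CORBA RPC that launches a case on an IPG resource; it shows the dispatch pattern only, not the NPSS/Globus code.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_case(params):
    # Hypothetical stand-in for the CORBA RPC that runs one CFD case
    # on an IPG resource and returns its figure of merit.
    return sum(params.values())

def parameter_study(cases, max_workers=8):
    """Dispatch independent parameter cases concurrently and collect
    results in submission order, as the spreadsheet-driven broker does."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(run_case, c): i for i, c in enumerate(cases)}
        for f in as_completed(futures):
            results[futures[f]] = f.result()
    return [results[i] for i in range(len(cases))]

print(parameter_study([{"mach": 0.8, "aoa": a} for a in range(4)]))
```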
Multicore job scheduling in the Worldwide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Forti, A.; Pérez-Calero Yzquierdo, A.; Hartmann, T.; Alef, M.; Lahiff, A.; Templon, J.; Dal Pra, S.; Gila, M.; Skipsey, S.; Acosta-Silva, C.; Filipcic, A.; Walker, R.; Walker, C. J.; Traynor, D.; Gadrat, S.
2015-12-01
After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015, with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated under such conditions and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resource requirements will be distributed across the Worldwide LHC Computing Grid (WLCG), making workload scheduling a complex problem in itself. In response to this challenge, the WLCG Multicore Deployment Task Force was created to coordinate the joint effort of experiments and WLCG sites. Its main objective is to ensure that the approaches of the different LHC Virtual Organizations (VOs) converge, so as to make the best use of the shared resources and satisfy their new computing needs, minimizing any inefficiency originating from the scheduling mechanisms and without imposing unnecessary complexity on the way sites manage their resources. This paper describes the activities and progress of the Task Force on these topics, including experiences from key sites on how best to use different batch-system technologies, the evolution of workload submission tools by the experiments, and the knowledge gained from scale tests of the different proposed job submission strategies.
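One scheduling difficulty behind this work is core fragmentation: a steady stream of single-core jobs can starve multicore ones. The toy sketch below (illustrative only; real batch systems use far richer policies such as node draining and backfill) shows why placing multicore jobs first helps.

```python
def schedule(free_cores, jobs):
    """Greedy first-fit placement of jobs onto nodes; multicore jobs go
    first so they are not blocked by core fragmentation."""
    placement = {}
    for jid, cores in sorted(jobs.items(), key=lambda kv: -kv[1]):
        for node, free in free_cores.items():
            if free >= cores:
                free_cores[node] = free - cores
                placement[jid] = node
                break
    return placement

nodes = {"wn01": 8, "wn02": 8}
jobs = {"reco-1": 8, "sim-1": 1, "sim-2": 1}
print(schedule(nodes, jobs))  # the 8-core job fills one node; the
                              # single-core jobs pack into the other
```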
An automated calibration method for non-see-through head mounted displays.
Gilson, Stuart J; Fitzgibbon, Andrew W; Glennerster, Andrew
2011-08-15
Accurate calibration of a head mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet existing calibration methods are time consuming, depend on human judgements (making them error prone), and are often limited to optical see-through HMDs. Building on our existing approach to HMD calibration (Gilson et al., 2008), we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside an HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in multiple positions. The centroids of the markers on the calibration object are recovered and their locations re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the HMD display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors without the need for error-prone human judgements. Copyright © 2011 Elsevier B.V. All rights reserved.
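Once the marker centroids have been re-expressed in the HMD-grid frame, the final step is a standard camera calibration. The sketch below uses OpenCV's generic routine as a stand-in for the "established camera calibration techniques" the abstract mentions; variable names and data layout are assumptions, not the authors' pipeline.

```python
import numpy as np
import cv2

def calibrate_hmd(obj_points, img_points, image_size):
    """Recover intrinsics and per-view extrinsics from correspondences.

    obj_points : list of (N, 3) float32 arrays, marker positions
                 re-expressed in the HMD-grid frame
    img_points : list of (N, 2) float32 arrays, marker centroids
                 recovered from the camera images
    image_size : (width, height) of the captured images in pixels
    """
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    # K[0, 0] and K[1, 1] are the focal lengths; K[:2, 2] the optic centre.
    return rms, K, dist, rvecs, tvecs
```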
Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Andrade, P.; Babik, M.; Bhatt, K.; Chand, P.; Collados, D.; Duggal, V.; Fuente, P.; Hayashi, S.; Imamagic, E.; Joshi, P.; Kalmady, R.; Karnani, U.; Kumar, V.; Lapka, W.; Quick, R.; Tarragon, J.; Teige, S.; Triantafyllidis, C.
2012-12-01
The journey of a monitoring probe from its development phase to the moment its execution result appears in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO[1] managers, service managers, management), from different middleware providers (ARC[2], dCache[3], gLite[4], UNICORE[5] and VDT[6]), consortiums (WLCG[7], EMI[11], EGI[15], OSG[13]), and operational teams (GOC[16], OMB[8], OTAG[9], CSIRT[10]). The seamless harmonization of these distributed actors is exercised daily in the monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to its visualization on the MyWLCG[27] portal, where it is exposed to other clients. This monitoring workflow profits from the interoperability established between the SAM[19] and RSV[20] frameworks. We show how these two distributed structures are capable of uniting technologies and hiding the complexity around them, making them easy for the community to use. Finally, the different supported deployment strategies, tailored not only for monitoring the entire infrastructure but also for monitoring sites and virtual organizations, are presented and the associated operational benefits highlighted.
Fairbank, Michael; Li, Shuhui; Fu, Xingang; Alonso, Eduardo; Wunsch, Donald
2014-01-01
We present a recurrent neural-network (RNN) controller designed to solve the tracking problem for control systems. We demonstrate that a major difficulty in training any RNN is the problem of exploding gradients, and we propose a solution for tracking problems by introducing a stabilization matrix and using carefully constrained context units. This solution allows us to achieve consistently lower training errors and hence to introduce adaptive capabilities more easily. The resulting RNN has been trained off-line to be rapidly adaptive to changing plant conditions and changing tracking targets. The case study we use is a renewable-energy generator application: producing an efficient controller for a three-phase grid-connected converter. The controller we produce can cope with random variation of system parameters and fluctuating grid voltages. It produces tracking control with almost instantaneous response to changing reference states, and virtually zero oscillation. This compares very favorably to classical proportional-integral (PI) controllers, which we show produce a much slower response and settling time. In addition, the proposed RNN exhibits better learning stability and convergence properties, and can adapt faster, than has been achieved with adaptive critic designs. Copyright © 2013 Elsevier Ltd. All rights reserved.
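The paper's stabilization matrix is specific to the tracking setting; the snippet below instead shows the generic remedy for exploding gradients, global-norm clipping, purely for orientation (it is not the authors' method).

```python
import numpy as np

def clip_gradient(grad, max_norm=1.0):
    """Rescale the gradient whenever its norm exceeds max_norm; a
    standard guard against exploding gradients in RNN training."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad
```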
Saletti, Dominique
2017-01-01
Rapid progress in ultra-high-speed imaging has allowed material properties to be studied at high strain rates by applying full-field measurements and inverse identification methods. Nevertheless, the sensitivity of these techniques still requires better understanding, since various extrinsic factors present during an actual experiment make it difficult to separate the different sources of error that can significantly affect the quality of the identified results. This study presents a methodology using simulated experiments to investigate the accuracy of the so-called spalling technique (used to study the tensile properties of concrete subjected to high strain rates) by numerically simulating the entire identification process. The experimental technique uses the virtual fields method and the grid method. The methodology consists of reproducing the recording process of an ultra-high-speed camera by generating sequences of synthetically deformed images of a sample surface, which are then analysed using the standard tools. The uncertainty of the identified parameters, such as Young's modulus and the stress–strain constitutive response, is investigated by varying the most significant user-dependent parameters (i.e. acquisition speed, camera dynamic range, grid sampling, blurring), showing that the technique can be an effective tool for error investigation. This article is part of the themed issue ‘Experimental testing and modelling of brittle materials at high strain rates’. PMID:27956505
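The core of the simulated-experiment pipeline is generating synthetically deformed grid images. A minimal sketch of that step follows, assuming a crossed-grid pattern and a simple uniaxial displacement field; the paper's image-formation model (camera dynamic range, blurring, etc.) is not reproduced here.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def grid_image(shape, pitch=10):
    """Synthetic crossed-grid pattern with grey levels in [0, 1]."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return 0.5 + 0.25 * (np.cos(2 * np.pi * x / pitch)
                         + np.cos(2 * np.pi * y / pitch))

def deform(img, ux, uy):
    """Warp the reference image by displacement fields (ux, uy): the
    deformed image samples the reference at x - u."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    return map_coordinates(img, [y - uy, x - ux], order=3, mode="reflect")

ref = grid_image((128, 128))
ux = 0.02 * np.arange(128)[None, :].repeat(128, axis=0)  # e_xx = 0.02
deformed = deform(ref, ux, np.zeros_like(ux))
```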
NASA Astrophysics Data System (ADS)
Balcas, J.; Bockelman, B.; Gardner, R., Jr.; Hurtado Anampa, K.; Jayatilaka, B.; Aftab Khan, F.; Lannon, K.; Larson, K.; Letts, J.; Marra Da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.
2017-10-01
The CMS experiment collects and analyzes large amounts of data coming from high-energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled on batch-oriented platforms. The CMS Global Pool of computing resources provides more than 100K dedicated CPU cores, plus another 50K to 100K CPU cores from opportunistic resources, for these kinds of tasks. Even though production and event-processing analysis workflows are already managed by existing tools, there is still a lack of support for submitting final-stage, Condor-like analysis jobs, familiar to Tier-3 or local computing facility users, into these distributed resources in a way that is integrated with other CMS services and friendly to the user. CMS Connect is a set of computing tools and services designed to augment existing services in the CMS physics community, focusing on this kind of Condor analysis job. It is based on the CI-Connect platform developed by the Open Science Grid and uses the CMS GlideinWMS infrastructure to transparently plug CMS global grid resources into a virtual pool accessed via a single submission machine. This paper describes the developments and deployment of CMS Connect beyond the CI-Connect platform needed to integrate the service with CMS-specific needs, including site-specific submission, job accounting, and automated reporting to standard CMS monitoring resources, in a way that is effortless for users.
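From the user's side, the jobs CMS Connect targets look like ordinary HTCondor submissions. The sketch below uses the public HTCondor Python bindings with a hypothetical user script; the CMS Connect-specific site routing, accounting, and monitoring hooks are not shown, and the exact binding API should be checked against the installed version.

```python
import htcondor  # HTCondor Python bindings

# Describe a small cluster of Condor-style analysis jobs; the
# executable name is a hypothetical user script.
sub = htcondor.Submit({
    "executable": "run_analysis.sh",
    "arguments": "$(ProcId)",
    "output": "out.$(ProcId)",
    "error": "err.$(ProcId)",
    "log": "analysis.log",
})

schedd = htcondor.Schedd()              # the single submission machine
result = schedd.submit(sub, count=10)   # ten jobs into the virtual pool
print("submitted cluster", result.cluster())
```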
Network Computing Infrastructure to Share Tools and Data in Global Nuclear Energy Partnership
NASA Astrophysics Data System (ADS)
Kim, Guehee; Suzuki, Yoshio; Teshima, Naoya
CCSE/JAEA (Center for Computational Science and e-Systems/Japan Atomic Energy Agency) integrated a prototype network computing infrastructure for sharing tools and data to support the U.S.-Japan collaboration in GNEP (Global Nuclear Energy Partnership). We focused on three technical issues in applying our information-processing infrastructure: accessibility, security, and usability. In designing the prototype system, we integrated and improved both network and Web technologies. For accessibility, we adopted SSL-VPN (Secure Socket Layer-Virtual Private Network) technology for access beyond firewalls. For security, we developed an authentication gateway based on the PKI (Public Key Infrastructure) authentication mechanism, set fine-grained access-control policies for shared tools and data, and used a shared-key encryption method to protect tools and data against leakage to third parties. For usability, we chose Web browsers as the user interface and developed a Web application providing functions to support the sharing of tools and data. Using the WebDAV (Web-based Distributed Authoring and Versioning) function, users can manipulate shared tools and data through a Windows-like folder environment. We implemented the prototype system on AEGIS (Atomic Energy Grid Infrastructure), the Grid infrastructure for atomic energy research developed by CCSE/JAEA. The prototype system was applied for trial use in the first period of GNEP.
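The WebDAV layer mentioned above boils down to standard HTTP extension methods. A minimal illustration, with a purely hypothetical endpoint and credentials, is a PROPFIND request listing a shared folder:

```python
import requests

def list_shared(url, user, password):
    """Issue a WebDAV PROPFIND (Depth: 1) to enumerate a shared folder;
    this is the operation behind the folder-like browsing view."""
    r = requests.request(
        "PROPFIND", url,
        headers={"Depth": "1"},
        auth=(user, password),  # the gateway would add PKI-based checks
    )
    r.raise_for_status()
    return r.text  # WebDAV multistatus XML describing the collection

# Example (hypothetical server):
# print(list_shared("https://aegis.example.jp/dav/tools/", "alice", "pw"))
```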