Storage Media for Microcomputers.
ERIC Educational Resources Information Center
Trautman, Rodes
1983-01-01
Reviews computer storage devices designed to provide additional memory for microcomputers--chips, floppy disks, hard disks, optical disks--and describes how secondary storage is used (file transfer, formatting, ingredients of incompatibility); disk/controller/software triplet; magnetic tape backup; storage volatility; disk emulator; and…
Design and implementation of reliability evaluation of SAS hard disk based on RAID card
NASA Astrophysics Data System (ADS)
Ren, Shaohua; Han, Sen
2015-10-01
Because of its huge advantages in storage, RAID technology has been widely used. The drawback, however, is that a hard disk behind a RAID card cannot be queried by the operating system. Reading the self-reported information and log data of the disk therefore becomes a problem, even though these data are necessary for hard disk reliability testing. Traditional methods can read this information only from SATA hard disks, not from SAS hard disks. In this paper, we provide a method that uses the LSI RAID card's Application Program Interface to communicate with the RAID card and analyze the returned data, thereby obtaining the information needed to assess the SAS hard disk.
Disk Memories: What You Should Know before You Buy Them.
ERIC Educational Resources Information Center
Bursky, Dave
1981-01-01
Explains the basic features of floppy disk and hard disk computer storage systems and the purchasing decisions which must be made, particularly in relation to certain popular microcomputers. A disk vendors directory is included. Journal availability: Hayden Publishing Company, 50 Essex Street, Rochelle Park, NJ 07662. (SJL)
A Comprehensive Study on Energy Efficiency and Performance of Flash-based SSD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Seon-Yeon; Kim, Youngjae; Urgaonkar, Bhuvan
2011-01-01
Use of flash memory as a storage medium is becoming popular in diverse computing environments. However, because of differences in interface, flash memory requires a hard-disk-emulation layer, called the FTL (flash translation layer). Although the FTL enables flash memory storage to replace conventional hard disks, it induces significant computational and space overhead. Despite the low power consumption of flash memory, this overhead leads to significant power consumption in the overall storage system. In this paper, we analyze the characteristics of flash-based storage devices from the viewpoint of power consumption and energy efficiency by using various methodologies. First, we utilize simulation to investigate the interior operation of flash-based storages. Subsequently, we measure the performance and energy efficiency of commodity flash-based SSDs by using microbenchmarks to identify their block-device-level characteristics and macrobenchmarks to reveal their filesystem-level characteristics.
Recent Cooperative Research Activities of HDD and Flexible Media Transport Technologies in Japan
NASA Astrophysics Data System (ADS)
Ono, Kyosuke
This paper presents the recent status of industry-university cooperative research activities in Japan on the mechatronics of information storage and input/output equipment. There are three research committees for promoting information exchange on technical problems and research topics of the head-disk interface in hard disk drives (HDD), flexible media transport, and image printing processes, which are supported by the Japan Society of Mechanical Engineers (JSME), the Japanese Society of Tribologists (JAST) and the Japan Society of Precision Engineering (JSPE). For hard disk drive technology, the Storage Research Consortium (SRC) is supporting more than 40 research groups at various universities to perform basic research for future HDD technology. The past and present statuses of these activities are introduced, particularly focusing on HDD and flexible media transport mechanisms.
Economic impact of off-line PC viewer for private folder management
NASA Astrophysics Data System (ADS)
Song, Koun-Sik; Shin, Myung J.; Lee, Joo Hee; Auh, Yong H.
1999-07-01
We developed a PC-based clinical workstation and implemented it at Asan Medical Center in Seoul, Korea. The hardware used was a Pentium-II with 8 MB of video memory, 64-128 MB of RAM, a 19-inch color monitor, and a 10/100 Mbps network adapter. One of the unique features of this workstation is a management tool for folders residing both in the PACS short-term storage unit and on the local hard disk. Users can copy an entire study or part of a study to the local hard disk, removable storage, or a CD recorder. Even the images in private folders in PACS short-term storage can be copied to local storage devices. All images are saved in DICOM 3.0 file format with 2:1 lossless compression. We compared the prices of copy films and storage media, considering the possible savings in expensive PACS short-term storage and network traffic. The price saving on copy film is most remarkable for MR exams. The price saving arising from minimal use of the short-term unit was 50,000 dollars. It was hard to calculate the price savings arising from network usage. An off-line PC viewer is a cost-effective way of handling private folder management in the PACS environment.
Time-resolved scanning Kerr microscopy of flux beam formation in hard disk write heads
NASA Astrophysics Data System (ADS)
Valkass, Robert A. J.; Spicer, Timothy M.; Burgos Parra, Erick; Hicken, Robert J.; Bashir, Muhammad A.; Gubbins, Mark A.; Czoschke, Peter J.; Lopusnik, Radek
2016-06-01
To meet growing data storage needs, the density of data stored on hard disk drives must increase. In pursuit of this aim, the magnetodynamics of the hard disk write head must be characterized and understood, particularly the process of "flux beaming." In this study, seven different configurations of perpendicular magnetic recording (PMR) write heads were imaged using time-resolved scanning Kerr microscopy, revealing their detailed dynamic magnetic state during the write process. It was found that the precise position and number of driving coils can significantly alter the formation of flux beams during the write process. These results are applicable to the design and understanding of current PMR and next-generation heat-assisted magnetic recording devices, as well as being relevant to other magnetic devices.
PCM-Based Durable Write Cache for Fast Disk I/O
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zhuo; Wang, Bin; Carpenter, Patrick
2012-01-01
Flash based solid-state devices (FSSDs) have been adopted within the memory hierarchy to improve the performance of hard disk drive (HDD) based storage system. However, with the fast development of storage-class memories, new storage technologies with better performance and higher write endurance than FSSDs are emerging, e.g., phase-change memory (PCM). Understanding how to leverage these state-of-the-art storage technologies for modern computing systems is important to solve challenging data intensive computing problems. In this paper, we propose to leverage PCM for a hybrid PCM-HDD storage architecture. We identify the limitations of traditional LRU caching algorithms for PCM-based caches, and develop a novel hash-based write caching scheme called HALO to improve random write performance of hard disks. To address the limited durability of PCM devices and solve the degraded spatial locality in traditional wear-leveling techniques, we further propose novel PCM management algorithms that provide effective wear-leveling while maximizing access parallelism. We have evaluated this PCM-based hybrid storage architecture using applications with a diverse set of I/O access patterns. Our experimental results demonstrate that the HALO caching scheme leads to an average reduction of 36.8% in execution time compared to the LRU caching scheme, and that the SFC wear leveling extends the lifetime of PCM by a factor of 21.6.
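The abstract does not detail HALO's internals, but the general idea of a hash-based write cache can be sketched: group incoming random writes by a hash of their block address so each bucket absorbs rewrites and is flushed to disk as a near-sequential batch. A minimal illustrative sketch follows; the class name, bucketing policy, and thresholds are assumptions, not the paper's algorithm.

```python
# Minimal sketch of a hash-bucketed write cache (illustrative only;
# NOT the HALO algorithm from the paper).

class HashWriteCache:
    def __init__(self, num_buckets=4, flush_threshold=3):
        self.num_buckets = num_buckets
        self.flush_threshold = flush_threshold
        # each bucket maps block address -> latest data for that block
        self.buckets = [dict() for _ in range(num_buckets)]
        self.flushed = []  # stands in for the backing hard disk

    def write(self, block_addr, data):
        bucket = self.buckets[hash(block_addr) % self.num_buckets]
        bucket[block_addr] = data  # rewrites are absorbed in the cache
        if len(bucket) >= self.flush_threshold:
            self._flush(bucket)

    def _flush(self, bucket):
        # flush one bucket in address order, turning scattered random
        # writes into a near-sequential batch on the disk
        for addr in sorted(bucket):
            self.flushed.append((addr, bucket[addr]))
        bucket.clear()

cache = HashWriteCache()
for addr in [17, 5, 9, 33, 21]:
    cache.write(addr, f"data-{addr}")
```

Flushing a sorted bucket is what converts the random-write pattern into something closer to the sequential access hard disks prefer.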
NASA Astrophysics Data System (ADS)
Holland, S. Douglas
1992-09-01
A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.
The Stoner-Wohlfarth Model of Ferromagnetism
ERIC Educational Resources Information Center
Tannous, C.; Gieraltowski, J.
2008-01-01
The Stoner-Wohlfarth (SW) model is the simplest model that describes adequately the physics of fine magnetic grains, the magnetization of which can be used in digital magnetic storage (floppies, hard disks and tapes). Magnetic storage density is presently increasing steadily in almost the same way as electronic device size and circuitry are…
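For readers unfamiliar with the model, its standard textbook form (recalled here in common notation, not quoted from the article) considers a single-domain grain with uniaxial anisotropy constant K and saturation magnetization M_s, with the magnetization at angle θ to the easy axis and the applied field H at angle ψ to the easy axis:

```latex
% Energy per unit volume of a single-domain Stoner-Wohlfarth grain
% (SI units):
E(\theta) = K \sin^2\theta - \mu_0 M_s H \cos(\psi - \theta)

% Reduced switching field on the Stoner-Wohlfarth astroid, with
% h = H/H_K and anisotropy field H_K = 2K/(\mu_0 M_s):
h_{\mathrm{sw}}(\psi) = \left( \cos^{2/3}\psi + \sin^{2/3}\psi \right)^{-3/2}
```

The switching-field astroid is what links the model to recording: it sets the write field a head must supply to reverse a grain.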
Automated Camouflage Pattern Generation Technology Survey.
1985-08-07
supported by high speed data communications? Costs: What are your rates? $/CPU hour: $/MB disk storage/day: $/connect hour: other charges: What are your... data to the workstation, tape drives are needed for backing up and archiving completed patterns, 256 megabytes of on-line hard disk space as a minimum...is needed to support multiple processes and data files, and 4 megabytes of actual or virtual memory is needed to process the largest expected single
Code of Federal Regulations, 2012 CFR
2012-10-01
..., the following definitions apply to this subchapter: Act means the Social Security Act. ANSI stands for... required documents. Electronic media means: (1) Electronic storage media including memory devices in computers (hard drives) and any removable/transportable digital memory medium, such as magnetic tape or disk...
Code of Federal Regulations, 2011 CFR
2011-10-01
..., the following definitions apply to this subchapter: Act means the Social Security Act. ANSI stands for... required documents. Electronic media means: (1) Electronic storage media including memory devices in computers (hard drives) and any removable/transportable digital memory medium, such as magnetic tape or disk...
Code of Federal Regulations, 2010 CFR
2010-10-01
..., the following definitions apply to this subchapter: Act means the Social Security Act. ANSI stands for... required documents. Electronic media means: (1) Electronic storage media including memory devices in computers (hard drives) and any removable/transportable digital memory medium, such as magnetic tape or disk...
Evolution of Archival Storage (from Tape to Memory)
NASA Technical Reports Server (NTRS)
Ramapriyan, Hampapuram K.
2015-01-01
Over the last three decades, there has been a significant evolution in storage technologies supporting the archival of remote sensing data. This section provides a brief survey of how these technologies have evolved. Three main technologies are considered: tape, hard disk, and solid state disk. Their historical evolution is traced, summarizing how reductions in cost have made it possible to store larger volumes of data on faster media. The cost per GB of media is only one of the considerations in determining the best approach to archival storage. Active archives generally require faster response to user requests for data than permanent archives. Archive costs must also cover facilities and other capital costs, operations costs, software licenses, utilities costs, etc. For meeting requirements in any organization, typically a mix of technologies is needed.
NASA Astrophysics Data System (ADS)
Fontana, Robert E.; Decad, Gary M.
2018-05-01
This paper describes trends in the storage technologies associated with Linear Tape Open (LTO) tape cartridges, hard disk drives (HDD), and NAND-flash-based storage devices including solid-state drives (SSD). The discussion centers on the relationship between cost/bit and bit density, and specifically on how the Moore's Law perception that areal density doubles and cost/bit halves every two years is no longer being realized for storage components. This observation and a Moore's Law discussion are demonstrated with data from 9-year storage technology trends assembled from publicly available industry reporting sources.
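The doubling expectation discussed above compounds geometrically; a small arithmetic illustration (the function name is mine, purely illustrative):

```python
# Compounded growth under the "areal density doubles every two years"
# expectation that the paper reports is no longer being met for
# storage components.
def density_factor(years, doubling_period=2.0):
    """Multiplicative areal-density gain after `years` years."""
    return 2.0 ** (years / doubling_period)

# Over a 9-year trend window the expectation implies 2**4.5, roughly
# a 22.6x density gain, with cost/bit falling by the inverse factor.
gain = density_factor(9)
```

A shortfall from this curve is exactly what the paper's cost/bit data make visible.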
Security of patient data when decommissioning ultrasound systems.
Moggridge, James
2017-02-01
Although ultrasound systems generally archive to Picture Archiving and Communication Systems (PACS), their archiving workflow typically involves storage to an internal hard disk before data are transferred onwards. Deleting records from the local system will delete entries in the database and from the file allocation table or equivalent but, as with a PC, files can be recovered. Great care is taken with disposal of media from a healthcare organisation to prevent data breaches, but ultrasound systems are routinely returned to lease companies, sold on or donated to third parties without such controls. In this project, five methods of hard disk erasure were tested on nine ultrasound systems being decommissioned: the system's own delete function; full reinstallation of system software; the manufacturer's own disk wiping service; open source disk wiping software for full and just blank space erasure. Attempts were then made to recover data using open source recovery tools. All methods deleted patient data as viewable from the ultrasound system and from browsing the disk from a PC. However, patient identifiable data (PID) could be recovered following the system's own deletion and the reinstallation methods. No PID could be recovered after using the manufacturer's wiping service or the open source wiping software. The typical method of reinstalling an ultrasound system's software may not prevent PID from being recovered. When transferring ownership, care should be taken that an ultrasound system's hard disk has been wiped to a sufficient level, particularly if the scanner is to be returned with approved parts and in a fully working state.
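The finding above, that deletion and reinstallation leave recoverable data while wiping does not, comes down to whether the underlying blocks are ever overwritten. A minimal Python sketch of the overwrite-before-delete idea (illustrative only; journaling filesystems and wear-levelled SSDs can retain stale copies, so this is not a substitute for the certified wiping tools tested in the study, and the demo file contents are invented):

```python
# Deleting a file only drops its directory entry; overwriting the
# bytes first destroys the content on the freed blocks.
import os
import tempfile

def wipe_file(path, passes=3):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # overwrite with random bytes
            f.flush()
            os.fsync(f.fileno())       # push each pass to the device
    os.remove(path)

# demo on a throwaway file standing in for an exported study
fd, demo_path = tempfile.mkstemp()
os.write(fd, b"PID: John Doe, 1970-01-01")
os.close(fd)
wipe_file(demo_path)
```

This is the same principle the open source wiping software applies at the whole-disk level.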
Study on compensation algorithm of head skew in hard disk drives
NASA Astrophysics Data System (ADS)
Xiao, Yong; Ge, Xiaoyu; Sun, Jingna; Wang, Xiaoyan
2011-10-01
In hard disk drives (HDDs), head skew among the multiple heads is pre-calibrated during the manufacturing process. In real applications with high storage capacity, the head stack may be tilted by environmental change, resulting in additional head skew errors from the outer diameter (OD) to the inner diameter (ID). When these errors stay below the preset threshold for power-on recalibration, the current strategy may not detect them, and drive performance in severe environments will be degraded. In this paper, in-the-field compensation of small DC head-skew variation across the stroke is proposed, using a zone table. Test results are provided demonstrating its effectiveness in reducing observer error and enhancing drive performance via accurate prediction of DC head skew.
Site Partitioning for Redundant Arrays of Distributed Disks
NASA Technical Reports Server (NTRS)
Mourad, Antoine N.; Fuchs, W. Kent; Saab, Daniel G.
1996-01-01
Redundant arrays of distributed disks (RADD) can be used in a distributed computing system or database system to provide recovery in the presence of disk crashes and temporary and permanent failures of single sites. In this paper, we look at the problem of partitioning the sites of a distributed storage system into redundant arrays in such a way that the communication costs for maintaining the parity information are minimized. We show that the partitioning problem is NP-hard. We then propose and evaluate several heuristic algorithms for finding approximate solutions. Simulation results show that significant reduction in remote parity update costs can be achieved by optimizing the site partitioning scheme.
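The paper's own heuristics are not described in the abstract, but the flavor of a site-partitioning heuristic can be illustrated: group sites with cheap mutual communication into the same array, since parity updates travel between array members. A toy greedy sketch (the function and policy are mine, not one of the paper's algorithms):

```python
# Toy greedy heuristic for the (NP-hard) site-partitioning problem:
# split sites into fixed-size arrays so the total pairwise
# communication cost *within* each array stays small.

def greedy_partition(sites, cost, group_size):
    groups = []
    unassigned = set(sites)
    while unassigned:
        seed = min(unassigned)          # deterministic seed choice
        group = [seed]
        unassigned.remove(seed)
        while len(group) < group_size and unassigned:
            # add the site whose total cost to current members is lowest
            best = min(unassigned,
                       key=lambda s: sum(cost[frozenset((s, g))]
                                         for g in group))
            group.append(best)
            unassigned.remove(best)
        groups.append(sorted(group))
    return groups

# 4 sites, symmetric link costs: sites 0,1 are close; 2,3 are close
cost = {frozenset((0, 1)): 1, frozenset((0, 2)): 10,
        frozenset((0, 3)): 10, frozenset((1, 2)): 10,
        frozenset((1, 3)): 10, frozenset((2, 3)): 1}
groups = greedy_partition([0, 1, 2, 3], cost, group_size=2)
```

A greedy pass like this gives an approximate solution quickly; the paper evaluates such heuristics against the remote parity update cost they leave behind.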
The successful of finite element to invent particle cleaning system by air jet in hard disk drive
NASA Astrophysics Data System (ADS)
Jai-Ngam, Nualpun; Tangchaichit, Kaitfa
2018-02-01
Hard disk drive manufacturing faces a serious challenge from the increasing demand for high-capacity drives for cloud-based storage. Particle adhesion has also become increasingly important in HDDs for the reliability of storage capacity. Cleaning surfaces is complicated because such particles must be removed without damaging the surface. This research aims to improve particle cleaning of the HSA by using finite element analysis to develop an air flow model and then build a prototype air cleaning system that removes particles from the surface. Surface cleaning by air pressure can serve as an alternative for removing solid particulate contaminants adhering to a solid surface. These technical and economic challenges have driven process development away from the traditional approach of chemical solvent cleaning. The focus of this study is to develop an alternative to scrub, ultrasonic, and megasonic surface cleaning principles, to serve as a foundation for new processes that meet current state-of-the-art requirements and minimize chemical cleaning waste for environmental safety.
Study of data I/O performance on distributed disk system in mask data preparation
NASA Astrophysics Data System (ADS)
Ohara, Shuichiro; Odaira, Hiroyuki; Chikanaga, Tomoyuki; Hamaji, Masakazu; Yoshioka, Yasuharu
2010-09-01
Data volume is getting larger every day in Mask Data Preparation (MDP). In the meantime, faster data handling is always required. An MDP flow typically introduces a Distributed Processing (DP) system to meet this demand, because using hundreds of CPUs is a reasonable solution. However, even if the number of CPUs is increased, the throughput may saturate because hard disk I/O and network speeds become bottlenecks. MDP therefore must invest heavily not only in hundreds of CPUs but also in the storage and network devices that raise throughput. NCS introduces a new distributed processing system called "NDE", a distributed disk system that raises throughput without a large investment because it is designed to use multiple conventional hard drives appropriately over the network. In this paper, NCS studies I/O performance with the OASIS® data format on NDE, which contributes to realizing high throughput.
NASA Astrophysics Data System (ADS)
2009-09-01
IBM scientist wins magnetism prizes Stuart Parkin, an applied physicist at IBM's Almaden Research Center, has won the European Geophysical Society's Néel Medal and the Magnetism Award from the International Union of Pure and Applied Physics (IUPAP) for his fundamental contributions to nanodevices used in information storage. Parkin's research on giant magnetoresistance in the late 1980s led IBM to develop computer hard drives that packed 1000 times more data onto a disk; his recent work focuses on increasing the storage capacity of solid-state electronic devices.
The Scalable Checkpoint/Restart Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, A.
The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
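SCR's actual API is not shown above; as a hedged sketch of the underlying idea, caching application-level checkpoints in fast node-local storage and retaining only the most recent few, consider the following (all names are illustrative, not the SCR interface):

```python
# Sketch of application-level checkpointing to fast node-local storage
# (hard disk or RAM disk), keeping only the most recent checkpoints.
import json
import os
import tempfile

class LocalCheckpointer:
    def __init__(self, cache_dir, keep=2):
        self.cache_dir = cache_dir
        self.keep = keep   # how many recent checkpoints to retain
        self.count = 0

    def checkpoint(self, state):
        self.count += 1
        path = os.path.join(self.cache_dir,
                            f"ckpt_{self.count:06d}.json")
        tmp = path + ".tmp"
        # write-then-rename so a crash mid-write never leaves a
        # truncated file that looks like a valid checkpoint
        with open(tmp, "w") as f:
            json.dump(state, f)
        os.replace(tmp, path)
        self._prune()
        return path

    def _list(self):
        return sorted(f for f in os.listdir(self.cache_dir)
                      if f.startswith("ckpt_") and f.endswith(".json"))

    def _prune(self):
        for old in self._list()[:-self.keep]:
            os.remove(os.path.join(self.cache_dir, old))

    def restart(self):
        # restart from the newest cached checkpoint
        with open(os.path.join(self.cache_dir, self._list()[-1])) as f:
            return json.load(f)

ckpt_dir = tempfile.mkdtemp()
cp = LocalCheckpointer(ckpt_dir, keep=2)
for step in range(3):
    cp.checkpoint({"step": step})
```

Because every node writes to its own local directory, aggregate checkpoint bandwidth grows with the node count instead of contending for one shared file system.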
78 FR 79481 - Summary of Commission Practice Relating to Administrative Protective Orders
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-30
... breach of the Commission's APOs. APO breach inquiries are considered on a case-by-case basis. As part of... suitable container (N.B.: storage of BPI on so-called hard disk computer media is to be avoided, because mere erasure of data from such media may not irrecoverably destroy the BPI and may result in violation...
[PACS: storage and retrieval of digital radiological image data].
Wirth, S; Treitl, M; Villain, S; Lucke, A; Nissen-Meyer, S; Mittermaier, I; Pfeifer, K-J; Reiser, M
2005-08-01
Efficient handling of both picture archiving and retrieval is a crucial factor when new PACS installations as well as technical upgrades are planned. For a large PACS installation, the number, modality, and body region of available priors were evaluated for 200 current studies. In addition, image access times of 100 CT studies from hard disk (RAID), magneto-optical disk (MOD), and tape archives (TAPE) were measured. For current examinations, priors existed in 61.1% of cases, with an average of 7.7 studies. Of these, 56.3% were within 0-3 months, 84.9% within 12 months, 91.7% within 24 months, and 96.2% within 36 months. On average, access to images from the hard disk cache was more than 100 times faster than from MOD or TAPE. Since only the PACS RAID provides online image access, at least the imaging of the past 12 months should be available from cache. An accurate prefetching mechanism facilitates effective use of the expensive online cache area; for that, however, close interaction of PACS, RIS, and KIS is an indispensable prerequisite.
System and method for manipulating domain pinning and reversal in ferromagnetic materials
Silevitch, Daniel M.; Rosenbaum, Thomas F.; Aeppli, Gabriel
2013-10-15
A method for manipulating domain pinning and reversal in a ferromagnetic material comprises applying an external magnetic field to a uniaxial ferromagnetic material comprising a plurality of magnetic domains, where each domain has an easy axis oriented along a predetermined direction. The external magnetic field is applied transverse to the predetermined direction and at a predetermined temperature. The strength of the magnetic field is varied at the predetermined temperature, thereby isothermally regulating pinning of the domains. A magnetic storage device for controlling domain dynamics includes a magnetic hard disk comprising a uniaxial ferromagnetic material, a magnetic recording head including a first magnet, and a second magnet. The ferromagnetic material includes a plurality of magnetic domains each having an easy axis oriented along a predetermined direction. The second magnet is positioned adjacent to the magnetic hard disk and is configured to apply a magnetic field transverse to the predetermined direction.
Transport coefficients and mechanical response in hard-disk colloidal suspensions
NASA Astrophysics Data System (ADS)
Zhang, Bo-Kai; Li, Jian; Chen, Kang; Tian, Wen-De; Ma, Yu-Qiang
2016-11-01
We investigate the transport properties and mechanical response of glassy hard disks using nonlinear Langevin equation theory. We derive expressions for the elastic shear modulus and viscosity in two dimensions on the basis of thermally activated barrier-hopping dynamics and mechanically accelerated motion. Dense hard disks exhibit phenomena such as softening elasticity, shear-thinning of viscosity, and yielding upon deformation, which are qualitatively similar to dense hard-sphere colloidal suspensions in three dimensions. These phenomena can be ascribed to stress-induced “landscape tilting”. Quantitative comparisons of these phenomena between hard disks and hard spheres are presented. Interestingly, we find that the density dependence of yield stress in hard disks is much more significant than in hard spheres. Our work provides a foundation for further generalizing the nonlinear Langevin equation theory to address slow dynamics and rheological behavior in binary or polydisperse mixtures of hard or soft disks. Project supported by the National Basic Research Program of China (Grant No. 2012CB821500) and the National Natural Science Foundation of China (Grant Nos. 21374073 and 21574096).
Disposal of waste computer hard disk drive: data destruction and resources recycling.
Yan, Guoqing; Xue, Mianqiang; Xu, Zhenming
2013-06-01
An increasing quantity of discarded computers is accompanied by a sharp increase in the number of hard disk drives to be eliminated. A waste hard disk drive is a special form of waste electrical and electronic equipment because it holds large amounts of information that is closely connected with its user. Therefore, the treatment of waste hard disk drives is an urgent issue in terms of data security, environmental protection and sustainable development. In the present study the degaussing method was adopted to destroy the residual data on the waste hard disk drives, and the housing of the disks was used as an example to explore the coating removal process, which is the most important pretreatment for aluminium alloy recycling. The key operating points determined for degaussing were: (1) keep the platter plate parallel with the magnetic field direction; and (2) enlarging the magnetic field intensity B and action time t leads to significant improvement in the degaussing effect. The coating removal experiment indicated that heating the waste hard disk drive housing at a temperature of 400 °C for 24 min was the optimum condition. A novel integrated technique for the treatment of waste hard disk drives is proposed herein. This technique offers the possibility of destroying residual data, recycling the recovered resources and disposing of the disks in an environmentally friendly manner.
Hybrid RAID With Dual Control Architecture for SSD Reliability
NASA Astrophysics Data System (ADS)
Chatterjee, Santanu
2010-10-01
Solid State Devices (SSDs), which are increasingly being adopted in today's data storage systems, have higher capacity and performance but lower reliability, which leads to more frequent rebuilds and higher risk. Although SSDs are very energy efficient compared with hard disk drives, the bit error rate (BER) of an SSD requires expensive erase operations between successive writes. Parity-based RAID (for example RAID 4, 5, 6) provides data integrity using parity information and tolerates the loss of any one drive (RAID 4, 5) or two drives (RAID 6), but the parity blocks are updated more often than the data blocks under random access patterns, so SSD devices holding more parity receive more writes and consequently age faster. To address this problem, in this paper we propose a model-based system with a hybrid disk array architecture in which we plan to use the RAID 4 (striping with parity) technique with SSDs as data drives, while any fast hard disk drive of the same capacity can be used as the dedicated parity drive. This proposed architecture opens the door to using commodity SSDs past their erasure limit and can also reduce the need for expensive hardware Error Correction Code (ECC) in the devices.
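The proposed architecture relies on standard RAID 4 parity: the dedicated parity drive stores the bytewise XOR of the data blocks in each stripe, so any single lost drive can be rebuilt from the survivors. A toy sketch of that parity math (illustrative, not the paper's implementation):

```python
# RAID-4-style dedicated parity, as in the hybrid SSD-data /
# HDD-parity layout above. Parity is the bytewise XOR of the data
# blocks in a stripe; XOR-ing the parity with the surviving blocks
# rebuilds a lost block.

def parity(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def rebuild(surviving_blocks, stripe_parity):
    # lost = parity XOR (XOR of all surviving data blocks)
    return parity(list(surviving_blocks) + [stripe_parity])

data = [b"AAAA", b"BBBB", b"CCCC"]  # one stripe across three data SSDs
p = parity(data)                    # stored on the dedicated parity HDD
recovered = rebuild([data[0], data[2]], p)  # simulate losing drive 1
```

Every small random write updates the parity block too, which is exactly why the dedicated parity device absorbs the most writes and, in this proposal, is the HDD rather than an SSD.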
High fold computer disk storage DATABASE for fast extended analysis of γ-rays events
NASA Astrophysics Data System (ADS)
Stézowski, O.; Finck, Ch.; Prévost, D.
1999-03-01
Recently, spectacular technical developments have been achieved to increase the resolving power of large γ-ray spectrometers. With these new eyes, physicists are able to study the intricate nature of atomic nuclei. Concurrently, more and more complex multidimensional analyses are needed to investigate very weak phenomena. In this article, we first present a software package (DATABASE) allowing high-fold coincidence γ-ray events to be stored on hard disk. Then a non-conventional method of analysis, the anti-gating procedure, is described. Two physical examples are given to explain how it can be used, and Monte Carlo simulations have been performed to test the validity of the method.
Design and evaluation of a hybrid storage system in HEP environment
NASA Astrophysics Data System (ADS)
Xu, Qi; Cheng, Yaodong; Chen, Gang
2017-10-01
Nowadays, High Energy Physics experiments produce large amounts of data. These data are stored in mass storage systems that must balance cost, performance and manageability. In this paper, a hybrid storage system combining SSDs (Solid-State Drives) and HDDs (Hard Disk Drives) is designed to accelerate data analysis while maintaining a low cost. The performance of file access is a decisive factor for the HEP computing system. A new deployment model of the hybrid storage system in High Energy Physics is proposed and shown to deliver higher I/O performance. The detailed evaluation methods and the evaluations of the SSD/HDD ratio and the size of the logic block are also given. In all evaluations, sequential read, sequential write, random read and random write are tested to obtain comprehensive results. The results show that the hybrid storage system performs well in areas such as accessing large files in HEP.
NASA Astrophysics Data System (ADS)
Xiong, Shaomin; Wu, Haoyu; Bogy, David
2014-09-01
Heat assisted magnetic recording (HAMR) is expected to increase the storage areal density to more than 1 Tb/in2 in hard disk drives (HDDs). In this technology, a laser is used to heat the magnetic media to the Curie point (~400-600 °C) during the writing process. The lubricant on the top of a magnetic disk could evaporate and be depleted under the laser heating. The change of the lubricant can lead to instability of the flying slider and failure of the head-disk interface (HDI). In this study, a HAMR test stage is developed to study the lubricant thermal behavior. Various heating conditions are controlled for the study of the lubricant thermal depletion. The effects of laser heating repetitions and power levels on the lubricant depletion are investigated experimentally. The lubricant reflow behavior is discussed as well.
An Evolutionary Algorithm for Feature Subset Selection in Hard Disk Drive Failure Prediction
ERIC Educational Resources Information Center
Bhasin, Harpreet
2011-01-01
Hard disk drives are used in everyday life to store critical data. Although they are reliable, failure of a hard disk drive can be catastrophic, especially in applications like medicine, banking, air traffic control systems, missile guidance systems, computer numerical controlled machines, and more. The use of Self-Monitoring, Analysis and…
Effect of bioactive glass-containing resin composite on dentin remineralization.
Lee, Myoung Geun; Jang, Ji-Hyun; Ferracane, Jack L; Davis, Harry; Bae, Han Eul; Choi, Dongseok; Kim, Duck-Su
2018-05-25
The purpose of this study was to evaluate the effect of bioactive glass (BAG)-containing composite on dentin remineralization. Sixty-six dentin disks with 3 mm thickness were prepared from thirty-three bovine incisors. The following six experimental groups were prepared according to type of composite (control and experimental) and storage solution (simulated body fluid [SBF] and phosphate-buffered saline [PBS]): 1 (undemineralized); 2 (demineralized); 3 (demineralized with control in SBF); 4 (demineralized with control in PBS); 5 (demineralized with experimental composite in SBF); and 6 (demineralized with experimental composite in PBS). BAG65S (65% Si, 31% Ca, and 4% P) was prepared via the sol-gel method. The control composite was made with a 50:50 Bis-GMA:TEGDMA resin matrix, 57 wt% strontium glass, and 15 wt% aerosol silica. The experimental composite had the same resin and filler, but with 15 wt% BAG65S replacing the aerosol silica. For groups 3-6, composite disks (20 × 10 × 2 mm) were prepared and approximated to the dentin disks and stored in PBS or SBF for 2 weeks. Micro-hardness measurements, attenuated total reflection Fourier-transform infrared spectroscopy (ATR-FTIR) and field-emission scanning electron microscopy (FE-SEM) were performed. The experimental BAG-containing composite significantly increased the micro-hardness of the adjacent demineralized dentin. ATR-FTIR revealed calcium phosphate peaks on the surface of the groups which used the experimental composite. FE-SEM revealed surface deposits partially occluding the dentin surface. No significant difference was found between SBF and PBS storage. BAG-containing composites placed in close proximity can partially remineralize adjacent demineralized dentin. Copyright © 2018. Published by Elsevier Ltd.
Erdemir, Ugur; Yildiz, Esra; Eren, Meltem Mert; Ozel, Sevda
2012-01-01
The purpose of this study was to evaluate the effect of sports and energy drinks on the surface hardness of different restorative materials over a 6-month period. Forty-two disk-shaped specimens were prepared for each of the four restorative materials tested: Compoglass F, Filtek Z250, Filtek Supreme, and Premise. Specimens were immersed for 2 min daily, up to 6 months, in six storage solutions (n=7 per material for each solution): distilled water, Powerade, Gatorade, X-IR, Burn, and Red Bull. Surface hardness was measured at baseline, after 1 week, 1 month, and 6 months. Data were analyzed statistically using repeated measures ANOVA followed by the Bonferroni test for multiple comparisons (α=0.05). Surface hardness of the restorative materials was significantly affected by both immersion solution and immersion period (p<0.001). All tested solutions induced significant reduction in surface hardness of the restorative materials over a 6-month immersion period.
DPM — efficient storage in diverse environments
NASA Astrophysics Data System (ADS)
Hellmich, Martin; Furano, Fabrizio; Smith, David; Brito da Rocha, Ricardo; Álvarez Ayllón, Alejandro; Manzi, Andrea; Keeble, Oliver; Calvet, Ivan; Regala, Miguel Antonio
2014-06-01
Recent developments, including low-power devices, cluster file systems and cloud storage, represent an explosion in the possibilities for deploying and managing grid storage. In this paper we present how different technologies can be leveraged to build a storage service with differing cost, power, performance, scalability and reliability profiles, using the popular storage solution Disk Pool Manager (DPM/dmlite) as the enabling technology. The storage manager DPM is designed for these new environments, allowing users to scale up and down as needed and to optimize their computing centers' energy efficiency and costs. DPM runs on high-performance machines, profiting from multi-core and multi-CPU setups. It supports separating the database from the metadata server (the head node), greatly reducing the head node's hard disk requirements. Since version 1.8.6, DPM is released in EPEL and Fedora, simplifying distribution and maintenance, and it supports the ARM architecture besides i386 and x86_64, allowing it to run on the smallest low-power machines such as the Raspberry Pi or the CuBox. This usage is facilitated by the ability to scale horizontally using a main database and a distributed memcached-powered namespace cache. Additionally, DPM supports a variety of storage pools in the backend, most importantly HDFS, S3-enabled storage and cluster file systems, allowing users to fit their DPM installation exactly to their needs. In this paper, we investigate the power efficiency and total cost of ownership of various DPM configurations. We develop metrics to evaluate the expected performance of a setup in terms of both namespace and disk access, considering the overall cost including equipment, power consumption, and data/storage fees. The setups tested range from the smallest scale, using Raspberry Pis with single 700 MHz cores and 100 Mbps network connections, over conventional multi-core servers, to typical virtual machine instances in cloud settings.
We evaluate the combinations of different name server setups, for example load-balanced clusters, with different storage setups, from using a classic local configuration to private and public clouds.
The Development of a Portable Hard Disk Encryption/Decryption System with a MEMS Coded Lock.
Zhang, Weiping; Chen, Wenyuan; Tang, Jian; Xu, Peng; Li, Yibin; Li, Shengyong
2009-01-01
In this paper, a novel portable hard-disk encryption/decryption system with a MEMS coded lock is presented, which can authenticate the user and provide the key for the AES encryption/decryption module. The portable hard-disk encryption/decryption system is composed of the authentication module, the USB portable hard-disk interface card, the ATA protocol command decoder module, the data encryption/decryption module, the cipher key management module, the MEMS coded lock controlling circuit module, the MEMS coded lock and the hard disk. The ATA protocol circuit, the MEMS control circuit and the AES encryption/decryption circuit are designed and realized on an FPGA (Field Programmable Gate Array). The MEMS coded lock, with two couplers and two groups of counter-meshing gears (CMGs), is fabricated by a LIGA-like process and precision engineering methods. The whole prototype was fabricated and tested. The test results show that the user's password could be correctly discriminated by the MEMS coded lock, and the AES encryption module could get the key from the MEMS coded lock. Moreover, the data in the hard disk could be encrypted or decrypted, and the read-write speed of the dataflow could reach 17 MB/s in Ultra DMA mode.
40 CFR 63.11995 - In what form and how long must I keep my records?
Code of Federal Regulations, 2013 CFR
2013-07-01
... years. Records may be maintained in hard copy or computer-readable format including, but not limited to, on paper, microfilm, hard disk drive, floppy disk, compact disk, magnetic tape or microfiche. ...
40 CFR 63.11995 - In what form and how long must I keep my records?
Code of Federal Regulations, 2014 CFR
2014-07-01
... years. Records may be maintained in hard copy or computer-readable format including, but not limited to, on paper, microfilm, hard disk drive, floppy disk, compact disk, magnetic tape or microfiche. ...
40 CFR 63.11995 - In what form and how long must I keep my records?
Code of Federal Regulations, 2012 CFR
2012-07-01
... years. Records may be maintained in hard copy or computer-readable format including, but not limited to, on paper, microfilm, hard disk drive, floppy disk, compact disk, magnetic tape or microfiche. ...
A DOS Primer for Librarians: Part II.
ERIC Educational Resources Information Center
Beecher, Henry
1990-01-01
Provides an introduction to DOS commands and strategies for the effective organization and use of hard disks. Functions discussed include the creation of directories and subdirectories, enhanced copying, the assignment of disk drives, and backing up the hard disk. (CLB)
40 CFR 63.9060 - In what form and how long must I keep my records?
Code of Federal Regulations, 2010 CFR
2010-07-01
... may be maintained in hard copy or computer-readable format including, but not limited to, on paper, microfilm, hard disk drive, floppy disk, compact disk, magnetic tape, or microfiche. (d) You must keep each...
A Test of Black-Hole Disk Truncation: Thermal Disk Emission in the Bright Hard State
NASA Astrophysics Data System (ADS)
Steiner, James
2017-09-01
The assumption that a black hole's accretion disk extends inwards to the ISCO is on firm footing for soft spectral states, but has been challenged for hard spectral states where it is often argued that the accretion flow is truncated far from the horizon. This is of critical importance because black-hole spin is measured on the basis of this assumption. The direct detection (or absence) of thermal disk emission associated with a disk extending to the ISCO is the smoking-gun test to rule truncation in or out for the bright hard state. Using a self-consistent spectral model on data taken in the bright hard state while taking advantage of the complementary coverage and capabilities of Chandra and NuSTAR, we will achieve a definitive test of the truncation paradigm.
Mapping hard magnetic recording disks by TOF-SIMS
NASA Astrophysics Data System (ADS)
Spool, A.; Forrest, J.
2008-12-01
Mapping of hard magnetic recording disks by TOF-SIMS was performed both to produce significant analytical results for understanding the disk surface and the head-disk interface in hard disk drives, and as an example of a macroscopic non-rectangular mapping problem for the technique. In this study, maps were obtained by taking discrete samples of the disk surface at set intervals in R and Θ. Because the processes that may affect the disk surface, both in manufacturing and in the disk drive, are typically circumferential in nature, changes in the surface are likely to be blurred in the Θ direction. An algorithm was developed to determine the optimum relative sampling ratio in R and Θ. The results confirm what the analysts' experience suggested: changes occur more rapidly on disks in the radial direction, and more sampling in the radial direction is desired. The subsequent use of the statistical methods principal component analysis (PCA) and maximum autocorrelation factors (MAF), and of the inverse distance weighting (IDW) algorithm, is explored.
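Of the methods named above, inverse distance weighting is the simplest to sketch: each unsampled point is estimated as a weighted average of the discrete (R, Θ) samples, with weights falling off as a power of distance. The sample positions, the toy intensity field and the sampling densities below are invented for illustration, not data from the study:

```python
import math

def idw(samples, query, power=2.0):
    """Estimate the value at `query` (x, y) from (x, y, value) samples."""
    num = den = 0.0
    for x, y, v in samples:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return v                     # query coincides with a sample point
        w = 1.0 / d2 ** (power / 2.0)    # weight falls off as 1/d^power
        num += w * v
        den += w
    return num / den

# A coarse polar grid, denser in R than in Θ (per the paper's finding),
# converted to Cartesian sample points with a toy intensity field v = r.
samples = [(r * math.cos(t), r * math.sin(t), r)
           for r in (10.0, 12.0, 14.0, 16.0)
           for t in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]

estimate = idw(samples, (11.0, 0.0))
assert 10.0 < estimate < 16.0  # IDW stays within the sampled value range
```

Because the weights are positive and normalized, IDW is a convex combination of the sample values, so the interpolated map never overshoots the measured extremes.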
Electrodeposited Co-Pt thin films for magnetic hard disks
NASA Astrophysics Data System (ADS)
Bozzini, B.; De Vita, D.; Sportoletti, A.; Zangari, G.; Cavallotti, P. L.; Terrenzio, E.
1993-03-01
New baths for Co-Pt electrodeposition have been developed, and ECD thin films (≤0.3 μm) have been prepared and characterized structurally (XRD), morphologically (SEM), chemically (EDS) and magnetically (VSM); their improved corrosion, oxidation and wear resistance has been ascertained. Such alloys appear to be suitable candidates for magnetic storage systems from all technological viewpoints. The originally formulated baths contain Co-NH3-citrate complexes and Pt-p salt (Pt(NH3)2(NO2)2). Co-Pt thin films of fcc structure are deposited, obtaining microcrystallites of definite composition. At Pt ≈ 30 at% we obtain fcc films with a = 0.369 nm, Hc = 80 kA/m and high squareness; by increasing the Co and decreasing the Pt content in the bath it is possible to reduce the Pt content of the deposit, obtaining fcc structures containing two types of microcrystals with a = 0.3615 nm and a = 0.369 nm deposited simultaneously. NaH2PO2 additions to the bath have a stabilizing influence on the fcc structure with a = 0.3615 nm, Pt ≈ 20 at% and Hc as high as 200 kA/m, with hysteresis loops suitable for either longitudinal or perpendicular recording, depending on the thickness. We have prepared 2.5 in. hard disks for magnetic recording with ECD Co-Pt 20 at% on a polished and texturized ACD Ni-P underlayer. Pulse response, 1F and 2F frequency and frequency-sweep response behaviour, as well as noise and overwrite characteristics, have been measured for both our disks and high-standard sputtered Co-Cr-Ta production disks, showing improved D50 for the Co-Pt ECD disks. The signal-to-noise ratio could be improved by pulse electrodeposition and etching post-treatments.
NASA Astrophysics Data System (ADS)
JANG, G. H.; LEE, S. H.; JUNG, M. S.
2002-03-01
The free vibration of a spinning flexible disk-spindle system supported by ball bearings and a flexible shaft is analyzed by using Hamilton's principle, FEM and substructure synthesis. The spinning disk is described using the Kirchhoff plate theory and von Karman non-linear strain. The rotating spindle and stationary shaft are modelled as a Rayleigh beam and an Euler beam, respectively. Using Hamilton's principle and including the rigid-body translation and tilting motion, partial differential equations of motion of the spinning flexible disk and spindle are derived consistently to satisfy the geometric compatibility at the internal boundary between substructures. FEM is used to discretize the derived governing equations, and substructure synthesis is introduced to assemble each component of the disk-spindle-bearing-shaft system. The developed method is applied to the spindle system of a computer hard disk drive with three disks, and modal testing is performed to verify the simulation results. The simulation results agree very well with the experimental ones. This research investigates critical design parameters in an HDD spindle system, i.e., the non-linearity of a spinning disk and the flexibility and boundary conditions of a stationary shaft, to predict the free vibration characteristics accurately. The proposed method may be effectively applied to predict the vibration characteristics of a spinning flexible disk-spindle system supported by ball bearings and a flexible shaft in various forms of computer storage devices, i.e., FDD, CD, HDD and DVD.
Magnetic Recording Media Technology for the Tb/in2 Era
Bertero, Gerardo [Western Digital]
2017-12-09
Magnetic recording has been the technology of choice for massive storage of information. The hard-disk drive industry has recently undergone a major technological transition from longitudinal magnetic recording (LMR) to perpendicular magnetic recording (PMR). However, conventional perpendicular recording can only support a few new product generations before facing insurmountable physical limits. In order to support sustained growth in recording areal density, new technological paradigms, such as energy-assisted recording and bit-patterned media recording, are being contemplated and planned. In this talk, we will briefly discuss the LMR-to-PMR transition, the extendibility of current PMR recording, and the nature and merits of the new enabling technologies. We will also discuss a technology roadmap toward recording densities approaching 10 Tb/in2, approximately 40 times higher than in current disk drives.
Rajauria, Sukumar; Schreck, Erhard; Marchon, Bruno
2016-01-01
The understanding of tribo- and electrochemical phenomena at the molecular level at a sliding interface is a field of growing interest. Fundamental chemical and physical insights into sliding surfaces are crucial for understanding wear at an interface, particularly for nano- or micro-scale devices operating at high sliding speeds. A complete investigation of the electrochemical effects at high-sliding-speed interfaces requires precise monitoring of both the associated wear and the surface chemical reactions at the interface. Here, we demonstrate that the head-disk interface inside a commercial magnetic storage hard disk drive provides a unique system for such studies. The results obtained show that voltage-assisted electrochemical wear leads to asymmetric wear on either side of the sliding interface. PMID:27150446
NASA Astrophysics Data System (ADS)
Poat, M. D.; Lauret, J.
2017-10-01
As demand for widely accessible storage capacity increases and usage rises, steady I/O performance is desired but tends to suffer in multi-user environments. Typical deployments use standard hard drives, as their cost per GB is quite low. On the other hand, HDD-based storage solutions are not known to scale well with process concurrency, and soon enough a high rate of IOPS creates a "random access" pattern that kills performance. Though not all SSDs are alike, SSDs are an established technology often used to address exactly this "random access" problem. In this contribution, we first discuss the I/O performance of many different SSD drives (tested in a comparable and standalone manner). We then discuss the performance and integrity of at least three low-level disk caching techniques (Flashcache, dm-cache, and bcache), including their individual policies, procedures, and I/O performance. Furthermore, the STAR online computing infrastructure currently hosts a POSIX-compliant Ceph distributed storage cluster; while caching is not a native feature of CephFS (it exists only in the Ceph object store), we show how one can implement a caching mechanism by profiting from an implementation at a lower level. As an illustration, we present our CephFS setup, I/O performance tests, and overall experience with such a configuration. We hope this work will serve the community's interest in using disk-caching mechanisms for applicable uses such as distributed storage systems seeking an overall I/O performance gain.
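The low-level caching techniques named above (the Flashcache/dm-cache/bcache family) all share one idea: a small, fast device absorbs the hot subset of block reads so the slow HDD pool only sees misses. A toy model of that idea, with an LRU cache standing in for the SSD and a dict standing in for the HDD pool (capacities and block addresses are illustrative only):

```python
from collections import OrderedDict

class BlockCache:
    """LRU block cache in front of a slow backing store (a toy sketch)."""

    def __init__(self, backing: dict, capacity: int):
        self.backing = backing            # stands in for the HDD pool
        self.cache = OrderedDict()        # stands in for the SSD cache
        self.capacity = capacity
        self.hits = self.misses = 0

    def read(self, lba: int) -> bytes:
        if lba in self.cache:
            self.hits += 1
            self.cache.move_to_end(lba)   # refresh LRU position
            return self.cache[lba]
        self.misses += 1
        data = self.backing[lba]          # slow path: go to the HDDs
        self.cache[lba] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

store = {lba: bytes([lba % 256]) for lba in range(1000)}
cache = BlockCache(store, capacity=8)
for _ in range(3):                        # a hot set smaller than the cache
    for lba in (1, 2, 3, 4):
        cache.read(lba)
assert cache.hits == 8 and cache.misses == 4
```

When the working set fits in the cache, repeated "random" reads become cache hits, which is precisely the scaling problem with process concurrency that the abstract describes the real caching layers addressing.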
Mass storage technology in networks
NASA Astrophysics Data System (ADS)
Ishii, Katsunori; Takeda, Toru; Itao, Kiyoshi; Kaneko, Reizo
1990-08-01
Trends and features of mass storage subsystems in networks are surveyed and their key technologies spotlighted. Storage subsystems are becoming increasingly important in new network systems in which communications and data processing are systematically combined. These systems require a new class of high-performance mass-information storage in order to effectively utilize their processing power. The requirements of high transfer rates, high transaction rates and large storage capacities, coupled with high functionality, fault tolerance and flexibility in configuration, are major challenges in storage subsystems. Recent progress in optical disk technology has brought the performance of on-line external memories based on optical disk drives to the point where they compete with mid-range magnetic disks. Optical disks are more effective than magnetic disks for low-traffic random-access files storing multimedia data that require large capacity, such as archival use and information distribution by ROM disks. Finally, image-coded document file servers for local area network use that employ 130 mm rewritable magneto-optical disk subsystems are demonstrated.
Archive Storage Media Alternatives.
ERIC Educational Resources Information Center
Ranade, Sanjay
1990-01-01
Reviews requirements for a data archive system and describes storage media alternatives that are currently available. Topics discussed include data storage; data distribution; hierarchical storage architecture, including inline storage, online storage, nearline storage, and offline storage; magnetic disks; optical disks; conventional magnetic…
Research and implementation of SATA protocol link layer based on FPGA
NASA Astrophysics Data System (ADS)
Liu, Wen-long; Liu, Xue-bin; Qiang, Si-miao; Yan, Peng; Wen, Zhi-gang; Kong, Liang; Liu, Yong-zheng
2018-02-01
This work addresses the high-performance, real-time storage of the high-speed image data generated by a detector. A portable image-storage hard disk with a SATA interface was chosen; relative to existing storage media, it offers large capacity, a high transfer rate, low cost, retention of data on power loss, and many other advantages. This paper focuses on the link layer of the protocol: it analyses the implementation process of the SATA 2.0 protocol and builds the corresponding state machines. It then analyses the resources of the Kintex-7 FPGA family, builds the state machines according to the protocol, writes Verilog to implement the link-layer modules, and runs simulation tests. Finally, the design is tested on a Kintex-7 development board and essentially meets the requirements of the SATA 2.0 protocol.
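The link-layer state machines mentioned above can be pictured as a transition table driven by link primitives. The sketch below is a heavily reduced, pedagogical model of a SATA-style transmit handshake (idle, request the link, send the frame, wait for status); the primitive names mirror SATA usage, but this table is not the full SATA 2.0 state machine:

```python
# Simplified transmit-side link-layer FSM: (state, primitive) -> next state.
TRANSITIONS = {
    ("IDLE",        "X_RDY"): "WAIT_RRDY",   # transmitter requests the link
    ("WAIT_RRDY",   "R_RDY"): "SEND_DATA",   # receiver grants it
    ("SEND_DATA",   "EOF"):   "WAIT_STATUS", # end of frame transmitted
    ("WAIT_STATUS", "R_OK"):  "IDLE",        # frame accepted
    ("WAIT_STATUS", "R_ERR"): "IDLE",        # frame rejected, retry from idle
}

def run(primitives, state="IDLE"):
    """Feed a sequence of link primitives through the transition table."""
    for p in primitives:
        state = TRANSITIONS.get((state, p), state)  # ignore unexpected input
    return state

assert run(["X_RDY", "R_RDY", "EOF", "R_OK"]) == "IDLE"
assert run(["X_RDY", "R_RDY"]) == "SEND_DATA"
```

In the FPGA implementation the same structure appears as a Verilog case statement over the current state, with one branch per accepted primitive.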
Dynamic stability and slider-lubricant interactions in hard disk drives
NASA Astrophysics Data System (ADS)
Ambekar, Rohit Pradeep
2007-12-01
Hard disk drives (HDDs) have played a significant role in the current information age and have become the backbone of storage. The soaring demand for mass data storage drives the necessity for increasing the capacity of the drives, and hence the areal density on the disks, as well as the reliability of the HDD. To achieve greater areal density in hard disk drives, the flying height of the air-bearing slider continually decreases. Different proximity forces and interactions influence the air-bearing slider, resulting in fly-height modulation and instability. This poses several challenges to increasing the areal density (the current goal is 2 Tb/in2) as well as making the head-disk interface (HDI) more reliable. Identifying and characterizing these forces and interactions has become important for achieving a stable fly height at proximity and realizing the goals of areal density and reliability. Several proximity forces and interactions influencing the slider are identified through the study of touchdown-takeoff hysteresis. Slider-lubricant interaction, which causes a meniscus force between the slider and disk as well as air-bearing surface contamination, seems to be the most important factor affecting stability and reliability at proximity. In addition, intermolecular forces and disk topography are identified as important factors. Disk-to-slider lubricant transfer leads to lubricant pickup on the slider and also causes depletion of lubricant on the disk, affecting the stability and reliability of the HDI. Experimental and numerical investigations, as well as a parametric study, of the process of lubricant transfer have been done using a half-delubed disk. In the first part of this parametric study, the dependence on the disk lubricant thickness, lubricant type and slider ABS design has been investigated. It is concluded that lubricant transfer can occur without slider-disk contact and that there can be more than one timescale associated with the transfer.
Further, the transfer increases non-linearly with increasing disk lubricant thickness. Also, the transfer depends on the type of lubricant used, and is less for Ztetraol than for Zdol. The slider ABS design also plays an important role, and a few suggestions are made to improve the ABS design for better lubricant performance. In the second part of the parametric study, the effect of the carbon overcoat, the lubricant molecular weight and the inclusion of X-1P and A20H on the slider-lubricant interactions is investigated using a half-delubed disk approach. Based on the results, it is concluded that there exists a critical head-disk clearance above which there is negligible slider-lubricant interaction. The interaction starts at this critical clearance and increases in intensity as the head-disk clearance is decreased further below the critical clearance. Using shear stress simulations and previously published work, a theory is developed to support the experimental observations. The critical clearance depends on various HDI parameters and hence can be reduced through proper design of the interface. Comparison of the critical clearance on CHx and CHxNy media indicates that the presence of nitrogen is better for the HDI, as it reduces the critical clearance, which is found to increase with increasing lubricant molecular weight and in the presence of the additives X-1P and A20H. Further experiments maintaining a fixed slider-disk clearance suggest that two different mechanisms dominate the disk-to-slider and slider-to-disk lubricant transfer. One of the key factors influencing slider stability at proximity is the disk topography, since it provides dynamic excitation to the low-flying sliders and strongly influences their dynamics. The effect of circumferential as well as radial disk topography is investigated using a new method to measure the 2-D (true) disk topography.
Simulations using the CMLAir dynamic simulator indicate a strong dependence on the circumferential roughness and waviness features as well as on radial features, which have not been studied intensively until now. The simulations with 2-D disk topography are viewed as more realistic than the 1-D simulations. Further, it is also seen that the effect of the radial features can be reduced through effective ABS design. Finally, an attempt has been made to establish correlations between some of the proximity interactions, as well as others which may affect HDI reliability, by creating a relational chart. Such an organization serves to give a bigger picture of the various efforts being made in the field of HDI reliability and to link them together. From this chart, a causal relationship is suggested between the electrostatic, intermolecular and meniscus forces.
Study of Solid State Drives performance in PROOF distributed analysis system
NASA Astrophysics Data System (ADS)
Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.
2010-04-01
Solid State Drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited to situations where multiple jobs concurrently access data located on the same drive. SSDs also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility (PROOF) is a distributed analysis system which allows the inherent event-level parallelism of high energy physics data to be exploited. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem's I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare the performance of HDDs with SSDs in I/O-intensive analysis scenarios. In particular we will discuss how PROOF system performance scales with the number of simultaneously running analysis jobs.
Optimization of Smart Structure for Improving Servo Performance of Hard Disk Drive
NASA Astrophysics Data System (ADS)
Kajiwara, Itsuro; Takahashi, Masafumi; Arisaka, Toshihiro
Head positioning accuracy of the hard disk drive must be improved to meet today's increasing performance demands. Vibration suppression of the arm in the hard disk drive is very important for enhancing the servo bandwidth of the head positioning system. In this study, smart structure technology is introduced into the hard disk drive to suppress vibration of the head actuator. Smart structure technology is expected to contribute to the development of small, lightweight mechatronic devices with the required performance. First, the system is modeled with the finite element method and modal analysis. Next, the actuator location and the control system are simultaneously optimized using a genetic algorithm. The vibration control effect of the proposed vibration control mechanisms has been evaluated through simulations.
Architecture and method for a burst buffer using flash technology
Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing-bung
2016-03-15
A parallel supercomputing cluster includes compute nodes interconnected in a mesh of data links for executing an MPI job, and solid-state storage nodes each linked to a respective group of the compute nodes for receiving checkpoint data from the respective compute nodes, and magnetic disk storage linked to each of the solid-state storage nodes for asynchronous migration of the checkpoint data from the solid-state storage nodes to the magnetic disk storage. Each solid-state storage node presents a file system interface to the MPI job, and multiple MPI processes of the MPI job write the checkpoint data to a shared file in the solid-state storage in a strided fashion, and the solid-state storage node asynchronously migrates the checkpoint data from the shared file in the solid-state storage to the magnetic disk storage and writes the checkpoint data to the magnetic disk storage in a sequential fashion.
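The strided checkpoint layout described above can be sketched as follows. This is a toy, single-process Python model: the file name, chunk size, and rank count are invented for illustration, and ordinary seek/write stands in for the MPI-IO calls a real MPI job would use.

```python
import os

CHUNK = 4096          # bytes each MPI rank writes per stripe (assumed size)
NPROCS = 4            # number of MPI ranks sharing the checkpoint file
STRIPES = 3           # stripes each rank writes

def strided_offset(rank, stripe, chunk=CHUNK, nprocs=NPROCS):
    # Rank r's stripe s lands at byte (s * nprocs + r) * chunk, so
    # chunks from consecutive ranks interleave within the shared file.
    return (stripe * nprocs + rank) * chunk

# Each simulated MPI rank writes its chunks into the shared checkpoint file.
path = "checkpoint.shared"
with open(path, "wb") as f:
    f.truncate(CHUNK * NPROCS * STRIPES)
for rank in range(NPROCS):
    with open(path, "r+b") as f:
        for stripe in range(STRIPES):
            f.seek(strided_offset(rank, stripe))
            f.write(bytes([rank]) * CHUNK)

# The solid-state storage node would later migrate this file to magnetic
# disk by reading it back sequentially and issuing large contiguous writes.
with open(path, "rb") as f:
    migrated = f.read()
os.remove(path)
```

The point of the layout is that each rank writes independently at non-overlapping offsets, while the migration pass sees one sequential stream.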
The structure and dynamics of interactive documents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rocha, J.T.
1999-04-01
Advances in information technology continue to accelerate as the new millennium approaches. With these advances, electronic information management is becoming increasingly important and is now supported by a seemingly bewildering array of hardware and software whose sole purpose is the design and implementation of interactive documents employing multimedia applications. Multimedia memory and storage applications such as Compact Disk-Read Only Memory (CD-ROM) are already a familiar interactive tool in both the entertainment and business sectors. Even home enthusiasts now have the means at their disposal to design and produce CD-ROMs. More recently, Digital Video Disk (DVD) technology is carving its own niche in these markets and may (once application bugs are corrected and prices are lowered) eventually supplant CD-ROM technology. CD-ROM and DVD are not the only memory and storage applications capable of supporting interactive media. External, high-capacity drives and disks such as the Iomega® Zip® and Jaz® are also useful platforms for launching interactive documents without the need for additional hardware such as CD-ROM burners and copiers. The main drawback here, however, is the relatively high unit price per disk when compared to the unit cost of CD-ROMs. Regardless of the application chosen, there are fundamental structural characteristics that must be considered before effective interactive documents can be created. Additionally, the dynamics of interactive documents employing hypertext links are unique and bear only slight resemblance to those of their traditional hard-copy counterparts. These two considerations form the essential content of this paper.
The medium is NOT the message or Indefinitely long-term file storage at Leeds University
NASA Technical Reports Server (NTRS)
Holdsworth, David
1996-01-01
Approximately 3 years ago we implemented an archive file storage system which embodies experiences gained over more than 25 years of using and writing file storage systems. It is the third in-house system that we have written, and all three systems have been adopted by other institutions. This paper discusses the requirements for long-term data storage in a university environment, and describes how our present system is designed to meet these requirements indefinitely. Particular emphasis is laid on experiences from past systems, and their influence on current system design. We also look at the influence of the IEEE-MSS standard. We currently have the system operating in five UK universities. The system operates in a multi-server environment, and is currently operational with UNIX (SunOS4, Solaris2, SGI-IRIX, HP-UX), NetWare3 and NetWare4. PCs logged on to NetWare can also archive and recover files that live on their hard disks.
Optical Digital Disk Storage: An Application for News Libraries.
ERIC Educational Resources Information Center
Crowley, Mary Jo
1988-01-01
Describes the technology, equipment, and procedures necessary for converting a historical newspaper clipping collection to optical disk storage. Alternative storage systems--microforms, laser scanners, optical storage--are also reviewed, and the advantages and disadvantages of optical storage are considered. (MES)
Floppy disk utility user's guide
NASA Technical Reports Server (NTRS)
Akers, J. W.
1981-01-01
The Floppy Disk Utility Program transfers programs between files on the hard disk and floppy disk. It also copies the data on one floppy disk onto another floppy disk and compares the data. The program operates on the Data General NOVA-4X under the Real Time Disk Operating System (RDOS).
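The copy-then-compare workflow the utility performs can be sketched with Python's standard library. The function name and the `.img` file names are hypothetical stand-ins; the actual program operated on Data General NOVA-4X disk devices, not ordinary files.

```python
import filecmp
import os
import shutil
import tempfile

def copy_and_verify(src, dst):
    """Copy src to dst, then compare the two byte for byte,
    mirroring the utility's copy-then-compare workflow."""
    shutil.copyfile(src, dst)
    return filecmp.cmp(src, dst, shallow=False)

# Demo on ordinary files standing in for disk images.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "a.img")
dst = os.path.join(tmp, "b.img")
with open(src, "wb") as f:
    f.write(os.urandom(1024))
ok = copy_and_verify(src, dst)
```

`shallow=False` forces a content comparison rather than a cheap metadata check, which is the behavior a copy-verification pass needs.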
Planning for optical disk technology with digital cartography.
Light, D.L.
1986-01-01
A major shortfall that still exists in digital systems is the need for very large mass storage capacity. The decade of the 1980s has introduced laser optical disk storage technology, which may be the breakthrough needed for mass storage. This paper addresses system concepts for digital cartography during the transition period. Emphasis will be placed on determining USGS mass storage requirements and introducing laser optical disk technology for handling storage problems for digital data in this decade.-from Author
Basics of Videodisc and Optical Disk Technology.
ERIC Educational Resources Information Center
Paris, Judith
1983-01-01
Outlines basic videodisc and optical disk technology describing both optical and capacitance videodisc technology. Optical disk technology is defined as a mass digital image and data storage device and briefly compared with other information storage media including magnetic tape and microforms. The future of videodisc and optical disk is…
Embedded optical interconnect technology in data storage systems
NASA Astrophysics Data System (ADS)
Pitwon, Richard C. A.; Hopkins, Ken; Milward, Dave; Muggeridge, Malcolm
2010-05-01
As both data storage interconnect speeds increase and form factors in hard disk drive technologies continue to shrink, the density of printed channels on the storage array midplane goes up. The dominant interconnect protocol on storage array midplanes is expected to increase to 12 Gb/s by 2012 thereby exacerbating the performance bottleneck in future digital data storage systems. The design challenges inherent to modern data storage systems are discussed and an embedded optical infrastructure proposed to mitigate this bottleneck. The proposed solution is based on the deployment of an electro-optical printed circuit board and active interconnect technology. The connection architecture adopted would allow for electronic line cards with active optical edge connectors to be plugged into and unplugged from a passive electro-optical midplane with embedded polymeric waveguides. A demonstration platform has been developed to assess the viability of embedded electro-optical midplane technology in dense data storage systems and successfully demonstrated at 10.3 Gb/s. Active connectors incorporate optical transceiver interfaces operating at 850 nm and are connected in an in-plane coupling configuration to the embedded waveguides in the midplane. In addition a novel method of passively aligning and assembling passive optical devices to embedded polymer waveguide arrays has also been demonstrated.
Mean PB To Failure - Initial results from a long-term study of disk storage patterns at the RACF
NASA Astrophysics Data System (ADS)
Caramarcu, C.; Hollowell, C.; Rao, T.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, S. A.
2015-12-01
The RACF (RHIC-ATLAS Computing Facility) has operated a large, multi-purpose dedicated computing facility since the mid-1990s, serving a worldwide, geographically diverse scientific community that is a major contributor to various HEPN projects. A central component of the RACF is the Linux-based worker node cluster that is used for both computing and data storage purposes. It currently has nearly 50,000 computing cores and over 23 PB of storage capacity distributed over 12,000+ (non-SSD) disk drives. The majority of the 12,000+ disk drives provide a cost-effective solution for dCache/XRootD-managed storage, and a key concern is the reliability of this solution over the lifetime of the hardware, particularly as the number of disk drives and the storage capacity of individual drives grow. We report initial results of a long-term study to measure lifetime PB read/written to disk drives in the worker node cluster. We discuss the historical disk drive mortality rate, disk drive manufacturers' published MPTF (Mean PB to Failure) data, and how they correlate with our results. The results help the RACF understand the productivity and reliability of its storage solutions and have implications for other highly-available storage systems (NFS, GPFS, CVMFS, etc.) with large I/O requirements.
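A "Mean PB to Failure" figure reduces to simple arithmetic over the drive population: aggregate data transferred divided by observed failures. The numbers below are invented for illustration and are not the RACF's actual measurements.

```python
def mean_pb_to_failure(total_pb_transferred, failures):
    """Aggregate PB read/written across the drive population per
    observed drive failure (hypothetical metric definition)."""
    if failures == 0:
        return float("inf")   # no failures observed yet
    return total_pb_transferred / failures

# Illustrative only: 6000 PB of lifetime traffic, 120 drive failures.
mptf = mean_pb_to_failure(total_pb_transferred=6000.0, failures=120)
```

Tracking such a quantity alongside the calendar-time mortality rate is what lets lifetime traffic, rather than age alone, be correlated with drive failures.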
47 CFR 1.734 - Specifications as to pleadings, briefs, and other documents; subscription.
Code of Federal Regulations, 2010 CFR
2010-10-01
... submitted both as hard copies and on computer disk formatted to be compatible with the Commission's computer... copies of tariffs or reports with their hard copies need not include such tariffs or reports on the disk...
Libraries and Desktop Storage Options: Results of a Web-Based Survey.
ERIC Educational Resources Information Center
Hendricks, Arthur; Wang, Jian
2002-01-01
Reports the results of a Web-based survey that investigated what plans, if any, librarians have for dealing with the expected obsolescence of the floppy disk and still retain effective library service. Highlights include data storage options, including compact disks, zip disks, and networked storage products; and a copy of the Web survey.…
Floppy disk utility user's guide
NASA Technical Reports Server (NTRS)
Akers, J. W.
1980-01-01
A floppy disk utility program is described which transfers programs between files on a hard disk and floppy disk. It also copies the data on one floppy disk onto another floppy disk and compares the data. The program operates on the Data General NOVA-4X under the Real Time Disk Operating System. Sample operations are given.
Magnetic Thin Films for Perpendicular Magnetic Recording Systems
NASA Astrophysics Data System (ADS)
Sugiyama, Atsushi; Hachisu, Takuma; Osaka, Tetsuya
In the advanced information society of today, information storage technology, which helps to store a mass of electronic data and offers high-speed random access to the data, is indispensable. Against this background, hard disk drives (HDDs), which are magnetic recording devices, have gained in importance because of their advantages in capacity, speed, reliability, and production cost. These days, the uses of HDDs extend not only to personal computers and network servers but also to consumer electronics products such as personal video recorders, portable music players, car navigation systems, video games, video cameras, and personal digital assistants.
Overview of the H.264/AVC video coding standard
NASA Astrophysics Data System (ADS)
Luthra, Ajay; Topiwala, Pankaj N.
2003-11-01
H.264/MPEG-4 AVC is the latest coding standard jointly developed by the Video Coding Experts Group (VCEG) of ITU-T and Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state of the art coding tools and provides enhanced coding efficiency for a wide range of applications including video telephony, video conferencing, TV, storage (DVD and/or hard disk based), streaming video, digital video creation, digital cinema and others. In this paper an overview of this standard is provided. Some comparisons with the existing standards, MPEG-2 and MPEG-4 Part 2, are also provided.
Nespoli conducts a test run with the French/CNES Neuroscientific Research Experiment
2011-02-12
ISS026-E-027000 (12 Feb. 2011) --- European Space Agency (ESA) astronaut Paolo Nespoli, Expedition 26 flight engineer, conducts a test run with the French/CNES neuroscientific research experiment "3D-Space" (SAP) in the Columbus laboratory of the International Space Station. While floating freely, Nespoli used the ESA multipurpose laptop with a prepared hard disk drive, data storage on a memory card, and an electronic pen table connected to it. 3D-Space, which involves distance, writing and illusion exercises, is designed to test the hypothesis that altered visual perception affects motor control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mason, J.
CCHDT constructs and classifies various arrangements of hard disks of a single radius placed on the unit square with periodic boundary conditions. Specifically, a given configuration is evolved to the nearest critical point on a smoothed hard disk energy function, and is classified by the adjacency matrix of the canonically labelled contact graph.
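The classification step can be sketched as follows, assuming two equal disks are "in contact" when the minimum-image distance between their centers equals twice the radius. The function name and tolerance are illustrative, not CCHDT's actual interface.

```python
import itertools
import math

def contact_adjacency(centers, radius, tol=1e-9):
    """Adjacency matrix of the contact graph for equal-radius disks on
    the unit square with periodic boundary conditions: disks i and j
    are adjacent when their minimum-image distance is 2 * radius."""
    n = len(centers)
    adj = [[0] * n for _ in range(n)]
    for i, j in itertools.combinations(range(n), 2):
        dx = abs(centers[i][0] - centers[j][0])
        dy = abs(centers[i][1] - centers[j][1])
        dx = min(dx, 1.0 - dx)          # minimum-image convention
        dy = min(dy, 1.0 - dy)          # (wrap across the unit square)
        if abs(math.hypot(dx, dy) - 2.0 * radius) < tol:
            adj[i][j] = adj[j][i] = 1
    return adj

# Two disks of radius 0.25 whose centers sit exactly 0.5 apart: one contact.
A = contact_adjacency([(0.1, 0.5), (0.6, 0.5)], 0.25)
```

Canonical labelling of the resulting graph (not shown) is what makes two configurations with the same contact topology classify identically.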
A review of high magnetic moment thin films for microscale and nanotechnology applications
Scheunert, Gunther; Heinonen, O.; Hardeman, R.; ...
2016-02-17
Here, the creation of large magnetic fields is a necessary component in many technologies, ranging from magnetic resonance imaging, electric motors and generators, and magnetic hard disk drives in information storage. This is typically done by inserting a ferromagnetic pole piece with a large magnetisation density M_S in a solenoid. In addition to large M_S, it is usually required or desired that the ferromagnet is magnetically soft and has a Curie temperature well above the operating temperature of the device. A variety of ferromagnetic materials are currently in use, ranging from FeCo alloys in, for example, hard disk drives, to rare earth metals operating at cryogenic temperatures in superconducting solenoids. These latter can exceed the limit on M_S for transition metal alloys given by the Slater-Pauling curve. This article reviews different materials and concepts in use or proposed for technological applications that require a large M_S, with an emphasis on nanoscale material systems, such as thin and ultra-thin films. Attention is also paid to other requirements or properties, such as the Curie temperature and magnetic softness. In a final summary, we evaluate the actual applicability of the discussed materials for use as pole tips in electromagnets, in particular, in nanoscale magnetic hard disk drive read-write heads; the technological advancement of the latter has been a very strong driving force in the development of the field of nanomagnetism.
X-Ray Spectral Analysis of the Steady States of GRS1915+105
NASA Astrophysics Data System (ADS)
Peris, Charith S.; Remillard, Ronald A.; Steiner, James F.; Vrtilek, Saeqa D.; Varnière, Peggy; Rodriguez, Jerome; Pooley, Guy
2016-05-01
We report on the X-ray spectral behavior within the steady states of GRS1915+105. Our work is based on the full data set of the source obtained using the Proportional Counter Array (PCA) on the Rossi X-ray Timing Explorer (RXTE) and 15 GHz radio data obtained using the Ryle Telescope. The steady observations within the X-ray data set naturally separated into two regions in the color-color diagram, which we refer to as steady-soft and steady-hard. GRS1915+105 displays significant curvature in the coronal component in both the soft and hard data within the RXTE/PCA bandpass. A majority of the steady-soft observations display a roughly constant inner disk radius (R_in), while the steady-hard observations display an evolving disk truncation which is correlated with the mass accretion rate through the disk. The disk flux and coronal flux are strongly correlated in steady-hard observations and very weakly correlated in the steady-soft observations. Within the steady-hard observations, we observe two particular circumstances when there are correlations between the coronal X-ray flux and the radio flux, with log slopes η ≈ 0.68 ± 0.35 and η ≈ 1.12 ± 0.13. They are consistent with the upper and lower tracks of Gallo et al. (2012), respectively. A comparison of the model parameters to the state definitions shows that almost all of the steady-soft observations match the criteria of either a thermal or steep power-law state, while a large portion of the steady-hard observations match the hard-state criteria when the disk fraction constraint is neglected.
ERIC Educational Resources Information Center
Cerva, John R.; And Others
1986-01-01
Eight papers cover: optical storage technology; cross-cultural videodisc design; optical disk technology use at the Library of Congress Research Service and National Library of Medicine; Internal Revenue Service image storage and retrieval system; solving business problems with CD-ROM; a laser disk operating system; and an optical disk for…
Code of Federal Regulations, 2013 CFR
2013-10-01
... FEDERAL COMMUNICATIONS COMMISSION GENERAL ACCESS TO ADVANCED COMMUNICATIONS SERVICES AND EQUIPMENT BY... a hard copy and on computer disk in accordance with the requirements of § 14.51(d) of this subpart... submitted both as a hard copy and on computer disk in accordance with the requirements of § 14.51(d) of this...
Code of Federal Regulations, 2012 CFR
2012-10-01
... FEDERAL COMMUNICATIONS COMMISSION GENERAL ACCESS TO ADVANCED COMMUNICATIONS SERVICES AND EQUIPMENT BY... a hard copy and on computer disk in accordance with the requirements of § 14.51(d) of this subpart... submitted both as a hard copy and on computer disk in accordance with the requirements of § 14.51(d) of this...
Code of Federal Regulations, 2014 CFR
2014-10-01
... FEDERAL COMMUNICATIONS COMMISSION GENERAL ACCESS TO ADVANCED COMMUNICATIONS SERVICES AND EQUIPMENT BY... a hard copy and on computer disk in accordance with the requirements of § 14.51(d) of this subpart... submitted both as a hard copy and on computer disk in accordance with the requirements of § 14.51(d) of this...
Dwivedi, Neeraj; Satyanarayana, Nalam; Yeo, Reuben J; Xu, Hai; Ping Loh, Kian; Tripathy, Sudhiranjan; Bhatia, Charanjit S
2015-06-25
One of the key issues for future hard disk drive technology is to design and develop ultrathin (<2 nm) overcoats with excellent wear and corrosion protection and high thermal stability. Forming carbon overcoats (COCs) having interspersed nanostructures by the filtered cathodic vacuum arc (FCVA) process can be an effective approach to achieve the desired target. In this work, by employing a novel bi-level surface modification approach using FCVA, the formation of a high sp3 bonded ultrathin (~1.7 nm) amorphous carbon overcoat with interspersed graphene/fullerene-like nanostructures, grown on magnetic hard disk media, is reported. The in-depth spectroscopic and microscopic analyses by high resolution transmission electron microscopy, scanning tunneling microscopy, time-of-flight secondary ion mass spectrometry, and Raman spectroscopy support the observed findings. Despite a reduction of ~37% in COC thickness, the FCVA-processed thinner COC (~1.7 nm) shows promising functional performance in terms of lower coefficient of friction (~0.25), higher wear resistance, lower surface energy, excellent hydrophobicity, and similar or better oxidation corrosion resistance than current commercial COCs of thickness ~2.7 nm. The surface and tribological properties of the FCVA-deposited COC were further improved after deposition of a lubricant layer.
Synthesis of Ultrathin ta-C Films by Twist-Filtered Cathodic Arc Carbon Plasmas
2001-04-01
system. Ultrathin tetrahedral amorphous carbon (ta-C) films have been deposited on 6 inch wafers. Film properties have been investigated with respect to... Diamondlike films are characterized by an outstanding combination of advantageous properties: they can be very hard, tough, super-smooth, chemically... (<5 nm) hard carbon films are being used as protective overcoats on hard disks and read-write heads. The tribological properties of the head-disk
Future Hard Disk Storage: Limits & Potential Solutions
NASA Astrophysics Data System (ADS)
Lambeth, David N.
2000-03-01
For several years the hard disk drive technology pace has raced along at 60-100%, with products this year and laboratory demonstrations approaching what has been estimated as a physical thermal stability limit of around 40 Gbit/in2. For some time now the data storage industry has recognized that doing business as usual will not be viable for long, and so both incremental evolutionary and revolutionary technologies are being explored. While new recording head materials or thermal recording techniques may allow higher coercivity materials to be recorded upon, and while high-sensitivity spin transport transducer technology may provide sufficient signals to extend beyond the 100 Gigabit/in2 regime, conventional isotropic longitudinal media will show large data retention problems at less than 1/2 of this value. We have recently developed a simple model which indicates that while thermal instability issues may appear at different areal densities, they are non-discriminatory as to the magnetic recording modality: longitudinal, perpendicular, magnetooptic, near field, etc. The model indicates that a strong orientation of the media tends to abate the onset of the thermal limit. Hence, for the past few years we have taken an approach of controlled growth of the microstructure of thin film media. This knowledge has led us to believe that epitaxial growth of multiple thin film layers on single crystalline Si may provide a pathway to nearly perfect crystallites of various, highly oriented, thin film textures. Here we provide an overview of the recording system media challenges, which are useful for the development of a future media design philosophy, and then discuss materials issues and processing techniques for multi-layered thin film material structures which may be used to achieve media structures that can easily exceed the limits predicted for isotropic media.
Halbach array type focusing actuator for small and thin optical data storage device
NASA Astrophysics Data System (ADS)
Lee, Sung Q.; Park, Kang-Ho; Paek, Mun Chul
2004-09-01
The small form factor optical data storage devices are developing rapidly nowadays. Since they are designed for portability and compatibility with flash memory, components such as the disk, head, focusing actuator, and spindle motor should be assembled within 5 mm. The thickness of the focusing actuator is within 2 mm, and the total working range is +/-100 um with a resolution of less than 1 μm. Since the thickness is so tightly limited, it is hard to place a yoke that closes the magnetic circuit, and hard to achieve strong flux density without one. Therefore, a Halbach array is adopted to increase the magnetic flux on one side without a yoke. The proposed Halbach array type focusing actuator has the advantage of a thin actuation structure while sacrificing less flux density than a conventional magnet array. The optical head unit is moved on a swing arm type tracking actuator. The focusing coil is attached to the swing arm, the Halbach magnet array is positioned at the bottom of the deck along the tracking line, and the focusing actuator exerts force by Fleming's left-hand rule. The dynamics, working range, and control resolution of the focusing actuator are analyzed and evaluated.
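The force the focusing coil develops follows from Fleming's left-hand rule, i.e. the motor equation for a current-carrying conductor in the Halbach array's field. The values below are illustrative assumptions, not the paper's design parameters.

```latex
% Force on a coil of N turns carrying current I through flux density B,
% with effective conductor length L per turn in the field:
\[
  F = N B I L
\]
% Example with assumed values N = 100 turns, B = 0.4\,\mathrm{T},
% I = 0.05\,\mathrm{A}, L = 0.01\,\mathrm{m}:
\[
  F = 100 \times 0.4\,\mathrm{T} \times 0.05\,\mathrm{A}
      \times 0.01\,\mathrm{m} = 0.02\,\mathrm{N}
\]
```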
Fast disk array for image storage
NASA Astrophysics Data System (ADS)
Feng, Dan; Zhu, Zhichun; Jin, Hai; Zhang, Jiangling
1997-01-01
A fast disk array is designed for large continuous image storage. It includes a high speed data path architecture and the technology of data striping and organization on the disk array. The high speed data path, which is constructed from two dual-port RAMs and some control circuitry, is configured to transfer data between a host system and a plurality of disk drives. The bandwidth can exceed 100 MB/s if the data path is based on PCI (peripheral component interconnect). The organization of data stored on the disk array is similar to RAID 4. Data are striped across a plurality of disks, and each striping unit is equal to a track. I/O instructions are performed in parallel on the disk drives. An independent disk is used to store the parity information in the fast disk array architecture. By placing the parity generation circuit directly on the SCSI (or SCSI-2) bus, the parity information can be generated on the fly, with little effect on the data being written in parallel to the other disks. The fast disk array architecture designed in this paper can meet the demands of image storage.
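On-the-fly parity generation in a RAID-4-style layout is a bytewise XOR across the corresponding striping units. A minimal sketch, with toy track sizes and disk contents standing in for the hardware parity circuit:

```python
from functools import reduce

def xor_blocks(blocks):
    # Parity is the bytewise XOR across the corresponding striping units;
    # zip(*blocks) walks the units column by column.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Four data "disks", one track-sized striping unit each (toy sizes).
track = 8
data_disks = [bytes([d] * track) for d in (1, 2, 3, 4)]

# The dedicated parity disk receives the XOR of the four data units.
parity_disk = xor_blocks(data_disks)
```

Because XOR is associative, a bus-resident circuit can fold each unit into the running parity as it passes by, which is what lets the parity disk be updated without stalling the parallel data writes.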
PLANNING FOR OPTICAL DISK TECHNOLOGY WITH DIGITAL CARTOGRAPHY.
Light, Donald L.
1984-01-01
Progress in the computer field continues to suggest that the transition from traditional analog mapping systems to digital systems has become a practical possibility. A major shortfall that still exists in digital systems is the need for very large mass storage capacity. The decade of the 1980's has introduced laser optical disk storage technology, which may be the breakthrough needed for mass storage. This paper addresses system concepts for digital cartography during the transition period. Emphasis is placed on determining U. S. Geological Survey mass storage requirements and introducing laser optical disk technology for handling storage problems for digital data in this decade.
Code of Federal Regulations, 2010 CFR
2010-10-01
... “Proposed Order.” The proposed order shall be submitted both as a hard copy and on computer disk in accordance with the requirements of § 1.734(d). Where appropriate, the proposed order format should conform... a “Proposed Order.” The proposed order shall be submitted both as a hard copy and on computer disk...
TRACING THE REVERBERATION LAG IN THE HARD STATE OF BLACK HOLE X-RAY BINARIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Marco, B.; Ponti, G.; Nandra, K.
2015-11-20
We report results obtained from a systematic analysis of X-ray lags in a sample of black hole X-ray binaries, with the aim of assessing the presence of reverberation lags and studying their evolution during outburst. We used XMM-Newton and simultaneous Rossi X-ray Timing Explorer (RXTE) observations to obtain broadband energy coverage of both the disk and the hard X-ray Comptonization components. In most cases the detection of reverberation lags is hampered by low levels of variability-power signal-to-noise ratio (typically when the source is in a soft state) and/or short exposure times. The most detailed study was possible for GX 339-4 in the hard state, which allowed us to characterize the evolution of X-ray lags as a function of luminosity in a single source. Over all the sampled frequencies (∼0.05–9 Hz), we observe the hard lags intrinsic to the power-law component, already well known from previous RXTE studies. The XMM-Newton soft X-ray response allows us to detail the disk variability. At low frequencies (long timescales) the disk component always leads the power-law component. On the other hand, a soft reverberation lag (ascribable to thermal reprocessing) is always detected at high frequencies (short timescales). The intrinsic amplitude of the reverberation lag decreases as the source luminosity and the disk fraction increase. This suggests that the distance between the X-ray source and the region of the optically thick disk where reprocessing occurs gradually decreases as GX 339-4 rises in luminosity through the hard state, possibly as a consequence of reduced disk truncation.
Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Gibson, Garth Alan
1990-01-01
During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems and, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
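Parity corrects a single, self-identifying failure because XOR is its own inverse: the lost unit is the XOR of the parity unit with all surviving data units. A minimal sketch with invented unit contents (not the thesis's actual encoding code):

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def reconstruct(surviving_units, parity_unit):
    """Rebuild the unit lost in a single, self-identifying disk failure:
    XOR the parity unit with every surviving data unit."""
    out = parity_unit
    for unit in surviving_units:
        out = xor_bytes(out, unit)
    return out

# Three data units plus their parity (toy 4-byte units).
units = [bytes([v] * 4) for v in (10, 20, 30)]
parity = units[0]
for u in units[1:]:
    parity = xor_bytes(parity, u)

# Disk 1 fails; its unit is rebuilt from the survivors plus parity.
rebuilt = reconstruct([units[0], units[2]], parity)
```

"Self-identifying" matters: the array must know *which* disk failed, since parity alone locates nothing; it only supplies the missing operand.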
The LHCb Grid Simulation: Proof of Concept
NASA Astrophysics Data System (ADS)
Hushchyn, M.; Ustyuzhanin, A.; Arzymatov, K.; Roiser, S.; Baranov, A.
2017-10-01
The Worldwide LHC Computing Grid provides geographically distributed researchers with access to data and to the computational resources needed to analyze them. The grid has a hierarchical topology, with multiple sites distributed over the world that differ in number of CPUs, amount of disk storage, and connection bandwidth. Job scheduling and the data distribution strategy are key elements of grid performance. Optimizing the algorithms for these tasks requires testing them on the real grid, which is hard to achieve. Having a grid simulator can simplify this task and therefore lead to better scheduling and data placement algorithms. In this paper we demonstrate a grid simulator for the LHCb distributed computing software.
An Improved B+ Tree for Flash File Systems
NASA Astrophysics Data System (ADS)
Havasi, Ferenc
Nowadays mobile devices such as mobile phones, mp3 players and PDAs are becoming ever more common. Most of them use flash chips as storage. To store data efficiently on flash, it is necessary to adapt ordinary file systems, because they are designed for use on hard disks. Most file systems use some kind of search tree to store index information, which is very important from a performance perspective. Here we improved the B+ search tree algorithm so as to make flash devices more efficient. Our implementation of this solution saves 98%-99% of the flash operations, and is now part of the Linux kernel.
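The headline saving of flash operations comes from avoiding one flash page program per index update. The paper's actual B+ tree modification is not reproduced here, but the batching idea behind such savings can be sketched (hypothetical class and batch size, purely illustrative):

```python
class BufferedFlashIndex:
    """Hypothetical sketch of the batching idea: accumulate index updates in
    RAM and program one flash page per `batch` updates, instead of one flash
    write per update as a naive on-flash B+ tree would issue."""

    def __init__(self, batch=100):
        self.batch = batch
        self.pending = []        # updates not yet written to flash
        self.entries = {}        # the (simulated) on-flash index contents
        self.flash_writes = 0    # flash page programs issued so far

    def insert(self, key, value):
        self.pending.append((key, value))
        if len(self.pending) >= self.batch:
            self.flush()

    def flush(self):
        if self.pending:
            self.entries.update(self.pending)
            self.pending.clear()
            self.flash_writes += 1   # one page program covers the whole batch

idx = BufferedFlashIndex(batch=100)
for i in range(1000):
    idx.insert(i, i * i)
idx.flush()
saving = 1 - idx.flash_writes / 1000   # vs. one flash write per update
```

With a batch of 100, the 1000 updates cost 10 flash writes instead of 1000, a 99% saving of the same order as the abstract reports; the trade-off is that buffered updates are lost on power failure unless journaled.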
ERIC Educational Resources Information Center
Valentine, Pamela
1980-01-01
The author describes the floppy disk with an analogy to the phonograph record, and discusses the advantages, disadvantages, and capabilities of hard-sectored and soft-sectored floppy disks. She concludes that, at present, the floppy disk will continue to be the primary choice of personal computer manufacturers and their customers. (KC)
Optical Disk for Digital Storage and Retrieval Systems.
ERIC Educational Resources Information Center
Rose, Denis A.
1983-01-01
Availability of low-cost digital optical disks will revolutionize storage and retrieval systems over next decade. Three major factors will effect this change: availability of disks and controllers at low-cost and in plentiful supply; availability of low-cost and better output means for system users; and more flexible, less expensive communication…
A high-speed, large-capacity, 'jukebox' optical disk system
NASA Technical Reports Server (NTRS)
Ammon, G. J.; Calabria, J. A.; Thomas, D. T.
1985-01-01
Two optical disk 'jukebox' mass storage systems which provide access to any data in a store of 10^13 bits (1250 gigabytes) within six seconds have been developed. The optical disk jukebox system is divided into two units, including a hardware/software controller and a disk drive. The controller provides flexibility and adaptability, through a ROM-based microcode-driven data processor and a ROM-based software-driven control processor. The cartridge storage module contains 125 optical disks housed in protective cartridges. Attention is given to a conceptual view of the disk drive unit, the NASA optical disk system, the NASA database management system configuration, the NASA optical disk system interface, and an open systems interconnect reference model.
The performance of disk arrays in shared-memory database machines
NASA Technical Reports Server (NTRS)
Katz, Randy H.; Hong, Wei
1993-01-01
In this paper, we examine how disk arrays and shared memory multiprocessors lead to an effective method for constructing database machines for general-purpose complex query processing. We show that disk arrays can lead to cost-effective storage systems if they are configured from suitably small formfactor disk drives. We introduce the storage system metric data temperature as a way to evaluate how well a disk configuration can sustain its workload, and we show that disk arrays can sustain the same data temperature as a more expensive mirrored-disk configuration. We use the metric to evaluate the performance of disk arrays in XPRS, an operational shared-memory multiprocessor database system being developed at the University of California, Berkeley.
Active versus Passive Hard Disks against a Membrane: Mechanical Pressure and Instability.
Junot, G; Briand, G; Ledesma-Alonso, R; Dauchot, O
2017-07-14
We experimentally study the mechanical pressure exerted by a set of respectively passive isotropic and self-propelled polar disks onto two different flexible unidimensional membranes. In the case of the isotropic disks, the mechanical pressure, inferred from the shape of the membrane, is identical for both membranes and follows the equilibrium equation of state for hard disks. On the contrary, for the self-propelled disks, the mechanical pressure strongly depends on the membrane in use and thus is not a state variable. When self-propelled disks are present on both sides of the membrane, we observe an instability of the membrane akin to the one predicted theoretically for active Brownian particles against a soft wall. In that case, the integrated mechanical pressure difference across the membrane cannot be computed from the sole knowledge of the packing fractions on both sides, further evidence of the absence of an equation of state.
A kilobyte rewritable atomic memory
NASA Astrophysics Data System (ADS)
Kalff, Floris; Rebergen, Marnix; Fahrenfort, Nora; Girovsky, Jan; Toskovic, Ranko; Lado, Jose; FernáNdez-Rossier, JoaquíN.; Otte, Sander
The ability to manipulate individual atoms by means of scanning tunneling microscopy (STM) opens up opportunities for storage of digital data on the atomic scale. Recent achievements in this direction include data storage based on bits encoded in the charge state, the magnetic state, or the local presence of single atoms or atomic assemblies. However, a key challenge at this stage is the extension of such technologies into large-scale rewritable bit arrays. We demonstrate a digital atomic-scale memory of up to 1 kilobyte (8000 bits) using an array of individual surface vacancies in a chlorine-terminated Cu(100) surface. The chlorine vacancies are found to be stable at temperatures up to 77 K. The memory, crafted using scanning tunneling microscopy at low temperature, can be read and re-written automatically by means of atomic-scale markers, and offers an areal density of 502 terabits per square inch, outperforming state-of-the-art hard disk drives by three orders of magnitude.
Evaluating the effect of online data compression on the disk cache of a mass storage system
NASA Technical Reports Server (NTRS)
Pentakalos, Odysseas I.; Yesha, Yelena
1994-01-01
A trace-driven simulation of the disk cache of a mass storage system was used to evaluate the effect of an online compression algorithm on various performance measures. Traces from the system at NASA's Center for Computational Sciences were used to run the simulation, and disk cache hit ratios and the number of files and bytes migrating to tertiary storage were measured. The measurements were performed for both an LRU and a size-based migration algorithm. In addition to showing the effect of online data compression on the disk cache performance measures, the simulation provided insight into the characteristics of the interactive references, suggesting that hint-based prefetching algorithms are the only alternative for any future improvements to the disk cache hit ratio.
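A trace-driven cache simulation of this kind can be sketched in a few lines. This toy version (invented trace, LRU policy only, no size-based variant or compression) just replays accesses and counts hits and migrations to tertiary storage:

```python
from collections import OrderedDict

def lru_hit_ratio(trace, capacity):
    """Replay a file-access trace through an LRU-managed disk cache holding
    `capacity` files; evicted files count as migrations to tertiary storage."""
    cache = OrderedDict()
    hits = migrated = 0
    for name in trace:
        if name in cache:
            hits += 1
            cache.move_to_end(name)        # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the least recently used file
                migrated += 1
            cache[name] = True
    return hits / len(trace), migrated

trace = ["a", "b", "a", "c", "a", "b", "d", "a"]   # invented toy trace
ratio, migrated = lru_hit_ratio(trace, capacity=2)
```

Modeling online compression would amount to shrinking each file's footprint so that the effective `capacity` (in files or bytes) grows, which is precisely the effect the study measures against real traces.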
Optical Disks Compete with Videotape and Magnetic Storage Media: Part I.
ERIC Educational Resources Information Center
Urrows, Henry; Urrows, Elizabeth
1988-01-01
Describes the latest technology in videotape cassette systems and other magnetic storage devices and their possible effects on optical data disks. Highlights include Honeywell's Very Large Data Store (VLDS); Exabyte's tape cartridge storage system; standards for tape drives; and Masstor System's videotape cartridge system. (LRW)
Modeling the X-Ray Timing Properties of Cygnus X-1 Caused by Waves Propagating in a Transition Disk
NASA Astrophysics Data System (ADS)
Misra, R.
2000-02-01
We show that waves propagating in a transition disk can explain the short-term temporal behavior of Cygnus X-1. In the transition-disk model, the spectrum is produced by saturated Comptonization within the inner region of the accretion disk where the temperature varies rapidly with radius. Recently, the spectrum from such a disk has been shown to fit the average broadband spectrum of this source better than that predicted by the soft-photon Comptonization model. Here we consider a simple model in which waves are propagating cylindrically symmetrically in the transition disk with a uniform propagation speed (cp). We show that this model can qualitatively explain (1) the variation of the power spectral density with energy, (2) the hard lags as a function of frequency, and (3) the hard lags as a function of energy for various frequencies. Thus, the transition-disk model can explain the average spectrum and the short-term temporal behavior of Cyg X-1.
Bond-orientational analysis of hard-disk and hard-sphere structures.
Senthil Kumar, V; Kumaran, V
2006-05-28
We report the bond-orientational analysis results for the thermodynamic, random, and homogeneously sheared inelastic structures of hard-disks and hard-spheres. The thermodynamic structures show a sharp rise in the order across the freezing transition. The random structures show the absence of crystallization. The homogeneously sheared structures get ordered at a packing fraction higher than the thermodynamic freezing packing fraction, due to the suppression of crystal nucleation. On shear ordering, strings of close-packed hard-disks in two dimensions and close-packed layers of hard-spheres in three dimensions, oriented along the velocity direction, slide past each other. Such a flow creates a considerable amount of fourfold order in two dimensions and body-centered-tetragonal (bct) structure in three dimensions. These transitions are the flow analogs of the martensitic transformations occurring in metals due to the stresses induced by a rapid quench. In hard-disk structures, using the bond-orientational analysis we show the presence of fourfold order. In sheared inelastic hard-sphere structures, even though the global bond-orientational analysis shows that the system is highly ordered, a third-order rotational invariant analysis shows that only about 40% of the spheres have face-centered-cubic (fcc) order, even in the dense and near-elastic limits, clearly indicating the coexistence of multiple crystalline orders. When layers of close-packed spheres slide past each other, in addition to the bct structure, the hexagonal-close-packed (hcp) structure is formed due to the random stacking faults. Using the Honeycutt-Andersen pair analysis and an analysis based on the 14-faceted polyhedra having six quadrilateral and eight hexagonal faces, we show the presence of bct and hcp signatures in shear ordered inelastic hard-spheres. Thus, our analysis shows that the dense sheared inelastic hard-spheres have a mixture of fcc, bct, and hcp structures.
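The local bond-orientational order parameter used in such analyses is psi_n. A minimal sketch (pure Python; neighbor lists are assumed given rather than computed from an actual packing):

```python
import cmath
import math

def psi_n(center, neighbors, n=6):
    """Local n-fold bond-orientational order parameter:
    psi_n = |(1/N) * sum_j exp(i * n * theta_j)|, where theta_j is the angle
    of the bond from `center` to neighbor j.  It equals 1 for a perfectly
    n-fold-coordinated shell and is near 0 for disordered bond angles."""
    total = sum(cmath.exp(1j * n * math.atan2(y - center[1], x - center[0]))
                for x, y in neighbors)
    return abs(total) / len(neighbors)

# a square neighbor shell has full fourfold order but no sixfold order,
# the signature of the shear-induced fourfold structures described above
square = [(1, 0), (0, 1), (-1, 0), (0, -1)]
fourfold = psi_n((0, 0), square, n=4)
sixfold = psi_n((0, 0), square, n=6)
```

A hexagonal shell would give the opposite result (psi_6 = 1, psi_4 ≈ 0), which is why comparing psi_4 against psi_6 distinguishes sheared strings from thermodynamic crystallites in two dimensions.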
Disk storage management for LHCb based on Data Popularity estimator
NASA Astrophysics Data System (ADS)
Hushchyn, Mikhail; Charpentier, Philippe; Ustyuzhanin, Andrey
2015-12-01
This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times for jobs using this data.
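The pipeline described above — forecast popularity from usage history, then choose replica counts — can be caricatured with a simple exponentially weighted forecast standing in for the paper's machine-learning models (thresholds and dataset names are invented):

```python
def predict_popularity(history, decay=0.5):
    # exponentially weighted forecast of next-period accesses (most recent
    # period last); a stand-in for the paper's ML predictors
    forecast = 0.0
    for accesses in history:
        forecast = decay * forecast + (1 - decay) * accesses
    return forecast

def plan_replicas(datasets, max_replicas=4):
    # unpopular datasets drop to 0 disk replicas (tape archive only);
    # popular ones get more replicas, capped at `max_replicas`
    plan = {}
    for name, history in datasets.items():
        p = predict_popularity(history)
        plan[name] = 0 if p < 1 else min(max_replicas, 1 + int(p // 10))
    return plan

usage = {"hot.dst": [40, 60, 80], "cold.dst": [5, 1, 0]}  # invented histories
plan = plan_replicas(usage)
```

The real system folds such per-dataset decisions into a global loss function over disk space and job waiting times; this sketch only shows the per-dataset forecast-then-threshold step.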
Recording and reading of information on optical disks
NASA Astrophysics Data System (ADS)
Bouwhuis, G.; Braat, J. J. M.
In the storage of information, related to video programs, in a spiral track on a disk, difficulties arise because the bandwidth for video is much greater than for audio signals. An attractive solution was found in optical storage. The optical noncontact method is free of wear, and allows for fast random access. Initial problems regarding a suitable light source could be overcome with the aid of appropriate laser devices. The basic concepts of optical storage on disks are treated insofar as they are relevant for the optical arrangement. A general description is provided of a video, a digital audio, and a data storage system. Scanning spot microscopy for recording and reading of optical disks is discussed, giving attention to recording of the signal, the readout of optical disks, the readout of digitally encoded signals, and cross talk. Tracking systems are also considered, taking into account the generation of error signals for radial tracking and the generation of focus error signals.
NASA Astrophysics Data System (ADS)
Wu, Lin
2018-05-01
In this paper, we model the depletion dynamics of the molecularly thin layer of lubricants on a bit-patterned media disk of hard disk drives under a sliding air-bearing head. The dominant physics and, consequently, the lubricant depletion dynamics on a patterned disk are shown to be significantly different from the well-studied case of a smooth disk. Our results indicate that the surface tension effect, which is negligible on a flat disk, apparently suppresses depletion by enforcing a bottleneck effect around the disk pattern peak regions to thwart the migration of lubricants. When the disjoining pressure is relatively small, it assists the depletion; but when the disjoining pressure becomes dominant, it resists depletion. Disk pattern orientation plays a critical role in the depletion process. The effect of disk pattern orientation on depletion originates from its complex interaction with the other intermingled factors of external air shearing stress distribution and lubricant particle trajectory. Patterning a disk surface with nanostructures of high density, large height/pitch ratio, and particular orientation is demonstrated to be one efficient way to alleviate the formation of lubricant depletion tracks.
Cost-effective data storage/archival subsystem for functional PACS
NASA Astrophysics Data System (ADS)
Chen, Y. P.; Kim, Yongmin
1993-09-01
Not the least of the requirements of a workable PACS is the ability to store and archive vast amounts of information. A medium-size hospital will generate between 1 and 2 TBytes of data annually on a fully functional PACS. A high-speed image transmission network coupled with a comparably high-speed central data storage unit can make local memory and magnetic disks in the PACS workstations less critical and, in an extreme case, unnecessary. Under these circumstances, the capacity and performance of the central data storage subsystem and database are critical in determining the response time at the workstations, thus significantly affecting clinical acceptability. The central data storage subsystem not only needs to provide sufficient capacity to store about ten days' worth of images (five days' worth of new studies and, on average, about one comparison study for each new study), but also to supply images to the requesting workstation in a timely fashion. The database must provide fast retrieval responses upon users' requests for images. This paper analyzes both the advantages and disadvantages of multiple parallel-transfer disks versus RAID disks for the short-term central data storage subsystem, as well as an optical disk jukebox versus a digital recorder tape subsystem for long-term archive. Furthermore, an example high-performance, cost-effective storage subsystem that integrates both RAID disks and a high-speed digital tape subsystem as a PACS data storage/archival unit is presented.
Galactic Black Holes in the Hard State: A Multi-Wavelength View of Accretion and Ejection
NASA Technical Reports Server (NTRS)
Kalemci; Tomsick, John A.; Migliari; Corbel; Markoff
2010-01-01
The canonical hard state is associated with emission from all three fundamental accretion components: the accretion disk, the hot accretion disk corona, and the jet. On top of these, the hard state also hosts very rich temporal variability properties (low-frequency QPOs in the PDS, time lags, long-timescale evolution). Our group has been working on the major questions of the hard state both observationally (with multi-wavelength campaigns using RXTE, Swift, Suzaku, Spitzer, VLA, ATCA, SMARTS) and theoretically (through jet models that can fit entire SEDs). Through spectral and temporal analysis we seek to determine the geometry of the accretion components, and relate the geometry to the formation of and emission from a jet. In this presentation I will review the recent contributions of our group to the field, including the Swift results on the disk geometry at low accretion rates, the jet model fits to the hard state SEDs (including Spitzer data) of GRO J1655-40, and the final results on the evolution of spectral (including X-ray, radio and infrared) and temporal properties of selected black holes in the hard state. I will also talk about the impact of ASTROSAT on the science objectives of our group.
Saying goodbye to optical storage technology.
McLendon, Kelly; Babbitt, Cliff
2002-08-01
The days of using optical disk based mass storage devices for high volume applications like health care document imaging are coming to an end. The price/performance curve for redundant magnetic disks, known as RAID, is now more positive than for optical disks. All types of application systems, across many sectors of the marketplace are using these newer magnetic technologies, including insurance, banking, aerospace, as well as health care. The main components of these new storage technologies are RAID and SAN. SAN refers to storage area network, which is a complex mechanism of switches and connections that allow multiple systems to store huge amounts of data securely and safely.
Swivel Joint For Liquid Nitrogen
NASA Technical Reports Server (NTRS)
Milner, James F.
1988-01-01
Swivel joint allows liquid-nitrogen pipe to rotate through angle of 100 degree with respect to mating pipe. Functions without cracking hard foam insulation on lines. Pipe joint rotates on disks so mechanical stress not transmitted to thick insulation on pipes. Inner disks ride on fixed outer disks. Disks help to seal pressurized liquid nitrogen flowing through joint.
NASA Astrophysics Data System (ADS)
Cheng, Feng
The emerging Big Data era demands a rapidly increasing need for speed and capacity in storing and processing information. Standalone magnetic recording devices, such as hard disk drives (HDDs), have long played a central role in modern data storage and have advanced continuously. Recognizing the growing capacity gap between demand and production, industry has pushed the bit areal density in HDDs to 900 gigabits per square inch, a remarkable 450-million-fold increase since the invention of the first hard disk drive in 1956. However, the further development of HDD capacity is facing a pressing challenge, the so-called superparamagnetic effect, which leads to the loss of information when a single bit becomes too small to preserve the magnetization. This requires new magnetic recording technologies that can write more stable magnetic bits into hard magnetic materials. Recent research has shown that it is possible to use ultrafast laser pulses to switch the magnetization in certain types of magnetic thin films. Surprisingly, such a process does not require an externally applied magnetic field, which always exists in conventional HDDs. Furthermore, the optically induced magnetization switching is extremely fast, down to the sub-picosecond (10^-12 s) level, whereas with traditional recording methods deterministic switching does not take place on timescales shorter than 20 ps. It is worth noting that the direction of magnetization is related to the helicity of the incident laser pulses. Namely, right-handed polarized laser pulses generate magnetization pointing in one direction while left-handed polarized laser pulses generate magnetization pointing in the other direction. This so-called helicity-dependent all-optical switching (HD-AOS) phenomenon can potentially be used in the next generation of magnetic storage systems. In this thesis, I explore the HD-AOS phenomenon in hybrid metal-ferromagnet structures, which consist of gold and Co/Pt multilayers.
The experimental results show that such CoPtAu hybrid structures exhibit stable HD-AOS over a wide range of repetition rates and peak powers. A macroscopic three-temperature model is developed to explain the experimental results. In order to reduce the magnetic bit size and power consumption to transform future magnetic data storage techniques, I further propose plasmonic-enhanced all-optical switching (PE-AOS), utilizing the tight field confinement and strong local field enhancement that arise from the excitation of surface plasmons supported by judiciously designed metallic nanostructures. Preliminary results on PE-AOS are presented. Finally, I discuss future work to explore the underlying mechanism of the HD-AOS phenomenon in hybrid metal-ferromagnetic thin films. Different materials and plasmonic nanostructures are also proposed as further work.
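A macroscopic three-temperature model of the kind mentioned couples electron, lattice, and spin heat baths driven by an ultrafast pump. A forward-Euler sketch with illustrative, non-physical constants (not the thesis's fitted parameters) is:

```python
def simulate_3tm(steps=5000, dt=1e-3):
    """Forward-Euler integration of a three-temperature model: electron,
    lattice, and spin baths exchange heat, and an ultrafast pump deposits
    energy into the electrons.  All constants are illustrative and in
    arbitrary units, not fitted to any material."""
    Te = Tl = Ts = 300.0
    g_el, g_es, g_ls = 5.0, 2.0, 1.0   # inter-bath coupling constants
    Ce, Cl, Cs = 0.1, 1.0, 0.3         # heat capacities (electrons: smallest)
    history = []
    for n in range(steps):
        pump = 50.0 if n < 100 else 0.0   # short pulse at the start
        dTe = (pump - g_el * (Te - Tl) - g_es * (Te - Ts)) / Ce
        dTl = (g_el * (Te - Tl) - g_ls * (Tl - Ts)) / Cl
        dTs = (g_es * (Te - Ts) + g_ls * (Tl - Ts)) / Cs
        Te, Tl, Ts = Te + dt * dTe, Tl + dt * dTl, Ts + dt * dTs
        history.append((Te, Tl, Ts))
    return history

history = simulate_3tm()
peak_Te = max(te for te, _, _ in history)      # electrons spike during the pump
final_Te, final_Tl, final_Ts = history[-1]     # all baths re-equilibrate
```

Because the electron heat capacity is smallest, the electron temperature transiently spikes far above the lattice and spin baths before all three relax to a common temperature; that transient non-equilibrium window is where the switching physics plays out.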
Software Engineering Principles 3-14 August 1981,
1981-08-01
small disk used (but not that of the extended mass storage or large disk option); it is very fast (about 1/5 the speed of the primary memory, where the disk was 1/10000 for access); and... programmed and tested - must be correct and fast D. Choice of right synchronization operations: Design problem 1. Several mentioned in literature 9-22
Hirota, Akihiko; Ito, Shin-ichi
2006-06-01
Using real-time hard disk recording, we have developed an optical system for the long-duration detection of changes in membrane potential from 1,020 sites with a high temporal resolution. The signal-to-noise ratio was sufficient for analyzing the spreading pattern of excitatory waves in frog atria in a single sweep.
Heat-Assisted Magnetic Recording: Fundamental Limits to Inverse Electromagnetic Design
NASA Astrophysics Data System (ADS)
Bhargava, Samarth
In this dissertation, we address the burgeoning fields of diffractive optics, metal-optics and plasmonics, and computational inverse problems in the engineering design of electromagnetic structures. We focus on the application of the optical nano-focusing system that will enable Heat-Assisted Magnetic Recording (HAMR), a higher density magnetic recording technology that will fulfill the exploding worldwide demand for digital data storage. The heart of HAMR is a system that focuses light to a nanoscale, sub-diffraction-limit spot with an extremely high power density via an optical antenna. We approach this engineering problem by first discussing the fundamental limits of nano-focusing and the material limits for metal-optics and plasmonics. Then, we use efficient gradient-based optimization algorithms to computationally design shapes of 3D nanostructures that outperform human designs on the basis of mass-market product requirements. In 2014, the world manufactured ~1 zettabyte (ZB), i.e. 1 billion terabytes (TB), of data storage devices, including ~560 million magnetic hard disk drives (HDDs). Global demand for storage will likely increase by 10x in the next 5-10 years, and manufacturing capacity cannot keep up with demand alone. We discuss the state-of-the-art HDD and why industry invented Heat-Assisted Magnetic Recording (HAMR) to overcome the data density limitations. HAMR leverages the temperature sensitivity of magnets, in which the coercivity suddenly and non-linearly falls at the Curie temperature. Data recording to high-density hard disks can be achieved by locally heating one bit of information while co-applying a magnetic field. The heating can be achieved by focusing 100 microW of light to a 30 nm diameter spot on the hard disk. This is an enormous light intensity, roughly ~100,000,000x the intensity of sunlight on the earth's surface!
This power density is ~1,000x the output of gold-coated tapered optical fibers used in Near-field Scanning Optical Microscopes (NSOM), which is the incumbent technology allowing the focus of light to the nano-scale. Even in these lower power NSOM probe tips, optical self-heating and deformation of the nano-gold tips are significant reliability and performance bottlenecks. Hence, the design and manufacture of the higher power optical nano-focusing system for HAMR must overcome great engineering challenges in optical and thermal performance. There has been much debate about alternative materials for metal-optics and plasmonics to cure the current plague of optical loss and thermal reliability in this burgeoning field. We clear the air. For an application like HAMR, where intense self-heating occurs, refractory metals and metal nitrides with high melting points but low optical and thermal conductivities are inferior to noble metals. This conclusion is contradictory to several claims and may be counter-intuitive to some, but the analysis is simple, evident and relevant to any engineer working on metal-optics and plasmonics. Indeed, the best metals for DC and RF electronics are also the best at optical frequencies. We also argue that the geometric design of electromagnetic structures (especially sub-wavelength devices) is too cumbersome for human designers, because the wave nature of light necessitates that this inverse problem be non-convex and non-linear. When the computation for one forward simulation is extremely demanding (hours on a high-performance computing cluster), typical designers constrain themselves to only 2 or 3 degrees of freedom. We attack the inverse electromagnetic design problem using gradient-based optimization after leveraging the adjoint method to efficiently calculate the gradient (i.e. the sensitivity) of an objective function with respect to thousands to millions of parameters.
This approach results in creative computational designs of electromagnetic structures that human designers could not have conceived, yet which yield better optical performance. After gaining key insights from the fundamental limits and building our Inverse Electromagnetic Design software, we finally attempt to solve the challenges in enabling HAMR and the future supply of digital data storage hardware. In 2014, the hard disk industry spent ~$200 million in R&D, but poor optical and thermal performance of the metallic nano-transducer continues to prevent a commercial HAMR product. Via our design process, we successfully computationally generated designs for the nano-focusing system that meet specifications for higher data density, lower adjacent track interference, lower laser power requirements and, most notably, lower self-heating of the crucial metallic nano-antenna. We believe that computational design will be a crucial component in commercial HAMR as well as many other commercially significant applications of micro- and nano-optics. If successful in commercializing HAMR, the hard disk industry may sell 1 billion HDDs per year by 2025, with an average of 6 semiconductor diode lasers and 6 optical chips per drive. The key players will become the largest manufacturers of integrated optical chips and nano-antennas in the world. This industry will perform millions of single-mode laser alignments per day. (Abstract shortened by UMI.)
Using Solid State Disk Array as a Cache for LHC ATLAS Data Analysis
NASA Astrophysics Data System (ADS)
Yang, W.; Hanushevsky, A. B.; Mount, R. P.; Atlas Collaboration
2014-06-01
User data analysis in high energy physics presents a challenge to spinning-disk-based storage systems. The analysis is data-intensive, yet reads are small, sparse, and cover a large volume of data files. It is also unpredictable due to users' responses to storage performance. We describe here a system with an array of Solid State Disks as a non-conventional, standalone file-level cache in front of the spinning-disk storage to help improve the performance of LHC ATLAS user analysis at SLAC. The system uses several days of data access records to make caching decisions. It can also use information from other sources such as a work-flow management system. We evaluate the performance of the system both in terms of caching and its impact on user analysis jobs. The system currently uses Xrootd technology, but the technique can be applied to any storage system.
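A caching decision driven by several days of access records can be illustrated with a simple recency/frequency rule (hypothetical file names and thresholds; not SLAC's actual Xrootd-based policy):

```python
from collections import defaultdict

def choose_files_to_cache(access_log, now, window_days=7, min_reads=3):
    """Admit a file to the SSD cache if it was read at least `min_reads`
    times within the last `window_days` of access records.  An illustrative
    policy in the spirit of the paper, not the deployed algorithm."""
    counts = defaultdict(int)
    for day, path in access_log:
        if now - day <= window_days:
            counts[path] += 1
    return {path for path, reads in counts.items() if reads >= min_reads}

# invented access records: (day, file path)
log = [(99, "/atlas/user/a.root"), (98, "/atlas/user/a.root"),
       (97, "/atlas/user/a.root"), (99, "/atlas/user/b.root"),
       (60, "/atlas/user/c.root")]
hot = choose_files_to_cache(log, now=100)
```

Requiring several reads inside a recent window filters out one-off sparse reads, which matters here because SSD write endurance makes admitting cold files costly.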
Broadband X-Ray Spectra of GX 339-4 and the Geometry of Accreting Black Holes in the Hard State
NASA Technical Reports Server (NTRS)
Tomsick, John A.; Kalemci, Emrah; Kaaret, Philip; Markoff, Sera; Corbel, Stephane; Migliari, Simone; Fender, Rob; Bailyn, Charles D.; Buxton, Michelle M.
2008-01-01
A major question in the study of black hole binaries involves our understanding of the accretion geometry when the sources are in the "hard" state, with an X-ray energy spectrum dominated by a hard power-law component and radio emission coming from a steady "compact" jet. Although the common hard state picture is that the accretion disk is truncated, perhaps at hundreds of gravitational radii (Rg) from the black hole, recent results for the recurrent transient GX 339-4 by Miller and coworkers show evidence for disk material very close to the black hole's innermost stable circular orbit. That work studied GX 339-4 at a luminosity of approximately 5% of the Eddington limit (L_Edd) and used parameters from a relativistic reflection model and the presence of a thermal component as diagnostics. Here we use similar diagnostics but extend the study to lower luminosities (2.3% and 0.8% L_Edd) using Swift and RXTE observations of GX 339-4. We detect a thermal component with an inner disk temperature of approximately 0.2 keV at 2.3% L_Edd. At both luminosities, we detect broad features due to iron K-alpha that are likely related to reflection of hard X-rays off disk material. If these features are broadened by relativistic effects, they indicate that the material resides within 10 Rg, and the measurements are consistent with the disk's inner radius remaining at approximately 4 Rg down to 0.8% L_Edd. However, we also discuss an alternative model for the broadening, and we note that the evolution of the thermal component is not entirely consistent with the constant inner radius interpretation. Finally, we discuss the results in terms of recent theoretical work by Liu and co-workers on the possibility that material may condense out of an Advection-Dominated Accretion Flow to maintain an inner optically thick disk.
NASA Astrophysics Data System (ADS)
Shidatsu, M.; Ueda, Y.; Yamada, S.; Done, C.; Hori, T.; Yamaoka, K.; Kubota, A.; Nagayama, T.; Moritani, Y.
2014-07-01
We report on the results from Suzaku observations of the Galactic black hole X-ray binary H1743-322 in the low/hard state during its outburst in 2012 October. We appropriately take into account the effects of dust scattering to accurately analyze the X-ray spectra. The time-averaged spectra in the 1-200 keV band are dominated by a hard power-law component with a photon index of ≈1.6 and a high-energy cutoff at ≈60 keV, which is well described by Comptonization of the disk emission by the hot corona. We estimate the inner disk radius from the multi-color disk component, and find that it is 1.3-2.3 times larger than the radius in the high/soft state. This suggests that the standard disk did not extend to the innermost stable circular orbit. A reflection component from the disk is detected with R = Ω/2π ≈ 0.6 (Ω is the solid angle). We also successfully estimate the stable disk component independently of the time-averaged spectral modeling by analyzing short-term spectral variability on a ~1 s timescale. A weak low-frequency quasi-periodic oscillation at 0.1-0.2 Hz is detected, whose frequency is found to correlate with the X-ray luminosity and photon index. This result may be explained by the evolution of the disk truncation radius.
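Inner-radius estimates of this kind rest on the blackbody relation between disk luminosity, inner temperature, and inner radius in a multi-color disk (MCD) fit. A hedged sketch of the standard conversion follows; the correction factors ξ and κ and their typical values are drawn from the general MCD literature, not from this paper:

```latex
% Apparent inner radius from the MCD fit:
L_{\mathrm{disk}} = 4\pi r_{\mathrm{in}}^{2}\,\sigma T_{\mathrm{in}}^{4}
\quad\Rightarrow\quad
r_{\mathrm{in}} = \sqrt{\frac{L_{\mathrm{disk}}}{4\pi\sigma T_{\mathrm{in}}^{4}}} .
% Corrected ("true") inner radius, with spectral hardening (color
% correction) factor \kappa \simeq 1.7 and inner-boundary correction
% \xi \simeq 0.41:
R_{\mathrm{in}} \simeq \xi\,\kappa^{2}\, r_{\mathrm{in}} .
```

Comparing R_in obtained this way in the low/hard state with its high/soft-state value is what yields a ratio such as the quoted factor of 1.3-2.3.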
Finite Element Analysis of Flexural Vibrations in Hard Disk Drive Spindle Systems
NASA Astrophysics Data System (ADS)
LIM, SEUNGCHUL
2000-06-01
This paper is concerned with the flexural vibration analysis of the hard disk drive (HDD) spindle system by means of the finite element method. In contrast to previous research, every system component is analytically modelled here, taking into account its structural flexibility and also the centrifugal effect, particularly on the disk. To prove the effectiveness and accuracy of the formulated models, commercial HDD systems with two and three identical disks are selected as examples. Their major natural modes are computed with only a small number of element meshes as the shaft rotational speed is varied, and then compared both with existing numerical results obtained using other methods and with newly acquired experimental ones. This series of studies shows the proposed method to be a very promising tool for the design of HDDs and of various other high-performance computer disk drives, such as floppy disk drives and CD-ROM drives, whose spindle mechanisms are similar to those of HDDs.
Nanoscale wear and kinetic friction between atomically smooth surfaces sliding at high speeds
NASA Astrophysics Data System (ADS)
Rajauria, Sukumar; Canchi, Sripathi V.; Schreck, Erhard; Marchon, Bruno
2015-02-01
The kinetic friction and wear at high sliding speeds is investigated using the head-disk interface of hard disk drives, wherein the head and the disk are less than 10 nm apart and move at sliding speeds of 5-10 m/s relative to each other. While the spacing between the sliding surfaces is of the same order of magnitude as various AFM based fundamental studies on friction, the sliding speed is nearly six orders of magnitude larger, allowing a unique set-up for a systematic study of nanoscale wear at high sliding speeds. In a hard disk drive, the physical contact between the head and the disk leads to friction, wear, and degradation of the head overcoat material (typically diamond like carbon). In this work, strain gauge based friction measurements are performed; the friction coefficient as well as the adhering shear strength at the head-disk interface is extracted; and an experimental set-up for studying friction between high speed sliding surfaces is exemplified.
NASA Technical Reports Server (NTRS)
White, Nicholas E. (Technical Monitor); Ebisawa, Ken; Zycki, Piotr; Kubota, Aya; Mizuno, Tsunefumi; Watarai, Ken-ya
2003-01-01
Ultra-luminous Compact X-ray Sources (ULXs) in nearby spiral galaxies and Galactic superluminal jet sources share the common spectral characteristic that they have unusually high disk temperatures which cannot be explained in the framework of the standard optically thick accretion disk in the Schwarzschild metric. On the other hand, the standard accretion disk around a Kerr black hole might explain the observed high disk temperature, as the inner radius of the Kerr disk gets smaller and the disk temperature can consequently be higher. However, we point out that the observable Kerr disk spectra become significantly harder than Schwarzschild disk spectra only when the disk is highly inclined. This is because the emission from the innermost part of the accretion disk is Doppler-boosted for an edge-on Kerr disk, while hardly seen for a face-on disk. The Galactic superluminal jet sources are known to be highly inclined systems, thus their energy spectra may be explained with the standard Kerr disk with known black hole masses. For ULXs, on the other hand, the standard Kerr disk model seems implausible, since it is highly unlikely that their accretion disks are preferentially inclined, and, if an edge-on Kerr disk model is applied, the black hole mass becomes unreasonably large (greater than or approximately equal to 300 solar masses). Instead, the slim disk (advection dominated optically thick disk) model is likely to explain the observed super-Eddington luminosities, hard energy spectra, and spectral variations of ULXs. We suggest that ULXs are accreting black holes with a few tens of solar masses, which is not unexpected from the standard stellar evolution scenario, and that their X-ray emission is from a slim disk shining at super-Eddington luminosities.
Optical system storage design with diffractive optical elements
NASA Technical Reports Server (NTRS)
Kostuk, Raymond K.; Haggans, Charles W.
1993-01-01
Optical data storage systems are gaining widespread acceptance due to their high areal density and the ability to remove the high capacity hard disk from the system. In magneto-optical read-write systems, a small rotation of the polarization state in the return signal from the MO media is the signal which must be sensed. A typical arrangement used for detecting these signals and correcting for errors in tracking and focusing on the disk is illustrated. The components required to achieve these functions are listed. The assembly and alignment of this complex system has a direct impact on cost, and also affects the size, weight, and corresponding data access rates. As a result, integrating these optical components and improving packaging techniques is an active area of research and development. Most designs of binary optic elements have been concerned with optimizing grating efficiency. However, rigorous coupled wave models for vector field diffraction from grating surfaces can be extended to determine the phase and polarization state of the diffracted field, and the design of polarization components. A typical grating geometry and the phase and polarization angles associated with the incident and diffracted fields are shown. In our current stage of work, we are examining system configurations which cascade several polarization functions on a single substrate. In this design, the beam returning from the MO disk illuminates a cascaded grating element which first couples light into the substrate, then introduces a quarter wave retardation, then a polarization rotation, and finally separates s- and p-polarized fields through a polarization beam splitter. The input coupler and polarization beam splitter are formed in volume gratings, and the two intermediate elements are zero-order elements.
Head-Disk Interface Technology: Challenges and Approaches
NASA Astrophysics Data System (ADS)
Liu, Bo
Magnetic hard disk drive (HDD) technology is believed to be one of the most successful examples of modern mechatronic systems. The mechanical beauty of the magnetic HDD includes its simple but super-high-accuracy head-positioning technology, its high-speed and high-stability spindle motor technology, and its head-disk interface technology, which keeps the millimeter-sized slider flying over the disk surface at nanometer-level slider-disk spacing. This paper addresses the challenges of, and possible approaches to, further reducing the slider-disk spacing whilst retaining the stability and robustness of head-disk systems for future advanced magnetic disk drives.
Accretion disk winds as the jet suppression mechanism in the microquasar GRS 1915+105.
Neilsen, Joseph; Lee, Julia C
2009-03-26
Stellar-mass black holes with relativistic jets, also known as microquasars, mimic the behaviour of quasars and active galactic nuclei. Because timescales around stellar-mass black holes are orders of magnitude smaller than those around more distant supermassive black holes, microquasars are ideal nearby 'laboratories' for studying the evolution of accretion disks and jet formation in black-hole systems. Whereas studies of black holes have revealed a complex array of accretion activity, the mechanisms that trigger and suppress jet formation remain a mystery. Here we report the presence of a broad emission line in the faint, hard states and narrow absorption lines in the bright, soft states of the microquasar GRS 1915+105. ('Hard' and 'soft' denote the character of the emitted X-rays.) Because the hard states exhibit prominent radio jets, we argue that the broad emission line arises when the jet illuminates the inner accretion disk. The jet is weak or absent during the soft states, and we show that the absorption lines originate when the powerful radiation field around the black hole drives a hot wind off the accretion disk. Our analysis shows that this wind carries enough mass away from the disk to halt the flow of matter into the radio jet.
Status of emerging standards for removable computer storage media and related contributions of NIST
NASA Technical Reports Server (NTRS)
Podio, Fernando L.
1992-01-01
Standards for removable computer storage media are needed so that users may reliably interchange data both within and among various computer installations. Furthermore, media interchange standards support competition in industry and prevent sole-source lock-in. NIST participates in magnetic tape and optical disk standards development through Technical Committees X3B5, Digital Magnetic Tapes, X3B11, Optical Digital Data Disk, and the Joint Technical Commission on Data Permanence. NIST also participates in other relevant national and international standards committees for removable computer storage media. Industry standards for digital magnetic tapes require the use of Standard Reference Materials (SRM's) developed and maintained by NIST. In addition, NIST has been studying care and handling procedures required for digital magnetic tapes. NIST has developed a methodology for determining the life expectancy of optical disks. NIST is developing care and handling procedures for optical digital data disks and is involved in a program to investigate error reporting capabilities of optical disk drives. This presentation reflects the status of emerging magnetic tape and optical disk standards, as well as NIST's contributions in support of these standards.
Kodak Optical Disk and Microfilm Technologies Carve Niches in Specific Applications.
ERIC Educational Resources Information Center
Gallenberger, John; Batterton, John
1989-01-01
Describes the Eastman Kodak Company's microfilm and optical disk technologies and their applications. Topics discussed include WORM technology; retrieval needs and cost effective archival storage needs; engineering applications; jukeboxes; optical storage options; systems for use with mainframes and microcomputers; and possible future…
Jefferson Lab Mass Storage and File Replication Services
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ian Bird; Ying Chen; Bryan Hess
Jefferson Lab has implemented a scalable, distributed, high-performance mass storage system - JASMine. The system is entirely implemented in Java, provides access to robotic tape storage, and includes disk cache and stage manager components. The disk manager subsystem may be used independently to manage stand-alone disk pools. The system includes a scheduler to provide policy-based access to the storage systems. Security is provided by pluggable authentication modules and is implemented at the network socket level. The tape and disk cache systems have well-defined interfaces in order to provide integration with grid-based services. The system is in production and being used to archive 1 TB per day from the experiments, and currently moves over 2 TB per day in total. This paper will describe the architecture of JASMine, discuss the rationale for building the system, and present a transparent 3rd-party file replication service to move data to collaborating institutes using JASMine, XML, and servlet technology interfacing to grid-based file transfer mechanisms.
Electron trapping data storage system and applications
NASA Technical Reports Server (NTRS)
Brower, Daniel; Earman, Allen; Chaffin, M. H.
1993-01-01
The advent of digital information storage and retrieval has led to explosive growth in data transmission techniques, data compression alternatives, and the need for high-capacity random access data storage. Advances in data storage technologies are limiting the utilization of digitally based systems. New storage technologies will be required which can provide higher data capacities and faster transfer rates in a more compact format. Magnetic disk/tape and current optical data storage technologies do not meet these higher performance requirements for all digital data applications. A new technology developed at the Optex Corporation outperforms all other existing data storage technologies. The Electron Trapping Optical Memory (ETOM) media is capable of storing as much as 14 gigabytes of uncompressed data on a single, double-sided 5.25 inch disk with a data transfer rate of up to 12 megabits per second. The disk is removable, compact, lightweight, environmentally stable, and robust. Since the Write/Read/Erase (W/R/E) processes are carried out 100 percent photonically, no heating of the recording media is required. Therefore, the storage media suffers no deleterious effects from repeated Write/Read/Erase cycling.
Digital image archiving: challenges and choices.
Dumery, Barbara
2002-01-01
In the last five years, imaging exam volume has grown rapidly. In addition to increased image acquisition, there is more patient information per study. RIS-PACS integration and information-rich DICOM headers now provide us with more patient information relative to each study. The volume of archived digital images is increasing and will continue to rise at a steeper incline than film-based storage of the past. Many filmless facilities have been caught off guard by this increase, which has been stimulated by many factors. The most significant factor is investment in new digital and DICOM-compliant modalities. A huge volume driver is the increase in images per study from multi-slice technology. Storage requirements also are affected by disaster recovery initiatives and state retention mandates. This burgeoning rate of imaging data volume presents many challenges: cost of ownership, data accessibility, storage media obsolescence, database considerations, physical limitations, reliability and redundancy. There are two basic approaches to archiving--single tier and multi-tier. Each has benefits. With a single-tier approach, all the data is stored on a single media that can be accessed very quickly. A redundant copy of the data is then stored onto another less expensive media. This is usually a removable media. In this approach, the on-line storage is increased incrementally as volume grows. In a multi-tier approach, storage levels are set up based on access speed and cost. In other words, all images are stored at the deepest archiving level, which is also the least expensive. Images are stored on or moved back to the intermediate and on-line levels if they will need to be accessed more quickly. It can be difficult to decide what the best approach is for your organization. 
The options include RAID (redundant arrays of independent disks), direct-attached RAID storage (DAS), network storage using RAIDs (NAS and SAN), and removable media such as various types of tape, compact disks (CDs and DVDs), and magneto-optical disks (MODs). As you evaluate the various storage options, it is important to consider both performance and cost. For most imaging enterprises, a single-tier archiving approach is the best solution. With the cost of hard drives declining, NAS is a very feasible solution today. It is highly reliable, offers immediate access to all exams, and easily scales as imaging volume grows. Best of all, media obsolescence need not be a concern. For back-up storage, removable media can be implemented, with a smaller investment needed as it will only be used for a redundant copy of the data; there is no need to keep it online and available. If further system redundancy is desired, multiple servers should be considered. The multi-tier approach still has its merits for smaller enterprises, but with a detailed long-term cost-of-ownership analysis, NAS will probably still come out on top as the solution of choice for many imaging facilities.
An object-oriented approach to data display and storage: 3 years experience, 25,000 cases.
Sainsbury, D A
1993-11-01
Object-oriented programming techniques were used to develop computer based data display and storage systems. These have been operating in the 8 anaesthetising areas of the Adelaide Children's Hospital for 3 years. The analogue and serial outputs from an array of patient monitors are connected to IBM compatible PC-XT computers. The information is displayed on a colour screen as wave-form and trend graphs and digital format in 'real time'. The trend data is printed simultaneously on a dot matrix printer. This data is also stored for 24 hours on 'hard' disk. The major benefit has been the provision of a single visual focus for all monitored variables. The automatic logging of data has been invaluable in the analysis of critical incidents. The systems were made possible by recent, rapid improvements in computer hardware and software. This paper traces the development of the program and demonstrates the advantages of object-oriented programming techniques.
A Simulation Model Of A Picture Archival And Communication System
NASA Astrophysics Data System (ADS)
D'Silva, Vijay; Perros, Harry; Stockbridge, Chris
1988-06-01
A PACS architecture was simulated to quantify its performance. The model consisted of reading stations, acquisition nodes, communication links, a database management system, and a storage system consisting of magnetic and optical disks. Two levels of storage were simulated: a high-speed magnetic disk system for short-term storage, and optical disk jukeboxes for long-term storage. The communications link was a single bus via which image data were requested and delivered. Real input data for the simulation model were obtained from surveys of radiology procedures (Bowman Gray School of Medicine), from which the following inputs were calculated: the size of short-term storage necessary, the amount of long-term storage required, the frequency of access of each store, and the distribution of the number of films requested per diagnosis. The performance measures obtained were the mean retrieval time for an image, mean queue lengths, and the utilization of each device. Parametric analysis was done for the bus speed, the packet size for the communications link, the record size on the magnetic disk, the compression ratio, the influx of new images, DBMS time, and diagnosis think times. Plots give the optimum values of input speed and device performance that are sufficient to achieve subsecond image retrieval times.
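The retrieval-time and utilization measures in such a model are classic queueing quantities. The sketch below is a hypothetical single-server (M/M/1) approximation using Lindley's recursion, not the paper's multi-device PACS simulator; the rates, job counts, and function name are illustrative:

```python
import random

def mm1_mean_response(arrival_rate, service_rate, n_jobs=200_000, seed=7):
    """Estimate the mean response (retrieval) time of an M/M/1 queue.

    Uses Lindley's recursion for the waiting time of successive jobs:
        wait_{k+1} = max(0, wait_k + service_k - interarrival_k)
    Response time of a job = its queueing wait + its own service time.
    """
    rng = random.Random(seed)
    wait = 0.0
    total_response = 0.0
    for _ in range(n_jobs):
        service = rng.expovariate(service_rate)
        total_response += wait + service
        gap = rng.expovariate(arrival_rate)        # time until next arrival
        wait = max(0.0, wait + service - gap)      # Lindley step
    return total_response / n_jobs
```

For an M/M/1 queue the analytic mean response time is 1/(μ - λ), so the estimate can be checked directly; raising the arrival rate toward the service rate reproduces the steep retrieval-time growth that motivates the parametric analysis above.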
NASA Technical Reports Server (NTRS)
Kobler, Benjamin (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)
1992-01-01
Papers and viewgraphs from the conference are presented. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.
Attaching IBM-compatible 3380 disks to Cray X-MP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.; Midlock, J.L.
1989-01-01
A method of attaching IBM-compatible 3380 disks directly to a Cray X-MP via the XIOP with a BMC is described. The IBM 3380 disks appear to the UNICOS operating system as DD-29 disks with UNICOS file systems. IBM 3380 disks provide cheap, reliable, large-capacity disk storage. Combined with a small number of high-speed Cray disks, the IBM disks provide the bulk of the storage for small files and infrequently used files. Cray Research designed the BMC and its supporting software in the XIOP to allow IBM tapes and other devices to be attached to the X-MP. No hardware changes were necessary, and we added less than 2000 lines of code to the XIOP to accomplish this project. This system has been in operation for over eight months. Future enhancements, such as the use of a cache controller and attachment to a Y-MP, are also described.
NASA Astrophysics Data System (ADS)
Tobochnik, Jan; Chapin, Phillip M.
1988-05-01
Monte Carlo simulations were performed for hard disks on the surface of an ordinary sphere and hard spheres on the surface of a four-dimensional hypersphere. Starting from the low density fluid the density was increased to obtain metastable amorphous states at densities higher than previously achieved. Above the freezing density the inverse pressure decreases linearly with density, reaching zero at packing fractions equal to 68% for hard spheres and 84% for hard disks. Using these new estimates for random closest packing and coefficients from the virial series we obtain an equation of state which fits all the data up to random closest packing. Usually, the radial distribution function showed the typical split second peak characteristic of amorphous solids and glasses. High density systems which lacked this split second peak and showed other sharp peaks were interpreted as signaling the onset of crystal nucleation.
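The elementary move in such a simulation is easy to sketch: propose a small random displacement of one disk and accept it only if no overlap results, so every accepted configuration carries equal weight. Below is a minimal, hypothetical hard-disk Metropolis Monte Carlo in a flat periodic box; the curved spherical surfaces the paper uses to frustrate crystallization are not reproduced here, and all names and parameter values are illustrative:

```python
import math
import random

def hard_disk_mc(n_side=6, packing=0.5, steps=2000, seed=1):
    """Metropolis Monte Carlo for hard disks in a periodic unit box.

    Disks interact only by hard-core exclusion: a trial displacement is
    accepted iff it creates no overlap. Returns the final positions, the
    disk radius, and the acceptance ratio.
    """
    rng = random.Random(seed)
    n = n_side * n_side
    box = 1.0
    # Disk radius set by the target packing fraction: packing = n*pi*r^2/box^2.
    r = math.sqrt(packing * box * box / (n * math.pi))
    contact2 = (2.0 * r) ** 2                     # squared contact distance
    # Start from a square lattice (overlap-free whenever packing < pi/4).
    a = box / n_side
    pos = [((i + 0.5) * a, (j + 0.5) * a)
           for i in range(n_side) for j in range(n_side)]

    def dist2(p, q):
        dx = abs(p[0] - q[0]); dx = min(dx, box - dx)   # minimum image
        dy = abs(p[1] - q[1]); dy = min(dy, box - dy)
        return dx * dx + dy * dy

    delta = 0.3 * a                               # maximum trial displacement
    accepted = 0
    for _ in range(steps):
        k = rng.randrange(n)
        x, y = pos[k]
        trial = ((x + rng.uniform(-delta, delta)) % box,
                 (y + rng.uniform(-delta, delta)) % box)
        if all(dist2(trial, pos[j]) >= contact2 for j in range(n) if j != k):
            pos[k] = trial
            accepted += 1
    return pos, r, accepted / steps
```

Compressing toward the metastable amorphous branch studied in the paper would additionally require gradually growing the disk radius (or shrinking the box) between equilibration sweeps.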
General consumer communication tools for improved image management and communication in medicine.
Rosset, Chantal; Rosset, Antoine; Ratib, Osman
2005-12-01
We elected to explore new technologies emerging on the general consumer market that can improve and facilitate image and data communication in the medical and clinical environment. These new technologies, developed for communication and storage of data, can improve user convenience and facilitate the communication and transport of images and related data beyond the usual limits and restrictions of a traditional picture archiving and communication system (PACS) network. We specifically tested and implemented three new technologies provided on Apple computer platforms. (1) We adopted the iPod, an MP3 portable player with hard disk storage, to easily and quickly move large numbers of DICOM images. (2) We adopted iChat, videoconferencing and instant-messaging software, to transmit DICOM images in real time to a distant computer for teleradiology conferencing. (3) Finally, we developed a direct secure interface to the iDisk service, a file-sharing service based on the WebDAV technology, to send and share DICOM files between distant computers. These three technologies were integrated into a new open-source image navigation and display software called OsiriX, allowing for manipulation and communication of multimodality and multidimensional DICOM image data sets. This software is freely available as an open-source project at http://homepage.mac.com/rossetantoine/OsiriX. Our experience showed that the implementation of these technologies allowed us to significantly enhance the existing PACS with valuable new features without any additional investment or the need for complex extensions of our infrastructure. The added features, such as teleradiology, secure and convenient image and data communication, and the use of external data storage services, open the gate to a much broader extension of our imaging infrastructure to the outside world.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wicht, S., E-mail: s.wicht@ifw-dresden.de; TU Dresden, Institut für Werkstoffwissenschaft, Helmholtzstraße 10, D-01069 Dresden; Neu, V.
2015-01-07
The steadily increasing amount of digital information necessitates the availability of reliable, high-capacity magnetic data storage. Here, future hard disk drives with areal storage densities extended beyond 1.0 Tb/in² are envisioned by using high-anisotropy, granular, chemically L1₀-ordered FePt (002) perpendicular media within a heat-assisted magnetic recording scheme. Perpendicular texturing of the [001] easy axes of the individual grains can be achieved by using MgO seed layers. It is therefore investigated if and how Ar⁺ ion irradiation of the MgO seed layer prior to the deposition of the magnetic material influences the MgO surface properties and thereby the FePt [001] texture. Structural investigations reveal a flattening of the seed layer surface accompanied by a change in the morphology of the FePt grains. Moreover, the fraction of small second-layer particles and the degree of coalescence of the primarily deposited FePt grains strongly increase. As for the magnetic performance, this results in a reduced coercivity along the magnetic easy axis (out of plane) and in enhanced hard-axis (in-plane) remanence values. The irradiation-induced changes in the magnetic properties of the granular FePt-C films are traced back to the accordingly modified atomic structure of the FePt-MgO interface region.
A Layered Solution for Supercomputing Storage
Grider, Gary
2018-06-13
To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage, based on inexpensive, failure-prone disk drives, between disk drives and tape archives.
Method and apparatus for bistable optical information storage for erasable optical disks
Land, Cecil E.; McKinney, Ira D.
1990-01-01
A method and an optical device for bistable storage of optical information, together with reading and erasure of the optical information, using a photoactivated shift in a field dependent phase transition between a metastable or a bias-stabilized ferroelectric (FE) phase and a stable antiferroelectric (AFE) phase in a lead lanthanum zirconate titanate (PLZT). An optical disk contains the PLZT. Writing and erasing of optical information can be accomplished by a light beam normal to the disk. Reading of optical information can be accomplished by a light beam at an incidence angle of 15 to 60 degrees to the normal of the disk.
Method and apparatus for bistable optical information storage for erasable optical disks
Land, C.E.; McKinney, I.D.
1988-05-31
A method and an optical device for bistable storage of optical information, together with reading and erasure of the optical information, using a photoactivated shift in a field dependent phase transition between a metastable or a bias-stabilized ferroelectric (FE) phase and a stable antiferroelectric (AFE) phase in a lead lanthanum zirconate titanate (PLZT). An optical disk contains the PLZT. Writing and erasing of optical information can be accomplished by a light beam normal to the disk. Reading of optical information can be accomplished by a light beam at an incidence angle of 15 to 60 degrees to the normal of the disk.
X-ray spectral analysis of the steady states of GRS 1915+105
NASA Astrophysics Data System (ADS)
Peris, Charith; Remillard, Ronald A.; Steiner, James F.; Vrtilek, Saeqa Dil; Varniere, Peggy; Rodriguez, Jerome; Pooley, Guy G.
2016-04-01
Of the black hole binaries (BHBs) discovered thus far, GRS 1915+105 stands out as an exceptional source, primarily due to its wild X-ray variability, the diversity of which has not been replicated in any other stellar-mass black hole. Although extreme variability is commonplace in its light curve, about half of the observations of GRS 1915+105 show fairly steady X-ray intensity. We report on the X-ray spectral behavior within these steady observations. Our work is based on a vast RXTE/PCA data set obtained on GRS 1915+105 during the course of its entire mission and 10 years of radio data from the Ryle Telescope, which overlap the X-ray data. We find that the steady observations within the X-ray data set naturally separate into two regions in a color-color diagram, which we refer to as steady-soft and steady-hard. GRS 1915+105 displays significant curvature in the Comptonization component within the PCA band pass, suggesting significant heating from a hot disk present in all states. A new Comptonization model, 'simplcut', was developed in order to model this curvature to best effect. A majority of the steady-soft observations display a roughly constant inner disk radius, remarkably reminiscent of canonical soft-state black hole binaries. In contrast, the steady-hard observations display a growing disk truncation that is correlated with the mass accretion rate through the disk, which suggests a magnetically truncated disk. A comparison of X-ray model parameters to the canonical state definitions shows that almost all steady-soft observations match the criteria of either the thermal or the steep power law state, while the thermal state observations dominate the constant radius branch. A large portion (80%) of the steady-hard observations matches the hard state criteria when the disk fraction constraint is neglected. These results combine to suggest that within the complexity of this source is a simpler underlying basis of states, which map to those observed in canonical BHBs.
Implementation of an Enterprise Information Portal (EIP) in the Loyola University Health System
Price, Ronald N.; Hernandez, Kim
2001-01-01
Loyola University Chicago Stritch School of Medicine and Loyola University Medical Center have long histories in the development of applications to support the institutions' missions of education, research, and clinical care. In late 1998, the institutions' application development group undertook an ambitious program to re-architect more than 10 years of legacy application development (30+ core applications) into a unified World Wide Web (WWW) environment. The primary project objectives were to construct an environment that would support the rapid development of n-tier, web-based applications while providing standard methods for user authentication/validation, security/access control, and definition of a user's organizational context. The project's efforts resulted in Loyola's Enterprise Information Portal (EIP), which meets the aforementioned objectives. This environment: 1) allows access to other vertical Intranet portals (e.g., electronic medical record, patient satisfaction information, and faculty effort); 2) supports end-user desktop customization; and 3) provides a means for a standardized application “look and feel.” The portal was constructed utilizing readily available hardware and software. Server hardware consists of multiprocessor (Intel Pentium 500 MHz) Compaq 6500 servers with one gigabyte of random access memory and 75 gigabytes of hard disk storage. Microsoft SQL Server was selected to house the portal's internal security data structures. Netscape Enterprise Server was selected for the web server component of the environment, and Allaire's ColdFusion was chosen for the access and application tiers. Total cost for the portal environment was less than $40,000. User data storage is accomplished through two Microsoft SQL Servers and an existing Sun Microsystems enterprise server with eight processors and 750 gigabytes of disk storage running the Sybase relational database manager. Total storage capacity for all systems exceeds one terabyte.
In the past 12 months, the EIP has supported development of more than 88 applications and is utilized by more than 2,200 users.
Laser beam modeling in optical storage systems
NASA Technical Reports Server (NTRS)
Treptau, J. P.; Milster, T. D.; Flagello, D. G.
1991-01-01
A computer model has been developed that simulates light propagating through an optical data storage system. A model of a laser beam that originates at a laser diode, propagates through an optical system, interacts with an optical disk, reflects back from the optical disk into the system, and propagates to data and servo detectors is discussed.
NASA Astrophysics Data System (ADS)
Bainbridge, Ross C.
1984-09-01
The Institute for Computer Sciences and Technology at the National Bureau of Standards is pleased to cooperate with the International Society for Optical Engineering and to join with the other distinguished organizations in cosponsoring this conference on applications of optical digital data disk storage systems.
Lower Bound on the Mean Square Displacement of Particles in the Hard Disk Model
NASA Astrophysics Data System (ADS)
Richthammer, Thomas
2016-08-01
The hard disk model is a 2D Gibbsian process of particles interacting via pure hard-core repulsion. At high particle density the model is believed to show orientational order; however, it is known not to exhibit positional order. Here we investigate to what extent particle positions may fluctuate. We consider a finite-volume version of the model in a box of dimensions 2n × 2n with arbitrary boundary configuration, and we show that the mean square displacement of particles near the center of the box is bounded from below by c log n. The result generalizes to a large class of models with fairly arbitrary interaction.
Incorporating Oracle on-line space management with long-term archival technology
NASA Technical Reports Server (NTRS)
Moran, Steven M.; Zak, Victor J.
1996-01-01
The storage requirements of today's organizations are exploding. As computers continue to escalate in processing power, applications grow in complexity and data files grow in size and in number. As a result, organizations are forced to procure more and more megabytes of storage space. This paper focuses on how to expand the storage capacity of a Very Large Database (VLDB) cost-effectively within an Oracle7 data warehouse system by integrating long-term archival storage sub-systems with traditional magnetic media. The Oracle architecture described in this paper was based on an actual proof of concept for a customer looking to store archived data on optical disks yet still have access to this data without user intervention. The customer had a requirement to maintain 10 years' worth of data on-line. Data less than a year old still had the potential to be updated and thus will reside on conventional magnetic disks. Data older than a year will be considered archived and will be placed on optical disks. The ability to archive data to optical disk and still have access to that data provides the system a means to retain large amounts of data that is readily accessible yet significantly reduces the cost of total system storage. Therefore, the cost benefits of archival storage devices can be incorporated into the Oracle storage medium and I/O subsystem without losing any of the functionality of transaction processing, while at the same time providing an organization access to all of its data.
Renormalization group study of the melting of a two-dimensional system of collapsing hard disks
NASA Astrophysics Data System (ADS)
Ryzhov, V. N.; Tareyeva, E. E.; Fomin, Yu. D.; Tsiok, E. N.; Chumakov, E. S.
2017-06-01
We consider the melting of a two-dimensional system of collapsing hard disks (a system with a hard-disk potential to which a repulsive step is added) for different values of the repulsive-step width. We calculate the system phase diagram by the density-functional method of crystallization theory, using equations of the Berezinskii-Kosterlitz-Thouless-Halperin-Nelson-Young theory to determine the lines of stability with respect to the dissociation of dislocation pairs, which corresponds to the continuous transition from the solid to the hexatic phase. We show that the crystal phase can melt via a continuous transition at low densities (the transition to the hexatic phase), with a subsequent transition from the hexatic phase to the isotropic liquid, or via a first-order transition. Using the solution of renormalization group equations that takes into account the presence of singular defects (dislocations) in the system, we consider the influence of the renormalization of the elastic moduli on the form of the phase diagram.
Telemetry data storage systems technology for the Space Station Freedom era
NASA Technical Reports Server (NTRS)
Dalton, John T.
1989-01-01
This paper examines the requirements and functions of telemetry-data recording and storage systems, and the data-storage-system technology projected for the Space Station, with particular attention given to the Space Optical Disk Recorder, an on-board storage subsystem based on 160-gigabit erasable optical disk units, each capable of operating at 300 Mbits per second. Consideration is also given to storage systems for ground transport recording, which include systems for data capture, buffering, processing, and delivery on the ground. These can be categorized as first-in first-out storage, fast random-access storage, and slow access with staging. Based on projected mission manifests and data rates, worst-case requirements were developed for these three storage architecture functions. The results of the analysis are presented.
Implementing Journaling in a Linux Shared Disk File System
NASA Technical Reports Server (NTRS)
Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew;
2000-01-01
In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher-performance computer system implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.
Short-term storage allocation in a filmless hospital
NASA Astrophysics Data System (ADS)
Strickland, Nicola H.; Deshaies, Marc J.; Reynolds, R. Anthony; Turner, Jonathan E.; Allison, David J.
1997-05-01
Optimizing limited short term storage (STS) resources requires gradual, systematic changes, monitored and modified within an operational PACS environment. Optimization of the centralized storage requires a balance of exam numbers and types in STS to minimize lengthy retrievals from long term archive. Changes to STS parameters and work procedures were made while monitoring the effects on resource allocation by analyzing disk space temporally. Proportions of disk space allocated to each patient category on STS were measured to approach the desired proportions in a controlled manner. Key factors for STS management were: (1) sophisticated exam prefetching algorithms: HIS/RIS-triggered, body part-related and historically-selected, and (2) a 'storage onion' design allocating various exam categories to layers with differential deletion protection. Hospitals planning for STS space should consider the needs of radiology, wards, outpatient clinics and clinicoradiological conferences for new and historical exams; desired on-line time; and potential increase in image throughput and changing resources, such as an increase in short term storage disk space.
Integrating new Storage Technologies into EOS
NASA Astrophysics Data System (ADS)
Peters, Andreas J.; van der Ster, Dan C.; Rocha, Joaquim; Lensing, Paul
2015-12-01
The EOS[1] storage software was designed to cover CERN disk-only storage use cases in the medium term, trading scalability against latency. To cover and prepare for long-term requirements, the CERN IT data and storage services group (DSS) is actively conducting R&D and making open source contributions to experiment with a next-generation storage software based on CEPH[3] and ethernet-enabled disk drives. CEPH provides a scale-out object storage system, RADOS, and additionally various optional high-level services such as an S3 gateway, RADOS block devices and a POSIX-compliant file system, CephFS. The acquisition of CEPH by Redhat underlines the promising role of CEPH as the open source storage platform of the future. CERN IT is running a CEPH service in the context of OpenStack on a moderate scale of 1 PB of replicated storage. Building a 100+PB storage system based on CEPH will require software and hardware tuning. It is of capital importance to demonstrate the feasibility and iron out bottlenecks and blocking issues beforehand. The main idea behind this R&D is to leverage and contribute to existing building blocks in the CEPH storage stack and implement a few CERN-specific requirements in a thin, customisable storage layer. A second research topic is the integration of ethernet-enabled disks. This paper introduces various ongoing open source developments, their status and applicability.
No Disk Winds in Failed Black Hole Outbursts? New Observations of H1743-322
NASA Astrophysics Data System (ADS)
Neilsen, Joseph; Coriat, Mickael; Motta, Sara; Fender, Rob P.; Ponti, Gabriele; Corbel, Stephane
2016-04-01
The rich and complex physics of stellar-mass black holes in outburst is often referred to as the "disk-jet connection," a term that encapsulates the evolution of accretion disks over several orders of magnitude in Eddington ratio; through Compton scattering, reflection, and thermal emission; as they produce steady compact jets, relativistic plasma ejections, and (from high spectral resolution revelations of the last 15 years) massive, ionized disk winds. It is well established that steady jets are associated with radiatively inefficient X-ray states, and that winds tend to appear during states with more luminous disks, but the underlying physical processes that govern these connections (and their changes during state transitions) are not fully understood. I will present a unique perspective on the disk-wind-jet connection based on new Chandra HETGS, NuSTAR, and JVLA observations of the black hole H1743-322. Rather than following the usual outburst track, the 2015 outburst of H1743 fizzled: the disk never appeared in X-rays, and the source remained spectrally hard for the entire ~100 days. Remarkably, we find no evidence for any accretion disk wind in our data, even though H1743-322 has produced winds at comparable hard X-ray luminosities. I will discuss the implications of this "failed outburst" for our picture of winds from black holes and the astrophysics that governs them.
ERIC Educational Resources Information Center
Gale, John C.; And Others
1985-01-01
This four-article section focuses on information storage capacity of the optical disk covering the information workstation (uses microcomputer, optical disk, compact disc to provide reference information, information content, work product support); use of laser videodisc technology for dissemination of agricultural information; encoding databases…
Computer hardware for radiologists: Part 2
Indrajit, IK; Alam, A
2010-01-01
Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing a “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future. PMID:21423895
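The capacity relationship described in this abstract (sides × tracks × sectors × bytes per sector) is simple multiplication; a quick sketch of the arithmetic, using hypothetical geometry values that do not correspond to any real drive:

```python
# Hard drive capacity = sides x tracks/side x sectors/track x bytes/sector.
# All geometry numbers below are illustrative only.
def drive_capacity_bytes(sides, tracks_per_side, sectors_per_track, bytes_per_sector):
    return sides * tracks_per_side * sectors_per_track * bytes_per_sector

capacity = drive_capacity_bytes(sides=4, tracks_per_side=16383,
                                sectors_per_track=63, bytes_per_sector=512)
print(capacity / 1e9)  # capacity in decimal gigabytes, ~2.11 GB
```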
Defect reduction of patterned media templates and disks
NASA Astrophysics Data System (ADS)
Luo, Kang; Ha, Steven; Fretwell, John; Ramos, Rick; Ye, Zhengmao; Schmid, Gerard; LaBrake, Dwayne; Resnick, Douglas J.; Sreenivasan, S. V.
2010-05-01
Imprint lithography has been shown to be an effective technique for the replication of nano-scale features. Acceptance of imprint lithography for manufacturing will require a demonstration of defect levels commensurate with cost-effective device production. This work summarizes the results of defect inspections of hard disks patterned using Jet and Flash Imprint Lithography (J-FIL™). Inspections were performed with optically based automated inspection tools. For the hard drive market, it is important to understand the defectivity of both the template and the imprinted disk. This work presents a methodology for automated pattern inspection and defect classification for imprint-patterned media. Candela CS20 and 6120 tools from KLA-Tencor map the optical properties of the disk surface, producing high-resolution grayscale images of surface reflectivity and scattered light. Defects that have been identified in this manner are further characterized according to their morphology. The imprint process was tested after optimizing both the disk-cleaning and adhesion-layer processes that precede imprinting. An extended imprint run was performed, and both the defect types and trends are reported.
NASA Technical Reports Server (NTRS)
Kobler, Ben (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)
1992-01-01
This report contains copies of nearly all of the technical papers and viewgraphs presented at the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Application. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include the following: magnetic disk and tape technologies; optical disk and tape; software storage and file management systems; and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.
An Effective Cache Algorithm for Heterogeneous Storage Systems
Li, Yong; Feng, Dan
2013-01-01
Modern storage environments are commonly composed of heterogeneous storage devices. However, traditional cache algorithms exhibit performance degradation in heterogeneous storage systems because they were not designed to work with diverse performance characteristics. In this paper, we present a new cache algorithm called HCM for heterogeneous storage systems. The HCM algorithm partitions the cache among the disks and adopts an effective scheme to balance the work across the disks. Furthermore, it applies benefit-cost analysis to choose the best allocation of cache blocks to improve performance. Conducting simulations with a variety of traces and a wide range of cache sizes, our experiments show that HCM significantly outperforms the existing state-of-the-art storage-aware cache algorithms. PMID:24453890
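The abstract gives no pseudocode for HCM, but the general idea of benefit-cost cache partitioning can be sketched generically: disks with higher miss penalties and hotter workloads receive a larger cache share. This is an illustrative sketch of that idea only, not the HCM algorithm itself; the function and its parameters are hypothetical.

```python
# Generic benefit-cost cache partitioning across heterogeneous disks:
# weight each disk by (miss penalty x expected miss rate) and divide the
# cache proportionally. NOT the paper's HCM algorithm, just the concept.
def partition_cache(total_blocks, disks):
    # disks: list of (name, miss_penalty_ms, expected_miss_rate)
    weights = {name: penalty * rate for name, penalty, rate in disks}
    total = sum(weights.values())
    return {name: int(total_blocks * w / total) for name, w in weights.items()}

# A slow HDD with a hot workload gets far more cache than a fast SSD:
shares = partition_cache(1000, [("ssd", 0.1, 0.2), ("hdd", 8.0, 0.3)])
```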
NASA Astrophysics Data System (ADS)
Wong, G.
The unparalleled cost and form factor advantages of NAND flash memory have driven 35 mm photographic film, floppy disks and one-inch hard drives to extinction. Due to its compelling price/performance characteristics, NAND flash memory is now expanding its reach into the once-exclusive domain of hard disk drives and DRAM in the form of Solid State Drives (SSDs). Driven by the proliferation of thin and light mobile devices and the need for near-instantaneous accessing and sharing of content through the cloud, SSDs are expected to become a permanent fixture in the computing infrastructure.
On the Dynamics of Rocking Motion of the Hard-Disk Drive Spindle Motor System
NASA Astrophysics Data System (ADS)
Wang, Joseph
Excessive rocking motion of the spindle motor system can cause track misregistration, resulting in poor throughput or even drive failure. The chance of excessive disk stack rocking increases as the torsional stiffness of the spindle motor bearing system decreases, a consequence of the market demand for low-profile hard drives. As track density increases and vibration specifications become increasingly stringent, rocking motion of a spindle motor system deserves even more attention and has become a primary challenge for the spindle motor system designer. A lack of understanding of the rocking phenomenon, combined with a misleading paradox, has made avoiding rocking motion difficult in the hard-disk drive industry. This paper aims to provide a fundamental understanding of the rocking phenomenon of a rotating spindle motor system, to clarify the paradox in the disk-drive industry, and to provide a design guide toward an optimized spindle system. This paper, theoretically and experimentally, covers several areas of industrial interest, including the prediction of rocking natural frequencies and mode shapes of a rotating spindle, free vibration, and frequency response under common forcing functions such as rotating and fixed-plane forcing functions. The theory presented here agrees well with experimental observation.
Reig, Candid; Cubells-Beltran, María-Dolores; Muñoz, Diego Ramírez
2009-01-01
The 2007 Nobel Prize in Physics can be understood as global recognition of the rapid development of Giant Magnetoresistance (GMR), from both the physics and engineering points of view. Beyond the utilization of GMR structures as read heads for massive-storage magnetic hard disks, important applications as solid state magnetic sensors have emerged. Low cost, compatibility with standard CMOS technologies and high sensitivity are common advantages of these sensors, and they have been successfully applied in many different environments. In this work, we collect the Spanish contributions to the progress of research on GMR-based sensors covering, among other subjects, the applications, the sensor design, the modelling and the electronic interfaces, focusing on electrical current sensing applications. PMID:22408486
Kim, Taeho Roy; Phatak, Charudatta; Petford-Long, Amanda K.; ...
2017-10-23
In order to increase the storage density of hard disk drives, a detailed understanding of the magnetic structure of the granular magnetic layer is essential. Here, we demonstrate an experimental procedure for imaging recorded bits on heat-assisted magnetic recording (HAMR) media in cross section using Lorentz transmission electron microscopy (TEM). With magnetic force microscopy and focused ion beam (FIB) milling, we successfully targeted a single track to prepare cross-sectional TEM specimens. We then characterized the magnetic structure of the bits, with their precise location and orientation, using the Fresnel mode of Lorentz TEM. This method can promote understanding of the correlation between bits and their material structure in HAMR media, to better design the magnetic layer.
Accounting Systems and the Electronic Office.
ERIC Educational Resources Information Center
Gafney, Leo
1986-01-01
Discusses a systems approach to accounting instruction and examines it from the viewpoint of four components: people (titles and responsibilities, importance of interaction), forms (nonpaper records such as microfiche, floppy disks, hard disks), procedures (for example, electronic funds transfer), and technology (for example, electronic…
Free-energy landscape for cage breaking of three hard disks.
Hunter, Gary L; Weeks, Eric R
2012-03-01
We investigate cage breaking in dense hard-disk systems using a model of three Brownian disks confined within a circular corral. This system has a six-dimensional configuration space, but can equivalently be thought of as exploring a symmetric one-dimensional free-energy landscape containing two energy minima separated by an energy barrier. The exact free-energy landscape can be calculated as a function of system size by a direct enumeration of states. Results of simulations show that the average time between cage-breaking events follows an Arrhenius scaling when the energy barrier is large. We also discuss some of the consequences of using a one-dimensional representation to understand dynamics through a multidimensional space, such as diffusion acquiring spatial dependence and discontinuities in spatial derivatives of the free energy.
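The Arrhenius scaling mentioned above says the mean time between cage-breaking events grows exponentially with the barrier height (measured in units of kT); a minimal numerical illustration with arbitrary units for the attempt time and barrier:

```python
import math

# Arrhenius scaling: mean escape time ~ tau0 * exp(barrier / kT).
# tau0 and the barrier values here are arbitrary illustrative units.
def mean_escape_time(tau0, barrier_over_kT):
    return tau0 * math.exp(barrier_over_kT)

# Raising the barrier from 5 kT to 10 kT multiplies the mean escape
# time by exp(5) — roughly a factor of 148:
t1 = mean_escape_time(1.0, 5.0)
t2 = mean_escape_time(1.0, 10.0)
```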
Attention Novices: Friendly Intro to Shiny Disks.
ERIC Educational Resources Information Center
Bardes, D'Ellen
1986-01-01
Provides an overview of how optical storage technologies--videodisk, write-once disks, and CD-ROM and CD-I disks--are built into and controlled via DEC, Apple, Atari, Amiga, and IBM PC compatible microcomputers. Several available products are noted and a list of producers is included. (EM)
Tutorial: Performance and reliability in redundant disk arrays
NASA Technical Reports Server (NTRS)
Gibson, Garth A.
1993-01-01
A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity, is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high-performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike.
In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise. Redundant disk arrays overcome this threat to data reliability by ensuring that data remains available during and after component failures.
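The N+1 parity encoding highlighted in the tutorial can be made concrete: the parity block is the bitwise XOR of the N data blocks, so a single failed disk's contents equal the XOR of all surviving blocks plus parity. A toy byte-level sketch of this property (not any vendor's implementation):

```python
from functools import reduce

# N+1 parity: parity = XOR of the N data blocks. Because XOR is its own
# inverse, any single lost block is the XOR of the survivors + parity.
def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]  # three "disks"
parity = xor_blocks(data)
# Simulate losing disk 1 and rebuilding it from the survivors + parity:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```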
Hybrid accretion disks in active galactic nuclei. I - Structure and spectra
NASA Technical Reports Server (NTRS)
Wandel, Amri; Liang, Edison P.
1991-01-01
A unified treatment is presented of the two distinct states of vertically thin AGN accretion disks: a cool (~10^6 K) optically thick solution, and a hot (~10^9 K) optically thin solution. A generalized formalism and a new radiative cooling equation valid in both regimes are introduced. A new luminosity limit is found at which the hot and cool alpha solutions merge into a single solution of intermediate optical depth. Analytic solutions for the disk structure are given, and output spectra are computed numerically. This is used to demonstrate the prospect of fitting AGN broadband spectra containing both the UV bump and the hard X-ray and gamma-ray tail using a single accretion disk model. Such models are found to make definite predictions about the observed spectrum, such as the relation between the hard X-ray spectral index, the UV-to-X-ray luminosity ratio, and a spectral feature at about 1 MeV.
NuSTAR and XMM-Newton Observations of the 2015 Outburst Decay of GX 339-4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stiele, H.; Kong, A. K. H., E-mail: hstiele@mx.nthu.edu.tw
The extent of the accretion disk in the low/hard state of stellar mass black hole X-ray binaries remains an open question. There is some evidence suggesting that the inner accretion disk is truncated and replaced by a hot flow, while the detection of relativistically broadened iron emission lines seems to require an accretion disk extending fully to the innermost stable circular orbit. We present comprehensive spectral and timing analyses of six Nuclear Spectroscopic Telescope Array and XMM-Newton observations of GX 339–4 taken during outburst decay in the autumn of 2015. Using a spectral model consisting of a thermal accretion disk, Comptonized emission, and a relativistic reflection component, we obtain a decreasing photon index, consistent with an X-ray binary during outburst decay. Although we observe a discrepancy between the inner radius of the accretion disk and that of the reflector, which can be attributed to the different underlying assumptions in each model, both model components indicate a truncated accretion disk that recedes with decreasing luminosity. The evolution of the characteristic frequency in Fourier power spectra and their missing energy dependence support the interpretation of a truncated and evolving disk in the hard state. The XMM-Newton data set allowed us to study, for the first time, the evolution of the covariance spectra and ratio during outburst decay. The covariance ratio increases and steepens during outburst decay, consistent with increased disk instabilities.
Design Alternatives to Improve Access Time Performance of Disk Drives Under DOS and UNIX
NASA Astrophysics Data System (ADS)
Hospodor, Andy
For the past 25 years, improvements in CPU performance have overshadowed improvements in the access time performance of disk drives. CPU performance has been slanted towards greater instruction execution rates, measured in millions of instructions per second (MIPS). However, the slant for performance of disk storage has been towards capacity and corresponding increased storage densities. The IBM PC, introduced in 1982, processed only a fraction of a MIP. Follow-on CPUs, such as the 80486 and 80586, sported 5-10 MIPS by 1992. Single user PCs and workstations, with one CPU and one disk drive, became the dominant application, as implied by their production volumes. However, disk drives did not enjoy a corresponding improvement in access time performance, although the potential still exists. The time to access a disk drive improves (decreases) in two ways: by altering the mechanical properties of the drive or by adding cache to the drive. This paper explores the improvement to access time performance of disk drives using cache, prefetch, faster rotation rates, and faster seek acceleration.
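The tradeoff described above can be made concrete with a simple service-time model: a cached request is treated as essentially free, while a miss pays seek time, average rotational latency (half a revolution), and transfer time. This is a minimal sketch; the seek times, rotation rates, and hit ratios below are illustrative assumptions, not figures from the dissertation.

```python
def expected_access_time_ms(seek_ms, rpm, transfer_ms, cache_hit_ratio):
    """Expected time to service a request on a disk with a read cache.

    A cache hit is assumed to cost (approximately) nothing; a miss pays
    seek + average rotational latency (half a revolution) + transfer.
    """
    rotational_latency_ms = 0.5 * 60_000.0 / rpm  # half a revolution, in ms
    miss_ms = seek_ms + rotational_latency_ms + transfer_ms
    return (1.0 - cache_hit_ratio) * miss_ms

# Doubling the rotation rate halves rotational latency; adding cache
# scales the whole miss penalty down by the hit ratio.
base = expected_access_time_ms(seek_ms=12.0, rpm=3600, transfer_ms=1.0, cache_hit_ratio=0.0)
fast = expected_access_time_ms(seek_ms=12.0, rpm=7200, transfer_ms=1.0, cache_hit_ratio=0.3)
```

The model makes the paper's point visible: mechanical improvements (rotation rate, seek acceleration) shrink the miss cost, while cache and prefetch shrink how often that cost is paid.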
An X-Ray Reprocessing Model of Disk Thermal Emission in Type 1 Seyfert Galaxies
NASA Technical Reports Server (NTRS)
Chiang, James; White, Nicholas E. (Technical Monitor)
2002-01-01
Using a geometry consisting of a hot central Comptonizing plasma surrounded by a thin accretion disk, we model the optical through hard X-ray spectral energy distributions of the type 1 Seyfert galaxies NGC 3516 and NGC 7469. As in the model proposed by Poutanen, Krolik, and Ryde for the X-ray binary Cygnus X-1 and later applied to Seyfert galaxies by Zdziarski, Lubiński, and Smith, feedback between the radiation reprocessed by the disk and the thermal Comptonization emission from the hot central plasma plays a pivotal role in determining the X-ray spectrum and, as we show, the optical and ultraviolet spectra as well. Seemingly uncorrelated optical/UV and X-ray light curves, similar to those which have been observed from these objects, can, in principle, be explained by variations in the size, shape, and temperature of the Comptonizing plasma. Furthermore, by positing a disk mass accretion rate which satisfies a condition for global energy balance between the thermal Comptonization luminosity and the power available from accretion, one can predict the spectral properties of the heretofore poorly measured hard X-ray continuum above approximately 50 keV in type 1 Seyfert galaxies. Conversely, forthcoming measurements of the hard X-ray continuum by more sensitive hard X-ray and soft gamma-ray telescopes, such as those aboard the International Gamma-Ray Astrophysics Laboratory (INTEGRAL), in conjunction with simultaneous optical, UV, and soft X-ray monitoring, will allow the mass accretion rates to be directly constrained for these sources in the context of this model.
STRONGER REFLECTION FROM BLACK HOLE ACCRETION DISKS IN SOFT X-RAY STATES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steiner, James F.; Remillard, Ronald A.; García, Javier A.
We analyze 15,000 spectra of 29 stellar-mass black hole (BH) candidates collected over the 16 year mission lifetime of the Rossi X-ray Timing Explorer using a simple phenomenological model. As these BHs vary widely in luminosity and progress through a sequence of spectral states, which we broadly refer to as hard and soft, we focus on two spectral components: the Compton power law and the reflection spectrum it generates by illuminating the accretion disk. Our proxy for the strength of reflection is the equivalent width of the Fe-K line as measured with respect to the power law. A key distinction of our work is that for all states we estimate the continuum under the line by excluding the thermal disk component and using only the component that is responsible for fluorescing the Fe-K line, namely, the Compton power law. We find that reflection is several times (∼3) more pronounced in soft compared to hard spectral states. This is most readily caused by the dilution of the Fe line amplitude from Compton scattering in the corona, which has a higher optical depth in hard states. Alternatively, this could be explained by a more compact corona in soft (compared to hard) states, which would result in a higher reflection fraction.
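The equivalent-width proxy used above can be illustrated numerically: the line's excess flux over the fluorescing continuum is integrated across energy, giving the width of continuum that carries the same flux as the line. This is a generic sketch with a toy Gaussian line on a flat continuum, not the authors' RXTE pipeline; every number below is an assumption.

```python
import numpy as np

def equivalent_width(energy, total_flux, continuum_flux):
    """Equivalent width of an emission line: width of the continuum band
    whose flux equals the line's excess flux above the continuum."""
    excess = (total_flux - continuum_flux) / continuum_flux
    # trapezoidal integration over the energy grid
    return float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(energy)))

# Toy spectrum: a Gaussian "Fe-K" line at 6.4 keV on a flat continuum.
e = np.linspace(5.0, 8.0, 601)                       # energy grid, keV
continuum = np.ones_like(e)                          # flat continuum
line = 0.3 * np.exp(-0.5 * ((e - 6.4) / 0.2) ** 2)   # Gaussian line excess
ew_kev = equivalent_width(e, continuum + line, continuum)
```

For a Gaussian of amplitude 0.3 (relative to the continuum) and width 0.2 keV, the result approaches the analytic value 0.3 × 0.2 × √(2π) ≈ 0.15 keV.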
X-ray nova MAXI J1828-249. Evolution of the broadband spectrum during its 2013-2014 outburst
NASA Astrophysics Data System (ADS)
Grebenev, S. A.; Prosvetov, A. V.; Burenin, R. A.; Krivonos, R. A.; Mescheryakov, A. V.
2016-02-01
Based on data from the SWIFT, INTEGRAL, MAXI/ISS orbital observatories, and the ground-based RTT-150 telescope, we have investigated the broadband (from the optical to the hard X-ray bands) spectrum of the X-ray nova MAXI J1828-249 and its evolution during the outburst of the source in 2013-2014. The optical and infrared emissions from the nova are shown to be largely determined by the extension of the power-law component responsible for the hard X-ray emission. The contribution from the outer cold regions of the accretion disk, even if the X-ray heating of its surface is taken into account, turns out to be moderate during the source's "high" state (when a soft blackbody emission component is observed in the X-ray spectrum) and is virtually absent during its "low" ("hard") state. This result suggests that much of the optical and infrared emissions from such systems originates in the same region of main energy release where their hard X-ray emission is formed. This can be the Compton or synchro-Compton radiation from a high-temperature plasma in the central accretion disk region puffed up by instabilities, the synchrotron radiation from a hot corona above the disk, or the synchrotron radiation from its relativistic jets.
Quance, S C; Shortall, A C; Harrington, E; Lumley, P J
2001-11-01
The effects of variation in post-exposure storage temperature (18 vs. 37 °C) and curing light intensity (200 vs. 500 mW/cm^2) on the micro-hardness of seven light-activated resin composite materials, cured with a Prismetics Mk II (Dentsply) light activation unit, were studied. Hardness values at the upper and lower surfaces of 2 mm thick disc-shaped specimens of seven light-cured resin composite materials (Herculite XRV and Prodigy/Kerr, Z100 and Silux Plus/3M, TPH/Dentsply, Pertac-Hybrid/Espe, and Charisma/Kulzer), which had been stored dry, were determined 24 h after irradiation. Hardness values varied with product, surface, storage temperature, and curing light intensity. In no case did the hardness at the lower surface equal that of the upper surface, and the combination of 500 mW/cm^2 intensity and 37 °C storage produced the best hardness results at the lower surface. Material composition had a significant influence on surface hardness. Only one of the seven products (TPH) produced a mean hardness value at the lower surface >80% of the maximum mean upper-surface hardness obtained for the corresponding product at 500 mW/cm^2 intensity/37 °C storage temperature when subjected to all four test regimes. Despite optimum post-cure storage conditions, 200 mW/cm^2 intensity curing for 40 s will not produce acceptable hardness at the lower surface of 2 mm increments of the majority of products tested.
NASA Astrophysics Data System (ADS)
Miller, J. M.; Fabian, A. C.; Reynolds, C. S.; Nowak, M. A.; Homan, J.; Freyberg, M. J.; Ehle, M.; Belloni, T.; Wijnands, R.; van der Klis, M.; Charles, P. A.; Lewin, W. H. G.
2004-05-01
We have analyzed spectra of the Galactic black hole GX 339-4 obtained through simultaneous 76 ks XMM-Newton/EPIC-pn and 10 ks Rossi X-Ray Timing Explorer observations during a bright phase of its 2002-2003 outburst. An extremely skewed, relativistic Fe Kα emission line and ionized disk reflection spectrum are revealed in these spectra. Self-consistent models for the Fe Kα emission-line profile and disk reflection spectrum rule out an inner disk radius compatible with a Schwarzschild black hole at more than the 8σ level of confidence. The best-fit inner disk radius of (2-3) r_g suggests that GX 339-4 harbors a black hole with spin a ≥ 0.8-0.9 (where r_g = GM/c^2 and a = cJ/GM^2, and assuming that reflection in the plunging region is relatively small). This confirms indications for black hole spin based on a Chandra spectrum obtained later in the outburst. The emission line and reflection spectrum also rule out a standard power-law disk emissivity in GX 339-4; a broken power-law form with enhanced emissivity inside ~6 r_g gives improved fits at more than the 8σ level of confidence. The extreme red wing of the line and the steep emissivity require a centrally concentrated source of hard X-rays that can strongly illuminate the inner disk. Hard X-ray emission from the base of a jet, enhanced by gravitational light-bending effects, could create the concentrated hard X-ray emission; this process may be related to magnetic connections between the black hole and the inner disk. We discuss these results within the context of recent results from analyses of XTE J1650-500 and MCG -6-30-15, and of models for the inner accretion flow environment around black holes.
Influence of investment, disinfection, and storage on the microhardness of ocular resins.
Goiato, Marcelo Coelho; dos Santos, Daniela Micheline; Gennari-Filho, Humberto; Zavanelli, Adriana Cristina; Dekon, Stefan Fiuza de Carvalho; Mancuso, Daniela Nardi
2009-01-01
The longevity of an ocular prosthesis is directly related to the resistance to erosion of its material. The purpose of this study was to evaluate the effects of chemical disinfection and the method of investment on the microhardness of ocular prosthesis acrylic resin. Thirty-two test specimen investments were obtained in two silicones. A segment was cut in each test specimen, and each specimen was fixed in an acrylic disk. The specimens were then polished and submitted to the first microhardness test before immersion in distilled water and incubation for 2 months. During this 2-month period, the specimens were immersed in a water bath at 37 degrees C and were disinfected daily; half were disinfected with neutral soap and the other half were disinfected with 4% chlorhexidine gluconate. After the storage phase and disinfection, a second microhardness test was performed. The surface microhardness values for the acrylic resins were submitted to ANOVA, followed by the Tukey test. The disinfection and the period of storage did not statistically influence the surface microhardness of the acrylic resin, independent of the method of investment of the specimens (Zetalabor or Vipi Sil). The investment of specimens with Zetalabor silicone presented a greater surface hardness, independent of the type of disinfection and the period of storage. Based on these results, we suggest that the microhardness of the resin evaluated was not influenced by the method of disinfection or the time of storage used and was affected only by the investment material.
A media maniac's guide to removable mass storage media
NASA Technical Reports Server (NTRS)
Kempster, Linda S.
1996-01-01
This paper addresses, at a high level, the many individual technologies available today in the removable storage arena, including removable magnetic tapes, magnetic floppies, optical disks, and optical tape. The tape recorders discussed below cover longitudinal, serpentine, longitudinal-serpentine, and helical scan technologies. The magnetic floppies discussed will be used for personal electronic in-box applications. Optical disks still fill the role for dense long-term storage. The media capacities quoted are for native data. In some cases, 2 KB ASCII pages or 50 KB document images will be referenced.
Efficient micromagnetics for magnetic storage devices
NASA Astrophysics Data System (ADS)
Escobar Acevedo, Marco Antonio
Micromagnetics is an important component for advancing the magnetic nanostructures understanding and design. Numerous existing and prospective magnetic devices rely on micromagnetic analysis, these include hard disk drives, magnetic sensors, memories, microwave generators, and magnetic logic. The ability to examine, describe, and predict the magnetic behavior, and macroscopic properties of nanoscale magnetic systems is essential for improving the existing devices, for progressing in their understanding, and for enabling new technologies. This dissertation describes efficient micromagnetic methods as required for magnetic storage analysis. Their performance and accuracy is demonstrated by studying realistic, complex, and relevant micromagnetic system case studies. An efficient methodology for dynamic micromagnetics in large scale simulations is used to study the writing process in a full scale model of a magnetic write head. An efficient scheme, tailored for micromagnetics, to find the minimum energy state on a magnetic system is presented. This scheme can be used to calculate hysteresis loops. An efficient scheme, tailored for micromagnetics, to find the minimum energy path between two stable states on a magnetic system is presented. This minimum energy path is intimately related to the thermal stability.
Laser Optical Disk: The Coming Revolution in On-Line Storage.
ERIC Educational Resources Information Center
Fujitani, Larry
1984-01-01
Review of similarities and differences between magnetic-based and optical disk drives includes a discussion of the electronics necessary for their operation; describes benefits, possible applications, and future trends in development of laser-based drives; and lists manufacturers of laser optical disk drives. (MBR)
Set processing in a network environment. [data bases and magnetic disks and tapes
NASA Technical Reports Server (NTRS)
Hardgrave, W. T.
1975-01-01
A combination of a local network, a mass storage system, and an autonomous set processor serving as a data/storage management machine is described. Its characteristics include: content-accessible data bases usable from all connected devices; efficient storage/access of large data bases; simple and direct programming with data manipulation and storage management handled by the set processor; simple data base design and entry from source representation to set processor representation with no predefinition necessary; capability available for user sort/order specification; significant reduction in tape/disk pack storage and mounts; flexible environment that allows upgrading hardware/software configuration without causing major interruptions in service; minimal traffic on data communications network; and improved central memory usage on large processors.
TransAtlasDB: an integrated database connecting expression data, metadata and variants
Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J
2018-01-01
Abstract High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. The constraints of accessing such data and interpreting results can be a major impediment in postulating suitable hypotheses, thus an innovative storage solution that addresses limitations such as hard disk storage requirements, efficiency and reproducibility is paramount. By offering a uniform data storage and retrieval mechanism, various data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of both relational and NoSQL databases for fast and efficient data storage, processing and querying of large datasets from transcript expression analysis with corresponding metadata, as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for accurate storage of the large amounts of data derived from RNAseq analysis, along with methods of interacting with the database, either via command-line data management workflows, written in Perl, whose functionality simplifies the storage and manipulation of the massive amounts of data generated from RNAseq analysis, or through the web interface. The database application is currently modeled to handle analysis data from agricultural species, and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large, complex results data files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361
NASA Astrophysics Data System (ADS)
Isobe, Masaharu
Hard sphere/disk systems are among the simplest models and have been used to address numerous fundamental problems in the field of statistical physics. The pioneering numerical works on the solid-fluid phase transition based on Monte Carlo (MC) and molecular dynamics (MD) methods published in 1957 represent historical milestones, which have had a significant influence on the development of computer algorithms and novel tools to obtain physical insights. This chapter addresses the works of Alder's breakthrough regarding hard sphere/disk simulation: (i) event-driven molecular dynamics, (ii) long-time tail, (iii) molasses tail, and (iv) two-dimensional melting/crystallization. From a numerical viewpoint, there are serious issues that must be overcome for further breakthrough. Here, we present a brief review of recent progress in this area.
Striped tertiary storage arrays
NASA Technical Reports Server (NTRS)
Drapeau, Ann L.
1993-01-01
Data striping is a technique for increasing the throughput and reducing the response time of large accesses to a storage system. In striped magnetic or optical disk arrays, a single file is striped, or interleaved, across several disks; in a striped tape system, files are interleaved across tape cartridges. Because a striped file can be accessed by several disk drives or tape recorders in parallel, the sustained bandwidth to the file is greater than in non-striped systems, where accesses to the file are restricted to a single device. It is argued that applying striping to tertiary storage systems will provide needed performance and reliability benefits. The performance benefits of striping for applications using large tertiary storage systems are discussed. It will introduce commonly available tape drives and libraries, and discuss their performance limitations, especially focusing on the long latency of tape accesses. This section will also describe an event-driven tertiary storage array simulator that is being used to understand the best ways of configuring these storage arrays. The reliability problems of magnetic tape devices are discussed, and plans for modeling the overall reliability of striped tertiary storage arrays to identify the amount of error correction required are described. Finally, work being done by other members of the Sequoia group to address latency of accesses, optimizing tertiary storage arrays that perform mostly writes, and compression is discussed.
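Round-robin striping as described can be sketched in a few lines: a byte stream is interleaved across devices in fixed-size stripe units and reassembled in the same order. The device count and stripe-unit size below are arbitrary illustration values, not parameters from the simulator.

```python
def stripe(data: bytes, n_devices: int, unit: int):
    """Interleave `data` across `n_devices` in round-robin stripe units."""
    devices = [bytearray() for _ in range(n_devices)]
    for i in range(0, len(data), unit):
        devices[(i // unit) % n_devices].extend(data[i:i + unit])
    return [bytes(d) for d in devices]

def unstripe(devices, unit: int) -> bytes:
    """Reassemble the original byte stream from striped devices."""
    out = bytearray()
    cursors = [0] * len(devices)
    d = 0
    # Visit devices round-robin until the next device has no data left.
    while cursors[d] < len(devices[d]):
        out.extend(devices[d][cursors[d]:cursors[d] + unit])
        cursors[d] += unit
        d = (d + 1) % len(devices)
    return bytes(out)
```

Because consecutive stripe units land on different devices, a large sequential read can be serviced by all devices in parallel, which is exactly the bandwidth argument the abstract makes.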
NASA Astrophysics Data System (ADS)
Xu, Yanjun; Harrison, Fiona A.; García, Javier A.; Fabian, Andrew C.; Fürst, Felix; Gandhi, Poshak; Grefenstette, Brian W.; Madsen, Kristin K.; Miller, Jon M.; Parker, Michael L.; Tomsick, John A.; Walton, Dominic J.
2018-01-01
We report on a Nuclear Spectroscopic Telescope Array (NuSTAR) observation of the recently discovered bright black hole candidate MAXI J1535-571. NuSTAR observed the source on MJD 58003 (five days after the outburst was reported). The spectrum is characteristic of a black hole binary in the hard state. We observe clear disk reflection features, including a broad Fe Kα line and a Compton hump peaking around 30 keV. Detailed spectral modeling reveals a narrow Fe Kα line complex centered around 6.5 keV on top of the strong relativistically broadened Fe Kα line. The narrow component is consistent with distant reflection from moderately ionized material. The spectral continuum is well described by a combination of cool thermal disk photons and a Comptonized plasma with the electron temperature kT_e = 19.7 ± 0.4 keV. An adequate fit can be achieved for the disk reflection features with a self-consistent relativistic reflection model that assumes a lamp-post geometry for the coronal illuminating source. The spectral fitting measures a black hole spin a > 0.84, an inner disk radius R_in < 2.01 r_ISCO, and a lamp-post height h = 7.2 (-2.0, +0.8) r_g (statistical errors, 90% confidence), indicating no significant disk truncation and a compact corona. Although the distance and mass of this source are not currently known, this suggests the source was likely in the brighter phases of the hard state during this NuSTAR observation.
NASA Astrophysics Data System (ADS)
Vajda, Istvan; Kohari, Zalan; Porjesz, Tamas; Benko, Laszlo; Meerovich, V.; Sokolovsky; Gawalek, W.
2002-08-01
The technical and economic feasibility of short-term energy storage flywheels with high temperature superconducting (HTS) bearings is widely investigated. It is essential to reduce the ac losses caused by magnetic field variations in the HTS bulk disks/rings (levitators) used in the magnetic bearings of flywheels. For the HTS bearings, the calculation and measurement of the magnetic field distribution were performed. Effects such as eccentricity and tilting were measured. The time dependency of the levitation force following a jumpwise movement of the permanent magnet was measured. The results were used to set up an engineering design algorithm for energy storage HTS flywheels. This algorithm was applied to an experimental HTS flywheel model with a disk type permanent magnet motor/generator unit designed and constructed by the authors. A conceptual design of the disk-type motor/generator with radial flux is shown.
Advanced optical disk storage technology
NASA Technical Reports Server (NTRS)
Haritatos, Fred N.
1996-01-01
There is a growing need within the Air Force for more and better data storage solutions. Rome Laboratory, the Air Force's Center of Excellence for C3I technology, has sponsored the development of a number of operational prototypes to deal with this growing problem. This paper will briefly summarize the various prototype developments with examples of full mil-spec and best commercial practice. These prototypes have successfully operated under severe space, airborne and tactical field environments. From a technical perspective these prototypes have included rewritable optical media ranging from a 5.25-inch diameter format up to the 14-inch diameter disk format. Implementations include an airborne sensor recorder, a deployable optical jukebox and a parallel array of optical disk drives. They include stand-alone peripheral devices to centralized, hierarchical storage management systems for distributed data processing applications.
Hiramatsu, Reiji; Matsumoto, Masakado; Sakae, Kenji; Miyazaki, Yutaka
2005-01-01
In order to determine desiccation tolerances of bacterial strains, the survival of 58 diarrheagenic strains (18 salmonellae, 35 Shiga toxin-producing Escherichia coli [STEC], and 5 shigellae) and of 15 nonpathogenic E. coli strains was determined after drying at 35°C for 24 h in paper disks. At an inoculum level of 10^7 CFU/disk, most of the salmonellae (14/18) and the STEC strains (31/35) survived with a population of 10^3 to 10^4 CFU/disk, whereas all of the shigellae (5/5) and the majority of the nonpathogenic E. coli strains (9/15) did not survive (the population was decreased to less than the detection limit of 10^2 CFU/disk). After 22 to 24 months of subsequent storage at 4°C, all of the selected salmonellae (4/4) and most of the selected STEC strains (12/15) survived, keeping the original populations (10^3 to 10^4 CFU/disk). In contrast to the case for storage at 4°C, all of 15 selected strains (5 strains each of Salmonella spp., STEC O157, and STEC O26) died after 35 to 70 days of storage at 25°C and 35°C. The survival rates of all of these 15 strains in paper disks after the 24 h of drying were substantially increased (10 to 79 times) by the presence of sucrose (12% to 36%). All of these 15 desiccated strains in paper disks survived after exposure to 70°C for 5 h. The populations of these 15 strains inoculated in dried foods containing sucrose and/or fat (e.g., chocolate) were 100 times higher than those in the dried paper disks after drying for 24 h at 25°C. PMID:16269694
NASA Technical Reports Server (NTRS)
Blackwell, Kim; Blasso, Len (Editor); Lipscomb, Ann (Editor)
1991-01-01
The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications held July 23 through 25, 1991 at the NASA/Goddard Space Flight Center are presented. The program includes a keynote address, invited technical papers, and selected technical presentations to provide a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.
NASA Astrophysics Data System (ADS)
Xiong, Shaomin
The magnetic storage areal density keeps increasing every year, and magnetic recording-based hard disk drives provide a very cheap and effective solution to the ever increasing demand for data storage. Heat assisted magnetic recording (HAMR) and bit patterned media have been proposed to increase the magnetic storage density beyond 1 Tb/in^2. In HAMR systems, high magnetic anisotropy materials are recommended to break the superparamagnetic limit for further scaling down the size of magnetic bits. However, current magnetic transducers are not able to generate a field strong enough to switch the magnetic orientations of such high anisotropy materials, so data writing cannot be achieved directly. Thermal heating therefore has to be applied to reduce the coercivity for magnetic writing. To provide the heating, a laser is focused using a near field transducer (NFT) to locally heat a ~(25 nm)^2 spot on the magnetic disk to the Curie temperature, ~400-600 °C, to assist in the data writing process. But this high temperature working condition is a great challenge for the traditional head-disk interface (HDI). The disk lubricant can be depleted by evaporation or decomposition. The protective carbon overcoat can be graphitized or oxidized. The surface quality, such as its roughness, can be changed as well. The NFT structure is also vulnerable to degradation under the large number of thermal load cycles. The changes of the HDI under these thermal conditions could significantly reduce the robustness and reliability of HAMR products. In bit patterned media systems, instead of using a continuous magnetic granular material, physically isolated magnetic islands are used to store data. The size of the magnetic islands should be about or less than 25 nm in order to achieve a storage areal density beyond 1 Tb/in^2. However, the manufacture of patterned media disks is a great challenge for current optical lithography technology.
Alternative lithography solutions, such as nanoimprint, plasmonic nanolithography, could be potential candidates for the fabrication of patterned disks. This dissertation focuses mainly on: (1) an experimental study of the HDI under HAMR conditions (2) exploration of a plasmonic nanolithography technology. In this work, an experimental HAMR testbed (named "Cal stage") is developed to study different aspects of HAMR systems, including the tribological head-disk interface and heat transfer in the head-disk gap. A temperature calibration method based on magnetization decay is proposed to obtain the relationship between the laser power input and temperature increase on the disk. Furthermore, lubricant depletion tests under various laser heating conditions are performed. The effects of laser heating repetitions, laser power and disk speeds on lubricant depletion are discussed. Lubricant depletion under the optical focused laser beam heating and the NFT heating are compared, revealing that thermal gradient plays an important role for lubricant depletion. Lubricant reflow behavior under various conditions is also studied, and a power law dependency of lubricant depletion on laser heating repetitions is obtained from the experimental results. A conductive-AFM system is developed to measure the electrical properties of thin carbon films. The conductivity or resistivity is a good parameter for characterizing the sp2/sp3 components of the carbon films. Different heating modes are applied to study the degradation of the carbon films, including temperature-controlled electric heater heating, focused laser beam heating and NFT heating. It is revealed that the temperature and heating duration significantly affect the degradation of the carbon films. Surface reflectivity and roughness are changed under certain heating conditions. The failure of the NFT structure during slider flying is investigated using our in-house fabricated sliders. 
In order to extend the lifetime of the NFT, a two-stage heating scheme is proposed and a numerical simulation has verified the feasibility of this new scheme. The heat dissipated around the NFT structure causes a thermal protrusion. There is a chance for contact to occur between the protrusion and disk which can result in a failure of the NFT. A design method to combine both TFC protrusion and laser induced NFT protrusion is proposed to reduce the fly-height modulation and chance of head-disk contact. Finally, an integrated plasmonic nanolithography machine is introduced to fabricate the master template for patterned disks. The plasmonic nanolithography machine uses a flying slider with a plasmonic lens to expose the thermal resist on a spinning wafer. The system design, optimization and integration have been performed over the past few years. Several sub-systems of the plasmonic nanolithography machine, such as the radial and circumferential direction position control, high speed pattern generation, are presented in this work. The lithography results are shown as well.
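The power-law dependency of lubricant depletion on heating repetitions reported above is the kind of relationship that can be recovered from measurements with a least-squares line fit in log-log space, since log(depth) = log(a) + b·log(N). The sketch below uses synthetic data with an assumed prefactor and exponent; it is not the dissertation's dataset.

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b via linear regression on logs."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx = sum(lx) / n
    my = sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic depletion depths (arbitrary units) after N heating cycles,
# generated from depth = 0.5 * N**0.4 -- an assumed law, not a measurement.
cycles = [1, 10, 100, 1000, 10000]
depth = [0.5 * n ** 0.4 for n in cycles]
a, b = fit_power_law(cycles, depth)
```

With noiseless synthetic data the fit recovers the assumed prefactor and exponent exactly; with real depletion measurements the same regression yields the empirical power-law parameters.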
A Layered Solution for Supercomputing Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grider, Gary
To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.
DNA as a digital information storage device: hope or hype?
Panda, Darshan; Molla, Kutubuddin Ali; Baig, Mirza Jainul; Swain, Alaka; Behera, Deeptirekha; Dash, Manaswini
2018-05-01
The total digital information today amounts to 3.52 × 10^22 bits globally and, at its consistent exponential rate of growth, is expected to reach 3 × 10^24 bits by 2040. Data storage density of silicon chips is limited, and magnetic tapes used to maintain large-scale permanent archives begin to deteriorate within 20 years. Since silicon has limited data storage ability and serious limitations, such as human health hazards and environmental pollution, researchers across the world are intently searching for an appropriate alternative. Deoxyribonucleic acid (DNA) is an appealing option for such a purpose due to its endurance, a higher degree of compaction, and similarity to the sequential code of 0's and 1's as found in a computer. This emerging field of DNA as a means of data storage has the potential to transform science fiction into reality, wherein a device that can fit in our palms can accommodate the information of the entire world, as latest research has revealed that just four grams of DNA could store the annual global digital information. DNA has all the properties to supersede the conventional hard disk, as it is capable of retaining ten times more data, has a thousandfold storage density, and consumes 10^8 times less power to store a similar amount of data. Although DNA has an enormous potential as a data storage device of the future, multiple bottlenecks such as exorbitant costs, excruciatingly slow writing and reading mechanisms, and vulnerability to mutations or errors need to be resolved. In this review, we have critically analyzed the emergence of DNA as a molecular storage device for the future, its ability to address the future digital data crunch, potential challenges in achieving this objective, various current industrial initiatives, and major breakthroughs.
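The correspondence between binary data and the four nucleotides mentioned above can be illustrated with the simplest possible codec, two bits per base. Real DNA-storage schemes add error correction and avoid homopolymer runs and extreme GC content, all of which this sketch deliberately ignores; the bit-to-base mapping is an arbitrary choice.

```python
# Arbitrary two-bits-per-base mapping (one of many possible conventions).
_B2N = {"00": "A", "01": "C", "10": "G", "11": "T"}
_N2B = {v: k for k, v in _B2N.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode a byte string as nucleotides, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(_B2N[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bytes(seq: str) -> bytes:
    """Decode a nucleotide string back into bytes (four bases per byte)."""
    bits = "".join(_N2B[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

Each byte becomes exactly four bases, which is the density intuition behind the review's comparisons: the information channel is the base sequence itself, and reading it back is a sequencing problem rather than a magnetic one.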
Sabbaghi, Mostafa; Esmaeilian, Behzad; Raihanian Mashhadi, Ardeshir; Behdad, Sara; Cade, Willie
2015-02-01
Consumers often tend to store their used, old, or non-functional electronics for a period of time before discarding them and returning them to the waste stream. This behavior increases the obsolescence rate of still-functional used products, lowering the profitability that could result from End-of-Use (EOU) treatments such as reuse, upgrade, and refurbishment. Such behaviors are influenced by several product- and consumer-related factors, including consumers' traits and lifestyles, technology evolution, product design features, product market value, and pro-environmental stimuli. A better understanding of different groups of consumers, their utilization and storage behavior, and the connection of these behaviors with product design features helps Original Equipment Manufacturers (OEMs) and the recycling and recovery industry overcome the challenges resulting from the undesirable storage of used products. This paper provides a statistical analysis of the dynamic nature of Electronic Waste (e-waste) by studying the effects of design characteristics, brand, and consumer type on electronics usage time and end-of-use time-in-storage. A database of 10,063 Hard Disk Drives (HDDs) from used personal computers returned to a remanufacturing facility located in Chicago, IL, USA during 2011-2013 was selected as the basis for this study. The results show that commercial consumers stored computers longer than household consumers, regardless of brand and capacity. Moreover, heterogeneous storage behavior is observed across different brands of HDDs, regardless of capacity and consumer type. Finally, the storage behavior trends are projected for short-term forecasting, and the storage times are precisely predicted by applying machine learning methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
Status of international optical disk standards
NASA Astrophysics Data System (ADS)
Chen, Di; Neumann, John
1999-11-01
Optical technology for data storage offers media removability with unsurpassed reliability. Because the media are removable, data interchange between media and drives from different sources is a major concern. The optical recording community realized, at the inception of this new storage technology, that international standards for all optical recording disks and cartridges had to be established to ensure the healthy growth of the industry and to benefit users. Many standards organizations took up the challenge, and numerous international standards were established that are now used worldwide. This paper provides a brief summary of the current status of international optical disk standards.
Reading magnetic ink patterns with magnetoresistive sensors
NASA Astrophysics Data System (ADS)
Merazzo, K. J.; Costa, T.; Franco, F.; Ferreira, R.; Zander, M.; Türr, M.; Becker, T.; Freitas, P. P.; Cardoso, S.
2018-05-01
Information storage and monitoring rely on sensitive transducers with high robustness and reliability. This paper presents a methodology for qualifying magnetic sensors for magnetic pattern readout in applications other than hard disk magnetic recording. A magnetic tunnel junction (MTJ) sensor was incorporated in a reader setup for recognizing the magnetization of patterned arrays made of CoCrPt thin films and magnetic ink. The geometry of the sensor (in particular, the footprint and vertical distance to the media) was evaluated for two sensor configurations. The readout conditions were optimized to cope with variable media field intensity, resulting from the CoCrPt film or magnetic ink thickness, with fixed reading distance and pattern dimensions. The calibration of the ink magnetic signal could be inferred from the analytical calculations carried out to validate the CoCrPt results.
Mobile-PKI Service Model for Ubiquitous Environment
NASA Astrophysics Data System (ADS)
Jeun, Inkyung; Chun, Kilsoo
One of the most important issues in PKI (Public Key Infrastructure) is private key management. The private key must be handled safely to provide a secure PKI service. Even though PKI is usually used for identification and authentication of users in e-commerce, it has several inconvenient aspects. In particular, the fact that the storage media for the private key are limited to a PC hard disk drive or a smart card that users must always carry is inconvenient for the user and ill-suited to a ubiquitous network. This paper proposes a digital signature service using a mobile phone (m-PKI service) that is suitable for future networks. A mobile phone is the most widely used means of personal communication and is highly portable. Using m-PKI, the PKI service can be used anytime and anywhere.
Rawson, Ashish; Koidis, Anastasios; Rai, Dilip K; Tuohy, Maria; Brunton, Nigel
2010-07-14
The effect of blanching (95 ± 3 °C) followed by sous vide (SV) processing (90 °C for 10 min) on levels of two polyacetylenes in parsnip disks immediately after processing and during chill storage was studied and compared with the effect of water immersion (WI) processing (70 °C for 2 min). Blanching had the greatest influence on the retention of polyacetylenes in sous vide processed parsnip disks, resulting in significant decreases of 24.5% and 24% in falcarinol (1) and falcarindiol (2), respectively (p < 0.05). Subsequent SV processing did not result in additional significant losses of polyacetylenes compared to blanched samples. Subsequent anaerobic storage of SV processed samples resulted in a significant decrease in levels of 1 (p < 0.05), although no change in levels of 2 was observed (p > 0.05). Levels of 1 in WI processed samples were significantly higher than in SV samples.
NASA Astrophysics Data System (ADS)
Alvarez, Alejandro; Beche, Alexandre; Furano, Fabrizio; Hellmich, Martin; Keeble, Oliver; Rocha, Ricardo
2012-12-01
The Disk Pool Manager (DPM) is a lightweight solution for grid-enabled disk storage management. Operated at more than 240 sites, it has the widest distribution of all grid storage solutions in the WLCG infrastructure. It provides an easy way to manage and configure disk pools, and exposes multiple interfaces for data access (rfio, xroot, nfs, gridftp and http/dav) and control (srm). During the last year we have been working on providing stable, high-performance data access to our storage system using standard protocols, while extending the storage management functionality and adapting both configuration and deployment procedures to reuse commonly used building blocks. In this contribution we cover in detail the extensive evaluation we have performed of our new HTTP/WebDAV and NFS 4.1 frontends, in terms of functionality and performance. We summarize the issues we faced and the solutions we developed to turn them into valid alternatives to the existing grid protocols, namely the additional work required to provide multi-stream transfers for high-performance wide-area access, support for third-party copies, credential delegation, and the required changes in the experiment and fabric management frameworks and tools. We describe new functionality that has been added to ease system administration, such as different filesystem weights and a faster disk drain, and new configuration and monitoring solutions based on the industry standards Puppet and Nagios. Finally, we explain some of the internal changes we made in the DPM architecture to better handle the additional load from the analysis use cases.
Optical storage media data integrity studies
NASA Technical Reports Server (NTRS)
Podio, Fernando L.
1994-01-01
Optical disk-based information systems are being used in private industry and many Federal Government agencies for on-line and long-term storage of large quantities of data. The storage devices that are part of these systems are designed with powerful, but not unlimited, media error correction capabilities. The integrity of data stored on optical disks does not depend only on the life expectancy specifications for the medium. Different factors, including handling and storage conditions, may result in an increase in the size and frequency of medium errors. Monitoring the potential data degradation is crucial, especially for long-term applications. Efforts are being made by the Association for Information and Image Management Technical Committee C21, Storage Devices and Applications, to specify methods for monitoring and reporting to the user medium errors detected by the storage device while writing, reading or verifying the data stored on that medium. The Computer Systems Laboratory (CSL) of the National Institute of Standards and Technology (NIST) has a leadership role in the development of these standard techniques. In addition, CSL is researching other data integrity issues, including the investigation of error-resilient compression algorithms. NIST has conducted care and handling experiments on optical disk media with the objective of identifying possible causes of degradation. NIST work on data integrity and related standards activities is described.
A Persistent Disk Wind in GRS 1915+105 with NICER
NASA Astrophysics Data System (ADS)
Neilsen, J.; Cackett, E.; Remillard, R. A.; Homan, J.; Steiner, J. F.; Gendreau, K.; Arzoumanian, Z.; Prigozhin, G.; LaMarr, B.; Doty, J.; Eikenberry, S.; Tombesi, F.; Ludlam, R.; Kara, E.; Altamirano, D.; Fabian, A. C.
2018-06-01
The bright, erratic black hole X-ray binary GRS 1915+105 has long been a target for studies of disk instabilities, radio/infrared jets, and accretion disk winds, with implications that often apply to sources that do not exhibit its exotic X-ray variability. With the launch of the Neutron star Interior Composition Explorer (NICER), we have a new opportunity to study the disk wind in GRS 1915+105 and its variability on short and long timescales. Here we present our analysis of 39 NICER observations of GRS 1915+105 collected during five months of the mission data validation and verification phase, focusing on Fe XXV and Fe XXVI absorption. We report the detection of strong Fe XXVI in 32 (>80%) of these observations, with another four marginal detections; Fe XXV is less common, but both likely arise in the well-known disk wind. We explore how the properties of this wind depend on broad characteristics of the X-ray lightcurve: mean count rate, hardness ratio, and fractional rms variability. The trends with count rate and rms are consistent with an average wind column density that is fairly steady between observations but varies rapidly with the source on timescales of seconds. The line dependence on spectral hardness echoes the known behavior of disk winds in outbursts of Galactic black holes; these results clearly indicate that NICER is a powerful tool for studying black hole winds.
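The three light-curve characteristics the wind properties are correlated against (mean count rate, hardness ratio, and fractional rms variability) are simple statistics of binned count rates. As a minimal sketch, with made-up data and illustrative band definitions (not NICER's actual energy bands):

```python
import numpy as np

# Fake 1-s binned count rates in a soft and a hard band (illustrative only).
rng = np.random.default_rng(0)
soft = rng.poisson(lam=500, size=1000).astype(float)   # soft-band counts/s
hard = rng.poisson(lam=100, size=1000).astype(float)   # hard-band counts/s

total = soft + hard
mean_rate = total.mean()                 # mean count rate
hardness = hard.sum() / soft.sum()       # simple hard/soft band ratio
frac_rms = total.std() / total.mean()    # fractional rms variability

print(f"mean rate {mean_rate:.0f} c/s, HR {hardness:.2f}, rms {frac_rms:.3f}")
```

Real analyses subtract the Poisson counting-noise contribution from the rms and define hardness from specific instrument energy channels; this sketch only shows the arithmetic behind the three diagnostics.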
28 CFR 51.20 - Form of submissions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... megabyte MS-DOS formatted diskettes; 5 1/4″ 1.2 megabyte MS-DOS formatted floppy disks; nine-track tape... provided in hard copy. (c) All magnetic media shall be clearly labeled with the following information: (1... a disk operating system (DOS) file, it shall be formatted in a standard American Standard Code for...
Solving Reynolds Equation in the Head-Disk Interface of Hard Disk Drives by Using a Meshless Method
NASA Astrophysics Data System (ADS)
Bao-Jun, Shi; Ting-Yi, Yang; Jian, Zhang; Yun-Dong, Du
2010-05-01
With the decrease of the flying height of the magnetic head/slider in hard disk drives (HDDs), the Reynolds equation, which describes the pressure distribution of the air bearing film in HDDs, must be modified to account for the rarefaction effect. The meshless local Petrov-Galerkin (MLPG) method has been successfully used in several fields of solid and fluid mechanics and has proven to be an effective method. No meshes are needed in the MLPG method, either for the interpolation of the trial and test functions or for the integration of the weak form of the related differential equation. We solve the Reynolds equation in the head-disk interface (HDI) of HDDs using the MLPG method. The pressure distribution of the air bearing film obtained with the MLPG method is compared with the exact solution and with that obtained using a least-squares finite difference (LSFD) method. We also investigate the effects of the bearing number on the pressure value and the center of pressure based on this meshless method for different film-thickness ratios.
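To give a feel for the equation being solved, here is a deliberately simple one-dimensional finite-difference sketch of the incompressible Reynolds equation d/dx(H³ dP/dx) = Λ dH/dx for a linearly converging slider, with ambient (zero gauge) pressure at both ends. This is not the paper's meshless MLPG method, and it omits the rarefaction correction; the grid size, film profile, and forcing constant are all illustrative choices.

```python
import numpy as np

n = 101                       # grid points
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
H = 2.0 - x                   # converging film-thickness profile
L = 6.0                       # bearing-number-like forcing constant

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0     # Dirichlet ends: P = 0 (ambient)
for i in range(1, n - 1):
    hp = ((H[i] + H[i + 1]) / 2) ** 3    # H^3 at i + 1/2
    hm = ((H[i] + H[i - 1]) / 2) ** 3    # H^3 at i - 1/2
    A[i, i - 1] = hm / dx**2
    A[i, i] = -(hp + hm) / dx**2
    A[i, i + 1] = hp / dx**2
    b[i] = L * (H[i + 1] - H[i - 1]) / (2 * dx)

P = np.linalg.solve(A, b)     # tridiagonal system; positive pressure builds
print(f"peak pressure {P.max():.3f} at x = {x[P.argmax()]:.2f}")
```

For the converging wedge the right-hand side is negative, so a positive load-carrying pressure bump develops between the zero-pressure ends, which is the physical effect the air bearing relies on.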
Evolution of Large-Scale Magnetic Fields and State Transitions in Black Hole X-Ray Binaries
NASA Astrophysics Data System (ADS)
Wang, Ding-Xiong; Huang, Chang-Yin; Wang, Jiu-Zhou
2010-04-01
The state transitions of black hole (BH) X-ray binaries are discussed based on the evolution of large-scale magnetic fields, in which a combination of three energy mechanisms is involved: (1) the Blandford-Znajek (BZ) process, related to the open field lines connecting a rotating BH with remote astrophysical loads; (2) the magnetic coupling (MC) process, related to the closed field lines connecting the BH with its surrounding accretion disk; and (3) the Blandford-Payne (BP) process, related to the open field lines connecting the disk with remote astrophysical loads. It turns out that each spectral state of the BH binaries corresponds to a particular configuration of the magnetic field in the BH magnetosphere, and the main characteristics of the low/hard (LH) state, hard intermediate (HIM) state and steep power law (SPL) state are roughly fitted based on the evolution of large-scale magnetic fields associated with disk accretion.
Redundant Disk Arrays in Transaction Processing Systems. Ph.D. Thesis, 1993
NASA Technical Reports Server (NTRS)
Mourad, Antoine Nagib
1994-01-01
We address various issues concerning the use of disk arrays in transaction processing environments. We examine the problem of transaction undo recovery and propose a scheme that uses the redundancy in disk arrays to support undo recovery. The scheme uses twin-page storage for the parity information in the array and speeds up transaction processing by eliminating the need for undo logging for most transactions. The use of redundant arrays of distributed disks to provide recovery from disasters, as well as from temporary site failures and disk crashes, is also studied. We investigate the problem of assigning the sites of a distributed storage system to redundant arrays in such a way that the cost of maintaining the redundant parity information is minimized. Heuristic algorithms for solving the site partitioning problem are proposed, and their performance is evaluated using simulation. We also develop a heuristic for which an upper bound on the deviation from the optimal solution can be established.
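The parity redundancy the thesis builds on is the bytewise XOR of the data blocks in a stripe: any single lost block can be rebuilt by XOR-ing the survivors with the parity. This minimal sketch shows only that base mechanism, not the thesis's twin-page or undo-recovery scheme; block contents are made up.

```python
def parity(blocks):
    """Bytewise XOR of equal-sized blocks (data blocks and/or parity)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"disk0blk", b"disk1blk", b"disk2blk"]
p = parity(data)                      # parity block written to a fourth disk

# Simulate losing disk 1 and rebuilding it from the survivors plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
print("rebuilt:", rebuilt)
```

Because XOR is its own inverse, the same `parity` function serves both to compute the parity block and to reconstruct a missing one, which is why RAID-style arrays survive any single-disk failure at the cost of one extra disk per stripe.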
NASA Astrophysics Data System (ADS)
Tan, Baolin; Mapps, Desmond J.; Pan, Genhua; Robinson, Paul
1996-03-01
A disk with data, servo and isolation layers has been fabricated, with the data layer magnetized along the circumferential direction. The servo layer was recorded with a servo pattern magnetized along the radial direction. A continuous servo signal is obtained, and the servo does not occupy any data area. In this new method, the servo and data bits can share media surface area on the disk without interference. Track following on 0.7 μm tracks has been demonstrated using the new servo method on longitudinal rigid disks.
Noise Reduction Based on an Fe -Rh Interlayer in Exchange-Coupled Heat-Assisted Recording Media
NASA Astrophysics Data System (ADS)
Vogler, Christoph; Abert, Claas; Bruckner, Florian; Suess, Dieter
2017-11-01
High storage density and high data rate are two of the most desired properties of modern hard disk drives. Heat-assisted magnetic recording (HAMR) is believed to achieve both. Recording media, consisting of exchange-coupled grains with a high and a low TC part, were shown to have low dc noise—but increased ac noise—compared to hard magnetic single-phase grains like FePt. We extensively investigate the influence of an Fe -Rh interlayer on the magnetic noise in exchange-coupled grains. We find an optimal grain design that reduces the jitter in the down-track direction by up to 30% and in the off-track direction by up to 50%, depending on the head velocity, compared to the same structures without FeRh. Furthermore, the mechanisms causing this jitter reduction are demonstrated. Additionally, we show that, for short heat pulses and low write temperatures, the switching-time distribution of the analyzed grain structure is reduced by a factor of 4 compared to the same structure without an Fe -Rh layer. This feature could be interesting for HAMR use with a pulsed laser spot and could encourage discussion of this HAMR technique.
Flexible matrix composite laminated disk/ring flywheel
NASA Technical Reports Server (NTRS)
Gupta, B. P.; Hannibal, A. J.
1984-01-01
An energy storage flywheel consisting of a quasi-isotropic composite disk overwrapped by a circumferentially wound ring made of carbon fiber in an elastomeric matrix is proposed. Analysis demonstrated that, with an elastomeric matrix to relieve the radial stresses, a laminated disk/ring flywheel can be designed to store at least 80.3 Wh/kg, about 68% more than previous disk/ring designs, while preserving the simple construction.
SAM-FS: LSC's New Solaris-Based Storage Management Product
NASA Technical Reports Server (NTRS)
Angell, Kent
1996-01-01
SAM-FS is a full-featured hierarchical storage management (HSM) product that operates as a file system on Solaris-based machines. The SAM-FS file system provides the user with all of the standard UNIX system utilities and calls, and adds some new commands, e.g. archive, release, stage, sls, sfind, and a family of maintenance commands. The system also offers enhancements such as high-performance virtual disk reads and writes, control of the disk through an extent array, and the ability to dynamically allocate block size. SAM-FS provides 'archive sets', which are groupings of data to be copied to secondary storage. In practice, as soon as a file is written to disk, SAM-FS makes copies onto secondary media. SAM-FS is a scalable storage management system. It can manage millions of files per system, though this is limited today by the speed of UNIX and its utilities. In the future, a new search algorithm will be implemented that will remove logical and performance restrictions on the number of files managed.
Tekçe, Neslihan; Pala, Kansad; Demirci, Mustafa; Tuncer, Safa
2016-11-01
To evaluate changes in the surface characteristics of two different resin composites after 1 year of water storage using a profilometer, Vickers hardness, scanning electron microscopy (SEM), and atomic force microscopy (AFM). A total of 46 composite disk specimens (10 mm in diameter and 2 mm thick) were fabricated using Clearfil Majesty Esthetic and Clearfil Majesty Posterior (Kuraray Medical Co, Tokyo, Japan). Ten specimens from each composite were used for surface roughness and microhardness tests (n = 10). For each composite, scanning electron microscope (SEM, n = 2) and atomic force microscope (AFM, n = 1) images were obtained after 24 h and after 1 year of water storage. The data were analyzed using two-way analysis of variance and a post-hoc Bonferroni test. Microhardness values of Clearfil Majesty Esthetic decreased significantly (78.15 to 63.74, p = 0.015) while its surface roughness did not change after 1 year of water storage (0.36 to 0.39, p = 0.464). Clearfil Majesty Posterior microhardness values were quite stable (138.74 to 137.25, p = 0.784), but its surface roughness increased significantly (0.39 to 0.48, p = 0.028) over 1 year. Thus, 1 year of water storage caused the microhardness of Clearfil Majesty Esthetic to decrease and the surface roughness of Clearfil Majesty Posterior to increase. AFM and SEM images demonstrated surface deterioration of the materials after 1 year and yielded results consistent with the quantitative test methods. SCANNING 38:694-700, 2016. © 2016 Wiley Periodicals, Inc.
General consumer communication tools for improved image management and communication in medicine
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Rosset, Antoine; McCoy, J. Michael
2005-04-01
We explored emerging consumer technologies that can be adopted to improve and facilitate image and data communication in medical and clinical environments. Widely adopted communication paradigms such as instant messaging, chat, and direct email can be integrated into specific applications. The increasing capacity of portable and handheld devices such as iPod music players offers an attractive alternative for data storage that exceeds the capabilities of traditional offline storage media such as CDs or even DVDs. We adapted a medical image display and manipulation program called OSIRIX to integrate different innovative technologies facilitating communication and data transfer between remote users. We added email and instant messaging features to the program, allowing users to instantaneously email an image or a set of images displayed on the screen. Using the iChat instant messaging application from Apple, a user can share the content of his screen with a remote correspondent and communicate in real time using voice and video. To provide a convenient mechanism for exchanging large data sets, the program can store data in DICOM format on CD or DVD, but was also extended to use the large storage capacity of iPod hard disks as well as Apple's online storage service ".Mac", to which users can subscribe for scalable, secure storage accessible from anywhere on the internet. The adoption of these innovative technologies is likely to change the architecture of traditional picture archiving and communication systems and provide more flexible and efficient means of communication.
Design and implementation of a biomedical image database (BDIM).
Aubry, F; Badaoui, S; Kaplan, H; Di Paola, R
1988-01-01
We developed a biomedical image database (BDIM) that proposes a standardized representation of value arrays, such as images and curves, and of their associated parameters, independently of their acquisition mode, to ease their transmission and processing. It includes three kinds of user-oriented interactions. The network concept was kept as a constraint so that the BDIM could be incorporated into a distributed structure, and we maintained compatibility with the ACR/NEMA communication protocol. The management of arrays and their associated parameters relies on two distinct object bases linked by a gateway. The first manages arrays according to their storage mode: long-term storage on optionally on-line mass storage devices and, for consultation, partial copies of long-term stored arrays on hard disk. The second manages the associated parameters and the gateway by means of the relational DBMS ORACLE. Parameters are grouped into relations, some of which agree with the groups defined by the ACR/NEMA. The other relations describe objects resulting from processing of the initial objects. These new objects are not described by the ACR/NEMA, but they can be inserted as shadow groups of the ACR/NEMA description. The relations describing the storage and their pathnames constitute the gateway. ORACLE distributed tools and the two-level storage technique will allow the integration of the BDIM into a distributed structure. The query and retrieval module for arrays (alone or in sequences) accesses the relations via a level that includes a dictionary managed by ORACLE. This dictionary translates ACR/NEMA objects into objects that can be handled by the DBMS. (ABSTRACT TRUNCATED AT 250 WORDS)
van der Waals-Tonks-type equations of state for hard-hypersphere fluids in four and five dimensions
NASA Astrophysics Data System (ADS)
Wang, Xian-Zhi
2004-04-01
Recently, we developed accurate van der Waals-Tonks-type equations of state for hard-disk and hard-sphere fluids by using the known virial coefficients. In this paper, we derive the van der Waals-Tonks-type equations of state and further apply them to hard-hypersphere fluids in four and five dimensions. In the low-density fluid regime, these equations of state are in good agreement with simulation results and existing equations of state.
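The abstract does not reproduce the van der Waals-Tonks-type equations themselves, so as a baseline this sketch evaluates two standard hard-body equations of state that such proposals are routinely compared against: scaled-particle theory for hard disks (2D) and Carnahan-Starling for hard spheres (3D), expressed as the compressibility factor Z = p/(ρkT) versus packing fraction η.

```python
def z_hard_disks_spt(eta: float) -> float:
    """Scaled-particle-theory EOS for hard disks: Z = 1/(1 - eta)^2."""
    return 1.0 / (1.0 - eta) ** 2

def z_hard_spheres_cs(eta: float) -> float:
    """Carnahan-Starling EOS for hard spheres:
    Z = (1 + eta + eta^2 - eta^3) / (1 - eta)^3."""
    return (1 + eta + eta**2 - eta**3) / (1 - eta) ** 3

for eta in (0.1, 0.2, 0.3, 0.4):
    print(f"eta={eta:.1f}  Z_2D={z_hard_disks_spt(eta):5.2f}  "
          f"Z_3D={z_hard_spheres_cs(eta):5.2f}")
```

Both forms reduce to the ideal gas (Z = 1) as η → 0 and diverge as η → 1; virial-coefficient-based equations like the paper's are constructed to match the exact low-density expansion term by term.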
Electron trapping optical data storage system and applications
NASA Technical Reports Server (NTRS)
Brower, Daniel; Earman, Allen; Chaffin, M. H.
1993-01-01
A new technology developed at Optex Corporation outperforms all other existing data storage technologies. The Electron Trapping Optical Memory (ETOM) media store 14 gigabytes of uncompressed data on a single, double-sided 130 mm disk with a data transfer rate of up to 120 megabits per second. The disk is removable, compact, lightweight, environmentally stable, and robust. Since the Write/Read/Erase (W/R/E) processes are carried out photonically, no heating of the recording media is required; therefore, the storage media suffer no deleterious effects from repeated W/R/E cycling. This rewritable data storage technology has been developed as a basis for numerous data storage products. Industries that can benefit from ETOM data storage include satellite data and information systems, broadcasting, video distribution, image processing and enhancement, and telecommunications. Products developed for these industries are well suited for the demanding store-and-forward buffer systems, data storage, and digital video systems these applications need.
RALPH: An online computer program for acquisition and reduction of pulse height data
NASA Technical Reports Server (NTRS)
Davies, R. C.; Clark, R. S.; Keith, J. E.
1973-01-01
A background/foreground data acquisition and analysis system incorporating a high level control language was developed for acquiring both singles and dual parameter coincidence data from scintillation detectors at the Radiation Counting Laboratory at the NASA Manned Spacecraft Center in Houston, Texas. The system supports acquisition of gamma ray spectra in a 256 x 256 coincidence matrix (utilizing disk storage) and simultaneous operation of any of several background support and data analysis functions. In addition to special instruments and interfaces, the hardware consists of a PDP-9 with 24K core memory, 256K words of disk storage, and Dectape and Magtape bulk storage.
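The dual-parameter acquisition described above amounts to histogramming pairs of pulse-height channels into a 256 × 256 coincidence matrix. A hypothetical in-memory version of that accumulation (the original used disk storage on a PDP-9; the event data here are random placeholders) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)
matrix = np.zeros((256, 256), dtype=np.int64)   # coincidence matrix

# Fake coincidence events: one pulse-height channel (0-255) per detector.
events = rng.integers(0, 256, size=(10_000, 2))
for a, b in events:
    matrix[a, b] += 1          # one count per coincident detector pair

assert matrix.sum() == 10_000
print("busiest cell:", np.unravel_index(matrix.argmax(), matrix.shape))
```

A 256 × 256 matrix of even 16-bit counters needs 128K words, which explains why the original system spilled the matrix to its 256K-word disk rather than holding it in the PDP-9's 24K core memory.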
The Weekly Fab Five: Things You Should Do Every Week To Keep Your Computer Running in Tip-Top Shape.
ERIC Educational Resources Information Center
Crispen, Patrick
2001-01-01
Describes five steps that school librarians should follow every week to keep their computers running at top efficiency. Explains how to update virus definitions; run Windows update; run ScanDisk to repair errors on the hard drive; run a disk defragmenter; and backup all data. (LRW)
Sawmill: A Logging File System for a High-Performance RAID Disk Array
1995-01-01
...from limiting disk performance, new controller architectures connect the disks directly to the network so that data movement bypasses the file server. These developments raise two questions for file systems: how to get the best performance from a RAID, and how to use such a controller architecture. ...the RAID-II storage system; this architecture provides a fast data path that moves data rapidly among the disks, high-speed controller memory, and the network.
Performance of redundant disk array organizations in transaction processing environments
NASA Technical Reports Server (NTRS)
Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.
1993-01-01
A performance evaluation is conducted for two redundant disk-array organizations in a transaction-processing environment, relative to the performance of both mirrored disk organizations and organizations using neither striping nor redundancy. The proposed parity-striping alternative to striping with rotated parity is shown to furnish rapid recovery from failure at the same low storage cost without interleaving the data over multiple disks. Both noncached systems and systems using a nonvolatile cache in the controller are considered.
NASA Astrophysics Data System (ADS)
Pattabhiraman, Harini; Gantapara, Anjan P.; Dijkstra, Marjolein
2015-10-01
Using computer simulations, we study the phase behavior of a model system of colloidal hard disks with a diameter σ and a soft corona of width 1.4σ. The particles interact with a hard core and a repulsive square-shoulder potential. We calculate the free energy of the random-tiling quasicrystal and its crystalline approximants using the Frenkel-Ladd method. We explicitly account for the configurational entropy associated with the number of distinct configurations of the random-tiling quasicrystal. We map out the phase diagram and find that the random tiling dodecagonal quasicrystal is stabilised by entropy at finite temperatures with respect to the crystalline approximants that we considered, and its stability region seems to extend to zero temperature as the energies of the defect-free quasicrystal and the crystalline approximants are equal within our statistical accuracy.
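The pair interaction described above is a hard core plus a repulsive square shoulder. The sketch below reads the abstract literally, taking the corona width 1.4σ as measured outward from the core surface; the shoulder height eps is an arbitrary energy scale for illustration.

```python
import math

def pair_potential(r, sigma=1.0, width=1.4, eps=1.0):
    """Hard-core square-shoulder potential at centre-to-centre distance r."""
    if r < sigma:
        return math.inf            # overlapping hard cores: forbidden
    if r < sigma + width * sigma:
        return eps                 # inside the soft corona: constant penalty
    return 0.0                     # beyond the corona: no interaction

for r in (0.5, 1.5, 3.0):
    print(r, pair_potential(r))
```

The two length scales (core diameter and corona range) are what allow this model to favour the two distinct neighbour distances of the dodecagonal tiling, which is why core-corona disks can form quasicrystals at all.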
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apyan, A.; Badillo, J.; Cruz, J. Diaz
The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed, breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Lastly, the data hosted on disk at Tier-1s can now be made available also for user analysis, since there is no longer any risk of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.
Optical Disk Technology and Information.
ERIC Educational Resources Information Center
Goldstein, Charles M.
1982-01-01
Provides basic information on videodisks and potential applications, including inexpensive online storage, random access graphics to complement online information systems, hybrid network architectures, office automation systems, and archival storage. (JN)
Dynamo magnetic-field generation in turbulent accretion disks
NASA Technical Reports Server (NTRS)
Stepinski, T. F.
1991-01-01
Magnetic fields can play important roles in the dynamics and evolution of accretion disks. The presence of strong differential rotation and vertical density gradients in turbulent disks allows the alpha-omega dynamo mechanism to offset the turbulent dissipation and maintain strong magnetic fields. It is found that MHD dynamo magnetic-field normal modes in an accretion disk are highly localized to restricted regions of a disk. Implications for the character of real, dynamically constrained magnetic fields in accretion disks are discussed. The magnetic stress due to the mean magnetic field is found to be of the order of a viscous stress. The dominant stress, however, is likely to come from small-scale fluctuating magnetic fields. These fields may also give rise to energetic flares above the disk surface, providing a possible explanation for the highly variable hard X-ray emission from objects like Cyg X-1.
Integrated IMA (Information Mission Areas) IC (Information Center) Guide
1989-06-01
[Table of contents excerpt: Computer Aided Design / Computer Aided Manufacture; Liquid Crystal Display Panels; Artificial Intelligence Applied to VI; Desktop Publishing; Intelligent Copiers; Electronic Alternatives to Printed Documents; Electronic Forms; Optical Disk Storage; LCD Units; Image Scanners; Graphics Software; Forms Generation; Output Devices; Copiers; Work Group...]
A study of the cross-correlation and time lag in black hole X-ray binary XTE J1859+226
NASA Astrophysics Data System (ADS)
Pei, Songpeng; Ding, Guoqiang; Li, Zhibing; Lei, Yajuan; Yuen, Rai; Qu, Jinlu
2017-07-01
With Rossi X-ray Timing Explorer (RXTE) data, we systematically study the cross-correlation and time lag in all spectral states of black hole X-ray binary (BHXB) XTE J1859+226 in detail during its entire 1999-2000 outburst, which lasted for 166 days. Anti-correlations and positive correlations, and their respective soft and hard X-ray lags, are only detected in the first 100 days of the outburst when the luminosity is high. This suggests that the cross-correlations may be related to high luminosity. Positive correlations are detected in every state of XTE J1859+226, viz., the hard state, hard-intermediate state (HIMS), soft-intermediate state (SIMS) and soft state. However, anti-correlations are only detected in the HIMS and SIMS; anti-correlated hard lags are only detected in the SIMS, while anti-correlated soft lags are detected in both the HIMS and SIMS. Moreover, the ratio of observations with anti-correlated soft lags to those with hard lags detected in XTE J1859+226 is significantly different from that in neutron star low-mass X-ray binaries (NS LMXBs). So far, anti-correlations have never been detected in the soft state of BHXBs but are detected in every branch or state of NS LMXBs. This may be because the soft seed photons in BHXBs originate only from the accretion disk, whereas in NS LMXBs they come from both the accretion disk and the surface of the NS. We notice that the timescale of the anti-correlated time lags detected in XTE J1859+226 is similar to that of other BHXBs and NS LMXBs. We suggest that the anti-correlated soft lags detected in BHXBs, as in NS LMXBs, may result from fluctuations in the accretion disk.
Fluctuating Navier-Stokes equations for inelastic hard spheres or disks.
Brey, J Javier; Maynar, P; de Soria, M I García
2011-04-01
Starting from the fluctuating Boltzmann equation for smooth inelastic hard spheres or disks, closed equations for the fluctuating hydrodynamic fields to Navier-Stokes order are derived. This requires deriving constitutive relations for both the fluctuating fluxes and the correlations of the random forces. The former are identified as having the same form as the macroscopic average fluxes and involving the same transport coefficients. On the other hand, the random force terms exhibit two peculiarities as compared with their elastic limit for molecular systems. First, they are not white but have some finite relaxation time. Second, their amplitude is not determined by the macroscopic transport coefficients but involves new coefficients. ©2011 American Physical Society
Static structure of active Brownian hard disks
NASA Astrophysics Data System (ADS)
de Macedo Biniossek, N.; Löwen, H.; Voigtmann, Th; Smallenburg, F.
2018-02-01
We explore the changes in static structure of a two-dimensional system of active Brownian particles (ABP) with hard-disk interactions, using event-driven Brownian dynamics simulations. In particular, the effect of the self-propulsion velocity and the rotational diffusivity on the orientationally averaged fluid structure factor is discussed. Typically, activity increases structural ordering and generates a structure factor peak at zero wave vector, which is a precursor of motility-induced phase separation. Our results provide reference data to test future statistical theories for the fluid structure of active Brownian systems. This manuscript was submitted for the special issue of the Journal of Physics: Condensed Matter associated with the Liquid Matter Conference 2017.
High-performance mass storage system for workstations
NASA Technical Reports Server (NTRS)
Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.
1993-01-01
Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload a RISC workstation of I/O related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on the magnetic disks for fast retrieval.
The optical disks are used as archive media, and the tapes are used as backup media. The storage system is managed by the IEEE mass storage reference model-based UniTree software package. UniTree keeps track of all files in the system, automatically migrates the lesser-used files to archive media, and stages the files when needed by the system. The user can access the files without knowledge of their physical location. The high-performance mass storage system developed by Loral AeroSys will significantly boost system I/O performance and reduce the overall data storage cost. This storage system provides a highly flexible and cost-effective architecture for a variety of applications (e.g., real-time data acquisition with a signal and image processing requirement, long-term data archiving and distribution, and image analysis and enhancement).
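The hierarchical migration policy described above (frequently used files stay on magnetic disk; lesser-used files migrate to archive media) can be sketched as follows. This is an illustrative sketch only; the function name and inactivity threshold are assumptions, not the actual UniTree API.

```python
import time

# Illustrative sketch of an age-based migration policy: files untouched
# for longer than a threshold become candidates for migration to archive
# media. Names and the threshold are hypothetical, not the UniTree API.

MIGRATE_AFTER = 7 * 24 * 3600  # seconds of inactivity before archiving


def select_for_migration(files, now=None):
    """Return names of files whose last access is older than the threshold.

    files: dict mapping file name -> last-access timestamp (seconds).
    """
    now = time.time() if now is None else now
    return [name for name, last_access in files.items()
            if now - last_access > MIGRATE_AFTER]


files = {"hot.dat": 1_000_000, "cold.dat": 0}
print(select_for_migration(files, now=1_000_100))  # → ['cold.dat']
```

In a real hierarchical storage manager the migration decision would also weigh file size and free-space pressure, but the age test above captures the core idea of automatic staging between tiers.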
Large Format Multifunction 2-Terabyte Optical Disk Storage System
NASA Technical Reports Server (NTRS)
Kaiser, David R.; Brucker, Charles F.; Gage, Edward C.; Hatwar, T. K.; Simmons, George O.
1996-01-01
The Kodak Digital Science OD System 2000E automated disk library (ADL) base module and write-once drive are being developed as the next-generation commercial product to the currently available System 2000 ADL. Under government sponsorship with the Air Force's Rome Laboratory, Kodak is developing magneto-optic (M-O) subsystems compatible with the Kodak Digital Science ODW25 drive architecture, which will result in a multifunction (MF) drive capable of reading and writing 25 gigabyte (GB) WORM media and 15 GB erasable media. In an OD System 2000E ADL configuration with 4 MF drives and 100 total disks with a 50% ratio of WORM and M-O media, 2.0 terabytes (TB) of versatile nearline mass storage is available.
34 CFR 668.24 - Record retention and examinations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... representative. (3) An institution may keep required records in hard copy or in microform, computer file, optical disk, CD-ROM, or other media formats, provided that— (i) Except for the records described in paragraph (d)(3)(ii) of this section, all record information must be retrievable in a coherent hard copy format...
Enforcing Hardware-Assisted Integrity for Secure Transactions from Commodity Operating Systems
2015-08-17
OS. First, we dedicate one hard disk to each OS. A System Management Mode (SMM)-based monitoring module monitors if an OS is accessing another hard...hypervisor-based systems. An adversary can only target the BIOS-anchored SMM code, which is tiny, and without any need for foreign code (i.e. third
NASA Astrophysics Data System (ADS)
Stopper, Daniel; Thorneywork, Alice L.; Dullens, Roel P. A.; Roth, Roland
2018-03-01
Using dynamical density functional theory (DDFT), we theoretically study Brownian self-diffusion and structural relaxation of hard disks and compare to experimental results on quasi-two-dimensional colloidal hard spheres. To this end, we calculate the self-part and the distinct part of the van Hove correlation function by extending a recently proposed DDFT approach for three-dimensional systems to two dimensions. We find that the theoretical results for both the self-part and the distinct part of the van Hove function are in very good quantitative agreement with the experiments up to relatively high fluid packing fractions of roughly 0.60. However, at even higher densities, deviations between the experiment and the theoretical approach become clearly visible. Upon increasing packing fraction, in experiments, the short-time self-diffusive behavior is strongly affected by hydrodynamic effects, which lead to a significant decrease in the respective mean-squared displacement. By contrast, and in accordance with previous simulation studies, the present DDFT, which neglects hydrodynamic effects, shows no dependence on the particle density for this quantity.
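As an illustration of the quantity compared above, the self-part of the van Hove function can be estimated from particle trajectories as a normalized histogram of displacement magnitudes after a lag time. The sketch below is a generic estimator under assumed names and parameters, not code from the paper.

```python
import numpy as np

# Minimal sketch of estimating the self-part of the van Hove function
# from positions at time 0 and at lag time t (2D, as for hard disks):
# a density-normalized histogram of displacement magnitudes.


def self_van_hove(pos0, pos_t, bins=50, r_max=2.0):
    """pos0, pos_t: (N, 2) arrays of particle positions at times 0 and t.

    Returns bin centers and the normalized histogram of |r(t) - r(0)|.
    """
    dr = np.linalg.norm(pos_t - pos0, axis=1)          # displacement magnitudes
    hist, edges = np.histogram(dr, bins=bins, range=(0.0, r_max), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist
```

With `density=True`, the histogram integrates to one over the sampled range, so it can be compared directly against a theoretical G_s(r, t) at the same lag time.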
Mechanism by Which Magnesium Oxide Suppresses Tablet Hardness Reduction during Storage.
Sakamoto, Takatoshi; Kachi, Shigeto; Nakamura, Shohei; Miki, Shinsuke; Kitajima, Hideaki; Yuasa, Hiroshi
2016-01-01
This study investigated how the inclusion of magnesium oxide (MgO) maintained tablet hardness during storage in an unpackaged state. Tablets were prepared with a range of MgO levels and stored at 40°C with 75% relative humidity for up to 14 d. The hardness of tablets prepared without MgO decreased over time. The amount of added MgO was positively associated with tablet hardness and mass from an early stage during storage. Investigation of the water sorption properties of the tablet components showed that carmellose water sorption correlated positively with the relative humidity, while MgO absorbed and retained moisture, even when the relative humidity was reduced. In tablets prepared using only MgO, a petal- or plate-like material was observed during storage. Fourier transform infrared spectrophotometry showed that this material was hydromagnesite, produced when MgO reacts with water and CO2. The estimated level of hydromagnesite at each time-point showed a significant negative correlation with tablet porosity. These results suggested that MgO suppressed storage-associated softening by absorbing moisture from the environment. The conversion of MgO to hydromagnesite results in solid bridge formation between the powder particles comprising the tablets, suppressing the storage-related increase in volume and increasing tablet hardness.
Accretion in Radiative Equipartition (AiRE) Disks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yazdi, Yasaman K.; Afshordi, Niayesh, E-mail: yyazdi@pitp.ca, E-mail: nafshordi@pitp.ca
2017-07-01
Standard accretion disk theory predicts that the total pressure in disks at typical (sub-)Eddington accretion rates becomes radiation pressure dominated. However, radiation pressure dominated disks are thermally unstable. Since these disks are observed in approximate steady state over the instability timescale, our accretion models in the radiation-pressure-dominated regime (i.e., inner disk) need to be modified. Here, we present a modification to the Shakura and Sunyaev model, where the radiation pressure is in equipartition with the gas pressure in the inner region. We call these flows accretion in radiative equipartition (AiRE) disks. We introduce the basic features of AiRE disks and show how they modify disk properties such as the Toomre parameter and the central temperature. We then show that the accretion rate of AiRE disks is limited from above and below, by Toomre and nodal sonic point instabilities, respectively. The former leads to a strict upper limit on the mass of supermassive black holes as a function of cosmic time (and spin), while the latter could explain the transition between hard and soft states of X-ray binaries.
Accretion in Radiative Equipartition (AiRE) Disks
NASA Astrophysics Data System (ADS)
Yazdi, Yasaman K.; Afshordi, Niayesh
2017-07-01
Standard accretion disk theory predicts that the total pressure in disks at typical (sub-)Eddington accretion rates becomes radiation pressure dominated. However, radiation pressure dominated disks are thermally unstable. Since these disks are observed in approximate steady state over the instability timescale, our accretion models in the radiation-pressure-dominated regime (i.e., inner disk) need to be modified. Here, we present a modification to the Shakura & Sunyaev model, where the radiation pressure is in equipartition with the gas pressure in the inner region. We call these flows accretion in radiative equipartition (AiRE) disks. We introduce the basic features of AiRE disks and show how they modify disk properties such as the Toomre parameter and the central temperature. We then show that the accretion rate of AiRE disks is limited from above and below, by Toomre and nodal sonic point instabilities, respectively. The former leads to a strict upper limit on the mass of supermassive black holes as a function of cosmic time (and spin), while the latter could explain the transition between hard and soft states of X-ray binaries.
High pressure processing and storage of blueberries: effect on fruit hardness
NASA Astrophysics Data System (ADS)
Scheidt, Tiago B.; Silva, Filipa V. M.
2018-01-01
Non-thermal preservation technologies such as high pressure processing (HPP) have a low impact on original fruit flavours. The objective of this study was to process whole blueberries by HPP and investigate the effect on their hardness after processing and during 7 and 28 days of storage. Whole blueberry immersed in water was the best packaging option. The blueberries were submitted to 200 and 600 MPa for 5-60 min and stored at 3°C for 1 week. In another experiment, HPP blueberries (200 and 600 MPa for 10 min) were stored for 28 days. No difference in sensorial texture was observed between HPP and fresh unprocessed blueberries, although the instrumental hardness decreased significantly. Hardness was not affected by the processing time and was similar just after HPP and after one week of storage. The hardness of HPP-processed blueberries was maintained over 28 days of storage without considerable weight loss, as opposed to fresh fruits, which collapsed.
High-Speed Recording of Test Data on Hard Disks
NASA Technical Reports Server (NTRS)
Lagarde, Paul M., Jr.; Newnan, Bruce
2003-01-01
Disk Recording System (DRS) is a systems-integration computer program for a direct-to-disk (DTD) high-speed data acquisition system (HDAS) that records rocket-engine test data. The HDAS consists partly of equipment originally designed for recording the data on tapes. The tape recorders were replaced with hard-disk drives, necessitating the development of DRS to provide an operating environment that ties two computers, a set of five DTD recorders, and signal-processing circuits from the original tape-recording version of the HDAS into one working system. DRS includes three subsystems: (1) one that generates a graphical user interface (GUI), on one of the computers, that serves as a main control panel; (2) one that generates a GUI, on the other computer, that serves as a remote control panel; and (3) a data-processing subsystem that performs tasks on the DTD recorders according to instructions sent from the main control panel. The software affords capabilities for dynamic configuration to record single or multiple channels from a remote source, remote starting and stopping of the recorders, indexing to prevent overwriting of data, and production of filtered frequency data from an original time-series data file.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., the following definitions apply to this subchapter: Act means the Social Security Act. Administrative..., statements, and other required documents. Electronic media means: (1) Electronic storage material on which...) and any removable/transportable digital memory medium, such as magnetic tape or disk, optical disk, or...
Code of Federal Regulations, 2014 CFR
2014-10-01
..., the following definitions apply to this subchapter: Act means the Social Security Act. Administrative..., statements, and other required documents. Electronic media means: (1) Electronic storage material on which...) and any removable/transportable digital memory medium, such as magnetic tape or disk, optical disk, or...
IMDISP - INTERACTIVE IMAGE DISPLAY PROGRAM
NASA Technical Reports Server (NTRS)
Martin, M. D.
1994-01-01
The Interactive Image Display Program (IMDISP) is an interactive image display utility for the IBM Personal Computer (PC, XT and AT) and compatibles. Until recently, efforts to utilize small computer systems for display and analysis of scientific data have been hampered by the lack of sufficient data storage capacity to accommodate large image arrays. Most planetary images, for example, require nearly a megabyte of storage. The recent development of the "CDROM" (Compact Disk Read-Only Memory) storage technology makes possible the storage of up to 680 megabytes of data on a single 4.72-inch disk. IMDISP was developed for use with the CDROM storage system which is currently being evaluated by the Planetary Data System. The latest disks to be produced by the Planetary Data System are a set of three disks containing all of the images of Uranus acquired by the Voyager spacecraft. The images are in both compressed and uncompressed format. IMDISP can read the uncompressed images directly, but special software is provided to decompress the compressed images, which cannot be processed directly. IMDISP can also display images stored on floppy or hard disks. A digital image is a picture converted to numerical form so that it can be stored and used in a computer. The image is divided into a matrix of small regions called picture elements, or pixels. The rows and columns of pixels are called "lines" and "samples", respectively. Each pixel has a numerical value, or DN (data number) value, quantifying the darkness or brightness of the image at that spot. In total, each pixel has an address (line number, sample number) and a DN value, which is all that the computer needs for processing. DISPLAY commands allow the IMDISP user to display all or part of an image at various positions on the display screen. The user may also zoom in and out from a point on the image defined by the cursor, and may pan around the image.
To enable more or all of the original image to be displayed on the screen at once, the image can be "subsampled." For example, if the image were subsampled by a factor of 2, every other pixel from every other line would be displayed, starting from the upper left corner of the image. Any positive integer may be used for subsampling. The user may produce a histogram of an image file, which is a graph showing the number of pixels per DN value, or per range of DN values, for the entire image. IMDISP can also plot the DN value versus pixels along a line between two points on the image. The user can "stretch" or increase the contrast of an image by specifying low and high DN values; all pixels with values lower than the specified "low" will then become black, and all pixels higher than the specified "high" value will become white. Pixels between the low and high values will be evenly shaded between black and white. IMDISP is written in a modular form to make it easy to change it to work with different display devices or on other computers. The code can also be adapted for use in other application programs. There are device dependent image display modules, general image display subroutines, image I/O routines, and image label and command line parsing routines. The IMDISP system is written in C-language (94%) and Assembler (6%). It was implemented on an IBM PC with the MS DOS 3.21 operating system. IMDISP has a memory requirement of about 142k bytes. IMDISP was developed in 1989 and is a copyrighted work with all copyright vested in NASA. Additional planetary images can be obtained from the National Space Science Data Center at (301) 286-6695.
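The subsampling and contrast-stretch operations described above can be sketched in a few lines. Python is used here purely for illustration (IMDISP itself is written in C and assembler), and the function names are hypothetical.

```python
import numpy as np

# Sketch of two IMDISP-style operations: subsampling by an integer factor
# (keep every factor-th pixel of every factor-th line) and a linear
# contrast stretch between user-chosen low/high DN values.


def subsample(image, factor):
    """Keep every `factor`-th pixel of every `factor`-th line."""
    return image[::factor, ::factor]


def stretch(image, low, high):
    """Map DN <= low to 0 (black), DN >= high to 255 (white), linear between."""
    scaled = (image.astype(float) - low) / (high - low) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)


img = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(subsample(img, 2).shape)  # → (2, 2)
```

Subsampling by 2 quarters the displayed pixel count, which is why it lets a larger image fit on screen; the stretch simply remaps the DN range so mid-tones spread across the full black-to-white scale.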
NSSDC activities with 12-inch optical disk drives
NASA Technical Reports Server (NTRS)
Lowrey, Barbara E.; Lopez-Swafford, Brian
1986-01-01
The development status of optical-disk data transfer and storage technology at the National Space Science Data Center (NSSDC) is surveyed. The aim of the R&D program is to facilitate the exchange of large volumes of data. Current efforts focus on a 12-inch 1-Gbyte write-once/read-many disk and a disk drive which interfaces with VAX/VMS computer systems. The history of disk development at NSSDC is traced; the results of integration and performance tests are summarized; the operating principles of the 12-inch system are explained and illustrated with diagrams; and the need for greater standardization is indicated.
Physical principles and current status of emerging non-volatile solid state memories
NASA Astrophysics Data System (ADS)
Wang, L.; Yang, C.-H.; Wen, J.
2015-07-01
Today the influence of non-volatile solid-state memories on people's lives has become more prominent because of their non-volatility, low data latency, and high robustness. As a pioneering technology that is representative of non-volatile solid-state memories, flash memory has recently seen widespread application in many areas ranging from electronic appliances, such as cell phones and digital cameras, to external storage devices such as universal serial bus (USB) memory. Moreover, owing to its large storage capacity, it is expected that in the near future, flash memory will replace hard-disk drives as a dominant technology in the mass storage market, especially because of recently emerging solid-state drives. However, the rapid growth of global digital data has led to the need for flash memories to have larger storage capacity, thus requiring a further downscaling of the cell size. Such a miniaturization is expected to be extremely difficult because of the well-known scaling limit of flash memories. It is therefore necessary to either explore innovative technologies that can extend the areal density of flash memories beyond the scaling limits, or to vigorously develop alternative non-volatile solid-state memories including ferroelectric random-access memory, magnetoresistive random-access memory, phase-change random-access memory, and resistive random-access memory. In this paper, we review the physical principles of flash memories and their technical challenges that affect our ability to enhance the storage capacity. We then present a detailed discussion of novel technologies that can extend the storage density of flash memories beyond the commonly accepted limits. In each case, we subsequently discuss the physical principles of these new types of non-volatile solid-state memories as well as their respective merits and weaknesses when utilized for data storage applications.
Finally, we predict the future prospects for the aforementioned solid-state memories for the next generation of data-storage devices based on a comparison of their performance.
Optical Digital Image Storage System
1991-03-18
...figures courtesy of Sony Corporation... List of Tables: Indexing Workstation - Ease of Learning... retaining a master negative copy of the microfilm. The Sony Corporation, the supplier of the optical disk media used in the ODISS project, claims...disk. During the ODISS project, several CMSR files stored on the Sony optical disks were read several thousand times with no loss of information.
Clustering and heterogeneous dynamics in a kinetic Monte Carlo model of self-propelled hard disks
NASA Astrophysics Data System (ADS)
Levis, Demian; Berthier, Ludovic
2014-06-01
We introduce a kinetic Monte Carlo model for self-propelled hard disks to capture with minimal ingredients the interplay between thermal fluctuations, excluded volume, and self-propulsion in large assemblies of active particles. We analyze in detail the resulting (density, self-propulsion) nonequilibrium phase diagram over a broad range of parameters. We find that purely repulsive hard disks spontaneously aggregate into fractal clusters as self-propulsion is increased and rationalize the evolution of the average cluster size by developing a kinetic model of reversible aggregation. As density is increased, the nonequilibrium clusters percolate to form a ramified structure reminiscent of a physical gel. We show that the addition of a finite amount of noise is needed to trigger a nonequilibrium phase separation, showing that demixing in active Brownian particles results from a delicate balance between noise, interparticle interactions, and self-propulsion. We show that self-propulsion has a profound influence on the dynamics of the active fluid. We find that the diffusion constant has a nonmonotonic behavior as self-propulsion is increased at finite density and that activity produces strong deviations from Fickian diffusion that persist over large time scales and length scales, suggesting that systems of active particles generically behave as dynamically heterogeneous systems.
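A minimal sketch of the kind of kinetic Monte Carlo move described above: a trial displacement combining thermal noise with a bias along the particle's self-propulsion direction, accepted only if it creates no hard-disk overlap. Parameter names (sigma, delta, v0) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hedged sketch of one Monte Carlo move for a self-propelled hard disk:
# trial = current position + uniform thermal noise + bias along the
# particle's orientation; reject the move if any overlap results.

rng = np.random.default_rng(0)


def try_move(pos, i, orientations, others, sigma=1.0, delta=0.1, v0=0.5):
    """Attempt to move disk i; on acceptance, update pos in place.

    pos: (N, 2) array of disk centers; orientations: per-disk angles;
    others: indices of the disks to check for overlap; sigma: disk diameter.
    """
    noise = rng.uniform(-delta, delta, size=2)
    bias = v0 * delta * np.array([np.cos(orientations[i]),
                                  np.sin(orientations[i])])
    trial = pos[i] + noise + bias
    # hard-disk constraint: reject if any center distance falls below sigma
    if any(np.linalg.norm(trial - pos[j]) < sigma for j in others):
        return False
    pos[i] = trial
    return True
```

Sweeping this move over all disks at fixed (density, v0) is the kind of dynamics whose phase diagram the abstract explores; increasing v0 strengthens the persistent drift that drives cluster formation.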
GRS 1739-278 Observed at Very Low Luminosity with XMM-Newton and NuSTAR
NASA Astrophysics Data System (ADS)
Fürst, F.; Tomsick, J. A.; Yamaoka, K.; Dauser, T.; Miller, J. M.; Clavel, M.; Corbel, S.; Fabian, A.; García, J.; Harrison, F. A.; Loh, A.; Kaaret, P.; Kalemci, E.; Migliari, S.; Miller-Jones, J. C. A.; Pottschmidt, K.; Rahoui, F.; Rodriguez, J.; Stern, D.; Stuhlinger, M.; Walton, D. J.; Wilms, J.
2016-12-01
We present a detailed spectral analysis of XMM-Newton and NuSTAR observations of the accreting transient black hole GRS 1739-278 during a very faint low hard state at ~0.02% of the Eddington luminosity (for a distance of 8.5 kpc and a mass of 10 M⊙). The broadband X-ray spectrum between 0.5 and 60 keV can be well described by a power-law continuum with an exponential cutoff. The continuum is unusually hard for such a low luminosity, with a photon index of Γ = 1.39 ± 0.04. We find evidence for an additional reflection component from an optically thick accretion disk at the 98% likelihood level. The reflection fraction is low, with R_refl = 0.043 (+0.033, -0.023). In combination with measurements of the spin and inclination parameters made with NuSTAR during a brighter hard state by Miller et al., we seek to constrain the accretion disk geometry. Depending on the assumed emissivity profile of the accretion disk, we find a truncation radius of 15-35 R_g (5-12 R_ISCO) at the 90% confidence limit. These values depend strongly on the assumptions and we discuss possible systematic uncertainties.
Evolution of magnetic disk subsystems
NASA Astrophysics Data System (ADS)
Kaneko, Satoru
1994-06-01
The higher recording density of magnetic disks realized today has brought larger storage capacity per unit and smaller form factors. If the required access performance per MB is constant, the performance of large subsystems has to be several times better. This article mainly describes the technology for improving the performance of magnetic disk subsystems and the prospects for their future evolution. Also considered are 'crosscall pathing', which makes the data transfer channel more effective; 'disk cache', which improves performance by coupling with solid-state memory technology; and 'RAID', which improves the availability and integrity of disk subsystems by organizing multiple disk drives into a subsystem. As a result, it is concluded that since the performance of the subsystem is dominated by that of the disk cache, maximizing the performance of the disk cache subsystems is very important.
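The claim that disk-cache performance dominates the subsystem can be illustrated with a back-of-the-envelope model: the effective access time is the hit-ratio-weighted average of cache and disk latencies. The numbers below are illustrative assumptions, not figures from the article.

```python
# Simple model of a cached disk subsystem: most requests are served from
# solid-state cache, and only misses pay the full mechanical disk latency.


def effective_access_ms(hit_ratio, cache_ms, disk_ms):
    """Hit-ratio-weighted average access time in milliseconds."""
    return hit_ratio * cache_ms + (1.0 - hit_ratio) * disk_ms


# Assumed latencies: 0.1 ms for a cache hit vs 10 ms for a disk access.
print(round(effective_access_ms(0.90, 0.1, 10.0), 3))  # → 1.09
print(round(effective_access_ms(0.99, 0.1, 10.0), 3))  # → 0.199
```

Because misses are two orders of magnitude slower than hits, even small improvements in hit ratio dominate overall latency, which is the article's point about the disk cache governing subsystem performance.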
Quality Detection of Litchi Stored in Different Environments Using an Electronic Nose
Xu, Sai; Lü, Enli; Lu, Huazhong; Zhou, Zhiyan; Wang, Yu; Yang, Jing; Wang, Yajuan
2016-01-01
The purpose of this paper was to explore the utility of an electronic nose to detect the quality of litchi fruit stored in different environments. In this study, a PEN3 electronic nose was adopted to test the storage time and hardness of litchi stored in three different types of environment (room temperature, refrigerator and controlled atmosphere). After acquiring data on the hardness of the samples and from the electronic nose, linear discriminant analysis (LDA), canonical correlation analysis (CCA), a BP neural network (BPNN) and BP neural network-partial least squares regression (BPNN-PLSR) were employed for data processing. The experimental results showed that the hardness of litchi fruits stored in all three environments decreased during storage. The litchi stored at room temperature had the fastest rate of decrease in hardness, followed by those stored in the refrigerator and controlled-atmosphere environments. LDA has a poor ability to classify the storage time for the three environments in which litchi was stored. BPNN can effectively recognize the storage time of litchi stored in the refrigerator and controlled-atmosphere environments; however, the BPNN classification of litchi stored at room temperature was poor. CCA results show a significant correlation between electronic nose data and hardness data at room temperature, and the correlation is more obvious in the refrigerator and controlled-atmosphere environments. The BPNN-PLSR can effectively predict the hardness of litchi under refrigerator and controlled-atmosphere storage conditions; however, the BPNN-PLSR predictions for litchi under room temperature storage and global environment storage were poor. Thus, this experiment proved that an electronic nose can detect the quality of litchi under refrigerated storage and a controlled-atmosphere environment.
These results provide a useful reference for future studies on nondestructive and intelligent monitoring of fruit quality. PMID:27338391
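The LDA classification step described in the abstract above can be sketched in miniature. The following is a minimal two-class Fisher linear discriminant on two-dimensional "sensor response" vectors; all data values, class labels (`day1`, `day15`), and function names are invented for illustration and are not taken from the study, which used a 10-sensor PEN3 array and multi-class models.

```python
# Minimal two-class Fisher LDA on synthetic "electronic-nose" feature
# vectors (2 sensor channels). All numbers here are made up.

def mean(vs):
    n = len(vs)
    return [sum(v[i] for v in vs) / n for i in range(len(vs[0]))]

def scatter(vs, m):
    # 2x2 within-class scatter matrix: sum of outer products of deviations
    s = [[0.0, 0.0], [0.0, 0.0]]
    for v in vs:
        d = [v[0] - m[0], v[1] - m[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

def fisher_direction(a, b):
    # w = Sw^-1 (mean_a - mean_b), with Sw the pooled within-class scatter
    ma, mb = mean(a), mean(b)
    sa, sb = scatter(a, ma), scatter(b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

def classify(w, a, b, x):
    # project onto w; assign to the class whose projected mean is closer
    pa = sum(wi * mi for wi, mi in zip(w, mean(a)))
    pb = sum(wi * mi for wi, mi in zip(w, mean(b)))
    px = sum(wi * xi for wi, xi in zip(w, x))
    return "day-1" if abs(px - pa) < abs(px - pb) else "day-15"

day1 = [[2.1, 0.9], [1.9, 1.1], [2.0, 1.2]]   # hypothetical early-storage responses
day15 = [[3.0, 2.1], [3.2, 1.9], [3.1, 2.3]]  # hypothetical late-storage responses
w = fisher_direction(day1, day15)
print(classify(w, day1, day15, [2.0, 1.05]))  # → day-1
```

The abstract's finding that LDA separated storage times poorly suggests the real sensor classes overlap far more than this toy example does.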
Correlated Timing and Spectral Behavior of 4U 1705-44
NASA Astrophysics Data System (ADS)
Olive, Jean-François; Barret, Didier; Gierliński, Marek
2003-01-01
We follow the timing properties of the neutron star low-mass X-ray binary system 4U 1705-44 in different spectral states, as monitored by the Rossi X-Ray Timing Explorer over about a month. We fit the power density spectra using multiple Lorentzians. We show that the characteristic frequencies of these Lorentzians, when properly identified, fit within the correlations previously reported. The time evolution of these frequencies and their relation to the parameters of the energy spectra reported in Barret & Olive are used to constrain changes in the accretion geometry. The spectral data were fitted by the sum of a blackbody and a Comptonized component and were interpreted in the framework of a truncated accretion disk geometry with a varying truncation radius. If one assumes that the characteristic frequencies of the Lorentzians are some measure of this truncation radius, as in most theoretical models, then the timing data presented here strengthen the above interpretation. The soft-to-hard and hard-to-soft transitions are clearly associated with the disk receding from and approaching the neutron star, respectively. During the transitions, correlations are found between the Lorentzian frequencies and the flux and temperature of the blackbody, which is thus likely to be coming from the disk. On the other hand, in the hard state, the lowest characteristic Lorentzian frequencies remained nearly constant despite significant evolution of the spectral parameters. The disk no longer contributes to the X-ray emission, and the blackbody is now likely to be emitted by the neutron star surface, which provides the seed photons for the Comptonization.
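The multi-Lorentzian description of a power density spectrum used above can be sketched numerically. The component parameters below are invented placeholders, not the fitted values for 4U 1705-44; the "characteristic frequency" convention shown is the one commonly used in X-ray timing work.

```python
# Sketch of a multi-Lorentzian power-density-spectrum model.
# Component parameters are illustrative only.

import math

def lorentzian(nu, nu0, fwhm, norm):
    # standard Lorentzian profile in frequency
    hw = fwhm / 2.0
    return norm * hw / (math.pi * ((nu - nu0) ** 2 + hw ** 2))

def char_frequency(nu0, fwhm):
    # characteristic (peak) frequency nu_max = sqrt(nu0^2 + (fwhm/2)^2),
    # the quantity usually tracked when identifying Lorentzian components
    return math.sqrt(nu0 ** 2 + (fwhm / 2.0) ** 2)

# hypothetical three-component model: (centroid Hz, FWHM Hz, normalization)
components = [(0.0, 0.5, 1.0), (2.0, 1.0, 0.5), (20.0, 10.0, 0.2)]

def model_pds(nu):
    # total model power at frequency nu: sum over Lorentzian components
    return sum(lorentzian(nu, *p) for p in components)

print([round(char_frequency(nu0, w), 3) for nu0, w, _ in components])
```

In a real analysis these parameters would be fitted to the observed power spectrum; tracking `char_frequency` across observations is what reveals the frequency correlations the abstract discusses.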
NASA Technical Reports Server (NTRS)
Zhang, S. N.; Zhang, Xiaoling; Sun, Xuejun; Yao, Yangsen; Cui, Wei; Chen, Wan; Wu, Xuebing; Xu, Haiguang
1999-01-01
We have carried out systematic modeling of the X-ray spectra of the Galactic superluminal jet sources GRS 1915+105 and GRO J1655-40, using our newly developed spectral fitting methods. Our results reveal, for the first time, a three-layered structure of the atmosphere in the inner region of the accretion disks. Above the commonly known, cold and optically thick disk with a blackbody temperature of 0.2-0.5 keV, there is a layer of warm gas with a temperature of 1.0-1.5 keV and an optical depth of around 10. Compton scattering of the underlying disk blackbody photons produces the soft X-ray component we commonly observe. Under certain conditions, there is also a much hotter, optically thin corona above the warm layer, characterized by a temperature of 100 keV or higher and an optical depth of unity or less. The corona produces the hard X-ray component typically seen in these sources. We emphasize that the existence of the warm layer seems to be independent of the presence of the hot corona and, therefore, is not due to irradiation of the disk by hard X-rays from the corona. Our results suggest a striking structural similarity between the accretion disks and the solar atmosphere, which may provide a new stimulus to study the common underlying physical processes operating in these vastly different systems. We also report the first unambiguous detection of an emission line around 6.4 keV in GRO J1655-40, which may allow further constraining of the accretion disk structure. We acknowledge NASA GSFC and MFC for partial financial support. (copyright) 1999: American Astronomical Society. All rights reserved.
Effect of storage in artificial saliva and thermal cycling on Knoop hardness of resin denture teeth.
Assunção, Wirley Gonçalves; Gomes, Erica Alves; Barão, Valentim Adelino Ricardo; Barbosa, Débora Barros; Delben, Juliana Aparecida; Tabata, Lucas Fernando
2010-07-01
This study aimed to evaluate the effect of different storage periods in artificial saliva and of thermal cycling on the Knoop hardness of 8 commercial brands of resin denture teeth. Eight different brands of resin denture teeth were evaluated (Artplus group, Biolux group, Biotone IPN group, Myerson group, SR Orthosit group, Trilux group, Trubyte Biotone group, and Vipi Dent Plus group). Twenty-four teeth of each brand had their occlusal surfaces ground flat and were embedded in autopolymerized acrylic resin. After polishing, the teeth were submitted to different conditions: (1) immersion in distilled water at 37+/-2 degrees C for 48+/-2 h (control); (2) storage in artificial saliva at 37+/-2 degrees C for 15, 30 and 60 days; and (3) thermal cycling between 5 and 55 degrees C with 30-s dwell times for 5000 cycles. The Knoop hardness test was performed after each condition. Data were analyzed with two-way ANOVA and Tukey's test (alpha=.05). In general, the SR Orthosit group presented the highest statistically significant Knoop hardness value, while the Myerson group exhibited the smallest statistically significant mean (P<.05) in the control period, after thermal cycling, and after all storage periods. The Knoop hardness means obtained before the thermal cycling procedure (20.34+/-4.45 KHN) were statistically higher than those reached after thermal cycling (19.77+/-4.13 KHN). All brands of resin denture teeth were significantly softened after the storage period in artificial saliva. Storage in saliva and thermal cycling significantly reduced the Knoop hardness of the resin denture teeth. SR Orthosit denture teeth showed the highest Knoop hardness values regardless of the condition tested. Copyright 2010 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
Evaluation of Optical Disk Jukebox Software.
ERIC Educational Resources Information Center
Ranade, Sanjay; Yee, Fonald
1989-01-01
Discusses software that is used to drive and access optical disk jukeboxes, which are used for data storage. Categories of the software are described, user categories are explained, the design of implementation approaches is discussed, and representative software products are reviewed. (eight references) (LRW)
40 CFR 94.509 - Maintenance of records; submittal of information.
Code of Federal Regulations, 2013 CFR
2013-07-01
... disk, or some other method of data storage, depending upon the manufacturer's record retention..., associated storage facility or port facility, and the date the engine was received at the testing facility...
40 CFR 94.509 - Maintenance of records; submittal of information.
Code of Federal Regulations, 2011 CFR
2011-07-01
... disk, or some other method of data storage, depending upon the manufacturer's record retention..., associated storage facility or port facility, and the date the engine was received at the testing facility...
40 CFR 94.509 - Maintenance of records; submittal of information.
Code of Federal Regulations, 2012 CFR
2012-07-01
... disk, or some other method of data storage, depending upon the manufacturer's record retention..., associated storage facility or port facility, and the date the engine was received at the testing facility...
40 CFR 94.509 - Maintenance of records; submittal of information.
Code of Federal Regulations, 2014 CFR
2014-07-01
... disk, or some other method of data storage, depending upon the manufacturer's record retention..., associated storage facility or port facility, and the date the engine was received at the testing facility...
Nanoscale roughness contact in a slider-disk interface.
Hua, Wei; Liu, Bo; Yu, Shengkai; Zhou, Weidong
2009-07-15
The nanoscale roughness contact between molecularly smooth surfaces of a slider-disk interface in a hard disk drive is analyzed, and the lubricant behavior at very high shear rate is presented. A new contact model is developed to study the nanoscale roughness contact behavior by classifying the various forms of contact into slider-lubricant contact, slider-disk elastic contact, and plastic contact. The contact pressure and the contact probabilities of the three types of contact are investigated. The new contact model is employed to explain and provide insight into an interesting experimental result found in a thermal protrusion slider. The protrusion budget for head surfing in the lubricant, which is the ideal state for contact recording, is also discussed.
Experimental dynamic characterizations and modelling of disk vibrations for HDDs.
Pang, Chee Khiang; Ong, Eng Hong; Guo, Guoxiao; Qian, Hua
2008-01-01
Currently, the rotational speed of spindle motors in HDDs (Hard-Disk Drives) is increasing to improve data throughput and decrease rotational latency for ultra-high data transfer rates. However, the disk platters are excited to vibrate at their natural frequencies by stronger air-flow excitation as well as by eccentricities and imbalances in the disk-spindle assembly. These factors contribute directly to TMR (Track Mis-Registration), which limits the achievable high recording density essential for future mobile HDDs. In this paper, the natural mode shapes of an annular disk mounted on a spindle motor used in current HDDs are characterized using FEM (Finite Element Method) analysis and verified with SLDV (Scanning Laser Doppler Vibrometer) measurements. The identified vibration frequencies and amplitudes of the disk ODS (Operating Deflection Shapes) at the corresponding disk mode shapes are modelled as repeatable disturbance components for servo compensation in HDDs. Our experimental results show that the SLDV measurements are accurate in capturing static disk mode shapes without the need for intricate air-flow aero-elastic models, and the proposed disk ODS vibration model correlates well with experimental measurements from an LDV.
Towards more stable operation of the Tokyo Tier2 center
NASA Astrophysics Data System (ADS)
Nakamura, T.; Mashimo, T.; Matsui, N.; Sakamoto, H.; Ueda, I.
2014-06-01
The Tokyo Tier2 center, located at the International Center for Elementary Particle Physics (ICEPP) of the University of Tokyo, was established as a regional analysis center in Japan for the ATLAS experiment. Official operation within the WLCG started in 2007, after several years of development dating back to 2002. In December 2012, we replaced almost all hardware in the third system upgrade, to handle analysis of the ever-growing data volume of the ATLAS experiment. The number of CPU cores was doubled (to 9984 cores in total), and the performance of each core improved by 20% according to the HEPSPEC06 benchmark in 32-bit compile mode; the score per core is 18.03 (SL6) using an Intel Xeon E5-2680 at 2.70 GHz. Since all worker nodes have a 16-core configuration, we deployed 624 blade servers in total. They are connected to 6.7 PB of disk storage through a non-blocking 10 Gbps internal network backbone built on two central network switches (NetIron MLXe-32). The disk storage consists of 102 RAID6 disk arrays (Infortrend DS S24F-G2840-4C16DO0) served by an equal number of 1U file servers with 8G-FC connections, to maximize file-transfer throughput per unit of storage capacity. As of February 2013, 2560 CPU cores and 2.00 PB of disk storage had already been deployed for WLCG. The remaining non-grid CPU and disk resources are currently dedicated to data analysis by the ATLAS Japan collaborators. Since all hardware in the non-grid resources shares the same architecture as the Tier2 resources, it can be migrated to the Tier2 as extra resources on demand of the ATLAS experiment in the future. In addition to the upgrade of computing resources, we expect improved wide-area network connectivity.
Thanks to the Japanese NREN (NII), another 10 Gbps trans-Pacific line from Japan to Washington will become available in addition to the existing two 10 Gbps lines (Tokyo to New York and Tokyo to Los Angeles). The new line will be connected to LHCONE to further improve connectivity. In these circumstances, we are working toward further stabilizing operation. For instance, we have newly introduced GPFS (IBM) for the non-grid disk storage, while Disk Pool Manager (DPM) continues to be used for the Tier2 disk storage, as in the previous system. Since the number of files stored in a DPM pool will grow with the total amount of data, developing a stable database configuration is one of the crucial issues, as is scalability. We have started studies on the performance of asynchronous database replication so that we can take daily full backups. In this report, we introduce several improvements in the performance and stability of our new system, discuss the possibility of further improving local I/O performance in multi-core worker nodes, and present the status of the wide-area network connectivity from Japan to the US and EU via LHCONE.
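The capacity figures quoted in this record are internally consistent, which can be checked with simple arithmetic; the aggregate HEPSPEC06 value computed below is derived from the quoted per-core score and is not itself stated in the abstract.

```python
# Cross-checking the Tokyo Tier2 figures quoted above: 16 cores per
# node, 624 blade servers, 18.03 HEPSPEC06 per core (SL6).

cores_per_node = 16
nodes = 624
hs06_per_core = 18.03

total_cores = cores_per_node * nodes
total_hs06 = total_cores * hs06_per_core   # derived, not quoted

print(total_cores)        # → 9984, matching the quoted total
print(round(total_hs06))  # → 180012, implied aggregate HEPSPEC06
```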
Flavor and chiral stability of lemon-flavored hard tea during storage.
He, Fei; Qian, YanPing L; Qian, Michael C
2018-01-15
Flavor stability of a hard tea beverage was investigated over eight weeks of storage. The volatile compounds were analyzed using solid-phase microextraction-gas chromatography-mass spectrometry (SPME-GC-MS) and two-dimensional GC-MS. Quantitative analysis showed that the concentrations of linalool, citronellol, geranial, neral, geraniol, and nerol decreased dramatically during storage, whereas α-terpineol showed an increasing trend. Heart-cut two-dimensional GC-MS (2D-GC-MS) chirality analysis showed that (R)-(+)-limonene, (R)-(-)-linalool, (S)-(-)-α-terpineol and (S)-(-)-4-terpineol dominated in the fresh hard tea samples; however, the configuration of the terpene alcohols changed during storage. The storage conditions did not change the configuration of limonene. A conversion of (R)-(-)-linalool to the (S)-(+) form was observed during storage. Both (S)-α-terpineol and (S)-4-terpineol dominated at the beginning of storage, but (R)-(+)-α-terpineol became dominant after storage, suggesting that, in addition to isomerization from (S)-α-terpineol, other precursors could also generate α-terpineol with a preference for the (R)-isomer. Copyright © 2017 Elsevier Ltd. All rights reserved.
Disks around merging binary black holes: From GW150914 to supermassive black holes
NASA Astrophysics Data System (ADS)
Khan, Abid; Paschalidis, Vasileios; Ruiz, Milton; Shapiro, Stuart L.
2018-02-01
We perform magnetohydrodynamic simulations in full general relativity of disk accretion onto nonspinning black hole binaries with mass ratio q = 29/36. We survey different disk models, which differ in their scale height, total size, and magnetic field, to quantify the robustness of previous simulations to the initial disk model. Scaling our simulations to LIGO GW150914, we find that such systems could explain possible gravitational-wave and electromagnetic counterparts such as the Fermi GBM hard X-ray signal reported 0.4 s after GW150914 ended. Scaling our simulations to supermassive binary black holes, we find that observable flow properties such as accretion-rate periodicities, the emergence of jets throughout inspiral, merger, and postmerger, disk temperatures, thermal frequencies, and the time delay between merger and the boost in jet outflows that we reported in earlier studies display only modest dependence on the initial disk models we consider here.
RAMA: A file system for massively parallel computers
NASA Technical Reports Server (NTRS)
Miller, Ethan L.; Katz, Randy H.
1993-01-01
This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used.
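The low-synchronization property described above comes from placing file blocks by hashing rather than by central metadata lookup. The following sketch shows the general idea of hash-based block placement; the node and disk counts, function names, and layout are illustrative assumptions, not RAMA's actual on-disk layout.

```python
# Sketch of hash-based block placement in the spirit of RAMA: a file
# block's location is computed from (file id, block number), so any node
# can locate any block without consulting a central metadata server or
# synchronizing with other nodes. Parameters are illustrative.

import hashlib

NUM_NODES = 8        # hypothetical processor count
DISKS_PER_NODE = 2   # "a few disks per processor"

def place_block(file_id: int, block_no: int):
    # hash the (file, block) pair to a stable 64-bit value
    digest = hashlib.sha256(f"{file_id}:{block_no}".encode()).digest()
    h = int.from_bytes(digest[:8], "big")
    node = h % NUM_NODES
    disk = (h // NUM_NODES) % DISKS_PER_NODE
    return node, disk

# every node computes the same location independently
locations = [place_block(42, b) for b in range(4)]
print(locations)
```

Because placement is a pure function of the block's identity, consecutive blocks of one file scatter across nodes, which is also how such designs spread I/O load over the interconnect.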
Software for Optical Archive and Retrieval (SOAR) user's guide, version 4.2
NASA Technical Reports Server (NTRS)
Davis, Charles
1991-01-01
The optical disk is an emerging technology. Because it is not a magnetic medium, it offers a number of distinct advantages over the established forms of storage, advantages that make it extremely attractive. They are as follows: (1) the ability to store much more data within the same space; (2) the random access characteristics of the Write Once Read Many optical disk; (3) a much longer life than that of traditional storage media; and (4) a much greater data access rate. The user's guide for the Software for Optical Archive and Retrieval (SOAR) is presented.
Free Factories: Unified Infrastructure for Data Intensive Web Services
Zaranek, Alexander Wait; Clegg, Tom; Vandewege, Ward; Church, George M.
2010-01-01
We introduce the Free Factory, a platform for deploying data-intensive web services using small clusters of commodity hardware and free software. Independently administered virtual machines called Freegols give application developers the flexibility of a general purpose web server, along with access to distributed batch processing, cache and storage services. Each cluster exploits idle RAM and disk space for cache, and reserves disks in each node for high bandwidth storage. The batch processing service uses a variation of the MapReduce model. Virtualization allows every CPU in the cluster to participate in batch jobs. Each 48-node cluster can achieve 4-8 gigabytes per second of disk I/O. Our intent is to use multiple clusters to process hundreds of simultaneous requests on multi-hundred terabyte data sets. Currently, our applications achieve 1 gigabyte per second of I/O with 123 disks by scheduling batch jobs on two clusters, one of which is located in a remote data center. PMID:20514356
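The batch processing service above is described as a variation of the MapReduce model. The following is a generic in-process word-count sketch of that model, not Freegol-specific code; `map_reduce` and the sample documents are invented for illustration.

```python
# A minimal in-process MapReduce sketch: a map phase emits (key, value)
# pairs, values are grouped by key, and a reduce phase combines each
# group. Real implementations distribute both phases across a cluster.

from collections import defaultdict

def map_reduce(inputs, mapper, reducer):
    # map phase: each input item emits zero or more (key, value) pairs
    intermediate = defaultdict(list)
    for item in inputs:
        for key, value in mapper(item):
            intermediate[key].append(value)
    # reduce phase: combine all values collected for each key
    return {k: reducer(k, vs) for k, vs in intermediate.items()}

docs = ["free factory", "free software"]
counts = map_reduce(
    docs,
    mapper=lambda doc: [(w, 1) for w in doc.split()],
    reducer=lambda k, vs: sum(vs),
)
print(counts)  # → {'free': 2, 'factory': 1, 'software': 1}
```

In the clustered setting the paper describes, the grouping step becomes a network shuffle, which is where virtualizing every CPU for batch work pays off.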
Wide-area-distributed storage system for a multimedia database
NASA Astrophysics Data System (ADS)
Ueno, Masahiro; Kinoshita, Shigechika; Kuriki, Makato; Murata, Setsuko; Iwatsu, Shigetaro
1998-12-01
We have developed a wide-area-distributed storage system for multimedia databases, which minimizes the possibility of simultaneous failure of multiple disks in the event of a major disaster. It features a RAID system whose member disks are spatially distributed over a wide area. Each node has a device which includes the controller of the RAID and the controller of the member disks controlled by other nodes. The devices in the node are connected to a computer using fiber optic cables and communicate using fiber-channel technology. Any computer at a node can utilize multiple devices connected by optical fibers as a single 'virtual disk.' The advantage of this system structure is that devices and fiber optic cables are shared by the computers. In this report, we first describe our proposed system and the prototype used for testing. We then discuss its performance, i.e., how read and write throughputs are affected by data-access delay, the RAID level, and queuing.
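The disaster-tolerance property of spatially distributing RAID members rests on parity reconstruction. The following is a RAID-4/5-style XOR-parity sketch under assumed fixed, equal-size chunks; the paper's actual controller design and chosen RAID level are not reproduced here.

```python
# RAID-style striping with XOR parity across distributed member disks:
# losing any single member leaves the stripe fully recoverable.

def make_stripe(data_chunks):
    # parity chunk = XOR of all data chunks (RAID-4/5 style)
    parity = bytes(len(data_chunks[0]))
    for chunk in data_chunks:
        parity = bytes(a ^ b for a, b in zip(parity, chunk))
    return list(data_chunks) + [parity]

def recover(stripe, lost_index):
    # XOR of the surviving chunks reconstructs the lost one, because
    # every data bit appears an even number of times in that XOR except
    # the lost chunk's bits, which appear exactly once
    survivors = [c for i, c in enumerate(stripe) if i != lost_index]
    out = bytes(len(survivors[0]))
    for chunk in survivors:
        out = bytes(a ^ b for a, b in zip(out, chunk))
    return out

chunks = [b"node", b"s in", b"Toky"]   # equal-size chunks at 3 sites
stripe = make_stripe(chunks)           # 4th member holds parity
assert recover(stripe, 1) == b"s in"   # site 1 destroyed, data rebuilt
print("recovered:", recover(stripe, 1))
```

Placing the members at geographically separate nodes, as the paper does, turns this single-disk tolerance into single-site disaster tolerance.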
Mechanism of Na-Ion Storage in Hard Carbon Anodes Revealed by Heteroatom Doping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhifei; Bommier, Clement; Chong, Zhi Sen
Hard carbon is the candidate anode material for the commercialization of Na-ion batteries, batteries that, by virtue of being constructed from inexpensive and abundant components, open the door to massive scale-up of battery-based storage of electrical energy. Holding back their development is that a complete understanding of the mechanism of Na-ion storage in hard carbon has remained elusive. As an amorphous carbon, hard carbon possesses a subtle and complex structure composed of domains of layered, rumpled sheets that have local order resembling graphene within each layer but complete disorder along the c-axis between layers. Here, we present two key discoveries: first, that characteristics of hard carbon's structure can be modified systematically by heteroatom doping, and second, that these changes greatly affect Na-ion storage properties, which reveals the mechanisms for Na storage in hard carbon. Specifically, P, S, and B doping was used to engineer the density of local defects in the graphenic layers and to modify the spacing between the layers. Opening the interlayer spacing through P or S doping extends the low-voltage capacity plateau, while increasing the defect concentration through P or B doping achieves a high first sodiation capacity. Furthermore, we observe that the highly defective B-doped hard carbon suffers a tremendous irreversible capacity in the first desodiation cycle. Our combined first-principles calculations and experimental studies revealed a new trapping mechanism, showing that the high binding energies between B-doping-induced defects and Na-ions are responsible for the irreversible capacity. The understanding generated in this work provides a new set of guiding principles for materials engineers working to optimize hard carbon for Na-ion battery applications.
Mechanism of Na-Ion Storage in Hard Carbon Anodes Revealed by Heteroatom Doping
Li, Zhifei; Bommier, Clement; Chong, Zhi Sen; ...
2017-05-23
Hard carbon is the candidate anode material for the commercialization of Na-ion batteries, batteries that, by virtue of being constructed from inexpensive and abundant components, open the door to massive scale-up of battery-based storage of electrical energy. Holding back their development is that a complete understanding of the mechanism of Na-ion storage in hard carbon has remained elusive. As an amorphous carbon, hard carbon possesses a subtle and complex structure composed of domains of layered, rumpled sheets that have local order resembling graphene within each layer but complete disorder along the c-axis between layers. Here, we present two key discoveries: first, that characteristics of hard carbon's structure can be modified systematically by heteroatom doping, and second, that these changes greatly affect Na-ion storage properties, which reveals the mechanisms for Na storage in hard carbon. Specifically, P, S, and B doping was used to engineer the density of local defects in the graphenic layers and to modify the spacing between the layers. Opening the interlayer spacing through P or S doping extends the low-voltage capacity plateau, while increasing the defect concentration through P or B doping achieves a high first sodiation capacity. Furthermore, we observe that the highly defective B-doped hard carbon suffers a tremendous irreversible capacity in the first desodiation cycle. Our combined first-principles calculations and experimental studies revealed a new trapping mechanism, showing that the high binding energies between B-doping-induced defects and Na-ions are responsible for the irreversible capacity. The understanding generated in this work provides a new set of guiding principles for materials engineers working to optimize hard carbon for Na-ion battery applications.
Use of redundant arrays of inexpensive disks in orthodontic practice.
Graham, David Matthew; Graham, Michael James; Mupparapu, Mel
2017-04-01
In a time when orthodontists are getting away from paper charts and going digital with their patient data and imaging, practitioners need to be prepared for a potential hardware failure in their data infrastructure. Although a backup plan in accordance with the Security Rule of the Health Insurance Portability and Accountability Act (HIPAA) of 1996 may prevent data loss in case of a disaster or hard drive failure, it does little to ensure business and practice continuity. Through the implementation of a common technique used in information technology, the redundant array of inexpensive disks, a practice may continue normal operations without interruption if a hard drive fails. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Reducing disk storage of full-3D seismic waveform tomography (F3DT) through lossy online compression
NASA Astrophysics Data System (ADS)
Lindstrom, Peter; Chen, Po; Lee, En-Jui
2016-08-01
Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may potentially open up the possibility of wide adoption of F3DT-SI in routine seismic tomography practices in the near future.
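The "user-specified tolerance for bounding the error" idea can be illustrated with the simplest possible bounded-error scheme: uniform scalar quantization. This is a toy stand-in only; the production compressor used in the F3DT-SI workflow is far more sophisticated, and the field values below are invented.

```python
# Minimal bounded-error lossy "compression" sketch: rounding each value
# to a grid of spacing 2*tol guarantees |x - reconstructed(x)| <= tol,
# because rounding to the nearest grid point errs by at most half a step.

def compress(values, tol):
    # map each float to an integer bin index; bins are 2*tol wide
    return [round(v / (2.0 * tol)) for v in values]

def decompress(indices, tol):
    # reconstruct each value at its bin centre
    return [i * 2.0 * tol for i in indices]

field = [0.013, -0.502, 3.14159, 12.0]   # invented strain-like samples
tol = 0.01
rec = decompress(compress(field, tol), tol)
assert all(abs(a - b) <= tol for a, b in zip(field, rec))
print(rec)
```

Real floating-point compressors combine such quantization with entropy coding and exploitation of spatial smoothness, which is what yields the order-of-magnitude storage reduction reported above.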
Reducing Disk Storage of Full-3D Seismic Waveform Tomography (F3DT) Through Lossy Online Compression
Lindstrom, Peter; Chen, Po; Lee, En-Jui
2016-05-05
Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may potentially open up the possibility of wide adoption of F3DT-SI in routine seismic tomography practices in the near future.
Pooling the resources of the CMS Tier-1 sites
Apyan, A.; Badillo, J.; Cruz, J. Diaz; ...
2015-12-23
The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Lastly, the data hosted on disk at Tier-1s can now be made available also for user analysis since there is no risk any longer of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service.
We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.
Pooling the resources of the CMS Tier-1 sites
NASA Astrophysics Data System (ADS)
Apyan, A.; Badillo, J.; Diaz Cruz, J.; Gadrat, S.; Gutsche, O.; Holzman, B.; Lahiff, A.; Magini, N.; Mason, D.; Perez, A.; Stober, F.; Taneja, S.; Taze, M.; Wissing, C.
2015-12-01
The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity and to archive its data. During the first run of the LHC, these two functions were tightly coupled, as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed, breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before being archived permanently, and temporary data formats can be produced without wasting valuable tape resources. Finally, the data hosted on disk at Tier-1s can now also be made available for user analysis, since there is no longer a risk of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.
Data Management, the Victorian era child of the 21st century
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farber, Rob
2007-03-30
Do you remember when a gigabyte disk drive was “a lot” of storage in that bygone age of the 20th century? Still in our first decade of the 21st century, major supercomputer sites now speak of storage in terms of petabytes (10¹⁵ bytes), a six-orders-of-magnitude increase in capacity over a gigabyte! Unlike our archaic “big” disk drive, where all the data was in one place, HPC storage is now distributed across many machines and even across the Internet. Collaborative research engages many scientists who need to find and use each other's data, preferably in an automated fashion, which complicates an already muddled problem.
Records Management with Optical Disk Technology: Now Is the Time.
ERIC Educational Resources Information Center
Retherford, April; Williams, W. Wes
1991-01-01
The University of Kansas record management system using optical disk storage in a network environment and the selection process used to meet existing hardware and budgeting requirements are described. Viability of the technology, document legality, and difficulties encountered during implementation are discussed. (Author/MSE)
Revelations of X-ray spectral analysis of the enigmatic black hole binary GRS 1915+105
NASA Astrophysics Data System (ADS)
Peris, Charith; Remillard, Ronald A.; Steiner, James; Vrtilek, Saeqa Dil; Varniere, Peggy; Rodriguez, Jerome; Pooley, Guy
2016-01-01
Of the black hole binaries discovered thus far, GRS 1915+105 stands out as an exceptional source, primarily due to its wild X-ray variability, the diversity of which has not been replicated in any other stellar-mass black hole. Although extreme variability is commonplace in its light curve, about half of the observations of GRS 1915+105 show fairly steady X-ray intensity. We report on the X-ray spectral behavior within these steady observations. Our work is based on a vast RXTE/PCA data set obtained on GRS 1915+105 during the course of its entire mission and 10 years of radio data from the Ryle Telescope, which overlap the X-ray data. We find that the steady observations within the X-ray data set naturally separate into two regions in a color-color diagram, which we refer to as steady-soft and steady-hard. GRS 1915+105 displays significant curvature in the Comptonization component within the PCA band pass, suggesting significant heating from a hot disk present in all states. A new Comptonization model, 'simplcut', was developed in order to model this curvature to best effect. A majority of the steady-soft observations display a roughly constant inner radius, remarkably reminiscent of canonical soft-state black hole binaries. In contrast, the steady-hard observations display a growing disk truncation that is correlated with the mass accretion rate through the disk, which suggests a magnetically truncated disk. A comparison of X-ray model parameters to the canonical state definitions shows that almost all steady-soft observations match the criteria of either the thermal or the steep power-law state, while the thermal-state observations dominate the constant-radius branch. A large portion (80%) of the steady-hard observations matches the hard-state criteria when the disk fraction constraint is neglected. These results suggest that within the complexity of this source is a simpler underlying basis of states, which map to those observed in canonical black hole binaries. When represented in a color-color diagram, the state assignments appear to map to the ``A, B and C'' (Belloni et al. 2000) regions that govern fast variability cycles in GRS 1915+105, demonstrating a compelling link between short and long time scales in its phenomenology.
A report on the ST ScI optical disk workstation
NASA Technical Reports Server (NTRS)
1985-01-01
The STScI optical disk project was designed to explore the options, opportunities and problems presented by optical disk technology, and to see if optical disks are a viable, and inexpensive, means of storing the large amounts of data found in astronomical digital imagery. A separate workstation, on which the development could be done, was purchased; it serves as an astronomical image processing computer, incorporating the optical disks into the solution of standard image processing tasks. The results indicate that small workstations can be powerful tools for image processing, and that astronomical image processing may be more conveniently and cost-effectively performed on microcomputers than on mainframes and super-minicomputers. The optical disks provide unique capabilities in data storage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabbaghi, Mostafa, E-mail: mostafas@buffalo.edu; Esmaeilian, Behzad, E-mail: b.esmaeilian@neu.edu; Raihanian Mashhadi, Ardeshir, E-mail: ardeshir@buffalo.edu
Highlights: • We analyzed a data set of HDDs returned to an e-waste collection site. • We studied factors that affect storage behavior. • Consumer type, brand and size are among the factors that affect storage behavior. • Commercial consumers have stored computers longer than household consumers. • Machine learning models were used to predict storage behavior. - Abstract: Consumers often tend to store their used, old or non-functional electronics for a period of time before they discard them and return them to the waste stream. This behavior increases the obsolescence rate of used, still-functional products, reducing the profitability that could result from End-of-Use (EOU) treatments such as reuse, upgrade, and refurbishment. These behaviors are influenced by several product- and consumer-related factors such as consumers’ traits and lifestyles, technology evolution, product design features, product market value, and pro-environmental stimuli. A better understanding of different groups of consumers, their utilization and storage behavior, and the connection of these behaviors with product design features helps Original Equipment Manufacturers (OEMs) and the recycling and recovery industry overcome the challenges resulting from the undesirable storage of used products. This paper aims to provide an insightful statistical analysis of the dynamic nature of Electronic Waste (e-waste) by studying the effects of design characteristics, brand and consumer type on electronics usage time and end-of-use time-in-storage. A database consisting of 10,063 Hard Disk Drives (HDD) from used personal computers returned to a remanufacturing facility located in Chicago, IL, USA during 2011–2013 was selected as the basis for this study. The results show that commercial consumers have stored computers longer than household consumers, regardless of brand and capacity. Moreover, a heterogeneous storage behavior is observed for different brands of HDDs, regardless of capacity and consumer type. Finally, the storage behavior trends are projected for short-term forecasting and the storage times are predicted by applying machine learning methods.
Using Monte-Carlo Simulations to Study the Disk Structure in Cygnus X-1
NASA Technical Reports Server (NTRS)
Yao, Y.; Zhang, S. N.; Zhang, X. L.; Feng, Y. X.
2002-01-01
As the first dynamically determined black hole X-ray binary system, Cygnus X-1 has been studied extensively. However, its broad-band hard-state spectrum observed with BeppoSAX is still not well understood. Besides the soft excess described by the multi-color disk model (MCD), the power-law component and a broad excess feature above 10 keV (the disk reflection component), there is also an additional soft component around 1 keV whose origin is currently unknown. We propose that this additional soft component is due to thermal Comptonization of the soft disk photons by the warm plasma cloud just above the disk, i.e., a warm layer. We use a Monte-Carlo technique to simulate this Compton scattering process and build several table models based on our simulation results.
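As a toy illustration of the kind of Monte-Carlo Comptonization the abstract describes (a sketch under strong simplifying assumptions, not the authors' code): in a thermal plasma a photon gains on average a factor ~(1 + 4kT/m_e c²) per scattering, with m_e c² = 511 keV, and we let it escape the layer with a fixed probability per interaction.

```python
# Toy Monte-Carlo Comptonization sketch (hypothetical function and
# parameters; not the table models from the paper).
import random

def comptonize(e_seed_kev, kt_kev, escape_prob, rng):
    """Follow one photon: scatter until it escapes, boosting its energy."""
    amp = 1.0 + 4.0 * kt_kev / 511.0      # mean energy gain per scattering
    e = e_seed_kev
    while rng.random() > escape_prob:      # photon scatters once more
        e *= amp
    return e

rng = random.Random(42)
# 0.3 keV seed photons passing through a warm (kT ~ 5 keV) layer:
energies = [comptonize(0.3, 5.0, escape_prob=0.3, rng=rng) for _ in range(10000)]
print(round(sum(energies) / len(energies), 2))  # mildly up-scattered mean energy
```

Averaging many such photon histories over a grid of temperatures and optical depths is, in spirit, how a table model is built.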
40 CFR 91.504 - Maintenance of records; submittal of information.
Code of Federal Regulations, 2013 CFR
2013-07-01
... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the... shipped from the assembly plant, associated storage facility or port facility, and the date the engine was...
40 CFR 91.504 - Maintenance of records; submittal of information.
Code of Federal Regulations, 2014 CFR
2014-07-01
... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the... shipped from the assembly plant, associated storage facility or port facility, and the date the engine was...
40 CFR 90.704 - Maintenance of records; submission of information.
Code of Federal Regulations, 2014 CFR
2014-07-01
... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the..., associated storage facility or port facility, and the date the engine was received at the testing facility...
40 CFR 90.704 - Maintenance of records; submission of information.
Code of Federal Regulations, 2013 CFR
2013-07-01
... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the..., associated storage facility or port facility, and the date the engine was received at the testing facility...
40 CFR 90.704 - Maintenance of records; submission of information.
Code of Federal Regulations, 2011 CFR
2011-07-01
... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the..., associated storage facility or port facility, and the date the engine was received at the testing facility...
40 CFR 90.704 - Maintenance of records; submission of information.
Code of Federal Regulations, 2012 CFR
2012-07-01
... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the..., associated storage facility or port facility, and the date the engine was received at the testing facility...
40 CFR 91.504 - Maintenance of records; submittal of information.
Code of Federal Regulations, 2011 CFR
2011-07-01
... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the... shipped from the assembly plant, associated storage facility or port facility, and the date the engine was...
40 CFR 91.504 - Maintenance of records; submittal of information.
Code of Federal Regulations, 2012 CFR
2012-07-01
... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the... shipped from the assembly plant, associated storage facility or port facility, and the date the engine was...
Using compressed images in multimedia education
NASA Astrophysics Data System (ADS)
Guy, William L.; Hefner, Lance V.
1996-04-01
The classic radiologic teaching file consists of hundreds, if not thousands, of films of various ages, housed in paper jackets with brief descriptions written on the jackets. The development of a good teaching file has been both time-consuming and voluminous. Also, any radiograph to be copied was unavailable during the reproduction interval, inconveniencing other medical professionals needing to view the images at that time. These factors hinder motivation to copy films of interest. If a busy radiologist already has an adequate example of a radiological manifestation, it is unlikely that he or she will exert the effort to make a copy of another similar image even if a better example comes along. Digitized radiographs stored on CD-ROM offer marked improvement over the copied-film teaching files. Our institution has several laser digitizers which are used to rapidly scan radiographs and produce high-quality digital images which can then be converted into standard microcomputer (IBM, Mac, etc.) image formats. These images can be stored on floppy disks, hard drives, rewritable optical disks, recordable CD-ROM disks, or removable cartridge media. Most hospital computer information systems include radiology reports in their database. We demonstrate that the reports for the images included in the user's teaching file can be copied and stored on the same storage media as the images. The radiographic or sonographic image and the corresponding dictated report can then be 'linked' together. The description of the finding or findings of interest on the digitized image is thus electronically tethered to the image. This obviates the need to write much additional detail concerning the radiograph, saving time. In addition, the text on the disk can be indexed such that all files with user-specified features can be instantly retrieved and combined in a single report, if desired. With the use of newer image compression techniques, hundreds of cases may be stored on a single CD-ROM, depending on the quality of image required for the finding in question. This reduces the weight of a teaching file from that of a baby elephant to that of a single CD-ROM disc. Thus, with this method of teaching file preparation and storage the following advantages are realized: (1) Technically easier and less time-consuming image reproduction. (2) Considerably less unwieldy and substantially more portable teaching files. (3) Novel ability to index files and then retrieve specific cases of choice based on descriptive text.
Magnetic field sources and their threat to magnetic media
NASA Technical Reports Server (NTRS)
Jewell, Steve
1993-01-01
Magnetic storage media (tapes, disks, cards, etc.) may be damaged by external magnetic fields. The potential for such damage has been researched, but no objective standard exists for the protection of such media. This paper summarizes a magnetic storage facility standard, Publication 933, that ensures magnetic protection of data storage media.
Emerging Network Storage Management Standards for Intelligent Data Storage Subsystems
NASA Technical Reports Server (NTRS)
Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don
1998-01-01
This paper discusses the need for intelligent storage devices and subsystems that can provide data integrity metadata, the content of the existing data integrity standard for optical disks and techniques and metadata to verify stored data on optical tapes developed by the Association for Information and Image Management (AIIM) Optical Tape Committee.
Multi-Level Bitmap Indexes for Flash Memory Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng; Madduri, Kamesh; Canon, Shane
2010-07-23
Due to their low access latency, high read speed, and power-efficient operation, flash memory storage devices are rapidly emerging as an attractive alternative to traditional magnetic storage devices. However, tests show that the most efficient indexing methods are not able to take advantage of flash memory storage devices. In this paper, we present a set of multi-level bitmap indexes that can effectively take advantage of flash storage devices. These indexing methods use coarsely binned indexes to answer queries approximately, and then use finely binned indexes to refine the answers. Our new methods read significantly lower volumes of data at the expense of an increased disk access count, thus taking full advantage of the improved read speed and low access latency of flash devices. To demonstrate the advantage of these new indexes, we measure their performance on a number of storage systems using a standard data warehousing benchmark called the Set Query Benchmark. We observe that multi-level strategies on flash drives are up to 3 times faster than traditional indexing strategies on magnetic disk drives.
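To illustrate the coarse-then-fine idea the abstract describes, here is a hypothetical two-level binned index sketch in Python (sets stand in for compressed bitmaps, and the class and parameter names are invented, not from the paper):

```python
# Two-level binned "bitmap" index sketch: a coarse index narrows the
# candidate rows cheaply, then a fine index refines the answer.
from collections import defaultdict

class TwoLevelBinnedIndex:
    def __init__(self, values, coarse_width, fine_width):
        self.values = values
        self.coarse = defaultdict(set)   # coarse bin -> row ids
        self.fine = defaultdict(set)     # fine bin -> row ids
        self.cw, self.fw = coarse_width, fine_width
        for row, v in enumerate(values):
            self.coarse[v // coarse_width].add(row)
            self.fine[v // fine_width].add(row)

    def range_query(self, lo, hi):
        """Rows with lo <= value < hi."""
        # Step 1: coarse bins give a superset of the answer cheaply.
        cand = set()
        for b in range(lo // self.cw, hi // self.cw + 1):
            cand |= self.coarse.get(b, set())
        # Step 2: fine bins shrink the candidates; here every candidate is
        # verified against the raw values (a real bitmap index only needs
        # this check for the bins straddling the range boundaries).
        exact = set()
        for b in range(lo // self.fw, hi // self.fw + 1):
            for row in self.fine.get(b, set()) & cand:
                if lo <= self.values[row] < hi:
                    exact.add(row)
        return exact

idx = TwoLevelBinnedIndex([3, 17, 42, 55, 8, 23], coarse_width=20, fine_width=5)
print(sorted(idx.range_query(10, 50)))  # [1, 2, 5] -- rows holding 17, 42, 23
```

The trade-off the paper exploits is visible here: the fine pass touches more (smaller) bins, i.e. more accesses, but each reads far less data, which suits flash devices.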
Inverted Signature Trees and Text Searching on CD-ROMs.
ERIC Educational Resources Information Center
Cooper, Lorraine K. D.; Tharp, Alan L.
1989-01-01
Explores the new storage technology of optical data disks and introduces a data structure, the inverted signature tree, for storing data on optical data disks for efficient text searching. The inverted signature tree approach is compared to the use of text signatures and the B+ tree. (22 references) (Author/CLB)
State transitions of GRS 1739-278 in the 2014 outburst
NASA Astrophysics Data System (ADS)
Wang, Sili; Kawai, Nobuyuki; Shidatsu, Megumi; Tachibana, Yutaro; Yoshii, Taketoshi; Sudo, Masayuki; Kubota, Aya
2018-05-01
We report on the X-ray spectral analysis and time evolution of GRS 1739-278 during its 2014 outburst, based on MAXI/GSC and Swift/XRT observations. Over the course of the outburst, a transition from the low/hard state to the high/soft state and then back to the low/hard state was seen. During the high/soft state, the innermost disk temperature mildly decreased, while the innermost radius estimated with the multi-color disk model remained constant at ˜18 (D/8.5 kpc)(cos i/cos 30°)^(-1/2) km, where D is the source distance and i is the inclination angle. This small innermost radius of the accretion disk suggests that the central object is more likely to be a Kerr black hole than a Schwarzschild black hole. Applying a relativistic disk emission model to the high/soft state spectra, a mass upper limit of 18.3 M⊙ was obtained based on the inclination limit i < 60° for an assumed distance of 8.5 kpc. Using the empirical relation of the transition luminosity to the Eddington limit, the mass is constrained to 4.0-18.3 M⊙ for the same distance. The mass can be further constrained to be no larger than 9.5 M⊙ by adopting the constraints based on fits to the NuSTAR spectra with relativistically blurred disk reflection models (Miller et al. 2015, ApJ, 799, L6).
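As a quick illustration of how the quoted innermost radius scales with the assumed distance and inclination, a small Python sketch (the helper name is hypothetical, not from the paper):

```python
import math

def inner_radius_km(distance_kpc, inclination_deg):
    """Innermost disk radius scaled as in the abstract:
    ~18 (D/8.5 kpc)(cos i / cos 30 deg)^(-1/2) km."""
    scale = distance_kpc / 8.5
    incl = (math.cos(math.radians(inclination_deg))
            / math.cos(math.radians(30.0))) ** -0.5
    return 18.0 * scale * incl

# At the nominal distance and inclination the radius is ~18 km;
# a higher inclination (smaller cos i) inflates the inferred radius.
print(round(inner_radius_km(8.5, 30.0), 1))   # 18.0
print(round(inner_radius_km(8.5, 60.0), 1))
```

This distance/inclination degeneracy is why the mass limits in the abstract are quoted for an assumed D = 8.5 kpc and i < 60°.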
AIBA as Free Radical Initiator for Abrasive-Free Polishing of Hard Disk Substrate
NASA Astrophysics Data System (ADS)
Lei, Hong; Ren, Xiaoyan
2015-04-01
In order to optimize the existing slurry for abrasive-free polishing (AFP) of a hard disk substrate, a water-soluble free radical initiator, 2,2'-azobis (2-methylpropionamidine) dihydrochloride (AIBA), was introduced into an H2O2-based slurry in the present work. Polishing experiments with AIBA in the H2O2 slurry indicate that the material removal rate (MRR) increases and the polished surface has a lower surface roughness. The mechanism of AIBA in AFP was investigated using electron spin-resonance spectroscopy and UV-Visible analysis, which showed that the concentration of hydroxyl radical (a stronger oxidizer than H2O2) in the slurry was enhanced in the presence of AIBA. The structure of the film formed on the substrate surface was investigated by scanning electron microscopy, Auger electron spectroscopy and electrochemical impedance spectroscopy, showing that a looser, porous oxide film formed on the hard disk substrate surface when treated with the H2O2-AIBA slurry. Furthermore, potentiodynamic polarization tests show that the H2O2-AIBA slurry has a higher corrosion current density, implying that a fast dissolution reaction can occur on the substrate surface. Therefore, we can conclude that the stronger oxidation ability, the loose oxide film on the substrate surface, and the higher corrosion-wear rate of the H2O2-AIBA slurry lead to the higher MRR.
NASA Technical Reports Server (NTRS)
Nowak, Michael A.; Wilms, Joern; Vaughan, Brian A.; Dove, James B.; Begelman, Mitchell C.
1999-01-01
We have recently shown that a 'sphere + disk' geometry Compton corona model provides a good description of Rossi X-ray Timing Explorer (RXTE) observations of the hard/low state of Cygnus X-1. Separately, we have analyzed the temporal data provided by RXTE. In this paper we consider the implications of this timing analysis for our best-fit 'sphere + disk' Comptonization models. We focus our attention on the observed Fourier frequency-dependent time delays between hard and soft photons. We consider whether the observed time delays are: created in the disk but are merely reprocessed by the corona; created by differences between the hard and soft photon diffusion times in coronae with extremely large radii; or are due to 'propagation' of disturbances through the corona. We find that the time delays are most likely created directly within the corona; however, it is currently uncertain which specific model is the most likely explanation. Models that posit a large coronal radius [or equivalently, a large Advection Dominated Accretion Flow (ADAF) region] do not fully address all the details of the observed spectrum. The Compton corona models that do address the full spectrum do not contain dynamical information. We show, however, that simple phenomenological propagation models for the observed time delays for these latter models imply extremely slow characteristic propagation speeds within the coronal region.
NASA Astrophysics Data System (ADS)
Parkin, Stuart
2012-02-01
Racetrack Memory is a novel high-performance, non-volatile storage-class memory in which magnetic domains are used to store information in a ``magnetic racetrack'' [1]. The magnetic racetrack promises a solid-state memory with storage capacities and cost rivaling those of magnetic disk drives but with much improved performance and reliability: a ``hard disk on a chip''. The magnetic racetrack comprises a magnetic nanowire in which a series of magnetic domain walls are shifted to and fro along the wire using nanosecond-long pulses of spin-polarized current [2]. We have demonstrated the underlying physics that makes Racetrack Memory possible [3,4] and all the basic functions: creation and manipulation of a train of domain walls and their detection. The physics underlying the current-induced dynamics of domain walls will also be discussed. In particular, we show that the domain walls respond as if they have mass, leading to significant inertia-driven motion of the domain walls long after the current pulses are switched off [3]. We also demonstrate that in perpendicularly magnetized nanowires there are two independent current driving mechanisms: one derived from bulk spin-dependent scattering that drives the domain walls in the direction of electron flow, and a second interfacial mechanism that can drive the domain walls either along or against the electron flow, depending on subtle changes in the nanowire structure. Finally, we demonstrate that thermally induced spin currents are large enough to be used to manipulate domain walls. [1] S.S.P. Parkin, US Patent 6,834,005 (2004); S.S.P. Parkin et al., Science 320, 190 (2008); S.S.P. Parkin, Scientific American (June 2009). [2] M. Hayashi, L. Thomas, R. Moriya, C. Rettner and S.S.P. Parkin, Science 320, 209 (2008). [3] L. Thomas, R. Moriya, C. Rettner and S.S.P. Parkin, Science 330, 1810 (2010). [4] X. Jiang et al., Nat. Comm. 1:25 (2010) and Nano Lett. 11, 96 (2011).
Lifetime of digital media: is optics the solution?
NASA Astrophysics Data System (ADS)
Spitz, Erich; Hourcade, Jean-Charles; Laloë, Franck
2010-01-01
While the short term and mid-term archiving of digital data and information can be handled reasonably well with modern techniques, the long term aspects of the problem (several decades or even centuries) are much more difficult to manage. The heart of the problem is the longevity of storage media, which presently does not go beyond a few years, maybe one or two decades in the best cases. In this article, we review the various strategies for long term archiving, with two main categories: active and passive. We evaluate the various recording media in terms of their longevity. We then discuss the recordable optical digital disks (RODDs) and the state of the art in this domain; the present situation is that, with the techniques that are implemented commercially, good prospects for long term archiving are not available. Nevertheless, the conceptual simplicity of RODDs could be exploited to create new recordable digital media; the improvements that are needed seem to be reachable with reasonable development effort. Since RODDs are now in strong competition with other systems (hard disks or flash memory for instance) that constantly make enormous progress, there seems to be little hope to see RODDs win the race of capacity; nevertheless, longevity could provide them with a new market, since the need for long term archiving is so pressing everywhere in the world.
Broadband X-Ray Spectra of GX 339-4 and the Geometry of Accreting Black Holes in the Hard State
NASA Technical Reports Server (NTRS)
Tomsick; Kalemci; Kaaret; Markoff; Corbel; Migliari; Fender; Bailyn; Buxton
2008-01-01
A major question in the study of black hole binaries involves our understanding of the accretion geometry when the sources are in the "hard" state. In this state, the X-ray energy spectrum is dominated by a hard power-law component and radio observations indicate the presence of a steady and powerful "compact" jet. Although the common hard state picture is that the accretion disk is truncated, perhaps at hundreds of gravitational radii (R(sub g)) from the black hole, recent results for the recurrent transient GX 339-4 by Miller and co-workers show evidence for optically thick material very close to the black hole's innermost stable circular orbit. That work focused on an observation of GX 339-4 at a luminosity of about 5% of the Eddington limit (L(sub Edd)) and used parameters from a relativistic reflection model and the presence of a soft, thermal component as diagnostics. In this work, we use similar diagnostics, but extend the study to lower luminosities (2.3% and 0.8% L(sub Edd)) using Swift and RXTE observations of GX 339-4. We detect a thermal component with an inner disk temperature of approx. 0.2 keV at 2.3% L(sub Edd). At 0.8% L(sub Edd), the spectrum is consistent with the presence of such a component, but the component is not required with high confidence. At both luminosities, we detect broad features due to iron Kα that are likely related to reflection of hard X-rays off the optically thick material. If these features are broadened by relativistic effects, they indicate that optically thick material resides within 10 R(sub g) down to 0.8% L(sub Edd), and the measurements are consistent with the inner radius of the disk remaining at approx. 4 R(sub g) down to this level. However, we also discuss an alternative model for the broadening, and we note that the evolution of the thermal component is not entirely consistent with the constant inner radius interpretation.
Finally, we discuss the results in terms of recent theoretical work by Liu and co-workers on the possibility that material may condense out of an Advection-Dominated Accretion Flow to maintain an inner optically thick disk.
NASA Astrophysics Data System (ADS)
Bagri, Kalyani; Misra, Ranjeev; Rao, Anjali; Singh Yadav, Jagdish; Pandey, Shiv Kumar
2018-05-01
One of the popular models for the low/hard state of black hole binaries is that the standard accretion disk is truncated and the hot inner region produces, via Comptonization, hard X-ray flux. This is supported by the value of the high-energy photon index, which is often found to be small, ∼1.7 (<2), implying that the hot medium is starved of seed photons. On the other hand, the suggestive presence of a broad relativistic Fe line during the hard state would suggest that the accretion disk is not truncated but extends all the way to the innermost stable circular orbit. In such a case, it is a puzzle why the hot medium would remain photon starved. The broad Fe line should be accompanied by a broad smeared reflection hump at ∼30 keV, and it may be that this additional component makes the spectrum hard and the intrinsic photon index is larger, i.e. >2. This would mean that the medium is not photon deficient, reconciling the presence of a broad Fe line in the observed hard state. To test this hypothesis, we have analyzed the RXTE observations of GX 339–4 from the four outbursts during 2002–2011 and identify observations when the system was in the hard state and showed a broad Fe line. We have then attempted to fit these observations with models which include smeared reflection, to understand whether the intrinsic photon index can indeed be large. We find that, while for some observations the inclusion of reflection does increase the photon index, there are hard state observations with a broad Fe line that have photon indices less than 2.
Reference System of DNA and Protein Sequences on CD-ROM
NASA Astrophysics Data System (ADS)
Nasu, Hisanori; Ito, Toshiaki
DNASIS-DBREF31 is a database of DNA and protein sequences in the form of an optical Compact Disk (CD) ROM, developed and commercialized by Hitachi Software Engineering Co., Ltd. Both nucleic acid base sequences and protein amino acid sequences can be retrieved from a single CD-ROM. Existing databases are offered in the form of on-line services, floppy disks, or magnetic tape, all of which have problems of one kind or another, such as usability or storage capacity. DNASIS-DBREF31 newly adopts the CD-ROM as its database medium to realize mass storage and personal use of the database.
Proposal for a multilayer read-only-memory optical disk structure.
Ichimura, Isao; Saito, Kimihiro; Yamasaki, Takeshi; Osato, Kiyoshi
2006-03-10
Coherent interlayer cross talk and stray-light intensity of multilayer read-only-memory (ROM) optical disks are investigated. From results of scalar diffraction analyses, we conclude that layer separations above 10 μm are preferred in a system using a 0.85 numerical aperture objective lens in terms of signal quality and stability in focusing control. Disk structures are optimized to prevent signal deterioration resulting from multiple reflections, and appropriate detectors are determined to maintain acceptable stray-light intensity. In the experiment, quadrilayer and octalayer high-density ROM disks are prepared by stacking UV-curable films onto polycarbonate substrates. Data-to-clock jitters of ≤7% demonstrate the feasibility of multilayer disk storage up to 200 Gbytes.
Picosecond, tunable, high-brightness hard x-ray inverse Compton source at Duke storage ring
NASA Astrophysics Data System (ADS)
Litvinenko, Vladimir N.; Wu, Ying; Burnham, Bentley; Barnett, Genevieve A.; Madey, John M. J.
1995-09-01
We suggest a state-of-the-art x-ray source using a compact electron storage ring with modest energy (less than 1 GeV) and a high-power mm-wave as an undulator. A source of this type has x-ray energies and brightness comparable with third-generation synchrotron light sources, while it can be very compact and fit in a small university or industrial laboratory or a hospital. We propose to operate an isochronous mm-wave FEL and a hard x-ray inverse Compton source at the Duke storage ring to test this concept. Resonant FEL conditions for the mm-wave will be provided by the off-axis interaction with an electromagnetic wave. A special optical resonator with holes for the e-beam is proposed for pumping a hard x-ray inverse Compton source with very high brightness. Simulation results of mm-wave FEL operation of the Duke storage ring are discussed. Expected performance of the mm-wave FEL and the hard x-ray inverse Compton source is presented.
Optimization of Materials and Interfaces for Spintronic Devices
NASA Astrophysics Data System (ADS)
Clark, Billy
In recent years, spintronic devices have drawn a significant amount of research attention. This interest comes in large part from their ability to enable interesting new technologies such as Spin Torque Transfer Random Access Memory, or to improve existing technologies such as high-signal read heads for hard disk drives. For the former, we worked on improving magnetic tunnel junctions by optimizing their thermal stability using Ta insertion layers in the free layer. We further tried to simplify the design of the MTJ stack by attempting to replace the Co/Pd multilayer with a CoPd alloy. In this dissertation, we detail its development and examine the switching characteristics. Lastly, we look at a highly spin-polarized material, Fe2MnGe, for optimizing hard disk drive read heads.
NASA Technical Reports Server (NTRS)
Le, Diana; Cooper, David M. (Technical Monitor)
1994-01-01
Just imagine a mass storage system that consists of a machine with 2 CPUs, 1 gigabyte (GB) of memory, 400 GB of disk space, 16,800 cartridge tapes in the automated tape silos, 88,000 tapes located in the vault, and the software to manage the system. This system is designed to be a data repository; it will always have disk space to store all the incoming data. Currently 9.14 GB of new data per day enters the system, with this rate doubling each year. To assure there is always disk space available for new data, the system has to move data from the expensive disk to a much less expensive medium such as 3480 cartridge tapes. Once the data is archived to tape, it should be possible to move it back to disk when someone wants to access it, and the data movement should be transparent to the user. Now imagine all the tasks that a system administrator must perform to keep this system running 24 hours a day, 7 days a week. Since the filesystem maintains the illusion of unlimited disk space, data that comes to the system must be moved to tapes in an efficient manner. This paper describes the mass storage system running at the Numerical Aerodynamic Simulation (NAS) facility at NASA Ames Research Center in both software and hardware aspects, and then describes the tasks the system administrator has to perform on this system.
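The archive-to-tape policy described above (keep disk free by migrating cold data) can be sketched as a simple water-mark scheme: once usage crosses a high-water mark, the least-recently-accessed files are staged to tape until usage falls below a low-water mark. This is a hypothetical illustration, not the NAS system's actual algorithm; the function name and thresholds are invented:

```python
def select_for_migration(files, disk_used, high_water, low_water):
    """Pick least-recently-accessed files to archive to tape until
    projected disk usage drops below the low-water mark.
    files: list of (name, size, last_access_time) tuples."""
    if disk_used < high_water:
        return []                      # below the trigger: do nothing
    to_move, freed = [], 0
    for name, size, _atime in sorted(files, key=lambda f: f[2]):
        if disk_used - freed <= low_water:
            break
        to_move.append(name)
        freed += size
    return to_move

files = [("a.dat", 40, 100), ("b.dat", 30, 500), ("c.dat", 50, 50)]
# usage 100 exceeds the high-water mark 90: migrate coldest files first
print(select_for_migration(files, disk_used=100, high_water=90, low_water=40))
```

The coldest file (`c.dat`) goes first, and migration stops as soon as the low-water mark is reached, so warm data stays on disk.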
The Relativistic Iron Line Profile in the Seyfert 1 Galaxy IC4329a
NASA Technical Reports Server (NTRS)
Done, C.; Madejski, G. M.; Zycki, P. T.
2000-01-01
We present simultaneous ASCA and RXTE data on the bright Seyfert 1 galaxy IC4329a. The iron line is significantly broadened, but not to the extent expected from an accretion disk which extends down to the last stable orbit around a black hole. We marginally detect a narrow line component, presumably from the molecular torus, but even including this gives a line profile from the accretion disk which is significantly narrower than that seen in MCG-6-30-15, and is much more like that seen from the low/hard state galactic black hole candidates. This is consistent with the inner disk being truncated before the last stable orbit, forming a hot flow at small radii as in the ADAF models. However, we cannot rule out the presence of an inner disk which does not contribute to the reflected spectrum, either because extreme ionisation suppresses the characteristic atomic features of the reflected spectrum or because the X-ray source is intrinsically anisotropic, so that it does not illuminate the inner disk. The source was monitored by RXTE every 2 days for 2 months, and these snapshot spectra show that there is intrinsic spectral variability. The data are good enough to disentangle the power law from the reflected continuum, and we see that the power law softens as the source brightens. The lack of a corresponding increase in the observed reflected spectrum implies that either the changes in the disk's inner radial extent/ionization structure are small, or that the variability is actually driven by changes in the seed photons which are decoupled from the hard X-ray mechanism.
SOFT LAGS IN NEUTRON STAR kHz QUASI-PERIODIC OSCILLATIONS: EVIDENCE FOR REVERBERATION?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barret, Didier, E-mail: didier.barret@irap.omp.eu; CNRS, Institut de Recherche en Astrophysique et Planetologie, 9 Av. colonel Roche, BP 44346, F-31028 Toulouse cedex 4
2013-06-10
High frequency soft reverberation lags have now been detected from stellar mass and supermassive black holes. Their interpretation involves reflection of a hard source of photons onto an accretion disk, producing a delayed reflected emission, with a time lag consistent with the light travel time between the irradiating source and the disk. Independently of the location of the clock, the kHz quasi-periodic oscillation (QPO) emission is thought to arise from the neutron star boundary layer. Here, we search for the signature of reverberation of the kHz QPO emission, by measuring the soft lags and the lag energy spectrum of the lower kHz QPOs from 4U1608-522. Soft lags, ranging from ~15 to ~40 μs, between the 3-8 keV and 8-30 keV modulated emissions are detected between 565 and 890 Hz. The soft lags are not constant with frequency and show a smooth decrease between 680 Hz and 890 Hz. The broad band X-ray spectrum is modeled as the sum of a disk and a thermal Comptonized component, plus a broad iron line, expected from reflection. The spectral parameters follow a smooth relationship with the QPO frequency; in particular, the fitted inner disk radius decreases steadily with frequency. Both the bump around the iron line in the lag energy spectrum and the consistency between the lag changes and the inferred changes of the inner disk radius, from either spectral fitting or the QPO frequency, suggest that the soft lags may indeed involve reverberation of the hard pulsating QPO source on the disk.
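The reverberation interpretation rests on simple light-travel arithmetic: a lag Δt implies an extra path length r ≈ c·Δt. A minimal sketch (values illustrative) of why tens-of-microsecond lags point to region sizes comparable to a neutron star:

```python
C = 2.998e8  # speed of light, m/s

def lag_to_distance_km(lag_seconds):
    """Light-travel distance implied by a reverberation time lag."""
    return C * lag_seconds / 1e3

# a ~30 microsecond soft lag corresponds to ~9 km,
# comparable to a neutron-star radius
print(round(lag_to_distance_km(30e-6), 1), "km")
```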
An XMM-Newton view of the radio galaxy 3C 411
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bostrom, Allison; Reynolds, Christopher S.; Tombesi, Francesco
We present the first high signal-to-noise XMM-Newton observations of the broad-line radio galaxy 3C 411. After fitting various spectral models, an absorbed double power-law (PL) continuum and a blurred relativistic disk reflection model (kdblur) are found to be equally plausible descriptions of the data. While the softer PL component (Γ = 2.11) of the double PL model is entirely consistent with that found in Seyfert galaxies (and hence likely originates from a disk corona), the additional PL component is very hard (Γ = 1.05); amongst the active galactic nucleus zoo, only flat-spectrum radio quasars (FSRQ) have such hard spectra. Together with the flat radio spectrum displayed by this source, we suggest that it should instead be classified as an FSRQ. This leads to potential discrepancies regarding the jet inclination angle, with the radio morphology suggesting a large jet inclination but the FSRQ classification suggesting small inclinations. The kdblur model predicts an inner disk radius of at most 20 r_g and relativistic reflection.
Micromagnetic structure in Co-alloy thin films and its correlation with microstructure
NASA Astrophysics Data System (ADS)
Tang, Kai
The development of magnetic hard disk recording has increased recording density at an accelerated pace. Maintaining increasingly smaller bits with low noise presents a tremendous challenge to the recording media; it requires detailed study of the micromagnetic structure of the media to understand the noise mechanism, and elucidation of the correlation between micromagnetic structure and microstructure to systematically develop media materials and tailor their microstructure. Lorentz transmission electron microscopy (LTEM) is a high-resolution magnetic imaging technique. However, it requires uniformly thin specimens, which cannot be produced by conventional TEM specimen preparation methods; consequently, its application to real computer magnetic hard disks has been limited. In this dissertation, a combined dimpling and chemical etching method is introduced to prepare specimens directly from unmodified hard disks with the typical C/Co alloy/Cr/NiP/Al (substrate) structure. The specimens typically have 2000 μm² or larger electron-transparent areas of Co alloy/Cr films with uniform thickness, which are suitable for LTEM observation. This method is applicable to disks with both smooth and mechanically textured substrates. In this work, LTEM has been employed to study recorded patterns in real hard disks. Magnetic recording was performed on a standard spin stand. Bits of densities from 15 to 100 kfci were examined with head skew angles of 0° and 20°, respectively. We also compared tracks recorded on dc-erased disks with those on as-deposited disks. We observed magnetic ripples within the tracks and the inter-track regions, magnetic vortices of 0.1-0.2 μm in diameter at the bit transitions, and curved magnetic domain walls in the track-edge regions resulting from the "dog-bone" shaped head field profile.
Our results also indicate that the micromagnetic structure at the track edges is influenced by head skew and by the magnetization direction in the inter-track regions. The LTEM results are combined with MFM observations to provide further understanding. The study has concentrated on isotropic media on smooth substrates, since the low head-to-medium spacing required by high recording density demonstrates the need for this type of media. The recorded tracks are remanent magnetic states after a strong (head) magnetic field was applied. We also examined an ac-erased state, in which the effect of the external field is removed. Magnetic vortices are identified, in which small crystal grains form magnetic clusters and these clusters then form closed-flux vortices. The size of these vortices is estimated to be around 1.0-1.5 μm, about 10 times larger than that found in the bit-transition regions. The smaller vortex sizes in the bit-transition regions may result from constraints from adjacent bits as well as the difference in the magnetic processes generating these states. (Abstract shortened by UMI.)
Non-volatile main memory management methods based on a file system.
Oikawa, Shuichi
2014-01-01
There are upcoming non-volatile (NV) memory technologies that provide byte addressability and high performance; PCM, MRAM, and STT-RAM are such examples. Such NV memory can be used as storage because of its data persistency without power supply, while it can be used as main memory because of its high performance, which matches that of DRAM. A number of studies have investigated its use for main memory and for storage; they were, however, conducted independently. This paper presents methods that enable the integration of main memory and file system management for NV memory. Such integration makes NV memory simultaneously usable as both main memory and storage. The presented methods use a file system as the basis for NV memory management. We implemented the proposed methods in the Linux kernel and performed an evaluation on the QEMU system emulator. The evaluation results show that 1) the proposed methods can perform comparably to the existing DRAM memory allocator and significantly better than page swapping, 2) their performance is affected by the internal data structures of a file system, and 3) data structures appropriate for traditional hard disk drives do not always work effectively for byte-addressable NV memory. We also evaluated the effects caused by the longer access latency of NV memory through cycle-accurate full-system simulation. The results show that the effect on page allocation cost is limited if the increase in latency is moderate.
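The idea of managing byte-addressable persistent memory through a file system can be illustrated with a toy user-space analogue built on mmap: allocations are carved out of a file-backed mapping, so they are byte-addressable like memory yet persistent like storage. This is only a sketch of the concept, not the paper's kernel implementation; the class and its API are invented:

```python
import mmap
import os
import tempfile

class FileBackedBumpAllocator:
    """Toy analogue of file-system-based memory management:
    a bump allocator over a file-backed mapping."""

    def __init__(self, path, size):
        self.f = open(path, "w+b")
        self.f.truncate(size)                      # reserve the region in the file
        self.mem = mmap.mmap(self.f.fileno(), size)
        self.next = 0
        self.size = size

    def alloc(self, n):
        """Hand out the next n bytes; no free() in this toy version."""
        if self.next + n > self.size:
            raise MemoryError("region exhausted")
        off, self.next = self.next, self.next + n
        return off

    def write(self, off, data):
        self.mem[off:off + len(data)] = data       # byte-addressable store

    def read(self, off, n):
        return bytes(self.mem[off:off + n])        # byte-addressable load

fd, path = tempfile.mkstemp()
os.close(fd)
a = FileBackedBumpAllocator(path, 4096)
off = a.alloc(5)
a.write(off, b"hello")
print(a.read(off, 5))   # the bytes live in the file, not in anonymous RAM
```

Here the file plays the role the paper assigns to NV memory: one region serves both allocation requests and persistent storage.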
NASA Astrophysics Data System (ADS)
Bhargava, Samarth; Yablonovitch, Eli
2014-09-01
We report using Inverse Electromagnetic Design to computationally optimize the geometric shapes of metallic optical antennas or near-field transducers (NFTs) and dielectric waveguide structures that comprise a sub-wavelength optical focusing system for practical use in Heat Assisted Magnetic Recording (HAMR). This magnetic data-recording scheme relies on focusing optical energy to locally heat the area of a single bit, several hundred square nanometers on a hard disk, to the Curie temperature of the magnetic storage layer. There are three specifications of the optical system that must be met to enable HAMR as a commercial technology. First, to heat the media at scan rates upward of 10 m/s, ~1mW of light (<1% of typical laser diode output power) must be focused to a 30nm×30nm spot on the media. Second, the required lifetime of many years necessitates that the nano-scale NFT must not over-heat from optical absorption. Third, to avoid undesired erasing or interference of adjacent tracks on the media, there must be minimal stray optical radiation away from the hotspot on the hard disk. One cannot design the light delivery system by tackling each of these challenges independently, because they are governed by coupled electromagnetic phenomena. Instead, we propose multiobjective optimization using Inverse Electromagnetic Design in conjunction with a commercial 3D FDTD Maxwell's equations solver. We computationally generated designs of a metallic NFT and a high-index waveguide grating that meet the HAMR specifications simultaneously. Compared to a mock industry design, our proposed design has a similar optical coupling efficiency, ~3x improved suppression of stray optical radiation, and a 60% (280°C) reduction in NFT temperature rise. We also distributed the Inverse Electromagnetic Design software online so that industry partners can use it as a repeatable design process.
How to Use Removable Mass Storage Memory Devices
ERIC Educational Resources Information Center
Branzburg, Jeffrey
2004-01-01
Mass storage refers to the variety of ways to keep large amounts of information that are used on a computer. Over the years, the removable storage devices have grown smaller, increased in capacity, and transferred the information to the computer faster. The 8" floppy disk of the 1960s stored 100 kilobytes, or about 60 typewritten, double-spaced…
Facing the Limitations of Electronic Document Handling.
ERIC Educational Resources Information Center
Moralee, Dennis
1985-01-01
This essay addresses problems associated with technology used in the handling of high-resolution visual images in electronic document delivery. Highlights include visual fidelity, laser-driven optical disk storage, electronics versus micrographics for document storage, videomicrographics, and system configurations and peripherals. (EJS)
Studying the Warm Layer and the Hardening Factor in Cygnus X-1
NASA Technical Reports Server (NTRS)
Yao, Yangsen; Zhang, Shuangnan; Zhang, Xiaoling; Feng, Yuxin
2002-01-01
As the first dynamically determined black hole X-ray binary system, Cygnus X-1 has been studied extensively. However, its broadband spectrum observed with BeppoSax is still not well understood. Besides the soft excess described by the multi-color disk model (MCD), the power-law hard component and a broad excess feature above 10 keV (a disk reflection component), there is also an additional soft component around 1 keV, whose origin is not known currently. Here we propose that the additional soft component is due to the thermal Comptonization between the soft disk photons and a warm plasma cloud just above the disk, i.e., a warm layer. We use the Monte-Carlo technique to simulate this Compton scattering process and build a table model based on our simulation results. With this table model, we study the disk structure and estimate the hardening factor to the MCD component in Cygnus X-1.
Schnyder, Simon K; Horbach, Jürgen
2018-02-16
Molecular dynamics simulations of interacting soft disks confined in a heterogeneous quenched matrix of soft obstacles show dynamics which is fundamentally different from that of hard disks. The interactions between the disks can enhance transport when their density is increased, as disks cooperatively help each other over the finite energy barriers in the matrix. The system exhibits a transition from a diffusive to a localized state, but the transition is strongly rounded. Effective exponents in the mean-squared displacement can be observed over three decades in time but depend on the density of the disks and do not correspond to asymptotic behavior in the vicinity of a critical point, thus, showing that it is incorrect to relate them to the critical exponents in the Lorentz model scenario. The soft interactions are, therefore, responsible for a breakdown of the universality of the dynamics.
NASA Astrophysics Data System (ADS)
Schnyder, Simon K.; Horbach, Jürgen
2018-02-01
Molecular dynamics simulations of interacting soft disks confined in a heterogeneous quenched matrix of soft obstacles show dynamics which is fundamentally different from that of hard disks. The interactions between the disks can enhance transport when their density is increased, as disks cooperatively help each other over the finite energy barriers in the matrix. The system exhibits a transition from a diffusive to a localized state, but the transition is strongly rounded. Effective exponents in the mean-squared displacement can be observed over three decades in time but depend on the density of the disks and do not correspond to asymptotic behavior in the vicinity of a critical point, thus, showing that it is incorrect to relate them to the critical exponents in the Lorentz model scenario. The soft interactions are, therefore, responsible for a breakdown of the universality of the dynamics.
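The "effective exponents" discussed above are local log-log slopes of the mean-squared displacement. A minimal sketch, using synthetic subdiffusive data rather than simulation output (the function name is my own):

```python
import math

def effective_exponent(times, msd, i):
    """Local slope d(log msd)/d(log t) via a centered finite difference."""
    return (math.log(msd[i + 1]) - math.log(msd[i - 1])) / \
           (math.log(times[i + 1]) - math.log(times[i - 1]))

# synthetic subdiffusive data, msd ~ t^0.5: the estimator recovers 0.5
times = [10 ** (k / 4) for k in range(20)]
msd = [t ** 0.5 for t in times]
print(round(effective_exponent(times, msd, 10), 3))
```

For real trajectories this slope drifts with density, which is exactly why the paper warns against reading such apparent exponents as critical exponents.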
Data storage for managing the health enterprise and achieving business continuity.
Hinegardner, Sam
2003-01-01
As organizations move away from a silo mentality to a vision of enterprise-level information, more healthcare IT departments are rejecting the idea of information storage as an isolated, system-by-system solution. IT executives want storage solutions that act as a strategic element of an IT infrastructure, centralizing storage management activities to effectively reduce operational overhead and costs. This article focuses on three areas of enterprise storage: tape, disk, and disaster avoidance.
Maintaining cultures of wood-rotting fungi.
E.E. Nelson; H.A. Fay
1985-01-01
Phellinus weirii cultures were stored successfully for 10 years in small alder (Alnus rubra Bong.) disks at 2 °C. The six isolates tested appeared morphologically identical and after 10 years varied little in growth rate from those stored on malt agar slants. Long-term storage on alder disks reduces the time required for...
Holographic Compact Disk Read-Only Memories
NASA Technical Reports Server (NTRS)
Liu, Tsuen-Hsi
1996-01-01
Compact disk read-only memories (CD-ROMs) of the proposed type store digital data in volume holograms instead of in surface differentially reflective elements. Holographic CD-ROMs consist largely of parts similar to those used in conventional CD-ROMs; however, they achieve 10 or more times the data-storage capacity and throughput by use of a wavelength-multiplexing/volume-hologram scheme.
An Optical Disk-Based Information Retrieval System.
ERIC Educational Resources Information Center
Bender, Avi
1988-01-01
Discusses a pilot project by the Nuclear Regulatory Commission to apply optical disk technology to the storage and retrieval of documents related to its high level waste management program. Components and features of the microcomputer-based system which provides full-text and image access to documents are described. A sample search is included.…
Data storage technology comparisons
NASA Technical Reports Server (NTRS)
Katti, Romney R.
1990-01-01
The role of data storage and data storage technology is an integral, though conceptually often underestimated, portion of data processing technology. Data storage is important in the mass storage mode, in which generated data is buffered for later use. But data storage technology is also important in the data flow mode, when data are manipulated and hence required to flow between databases, datasets, and processors. This latter mode is commonly associated with memory hierarchies which support computation. VLSI devices can reasonably be defined as electronic circuit devices such as channel and control electronics as well as highly integrated, solid-state devices that are fabricated using thin film deposition technology. VLSI devices in both capacities play an important role in data storage technology. In addition to random access memories (RAM), read-only memories (ROM), and other silicon-based variations such as PROMs, EPROMs, and EEPROMs, integrated devices find their way into a variety of memory technologies which offer significant performance advantages. These memory technologies include magnetic tape, magnetic disk, magneto-optic disk, and vertical Bloch line memory. In this paper, some comparisons between selected technologies are made to demonstrate why more than one memory technology exists today, based for example on access time and storage density at the active bit and system levels.
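The access-time spread that motivates memory hierarchies spans roughly ten orders of magnitude, which is why no single technology wins at every level. A sketch with illustrative order-of-magnitude latencies (my own round numbers, not values from the paper):

```python
# Illustrative order-of-magnitude access times (not measured values)
hierarchy = {
    "SRAM cache": 1e-9,            # ~nanoseconds
    "DRAM": 1e-7,                  # ~100 ns
    "NAND flash": 1e-4,            # ~100 microseconds
    "Magnetic disk": 1e-2,         # ~10 ms seek + rotation
    "Tape (robotic mount)": 1e1,   # ~seconds to mount and position
}

for name, t in sorted(hierarchy.items(), key=lambda kv: kv[1]):
    print(f"{name:22s} ~{t:.0e} s")
```

The ten-decade spread is the quantitative reason more than one memory technology coexists: each rung trades access time against cost and density.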
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, J; Dossa, D; Gokhale, M
Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only to GPU-accelerated processing. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io.
The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with an NVIDIA graphics card (see Chapter 5 for the full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of its time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order-of-magnitude speedups over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit in boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.
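The notion of a benchmark being "I/O intensive" (e.g., "greater than 50% of its time in I/O") can be made concrete by measuring the fraction of wall time spent inside read calls. This is a crude user-space analogue of what a per-process profiler like the report's iotrace provides; the function name and toy workload here are invented:

```python
import os
import tempfile
import time

def io_fraction(path, reads, chunk=4096):
    """Fraction of wall time a mixed workload spends inside read() calls."""
    t_start = time.perf_counter()
    t_io = 0.0
    with open(path, "rb") as f:
        for _ in range(reads):
            t0 = time.perf_counter()
            f.seek(0)
            f.read(chunk)                 # the I/O part of each iteration
            t_io += time.perf_counter() - t0
            sum(range(1000))              # stand-in for CPU-bound work
    return t_io / (time.perf_counter() - t_start)

fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(4096))
os.close(fd)
frac = io_fraction(path, 200)
print(f"I/O fraction: {frac:.2f}")
```

With the file cached in memory the fraction is small; against NFS or a cold disk the same metric would rise sharply, which is the distinction the report's three storage scenarios probe.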
Antimicrobial Testing Methods & Procedures: MB-31
Information about ATMP - SOP Quantitative Disk Carrier Test Method (QCT-2) Modified for Testing Antimicrobial Products Against Spores of Clostridium difficile (ATCC 43598) on Inanimate, Hard, Non-porous Surfaces - MB-31-Final
Computer simulation and high level virial theory of Saturn-ring or UFO colloids.
Bates, Martin A; Dennison, Matthew; Masters, Andrew
2008-08-21
Monte Carlo simulations are used to map out the complete phase diagram of hard body UFO systems, in which the particles are composed of a concentric sphere and thin disk. The equation of state and phase behavior are determined for a range of relative sizes of the sphere and disk. We show that for relatively large disks, nematic and solid phases are observed in addition to the isotropic fluid. For small disks, two different solid phases exist. For intermediate sizes, only a disordered fluid phase is observed. The positional and orientational structure of the various phases are examined. We also compare the equations of state and the nematic-isotropic coexistence densities with those predicted by an extended Onsager theory using virial coefficients up to B(8).
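The Monte Carlo method used here exploits a basic property of hard-body systems: every non-overlapping configuration has equal Boltzmann weight, so Metropolis sampling reduces to accepting trial moves iff they create no overlap. A minimal sketch for plain hard disks in 2D (a full UFO model would additionally need sphere-sphere, sphere-disk, and disk-disk overlap tests for the composite particles; all names and values here are my own):

```python
import random

def overlaps(p, q, sigma):
    """Hard-disk overlap test: centers closer than one diameter."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    return dx * dx + dy * dy < sigma * sigma

def mc_sweep(coords, sigma, box, delta, rng):
    """One Metropolis sweep: a trial displacement is accepted only if it
    stays in the box and creates no overlap."""
    for i, (x, y) in enumerate(coords):
        tx = x + rng.uniform(-delta, delta)
        ty = y + rng.uniform(-delta, delta)
        if not (0.0 <= tx < box and 0.0 <= ty < box):
            continue                       # reject: left the box
        if all(not overlaps((tx, ty), coords[j], sigma)
               for j in range(len(coords)) if j != i):
            coords[i] = (tx, ty)

rng = random.Random(1)
sigma, box = 1.0, 10.0
# start from a dilute square lattice (no overlaps by construction)
coords = [(2.5 * i + 0.5, 2.5 * j + 0.5) for i in range(4) for j in range(4)]
for _ in range(50):
    mc_sweep(coords, sigma, box, 0.3, rng)
# invariant: the hard-core constraint is never violated
ok = all(not overlaps(coords[i], coords[j], sigma)
         for i in range(len(coords)) for j in range(i + 1, len(coords)))
print(ok)
```

Equations of state are then obtained by accumulating averages (e.g., pressure via contact-value statistics) over many such sweeps at each density.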
Computer simulation and high level virial theory of Saturn-ring or UFO colloids
NASA Astrophysics Data System (ADS)
Bates, Martin A.; Dennison, Matthew; Masters, Andrew
2008-08-01
Monte Carlo simulations are used to map out the complete phase diagram of hard body UFO systems, in which the particles are composed of a concentric sphere and thin disk. The equation of state and phase behavior are determined for a range of relative sizes of the sphere and disk. We show that for relatively large disks, nematic and solid phases are observed in addition to the isotropic fluid. For small disks, two different solid phases exist. For intermediate sizes, only a disordered fluid phase is observed. The positional and orientational structure of the various phases are examined. We also compare the equations of state and the nematic-isotropic coexistence densities with those predicted by an extended Onsager theory using virial coefficients up to B8.
50 CFR 660.15 - Equipment requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... water, slime, mud, debris, or other materials. Scale printouts must show: (A) The vessel name and...; (ii) Random Access Memory (RAM): 256 megabytes (MB) or higher; (iii) Hard disk space: (A) If already...
Antimicrobial Testing Methods & Procedures: MB-31-03
Information about ATMP - SOP Quantitative Disk Carrier Test Method (QCT-2) Modified for Testing Antimicrobial Products Against Spores of Clostridium difficile (ATCC 43598) on Inanimate, Hard, Non-porous Surfaces - MB-31-03
50 CFR 660.15 - Equipment requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... water, slime, mud, debris, or other materials. Scale printouts must show: (A) The vessel name and...; (ii) Random Access Memory (RAM): 256 megabytes (MB) or higher; (iii) Hard disk space: (A) If already...
50 CFR 660.15 - Equipment requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... water, slime, mud, debris, or other materials. Scale printouts must show: (A) The vessel name and...; (ii) Random Access Memory (RAM): 256 megabytes (MB) or higher; (iii) Hard disk space: (A) If already...
NASA Astrophysics Data System (ADS)
Cui, Hongtao; Kalinin, Sergei; Yang, Xiaojing; Lowndes, Douglas
2005-03-01
Carbon nanofibers (CNFs) are grown on tipless cantilevers as probe tips for scanning probe microscopy. A catalyst dot pattern is formed on the surface of the tipless cantilever using electron beam lithography and CNF growth is performed in a direct-current plasma enhanced chemical vapor deposition reactor. Because the CNF is aligned with the electric field near the edge of the cantilever during growth, it is tilted with respect to the cantilever surface, which compensates partially for the probe tilt introduced when used in scanning probe microscopy. CNFs with different shapes and tip radii can be produced by variation of experimental conditions. The tip geometries of the CNF probes are defined by their catalyst particles, whose magnetic nature also imparts a capability for imaging magnetic samples. We have demonstrated their use in both atomic force and magnetic force surface imaging. These probe tips may provide information on magnetic phenomena at the nanometer scale in connection with the drive for ever-increasing storage density of magnetic hard disks.
NASA Astrophysics Data System (ADS)
Chung, Pil Seung; Song, Wonyup; Biegler, Lorenz T.; Jhon, Myung S.
2017-05-01
During the operation of a hard disk drive (HDD), the perfluoropolyether (PFPE) lubricant experiences elastic or viscous shear/elongation deformations, which affect the performance and reliability of the HDD. Therefore, the viscoelastic responses of PFPE could provide a fingerprint for designing the optimal molecular architecture of lubricants to control these tribological phenomena. In this paper, we examine the rheological responses of PFPEs, including the storage (elastic) and loss (viscous) moduli (G' and G″), by monitoring the time-dependent stress-strain relationship via non-equilibrium molecular dynamics simulations. We analyzed the rheological responses using the Cox-Merz rule, and investigated the molecular-structural and thermal effects on the solid-like and liquid-like behaviors of PFPEs. The temperature dependence of the endgroup agglomeration phenomena was examined; the functional endgroups decouple as the temperature increases. By analyzing the relaxation processes, these molecular rheological studies provide optimal lubricant selection criteria to enhance HDD performance and reliability for heat-assisted magnetic recording applications.
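The storage and loss moduli named above follow from the standard oscillatory definitions: for an imposed strain γ(t) = γ₀sin(ωt), G′ and G″ are the in-phase and out-of-phase components of the stress response, G′ = (σ₀/γ₀)cos δ and G″ = (σ₀/γ₀)sin δ. A sketch that recovers them by Fourier projection from a synthetic stress signal (all numerical values invented for illustration):

```python
import math

def moduli(gamma0, omega, stress, times):
    """Extract storage (G') and loss (G'') moduli by projecting the stress
    signal onto sin/cos at the strain frequency over one full period."""
    dt = times[1] - times[0]
    period = times[-1] + dt
    gp = sum(s * math.sin(omega * t) for s, t in zip(stress, times)) \
        * 2 * dt / (gamma0 * period)
    gpp = sum(s * math.cos(omega * t) for s, t in zip(stress, times)) \
        * 2 * dt / (gamma0 * period)
    return gp, gpp

# synthetic response: sigma = sigma0*sin(wt + delta) to strain gamma0*sin(wt)
gamma0, sigma0, omega, delta = 0.01, 50.0, 2.0, math.pi / 6
times = [k * (2 * math.pi / omega) / 1000 for k in range(1000)]  # one period
stress = [sigma0 * math.sin(omega * t + delta) for t in times]
gp, gpp = moduli(gamma0, omega, stress, times)
print(round(gp), round(gpp))  # recovers (sigma0/gamma0)*cos(delta), *sin(delta)
```

A δ near 0 means solid-like (G′ dominates); δ near π/2 means liquid-like (G″ dominates), which is the distinction the paper draws for PFPEs.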
NASA Astrophysics Data System (ADS)
Sullivan, Gary J.; Topiwala, Pankaj N.; Luthra, Ajay
2004-11-01
H.264/MPEG-4 AVC is the latest international video coding standard. It was jointly developed by the Video Coding Experts Group (VCEG) of the ITU-T and the Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, video conferencing, TV, storage (DVD and/or hard disk based, especially high-definition DVD), streaming video, digital video authoring, digital cinema, and many others. The work on a new set of extensions to this standard has recently been completed. These extensions, known as the Fidelity Range Extensions (FRExt), provide a number of enhanced capabilities relative to the base specification as approved in the Spring of 2003. In this paper, an overview of this standard is provided, including the highlights of the capabilities of the new FRExt features. Some comparisons with the existing MPEG-2 and MPEG-4 Part 2 standards are also provided.
Atomistic modeling of L10 FePt: path to HAMR 5Tb/in2
NASA Astrophysics Data System (ADS)
Chen, Tianran; Benakli, Mourad; Rea, Chris
2015-03-01
Heat assisted magnetic recording (HAMR) is a promising approach for increasing the storage density of hard disk drives. To increase data density, information must be written in small grains, which requires materials with high anisotropy energy such as L10 FePt. On the other hand, high anisotropy implies high coercivity, making it difficult to write the data with existing recording heads. This issue can be overcome by the technique of HAMR, where a laser is used to heat the recording medium to reduce its coercivity, while good thermal stability at room temperature is retained due to the large anisotropy energy. One of the keys to the success of HAMR is precise control of the writing process. In this talk, I will propose a Monte Carlo simulation, based on an atomistic model, that allows us to study the magnetic properties of L10 FePt and the dynamics of spin reversal during the writing process in HAMR.
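One way to see why heating enables writing is a Néel-Arrhenius estimate of the grain-switching probability, with an anisotropy barrier that collapses as the medium approaches its Curie temperature. This is a hedged toy model, not the talk's atomistic simulation; all parameter values (attempt frequency, barrier height, Curie temperature) are illustrative:

```python
import math

def switch_prob(T, t_ns=1.0, f0=1e10, barrier300_kT=60.0, Tc=750.0):
    """Neel-Arrhenius switching probability for a single grain, with an
    illustrative temperature-dependent barrier K(T) ~ K0 * (1 - T/Tc)."""
    barrier_over_kT = barrier300_kT * (300.0 / T) * (1 - T / Tc) / (1 - 300.0 / Tc)
    rate = f0 * math.exp(-barrier_over_kT)   # attempts/s over the barrier
    return 1.0 - math.exp(-rate * t_ns * 1e-9)

print(f"{switch_prob(300):.1e}")   # effectively zero: stable at room temperature
print(f"{switch_prob(700):.2f}")   # order unity: writable when heated near Tc
```

The same barrier that makes the bit thermally stable for years at 300 K becomes crossable within a nanosecond write window once the laser heats the grain toward Tc.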
The Dynamics of Truncated Black Hole Accretion Disks. II. Magnetohydrodynamic Case
NASA Astrophysics Data System (ADS)
Hogg, J. Drew; Reynolds, Christopher S.
2018-02-01
We study a truncated accretion disk using a well-resolved, semi-global magnetohydrodynamic simulation that is evolved for many dynamical times (6096 inner disk orbits). The spectral properties of hard-state black hole binary systems and low-luminosity active galactic nuclei are regularly attributed to truncated accretion disks, but a detailed understanding of the flow dynamics is lacking. In these systems the truncation is expected to arise through thermal instability driven by sharp changes in the radiative efficiency. We emulate this behavior using a simple bistable cooling function with efficient and inefficient branches. The accretion flow takes on an arrangement where a “transition zone” exists in between hot gas in the innermost regions and a cold, Shakura & Sunyaev thin disk at larger radii. The thin disk is embedded in an atmosphere of hot gas that is fed by a gentle outflow originating from the transition zone. Despite the presence of hot gas in the inner disk, accretion is efficient. Our analysis focuses on the details of the angular momentum transport, energetics, and magnetic field properties. We find that the magnetic dynamo is suppressed in the hot, truncated inner region of the disk which lowers the effective α-parameter by 65%.
NASA Astrophysics Data System (ADS)
Torrisi, A.; Torrisi, V.; Tuccitto, N.; Gandolfi, M. G.; Prati, C.; Licciardello, A.
2010-01-01
ToF-SIMS images were obtained from a section of a tooth, obturated by means of a new calcium-silicate based cement (wTCF) after storage for 1 month in a saline solution (DPBS), in order to simulate the effects of body fluids on the obturation. Afterwards, ToF-SIMS spectra were obtained from model samples, prepared by using the same cement paste, after storage for 1 month and 8 months in two different saline solutions (DPBS and HBSS). ToF-SIMS spectra were also obtained from fluorine-free cement (wTC) samples after storage in HBSS for 1 month and 8 months and used for comparison. It was found that the composition of both the saline solution and the cement influenced the composition of the surface of the disks, and that the longer the storage, the greater the differences. Segregation phenomena occur both on the cement obturation of the tooth and on the surface of the disks prepared by using the same cement. Indirect evidence of the formation of new crystalline phases is supplied.
NASA Astrophysics Data System (ADS)
You, Bei; Bursa, Michal; Życki, Piotr T.
2018-05-01
We develop a Monte Carlo code to compute the Compton-scattered X-ray flux arising from a hot inner flow that undergoes Lense–Thirring precession. The hot flow intercepts seed photons from an outer truncated thin disk. A fraction of the Comptonized photons will illuminate the disk, and the reflected/reprocessed photons will contribute to the observed spectrum. The total spectrum, including disk thermal emission, hot flow Comptonization, and disk reflection, is modeled within the framework of general relativity, taking light bending and gravitational redshift into account. The simulations are performed in the context of the Lense–Thirring precession model for the low-frequency quasi-periodic oscillations, so the inner flow is assumed to precess, leading to periodic modulation of the emitted radiation. In this work, we concentrate on the energy-dependent X-ray variability of the model and, in particular, on the evolution of the variability during the spectral transition from hard to soft state, which is implemented by the decrease of the truncation radius of the outer disk toward the innermost stable circular orbit. In the hard state, where the Comptonizing flow is geometrically thick, the Comptonization is weakly variable, with a fractional variability amplitude of ≤10%. In the soft state, where the Comptonizing flow has cooled down and thus become geometrically thin, the Comptonization is highly variable, with a fractional variability that increases with photon energy. The fractional variability of the reflection increases with energy, and the reflection emission for low spin is counterintuitively more variable than that for high spin.
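A building block of any Comptonization Monte Carlo like the one described above is the single-scatter energy shift. The sketch below implements the textbook Compton formula for a photon scattering off an electron at rest; a real code such as the one in the abstract would boost into the frame of the moving hot electron first, which is how upscattering arises. The function name and units are illustrative.

```python
ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_scattered_energy(e_kev, cos_theta):
    """Photon energy after one Compton scatter off an electron at rest:
    E' = E / (1 + (E / m_e c^2) * (1 - cos(theta)))."""
    return e_kev / (1.0 + (e_kev / ME_C2_KEV) * (1.0 - cos_theta))

# 511 keV photon backscattered (theta = 180 deg) drops to 511/3 keV
backscatter = compton_scattered_energy(511.0, -1.0)
```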
Influence of technology on magnetic tape storage device characteristics
NASA Technical Reports Server (NTRS)
Gniewek, John J.; Vogel, Stephen M.
1994-01-01
Many data storage devices are available today that serve the diverse application requirements of the consumer, professional entertainment, and computer data processing industries. Storage technologies include semiconductors, several varieties of optical disk, optical tape, magnetic disk, and many varieties of magnetic tape. In some cases, devices are developed with specific characteristics to meet specification requirements. In other cases, an existing storage device is modified and adapted to a different application. For magnetic tape storage devices, examples of the former case are the 3480/3490 and QIC device types developed for the high-end and low-end segments of the data processing industry, respectively; the VHS, Beta, and 8 mm formats developed for consumer video applications; and the D-1, D-2, and D-3 formats developed for professional video applications. Examples of modified and adapted devices include 4 mm, 8 mm, 12.7 mm and 19 mm computer data storage devices derived from consumer and professional audio and video applications. With the conversion of the consumer and professional entertainment industries from analog to digital storage and signal processing, there have been increasing references to the 'convergence' of the computer data processing and entertainment industry technologies. There has yet to be seen, however, any evidence of convergence of data storage device types. There are several reasons for this. The diversity of application requirements results in varying degrees of importance for each of the tape storage characteristics.
Elastic properties of dense solid phases of hard cyclic pentamers and heptamers in two dimensions.
Wojciechowski, K W; Tretiakov, K V; Kowalik, M
2003-03-01
Systems of model planar, nonconvex, hard-body "molecules" with fivefold and sevenfold symmetry axes are studied by constant pressure Monte Carlo simulations with variable shape of the periodic box. The molecules, referred to as pentamers (heptamers), are composed of five (seven) identical hard-disk "atoms" with centers forming regular pentagons (heptagons) of sides equal to the disk diameter. The elastic compliances of defect-free solid phases are computed by analysis of strain fluctuations, and the reference (equilibrium) state is determined within the same run in which the elastic properties are computed. Results obtained by using pseudorandom number generators based on the idea proposed by Holian and co-workers [Holian et al., Phys. Rev. E 50, 1607 (1994)] are in good agreement with the results generated by DRAND48. It is shown that the singular behavior of the elastic constants near close packing is in agreement with the free volume approximation; the coefficients of the leading singularities are estimated. The simulations prove that the highest density structures of heptamers (in which the molecules cannot rotate) are auxetic, i.e., show negative Poisson ratios.
Radio continuum of galaxies with H2O megamaser disks: 33 GHz VLA data
NASA Astrophysics Data System (ADS)
Kamali, F.; Henkel, C.; Brunthaler, A.; Impellizzeri, C. M. V.; Menten, K. M.; Braatz, J. A.; Greene, J. E.; Reid, M. J.; Condon, J. J.; Lo, K. Y.; Kuo, C. Y.; Litzinger, E.; Kadler, M.
2017-09-01
Context. Galaxies with H2O megamaser disks are active galaxies in whose edge-on accretion disks 22 GHz H2O maser emission has been detected. Because their geometry is known, they provide a unique view into the properties of active galactic nuclei. Aims: The goal of this work is to investigate the nuclear environment of galaxies with H2O maser disks and to relate the maser and host galaxy properties to those of the radio continuum emission of the galaxy. Methods: The 33 GHz (9 mm) radio continuum properties of 24 galaxies with reported 22 GHz H2O maser emission from their disks are studied in the context of the multiwavelength view of these sources. The 29-37 GHz Ka-band observations are made with the Karl Jansky Very Large Array in B, CnB, or BnA configurations, achieving a resolution of 0.2-0.5 arcsec. Hard X-ray data from the Swift/BAT survey, 22 μm infrared data from WISE, 22 GHz H2O maser data, and 1.4 GHz data from the NVSS and FIRST surveys are also included in the analysis. Results: Eighty-seven percent (21 out of 24) of the galaxies in our sample show 33 GHz radio continuum emission at levels of 4.5-240σ. Five sources show extended emission (deconvolved source size larger than 2.5 times the major axis of the beam), including one source with two main components and one with three main components. The remaining 16 detected sources (and also some of the above-mentioned targets) exhibit compact cores within the sensitivity limits. Little evidence is found for extended jets (>300 pc) in most sources. Either they do not exist, or our chosen frequency of 33 GHz is too high for a detection of these supposedly steep spectrum features. In NGC 4388, we find an extended jet-like feature that appears to be oriented perpendicular to the H2O megamaser disk. NGC 2273 is another candidate whose radio continuum source might be elongated perpendicular to the maser disk.
Smaller 100-300 pc sized jets might also be present, as is suggested by the beam-deconvolved morphology of our sources. Whenever possible, central positions with accuracies of 20-280 mas are provided. A correlation analysis shows that the 33 GHz luminosity weakly correlates with the infrared luminosity. The 33 GHz luminosity is anticorrelated with the circular velocity of the galaxy. The black hole masses show stronger correlations with H2O maser luminosity than with 1.4 GHz, 33 GHz, or hard X-ray luminosities. Furthermore, the inner radii of the disks show stronger correlations with 1.4 GHz, 33 GHz, and hard X-ray luminosities than their outer radii, suggesting that the outer radii may be affected by disk warping, star formation, or peculiar density distributions.
Transport coefficients for dense hard-disk systems.
García-Rojo, Ramón; Luding, Stefan; Brey, J Javier
2006-12-01
A study of the transport coefficients of a system of elastic hard disks, based on the use of Helfand-Einstein expressions, is reported. The self-diffusion, the viscosity, and the heat conductivity are examined with averaging techniques especially appropriate for event-driven molecular dynamics algorithms with periodic boundary conditions. The density and size dependence of the results are analyzed, and comparison with the predictions from Enskog's theory is carried out. In particular, the behavior of the transport coefficients in the vicinity of the fluid-solid transition is investigated, and a striking power law divergence of the viscosity with density is obtained in this region, while all other examined transport coefficients show a drop in that density range relative to the Enskog prediction. Finally, the deviations are related to shear band instabilities and the concept of dilatancy.
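For self-diffusion, the Helfand-Einstein route mentioned above reduces to the familiar Einstein mean-square-displacement estimator, D = <|r(t) - r(0)|^2> / (2 d t) in d dimensions. The sketch below applies it to a toy 2D Brownian walk with a known answer rather than to event-driven hard-disk dynamics; the walker count, step size, and seed are arbitrary choices.

```python
import numpy as np

def self_diffusion_coefficient(r0, r1, dt):
    """Einstein estimator in 2D: D = <|r(t) - r(0)|^2> / (4 t)."""
    msd = np.mean(np.sum((r1 - r0) ** 2, axis=1))
    return msd / (4.0 * dt)

rng = np.random.default_rng(1)
n_walkers, n_steps, sigma = 4000, 200, 0.1
r0 = np.zeros((n_walkers, 2))
# each walker takes n_steps Gaussian steps of std sigma per component
r1 = r0 + rng.normal(0.0, sigma, size=(n_walkers, n_steps, 2)).sum(axis=1)
D_est = self_diffusion_coefficient(r0, r1, dt=n_steps)
# analytic value for this toy walk: D = sigma**2 / 2 = 0.005
```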
JARE Syowa Station 11-m Antenna, Antarctica
NASA Technical Reports Server (NTRS)
Aoyama, Yuichi; Doi, Koichiro; Shibuya, Kazuo
2013-01-01
In 2012, the 52nd and the 53rd Japanese Antarctic Research Expeditions (hereinafter, referred to as JARE-52 and JARE-53, respectively) participated in five OHIG sessions - OHIG76, 78, 79, 80, and 81. These data were recorded on hard disks through the K5 terminal. Only the hard disks for the OHIG76 session have been brought back from Syowa Station to Japan, in April 2012, by the icebreaker, Shirase, while those of the other four sessions are scheduled to arrive in April 2013. The data obtained from the OHIG73, 74, 75, and 76 sessions by JARE-52 and JARE-53 have been transferred to the Bonn Correlator via the servers of National Institute of Information and Communications Technology (NICT). At Syowa Station, JARE-53 and JARE-54 will participate in six OHIG sessions in 2013.
Scaling laws and bulk-boundary decoupling in heat flow.
del Pozo, Jesús J; Garrido, Pedro L; Hurtado, Pablo I
2015-03-01
When driven out of equilibrium by a temperature gradient, fluids respond by developing a nontrivial, inhomogeneous structure according to the governing macroscopic laws. Here we show that such structure obeys strikingly simple scaling laws arbitrarily far from equilibrium, provided that both macroscopic local equilibrium and Fourier's law hold. Extensive simulations of hard disk fluids confirm the scaling laws even under strong temperature gradients, implying that Fourier's law remains valid in this highly nonlinear regime, with putative corrections absorbed into a nonlinear conductivity functional. In addition, our results show that the scaling laws are robust in the presence of strong finite-size effects, hinting at a subtle bulk-boundary decoupling mechanism which enforces the macroscopic laws on the bulk of the finite-sized fluid. This allows one to measure the marginal anomaly of the heat conductivity predicted for hard disks.
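The interplay of Fourier's law with a temperature-dependent conductivity is what generates the nontrivial steady-state structure discussed above. As an illustration, under the kinetic-theory assumption κ ∝ √T for hard disks (and ignoring the anomalous corrections the paper discusses), a constant heat flux J = -κ(T) dT/dx forces T^{3/2} to vary linearly between the wall temperatures:

```python
import numpy as np

def steady_profile(T_left, T_right, n=101):
    """Steady T(x) on [0, 1] for Fourier's law with kappa = kappa0 * sqrt(T).

    J = -kappa0 * sqrt(T) * dT/dx = const  =>  T(x)**1.5 is linear in x.
    """
    x = np.linspace(0.0, 1.0, n)
    t32 = T_left ** 1.5 + (T_right ** 1.5 - T_left ** 1.5) * x
    return x, t32 ** (2.0 / 3.0)

x, T = steady_profile(2.0, 1.0)  # hot wall at x=0, cold wall at x=1
```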
SODR Memory Control Buffer Control ASIC
NASA Technical Reports Server (NTRS)
Hodson, Robert F.
1994-01-01
The Spacecraft Optical Disk Recorder (SODR) is a state of the art mass storage system for future NASA missions requiring high transmission rates and a large capacity storage system. This report covers the design and development of an SODR memory buffer control application-specific integrated circuit (ASIC). The memory buffer control ASIC has two primary functions: (1) buffering data to prevent loss of data during disk access times; (2) converting data formats from a high performance parallel interface format to a small computer systems interface format. Ten 144-pin, 50 MHz CMOS ASICs were designed, fabricated and tested to implement the memory buffer control function.
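The ASIC's first function, buffering data so none is lost while the optical disk is busy with an access, is in essence a bounded FIFO. A purely illustrative software sketch of that role (the class name and capacity are invented, not the SODR design):

```python
from collections import deque

class StreamBuffer:
    """Bounded FIFO that absorbs incoming words while the disk is busy."""

    def __init__(self, capacity_words):
        self.buf = deque()
        self.capacity = capacity_words

    def write(self, word):
        # a full buffer means the producer outran the disk: data would be lost
        if len(self.buf) >= self.capacity:
            raise OverflowError("buffer full: data would be lost")
        self.buf.append(word)

    def read(self, max_words):
        # drain up to max_words in arrival (FIFO) order
        out = []
        while self.buf and len(out) < max_words:
            out.append(self.buf.popleft())
        return out

fifo = StreamBuffer(capacity_words=4)
for w in (10, 11, 12):
    fifo.write(w)
first_two = fifo.read(2)  # arrives back in FIFO order
```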
Global EOS: exploring the 300-ms-latency region
NASA Astrophysics Data System (ADS)
Mascetti, L.; Jericho, D.; Hsu, C.-Y.
2017-10-01
EOS, the CERN open-source distributed disk storage system, provides the high-performance storage solution for HEP analysis and the back-end for various workflows. Recently EOS became the back-end of CERNBox, the cloud synchronisation service for CERN users. EOS can be used to take advantage of wide-area distributed installations: for the last few years CERN EOS has used a common deployment across two computer centres (Geneva-Meyrin and Budapest-Wigner) about 1,000 km apart (∼20-ms latency) with about 200 PB of disk (JBOD). In late 2015, the CERN-IT Storage group and AARNET (Australia) set up a challenging R&D project: a single EOS instance between CERN and AARNET with more than 300 ms of latency (16,500 km apart). This paper reports on the successful deployment and operation of a distributed storage system between Europe (Geneva, Budapest), Australia (Melbourne) and later Asia (ASGC Taipei), allowing different types of data placement and data access across these four sites.
Ceph-based storage services for Run2 and beyond
NASA Astrophysics Data System (ADS)
van der Ster, Daniel C.; Lamanna, Massimo; Mascetti, Luca; Peters, Andreas J.; Rousseau, Hervé
2015-12-01
In 2013, CERN IT evaluated then deployed a petabyte-scale Ceph cluster to support OpenStack use-cases in production. With now more than a year of smooth operations, we will present our experience and tuning best-practices. Beyond the cloud storage use-cases, we have been exploring Ceph-based services to satisfy the growing storage requirements during and after Run2. First, we have developed a Ceph back-end for CASTOR, allowing this service to deploy thin disk server nodes which act as gateways to Ceph; this feature marries the strong data archival and cataloging features of CASTOR with the resilient and high performance Ceph subsystem for disk. Second, we have developed RADOSFS, a lightweight storage API which builds a POSIX-like filesystem on top of the Ceph object layer. When combined with Xrootd, RADOSFS can offer a scalable object interface compatible with our HEP data processing applications. Lastly the same object layer is being used to build a scalable and inexpensive NFS service for several user communities.
Building an organic block storage service at CERN with Ceph
NASA Astrophysics Data System (ADS)
van der Ster, Daniel; Wiebalck, Arne
2014-06-01
Emerging storage requirements, such as the need for block storage for both OpenStack VMs and file services like AFS and NFS, have motivated the development of a generic backend storage service for CERN IT. The goals for such a service include (a) vendor neutrality, (b) horizontal scalability with commodity hardware, (c) fault tolerance at the disk, host, and network levels, and (d) support for geo-replication. Ceph is an attractive option due to its native block device layer RBD which is built upon its scalable, reliable, and performant object storage system, RADOS. It can be considered an "organic" storage solution because of its ability to balance and heal itself while living on an ever-changing set of heterogeneous disk servers. This work will present the outcome of a petabyte-scale test deployment of Ceph by CERN IT. We will first present the architecture and configuration of our cluster, including a summary of best practices learned from the community and discovered internally. Next the results of various functionality and performance tests will be shown: the cluster has been used as a backend block storage system for AFS and NFS servers as well as a large OpenStack cluster at CERN. Finally, we will discuss the next steps and future possibilities for Ceph at CERN.
Spin-Valve and Spin-Tunneling Devices: Read Heads, MRAMs, Field Sensors
NASA Astrophysics Data System (ADS)
Freitas, P. P.
Hard disk magnetic data storage is increasing at a steady rate in terms of units sold, with 144 million drives sold in 1998 (107 million for desktops, 18 million for portables, and 19 million for enterprise drives), corresponding to a total business of 34 billion US dollars [1]. The growing need for storage coming from new PC operating systems, Internet applications, and a foreseen explosion of applications connected to consumer electronics (digital TV, video, digital cameras, GPS systems, etc.), keep the magnetics community actively looking for new solutions concerning media, heads, tribology, and system electronics. Current state of the art disk drives (January 2000), using dual inductive-write, magnetoresistive-read (MR) integrated heads, reach areal densities of 15 to 23 bit/μm2, capable of putting a full 20 GB on one platter (a 2 hour film occupies 10 GB). Densities beyond 80 bit/μm2 have already been demonstrated in the laboratory (Fujitsu 87 bit/μm2 at Intermag 2000, Hitachi 81 bit/μm2, Read-Rite 78 bit/μm2, Seagate 70 bit/μm2, the last three demos done in the first 6 months of 2000, with IBM having demonstrated 56 bit/μm2 already at the end of 1999). At densities near 60 bit/μm2, the linear bit size is ~43 nm, and the width of the written tracks is ~0.23 μm. Areal density in commercial drives is increasing steadily at a rate of nearly 100% per year [1], and consumer products above 60 bit/μm2 are expected by 2002. These remarkable achievements are only possible through a stream of technological innovations in media [2], write heads [3], read heads [4], and system electronics [5]. In this chapter, recent advances in spin valve materials and spin valve sensor architectures, low resistance tunnel junctions, and tunnel junction head architectures will be addressed.
Intelligent holographic databases
NASA Astrophysics Data System (ADS)
Barbastathis, George
Memory is a key component of intelligence. In the human brain, physical structure and functionality jointly provide diverse memory modalities at multiple time scales. How could we engineer artificial memories with similar faculties? In this thesis, we attack both hardware and algorithmic aspects of this problem. A good part is devoted to holographic memory architectures, because they meet high capacity and parallelism requirements. We develop and fully characterize shift multiplexing, a novel storage method that simplifies disk head design for holographic disks. We develop and optimize the design of compact refreshable holographic random access memories, showing several ways that 1 Tbit can be stored holographically in volume less than 1 m3, with surface density more than 20 times higher than conventional silicon DRAM integrated circuits. To address the issue of photorefractive volatility, we further develop the two-lambda (dual wavelength) method for shift multiplexing, and combine electrical fixing with angle multiplexing to demonstrate 1,000 multiplexed fixed holograms. Finally, we propose a noise model and an information theoretic metric to optimize the imaging system of a holographic memory, in terms of storage density and error rate. Motivated by the problem of interfacing sensors and memories to a complex system with limited computational resources, we construct a computer game of Desert Survival, built as a high-dimensional non-stationary virtual environment in a competitive setting. The efficacy of episodic learning, implemented as a reinforced Nearest Neighbor scheme, and the probability of winning against a control opponent improve significantly by concentrating the algorithmic effort to the virtual desert neighborhood that emerges as most significant at any time. 
The generalized computational model combines the autonomous neural network and von Neumann paradigms through a compact, dynamic central representation, which contains the most salient features of the sensory inputs, fused with relevant recollections, reminiscent of the hypothesized cognitive function of awareness. The Declarative Memory is searched both by content and address, suggesting a holographic implementation. The proposed computer architecture may lead to a novel paradigm that solves 'hard' cognitive problems at low cost.
Overview of emerging nonvolatile memory technologies.
Meena, Jagan Singh; Sze, Simon Min; Chand, Umesh; Tseng, Tseung-Yuen
2014-01-01
Nonvolatile memory technologies in Si-based electronics date back to the 1990s. The ferroelectric field-effect transistor (FeFET) was one of the most promising devices for replacing conventional Flash memory, which was facing physical scaling limitations at the time. A variant of charge storage memory referred to as Flash memory is widely used in consumer electronic products such as cell phones and music players, while NAND Flash-based solid-state disks (SSDs) are increasingly displacing hard disk drives as the primary storage device in laptops, desktops, and even data centers. The integration limit of Flash memories is approaching, and many new types of memory to replace conventional Flash memories have been proposed. Emerging memory technologies promise to store more data at less cost than the expensive-to-build silicon chips used by popular consumer gadgets including digital cameras, cell phones and portable music players. They are being investigated as potential alternatives to existing memories in future computing systems. Emerging nonvolatile memory technologies such as magnetic random-access memory (MRAM), spin-transfer torque random-access memory (STT-RAM), ferroelectric random-access memory (FeRAM), phase-change memory (PCM), and resistive random-access memory (RRAM) combine the speed of static random-access memory (SRAM), the density of dynamic random-access memory (DRAM), and the nonvolatility of Flash memory, and so become very attractive candidates for future memory hierarchies. Many other new classes of emerging memory technologies, such as transparent and plastic, three-dimensional (3-D), and quantum dot memory technologies, have also gained tremendous popularity in recent years. It is thus no exaggeration to say that computer memory could soon earn the ultimate commercial validation for scale-up and production: the cheap plastic knockoff.
Therefore, this review is devoted to this rapidly developing new class of memory technologies, based on an investigation of recent progress in advanced Flash memory devices and their scaling.
Mayworm, Camila D; Camargo, Sérgio S; Bastian, Fernando L
2008-09-01
The aim of this study is to compare the wear resistance and hardness of two dental nanohybrid composites and to evaluate the influence of artificial saliva storage on those properties. Specimens were made from two commercial nanohybrid dental composites (Esthet-X, Dentsply and Filtek Supreme, 3M). Abrasion tests were carried out in a ball-cratering machine (three-body abrasion) and microscopic analysis of the wear surfaces was made using optical and scanning electron microscopy; hardness was quantified by the Vickers hardness test. The tests were repeated on specimens stored in artificial saliva. Results show that the wear rate of the studied materials is within the 10(-7) mm(3)/N mm range, with one of the composites presenting a wear rate twice as large as the other's. After storage in artificial saliva, the wear resistance increases for both materials. Microhardness of the composites is around 52 and 64 HV, with Esthet-X presenting higher hardness values than Filtek Supreme. After storage in artificial saliva, the microhardness of both materials decreases. Data were analyzed using the ANOVA test, p ≤ 0.05. Artificial saliva storage increases the materials' wear resistance, suggesting that in both materials bulk post-cure takes place and saliva absorption occurs only on the surface of the composites. This effect was confirmed by comparing the Vickers hardness before and after artificial saliva treatment and by FTIR analyses. Surface microhardness of the composites decreases after storage in artificial saliva, whereas bulk microhardness of the materials increases.
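Vickers numbers like those reported above (around 52 and 64 HV) follow from the standard indentation formula HV = 1.8544·F/d², with load F in kgf and mean indentation diagonal d in mm (1.8544 = 2·sin(136°/2) for the standard pyramidal indenter). A one-line helper; the example inputs are arbitrary, not the study's measurements:

```python
def vickers_hardness(load_kgf, diagonal_mm):
    """Vickers hardness from the standard formula HV = 1.8544 * F / d**2
    (F in kgf, mean indentation diagonal d in mm)."""
    return 1.8544 * load_kgf / diagonal_mm ** 2

# smaller indentation diagonal at the same load means a harder material
hv_example = vickers_hardness(0.2, 0.08)
```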
45 CFR 286.260 - May Tribes use sampling and electronic filing?
Code of Federal Regulations, 2010 CFR
2010-10-01
... quarterly reports electronically, based on format specifications that we will provide. Tribes who do not have the capacity to submit reports electronically may submit quarterly reports on a disk or in hard...
The Seven Deadly Sins of Online Microcomputing.
ERIC Educational Resources Information Center
King, Alan
1989-01-01
Offers suggestions for avoiding common errors in online microcomputer use. Areas discussed include learning the basics; hardware protection; backup options; hard disk organization; software selection; file security; and the use of dedicated communications lines. (CLB)
RAMOS, Marcelo Barbosa; PEGORARO, Thiago Amadei; PEGORARO, Luiz Fernando; CARVALHO, Ricardo Marins
2012-01-01
Objectives: To determine the micro-hardness profile of two dual cure resin cements (RelyX U100®, 3M-ESPE and Panavia F 2.0®, Kuraray) used for cementing fiber-reinforced resin posts (Fibrekor®, Jeneric Pentron) under three different curing protocols and two water storage times. Material and methods: Sixty 16-mm-long bovine incisor roots were endodontically treated and prepared for cementation of the Fibrekor posts. The cements were mixed as instructed and dispensed in the canal, the posts were seated, and the curing was performed as follows: (a) no light activation; (b) light activation immediately after seating the post; and (c) light activation delayed 5 minutes after seating the post. The teeth were stored in water and retrieved for analysis after 7 days and 3 months. The roots were longitudinally sectioned and the microhardness was determined at the cervical, middle and apical regions along the cement line. The data were analyzed by a three-way ANOVA test (curing mode, storage time and thirds) for each cement. The Tukey test was used for the post-hoc analysis. Results: Light activation resulted in a significant increase in the microhardness. This was more evident for the cervical region and for the Panavia cement. Storage in water for 3 months caused a reduction of the micro-hardness for both cements. The U100 cement showed less variation in the micro-hardness regardless of the curing protocol and storage time. Conclusions: The micro-hardness of the cements was affected by the curing and storage variables and was material-dependent. PMID:23138743
Kanerva's sparse distributed memory with multiple Hamming thresholds
NASA Technical Reports Server (NTRS)
Pohja, Seppo; Kaski, Kimmo
1992-01-01
If the stored input patterns of Kanerva's Sparse Distributed Memory (SDM) are highly correlated, utilization of the storage capacity is very low compared to the case of uniformly distributed random input patterns. We consider a variation of SDM that has better storage capacity utilization for correlated input patterns. This approach uses a separate selection threshold for each physical storage address, or hard location. The selection of the hard locations for reading or writing can be done in parallel, a property from which SDM implementations can benefit.
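The per-location threshold idea can be sketched directly: each hard location keeps its own Hamming radius instead of sharing one global activation radius. The implementation below is a minimal illustration using standard SDM counter storage; the dimensions, radii, and seed are arbitrary choices, not taken from this paper.

```python
import numpy as np

class SparseDistributedMemory:
    """SDM sketch with a separate Hamming threshold per hard location."""

    def __init__(self, n_locations, dim, thresholds, rng):
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.thresholds = np.asarray(thresholds)  # one activation radius each

    def _selected(self, address):
        # activate every hard location within its own Hamming radius
        dist = np.sum(self.addresses != address, axis=1)
        return dist <= self.thresholds

    def write(self, address, data):
        # increment counters for 1-bits, decrement for 0-bits
        self.counters[self._selected(address)] += np.where(data == 1, 1, -1)

    def read(self, address):
        # majority vote over the counters of the activated locations
        total = self.counters[self._selected(address)].sum(axis=0)
        return (total > 0).astype(int)

rng = np.random.default_rng(0)
dim = 32
sdm = SparseDistributedMemory(200, dim, np.full(200, 12), rng)
pattern = rng.integers(0, 2, size=dim)
cue = rng.integers(0, 2, size=dim)
sdm.write(cue, pattern)
recalled = sdm.read(cue)
```

Reading back with the writing cue recovers the stored pattern exactly here, since a single write leaves all activated counters with consistent signs; varying the per-location radii is what lets the memory adapt its activated sets to correlated inputs.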
From Physics to industry: EOS outside HEP
NASA Astrophysics Data System (ADS)
Espinal, X.; Lamanna, M.
2017-10-01
In the competitive market for large-scale storage solutions, EOS, the current main disk storage system at CERN, has been showing its excellence in the multi-petabyte, high-concurrency regime. It has also shown disruptive potential in powering sync and share services and in supporting innovative analysis environments alongside the storage of LHC data. EOS has also generated interest as a generic storage solution, ranging from university systems to very large installations for non-HEP applications.
HECWRC, Flood Flow Frequency Analysis Computer Program 723-X6-L7550
1989-02-14
Price includes documentation; price code: D01, $50.00. Memory required is 256 KB. The software requires a floppy disk drive (360 KB or 1.2 MB); a 10 MB or larger hard disk is recommended. A math coprocessor (8087/80287/80387) is highly recommended but not required.
1989-06-01
the Chemistry Department, and the WHOI Education Office for providing financial support and a nice place to work. Parts of this research were funded by...and erosion studies is unknown. 1.5 OBJECTIVES The objectives of this research are 1) to quantify the diffusive mobility of helium isotopes in...specifically tailored for the diffusion experiments. Data is recorded on a hard disk and on paper, and is automatically backed up to floppy disks.
Simulation of aerodynamic noise and vibration noise in hard disk drives
NASA Astrophysics Data System (ADS)
Zhu, Lei; Shen, Sheng-Nan; Li, Hui; Zhang, Guo-Qing; Cui, Fu-Hao
2018-05-01
Internal flow field characteristics of HDDs are usually influenced by the arm swing during seek operations. This, in turn, can affect aerodynamic noise and airflow-induced vibration noise. In this paper, the dynamic mesh method is used to calculate the flow-induced vibration (FIV) by transient structural analysis, and the boundary element method (BEM) is utilized to predict the vibration noise. Two operational states are considered: the arm fixed, and the arm swinging over the disk. Both aerodynamic noise and vibration noise inside the drive increase rapidly with disk rotation and arm swing velocities. The largest aerodynamic noise source is always located near the arm and swings with it.
Online performance evaluation of RAID 5 using CPU utilization
NASA Astrophysics Data System (ADS)
Jin, Hai; Yang, Hua; Zhang, Jiangling
1998-09-01
Redundant arrays of independent disks (RAID) technology is an efficient way to address the bottleneck between CPU processing ability and the I/O subsystem. From the system point of view, the most important on-line performance metric is CPU utilization. This paper first calculates the CPU utilization of a system connected to a RAID level 5 subsystem using a statistical averaging method. The simulation results show that using multiple disks as an array to access data in parallel is an efficient way to enhance the on-line performance of a disk storage system, and that using high-end disk drives to compose the disk array is key to enhancing the on-line performance of the system.
NASA Technical Reports Server (NTRS)
Fertig, D.; Mukai, K.; Nelson, T.; Cannizzo, J. K.
2011-01-01
In a dwarf nova, the accretion disk around the white dwarf is a source of ultraviolet, optical, and infrared photons, but is never hot enough to emit X-rays. Observed X-rays instead originate from the boundary layer between the disk and the white dwarf. As the disk switches between quiescence and outburst states, the 2-10 keV X-ray flux is usually seen to be anti-correlated with the optical brightness. Here we present RXTE monitoring observations of two dwarf novae, VW Hyi and WW Cet, confirming the optical/X-ray anti-correlation in these two systems. However, we do not detect any episodes of increased hard X-ray flux on the rise (out of two possible chances for WW Cet) or the decline (two for WW Cet and one for VW Hyi) from outburst, attributes that are clearly established in SS Cyg. The addition of these data to the existing literature establishes the fact that the behavior of SS Cyg is the exception, rather than the archetype as is often assumed. We speculate that only dwarf novae with a massive white dwarf may show these hard X-ray spikes.
Russian-US collaboration on implementation of the active well coincidence counter (AWCC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mozhajev, V.; Pshakin, G.; Stewart, J.
The feasibility of using a standard AWCC at the Obninsk IPPE has been demonstrated through active measurements of single UO2 (36% enriched) disks and through passive measurements of plutonium metal disks used for simulating reactor cores. The role of the measurements is to verify passport values assigned to the disks by the facility, and thereby facilitate the mass accountability procedures developed for the very large inventory of fuel disks at the facility. The AWCC is a very flexible instrument for verification measurements of the large variety of nuclear material items at the Obninsk IPPE and other Russian facilities. Future work at the IPPE will include calibration and verification measurements for other materials, both in individual disks and in multi-disk storage tubes; it will also include training in the use of the AWCC.
ZFS on RBODs - Leveraging RAID Controllers for Metrics and Enclosure Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stearman, D. M.
2015-03-30
Traditionally, the Lustre file system has relied on the ldiskfs file system with reliable RAID (Redundant Array of Independent Disks) storage underneath. As of Lustre 2.4, ZFS was added as a backend file system, with built-in software RAID, thereby removing the need for expensive RAID controllers. ZFS was designed to work with JBOD (Just a Bunch Of Disks) storage enclosures under the Solaris Operating System, which provided a rich device management system. Long-time users of the Lustre file system have relied on the RAID controllers to provide metrics and enclosure monitoring and management services, with rich APIs and command-line interfaces. This paper will study a hybrid approach using an advanced full-featured RAID enclosure which is presented to the host as a JBOD. This RBOD (RAIDed Bunch Of Disks) allows ZFS to do the RAID protection and error correction, while the RAID controller handles management of the disks and monitors the enclosure. It was hoped that the value of the RAID controller features would offset the additional cost, and that performance would not suffer in this mode. The test results revealed that the hybrid RBOD approach did suffer reduced performance.
Designing a scalable video-on-demand server with data sharing
NASA Astrophysics Data System (ADS)
Lim, Hyeran; Du, David H.
2000-12-01
As current disk space and transfer speeds increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing provided by the spatial-reuse ring network between servers and disks not only increases utilization towards full bandwidth but also improves the availability of videos. Striping and replication methods are introduced to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources of a VOD server system. Given a representative access profile, we propose an algorithm that finds an initial configuration, placing videos on the disks in the system. If any copy of a video cannot be placed due to lack of resources, more servers/disks are added. When all videos have been placed on the disks by our algorithm, the final configuration is determined, with an indicator of how tolerant it is to fluctuations in video demand. Although the placement problem is NP-hard, our algorithm generates the final configuration in O(M log M) at best, where M is the number of movies.
Designing a scalable video-on-demand server with data sharing
NASA Astrophysics Data System (ADS)
Lim, Hyeran; Du, David H. C.
2001-01-01
As current disk space and transfer speeds increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing provided by the spatial-reuse ring network between servers and disks not only increases utilization towards full bandwidth but also improves the availability of videos. Striping and replication methods are introduced to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources of a VOD server system. Given a representative access profile, we propose an algorithm that finds an initial configuration, placing videos on the disks in the system. If any copy of a video cannot be placed due to lack of resources, more servers/disks are added. When all videos have been placed on the disks by our algorithm, the final configuration is determined, with an indicator of how tolerant it is to fluctuations in video demand. Although the placement problem is NP-hard, our algorithm generates the final configuration in O(M log M) at best, where M is the number of movies.
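The abstract does not spell out the placement algorithm, so as a hedged illustration only: one common greedy pattern matching the stated O(M log M) behavior sorts videos by demand and always places the next video on the disk with the most remaining bandwidth, tracked with a heap. The function, its signature, and the single-copy-per-video simplification below are all assumptions for illustration, not the paper's method.

```python
import heapq

def place(videos, disks):
    """Greedy placement sketch (hypothetical, one copy per video).

    videos: list of (name, bandwidth_demand); disks: list of bandwidth capacities.
    Returns {name: disk_index}, or None if some video cannot be placed
    (the paper's response to that case is to add more servers/disks).
    """
    # Max-heap of remaining capacities, encoded as negated values.
    heap = [(-cap, i) for i, cap in enumerate(disks)]
    heapq.heapify(heap)
    placement = {}
    # Place the most demanded videos first: O(M log M) sort plus heap updates.
    for name, demand in sorted(videos, key=lambda v: -v[1]):
        neg_cap, i = heapq.heappop(heap)
        cap = -neg_cap
        if cap < demand:
            return None                      # lack of resources
        placement[name] = i
        heapq.heappush(heap, (-(cap - demand), i))
    return placement
```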
Curriculum Bank for Individualized Electronic Instruction. Final Report.
ERIC Educational Resources Information Center
Williamson, Bert; Pedersen, Joe F.
Objectives of this project were to update and convert to disk storage appropriate handout materials for courses for the electronic technology open classroom. Project activities were an ERIC search for computer-managed instructional materials; updating of the course outline, lesson outlines, information handouts, and unit tests; and storage of the…
Application of Cold Storage for Raja Sere Banana (Musa acuminata colla)
NASA Astrophysics Data System (ADS)
Crismas, S. R. S.; Purwanto, Y. A.; Sutrisno
2018-05-01
Raja Sere is one of the indigenous banana cultivars in Indonesia. The cultivar has a yellow color when ripe, small size, and a sweet taste. Traditionally, growers bring this cultivar to market without any treatment to delay ripening. Banana fruits are commonly harvested at the hard green mature stage, at which they can be stored for a long period. The objective of this study was to examine the effect of cold storage at 13°C on the quality of Raja Sere banana. Fruits were harvested from a local farmer's field at the hard green mature stage (about 14 weeks after flower bloom). Fifteen bunches of banana were stored in cold storage at 13°C for 0, 3, 6, 9, and 12 days, respectively; room-temperature storage (28°C) was used as the control. After each storage period, samples were ripened in a ripening chamber by injecting 100 ppm of ethylene gas at 25°C for 24 hours. The quality parameters measured were respiration rate, hardness, total soluble solids (TSS), color change, and weight loss. For fruits stored at room temperature, shelf life reached only 6 days, whereas fruits in cold storage kept their condition for up to 12 days. After cold storage and ripening, the third day was the optimal time for consumption, as indicated by yellow color (lightness = 68.51, a* = 4.74, b* = 62.63), TSS of 24.30 °Brix, hardness of 0.48 kgf, weight loss of about 7.53-16.45%, and a respiration rate of 100.37 mL CO2/kg·hr.
The Raw Disk I/O Performance of Compaq StorageWorks RAID Arrays under Tru64 UNIX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uselton, A C
2000-10-19
We report on the raw disk I/O performance of a set of Compaq StorageWorks RAID arrays connected to our cluster of Compaq ES40 computers via Fibre Channel. The best cumulative peak sustained data rate is 117 MB/s per node for reads and 77 MB/s per node for writes. This value occurs for a configuration in which a node has two Fibre Channel interfaces to a switch, which in turn has two connections to each of two Compaq StorageWorks RAID arrays. Each RAID array has two HSG80 RAID controllers controlling (together) two 5+P RAID chains. A 10% more space-efficient arrangement using a single 11+P RAID chain in place of the two 5+P chains is 25% slower for reads and 40% slower for writes.
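The quoted 10% space gain follows directly from the disk counts: both layouts use 12 physical disks, but two 5+P chains yield 10 data disks while one 11+P chain yields 11. A quick check of the arithmetic:

```python
# Usable-space fraction of a RAID layout: data disks over total disks, per chain count.
def usable_fraction(data_disks, parity_disks, chains=1):
    total = chains * (data_disks + parity_disks)
    return chains * data_disks / total

two_chains = usable_fraction(5, 1, chains=2)   # two 5+P chains: 10 data / 12 disks
one_chain = usable_fraction(11, 1)             # one 11+P chain: 11 data / 12 disks
gain = one_chain / two_chains - 1              # relative usable-space gain: 10%
```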
A magnetic model for low/hard state of black hole binaries
NASA Astrophysics Data System (ADS)
Ye, Yong-Chun; Wang, Ding-Xiong; Huang, Chang-Yin; Cao, Xiao-Feng
2016-03-01
A magnetic model for the low/hard state (LHS) of two black hole X-ray binaries (BHXBs), H1743-322 and GX 339-4, is proposed based on transport of the magnetic field from a companion into an accretion disk around a black hole (BH). This model consists of a truncated thin disk with an inner advection-dominated accretion flow (ADAF). The spectral profiles of the sources are fitted in agreement with the data observed at four different dates corresponding to the rising phase of the LHS. In addition, the association of the LHS with a quasi-steady jet is modeled based on transport of magnetic field, where the Blandford-Znajek (BZ) and Blandford-Payne (BP) processes are invoked to drive the jets from BH and inner ADAF. It turns out that the steep radio/X-ray correlations observed in H1743-322 and GX 339-4 can be interpreted based on our model.
Physics and Hard Disk Drives-A Career in Industry
NASA Astrophysics Data System (ADS)
Lambert, Steven
2014-03-01
I will participate in a panel discussion about ``Career Opportunities for Physicists.'' I enjoyed 27 years doing technology development and product support in the hard disk drive business. My PhD in low temperature physics was excellent training for this career since I learned how to work in a lab, analyze data, write and present technical information, and define experiments that got to the heart of a problem. An academic position did not appeal to me because I had no passion to pursue a particular topic in basic physics. My work in industry provided an unending stream of challenging problems to solve, and it was a rich and rewarding experience. I'm now employed by the APS to focus on our interactions with physicists in industry. I welcome the chance to share my industrial experience with students, post-docs, and others who are making decisions about their career path. Industrial Physics Fellow, APS Headquarters.
Müller, O; Lützenkirchen-Hecht, D; Frahm, R
2015-03-01
A fast X-ray chopper capable of producing ms-long X-ray pulses with a typical rise time of a few μs was realized. It is ideally suited to investigating the temporal response of X-ray detectors with response times of the order of μs to ms, in particular any kind of ionization chamber and large-area photodiode. The drive mechanism consists of a brushless DC motor and driver electronics from a common hard disk drive, keeping the cost at an absolute minimum. Due to its simple construction and small dimensions, this chopper operates both at home-lab X-ray tubes and at synchrotron radiation sources. The dynamics of the most important detectors used in time-resolved X-ray absorption spectroscopy, namely ionization chambers and Passivated Implanted Planar Silicon photodiodes, were investigated in detail. The results emphasize the applicability of this X-ray chopper.
An XMM-Newton Study of the Bright Ultrasoft Narrow-Line Quasar NAB 0205+024
NASA Technical Reports Server (NTRS)
Brandt, Niel
2004-01-01
The broad-band X-ray continuum of NAB 0205+024 is well constrained due to the excellent photon statistics obtained (about 97,700 counts), and its impressive soft X-ray excess is clearly apparent. The hard X-ray power law has become notably steeper than when NAB 0205+024 was observed with ASCA, attesting to the presence of significant X-ray spectral variability. A strong and broad emission feature is detected from about 5 to 6.4 keV, and we have modeled this as a relativistic line emitted close to the black hole from a narrow annulus of the accretion disk. Furthermore, a strong X-ray flare is detected with a hard X-ray spectrum; this flare may be responsible for illuminating the inner line-emitting part of the accretion disk. The combined observational results can be broadly interpreted in terms of the "thundercloud" model proposed by Merloni & Fabian (2001).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuyama, H., E-mail: matsu@phys.sci.hokudai.ac.jp; Nara, D.; Kageyama, R.
We developed a micrometer-sized magnetic tip integrated onto the write head of a hard disk drive for spin-polarized scanning tunneling microscopy (SP-STM) in the modulated tip magnetization mode. Using SP-STM, we measured a well-defined in-plane spin component of the tunneling current of the rough surface of a polycrystalline NiFe film. The spin asymmetry of the NiFe film was about 1.3% within the bias voltage range of -3 to 1 V. We obtained the local spin component image of the sample surface, switching the magnetic field of the sample to reverse the sample magnetization during scanning. We also obtained a spin image of the rough surface of a polycrystalline NiFe film evaporated on the recording medium of a hard disk drive.
NASA Astrophysics Data System (ADS)
Cordle, Michael; Rea, Chris; Jury, Jason; Rausch, Tim; Hardie, Cal; Gage, Edward; Victora, R. H.
2018-05-01
This study aims to investigate the impact that factors such as skew, radius, and transition curvature have on areal density capability in heat-assisted magnetic recording hard disk drives. We explore a "ballistic seek" approach for capturing in-situ scan line images of the magnetization footprint on the recording media, and extract parametric results of recording characteristics such as transition curvature. We take full advantage of the significantly improved cycle time to apply a statistical treatment to relatively large samples of experimental curvature data to evaluate measurement capability. Quantitative analysis of factors that impact transition curvature reveals an asymmetry in the curvature profile that is strongly correlated to skew angle. Another less obvious skew-related effect is an overall decrease in curvature as skew angle increases. Using conventional perpendicular magnetic recording as the reference case, we characterize areal density capability as a function of recording position.
Disordered hyperuniformity in two-component nonadditive hard-disk plasmas
NASA Astrophysics Data System (ADS)
Lomba, Enrique; Weis, Jean-Jacques; Torquato, Salvatore
2017-12-01
We study the behavior of a classical two-component ionic plasma made up of nonadditive hard disks with additional logarithmic Coulomb interactions between them. Due to the Coulomb repulsion, long-wavelength total density fluctuations are suppressed and the system is globally hyperuniform. Short-range volume effects lead to phase separation or to heterocoordination for positive or negative nonadditivities, respectively. These effects compete with the hidden long-range order imposed by hyperuniformity. As a result, the critical behavior of the mixture is modified, with long-wavelength concentration fluctuations partially damped when the system is charged. It is also shown that the decrease of configurational entropy due to hyperuniformity originates from contributions beyond the two-particle level. Finally, despite global hyperuniformity, we show that in our system the spatial configuration associated with each component separately is not hyperuniform, i.e., the system is not "multihyperuniform."
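Hyperuniformity means the structure factor S(k) vanishes as k approaches 0, i.e. long-wavelength density fluctuations are suppressed. As a rough illustration of the diagnostic only (a toy point pattern, not the paper's Coulomb plasma), the sketch below compares S at the smallest wavevector the box allows for a square lattice, which is hyperuniform, against a Poisson pattern, which is not:

```python
import numpy as np

def structure_factor(points, k):
    """S(k) = |sum_j exp(-i k.r_j)|^2 / N for a set of points."""
    phase = np.exp(-1j * (points @ k))
    return np.abs(phase.sum()) ** 2 / len(points)

L, n = 1.0, 32
# Square lattice: density fluctuations at long wavelengths fully suppressed.
g = (np.arange(n) + 0.5) * L / n
lattice = np.array([(x, y) for x in g for y in g])
# Poisson (ideal gas) pattern: S(k) ~ 1 at all k.
rng = np.random.default_rng(1)
poisson = rng.random((n * n, 2)) * L

k_min = np.array([2 * np.pi / L, 0.0])   # smallest wavevector compatible with the box
s_lat = structure_factor(lattice, k_min)  # ~0: hyperuniform
s_poi = structure_factor(poisson, k_min)  # O(1): not hyperuniform
```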
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Hiromitsu; Sakurai, Soki; Makishima, Kazuo, E-mail: hirotaka@hep01.hepl.hiroshima-u.ac.jp
To investigate the physics of mass accretion onto weakly magnetized neutron stars (NSs), 95 archival Rossi X-Ray Timing Explorer data sets of the atoll source 4U 1608-522, acquired over 1996-2004 in the so-called upper-banana state, were analyzed. The object meanwhile exhibited 3-30 keV luminosity in the range of ~10^35 to 4 x 10^37 erg s^-1, assuming a distance of 3.6 kpc. The 3-30 keV Proportional Counter Array spectra, produced one from each data set, were represented successfully with a combination of a soft and a hard component, the presence of which was revealed in a model-independent manner by studying spectral variations among the observations. The soft component is expressed by the so-called multi-color disk model with a temperature of ~1.8 keV, and is attributed to the emission from an optically thick standard accretion disk. The hard component is a blackbody (BB) emission with a temperature of ~2.7 keV, thought to be emitted from the NS surface. As the total luminosity increases, a continuous decrease is observed in the ratio of the BB luminosity to that of the disk component. This property suggests that it gradually becomes difficult for the matter flowing through the accretion disk to reach the NS surface, presumably forming outflows driven by the increased radiation pressure. On timescales of hours to days, the overall source variability was found to be controlled by two independent variables: the mass accretion rate and the innermost disk radius, which changes both physically and artificially.
Chandra/ACIS-I Study of the X-Ray Properties of the NGC 6611 and M16 Stellar Populations
NASA Astrophysics Data System (ADS)
Guarcello, M. G.; Caramazza, M.; Micela, G.; Sciortino, S.; Drake, J. J.; Prisinzano, L.
2012-07-01
Mechanisms regulating the origin of X-rays in young stellar objects and the correlation with their evolutionary stage are under debate. Studies of the X-ray properties in young clusters allow us to understand these mechanisms. One ideal target for this analysis is the Eagle Nebula (M16), with its central cluster NGC 6611. At 1750 pc from the Sun, it harbors 93 OB stars, together with a population of low-mass stars from embedded protostars to disk-less Class III objects, with age <=3 Myr. We study an archival 78 ks Chandra/ACIS-I observation of NGC 6611 and two new 80 ks observations of the outer region of M16, one centered on the Column V and the other on a region of the molecular cloud with ongoing star formation. We detect 1755 point sources with 1183 candidate cluster members (219 disk-bearing and 964 disk-less). We study the global X-ray properties of M16 and compare them with those of the Orion Nebula Cluster. We also compare the level of X-ray emission of Class II and Class III stars and analyze the X-ray spectral properties of OB stars. Our study supports the lower level of X-ray activity for the disk-bearing stars with respect to the disk-less members. The X-ray luminosity function (XLF) of M16 is similar to that of Orion, supporting the universality of the XLF in young clusters. Eighty-five percent of the O stars of NGC 6611 have been detected in X-rays. With only one possible exception, they show soft spectra with no hard components, indicating that mechanisms for the production of hard X-ray emission in O stars are not operating in NGC 6611.
Uncoupling File System Components for Bridging Legacy and Modern Storage Architectures
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Halem, M.; Tilmes, C.; Prathapan, S.; Earp, D. N.; Ashkar, J. S.
2016-12-01
Long-running Earth Science projects can span decades of architectural changes in both processing and storage environments. As storage architecture designs change over decades, such projects need to adjust their tools, systems, and expertise to properly integrate new technologies with their legacy systems. Traditional file systems lack the necessary support to accommodate such hybrid storage infrastructure, forcing more complex tool development to encompass all storage architectures used in the project. The MODIS Adaptive Processing System (MODAPS) and the Level 1 and Atmospheres Archive and Distribution System (LAADS) are an example of a project spanning several decades which has evolved into a hybrid storage architecture. MODAPS/LAADS has developed the Lightweight Virtual File System (LVFS), which ensures a seamless integration of all the different storage architectures, ranging from standard block-based POSIX-compliant storage disks to object-based architectures such as the S3-compliant HGST Active Archive System and the Seagate Kinetic disks utilizing the Kinetic Protocol. With LVFS, all analysis and processing tools used for the project continue to function unmodified regardless of the underlying storage architecture, enabling MODAPS/LAADS to easily integrate any new storage architecture without the costly need to modify existing tools. Most file systems are designed as a single application responsible for using metadata to organize the data into a tree, for determining where data is stored, and for providing a method of data retrieval. We will show how LVFS' unique approach of treating these components in a loosely coupled fashion enables it to merge different storage architectures into a single uniform storage system which bridges the underlying hybrid architecture.
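The loose coupling argued for here can be pictured as three swappable pieces. The sketch below is purely hypothetical and is not the LVFS API: in-memory stand-ins for a POSIX-style and an object-style backend sit behind one namespace, and a placement function decides which backend holds each file, so supporting a new storage architecture means adding one backend class without touching the tools that read and write paths.

```python
from dataclasses import dataclass, field

class PosixBackend:
    """Stand-in for a block/POSIX-style store (in-memory for illustration)."""
    def __init__(self):
        self.blocks = {}
    def put(self, key, data): self.blocks[key] = data
    def get(self, key): return self.blocks[key]

class ObjectBackend:
    """Stand-in for an S3/Kinetic-style flat object store (in-memory)."""
    def __init__(self):
        self.objects = {}
    def put(self, key, data): self.objects[key] = data
    def get(self, key): return self.objects[key]

@dataclass
class VirtualFS:
    """One namespace; placement and metadata are decoupled from the backends."""
    backends: dict                               # name -> backend object
    placement: callable                          # path -> backend name
    index: dict = field(default_factory=dict)    # metadata: path -> (backend, key)

    def write(self, path, data):
        name = self.placement(path)              # placement decision
        self.backends[name].put(path, data)      # retrieval mechanism
        self.index[path] = (name, path)          # namespace/metadata record

    def read(self, path):
        name, key = self.index[path]
        return self.backends[name].get(key)
```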
NASA Astrophysics Data System (ADS)
Mascetti, L.; Cano, E.; Chan, B.; Espinal, X.; Fiorot, A.; González Labrador, H.; Iven, J.; Lamanna, M.; Lo Presti, G.; Mościcki, JT; Peters, AJ; Ponce, S.; Rousseau, H.; van der Ster, D.
2015-12-01
CERN IT DSS operates the main storage resources for data taking and physics analysis mainly via three systems: AFS, CASTOR and EOS. The total usable space available on disk for users is about 100 PB (with relative ratios 1:20:120). EOS actively uses the two CERN Tier0 centres (Meyrin and Wigner) with a 50:50 ratio. IT DSS also provides sizeable on-demand resources for IT services, most notably OpenStack and NFS-based clients: this is provided by a Ceph infrastructure (3 PB) and a few proprietary servers (NetApp). We will describe our operational experience and recent changes to these systems, with special emphasis on the present usage for LHC data taking and the convergence to commodity hardware (nodes with 200 TB each, with optional SSDs) shared across all services. We also describe our experience in coupling commodity and home-grown solutions (e.g. CERNBox integration in EOS; Ceph disk pools for AFS, CASTOR and NFS) and finally the future evolution of these systems for WLCG and beyond.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicolae, Bogdan; Riteau, Pierre; Keahey, Kate
Storage elasticity on IaaS clouds is a crucial feature in the age of data-intensive computing, especially when considering fluctuations of I/O throughput. This paper provides a transparent solution that automatically boosts I/O bandwidth during peaks for underlying virtual disks, effectively avoiding over-provisioning without performance loss. The authors' proposal relies on the idea of leveraging short-lived virtual disks with better performance characteristics (and thus more expensive) to act during peaks as a caching layer for the persistent virtual disks where the application data is stored. Furthermore, they introduce a performance and cost prediction methodology that can be used independently to estimate in advance what trade-off between performance and cost is possible, as well as an optimization technique that enables better cache-size selection to meet the desired performance level with minimal cost. The authors demonstrate the benefits of their proposal both for microbenchmarks and for two real-life applications using large-scale experiments.
Multi-terabyte EIDE disk arrays running Linux RAID5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.
2004-11-01
High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important.
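The parity property the abstract relies on is concrete: the parity block of a stripe is the XOR of its data blocks, so any one lost block (a single failed disk) can be rebuilt by XOR-ing the survivors, while two losses in the same stripe are unrecoverable. A minimal sketch of that property (not RAID implementation code):

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def make_stripe(data_blocks):
    """Return the data blocks plus their parity block (last element)."""
    return list(data_blocks) + [xor_blocks(data_blocks)]

def rebuild(stripe, lost_index):
    """Reconstruct the block at lost_index by XOR-ing all surviving blocks."""
    survivors = [b for i, b in enumerate(stripe) if i != lost_index]
    return xor_blocks(survivors)
```

In a real RAID-5 array the parity block additionally rotates across the disks from stripe to stripe, which is what lets reads stay striped over all spindles.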
A novel anti-piracy optical disk with photochromic diarylethene
NASA Astrophysics Data System (ADS)
Liu, Guodong; Cao, Guoqiang; Huang, Zhen; Wang, Shenqian; Zou, Daowen
2005-09-01
Diarylethene is a photochromic material with many advantages and one of the most promising recording materials for high-capacity optical data storage. Diarylethene has two forms, which can be converted into each other by laser beams of different wavelengths. The material has been studied for rewritable optical disks. Volatile data storage is one of its properties, long considered an obstacle to practical use, and much research has been devoted to overcoming it. In fact, volatile data storage is very useful for anti-piracy optical data storage. Piracy is a social and economic problem. One anti-piracy technology limits readout of the data recorded on the disk by encryption software; with the development of computer technologies, such software is more and more easily cracked. Using photochromic diarylethene as the optical recording material, the recorded signals degrade as they are read, so readout of the data is inherently limited. Because this method uses hardware to realize anti-piracy, it is practically impossible to crack. In this paper, we introduce this use of the material. Some experiments are presented to demonstrate its feasibility.
EVIDENCE FOR SIMULTANEOUS JETS AND DISK WINDS IN LUMINOUS LOW-MASS X-RAY BINARIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Homan, Jeroen; Neilsen, Joseph; Allen, Jessamyn L.
Recent work on jets and disk winds in low-mass X-ray binaries (LMXBs) suggests that they are to a large extent mutually exclusive, with jets observed in spectrally hard states and disk winds observed in spectrally soft states. In this paper we use existing literature on jets and disk winds in the luminous neutron star (NS) LMXB GX 13+1, in combination with archival Rossi X-ray Timing Explorer data, to show that this source is likely able to produce jets and disk winds simultaneously. We find that jets and disk winds occur in the same location on the source's track in its X-ray color-color diagram. A further study of literature on other luminous LMXBs reveals that this behavior is more common, with indications for simultaneous jets and disk winds in the black hole LMXBs V404 Cyg and GRS 1915+105 and the NS LMXBs Sco X-1 and Cir X-1. For the three sources for which we have the necessary spectral information, we find that simultaneous jets/winds all occur in their spectrally hardest states. Our findings indicate that in LMXBs with luminosities above a few tens of percent of the Eddington luminosity, jets and disk winds are not mutually exclusive, and the presence of disk winds does not necessarily result in jet suppression.
NASA Astrophysics Data System (ADS)
Kalyaan, A.; Desch, S. J.; Monga, N.
2015-12-01
The structure and evolution of protoplanetary disks, especially the radial flows of gas through them, are sensitive to a number of factors. One that has been considered only occasionally in the literature is external photoevaporation by far-ultraviolet (FUV) radiation from nearby, massive stars, despite the fact that nearly half of disks will experience photoevaporation. Another effect apparently not considered in the literature is a spatially and temporally varying value of α in the disk (where the turbulent viscosity ν is α times the sound speed c_s times the disk scale height H). Here we use the formulation of Bai & Stone to relate α to the ionization fraction in the disk, assuming turbulent transport of angular momentum is due to the magnetorotational instability. We calculate the ionization fraction of the disk gas under various assumptions about ionization sources and dust grain properties. Disk evolution is most sensitive to the surface area of dust. We find that typically α ≲ 10^-5 in the inner disk (<2 AU), rising to ~10^-1 beyond 20 AU. This drastically alters the structure of the disk and the flow of mass through it: while the outer disk rapidly viscously spreads, the inner disk hardly evolves; this leads to a steep surface density profile (Σ ∝ r^-⟨p⟩ with ⟨p⟩ ≈ 2-5 in the 5-30 AU region) that is made steeper by external photoevaporation. We also find that the combination of variable α and external photoevaporation eventually causes gas as close as 3 AU, previously accreting inward, to be drawn outward to the photoevaporated outer edge of the disk. These effects have drastic consequences for planet formation and volatile transport in protoplanetary disks.
X-Ray Emission from the Soft X-Ray Transient Aquila X-1
NASA Technical Reports Server (NTRS)
Tavani, Marco
1998-01-01
Aquila X-1 is the most prolific of soft X-ray transients. It is believed to contain a rapidly spinning neutron star sporadically accreting near the Eddington limit from a low-mass companion star. The interest in studying the repeated X-ray outbursts from Aquila X-1 is twofold: (1) studying the relation between optical, soft, and hard X-ray emission during the outburst onset, development, and decay; (2) relating the spectral components to thermal and non-thermal processes occurring near the magnetosphere and in the boundary layer of a time-variable accretion disk. Our investigation is based on the BATSE monitoring of Aquila X-1 performed by our group. We observed Aquila X-1 in 1997 and re-analyzed archival information obtained in April 1994 during a period of extraordinary outbursting activity of the source in the hard X-ray range. Our results allow us, for the first time for this important source, to obtain simultaneous spectral information from 2 keV to 200 keV. A blackbody (T = 0.8 keV) plus a broken power-law spectrum accurately describes the 1994 spectrum. Substantial hard X-ray emission is evident in the data, confirming that the accretion phase during sub-Eddington episodes is capable of producing energetic hard emission near 5 × 10^35 erg s^-1. A preliminary paper summarizes our results, and a more comprehensive account is being written. We performed a theoretical analysis of possible emission mechanisms, and confirmed that a non-thermal emission mechanism triggered in a highly sheared magnetosphere at the accretion disk inner boundary can explain the hard X-ray emission. An anticorrelation between soft and hard X-ray emission is indeed prominently observed, as predicted by this model.
Radio-Loud AGN: The Suzaku View
NASA Technical Reports Server (NTRS)
Sambruna, Rita
2009-01-01
We review our Suzaku observations of Broad-Line Radio Galaxies (BLRGs). The continuum above 2 approx.keV in BLRGs is dominated by emission from an accretion flow, with little or no trace of a jet, which is instead expected to emerge at GeV energies and be detected by Fermi. Concerning the physical conditions of the accretion disk, BLRGs are a mixed bag. In some sources the data suggest relatively high disk ionization, in others obscuration of the innermost regions, perhaps by the jet base. While at hard X-rays the distinction between BLRGs and Seyferts appears blurry, one of the cleanest observational differences between the two classes is at soft X-rays, where Seyferts exhibit warm absorbers related to disk winds while BLRGs do not. We discuss the possibility that jet formation inhibits disk winds, and thus is related to the remarkable dearth of absorption features at soft X-rays in BLRGs and other radio-loud AGN.
Beating the tyranny of scale with a private cloud configured for Big Data
NASA Astrophysics Data System (ADS)
Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag
2015-04-01
The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers, supporting a range of possible data analysis tasks - and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which by April 2015 will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware are a range of services, from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment - ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms, or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high performance disk system, allows massively parallel processing without the necessity to shuffle data excessively - even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end.
There are some limitations of the JASMIN environment: the high-performance disk storage is not fully available in the IaaS environment, and a planned ability to burst compute-heavy jobs into the public cloud is not yet fully available. There are load-balancing and performance issues that need to be understood. We will conclude with projections for future usage, and our plans to meet those requirements.
Effect of cleaning methods after reduced-pressure air abrasion on bonding to zirconia ceramic.
Attia, Ahmed; Kern, Matthias
2011-12-01
To evaluate in vitro the influence of different cleaning methods after low-pressure air abrasion on the bond strength of a phosphate monomer-containing luting resin to zirconia ceramic. A total of 112 zirconia ceramic disks were divided into 7 groups (n = 16). In the test groups, disks were air abraded at low pressure (L) 0.05 MPa using 50-μm alumina particles. Prior to bonding, the disks were ultrasonically (U) cleaned either in isopropanol (AC), hydrofluoric acid (HF), demineralized water (DW), or tap water (TW), or they were used without ultrasonic cleaning. Disks air abraded at a high (H) pressure of 0.25 MPa and cleaned ultrasonically in isopropanol served as the positive control; original (O) milled disks used without air abrasion served as the negative control group. Plexiglas tubes filled with composite resin were bonded with the adhesive luting resin Panavia 21 to the ceramic disks. Prior to testing tensile bond strength (TBS), each main group was further subdivided into 2 subgroups (n = 8) which were stored in distilled water either at 37°C for 3 days or for 30 days with 7500 thermal cycles. Statistical analyses were conducted with two- and one-way analyses of variance (ANOVA) and Tukey's HSD test. Initial TBS ranged from 32.6 to 42.8 MPa. After 30 days of storage in water with thermocycling, TBS ranged from 21.9 to 36.3 MPa. Storage in water and thermocycling significantly decreased the TBS of test groups which were not air abraded (p = 0.05) or which were air abraded but cleaned in tap water (p = 0.002), but not the TBS of the other groups (p > 0.05). Also, the TBS of the air-abraded groups were significantly higher than the TBS of the original milled group (p < 0.01). Cleaning procedures did not significantly affect TBS either after 3 days or 30 days of storage in water and thermocycling (p > 0.05). Air abrasion at 0.05 MPa and ultrasonic cleaning are important factors for improving bonding to zirconia ceramic.
NASA Technical Reports Server (NTRS)
Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don
1998-01-01
Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.
NASA Astrophysics Data System (ADS)
Fruechtenicht, Johannes; Letsch, Andreas; Voss, Andreas; Abdou Ahmed, Marwan; Graf, Thomas
2012-02-01
We present a novel laser beam measurement setup which allows the determination of the beam diameter for each single pulse of a pulsed laser beam at repetition rates of up to 200 kHz. This is useful for online process-parameter control, e.g., in micromachining, or for laser source characterization. Basically, the developed instrument combines spatial transmission filters specially designed for instantaneous optical determination of the second-order moments of the lateral intensity distribution of the light beam with photodiodes coupled to customized electronics. The acquisition is computer-based, enabling real-time operation for online monitoring or control. It also allows data storage for later analysis and visualization of the measurement results. The single-pulse resolved beam diameter can be measured and recorded without any interruption for an unlimited number of pulses, limited only by the capacity of the data storage means. In our setup a standard PC and hard disk provided 2 hours of uninterrupted operation and recording of varying beam diameters at 200 kHz. This is about three orders of magnitude faster than other systems. To calibrate our device we performed experiments in cw and pulsed regimes, and the obtained results were compared to those obtained with a commercial camera-based system. Only minor deviations of the beam diameter values between the two instruments were observed, proving the reliability of our approach.
Inflow Generated X-ray Corona Around Supermassive Black Holes and Unified Model for X-ray Emission
NASA Astrophysics Data System (ADS)
Wang, Lile; Cen, Renyue
2016-01-01
Three-dimensional hydrodynamic simulations, covering the spatial domain from hundreds of Schwarzschild radii to 2 pc around the central supermassive black hole of mass 10^8 M⊙, with detailed radiative cooling processes, are performed. We generically find a significant amount of shock-heated, high-temperature (≥10^8 K) coronal gas in the inner (≤10^4 r_sch) region. It is shown that the composite bremsstrahlung emission spectrum due to coronal gas of various temperatures is in reasonable agreement with the overall ensemble spectrum of AGNs and the hard X-ray background. Taking into account inverse Compton processes, in the context of the simulation-produced coronal gas, our model can readily account for the wide variety of AGN spectral shapes, which can now be understood physically. The distinguishing feature of our model is that the X-ray coronal gas is, for the first time, an integral part of the inflow gas, and its observable characteristics are physically coupled to the concomitant inflow gas. One natural prediction of our model is the anti-correlation between accretion disk luminosity and spectral hardness: as the luminosity of the SMBH accretion disk decreases, the hard X-ray luminosity increases relative to the UV/optical luminosity.
The Reverberation Lag in the Low-mass X-ray Binary H1743-322
NASA Astrophysics Data System (ADS)
De Marco, Barbara; Ponti, Gabriele
2016-07-01
The evolution of the inner accretion flow of a black hole X-ray binary during an outburst is still a matter of active research. X-ray reverberation lags are powerful tools for constraining disk-corona geometry. We present a study of X-ray lags in the black hole transient H1743-322. We compared the results obtained from analysis of all the publicly available XMM-Newton observations. These observations were carried out during two different outbursts that occurred in 2008 and 2014. During all the observations the source was caught in the hard state and at similar luminosities (L_(3-10 keV)/L_Edd ~ 0.004). We detected a soft X-ray lag of ~60 ms, most likely due to thermal reverberation. We did not detect any significant change of the lag amplitude among the different observations, indicating a similar disk-corona geometry at the same luminosity in the hard state. On the other hand, we observe significant differences between the reverberation lag detected in H1743-322 and in GX 339-4 (at similar luminosities in the hard state), which might indicate variations of the geometry from source to source.
Wear Behavior of an Ultra-High-Strength Eutectoid Steel
NASA Astrophysics Data System (ADS)
Mishra, Alok; Maity, Joydeep
2018-02-01
Wear behavior of an ultra-high-strength AISI 1080 steel developed through incomplete austenitization-based combined cyclic heat treatment is investigated in comparison with annealed and conventional hardened and tempered conditions against an alumina disk (sliding speed = 1 m s-1) using a pin-on-disk tribometer at a load range of 7.35-14.7 N. On a gross scale, the mechanism of surface damage involves adhesive wear coupled with abrasive wear (microcutting effects in particular) at lower loads. At higher loads, mainly abrasive wear (both microcutting and microploughing mechanisms) and evolution of an adherent oxide are observed. In addition, the microhardness of the matrix increases with load, indicating substantial strain hardening during the wear test. The rate of overall wear is found to increase with load. As-received annealed steel with the lowest initial hardness suffers from severe abrasive wear, thereby exhibiting the highest wear loss. Such severe wear loss is not observed in the conventional hardened and tempered and combined cyclic heat treatment conditions. Combined cyclic heat-treated steel exhibits the greatest wear resistance (lowest wear loss) due to its high initial hardness and the evolution of a hard, abrasion-resistant tribolayer during the wear test at higher load.
Digital Photography and Its Impact on Instruction.
ERIC Educational Resources Information Center
Lantz, Chris
Today the chemical processing of film is being replaced by a virtual digital darkroom. Digital image storage makes new levels of consistency possible because its nature is less volatile and more mutable than traditional photography. The potential of digital imaging is great, but issues of disk storage, computer speed, camera sensor resolution,…
NASA Technical Reports Server (NTRS)
Shields, Michael F.
1993-01-01
The need to manage large amounts of data on robotically controlled devices has been critical to the mission of this Agency for many years. In many respects this Agency has helped pioneer, with their industry counterparts, the development of a number of products long before these systems became commercially available. Numerous attempts have been made to field both robotically controlled tape and optical disk technology and systems to satisfy our tertiary storage needs. Custom developed products were architected, designed, and developed without vendor partners over the past two decades to field workable systems to handle our ever-increasing storage requirements. Many of the attendees of this symposium are familiar with some of the older products, such as: the Braegen Automated Tape Libraries (ATL's), the IBM 3850, the Ampex TeraStore, just to name a few. In addition, we embarked on an in-house development of a shared disk input/output support processor to manage our ever-increasing tape storage needs. For all intents and purposes, this system was a file server by current definitions which used CDC Cyber computers as the control processors. It served us well and was just recently removed from production usage.
Computer Simulation Results for the Two-Point Probability Function of Composite Media
NASA Astrophysics Data System (ADS)
Smith, P.; Torquato, S.
1988-05-01
Computer simulation results are reported for the two-point matrix probability function S2 of two-phase random media composed of disks distributed with an arbitrary degree of impenetrability λ. The novel technique employed to sample S2(r) (which gives the probability of finding the endpoints of a line segment of length r in the matrix) is very accurate and has a fast execution time. Results for the limiting cases λ = 0 (fully penetrable disks) and λ = 1 (hard disks), respectively, compare very favorably with theoretical predictions made by Torquato and Beasley and by Torquato and Lado. Results are also reported for several values of λ that lie between these two extremes: cases which heretofore have not been examined.
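The sampling idea described in the abstract can be sketched in a few lines: place non-overlapping (λ = 1, hard) disks by random sequential addition in a periodic box, then estimate S2(r) as the fraction of randomly thrown segments of length r whose two endpoints both land in the matrix. This is a minimal Monte Carlo illustration under our own parameter choices, not the authors' actual (more efficient) sampling technique:

```python
import math
import random

random.seed(1)

def rsa_hard_disks(n, radius, box=1.0, max_tries=200000):
    """Random sequential addition of non-overlapping (hard) disks in a periodic box."""
    centers = []
    tries = 0
    while len(centers) < n and tries < max_tries:
        tries += 1
        x, y = random.random() * box, random.random() * box
        ok = True
        for cx, cy in centers:
            dx = min(abs(x - cx), box - abs(x - cx))  # minimum-image distance
            dy = min(abs(y - cy), box - abs(y - cy))
            if dx * dx + dy * dy < (2 * radius) ** 2:
                ok = False
                break
        if ok:
            centers.append((x, y))
    return centers

def in_matrix(p, centers, radius, box=1.0):
    """True if point p lies outside every disk (i.e., in the matrix phase)."""
    x, y = p[0] % box, p[1] % box
    for cx, cy in centers:
        dx = min(abs(x - cx), box - abs(x - cx))
        dy = min(abs(y - cy), box - abs(y - cy))
        if dx * dx + dy * dy < radius * radius:
            return False
    return True

def sample_S2(r, centers, radius, samples=8000, box=1.0):
    """P(both endpoints of a random segment of length r lie in the matrix)."""
    hits = 0
    for _ in range(samples):
        x, y = random.random() * box, random.random() * box
        th = random.random() * 2 * math.pi
        p1 = (x, y)
        p2 = (x + r * math.cos(th), y + r * math.sin(th))
        if in_matrix(p1, centers, radius, box) and in_matrix(p2, centers, radius, box):
            hits += 1
    return hits / samples

disks = rsa_hard_disks(n=100, radius=0.03)
phi1 = 1 - len(disks) * math.pi * 0.03 ** 2   # matrix area fraction (no overlaps)
print(f"placed {len(disks)} hard disks, matrix fraction ~ {phi1:.3f}")
print("S2(0)   ~", sample_S2(0.0, disks, 0.03))   # should approach phi1
print("S2(0.3) ~", sample_S2(0.3, disks, 0.03))   # decorrelates toward phi1^2
```

The two limits are built-in checks: S2(0) equals the matrix fraction, and at separations much larger than a disk diameter S2 decorrelates toward the square of that fraction.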
Godon, Patrick; Sion, Edward M; Balman, Şölen; Blair, William P
2017-09-01
The standard disk is often inadequate to model disk-dominated cataclysmic variables (CVs) and generates a spectrum that is bluer than the observed UV spectra. X-ray observations of these systems reveal an optically thin boundary layer (BL) expected to appear as an inner hole in the disk. Consequently, we truncate the inner disk. However, instead of removing the inner disk, we impose the no-shear boundary condition at the truncation radius, thereby lowering the disk temperature and generating a spectrum that better fits the UV data. With our modified disk, we analyze the archival UV spectra of three novalikes that cannot be fitted with standard disks. For the VY Scl systems MV Lyr and BZ Cam, we fit a hot inflated white dwarf (WD) with a cold modified disk (Ṁ ~ a few 10^-9 M⊙ yr^-1). For V592 Cas, the slightly modified disk (Ṁ ~ 6 × 10^-9 M⊙ yr^-1) completely dominates the UV. These results are consistent with Swift X-ray observations of these systems, revealing BLs merged with ADAF-like flows and/or hot coronae, where the advection of energy is likely launching an outflow and heating the WD, thereby explaining the high WD temperature in VY Scl systems. This is further supported by the fact that the X-ray hardness ratio increases with the shallowness of the UV slope in a small CV sample we examine. Furthermore, for 105 disk-dominated systems, the UV slope of the International Ultraviolet Explorer spectra decreases in the same order as the ratio of X-ray flux to optical/UV flux: from SU UMa's, to U Gem's, Z Cam's, UX UMa's, and VY Scl's.
Database recovery using redundant disk arrays
NASA Technical Reports Server (NTRS)
Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.
1992-01-01
Redundant disk arrays provide a way for achieving rapid recovery from media failures with a relatively low storage cost for large scale database systems requiring high availability. In this paper a method is proposed for using redundant disk arrays to support rapid recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, it is shown that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.
Recovery issues in databases using redundant disk arrays
NASA Technical Reports Server (NTRS)
Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.
1993-01-01
Redundant disk arrays provide a way for achieving rapid recovery from media failures with a relatively low storage cost for large scale database systems requiring high availability. In this paper we propose a method for using redundant disk arrays to support rapid recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, we show that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.
Performance evaluation of redundant disk array support for transaction recovery
NASA Technical Reports Server (NTRS)
Mourad, Antoine N.; Fuchs, W. Kent; Saab, Daniel G.
1991-01-01
Redundant disk arrays provide a way of achieving rapid recovery from media failures with a relatively low storage cost for large scale data systems requiring high availability. Here, we propose a method for using redundant disk arrays to support rapid recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, we show that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.
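The three abstracts above all build on XOR parity in a redundant disk array plus a twin-page scheme for the parity itself: any single lost data block can be rebuilt from the surviving blocks, and keeping two alternating parity pages means a crash mid-update never leaves a stripe without one consistent parity. A minimal sketch of that idea, with in-memory byte blocks standing in for disk pages (the class and method names are ours, not the papers' design):

```python
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks; this is RAID-style parity."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

class Stripe:
    def __init__(self, data_blocks):
        self.data = list(data_blocks)
        self.parity = [xor_blocks(self.data), None]  # twin parity pages
        self.current = 0                             # which twin is valid

    def write(self, idx, new_block):
        """Update one data block; new parity goes to the *other* twin first,
        then an atomic switch of `current` acts as the commit point."""
        self.data[idx] = new_block
        shadow = 1 - self.current
        self.parity[shadow] = xor_blocks(self.data)
        self.current = shadow

    def rebuild(self, lost_idx):
        """Reconstruct a lost data block from parity plus surviving blocks."""
        survivors = [b for i, b in enumerate(self.data) if i != lost_idx]
        return xor_blocks(survivors + [self.parity[self.current]])

s = Stripe([b"AAAA", b"BBBB", b"CCCC"])
s.write(1, b"XYZW")
assert s.rebuild(1) == b"XYZW"
print("rebuilt block 1:", s.rebuild(1))
```

Because XOR is its own inverse, XOR-ing the parity with all surviving blocks cancels everything except the lost block; the twin pages are what let transaction commit proceed without an in-place parity overwrite.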
NASA Astrophysics Data System (ADS)
Amini, Kamran; Akhbarizadeh, Amin; Javadpour, Sirus
2012-09-01
The effect of deep cryogenic treatment on the microstructure, hardness, and wear behavior of D2 tool steel was studied by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), hardness test, pin-on-disk wear test, and the reciprocating pin-on-flat wear test. The results show that deep cryogenic treatment eliminates retained austenite, makes a better carbide distribution, and increases the carbide content. Furthermore, some new nano-sized carbides form during the deep cryogenic treatment, thereby increasing the hardness and improving the wear behavior of the samples.
STS-48 Pilot Reightler on OV-103's aft flight deck poses for ESC photo
NASA Technical Reports Server (NTRS)
1991-01-01
STS-48 Pilot Kenneth S. Reightler, Jr, positioned under overhead window W8, poses for an electronic still camera (ESC) photo on the aft flight deck of the earth-orbiting Discovery, Orbiter Vehicle (OV) 103. Crewmembers were testing the ESC as part of Development Test Objective (DTO) 648, Electronic Still Photography. The digital image was stored on a removable hard disk or small optical disk, and could be converted to a format suitable for downlink transmission. The ESC is making its initial appearance on this Space Shuttle mission.
Alshali, Ruwaida Z; Salim, Nesreen A; Satterthwaite, Julian D; Silikas, Nick
2015-02-01
To measure bottom/top hardness ratio of bulk-fill and conventional resin-composite materials, and to assess hardness changes after dry and ethanol storage. Filler content and kinetics of thermal decomposition were also tested using thermogravimetric analysis (TGA). Six bulk-fill (SureFil SDR, Venus bulk fill, X-tra base, Filtek bulk fill flowable, Sonic fill, and Tetric EvoCeram bulk-fill) and eight conventional resin-composite materials (Grandioso flow, Venus Diamond flow, X-flow, Filtek Supreme Ultra Flowable, Grandioso, Venus Diamond, TPH Spectrum, and Filtek Z250) were tested (n = 5). Initial and 24-h (post-cure dry storage) top and bottom microhardness values were measured. Microhardness was re-measured after the samples were stored in 75% ethanol/water solution. Thermal decomposition and filler content were assessed by TGA. Results were analysed using one-way ANOVA and paired-sample t-test (α = 0.05). All materials showed a significant increase of microhardness after 24 h of dry storage, which ranged from 100.1% to 9.1%. A bottom/top microhardness ratio >0.9 was exhibited by all materials. All materials showed a significant decrease of microhardness after 24 h of storage in 75% ethanol/water, which ranged from 14.5% to 74.2%. The extent of post-irradiation hardness development was positively correlated with the extent of ethanol softening (R^2 = 0.89, p < 0.001). Initial thermal decomposition temperature assessed by TGA was variable and was correlated with ethanol softening. Bulk-fill resin-composites exhibit a bottom/top hardness ratio comparable to conventional materials at the manufacturer-recommended thickness. Hardness was affected to a variable extent by storage, with variable inorganic filler content and initial thermal decomposition shown by TGA. The manufacturer-recommended depth of cure of bulk-fill resin-composites can be reached based on the microhardness method.
Characterization of the primary polymer network of a resin-composite material should be considered when evaluating its stability in the aqueous oral environment.
Improvement in HPC performance through HIPPI RAID storage
NASA Technical Reports Server (NTRS)
Homan, Blake
1993-01-01
In 1986, RAID (redundant array of inexpensive (or independent) disks) technology was introduced as a viable solution to the I/O bottleneck. A number of different RAID levels were defined in 1987 by the Computer Science Division (EECS) of the University of California, Berkeley, each with specific advantages and disadvantages. With multiple RAID options available, taking advantage of RAID technology required matching particular RAID levels with specific applications. It was not possible to use one RAID device to address all applications. Maximum Strategy's Gen 4 Storage Server addresses this issue with a new capability called programmable RAID level partitioning. This capability enables users to have multiple RAID levels coexist on the same disks, thereby providing the versatility necessary for multiple concurrent applications.
Structural Dynamics of Maneuvering Aircraft.
1987-09-01
MANDYN. Written in Fortran 77, it was compiled and executed with Microsoft Fortran, Vers. 4.0 on an IBM PC-AT, with a co-processor, and a 20M hard disk...to the pivot area. Presumably, the pivot area is a hard point in the wing structure. Results: The final mass and flexural rigidity...lowest mode) is an important parameter. If it is less than three, the load factor approach can be problematical. In assessing the effect of one maneuver
A Disk-Based System for Producing and Distributing Science Products from MODIS
NASA Technical Reports Server (NTRS)
Masuoka, Edward; Wolfe, Robert; Sinno, Scott; Ye Gang; Teague, Michael
2007-01-01
Since beginning operations in 1999, the MODIS Adaptive Processing System (MODAPS) has evolved to take advantage of trends in information technology, such as the falling cost of computing cycles and disk storage and the availability of high quality open-source software (Linux, Apache and Perl), to achieve substantial gains in processing and distribution capacity and throughput while driving down the cost of system operations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vevera, Bradley J; Hyres, James W; McClintock, David A
2014-01-01
Irradiated AISI 316L stainless steel disks were removed from the Spallation Neutron Source (SNS) for post-irradiation examination (PIE) to assess mechanical property changes due to radiation damage and erosion of the target vessel. Topics reviewed include high-resolution photography of the disk specimens, cleaning to remove mercury (Hg) residue and surface oxides, profile mapping of cavitation pits using high frequency ultrasonic testing (UT), high-resolution surface replication, and machining of test specimens using wire electrical discharge machining (EDM), tensile testing, Rockwell Superficial hardness testing, Vickers microhardness testing, scanning electron microscopy (SEM), and energy dispersive spectroscopy (EDS). The effectiveness of the cleaning procedure was evident in the pre- and post-cleaning photography and permitted accurate placement of the test specimens on the disks. Due to the limited amount of material available and the unique geometry of the disks, machine fixturing and test specimen design were critical aspects of this work. Multiple designs were considered and refined during mock-up test runs on unirradiated disks. The techniques used to successfully machine and test the various specimens will be presented along with a summary of important findings from the laboratory examinations.
NASA Technical Reports Server (NTRS)
Starkey, D.; Gehrels, Cornelis; Horne, Keith; Fausnaugh, M. M.; Peterson, B. M.; Bentz, M. C.; Kochanek, C. S.; Denney, K. D.; Edelson, R.; Goad, M. R.;
2017-01-01
We conduct a multi-wavelength continuum variability study of the Seyfert 1 galaxy NGC 5548 to investigate the temperature structure of its accretion disk. The 19 overlapping continuum light curves (1158 Å to 9157 Å) combine simultaneous Hubble Space Telescope, Swift, and ground-based observations over a 180 day period from 2014 January to July. Light-curve variability is interpreted as the reverberation response of the accretion disk to irradiation by a central time-varying point source. Our model yields the disk inclination i = 36° ± 10°, temperature T_1 = (44 ± 6) × 10^3 K at 1 light day from the black hole, and a temperature-radius slope (T ∝ r^-α) of α = 0.99 ± 0.03. We also infer the driving light curve and find that it correlates poorly with both the hard and soft X-ray light curves, suggesting that the X-rays alone may not drive the ultraviolet and optical variability over the observing period. We also decompose the light curves into bright, faint, and mean accretion-disk spectra. These spectra lie below that expected for a standard blackbody accretion disk accreting at L/L_Edd = 0.1.
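The fitted temperature profile T(r) = T_1 (r / 1 light-day)^(-α) lends itself to a quick back-of-the-envelope calculation. Mapping each observing wavelength to a characteristic emitting radius via Wien's displacement law is a simplifying assumption on our part, not the paper's full reverberation model:

```python
import math

B_WIEN = 2.898e-3   # Wien displacement constant, m K

def T(r_ld, T1=44e3, alpha=0.99):
    """Disk temperature (K) at radius r_ld in light-days, per the fitted profile."""
    return T1 * r_ld ** (-alpha)

def emitting_radius(wavelength_m, T1=44e3, alpha=0.99):
    """Radius (light-days) whose blackbody peak matches the given wavelength
    (Wien's law as a crude wavelength-to-radius mapping)."""
    T_target = B_WIEN / wavelength_m
    return (T_target / T1) ** (-1.0 / alpha)

# The shortest and longest bands in the 19-light-curve set, plus a visual band:
for wl_A in (1158, 5100, 9157):
    r = emitting_radius(wl_A * 1e-10)
    print(f"{wl_A} A -> r ~ {r:.2f} light-days, T ~ {T(r):.0f} K")
```

With α ≈ 1 the characteristic radius grows almost linearly with wavelength, which is why the longer-wavelength bands reverberate at larger lags in this picture.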
Hard-to-cook phenomenon in chickpeas (Cicer arietinum L): effect of accelerated storage on quality.
Reyes-Moreno, C; Okamura-Esparza, J; Armienta-Rodelo, E; Gómez-Garza, R M; Milán-Carrillo, J
2000-01-01
Storage at high temperature (≥25 °C) and high relative humidity (≥65%) causes development of the hard-to-cook (HTC) phenomenon in grain legumes. The objective of this work was to study the effect of storage simulating tropical conditions on chickpea quality. The hardening of the Surutato 77, Mocorito 88, and Blanco Sinaloa 92 chickpea varieties was produced using adverse storage conditions (32 ± 1 °C, RH = 75%, 160 days). For all samples, the Hunter 'L' values decreased and ΔE values increased during storage, meaning a loss of color lightness and development of darkening. Accelerated storage caused a significant decrease in the water absorption capacities and cooking times of whole seeds, cotyledons, and seed coats of all samples, being more pronounced in the Blanco Sinaloa 92 variety. Furthermore, storage produced significant decreases in the seed coat tannin content of the three materials; this parameter increased significantly in the cotyledon. In all samples, the levels of phytic acid decreased significantly with the seed hardness. Hardening of chickpea grains caused a decrease in the in vitro protein digestibilities of all varieties. These results suggest that both the cotyledon and seed coat play a significant role in the process of chickpea hardening. Blanco Sinaloa 92 and Mocorito 88 might be classified as varieties with high and low proneness, respectively, to the development of the HTC condition.
Time-dependent disk accretion in X-ray Nova MUSCAE 1991
NASA Astrophysics Data System (ADS)
Mineshige, Shin; Hirano, Akira; Kitamoto, Shunji; Yamada, Tatsuya T.; Fukue, Jun
1994-05-01
We propose a new model for X-ray spectral fitting of binary black hole candidates. In this model, it is assumed that X-ray spectra are composed of a Comptonized blackbody (hard component) and a disk blackbody spectrum (soft component), in which the temperature gradient of the disk, q ≡ -d log T/d log r, is left as a fitting parameter. With this model, we have fitted X-ray spectra of X-ray Nova Muscae 1991 obtained by Ginga. The fitting shows that a hot cloud, which Compton up-scatters soft photons from the disk, gradually shrank and became transparent after the main peak. The temperature gradient turns out to be fairly constant at q ≈ 0.75, the value expected for a Newtonian disk model. To reproduce this value with a relativistic disk model, a small inclination angle, i ≈ 0°-15°, is required. It seems, however, that the q-value temporarily decreased below 0.75 at the main flare, and q increased in a transient fashion at the second peak (or the reflare) occurring approximately 70 days after the main peak. Although statistics are poor, these results, if real, would indicate that the disk brightenings responsible for the main and secondary peaks were initiated in the relatively inner portions of the disk.
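The fitted temperature gradient q ≡ -d log T/d log r can be illustrated with a minimal numerical check (an assumption for illustration, not the authors' spectral-fitting code): for a Newtonian profile T ∝ r^(-3/4), a log-log slope estimate recovers q ≈ 0.75.

```python
import numpy as np

# Estimate q = -d log T / d log r as the least-squares slope of
# log T versus log r (a sketch, not the Ginga fitting procedure).
def temperature_gradient(r, T):
    slope, _ = np.polyfit(np.log(r), np.log(T), 1)
    return -slope

r = np.logspace(0, 2, 50)        # radii, arbitrary units
T = 1.0e7 * r ** -0.75           # Newtonian disk profile
q = temperature_gradient(r, T)   # recovers ~0.75
```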
DOE Office of Scientific and Technical Information (OSTI.GOV)
Debnath, Dipak; Molla, Aslam Ali; Chakrabarti, Sandip K.
2015-04-20
Transient black hole candidates are interesting objects to study in X-rays as these sources show rapid evolutions in their spectral and temporal properties. In this paper, we study the spectral properties of the Galactic transient X-ray binary MAXI J1659-152 during its very first outburst after discovery with the archival data of RXTE Proportional Counter Array instruments. We make a detailed study of the evolution of accretion flow dynamics during its 2010 outburst through spectral analysis using the Chakrabarti-Titarchuk two-component advective flow (TCAF) model as an additive table model in XSPEC. Accretion flow parameters (Keplerian disk and sub-Keplerian halo rates, shock location, and shock strength) are extracted from our spectral fits with TCAF. We studied variations of these fit parameters during the entire outburst as it passed through three spectral classes: hard, hard-intermediate, and soft-intermediate. We compared our TCAF fitted results with standard combined disk blackbody (DBB) and power-law (PL) model fitted results and found that variations of disk rate with DBB flux and halo rate with PL flux are generally similar in nature. There appears to be an absence of the soft state, unlike what is seen in other similar sources.
Nanolubrication: patterned lubricating films using ultraviolet (UV) irradiation on hard disks.
Zhang, J; Hsu, S M; Liew, Y F
2007-01-01
Nanolubrication is emerging as a key technical barrier in many devices. One of the key attributes of successful device lubrication is self-sustainability using only several molecular layers. For single-molecular-species lubrication, one desires both bonding strength and molecular mobility, so that molecules can repair the contact by diffusing back into it. One way to achieve this is to use a mask to shield the surface with a patterned surface texture, deposit a monolayer on the surface, and induce bonding; mobile molecules are then re-deposited to bring the film back to the desired thickness. This paper describes the use of long-wavelength UV irradiation (320-390 nm) to induce bonding of a perfluoropolyether (PFPE) on CN(x) disks for magnetic hard disk application. This allows the use of irradiation to control the degree of bonding on CN(x) coatings. The effect of induced bonding at this wavelength was studied by comparing 100% mobile PFPE, 100% bonded PFPE, and a mixture of mobile and bonded PFPE in a series of laboratory tests. Using a lateral force microscope, a diamond-tipped atomic force microscope, and a ball-on-inclined-plane apparatus, the friction and wear characteristics of these three cases were obtained. Results suggested that the mixed PFPE has the highest shear rupture strength.
NASA Astrophysics Data System (ADS)
Kobylkin, Konstantin
2016-10-01
Computational complexity and approximability are studied for the problem of intersecting a set of straight-line segments with the smallest-cardinality set of disks of fixed radii r > 0, where the set of segments forms a straight-line embedding of a possibly non-planar geometric graph. This problem arises in physical network security analysis for telecommunication, wireless, and road networks represented by specific geometric graphs defined by Euclidean distances between their vertices (proximity graphs). It can be formulated as the known Hitting Set problem over a set of Euclidean r-neighbourhoods of segments. Although of interest, the computational complexity and approximability of Hitting Set over such structured sets of geometric objects have not received much attention in the literature. Strong NP-hardness of the problem is reported over special classes of proximity graphs, namely Delaunay triangulations, some of their connected subgraphs, half-θ6 graphs, and non-planar unit disk graphs, and APX-hardness is given for non-planar geometric graphs at different scales of r with respect to the longest graph edge length. A simple constant-factor approximation algorithm is presented for the case where r is at the same scale as the longest edge length.
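As a generic illustration of the Hitting Set formulation (this is the standard greedy logarithmic-factor heuristic, not the constant-factor geometric algorithm the abstract presents), each segment can be represented by the set of candidate disk centres whose r-neighbourhood it intersects:

```python
# Generic greedy Hitting Set sketch: repeatedly pick the candidate
# element that hits the most sets not yet hit. The segment/centre
# names below are hypothetical, for illustration only.
def greedy_hitting_set(sets):
    """sets: list of frozensets of candidate elements; returns chosen elements."""
    uncovered = list(range(len(sets)))
    chosen = []
    while uncovered:
        counts = {}
        for i in uncovered:           # count hits per candidate
            for e in sets[i]:
                counts[e] = counts.get(e, 0) + 1
        best = max(counts, key=counts.get)
        chosen.append(best)
        uncovered = [i for i in uncovered if best not in sets[i]]
    return chosen

# three "segments", each hit by some candidate disk centres
hits = [frozenset({'a', 'b'}), frozenset({'b', 'c'}), frozenset({'c'})]
centres = greedy_hitting_set(hits)
```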
The Design and Evolution of Jefferson Lab's Jasmine Mass Storage System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryan Hess; M. Andrew Kowalski; Michael Haddox-Schatz
We describe the Jasmine mass storage system, in operation since 2001. Jasmine has scaled to meet the challenges of grid applications, petabyte class storage, and hundreds of MB/sec throughput using commodity hardware, Java technologies, and a small but focused development team. The evolution of the integrated disk cache system, which provides a managed online subset of the tape contents, is examined in detail. We describe how the storage system has grown to meet the special needs of the batch farm, grid clients, and new performance demands.
NASA Astrophysics Data System (ADS)
Hesselink, Lambertus; Orlov, Sergei S.
Optical data storage is a phenomenal success story. Since its introduction in the early 1980s, optical data storage devices have evolved from being focused primarily on music distribution, to becoming the prevailing data distribution and recording medium. Each year, billions of optical recordable and prerecorded disks are sold worldwide. Almost every computer today is shipped with a CD or DVD drive installed.
Taniguchi, Yoshimasa; Yamada, Makiko; Taniguchi, Harumi; Matsukura, Yasuko; Shindo, Kazutoshi
2015-11-25
The bitter taste of beer originates from resins in hops (Humulus lupulus L.), which are classified into two subtypes (soft and hard). Whereas the nature and reactivity of soft-resin-derived compounds, such as α-, β-, and iso-α-acids, are well studied, there is only a little information on the compounds in hard resin. For this work, hard resin was prepared from stored hops and investigated for its compositional changes in an experimental model of beer aging. The hard resin contained a series of α-acid oxides. Among them, 4'-hydroxyallohumulinones were unstable under beer storage conditions, and their transformation induced primary compositional changes of the hard resin during beer aging. The chemical structures of the products, including novel polycyclic compounds scorpiohumulinols A and B and dicyclohumulinols A and B, were determined by HRMS and NMR analyses. These compounds were proposed to be produced via proton-catalyzed cyclization reactions of 4'-hydroxyallohumulinones. Furthermore, they were more stable than their precursor 4'-hydroxyallohumulinones during prolonged storage periods.
Development of a COTS Mass Storage Unit for the Space Readiness Coherent Lidar Experiment
NASA Technical Reports Server (NTRS)
Liggin, Karl; Clark, Porter
1999-01-01
Developing a Mass Storage Unit (MSU) using commercial off-the-shelf (COTS) hard drives is an ongoing challenge in meeting the Space Readiness Coherent Lidar Experiment (SPARCLE) program requirements. A conceptual view of SPARCLE's laser collecting atmospheric data from the shuttle is shown in Figure 1. The decision to develop this technology required several in-depth studies before an actual COTS hard drive was selected to continue this effort. Continued development of the MSU can, and will, serve future NASA programs that require larger data storage and more on-board processing.
ERIC Educational Resources Information Center
Perez, Ernest
1997-01-01
Examines the practical realities of upgrading Intel personal computers in libraries, considering budgets and technical personnel availability. Highlights include adding RAM; putting in faster processor chips, including clock multipliers; new hard disks; CD-ROM speed; motherboards and interface cards; cost limits and economic factors; and…
Fluctuation theorem for the effusion of an ideal gas.
Cleuren, B; Van den Broeck, C; Kawai, R
2006-08-01
The probability distribution of the entropy production for the effusion of an ideal gas between two compartments is calculated explicitly. The fluctuation theorem is verified. The analytic results are in good agreement with numerical data from hard disk molecular dynamics simulations.
Dunkel, F V; Serugendo, A; Breene, W M; Sriharan, S
1995-07-01
Three plant products with known insecticidal properties, a dry extract of flowers of Chrysanthemum cinerariaefolium (Trevir.) Vis. produced in Rwanda, an ethanol extract of seeds of neem, Azadirachta indica A. Juss, and crushed leaves of Tetradenia riparia Hochst Codd, a traditional Rwandan medicine, were mixed with beans, Phaseolus vulgaris L., for storage protection. These plant-protected beans were compared with "off-the-shelf" beans that were being sold to consumers by the Rwandan National Agricultural Products Marketing Organization (OPROVIA). A trained sensory panel determined that beans treated with neem and C. cinerariaefolium were as acceptable after 8 months of storage as those being sold throughout Rwanda by the marketing organization. Beans marketed by this organization were all treated with the standard insecticide application in Rwanda, 0.01% weight/weight pirimiphos methyl in a powder formulation. Instrumental hardness (% hard-to-cook/mean gram force) after 20 months of storage was acceptable for beans stored with neem, with C. cinerariaefolium, or with the conventional government application of pirimiphos methyl. Use of either neem or C. cinerariaefolium for storage protection should not affect consumer acceptance of dry beans.
A distributed parallel storage architecture and its potential application within EOSDIS
NASA Technical Reports Server (NTRS)
Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony
1994-01-01
We describe the architecture, implementation, and use of a scalable, high-performance, distributed-parallel data storage system developed in the ARPA-funded MAGIC gigabit testbed. A collection of wide-area distributed disk servers operates in parallel to provide logical block-level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.
Up-to-date state of storage techniques used for large numerical data files
NASA Technical Reports Server (NTRS)
Chlouba, V.
1975-01-01
Methods for data storage and output in data banks and memory files are discussed along with a survey of equipment available for this. Topics discussed include magnetic tapes, magnetic disks, Terabit magnetic tape memory, Unicon 690 laser memory, IBM 1360 photostore, microfilm recording equipment, holographic recording, film readers, optical character readers, digital data storage techniques, and photographic recording. The individual types of equipment are summarized in tables giving the basic technical parameters.
Spin dynamics and thermal stability in L10 FePt
NASA Astrophysics Data System (ADS)
Chen, Tianran; Toomey, Wahida
Increasing the data storage density of hard drives remains one of the continuing goals in magnetic recording technology. A critical challenge for increasing data density is the thermal stability of the written information, which drops rapidly as the bit size gets smaller. To maintain good thermal stability in small bits, one should consider materials with high anisotropy energy such as L10 FePt. High anisotropy energy nevertheless implies high coercivity, making it difficult to write information onto the disk. This issue can be overcome by a new technique called heat-assisted magnetic recording, where a laser is used to locally heat the recording medium to reduce its coercivity while retaining relatively good thermal stability. Many of the microscopic magnetic properties of L10 FePt, however, have not been theoretically well understood. In this poster, I will focus on a single L10 FePt grain, typically a few nanometers in size. Specifically, I will discuss its critical temperature, size effects, and, in particular, spin dynamics in the writing process, a key to the success of heat-assisted magnetic recording. WCU URF16.
Towards building high performance medical image management system for clinical trials
NASA Astrophysics Data System (ADS)
Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel
2011-03-01
Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large-scale image assessment is often performed by a large group of experts by retrieving images from a centralized image repository to workstations to mark up and annotate images. In such an environment, it is critical to provide a high-performance image management system that supports efficient concurrent image retrievals in a distributed environment. There are several major challenges: high throughput of large-scale image data over the Internet from the server to multiple concurrent client users, efficient communication protocols for transporting data, and effective management of versioning of data for audit trails. We study the major bottlenecks for such a system, and propose and evaluate a solution using a hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database-based versioning scheme for efficient archival of image revision history. Our experiments show promising results of our methods, and our work provides a guideline for building enterprise-level high-performance medical image management systems.
Miller, J.J.
1982-01-01
The spectral analysis and filter program package is written in the BASIC language for the HP-9845T desktop computer. The program's main purpose is to perform spectral analyses on digitized time-domain data. In addition, band-pass filtering of the data can be performed in the time domain. Various other processes, such as autocorrelation, can be performed on the time-domain data in order to precondition them for spectral analyses. The frequency-domain data can also be transformed back into the time domain if desired. Any data can be displayed on the CRT in graphic form using a variety of plot routines. A hard copy can be obtained immediately using the internal thermal printer. Data can also be displayed in tabular form on the CRT or internal thermal printer, or stored permanently on a mass storage device such as a tape or disk. A list of the processes performed, in the order in which they occurred, can be displayed at any time.
Imprint lithography template technology for bit patterned media (BPM)
NASA Astrophysics Data System (ADS)
Lille, J.; Patel, K.; Ruiz, R.; Wu, T.-W.; Gao, H.; Wan, Lei; Zeltzer, G.; Dobisz, E.; Albrecht, T. R.
2011-11-01
Bit patterned media (BPM) for magnetic recording has emerged as a promising technology to deliver thermally stable magnetic storage at densities beyond 1 Tb/in². Insertion of BPM into hard disk drives will require the introduction of nanoimprint lithography and other nanofabrication processes for the first time. In this work, we focus on nanoimprint and nanofabrication challenges that are being overcome in order to produce patterned media. Patterned media has created the need for new tools and processes, such as an advanced rotary e-beam lithography tool and block copolymer integration. The integration of block copolymer is through the use of a chemical contrast pattern on the substrate which guides the alignment of di-block copolymers. Most of the work on directed self-assembly for patterned media applications has, until recently, concentrated on the formation of circular dot patterns in a hexagonal close-packed lattice. However, interactions between the read head and media favor a bit aspect ratio (BAR) greater than one. This design constraint has motivated new approaches for using self-assembly to create suitable high-BAR master patterns and has implications for template fabrication.
Landau-Lifshitz-Bloch equation for exchange-coupled grains
NASA Astrophysics Data System (ADS)
Vogler, Christoph; Abert, Claas; Bruckner, Florian; Suess, Dieter
2014-12-01
Heat-assisted recording is a promising technique to further increase the storage density in hard disks. Multilayer recording grains with graded Curie temperature are discussed to further assist the write process. Describing the correct magnetization dynamics of these grains, from room temperature to far above the Curie point, during a write process is required for the calculation of bit error rates. We present a coarse-grained approach based on the Landau-Lifshitz-Bloch (LLB) equation to model exchange-coupled grains with low computational effort. The required temperature-dependent material properties, such as the zero-field equilibrium magnetization as well as the parallel and normal susceptibilities, are obtained by atomistic Landau-Lifshitz-Gilbert simulations. Each grain is described with one magnetization vector. In order to mimic the atomistic exchange interaction between the grains, a special treatment of the exchange field in the coarse-grained approach is presented. With the coarse-grained LLB model, the switching probability of a recording grain consisting of two layers with graded Curie temperature is investigated in detail by calculating phase diagrams for different applied heat pulses and external magnetic fields.
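The atomistic input mentioned above comes from Landau-Lifshitz-Gilbert (LLG) dynamics. A minimal single-macrospin LLG integrator, in a hedged sketch with made-up parameters (not the paper's LLB implementation), looks like:

```python
import numpy as np

# Integrate the Landau-Lifshitz-Gilbert equation
#   dm/dt = -g' [ m x H + alpha m x (m x H) ],  g' = gamma/(1+alpha^2),
# for one unit spin relaxing toward a field along +z.
# All parameter values here are illustrative, in reduced units.
def llg_relax(m, h, alpha=0.1, gamma=1.0, dt=5e-3, steps=4000):
    gp = gamma / (1.0 + alpha ** 2)
    for _ in range(steps):
        mxh = np.cross(m, h)
        dm = -gp * (mxh + alpha * np.cross(m, mxh))
        m = m + dt * dm
        m /= np.linalg.norm(m)  # renormalise (Euler does not conserve |m|)
    return m

m0 = np.array([1.0, 0.0, 0.0])                    # start in-plane
m_final = llg_relax(m0, np.array([0.0, 0.0, 1.0]))  # relaxes toward +z
```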
Spectro-Timing Study of GX 339-4 in a Hard Intermediate State
NASA Technical Reports Server (NTRS)
Furst, F.; Grinberg, V.; Tomsick, J. A.; Bachetti, M.; Boggs, S. E.; Brightman, M.; Christensen, F. E.; Craig, W. W.; Ghandi, P.; Zhang, William W.
2016-01-01
We present an analysis of Nuclear Spectroscopic Telescope Array observations of a hard intermediate state of the transient black hole GX 339-4 taken in 2015 January. With the source softening significantly over the course of the 1.3-day-long observation, we split the data into 21 sub-sets and find that the spectrum of each can be well described by a power-law continuum with an additional relativistically blurred reflection component. The photon index increases from approx. 1.69 to approx. 1.77 over the course of the observation. The accretion disk is truncated at around nine gravitational radii in all spectra. We also perform timing analysis on the same 21 individual data sets, and find a strong type-C quasi-periodic oscillation (QPO), which increases in frequency from approx. 0.68 to approx. 1.05 Hz with time. The frequency change is well correlated with the softening of the spectrum. We discuss possible scenarios for the production of the QPO and calculate predicted inner radii in the relativistic precession model as well as the global disk mode oscillations model. We find discrepancies with respect to the observed values in both models unless we allow for a black hole mass of approx. 100 solar masses, which is highly unlikely. We discuss possible systematic uncertainties, in particular with the measurement of the inner accretion disk radius in the relativistic reflection model. We conclude that the combination of observed QPO frequencies and inner accretion disk radii, as obtained from spectral fitting, is difficult to reconcile with current models.
Exact mean-energy expansion of Ginibre's gas for coupling constants Γ = 2 × (odd integer)
NASA Astrophysics Data System (ADS)
Salazar, R.; Téllez, G.
2017-12-01
Using the approach of a Vandermonde determinant to the power Γ = Q²/(k_B T) expanded on monomial functions, a way to find the excess energy U_exc of the two-dimensional one-component plasma (2DOCP) on hard and soft disks (or a Dyson gas) for odd values of Γ/2 is provided. At Γ = 2, the present study not only corroborates the result for the particle-particle energy contribution of the Dyson gas found by Shakirov [Shakirov, Phys. Lett. A 375, 984 (2011), 10.1016/j.physleta.2011.01.004] using an alternative approach, but also provides the exact finite-N expansion of the excess energy of the 2DOCP on the hard disk. The excess energy is fitted to an ansatz of the form U_exc = K1 N + K2 √N + K3 + K4/N + O(1/N²) to study the finite-size correction, with coefficients Ki and N the number of particles. In particular, the bulk term of the excess energy is in agreement with the well-known result of Jancovici for the hard disk in the thermodynamic limit [Jancovici, Phys. Rev. Lett. 46, 386 (1981), 10.1103/PhysRevLett.46.386]. Finally, an expression is found for the pair correlation function which still keeps a link with random matrix theory via the kernel of the Ginibre ensemble [Ginibre, J. Math. Phys. 6, 440 (1965), 10.1063/1.1704292] for odd values of Γ/2. A comparison between the analytical two-body density function and histograms obtained with Monte Carlo simulations for small systems and Γ = 2, 6, 10, ... shows that the approach described in this paper may be used to study analytically the crossover behavior from systems in the fluid phase to small crystals.
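The finite-size ansatz U_exc = K1 N + K2 √N + K3 + K4/N is an ordinary linear least-squares problem in the coefficients. A sketch on synthetic data (the coefficient values below are invented for illustration, not the paper's results):

```python
import numpy as np

# Fit the finite-size ansatz U_exc = K1*N + K2*sqrt(N) + K3 + K4/N
# by linear least squares over the four basis functions.
def fit_excess_energy(N, U):
    """Return least-squares coefficients (K1, K2, K3, K4)."""
    A = np.column_stack([N, np.sqrt(N), np.ones_like(N), 1.0 / N])
    coeffs, *_ = np.linalg.lstsq(A, U, rcond=None)
    return coeffs

N = np.array([4.0, 9.0, 16.0, 25.0, 36.0, 49.0])
U = -1.1 * N + 0.4 * np.sqrt(N) + 0.2 - 0.3 / N   # synthetic "data"
K1, K2, K3, K4 = fit_excess_energy(N, U)          # recovers the inputs
```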
A study of mass data storage technology for rocket engine data
NASA Technical Reports Server (NTRS)
Ready, John F.; Benser, Earl T.; Fritz, Bernard S.; Nelson, Scott A.; Stauffer, Donald R.; Volna, William M.
1990-01-01
The results of a nine month study program on mass data storage technology for rocket engine (especially the Space Shuttle Main Engine) health monitoring and control are summarized. The program had the objective of recommending a candidate mass data storage technology development for rocket engine health monitoring and control and of formulating a project plan and specification for that technology development. The work was divided into three major technical tasks: (1) development of requirements; (2) survey of mass data storage technologies; and (3) definition of a project plan and specification for technology development. The first of these tasks reviewed current data storage technology and developed a prioritized set of requirements for the health monitoring and control applications. The second task included a survey of state-of-the-art and newly developing technologies and a matrix-based ranking of the technologies. It culminated in a recommendation of optical disk technology as the best candidate for technology development. The final task defined a proof-of-concept demonstration, including tasks required to develop, test, analyze, and demonstrate the technology advancement, plus an estimate of the level of effort required. The recommended demonstration emphasizes development of an optical disk system which incorporates an order-of-magnitude increase in writing speed above the current state of the art.
Grain-boundary free energy in an assembly of elastic disks.
Lusk, Mark T; Beale, Paul D
2004-02-01
Grain-boundary free energy is estimated as a function of misorientation for symmetric tilt boundaries in an assembly of nearly hard disks. Fluctuating cell theory is used to accomplish this, since the most common techniques for calculating interfacial free energy cannot be applied to such assemblies. The results are analogous to those obtained using a Lennard-Jones potential, but in this case the interfacial energy is dominated by an entropic contribution. Disk assemblies colorized with free and specific volume elucidate differences between these two characteristics of boundary structure. Profiles are also provided of the Helmholtz and Gibbs free energies as a function of distance from the grain boundaries. Low-angle grain boundaries are shown to follow the classical relationship between dislocation orientation/spacing and misorientation angle.
de Moraes, Rafael Ratto; Marimon, José Laurindo Machado; Schneider, Luis Felipe; Sinhoreti, Mário Alexandre Coelho; Correr-Sobrinho, Lourenço; Bueno, Márcia
2008-06-01
This study assessed the effect of 6 months of aging in water on the surface roughness and surface/subsurface hardness of two microhybrid resin composites. Filtek Z250 and Charisma were tested. Cylindrical specimens were obtained and stored in distilled water for 24 hours or 6 months at 37 °C. For Knoop hardness evaluation, the specimens were transversely wet-flattened, and indentations were made on surface and subsurface layers. Data were submitted to three-way ANOVA and Tukey's test (α ≤ 0.05). Surface roughness baseline measurements were made at 24 hours and repeated after 6 months of storage. Data were submitted to repeated-measures ANOVA and Tukey's test (α ≤ 0.05). Surface hardness (KHN, kg/mm²) means (± standard deviation) ranged from 55 ± 1 to 49 ± 4 for Z250 and from 50 ± 2 to 41 ± 3 for Charisma, at 24 hours and 6 months, respectively. Subsurface means ranged from 58 ± 2 to 61 ± 3 for Z250 and from 50 ± 1 to 54 ± 2 for Charisma, at 24 hours and 6 months. For both composites, the aged specimens presented significantly softer surfaces (p < 0.01). For the subsurface hardness, alteration after storage was detected only for Charisma, which presented a significant rise in hardness (p < 0.01). Z250 presented significantly harder surface and subsurface layers in comparison with Charisma. Surface roughness (Ra, μm) means ranged from 0.07 ± 0.00 to 0.07 ± 0.01 for Z250 and from 0.06 ± 0.01 to 0.07 ± 0.01 for Charisma, at 24 hours and 6 months, respectively. For both composites, no significant roughness alteration was detected during the study (p = 0.386). The 6-month period of storage in water had a significant softening effect on the surfaces of the composites, although no significant deleterious alteration was detected for the subsurface hardness. In addition, the storage period had no significant effect on the surface roughness of the materials.
Computer Sciences and Data Systems, volume 2
NASA Technical Reports Server (NTRS)
1987-01-01
Topics addressed include: data storage; information network architecture; VHSIC technology; fiber optics; laser applications; distributed processing; spaceborne optical disk controller; massively parallel processors; and advanced digital SAR processors.
A NICER Look at the Aql X-1 Hard State
NASA Astrophysics Data System (ADS)
Bult, Peter; Arzoumanian, Zaven; Cackett, Edward M.; Chakrabarty, Deepto; Gendreau, Keith C.; Guillot, Sebastien; Homan, Jeroen; Jaisawal, Gaurava K.; Keek, Laurens; Kenyon, Steve; Lamb, Frederick K.; Ludlam, Renee; Mahmoodifar, Simin; Markwardt, Craig; Miller, Jon M.; Prigozhin, Gregory; Soong, Yang; Strohmayer, Tod E.; Uttley, Phil
2018-05-01
We report on a spectral-timing analysis of the neutron star low-mass X-ray binary (LMXB) Aql X-1 with the Neutron Star Interior Composition Explorer (NICER) on the International Space Station (ISS). Aql X-1 was observed with NICER during a dim outburst in 2017 July, collecting approximately 50 ks of good exposure. The spectral and timing properties of the source correspond to that of a (hard) extreme island state in the atoll classification. We find that the fractional amplitude of the low-frequency (<0.3 Hz) band-limited noise shows a dramatic turnover as a function of energy: it peaks at 0.5 keV with nearly 25% rms, drops to 12% rms at 2 keV, and rises to 15% rms at 10 keV. Through the analysis of covariance spectra, we demonstrate that band-limited noise exists in both the soft thermal emission and the power-law emission. Additionally, we measure hard time lags, indicating the thermal emission at 0.5 keV leads the power-law emission at 10 keV on a timescale of ∼100 ms at 0.3 Hz to ∼10 ms at 3 Hz. Our results demonstrate that the thermal emission in the hard state is intrinsically variable, and is driving the modulation of the higher energy power-law. Interpreting the thermal spectrum as disk emission, we find that our results are consistent with the disk propagation model proposed for accretion onto black holes.
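The hard lags quoted above are conventionally measured from the cross spectrum of two energy bands. A minimal sketch with synthetic sinusoids (an illustration only, not the NICER analysis pipeline):

```python
import numpy as np

# Time lag of band x2 behind band x1 at one Fourier frequency:
# lag = angle(conj(F2) * F1) / (2*pi*f), positive when x2 trails x1.
def time_lag(x1, x2, dt, f_target):
    """Lag (seconds) by which x2 trails x1 at the bin nearest f_target."""
    freqs = np.fft.rfftfreq(len(x1), dt)
    cross = np.conj(np.fft.rfft(x2)) * np.fft.rfft(x1)
    k = np.argmin(np.abs(freqs - f_target))
    return np.angle(cross[k]) / (2.0 * np.pi * freqs[k])

dt = 0.01
t = np.arange(0, 100, dt)
soft = np.sin(2 * np.pi * 0.3 * t)          # 0.3 Hz "soft band" signal
hard = np.sin(2 * np.pi * 0.3 * (t - 0.1))  # same signal, delayed 0.1 s
lag = time_lag(soft, hard, dt, 0.3)         # recovers ~0.1 s
```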
14 CFR 1206.700 - Schedule of fees.
Code of Federal Regulations, 2013 CFR
2013-01-01
... copies include the time spent in duplicating the documents. For copies of computer disks, still photographs, blueprints, videotapes, engineering drawings, hard copies of aperture cards, etc., the fee... records. Because of the diversity in the types and configurations of computers which may be required in...
14 CFR 1206.700 - Schedule of fees.
Code of Federal Regulations, 2011 CFR
2011-01-01
... copies include the time spent in duplicating the documents. For copies of computer disks, still photographs, blueprints, videotapes, engineering drawings, hard copies of aperture cards, etc., the fee... records. Because of the diversity in the types and configurations of computers which may be required in...
14 CFR 1206.700 - Schedule of fees.
Code of Federal Regulations, 2012 CFR
2012-01-01
... copies include the time spent in duplicating the documents. For copies of computer disks, still photographs, blueprints, videotapes, engineering drawings, hard copies of aperture cards, etc., the fee... records. Because of the diversity in the types and configurations of computers which may be required in...
Army Medical Imaging System - ARMIS
1992-08-08
modems, scanners, hard disk drives, dot matrix printers, erasable-optical disc drives, CD-ROM drives, WORM disc drives and tape drives are fully...can use 56K leased lines, T1 links, digital data circuits, or public telephone lines. 3. ISDN The Integrated Services Digital Network, ISDN, is a
Extremes of the jet–accretion power relation of blazars, as explored by NuSTAR
Sbarrato, T.; Ghisellini, G.; Tagliaferri, G.; ...
2016-07-18
Hard X-ray observations are crucial to study the non-thermal jet emission from high-redshift, powerful blazars. We observed two bright z > 2 flat spectrum radio quasars (FSRQs) in hard X-rays to explore the details of their relativistic jets and their possible variability. S5 0014+81 (at z = 3.366) and B0222+185 (at z = 2.690) have been observed twice by the Nuclear Spectroscopic Telescope Array (NuSTAR) simultaneously with Swift/XRT, showing different variability behaviors. We found that NuSTAR is instrumental to explore the variability of powerful high-redshift blazars, even when no gamma-ray emission is detected. The two sources have proven to have, respectively, the most luminous accretion disk and the most powerful jet among known blazars. Furthermore, thanks to these properties, they are located at the extreme end of the jet-accretion disk relation previously found for gamma-ray detected blazars, with which they are consistent.
Fabrication of piezoelectric ceramic micro-actuator and its reliability for hard disk drives.
Jing, Yang; Luo, Jianbin; Yang, Wenyan; Ju, Guoxian
2004-11-01
A new U-type micro-actuator for precisely positioning a magnetic head in high-density hard disk drives was proposed and developed. The micro-actuator is composed of a U-type stainless steel substrate and two piezoelectric ceramic elements. The piezoelectric elements were fabricated from a PMN-PZT ceramic plate with a high d31 piezoelectric coefficient using a reactive ion etching process. Reliability against temperature was investigated to ensure practical application to drive products. The U-type substrate, with a piezoelectric element attached to each side, was also simulated by the finite-element method and measured with a laser Doppler vibrometer in order to verify its driving mechanics. The micro-actuator coupled with two piezoelectric elements featured a large displacement of 0.875 μm and a high resonance frequency of over 22 kHz. The novel piezoelectric micro-actuators thus offer a useful compromise among displacement, resonance frequency, and generative force. The results reveal that the new design concept provides a valuable alternative to multilayer piezoelectric micro-actuators.
Electronic Still Camera Project on STS-48
NASA Technical Reports Server (NTRS)
1991-01-01
On behalf of NASA, the Office of Commercial Programs (OCP) has signed a Technical Exchange Agreement (TEA) with Autometric, Inc. (Autometric) of Alexandria, Virginia. The purpose of this agreement is to evaluate and analyze a high-resolution Electronic Still Camera (ESC) for potential commercial applications. During the mission, Autometric will provide unique photo analysis and hard-copy production. Once the mission is complete, Autometric will furnish NASA with an analysis of the ESC's capabilities. Electronic still photography is a developing technology providing the means by which a hand-held camera electronically captures and produces a digital image with resolution approaching film quality. The digital image, stored on removable hard disks or small optical disks, can be converted to a format suitable for downlink transmission, or it can be enhanced using image processing software. The on-orbit ability to enhance or annotate high-resolution images and then downlink them in real time will greatly improve Space Shuttle and Space Station capabilities in Earth observations and on-board photo documentation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Ruby Thuy; Diaz, Luis A.; Imholte, D. Devin
2017-06-05
Since the 2011 price spike of rare earth elements (REEs), research on permanent magnet recycling has blossomed globally to reduce future REE criticality. Hard disk drives (HDDs) have emerged as one feasible feedstock for recovering valuable REEs such as praseodymium, neodymium, and dysprosium. However, current processes for recycling e-waste focus only on certain metals due to feedstock and metal price uncertainties. In addition, some believe that recycling REEs is unprofitable. To shed some light on the economic viability of REE recycling from HDDs, this paper combines techno-economic information of a hydrometallurgical process with end-of-life HDD availability in a simulation model. Results showed that adding REEs to HDD recycling was profitable given current prices. As a result, recovered REEs could meet up to 5.1% of rest-of-world (excluding China) magnet demand. Aluminum, gold, copper scrap, and REEs were the primary revenue streams from HDD recycling.
Effect of Polydispersity on Diffusion in Random Obstacle Matrices
NASA Astrophysics Data System (ADS)
Cho, Hyun Woo; Kwon, Gyemin; Sung, Bong June; Yethiraj, Arun
2012-10-01
The dynamics of tracers in disordered matrices is of interest in a number of diverse areas of physics, such as the biophysics of crowding in cells and cell membranes, and the diffusion of fluids in porous media. To a good approximation the matrices can be modeled as a collection of spatially frozen particles. In this Letter, we consider the effect of polydispersity (in size) of the matrix particles on the dynamics of tracers. We study a two-dimensional system of hard disks diffusing in a sea of hard-disk obstacles, for different values of the polydispersity of the matrix. We find that for a given average size and area fraction, the diffusion of tracers is very sensitive to the polydispersity. We calculate the pore percolation threshold using Apollonius diagrams. The diffusion constant, D, follows a scaling relation D ~ (ϕ_c − ϕ_m)^(μ−β) for all values of the polydispersity, where ϕ_m is the area fraction and ϕ_c is the value of ϕ_m at the percolation threshold.
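As a sketch of the scaling relation reported above, the snippet below evaluates D ~ (ϕ_c − ϕ_m)^(μ−β) near the percolation threshold. The threshold ϕ_c and the combined exponent are illustrative placeholders, not the study's fitted values.

```python
# Illustrative evaluation of the scaling D ~ (phi_c - phi_m)^(mu - beta).
# phi_c and the combined exponent (mu - beta) below are placeholders,
# not values fitted in the study.

def diffusion_constant(phi_m, phi_c=0.32, exponent=1.3, prefactor=1.0):
    """Tracer diffusion constant near the pore percolation threshold."""
    if phi_m >= phi_c:
        return 0.0  # beyond the percolation threshold, long-range diffusion stops
    return prefactor * (phi_c - phi_m) ** exponent

# D decreases toward zero as the obstacle area fraction approaches phi_c.
for phi_m in (0.10, 0.20, 0.30):
    print(phi_m, diffusion_constant(phi_m))
```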
Recycling potential of neodymium: the case of computer hard disk drives.
Sprecher, Benjamin; Kleijn, Rene; Kramer, Gert Jan
2014-08-19
Neodymium, one of the more critically scarce rare earth metals, is often used in sustainable technologies. In this study, we investigate the potential contribution of neodymium recycling to reducing scarcity in supply, with a case study on computer hard disk drives (HDDs). We first review the literature on neodymium production and recycling potential. From this review, we find that recycling of computer HDDs is currently the most feasible pathway toward large-scale recycling of neodymium, even though HDDs do not represent the largest application of neodymium. We then use a combination of dynamic modeling and empirical experiments to conclude that within the application of NdFeB magnets for HDDs, the potential for loop-closing is significant: up to 57% in 2017. However, compared to the total NdFeB production capacity, the recovery potential from HDDs is relatively small (in the 1-3% range). The distributed nature of neodymium poses a significant challenge for recycling of neodymium.
Parallel Wavefront Analysis for a 4D Interferometer
NASA Technical Reports Server (NTRS)
Rao, Shanti R.
2011-01-01
This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology, and for distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to capture several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement in a file on disk. The available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time and faster, by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data so that it is usable. In characterizing an adaptive optics system, like the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. A programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing. Idle time of the interferometer is minimized. This architecture is implemented in Python and JavaScript, and may be altered to fit a customer's needs.
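The controller/worker pattern described above (a controller assigns images to cluster nodes, then collates the results) can be sketched with Python's standard multiprocessing module. The frame data and processing function here are hypothetical stand-ins, not 4Sight's actual API.

```python
# Sketch of the capture/analyze split described above: a controller process
# distributes previously captured frames to worker processes, then collates
# the per-frame results into a single measurement. The processing and
# collation functions are hypothetical stand-ins for the real algorithm.
from multiprocessing import Pool

def analyze_frame(frame):
    """Stand-in for per-image interferogram processing (here: mean value)."""
    return sum(frame) / len(frame)

def collate(results):
    """Stand-in for combining per-frame results into one measurement."""
    return sum(results) / len(results)

if __name__ == "__main__":
    frames = [[0.1, 0.2], [0.2, 0.3], [0.3, 0.4]]  # frames read back from disk
    with Pool(processes=2) as pool:
        per_frame = pool.map(analyze_frame, frames)  # parallel analysis
    print(collate(per_frame))
```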
The Enskog Equation for Confined Elastic Hard Spheres
NASA Astrophysics Data System (ADS)
Maynar, P.; García de Soria, M. I.; Brey, J. Javier
2018-03-01
A kinetic equation for a system of elastic hard spheres or disks confined by a hard wall of arbitrary shape is derived. It is a generalization of the modified Enskog equation in which the effects of the confinement are taken into account, and it is expected to be valid up to moderate densities. From the equation, balance equations for the hydrodynamic fields are derived, identifying the collisional transfer contributions to the pressure tensor and heat flux. A Lyapunov functional, H[f], is identified. For any solution of the kinetic equation, H decays monotonically in time until the system reaches the inhomogeneous equilibrium distribution, that is, a Maxwellian distribution with a density field consistent with equilibrium statistical mechanics.
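The Lyapunov functional mentioned above has, in the unconfined Boltzmann case, the classical H-functional form sketched below; the confined Enskog generalization derived in the paper adds configurational contributions not reproduced here, so this is only an indicative sketch.

```latex
H[f] \;=\; \int \mathrm{d}\mathbf{r}\,\mathrm{d}\mathbf{v}\;
f(\mathbf{r},\mathbf{v},t)\,\ln f(\mathbf{r},\mathbf{v},t),
\qquad
\frac{\mathrm{d}H}{\mathrm{d}t} \;\le\; 0,
```

with equality only at the equilibrium (Maxwellian) distribution.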
One-Dimensional Signal Extraction Of Paper-Written ECG Image And Its Archiving
NASA Astrophysics Data System (ADS)
Zhang, Zhi-ni; Zhang, Hong; Zhuang, Tian-ge
1987-10-01
A method for converting paper-written electrocardiograms to one-dimensional (1-D) signals for archival storage on floppy disk is presented here. Appropriate image processing techniques were employed to remove the background noise inherent to ECG recorder charts and to reconstruct the ECG waveform. The entire process consists of (1) digitization of paper-written ECGs with an image processing system via a TV camera; (2) image preprocessing, including histogram filtering and binary image generation; (3) ECG feature extraction and ECG wave tracing; and (4) transmission of the processed ECG data to IBM-PC compatible floppy disks for storage and retrieval. The algorithms employed here may also be used in the recognition of paper-written EEG or EMG and may be useful in robotic vision.
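Steps (2) and (3) of the pipeline above can be illustrated in miniature: threshold a grayscale chart image to a binary trace, then read off one amplitude sample per column. The tiny image and threshold are synthetic examples, not the paper's actual algorithm.

```python
# Miniature version of steps (2)-(3) above: binarize a scanned chart and
# extract a 1-D waveform as the mean dark-pixel row index per column.
# The 3x3 image and threshold are synthetic, for illustration only.

def binarize(image, threshold):
    """Binary image generation: 1 marks a dark (trace) pixel."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

def trace_waveform(binary):
    """Wave tracing: one amplitude sample per column (None if trace broken)."""
    rows, cols = len(binary), len(binary[0])
    signal = []
    for c in range(cols):
        dark = [r for r in range(rows) if binary[r][c]]
        signal.append(sum(dark) / len(dark) if dark else None)
    return signal

# 0 = black ink, 255 = white paper; the trace wanders across the columns.
chart = [[255,  10, 255],
         [ 10, 255, 255],
         [255, 255,  10]]
print(trace_waveform(binarize(chart, threshold=128)))  # [1.0, 0.0, 2.0]
```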
Disk space and load time requirements for eye movement biometric databases
NASA Astrophysics Data System (ADS)
Kasprowski, Pawel; Harezlak, Katarzyna
2016-06-01
Biometric identification is a very popular area of interest nowadays. Problems with so-called physiological methods like fingerprint or iris recognition have resulted in increased attention to methods measuring behavioral patterns. Eye movement based biometric (EMB) identification is one of the interesting behavioral methods, and due to the intensive development of eye tracking devices it has become possible to define new methods for eye movement signal processing. Such a method should be supported by an efficient storage used to collect eye movement data and provide it for further analysis. The aim of the research was to evaluate various setups enabling such a storage choice. Various aspects were taken into consideration, such as disk space usage and the time required for loading and saving the whole data set or chosen parts of it.
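A minimal version of the storage comparison described above might time saving and loading of a synthetic gaze data set in two formats and report the resulting file sizes. The formats (plain text vs. pickle) and the record layout are assumptions for illustration only.

```python
# Sketch of the kind of comparison described above: save/load a synthetic
# gaze recording in two storage formats, timing each step and measuring
# disk usage. The formats and record layout are illustrative assumptions.
import os
import pickle
import tempfile
import time

samples = [(i * 0.004, float(i % 100), float(i % 80)) for i in range(10000)]  # (t, x, y)

def benchmark(path, save, load):
    """Return (file size in bytes, save time, load time, loaded data)."""
    t0 = time.perf_counter(); save(path); t_save = time.perf_counter() - t0
    t0 = time.perf_counter(); data = load(path); t_load = time.perf_counter() - t0
    return os.path.getsize(path), t_save, t_load, data

def save_text(path):
    with open(path, "w") as f:
        f.write("\n".join("%f %f %f" % s for s in samples))

def load_text(path):
    with open(path) as f:
        return [tuple(map(float, line.split())) for line in f]

def save_pickle(path):
    with open(path, "wb") as f:
        pickle.dump(samples, f)

def load_pickle(path):
    with open(path, "rb") as f:
        return pickle.load(f)

with tempfile.TemporaryDirectory() as d:
    size_txt, _, _, data_txt = benchmark(os.path.join(d, "gaze.txt"), save_text, load_text)
    size_pkl, _, _, data_pkl = benchmark(os.path.join(d, "gaze.pkl"), save_pickle, load_pickle)
    print("text: %d bytes, pickle: %d bytes" % (size_txt, size_pkl))
```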
Spacecraft optical disk recorder memory buffer control
NASA Technical Reports Server (NTRS)
Hodson, Robert F.
1993-01-01
This paper discusses the research completed under the NASA-ASEE summer faculty fellowship program. The project involves development of an Application Specific Integrated Circuit (ASIC) to be used as a Memory Buffer Controller (MBC) in the Spacecraft Optical Disk Recorder (SODR) system. The SODR system has demanding capacity and data rate specifications requiring specialized electronics to meet processing demands. The system is being designed to support gigabit transfer rates with terabit storage capability; the complete SODR system is designed to exceed the capability of all existing mass storage systems. The ASIC development for SODR consists of developing a 144-pin CMOS device to perform format conversion and data buffering. The final simulations of the MBC were completed during this summer's NASA-ASEE fellowship, along with design preparations for fabrication to be performed by an ASIC manufacturer.
Energy Storage and Dissipation in Random Copolymers during Biaxial Loading
NASA Astrophysics Data System (ADS)
Cho, Hansohl; Boyce, Mary
2012-02-01
Random copolymers composed of hard and soft segments, in glassy and rubbery states respectively at ambient conditions, exhibit phase-separated morphologies that can be tailored to provide hybrid mechanical behaviors of the constituents. Here, phase-separated copolymers whose hard and soft contents form co-continuous structures are explored through experiments and modeling. The mechanics of the highly dissipative yet resilient behavior of an exemplar polyurea is studied under biaxial loading. The hard phase governs the initially stiff response, followed by a highly dissipative viscoplasticity where dissipation arises from viscous relaxation as well as structural breakdown of the network structure, which still provides the energy storage responsible for shape recovery. The soft phase provides additional energy storage that drives resilience in high-strain-rate events. Biaxial experiments reveal the anisotropy and loading-history dependence of energy storage and dissipation, validating the three-dimensional predictive capabilities of the microstructurally based constitutive model. The combination of highly dissipative and resilient behavior provides a versatile material for a myriad of applications ranging from self-healing microcapsules to ballistic protective coatings.
A Future Accelerated Cognitive Distributed Hybrid Testbed for Big Data Science Analytics
NASA Astrophysics Data System (ADS)
Halem, M.; Prathapan, S.; Golpayegani, N.; Huang, Y.; Blattner, T.; Dorband, J. E.
2016-12-01
As increased sensor spectral data volumes from current and future Earth-observing satellites are assimilated into high-resolution climate models, intensive cognitive machine learning technologies are needed to data mine, extract, and intercompare model outputs. It is clear today that the next generation of computers and storage, beyond petascale cluster architectures, will be data centric: they will manage data movement and process data in place. Future cluster nodes have been announced that integrate multiple CPUs with high-speed links to GPUs and MICs on their backplanes, with massive non-volatile RAM and access to active flash RAM disk storage. Active Ethernet-connected key-value store disk drives with 10GbE or higher are now available through the Kinetic Open Storage Alliance. At the UMBC Center for Hybrid Multicore Productivity Research, a future state-of-the-art Accelerated Cognitive Computer System (ACCS) for Big Data science is being integrated into the current IBM iDataPlex computational system 'bluewave'. Based on the next-generation IBM 200 PF Sierra processor, an interim two-node IBM Power S822 testbed is being integrated with dual Power 8 processors with 10 cores, 1 TB RAM, a PCIe link to a K80 GPU, and an FPGA Coherent Accelerator Processor Interface (CAPI) card to 20 TB of flash RAM. This system is to be updated to the Power 8+, with NVLink 1.0 and the Pascal GPU, late in 2016. Moreover, the Seagate 96 TB Kinetic disk system with 24 Ethernet-connected active disks is integrated into the ACCS storage system. A Lightweight Virtual File System developed at NASA GSFC is installed on bluewave. Since remote access to publicly available quantum annealing computers is available at several government labs, the ACCS will offer an in-line Restricted Boltzmann Machine optimization capability on the D-Wave 2X quantum annealing processor over the campus high-speed 100 Gb network to Internet2 for large files.
As an evaluation test of the cognitive functionality of the architecture, the following studies utilizing all the system components will be presented: (i) a near-real-time climate change study generating CO2 fluxes; (ii) a deep-dive capability into an 8000 × 8000 pixel image pyramid display; and (iii) large dense and sparse eigenvalue decompositions.
Inflow Generated X-Ray Corona around Supermassive Black Holes and a Unified Model for X-Ray Emission
NASA Astrophysics Data System (ADS)
Wang, Lile; Cen, Renyue
2016-02-01
Three-dimensional hydrodynamic simulations are performed, which cover the spatial domain from hundreds of Schwarzschild radii to 2 pc around the central supermassive black hole of mass 10^8 M⊙, with detailed radiative cooling processes. The existence of a significant amount of shock-heated, high-temperature (≥ 10^8 K) coronal gas in the inner (≤ 10^4 r_sch) region is generally found. It is shown that the composite bremsstrahlung emission spectrum due to coronal gas of various temperatures is in reasonable agreement with the overall ensemble spectrum of active galactic nuclei (AGNs) and hard X-ray background. Taking into account inverse Compton processes, in the context of the simulation-produced coronal gas, our model can readily account for the wide variety of AGN spectral shapes, which can now be understood physically. The distinguishing feature of our model is that X-ray coronal gas is, for the first time, an integral part of the inflow gas and its observable characteristics are physically coupled to the concomitant inflow gas. One natural prediction of our model is the anti-correlation between accretion disk luminosity and spectral hardness: as the luminosity of SMBH accretion disk decreases, the hard X-ray luminosity increases relative to the UV/optical luminosity.
Ukuku, Dike O; Mukhopadhyay, Sudarsan; Onwulata, Charles
2013-01-01
Previously, we reported inactivation of Escherichia coli populations in corn product (CP) and whey protein product (WPP) extruded at different temperatures. However, the effect of storage temperatures on injured bacterial populations was not addressed. In this study, the effect of storage temperatures on the survival and recovery of thermal death time (TDT) disk- and extrusion-injured E. coli populations in CP and WPP was investigated. CP and WPP inoculated with E. coli at 7.8 log(10) CFU/g were conveyed separately into the extruder with a series 6300 digital type T-35 twin screw volumetric feeder set at a speed of 600 rpm and extruded at 35°C, 55°C, 75°C, and 95°C, or thermally treated in TDT disks submerged in a water bath set at 35°C, 55°C, 75°C, and 95°C for 120 s. Populations of surviving bacteria, including injured cells, in all treated samples were determined immediately and every day for 5 days, and up to 10 days for untreated samples, during storage at 5°C, 10°C, and 23°C. TDT disk treatment at 35°C and 55°C did not cause significant changes in the population of surviving bacteria, including injured populations. Extrusion treatment at 35°C and 55°C led to significant (p<0.05) reduction of E. coli populations in WPP as opposed to CP. The injured populations among the surviving E. coli cells in CP and WPP extruded at all temperatures tested were inactivated during storage. The population of E. coli inactivated in samples extruded at 75°C was significantly (p<0.05) different from that at 55°C during storage. The percent injured population could not be determined in samples extruded at 95°C due to the absence of colony-forming units on the agar plates. The results of this study showed that further inactivation of the injured populations occurred during storage at 5°C for 5 days, suggesting the need for immediate storage of 75°C-extruded CP and WPP at 5°C for at least 24 h to enhance their microbial safety.
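The population reductions discussed above are conventionally expressed as log10 reductions; a minimal helper, using the 7.8 log10 CFU/g inoculum from the abstract and a hypothetical surviving count, might look like this.

```python
# Log10 reduction between an initial and a surviving population.
# The 7.8 log10 CFU/g inoculum comes from the abstract above; the
# surviving count of 5.8 log10 CFU/g is a hypothetical example.
import math

def log_reduction(initial_cfu_per_g, surviving_cfu_per_g):
    """Log10 reduction achieved by a treatment."""
    return math.log10(initial_cfu_per_g / surviving_cfu_per_g)

print(log_reduction(10 ** 7.8, 10 ** 5.8))  # a 2-log10 reduction
```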
Asymmetric 511 keV Positron Annihilation Line Emission from the Inner Galactic Disk
NASA Technical Reports Server (NTRS)
Skinner, Gerry; Weidenspointner, Georg; Jean, Pierre; Knodlseder, Jurgen; Ballmoos, Peter von; Bignami, Giovanni; Diehl, Roland; Strong, Andrew; Cordier, Bertrand; Schanne, Stephane;
2008-01-01
A recently reported asymmetry in the 511 keV gamma-ray line emission from the inner galactic disk is unexpected and mimics an equally unexpected one in the distribution of LMXBs seen at hard X-ray energies. A possible conclusion is that LMXBs are an important source of the positrons whose annihilation gives rise to the line. We will discuss these results, their statistical significance and that of any link between the two. The implication of any association between LMXBs and positrons for the strong annihilation radiation from the galactic bulge will be reviewed.
STS-48 MS Buchli, eating crackers on OV-103's middeck, is captured by ESC
NASA Technical Reports Server (NTRS)
1991-01-01
STS-48 Mission Specialist (MS) James F. Buchli 'catches' goldfish snack crackers as they float in the weightless environment of the earth-orbiting Discovery, Orbiter Vehicle (OV) 103. Buchli's eating activity on the middeck was documented using the Electronic Still Camera (ESC). Crewmembers were testing the ESC as part of Development Test Objective (DTO) 648, Electronic Still Photography. The digital image was stored on a removable hard disk or small optical disk, and could be converted to a format suitable for downlink transmission. The ESC is making its initial appearance on this Space Shuttle mission.
X-RAY VARIABILITY AND HARDNESS OF ESO 243-49 HLX-1: CLEAR EVIDENCE FOR SPECTRAL STATE TRANSITIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Servillat, Mathieu; Farrell, Sean A.; Lin Dacheng
2011-12-10
The ultraluminous X-ray (ULX) source ESO 243-49 HLX-1, which reaches a maximum luminosity of 10^42 erg s^-1 (0.2-10 keV), currently provides the strongest evidence for the existence of intermediate-mass black holes (IMBHs). To study the spectral variability of the source, we conduct an ongoing monitoring campaign with the Swift X-ray Telescope (XRT), which now spans more than two years. We found that HLX-1 showed two fast-rise, exponential-decay type outbursts in the Swift XRT light curve, with increases in the count rate of a factor ~40, separated by 375 ± 13 days. We obtained new XMM-Newton and Chandra dedicated pointings that were triggered at the lowest and highest luminosities, respectively. From spectral fitting, the unabsorbed luminosities ranged from 1.9 × 10^40 to 1.25 × 10^42 erg s^-1. We confirm here the detection of spectral state transitions from HLX-1 reminiscent of Galactic black hole binaries (GBHBs): at high luminosities, the X-ray spectrum showed a thermal state dominated by a disk component with temperatures of at most 0.26 keV, and at low luminosities the spectrum is dominated by a hard power law with a photon index in the range 1.4-2.1, consistent with a hard state. The source was also observed in a state consistent with the steep power-law state, with a photon index of ~3.5. In the thermal state, the luminosity of the disk component appears to scale with the fourth power of the inner disk temperature, which supports the presence of an optically thick, geometrically thin accretion disk. The low fractional variability (rms of 9% ± 9%) in this state also suggests the presence of a dominant disk. The spectral changes and long-term variability of the source cannot be explained by variations of the beaming angle and are not consistent with the source being in a super-Eddington accretion state, as is proposed for most ULX sources with lower luminosities.
All this indicates that HLX-1 is an unusual ULX, as it is similar to GBHBs, which have non-beamed and sub-Eddington emission, but with luminosities three orders of magnitude higher. In this picture, a lower limit on the mass of the black hole of >9000 M⊙ can be derived, and the relatively low disk temperature in the thermal state also suggests the presence of an IMBH of a few 10^3 M⊙.
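The fourth-power scaling of disk luminosity with inner disk temperature noted above can be illustrated with a one-line model; the normalization and sample temperatures are arbitrary values, not fitted results.

```python
# One-line model of the thermal-state scaling noted above: disk luminosity
# proportional to the fourth power of the inner disk temperature. The
# normalization and sample temperatures are arbitrary, not fitted values.

def disk_luminosity(t_in_kev, norm=1.0):
    """Multicolor-disk luminosity under L ∝ T_in^4 (arbitrary units)."""
    return norm * t_in_kev ** 4

# Halving the inner disk temperature cuts the luminosity by a factor of 16.
ratio = disk_luminosity(0.26) / disk_luminosity(0.13)
print(ratio)
```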
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, J. Y.; Liu, B. F.; Qiao, E. L.
We investigate the accretion process in high-luminosity active galactic nuclei (HLAGNs) in the scenario of the disk evaporation model. Based on this model, the thin disk can extend down to the innermost stable circular orbit (ISCO) at accretion rates higher than 0.02 Ṁ_Edd, while the corona is weak since part of the coronal gas is cooled by strong inverse Compton scattering of the disk photons. This implies that the corona cannot produce X-ray radiation as strong as observed in HLAGNs with large Eddington ratios. In addition to the viscous heating, other heating of the corona is necessary to interpret HLAGNs. In this paper, we assume that a part of the accretion energy released in the disk is transported into the corona, heating up the electrons, and is thereby radiated away. For the first time, we compute the corona structure with additional heating, fully taking into account the mass supply to the corona, and find that the corona could indeed survive at higher accretion rates and that its radiation power increases. The spectra composed of bremsstrahlung and Compton radiation are also calculated. Our calculations show that the Compton-dominated spectrum becomes harder with increasing energy fraction (f) liberated in the corona, and the photon index for hard X-rays (2-10 keV) is 2.2 < Γ < 2.7. We discuss possible heating mechanisms for the corona. Combining the energy fraction transported to the corona with the accretion rate by magnetic heating, we find that the hard X-ray spectrum becomes steeper at larger accretion rates and the bolometric correction factor (L_bol/L_2-10 keV) increases with increasing accretion rate for f < 8/35, which is roughly consistent with the observational results.
CHANDRA/ACIS-I STUDY OF THE X-RAY PROPERTIES OF THE NGC 6611 AND M16 STELLAR POPULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guarcello, M. G.; Drake, J. J.; Caramazza, M.
2012-07-10
Mechanisms regulating the origin of X-rays in young stellar objects and the correlation with their evolutionary stage are under debate. Studies of the X-ray properties in young clusters allow us to understand these mechanisms. One ideal target for this analysis is the Eagle Nebula (M16), with its central cluster NGC 6611. At 1750 pc from the Sun, it harbors 93 OB stars, together with a population of low-mass stars from embedded protostars to disk-less Class III objects, with ages ≤3 Myr. We study an archival 78 ks Chandra/ACIS-I observation of NGC 6611 and two new 80 ks observations of the outer region of M16, one centered on Column V and the other on a region of the molecular cloud with ongoing star formation. We detect 1755 point sources with 1183 candidate cluster members (219 disk-bearing and 964 disk-less). We study the global X-ray properties of M16 and compare them with those of the Orion Nebula Cluster. We also compare the level of X-ray emission of Class II and Class III stars and analyze the X-ray spectral properties of OB stars. Our study supports the lower level of X-ray activity for the disk-bearing stars with respect to the disk-less members. The X-ray luminosity function (XLF) of M16 is similar to that of Orion, supporting the universality of the XLF in young clusters. Eighty-five percent of the O stars of NGC 6611 have been detected in X-rays. With only one possible exception, they show soft spectra with no hard components, indicating that mechanisms for the production of hard X-ray emission in O stars are not operating in NGC 6611.
Classical Accreting Pulsars with NICER
NASA Technical Reports Server (NTRS)
Wilson-Hodge, Colleen A.
2014-01-01
Soft excesses are very common: at Lx > 10^38 erg/s they arise from reprocessing by optically thick material at the inner edge of the accretion disk; at Lx < 10^36 erg/s, from photoionized or collisionally heated diffuse gas or thermal emission from the NS surface; at Lx ~ 10^37 erg/s, from either or both types of emission. NICER observations of soft excesses in bright X-ray pulsars, combined with reflection modeling, will constrain the ionization state, metallicity, and dynamics of the inner edge of the magnetically truncated accretion disk. Reflection models of an accretion disk illuminated by a hard power law predict a strong soft excess below 3 keV from the hot X-ray heated disk and, for the weakly ionized case, strong recombination lines. Are we seeing changes in the disk ionization in 4U 1626-67? Thirteen years of weekly monitoring with the RXTE PCA revealed an unexpectedly large population of Be/X-ray binaries in the SMC compared to the Milky Way; plotted luminosities are typical of "normal" outbursts (once per orbit). The SMC provides an excellent opportunity to study a homogeneous population of HMXBs with low interstellar absorption for accretion disk studies. Monitoring with NICER will enable studies of accretion disk physics in X-ray pulsars, and NICER monitoring and TOO observations will also provide measurements of spin frequencies, QPOs, pulsed fluxes, and energy spectra.
47 CFR 14.51 - Specifications as to pleadings, briefs, and other documents; subscription.
Code of Federal Regulations, 2012 CFR
2012-10-01
... other documents; subscription. 14.51 Section 14.51 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL ACCESS TO ADVANCED COMMUNICATIONS SERVICES AND EQUIPMENT BY PEOPLE WITH DISABILITIES Recordkeeping... improper purpose. (d) All proposed orders shall be submitted both as hard copies and on computer disk...
47 CFR 14.51 - Specifications as to pleadings, briefs, and other documents; subscription.
Code of Federal Regulations, 2014 CFR
2014-10-01
... other documents; subscription. 14.51 Section 14.51 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL ACCESS TO ADVANCED COMMUNICATIONS SERVICES AND EQUIPMENT BY PEOPLE WITH DISABILITIES Recordkeeping... improper purpose. (d) All proposed orders shall be submitted both as hard copies and on computer disk...
47 CFR 14.51 - Specifications as to pleadings, briefs, and other documents; subscription.
Code of Federal Regulations, 2013 CFR
2013-10-01
... other documents; subscription. 14.51 Section 14.51 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL ACCESS TO ADVANCED COMMUNICATIONS SERVICES AND EQUIPMENT BY PEOPLE WITH DISABILITIES Recordkeeping... improper purpose. (d) All proposed orders shall be submitted both as hard copies and on computer disk...
Industrial-Strength Streaming Video.
ERIC Educational Resources Information Center
Avgerakis, George; Waring, Becky
1997-01-01
Corporate training, financial services, entertainment, and education are among the top applications for streaming video servers, which send video to the desktop without downloading the whole file to the hard disk, saving time and eliminating copyright questions. Examines streaming video technology, lists ten tips for better net video, and ranks…
14 CFR § 1206.700 - Schedule of fees.
Code of Federal Regulations, 2014 CFR
2014-01-01
.... These charges for copies include the time spent in duplicating the documents. For copies of computer disks, still photographs, blueprints, videotapes, engineering drawings, hard copies of aperture cards... computers which may be required in responding to requests for Agency records maintained in whole or in part...
Iron lines in model disk spectra of Galactic black hole binaries
NASA Astrophysics Data System (ADS)
Różańska, A.; Madej, J.; Konorski, P.; Sądowski, A.
2011-03-01
Context. We present angle-dependent, broad-band intensity spectra from accretion disks around black holes of 10 M⊙. In our computations disks are assumed to be slim, which means that radial advection is taken into account while computing the effective temperature of the disk. Aims: We attempt to reconstruct the continuum and line spectra of X-ray binaries in the soft state, i.e. dominated by the disk component of multitemperature shape. We follow how the iron-line complex depends on external irradiation, the accretion rate, and the black hole spin. Methods: Full radiative transfer is solved including the effects of Compton scattering, free-free, and all important bound-free transitions of the 10 main elements. We assume the LTE equation of state. Moreover, we include the fundamental series of iron lines from helium-like and hydrogen-like ions, and fluorescent Kα and Kβ lines from weakly ionized iron. We consider two cases, a nonrotating black hole and a black hole rotating with almost maximal spin a = 0.98, and obtain spectra for five accretion disks from hard X-rays to the infrared. Results: In nonirradiated disks, resonance lines from He-like and H-like iron appear mostly in absorption. Such disk spectra exhibit limb darkening over the whole energy range. External irradiation causes the iron resonance lines to appear in emission. Furthermore, depending on the disk effective temperature, fluorescent iron Kα and Kβ lines are present in the disk emission spectra. All models with irradiation exhibit limb brightening in their X-ray reflected continua. Conclusions: We show that the disk around a stellar black hole is itself hot enough to produce strong absorption resonance lines of iron. Emission lines can only be observed if heating by external X-rays dominates thermal processes in a hot disk atmosphere. Irradiated disks are usually brighter in X-ray continuum when seen edge-on, and fainter when seen face-on.
NASA Astrophysics Data System (ADS)
Godon, Patrick; Sion, Edward M.; Balman, Şölen; Blair, William P.
2017-09-01
The standard disk is often inadequate to model disk-dominated cataclysmic variables (CVs) and generates a spectrum that is bluer than the observed UV spectra. X-ray observations of these systems reveal an optically thin boundary layer (BL) expected to appear as an inner hole in the disk. Consequently, we truncate the inner disk. However, instead of removing the inner disk, we impose the no-shear boundary condition at the truncation radius, thereby lowering the disk temperature and generating a spectrum that better fits the UV data. With our modified disk, we analyze the archival UV spectra of three novalikes that cannot be fitted with standard disks. For the VY Scl systems MV Lyr and BZ Cam, we fit a hot inflated white dwarf (WD) with a cold modified disk (Ṁ ~ a few 10^-9 M⊙ yr^-1). For V592 Cas, the slightly modified disk (Ṁ ~ 6 × 10^-9 M⊙ yr^-1) completely dominates the UV. These results are consistent with Swift X-ray observations of these systems, revealing BLs merged with ADAF-like flows and/or hot coronae, where the advection of energy is likely launching an outflow and heating the WD, thereby explaining the high WD temperature in VY Scl systems. This is further supported by the fact that the X-ray hardness ratio increases with the shallowness of the UV slope in a small CV sample we examine. Furthermore, for 105 disk-dominated systems, the UV slope of the International Ultraviolet Explorer spectra decreases in the same order as the ratio of X-ray flux to optical/UV flux: from SU UMa's, to U Gem's, Z Cam's, UX UMa's, and VY Scl's.
A case for automated tape in clinical imaging.
Bookman, G; Baune, D
1998-08-01
Electronic archiving of radiology images over many years will require many terabytes of storage, with a need for rapid retrieval of these images. As more large PACS installations are implemented, a data crisis occurs. Storing this large amount of data using the traditional method of optical jukeboxes or online disk alone becomes unworkable. The floor space, number of optical jukeboxes, and off-line shelf storage required to store the images become unmanageable. With recent advances in tape and tape drives, the use of tape for long-term storage of PACS data has become the preferred alternative. A PACS system consisting of a centrally managed system of RAID disk, software, and, at the heart of the system, tape presents a solution that for the first time solves the problems of multi-modality high-end PACS, non-DICOM image, electronic medical record, and ADT data storage. This paper will examine the installation of the University of Utah, Department of Radiology PACS system and the integration of an automated tape archive. The tape archive is also capable of storing data other than traditional PACS data. The implementation of an automated data archive to serve the many other needs of a large hospital will also be discussed, including the integration of a filmless cardiology department and the backup/archival needs of a traditional MIS department. The need for high bandwidth to tape with a large RAID cache will be examined, and how, with an interface to a RIS pre-fetch engine, tape can be a superior solution to optical platters or other archival solutions. The data management software will be discussed in detail. The performance and cost of RAID disk cache and automated tape will be compared with a solution that includes optical.
The advantage of an alternative substrate over Al/NiP disks
NASA Astrophysics Data System (ADS)
Jiaa, Chi L.; Eltoukhy, Atef
1994-02-01
Compact-size disk drives with high storage densities are in high demand due to the popularity of portable computers and workstations. Contact-start-stop (CSS) endurance performance must improve in order to accommodate the higher number of on/off cycles. In this paper, we evaluated the mechanical performance of 65 mm thin-film canasite substrate disks. We compared them with conventional aluminum NiP-plated disks in surface topography, take-off time with changes of skew angle and radius, CSS, drag test and glide height performance, and clamping effect. In addition, a new post-sputter process aimed at improving take-off, glide, and CSS performance was investigated and demonstrated for the canasite disks. The test results indicate that canasite achieved a lower take-off velocity, higher clamping resistance, and better glide height and CSS endurance performance. This study concludes that a new-generation disk drive equipped with canasite substrate disks will consume less motor power due to faster take-off and lighter weight, achieve higher recording density since the head flies lower, better withstand damage from sliding friction during CSS operations, and be less prone to disk distortion from clamping due to its superior mechanical properties.
Rhinoplasty perioperative database using a personal digital assistant.
Kotler, Howard S
2004-01-01
To construct a reliable, accurate, and easy-to-use handheld computer database that facilitates the point-of-care acquisition of perioperative text and image data specific to rhinoplasty. A user-modified database (Pendragon Forms [v.3.2]; Pendragon Software Corporation, Libertyville, Ill) and graphic image program (Tealpaint [v.4.87]; Tealpaint Software, San Rafael, Calif) were used to capture text and image data, respectively, on a Palm OS (v.4.11) handheld with 8 megabytes of memory. The handheld and desktop databases were kept secure using PDASecure (v.2.0) and GoldSecure (v.3.0) (Trust Digital LLC, Fairfax, Va). The handheld data were then uploaded to a desktop database of either FileMaker Pro 5.0 (v.1) (FileMaker Inc, Santa Clara, Calif) or Microsoft Access 2000 (Microsoft Corp, Redmond, Wash). Patient data were collected from 15 patients undergoing rhinoplasty in a private practice outpatient ambulatory setting. Data integrity was assessed after 6 months' disk and hard drive storage. The handheld database was able to facilitate data collection and accurately record, transfer, and reliably maintain perioperative rhinoplasty data. Query capability allowed rapid searches using a multitude of keyword search terms specific to the operative maneuvers performed in rhinoplasty. Handheld computer technology provides a method of reliably recording and storing perioperative rhinoplasty information. The handheld computer facilitates the reliable and accurate storage and query of perioperative data, assisting the retrospective review of one's own results and the enhancement of surgical skills.
Email authentication using symmetric and asymmetric key algorithm encryption
NASA Astrophysics Data System (ADS)
Halim, Mohamad Azhar Abdul; Wen, Chuah Chai; Rahmi, Isredza; Abdullah, Nurul Azma; Rahman, Nurul Hidayah Ab.
2017-10-01
Protecting sensitive or classified data from unauthorized access by hackers and other parties is essential. Data are stored on devices such as USB drives, external hard disks, laptops, and tablets, or in the cloud. Cloud computing has both advantages and drawbacks: storing information elsewhere increases the risk of attack by hackers, while storage on portable devices increases the risk of the device being lost or stolen. Many communication media, including email, are used to send data or information, but these technologies come with severe weaknesses, such as the absence of confidentiality: a message can be altered in transit and delivered to the recipient with no indication that it was modified, and the recipient would not find out unless he or she checks with the sender. Without encryption of data or messages, sniffing tools and software can be used to intercept and read the information, since it travels in plaintext. Therefore, an electronic mail authentication scheme is proposed, namely the Hybrid Encryption System (HES). The security of HES rests on asymmetric and symmetric key algorithms: the asymmetric algorithm is RSA and the symmetric algorithm is the Advanced Encryption Standard (AES). The combination of both algorithms in HES can provide confidentiality and authenticity for electronic documents sent from the sender to the recipient. In a nutshell, HES helps users protect their valuable documents and data from illegal third-party users.
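The hybrid structure the abstract describes, an asymmetric algorithm wrapping the key for a symmetric algorithm, can be illustrated with a toy sketch. The tiny primes and the XOR keystream below are readability stand-ins, not the actual RSA/AES parameters of HES:

```python
# Toy illustration of a hybrid scheme: RSA wraps a symmetric session key,
# and the symmetric cipher encrypts the message body. Textbook RSA with
# tiny primes and a seeded XOR stream stand in for real RSA/AES; this is
# NOT secure, it only shows the key-wrapping structure.
import random

# Textbook RSA key pair from small primes (p=61, q=53).
p, q = 61, 53
n = p * q                            # modulus 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent 2753

def xor_stream(data: bytes, key: int) -> bytes:
    """Symmetric stand-in for AES: XOR with a keystream seeded by the key."""
    rng = random.Random(key)
    return bytes(b ^ rng.randrange(256) for b in data)

# Sender: pick a random session key, wrap it with the recipient's RSA key.
session_key = random.randrange(2, n)
wrapped_key = pow(session_key, e, n)          # RSA-encrypt the session key
ciphertext = xor_stream(b"confidential report", session_key)

# Recipient: unwrap the session key with the private exponent, then decrypt.
recovered_key = pow(wrapped_key, d, n)
plaintext = xor_stream(ciphertext, recovered_key)
assert plaintext == b"confidential report"
```

A real deployment would replace the textbook RSA with padded RSA (e.g. OAEP) and the XOR stream with AES, but the key-wrapping flow is the same.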
Discovery of Photon Index Saturation in the Black Hole Binary GRS 1915+105
NASA Technical Reports Server (NTRS)
Titarchuk, Lev; Seifina, Elena
2009-01-01
We present a study of the correlations between spectral and timing properties and mass accretion rate observed in X-rays from the Galactic black hole (BH) binary GRS 1915+105 during the transition between hard and soft states. We analyze all transition episodes from this source observed with the Rossi X-ray Timing Explorer (RXTE), coordinated with Ryle Radio Telescope (RT) observations. We show that the broad-band energy spectra of GRS 1915+105 during all these spectral states can be adequately presented by two Bulk Motion Comptonization (BMC) components: a hard component (BMC1, photon index Gamma(sub 1) = 1.7 -- 3.0) with a turnover at high energies and a soft thermal component (BMC2, Gamma(sub 2) = 2.7 -- 4.2) with characteristic color temperature < or = 1 keV, plus a red-skewed iron line (LAOR) component. We also present observed correlations between the index and the normalization of the disk "seed" component. The use of the "seed" disk normalization, which is presumably proportional to the mass accretion rate in the disk, is crucial to establishing the index saturation effect during the transition to the soft state. We discovered photon index saturation of the soft and hard spectral components at values of approximately 4.2 and 3, respectively. We present a physical model which explains the index-seed photon normalization correlations. We argue that the index saturation effect of the hard component (BMC1) is due to soft photon Comptonization in the converging inflow close to the BH, and that of the soft component is due to matter accumulation in the transition layer when the mass accretion rate increases. Furthermore, we demonstrate a strong correlation between the equivalent width of the iron line and the radio flux in GRS 1915+105. In addition to our spectral model components, we also find a strong "blackbody-like" bump whose color temperature is about 4.5 keV in eight observations of the intermediate and soft states.
We discuss a possible origin of this "blackbody-like" emission.
47 CFR 1.734 - Specifications as to pleadings, briefs, and other documents; subscription.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 1 2013-10-01 2013-10-01 false Specifications as to pleadings, briefs, and other documents; subscription. 1.734 Section 1.734 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... submitted both as hard copies and on computer disk formatted to be compatible with the Commission's computer...
47 CFR 1.734 - Specifications as to pleadings, briefs, and other documents; subscription.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 1 2012-10-01 2012-10-01 false Specifications as to pleadings, briefs, and other documents; subscription. 1.734 Section 1.734 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... submitted both as hard copies and on computer disk formatted to be compatible with the Commission's computer...
47 CFR 1.734 - Specifications as to pleadings, briefs, and other documents; subscription.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 1 2014-10-01 2014-10-01 false Specifications as to pleadings, briefs, and other documents; subscription. 1.734 Section 1.734 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... submitted both as hard copies and on computer disk formatted to be compatible with the Commission's computer...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-21
... the HDA incorporate semiconductor, magnetic, mechanical, and manufacturing process design into an..., mechanical surface design and manufacturing process design. It takes approximately [xxx] hours to design... brand names ``Barracuda'' and ``Desktop''. HDDs are designed in the United States and assembled either...
Multisensory Public Access Catalogs on CD-ROM.
ERIC Educational Resources Information Center
Harrison, Nancy; Murphy, Brower
1987-01-01
BiblioFile Intelligent Catalog is a CD-ROM-based public access catalog system which incorporates graphics and sound to provide a multisensory interface and artificial intelligence techniques to increase search precision. The system can be updated frequently and inexpensively by linking hard disk drives to CD-ROM optical drives. (MES)
Evaluation of a Biometric Keystroke Typing Dynamics Computer Security System
1992-03-01
intrusions, numerous computer systems have been threatened or destroyed by virus attacks. A recent example was the virus called "Michelangelo," which...threatened to destroy all data on infected hard disks on the birthday of the artist Michelangelo, 6 March, in 1992. During the 1991 Persian Gulf War
Converged photonic data storage and switch platform for exascale disaggregated data centers
NASA Astrophysics Data System (ADS)
Pitwon, R.; Wang, K.; Worrall, A.
2017-02-01
We report on a converged optically enabled Ethernet storage, switch and compute platform, which could support future disaggregated data center architectures. The platform includes optically enabled Ethernet switch controllers, an advanced electro-optical midplane and optically interchangeable generic end node devices. We demonstrate system level performance using optically enabled Ethernet disk drives and micro-servers across optical links of varied lengths.
ACCRETION FLOW DYNAMICS OF MAXI J1836-194 DURING ITS 2011 OUTBURST FROM TCAF SOLUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jana, Arghajit; Debnath, Dipak; Chakrabarti, Sandip K.
2016-03-20
The Galactic transient X-ray binary MAXI J1836-194 was discovered on 2011 August 29. Here we make a detailed study of the spectral and timing properties of its 2011 outburst using archival data from the RXTE Proportional Counter Array instrument. The evolution of the accretion flow dynamics of the source during the outburst is studied through spectral analysis with Chakrabarti–Titarchuk's two-component advective flow (TCAF) solution as a local table model in XSPEC. We also fitted spectra with combined disk blackbody and power-law models and compared the results with the TCAF model fits. The source is found to be in the hard and hard-intermediate spectral states only during the entire phase of this outburst. No soft or soft-intermediate spectral states are observed. This could be because this object belongs to a special class of sources (e.g., MAXI J1659-152, Swift J1753.5-0127, etc.) that have very short orbital periods and whose companion is profusely mass-losing, or whose accretion disk is immersed inside an excretion disk. In these cases, flows in the accretion disk are primarily dominated by low-viscosity sub-Keplerian flow and the Keplerian rate is not high enough to initiate softer states. Low-frequency quasi-periodic oscillations (QPOs) are observed sporadically, although, as in normal outbursts of transient black holes, monotonic evolutions of the QPO frequency during both the rising and declining phases are observed. From the TCAF fits, we find the mass of the black hole in the range of 7.5–11 M{sub ⊙}, and from time differences between the peaks of the Keplerian and sub-Keplerian accretion rates we obtain a viscous timescale for this particular outburst of ∼10 days.
Formation and Destruction of Jets in X-ray Binaries
NASA Technical Reports Server (NTRS)
Kylafis, N. D.; Contopoulos, I.; Kazanas, D.; Christodoulou, D. M.
2011-01-01
Context. Neutron-star and black-hole X-ray binaries (XRBs) exhibit radio jets, whose properties depend on the X-ray spectral state and history of the source. In particular, black-hole XRBs emit compact, steady radio jets when they are in the so-called hard state. These jets become eruptive as the sources move toward the soft state, disappear in the soft state, and then re-appear when the sources return to the hard state. The jets from neutron-star X-ray binaries are typically weaker radio emitters than the black-hole ones at the same X-ray luminosity, and in some cases radio emission is detected in the soft state. Aims. Significant phenomenology has been developed to describe the spectral states of neutron-star and black-hole XRBs, and there is general agreement about the type of accretion disk around the compact object in the various spectral states. We investigate whether the phenomenology describing the X-ray emission on one hand and the jet appearance and disappearance on the other can be put together in a consistent physical picture. Methods. We consider the so-called Poynting-Robertson cosmic battery (PRCB), which has been shown to explain in a natural way the formation of magnetic fields in the disks of AGNs and the ejection of jets. We investigate whether the PRCB can also explain the formation, destruction, and variability of jets in XRBs. Results. We find excellent agreement between the conditions under which the PRCB is efficient (i.e., the type of accretion disk) and the emission or destruction of the radio jet. Conclusions. The disk-jet connection in XRBs can be explained in a natural way using the PRCB.
NASA Technical Reports Server (NTRS)
Sambruna, Rita; Gliozzi, Mario; Tavecchio, F.; Maraschi, L.; Foschini, Luigi
2007-01-01
The connection between the accretion process that powers AGN and the formation of jets is still poorly understood. Here we tackle this issue using new, deep Chandra and XMM-Newton observations of the cores of three powerful radio-loud quasars: 1136-135, 1150+497 (Chandra), and 0723+679 (XMM-Newton), in the redshift range z=0.3-0.8. These sources are known from our previous Chandra snapshot survey to have kpc-scale X-ray jets. In 1136-135 and 1150+497, evidence is found for the presence of diffuse thermal X-ray emission around the cores, on scales of 40-50 kpc and with luminosity L(sub 0.3-2 keV) approx. 10(sup 43) erg per second, suggesting thermal emission from the host galaxy or a galaxy group. The X-ray continua of the cores in the three sources are described by an upward-curved (concave) broken power law, with photon indices GAMMA(sub soft) approx. 1.8 - 2.1 and GAMMA(sub hard) approx. 1.7 below and above approx. 2 keV, respectively. There is evidence for an unresolved Fe K alpha line with EW approx. 70 eV in the three quasars. The Spectral Energy Distributions of the sources can be well described by a mix of jet and disk emission, with the jet dominating the radio and hard X-rays (via synchrotron and external Compton) and the disk dominating the optical/UV through soft X-rays. The ratio of jet-to-disk power is approx. 1, consistent with values derived for a number of gamma-ray-emitting blazars. This indicates that near equality of accretion and jet power may be common in powerful radio-loud AGN.
The optical, ultraviolet, and X-ray structure of the quasar HE 0435–1223
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blackburne, Jeffrey A.; Kochanek, Christopher S.; Chen, Bin
2014-07-10
Microlensing has proved an effective probe of the structure of the innermost regions of quasars and an important test of accretion disk models. We present light curves of the lensed quasar HE 0435–1223 in the R band and in the ultraviolet (UV), and consider them together with X-ray light curves in two energy bands that are presented in a companion paper. Using a Bayesian Monte Carlo method, we constrain the size of the accretion disk in the rest-frame near- and far-UV, and constrain for the first time the size of the X-ray emission regions in two X-ray energy bands. The R-band scale size of the accretion disk is about 10{sup 15.23} cm (∼23r{sub g}), slightly smaller than previous estimates, but larger than would be predicted from the quasar flux. In the UV, the source size is weakly constrained, with a strong prior dependence. The UV to R-band size ratio is consistent with the thin disk model prediction, with large error bars. In soft and hard X-rays, the source size is smaller than ∼10{sup 14.8} cm (∼10r{sub g}) at 95% confidence. We do not find evidence of structure in the X-ray emission region, as the most likely value for the ratio of the hard X-ray size to the soft X-ray size is unity. Finally, we find that the most likely value for the mean mass of stars in the lens galaxy is ∼0.3 M{sub ☉}, consistent with other studies.
Three-body correlations and conditional forces in suspensions of active hard disks
NASA Astrophysics Data System (ADS)
Härtel, Andreas; Richard, David; Speck, Thomas
2018-01-01
Self-propelled Brownian particles show rich out-of-equilibrium physics, for instance, the motility-induced phase separation (MIPS). While decades of studying the structure of liquids have established a deep understanding of passive systems, not much is known about correlations in active suspensions. In this work we derive an approximate analytic theory for three-body correlations and forces in systems of active Brownian disks starting from the many-body Smoluchowski equation. We use our theory to predict the conditional forces that act on a tagged particle and their dependence on the propulsion speed of self-propelled disks. We identify preferred directions of these forces in relation to the direction of propulsion and the positions of the surrounding particles. We further relate our theory to the effective swimming speed of the active disks, which is relevant for the physics of MIPS. To test and validate our theory, we additionally run particle-resolved computer simulations, for which we explicitly calculate the three-body forces. In this context, we discuss the modeling of active Brownian swimmers with nearly hard interaction potentials. We find very good agreement between our simulations and numerical solutions of our theory, especially for the nonequilibrium pair-distribution function. For our analytical results, we carefully discuss their range of validity in the context of the different levels of approximation we applied. This discussion allows us to study the individual contribution of particles to three-body forces and to the emerging structure. Thus, our work sheds light on the collective behavior, provides the basis for further studies of correlations in active suspensions, and makes a step towards an emerging liquid state theory.
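One common way to realize the "nearly hard" interaction potentials mentioned in the abstract is a steep, purely repulsive WCA potential; the conditional force on a tagged particle is then the vector sum of pair forces from its fixed neighbors. The following is a minimal sketch with hypothetical parameters (EPS, SIGMA), not the paper's actual simulation code:

```python
# Sketch of a "nearly hard" disk interaction: a steep WCA (truncated,
# purely repulsive Lennard-Jones) potential. The conditional force on a
# tagged disk is the vector sum of pair forces from its neighbors.
import math

EPS, SIGMA = 100.0, 1.0              # large EPS makes the repulsion steep
RCUT = 2 ** (1 / 6) * SIGMA          # WCA cutoff: force vanishes beyond it

def wca_force(dx, dy):
    """Pair force on the tagged disk from a neighbor at displacement (dx, dy)."""
    r2 = dx * dx + dy * dy
    if r2 >= RCUT * RCUT:
        return (0.0, 0.0)
    inv6 = (SIGMA * SIGMA / r2) ** 3
    # force magnitude over r: 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r^2
    fr_over_r = 24 * EPS * (2 * inv6 * inv6 - inv6) / r2
    return (fr_over_r * dx, fr_over_r * dy)

def conditional_force(tagged, neighbors):
    """Total force on `tagged` conditioned on fixed neighbor positions."""
    fx = fy = 0.0
    for (nx, ny) in neighbors:
        dfx, dfy = wca_force(tagged[0] - nx, tagged[1] - ny)
        fx += dfx
        fy += dfy
    return (fx, fy)

# Two neighbors placed symmetrically behind the tagged disk push it along +x;
# the transverse components cancel by symmetry.
f = conditional_force((0.0, 0.0), [(-1.0, 0.4), (-1.0, -0.4)])
```

Averaging such conditional forces over configurations from a simulation is one route to the three-body force statistics the paper analyzes.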
Standards on the permanence of recording materials
NASA Astrophysics Data System (ADS)
Adelstein, Peter Z.
1996-02-01
The permanence of recording materials is dependent upon many factors, and these differ for photographic materials, magnetic tape and optical disks. Photographic permanence is affected by the (1) stability of the material, (2) the photographic processing and (3) the storage conditions. American National Standards on the material and the processing have been published for different types of film and standard test methods have been established for color film. The third feature of photographic permanence is the storage requirements and these have been established for photographic film, prints and plates. Standardization on the permanence of electronic recording materials is more complicated. As with photographic materials, stability is dependent upon (1) the material itself and (2) the storage environment. In addition, retention of the necessary (3) hardware and (4) software is also a prerequisite. American National Standards activity in these areas has been underway for the past six years. A test method for the material which determines the life expectancy of CD-ROMs has been standardized. The problems of determining the expected life of magnetic tape have been more formidable but the critical physical properties have been determined. A specification for the storage environment of magnetic tape has been finalized and one on the storage of optical disks is being worked on. Critical but unsolved problems are the obsolescence of both the hardware and the software necessary to read digital images.
NASA Technical Reports Server (NTRS)
Buckley, D. H.
1974-01-01
The lubricating properties of some benzyl and benzene structures were determined using 304 stainless steel surfaces strain-hardened to various hardnesses. Friction coefficients and wear track widths were measured with a Bowden-Leben type friction apparatus using a pin-on-disk specimen configuration. The results indicate that benzyl monosulfide, dibenzyl disulfide, and benzyl alcohol produced the lowest friction coefficients for 304 stainless steel, while benzyl ether provided the least surface protection and gave the highest friction. Strain hardening of the 304 stainless steel prior to sliding reduced friction in dry sliding. With benzyl monosulfide, dibenzyl disulfide, and benzyl alcohol, changes in 304 stainless steel hardness had no effect on friction behavior.
Hard Spheres on the Primitive Surface
NASA Astrophysics Data System (ADS)
Dotera, Tomonari; Takahashi, Yusuke
2015-03-01
Recently, hierarchical structures associated with the gyroid in several soft-matter systems have been reported. One of the fundamental questions is the regular arrangement, or tiling, of particles on minimal surfaces. We have found that certain numbers of hard spheres per unit cell are entropically self-organized on the gyroid surface. Here, new results for the primitive surface are presented: 56, 64, or 72 spheres per unit cell are entropically self-organized on the primitive minimal surface. Numerical evidence for the fluid-solid transition as a function of hard sphere radius is obtained in terms of the acceptance ratio of Monte Carlo moves and order parameters. These arrangements, which are extensions of the hexagonal arrangement on a flat surface, can be viewed as hyperbolic tilings on the Poincaré disk with a negative Gaussian curvature.
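The acceptance-ratio diagnostic mentioned above can be illustrated in the simplest setting: Metropolis Monte Carlo for hard disks in a flat periodic box, where a trial move is rejected whenever it would create an overlap. This is only a flat-space sketch with hypothetical parameters, not the gyroid/primitive-surface computation of the paper:

```python
# Flat-2D sketch of the Monte Carlo acceptance-ratio diagnostic: hard disks
# in a periodic box, trial displacements rejected on overlap. As the disk
# radius grows, free area shrinks and the acceptance ratio falls.
import math
import random

def acceptance_ratio(radius, n=16, box=1.0, steps=2000, seed=1):
    rng = random.Random(seed)
    # Start on a square lattice so the initial state is overlap-free.
    side = int(math.ceil(math.sqrt(n)))
    pts = [((i % side + 0.5) / side * box, (i // side + 0.5) / side * box)
           for i in range(n)]

    def overlaps(k, x, y):
        for j, (xj, yj) in enumerate(pts):
            if j == k:
                continue
            dx = (x - xj + box / 2) % box - box / 2   # minimum image
            dy = (y - yj + box / 2) % box - box / 2
            if dx * dx + dy * dy < (2 * radius) ** 2:
                return True
        return False

    accepted = 0
    for _ in range(steps):
        k = rng.randrange(n)
        x = (pts[k][0] + rng.uniform(-0.05, 0.05)) % box
        y = (pts[k][1] + rng.uniform(-0.05, 0.05)) % box
        if not overlaps(k, x, y):      # hard-core Metropolis rule
            pts[k] = (x, y)
            accepted += 1
    return accepted / steps

dilute = acceptance_ratio(0.02)   # low packing fraction: most moves accepted
dense = acceptance_ratio(0.12)    # near-close-packed: most moves rejected
```

Tracking how this ratio drops with increasing radius (together with order parameters) is the kind of signature used to locate the fluid-solid transition.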
Stored grain pack factors for wheat: comparison of three methods to field measurements
USDA-ARS?s Scientific Manuscript database
Storing grain in bulk storage units results in grain packing from overbearing pressure, which increases grain bulk density and storage-unit capacity. This study compared pack factors of hard red winter (HRW) wheat in vertical storage bins using different methods: the existing packing model (WPACKING...
Hard-Soft Composite Carbon as a Long-Cycling and High-Rate Anode for Potassium-Ion Batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jian, Zelang; Hwang, Sooyeon; Li, Zhifei; ...
2017-05-05
There exist tremendous needs for sustainable storage solutions for intermittent renewable energy sources such as solar and wind energy; thus, systems based on Earth-abundant elements deserve much attention. Potassium-ion batteries represent a promising candidate because of the abundance of potassium resources. Among anode choices, graphite exhibits encouraging potassium-ion storage properties; however, it suffers from limited rate capability and poor cycling stability. Here, nongraphitic carbons as K-ion anodes with sodium carboxymethyl cellulose as the binder are systematically investigated. Compared to hard carbon and soft carbon, a hard-soft composite carbon with 20 wt% soft carbon distributed in the matrix phase of hard carbon microspheres exhibits the best overall performance: high capacity, high rate capability, and very stable long-term cycling. In contrast, pure hard carbon suffers from limited rate capability, while the capacity of pure soft carbon fades more rapidly.
Statistical theory of correlations in random packings of hard particles.
Jin, Yuliang; Puckett, James G; Makse, Hernán A
2014-05-01
A random packing of hard particles represents a fundamental model for granular matter. Despite its importance, analytical modeling of random packings remains difficult due to the existence of strong correlations, which preclude the development of a simple theory. Here, we take inspiration from liquid theories for the n-particle angular correlation function to develop a formalism of random packings of hard particles from the bottom up. A progressive expansion into a shell of particles converges in the large layer limit under a Kirkwood-like approximation of higher-order correlations. We apply the formalism to hard disks and predict the density of two-dimensional random close packing (RCP), φ_rcp = 0.85 ± 0.01, and random loose packing (RLP), φ_rlp = 0.67 ± 0.01. Our theory also predicts a phase diagram and angular correlation functions that are in good agreement with experimental and numerical data.
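The overlap-exclusion constraint underlying the hard-disk packings above can be illustrated with a toy procedure. The sketch below uses random sequential addition (RSA), which is not the paper's formalism: RSA disks never rearrange, so it saturates near a packing fraction of about 0.55 for disks, well below the RCP value of 0.85 predicted in the abstract. All parameters are illustrative.

```python
import math
import random

def rsa_packing_fraction(radius, box=1.0, max_failures=1000, seed=2):
    """Random sequential addition of hard disks in a periodic box.

    Disks are inserted at uniformly random positions and kept only if
    they do not overlap any previously placed disk; insertion stops
    after max_failures consecutive rejections.  Returns the resulting
    packing fraction (area covered / box area)."""
    rng = random.Random(seed)
    placed = []
    failures = 0
    while failures < max_failures:
        x, y = rng.random() * box, rng.random() * box
        ok = True
        for px, py in placed:
            dx = x - px
            dx -= box * round(dx / box)   # minimum-image convention
            dy = y - py
            dy -= box * round(dy / box)
            if dx * dx + dy * dy < (2.0 * radius) ** 2:
                ok = False
                break
        if ok:
            placed.append((x, y))
            failures = 0
        else:
            failures += 1
    return len(placed) * math.pi * radius ** 2 / box ** 2
```

The gap between the RSA result (~0.55) and RCP (~0.85) is exactly the correlation effect the paper's shell expansion is built to capture.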
STS-48 MS Brown on OV-103's aft flight deck poses for ESC photo
NASA Technical Reports Server (NTRS)
1991-01-01
STS-48 Mission Specialist (MS) Mark N. Brown looks away from the portable laptop computer screen to pose for an Electronic Still Camera (ESC) photo on the aft flight deck of the earth-orbiting Discovery, Orbiter Vehicle (OV) 103. Brown was working at the payload station before the interruption. Crewmembers were testing the ESC as part of Development Test Objective (DTO) 648, Electronic Still Photography. The digital image was stored on a removable hard disk or small optical disk, and could be converted to a format suitable for downlink transmission. The ESC is making its initial appearance on this Space Shuttle mission.
STS-48 Commander Creighton on OV-103's aft flight deck poses for ESC photo
NASA Technical Reports Server (NTRS)
1991-01-01
STS-48 Commander John O. Creighton, positioned under overhead window W8, interrupts an out-the-window observation to display a pleasant countenance for an electronic still camera (ESC) photo on the aft flight deck of the earth-orbiting Discovery, Orbiter Vehicle (OV) 103. Crewmembers were testing the ESC as part of Development Test Objective (DTO) 648, Electronic Still Photography. The digital image was stored on a removable hard disk or small optical disk, and could be converted to a format suitable for downlink transmission. The ESC is making its initial appearance on this Space Shuttle mission.
The Gaseous Disks of Young Stellar Objects
NASA Technical Reports Server (NTRS)
Glassgold, A. E.
2006-01-01
Disks represent a crucial stage in the formation of stars and planets. They are novel astrophysical systems with attributes intermediate between the interstellar medium and stars. Their physical properties are inhomogeneous and are affected by hard stellar radiation and by dynamical evolution. Observing disk structure is difficult because of the small sizes, ranging from as little as 0.05 AU at the inner edge to 100-1000 AU at large radial distances. Nonetheless, substantial progress has been made by observing the radiation emitted by the dust from near infrared to mm wavelengths, i.e., the spectral energy distribution of an unresolved disk. Many fewer results are available for the gas, which is the main mass component of disks over much of their lifetime. The inner disk gas of young stellar objects (henceforth YSOs) has been studied using the near infrared rovibrational transitions of CO and a few other molecules, while the outer regions have been explored with the mm and sub-mm lines of CO and other species. Further progress can be expected in understanding the physical properties of disks from observations with sub-mm arrays like SMA, CARMA and ALMA, with mid infrared measurements using Spitzer, and near infrared spectroscopy with large ground-based telescopes. Intense efforts are also being made to model the observations using complex thermal-chemical models. After a brief review of the existing observations and modeling results, some of the weaknesses of the models will be discussed, including the absence of good laboratory and theoretical calculations for essential microscopic processes.
In-Storage Embedded Accelerator for Sparse Pattern Processing
2016-08-13
Jun, Sang-Woo; Nguyen, Huy T.; Gadepally, Vijay; Arvind (MIT Lincoln Laboratory)
Fragmentary excerpts: "...performance of RAM disk. Since this configuration offloads most of processing onto the FPGA, the host software consists of only two threads..." (Fig. 13: Documents Processed vs. CPU Threads.) "...BlueDBM efficiency comes from our in-store processing paradigm that uses the FPGA..."
Medical image digital archive: a comparison of storage technologies
NASA Astrophysics Data System (ADS)
Chunn, Timothy; Hutchings, Matt
1998-07-01
A cost effective, high capacity digital archive system is one of the remaining key factors that will enable a radiology department to eliminate film as an archive medium. The ever increasing amount of digital image data is creating the need for huge archive systems that can reliably store and retrieve millions of images and hold from a few terabytes of data to possibly hundreds of terabytes. Selecting the right archive solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, conformance to open standards, archive availability and reliability, security, cost, achievable benefits and cost savings, investment protection, and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive media today. New technologies will be discussed, such as DVD and high performance tape. Price and performance comparisons will be made at different archive capacities, plus the effect of file size on random and pre-fetch retrieval time will be analyzed. The concept of automated migration of images from high performance, RAID disk storage devices to high capacity, Nearline storage devices will be introduced as a viable way to minimize overall storage costs for an archive.
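The capacity sizing that drives the comparisons above comes down to simple arithmetic. The sketch below uses entirely hypothetical workload numbers (studies per day, images per study, megabytes per image, retention period), chosen for illustration only and not taken from the paper:

```python
# Back-of-envelope sizing for a medical image archive.
# All workload numbers are hypothetical, for illustration only.
studies_per_day = 400          # assumed radiology workload
images_per_study = 30          # assumed images per study
mb_per_image = 0.5             # assumed compressed image size (MB)
retention_years = 7            # assumed retention requirement

# Daily ingest in GB, then total retained volume in TB.
gb_per_day = studies_per_day * images_per_study * mb_per_image / 1024
tb_total = gb_per_day * 365 * retention_years / 1024

print(f"{gb_per_day:.1f} GB/day, {tb_total:.1f} TB over {retention_years} years")
```

Even these modest assumed inputs land in the multi-terabyte range the paper cites, which is why the migration from RAID disk to high-capacity nearline media matters for overall cost.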
NASA Astrophysics Data System (ADS)
Balasubramanian, Balamurugan; Mukherjee, Pinaki; Skomski, Ralph; Manchanda, Priyanka; Das, Bhaskar; Sellmyer, David J.
2014-09-01
Nanoscience has been one of the outstanding driving forces in technology recently, arguably more so in magnetism than in any other branch of science and technology. Due to nanoscale bit size, a single computer hard disk is now able to store the text of 3,000,000 average-size books, and today's high-performance permanent magnets--found in hybrid cars, wind turbines, and disk drives--are nanostructured to a large degree. The nanostructures ideally are designed from Co- and Fe-rich building blocks without critical rare-earth elements, and often are required to exhibit high coercivity and magnetization at elevated temperatures of typically up to 180 °C for many important permanent-magnet applications. Here we achieve this goal in exchange-coupled hard-soft composite films by effective nanostructuring of high-anisotropy HfCo7 nanoparticles with a high-magnetization Fe65Co35 phase. An analysis based on a model structure shows that the soft-phase addition improves the performance of the hard-magnetic material by mitigating Brown's paradox in magnetism, a substantial reduction of coercivity from the anisotropy field. The nanostructures exhibit a high room-temperature energy product of about 20.3 MGOe (161.5 kJ/m3), which is a record for a rare-earth- or Pt-free magnetic material, and retain values as high as 17.1 MGOe (136.1 kJ/m3) at 180 °C.
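The energy products above are quoted in both MGOe and kJ/m3; the two units are related by the exact factor 1 MGOe = 10^5/(4π) J/m3 ≈ 7.958 kJ/m3. A minimal sketch of the conversion, reproducing the quoted values:

```python
import math

# 1 MGOe = 10^5 / (4*pi) J/m^3  ~= 7.9577 kJ/m^3
KJ_PER_M3_PER_MGOE = 1e5 / (4 * math.pi) / 1e3

def mgoe_to_kj_per_m3(bh_max_mgoe):
    """Convert a (BH)max energy product from MGOe to kJ/m^3."""
    return bh_max_mgoe * KJ_PER_M3_PER_MGOE

# The abstract's pairs check out:
#   20.3 MGOe -> ~161.5 kJ/m^3 (room temperature)
#   17.1 MGOe -> ~136.1 kJ/m^3 (at 180 degrees C)
```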
Magnetic printing characteristics using master disk with perpendicular magnetic anisotropy
NASA Astrophysics Data System (ADS)
Fujiwara, Naoto; Nishida, Yoichi; Ishioka, Toshihide; Sugita, Ryuji; Yasunaga, Tadashi
With the increase in recording density and capacity of hard-disk drives (HDDs), a high-speed, high-precision, and low-cost servo-writing method has become an issue in the HDD industry. Magnetic printing was proposed as the ultimate solution for this issue [1-3]. There are two types of magnetic printing methods: 'Bit Printing (BP)' and 'Edge Printing (EP)'. BP is conducted by applying an external field perpendicular to the plane of both the master disk (Master) and the perpendicular magnetic recording (PMR) media (Slave). EP, on the other hand, is conducted by applying an external field along the down-track direction of both master and slave. In BP, for bit lengths shorter than 100 nm, the SNR of the perpendicularly anisotropic master was higher than that of the isotropic master; in EP, SNR was demonstrated for bit lengths shorter than 50 nm.
Pulsed Thermal Emission from the Accreting Pulsar XMMU J054134.7-682550
NASA Astrophysics Data System (ADS)
Manousakis, Antonis; Walter, Roland; Audard, Marc; Lanz, Thierry
2009-05-01
XMMU J054134.7-682550, located in the LMC, featured a type II outburst in August 2007. We analyzed XMM-Newton (EPIC-MOS) and RXTE (PCA) data in order to derive the spectral and temporal characteristics of the system throughout the outburst. Spectral variability, spin-period evolution, and energy-dependent pulse shape are discussed. The outburst spectrum (LX ~ 3×10^38 erg/s ~ LEdd) can be modeled using a cutoff power law, a soft X-ray blackbody, disk emission, and a cyclotron absorption line. The blackbody component shows a sinusoidal behavior, expected from hard X-ray reprocessing on the inner edge of the accretion disk. The thickness of the inner accretion disk (width of ~75 km) can be constrained. The spin-up of the pulsar during the outburst is the signature of a huge accretion rate. Simbol-X will provide capabilities similar to XMM-Newton and RXTE combined for such bright events.
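The comparison LX ~ LEdd above refers to the standard Eddington luminosity for hydrogen accretion, L_Edd = 4πGMm_p·c/σ_T. A minimal sketch of the estimate (the 1.4 solar-mass neutron-star value is an assumed, typical mass, not a figure from the abstract):

```python
import math

# CGS constants
G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33      # solar mass, g
M_P = 1.673e-24       # proton mass, g
C = 2.998e10          # speed of light, cm/s
SIGMA_T = 6.652e-25   # Thomson cross section, cm^2

def eddington_luminosity(mass_msun):
    """Eddington luminosity in erg/s for hydrogen accretion,
    L_Edd = 4*pi*G*M*m_p*c / sigma_T."""
    return 4 * math.pi * G * mass_msun * M_SUN * M_P * C / SIGMA_T

# For an assumed 1.4 M_sun neutron star this gives ~1.8e38 erg/s,
# the same order as the outburst luminosity LX ~ 3e38 erg/s above.
```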
San Nicolas Island surface radiation-meteorology data
NASA Technical Reports Server (NTRS)
Johnson-Pasqua, Christopher M.; Cox, Stephen K.
1990-01-01
A summary of the surface data collected by Colorado State University (CSU) on San Nicolas Island during the First ISCCP Regional Experiment (FIRE) from 30 June (Julian Day 181) through 19 July (Julian Day 200) is given. The data are available in two formats: hard copy graphs, and processed data on floppy disk.