Surveillance system and method having parameter estimation and operating mode partitioning
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2005-01-01
A system and method for monitoring an apparatus or process asset including creating a process model comprised of a plurality of process submodels each correlative to at least one training data subset partitioned from an unpartitioned training data set and each having an operating mode associated thereto; acquiring a set of observed signal data values from the asset; determining an operating mode of the asset for the set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a set of estimated signal data values from the selected process submodel for the determined operating mode; and determining asset status as a function of the calculated set of estimated signal data values for providing asset surveillance and/or control.
Surveillance system and method having parameter estimation and operating mode partitioning
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2003-01-01
A system and method for monitoring an apparatus or process asset including partitioning an unpartitioned training data set into a plurality of training data subsets each having an operating mode associated thereto; creating a process model comprised of a plurality of process submodels each trained as a function of at least one of the training data subsets; acquiring a current set of observed signal data values from the asset; determining an operating mode of the asset for the current set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a current set of estimated signal data values from the selected process submodel for the determined operating mode; and outputting the calculated current set of estimated signal data values for providing asset surveillance and/or control.
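The two patent abstracts above describe the same loop: partition training data by operating mode, train one submodel per mode, then at run time select the submodel for the current mode, compute estimated signal values, and judge asset status from the residuals. A minimal sketch of that loop, with trivial mean-based submodels standing in for the patent's process models (all names here are illustrative, not from the patent):

```python
# Hypothetical sketch of mode-partitioned surveillance: one submodel per
# operating mode, selected at run time; status is judged from residuals.
import statistics

def train_submodels(training_set, mode_of):
    """Partition the training set by operating mode and fit one
    (trivial, per-signal-mean) submodel per mode."""
    partitions = {}
    for sample in training_set:
        partitions.setdefault(mode_of(sample), []).append(sample)
    return {mode: [statistics.mean(col) for col in zip(*subset)]
            for mode, subset in partitions.items()}

def surveil(observed, mode, submodels, threshold=1.0):
    """Estimate signals with the submodel for the determined mode and
    flag the asset if any residual exceeds the threshold."""
    estimates = submodels[mode]
    residuals = [obs - est for obs, est in zip(observed, estimates)]
    status = "alert" if any(abs(r) > threshold for r in residuals) else "normal"
    return estimates, status

# Usage: two operating modes ("idle", "run"), two signals per sample.
training = [(1.0, 2.0), (1.2, 2.2), (5.0, 9.0), (5.2, 9.4)]
mode_of = lambda s: "idle" if s[0] < 3.0 else "run"
submodels = train_submodels(training, mode_of)
estimates, status = surveil((5.1, 9.1), "run", submodels)
```

A real implementation would replace the per-signal means with the trained process submodels the patent describes, but the selection-by-mode structure is the same.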
The purpose of this SOP is to describe the operation of the data processing program. These methods were used for every execution of the data processing program during the Arizona NHEXAS project and the "Border" study. Keywords: data; data processing.
The National Human Exposur...
NASA Technical Reports Server (NTRS)
1972-01-01
The IDAPS (Image Data Processing System) is a user-oriented, computer-based, language and control system, which provides a framework or standard for implementing image data processing applications, simplifies set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.
The purpose of this SOP is to describe the operation of the data processing program. These methods were used for every execution of the data processing program during the Arizona NHEXAS project and the Border study. Keywords: data; data processing.
The U.S.-Mexico Border Progr...
US EPA Base Study Standard Operating Procedure for Data Processing and Data Management
The purpose of the Standard Operating Procedures (SOP) for data management and data processing is to facilitate consistent documentation and completion of data processing duties and management responsibilities in order to maintain a high standard of data quality.
Kepler Science Operations Center Architecture
NASA Technical Reports Server (NTRS)
Middour, Christopher; Klaus, Todd; Jenkins, Jon; Pletcher, David; Cote, Miles; Chandrasekaran, Hema; Wohler, Bill; Girouard, Forrest; Gunter, Jay P.; Uddin, Kamal;
2010-01-01
We give an overview of the operational concepts and architecture of the Kepler Science Data Pipeline. Designed, developed, operated, and maintained by the Science Operations Center (SOC) at NASA Ames Research Center, the Kepler Science Data Pipeline is a central element of the Kepler Ground Data System. The SOC charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Data Pipeline, including the hardware infrastructure, scientific algorithms, and operational procedures. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center that hosts the computers required to perform data analysis. We discuss the high-performance, parallel computing software modules of the Kepler Science Data Pipeline that perform transit photometry, pixel-level calibration, systematic error-correction, attitude determination, stellar target management, and instrument characterization. We explain how data processing environments are divided to support operational processing and test needs. We explain the operational timelines for data processing and the data constructs that flow into the Kepler Science Data Pipeline.
An intelligent factory-wide optimal operation system for continuous production process
NASA Astrophysics Data System (ADS)
Ding, Jinliang; Chai, Tianyou; Wang, Hongfeng; Wang, Junwei; Zheng, Xiuping
2016-03-01
In this study, a novel intelligent factory-wide operation system for a continuous production process is designed to optimise the entire production process, which consists of multiple units; furthermore, this system is developed using process operational data to avoid the complexity of mathematical modelling of the continuous production process. The data-driven approach aims to specify the structure of the optimal operation system; in particular, the operational data of the process are used to formulate each part of the system. In this context, the domain knowledge of process engineers is utilised, and a closed-loop dynamic optimisation strategy, which combines feedback, performance prediction, feed-forward, and dynamic tuning schemes into a framework, is employed. The effectiveness of the proposed system has been verified using industrial experimental results.
The DFVLR main department for central data processing, 1976 - 1983
NASA Technical Reports Server (NTRS)
1983-01-01
Data processing, equipment and systems operation, operative and user systems, user services, computer networks and communications, text processing, computer graphics, and high power computers are discussed.
Data Assembly and Processing for Operational Oceanography: 10 Years of Achievements
2009-07-20
... operational oceanography infrastructure. They provide data and products needed by modeling and data assimilation systems; they also provide products directly useable for applications. The paper will discuss the role and functions of the data centers for operational oceanography and describe some of
He, Longjun; Xu, Lang; Ming, Xing; Liu, Qian
2015-02-01
Three-dimensional post-processing operations on the volume data generated by a series of CT or MR images are of considerable significance for image reading and diagnosis. As a part of the DICOM standard, the WADO service defines how to access DICOM objects on the Web, but it does not cover three-dimensional post-processing operations on the series images. This paper analyzes the technical features of three-dimensional post-processing operations on volume data, and then designs and implements a web service system for three-dimensional post-processing of medical images based on the WADO protocol. To improve the scalability of the proposed system, the business tasks and the calculation operations were separated into two modules. The results proved that the proposed system could support a three-dimensional post-processing service of medical images for multiple clients at the same moment, meeting the demand of accessing three-dimensional post-processing operations on volume data on the web.
LANDSAT-D ground segment operations plan, revision A
NASA Technical Reports Server (NTRS)
Evans, B.
1982-01-01
The basic concept for the utilization of LANDSAT ground processing resources is described. Only the steady state activities that support normal ground processing are addressed. This ground segment operations plan covers all processing of the multispectral scanner and the processing of thematic mapper through data acquisition and payload correction data generation for the LANDSAT 4 mission. The capabilities embedded in the hardware and software elements are presented from an operations viewpoint. The personnel assignments associated with each functional process and the mechanisms available for controlling the overall data flow are identified.
The Washington Data Processing Training Story.
ERIC Educational Resources Information Center
MCKEE, R.L.
A data processing training program in Washington had 10 data processing centers in operation and eight more in various stages of planning in 1963. These centers were full-time day preparatory 2-year post-high school technician training programs, operated and administered by the local boards of education. Each school had a complete data processing…
Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong
2013-10-01
A processor-implemented method for determining aging of a processing unit in a processor, the method comprising: calculating an effective aging profile for the processing unit, wherein the effective aging profile quantifies the effects of aging on the processing unit; combining the effective aging profile with process variation data, actual workload data, and operating conditions data for the processing unit; and determining aging through an aging sensor of the processing unit using the effective aging profile, the process variation data, the actual workload data, architectural characteristics and redundancy data, and the operating conditions data for the processing unit.
A Big Spatial Data Processing Framework Applying to National Geographic Conditions Monitoring
NASA Astrophysics Data System (ADS)
Xiao, F.
2018-04-01
In this paper, a novel framework for spatial data processing is proposed, which applies to the National Geographic Conditions Monitoring project of China. It includes 4 layers: spatial data storage, spatial RDDs, spatial operations, and spatial query language. The spatial data storage layer uses HDFS to store large volumes of spatial vector/raster data in a distributed cluster. The spatial RDDs are the abstract logical datasets of spatial data types and can be transferred to the Spark cluster to conduct Spark transformations and actions. The spatial operations layer is a series of processing operations on spatial RDDs, such as range query, k nearest neighbor, and spatial join. The spatial query language is a user-friendly interface that provides people not familiar with Spark a comfortable way to perform spatial operations. Compared with other spatial frameworks, this framework integrates comprehensive technologies for big spatial data processing. Extensive experiments on real datasets show that the framework achieves better performance than traditional processing methods.
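The spatial operations layer named above (range query, k nearest neighbor, spatial join) has simple single-machine semantics. A minimal, non-distributed sketch of two of those operations over point lists; the real framework runs the same operations over partitioned spatial RDDs on a Spark cluster, which this does not attempt to reproduce:

```python
# Single-machine illustration of range query and k-nearest-neighbor,
# two of the spatial operations the framework distributes over spatial RDDs.
import math

def range_query(points, xmin, ymin, xmax, ymax):
    """Return the points falling inside an axis-aligned query window."""
    return [p for p in points
            if xmin <= p[0] <= xmax and ymin <= p[1] <= ymax]

def knn(points, q, k):
    """Return the k points nearest to query point q (Euclidean distance)."""
    return sorted(points, key=lambda p: math.dist(p, q))[:k]

pts = [(0, 0), (1, 1), (2, 2), (5, 5)]
inside = range_query(pts, 0, 0, 2, 2)
nearest = knn(pts, (1.2, 1.2), 2)
```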
Managing computer-controlled operations
NASA Technical Reports Server (NTRS)
Plowden, J. B.
1985-01-01
A detailed discussion of Launch Processing System Ground Software Production is presented to establish the interrelationships of firing room resource utilization, configuration control, system build operations, and Shuttle data bank management. The production of a test configuration identifier is traced from requirement generation to program development. The challenge of the operational era is to implement fully automated utilities to interface with a resident system build requirements document to eliminate all manual intervention in the system build operations. Automatic update/processing of Shuttle data tapes will enhance operations during multi-flow processing.
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2012 CFR
2012-10-01
... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2012-10-01 2012-10-01 false Computers and data processing equipment (account...
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2013 CFR
2013-10-01
... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2013-10-01 2013-10-01 false Computers and data processing equipment (account...
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2011 CFR
2011-10-01
... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2011-10-01 2011-10-01 false Computers and data processing equipment (account...
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2014 CFR
2014-10-01
... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2014-10-01 2014-10-01 false Computers and data processing equipment (account...
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2010 CFR
2010-10-01
... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2010-10-01 2010-10-01 false Computers and data processing equipment (account...
The use of Merging and Aggregation Operators for MRDB Data Feeding
NASA Astrophysics Data System (ADS)
Kozioł, Krystian; Lupa, Michał
2013-12-01
This paper presents the application of two generalization operators, merging and displacement, in the process of automatic data feeding of a multiresolution database of topographic objects from large-scale databases (1:500-1:5000). An ordered collection of objects forms a development layer that, in the process of generalization, is subjected to merging and displacement in order to maintain recognizability at the reduced scale of the map. The solution to the above problem lies in the algorithms described in this work; these algorithms use the standard recognition of drawings (Chrobak 2010), independent of the user. A digital cartographic generalization process is a set of consecutive operators in which merging and aggregation play a key role. Their proper operation has a significant impact on the qualitative assessment of data generalization.
Passive serialization in a multitasking environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hennessey, J.P.; Osisek, D.L.; Seigh, J.W. II
1989-02-28
In a multiprocessing system having a control program in which data objects are shared among processes, this patent describes a method for serializing references to a data object by the processes so as to prevent invalid references to the data object by any process when an operation requiring exclusive access is performed by another process, comprising the steps of: permitting the processes to reference data objects on a shared access basis without obtaining a shared lock; monitoring a point of execution of the control program which is common to all processes in the system, which occurs regularly in the process's execution and across which no references to any data object can be maintained by any process, except references using locks; establishing a system reference point which occurs after each process in the system has passed the point of execution at least once since the last such system reference point; requesting an operation requiring exclusive access on a selected data object; preventing subsequent references by other processes to the selected data object; waiting until two of the system reference points have occurred; and then performing the requested operation.
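The scheme in this claim is essentially a grace-period (read-copy-update style) protocol: readers take no locks, and a writer defers its exclusive operation until every process has crossed the common execution point enough times that no stale reference can survive. A simplified single-threaded simulation of the bookkeeping, with illustrative names not taken from the patent:

```python
# Toy simulation of the patent's grace-period logic: a writer records
# per-process targets, readers report crossings of the common execution
# point (a quiescent point), and the exclusive operation proceeds only
# after every process has crossed it the required number of times.
class GracePeriodTracker:
    def __init__(self, process_ids):
        # How many times each process has crossed the quiescent point.
        self.counts = {pid: 0 for pid in process_ids}

    def quiescent(self, pid):
        """A process crosses the common execution point, across which it
        may hold no lock-free references to any data object."""
        self.counts[pid] += 1

    def start_grace_period(self, n=2):
        """Writer requests exclusive access: record the per-process counts
        that must be exceeded (n=2 mirrors 'waiting until two of the
        system reference points have occurred')."""
        return {pid: c + n for pid, c in self.counts.items()}

    def grace_elapsed(self, targets):
        return all(self.counts[pid] >= t for pid, t in targets.items())

tracker = GracePeriodTracker(["p1", "p2"])
targets = tracker.start_grace_period(n=2)   # writer wants exclusive access
for _ in range(2):                          # both readers pass the point twice
    tracker.quiescent("p1")
    tracker.quiescent("p2")
safe = tracker.grace_elapsed(targets)       # now safe to perform the operation
```

A real kernel would derive the quiescent crossings from scheduler events rather than explicit calls; the counting logic is the part this sketch tries to show.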
Ruohonen, Toni; Ennejmy, Mohammed
2013-01-01
Making reliable and justified operational and strategic decisions is a challenging task in the health care domain. So far, decisions have been made based on the experience of managers and staff, or they are evaluated with traditional methods using inadequate data. As a result of this kind of decision-making process, attempts to improve operations have usually failed or led to only local improvements. Health care organizations have a lot of operational data, in addition to clinical data, which is the key element for making reliable and justified decisions. However, it is increasingly difficult to access and make use of it. In this paper we discuss the possibilities for exploiting operational data in the most efficient way in the decision-making process. We share our vision of the future and propose a conceptual framework for automating the decision-making process.
NASA Technical Reports Server (NTRS)
Byrd, Raymond J.
1990-01-01
This study was initiated to identify operations problems and cost drivers for current propulsion systems and to identify technology and design approaches to increase the operational efficiency and reduce operations costs for future propulsion systems. To provide readily usable data for the Advanced Launch System (ALS) program, the results of the Operationally Efficient Propulsion System Study (OEPSS) were organized into a series of OEPSS Data Books as follows: Volume 1, Generic Ground Operations Data; Volume 2, Ground Operations Problems; Volume 3, Operations Technology; Volume 4, OEPSS Design Concepts; and Volume 5, OEPSS Final Review Briefing, which summarizes the activities and results of the study. This volume presents ground processing data for a generic LOX/LH2 booster and core propulsion system based on current STS experience. The data presented include: top logic diagram, process flow, activities bar-chart, loaded timelines, manpower requirements in terms of duration, headcount and skill mix per operations and maintenance instruction (OMI), and critical path tasks and durations.
Development of Data Processing Software for NBI Spectroscopic Analysis System
NASA Astrophysics Data System (ADS)
Zhang, Xiaodan; Hu, Chundong; Sheng, Peng; Zhao, Yuanzhe; Wu, Deyun; Cui, Qinglong
2015-04-01
A set of data processing software is presented in this paper for processing NBI spectroscopic data. For better and more scientific management and querying of these data, they are managed uniformly by the NBI data server. The data processing software offers the functions of uploading beam spectral original and analytic data to the data server manually and automatically, querying and downloading all the NBI data, as well as dealing with local LZO data. The software is composed of a server program and a client program. The server software is programmed in C/C++ under a CentOS development environment. The client software is developed under a VC 6.0 platform, which offers convenient operational human interfaces. The network communications between the server and the client are based on TCP. With the help of this software, the NBI spectroscopic analysis system realizes unattended automatic operation, and the clear interface also makes it much more convenient to offer beam intensity distribution data and beam power data to operators for operational decision-making. Supported by National Natural Science Foundation of China (No. 11075183), the Chinese Academy of Sciences Knowledge Innovation
Performing an allreduce operation on a plurality of compute nodes of a parallel computer
Faraj, Ahmad [Rochester, MN
2012-04-17
Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
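The abstract above describes a two-phase allreduce: a global allreduce around logical rings of cores, followed by a local allreduce within each node. The ring phase has a simple shape: a partial result travels once around the ring accumulating contributions, then the final result travels around again so every member holds it. A single-process simulation of that ring phase on plain lists (this ignores the patent's pipelining and multi-ring details):

```python
# Simulated ring allreduce (sum): reduce pass accumulates contributions
# around the ring, broadcast pass gives every member the final result.
def ring_allreduce(contributions):
    """Sum equal-length vectors around a logical ring.
    contributions: one vector per ring member."""
    n = len(contributions)
    acc = list(contributions[0])
    # Reduce pass: the partial sum travels once around the ring.
    for i in range(1, n):
        acc = [a + c for a, c in zip(acc, contributions[i])]
    # Broadcast pass: the reduced vector travels around the ring again,
    # so every member ends up with an identical copy.
    return [list(acc) for _ in range(n)]

results = ring_allreduce([[1, 2], [3, 4], [5, 6]])  # every member gets [9, 12]
```

In the patented scheme each logical ring contains one core per compute node, so after this global phase each node combines its cores' results locally.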
M-DAS: System for multispectral data analysis. [in Saginaw Bay, Michigan
NASA Technical Reports Server (NTRS)
Johnson, R. H.
1975-01-01
M-DAS is a ground data processing system designed for analysis of multispectral data. M-DAS operates on multispectral data from LANDSAT, S-192, M2S and other sources in CCT form. Interactive training by operator-investigators using a variable cursor on a color display was used to derive optimum processing coefficients and data on cluster separability. An advanced multivariate normal-maximum likelihood processing algorithm was used to produce output in various formats: color-coded film images, geometrically corrected map overlays, moving displays of scene sections, coverage tabulations and categorized CCTs. The analysis procedure for M-DAS involves three phases: (1) screening and training, (2) analysis of training data to compute performance predictions and processing coefficients, and (3) processing of multichannel input data into categorized results. Typical M-DAS applications involve iteration between each of these phases. A series of photographs of the M-DAS display are used to illustrate M-DAS operation.
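The "multivariate normal-maximum likelihood" step M-DAS applies can be sketched compactly: fit a Gaussian to each training class, then assign each pixel vector to the class with the highest likelihood. For brevity this sketch uses per-band (diagonal) variances rather than the full covariance matrices a production classifier would use; all names are illustrative:

```python
# Minimal normal-maximum-likelihood classifier for multispectral pixels:
# per-class Gaussian fit (diagonal covariance), then argmax likelihood.
import math

def fit_class(samples):
    """Per-band mean and variance for one training class."""
    bands = list(zip(*samples))
    means = [sum(b) / len(b) for b in bands]
    variances = [sum((x - m) ** 2 for x in b) / len(b) or 1e-9
                 for b, m in zip(bands, means)]
    return means, variances

def log_likelihood(pixel, means, variances):
    return sum(-0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
               for x, m, v in zip(pixel, means, variances))

def classify(pixel, models):
    return max(models, key=lambda c: log_likelihood(pixel, *models[c]))

# Usage: two hypothetical land-cover classes, two spectral bands.
models = {"water": fit_class([(10, 30), (12, 28), (11, 29)]),
          "crop":  fit_class([(60, 80), (62, 78), (61, 82)])}
label = classify((59, 81), models)
```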
The Kepler Science Data Processing Pipeline Source Code Road Map
NASA Technical Reports Server (NTRS)
Wohler, Bill; Jenkins, Jon M.; Twicken, Joseph D.; Bryson, Stephen T.; Clarke, Bruce Donald; Middour, Christopher K.; Quintana, Elisa Victoria; Sanderfer, Jesse Thomas; Uddin, Akm Kamal; Sabale, Anima;
2016-01-01
We give an overview of the operational concepts and architecture of the Kepler Science Processing Pipeline. Designed, developed, operated, and maintained by the Kepler Science Operations Center (SOC) at NASA Ames Research Center, the Science Processing Pipeline is a central element of the Kepler Ground Data System. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center which hosts the computers required to perform data analysis. The SOC's charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Processing Pipeline, including the software algorithms. We present the high-performance, parallel computing software modules of the pipeline that perform transit photometry, pixel-level calibration, systematic error correction, attitude determination, stellar target management, and instrument characterization.
Progress in Operational Analysis of Launch Vehicles in Nonstationary Flight
NASA Technical Reports Server (NTRS)
James, George; Kaouk, Mo; Cao, Timothy
2013-01-01
This paper presents recent results in an ongoing effort to understand and develop techniques to process launch vehicle data, which is extremely challenging for modal parameter identification. The primary source of difficulty is due to the nonstationary nature of the situation. The system is changing, the environment is not steady, and there is an active control system operating. Hence, the primary tool for producing clean operational results (significant data lengths and data averaging) is not available to the user. This work reported herein uses a correlation-based two step operational modal analysis approach to process the relevant data sets for understanding and development of processes. A significant drawback for such processing of short time histories is a series of beating phenomena due to the inability to average out random modal excitations. A recursive correlation process coupled to a new convergence metric (designed to mitigate the beating phenomena) is the object of this study. It has been found in limited studies that this process creates clean modal frequency estimates but numerically alters the damping.
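The premise of the correlation-based first step above is that the correlation function of a randomly excited response decays like a free-decay response of the structure, so modal frequency can be read off it. As a toy illustration (not the paper's recursive method), a clean damped sinusoid stands in for a computed correlation function and its dominant frequency is recovered from zero crossings:

```python
# Toy modal-frequency estimate from a correlation-like damped sinusoid.
# A real operational modal analysis would first estimate this correlation
# function from measured response data, then fit frequency and damping.
import math

def zero_crossing_frequency(samples, dt):
    """Estimate the dominant frequency (Hz) from sign changes."""
    crossings = sum((a < 0) != (b < 0) for a, b in zip(samples, samples[1:]))
    duration = dt * (len(samples) - 1)
    return crossings / (2.0 * duration)

dt, f, zeta = 0.001, 5.0, 0.02   # sample step, modal frequency, damping ratio
corr = [math.exp(-zeta * 2 * math.pi * f * t) * math.cos(2 * math.pi * f * t)
        for t in (i * dt for i in range(2000))]
f_est = zero_crossing_frequency(corr, dt)   # close to 5 Hz
```

The beating phenomena the paper targets arise precisely because short records do not let such correlation estimates converge; the convergence metric it studies addresses that, which this sketch does not attempt.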
NASA Astrophysics Data System (ADS)
Sharma, A. K.
2016-12-01
The current operational polar sounding systems at the National Oceanic and Atmospheric Administration (NOAA) National Environmental Satellite Data and Information Service (NESDIS) process the sounder data from the Cross-track Infrared Sounder (CrIS) onboard the Suomi National Polar-orbiting Partnership (SNPP) under the Joint Polar Satellite System (JPSS) program; the Infrared Atmospheric Sounding Interferometer (IASI) onboard the Metop-1 and Metop-2 satellites under the program managed by the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT); and the Advanced TIROS (Television and Infrared Observation Satellite) Operational Vertical Sounding (ATOVS) onboard NOAA-19 in the NOAA series of Polar Orbiting Environmental Satellites (POES), Metop-1, and Metop-2. Among these advanced operational sounders, CrIS and IASI provide more accurate, detailed temperature and humidity profiles; trace gases such as ozone, nitrous oxide, carbon dioxide, and methane; outgoing longwave radiation; and cloud cleared radiances (CCR) on a global scale, and these products are available to the operational user community. This presentation will highlight the tools developed for the NOAA Unique Combined Atmospheric Processing System (NUCAPS) and will discuss the Environmental Satellites Processing Center (ESPC) system architecture for sounding data processing and distribution for the CrIS, IASI, and ATOVS sounding products. Discussion will also include the improvements made for data quality measurements, granule processing and distribution, and the user timeliness requirements envisioned for the next generation of JPSS and GOES-R satellites. There have been significant changes in the operational system due to system upgrades, algorithm updates, and value-added data products and services.
Innovative tools to better monitor performance and quality assurance of the operational sounder and imager products from the CrIS/ATMS, IASI and ATOVS have been developed and deployed at the Office of Satellite and Product Operations (OSPO). The incorporation of these tools in the OSPO operation has facilitated the diagnosis and resolution of problems when detected in the operational environment.
On the hitchhiker Robot Operated Materials Processing System: Experiment data system
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Jenstrom, Del
1995-01-01
The Space Shuttle Discovery STS-64 mission carried the first American autonomous robot into space, the Robot Operated Materials Processing System (ROMPS). On this mission ROMPS was the only Hitchhiker experiment and had a unique opportunity to utilize all Hitchhiker space carrier capabilities. ROMPS conducted rapid thermal processing of one hundred semiconductor material samples to study how microgravity affects the resulting material properties. The experiment was designed, built and operated by a small GSFC team in cooperation with industry and university based principal investigators who provided the material samples and data interpretation. ROMPS' success presents some valuable lessons in such cooperation, as well as in the utilization of the Hitchhiker carrier for complex applications. The motivation of this paper is to share these lessons with the scientific community interested in attached payload experiments. ROMPS has a versatile and intelligent material processing control data system. This paper uses the ROMPS data system as the guiding thread to present the ROMPS mission experience. It presents an overview of the ROMPS experiment followed by considerations of the flight and ground data subsystems and their architecture, data products generation during mission operations, and post mission data utilization. It then presents the lessons learned from the development and operation of the ROMPS data system as well as those learned during post-flight data processing.
78 FR 32255 - HHS-Operated Risk Adjustment Data Validation Stakeholder Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-29
...-Operated Risk Adjustment Data Validation Stakeholder Meeting AGENCY: Centers for Medicare & Medicaid... Act HHS-operated risk adjustment data validation process. The purpose of this public meeting is to... interested parties about key HHS policy considerations pertaining to the HHS-operated risk adjustment data...
ERTS operations and data processing
NASA Technical Reports Server (NTRS)
Gonzales, L.; Sos, J. Y.
1974-01-01
The overall communications and data flow between the ERTS spacecraft and the ground stations and processing centers are generally described. Data from the multispectral scanner and the return beam vidicon are telemetered to a primary ground station where they are demodulated, processed, and recorded. The tapes are then transferred to the NASA Data Processing Facility (NDPF) at Goddard. Housekeeping data are relayed from the prime ground stations to the Operations Control Center at Goddard. Tracking data are processed at the ground stations, and the calculated parameters are transmitted by teletype to the orbit determination group at Goddard. The ERTS orbit has been designed so that the same swaths of the ground coverage pattern viewed during one 18-day coverage cycle are repeated by the swaths viewed on all subsequent cycles. The Operations Control Center is the focal point for all communications with the spacecraft. NDPF is a job-oriented facility which processes and stores all sensor data, and which disseminates large quantities of these data to users in the form of films, computer-compatible tapes, and data collection system data.
Performance of the Landsat-Data Collection System in a Total System Context
NASA Technical Reports Server (NTRS)
Paulson, R. W. (Principal Investigator); Merk, C. F.
1975-01-01
The author has identified the following significant results. This experiment was, and continues to be, an integration of the LANDSAT-DCS with the data collection and processing system of the Geological Survey. Although an experimental demonstration, it was a successful integration of a satellite relay system capable of continental data collection with an existing governmental nationwide operational data processing and distribution network. The Survey's data processing system uses a large general purpose computer with insufficient redundancy for 24-hour a day, 7-day a week operation. This is a significant, but soluble, obstacle to converting the experimental integration of the system to an operational integration.
SAR image formation with azimuth interpolation after azimuth transform
Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]
2008-07-08
Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.
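The ordering described above can be illustrated in one dimension. The sketch below is a hypothetical toy illustration, not the patented algorithm: the azimuth samples are Fourier-transformed first, and only then is the spectrum resampled onto a uniform (rectangular) grid, so the interpolation acts on transformed data rather than on the raw phase history. The warp applied to the frequency axis is invented for the example.

```python
import numpy as np

def transform_then_interpolate(samples, warped_axis, uniform_axis):
    """FFT the azimuth samples, then resample the spectrum magnitude
    from a slightly warped frequency axis onto a uniform one."""
    spectrum = np.fft.fftshift(np.fft.fft(samples))
    # Interpolation follows the transform, so it operates on the
    # spectrum rather than on the raw phase-history samples.
    return np.interp(uniform_axis, warped_axis, np.abs(spectrum))

n = 128
samples = np.exp(2j * np.pi * 0.1 * np.arange(n))      # one azimuth tone
uniform = np.fft.fftshift(np.fft.fftfreq(n))           # desired uniform grid
warped = uniform + 0.002 * np.sin(2 * np.pi * np.arange(n) / n)  # toy warp
regridded = transform_then_interpolate(samples, warped, uniform)
# The tone still appears near 0.1 cycles/sample on the uniform grid.
```

Because the warp is small and smooth, the interpolation error stays confined to the slowly varying spectrum, which is the abstract's argument for doing it after the transform.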
Statistical process control: separating signal from noise in emergency department operations.
Pimentel, Laura; Barrueto, Fermin
2015-05-01
Statistical process control (SPC) is a visually appealing and statistically rigorous methodology very suitable to the analysis of emergency department (ED) operations. We demonstrate that the control chart is the primary tool of SPC; it is constructed by plotting data measuring the key quality indicators of operational processes in rationally ordered subgroups such as units of time. Control limits are calculated using formulas reflecting the variation in the data points from one another and from the mean. SPC allows managers to determine whether operational processes are controlled and predictable. We review why the moving range chart is most appropriate for use in the complex ED milieu, how to apply SPC to ED operations, and how to determine when performance improvement is needed. SPC is an excellent tool for operational analysis and quality improvement for these reasons: 1) control charts make large data sets intuitively coherent by integrating statistical and visual descriptions; 2) SPC provides analysis of process stability and capability rather than simple comparison with a benchmark; 3) SPC allows distinction between special cause variation (signal), indicating an unstable process requiring action, and common cause variation (noise), reflecting a stable process; and 4) SPC keeps the focus of quality improvement on process rather than individual performance. Because data have no meaning apart from their context, and every process generates information that can be used to improve it, we contend that SPC should be seriously considered for driving quality improvement in emergency medicine. Copyright © 2015 Elsevier Inc. All rights reserved.
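The moving-range chart the abstract recommends can be sketched directly. The constants 2.66 and 3.267 are the standard SPC factors for individuals/moving-range (XmR) charts with subgroups of size two; the door-to-doctor times below are hypothetical, not data from the paper.

```python
def xmr_limits(values):
    """Return the individuals-chart mean and control limits plus the
    moving-range chart's upper limit, using standard XmR constants."""
    n = len(values)
    mean = sum(values) / n
    # Moving ranges: absolute differences between consecutive points.
    mrs = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    mr_bar = sum(mrs) / len(mrs)
    lcl = mean - 2.66 * mr_bar          # lower control limit
    ucl = mean + 2.66 * mr_bar          # upper control limit
    mr_ucl = 3.267 * mr_bar             # moving-range chart upper limit
    return mean, lcl, ucl, mr_ucl

times = [32, 41, 35, 38, 30, 44, 36, 39, 33, 40]   # hypothetical ED minutes
mean, lcl, ucl, mr_ucl = xmr_limits(times)
# Points outside [lcl, ucl] would indicate special-cause variation (signal);
# points inside reflect common-cause variation (noise).
signals = [t for t in times if t < lcl or t > ucl]
```

With these stable sample data no point breaches the limits, so the process would be judged controlled and predictable.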
Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.
2014-08-12
Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.
Stochastic availability analysis of operational data systems in the Deep Space Network
NASA Technical Reports Server (NTRS)
Issa, T. N.
1991-01-01
Existing availability models of standby redundant systems consider only an operator's performance and its interaction with the hardware performance. In the case of operational data systems in the Deep Space Network (DSN), in addition to an operator-system interface, a controller reconfigures the system and links a standby unit into the network data path upon failure of the operating unit. A stochastic (Markovian) process technique is used to model and analyze the availability performance, and the occurrence of degradation due to partial failures is quantitatively incorporated into the model. Exact expressions of the steady-state availability and proportion-degraded performance measures are derived for the systems under study. The interaction among the hardware, operator, and controller performance parameters, and that interaction's effect on data availability, are evaluated and illustrated for an operational data processing system.
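The steady-state calculation the abstract refers to can be sketched with a small continuous-time Markov chain. This is a hedged illustration under assumed states and rates (operating, degraded by partial failure, failed), not the DSN model or its parameter values.

```python
import numpy as np

# Illustrative transition rates (per hour) -- assumptions, not DSN data.
lam_d = 0.02   # operating -> degraded (partial failure)
lam_f = 0.01   # operating -> failed
mu_d  = 0.50   # degraded -> operating (recovery)
mu_f  = 0.25   # failed -> operating (repair incl. controller switchover)

# Generator matrix Q; rows sum to zero.  States: (operating, degraded, failed)
Q = np.array([
    [-(lam_d + lam_f), lam_d, lam_f],
    [mu_d, -mu_d, 0.0],
    [mu_f, 0.0, -mu_f],
])

# Solve pi @ Q = 0 with sum(pi) = 1 by replacing one balance equation
# with the normalization condition.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

availability = pi[0] + pi[1]      # degraded state still delivers some data
prop_degraded = pi[1]             # proportion of time in degraded operation
```

The same linear-solve pattern scales to chains that add explicit operator and controller states, which is how the interaction effects described above would enter the model.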
Next Generation MODTRAN for Improved Atmospheric Correction of Spectral Imagery
2016-01-29
DoD operational and research sensor and data processing systems, particularly those involving the removal of atmospheric effects, commonly referred...atmospheric correction process. Given the ever increasing capabilities of spectral sensors to quickly generate enormous quantities of data, combined...
Short haul air passenger data sources in the United States
NASA Technical Reports Server (NTRS)
Al-Kazily, J.; Gosling, G.; Horonjeff, R.
1977-01-01
The sources and characteristics of existing data on short haul air passenger traffic in the United States domestic air market are described along with data availability, processing, and costs. Reference is made to data derived from aircraft operations, since these data can be used to ensure that no short haul operators are omitted during the process of assembling passenger data.
NASA Astrophysics Data System (ADS)
Meyer, F. J.; McAlpin, D. B.; Gong, W.; Ajadi, O.; Arko, S.; Webley, P. W.; Dehn, J.
2015-02-01
Remote sensing plays a critical role in operational volcano monitoring due to the often remote locations of volcanic systems and the large spatial extent of potential eruption pre-cursor signals. Despite the all-weather capabilities of radar remote sensing and its high performance in monitoring of change, the contribution of radar data to operational monitoring activities has been limited in the past. This is largely due to: (1) the high costs associated with radar data; (2) traditionally slow data processing and delivery procedures; and (3) the limited temporal sampling provided by spaceborne radars. With this paper, we present new data processing and data integration techniques that mitigate some of these limitations and allow for a meaningful integration of radar data into operational volcano monitoring decision support systems. Specifically, we present fast data access procedures as well as new approaches to multi-track processing that improve near real-time data access and temporal sampling of volcanic systems with SAR data. We introduce phase-based (coherent) and amplitude-based (incoherent) change detection procedures that are able to extract dense time series of hazard information from these data. For a demonstration, we present an integration of our processing system with an operational volcano monitoring system that was developed for use by the Alaska Volcano Observatory (AVO). Through an application to a historic eruption, we show that the integration of SAR into systems such as AVO can significantly improve the ability of operational systems to detect eruptive precursors. Therefore, the developed technology is expected to improve operational hazard detection, alerting, and management capabilities.
Study and Analysis of The Robot-Operated Material Processing Systems (ROMPS)
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.
1996-01-01
This report presents the progress of a research grant funded by NASA for work performed during 1 Oct. 1994 - 31 Sep. 1995. The report deals with the development and investigation of potential uses of software for data processing for the Robot Operated Material Processing System (ROMPS). It reports on the progress of data processing of calibration samples processed by ROMPS in space and on earth. First, data were retrieved using the I/O software and manually processed using Microsoft Excel. Then data retrieval and processing were automated using a program written in C that reads the telemetry data and produces plots of the time responses of sample temperatures and other desired variables. LabView was also employed to automatically retrieve and process the telemetry data.
Digital ultrasonics signal processing: Flaw data post processing use and description
NASA Technical Reports Server (NTRS)
Buel, V. E.
1981-01-01
A modular system composed of two sets of tasks which interprets the flaw data and allows compensation of the data for transducer characteristics is described. The hardware configuration consists of two main units. A DEC LSI-11 processor, running under the RT-11 single-job, version 2C-02 operating system, controls the scanner hardware and the ultrasonic unit. A DEC PDP-11/45 processor, also running under the RT-11, version 2C-02, operating system, stores, processes, and displays the flaw data. The software developed, the Ultrasonics Evaluation System, is divided into two categories: transducer characterization and flaw classification. Each category is divided further into two functional tasks: a data acquisition task and a postprocessing task. The flaw characterization task collects data, compresses it, and writes it to a disk file. The data is then processed by the flaw classification postprocessing task. The use and operation of the flaw data postprocessor are described.
NASA Technical Reports Server (NTRS)
Waldrop, Glen S.
1990-01-01
Operations problems and cost drivers were identified for current propulsion systems, and design and technology approaches were identified to increase operational efficiency and reduce operations costs for future propulsion systems. To provide readily usable data for the ALS program, the results of the OEPSS study were organized into a series of OEPSS Data Books. This volume presents a detailed description of 25 major problems encountered during launch processing of current expendable and reusable launch vehicles. A concise description of each problem and its operational impact on launch processing is presented, along with potential solutions and technology recommendations.
OPERATIONS RESEARCH IN THE DESIGN OF MANAGEMENT INFORMATION SYSTEMS
...management information systems is concerned with the identification and detailed specification of the information and data processing... of advanced data processing techniques in management information systems today, the close coordination of operations research and data systems activities has become a practical necessity for the modern business firm... information systems in which mathematical models are employed as the basis for analysis and systems design. Operations research provides a
Development and testing of operational incident detection algorithms : executive summary
DOT National Transportation Integrated Search
1997-09-01
This report describes the development of operational surveillance data processing algorithms and software for application to urban freeway systems, conforming to a framework in which data processing is performed in stages: sensor malfunction detectio...
Process for using surface strain measurements to obtain operational loads for complex structures
NASA Technical Reports Server (NTRS)
Ko, William L. (Inventor); Richards, William Lance (Inventor)
2010-01-01
The invention is an improved process for using surface strain data to obtain real-time, operational loads data for complex structures that significantly reduces the time and cost versus current methods.
NOAA's Satellite Climate Data Records: The Research to Operations Process and Current State
NASA Astrophysics Data System (ADS)
Privette, J. L.; Bates, J. J.; Kearns, E. J.; NOAA's Climate Data Record Program
2011-12-01
In support of NOAA's mandate to provide climate products and services to the Nation, the National Climatic Data Center initiated the satellite Climate Data Record (CDR) Program. The Program develops and sustains climate information products derived from satellite data that NOAA has collected over the past 30+ years. These are the longest sets of continuous global measurements in existence. Data from other satellite programs, including those in NASA, the Department of Defense, and foreign space agencies, are also used. NOAA is now applying advanced analysis techniques to these historic data. This process is unraveling underlying climate trend and variability information and returning new value from the data. However, the transition of complex data processing chains, voluminous data products, and documentation into a systematic, configuration-controlled context involves many challenges. In this presentation, we focus on the Program's process for research-to-operations transition and the evolving systems designed to ensure transparency, security, economy, and authoritative value. The Program has adopted a two-phase process defined by an Initial Operational Capability (IOC) and a Full Operational Capability (FOC). The principles and procedures for IOC are described, as well as the process for moving CDRs from IOC to FOC. Finally, we will describe the state of the CDRs in all phases of the Program, with an emphasis on the seven community-developed CDRs transitioned to NOAA in 2011. Details on CDR access and distribution will be provided.
The X-33 range Operations Control Center
NASA Technical Reports Server (NTRS)
Shy, Karla S.; Norman, Cynthia L.
1998-01-01
This paper describes the capabilities and features of the X-33 Range Operations Center at NASA Dryden Flight Research Center. All the unprocessed data will be collected and transmitted over fiber optic lines to the Lockheed Operations Control Center for real-time flight monitoring of the X-33 vehicle. By using the existing capabilities of the Western Aeronautical Test Range, the Range Operations Center will provide the ability to monitor all down-range tracking sites for the Extended Test Range systems. In addition to radar tracking and aircraft telemetry data, the Telemetry and Radar Acquisition and Processing System is being enhanced to acquire vehicle command data, differential Global Positioning System corrections and telemetry receiver signal level status. The Telemetry and Radar Acquisition Processing System provides the flexibility to satisfy all X-33 data processing requirements quickly and efficiently. Additionally, the Telemetry and Radar Acquisition Processing System will run a real-time link margin analysis program. The results of this model will be compared in real-time with actual flight data. The hardware and software concepts presented in this paper describe a method of merging all types of data into a common database for real-time display in the Range Operations Center in support of the X-33 program. All types of data will be processed for real-time analysis and display of the range system status to ensure public safety.
The Kepler End-to-End Data Pipeline: From Photons to Far Away Worlds
NASA Technical Reports Server (NTRS)
Cooke, Brian; Thompson, Richard; Standley, Shaun
2012-01-01
The Kepler mission is described in overview and the Kepler technique for discovering exoplanets is discussed. The design and implementation of the Kepler spacecraft is described, tracing the data path from photons entering the telescope aperture through raw observation data transmitted to the ground operations team. The technical challenges of operating a large aperture photometer with an unprecedented 95 million pixel detector are addressed, as well as the onboard technique for processing and reducing the large volume of data produced by the Kepler photometer. The technique and challenge of day-to-day mission operations that result in a very high percentage of time on target is discussed. This includes the day-to-day process for monitoring and managing the health of the spacecraft, the annual process for maintaining sun on the solar arrays while still keeping the telescope pointed at the fixed science target, the process for safely but rapidly returning to science operations after a spacecraft initiated safing event, and the long term anomaly resolution process. The ground data processing pipeline, from the point that science data is received on the ground to the presentation of preliminary planetary candidates and supporting data to the science team for further evaluation, is discussed. Ground management, control, exchange, and storage of Kepler's large and growing data set is discussed, as well as the process and techniques for removing noise sources and applying calibrations to intermediate data products.
Implementing asyncronous collective operations in a multi-node processing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Dong; Eisley, Noel A.; Heidelberger, Philip
A method, system, and computer program product are disclosed for implementing an asynchronous collective operation in a multi-node data processing system. In one embodiment, the method comprises sending data to a plurality of nodes in the data processing system, broadcasting a remote get to the plurality of nodes, and using this remote get to implement asynchronous collective operations on the data by the plurality of nodes. In one embodiment, each of the nodes performs only one task in the asynchronous operations, and each node sets up a base address table with an entry for the base address of a memory buffer associated with that node. In another embodiment, each of the nodes performs a plurality of tasks in said collective operations, and each task of each node sets up a base address table with an entry for the base address of a memory buffer associated with the task.
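The remote-get idea above can be sketched in miniature. This is a toy model, not the patented implementation: a shared dictionary stands in for node memory and the one-sided network hardware, the root publishes its buffer's base address in a table, and each node independently "gets" the data, so the collective completes without the root pushing to every node.

```python
import threading

memory = {}                    # address -> data; stands in for node memory
base_address_table = {}        # node id -> base address of its buffer

def publish(node, address, data):
    """Place data in a buffer and record its base address in the table."""
    memory[address] = data
    base_address_table[node] = address

def remote_get(target_node):
    # An RDMA-style one-sided read: no action is needed by the target,
    # which is what makes the collective asynchronous.
    return memory[base_address_table[target_node]]

root = 0
publish(root, 0x1000, [1, 2, 3, 4])

results = {}
def node_task(rank):
    results[rank] = list(remote_get(root))   # each node pulls independently

threads = [threading.Thread(target=node_task, args=(r,)) for r in range(1, 5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the joins, every simulated node holds a copy of the root's buffer, which is the broadcast outcome the abstract describes.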
TESS Ground System Operations and Data Products
NASA Astrophysics Data System (ADS)
Glidden, Ana; Guerrero, Natalia; Fausnaugh, Michael; TESS Team
2018-01-01
We describe the ground system operations for processing data from the Transiting Exoplanet Survey Satellite (TESS), highlighting the role of the Science Operations Center (SOC). TESS is a spaced-based (nearly) all-sky mission, designed to find small planets around nearby bright stars using the transit method. We detail the flow of data from pixel measurements on the instrument to final products available at the Mikulski Archive for Space Telescopes (MAST). The ground system relies on a host of players to process the data, including the Payload Operations Center at MIT, the Science Processing Operation Center at NASA Ames, and the TESS Science Office, led by the Harvard-Smithsonian Center for Astrophysics and MIT. Together, these groups will deliver TESS Input Catalog, instrument calibration models, calibrated target pixels and full frame images, threshold crossing event reports, two-minute light curves, and the TESS Objects of Interest List.
Sampling Operations on Big Data
2015-11-29
...gories. These include edge sampling methods, where edges are selected by predetermined criteria, and snowball sampling methods, where algorithms start... Vijay Gadepally, Taylor Herr, Luke Johnson, Lauren Milechin, Maja Milosavljevic, Benjamin A. Miller, Lincoln... process and disseminate information for discovery and exploration under real-time constraints. Common signal processing operations such as sampling and
Operational aspects of satellite data collection systems
NASA Technical Reports Server (NTRS)
Morakis, J. C.
1979-01-01
Operational aspects of satellite data collection systems (DCS) are discussed with consideration given to a cooperative program between the United States and France. The Tiros-N DCS, a random access system providing operational capability for position location and/or data collection from 4000 to 16,000 moving and/or fixed platforms, is described. Platform transmissions and the processing of the data are designed to conform to user needs. Position location is obtained through ground processing of Doppler measurements made by the data collection instrument on board the spacecraft.
SDI-based business processes: A territorial analysis web information system in Spain
NASA Astrophysics Data System (ADS)
Béjar, Rubén; Latre, Miguel Á.; Lopez-Pellicer, Francisco J.; Nogueras-Iso, Javier; Zarazaga-Soria, F. J.; Muro-Medrano, Pedro R.
2012-09-01
Spatial Data Infrastructures (SDIs) provide access to geospatial data and operations through interoperable Web services. These data and operations can be chained to set up specialized geospatial business processes, and these processes can give support to different applications. End users can benefit from these applications, while experts can integrate the Web services in their own business processes and developments. This paper presents an SDI-based territorial analysis Web information system for Spain, which gives access to land cover, topography, and elevation data, as well as to a number of interoperable geospatial operations by means of a Web Processing Service (WPS). Several examples illustrate how different territorial analysis business processes are supported. The system has been established by the Spanish National SDI (Infraestructura de Datos Espaciales de España, IDEE) both as an experimental platform for geoscientists and geoinformation system developers, and as a mechanism to contribute to the Spanish citizens' knowledge about their territory.
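The chaining idea above can be sketched without any network machinery: each operation consumes and produces data, so a territorial-analysis "business process" is just an ordered composition. The operations below (clip, mean) are simplified local stand-ins for WPS processes, and the elevation values are invented.

```python
def clip(grid, r0, r1, c0, c1):
    """Extract a rectangular study-area window from an elevation grid."""
    return [row[c0:c1] for row in grid[r0:r1]]

def mean_value(grid):
    """Aggregate a grid to a single statistic."""
    cells = [v for row in grid for v in row]
    return sum(cells) / len(cells)

def chain(data, *operations):
    """Run a business process as an ordered chain of operations,
    feeding each operation's output to the next."""
    for op in operations:
        data = op(data)
    return data

elevation = [[610, 615, 700],
             [620, 640, 710],
             [800, 810, 820]]          # hypothetical elevation data (m)
# Process: clip a 2x2 study area, then compute its mean elevation.
result = chain(elevation, lambda g: clip(g, 0, 2, 0, 2), mean_value)
```

In an actual SDI the same composition would be expressed as chained WPS Execute requests, with each service's output document feeding the next request.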
Overview of the Smart Network Element Architecture and Recent Innovations
NASA Technical Reports Server (NTRS)
Perotti, Jose M.; Mata, Carlos T.; Oostdyk, Rebecca L.
2008-01-01
In industrial environments, system operators rely on the availability and accuracy of sensors to monitor processes and detect failures of components and/or processes. The sensors must be networked in such a way that their data is reported to a central human interface, where operators are tasked with making real-time decisions based on the state of the sensors and the components that are being monitored. Incorporating health management functions at this central location aids the operator by automating the decision-making process to suggest, and sometimes perform, the action required by current operating conditions. Integrated Systems Health Management (ISHM) aims to incorporate data from many sources, including real-time and historical data and user input, and extract information and knowledge from that data to diagnose failures and predict future failures of the system. By distributing health management processing to lower levels of the architecture, less bandwidth is required for ISHM, data fusion is enhanced, systems and processes become more robust, and resolution is improved for the detection and isolation of failures in a system, subsystem, component, or process. The Smart Network Element (SNE) has been developed at NASA Kennedy Space Center to perform intelligent functions at the sensor and actuator level in support of ISHM.
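The bandwidth argument above can be made concrete with a toy smart element that performs limit checking locally and transmits only health-state changes, rather than streaming every raw sample to the central interface. The class, limits, and readings are hypothetical illustrations, not the SNE's actual design.

```python
class SmartElement:
    """Toy sensor-level health monitor: checks limits locally and
    reports only when the health state changes."""

    def __init__(self, low, high):
        self.low, self.high = low, high
        self.state = "OK"

    def ingest(self, reading):
        """Return a status message on a state change, else None."""
        new_state = "OK" if self.low <= reading <= self.high else "FAULT"
        if new_state != self.state:
            self.state = new_state
            return f"state change: {new_state} at reading {reading}"
        return None   # nothing worth transmitting upstream

sensor = SmartElement(low=0.0, high=100.0)
readings = [20.5, 21.0, 22.1, 130.4, 131.0, 25.2]
messages = [m for m in (sensor.ingest(r) for r in readings) if m]
# Six raw samples yield only two upstream messages: the FAULT onset
# at 130.4 and the return to OK at 25.2.
```

Scaling this pattern across thousands of sensors is what reduces ISHM bandwidth while preserving the failure-isolation information the central system needs.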
An expert systems application to space base data processing
NASA Technical Reports Server (NTRS)
Babb, Stephen M.
1988-01-01
The advent of space vehicles with their increased data requirements is reflected in the complexity of future telemetry systems. Space-based operations, with their immense operating costs, will shift the burden of data processing and routine analysis from the space station to the Orbital Transfer Vehicle (OTV). A research and development project is described which addresses the real-time onboard data processing tasks associated with a space-based vehicle, specifically focusing on an implementation of an expert system.
Telemetry distribution and processing for the second German Spacelab Mission D-2
NASA Technical Reports Server (NTRS)
Rabenau, E.; Kruse, W.
1994-01-01
For the second German Spacelab Mission D-2 all activities related to operating, monitoring and controlling the experiments on board the Spacelab were conducted from the German Space Operations Control Center (GSOC) operated by the Deutsche Forschungsanstalt fur Luft- und Raumfahrt (DLR) in Oberpfaffenhofen, Germany. The operational requirements imposed new concepts on the transfer of data between Germany and the NASA centers and the processing of data at the GSOC itself. Highlights were the upgrade of the Spacelab Data Processing Facility (SLDPF) to real time data processing, the introduction of packet telemetry and the development of the high-rate data handling front end, data processing and display systems at GSOC. For the first time, a robot on board the Spacelab was to be controlled from the ground in a closed loop environment. A dedicated forward channel was implemented to transfer the robot manipulation commands originating from the robotics experiment ground station to the Spacelab via the Orbiter's text and graphics system interface. The capability to perform telescience from an external user center was implemented. All interfaces proved successful during the course of the D-2 mission and are described in detail in this paper.
PILOT: An intelligent distributed operations support system
NASA Technical Reports Server (NTRS)
Rasmussen, Arthur N.
1993-01-01
The Real-Time Data System (RTDS) project is exploring the application of advanced technologies to the real-time flight operations environment of the Mission Control Centers at NASA's Johnson Space Center. The system, based on a network of engineering workstations, provides services such as delivery of real time telemetry data to flight control applications. To automate the operation of this complex distributed environment, a facility called PILOT (Process Integrity Level and Operation Tracker) is being developed. PILOT comprises a set of distributed agents cooperating with a rule-based expert system; together they monitor process operation and data flows throughout the RTDS network. The goal of PILOT is to provide unattended management and automated operation under user control.
Automatic Data Processing Equipment (ADPE) acquisition plan for the medical sciences
NASA Technical Reports Server (NTRS)
1979-01-01
An effective mechanism for meeting the SLSD/MSD data handling/processing requirements for Shuttle is discussed. The ability to meet these requirements depends upon the availability of a general purpose high speed digital computer system. This system is expected to implement those data base management and processing functions required across all SLSD/MSD programs during training, laboratory operations/analysis, simulations, mission operations, and post mission analysis/reporting.
Computerized procedures system
Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.
2010-10-12
An online data driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges and revisions are version controlled. The procedures run on a server that is platform independent of the user workstations that the server interfaces with and the user interface supports diverse procedural views.
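The data-driven behavior described above can be sketched as a minimal step engine: the system reads a snapshot of plant data, evaluates each step's completion condition, and presents the first step that is still pending. The procedure definition, tag names, and conditions are hypothetical, not from the patented system.

```python
# Each step pairs display text with a completion condition evaluated
# against live plant data (illustrative tags: pump_a, tank_level).
procedure = [
    {"step": "1", "text": "Verify pump A running",
     "done": lambda d: d["pump_a"] == "RUN"},
    {"step": "2", "text": "Confirm tank level > 40%",
     "done": lambda d: d["tank_level"] > 40},
]

def current_step(plant_data):
    """Return the first step whose completion condition is not yet
    satisfied by the monitored data, or None if the procedure is done."""
    for step in procedure:
        if not step["done"](plant_data):
            return step
    return None

snapshot = {"pump_a": "RUN", "tank_level": 35}
active = current_step(snapshot)   # pump check passes; step 2 is pending
```

A real system would layer the multi-user views, version control, and security the abstract mentions on top of this evaluate-and-advance core.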
Argo workstation: a key component of operational oceanography
NASA Astrophysics Data System (ADS)
Dong, Mingmei; Xu, Shanshan; Miao, Qingsheng; Yue, Xinyang; Lu, Jiawei; Yang, Yang
2018-02-01
Operational oceanography requires quantity, quality, and availability of data sets, and timeliness and effectiveness of data products. Without a steady and strong operational system for support, operational oceanography cannot proceed far. In this paper we describe an integrated platform named the Argo Workstation. It operates as a data processing and management system capable of data collection, automatic data quality control, visualized data checking, statistical data search, and data service. Once set up, the Argo Workstation provides global high-quality Argo data to users every day, timely and effectively. It has not only played a key role in operational oceanography but also set an example for operational systems.
Image data-processing system for solar astronomy
NASA Technical Reports Server (NTRS)
Wilson, R. M.; Teuber, D. L.; Watkins, J. R.; Thomas, D. T.; Cooper, C. M.
1977-01-01
The paper describes an image data processing system (IDAPS), its hardware/software configuration, and interactive and batch modes of operation for the analysis of the Skylab/Apollo Telescope Mount S056 X-Ray Telescope experiment data. Interactive IDAPS is primarily designed to provide on-line interactive user control of image processing operations for image familiarization, sequence and parameter optimization, and selective feature extraction and analysis. Batch IDAPS follows the normal conventions of card control and data input and output, and is best suited where the desired parameters and sequence of operations are known and when long image-processing times are required. Particular attention is given to the way in which this system has been used in solar astronomy and other investigations. Some recent results obtained by means of IDAPS are presented.
Design requirements for operational earth resources ground data processing
NASA Technical Reports Server (NTRS)
Baldwin, C. J.; Bradford, L. H.; Burnett, E. S.; Hutson, D. E.; Kinsler, B. A.; Kugle, D. R.; Webber, D. S.
1972-01-01
Realistic tradeoff data and evaluation techniques were studied that permit conceptual design of operational earth resources ground processing systems. Methodology for determining user requirements that utilize the limited information available from users is presented along with definitions of sensor capabilities projected into the shuttle/station era. A tentative method is presented for synthesizing candidate ground processing concepts.
Architectures Toward Reusable Science Data Systems
NASA Technical Reports Server (NTRS)
Moses, John Firor
2014-01-01
Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research and NOAA's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today.
Vocational Education Operations Analysis Process.
ERIC Educational Resources Information Center
California State Dept. of Education, Sacramento. Vocational Education Services.
This manual on the vocational education operations analysis process is designed to provide vocational administrators/coordinators with an internal device to collect, analyze, and display vocational education performance data. The first section describes the system and includes the following: analysis worksheet, data sources, utilization, system…
Using task analysis to understand the Data System Operations Team
NASA Technical Reports Server (NTRS)
Holder, Barbara E.
1994-01-01
The Data Systems Operations Team (DSOT) currently monitors the Multimission Ground Data System (MGDS) at JPL. The MGDS currently supports five spacecraft and within the next five years, it will support ten spacecraft simultaneously. The ground processing element of the MGDS consists of a distributed UNIX-based system of over 40 nodes and 100 processes. The MGDS system provides operators with little or no information about the system's end-to-end processing status or end-to-end configuration. The lack of system visibility has become a critical issue in the daily operation of the MGDS. A task analysis was conducted to determine what kinds of tools were needed to provide DSOT with useful status information and to prioritize the tool development. The analysis provided the formality and structure needed to get the right information exchange between development and operations. How even a small task analysis can improve developer-operator communications is described, and the challenges associated with conducting a task analysis in a real-time mission operations environment are examined.
Multimission image processing and science data visualization
NASA Technical Reports Server (NTRS)
Green, William B.
1993-01-01
The Operational Science Analysis (OSA) functional area supports science instrument data display, analysis, visualization, and photo processing in support of flight operations of planetary spacecraft managed by the Jet Propulsion Laboratory (JPL). This paper describes the data products generated by the OSA functional area and the current computer system used to generate them. The objectives of a system upgrade now in progress are described. The design approach for developing the new system is reviewed, including use of the Unix operating system and X-Window display standards to provide platform independence, portability, and modularity. The new system should provide a modular and scalable capability supporting a variety of future missions at JPL.
Mars Observer data production, transfer, and archival: The data production assembly line
NASA Technical Reports Server (NTRS)
Childs, David B.
1993-01-01
This paper describes the data production, transfer, and archival process designed for the Mars Observer Flight Project. It addresses the developmental and operational aspects of the archive collection production process. The developmental aspects cover the design and packaging of data products for archival and distribution to the planetary community. Also discussed is the design and development of a data transfer and volume production process capable of handling the large throughput and complexity of the Mars Observer data products. The operational aspects cover the main functions of the process: creating data and engineering products, collecting the data products and ancillary products in a central repository, producing archive volumes, validating volumes, archiving, and distributing the data to the planetary community.
Introduction to the scientific application system of DAMPE (On behalf of DAMPE collaboration)
NASA Astrophysics Data System (ADS)
Zang, Jingjing
2016-07-01
The Dark Matter Particle Explorer (DAMPE) is a high-energy particle physics experiment satellite launched on 17 Dec 2015. Science data processing and payload operation maintenance for DAMPE are provided by the DAMPE Scientific Application System (SAS) at the Purple Mountain Observatory (PMO) of the Chinese Academy of Sciences. SAS consists of three subsystems: the scientific operation subsystem, the science data and user management subsystem, and the science data processing subsystem. In cooperation with the Ground Support System (Beijing), the scientific operation subsystem is responsible for proposing observation plans, monitoring the health of the satellite, generating payload control commands, and participating in all activities related to payload operation. Several databases developed by the science data and user management subsystem methodically manage all collected and reconstructed science data, downlinked housekeeping data, and payload configuration and calibration data. Under the leadership of the DAMPE Scientific Committee, this subsystem is also responsible for publication of high-level science data and for supporting all science activities of the DAMPE collaboration. The science data processing subsystem has developed a series of physics analysis software packages to reconstruct basic information about detected cosmic ray particles. This subsystem also maintains the high-performance computing system of SAS to process all downlinked science data and automatically monitors the quality of all produced data. In this talk, we describe the functionality of the whole DAMPE SAS and present its main data processing performance.
The Research on Linux Memory Forensics
NASA Astrophysics Data System (ADS)
Zhang, Jun; Che, ShengBing
2018-03-01
Memory forensics is a branch of computer forensics. It does not depend on the operating system API; instead, it analyzes operating system information from binary memory data. Based on the 64-bit Linux operating system, this work analyzes system process and thread information from physical memory data. Using ELF file debugging information, we propose a method for locating kernel structure member variables that can be applied to different versions of the Linux operating system. The experimental results show that the method can successfully obtain system process information from physical memory data and is compatible with multiple versions of the Linux kernel.
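The kernel-structure location step can be illustrated with a minimal sketch: given a raw memory image and member offsets (which the paper recovers from ELF debugging information), decode a process record. The offsets, field sizes, and record layout below are made up for illustration and are not real `task_struct` offsets.

```python
import struct

# Hypothetical layout: a task_struct-like record where the PID is a
# 4-byte little-endian integer at a known member offset, followed by a
# fixed 16-byte process-name field (illustrative offsets, not Linux's).
PID_OFFSET = 0
COMM_OFFSET = 4
COMM_LEN = 16

def read_task(memory: bytes, base: int):
    """Decode (pid, comm) from a raw memory image at a record base address."""
    (pid,) = struct.unpack_from("<i", memory, base + PID_OFFSET)
    raw = memory[base + COMM_OFFSET : base + COMM_OFFSET + COMM_LEN]
    comm = raw.split(b"\x00", 1)[0].decode("ascii", errors="replace")
    return pid, comm

# Synthetic "physical memory" containing one record at base 0x10.
mem = bytearray(64)
struct.pack_into("<i", mem, 0x10 + PID_OFFSET, 1234)
mem[0x10 + COMM_OFFSET : 0x10 + COMM_OFFSET + 5] = b"sshd\x00"

pid, comm = read_task(bytes(mem), 0x10)
```

The point the abstract makes is that once the member offsets are known for a given kernel build, the same decoding logic works across kernel versions.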
NASA Astrophysics Data System (ADS)
Squibb, Gael F.
1984-10-01
The operation teams for the Infrared Astronomical Satellite (IRAS) included scientists from the IRAS International Science Team. The scientific decisions on an hour-to-hour basis, as well as the long-term strategic decisions, were made by science team members. The IRAS scientists were involved in the analysis of the instrument performance, the analysis of the quality of the data, the decision to reacquire data that was contaminated by radiation effects, the strategy for acquiring the survey data, and the process for using the telescope for additional observations, as well as the processing decisions required to ensure the publication of the final scientific products by end of flight operations plus one year. Early in the project, two science team members were selected to be responsible for the scientific operational decisions. One, located at the operations control center in England, was responsible for the scientific aspects of the satellite operations; the other, located at the scientific processing center in Pasadena, was responsible for the scientific aspects of the processing. These science team members were then responsible for approving the design and test of the tools to support their responsibilities and then, after launch, for using these tools in making their decisions. The ability of the project to generate the final science data products one year after the end of flight operations is due in a large measure to the active participation of the science team members in the operations. This paper presents a summary of the operational experiences gained from this scientific involvement.
ALMA Array Operations Group process overview
NASA Astrophysics Data System (ADS)
Barrios, Emilio; Alarcon, Hector
2016-07-01
ALMA science operations activities in Chile are the responsibility of the Department of Science Operations, which consists of three groups: the Array Operations Group (AOG), the Program Management Group (PMG), and the Data Management Group (DMG). The AOG includes the array operators and has the mission of providing support for science observations, operating the array safely and efficiently. The poster describes the AOG process, management, and operational tools.
NASA Astrophysics Data System (ADS)
Fernández-González, Daniel; Martín-Duarte, Ramón; Ruiz-Bustinza, Íñigo; Mochón, Javier; González-Gasca, Carmen; Verdeja, Luis Felipe
2016-08-01
Blast furnace operators expect to get sinter with homogenous and regular properties (chemical and mechanical), necessary to ensure regular blast furnace operation. Blends for sintering also include several iron by-products and other wastes that are obtained in different processes inside the steelworks. Due to their source, the availability of such materials is not always consistent, but their total production should be consumed in the sintering process, to both save money and recycle wastes. The main scope of this paper is to obtain the least expensive iron ore blend for the sintering process, which will provide suitable chemical and mechanical features for the homogeneous and regular operation of the blast furnace. The systematic use of statistical tools was employed to analyze historical data, including linear and partial correlations applied to the data and fuzzy clustering based on the Sugeno Fuzzy Inference System to establish relationships among the available variables.
NASA Technical Reports Server (NTRS)
1976-01-01
Inputs from prospective LANDSAT-C data users are requested to aid NASA in defining LANDSAT-C mission and data requirements and in making decisions regarding the scheduling of satellite operations and ground data processing operations. Design specifications, multispectral band scanner performance characteristics, satellite schedule operations, and types of available data products are briefly described.
Historical data and analysis for the first five years of KSC STS payload processing
NASA Technical Reports Server (NTRS)
Ragusa, J. M.
1986-01-01
General and specific quantitative and qualitative results were identified from a study of actual operational experience while processing 186 science, applications, and commercial payloads for the first 5 years of Space Transportation System (STS) operations at the National Aeronautics and Space Administration's (NASA) John F. Kennedy Space Center (KSC). All non-Department of Defense payloads from STS-2 through STS-33 were part of the study. Historical data and cumulative program experiences from key personnel were used extensively. Emphasis was placed on various program planning and events that affected KSC processing, payload experiences and improvements, payload hardware condition after arrival, services to customers, and the impact of STS operations and delays. From these initial considerations, operational drivers were identified, data for selected processing parameters collected and analyzed, processing criteria and options determined, and STS payload results and conclusions reached. The study showed a significant reduction in time and effort needed by STS customers and KSC to process a wide variety of payload configurations. Also of significance is the fact that even the simplest payloads required more processing resources than were initially assumed. The success to date of payload integration, testing, and mission operations, however, indicates the soundness of the approach taken and the methods used.
New Directions in Space Operations Services in Support of Interplanetary Exploration
NASA Technical Reports Server (NTRS)
Bradford, Robert N.
2005-01-01
To gain access to the necessary operational processes and data in support of NASA's Lunar/Mars Exploration Initiative, new services, adequate levels of computing cycles, and access to myriad forms of data must be provided to onboard spacecraft and ground-based personnel/systems (Earth, lunar, and Martian) to enable interplanetary exploration by humans. These systems, cycles, and access to vast amounts of development, test, and operational data will be required to provide a level of service not currently available to existing spacecraft, onboard crews, and other operational personnel. Although current voice, video, and data systems supporting space-based operations have been adequate, new highly reliable and autonomous processes and services will be necessary for future space exploration activities. These services will range from the mundane, such as voice in LEO, to voice in interplanetary travel, which because of the high latencies will require new voice processes and standards. New services, like component failure prediction based on data mining of significant quantities of data located at disparate locations, will be required. 3D or holographic representation of onboard components, systems, or family members will greatly improve maintenance, operations, and service restoration, not to mention crew morale. Current operational systems and standards, like the Internet Protocol, will not be able to provide the level of service required end to end, from an end point on the Martian surface, such as a scientific instrument, to a researcher at a university. Ground operations, whether Earth, lunar, or Martian, and in-flight operations to the Moon and especially to Mars will require significant autonomy, which in turn will require access to highly reliable processing capabilities and data storage based on network storage technologies. Significant processing cycles will be needed onboard but could be borrowed from other locations, either ground based or onboard other spacecraft.
Reliability will be a key factor, with onboard and distributed backup processing an absolute requirement. Current cluster processing/Grid technologies may provide the basis for these services. An overview of existing services, of future services that will be required, and of the technologies and standards requiring development will be presented. The purpose of this paper is to initiate a technological roadmap, albeit at a high level, from current voice, video, data, and network technologies and standards (which show promise for adaptation or evolution) to the technologies and standards that need to be redefined or adjusted, and the areas where new ones require development. The roadmap should begin differentiating between unmanned and manned processes/services where applicable. The paper is based in part on the activities of the CCSDS Monitor and Control working group, which is beginning the standardization of these processes. Another element of the paper is based on an analysis of current technologies supporting spaceflight processes and services at JSC, MSFC, GSFC, and to a lesser extent at KSC. Work being accomplished in areas such as Grid computing, data mining, and network storage at ARC, IBM, and the University of Alabama in Huntsville will be researched and analyzed.
Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E
2014-02-11
Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.
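The final step, dividing one task's data communications among its several endpoints, can be sketched in miniature; the function name and the even-split policy below are assumptions for illustration, not the patent's actual algorithm.

```python
# Sketch (hypothetical names): divide one task's contribution to a
# collective operation among that task's endpoints, so each endpoint
# (thread) transfers a contiguous slice of the buffer.
def divide_among_endpoints(buffer, n_endpoints):
    """Return per-endpoint slices that together cover the whole buffer."""
    base, extra = divmod(len(buffer), n_endpoints)
    slices, start = [], 0
    for e in range(n_endpoints):
        length = base + (1 if e < extra else 0)  # spread the remainder
        slices.append(buffer[start:start + length])
        start += length
    return slices

parts = divide_among_endpoints(list(range(10)), 3)
```

Each endpoint then drives its slice independently, which is the sense in which the collective is executed "through the endpoints."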
Yamamoto, Yuta; Iriyama, Yasutoshi; Muto, Shunsuke
2016-04-01
In this article, we propose a smart image-analysis method suitable for extracting target features of hierarchical dimension from original data. The method was applied to three-dimensional volume data of an all-solid-state lithium-ion battery, obtained by an automated sequential sample milling and imaging process using a focused ion beam/scanning electron microscope, to investigate the spatial configuration of voids inside the battery. To automatically extract the full shape and location of the voids, three types of filters were applied consecutively: a median blur filter to extract relatively larger voids, a morphological opening operation filter for small dot-shaped voids, and a morphological closing operation filter for small voids with concave contrasts. The three data cubes separately processed by these filters were integrated by a union operation into the final unified volume data, which confirmed the correct extraction of the voids over the entire range of dimensions contained in the original data.
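The three-filter-then-union pipeline can be sketched on a 1-D binary "void" signal; these pure-Python stand-ins for the median, opening, and closing filters (window of 3) only illustrate the idea, not the 3-D volume processing actually used.

```python
# 1-D toy versions of the filters: median keeps larger runs, opening
# removes isolated bright dots, closing fills isolated dark gaps, and a
# union combines the three separately filtered results.
def window3(sig, i):
    return [sig[max(i - 1, 0)], sig[i], sig[min(i + 1, len(sig) - 1)]]

def median3(sig):
    return [sorted(window3(sig, i))[1] for i in range(len(sig))]

def erode3(sig):
    return [min(window3(sig, i)) for i in range(len(sig))]

def dilate3(sig):
    return [max(window3(sig, i)) for i in range(len(sig))]

def opening3(sig):      # erosion then dilation: drops narrow features
    return dilate3(erode3(sig))

def closing3(sig):      # dilation then erosion: fills narrow gaps
    return erode3(dilate3(sig))

def union(*masks):      # combine the separately filtered data cubes
    return [max(vals) for vals in zip(*masks)]

sig = [0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0]
combined = union(median3(sig), opening3(sig), closing3(sig))
```

The union preserves features that any one of the filters recovers, which is why the combined volume captures voids across the whole size range.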
Davenport, Paul B; Carter, Kimberly F; Echternach, Jeffrey M; Tuck, Christopher R
2018-02-01
High-reliability organizations (HROs) demonstrate unique and consistent characteristics, including operational sensitivity and control, situational awareness, hyperacute use of technology and data, and actionable process transformation. System complexity and reliance on information-based processes challenge healthcare organizations to replicate HRO processes. This article describes a healthcare organization's 3-year journey to achieve key HRO features to deliver high-quality, patient-centric care via an operations center powered by the principles of high-reliability data and software to impact patient throughput and flow.
Network acceleration techniques
NASA Technical Reports Server (NTRS)
Crowley, Patricia (Inventor); Maccabe, Arthur Barney (Inventor); Awrach, James Michael (Inventor)
2012-01-01
Splintered offloading techniques with receive batch processing are described for network acceleration. Such techniques offload specific functionality to a NIC while maintaining the bulk of the protocol processing in the host operating system ("OS"). The resulting protocol implementation allows the application to bypass the protocol processing of the received data. This is accomplished by moving data from the NIC directly to the application through direct memory access ("DMA") and batch processing the receive headers in the host OS when the host OS is interrupted to perform other work. Batch processing receive headers allows the data path to be separated from the control path. Unlike operating system bypass, however, the operating system still fully manages the network resource and has relevant feedback about traffic and flows. Embodiments of the present disclosure can therefore address the challenges of networks with extreme bandwidth delay products (BWDP).
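The data-path/control-path separation described above can be sketched as follows; the structures and names are hypothetical stand-ins, not the disclosure's implementation.

```python
from collections import deque

# Data path: payloads go straight to the application buffer (standing
# in for DMA). Control path: headers are queued and handled later in
# one batch, when the OS is doing other work anyway.
app_buffer = []
header_queue = deque()

def receive(packet):
    header, payload = packet
    app_buffer.append(payload)    # data path: bypasses protocol processing
    header_queue.append(header)   # control path: defer header bookkeeping

def process_headers_batch():
    """Run protocol bookkeeping for all queued headers at once."""
    processed = []
    while header_queue:
        hdr = header_queue.popleft()
        processed.append({"seq": hdr["seq"], "acked": True})
    return processed

for seq, data in enumerate([b"alpha", b"beta", b"gamma"]):
    receive(({"seq": seq}, data))
acks = process_headers_batch()
```

Because the OS still sees every header (just later, in a batch), it keeps full visibility into traffic and flows, unlike a pure OS-bypass design.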
NASA Technical Reports Server (NTRS)
Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)
1983-01-01
A high speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, whence the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not in lockstep but execute their own copies of the program individually unless or until another overall processor array synchronization instruction is issued.
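The "all processors finish one task before any proceeds" discipline is essentially barrier synchronization, which can be sketched with threads standing in for array processors (a loose analogy only; the patented hardware is not a thread pool).

```python
import threading

# Each worker runs its own copy of a task independently, then waits at
# a barrier before the whole array moves on to the next phase.
N = 4
barrier = threading.Barrier(N)
phase_log = []
lock = threading.Lock()

def worker(rank):
    with lock:
        phase_log.append(("phase1", rank))
    barrier.wait()                 # array-wide synchronization point
    with lock:
        phase_log.append(("phase2", rank))

threads = [threading.Thread(target=worker, args=(r,)) for r in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every phase-1 entry must precede every phase-2 entry.
first_phase2 = min(i for i, (p, _) in enumerate(phase_log) if p == "phase2")
```

Between barriers the workers interleave freely, mirroring the non-lockstep execution the abstract describes.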
NASA Technical Reports Server (NTRS)
1975-01-01
NASA structural analysis (NASTRAN) computer program is operational on three series of third generation computers. The problem and difficulties involved in adapting NASTRAN to a fourth generation computer, namely, the Control Data STAR-100, are discussed. The salient features which distinguish Control Data STAR-100 from third generation computers are hardware vector processing capability and virtual memory. A feasible method is presented for transferring NASTRAN to Control Data STAR-100 system while retaining much of the machine-independent code. Basic matrix operations are noted for optimization for vector processing.
Fifolt, Matthew; Blackburn, Justin; Rhodes, David J; Gillespie, Shemeka; Bennett, Aleena; Wolff, Paul; Rucks, Andrew
Historically, double data entry (DDE) has been considered the criterion standard for minimizing data entry errors. However, previous studies considered data entry alternatives through the limited lens of data accuracy. This study supplies information regarding data accuracy, operational efficiency, and cost for DDE and Optical Mark Recognition (OMR) for processing the Consumer Assessment of Healthcare Providers and Systems 5.0 survey. To assess data accuracy, we compared error rates for DDE and OMR by dividing the number of surveys that were arbitrated by the total number of surveys processed for each method. To assess operational efficiency, we tallied the cost of data entry for DDE and OMR after survey receipt. Costs were calculated on the basis of personnel, depreciation for capital equipment, and costs of noncapital equipment. The cost savings attributed to DDE were negated by the operational efficiency of OMR. The difference in arbitration rates between DDE and OMR was statistically significant; however, this statistical significance did not translate into practical significance. The potential benefits of DDE in terms of data accuracy did not outweigh the operational efficiency, and thereby financial savings, of OMR.
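The arbitration-rate comparison reduces to simple division; the counts below are invented for illustration and are not the study's data.

```python
# Error rate for a data-entry method = arbitrated surveys / surveys
# processed (counts here are hypothetical).
def arbitration_rate(arbitrated, processed):
    return arbitrated / processed

dde_rate = arbitration_rate(12, 1000)   # double data entry
omr_rate = arbitration_rate(25, 1000)   # optical mark recognition
difference = omr_rate - dde_rate        # accuracy gap between methods
```

A gap of this size can be statistically detectable on large samples while still being too small to matter operationally, which is the study's "statistical but not practical significance" point.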
BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.
White, B J; Amrine, D E; Larson, R L
2018-04-14
Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that is currently collected or that could be collected on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.
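The partitioning step of the predictive analytic process described above can be sketched as follows; the records, labels, and stand-in majority-class "classifier" are illustrative assumptions, not the authors' algorithms.

```python
import random

# Split records into a model-building set and a naive hold-out set
# before measuring predictive accuracy, as the framework prescribes.
def partition(records, holdout_fraction, seed=0):
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    k = int(len(shuffled) * holdout_fraction)
    return shuffled[k:], shuffled[:k]     # (train, holdout)

def majority_class(labels):
    """Trivial baseline 'model': predict the most common label."""
    return max(set(labels), key=labels.count)

records = [("healthy", i) for i in range(80)] + [("sick", i) for i in range(20)]
train, holdout = partition(records, 0.3)
model = majority_class([r[0] for r in train])
accuracy = sum(1 for r in holdout if r[0] == model) / len(holdout)
```

Evaluating on the naive hold-out set, rather than the data used to build and refine the model, is what makes the reported accuracy an honest estimate.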
CTEPP STANDARD OPERATING PROCEDURE FOR PROCESSING COMPLETED DATA FORMS (SOP-4.10)
This SOP describes the methods for processing completed data forms. Key components of the SOP include (1) field editing, (2) data form Chain-of-Custody, (3) data processing verification, (4) coding, (5) data entry, (6) programming checks, (7) preparation of data dictionaries, cod...
Digital processing of radiographic images
NASA Technical Reports Server (NTRS)
Bond, A. D.; Ramapriyan, H. K.
1973-01-01
Techniques and software documentation for the digital enhancement of radiographs are presented. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency domain and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
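The recursive-versus-nonrecursive distinction can be sketched with a first-order recursive smoother against a moving-average (nonrecursive) filter; this is a generic illustration of the cost difference, not the matched filters described in the report.

```python
# A recursive (IIR) smoother does one multiply-add per sample, reusing
# its previous output; a nonrecursive (FIR) moving average touches k
# samples per output. This per-sample cost gap is why recursive spatial
# filtering can beat FFT-based convolution for modest kernel sizes.
def recursive_smooth(x, alpha=0.5):
    y, prev = [], 0.0
    for v in x:
        prev = alpha * v + (1 - alpha) * prev   # one multiply-add per sample
        y.append(prev)
    return y

def moving_average(x, k=3):
    out = []
    for i in range(len(x)):
        window = x[max(0, i - k + 1): i + 1]    # k samples touched each step
        out.append(sum(window) / len(window))
    return out

signal = [0, 0, 4, 0, 0]
iir = recursive_smooth(signal)
fir = moving_average(signal)
```

Note how the recursive filter's response to the spike decays over following samples, while the FIR response is strictly confined to its window.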
Efficient computer algorithms for infrared astronomy data processing
NASA Technical Reports Server (NTRS)
Pelzmann, R. F., Jr.
1976-01-01
Data processing techniques to be studied for use in infrared astronomy data analysis systems are outlined. Only data from space based telescope systems operating as survey instruments are considered. Resulting algorithms, and in some cases specific software, will be applicable for use with the infrared astronomy satellite (IRAS) and the shuttle infrared telescope facility (SIRTF). Operational tests made during the investigation use data from the celestial mapping program (CMP). The overall task differs from that involved in ground-based infrared telescope data reduction.
System for processing an encrypted instruction stream in hardware
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griswold, Richard L.; Nickless, William K.; Conrad, Ryan C.
A system and method of processing an encrypted instruction stream in hardware is disclosed. Main memory stores the encrypted instruction stream and unencrypted data. A central processing unit (CPU) is operatively coupled to the main memory. A decryptor is operatively coupled to the main memory and located within the CPU. The decryptor decrypts the encrypted instruction stream upon receipt of an instruction fetch signal from a CPU core. Unencrypted data is passed through to the CPU core without decryption upon receipt of a data fetch signal.
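The fetch-gated decryption idea can be reduced to a toy model: XOR with a one-byte key stands in for whatever cipher the system actually uses, and all addresses and values below are invented.

```python
# Toy decryptor: instruction fetches are decrypted on the way to the
# CPU core; data fetches pass through untouched (assumed 1-byte XOR
# key, for illustration only).
KEY = 0x5A

def fetch(memory, addr, is_instruction):
    byte = memory[addr]
    return byte ^ KEY if is_instruction else byte

plain_opcode = 0x90
memory = bytes([plain_opcode ^ KEY,  # encrypted instruction byte
                0x42])               # unencrypted data byte

instr = fetch(memory, 0, is_instruction=True)
data = fetch(memory, 1, is_instruction=False)
```

The gate on the fetch-type signal is the essential feature: the same memory bus serves both streams, and only the instruction path passes through the decryptor.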
Wireless communication devices and movement monitoring methods
Skorpik, James R.
2006-10-31
Wireless communication devices and movement monitoring methods are described. In one aspect, a wireless communication device includes a housing, wireless communication circuitry coupled with the housing and configured to communicate wireless signals, movement circuitry coupled with the housing and configured to provide movement data regarding movement sensed by the movement circuitry, and event processing circuitry coupled with the housing and the movement circuitry, wherein the event processing circuitry is configured to process the movement data, and wherein at least a portion of the event processing circuitry is configured to operate in a first operational state having a different power consumption rate compared with a second operational state.
Effective low-level processing for interferometric image enhancement
NASA Astrophysics Data System (ADS)
Joo, Wonjong; Cha, Soyoung S.
1995-09-01
The hybrid operation of digital image processing and a knowledge-based AI system has been recognized as a desirable approach to the automated evaluation of noise-ridden interferograms. Early noise/data reduction, before the phase is extracted, is essential for the success of knowledge-based processing. In this paper, new concepts for effective, interactive low-level processing operators, namely a background-matched filter and a directional-smoothing filter, are developed and tested with transonic aerodynamic interferograms. The results indicate that these new operators have promising advantages in noise/data reduction over conventional ones, leading to the success of high-level, intelligent phase extraction.
2004-06-01
[Slide/figure residue; recoverable terms: Situation Understanding, Common Operational Pictures, Planning & Decision Support Capabilities, Message & Order Processing, Common Languages & Data Models, Modeling & Simulation Domain]
Reusable Rocket Engine Operability Modeling and Analysis
NASA Technical Reports Server (NTRS)
Christenson, R. L.; Komar, D. R.
1998-01-01
This paper describes the methodology, model, input data, and analysis results of a reusable launch vehicle engine operability study conducted with the goal of supporting design from an operations perspective. Paralleling performance analyses in schedule and method, this requires the use of metrics in a validated operations model useful for design, sensitivity, and trade studies. Operations analysis in this view is one of several design functions. An operations concept was developed for a given engine concept, and the predicted operations and maintenance processes were incorporated into simulation models. Historical operations data at a level of detail suitable to the model objectives were collected, analyzed, and formatted for use with the models; the simulations were run; and results were collected and presented. The input data used included scheduled and unscheduled timeline and resource information collected into a Space Transportation System (STS) Space Shuttle Main Engine (SSME) historical launch operations database. Results reflect the importance not only of reliable hardware but also of operations and corrective-maintenance process improvements.
Bibliography On Multiprocessors And Distributed Processing
NASA Technical Reports Server (NTRS)
Miya, Eugene N.
1988-01-01
The Multiprocessor and Distributed Processing Bibliography package consists of a large machine-readable bibliographic data base which, in addition to supporting the usual keyword searches, is used for producing citations, indexes, and cross-references. The data base contains UNIX(R) "refer"-formatted ASCII data, runs on any computer under the UNIX(R) operating system, and is easily convertible to other operating systems. It requires approximately one megabyte of secondary storage. The bibliography was compiled in 1985.
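refer(1) records are blank-line-separated fields beginning with `%`; a small parsing-and-search sketch (the sample entries below are made up, not records from the actual bibliography) shows the kind of keyword search the data base supports.

```python
# Two fabricated refer-style records: %A author, %T title, %D date,
# %K keywords.
SAMPLE = """\
%A E. N. Miya
%T Multiprocessor/Distributed Processing Bibliography
%D 1985
%K multiprocessors

%A A. N. Other
%T Some Other Citation
%D 1984
%K networks
"""

def parse_refer(text):
    """Parse refer-format text into a list of {field-letter: value} dicts."""
    records = []
    for chunk in text.strip().split("\n\n"):
        rec = {}
        for line in chunk.splitlines():
            if line.startswith("%"):
                rec[line[1]] = line[3:]   # '%A E. N. Miya' -> {'A': 'E. N. Miya'}
        records.append(rec)
    return records

def keyword_search(records, term):
    return [r["T"] for r in records if term in r.get("K", "")]

records = parse_refer(SAMPLE)
hits = keyword_search(records, "multiprocessors")
```

The same parsed records can feed index and cross-reference generation, which is what the package uses the data base for beyond plain searches.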
Graph processing platforms at scale: practices and experiences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seung-Hwan; Lee, Sangkeun; Brown, Tyler C
2015-01-01
Graph analysis unveils hidden associations in data across many phenomena and artifacts, such as road networks, social networks, genomic information, and scientific collaboration. Unfortunately, the wide diversity in the characteristics of graphs and graph operations makes it challenging to find the right combination of tools and algorithm implementations to discover desired knowledge from a target data set. This study presents an extensive empirical study of three representative graph processing platforms: Pegasus, GraphX, and Urika. Each system represents a combination of options in data model, processing paradigm, and infrastructure. We benchmarked each platform using three popular graph operations, degree distribution, connected components, and PageRank, over a variety of real-world graphs. Our experiments show that each graph processing platform has different strengths depending on the type of graph operation. While Urika performs best in non-iterative operations like degree distribution, GraphX outperforms the others in iterative operations like connected components and PageRank. In addition, we discuss challenges in optimizing the performance of each platform over large-scale real-world graphs.
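Two of the benchmarked operations are easy to sketch in pure Python on a toy edge list (the graph itself is made up); this illustrates what the platforms compute, not how they compute it at scale.

```python
from collections import Counter, deque

# Toy undirected graph: one 3-node path and one disjoint edge.
edges = [(0, 1), (1, 2), (3, 4)]

def degree_distribution(edges):
    """Map degree -> number of nodes with that degree (non-iterative)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return Counter(deg.values())

def connected_components(edges):
    """BFS-based components (an inherently iterative computation)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

dist = degree_distribution(edges)
comps = connected_components(edges)
```

Degree distribution needs only one pass over the edges, while connected components must repeatedly expand frontiers, which is the non-iterative/iterative split behind the platforms' differing strengths.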
Fast, Massively Parallel Data Processors
NASA Technical Reports Server (NTRS)
Heaton, Robert A.; Blevins, Donald W.; Davis, ED
1994-01-01
Proposed fast, massively parallel data processor contains 8x16 array of processing elements with efficient interconnection scheme and options for flexible local control. Processing elements communicate with each other on "X" interconnection grid with external memory via high-capacity input/output bus. This approach to conditional operation nearly doubles speed of various arithmetic operations.
[Numerical simulation and operation optimization of biological filter].
Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing
2014-12-01
BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process at the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, operation data from September 2013 were used for sensitivity analysis and model calibration, and operation data from October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate the practical DNBF + BAF process, and that the most sensitive parameters were those related to biofilm, OHOs, and aeration. After calibration and validation, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operation condition for discharge standard B was: reflux ratio = 50%, ceasing methanol addition, influent C/N = 4.43; while the best operation condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg x L(-1) after methanol addition, influent C/N = 5.10.
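The influent C/N arithmetic behind the methanol-dosing target can be sketched; the total-nitrogen value is an assumed illustration chosen so that COD = 155 mg/L gives C/N ≈ 5.10, and is not plant data.

```python
# C/N here is taken as influent COD divided by influent total nitrogen
# (an assumption for illustration); methanol dosing raises COD until
# the target ratio is reached.
def cn_ratio(cod_mg_l, tn_mg_l):
    return cod_mg_l / tn_mg_l

def cod_needed(target_cn, tn_mg_l):
    """COD concentration required to hit a target C/N ratio."""
    return target_cn * tn_mg_l

tn = 30.4                          # assumed influent total nitrogen, mg/L
cod_for_a = cod_needed(5.10, tn)   # COD target for discharge standard A
ratio = cn_ratio(155, tn)          # C/N achieved at 155 mg/L COD
```
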
Use of a multimission system for cost effective support of planetary science data processing
NASA Technical Reports Server (NTRS)
Green, William B.
1994-01-01
JPL's Multimission Operations Systems Office (MOSO) provides a multimission facility at JPL for processing science instrument data from NASA's planetary missions. This facility, the Multimission Image Processing System (MIPS), is developed and maintained by MOSO to meet requirements that span the NASA family of planetary missions. Although the word 'image' appears in the title, MIPS is used to process instrument data from a variety of science instruments. This paper describes the design of a new system architecture now being implemented within the MIPS to support future planetary mission activities at significantly reduced operations and maintenance cost.
30 CFR 939.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall become applicable...
30 CFR 903.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Mining Operations, pertaining to petitions, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities, applies to surface...
30 CFR 939.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall become applicable...
30 CFR 903.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Mining Operations, pertaining to petitions, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities, applies to surface...
30 CFR 903.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Mining Operations, pertaining to petitions, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities, applies to surface...
30 CFR 939.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall become applicable...
30 CFR 939.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall become applicable...
30 CFR 939.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall become applicable...
30 CFR 903.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Mining Operations, pertaining to petitions, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities, applies to surface...
NASA Technical Reports Server (NTRS)
Lee, Hyun H.
2012-01-01
MERTELEMPROC processes telemetered data in data product format and generates Experiment Data Records (EDRs) for many instruments (HAZCAM, NAVCAM, PANCAM, microscopic imager, Moessbauer spectrometer, APXS, RAT, and EDLCAM) on the Mars Exploration Rover (MER). If the data is compressed, MERTELEMPROC decompresses it with the appropriate decompression algorithm; two compression algorithms (ICER and LOCO) are used on MER. This program fulfills a MER-specific need to generate Level 1 products within a 60-second time requirement. EDRs generated by this program are used by merinverter, marscahv, marsrad, and marsjplstereo to generate higher-level products for mission operations. MERTELEMPROC was the first GDS program to process the data product. Metadata of the data product is in XML format. The software supports user-configurable input parameters, per-product processing (not stream-based processing), and fail-over if the leading image header is corrupted. It is used within the MER automated pipeline. MERTELEMPROC is part of the OPGS (Operational Product Generation Subsystem) automated pipeline, which analyzes images returned by in situ spacecraft and creates Level 1 products to assist in operations, science, and outreach.
LABORATORY PROCESS CONTROLLER USING NATURAL LANGUAGE COMMANDS FROM A PERSONAL COMPUTER
NASA Technical Reports Server (NTRS)
Will, H.
1994-01-01
The complex environment of the typical research laboratory requires flexible process control. This program provides natural language process control from an IBM PC or compatible machine. Process control schedules often require frequent changes, sometimes several times per day, including adding, deleting, and rearranging steps in a process. This program sets up a process control system that can either run without an operator or be run by workers with limited programming skills. The software system includes three programs. Two of the programs, written in FORTRAN77, record data and control research processes. The third program, written in Pascal, generates the FORTRAN subroutines used by the other two programs to identify the user commands with the user-written device drivers. The software system also includes an input data set which allows the user to define the user commands which are to be executed by the computer. To set the system up, the operator writes device driver routines for all of the controlled devices. Once set up, this system requires only an input file containing natural language command lines which tell the system what to do and when to do it. The operator can make up custom commands for operating and taking data from external research equipment at any time of the day or night without being in attendance. This process control system requires a personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. The program requires a FORTRAN77 compiler and user-written device drivers. This program was developed in 1989 and has a memory requirement of about 62 Kbytes.
Data Processing for the Space-Based Desis Hyperspectral Sensor
NASA Astrophysics Data System (ADS)
Carmona, E.; Avbelj, J.; Alonso, K.; Bachmann, M.; Cerra, D.; Eckardt, A.; Gerasch, B.; Graham, L.; Günther, B.; Heiden, U.; Kerr, G.; Knodt, U.; Krutz, D.; Krawcyk, H.; Makarau, A.; Miller, R.; Müller, R.; Perkins, R.; Walter, I.
2017-05-01
The German Aerospace Center (DLR) and Teledyne Brown Engineering (TBE) have established a collaboration to develop and operate a new space-based hyperspectral sensor, the DLR Earth Sensing Imaging Spectrometer (DESIS). DESIS will provide space-based hyperspectral data in the VNIR with high spectral resolution and near-global coverage. While TBE provides the platform and infrastructure for operation of the DESIS instrument on the International Space Station, DLR is responsible for providing the instrument and the processing software. The DESIS instrument is equipped with novel characteristics for an imaging spectrometer, such as high spectral resolution (2.55 nm), a mirror pointing unit, and a CMOS sensor operated in rolling shutter mode. We present here an overview of the DESIS instrument and its processing chain, emphasizing the effect of the novel characteristics of DESIS on the data processing and final data products. Furthermore, we analyse in more detail the effect of the rolling shutter on the DESIS data and possible mitigation/correction strategies.
Lessons Learned From Developing Three Generations of Remote Sensing Science Data Processing Systems
NASA Technical Reports Server (NTRS)
Tilmes, Curt; Fleig, Albert J.
2005-01-01
The Biospheric Information Systems Branch at NASA's Goddard Space Flight Center has developed three generations of Science Investigator-led Processing Systems for use with various remote sensing instruments. The first system is used for data from the MODIS instruments flown on NASA's Earth Observing System (EOS) Terra and Aqua spacecraft, launched in 1999 and 2002 respectively. The second generation is for the Ozone Monitoring Instrument flying on the EOS Aura spacecraft launched in 2004. We are now developing a third generation of the system for evaluation science data processing for the Ozone Mapping and Profiler Suite (OMPS) to be flown by the NPOESS Preparatory Project (NPP) in 2006. The initial system was based on large-scale proprietary hardware, operating and database systems. The current OMI system and the OMPS system being developed are based on commodity hardware, the Linux operating system, and PostgreSQL, an open source RDBMS. The new system distributes its data archive across multiple server hosts and processes jobs on multiple processor boxes. We have created several instances of this system, including one for operational processing, one for testing and reprocessing, and one for applications development and scientific analysis. Prior to receiving the first data from OMI we applied the system to reprocessing information from the Solar Backscatter Ultraviolet (SBUV) and Total Ozone Mapping Spectrometer (TOMS) instruments flown from 1978 until now. The system was able to process 25 years (108,000 orbits) of data and produce 800,000 files (400 GiB) of level 2 and level 3 products in less than a week. We will describe the lessons we have learned and tradeoffs between system design, hardware, operating systems, operational staffing, user support and operational procedures. During each generational phase, the system has become more generic and reusable. While the system is not currently shrink-wrapped, we believe it is to the point where it could be readily adopted, with substantial cost savings, for other similar tasks.
OBSERVATIONAL DATA PROCESSING AT NCEP
Observational data at NCEP are used not only for operations, but also for research and study. The various NCEP networks access the observational data base. (Dennis Keyser, NOAA/NWS/NCEP/EMC)
30 CFR 922.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 922.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 947.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 941.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 922.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 912.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 905.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Mining Operations, pertaining to petitions, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 910.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 905.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Mining Operations, pertaining to petitions, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 905.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Mining Operations, pertaining to petitions, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 941.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 912.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 941.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 922.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 947.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 947.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 941.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 910.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 910.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 910.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 905.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Mining Operations, pertaining to petitions, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 941.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 912.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 947.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 947.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 922.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
30 CFR 912.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and regulatory responsibilities shall apply to surface...
NASA Technical Reports Server (NTRS)
1993-01-01
The Second International Symposium featured 135 oral presentations in these 12 categories: Future Missions and Operations; System-Level Architectures; Mission-Specific Systems; Mission and Science Planning and Sequencing; Mission Control; Operations Automation and Emerging Technologies; Data Acquisition; Navigation; Operations Support Services; Engineering Data Analysis of Space Vehicle and Ground Systems; Telemetry Processing, Mission Data Management, and Data Archiving; and Operations Management. Topics focused on improvements in the productivity, effectiveness, efficiency, and quality of mission operations, ground systems, and data acquisition. Also emphasized were accomplishments in management of human factors; use of information systems to improve data retrieval, reporting, and archiving; design and implementation of logistics support for mission operations; and the use of telescience and teleoperations.
NASA Astrophysics Data System (ADS)
Ruiz-Cárcel, C.; Jaramillo, V. H.; Mba, D.; Ottewill, J. R.; Cao, Y.
2016-01-01
The detection and diagnosis of faults in industrial processes is a very active field of research due to the reduction in maintenance costs achieved by the implementation of process monitoring algorithms such as Principal Component Analysis, Partial Least Squares or more recently Canonical Variate Analysis (CVA). Typically the condition of rotating machinery is monitored separately using vibration analysis or other specific techniques. Conventional vibration-based condition monitoring techniques are based on the tracking of key features observed in the measured signal. Typically steady-state loading conditions are required to ensure consistency between measurements. In this paper, a technique based on merging process and vibration data is proposed with the objective of improving the detection of mechanical faults in industrial systems working under variable operating conditions. The capabilities of CVA for detection and diagnosis of faults were tested using experimental data acquired from a compressor test rig where different process faults were introduced. Results suggest that the combination of process and vibration data can effectively improve the detectability of mechanical faults in systems working under variable operating conditions.
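CVA itself builds covariance matrices between past and future measurement windows; as a much simpler stand-in for the merged process-and-vibration monitoring idea, the sketch below computes a Hotelling-T²-style statistic with a diagonal covariance over a combined feature vector. All feature names and values here are invented for illustration:

```python
# Train per-feature statistics on healthy data, then score new samples;
# a large score flags a deviation from the healthy operating envelope.
import statistics

def fit_monitor(train_rows):
    """Per-feature mean and std estimated from healthy training data."""
    cols = list(zip(*train_rows))
    return [(statistics.mean(c), statistics.stdev(c)) for c in cols]

def t2(row, model):
    """Sum of squared standardized deviations (diagonal-covariance T^2)."""
    return sum(((x - m) / s) ** 2 for x, (m, s) in zip(row, model))

# Columns: [process pressure, vibration RMS] -- hypothetical units.
healthy = [[1.0, 0.10], [1.1, 0.12], [0.9, 0.11], [1.05, 0.09], [0.95, 0.13]]
model = fit_monitor(healthy)

normal_score = t2([1.0, 0.11], model)
faulty_score = t2([1.0, 0.40], model)  # vibration jump: mechanical fault
```

The point of combining the columns is the one the paper makes: a fault visible only in the vibration channel still drives the joint statistic past its healthy range even while the process channel looks normal.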
NASA Technical Reports Server (NTRS)
1988-01-01
This requirements and analyses of commercial operations (RACO) study data release reflects the current status of research activities of the Microgravity and Materials Processing Facility under Modification No. 21 to NASA/MSFC Contract NAS8-36122. Section 1 includes 65 commercial space processing projects suitable for deployment aboard the Space Station. Section 2 contains reports of the R:BASE (TM) electronic data base being used in the study, synopses of the experiments, and a summary of data on the experimental facilities. Section 3 is a discussion of video and data compression techniques used as well as a mission timeline analysis.
Operational Control Procedures for the Activated Sludge Process: Appendix.
ERIC Educational Resources Information Center
West, Alfred W.
This document is the appendix for a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. Categories discussed include: control test data, trend charts, moving averages, semi-logarithmic plots, probability…
Word Processing Curriculum: Attitudes/Skills Business Educators Should Update.
ERIC Educational Resources Information Center
Robertson, Jane R.; West, Judy F.
1984-01-01
Discusses a study to gain data enabling curricula planners and business educators to plan an effective word processing curriculum, to determine basic skills and attitudes needed by word processing operators, and to make recommendations to help word processor operators increase productivity. (JOW)
NASA Technical Reports Server (NTRS)
Conway, R.; Matuck, G. N.; Roe, J. M.; Taylor, J.; Turner, A.
1975-01-01
A vortex information display system is described which provides flexible control through system-user interaction for collecting wing-tip-trailing vortex data, processing this data in real time, displaying the processed data, storing raw data on magnetic tape, and post processing raw data. The data is received from two asynchronous laser Doppler velocimeters (LDV's) and includes position, velocity, and intensity information. The raw data is written onto magnetic tape for permanent storage and is also processed in real time to locate vortices and plot their positions as a function of time. The interactive capability enables the user to make real time adjustments in processing data and provides a better definition of vortex behavior. Displaying the vortex information in real time produces a feedback capability to the LDV system operator allowing adjustments to be made in the collection of raw data. Both raw data and processing can be continually upgraded during flyby testing to improve vortex behavior studies. The post-analysis capability permits the analyst to perform in-depth studies of test data and to modify vortex behavior models to improve transport predictions.
CFDP Evolutions and File Based Operations
NASA Astrophysics Data System (ADS)
Valverde, Alberto; Taylor, Chris; Magistrati, Giorgio; Maiorano, Elena; Colombo, Cyril; Haddow, Colin
2015-09-01
The complexity of the scientific ESA missions in terms of data handling requirements has been steadily increasing in recent years. The availability of high-speed telemetry links to ground, the increase in on-board data storage capacity, and the processing performance of the spacecraft avionics have enabled this process. Nowadays, it is common to find missions with hundreds of gigabytes of daily on-board generated data, with terabytes of on-board mass memory, and with downlinks of several hundred megabits per second. These technological trends push for an upgrade of the spacecraft data handling and operations concept: smarter solutions are needed to sustain such high data rates and volumes while improving on-board autonomy and easing operations. This paper describes the different activities carried out to adapt to the new data handling scenario. It contains an analysis of the proposed operations concept for file-based spacecraft, including the updates to the PUS and CFDP standards.
WFIRST: User and mission support at ISOC - IPAC Science Operations Center
NASA Astrophysics Data System (ADS)
Akeson, Rachel; Armus, Lee; Bennett, Lee; Colbert, James; Helou, George; Kirkpatrick, J. Davy; Laine, Seppo; Meshkat, Tiffany; Paladini, Roberta; Ramirez, Solange; Wang, Yun; Xie, Joan; Yan, Lin
2018-01-01
The science center for WFIRST is distributed between the Goddard Space Flight Center, the Infrared Processing and Analysis Center (IPAC) and the Space Telescope Science Institute (STScI). The main functions of the IPAC Science Operations Center (ISOC) are:
* Conduct the GO, archival and theory proposal submission and evaluation process
* Support the coronagraph instrument, including observation planning, calibration and data processing pipeline, generation of data products, and user support
* Microlensing survey data processing pipeline, generation of data products, and user support
* Community engagement including conferences, workshops and general support of the WFIRST exoplanet community
We will describe the components planned to support these functions and the community of WFIRST users.
NASA Astrophysics Data System (ADS)
Zhukovskiy, Y.; Koteleva, N.
2017-10-01
Analysis of the technical and technological conditions under which emergencies arise during the operation of electromechanical equipment at mineral and raw materials enterprises shows that, when developing the basis for ensuring safe operation, it is necessary to take into account not only the technical condition but also the non-stationarity of the equipment's operating conditions and of the operational parameters of technological processes. Malfunctions of individual parts of a machine, if not detected in time, can lead to severe accidents at work, as well as to unplanned downtime and loss of profits. That is why the issues of obtaining and processing the big data produced during the life cycle of electromechanical equipment are very important: assessing the current state of the equipment, timely diagnosing emergency and pre-emergency modes of its operation, estimating the residual resource, and predicting the technical state on the basis of machine learning. This article is dedicated to developing a special method of data storage, collection and aggregation for determining the life-cycle resources of electromechanical equipment. This method can be used when working with big data and allows extracting knowledge from different data types: the plant's historical data, which contain information about electromechanical equipment operation, and the factory's historical data, which contain information about the production of electromechanical equipment.
NASA Astrophysics Data System (ADS)
Schmidt, S.; Heyns, P. S.; de Villiers, J. P.
2018-02-01
In this paper, a fault diagnostic methodology is developed which is able to detect, locate and trend gear faults under fluctuating operating conditions when only vibration data from a single transducer, measured on a healthy gearbox, are available. A two-phase feature extraction and modelling process is proposed to infer the operating condition and, based on the operating condition, to detect changes in the machine condition. Information from optimised machine and operating condition hidden Markov models is statistically combined to generate a discrepancy signal which is post-processed to infer the condition of the gearbox. The discrepancy signal is processed and combined with statistical methods for automatic fault detection and localisation and to perform fault trending over time. The proposed methodology is validated on experimental data, and a tacholess order tracking methodology is used to enhance the cost-effectiveness of the diagnostic methodology.
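The discrepancy-signal idea above can be illustrated with a drastically simplified stand-in: a single Gaussian fitted to healthy vibration samples replaces the hidden Markov models, and the discrepancy is the negative average log-likelihood of new data under that healthy model. All signal values here are synthetic:

```python
# Fit a healthy-machine model, then score new windows of data; faulty data
# is poorly explained by the healthy model, so its discrepancy is higher.
import math
import statistics

def gauss_loglik(x, mean, std):
    """Log-density of x under a Gaussian N(mean, std^2)."""
    return -0.5 * math.log(2 * math.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)

healthy_train = [0.0, 0.1, -0.1, 0.05, -0.05, 0.02]
mu, sigma = statistics.mean(healthy_train), statistics.stdev(healthy_train)

def discrepancy(sample):
    """Negative average log-likelihood under the healthy model."""
    return -sum(gauss_loglik(x, mu, sigma) for x in sample) / len(sample)

d_healthy = discrepancy([0.03, -0.02, 0.04])
d_faulty = discrepancy([0.5, 0.6, 0.55])  # amplitude growth from a gear fault
```

Trending d over successive windows, as the paper does with its HMM-based version, then turns a one-shot detector into a degradation monitor.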
User's manual SIG: a general-purpose signal processing program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lager, D.; Azevedo, S.
1983-10-25
SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time- and frequency-domain signals. However, it has been designed to ultimately accommodate other representations for data such as multiplexed signals and complex matrices. Many of the basic operations one would perform on digitized data are contained in the core SIG package. Out of these core commands, more powerful signal processing algorithms may be built. Many different operations on time- and frequency-domain signals can be performed by SIG. They include operations on the samples of a signal, such as adding a scalar to each sample; operations on the entire signal, such as digital filtering; and operations on two or more signals, such as adding two signals. Signals may be simulated, such as a pulse train or a random waveform. Graphics operations display signals and spectra.
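The three classes of core SIG operations listed above can be sketched in Python (these are illustrative re-implementations, not SIG code): a per-sample operation, a two-signal operation, and a whole-signal operation, with a 3-point moving average standing in for digital filtering:

```python
# Per-sample operation: add a scalar to every sample.
def add_scalar(signal, c):
    return [x + c for x in signal]

# Two-signal operation: element-wise sum of two signals.
def add_signals(a, b):
    return [x + y for x, y in zip(a, b)]

# Whole-signal operation: a simple moving-average smoothing filter.
def moving_average(signal, width=3):
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

s = [1.0, 2.0, 3.0, 4.0]
```

Chaining such primitives, e.g. smoothing a signal before adding it to another, is the "build more powerful algorithms out of core commands" pattern the abstract describes.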
Processing Cones: A Computational Structure for Image Analysis.
1981-12-01
A computational structure for image analysis applications, referred to as a processing cone, is described and sample algorithms are presented. A fundamental characteristic of the structure is its hierarchical organization into two-dimensional arrays of decreasing resolution. In this architecture, a prototypical function is defined on a local window of data and applied uniformly to all windows in a parallel manner. Three basic modes of processing are supported in the cone: reduction operations (upward processing), horizontal operations (processing at a single level), and projection operations (downward processing).
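The reduction and projection modes can be illustrated on a small array; a sketch assuming 2x2 windows with simple averaging (upward) and replication (downward). The cone's prototypical functions are configurable, so these particular window functions are only examples:

```python
def reduce_level(image):
    """Upward processing: average each non-overlapping 2x2 window,
    producing the next (half-resolution) level of the cone."""
    h, w = len(image), len(image[0])
    return [
        [
            (image[r][c] + image[r][c + 1]
             + image[r + 1][c] + image[r + 1][c + 1]) / 4.0
            for c in range(0, w, 2)
        ]
        for r in range(0, h, 2)
    ]

def project_level(image):
    """Downward processing: replicate each coarse value into a 2x2 block."""
    out = []
    for row in image:
        expanded = [v for v in row for _ in (0, 1)]
        out.append(expanded)
        out.append(list(expanded))
    return out

base = [[1, 1, 5, 5],
        [1, 1, 5, 5],
        [3, 3, 7, 7],
        [3, 3, 7, 7]]
coarse = reduce_level(base)    # [[1.0, 5.0], [3.0, 7.0]]
back = project_level(coarse)   # 4x4 again, blockwise constant
```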
Version pressure feedback mechanisms for speculative versioning caches
Eichenberger, Alexandre E.; Gara, Alan; O'Brien, Kathryn M.; Ohmacht, Martin; Zhuang, Xiaotong
2013-03-12
Mechanisms are provided for controlling version pressure on a speculative versioning cache. Raw version pressure data is collected based on one or more threads accessing cache lines of the speculative versioning cache. One or more statistical measures of version pressure are generated based on the collected raw version pressure data. A determination is made as to whether one or more modifications to an operation of a data processing system are to be performed based on the one or more statistical measures of version pressure, the one or more modifications affecting version pressure exerted on the speculative versioning cache. An operation of the data processing system is modified based on the one or more determined modifications, in response to a determination that one or more modifications to the operation of the data processing system are to be performed, to affect the version pressure exerted on the speculative versioning cache.
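As a sketch of the statistical-measure step described above: raw per-line version counts are summarized and compared against thresholds to decide whether to modify system operation. The capacity, threshold values, and throttling decision below are illustrative assumptions, not values from the patent:

```python
def version_pressure_stats(versions_per_line, capacity):
    """Summarize raw version pressure: fraction of each cache line's
    version capacity currently in use."""
    used = [v / capacity for v in versions_per_line]
    return {"mean": sum(used) / len(used), "max": max(used)}

def should_throttle(stats, mean_limit=0.6, max_limit=0.9):
    """Decide whether to modify system operation (e.g. throttle thread
    spawning) to relieve pressure on the versioning cache."""
    return stats["mean"] > mean_limit or stats["max"] > max_limit

raw = [1, 2, 8, 3]        # versions currently held by four cache lines
stats = version_pressure_stats(raw, capacity=8)
throttle = should_throttle(stats)   # one line is saturated, so throttle
```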
The operational processing of wind estimates from cloud motions: Past, present and future
NASA Technical Reports Server (NTRS)
Novak, C.; Young, M.
1977-01-01
Current NESS winds operations provide approximately 1800 high quality wind estimates per day to about twenty domestic and foreign users. This marked improvement in NESS winds operations was the result of computer techniques development which began in 1969 to streamline and improve operational procedures. In addition, the launch of the SMS-1 satellite in 1974, the first in the second generation of geostationary spacecraft, provided an improved source of visible and infrared scanner data for the extraction of wind estimates. Currently, operational winds processing at NESS is accomplished by the automated and manual analyses of infrared data from two geostationary spacecraft. This system uses data from SMS-2 and GOES-1 to produce wind estimates valid for 00Z, 12Z and 18Z synoptic times.
The Landsat Data Continuity Mission Operational Land Imager (OLI) Radiometric Calibration
NASA Technical Reports Server (NTRS)
Markham, Brian L.; Dabney, Philip W.; Murphy-Morris, Jeanine E.; Knight, Edward J.; Kvaran, Geir; Barsi, Julia A.
2010-01-01
The Operational Land Imager (OLI) on the Landsat Data Continuity Mission (LDCM) has a comprehensive radiometric characterization and calibration program beginning with the instrument design, and extending through integration and test, on-orbit operations and science data processing. Key instrument design features for radiometric calibration include dual solar diffusers and multi-lamped on-board calibrators. The radiometric calibration transfer procedure from NIST standards has multiple checks on the radiometric scale throughout the process and uses a heliostat as part of the transfer to orbit of the radiometric calibration. On-orbit lunar imaging will be used to track the instrument's stability, and side slither maneuvers will be used in addition to the solar diffuser to flat field across the thousands of detectors per band. A Calibration Validation Team is continuously involved in the process from design to operations. This team uses an Image Assessment System (IAS), part of the ground system, to characterize and calibrate the on-orbit data.
29. Perimeter acquisition radar building room #318, data processing system ...
29. Perimeter acquisition radar building room #318, data processing system area; data processor maintenance and operations center, showing data processing consoles - Stanley R. Mickelsen Safeguard Complex, Perimeter Acquisition Radar Building, Limited Access Area, between Limited Access Patrol Road & Service Road A, Nekoma, Cavalier County, ND
Brown, Jesslyn; Howard, Daniel M.; Wylie, Bruce K.; Friesz, Aaron M.; Ji, Lei; Gacke, Carolyn
2015-01-01
Monitoring systems benefit from high temporal frequency image data collected from the Moderate Resolution Imaging Spectroradiometer (MODIS) system. Because of near-daily global coverage, MODIS data are beneficial to applications that require timely information about vegetation condition related to drought, flooding, or fire danger. Rapid satellite data streams in operational applications have clear benefits for monitoring vegetation, especially when information can be delivered as fast as changing surface conditions. An “expedited” processing system called “eMODIS,” operated by the U.S. Geological Survey, provides rapid MODIS surface reflectance data to operational applications in less than 24 h, offering tailored, consistently processed information products that complement standard MODIS products. We assessed eMODIS quality and consistency by comparing it to standard MODIS data. Only land data with known high quality were analyzed in a central U.S. study area. When compared to standard MODIS (MOD/MYD09Q1), the eMODIS Normalized Difference Vegetation Index (NDVI) maintained a strong, significant relationship to standard MODIS NDVI, whether from morning (Terra) or afternoon (Aqua) orbits. The Aqua eMODIS data were more prone to noise than the Terra data, likely due to differences in the internal cloud mask used in MOD/MYD09Q1 or in compositing rules. Post-processing temporal smoothing decreased noise in eMODIS data.
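The NDVI compared in this study is a simple band ratio of near-infrared and red surface reflectance. A minimal sketch (the reflectance values used here are illustrative, not actual MODIS samples):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel, computed
    from red and near-infrared surface reflectance."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

# illustrative reflectances (not real MODIS values)
dense_vegetation = ndvi(nir=0.50, red=0.08)   # high NDVI, healthy canopy
bare_soil = ndvi(nir=0.30, red=0.25)          # low NDVI
```

NDVI is bounded in [-1, 1], which is what makes products from different processing chains (eMODIS vs. MOD/MYD09Q1) directly comparable.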
Programming and Operations Lab 1--Intermediate, Data Processing Technology: 8025.23.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL.
The following course outline has been prepared as a guide toward helping the student develop an understanding of operating principles and procedures necessary in processing data electronically. Students who have met the objectives of Designing the Computer Program should be admitted to this course. The class meets 2 hours per day for 90 clock…
Control of Technology Transfer at JPL
NASA Technical Reports Server (NTRS)
Oliver, Ronald
2006-01-01
Controlled Technology: 1) Design: preliminary or critical design data, schematics, technical flow charts, SNV code/diagnostics, logic flow diagrams, wirelist, ICDs, detailed specifications or requirements. 2) Development: constraints, computations, configurations, technical analyses, acceptance criteria, anomaly resolution, detailed test plans, detailed technical proposals. 3) Production: process or how-to: assemble, operate, repair, maintain, modify. 4) Manufacturing: technical instructions, specific parts, specific materials, specific qualities, specific processes, specific flow. 5) Operations: how to operate, contingency or standard operating plans, Ops handbooks. 6) Repair: repair instructions, troubleshooting schemes, detailed schematics. 7) Test: specific procedures, data, analysis, detailed test plan and retest plans, detailed anomaly resolutions, detailed failure causes and corrective actions, troubleshooting, trended test data, flight readiness data. 8) Maintenance: maintenance schedules and plans, methods for regular upkeep, overhaul instructions. 9) Modification: modification instructions, upgrade kit parts, including software
30 CFR 937.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and with the February 26, 1980, May 16, 1980, and...
30 CFR 937.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and with the February 26, 1980, May 16, 1980, and...
30 CFR 937.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and with the February 26, 1980, May 16, 1980, and...
30 CFR 937.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and with the February 26, 1980, May 16, 1980, and...
30 CFR 937.764 - Process for designating areas unsuitable for surface coal mining operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Mining Operations, pertaining to petitioning, initial processing, hearing requirements, decisions, data base and inventory systems, public information, and with the February 26, 1980, May 16, 1980, and...
Real-time solar magnetograph operation system software design and user's guide
NASA Technical Reports Server (NTRS)
Wang, C.
1984-01-01
The Real Time Solar Magnetograph (RTSM) Operation system software design on PDP11/23+ is presented along with the User's Guide. The RTSM operation software is for real time instrumentation control, data collection and data management. The data is used for vector analysis, plotting or graphics display. The processed data is then easily compared with solar data from other sources, such as the Solar Maximum Mission (SMM).
NASA Technical Reports Server (NTRS)
Fayyad, Kristina E.; Hill, Randall W., Jr.; Wyatt, E. J.
1993-01-01
This paper presents a case study of the knowledge engineering process employed to support the Link Monitor and Control Operator Assistant (LMCOA). The LMCOA is a prototype system which automates the configuration, calibration, test, and operation (referred to as precalibration) of the communications, data processing, metric data, antenna, and other equipment used to support space-ground communications with deep space spacecraft in NASA's Deep Space Network (DSN). The primary knowledge base in the LMCOA is the Temporal Dependency Network (TDN), a directed graph which provides a procedural representation of the precalibration operation. The TDN incorporates precedence, temporal, and state constraints and uses several supporting knowledge bases and data bases. The paper provides a brief background on the DSN, and describes the evolution of the TDN and supporting knowledge bases, the process used for knowledge engineering, and an analysis of the successes and problems of the knowledge engineering effort.
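The TDN described above is a directed graph whose precedence constraints fix a partial order on precalibration blocks; any valid execution is a topological order of that graph. A sketch using Kahn's algorithm, with hypothetical block names (the real TDN also carries temporal and state constraints not modelled here):

```python
from collections import deque

def execute_tdn(blocks, deps):
    """Return an execution order of precalibration blocks consistent
    with the TDN's precedence constraints (Kahn's topological sort)."""
    indegree = {b: 0 for b in blocks}
    for before, after in deps:
        indegree[after] += 1
    ready = deque(b for b in blocks if indegree[b] == 0)
    order = []
    while ready:
        b = ready.popleft()
        order.append(b)
        for before, after in deps:
            if before == b:
                indegree[after] -= 1
                if indegree[after] == 0:
                    ready.append(after)
    if len(order) != len(blocks):
        raise ValueError("cycle in TDN precedence constraints")
    return order

# hypothetical precalibration blocks and precedence edges
blocks = ["configure", "calibrate", "test", "operate"]
deps = [("configure", "calibrate"), ("calibrate", "test"), ("test", "operate")]
order = execute_tdn(blocks, deps)
```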
NASA Technical Reports Server (NTRS)
Bates, J. R.; Lauderdale, W. W.; Kernaghan, H.
1979-01-01
The Apollo Lunar Surface Experiments Package (ALSEP) final report was prepared when support operations were terminated September 30, 1977, and NASA discontinued the receiving and processing of scientific data transmitted from equipment deployed on the lunar surface. The ALSEP experiments (Apollo 11 to Apollo 17) are described and pertinent operational history is given for each experiment. The ALSEP data processing and distribution are described together with an extensive discussion on archiving. Engineering closeout tests and results are given, and the status and configuration of the experiments at termination are documented. Significant science findings are summarized by selected investigators. Significant operational data and recommendations are also included.
NASA Astrophysics Data System (ADS)
Blume, H.; Alexandru, R.; Applegate, R.; Giordano, T.; Kamiya, K.; Kresina, R.
1986-06-01
In a digital diagnostic imaging department, the majority of operations for handling and processing of images can be grouped into a small set of basic operations, such as image data buffering and storage, image processing and analysis, image display, image data transmission and image data compression. These operations occur in almost all nodes of the diagnostic imaging communications network of the department. An image processor architecture was developed in which each of these functions has been mapped into hardware and software modules. The modular approach has advantages in terms of economics, service, expandability and upgradeability. The architectural design is based on the principles of hierarchical functionality, distributed and parallel processing and aims at real time response. Parallel processing and real time response are facilitated in part by a dual bus system: a VME control bus and a high speed image data bus, consisting of 8 independent parallel 16-bit busses, capable of handling a combined rate of up to 144 MBytes/sec. The presented image processor is versatile enough to meet the video rate processing needs of digital subtraction angiography, the large pixel matrix processing requirements of static projection radiography, or the broad range of manipulation and display needs of a multi-modality diagnostic work station. Several hardware modules are described in detail. To illustrate the capabilities of the image processor, processed 2000 x 2000 pixel computed radiographs are shown and estimated computation times for executing the processing operations are presented.
NASA Technical Reports Server (NTRS)
Ziese, James M.
1992-01-01
A figure-of-merit design tool was developed that allows the operability of a propulsion system design to be measured. This Launch Operations Index (LOI) relates Operations Efficiency to System Complexity. The figure of merit can be used by conceptual designers to compare different propulsion system designs based on their impact on launch operations. The LOI will improve the design process by ensuring that direct launch operations experience feeds back into the design process.
WFIRST Science Operations at STScI
NASA Astrophysics Data System (ADS)
Gilbert, Karoline; STScI WFIRST Team
2018-06-01
With sensitivity and resolution comparable to the Hubble Space Telescope, and a field of view 100 times larger, the Wide Field Instrument (WFI) on WFIRST will be a powerful survey instrument. STScI will be the Science Operations Center (SOC) for the WFIRST Mission, with additional science support provided by the Infrared Processing and Analysis Center (IPAC) and foreign partners. STScI will schedule and archive all WFIRST observations, calibrate and produce pipeline-reduced data products for imaging with the Wide Field Instrument, support the High Latitude Imaging and Supernova Survey Teams, and support the astronomical community in planning WFI imaging observations and analyzing the data. STScI has developed detailed concepts for WFIRST operations, including a data management system integrating data processing and the archive, which will include a novel, cloud-based framework for high-level data processing, providing a common environment accessible to all users (STScI operations, Survey Teams, General Observers, and archival investigators). To aid the astronomical community in examining the capabilities of WFIRST, STScI has built several simulation tools. We describe the functionality of each tool and give examples of its use.
Shared address collectives using counter mechanisms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blocksome, Michael; Dozsa, Gabor; Gooding, Thomas M
A shared address space on a compute node stores data received from a network and data to transmit to the network. The shared address space includes an application buffer that can be directly operated upon by a plurality of processes, for instance, running on different cores on the compute node. A shared counter is used for one or more of signaling arrival of the data across the plurality of processes running on the compute node, signaling completion of an operation performed by one or more of the plurality of processes, obtaining reservation slots by one or more of the plurality of processes, or combinations thereof.
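The arrival-signaling use of the counter can be sketched with threads standing in for processes on separate cores (a real implementation would place the counter in the shared address space and use hardware atomics; the class and method names here are illustrative):

```python
import threading

class ArrivalCounter:
    """A shared counter signaling that all participants have deposited
    their data, in the spirit of the mechanism described above."""

    def __init__(self, expected):
        self.expected = expected
        self.count = 0
        self.cond = threading.Condition()

    def arrive(self):
        with self.cond:
            self.count += 1
            if self.count == self.expected:
                self.cond.notify_all()

    def wait_all(self):
        with self.cond:
            while self.count < self.expected:
                self.cond.wait()

buffer, counter = [], ArrivalCounter(expected=4)
lock = threading.Lock()

def worker(i):
    with lock:
        buffer.append(i)   # deposit data into the shared application buffer
    counter.arrive()       # then signal arrival via the shared counter

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
counter.wait_all()         # returns only after all four participants arrive
```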
One Way of Testing a Distributed Processor
NASA Technical Reports Server (NTRS)
Edstrom, R.; Kleckner, D.
1982-01-01
Launch processing for the Space Shuttle is checked out, controlled, and monitored with a new system. The entire system can be exercised by two computer programs: one in the master console and one in each of the operations consoles. The control program in each operations console detects a change in status and begins task initiation. All of the front-end processors are exercised from the consoles through a common data buffer, and all data are logged to a processed-data recorder for posttest analysis.
InSAR data for monitoring land subsidence: time to think big
NASA Astrophysics Data System (ADS)
Ferretti, A.; Colombo, D.; Fumagalli, A.; Novali, F.; Rucci, A.
2015-11-01
Satellite interferometric synthetic aperture radar (InSAR) data have proven effective and valuable in the analysis of urban subsidence phenomena based on multi-temporal radar images. Results obtained by processing data acquired by different radar sensors have shown the potential of InSAR and highlighted the key points for an operational use of this technology, namely: (1) regular acquisition over large areas of interferometric data stacks; (2) use of advanced processing algorithms, capable of estimating and removing atmospheric disturbances; (3) access to significant processing power for a regular update of the information over large areas. In this paper, we show how the operational potential of InSAR has been realized thanks to the recent advances in InSAR processing algorithms, the advent of cloud computing and the launch of new satellite platforms specifically designed for InSAR analyses (e.g. Sentinel-1A operated by ESA and ALOS-2 operated by JAXA). The processing of thousands of SAR scenes to cover an entire nation has been performed successfully in Italy in a project financed by the Italian Ministry of the Environment. The challenge for the future is to move from the historical analysis of SAR scenes already acquired in digital archives to a near real-time monitoring program where up-to-date deformation data are routinely provided to final users and decision makers.
Survey: National Environmental Satellite Service
NASA Technical Reports Server (NTRS)
1977-01-01
The National Environmental Satellite Service (NESS) receives data at periodic intervals from satellites of the Synchronous Meteorological Satellite/Geostationary Operational Environmental Satellite series and from the Improved TIROS (Television Infrared Observational Satellite) Operational Satellite. Within the conterminous United States, direct readout and processed products are distributed to users over facsimile networks from a central processing and data distribution facility. In addition, the NESS Satellite Field Stations analyze, interpret, and distribute processed geostationary satellite products to regional weather service activities.
Identification and Description of Alternative Means of Accomplishing IMS Operational Features.
ERIC Educational Resources Information Center
Dave, Ashok
The operational features of feasible alternative configurations for a computer-based instructional management system are identified. Potential alternative means and components of accomplishing these features are briefly described. Included are aspects of data collection, data input, data transmission, data reception, scanning and processing,…
High data volume and transfer rate techniques used at NASA's image processing facility
NASA Technical Reports Server (NTRS)
Heffner, P.; Connell, E.; Mccaleb, F.
1978-01-01
Data storage and transfer operations at a new image processing facility are described. The equipment includes high density digital magnetic tape drives and specially designed controllers to provide an interface between the tape drives and computerized image processing systems. The controller performs the functions necessary to convert the continuous serial data stream from the tape drive to a word-parallel blocked data stream which then goes to the computer-based system. With regard to the tape packing density, 1.8 times 10 to the tenth data bits are stored on a reel of one-inch tape. System components and their operation are surveyed, and studies on advanced storage techniques are summarized.
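The controller's serial-to-parallel conversion can be sketched as bit-packing into 16-bit words grouped into fixed-size blocks (the word width matches common practice, but the block size below is an assumption for illustration). Incidentally, 1.8 x 10^10 bits per reel is roughly 2.25 gigabytes:

```python
def serial_to_blocks(bits, word_bits=16, words_per_block=4):
    """Convert a continuous serial bit stream into word-parallel blocks,
    as the controller does between the tape drive and the computer."""
    words = []
    for i in range(0, len(bits) - word_bits + 1, word_bits):
        word = 0
        for b in bits[i:i + word_bits]:
            word = (word << 1) | b   # shift the next serial bit in
        words.append(word)
    # group the parallel words into fixed-size blocks
    return [words[i:i + words_per_block]
            for i in range(0, len(words), words_per_block)]

stream = [1, 0] * 32                  # 64 serial bits: 1010... pattern
blocks = serial_to_blocks(stream)     # one block of four 16-bit words (0xAAAA)
```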
Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing.
Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng
2014-10-01
Push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels.
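The flavor of such a model can be sketched in a few lines: predicted throughput rises with SM occupancy up to a saturation knee, and a FIFO-style discipline admits kernels into concurrent execution while combined occupancy allows residency. Both the linear-then-flat shape and the admission rule below are simplifying assumptions, not the paper's fitted model:

```python
def predicted_throughput(peak, occupancy, saturation=0.5):
    """Illustrative model for a compute-bound kernel: throughput grows
    linearly with occupancy, then saturates beyond a knee."""
    return peak * min(1.0, occupancy / saturation)

def schedule_concurrent(kernels, max_occupancy=1.0):
    """Greedy FIFO-style admission: run kernels concurrently until
    their combined occupancy would exceed full residency."""
    admitted, occ = [], 0.0
    for name, occupancy in kernels:
        if occ + occupancy <= max_occupancy:
            admitted.append(name)
            occ += occupancy
    return admitted, occ

t = predicted_throughput(peak=100.0, occupancy=0.25)        # half of peak
admitted, occ = schedule_concurrent(
    [("k1", 0.5), ("k2", 0.4), ("k3", 0.3)])                # k3 does not fit
```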
Magnetospheric Multiscale Instrument Suite Operations and Data System
NASA Technical Reports Server (NTRS)
Baker, D. N.; Riesberg, L.; Pankratz, C. K.; Panneton, R. S.; Giles, B. L.; Wilder, F. D.; Ergun, R. E.
2015-01-01
The four Magnetospheric Multiscale (MMS) spacecraft will collect a combined volume of approximately 100 gigabits per day of particle and field data. On average, only 4 gigabits of that volume can be transmitted to the ground. To maximize the scientific value of each transmitted data segment, MMS has developed the Science Operations Center (SOC) to manage science operations, instrument operations, and selection, downlink, distribution, and archiving of MMS science data sets. The SOC is managed by the Laboratory for Atmospheric and Space Physics (LASP) in Boulder, Colorado and serves as the primary point of contact for community participation in the mission. MMS instrument teams conduct their operations through the SOC, and utilize the SOC's Science Data Center (SDC) for data management and distribution. The SOC provides a single mission data archive for the housekeeping and science data, calibration data, ephemerides, attitude and other ancillary data needed to support the scientific use and interpretation. All levels of data products will reside at and be publicly disseminated from the SDC. Documentation and metadata describing data products, algorithms, instrument calibrations, validation, and data quality will be provided. Arguably, the most important innovation developed by the SOC is the MMS burst data management and selection system. With nested automation and 'Scientist-in-the-Loop' (SITL) processes, these systems are designed to maximize the value of the burst data by prioritizing the data segments selected for transmission to the ground. This paper describes the MMS science operations approach, processes and data systems, including the burst system and the SITL concept.
Magnetospheric Multiscale Instrument Suite Operations and Data System
NASA Astrophysics Data System (ADS)
Baker, D. N.; Riesberg, L.; Pankratz, C. K.; Panneton, R. S.; Giles, B. L.; Wilder, F. D.; Ergun, R. E.
2016-03-01
The four Magnetospheric Multiscale (MMS) spacecraft will collect a combined volume of ~100 gigabits per day of particle and field data. On average, only 4 gigabits of that volume can be transmitted to the ground. To maximize the scientific value of each transmitted data segment, MMS has developed the Science Operations Center (SOC) to manage science operations, instrument operations, and selection, downlink, distribution, and archiving of MMS science data sets. The SOC is managed by the Laboratory for Atmospheric and Space Physics (LASP) in Boulder, Colorado and serves as the primary point of contact for community participation in the mission. MMS instrument teams conduct their operations through the SOC, and utilize the SOC's Science Data Center (SDC) for data management and distribution. The SOC provides a single mission data archive for the housekeeping and science data, calibration data, ephemerides, attitude and other ancillary data needed to support the scientific use and interpretation. All levels of data products will reside at and be publicly disseminated from the SDC. Documentation and metadata describing data products, algorithms, instrument calibrations, validation, and data quality will be provided. Arguably, the most important innovation developed by the SOC is the MMS burst data management and selection system. With nested automation and "Scientist-in-the-Loop" (SITL) processes, these systems are designed to maximize the value of the burst data by prioritizing the data segments selected for transmission to the ground. This paper describes the MMS science operations approach, processes and data systems, including the burst system and the SITL concept.
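The burst selection problem is essentially budgeted prioritization: transmit the most scientifically valuable segments that fit the ~4-gigabit daily downlink out of ~100 gigabits collected. A greedy sketch (segment sizes, priorities, and field names are invented for illustration; the real SITL process also involves human review of the automated rankings):

```python
def select_burst_segments(segments, budget_gbit=4.0):
    """Greedy SITL-style selection: take the highest-priority burst
    segments that fit within the daily downlink budget."""
    chosen, used = [], 0.0
    for seg in sorted(segments, key=lambda s: s["priority"], reverse=True):
        if used + seg["gbit"] <= budget_gbit:
            chosen.append(seg["id"])
            used += seg["gbit"]
    return chosen, used

# hypothetical burst segments with scientist-assigned priorities
segments = [
    {"id": "A", "priority": 9, "gbit": 2.0},   # e.g. a reconnection event
    {"id": "B", "priority": 7, "gbit": 3.0},   # too large once A is taken
    {"id": "C", "priority": 5, "gbit": 1.5},
]
chosen, used = select_burst_segments(segments)   # A and C fit the budget
```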
LANDSAT-D Mission Operations Review (MOR)
NASA Technical Reports Server (NTRS)
1982-01-01
Portions of the LANDSAT-D systems operation plan are presented. An overview of the data processing operations, logistics and other operations support, prelaunch and post-launch activities, thematic mapper operations during the scrounge period, and LANDSAT-D performance evaluation is given.
NASA Astrophysics Data System (ADS)
Zender, J.; Berghmans, D.; Bloomfield, D. S.; Cabanas Parada, C.; Dammasch, I.; De Groof, A.; D'Huys, E.; Dominique, M.; Gallagher, P.; Giordanengo, B.; Higgins, P. A.; Hochedez, J.-F.; Yalim, M. S.; Nicula, B.; Pylyser, E.; Sanchez-Duarte, L.; Schwehm, G.; Seaton, D. B.; Stanger, A.; Stegen, K.; Willems, S.
2013-08-01
The PROBA2 Science Centre (P2SC) is a small-scale science operations centre supporting the Sun observation instruments onboard PROBA2: the EUV imager Sun Watcher using APS detectors and Image Processing (SWAP) and the Large-Yield Radiometer (LYRA). PROBA2 is one of ESA's small, low-cost Projects for Onboard Autonomy (PROBA) and part of ESA's In-Orbit Technology Demonstration Programme. The P2SC is hosted at the Royal Observatory of Belgium, co-located with both Principal Investigator teams. The P2SC tasks cover science planning, instrument commanding, instrument monitoring, data processing, support of outreach activities, and distribution of science data products. PROBA missions aim for a high degree of autonomy at mission and system level, including the science operations centre. The autonomy and flexibility of the P2SC are achieved by a set of web-based interfaces allowing the operators as well as the instrument teams to monitor the status of operations quasi-continuously, allowing a quick reaction to solar events. In addition, several new concepts are implemented at instrument, spacecraft, and ground-segment levels, allowing a high degree of flexibility in the operations of the instruments. This article explains the key concepts of the P2SC, emphasising the automation and the flexibility achieved in the commanding as well as the data-processing chain.
National Polar-orbiting Operational Environmental Satellite System (NPOESS) Design and Architecture
NASA Astrophysics Data System (ADS)
Hinnant, F.
2008-12-01
The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system - the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS will replace the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD and will provide continuity for the NASA Earth Observing System (EOS) with the launch of the NPOESS Preparatory Project (NPP). This poster will provide an overview of the NPOESS architecture, which includes four segments. The space segment includes satellites in two orbits that carry a suite of sensors to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the Earth, atmosphere, and near-Earth space environment. The NPOESS design allows centralized mission management and delivers high quality environmental products to military, civil and scientific users through a Command, Control, and Communication Segment (C3S). The data processing for NPOESS is accomplished through an Interface Data Processing Segment (IDPS)/Field Terminal Segment (FTS) that processes NPOESS satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government as well as to remote terminal users. The Launch Support Segment completes the four segments that make up NPOESS that will enhance the connectivity between research and operations and provide critical operational and scientific environmental measurements to military, civil, and scientific users until 2026.
Quality of narrative operative reports in pancreatic surgery
Wiebe, Meagan E.; Sandhu, Lakhbir; Takata, Julie L.; Kennedy, Erin D.; Baxter, Nancy N.; Gagliardi, Anna R.; Urbach, David R.; Wei, Alice C.
2013-01-01
Background: Quality in health care can be evaluated using quality indicators (QIs). Elements contained in the surgical operative report are potential sources for QI data, but little is known about the completeness of the narrative operative report (NR). We evaluated the completeness of the NR for patients undergoing a pancreaticoduodenectomy. Methods: We reviewed NRs for patients undergoing a pancreaticoduodenectomy over a 1-year period. We extracted 79 variables related to patient and narrator characteristics, process of care measures, surgical technique and oncology-related outcomes by document analysis. Data were coded and evaluated for completeness. Results: We analyzed 74 NRs. The median number of variables reported was 43.5 (range 13–54). Variables related to surgical technique were most complete. Process of care and oncology-related variables were often omitted. Completeness of the NR was associated with longer operative duration. Conclusion: The NRs were often incomplete and of poor quality. Important elements, including process of care and oncology-related data, were frequently missing. Thus, the NR is an inadequate data source for QI. Development and use of alternative reporting methods, including standardized synoptic operative reports, should be encouraged to improve documentation of care and serve as a measure of quality of surgical care. PMID:24067527
Quality of narrative operative reports in pancreatic surgery.
Wiebe, Meagan E; Sandhu, Lakhbir; Takata, Julie L; Kennedy, Erin D; Baxter, Nancy N; Gagliardi, Anna R; Urbach, David R; Wei, Alice C
2013-10-01
Quality in health care can be evaluated using quality indicators (QIs). Elements contained in the surgical operative report are potential sources for QI data, but little is known about the completeness of the narrative operative report (NR). We evaluated the completeness of the NR for patients undergoing a pancreaticoduodenectomy. We reviewed NRs for patients undergoing a pancreaticoduodenectomy over a 1-year period. We extracted 79 variables related to patient and narrator characteristics, process of care measures, surgical technique and oncology-related outcomes by document analysis. Data were coded and evaluated for completeness. We analyzed 74 NRs. The median number of variables reported was 43.5 (range 13-54). Variables related to surgical technique were most complete. Process of care and oncology-related variables were often omitted. Completeness of the NR was associated with longer operative duration. The NRs were often incomplete and of poor quality. Important elements, including process of care and oncology-related data, were frequently missing. Thus, the NR is an inadequate data source for QI. Development and use of alternative reporting methods, including standardized synoptic operative reports, should be encouraged to improve documentation of care and serve as a measure of quality of surgical care.
AIRSAR Automated Web-based Data Processing and Distribution System
NASA Technical Reports Server (NTRS)
Chu, Anhua; vanZyl, Jakob; Kim, Yunjin; Lou, Yunling; Imel, David; Tung, Wayne; Chapman, Bruce; Durden, Stephen
2005-01-01
In this paper, we present an integrated, end-to-end synthetic aperture radar (SAR) processing system that accepts data processing requests, submits processing jobs, performs quality analysis, delivers and archives processed data. This fully automated SAR processing system utilizes database and internet/intranet web technologies to allow external users to browse and submit data processing requests and receive processed data. It is a cost-effective way to manage a robust SAR processing and archival system. The integration of these functions has reduced operator errors and increased processor throughput dramatically.
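The request-to-archive flow described above can be sketched as a simple staged pipeline. The stage names and the `Request` structure are illustrative assumptions, not the AIRSAR system's actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    """A hypothetical processing request moving through the pipeline."""
    request_id: str
    history: list = field(default_factory=list)

def run_pipeline(req, stages):
    """Run each named stage in order, recording the sequence on the request."""
    for name, stage in stages:
        stage(req)
        req.history.append(name)
    return req

# Stage bodies are stubs; a real system would do work in each.
stages = [
    ("submit",  lambda r: None),   # accept and enqueue the processing job
    ("process", lambda r: None),   # run the SAR processor
    ("qa",      lambda r: None),   # automated quality analysis
    ("deliver", lambda r: None),   # make products available to the user
    ("archive", lambda r: None),   # archive the processed data
]
done = run_pipeline(Request("R-001"), stages)
```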
Recent advances and plans in processing and geocoding of SAR data at the DFD
NASA Technical Reports Server (NTRS)
Noack, W.
1993-01-01
Because of the needs of future projects like ENVISAT and the experience gained with the current operational ERS-1 facilities, a radical change in the synthetic aperture radar (SAR) processing scenarios can be predicted for the coming years. At the German PAF several new developments were initiated, driven mainly either by user needs or by system and operational constraints ('lessons learned'). The end result will be a major simplification and unification of all computer systems used. In particular, the following changes are likely to be implemented at the German PAF: transcription before archiving, processing of all standard products with high throughput directly at the receiving stations, processing of special 'high-valued' products at the PAF, usage of a single type of processor hardware, implementation of a large and fast on-line data archive, and an improved and unified fast data network between the processing and archiving facilities. A short description of the current operational SAR facilities as well as the future implementations is given.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., or interpretation of any geological data and information. Initial analysis and processing are the stages of analysis or processing where the data and information first become available for in-house... geochemical) data and information describing each operation of analysis, processing, and interpretation; (2...
Code of Federal Regulations, 2012 CFR
2012-07-01
..., or interpretation of any geological data and information. Initial analysis and processing are the stages of analysis or processing where the data and information first become available for in-house... geochemical) data and information describing each operation of analysis, processing, and interpretation; (2...
Code of Federal Regulations, 2013 CFR
2013-07-01
..., or interpretation of any geological data and information. Initial analysis and processing are the stages of analysis or processing where the data and information first become available for in-house... geochemical) data and information describing each operation of analysis, processing, and interpretation; (2...
NASA Astrophysics Data System (ADS)
Meyer, F. J.; Webley, P. W.; Dehn, J.; Arko, S. A.; McAlpin, D. B.; Gong, W.
2016-12-01
Volcanic eruptions are among the most significant hazards to human society, capable of triggering natural disasters on regional to global scales. In the last decade, remote sensing has become established in operational volcano monitoring. Centers like the Alaska Volcano Observatory rely heavily on remote sensing data from optical and thermal sensors to provide time-critical hazard information. Despite this high use of remote sensing data, the presence of clouds and a dependence on solar illumination often limit their impact on decision making. Synthetic Aperture Radar (SAR) systems are widely considered superior to optical sensors in operational monitoring situations, due to their weather and illumination independence. Still, the contribution of SAR to operational volcano monitoring has been limited in the past due to high data costs, long processing times, and low temporal sampling rates of most SAR systems. In this study, we introduce the automatic SAR processing system SARVIEWS, whose advanced data analysis and data integration techniques allow, for the first time, a meaningful integration of SAR into operational monitoring systems. We will introduce the SARVIEWS database interface that allows for automatic, rapid, and seamless access to the data holdings of the Alaska Satellite Facility. We will also present a set of processing techniques designed to automatically generate a set of SAR-based hazard products (e.g. change detection maps, interferograms, geocoded images). The techniques take advantage of modern signal processing and radiometric normalization schemes, enabling the combination of data from different geometries. Finally, we will show how SAR-based hazard information is integrated in existing multi-sensor decision support tools to enable joint hazard analysis with data from optical and thermal sensors. 
We will showcase the SAR processing system using a set of recent natural disasters (both earthquakes and volcanic eruptions) to demonstrate its robustness. We will also show the benefit of integrating SAR with data from other sensors to support volcano monitoring. For historic eruptions at Okmok and Augustine volcano, both located in the North Pacific, we will demonstrate that the addition of SAR can lead to a significant improvement in activity detection and eruption forecasting.
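One common way to produce SAR change detection maps like those mentioned above is a log-ratio test between two co-registered amplitude images. The sketch below is a generic illustration of that technique, not the SARVIEWS implementation, and the threshold value is arbitrary.

```python
import math

def log_ratio_change(before, after, threshold=1.0):
    """Flag pixels whose backscatter changed strongly between acquisitions.

    The absolute log-ratio |log(after / before)| is a standard SAR change
    measure; pixels exceeding the (illustrative) threshold are flagged.
    Inputs are 2-D lists of positive amplitude values.
    """
    return [[abs(math.log(a / b)) > threshold for b, a in zip(rb, ra)]
            for rb, ra in zip(before, after)]

# Toy 2x2 scene: one pixel brightens strongly between passes
before = [[1.0, 1.0], [2.0, 1.0]]
after  = [[1.1, 8.0], [2.1, 1.0]]
mask = log_ratio_change(before, after)
```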
Mission operations concepts for Earth Observing System (EOS)
NASA Technical Reports Server (NTRS)
Kelly, Angelita C.; Taylor, Thomas D.; Hawkins, Frederick J.
1991-01-01
Mission operation concepts are described which are being used to evaluate and influence space and ground system designs and architectures with the goal of achieving successful, efficient, and cost-effective Earth Observing System (EOS) operations. Emphasis is given to the general characteristics and concepts developed for the EOS Space Measurement System, which uses a new series of polar-orbiting observatories. Data rates are given for various instruments. Some of the operations concepts which require a total system view are also examined, including command operations, data processing, data accountability, data archival, prelaunch testing and readiness, launch, performance monitoring and assessment, contingency operations, flight software maintenance, and security.
Forensic Analysis of Window’s(Registered) Virtual Memory Incorporating the System’s Page-File
2008-12-01
...data in a meaningful way. One reason for this is how memory is managed by the operating system. Data belonging to one process can be distributed arbitrarily across...
A performance comparison of the IBM RS/6000 and the Astronautics ZS-1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, W.M.; Abraham, S.G.; Davidson, E.S.
1991-01-01
Concurrent uniprocessor architectures, of which vector and superscalar are two examples, are designed to capitalize on fine-grain parallelism. The authors have developed a performance evaluation method for comparing and improving these architectures, and in this article they present the methodology and a detailed case study of two machines. The runtime of many programs is dominated by time spent in loop constructs - for example, Fortran Do-loops. Loops generally comprise two logical processes: The access process generates addresses for memory operations while the execute process operates on floating-point data. Memory access patterns typically can be generated independently of the data in the execute process. This independence allows the access process to slip ahead, thereby hiding memory latency. The IBM 360/91 was designed in 1967 to achieve slip dynamically, at runtime. One CPU unit executes integer operations while another handles floating-point operations. Other machines, including the VAX 9000 and the IBM RS/6000, use a similar approach.
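The latency-hiding effect of access/execute slip can be illustrated with a toy cycle-count model. The latency values and the perfect-slip assumption below are ours, invented for illustration, not measurements from the study.

```python
# Toy model of access/execute slip. Without slip, every loop iteration pays
# the full memory latency before it can compute; with slip, the access
# process streams loads ahead, so after the first load the loop runs at the
# rate of its slower process.
def loop_time(n, mem_latency, fp_latency, decoupled):
    """Total cycles for n loop iterations under the toy model."""
    if not decoupled:
        # each iteration waits for its load, then computes
        return n * (mem_latency + fp_latency)
    # pay the first load's latency once, then steady-state throughput is
    # limited by the slower of the access and execute processes
    return mem_latency + n * max(mem_latency, fp_latency)

serial = loop_time(100, 10, 4, decoupled=False)   # every load stalls the loop
slipped = loop_time(100, 10, 4, decoupled=True)   # loads overlap computation
```

Under these invented numbers the decoupled loop is bounded by the 10-cycle memory stream rather than the 14-cycle serial sum, which is the essence of slip.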
EOS Operations Systems: EDOS Implemented Changes to Reduce Operations Costs
NASA Technical Reports Server (NTRS)
Cordier, Guy R.; Gomez-Rosa, Carlos; McLemore, Bruce D.
2007-01-01
The authors describe in this paper the progress achieved to date with the reengineering of the Earth Observing System (EOS) Data and Operations System (EDOS), the experience gained in the process, and the ensuing reduction of ground systems operations costs. The reengineering effort included a major methodology change: moving an existing schedule-driven system to a data-driven system approach.
Apollo Multiplexer operations manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, M.M.
1985-04-01
This report describes the operation of the Apollo Multiplexer, a microprocessor-based communications device designed to process data between an Apollo computer and up to four Gandalf PACXIV data switches. Details are given on overall operation, hardware, and troubleshooting. The reader should gain sufficient knowledge from this report to understand the operation of the multiplexer and effectively analyze and correct any problems that might occur.
Integrating Data Sources for Process Sustainability Assessments (presentation)
To perform a chemical process sustainability assessment requires significant data about chemicals, process design specifications, and operating conditions. The required information includes the identity of the chemicals used, the quantities of the chemicals within the context of ...
Personal notes of D.S. Lewis, 7 September 1956 to 31 December 1959
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, D.S.
1956-09-07
This report is a copy of the personal log of D.S. Lewis of the Irradiation Processing Dept. of Reactor Operations at Hanford and covers the period from 7 September 1956 through 31 December 1959. Data are presented on the following: (1) basic reactor operating data, including daily operating data, outage resumes, injuries and incidents, charging and tube replacement rates, panellit gage (flowmeter) trip failures, and thermocouple failures, and (2) basic reactor information on the water plant, electrical distribution, VSRs, HCRs, Ball 3X, safety circuits, gas system, effluent system, process tube cross-section, and production scheduling.
Retrieving Historical Electrorefining Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wheeler, Meagan Daniella
Pyrochemical Operations began at Los Alamos National Laboratory (LANL) during 1962 (1). Electrorefining (ER) has been implemented as a routine process since the 1980s. The process data that went through the ER operation was recorded but had never been logged in an online database. Without a database, new staff members are hindered in their work by the lack of information. To combat the issue, a database in Access was created to collect the historical data. The years from 2000 onward were entered and queries were created to analyze trends. These trends will aid engineering and operations staff in reaching optimal performance for the startup of the new lines.
Suomi NPP Ground System Performance
NASA Astrophysics Data System (ADS)
Grant, K. D.; Bergeron, C.
2013-12-01
The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS will replace the afternoon orbit component and ground processing system of the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA. The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological and geophysical observations of the Earth. The first satellite in the JPSS constellation, known as the Suomi National Polar-orbiting Partnership (Suomi NPP) satellite, was launched on 28 October 2011, and is currently undergoing product calibration and validation activities. As products reach a beta level of maturity, they are made available to the community through NOAA's Comprehensive Large Array-data Stewardship System (CLASS). The JPSS Common Ground System's (CGS's) data processing capability processes the satellite data from the Joint Polar Satellite System satellites to provide environmental data products (including Sensor Data Records (SDRs) and Environmental Data Records (EDRs)) to NOAA and Department of Defense (DoD) processing centers operated by the United States government. CGS is currently processing and delivering SDRs and EDRs for Suomi NPP and will continue through the lifetime of the Joint Polar Satellite System programs. Following the launch and sensor activation phase of the Suomi NPP mission, full volume data traffic is now flowing from the satellite through CGS's C3, data processing, and data delivery systems. Ground system performance is critical for this operational system. As part of early system checkout, Raytheon measured all aspects of data acquisition, routing, processing, and delivery to ensure operational performance requirements are met, and will continue to be met throughout the mission.
Raytheon developed a tool to measure, categorize, and automatically adjudicate packet behavior across the system, and metrics collected by this tool form the basis of the information to be presented. This presentation will provide details of ground system processing performance, such as data rates through each of the CGS nodes, data accounting statistics, and retransmission rates and success, along with data processing throughput, data availability, and latency. In particular, two key metrics relating to the most important operational measures, availability (the ratio of actual granules delivered to the theoretical maximum number of granules that could be delivered over a particular period) and latency (the time from the detection of a photon by an instrument to the time a product is made available to the data consumer's interface), are provided for Raw Data Records (RDRs), SDRs, and EDRs. Specific availability metrics include Adjusted Expected Granules (the count of the theoretical maximum number of granules minus adjudicated exceptions (granules missing due to factors external to the CGS)), Data Made Available (DMA) (the number of granules provided to CLASS) and Availability Results. Latency metrics are similar, including Data Made Available Minus Exceptions, Data Made Latency, and Latency Results. Overall results, measured during a ninety day period from October 2012 through January 2013, are excellent, with all values surpassing system requirements.
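The availability and latency measures defined above follow directly from their definitions in the text: availability is granules delivered over the theoretical maximum minus adjudicated exceptions, and latency runs from photon detection to product availability. The granule counts and times below are hypothetical.

```python
def availability(delivered, theoretical_max, exceptions):
    """Availability = delivered granules / Adjusted Expected Granules,
    where Adjusted Expected = theoretical max - adjudicated exceptions."""
    adjusted_expected = theoretical_max - exceptions
    return delivered / adjusted_expected

def latency_seconds(photon_detect_time, product_available_time):
    """Latency = time from photon detection by the instrument to the time
    the product is available at the data consumer's interface."""
    return product_available_time - photon_detect_time

# Hypothetical quarter: 1000 possible granules, 20 excepted, 970 delivered
avail = availability(970, 1000, 20)
lat = latency_seconds(100.0, 220.0)
```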
Shope, William G.; ,
1987-01-01
The US Geological Survey is utilizing a national network of more than 1000 satellite data-collection stations, four satellite-relay direct-readout ground stations, and more than 50 computers linked together in a private telecommunications network to acquire, process, and distribute hydrological data in near real-time. The four Survey offices operating a satellite direct-readout ground station provide near real-time hydrological data to computers located in other Survey offices through the Survey's Distributed Information System. The computerized distribution system permits automated data processing and distribution to be carried out in a timely manner under the control and operation of the Survey office responsible for the data-collection stations and for the dissemination of hydrological information to the water-data users.
12 CFR 7.5007 - Correspondent services.
Code of Federal Regulations, 2013 CFR
2013-01-01
... provision of computer networking packages and related hardware; (b) Data processing services; (c) The sale of software that performs data processing functions; (d) The development, operation, management, and...
12 CFR 7.5007 - Correspondent services.
Code of Federal Regulations, 2012 CFR
2012-01-01
... provision of computer networking packages and related hardware; (b) Data processing services; (c) The sale of software that performs data processing functions; (d) The development, operation, management, and...
12 CFR 7.5007 - Correspondent services.
Code of Federal Regulations, 2011 CFR
2011-01-01
... provision of computer networking packages and related hardware; (b) Data processing services; (c) The sale of software that performs data processing functions; (d) The development, operation, management, and...
12 CFR 7.5007 - Correspondent services.
Code of Federal Regulations, 2014 CFR
2014-01-01
... provision of computer networking packages and related hardware; (b) Data processing services; (c) The sale of software that performs data processing functions; (d) The development, operation, management, and...
Key Features of the Deployed NPP/NPOESS Ground System
NASA Astrophysics Data System (ADS)
Heckmann, G.; Grant, K. D.; Mulligan, J. E.
2010-12-01
The National Oceanic & Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics & Space Administration (NASA) are jointly acquiring the next-generation weather/environmental satellite system; the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current NOAA Polar-orbiting Operational Environmental Satellites (POES) and DoD Defense Meteorological Satellite Program (DMSP). NPOESS satellites carry sensors to collect meteorological, oceanographic, climatological, and solar-geophysical data of the earth, atmosphere, and space. The ground data processing segment is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence & Information Systems (IIS). The IDPS processes NPOESS Preparatory Project (NPP)/NPOESS satellite data to provide environmental data products/records (EDRs) to NOAA and DoD processing centers operated by the US government. The IDPS will process EDRs beginning with NPP and continuing through the lifetime of the NPOESS system. The command & telemetry segment is the Command, Control & Communications Segment (C3S), also developed by Raytheon IIS. C3S is responsible for managing the overall NPP/NPOESS missions from control & status of the space and ground assets to ensuring delivery of timely, high quality data from the Space Segment to IDPS for processing. In addition, the C3S provides the globally-distributed ground assets needed to collect and transport mission, telemetry, and command data between the satellites and processing locations. The C3S provides all functions required for day-to-day satellite commanding & state-of-health monitoring, and delivery of Stored Mission Data to each Central IDP for data products development and transfer to system subscribers. The C3S also monitors and reports system-wide health & status and data communications with external systems and between the segments. 
The C3S & IDPS segments were delivered & transitioned to operations for NPP. C3S transitioned to operations at the NOAA Satellite Operations Facility (NSOF) in Suitland Maryland in August 2007 and IDPS transitioned in July 2009. Both segments were involved with several compatibility tests with the NPP Satellite at the Ball Aerospace Technology Corporation (BATC) factory. The compatibility tests involved the spacecraft bus, the four sensors (VIIRS, ATMS, CrIS and OMPS), and both ground segments flowing data between the NSOF and BATC factory and flowing data from the polar ground station (Svalbard) over high-speed links back to the NSOF and the two IDP locations (NESDIS & AFWA). This presentation will describe the NPP/NPOESS ground architecture features & enhancements for the NPOESS era. These will include C3S-provided space-to-ground connectivity, reliable and secure data delivery and insight & oversight of the total operation. For NPOESS the ground architecture is extended to provide additional ground receptor sites to reduce data product delivery times to users and delivery of additional sensor data products from sensors similar to NPP and more NPOESS sensors. This architecture is also extended from two Centrals (NESDIS & AFWA) to two additional Centrals (FNMOC & NAVO). IDPS acts as a buffer minimizing changes in how users request and receive data products.
The application of automated operations at the Institutional Processing Center
NASA Technical Reports Server (NTRS)
Barr, Thomas H.
1993-01-01
The JPL Institutional and Mission Computing Division, Communications, Computing and Network Services Section, with its mission contractor, OAO Corporation, have for some time been applying automation to the operation of JPL's Information Processing Center (IPC). Automation does not come in one easy-to-use package. Automation for a data processing center is made up of many different software and hardware products supported by trained personnel. The IPC automation effort formally began with console automation, and has since spiraled out to include production scheduling, data entry, report distribution, online reporting, failure reporting and resolution, documentation, library storage, and operator and user education, while requiring the interaction of multi-vendor and locally developed software. To begin the process, automation goals are determined. Then a team including operations personnel is formed to research and evaluate available options. By acquiring knowledge of current products and those in development, taking an active role in industry organizations, and learning of other data centers' experiences, a forecast can be developed as to what direction technology is moving. With IPC management's approval, an implementation plan is developed and resources identified to test or implement new systems. As an example, IPC's new automated data entry system was researched by Data Entry, Production Control, and Advance Planning personnel. A proposal was then submitted to management for review. A determination to implement the new system was made, and the elements/personnel involved with the initial planning performed the implementation. The final steps of the implementation were educating data entry personnel in the areas affected and making the procedural changes necessary to the successful operation of the new system.
40 CFR 68.65 - Process safety information.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) CHEMICAL ACCIDENT PREVENTION PROVISIONS Program 3 Prevention Program § 68.65 Process safety... data; (4) Reactivity data: (5) Corrosivity data; (6) Thermal and chemical stability data; and (7... operator shall document that equipment complies with recognized and generally accepted good engineering...
Landsat-5 bumper-mode geometric correction
Storey, James C.; Choate, Michael J.
2004-01-01
The Landsat-5 Thematic Mapper (TM) scan mirror was switched from its primary operating mode to a backup mode in early 2002 in order to overcome internal synchronization problems arising from long-term wear of the scan mirror mechanism. The backup bumper mode of operation removes the constraints on scan start and stop angles enforced in the primary scan angle monitor operating mode, requiring additional geometric calibration effort to monitor the active scan angles. It also eliminates scan timing telemetry used to correct the TM scan geometry. These differences require changes to the geometric correction algorithms used to process TM data. A mathematical model of the scan mirror's behavior when operating in bumper mode was developed. This model includes a set of key timing parameters that characterize the time-varying behavior of the scan mirror bumpers. To simplify the implementation of the bumper-mode model, the bumper timing parameters were recast in terms of the calibration and telemetry data items used to process normal TM imagery. The resulting geometric performance, evaluated over 18 months of bumper-mode operations, though slightly reduced from that achievable in the primary operating mode, is still within the Landsat specifications when the data are processed with the most up-to-date calibration parameters.
Airborne Oceanographic Lidar (AOL) (Global Carbon Cycle)
NASA Technical Reports Server (NTRS)
2003-01-01
This bimonthly contractor progress report covers the operation, maintenance and data management of the Airborne Oceanographic Lidar and the Airborne Topographic Mapper. Monthly activities included: mission planning, sensor operation and calibration, data processing, data analysis, network development and maintenance and instrument maintenance engineering and fabrication.
NASA Astrophysics Data System (ADS)
Sunarya, I. Made Gede; Yuniarno, Eko Mulyanto; Purnomo, Mauridhi Hery; Sardjono, Tri Arief; Sunu, Ismoyo; Purnama, I. Ketut Eddy
2017-06-01
The carotid artery (CA) is one of the vital organs in the human body. CA features that can be used are position, size, and volume. The position feature can be used to determine the preliminary initialization of tracking. Examination of CA features can use ultrasound. Ultrasound imaging is operator dependent, so images of the same anatomy obtained by two or more operators may differ. This can affect the process of locating the CA. To reduce the level of subjectivity among operators, the position of the CA can be determined automatically. In this study, the proposed method segments the CA in B-mode ultrasound images based on morphology, geometry, and gradient direction. The study consists of three steps: data collection, preprocessing, and artery segmentation. The data used in this study were taken directly by the researchers and taken from the Brno University signal processing lab database. Each data set contains 100 carotid artery B-mode ultrasound images. The artery is modeled as an ellipse with center c, major axis a, and minor axis b. The proposed method achieved a high score on each data set: 97% (data set 1), 73% (data set 2), and 87% (data set 3). These segmentation results will then be used in the process of tracking the CA.
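The ellipse model mentioned above (center c with axes a and b) implies a simple membership test for candidate pixels. This sketch illustrates the model only, treating a and b as semi-axis lengths; it is not the paper's segmentation algorithm.

```python
def inside_ellipse(x, y, cx, cy, a, b):
    """True if point (x, y) lies inside the ellipse centered at (cx, cy)
    with horizontal semi-axis a and vertical semi-axis b (canonical form
    (x-cx)^2/a^2 + (y-cy)^2/b^2 <= 1)."""
    return ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0

# A pixel on the major axis inside the artery model, and one outside it
on_axis = inside_ellipse(1.0, 0.0, 0.0, 0.0, 2.0, 1.0)
outside = inside_ellipse(3.0, 0.0, 0.0, 0.0, 2.0, 1.0)
```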
Artificial intelligence issues related to automated computing operations
NASA Technical Reports Server (NTRS)
Hornfeck, William A.
1989-01-01
Large data processing installations represent target systems for effective applications of artificial intelligence (AI) constructs. The system organization of a large data processing facility at the NASA Marshall Space Flight Center is presented. The methodology and the issues which are related to AI application to automated operations within a large-scale computing facility are described. Problems to be addressed and initial goals are outlined.
Integrating artificial and human intelligence into tablet production process.
Gams, Matjaž; Horvat, Matej; Ožek, Matej; Luštrek, Mitja; Gradišek, Anton
2014-12-01
We developed a new machine learning-based method in order to facilitate the manufacturing processes of pharmaceutical products, such as tablets, in accordance with the Process Analytical Technology (PAT) and Quality by Design (QbD) initiatives. Our approach combines the data available from prior production runs with machine learning algorithms that are assisted by a human operator with expert knowledge of the production process. The process parameters encompass those that relate to the attributes of the precursor raw materials and those that relate to the manufacturing process itself. During manufacturing, our method allows the production operator to inspect the impacts of various settings of process parameters within their proven acceptable range with the purpose of choosing the most promising values in advance of the actual batch manufacture. The interaction between the human operator and the artificial intelligence system provides improved performance and quality. We successfully implemented the method on data provided by a pharmaceutical company for a particular product, a tablet, under development. We tested the accuracy of the method in comparison with some other machine learning approaches. The method is especially suitable for analyzing manufacturing processes characterized by a limited amount of data.
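The operator-guided exploration described above can be sketched as ranking candidate settings within their proven acceptable ranges using a learned quality model, then letting the operator choose among the top suggestions. The parameter names and the toy quality model below are hypothetical, not the paper's method.

```python
from itertools import product

def rank_settings(ranges, quality_model, top_n=3):
    """Enumerate every combination of parameter values within the proven
    acceptable ranges and return the top_n candidates by predicted quality."""
    names = sorted(ranges)
    candidates = [dict(zip(names, vals))
                  for vals in product(*(ranges[n] for n in names))]
    return sorted(candidates, key=quality_model, reverse=True)[:top_n]

# Hypothetical proven acceptable ranges for two process parameters
ranges = {"granulation_time": [5, 10, 15], "compression_force": [10, 20]}

# Toy stand-in for a trained model: quality peaks at moderate time, high force
best = rank_settings(
    ranges,
    lambda s: -abs(s["granulation_time"] - 10) + s["compression_force"] / 10,
)
```

In practice `quality_model` would be the model trained on prior production runs, and the operator would review `best` before committing the batch settings.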
Industrial Assessment Center (IAC) Operations Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopalakrishnan, Bhaskaran; Nimbalkar, Sachin U.; Wenning, Thomas J.
The IAC Operations Manual describes the organizational model and operations of the Industrial Assessment Center (IAC), Center management activities, the typical process of an energy assessment, and energy assessment data for specific industry sectors.
Processing AIRS Scientific Data Through Level 2
NASA Technical Reports Server (NTRS)
Oliphant, Robert; Lee, Sung-Yung; Chahine, Moustafa; Susskind, Joel; arnet, Christopher; McMillin, Larry; Goldberg, Mitchell; Blaisdell, John; Rosenkranz, Philip; Strow, Larrabee
2007-01-01
The Atmospheric Infrared Spectrometer (AIRS) Science Processing System (SPS) is a collection of computer programs, denoted product generation executives (PGEs), for processing the readings of the AIRS suite of infrared and microwave instruments orbiting the Earth aboard NASA's Aqua spacecraft. AIRS SPS at an earlier stage of development was described in "Initial Processing of Infrared Spectral Data" (NPO-35243), NASA Tech Briefs, Vol. 28, No. 11 (November 2004), page 39. To recapitulate: Starting from level 0 (representing raw AIRS data), the PGEs and their data products are denoted by alphanumeric labels (1A, 1B, and 2) that signify the successive stages of processing. The cited prior article described processing through level 1B (the level-2 PGEs were not yet operational). The level-2 PGEs, which are now operational, receive packages of level-1B geolocated radiance data products and produce geolocated geophysical atmospheric data products such as temperature and humidity profiles. The process of computing these geophysical data products is denoted "retrieval" and is quite complex. The main steps of the process are denoted microwave-only retrieval, cloud detection and cloud clearing, regression, full retrieval, and rapid transmittance algorithm.
ERIC Educational Resources Information Center
Woodell, Eric A.
2013-01-01
Information Technology (IT) professionals use the Information Technology Infrastructure Library (ITIL) process to better manage their business operations, measure performance, improve reliability and lower costs. This study examined the operational results of those data centers using ITIL against those that do not, and whether the results change…
Pike, William A; Riensche, Roderick M; Best, Daniel M; Roberts, Ian E; Whyatt, Marie V; Hart, Michelle L; Carr, Norman J; Thomas, James J
2012-09-18
Systems and computer-implemented processes for storage and management of information artifacts collected by information analysts using a computing device. The processes and systems can capture a sequence of interactive operation elements that are performed by the information analyst, who is collecting an information artifact from at least one of the plurality of software applications. The information artifact can then be stored together with the interactive operation elements as a snippet on a memory device, which is operably connected to the processor. The snippet comprises a view from an analysis application, data contained in the view, and the sequence of interactive operation elements stored as a provenance representation comprising operation element class, timestamp, and data object attributes for each interactive operation element in the sequence.
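The snippet structure the patent abstract describes (a view from an analysis application, the data in that view, and a provenance trail of interactive operation elements with class, timestamp, and data-object attributes) might be represented as follows. Field names are illustrative, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class OperationElement:
    """One captured interactive operation in the provenance sequence."""
    element_class: str       # e.g. "copy", "annotate" (hypothetical classes)
    timestamp: float
    data_attributes: dict    # attributes of the data object acted upon

@dataclass
class Snippet:
    """An information artifact plus the operations that produced it."""
    view: str                # view from the analysis application
    data: dict               # data contained in the view
    provenance: list = field(default_factory=list)

    def record(self, element_class, timestamp, **attrs):
        """Append an operation element to the provenance representation."""
        self.provenance.append(OperationElement(element_class, timestamp, attrs))

s = Snippet(view="chart", data={"series": [1, 2, 3]})
s.record("copy", 1.0, source="report.pdf")
```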
Shope, William G.
1987-01-01
The U. S. Geological Survey maintains the basic hydrologic data collection system for the United States. The Survey is upgrading the collection system with electronic communications technologies that acquire, telemeter, process, and disseminate hydrologic data in near real-time. These technologies include satellite communications via the Geostationary Operational Environmental Satellite, Data Collection Platforms in operation at over 1400 Survey gaging stations, Direct-Readout Ground Stations at nine Survey District Offices, and a network of powerful minicomputers that allows data to be processed and disseminated quickly.
International online support to process optimisation and operation decisions.
Onnerth, T B; Eriksson, J
2002-01-01
The information level at all technical facilities has developed from almost nothing 30-40 years ago to advanced IT (Information Technology) systems based on both chemical and mechanical on-line sensors for process and equipment. Still, the basic purpose of information is to get the right data at the right time for the decision to be made. Today a large amount of operational data is available at almost any European wastewater treatment plant, from the laboratory and from SCADA. The difficult part is to determine which data to keep, which to use in calculations, and how and where to make data available. With the STARcontrol system it is possible to separate out only the process-relevant data for use in on-line control and reporting at the engineering level, to optimise operation. Furthermore, the use of IT makes international communication possible, with full access to all the data on a single plant. In this way, expert supervision can be both very local, in the local language (e.g. Polish), and at the same time very professional, with Danish experts advising on Danish processes in Poland or Sweden, where some of the 12 STARcontrol systems are running.
Data Processing Technology, A Suggested 2-Year Post High School Curriculum.
ERIC Educational Resources Information Center
Central Texas Coll., Killeen.
This guide identifies technicians, states specific job requirements, and describes special problems in defining, initiating, and operating post-high school programs in data processing technology. The following are discussed: (1) the program (employment opportunities, the technician, work performed by data processing personnel, the faculty, student…
Survey of Munitions Response Technologies
2006-06-01
Only table-of-contents and text fragments survive extraction: 3.3.4 Digital Data Processing; 4.0 Source Data and Methods; 6.1.6 DGM versus Mag and Flag Processes; 6.1.7 Translation to ... Signatures, surface clutter, variances in operator technique, target selection, and data processing all degrade from and affect optimum performance.
Flexible server-side processing of climate archives
NASA Astrophysics Data System (ADS)
Juckes, Martin; Stephens, Ag; Damasio da Costa, Eduardo
2014-05-01
The flexibility and interoperability of OGC Web Processing Services are combined with an extensive range of data processing operations supported by the Climate Data Operators (CDO) library to facilitate processing of the CMIP5 climate data archive. The challenges posed by this peta-scale archive allow us to test and develop systems which will help us to deal with approaching exa-scale challenges. The CEDA WPS package allows users to manipulate data in the archive and export the results without first downloading the data -- in some cases this can drastically reduce the data volumes which need to be transferred and greatly reduce the time needed for the scientists to get their results. Reductions in data transfer are achieved at the expense of an additional computational load imposed on the archive (or near-archive) infrastructure. This is managed with a load balancing system. Short jobs may be run in near real-time, longer jobs will be queued. When jobs are queued the user is provided with a web dashboard displaying job status. A clean split between the data manipulation software and the request management software is achieved by exploiting the extensive CDO library. This library has a long history of development to support the needs of the climate science community. Use of the library ensures that operations run on data by the system can be reproduced by users using the same operators installed on their own computers. Examples using the system deployed for the CMIP5 archive will be shown and issues which need to be addressed as archive volumes expand into the exa-scale will be discussed.
Flexible server-side processing of climate archives
NASA Astrophysics Data System (ADS)
Juckes, M. N.; Stephens, A.; da Costa, E. D.
2013-12-01
The flexibility and interoperability of OGC Web Processing Services are combined with an extensive range of data processing operations supported by the Climate Data Operators (CDO) library to facilitate processing of the CMIP5 climate data archive. The challenges posed by this peta-scale archive allow us to test and develop systems which will help us to deal with approaching exa-scale challenges. The CEDA WPS package allows users to manipulate data in the archive and export the results without first downloading the data -- in some cases this can drastically reduce the data volumes which need to be transferred and greatly reduce the time needed for the scientists to get their results. Reductions in data transfer are achieved at the expense of an additional computational load imposed on the archive (or near-archive) infrastructure. This is managed with a load balancing system. Short jobs may be run in near real-time, longer jobs will be queued. When jobs are queued the user is provided with a web dashboard displaying job status. A clean split between the data manipulation software and the request management software is achieved by exploiting the extensive CDO library. This library has a long history of development to support the needs of the climate science community. Use of the library ensures that operations run on data by the system can be reproduced by users using the same operators installed on their own computers. Examples using the system deployed for the CMIP5 archive will be shown and issues which need to be addressed as archive volumes expand into the exa-scale will be discussed.
NASA Astrophysics Data System (ADS)
Goldbery, R.; Tehori, O.
SEDPAK provides a comprehensive software package for operation of a settling tube and sand analyzer (2-0.063 mm) and includes data-processing programs for statistical and graphic output of results. The programs are menu-driven and written in APPLESOFT BASIC, conforming with APPLE 3.3 DOS. Data storage and retrieval from disc is an important feature of SEDPAK. Additional features of SEDPAK include condensation of raw settling data via standard size-calibration curves to yield statistical grain-size parameters, plots of grain-size frequency distributions, and cumulative log/probability curves. The program also has a module for processing grain-size frequency data from sieved samples. A further feature of SEDPAK is the option for automatic data processing and graphic output of a sequential or nonsequential array of samples on one side of a disc.
Converting CSV Files to RKSML Files
NASA Technical Reports Server (NTRS)
Trebi-Ollennu, Ashitey; Liebersbach, Robert
2009-01-01
A computer program converts, into a format suitable for processing on Earth, files of downlinked telemetric data pertaining to the operation of the Instrument Deployment Device (IDD), which is a robot arm on either of the Mars Exploration Rovers (MERs). The raw downlinked data files are in comma-separated-value (CSV) format. The present program converts the files into Rover Kinematics State Markup Language (RKSML), which is an Extensible Markup Language (XML) format that facilitates representation of operations of the IDD and enables analysis of the operations by means of the Rover Sequencing Validation Program (RSVP), which is used to build sequences of commanded operations for the MERs. After conversion by means of the present program, the downlinked data can be processed by RSVP, enabling the MER downlink operations team to play back the actual IDD activity represented by the telemetric data against the planned IDD activity. Thus, the present program enhances the diagnosis of anomalies that manifest themselves as differences between actual and planned IDD activities.
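The core CSV-to-XML conversion step described above can be sketched as follows. The actual RKSML schema is not given in the abstract, so the `<State>`/`<Joint>` element names, the column layout (timestamp followed by joint angles), and the joint names are invented placeholders:

```python
import csv
import io
import xml.etree.ElementTree as ET

def csv_to_rksml(csv_text, joint_names):
    """Convert rows of 'time,angle1,angle2,...' into a placeholder XML form."""
    root = ET.Element("RKSML")
    for row in csv.reader(io.StringIO(csv_text.strip())):
        # One <State> per telemetry sample; the first column is its time tag
        state = ET.SubElement(root, "State", time=row[0])
        for name, angle in zip(joint_names, row[1:]):
            ET.SubElement(state, "Joint", name=name).text = angle
    return ET.tostring(root, encoding="unicode")

# Two synthetic telemetry rows for a two-joint arm
xml_out = csv_to_rksml("100.5,0.12,1.57\n101.5,0.14,1.55",
                       ["shoulder", "elbow"])
print(xml_out)
```

The value of such a conversion is that a structured format lets downstream tools address individual states and joints instead of re-parsing positional CSV columns.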
Calibrating thermal behavior of electronics
Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.
2017-07-11
A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.
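The calibrate-then-estimate idea in this patent abstract can be illustrated with a minimal sketch: during calibration, fit a relationship between an indirect thermal signal and a measured reference temperature; during operation, apply that relationship to new indirect readings. The linear form and all numbers below are assumptions for illustration, not the patented method:

```python
import numpy as np

# Calibration data (synthetic): indirect readings vs. measured temperatures.
# The true relationship here is measured = 1.0 * indirect + 25.0.
indirect = np.array([10.0, 20.0, 30.0, 40.0])
measured = np.array([35.0, 45.0, 55.0, 65.0])

# Determine the relationship by least-squares: measured ~= a*indirect + b
a, b = np.polyfit(indirect, measured, 1)

def estimate_temperature(reading):
    """Apply the calibrated relationship to a reading taken in operation."""
    return a * reading + b

print(round(estimate_temperature(25.0), 1))  # 50.0 for this synthetic data
```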
Calibrating thermal behavior of electronics
Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.
2016-05-31
A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.
Calibrating thermal behavior of electronics
Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.
2017-01-03
A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.
Onboard Processing and Autonomous Operations on the IPEX Cubesat
NASA Technical Reports Server (NTRS)
Chien, Steve; Doubleday, Joshua; Ortega, Kevin; Flatley, Tom; Crum, Gary; Geist, Alessandro; Lin, Michael; Williams, Austin; Bellardo, John; Puig-Suari, Jordi;
2012-01-01
IPEX is a 1U CubeSat sponsored by the NASA Earth Science Technology Office (ESTO), the goals of which are: (1) flight validate high-performance flight computing; (2) flight validate onboard instrument data processing and product generation software; (3) flight validate autonomous operations for instrument processing; and (4) enhance NASA outreach and university ties.
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Litt, Jonathan S.
2010-01-01
This paper presents an algorithm that automatically identifies and extracts steady-state engine operating points from engine flight data. It calculates the mean and standard deviation of select parameters contained in the incoming flight data stream. If the standard deviation of the data falls below defined constraints, the engine is assumed to be at a steady-state operating point, and the mean measurement data at that point are archived for subsequent condition monitoring purposes. The fundamental design of the steady-state data filter is completely generic and applicable for any dynamic system. Additional domain-specific logic constraints are applied to reduce data outliers and variance within the collected steady-state data. The filter is designed for on-line real-time processing of streaming data as opposed to post-processing of the data in batch mode. Results of applying the steady-state data filter to recorded helicopter engine flight data are shown, demonstrating its utility for engine condition monitoring applications.
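The filter logic described above (archive the window mean whenever the windowed standard deviation falls below a constraint) can be sketched in a few lines. The window size and threshold here are illustrative choices, not the paper's values, and the domain-specific outlier logic is omitted:

```python
import statistics
from collections import deque

def steady_state_filter(stream, window=5, std_limit=0.5):
    """Archive window means of a streaming parameter during steady state."""
    buf, archived = deque(maxlen=window), []
    for sample in stream:          # on-line: one sample at a time
        buf.append(sample)
        if len(buf) == window and statistics.stdev(buf) < std_limit:
            archived.append(statistics.fmean(buf))  # steady-state point
    return archived

# Transient start-up followed by a steady segment near 100
data = [10, 30, 55, 80, 100, 100.1, 99.9, 100.0, 100.2]
points = steady_state_filter(data)
print(points)
```

Because the test is purely statistical, the same filter applies to any dynamic system's parameters, which matches the paper's claim that the basic design is generic.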
NASA Wallops Flight Center GEOS-3 altimeter data processing report
NASA Technical Reports Server (NTRS)
Stanley, H. R.; Dwyer, R. E.
1980-01-01
The procedures used to process the GEOS-3 radar altimeter data from raw telemetry data to a final user data product are described. In addition, the radar altimeter hardware design and operating parameters are presented to aid the altimeter user in understanding the altimeter data.
1979-11-01
Only fragments survive extraction: appendix and list-of-tables entries (C-9 Transition Configuration; Table 2.1-1, Parameters Describing ATC Operation - Baseline) and table text describing buffer load from each sensor, medium storage (buffer space for data in and out is the largest factor), a processing expression P = K + KR (where R is the number of ...), application of factors to the data from the next scan, an operational role providing support service, and a dependence requiring data from Preliminary Processing and Target ...
NASA Technical Reports Server (NTRS)
Cramer, Christopher J.; Wright, James D.; Simmons, Scott A.; Bobbitt, Lynn E.; DeMoss, Joshua A.
2015-01-01
The paper will present a brief background of the previous data acquisition system at the National Transonic Facility (NTF) and the reasoning and goals behind the upgrade to the current Test SLATE (Test Software Laboratory and Automated Testing Environments) data acquisition system. The components, performance characteristics, and layout of the Test SLATE system within the NTF control room will be discussed. The development, testing, and integration of Test SLATE within NTF operations will be detailed. The operational capabilities of the system will be outlined including: test setup, instrumentation calibration, automatic test sequencer setup, data recording, communication between data and facility control systems, real time display monitoring, and data reduction. The current operational status of the Test SLATE system and its performance during recent NTF testing will be highlighted including high-speed, frame-by-frame data acquisition with conditional sampling post-processing applied. The paper concludes with current development work on the system including the capability for real-time conditional sampling during data acquisition and further efficiency enhancements to the wind tunnel testing process.
Code of Federal Regulations, 2014 CFR
2014-01-01
... quantification system; data management and maintenance system; and control, oversight, and validation system for...-supervised institution's advanced IRB systems, operational risk management processes, operational risk data...-length basis between the seller and the obligor (intercompany accounts receivable and receivables subject...
Murray, Jessica R.; Svarc, Jerry L.
2017-01-01
The U.S. Geological Survey Earthquake Science Center collects and processes Global Positioning System (GPS) data throughout the western United States to measure crustal deformation related to earthquakes and tectonic processes as part of a long‐term program of research and monitoring. Here, we outline data collection procedures and present the GPS dataset built through repeated temporary deployments since 1992. This dataset consists of observations at ∼1950 locations. In addition, this article details our data processing and analysis procedures, which consist of the following. We process the raw data collected through temporary deployments, in addition to data from continuously operating western U.S. GPS stations operated by multiple agencies, using the GIPSY software package to obtain position time series. Subsequently, we align the positions to a common reference frame, determine the optimal parameters for a temporally correlated noise model, and apply this noise model when carrying out time‐series analysis to derive deformation measures, including constant interseismic velocities, coseismic offsets, and transient postseismic motion.
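The time-series analysis step (a constant interseismic velocity plus a coseismic offset) can be illustrated with a least-squares fit on synthetic data. This bare-bones trajectory model is an assumption for illustration; it omits the GIPSY processing, reference-frame alignment, postseismic terms, and correlated-noise modeling the article describes:

```python
import numpy as np

t = np.arange(0.0, 10.0, 0.5)            # epochs in decimal years
t_eq = 5.0                               # earthquake time (assumed known)

# Synthetic position component: 2 mm/yr velocity + 15 mm coseismic step
truth = 2.0 * t + 15.0 * (t >= t_eq)

# Design matrix: [intercept, constant velocity, Heaviside step at t_eq]
G = np.column_stack([np.ones_like(t), t, (t >= t_eq).astype(float)])
m, *_ = np.linalg.lstsq(G, truth, rcond=None)

print(np.round(m, 3))  # recovers ~ [0., 2., 15.] on this noiseless data
```

In practice the noise model matters: with temporally correlated noise, ordinary least squares gives the right estimates but badly optimistic uncertainties, which is why the authors fit a correlated-noise model first.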
Evolution of International Space Station Program Safety Review Processes and Tools
NASA Technical Reports Server (NTRS)
Ratterman, Christian D.; Green, Collin; Guibert, Matt R.; McCracken, Kristle I.; Sang, Anthony C.; Sharpe, Matthew D.; Tollinger, Irene V.
2013-01-01
The International Space Station Program at NASA is constantly seeking to improve the processes and systems that support safe space operations. To that end, the ISS Program decided to upgrade its Safety and Hazard data systems with three goals: make safety and hazard data more accessible; better support the interconnection of different types of safety data; and increase the efficiency (and compliance) of safety-related processes. These goals are accomplished by moving data into a web-based structured data system that includes strong process support and supports integration with other information systems. Along with the data systems, ISS is evolving its submission requirements and safety process requirements to support the improved model. In contrast to existing operations (where paper processes and electronic file repositories are used for safety data management), the web-based solution provides the program with dramatically faster access to records, the ability to search for and reference specific data within records, reduced workload for hazard updates and approval, and process support including digital signatures and controlled record workflow. In addition, integration with other key data systems provides assistance with assessments of flight readiness, more efficient review and approval of operational controls, and better tracking of international safety certifications. This approach will also provide new opportunities to streamline the sharing of data with ISS international partners while maintaining compliance with applicable laws and respecting restrictions on proprietary data. One goal of this paper is to outline the approach taken by the ISS Program to determine requirements for the new system and to devise a practical and efficient implementation strategy. From conception through implementation, ISS and NASA partners utilized a user-centered software development approach focused on user research and iterative design methods.
The user-centered approach used for the new ISS hazard system drew on focused user research and iterative design methods employed by the Human Computer Interaction Group at NASA Ames Research Center. In particular, the approach emphasized reducing the workload associated with document and data management activities so that more resources can be allocated to the operational use of data in problem solving, safety analysis, and recurrence control. The methods and techniques used to understand existing processes and systems, to recognize opportunities for improvement, and to design and review improvements are described with the intent that similar techniques can be employed elsewhere in safety operations. A second goal of this paper is to provide an overview of the web-based data system implemented by ISS. The software selected for the ISS hazard system, the Mission Assurance System (MAS), is a NASA-customized variant of the open-source software project Bugzilla. The origin and history of MAS as a NASA software project and the rationale for (and advantages of) using open-source software are documented elsewhere (Green, et al., 2009).
Nagy, Paul G; Warnock, Max J; Daly, Mark; Toland, Christopher; Meenan, Christopher D; Mezrich, Reuben S
2009-11-01
Radiology departments today are faced with many challenges to improve operational efficiency, performance, and quality. Many organizations rely on antiquated, paper-based methods to review their historical performance and understand their operations. With increased workloads, geographically dispersed image acquisition and reading sites, and rapidly changing technologies, this approach is increasingly untenable. A Web-based dashboard was constructed to automate the extraction, processing, and display of indicators and thereby provide useful and current data for twice-monthly departmental operational meetings. The feasibility of extracting specific metrics from clinical information systems was evaluated as part of a longer-term effort to build a radiology business intelligence architecture. Operational data were extracted from clinical information systems and stored in a centralized data warehouse. Higher-level analytics were performed on the centralized data, a process that generated indicators in a dynamic Web-based graphical environment that proved valuable in discussion and root cause analysis. Results aggregated over a 24-month period since implementation suggest that this operational business intelligence reporting system has provided significant data for driving more effective management decisions to improve productivity, performance, and quality of service in the department.
Automation of Digital Collection Access Using Mobile and Wireless Data Terminals
NASA Astrophysics Data System (ADS)
Leontiev, I. V.
Information technologies have become vital because of information-processing needs, database access, data analysis, and decision support. Currently, many scientific projects are oriented toward database integration of heterogeneous systems. The problem of on-line, rapid access to large integrated systems of digital collections is also very important. Users usually move between different locations, either at work or at home, and in most cases need efficient remote access to information stored in integrated data collections. Desktop computers are unable to fulfill these needs, so mobile and wireless devices become helpful. Handhelds and data terminals are necessary in medical assistance (they store detailed information about each patient and help nurses), and immediate access to data collections is used in highway patrol services (databanks of cars, owners, and driver licences). Using mobile access, warehouse operations can be validated, and library and museum item cycle counting can be sped up using on-line barcode scanning and central database access. That is why mobile devices - cell phones, PDAs, and handheld computers with wireless access, including WindowsCE and PalmOS terminals - have become popular. Generally, mobile devices have a relatively slow processor and limited display capabilities, but they are effective for storing and displaying textual data, recognize user handwriting with a stylus, and support a GUI. Users can perform operations on a handheld terminal and exchange data with the main system (using immediate radio access, or off-line access during the synchronization process) for update. In our report, we give an approach for mobile access to data collections which raises the efficiency of data processing in a book library, helps to control available books and books in stock, validates service charges, eliminates staff mistakes, and generates requests for book delivery.
Our system uses Symbol RF mobile devices (with radio-channel access) and Symbol Palm Terminal data terminals for batch processing and synchronization with remote library databases. We discuss the use of PalmOS-compatible devices and WindowsCE terminals. Our software system is based on a modular, scalable three-tier architecture; additional functionality can be easily customized. Scalability is also supplied by Internet/Intranet technologies and radio-access points. The base module of the system supports generic warehouse operations: cycle counting with handheld barcode scanners, efficient item delivery and issue, item movement, reserving, and report generation on finished and in-process operations. Movements are optimized using the worker's current location; operations are sorted in priority order and transmitted to the workers' mobile and wireless terminals. Mobile terminals improve control of task processing, eliminate staff mistakes, display up-to-date information about the main processes, provide data for on-line reports, and significantly raise the efficiency of data exchange.
IUS/TUG orbital operations and mission support study. Volume 4: Project planning data
NASA Technical Reports Server (NTRS)
1975-01-01
Planning data are presented for the development phases of interim upper stage (IUS) and tug systems. Major project planning requirements, major event schedules, milestones, system development and operations process networks, and relevant support research and technology requirements are included. Topics discussed include: IUS flight software; tug flight software; IUS/tug ground control center facilities, personnel, data systems, software, and equipment; IUS mission events; tug mission events; tug/spacecraft rendezvous and docking; tug/orbiter operations interface, and IUS/orbiter operations interface.
Architectures Toward Reusable Science Data Systems
NASA Astrophysics Data System (ADS)
Moses, J. F.
2014-12-01
Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building ground systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research, NOAA's weather satellites and USGS's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience the goal is to recognize architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.
Architectures Toward Reusable Science Data Systems
NASA Technical Reports Server (NTRS)
Moses, John
2015-01-01
Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research and NOAA's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience we expect to find architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.
Interactive data-processing system for metallurgy
NASA Technical Reports Server (NTRS)
Rathz, T. J.
1978-01-01
Equipment testing indicates that the system can rapidly and accurately process metallurgical and materials-processing data for a wide range of applications. Advantages include an increase in contrast between areas on an image, the ability to analyze images via operator-written programs, and space available for storing images.
Nimbus/TOMS Science Data Operations Support
NASA Technical Reports Server (NTRS)
Childs, Jeff
1998-01-01
1. Participate in and provide analysis of laboratory and in-flight calibration of UV sensors used for space observations of backscattered UV radiation. 2. Provide support to the TOMS Science Operations Center, including generating instrument command lists and analysis of TOMS health and safety data. 3. Develop and maintain software and algorithms designed to capture and process raw spacecraft and instrument data, convert the instrument output into measured radiance and irradiances, and produce scientifically valid products. 4. Process the TOMS data into Level 1, Level 2, and Level 3 data products. 5. Provide analysis of the science data products in support of NASA GSFC Code 916's research.
Nimbus/TOMS Science Data Operations Support
NASA Technical Reports Server (NTRS)
1998-01-01
Projected goals include the following: (1) Participate in and provide analysis of laboratory and in-flight calibration of UV sensors used for space observations of backscattered UV radiation; (2) Provide support to the TOMS Science Operations Center, including generating instrument command lists and analysis of TOMS health and safety data; (3) Develop and maintain software and algorithms designed to capture and process raw spacecraft and instrument data, convert the instrument output into measured radiance and irradiances, and produce scientifically valid products; (4) Process the TOMS data into Level 1, Level 2, and Level 3 data products; (5) Provide analysis of the science data products in support of NASA GSFC Code 916's research.
Meta-control of combustion performance with a data mining approach
NASA Astrophysics Data System (ADS)
Song, Zhe
Large-scale combustion processes are complex and pose challenges for optimizing their performance. Traditional approaches based on thermal dynamics have limitations in finding optimal operational regions due to the time-shifting nature of the process. Recent advances in information technology enable people to collect large volumes of process data easily and continuously. The collected process data contain rich information about the process and, to some extent, represent a digital copy of the process over time. Although large volumes of data exist in industrial combustion processes, they are not fully utilized to the level where the process can be optimized. Data mining is an emerging science that finds patterns or models in large data sets. It has found many successful applications in business marketing, medical, and manufacturing domains. The focus of this dissertation is on applying data mining to industrial combustion processes, and ultimately optimizing combustion performance; however, the philosophy, methods, and frameworks discussed in this research can also be applied to other industrial processes. Optimizing an industrial combustion process has two major challenges. One is that the underlying process model changes over time, and obtaining an accurate process model is nontrivial. The other is that a process model with high fidelity is usually highly nonlinear, so solving the optimization problem needs efficient heuristics. This dissertation is set to solve these two major challenges. The major contribution of this four-year research is a data-driven solution to optimizing the combustion process, in which a process model or knowledge is identified from the process data, and optimization is then executed by evolutionary algorithms to search for optimal operating regions.
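The two-step scheme this dissertation describes (identify a process model from data, then search it with an evolutionary algorithm for good operating regions) can be caricatured in a few lines. The quadratic "identified model" and all algorithm parameters below are invented for illustration and stand in for a model actually fitted from process data:

```python
import random

random.seed(0)  # deterministic for the example

# Step 1: stand-in for a model identified from process data; here efficiency
# peaks at a setpoint of 3.2, which the optimizer does not know.
def identified_model(x):
    return -(x - 3.2) ** 2 + 10.0

# Step 2: a minimal elitist evolutionary search over the setpoint range
def evolve(model, lo=0.0, hi=6.0, pop=20, gens=60):
    population = [random.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        # keep the best half, breed children by Gaussian mutation
        parents = sorted(population, key=model, reverse=True)[: pop // 2]
        children = [min(hi, max(lo, p + random.gauss(0, 0.2)))
                    for p in parents]
        population = parents + children
    return max(population, key=model)

best = evolve(identified_model)
print(round(best, 1))  # expected to land near the 3.2 optimum
```

In the real setting the model must be re-identified periodically, since the abstract notes that the underlying process model drifts over time.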
Collection, storage, retrieval, and publication of water-resources data
Showen, C. R.
1978-01-01
This publication represents a series of papers devoted to the subject of collection, storage, retrieval, and publication of hydrologic data. The papers were presented by members of the U.S. Geological Survey at the International Seminar on Organization and Operation of Hydrologic Services, Ottawa, Canada, July 15-16, 1976, sponsored by the World Meteorological Organization. The first paper, 'Standardization of Hydrologic Measurements,' by George F. Smoot, discusses the need for standardization of the methods and instruments used in measuring hydrologic data. The second paper, 'Use of Earth Satellites for Automation of Hydrologic Data Collection,' by Richard W. Paulson, discusses the use of inexpensive battery-operated radios to transmit real-time hydrologic data to Earth satellites and back to ground receiving stations for computer processing. The third paper, 'Operation Hydrometeorological Data-Collection System for the Columbia River,' by Nicholas A. Kallio, discusses the operation of a complex water-management system for a large river basin utilizing the latest automatic telemetry and processing devices. The fourth paper, 'Storage and Retrieval of Water-Resources Data,' by Charles R. Showen, discusses the U.S. Geological Survey's National Water Data Storage and Retrieval System (WATSTORE) and its use in processing water-resources data. The final paper, 'Publication of Water Resources Data,' by S. M. Lang and C. B. Ham, discusses the requirement for publication of water-resources data to meet the needs of a widespread audience and for archival purposes. (See W78-09324 thru W78-09328) (Woodard-USGS)
Natural Resource Information System. Volume 2: System operating procedures and instructions
NASA Technical Reports Server (NTRS)
1972-01-01
A total computer software system description is provided for the prototype Natural Resource Information System designed to store, process, and display data of maximum usefulness to land management decision making. Program modules are described, as are the computer file design, file updating methods, digitizing process, and paper tape conversion to magnetic tape. Operating instructions for the system, data output, printed output, and graphic output are also discussed.
2009-10-01
current M&S covering support to operations, human behaviour representation, asymmetric warfare, defence against terrorism and...methods, tools, data, intellectual capital, and processes to address these capability requirements. Fourth, there is a need to compare capability...requirements to current capabilities to identify gaps that may be addressed with DoD HSCB methods, tools, data, intellectual capital, and process
Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing
Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng
2015-01-01
A push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA’s CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels. PMID:26566545
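The occupancy-to-performance connection the authors model can be caricatured in a few lines. This is an assumed toy model, not the paper's actual formulation: throughput is taken to grow linearly with occupancy up to a saturation point, after which extra resident warps no longer hide latency.

```python
# Toy occupancy model (an assumption, not the paper's): a compute-bound
# kernel's throughput rises linearly with occupancy until a saturation point,
# beyond which additional occupancy buys nothing.
def predict_kernel_time(total_ops, peak_ops_per_sec, occupancy, saturation=0.5):
    effective = min(occupancy / saturation, 1.0)   # fraction of peak attained
    return total_ops / (peak_ops_per_sec * effective)
```

Under this model, halving occupancy below the saturation point doubles the predicted runtime, while raising it above the saturation point changes nothing.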
A scientific operations plan for the large space telescope. [ground support system design
NASA Technical Reports Server (NTRS)
West, D. K.
1977-01-01
The paper describes an LST ground system which is compatible with the operational requirements of the LST. The goal of the approach is to minimize the cost of post launch operations without seriously compromising the quality and total throughput of LST science. Attention is given to cost constraints and guidelines, the telemetry operations processing systems (TELOPS), the image processing facility, ground system planning and data flow, and scientific interfaces.
Computer Sciences and Data Systems, volume 1
NASA Technical Reports Server (NTRS)
1987-01-01
Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.
Measurement-based reliability/performability models
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen
1987-01-01
Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
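One way to see why measured holding times can rule out a plain Markov model is the coefficient of variation (CV): an exponential holding time has CV equal to 1. The sketch below is illustrative of that diagnostic only, not the paper's actual procedure; the tolerance is an invented parameter.

```python
from statistics import mean, pstdev

# CV of observed holding times in one operational or error state.
def holding_time_cv(samples):
    m = mean(samples)
    return pstdev(samples) / m

# Heuristic flag: a CV far from 1 suggests the holding time is not a simple
# exponential, motivating a semi-Markov model with general distributions.
def needs_semi_markov(samples, tol=0.3):
    return abs(holding_time_cv(samples) - 1.0) > tol
```

Near-constant holding times (CV close to 0) and heavy-tailed ones (CV well above 1) both trip the flag; only roughly exponential data passes.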
40 CFR 63.2270 - How do I monitor and collect data to demonstrate continuous compliance?
Code of Federal Regulations, 2011 CFR
2011-07-01
... appropriate, monitor malfunctions, associated repairs, and required quality assurance or control activities... monitoring in continuous operation at all times that the process unit is operating. For purposes of calculating data averages, you must not use data recorded during monitoring malfunctions, associated repairs...
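The averaging rule in these CFR excerpts — data recorded during monitoring malfunctions and associated repairs must be excluded before computing data averages — can be sketched as follows. The record format and flag names are invented for illustration; they are not from the regulation.

```python
# Hypothetical record format: (measured_value, flag), where flag marks the
# monitoring status for that interval. Only 'valid' intervals enter the average.
def compliance_average(records):
    valid = [v for v, flag in records if flag == "valid"]
    if not valid:
        raise ValueError("no valid monitoring data in the averaging period")
    return sum(valid) / len(valid)
```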
40 CFR 63.2270 - How do I monitor and collect data to demonstrate continuous compliance?
Code of Federal Regulations, 2014 CFR
2014-07-01
... control activities (including, as applicable, calibration checks and required zero and span adjustments), you must conduct all monitoring in continuous operation at all times that the process unit is operating. For purposes of calculating data averages, you must not use data recorded during monitoring...
40 CFR 63.2270 - How do I monitor and collect data to demonstrate continuous compliance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... appropriate, monitor malfunctions, associated repairs, and required quality assurance or control activities... monitoring in continuous operation at all times that the process unit is operating. For purposes of calculating data averages, you must not use data recorded during monitoring malfunctions, associated repairs...
40 CFR 63.2270 - How do I monitor and collect data to demonstrate continuous compliance?
Code of Federal Regulations, 2013 CFR
2013-07-01
... control activities (including, as applicable, calibration checks and required zero and span adjustments), you must conduct all monitoring in continuous operation at all times that the process unit is operating. For purposes of calculating data averages, you must not use data recorded during monitoring...
NASA Astrophysics Data System (ADS)
Coughlin, J.; Mital, R.; Nittur, S.; SanNicolas, B.; Wolf, C.; Jusufi, R.
2016-09-01
Operational analytics, when combined with Big Data technologies and predictive techniques, has been shown to be valuable in detecting mission-critical sensor anomalies that might be missed by conventional analytical techniques. Our approach helps analysts and leaders make informed and rapid decisions by analyzing large volumes of complex data in near real-time and presenting it in a manner that facilitates decision making. It provides cost savings by being able to alert and predict when sensor degradations pass a critical threshold and impact mission operations. Operational analytics, which uses Big Data tools and technologies, can process very large data sets containing a variety of data types to uncover hidden patterns, unknown correlations, and other relevant information. When combined with predictive techniques, it provides a mechanism to monitor and visualize these data sets and gain insight into degradations encountered in large sensor systems such as the space surveillance network. In this study, data from a notional sensor is simulated, and we use big data technologies, predictive algorithms, and operational analytics to process the data and predict sensor degradations. This study uses data products that would commonly be analyzed at a site, and it builds on a big data architecture that has previously been proven valuable in detecting anomalies. This paper outlines our methodology of implementing an operational analytic solution through data discovery, learning and training of data modeling and predictive techniques, and deployment. Through this methodology, we implement a functional architecture focused on exploring available big data sets and determining practical analytic, visualization, and predictive technologies.
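The alerting idea — predicting when a sensor degradation metric will pass a critical threshold — can be sketched with a plain least-squares trend. A production system would use the more elaborate predictive techniques the paper describes; this is a minimal stand-in with invented names.

```python
# Fit a least-squares line to a degradation metric over time and extrapolate
# the time at which it crosses the critical threshold.
def predict_threshold_crossing(times, values, threshold):
    n = len(times)
    tm = sum(times) / n
    vm = sum(values) / n
    slope = (sum((t - tm) * (v - vm) for t, v in zip(times, values))
             / sum((t - tm) ** 2 for t in times))
    intercept = vm - slope * tm
    if slope <= 0:
        return None  # metric is flat or improving: no predicted crossing
    return (threshold - intercept) / slope
```

An alert would fire when the predicted crossing time falls within some planning horizon, giving operators lead time before the degradation impacts the mission.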
Code of Federal Regulations, 2014 CFR
2014-01-01
... internal risk rating and segmentation system; risk parameter quantification system; data management and... advanced IRB systems, operational risk management processes, operational risk data and assessment systems... generated on an arm's-length basis between the seller and the obligor (intercompany accounts receivable and...
Reference Model for Project Support Environments Version 1.0
1993-02-28
relationship with the framework’s Process Support services and with the Lifecycle Process Engineering services. Examples: * ORCA (Object-based...Design services. Examples: * ORCA (Object-based Requirements Capture and Analysis). * RETRAC (REquirements TRACeability). 4.3 Life-Cycle Process..."traditional" computer tools. Operations: Examples of audio and video processing operations include: * Create, modify, and delete sound and video data
NASA Astrophysics Data System (ADS)
Nguyen, Duy
2012-07-01
Digital Elevation Models (DEMs) are used in many applications in the context of earth sciences such as in topographic mapping, environmental modeling, rainfall-runoff studies, landslide hazard zonation, seismic source modeling, etc. During the last years a multitude of scientific applications of Synthetic Aperture Radar Interferometry (InSAR) techniques has evolved. It has been shown that InSAR is an established technique for generating high-quality DEMs from spaceborne and airborne data, and that it has advantages over other methods for the generation of large-area DEMs. However, the processing of InSAR data is still a challenging task. This paper describes the InSAR operational steps and processing chain for DEM generation from Single Look Complex (SLC) SAR data and compares a satellite SAR estimate of surface elevation with a digital elevation model (DEM) from a topographic map. The operational steps are performed in three major stages: Data Search, Data Processing, and Product Validation. The Data Processing stage is further divided into five steps: Data Pre-Processing, Co-registration, Interferogram generation, Phase unwrapping, and Geocoding. The processing steps have been tested with ERS-1/2 data using the Delft Object-oriented Interferometric (DORIS) InSAR processing software. Results of applying the described processing steps to a real data set are presented.
NASA Technical Reports Server (NTRS)
Griffin, Ashley
2017-01-01
The Joint Polar Satellite System (JPSS) Program Office is the supporting organization for the Suomi National Polar-orbiting Partnership (S-NPP) and JPSS-1 satellites. S-NPP carries the following sensors: VIIRS, CrIS, ATMS, OMPS, and CERES, instruments that ultimately produce over 25 data products covering the Earth's weather, oceans, and atmosphere. A team of scientists and engineers from all over the United States documents, monitors, and fixes errors in operational software code or documentation through the algorithm change process (ACP) to ensure the success of the S-NPP and JPSS-1 missions by maintaining the quality and accuracy of the data products the scientific community relies on. This poster will outline the program's algorithm change process (ACP), identify the various users and scientific applications of our operational data products, and highlight changes that have been made to the ACP to accommodate operating system upgrades to the JPSS program's Interface Data Processing Segment (IDPS), so that the program is ready for the transition to the 2017 JPSS-1 satellite mission and beyond.
ULSGEN (Uplink Summary Generator)
NASA Technical Reports Server (NTRS)
Wang, Y.-F.; Schrock, M.; Reeve, T.; Nguyen, K.; Smith, B.
2014-01-01
Uplink is an important part of spacecraft operations. Ensuring the accuracy of uplink content is essential to mission success. Before commands are radiated to the spacecraft, the command and sequence must be reviewed and verified by various teams. In most cases, this process requires collecting the command data, reviewing the data during a command conference meeting, and providing physical signatures by designated members of various teams to signify approval of the data. If commands or sequences are disapproved for some reason, the whole process must be restarted. Recording data and decision history is important for traceability reasons. Given that many steps and people are involved in this process, an easily accessible software tool for managing the process is vital to reducing human error, which could result in uplinking incorrect data to the spacecraft. An uplink summary generator called ULSGEN was developed to assist this uplink content approval process. ULSGEN generates a web-based summary of uplink file content and provides an online review process. Spacecraft operations personnel view this summary as a final check before actual radiation of the uplink data.
Quantification of Operational Risk Using A Data Mining
NASA Technical Reports Server (NTRS)
Perera, J. Sebastian
1999-01-01
What is Data Mining? Data mining is the process of finding actionable information hidden in raw data. It helps find hidden patterns, trends, and important relationships often buried in a sea of data. Typically, automated software tools based on advanced statistical analysis and data modeling technology can be utilized to automate the data mining process.
Industrial process surveillance system
Gross, Kenneth C.; Wegerich, Stephan W.; Singer, Ralph M.; Mott, Jack E.
1998-01-01
A system and method for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy.
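In miniature, the patented scheme — learned states of normal operation supplying expected values that are compared against current actual values, with an alarm on deviation from normalcy — might look like the following sketch. The state vectors, distance metric, and tolerance are all illustrative, not the patent's actual formulation.

```python
# Learned states are stored signal vectors from training on normal operation.
# The state closest to the current observation supplies the expected values.
def closest_state(learned_states, observed):
    return min(learned_states,
               key=lambda s: sum((a - b) ** 2 for a, b in zip(s, observed)))

# Compare expected against actual; a residual beyond the tolerance is treated
# as a deviation from normalcy and raises an alarm.
def check(learned_states, observed, tol):
    expected = closest_state(learned_states, observed)
    residual = max(abs(a - b) for a, b in zip(expected, observed))
    return ("alarm" if residual > tol else "normal"), expected
```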
Industrial process surveillance system
Gross, K.C.; Wegerich, S.W.; Singer, R.M.; Mott, J.E.
1998-06-09
A system and method are disclosed for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy. 96 figs.
Industrial Process Surveillance System
Gross, Kenneth C.; Wegerich, Stephan W; Singer, Ralph M.; Mott, Jack E.
2001-01-30
A system and method for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy.
Lau, Nathan; Jamieson, Greg A; Skraaning, Gyrd
2016-03-01
The Process Overview Measure is a query-based measure developed to assess operator situation awareness (SA) from monitoring process plants. A companion paper describes how the measure has been developed according to process plant properties and operator cognitive work. The Process Overview Measure demonstrated practicality, sensitivity, validity and reliability in two full-scope simulator experiments investigating dramatically different operational concepts. Practicality was assessed based on qualitative feedback of participants and researchers. The Process Overview Measure demonstrated sensitivity and validity by revealing significant effects of experimental manipulations that corroborated with other empirical results. The measure also demonstrated adequate inter-rater reliability and practicality for measuring SA in full-scope simulator settings based on data collected on process experts. Thus, full-scope simulator studies can employ the Process Overview Measure to reveal the impact of new control room technology and operational concepts on monitoring process plants. Practitioner Summary: The Process Overview Measure is a query-based measure that demonstrated practicality, sensitivity, validity and reliability for assessing operator situation awareness (SA) from monitoring process plants in representative settings.
NASA Technical Reports Server (NTRS)
Anderson, R. C.; Summers, R. L.
1981-01-01
An integrated gas analysis system designed to operate in automatic, semiautomatic, and manual modes from a remote control panel is described. The system measures carbon monoxide, oxygen, water vapor, total hydrocarbons, carbon dioxide, and oxides of nitrogen. A pull-through design provides increased reliability and eliminates the need for manual flow rate adjustment and pressure correction. The system contains two microprocessors to range the analyzers, calibrate the system, process the raw data to units of concentration, and provide information to the facility research computer and to the operator through a terminal and the control panels. After initial setup, the system operates for several hours without significant operator attention.
The ATLAS PanDA Pilot in Operation
NASA Astrophysics Data System (ADS)
Nilsson, P.; Caballero, J.; De, K.; Maeno, T.; Stradling, A.; Wenaus, T.; ATLAS Collaboration
2011-12-01
The Production and Distributed Analysis system (PanDA) [1-2] was designed to meet ATLAS [3] requirements for a data-driven workload management system capable of operating at LHC data processing scale. Submitted jobs are executed on worker nodes by pilot jobs sent to the grid sites by pilot factories. This paper provides an overview of the PanDA pilot [4] system and presents major features added in light of recent operational experience, including multi-job processing, advanced job recovery for jobs with output storage failures, gLExec [5-6] based identity switching from the generic pilot to the actual user, and other security measures. The PanDA system serves all ATLAS distributed processing and is the primary system for distributed analysis; it is currently used at over 100 sites worldwide. We analyze the performance of the pilot system in processing real LHC data on the OSG [7], EGI [8] and Nordugrid [9-10] infrastructures used by ATLAS, and describe plans for its evolution.
Multi-Mission Laser Altimeter Data Processing and Co-Registration of Image and Laser Data at DLR
NASA Astrophysics Data System (ADS)
Stark, A.; Matz, K.-D.; Roatsch, T.
2018-04-01
We designed a system for the processing and storage of large laser altimeter data sets for various past and operating laser altimeter instruments. Furthermore, we developed a technique to accurately co-register multi-mission laser and image data.
SImbol Materials Lithium Extraction Operating Data From Elmore and Featherstone Geothermal Plants
Stephen Harrison
2015-07-08
The data provided in this upload is summary data from Simbol's Demonstration Plant operation at the geothermal power production plants in the Imperial Valley. The data provided is averaged data for the Elmore Plant and the Featherstone Plant. Included is both temperature and analytical data (ICP-OES). Provided is the feed to the Simbol Process, post brine treatment, and post lithium extraction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R
Endpoint-based parallel data processing with non-blocking collective instructions in a PAMI of a parallel computer is disclosed. The PAMI is composed of data communications endpoints, each including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task. The compute nodes are coupled for data communications through the PAMI. The parallel application establishes a data communications geometry specifying a set of endpoints that are used in collective operations of the PAMI by associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry; registering in each endpoint in the geometry a dispatch callback function for a collective operation; and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.
NASA Astrophysics Data System (ADS)
Hinnant, F.
2009-12-01
The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system: the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD, and will provide continuity for the NASA Earth Observation System with the launch of the NPOESS Preparatory Project. This poster will provide a top-level status update of the program, as well as an overview of the NPOESS system architecture, which includes four segments. The space segment includes satellites in two orbits that carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The NPOESS system design allows centralized mission management and delivers high-quality environmental products to military, civil, and scientific users through a Command, Control, and Communication Segment (C3S). The data processing for NPOESS is accomplished through an Interface Data Processing Segment (IDPS)/Field Terminal Segment (FTS) that processes NPOESS satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government, as well as to remote terminal users. The Launch Support Segment completes the four segments that make up the NPOESS system, which will enhance the connectivity between research and operations and provide critical operational and scientific environmental measurements to military, civil, and scientific users until 2026.
NASA Astrophysics Data System (ADS)
Neill, Aaron; Reaney, Sim
2015-04-01
Fully-distributed, physically-based rainfall-runoff models attempt to capture some of the complexity of the runoff processes that operate within a catchment, and have been used to address a variety of issues including water quality and the effect of climate change on flood frequency. Two key issues are prevalent, however, which call into question the predictive capability of such models. The first is the issue of parameter equifinality, which can be responsible for large amounts of uncertainty. The second is whether such models make the right predictions for the right reasons - are the processes operating within a catchment correctly represented, or do the predictive abilities of these models result only from the calibration process? The use of additional data sources, such as environmental tracers, has been shown to help address both of these issues, by allowing for multi-criteria model calibration to be undertaken, and by permitting a greater understanding of the processes operating in a catchment and hence a more thorough evaluation of how well catchment processes are represented in a model. Using discharge and oxygen-18 data sets, the ability of the fully-distributed, physically-based CRUM3 model to represent the runoff processes in three sub-catchments in Cumbria, NW England has been evaluated. These catchments (Morland, Dacre and Pow) are part of the River Eden demonstration test catchment project. The oxygen-18 data set was first used to derive transit-time distributions and mean residence times of water for each of the catchments to gain an integrated overview of the types of processes that were operating. A generalised likelihood uncertainty estimation procedure was then used to calibrate the CRUM3 model for each catchment based on a single discharge data set from each catchment.
Transit-time distributions and mean residence times of water obtained from the model using the top 100 behavioural parameter sets for each catchment were then compared to those derived from the oxygen-18 data to see how well the model captured catchment dynamics. The value of incorporating the oxygen-18 data set, as well as discharge data sets from multiple as opposed to single gauging stations in each catchment, in the calibration process to improve the predictive capability of the model was then investigated. This was achieved by assessing by how much the identifiability of the model parameters and the ability of the model to represent the runoff processes operating in each catchment improved with the inclusion of the additional data sets with respect to the likely costs that would be incurred in obtaining the data sets themselves.
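The generalised likelihood uncertainty estimation (GLUE) step described above can be sketched: sample parameter sets, score each simulation against observed discharge with a likelihood measure such as Nash-Sutcliffe efficiency, and retain the behavioural sets above a cutoff. The one-parameter model, sampling range, and cutoff below are stand-ins, not CRUM3 or the study's actual settings.

```python
import random

# Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the mean.
def nash_sutcliffe(obs, sim):
    mo = sum(obs) / len(obs)
    return 1.0 - (sum((o - s) ** 2 for o, s in zip(obs, sim))
                  / sum((o - mo) ** 2 for o in obs))

# GLUE sketch: Monte Carlo sampling of the parameter space, keeping the
# behavioural parameter sets whose likelihood exceeds the cutoff.
def glue(model, obs, n_samples=1000, cutoff=0.7, seed=1):
    rng = random.Random(seed)
    behavioural = []
    for _ in range(n_samples):
        k = rng.uniform(0.0, 2.0)            # sampled parameter value
        ns = nash_sutcliffe(obs, model(k))
        if ns > cutoff:
            behavioural.append((k, ns))
    return behavioural
```

The spread of the retained behavioural sets is what carries the equifinality-driven predictive uncertainty discussed in the abstract.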
Data Processing Courses in High Schools?
ERIC Educational Resources Information Center
Reese, Don
1970-01-01
It is more important for students to have an understanding of basic fundamentals such as English, mathematics, social studies, and basic business understandings than a superficial understanding of data processing equipment and its operation. (Editor)
Systems and methods for determining a spacecraft orientation
NASA Technical Reports Server (NTRS)
Harman, Richard R (Inventor); Luquette, Richard J (Inventor); Lee, Michael H (Inventor)
2004-01-01
Disclosed are systems and methods of determining or estimating an orientation of a spacecraft. An exemplary system generates telemetry data, including star observations, in a satellite. A ground station processes the telemetry data with data from a star catalog, to generate display data which, in this example, includes observed stars overlaid with catalog stars. An operator views the display and generates an operator input signal using a mouse device, to pair up observed and catalog stars. Circuitry in the ground station then processes two pairs of observed and catalog stars, to determine an orientation of the spacecraft.
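Once observed and catalog stars are paired, the orientation solve is a classic least-squares rotation problem (Wahba's problem in 3-D). A 2-D simplification, offered as an illustration rather than the patent's method, recovers the rotation angle from summed cross and dot products of the paired unit vectors.

```python
import math

# Least-squares rotation angle taking the observed frame into the catalog
# frame, given index-paired 2-D unit vectors. A real spacecraft solution works
# in 3-D (e.g. via the QUEST or SVD solutions to Wahba's problem).
def rotation_angle(observed, catalog):
    s = sum(o[0] * c[1] - o[1] * c[0] for o, c in zip(observed, catalog))
    d = sum(o[0] * c[0] + o[1] * c[1] for o, c in zip(observed, catalog))
    return math.atan2(s, d)  # radians
```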
MIRADS-2 Implementation Manual
NASA Technical Reports Server (NTRS)
1975-01-01
The Marshall Information Retrieval and Display System (MIRADS), a data base management system designed to provide the user with a set of generalized file capabilities, is presented. The system provides a wide variety of ways to process the contents of the data base and includes capabilities to search, sort, compute, update, and display the data. The process of creating, defining, and loading a data base is generally called the loading process. The steps in the loading process, which include (1) structuring, (2) creating, (3) defining, and (4) implementing the data base for use by MIRADS, are defined. The execution of several computer programs is required to successfully complete all steps of the loading process. The MIRADS Library must be established as a cataloged mass storage file as the first step in MIRADS implementation; the procedure for establishing the MIRADS Library is given. The system is currently operational for the UNIVAC 1108 computer system utilizing the Executive Operating System. All procedures relate to the use of MIRADS on the U-1108 computer.
Web Application Software for Ground Operations Planning Database (GOPDb) Management
NASA Technical Reports Server (NTRS)
Lanham, Clifton; Kallner, Shawn; Gernand, Jeffrey
2013-01-01
A Web application facilitates collaborative development of the ground operations planning document. This will reduce costs and development time for new programs by incorporating the data governance, access control, and revision tracking of the ground operations planning data. Ground Operations Planning requires the creation and maintenance of detailed timelines and documentation. The GOPDb Web application was created using state-of-the-art Web 2.0 technologies, and was deployed as SaaS (Software as a Service), with an emphasis on data governance and security needs. Application access is managed using two-factor authentication, with data write permissions tied to user roles and responsibilities. Multiple instances of the application can be deployed on a Web server to meet the robust needs for multiple, future programs with minimal additional cost. This innovation features high availability and scalability, with no additional software that needs to be bought or installed. For data governance and security (data quality, management, business process management, and risk management for data handling), the software uses NAMS. No local copy/cloning of data is permitted. Data change log/tracking is addressed, as well as collaboration, work flow, and process standardization. The software provides on-line documentation and detailed Web-based help. There are multiple ways that this software can be deployed on a Web server to meet ground operations planning needs for future programs. The software could be used to support commercial crew ground operations planning, as well as commercial payload/satellite ground operations planning. The application source code and database schema are owned by NASA.
ERIC Educational Resources Information Center
Fussler, Herman; Payne, Charles T.
Part I is a discussion of the following project tasks: A) development of an on-line, real-time bibliographic data processing system; B) implementation in library operations; C) character sets; D) Project MARC; E) circulation; and F) processing operation studies. Part II is a brief discussion of efforts to work out cooperative library systems…
NASA Technical Reports Server (NTRS)
Redinbo, Robert
1994-01-01
Fault tolerance features in the first three major subsystems appearing in the next generation of communications satellites are described. These satellites will contain extensive but efficient high-speed processing and switching capabilities to support the low signal strengths associated with very small aperture terminals. The terminals' numerous data channels are combined through frequency division multiplexing (FDM) on the up-links and are protected individually by forward error-correcting (FEC) binary convolutional codes. The front-end processing resources, demultiplexer, demodulators, and FEC decoders extract all data channels which are then switched individually, multiplexed, and remodulated before retransmission to earth terminals through narrow beam spot antennas. Algorithm based fault tolerance (ABFT) techniques, which relate real number parity values with data flows and operations, are used to protect the data processing operations. The additional checking features utilize resources that can be substituted for normal processing elements when resource reconfiguration is required to replace a failed unit.
Thomassen, Yvonne E; van Sprang, Eric N M; van der Pol, Leo A; Bakker, Wilfried A M
2010-09-01
Historical manufacturing data can potentially harbor a wealth of information for process optimization and enhancement of efficiency and robustness. To extract useful data, multivariate data analysis (MVDA) using projection methods is often applied. In this contribution, the results obtained from applying MVDA to data from inactivated polio vaccine (IPV) production runs are described. Data from over 50 batches at two different production scales (700-L and 1,500-L) were available. The explorative analysis performed on single unit operations indicated consistent manufacturing. Known outliers (e.g., rejected batches) were identified using principal component analysis (PCA). The source of operational variation was pinpointed to variation of input such as media. Other relevant process parameters were in control and, using this manufacturing data, could not be correlated to product quality attributes. The knowledge of the IPV production process gained, not only from the MVDA but also from digitalizing the available historical data, has proven to be useful for troubleshooting, understanding limitations of available data, and identifying opportunities for improvement. 2010 Wiley Periodicals, Inc.
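A much-simplified stand-in for the PCA outlier screening described above: score each batch by its squared standardized distance from the fleet mean, a diagonal-covariance cousin of a Hotelling-style statistic. Real MVDA would project onto principal components first; everything here is illustrative.

```python
from statistics import mean, pstdev

# Score each production batch by summing squared z-scores across variables;
# rejected or anomalous batches stand out with large scores.
def outlier_scores(batches):
    cols = list(zip(*batches))
    mus = [mean(c) for c in cols]
    sds = [pstdev(c) or 1.0 for c in cols]   # guard constant columns
    return [sum(((x - m) / s) ** 2 for x, m, s in zip(b, mus, sds))
            for b in batches]
```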
Chow, Vincent S; Huang, Wenhai; Puterman, Martin L
2009-01-01
Operations research (OR) is playing an increasing role in the support of many health care initiatives. However, one of the main challenges facing OR practitioners is the availability and integrity of operations data. Hospital information systems (HIS) are often designed with a clinical or accounting focus and may lack the data necessary for operational studies. In this paper, we illustrate the data processing methods and data challenges faced by our team during a study of surgical scheduling practices at the Vancouver Island Health Authority. We also provide some general recommendations to improve HIS from an operations perspective. In general, more integration between operations researchers and HIS specialists is required to support ongoing operational improvements in the health care sector.
Design of a dataway processor for a parallel image signal processing system
NASA Astrophysics Data System (ADS)
Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu
1995-04-01
Recently, demands for high-speed signal processing have been increasing especially in the field of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called 'dataway processor' designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates at 8-bit parallel in a full duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON.' The hardware is fabricated using 0.5-micrometer CMOS technology and comprises about 200 K gates.
State machine analysis of sensor data from dynamic processes
Cook, William R.; Brabson, John M.; Deland, Sharon M.
2003-12-23
A state machine model analyzes sensor data from dynamic processes at a facility to identify the actual processes that were performed at the facility during a period of interest for the purpose of remote facility inspection. An inspector can further input the expected operations into the state machine model and compare the expected, or declared, processes to the actual processes to identify undeclared processes at the facility. The state machine analysis enables the generation of knowledge about the state of the facility at all levels, from the location of physical objects to complex operational concepts. Therefore, the state machine method and apparatus may benefit any agency or business with sensor-equipped facilities that store or manipulate expensive, dangerous, or controlled materials or information.
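The declared-versus-actual comparison can be pictured with a minimal state machine; the states, sensor events, and transition table below are invented for illustration, not taken from the patent:

```python
# Minimal sketch of inferring operations from sensor events via a state
# machine, then comparing against declared operations. States and events
# are invented for illustration.

TRANSITIONS = {
    ("idle", "door_open"): "access",
    ("access", "crane_on"): "material_move",
    ("material_move", "crane_off"): "access",
    ("access", "door_closed"): "idle",
}

def infer_operations(events, state="idle"):
    # Unknown (state, event) pairs leave the state unchanged.
    ops = []
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)
        ops.append(state)
    return ops

declared = ["access", "idle"]
observed = infer_operations(["door_open", "crane_on", "crane_off", "door_closed"])
undeclared = [op for op in set(observed) if op not in declared]
# "material_move" is flagged as an undeclared process
```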
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barr, Jonathan L.; Tuffner, Francis K.; Hadley, Mark D.
This document contains the Integrated Assessment Plan (IAP) for the Phase 2 Operational Demonstration (OD) of the Smart Power Infrastructure Demonstration for Energy Reliability (SPIDERS) Joint Capability Technology Demonstration (JCTD) project. SPIDERS will be conducted over a three year period with Phase 2 being conducted at Fort Carson, Colorado. This document includes the Operational Demonstration Execution Plan (ODEP) and the Operational Assessment Execution Plan (OAEP), as approved by the Operational Manager (OM) and the Integrated Management Team (IMT). The ODEP describes the process by which the OD is conducted and the OAEP describes the process by which the data collected from the OD is processed. The execution of the OD, in accordance with the ODEP and the subsequent execution of the OAEP, will generate the necessary data for the Quick Look Report (QLR) and the Utility Assessment Report (UAR). These reports will assess the ability of the SPIDERS JCTD to meet the four critical requirements listed in the Implementation Directive (ID).
The Design and Implementation of INGRES.
ERIC Educational Resources Information Center
Stonebraker, Michael; And Others
The currently operational version of the INGRES data base management system gives a relational view of data, supports two high level, non-procedural data sublanguages, and runs as a collection of user processes on top of a UNIX operating system. The authors stress the design decisions and tradeoffs in relation to (1) structuring the system into…
Exploiting analytics techniques in CMS computing monitoring
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Kuznetsov, V.; Magini, N.; Repečka, A.; Vaandering, E.
2017-10-01
The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts over all this information have rarely been undertaken, but are of crucial importance for a better understanding of how CMS achieved successful operations, and for reaching an adequate and adaptive model of CMS operations that would allow detailed optimizations and eventually a prediction of system behaviour. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g., data on how many replicas of datasets CMS wrote to disk at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced by the ability to quickly process big data sets from multiple sources, looking forward to a predictive modeling of the system.
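The replica-counting analyses described map naturally onto a MapReduce pattern; the sketch below shows the shape of such a job in plain Python (record fields and values are invented, and a real job would run on the Hadoop cluster rather than in-process):

```python
# In-process sketch of a MapReduce-style aggregation: total dataset
# replicas per Tier from monitoring records. Fields/values are invented.
from collections import defaultdict

records = [
    {"dataset": "/A", "tier": "T1_US", "replicas": 2},
    {"dataset": "/B", "tier": "T2_IT", "replicas": 1},
    {"dataset": "/C", "tier": "T1_US", "replicas": 3},
]

def map_phase(recs):
    # Emit (key, value) pairs: one per monitoring record.
    for r in recs:
        yield r["tier"], r["replicas"]

def reduce_phase(pairs):
    # Sum the values for each key.
    totals = defaultdict(int)
    for tier, n in pairs:
        totals[tier] += n
    return dict(totals)

totals = reduce_phase(map_phase(records))  # {"T1_US": 5, "T2_IT": 1}
```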
The purpose of this SOP is to describe the assembly of household (HH) packets into data processing batches. The batching process enables orderly tracking of packets or forms through data processing and limits the potential for packet or form loss. This procedure was used for th...
Code of Federal Regulations, 2010 CFR
2010-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2013 CFR
2013-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2014 CFR
2014-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2012 CFR
2012-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2011 CFR
2011-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Practical Use of Operation Data in the Process Industry
NASA Astrophysics Data System (ADS)
Kano, Manabu
This paper aims to reveal real problems in the process industry and introduce recent developments to solve such problems from the viewpoint of effective use of operation data. Two topics are discussed: virtual sensors and process control. First, in order to clarify the present state and problems, a part of our recent questionnaire survey on process control is quoted. It is emphasized that maintenance is a key issue not only for soft-sensors but also for controllers. Then, new techniques are explained. The first one is correlation-based just-in-time modeling (CoJIT), which can realize higher prediction performance than conventional methods and simplify model maintenance. The second is extended fictitious reference iterative tuning (E-FRIT), which can realize data-driven PID control parameter tuning without process modeling. The great usefulness of these techniques is demonstrated through their industrial applications.
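The just-in-time modeling idea behind such soft-sensors can be sketched as follows: for each query, similar historical samples are selected and a local model predicts the output. CoJIT's actual sample selection is correlation-based; this sketch substitutes plain Euclidean distance and a distance-weighted average for brevity:

```python
# Sketch of just-in-time (locally weighted) soft-sensor prediction.
# Simplification: Euclidean nearest neighbours + weighted average, not
# CoJIT's correlation-based selection. Data values are invented.

def jit_predict(query, history, k=2):
    # history: list of (x, y) pairs, where x is a feature vector.
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)) ** 0.5, y)
        for x, y in history
    )[:k]
    # Weight the k nearest samples by inverse distance.
    weights = [1.0 / (d + 1e-9) for d, _ in dists]
    return sum(w * y for w, (_, y) in zip(weights, dists)) / sum(weights)

history = [([1.0, 2.0], 10.0), ([1.1, 2.1], 11.0), ([5.0, 5.0], 40.0)]
estimate = jit_predict([1.05, 2.05], history)  # close to 10.5
```

Because the local model is rebuilt from the database at every query, model maintenance reduces to maintaining the database, which is the practical appeal of the just-in-time approach.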
The Kepler Science Operations Center Pipeline Framework Extensions
NASA Technical Reports Server (NTRS)
Klaus, Todd C.; Cote, Miles T.; McCauliff, Sean; Girouard, Forrest R.; Wohler, Bill; Allen, Christopher; Chandrasekaran, Hema; Bryson, Stephen T.; Middour, Christopher; Caldwell, Douglas A.;
2010-01-01
The Kepler Science Operations Center (SOC) is responsible for several aspects of the Kepler Mission, including managing targets, generating on-board data compression tables, monitoring photometer health and status, processing the science data, and exporting the pipeline products to the mission archive. We describe how the generic pipeline framework software developed for Kepler is extended to achieve these goals, including pipeline configurations for processing science data and other support roles, and custom unit of work generators that control how the Kepler data are partitioned and distributed across the computing cluster. We describe the interface between the Java software that manages the retrieval and storage of the data for a given unit of work and the MATLAB algorithms that process these data. The data for each unit of work are packaged into a single file that contains everything needed by the science algorithms, allowing these files to be used to debug and evolve the algorithms offline.
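A unit-of-work generator of the kind described can be sketched as a simple partitioner; the field names and chunking rule here are invented, not the Kepler SOC implementation:

```python
# Toy unit-of-work generator: partitions a target list into self-contained
# tasks for distribution across a cluster. Field names are invented.

def generate_units(targets, chunk_size):
    for i in range(0, len(targets), chunk_size):
        yield {"unit_id": i // chunk_size,
               "targets": targets[i:i + chunk_size]}

units = list(generate_units(list(range(10)), chunk_size=4))
# Three units, holding 4, 4, and 2 targets respectively.
```

Packaging each unit with everything its algorithm needs, as the abstract describes, is what lets a single unit be replayed offline for debugging.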
Overview of the land analysis system (LAS)
Quirk, Bruce K.; Olseson, Lyndon R.
1987-01-01
The Land Analysis System (LAS) is a fully integrated digital analysis system designed to support remote sensing, image processing, and geographic information systems research. LAS is being developed through a cooperative effort between the National Aeronautics and Space Administration Goddard Space Flight Center and the U.S. Geological Survey Earth Resources Observation Systems (EROS) Data Center. LAS has over 275 analysis modules capable of performing input and output, radiometric correction, geometric registration, signal processing, logical operations, data transformation, classification, spatial analysis, nominal filtering, conversion between raster and vector data types, and display manipulation of image and ancillary data. LAS is currently implemented using the Transportable Applications Executive (TAE). While TAE was designed primarily to be transportable, it still provides the necessary components for a standard user interface, terminal handling, input and output services, display management, and intersystem communications. With TAE the analyst uses the same interface to the processing modules regardless of the host computer or operating system. LAS was originally implemented at EROS on a Digital Equipment Corporation computer system under the Virtual Memory System (VMS) operating system with DeAnza displays and is presently being converted to run on a Gould Power Node and Sun workstation under the Berkeley Software Distribution UNIX operating system.
Spaceborne synthetic aperture radar signal processing using FPGAs
NASA Astrophysics Data System (ADS)
Sugimoto, Yohei; Ozawa, Satoru; Inaba, Noriyasu
2017-10-01
Synthetic Aperture Radar (SAR) imagery requires image reproduction through successive signal processing of the received data before images can be browsed and information extracted. The received signal data records of the ALOS-2/PALSAR-2 are stored in the onboard mission data storage and transmitted to the ground. In order to stay within the storage capacity and the transmission capacity of the mission data communication networks, the operation duty of the PALSAR-2 is limited. This balance relies strongly on network availability. The observation operations of present spaceborne SAR systems are rigorously planned by simulating the mission data balance, given conflicting user demands. This problem should be solved so that the operations and the potential of next-generation spaceborne SAR systems need not be compromised. One of the solutions is to compress the SAR data through onboard image reproduction and information extraction from the reproduced images. This is also beneficial for fast delivery of information products and for event-driven observations by constellations. The Emergence Studio (Sōhatsu kōbō in Japanese), together with the Japan Aerospace Exploration Agency, is developing evaluation models of an FPGA-based signal processing system for onboard SAR image reproduction. The model, namely the "Fast L1 Processor (FLIP)" developed in 2016, can reproduce a 10 m-resolution single look complex image (Level 1.1) from ALOS/PALSAR raw signal data (Level 1.0). Running at 200 MHz, the FLIP is twice as fast as CPU-based computing at 3.7 GHz. The image processed by the FLIP is in no way inferior to the image processed with 32-bit computing in MATLAB.
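The heart of SAR image reproduction is pulse (range) compression, i.e., correlating each echo with a replica of the transmitted chirp; the toy below uses an invented three-sample "chirp" rather than real PALSAR-2 waveforms:

```python
# Toy range compression: correlate the received echo with the transmitted
# pulse replica. The waveform and delay are invented for illustration.

def correlate(signal, replica):
    n = len(signal) - len(replica) + 1
    return [sum(signal[i + j] * replica[j] for j in range(len(replica)))
            for i in range(n)]

chirp = [1.0, -1.0, 1.0]           # stand-in for the transmitted pulse
echo = [0.0, 0.0] + chirp + [0.0]  # a point target at delay 2
compressed = correlate(echo, chirp)
peak = compressed.index(max(compressed))  # 2, the target's range bin
```

Real processors perform this correlation in the frequency domain via FFTs, which is the kind of operation FPGAs pipeline very efficiently.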
System integration of marketable subsystems. [for residential solar heating and cooling
NASA Technical Reports Server (NTRS)
1979-01-01
Progress is reported in the following areas: systems integration of marketable subsystems; development, design, and building of site data acquisition subsystems; development and operation of the central data processing system; operation of the MSFC Solar Test Facility; and systems analysis.
NASA Astrophysics Data System (ADS)
Duda, James L.; Mulligan, Joseph; Valenti, James; Wenkel, Michael
2005-01-01
A key feature of the National Polar-orbiting Operational Environmental Satellite System (NPOESS) is the Northrop Grumman Space Technology patent-pending innovative data routing and retrieval architecture called SafetyNetTM. The SafetyNetTM ground system architecture for the National Polar-orbiting Operational Environmental Satellite System (NPOESS), combined with the Interface Data Processing Segment (IDPS), will together provide low data latency and high data availability to its customers. The NPOESS will cut the time between observation and delivery by a factor of four when compared with today's space-based weather systems, the Defense Meteorological Satellite Program (DMSP) and NOAA's Polar-orbiting Operational Environmental Satellites (POES). SafetyNetTM will be a key element of the NPOESS architecture, delivering near real-time data over commercial telecommunications networks. Scattered around the globe, the 15 unmanned ground receptors are linked by fiber-optic systems to four central data processing centers in the U. S. known as Weather Centrals. The National Environmental Satellite, Data and Information Service; Air Force Weather Agency; Fleet Numerical Meteorology and Oceanography Center, and the Naval Oceanographic Office operate the Centrals. In addition, this ground system architecture will have unused capacity attendant with an infrastructure that can accommodate additional users.
Reactor Operations Monitoring System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M.M.
1989-01-01
The Reactor Operations Monitoring System (ROMS) is a VME based, parallel processor data acquisition and safety action system designed by the Equipment Engineering Section and Reactor Engineering Department of the Savannah River Site. The ROMS will be analyzing over 8 million signal samples per minute. Sixty-eight microprocessors are used in the ROMS in order to achieve real-time data analysis. The ROMS is composed of multiple computer subsystems. Four redundant computer subsystems monitor 600 temperatures with 2400 thermocouples. Two computer subsystems share the monitoring of 600 reactor coolant flows. Additional computer subsystems are dedicated to monitoring 400 signals from assorted process sensors. Data from these computer subsystems are transferred to two redundant process display computer subsystems which present process information to reactor operators and to reactor control computers. The ROMS is also designed to carry out safety functions based on its analysis of process data. The safety functions include initiating a reactor scram (shutdown), the injection of neutron poison, and the loadshed of selected equipment. A complete development Reactor Operations Monitoring System has been built. It is located in the Program Development Center at the Savannah River Site and is currently being used by the Reactor Engineering Department in software development. The Equipment Engineering Section is designing and fabricating the process interface hardware. Upon proof of hardware and design concept, orders will be placed for the final five systems located in the three reactor areas, the reactor training simulator, and the hardware maintenance center.
Exploring cloud and big data components for SAR archiving and analysis
NASA Astrophysics Data System (ADS)
Baker, S.; Crosby, C. J.; Meertens, C.; Phillips, D.
2017-12-01
Under the Geodesy Advancing Geoscience and EarthScope (GAGE) NSF Cooperative Agreement, UNAVCO has seen the volume of the SAR Data Archive grow at a substantial rate, from 2 TB in Y1 and 5 TB in Y2 to 41 TB in Y3 primarily due to WInSAR PI proposal management of ALOS-2/JAXA (Japan Aerospace Exploration Agency) data and to a lesser extent Supersites and other data collections. JAXA provides a fixed number of scenes per year for each PI, and some data files are 50-60GB each, which accounts for the large volume of data. In total, over 100TB of SAR data are in the WInSAR/UNAVCO archive and a large portion of these are available unrestricted for WInSAR members. In addition to the existing data, newer data streams from the Sentinel-1 and NISAR missions will require efficient processing pipelines and easily scalable infrastructure to handle processed results. With these growing data sizes and space concerns, the SAR archive operations migrated to the Texas Advanced Computing Center (TACC) via an NSF XSEDE proposal in spring 2017. Data are stored on an HPC system while data operations are running on Jetstream virtual machines within the same datacenter. In addition to the production data operations, testing was done in early 2017 with container based InSAR processing analysis using JupyterHub and Docker images deployed on a VM cluster on Jetstream. The JupyterHub environment is well suited for short courses and other training opportunities for the community such as labs for university courses on InSAR. UNAVCO is also exploring new processing methodologies using DC/OS (the datacenter operating system) for batch and stream processing workflows and time series analysis with Big Data open source components like the Spark, Mesos, Akka, Cassandra, Kafka (SMACK) stack. 
The comparison of the different methodologies will provide insight into the pros and cons for each and help the SAR community with decisions about infrastructure and software requirements to meet their research goals.
Data Telemetry and Acquisition System for Acoustic Signal Processing Investigations.
1996-02-20
were VME-based computer systems operating under the VxWorks real-time operating system. Each system shared a common hardware and software... real-time operating system. It interfaces to the Berg PCM Decommutator board, which searches for the embedded synchronization word in the data and re...software were built on top of this architecture. The multi-tasking, message queue and memory management facilities of the VxWorks real-time operating system are
Landsat 7 Science Data Processing: An Overview
NASA Technical Reports Server (NTRS)
Schweiss, Robert J.; Daniel, Nathaniel E.; Derrick, Deborah K.
2000-01-01
The Landsat 7 Science Data Processing System, developed by NASA for the Landsat 7 Project, provides the science data handling infrastructure used at the Earth Resources Observation Systems (EROS) Data Center (EDC) Landsat Data Handling Facility (DHF) of the United States Department of Interior, United States Geological Survey (USGS) located in Sioux Falls, South Dakota. This paper presents an overview of the Landsat 7 Science Data Processing System and details of the design, architecture, concept of operation, and management aspects of systems used in the processing of the Landsat 7 Science Data.
Post-test navigation data analysis techniques for the shuttle ALT
NASA Technical Reports Server (NTRS)
1975-01-01
Postflight test analysis data processing techniques for shuttle approach and landing tests (ALT) navigation data are defined. Postflight test processor requirements are described along with operational and design requirements, data input requirements, and software test requirements. The postflight test data processing is described based on the natural test sequence: quick-look analysis, postflight navigation processing, and error isolation processing. Emphasis is placed on the tradeoffs that must remain open and subject to analysis until final definition is achieved in the shuttle data processing system and the overall ALT plan. A development plan for the implementation of the ALT postflight test navigation data processing system is presented. Conclusions are presented.
NASA Astrophysics Data System (ADS)
Kalluri, S. N.; Haman, B.; Vititoe, D.
2014-12-01
The ground system under development for the Geostationary Operational Environmental Satellite-R (GOES-R) series of weather satellites has completed a key milestone in implementing the science algorithms that process raw sensor data into higher level products in preparation for launch. Real-time observations from GOES-R are expected to make significant contributions to Earth and space weather prediction, and there are stringent requirements to produce weather products at very low latency to meet NOAA's operational needs. Simulated test data from all six GOES-R sensors are being processed by the system to test and verify the performance of the fielded system. Early results show that system development is on track to meet functional and performance requirements to process science data. Comparison of science products generated by the ground system from simulated data with those generated by the algorithm developers shows close agreement among the data sets, which demonstrates that the algorithms are implemented correctly. Successful delivery of products to AWIPS and the Product Distribution and Access (PDA) system from the core system demonstrates that the external interfaces are working.
NASA Technical Reports Server (NTRS)
Cangahuala, L.; Drain, T. R.
1999-01-01
At present, ground navigation support for interplanetary spacecraft requires human intervention for data pre-processing, filtering, and post-processing activities; these actions must be repeated each time a new batch of data is collected by the ground data system.
Practical Applications of Data Processing to School Purchasing.
ERIC Educational Resources Information Center
California Association of School Business Officials, San Diego. Imperial Section.
Electronic data processing provides a fast and accurate system for handling large volumes of routine data. If properly employed, computers can perform myriad functions for purchasing operations, including purchase order writing; equipment inventory control; vendor inventory; and equipment acquisition, transfer, and retirement. The advantages of…
Taking advantage of ground data systems attributes to achieve quality results in testing software
NASA Technical Reports Server (NTRS)
Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.
1994-01-01
During the software development life cycle process, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is normally not achieved, only approached to varying degrees. With the emphasis on building low cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses. The TASS also makes use of reusable simulator software in the mission specific versions of the TASS. Very little new software needs to be developed, mainly mission specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.
Experience Transitioning Models and Data at the NOAA Space Weather Prediction Center
NASA Astrophysics Data System (ADS)
Berger, Thomas
2016-07-01
The NOAA Space Weather Prediction Center has a long history of transitioning research data and models into operations and with the validation activities required. The first stage in this process involves demonstrating that the capability has sufficient value to customers to justify the cost needed to transition it and to run it continuously and reliably in operations. Once the overall value is demonstrated, a substantial effort is then required to develop the operational software from the research codes. The next stage is to implement and test the software and product generation on the operational computers. Finally, effort must be devoted to establishing long-term measures of performance, maintaining the software, and working with forecasters, customers, and researchers to improve over time the operational capabilities. This multi-stage process of identifying, transitioning, and improving operational space weather capabilities will be discussed using recent examples. Plans for future activities will also be described.
Metallurgical Plant Optimization Through the use of Flowsheet Simulation Modelling
NASA Astrophysics Data System (ADS)
Kennedy, Mark William
Modern metallurgical plants typically have complex flowsheets and operate on a continuous basis. Real-time interactions within such processes can be complex, and the impacts of streams such as recycles on process efficiency and stability can be highly unexpected prior to actual operation. Current desktop computing power, combined with state-of-the-art flowsheet simulation software like Metsim, allows for thorough analysis of designs to explore the interaction between operating rate, heat and mass balances and, in particular, the potential negative impact of recycles. Using plant information systems, it is possible to combine real plant data with simple steady state models, using dynamic data exchange links to allow for near real-time de-bottlenecking of operations. Accurate analytical results can also be combined with detailed unit operations models to allow for feed-forward model-based control. This paper will explore some examples of the application of Metsim to real world engineering and plant operational issues.
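Why recycles surprise designers is easy to see in a toy steady-state mass balance solved by successive substitution (the split fraction and feed rate below are invented; flowsheet simulators like Metsim use far more general solvers):

```python
# Toy steady-state mass balance with a recycle stream, iterated to
# convergence. Feed rate and split fraction are invented.

def converge_recycle(feed, split=0.3, tol=1e-9):
    recycle = 0.0
    for _ in range(1000):
        into_unit = feed + recycle       # unit sees fresh feed + recycle
        new_recycle = split * into_unit  # fraction returned upstream
        if abs(new_recycle - recycle) < tol:
            break
        recycle = new_recycle
    return into_unit

# With 30% of the unit's throughput recycled, the unit must actually
# process feed / (1 - split) = about 142.9 units per 100 units of feed.
throughput = converge_recycle(100.0)
```

The closed-form fixed point, throughput = feed / (1 − split), shows how a seemingly minor recycle inflates the real load on a unit, which is exactly the kind of interaction flowsheet simulation exposes before operation.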
Operating Room Delays: Meaningful Use in Electronic Health Record.
Van Winkle, Rachelle A; Champagne, Mary T; Gilman-Mays, Meri; Aucoin, Julia
2016-06-01
Perioperative areas are the most costly to operate and account for more than 40% of expenses. The high costs prompted one organization to analyze surgical delays through a retrospective review of its new electronic health record. Electronic health records have made it easier to access and aggregate clinical data; 2123 operating room cases were analyzed. Implementing a new electronic health record system is complex; inaccurate data and poor implementation can introduce new problems. Validating the electronic health record development processes determines the ease of use and the user interface, specifically related to user compliance with the intent of the electronic health record development. The revalidation process after implementation determines whether the intent of the design was fulfilled and the data can be meaningfully used. In this organization, the data fields completed through automation provided quantifiable, meaningful data. However, data fields completed by staff that required subjective decision making resulted in incomplete data nearly 24% of the time. The ease of use was further complicated by 490 permutations (combinations of delay types and reasons) that were built into the electronic health record. Operating room delay themes emerged notwithstanding the significant complexity of the electronic health record build; however, improved accuracy would enable more meaningful data collection and a more accurate root cause analysis of operating room delays. Accurate and meaningful use of data affords a more reliable approach to quality, safety, and cost-effective initiatives.
Distributed performance counters
Davis, Kristan D; Evans, Kahn C; Gara, Alan; Satterfield, David L
2013-11-26
A plurality of first performance counter modules is coupled to a plurality of processing cores. The plurality of first performance counter modules is operable to collect performance data associated with the plurality of processing cores respectively. A plurality of second performance counter modules are coupled to a plurality of L2 cache units, and the plurality of second performance counter modules are operable to collect performance data associated with the plurality of L2 cache units respectively. A central performance counter module may be operable to coordinate counter data from the plurality of first performance counter modules and the plurality of second performance counter modules; the central performance counter module, the plurality of first performance counter modules, and the plurality of second performance counter modules are connected by a daisy chain connection.
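A purely schematic reading of the daisy-chained collection described might look as follows; the module names, counter values, and token protocol are invented for illustration, not taken from the patent:

```python
# Schematic sketch of daisy-chained counter collection: each module
# appends its data as the token passes along the chain, and the central
# module receives the concatenated counter data. Names are invented.

class CounterModule:
    def __init__(self, name, count):
        self.name, self.count = name, count

    def pass_token(self, token):
        # Append this module's counter data and forward the token.
        token.append((self.name, self.count))
        return token

chain = [CounterModule("core0", 5), CounterModule("core1", 7),
         CounterModule("L2_0", 3)]
token = []
for module in chain:
    token = module.pass_token(token)
# the central module now holds all counters in chain order
```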
An Electro-Optical Image Algebra Processing System for Automatic Target Recognition
NASA Astrophysics Data System (ADS)
Coffield, Patrick Cyrus
The proposed electro-optical image algebra processing system is designed specifically for image processing and other related computations. The design is a hybridization of an optical correlator and a massively parallel, single-instruction multiple-data processor. The architecture of the design consists of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image processing operations are directed by a control buffer and pipelined to each of the three processing components. The image processing operations are defined in terms of basic operations of an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how it implements the natural decomposition of algebraic functions into spatially distributed, point use operations. The effect of this particular decomposition allows convolution type operations to be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The implementation of the proposed design may be accomplished in many ways. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control a large variety of the arithmetic and logic operations of the image algebra's generalized matrix product. The generalized matrix product is the most powerful fundamental operation in the algebra, thus allowing a wide range of applications. No other known device or design has made this claim of processing speed and general implementation of a heterogeneous image algebra.
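The template formalism behind the throughput claim can be sketched serially: in the massively parallel design, with one processing element per pixel, execution time scales with the number of template elements rather than image size. The template weights and image below are invented:

```python
# Serial sketch of an image-algebra template operation. The inner loop
# runs over template elements only; in the parallel hardware every pixel
# is handled concurrently. Template and image values are invented.

def apply_template(image, template):
    # image: 2-D list of pixel values; template: {(dy, dx): weight}.
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for (dy, dx), wgt in template.items():
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    acc += wgt * image[yy][xx]
            out[y][x] = acc
    return out

img = [[1, 2], [3, 4]]
out = apply_template(img, {(0, 0): 1.0})  # identity template
```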
Method and apparatus for monitoring plasma processing operations
Smith, Jr., Michael Lane; Stevenson, Joel O'Don; Ward, Pamela Peardon Denise
2001-01-01
The invention generally relates to various aspects of a plasma process, and more specifically the monitoring of such plasma processes. One aspect relates in at least some manner to calibrating or initializing a plasma monitoring assembly. This type of calibration may be used to address wavelength shifts, intensity shifts, or both associated with optical emissions data obtained on a plasma process. A calibration light may be directed at a window through which optical emissions data is being obtained to determine the effect, if any, that the inner surface of the window is having on the optical emissions data being obtained therethrough, the operation of the optical emissions data gathering device, or both. Another aspect relates in at least some manner to various types of evaluations which may be undertaken of a plasma process which was run, and more typically one which is currently being run, within the processing chamber. Plasma health evaluations and process identification through optical emissions analysis are included in this aspect. Yet another aspect associated with the present invention relates in at least some manner to the endpoint of a plasma process (e.g., plasma recipe, plasma clean, conditioning wafer operation) or discrete/discernible portion thereof (e.g., a plasma step of a multiple step plasma recipe). A final aspect associated with the present invention relates to how one or more of the above-noted aspects may be implemented into a semiconductor fabrication facility, such as the distribution of wafers to a wafer production system.
Method and apparatus for monitoring plasma processing operations
Smith, Jr., Michael Lane; Stevenson, Joel O'Don; Ward, Pamela Peardon Denise
2000-01-01
The invention generally relates to various aspects of a plasma process, and more specifically the monitoring of such plasma processes. One aspect relates in at least some manner to calibrating or initializing a plasma monitoring assembly. This type of calibration may be used to address wavelength shifts, intensity shifts, or both associated with optical emissions data obtained on a plasma process. A calibration light may be directed at a window through which optical emissions data is being obtained to determine the effect, if any, that the inner surface of the window is having on the optical emissions data being obtained therethrough, the operation of the optical emissions data gathering device, or both. Another aspect relates in at least some manner to various types of evaluations which may be undertaken of a plasma process which was run, and more typically one which is currently being run, within the processing chamber. Plasma health evaluations and process identification through optical emissions analysis are included in this aspect. Yet another aspect associated with the present invention relates in at least some manner to the endpoint of a plasma process (e.g., plasma recipe, plasma clean, conditioning wafer operation) or discrete/discernible portion thereof (e.g., a plasma step of a multiple step plasma recipe). A final aspect associated with the present invention relates to how one or more of the above-noted aspects may be implemented into a semiconductor fabrication facility, such as the distribution of wafers to a wafer production system.
Method and apparatus for monitoring plasma processing operations
Smith, Jr., Michael Lane; Stevenson, Joel O'Don; Ward, Pamela Peardon Denise
2002-07-16
The invention generally relates to various aspects of a plasma process, and more specifically the monitoring of such plasma processes. One aspect relates in at least some manner to calibrating or initializing a plasma monitoring assembly. This type of calibration may be used to address wavelength shifts, intensity shifts, or both associated with optical emissions data obtained on a plasma process. A calibration light may be directed at a window through which optical emissions data is being obtained to determine the effect, if any, that the inner surface of the window is having on the optical emissions data being obtained therethrough, the operation of the optical emissions data gathering device, or both. Another aspect relates in at least some manner to various types of evaluations which may be undertaken of a plasma process which was run, and more typically one which is currently being run, within the processing chamber. Plasma health evaluations and process identification through optical emissions analysis are included in this aspect. Yet another aspect associated with the present invention relates in at least some manner to the endpoint of a plasma process (e.g., plasma recipe, plasma clean, conditioning wafer operation) or discrete/discernible portion thereof (e.g., a plasma step of a multiple step plasma recipe). A final aspect associated with the present invention relates to how one or more of the above-noted aspects may be implemented into a semiconductor fabrication facility, such as the distribution of wafers to a wafer production system.
Flexible, secure agent development framework
Goldsmith, Steven Y. [Rochester, MN]
2009-04-07
While an agent generator is generating an intelligent agent, it can also evaluate the data processing platform on which it is executing, in order to assess a risk factor associated with operation of the agent generator on the data processing platform. The agent generator can retrieve from a location external to the data processing platform an open site that is configurable by the user, and load the open site into an agent substrate, thereby creating a development agent with code development capabilities. While an intelligent agent is executing a functional program on a data processing platform, it can also evaluate the data processing platform to assess a risk factor associated with performing the data processing function on the data processing platform.
National Centers for Environmental Prediction
Review of the transportation planning process in the Kansas City metropolitan area
DOT National Transportation Integrated Search
2013-01-01
In 2010 the FHWA Office of Operations, Office of Transportation Management (HOTM) commissioned the development of a white paper, Data Capture and Management: Needs and Gaps in the Operation and Coordination of U.S. DOT Data Capture and Management Pro...
Schoellhamer, D.H.
2002-01-01
Suspended sediment concentration (SSC) data from San Pablo Bay, California, were analyzed to compare the basin-scale effect of dredging and disposal of dredged material (dredging operations) and natural estuarine processes. The analysis used twelve 3-wk to 5-wk periods of mid-depth and near-bottom SSC data collected at Point San Pablo every 15 min from 1993-1998. Point San Pablo is within a tidal excursion of a dredged-material disposal site. The SSC data were compared to dredging volume, Julian day, and hydrodynamic and meteorological variables that could affect SSC. Kendall's τ, Spearman's ρ, and weighted (by the fraction of valid data in each period) Spearman's ρw correlation coefficients of the variables indicated which variables were significantly correlated with SSC. Wind-wave resuspension had the greatest effect on SSC. Median water-surface elevation was the primary factor affecting mid-depth SSC. Greater depths inhibit wind-wave resuspension of bottom sediment and indicate greater influence of less turbid water from down estuary. Seasonal variability in the supply of erodible sediment is the primary factor affecting near-bottom SSC. Natural physical processes in San Pablo Bay are more areally extensive, of equal or longer duration, and as frequent as dredging operations (when occurring), and they affect SSC at the tidal time scale. Natural processes control SSC at Point San Pablo even when dredging operations are occurring.
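The correlation screen used in this study can be illustrated with a pure-Python computation of Spearman's ρ (rank correlation). The data values below are invented for the example; the study correlated SSC with measured wind, water level, and dredging volume series.

```python
# Illustrative Spearman rank correlation, the kind of screen used to
# flag variables significantly associated with SSC. Data are made up.

def ranks(xs):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1           # mean of tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

wind = [2.1, 5.3, 3.7, 8.0, 6.4]        # hypothetical wind speeds
ssc = [40, 95, 60, 150, 120]            # hypothetical SSC values
print(spearman(wind, ssc))              # perfectly monotone -> 1.0
```

A weighted variant (the ρw of the abstract) would additionally scale each period's contribution by its fraction of valid data.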
Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun; Wang, Gi-Nam
2016-01-01
Operational faults and behavioural anomalies associated with PLC control processes often take place in a manufacturing system. Real-time identification of these operational faults and behavioural anomalies is necessary in the manufacturing industry. In this paper, we present an automated tool, called PLC Log-Data Analysis Tool (PLAT), that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash-table-based indexing and searching scheme for this purpose. Our experiments show that PLAT is fast, provides real-time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel to the data logging system to identify operational faults and behavioural anomalies effectively.
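The hash-table idea can be sketched very simply: build a nominal model as a hash table of signal transitions observed during normal operation, then flag live transitions absent from the table in O(1) average time per lookup. This is a hypothetical illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a nominal model of PLC signal
# transitions stored in a hash table; transitions never seen in nominal
# operation are flagged as anomalies.

def build_nominal_model(training_log):
    """training_log: iterable of (signals_before, signals_after) pairs."""
    model = set()                          # Python sets are hash tables
    for before, after in training_log:
        model.add((before, after))         # O(1) average insert
    return model

def detect_anomalies(model, live_log):
    """Return transitions absent from the nominal model."""
    return [t for t in live_log if t not in model]   # O(1) average lookup

nominal = [(("0", "0"), ("0", "1")), (("0", "1"), ("1", "1"))]
model = build_nominal_model(nominal)
live = [(("0", "0"), ("0", "1")), (("1", "1"), ("0", "0"))]
print(detect_anomalies(model, live))   # only the second transition is new
```

The constant-time lookups are what make this kind of scheme fast enough for real-time use alongside a data logging system.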
Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun
2016-01-01
Operational faults and behavioural anomalies associated with PLC control processes often take place in a manufacturing system. Real-time identification of these operational faults and behavioural anomalies is necessary in the manufacturing industry. In this paper, we present an automated tool, called PLC Log-Data Analysis Tool (PLAT), that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash-table-based indexing and searching scheme for this purpose. Our experiments show that PLAT is fast, provides real-time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel to the data logging system to identify operational faults and behavioural anomalies effectively. PMID:27974882
GEARS: An Enterprise Architecture Based On Common Ground Services
NASA Astrophysics Data System (ADS)
Petersen, S.
2014-12-01
Earth observation satellites collect a broad variety of data used in applications that range from weather forecasting to climate monitoring. Within NOAA the National Environmental Satellite Data and Information Service (NESDIS) supports these applications by operating satellites in both geosynchronous and polar orbits. Traditionally NESDIS has acquired and operated its satellites as stand-alone systems with their own command and control, mission management, processing, and distribution systems. As the volume, velocity, veracity, and variety of sensor data and products produced by these systems continues to increase, NESDIS is migrating to a new concept of operation in which it will operate and sustain the ground infrastructure as an integrated Enterprise. Based on a series of common ground services, the Ground Enterprise Architecture System (GEARS) approach promises greater agility, flexibility, and efficiency at reduced cost. This talk describes the new architecture and associated development activities, and presents the results of initial efforts to improve product processing and distribution.
Bar-Chart-Monitor System For Wind Tunnels
NASA Technical Reports Server (NTRS)
Jung, Oscar
1993-01-01
Real-time monitor system provides bar-chart displays of significant operating parameters, developed for the National Full-Scale Aerodynamic Complex at Ames Research Center. Designed to gather and process sensor data on operating conditions of wind tunnels and models, it displays the data for test engineers and technicians concerned with safety and validation of operating conditions. Bar-chart video monitor displays data in as many as 50 channels at a maximum update rate of 2 Hz in a format facilitating quick interpretation.
ARM Operations and Engineering Procedure Mobile Facility Site Startup
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voyles, Jimmy W
2015-05-01
This procedure exists to define the key milestones, necessary steps, and process rules required to commission and operate an Atmospheric Radiation Measurement (ARM) Mobile Facility (AMF), with a specific focus toward on-time product delivery to the ARM Data Archive. The overall objective is to have the physical infrastructure, networking and communications, and instrument calibration, grooming, and alignment (CG&A) completed with data products available from the ARM Data Archive by the Operational Start Date milestone.
Maine Facility Research Summary : Dynamic Sign Systems for Narrow Bridges
DOT National Transportation Integrated Search
1997-09-01
This report describes the development of operational surveillance data processing algorithms and software for application to urban freeway systems, conforming to a framework in which data processing is performed in stages: sensor malfunction detectio...
Processing and Analysis of Mars Pathfinder Science Data at JPL's Science Data Processing Section
NASA Technical Reports Server (NTRS)
LaVoie, S.; Green, W.; Runkle, A.; Alexander, D.; Andres, P.; DeJong, E.; Duxbury, E.; Freda, D.; Gorjian, Z.; Hall, J.;
1998-01-01
The Mars Pathfinder mission required new capabilities and adaptation of existing capabilities in order to support science analysis and flight operations requirements imposed by the in-situ nature of the mission.
Expansion of transient operating data
NASA Astrophysics Data System (ADS)
Chipman, Christopher; Avitabile, Peter
2012-08-01
Real time operating data is very important to understand actual system response. Unfortunately, the amount of physical data points typically collected is very small and often interpretation of the data is difficult. Expansion techniques have been developed using traditional experimental modal data to augment this limited set of data. This expansion process allows for a much improved description of the real time operating response. This paper presents the results from several different structures to show the robustness of the technique. Comparisons are made to a more complete set of measured data to validate the approach. Both analytical simulations and actual experimental data are used to illustrate the usefulness of the technique.
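A toy numerical sketch of the expansion idea (not the authors' formulation): modal coordinates are estimated from the few measured degrees of freedom, then the full mode shapes expand the response to all points. The mode-shape matrix and responses below are invented for illustration.

```python
# Simplified modal expansion: solve for modal coordinates from measured
# DOFs, then expand to the full set of DOFs (a SEREP-like projection).
# Matrices here are hypothetical; real use employs test or FE mode shapes.

def solve2(a, b):
    """Solve a 2x2 linear system a @ x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

def expand(phi_full, measured_rows, u_measured):
    """Expand measured response u_measured to all rows of phi_full."""
    phi_a = [phi_full[i] for i in measured_rows]   # modes at measured DOFs
    q = solve2(phi_a, u_measured)                  # modal coordinates
    return [sum(p * c for p, c in zip(row, q)) for row in phi_full]

# 4 DOFs, 2 modes (columns); DOFs 0 and 2 are instrumented.
phi = [[1.0, 1.0],
       [2.0, 1.0],
       [3.0, -1.0],
       [4.0, -2.0]]
u_meas = [3.0, 1.0]                # measured response at DOFs 0 and 2
print(expand(phi, [0, 2], u_meas))   # [3.0, 4.0, 1.0, 0.0]
```

With more measured DOFs than modes, the same idea becomes a least-squares fit for the modal coordinates, which is what gives the technique its robustness to measurement noise.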
NASA Technical Reports Server (NTRS)
Dehghani, Navid; Tankenson, Michael
2006-01-01
This paper details an architectural description of the Mission Data Processing and Control System (MPCS), an event-driven, multi-mission set of ground data processing components providing uplink, downlink, and data management capabilities, which will support the Mars Science Laboratory (MSL) project as its first target mission. MPCS is developed from a set of small reusable components, implemented in Java, each designed with a specific function and well-defined interfaces. An industry-standard messaging bus is used to transfer information among system components. Components generate standard messages, which are used to capture system information as well as triggers to support the event-driven architecture of the system. Event-driven systems are highly desirable for processing high-rate telemetry (science and engineering) data and for supporting automation of many mission operations processes.
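The publish/subscribe pattern the abstract describes can be sketched in a few lines: components publish standard messages to a bus by topic, and subscribed handlers react as events arrive. This is a toy stand-in for the industry-standard messaging middleware, not MPCS code; the topic name and message fields are invented.

```python
# Toy event-driven messaging bus: components publish messages by topic,
# subscribers react. A stand-in for real messaging middleware.

class MessageBus:
    def __init__(self):
        self.subscribers = {}          # topic -> list of handlers

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers.get(topic, []):
            handler(message)           # deliver to each subscriber in turn

bus = MessageBus()
received = []
bus.subscribe("telemetry.frame", received.append)   # a downlink component
bus.publish("telemetry.frame", {"vcid": 0, "length": 1115})
print(received)
```

Decoupling producers from consumers this way is what lets an event-driven ground system keep up with high-rate telemetry while remaining composed of small, independently testable components.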
Young, Kevin L [Idaho Falls, ID; Hungate, Kevin E [Idaho Falls, ID
2010-02-23
A system for providing operational feedback to a user of a detection probe may include an optical sensor to generate data corresponding to a position of the detection probe with respect to a surface; a microprocessor to receive the data; a software medium having code to process the data with the microprocessor and pre-programmed parameters, and making a comparison of the data to the parameters; and an indicator device to indicate results of the comparison. A method of providing operational feedback to a user of a detection probe may include generating output data with an optical sensor corresponding to the relative position with respect to a surface; processing the output data, including comparing the output data to pre-programmed parameters; and indicating results of the comparison.
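The feedback loop claimed above (compare sensor data to pre-programmed parameters, then indicate the result) can be sketched as follows. The parameter names and thresholds are assumptions made for illustration, not values from the patent.

```python
# Minimal sketch of the probe-feedback comparison: position-derived data
# is checked against pre-programmed limits and a result is indicated.
# Thresholds are hypothetical.

MAX_SPEED = 50.0    # mm/s, hypothetical scan-speed limit
MAX_HEIGHT = 5.0    # mm, hypothetical probe lift-off limit

def feedback(speed, height):
    """Compare optical-sensor-derived values to limits; return indication."""
    if speed > MAX_SPEED:
        return "TOO FAST"      # e.g. flash a warning indicator
    if height > MAX_HEIGHT:
        return "LIFT-OFF"      # probe too far from the surface
    return "OK"

print(feedback(speed=62.0, height=1.2))   # TOO FAST
print(feedback(speed=30.0, height=1.2))   # OK
```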
Computer program developed for flowsheet calculations and process data reduction
NASA Technical Reports Server (NTRS)
Alfredson, P. G.; Anastasia, L. J.; Knudsen, I. E.; Koppel, L. B.; Vogel, G. J.
1969-01-01
Computer program PACER-65 is used for flowsheet calculations and is easily adapted to process data reduction. Each unit, vessel, meter, and processing operation in the overall flowsheet is represented by a separate subroutine, which the program calls in the order required to complete an overall flowsheet calculation.
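The one-subroutine-per-unit structure can be sketched in modern terms (the original is Fortran-era): each unit operation is a function that transforms a process stream, and a driver calls them in flowsheet order. The units and numbers below are hypothetical, not from PACER-65.

```python
# Sketch of the one-subroutine-per-unit flowsheet idea: each unit
# operation transforms a stream dict; the driver calls units in order.
# Units and numbers are invented for illustration.

def feed(stream):
    stream.update(flow=100.0, x=0.25)       # kg/h, mass fraction solute
    return stream

def evaporator(stream):
    removed = 50.0                          # water removed, kg/h
    solute = stream["flow"] * stream["x"]   # solute is conserved
    stream["flow"] -= removed
    stream["x"] = solute / stream["flow"]
    return stream

def splitter(stream):
    stream["flow"] *= 0.5                   # half the stream to product
    return stream

FLOWSHEET = [feed, evaporator, splitter]    # call order, unit by unit

def run(flowsheet):
    stream = {}
    for unit in flowsheet:                  # one "subroutine" per unit
        stream = unit(stream)
    return stream

print(run(FLOWSHEET))   # {'flow': 25.0, 'x': 0.5}
```

Swapping the same driver onto measured plant data instead of design values is essentially the "process data reduction" adaptation the abstract mentions.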
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melius, C
2007-12-05
The epidemiological and economic modeling of poultry diseases requires knowing the size, location, and operational type of each poultry type operation within the US. At the present time, the only national database of poultry operations that is available to the general public is the USDA's 2002 Agricultural Census data, published by the National Agricultural Statistics Service, herein referred to as the 'NASS data'. The NASS data provides census data at the county level on poultry operations for various operation types (i.e., layers, broilers, turkeys, ducks, geese). However, the number of farms and sizes of farms for the various types are not independent since some facilities have more than one type of operation. Furthermore, some data on the number of birds represents the number sold, which does not represent the number of birds present at any given time. In addition, any data tabulated by NASS that could identify numbers of birds or other data reported by an individual respondent is suppressed by NASS and coded with a 'D'. To be useful for epidemiological and economic modeling, the NASS data must be converted into a unique set of facility types (farms having similar operational characteristics). The unique set must not double count facilities or birds. At the same time, it must account for all the birds, including those for which the data has been suppressed. Therefore, several data processing steps are required to work back from the published NASS data to obtain a consistent database for individual poultry operations. This technical report documents data processing steps that were used to convert the NASS data into a national poultry facility database with twenty-six facility types (7 egg-laying, 6 broiler, 1 backyard, 3 turkey, and 9 others, representing ducks, geese, ostriches, emus, pigeons, pheasants, quail, game fowl breeders and 'other'). The process involves two major steps.
The first step defines the rules used to estimate the data that is suppressed within the NASS database. The first step is similar to the first step used to estimate suppressed data for livestock [Melius et al (2006)]. The second step converts the NASS poultry types into the operational facility types used by the epidemiological and economic model. We also define two additional facility types for high and low risk poultry backyards, and an additional two facility types for live bird markets and swap meets. The distribution of these additional facility types among counties is based on US population census data. The algorithm defining the number of premises and the corresponding distribution among counties and the resulting premises density plots for the continental US are provided.
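One simple way to fill suppressed ('D') county counts, shown here purely for illustration, is to distribute the remainder of a known state total equally among the suppressed counties so that totals are preserved. The report's actual rules are more detailed; the counties and numbers below are invented.

```python
# Illustrative suppressed-data fill: counties coded "D" receive equal
# shares of (state total - known county counts), preserving the total.
# The real estimation rules are more detailed; data here are made up.

def fill_suppressed(county_counts, state_total):
    """county_counts: dict of county -> int or 'D' (suppressed)."""
    known = sum(v for v in county_counts.values() if v != "D")
    hidden = [c for c, v in county_counts.items() if v == "D"]
    share = (state_total - known) / len(hidden) if hidden else 0
    return {c: (share if v == "D" else v) for c, v in county_counts.items()}

counts = {"Adams": 1200, "Brown": "D", "Clark": 300, "Dane": "D"}
print(fill_suppressed(counts, 2500))
```

Any such scheme must be checked against the no-double-counting constraint the abstract states: filled counts plus known counts must reproduce the published totals exactly.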
Development of Cross-Platform Software for Well Logging Data Visualization
NASA Astrophysics Data System (ADS)
Akhmadulin, R. K.; Miraev, A. I.
2017-07-01
Well logging data processing is one of the main sources of information in oil-gas field analysis and is of great importance in the process of field development and operation. It is therefore important to have software that accurately and clearly provides the user with processed data in the form of well logs. In this work, a software product has been developed which not only provides the basic functionality for this task (loading data from .las files, displaying well log curves, etc.) but can also run in different operating systems and on different devices. In the article, a subject field analysis and task formulation have been performed, and the software design stage has been considered. At the end of the work, the resulting software product's interface is described.
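The .las-loading step can be illustrated with a minimal parser for the curve and data sections of a LAS-format string. Real LAS files carry more structure (version and well sections, wrap mode, null values), so this is a simplified sketch, not the paper's implementation.

```python
# Minimal sketch of loading curve data from a LAS-format string: curve
# mnemonics from the ~Curve section, numeric rows from the ~ASCII
# section. Real LAS files have more structure (wrap mode, nulls, etc.).

def parse_las(text):
    curves, rows, section = [], [], None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                        # skip blanks and comments
        if line.startswith("~"):
            section = line[1].upper()       # 'C' = curves, 'A' = data
            continue
        if section == "C":
            curves.append(line.split(".")[0].strip())   # mnemonic
        elif section == "A":
            rows.append([float(v) for v in line.split()])
    return curves, rows

las = """~Curve
DEPT.M     : depth
GR  .GAPI  : gamma ray
~ASCII
1500.0  85.2
1500.5  90.1
"""
curves, rows = parse_las(las)
print(curves, rows[1])   # ['DEPT', 'GR'] [1500.5, 90.1]
```

Each parsed row pairs a depth with curve values, which is exactly what a log-display widget needs to draw one sample per curve track.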
PNNL Data-Intensive Computing for a Smarter Energy Grid
Carol Imhoff; Zhenyu (Henry) Huang; Daniel Chavarria
2017-12-09
The Middleware for Data-Intensive Computing (MeDICi) Integration Framework, an integrated platform to solve data analysis and processing needs, supports PNNL research on the U.S. electric power grid. MeDICi is enabling development of visualizations of grid operations and vulnerabilities, with the goal of near real-time analysis to aid operators in preventing and mitigating grid failures.
Operating tool for a distributed data and information management system
NASA Astrophysics Data System (ADS)
Reck, C.; Mikusch, E.; Kiemle, S.; Wolfmüller, M.; Böttcher, M.
2002-07-01
The German Remote Sensing Data Center has developed the Data Information and Management System DIMS which provides multi-mission ground system services for earth observation product processing, archiving, ordering and delivery. DIMS successfully uses newest technologies within its services. This paper presents the solution taken to simplify operation tasks for this large and distributed system.
Synthetic Aperture Radar (SAR) data processing
NASA Technical Reports Server (NTRS)
Beckner, F. L.; Ahr, H. A.; Ausherman, D. A.; Cutrona, L. J.; Francisco, S.; Harrison, R. E.; Heuser, J. S.; Jordan, R. L.; Justus, J.; Manning, B.
1978-01-01
The available and optimal methods for generating SAR imagery for NASA applications were identified. The SAR image quality and data processing requirements associated with these applications were studied. Mathematical operations and algorithms required to process sensor data into SAR imagery were defined. The architecture of SAR image formation processors was discussed, and technology necessary to implement the SAR data processors used in both general purpose and dedicated imaging systems was addressed.
ASTEP user's guide and software documentation
NASA Technical Reports Server (NTRS)
Gliniewicz, A. S.; Lachowski, H. M.; Pace, W. H., Jr.; Salvato, P., Jr.
1974-01-01
The Algorithm Simulation Test and Evaluation Program (ASTEP) is a modular computer program developed for the purpose of testing and evaluating methods of processing remotely sensed multispectral scanner earth resources data. ASTEP is written in FORTRAN V on the UNIVAC 1110 under the EXEC 8 operating system and may be operated in either a batch or interactive mode. The program currently contains over one hundred subroutines consisting of data classification and display algorithms, statistical analysis algorithms, utility support routines, and feature selection capability. The current program can accept data in LARSC1, LARSC2, ERTS, and Universal formats, and can output processed image or data tapes in Universal format.
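The kind of per-pixel classification such programs perform can be illustrated with a minimum-distance classifier: each multispectral pixel is assigned to the class whose mean spectrum is nearest. The class means and pixels below are invented; ASTEP's actual algorithms are not specified in this abstract.

```python
# Illustrative minimum-distance classifier for multispectral pixels:
# assign each pixel to the class with the nearest mean spectrum.
# Class means and pixel values are hypothetical.

CLASS_MEANS = {"water": (10.0, 5.0), "crop": (40.0, 60.0)}  # 2 bands

def classify(pixel):
    def dist2(mean):
        return sum((p - m) ** 2 for p, m in zip(pixel, mean))
    return min(CLASS_MEANS, key=lambda c: dist2(CLASS_MEANS[c]))

print([classify(p) for p in [(12.0, 6.0), (38.0, 55.0)]])   # ['water', 'crop']
```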
Streaming data analytics via message passing with application to graph algorithms
Plimpton, Steven J.; Shead, Tim
2014-05-06
The need to process streaming data, which arrives continuously at high-volume in real-time, arises in a variety of contexts including data produced by experiments, collections of environmental or network sensors, and running simulations. Streaming data can also be formulated as queries or transactions which operate on a large dynamic data store, e.g. a distributed database. We describe a lightweight, portable framework named PHISH which enables a set of independent processes to compute on a stream of data in a distributed-memory parallel manner. Datums are routed between processes in patterns defined by the application. PHISH can run on top of either message-passing via MPI or sockets via ZMQ. The former means streaming computations can be run on any parallel machine which supports MPI; the latter allows them to run on a heterogeneous, geographically dispersed network of machines. We illustrate how PHISH can support streaming MapReduce operations, and describe streaming versions of three algorithms for large, sparse graph analytics: triangle enumeration, subgraph isomorphism matching, and connected component finding. Lastly, we also provide benchmark timings for MPI versus socket performance of several kernel operations useful in streaming algorithms.
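A toy analogue of the streaming model (not PHISH itself) can be built from chained generators: each stage consumes datums from the previous stage one at a time, mirroring a streaming map -> reduce pattern over graph edges. The edge data and stage names are invented.

```python
# Toy analogue of stream processing: each stage is a generator that
# consumes datums from the previous one, a streaming map -> reduce chain
# over graph edges. Not PHISH code; stages and data are illustrative.

def source(edges):
    for e in edges:
        yield e                              # emit datums one at a time

def dedupe(stream):
    seen = set()
    for e in stream:
        key = tuple(sorted(e))
        if key not in seen:                  # drop duplicate edges
            seen.add(key)
            yield key

def degree_count(stream):
    deg = {}
    for a, b in stream:                      # streaming reduce: vertex degrees
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

edges = [(1, 2), (2, 1), (2, 3), (3, 1)]
print(degree_count(dedupe(source(edges))))   # {1: 2, 2: 2, 3: 2}
```

In PHISH the stages would be independent processes and the generator hand-offs would be MPI messages or ZMQ sockets, which is what lets the same pipeline span one parallel machine or a dispersed network.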
An operations management system for the Space Station
NASA Astrophysics Data System (ADS)
Savage, Terry R.
A description is provided of an Operations Management System (OMS) for the planned NASA Space Station. The OMS would be distributed both in space and on the ground, and provide a transparent interface to the communications and data processing facilities of the Space Station Program. The allocation of OMS responsibilities has, in the most current Space Station design, been fragmented among the Communications and Tracking Subsystem (CTS), the Data Management System (DMS), and a redefined OMS. In this current view, OMS is less of a participant in the real-time processing, and more an overseer of the health and management of the Space Station operations.
OPALS: Mission System Operations Architecture for an Optical Communications Demonstration on the ISS
NASA Technical Reports Server (NTRS)
Abrahamson, Matthew J.; Sindiy, Oleg V.; Oaida, Bogdan V.; Fregoso, Santos; Bowles-Martinez, Jessica N.; Kokorowski, Michael; Wilkerson, Marcus W.; Konyha, Alexander L.
2014-01-01
In spring 2014, the Optical PAyload for Lasercomm Science (OPALS) will launch to the International Space Station (ISS) to demonstrate space-to-ground optical communications. During a 90-day baseline mission, OPALS will downlink high quality, short duration videos to the Optical Communications Telescope Laboratory (OCTL) in Wrightwood, California. To achieve mission success, interfaces to the ISS payload operations infrastructure are established. For OPALS, the interfaces facilitate activity planning, hazardous laser operations, commanding, and telemetry transmission. In addition, internal processes such as pointing prediction and data processing satisfy the technical requirements of the mission. The OPALS operations team participates in Operational Readiness Tests (ORTs) with external partners to exercise coordination processes and train for the overall mission. The tests have provided valuable insight into operational considerations on the ISS.
NASA Astrophysics Data System (ADS)
Nguyen, L.; Chee, T.; Palikonda, R.; Smith, W. L., Jr.; Bedka, K. M.; Spangenberg, D.; Vakhnin, A.; Lutz, N. E.; Walter, J.; Kusterer, J.
2017-12-01
Cloud Computing offers new opportunities for large-scale scientific data producers to utilize Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) IT resources to process and deliver data products in an operational environment where timely delivery, reliability, and availability are critical. The NASA Langley Research Center Atmospheric Science Data Center (ASDC) is building and testing a private and public facing cloud for users in the Science Directorate to utilize as an everyday production environment. The NASA SatCORPS (Satellite ClOud and Radiation Property Retrieval System) team processes and derives near real-time (NRT) global cloud products from operational geostationary (GEO) satellite imager datasets. To deliver these products, we will utilize the public facing cloud and OpenShift to deploy a load-balanced webserver for data storage, access, and dissemination. The OpenStack private cloud will host data ingest and computational capabilities for SatCORPS processing. This paper will discuss the SatCORPS migration towards, and usage of, the ASDC Cloud Services in an operational environment. Detailed lessons learned from use of prior cloud providers, specifically the Amazon Web Services (AWS) GovCloud and the Government Cloud administered by the Langley Managed Cloud Environment (LMCE) will also be discussed.
14 CFR 415.127 - Flight safety system design and operation data.
Code of Federal Regulations, 2013 CFR
2013-01-01
... system and subsystems design and operational requirements. (c) Flight safety system diagram. An applicant... subsystems. The diagram must include the following subsystems defined in part 417, subpart D of this chapter... data processing, display, and recording system; and flight safety official console. (d) Subsystem...
14 CFR 415.127 - Flight safety system design and operation data.
Code of Federal Regulations, 2011 CFR
2011-01-01
... system and subsystems design and operational requirements. (c) Flight safety system diagram. An applicant... subsystems. The diagram must include the following subsystems defined in part 417, subpart D of this chapter... data processing, display, and recording system; and flight safety official console. (d) Subsystem...
14 CFR 415.127 - Flight safety system design and operation data.
Code of Federal Regulations, 2012 CFR
2012-01-01
... system and subsystems design and operational requirements. (c) Flight safety system diagram. An applicant... subsystems. The diagram must include the following subsystems defined in part 417, subpart D of this chapter... data processing, display, and recording system; and flight safety official console. (d) Subsystem...
14 CFR 415.127 - Flight safety system design and operation data.
Code of Federal Regulations, 2014 CFR
2014-01-01
... system and subsystems design and operational requirements. (c) Flight safety system diagram. An applicant... subsystems. The diagram must include the following subsystems defined in part 417, subpart D of this chapter... data processing, display, and recording system; and flight safety official console. (d) Subsystem...
Advanced data management for optimising the operation of a full-scale WWTP.
Beltrán, Sergio; Maiza, Mikel; de la Sota, Alejandro; Villanueva, José María; Ayesa, Eduardo
2012-01-01
The lack of appropriate data management tools is presently a limiting factor for a broader implementation and a more efficient use of sensors and analysers, monitoring systems and process controllers in wastewater treatment plants (WWTPs). This paper presents a technical solution for advanced data management of a full-scale WWTP. The solution is based on an efficient and intelligent use of the plant data by a standard centralisation of the heterogeneous data acquired from different sources, effective data processing to extract adequate information, and a straightforward connection to other emerging tools focused on the operational optimisation of the plant such as advanced monitoring and control or dynamic simulators. A pilot study of the advanced data manager tool was designed and implemented in the Galindo-Bilbao WWTP. The results of the pilot study showed its potential for agile and intelligent plant data management by generating new enriched information combining data from different plant sources, facilitating the connection of operational support systems, and developing automatic plots and trends of simulated results and actual data for plant performance and diagnosis.
Operational Monitoring of GOME-2 and IASI Level 1 Product Processing at EUMETSAT
NASA Astrophysics Data System (ADS)
Livschitz, Yakov; Munro, Rosemary; Lang, Rüdiger; Fiedler, Lars; Dyer, Richard; Eisinger, Michael
2010-05-01
The growing complexity of operational level 1 radiance products from Low Earth Orbiting (LEO) platforms such as EUMETSAT's Metop series makes near-real-time monitoring of product quality a challenging task. The main challenge is to provide a monitoring system flexible and robust enough to identify and react to anomalies previously unknown to the system, and to provide all the means and parameters necessary to support efficient ad-hoc analysis of an incident. The operational monitoring system developed at EUMETSAT for GOME-2 and IASI level 1 data performs near-real-time monitoring of operational products and instrument health in a robust and flexible fashion. For effective information management, the system is based on a relational database (Oracle). An Extract, Transform, Load (ETL) process transforms products in EUMETSAT Polar System (EPS) format into relational data structures. Identifying commonalities between products and instruments allows the database structure to be designed so that different data can be analyzed using the same business-intelligence functionality. Interactive analysis software implementing modern data mining techniques is also provided for a detailed look into the data. The system is used effectively for day-to-day monitoring, long-term reporting, instrument degradation analysis, and ad-hoc queries in case of unexpected instrument or processing behaviour. Having data from different sources on a single instrument, and even from different instruments, platforms, or numerical weather prediction, within the same database allows effective cross-comparison and searches for correlated parameters. Automatic alarms, raised by checking for deviations of certain parameters, for data losses, and for other events, significantly reduce the time needed to monitor the processing on a day-to-day basis.
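The parameter-deviation alarms described above can be illustrated with a minimal sketch: a statistical check that flags a current value when it departs from recent history by more than a configurable number of standard deviations. The function name, threshold, and sample values are illustrative, not part of the EUMETSAT system.

```python
# Hypothetical sketch of a parameter-deviation alarm check; names and
# thresholds are illustrative, not the operational EUMETSAT implementation.
from statistics import mean, stdev

def check_deviation(history, current, n_sigma=3.0):
    """Flag `current` if it deviates more than n_sigma from the history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) > n_sigma * sigma

# Example: a stable housekeeping parameter with one anomalous sample
history = [4.98, 5.01, 5.00, 4.99, 5.02, 5.00, 4.97, 5.01]
assert not check_deviation(history, 5.03)  # within 3 sigma, no alarm
assert check_deviation(history, 6.2)       # far outside, raise alarm
```

In an operational setting such a check would run against the relational store after each ETL cycle, one query per monitored parameter.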
Statistical analysis of general aviation VG-VGH data
NASA Technical Reports Server (NTRS)
Clay, L. E.; Dickey, R. L.; Moran, M. S.; Payauys, K. W.; Severyn, T. P.
1974-01-01
To represent the loads spectra of general aviation aircraft operating in the Continental United States, VG and VGH data collected since 1963 in eight operational categories were processed and analyzed. Adequacy of data sample and current operational categories, and parameter distributions required for valid data extrapolation were studied along with envelopes of equal probability of exceeding the normal load factor (n sub z) versus airspeed for gust and maneuver loads and the probability of exceeding current design maneuver, gust, and landing impact n sub z limits. The significant findings are included.
National Centers for Environmental Prediction
The California Integrated Seismic Network
NASA Astrophysics Data System (ADS)
Hellweg, M.; Given, D.; Hauksson, E.; Neuhauser, D.; Oppenheimer, D.; Shakal, A.
2007-05-01
The mission of the California Integrated Seismic Network (CISN) is to operate a reliable, modern system to monitor earthquakes throughout the state; to generate and distribute information in real-time for emergency response, for the benefit of public safety, and for loss mitigation; and to collect and archive data for seismological and earthquake engineering research. To meet these needs, the CISN operates data processing and archiving centers, as well as more than 3000 seismic stations. Furthermore, the CISN is actively developing and enhancing its infrastructure, including its automated processing and archival systems. The CISN integrates seismic and strong motion networks operated by the University of California Berkeley (UCB), the California Institute of Technology (Caltech), and the United States Geological Survey (USGS) offices in Menlo Park and Pasadena, as well as the USGS National Strong Motion Program (NSMP), and the California Geological Survey (CGS). The CISN operates two earthquake management centers (the NCEMC and SCEMC) where statewide, real-time earthquake monitoring takes place, and an engineering data center (EDC) for processing strong motion data and making it available in near real-time to the engineering community. These centers employ redundant hardware to minimize disruptions to the earthquake detection and processing systems. At the same time, dual feeds of data from a subset of broadband and strong motion stations are telemetered in real- time directly to both the NCEMC and the SCEMC to ensure the availability of statewide data in the event of a catastrophic failure at one of these two centers. The CISN uses a backbone T1 ring (with automatic backup over the internet) to interconnect the centers and the California Office of Emergency Services. The T1 ring enables real-time exchange of selected waveforms, derived ground motion data, phase arrivals, earthquake parameters, and ShakeMaps. 
With the goal of operating similar and redundant statewide earthquake processing systems at both real-time EMCs, the CISN is currently adopting and enhancing the database-centric earthquake processing and analysis software originally developed for the Caltech/USGS Pasadena TriNet project. Earthquake data and waveforms are made available to researchers and to the public in near real-time through the CISN's Northern and Southern California Earthquake Data Centers (NCEDC and SCEDC) and through the USGS Earthquake Notification System (ENS). The CISN partners have developed procedures to automatically exchange strong motion data, both waveforms and peak parameters, for use in ShakeMap and in the rapid engineering reports that are available in near real-time through the strong motion EDC.
Future electro-optical sensors and processing in urban operations
NASA Astrophysics Data System (ADS)
Grönwall, Christina; Schwering, Piet B.; Rantakokko, Jouni; Benoist, Koen W.; Kemp, Rob A. W.; Steinvall, Ove; Letalick, Dietmar; Björkert, Stefan
2013-10-01
In the Electro-optical Sensors and processing in Urban Operations (ESUO) study, we pave the way for a common understanding, within the European Defence Agency (EDA) group of electro-optics experts (IAP03), of the optimal distribution of processing functions between the different platforms. Combinations of local, distributed and centralized processing are proposed. In this way, processing functionality can be matched to the required power and to the available communication-system data rates to obtain the desired reaction times. In the study, three priority scenarios were defined: camp protection, patrol and house search. For these scenarios, present-day and future sensors and signal processing technologies were studied. A method for analyzing information quality in single and multi-sensor systems has been applied. A method for estimating reaction times for transmission of data through the chain of command has been proposed and used. These methods are documented and can be used to modify the scenarios or be applied to other scenarios. Present-day data processing is organized mainly locally. Exchange of information with other platforms is very limited and is performed mainly at a high information level. The main issues that arose from the analysis of present-day systems and methodology are the slow reaction time, due to the limited field of view of present-day sensors, and the lack of robust automated processing. Efficient handover schemes between wide and narrow field-of-view sensors may, however, reduce the delay times. The main effort in the study was in forecasting the signal processing of EO sensors in the next ten to twenty years. Distributed processing is proposed between hand-held and vehicle-based sensors. This can be accompanied by cloud processing on board several vehicles.
Additionally, to perform sensor fusion on sensor data originating from different platforms, and making full use of UAV imagery, a combination of distributed and centralized processing is essential. There is a central role for sensor fusion of heterogeneous sensors in future processing. The changes that occur in the urban operations of the future due to the application of these new technologies will be the improved quality of information, with shorter reaction time, and with lower operator load.
Apparatus and Method for Assessing Vestibulo-Ocular Function
NASA Technical Reports Server (NTRS)
Shelhamer, Mark J. (Inventor)
2015-01-01
A system for assessing vestibulo-ocular function includes a motion sensor system adapted to be coupled to a user's head; a data processing system configured to communicate with the motion sensor system to receive the head-motion signals; a visual display system configured to communicate with the data processing system to receive image signals from the data processing system; and a gain control device arranged to be operated by the user and to communicate gain adjustment signals to the data processing system.
NASA Astrophysics Data System (ADS)
Holmdahl, P. E.; Ellis, A. B. E.; Moeller-Olsen, P.; Ringgaard, J. P.
1981-12-01
The basic requirements of the SAR ground segment of ERS-1 are discussed. A system configuration for the real time data acquisition station and the processing and archive facility is depicted. The functions of a typical SAR processing unit (SPU) are specified, and inputs required for near real time and full precision, deferred time processing are described. Inputs and the processing required for provision of these inputs to the SPU are dealt with. Data flow through the systems, and normal and nonnormal operational sequence, are outlined. Prerequisites for maintaining overall performance are identified, emphasizing quality control. The most demanding tasks to be performed by the front end are defined in order to determine types of processors and peripherals which comply with throughput requirements.
Controlling Laboratory Processes From A Personal Computer
NASA Technical Reports Server (NTRS)
Will, H.; Mackin, M. A.
1991-01-01
Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without an operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.
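The binding of user-defined commands to device-driving routines can be sketched as a simple dispatch table. This is an illustration of the idea only: the original system generated FORTRAN glue subroutines rather than a Python table, and the command names and device routines below are invented.

```python
# Illustrative sketch: user-defined commands dispatched to device-driving
# routines. All names are hypothetical; the original system generated
# FORTRAN subroutines for this binding.
def open_valve():
    return "valve opened"

def start_heater():
    return "heater on"

# User-supplied table binding command phrases to routines
COMMANDS = {
    "OPEN VALVE": open_valve,
    "START HEATER": start_heater,
}

def execute(command_line):
    """Look up a natural-language command and run its device routine."""
    action = COMMANDS.get(command_line.strip().upper())
    if action is None:
        return f"unknown command: {command_line}"
    return action()

assert execute("open valve") == "valve opened"
```

The table-driven design is what lets workers with limited programming skills extend the system: adding a command means adding an entry, not editing control logic.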
The Advanced Linked Extended Reconnaissance & Targeting Technology Demonstration project
NASA Astrophysics Data System (ADS)
Edwards, Mark
2008-04-01
The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing many operational needs of the future Canadian Army's Surveillance and Reconnaissance forces. Using the surveillance system of the Coyote reconnaissance vehicle as an experimental platform, the ALERT TD project aims to significantly enhance situational awareness by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing. The project is exploiting important advances made in computer processing capability, displays technology, digital communications, and sensor technology since the design of the original surveillance system. As the major research area within the project, concepts are discussed for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as from beyond line-of-sight systems such as mini-UAVs and unattended ground sensors. Video-rate image processing has been developed to assist the operator in detecting poorly visible targets. As a second major area of research, automatic target cueing capabilities have been added to the system. These include scene change detection, automatic target detection, and aided target recognition algorithms processing both IR and visible-band images to draw the operator's attention to possible targets. The merits of incorporating scene change detection algorithms are also discussed. In the area of multi-sensor data fusion, fusion up to Joint Directors of Laboratories (JDL) level 2 has been demonstrated. The human factors engineering aspects of the user interface in this complex environment are presented, drawing upon multiple user group sessions with military surveillance system operators. The paper concludes with Lessons Learned from the project.
The ALERT system has been used in a number of C4ISR field trials, most recently at Exercise Empire Challenge in China Lake CA, and at Trial Quest in Norway. Those exercises provided further opportunities to investigate operator interactions. The paper concludes with recommendations for future work in operator interface design.
Petabyte Class Storage at Jefferson Lab (CEBAF)
NASA Technical Reports Server (NTRS)
Chambers, Rita; Davis, Mark
1996-01-01
By 1997, the Thomas Jefferson National Accelerator Facility will collect over one Terabyte of raw information per day of Accelerator operation from three concurrently operating Experimental Halls. When post-processing is included, roughly 250 TB of raw and formatted experimental data will be generated each year. By the year 2000, a total of one Petabyte will be stored on-line. Critical to the experimental program at Jefferson Lab (JLab) is the networking and computational capability to collect, store, retrieve, and reconstruct data on this scale. The design criteria include support of a raw data stream of 10-12 MB/second from Experimental Hall B, which will operate the CEBAF (Continuous Electron Beam Accelerator Facility) Large Acceptance Spectrometer (CLAS). Keeping up with this data stream implies design strategies that provide storage guarantees during accelerator operation, minimize the number of times data is buffered, allow seamless access to specific data sets for the researcher, synchronize data retrievals with the scheduling of postprocessing calculations on the data reconstruction CPU farms, and support the site's capability to perform data reconstruction and reduction at the same overall rate at which new data is being collected. The current implementation employs state-of-the-art StorageTek Redwood tape drives and a robotics library integrated with the Open Storage Manager (OSM) Hierarchical Storage Management software (Computer Associates, International), the use of Fibre Channel RAID disks dual-ported between Sun Microsystems SMP servers, and a network-based interface to a 10,000 SPECint92 data processing CPU farm. Issues of efficiency, scalability, and manageability will become critical to meet the year 2000 requirements for a Petabyte of near-line storage interfaced to over 30,000 SPECint92 of data processing power.
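The quoted rates are internally consistent, which a back-of-envelope calculation makes clear: a 10-12 MB/s stream sustained for a day is on the order of the quoted 1 TB/day. This sketch assumes continuous operation and decimal units; actual duty cycles were lower.

```python
# Back-of-envelope check of the data rates quoted above (assumes
# continuous operation and decimal MB/TB units).
SECONDS_PER_DAY = 86_400

def daily_volume_tb(rate_mb_per_s):
    """Convert a sustained rate in MB/s to a daily volume in TB."""
    return rate_mb_per_s * SECONDS_PER_DAY / 1e6  # MB -> TB

# Hall B alone, at 10-12 MB/s, approaches the quoted 1 TB/day of raw data:
low, high = daily_volume_tb(10), daily_volume_tb(12)
assert 0.8 < low < high < 1.1
```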
NASA Astrophysics Data System (ADS)
Samsinar, Riza; Suseno, Jatmiko Endro; Widodo, Catur Edi
2018-02-01
The distribution network is the part of the power grid closest to the customers of electric service providers such as PT PLN. The dispatching center of a power grid company is also the grid's data center, where a great amount of operating information is gathered. The valuable information contained in these data means a great deal for power grid operating management. Data warehousing with online analytical processing (OLAP) has been used to manage and analyze this large volume of data. The online analytical information system built on the data warehouse with OLAP produces chart and query reports. The chart reports consist of load distribution charts over repeated time periods, load distribution charts by area, substation-region charts, and electric load usage charts. The results of the OLAP process show the development of electric load distribution, provide analysis of electric power consumption loads, and offer an alternative way of presenting information related to peak load.
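The roll-up and peak-load queries described above can be sketched in a few lines: aggregate load readings along one dimension of the cube (area) and query for the peak across all dimensions. The records and field names below are invented for illustration, not drawn from the PLN data.

```python
# Minimal sketch of OLAP-style roll-up and peak-load queries over load
# readings; data and field names are illustrative.
from collections import defaultdict

readings = [
    {"area": "North", "hour": 18, "load_mw": 120.0},
    {"area": "North", "hour": 19, "load_mw": 135.5},
    {"area": "South", "hour": 18, "load_mw": 98.2},
    {"area": "South", "hour": 19, "load_mw": 110.0},
]

# Roll up total load per area (aggregation along one cube dimension)
load_by_area = defaultdict(float)
for r in readings:
    load_by_area[r["area"]] += r["load_mw"]

# Peak-load query across all areas and hours
peak = max(readings, key=lambda r: r["load_mw"])

assert round(load_by_area["North"], 1) == 255.5
assert peak["area"] == "North" and peak["hour"] == 19
```

A real warehouse would precompute such aggregates as materialized cube views rather than scanning the fact table per query.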
Research & Technology Report Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Soffen, Gerald A. (Editor); Truszkowski, Walter (Editor); Ottenstein, Howard (Editor); Frost, Kenneth (Editor); Maran, Stephen (Editor); Walter, Lou (Editor); Brown, Mitch (Editor)
1995-01-01
The main theme of this edition of the annual Research and Technology Report is Mission Operations and Data Systems. Shifting from centralized to distributed mission operations, and from human interactive operations to highly automated operations is reported. The following aspects are addressed: Mission planning and operations; TDRSS, Positioning Systems, and orbit determination; hardware and software associated with Ground System and Networks; data processing and analysis; and World Wide Web. Flight projects are described along with the achievements in space sciences and earth sciences. Spacecraft subsystems, cryogenic developments, and new tools and capabilities are also discussed.
NASA's Earth Science Data Systems Standards Process Experiences
NASA Technical Reports Server (NTRS)
Ullman, Richard E.; Enloe, Yonsook
2007-01-01
NASA has impaneled several internal working groups to provide recommendations to NASA management on ways to evolve and improve Earth Science Data Systems. One of these working groups is the Standards Process Group (SPG). The SPG is drawn from NASA-funded Earth Science Data Systems stakeholders, and it directs a process of community review and evaluation of proposed NASA standards. The working group's goal is to promote interoperability and interuse of NASA Earth Science data through broader use of standards that have proven implementation and operational benefit to NASA Earth science, by facilitating NASA management endorsement of proposed standards. The SPG now has two years of experience with this approach to the identification of standards. We will discuss real examples of the different types of candidate standards that have been proposed to NASA's Standards Process Group, such as OPeNDAP's Data Access Protocol, the Hierarchical Data Format, and the Open Geospatial Consortium's Web Map Server. Each of the three types of proposals requires a different sort of criteria for understanding the broad concepts of "proven implementation" and "operational benefit" in the context of NASA Earth Science data systems. We will discuss how our Standards Process has evolved with our experiences with the three candidate standards.
NASA Technical Reports Server (NTRS)
Hammond, P. L.
1979-01-01
This manual describes the use of the primary ultrasonics task (PUT) and the transducer characterization system (XC) for the collection, processing, and recording of data received from a pulse-echo ultrasonic system. Both PUT and XC include five primary functions common to many real-time data acquisition systems. Some of these functions are implemented using the same code in both systems. The solicitation and acceptance of operator control input is emphasized. Those operations not under user control are explained.
Interactive color display for multispectral imagery using correlation clustering
NASA Technical Reports Server (NTRS)
Haskell, R. E. (Inventor)
1979-01-01
A method for processing multispectral data is provided, which permits an operator to make parameter level changes during the processing of the data. The system is directed to production of a color classification map on a video display in which a given color represents a localized region in multispectral feature space. Interactive controls permit an operator to alter the size and change the location of these regions, permitting the classification of such region to be changed from a broad to a narrow classification.
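The region-based classification above can be sketched as a color class defined by a box in multispectral feature space whose center (location) and half-widths (size) the operator adjusts interactively. The class, band values, and thresholds below are all illustrative, not from the patented system.

```python
# Sketch of interactive region-based classification: each display color
# corresponds to a box in multispectral feature space whose location and
# size the operator can change. All names and values are illustrative.
class ColorRegion:
    def __init__(self, color, center, half_width):
        self.color = color
        self.center = center          # one value per spectral band
        self.half_width = half_width  # region "size" per band

    def contains(self, pixel):
        """True if the pixel's band values fall inside the box."""
        return all(abs(p - c) <= w
                   for p, c, w in zip(pixel, self.center, self.half_width))

water = ColorRegion("blue", center=(20, 15, 8), half_width=(10, 10, 6))

assert water.contains((25, 12, 10))   # broad classification: inside
# Operator narrows the region, changing broad to narrow classification:
water.half_width = (3, 3, 3)
assert not water.contains((25, 12, 10))
```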
75 FR 8508 - Computerized Tribal IV-D Systems and Office Automation
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-25
...This rule enables Tribes and Tribal organizations currently operating comprehensive Tribal Child Support Enforcement programs under Title IV-D of the Social Security Act (the Act) to apply for and receive direct Federal funding for the costs of automated data processing. This rule addresses the Secretary's commitment to provide instructions and guidance to Tribes and Tribal organizations on requirements for applying for, and upon approval, securing Federal Financial Participation (FFP) in the costs of installing, operating, maintaining, and enhancing automated data processing systems.
Space shuttle engineering and operations support. Avionics system engineering
NASA Technical Reports Server (NTRS)
Broome, P. A.; Neubaur, R. J.; Welsh, R. T.
1976-01-01
The shuttle avionics integration laboratory (SAIL) requirements for supporting the Spacelab/orbiter avionics verification process are defined. The principal topics are a Spacelab avionics hardware assessment, test operations center/electronic systems test laboratory (TOC/ESL) data processing requirements definition, SAIL (Building 16) payload accommodations study, and projected funding and test scheduling. Because of the complex nature of the Spacelab/orbiter computer systems, the PCM data link, and the high rate digital data system hardware/software relationships, early avionics interface verification is required. The SAIL is a prime candidate test location to accomplish this early avionics verification.
Method and apparatus for detecting concealed weapons
Kotter, Dale K.; Fluck, Frederick D.
2006-03-14
Apparatus for classifying a ferromagnetic object within a sensing area may include a magnetic field sensor that produces magnetic field data. A signal processing system operatively associated with the magnetic field sensor includes a neural network. The neural network compares the magnetic field data with magnetic field data produced by known ferromagnetic objects to make a probabilistic determination as to the classification of the ferromagnetic object within the sensing area. A user interface operatively associated with the signal processing system produces a user-discernable output indicative of the probabilistic determination of the classification of the ferromagnetic object within a sensing area.
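The patent's probabilistic comparison against known-object signatures can be illustrated with a stand-in for the neural network: a template matcher that turns distances to known signatures into a probability distribution via a softmax. The signatures and scale factor are invented; this is a sketch of the probabilistic-classification idea, not the patented network.

```python
# Stand-in sketch for the neural-network classifier: probabilistic
# classification of a magnetic-field signature by softmax over negative
# distances to known-object templates. All signatures are invented.
from math import exp

TEMPLATES = {
    "handgun":     [0.9, 0.4, 0.7],
    "keys":        [0.2, 0.1, 0.1],
    "belt buckle": [0.4, 0.3, 0.2],
}

def classify(signature, scale=10.0):
    """Return {label: probability} for an observed field signature."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    scores = {k: exp(-scale * dist(signature, v)) for k, v in TEMPLATES.items()}
    total = sum(scores.values())
    return {k: s / total for k, s in scores.items()}

probs = classify([0.85, 0.45, 0.65])
assert max(probs, key=probs.get) == "handgun"   # most probable class
assert abs(sum(probs.values()) - 1.0) < 1e-9    # valid distribution
```

The user-discernable output in the patent corresponds to reporting this distribution (or its argmax with a confidence) rather than a hard yes/no.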
A radar data processing and enhancement system
NASA Technical Reports Server (NTRS)
Anderson, K. F.; Wrin, J. W.; James, R.
1986-01-01
This report describes the space position data processing system of the NASA Western Aeronautical Test Range. The system is installed at the Dryden Flight Research Facility of NASA Ames Research Center. This operational radar data system (RADATS) provides simultaneous data processing for multiple data inputs and tracking and antenna pointing outputs while performing real-time monitoring, control, and data enhancement functions. Experience in support of the space shuttle and aeronautical flight research missions is described, as well as the automated calibration and configuration functions of the system.
National Space Transportation System Reference. Volume 2: Operations
NASA Technical Reports Server (NTRS)
1988-01-01
An overview of the Space Transportation System is presented in which aspects of the program operations are discussed. The various mission preparation and prelaunch operations are described including astronaut selection and training, Space Shuttle processing, Space Shuttle integration and rollout, Complex 39 launch pad facilities, and Space Shuttle cargo processing. Also, launch and flight operations and space tracking and data acquisition are described along with the mission control and payload operations control center. In addition, landing, postlanding, and solid rocket booster retrieval operations are summarized. Space Shuttle program management is described and Space Shuttle mission summaries and chronologies are presented. A glossary of acronyms and abbreviations are provided.
NASA Technical Reports Server (NTRS)
Weaver, William L.; Bush, Kathryn A.; Harris, Chris J.; Howerton, Clayton E.; Tolson, Carol J.
1991-01-01
Instruments of the Earth Radiation Budget Experiment (ERBE) are operating on three different Earth-orbiting spacecraft: the Earth Radiation Budget Satellite (ERBS), NOAA-9, and NOAA-10. An overview is presented of the ERBE mission, in-orbit environments, and instrument design and operational features. An overview of science data processing and validation procedures is also presented. In-flight operations are described for the ERBE instruments aboard the ERBS and NOAA-9. Calibration and other operational procedures are described, and operational and instrument housekeeping data are presented and discussed.
Classification Trees for Quality Control Processes in Automated Constructed Response Scoring.
ERIC Educational Resources Information Center
Williamson, David M.; Hone, Anne S.; Miller, Susan; Bejar, Isaac I.
As the automated scoring of constructed responses reaches operational status, the issue of monitoring the scoring process becomes a primary concern, particularly when the goal is to have automated scoring operate completely unassisted by humans. Using a vignette from the Architectural Registration Examination and data for 326 cases with both human…
NASA Technical Reports Server (NTRS)
Brown, R. A.
1982-01-01
The productivity of spectroreflectometer equipment and operating personnel and the accuracy and sensitivity of the measurements were investigated. Work addressed increased optical sensitivity and a better design of the data collection and processing scheme that eliminates some unnecessary present operations. Two promising approaches to increased sensitivity were identified: conventional processing with error compensation, and detection of random noise modulation.
Computer program compatible with a laser nephelometer
NASA Technical Reports Server (NTRS)
Paroskie, R. M.; Blau, H. H., Jr.; Blinn, J. C., III
1975-01-01
The laser nephelometer data system was updated to provide magnetic tape recording of data, and real time or near real time processing of data to provide particle size distribution and liquid water content. Digital circuits were provided to interface the laser nephelometer to a Data General Nova 1200 minicomputer. Communications are via a teletypewriter. A dual Linc Magnetic Tape System is used for program storage and data recording. Operational programs utilize the Data General Real-Time Operating System (RTOS) and the ERT AIRMAP Real-Time Operating System (ARTS). The programs provide for acquiring data from the laser nephelometer, acquiring data from auxiliary sources, keeping time, performing real time calculations, recording data and communicating with the teletypewriter.
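The liquid water content calculation implied above amounts to summing water mass over a binned droplet size distribution, assuming spherical droplets. The bin diameters and concentrations below are illustrative values in a typical cloud range, not instrument data.

```python
# Sketch of a liquid-water-content (LWC) calculation from a binned droplet
# size distribution, assuming spherical droplets. Bin values are
# illustrative, not nephelometer data.
from math import pi

RHO_WATER = 1.0e6  # g/m^3 (i.e., 1 g/cm^3)

def liquid_water_content(bins):
    """bins: list of (diameter_m, number_concentration_per_m3).
    Returns LWC in g/m^3: sum of n * rho * (pi/6) * d^3 over bins."""
    return sum(n * RHO_WATER * (pi / 6.0) * d ** 3 for d, n in bins)

# 10- and 20-micron droplets at cloud-like number concentrations
dist = [(10e-6, 5.0e7), (20e-6, 1.0e7)]
lwc = liquid_water_content(dist)
assert 0.05 < lwc < 0.2  # plausible cloud LWC range, g/m^3
```

In the real-time system this sum would be recomputed each sample period from the sized scattering counts, which is why keeping the per-bin arithmetic cheap mattered on a 1970s minicomputer.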
NASA Astrophysics Data System (ADS)
Conway, Esther; Waterfall, Alison; Pepler, Sam; Newey, Charles
2015-04-01
In this paper we describe a business process modelling approach to the integration of existing archival activities. We provide a high-level overview of existing practice and discuss how procedures can be extended and supported through the description of preservation state, the aim being to facilitate the dynamic, controlled management of scientific data through its lifecycle. The main types of archival processes considered are: • Management processes that govern the operation of an archive. These management processes include archival governance (preservation state management, selection of archival candidates and strategic management). • Operational processes that constitute the core activities of the archive and maintain the value of research assets. These operational processes are the acquisition, ingestion, deletion, generation of metadata and preservation activities. • Supporting processes, which include planning, risk analysis and monitoring of the community/preservation environment. We then proceed by describing the feasibility testing of extended risk management and planning procedures which integrate current practices. This was done through the CEDA Archival Format Audit, which inspected the British Atmospheric Data Centre and National Earth Observation Data Centre archival holdings. These holdings are extensive, comprising around 2 PB of data and 137 million individual files, which were analysed and characterised in terms of format-based risk. We are then able to present an overview of the risk burden faced by a large-scale archive attempting to maintain the usability of heterogeneous environmental data sets. We conclude by presenting a dynamic data management information model that is capable of describing the preservation state of archival holdings throughout the data lifecycle.
We provide discussion of the following core model entities and their relationships:
• Aspirational entities, which include Data Entity definitions and their associated Preservation Objectives.
• Risk entities, which act as drivers for change within the data lifecycle. These include Acquisitional Risks, Technical Risks, Strategic Risks, and External Risks.
• Plan entities, which detail the actions to bring about change within an archive. These include Acquisition Plans, Preservation Plans, and Monitoring Plans.
• Result entities, which describe the successful outcomes of the executed plans. These include Acquisitions, Mitigations, and Accepted Risks.
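The entity relationships above can be sketched as a small data model. Class and field names here are assumptions for illustration, not the actual CEDA schema:

```python
from dataclasses import dataclass, field

@dataclass
class PreservationObjective:       # aspirational: what must stay true of the data
    description: str

@dataclass
class DataEntity:                  # aspirational: the holding itself
    name: str
    objectives: list = field(default_factory=list)

@dataclass
class Risk:                        # driver for change within the lifecycle
    category: str                  # "acquisitional" | "technical" | "strategic" | "external"
    description: str

@dataclass
class Plan:                        # action to bring about change in the archive
    kind: str                      # "acquisition" | "preservation" | "monitoring"
    mitigates: Risk

@dataclass
class Result:                      # successful outcome of an executed plan
    outcome: str                   # "acquisition" | "mitigation" | "accepted-risk"
    plan: Plan

risk = Risk("technical", "format obsolescence")
plan = Plan("preservation", mitigates=risk)
print(Result("mitigation", plan).outcome)  # mitigation
```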
NASA Technical Reports Server (NTRS)
Basile, Lisa
1988-01-01
The SLDPF is responsible for the capture, quality monitoring, processing, accounting, and shipment of Spacelab and/or Attached Shuttle Payloads (ASP) telemetry data to various user facilities. Expert systems will aid in the performance of the quality assurance and data accounting functions of the two SLDPF functional elements: the Spacelab Input Processing System (SIPS) and the Spacelab Output Processing System (SOPS). Prototypes were developed for each as independent efforts. The SIPS Knowledge System Prototype (KSP) used the commercial shell OPS5+ on an IBM PC/AT; the SOPS Expert System Prototype used the expert system shell CLIPS implemented on a Macintosh personal computer. Both prototypes emulate the duties of the respective QA/DA analysts based upon analyst input and predetermined mission criteria parameters, and recommend instructions and decisions governing the reprocessing, release, or holding of data for further analysis. These prototypes demonstrated feasibility and high potential for operational systems. Increased productivity, decreased tedium, consistency, concise historical records, and utility as a training tool for new analysts were the principal advantages. An operational configuration, taking advantage of the SLDPF network capabilities, is under development with the expert systems being installed on SUN workstations. This new configuration, in conjunction with the potential of the expert systems, will enhance the efficiency, in both time and quality, of the SLDPF's release of Spacelab/ASP data products.
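The analyst-emulating decision logic described above is rule-based. A minimal sketch of such a release/reprocess/hold rule, with invented thresholds and field names (the real criteria are mission-specific and predetermined per mission):

```python
# Given quality metrics for a telemetry data set, recommend a disposition.
# Thresholds and metric names below are illustrative assumptions only.

def recommend(metrics):
    if metrics["frame_sync_loss_pct"] > 5.0:
        return "reprocess"      # too many dropped frames: run the data again
    if metrics["gap_count"] > 0 and not metrics["gaps_explained"]:
        return "hold"           # unexplained data gaps need analyst review
    return "release"            # quality criteria met: ship to user facility

print(recommend({"frame_sync_loss_pct": 0.3, "gap_count": 0,
                 "gaps_explained": True}))   # release
print(recommend({"frame_sync_loss_pct": 9.1, "gap_count": 2,
                 "gaps_explained": False}))  # reprocess
```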
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R
Methods, apparatuses, and computer program products for endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface (`PAMI`) of a parallel computer are provided. Embodiments include establishing by a parallel application a data communications geometry, the geometry specifying a set of endpoints that are used in collective operations of the PAMI, including associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry. Embodiments also include registering in each endpoint in the geometry a dispatch callback function for a collective operation and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.
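The pattern described above (a geometry grouping a set of endpoints, a dispatch callback registered per endpoint, and a collective issued without blocking the caller) can be sketched in plain Python, with threads standing in for the messaging runtime. None of these names are the actual PAMI API; this is only an illustration of the shape of the mechanism:

```python
import threading, queue

class Endpoint:
    def __init__(self, rank):
        self.rank = rank
        self.callbacks = {}

    def register(self, op_name, fn):
        # Register a dispatch callback for a named collective operation.
        self.callbacks[op_name] = fn

class Geometry:
    def __init__(self, endpoints):
        self.endpoints = endpoints
        self.valid_algorithms = ["binomial-tree"]  # algorithms valid for this geometry

    def collective_nonblocking(self, op_name, data, done_q):
        # Run the collective on a worker thread so the caller never blocks;
        # completion is signalled through the queue.
        def run():
            results = [ep.callbacks[op_name](data) for ep in self.endpoints]
            done_q.put(results)
        threading.Thread(target=run).start()

eps = [Endpoint(r) for r in range(4)]
for ep in eps:
    ep.register("allreduce-sum", lambda chunk: sum(chunk))

done = queue.Queue()
Geometry(eps).collective_nonblocking("allreduce-sum", [1, 2, 3], done)
print(done.get())  # each endpoint's dispatch callback ran: [6, 6, 6, 6]
```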
The purpose of this SOP is to describe the flow of field data forms through the data processing system and to define who is responsible for the data at any time. It applies to field data forms collected and processed by NHEXAS Arizona. This procedure was followed to ensure consi...
EDOS operations concept and development approach
NASA Technical Reports Server (NTRS)
Knoble, G.; Garman, C.; Alcott, G.; Ramchandani, C.; Silvers, J.
1994-01-01
The Earth Observing System (EOS) Data and Operations System (EDOS) is being developed by the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) for the capture, level zero processing, distribution, and backup archiving of high speed telemetry data received from EOS spacecraft. All data received will conform to the Consultative Committee for Space Data Standards (CCSDS) recommendations. The major EDOS goals are to: (1) minimize EOS program costs to implement and operate EDOS; (2) respond effectively to EOS growth requirements; and (3) maintain compatibility with existing and enhanced versions of NASA institutional systems required to support EOS spacecraft. In order to meet these goals, the following objectives have been defined for EDOS: (1) standardize EDOS interfaces to maximize utility for future requirements; (2) emphasize life-cycle cost (LCC) considerations (rather than procurement costs) in making design decisions and meeting reliability, maintainability, availability (RMA) and upgradability requirements; (3) implement data-driven operations to the maximum extent possible to minimize staffing requirements and to maximize system responsiveness; (4) provide a system capable of simultaneously supporting multiple spacecraft, each in different phases of their life-cycles; (5) provide for technology insertion features to accommodate growth and future LCC reductions during the operations phase; and (6) provide a system that is sufficiently robust to accommodate incremental performance upgrades while supporting operations. Operations concept working group meetings were facilitated to help develop the EDOS operations concept. This provided a cohesive concept that met with approval of responsible personnel from the start. This approach not only speeded up the development process by reducing review cycles, it also provided a medium for generating good ideas that were immediately molded into feasible concepts. 
The operations concept was then used as a basis for the EDOS specification. When concept elements were found not to support detailed requirements, the facilitator process was used to resolve discrepancies or to add new concept elements to support the specification. This method provided ongoing revision of the operations concept and prevented large revisions at the end of the requirements analysis phase of system development.
Model-based Assessment for Balancing Privacy Requirements and Operational Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knirsch, Fabian; Engel, Dominik; Frincu, Marc
2015-02-17
The smart grid changes the way energy is produced and distributed. In addition, both energy and information are exchanged bidirectionally among participating parties. Heterogeneous systems therefore have to cooperate effectively in order to achieve a common high-level use case, such as smart metering for billing or demand response for load curtailment. Furthermore, a substantial amount of personal data is often needed to achieve that goal. Capturing and processing personal data in the smart grid increases customer concerns about privacy, and in addition, certain statutory and operational requirements regarding privacy-aware data processing and storage have to be met. An increase of privacy constraints, however, often limits the operational capabilities of the system. In this paper, we present an approach that automates the process of finding an optimal balance between privacy requirements and operational requirements in a smart grid use case and application scenario. This is achieved by formally describing use cases in an abstract model and by finding an algorithm that determines the optimum balance by forward mapping privacy and operational impacts. For this optimal balancing algorithm, both a numeric approximation and, where feasible, an analytic assessment are presented and investigated. The system is evaluated by applying the tool to a real-world use case from the University of Southern California (USC) microgrid.
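The numeric balancing idea can be illustrated with a toy weighted trade-off. The candidate configurations, impact scores, and weights below are invented for the sketch and are not taken from the paper:

```python
# Each candidate metering configuration gets a (made-up) privacy impact score
# and an operational-capability score, both in [0, 1]; higher is better.
candidates = {
    "raw-15min-readings": {"privacy": 0.2, "operational": 1.0},
    "aggregated-daily":   {"privacy": 0.8, "operational": 0.5},
    "anonymized-15min":   {"privacy": 0.6, "operational": 0.8},
}

def best_balance(cands, w_privacy=0.5):
    # Pick the configuration maximizing the weighted sum of both impacts.
    def score(name):
        v = cands[name]
        return w_privacy * v["privacy"] + (1 - w_privacy) * v["operational"]
    return max(cands, key=score)

print(best_balance(candidates))                 # anonymized-15min at equal weights
print(best_balance(candidates, w_privacy=0.9))  # privacy-heavy: aggregated-daily
```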
International Ultraviolet Explorer Observatory operations
NASA Technical Reports Server (NTRS)
1985-01-01
This volume contains the final report for the International Ultraviolet Explorer (IUE) Observatory Operations contract. The fundamental operational objective of the IUE program is to translate competitively selected observing programs into IUE observations, to reduce these observations into meaningful scientific data, and then to present these data to the Guest Observer in a form amenable to the pursuit of scientific research. The IUE Observatory is the key to this objective since it is the central control and support facility for all science operations functions within the IUE Project. In carrying out the operation of this facility, a number of complex functions were provided, beginning with telescope scheduling and operation, proceeding to data processing, and ending with data distribution and scientific data analysis. In support of these critical-path functions, a number of other significant activities were also provided, including scientific instrument calibration, systems analysis, and software support. Routine activities have been summarized briefly whenever possible.
NASA Astrophysics Data System (ADS)
Heyd, R. S.; McArthur, G. A.; Leis, R.; Fennema, A.; Wolf, N.; Schaller, C. J.; Sutton, S.; Plassmann, J.; Forrester, T.; Fine, K.
2018-04-01
The HiRISE ground data system is a mature data processing system in operation for over 12 years. The experience gained from this system will be applied to developing a new and more modern GDS to process data from the CaSSIS instrument.
Smartphone Analytics: Mobilizing the Lab into the Cloud for Omic-Scale Analyses.
Montenegro-Burke, J Rafael; Phommavongsay, Thiery; Aisporna, Aries E; Huan, Tao; Rinehart, Duane; Forsberg, Erica; Poole, Farris L; Thorgersen, Michael P; Adams, Michael W W; Krantz, Gregory; Fields, Matthew W; Northen, Trent R; Robbins, Paul D; Niedernhofer, Laura J; Lairson, Luke; Benton, H Paul; Siuzdak, Gary
2016-10-04
Active data screening is an integral part of many scientific activities, and mobile technologies have greatly facilitated this process by minimizing the reliance on large hardware instrumentation. To meet the demands of the rapidly growing field of metabolomics and the heavy workload of data processing, we designed the first remote metabolomic data screening platform for mobile devices. Two mobile applications (apps), XCMS Mobile and METLIN Mobile, facilitate access to XCMS and METLIN, which are the most important components in the computer-based XCMS Online platforms. These mobile apps allow for the visualization and analysis of metabolic data throughout the entire analytical process. Specifically, XCMS Mobile and METLIN Mobile provide the capabilities for remote monitoring of data processing, real-time notifications for the data processing, visualization and interactive analysis of processed data (e.g., cloud plots, principal component analysis, box plots, extracted ion chromatograms, and hierarchical cluster analysis), and database searching for metabolite identification. These apps, available on Apple iOS and Google Android operating systems, allow for the migration of metabolomic research onto mobile devices for better accessibility beyond direct instrument operation. The utility of XCMS Mobile and METLIN Mobile functionalities is demonstrated here through the metabolomic LC-MS analyses of stem cells, colon cancer, aging, and bacterial metabolism.
Development of a prototype real-time automated filter for operational deep space navigation
NASA Technical Reports Server (NTRS)
Masters, W. C.; Pollmeier, V. M.
1994-01-01
Operational deep space navigation has been, and continues to be, performed using systems whose architecture requires constant human supervision and intervention. A prototype has been developed for a system that allows relatively automated processing of radio metric data received in near real time from NASA's Deep Space Network (DSN), without any redesign of the existing operational data flow. This system can allow for more rapid response as well as much-reduced staffing to support mission navigation operations.
Low Cost Mission Operations Workshop. [Space Missions
NASA Technical Reports Server (NTRS)
1994-01-01
The presentations given at the Low Cost (Space) Mission Operations (LCMO) Workshop are outlined. The LCMO concepts are covered in four introductory sections: Definition of Mission Operations (OPS); Mission Operations (MOS) Elements; The Operations Concept; and Mission Operations for Two Classes of Missions (operationally simple and complex). Individual presentations cover the following topics: Science Data Processing and Analysis; Mission Design, Planning, and Sequencing; Data Transport and Delivery; and Mission Coordination and Engineering Analysis. A list of panelists who participated in the conference is included along with a listing of the contact persons for obtaining more information concerning LCMO at JPL. The presentation of this document is in outline and graphic form.
Preliminary design review package for the solar heating and cooling central data processing system
NASA Technical Reports Server (NTRS)
1976-01-01
The Central Data Processing System (CDPS) is designed to transform the raw data collected at remote sites into performance evaluation information for assessing the performance of solar heating and cooling systems. Software requirements for the CDPS are described. The programming standards to be used in development, documentation, and maintenance of the software are discussed along with the CDPS operations approach in support of daily data collection and processing.
NASA Astrophysics Data System (ADS)
Acevedo, Romina; Orihuela, Nuris; Blanco, Rafael; Varela, Francisco; Camacho, Enrique; Urbina, Marianela; Aponte, Luis Gabriel; Vallenilla, Leopoldo; Acuña, Liana; Becerra, Roberto; Tabare, Terepaima; Recaredo, Erica
2009-12-01
Built in cooperation with the P.R. of China, the Bolivarian Republic of Venezuela launched its first telecommunications satellite, VENESAT-1 (Simón Bolívar Satellite), on October 29, 2008. It operates in the C band (covering Central America, the Caribbean region, and most of South America), the Ku band (Bolivia, Cuba, Dominican Republic, Haiti, Paraguay, Uruguay, Venezuela), and the Ka band (Venezuela). The launch of VENESAT-1 represents the starting point for Venezuela as an active player in the field of space science and technology. In order to fulfill mission requirements and to guarantee the satellite's health, local professionals must provide continuous monitoring, orbit calculation, maneuver preparation and execution, data preparation and processing, as well as database management at the VENESAT-1 ground segment, which includes both a primary and a backup site. In summary, data processing and real-time data management are part of the daily activities performed by the personnel at the ground segment. Using published and unpublished information, this paper presents how human resource organization can enhance space information acquisition and processing, by analyzing the proposed organizational structure for the VENESAT-1 ground segment. We have found that the proposed units within the organizational structure reflect 3 key issues for mission management: satellite operations, ground operations, and site maintenance. The proposed organization is simple (3 hierarchical levels and 7 units), and communication channels seem efficient in terms of facilitating information acquisition, processing, storage, flow, and exchange.
Furthermore, the proposal includes a manual containing the full description of personnel responsibilities and profiles, which efficiently allocates the management and operation of key software for satellite operation, such as the Real-time Data Transaction Software (RDTS), Data Management Software (DMS), and Carrier Spectrum Monitoring Software (CSM), within the different organizational units. Throughout this process, international cooperation has played a key role in the consolidation of Venezuela's space capabilities, especially through the continuous and arduous exchange of information, documentation, and expertise between Chinese and Venezuelan personnel at the ground stations. Based on the principles of technology transfer and human training, the Bolivarian Republic of Venezuela has shown an increasing interest in developing local space capabilities for peaceful purposes since 1999. According to the analysis we have performed, the proposed organizational structure of the VENESAT-1 ground segment will allow the country to face the challenges imposed by the operation of complex technologies. By enhancing human resource organization, this proposal will help to fulfill mission requirements and to facilitate the safe access, processing, and storage of satellite data across the organization, during both nominal and contingency situations.
Airborne ballistic camera tracking systems
NASA Technical Reports Server (NTRS)
Redish, W. L.
1976-01-01
An operational airborne ballistic camera tracking system was tested for operational and data reduction feasibility. The acquisition and data processing requirements of the system are discussed. Suggestions for future improvements are also noted. A description of the data reduction mathematics is outlined. Results from a successful reentry test mission are tabulated. The test mission indicated that airborne ballistic camera tracking systems are feasible.
NASA Technical Reports Server (NTRS)
Carsey, Frank D.
1996-01-01
The Alaska SAR Facility (ASF) has been receiving, processing, archiving, and distributing data for Earth scientists and operations since it began receiving data in 1991. Four radar satellites are now being handled. Recent developments have served to increase the level of services of ASF to the Earth science community considerably. These developments are discussed.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1988-01-01
The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data-driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such algorithms are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor having local program memory, and it communicates with a common global data memory. A new graph-theoretic model called ATAMM, which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture, is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.
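The data-driven firing rule at the heart of a data flow model like the one described can be sketched as follows. The graph and its operations are invented for illustration and are not the ATAMM formalism itself:

```python
# A decision-free graph of primitive operations: each node fires as soon as
# all of its input tokens are available (the data-driven firing rule).
graph = {                      # node -> (operation, list of input nodes)
    "a":   (lambda: 2, []),
    "b":   (lambda: 5, []),
    "sum": (lambda a, b: a + b, ["a", "b"]),
    "sq":  (lambda s: s * s, ["sum"]),
}

def execute(graph):
    tokens = {}                       # the "global data memory"
    pending = dict(graph)
    while pending:
        for node, (op, deps) in list(pending.items()):
            if all(d in tokens for d in deps):   # all inputs present: fire
                tokens[node] = op(*(tokens[d] for d in deps))
                del pending[node]
    return tokens

print(execute(graph)["sq"])  # (2 + 5) ** 2 = 49
```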
Exploiting Analytics Techniques in CMS Computing Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonacorsi, D.; Kuznetsov, V.; Magini, N.
The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts into all this information have rarely been undertaken, but are of crucial importance for a better understanding of how CMS achieved successful operations, and for reaching an adequate and adaptive modelling of the CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviours. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications taking advantage of the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modelling of the system.
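The replica-counting analysis mentioned above follows the classic MapReduce shape. A toy, single-machine sketch over fabricated transfer-log records (field names are invented, not the CMS schema):

```python
from collections import defaultdict

# Fabricated transfer-log records: one row per dataset replica written at a site.
records = [
    {"dataset": "/ZMM/Run2016", "site": "T2_US_MIT"},
    {"dataset": "/ZMM/Run2016", "site": "T2_DE_DESY"},
    {"dataset": "/TTJets/Run2016", "site": "T1_US_FNAL"},
]

def map_phase(recs):
    for r in recs:
        yield r["dataset"], 1          # emit (key, 1) per replica record

def reduce_phase(pairs):
    counts = defaultdict(int)
    for key, v in pairs:               # sum the 1s per dataset key
        counts[key] += v
    return dict(counts)

print(reduce_phase(map_phase(records)))
```

On a real Hadoop cluster the map and reduce phases would run in parallel across many nodes; the logic per record is the same.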
NASA Astrophysics Data System (ADS)
Heckmann, G.; Route, G.
2009-12-01
The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system, the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS satellites carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence and Information Systems. The IDPS processes NPOESS satellite data to provide environmental data products (also known as Environmental Data Records, or EDRs) to NOAA and DoD processing centers operated by the United States government. The IDPS will process EDRs beginning with the NPOESS Preparatory Project (NPP) and continuing through the lifetime of the NPOESS system. IDPS also provides the software and requirements for the Field Terminal Segment (FTS). NPOESS supports deployed field terminals by providing mission data in the Low Rate and High Rate downlinks (LRD/HRD), mission support data needed to generate EDRs, and decryption keys needed to decrypt mission data during Selective Data Encryption (SDE). Mission support data consist of globally relevant data, geographically constrained data, and two-line element sets. NPOESS provides these mission support data via the Internet-accessible Mission Support Data Server and the HRD/LRD downlinks. This presentation will illustrate and describe the NPOESS capabilities in support of field terminal users.
This discussion will include the mission support data available to field terminal users; the content of the direct broadcast HRD and LRD downlinks, identifying differences between them, including the variability of the LRD downlink; and the NPOESS management and distribution of decryption keys to approved field terminals using the Public Key Infrastructure (PKI) AES standard with 256-bit encryption and elliptic curve cryptography.
NASA Extends Chandra Science and Operations Support Contract
NASA Astrophysics Data System (ADS)
2010-01-01
NASA has extended a contract with the Smithsonian Astrophysical Observatory in Cambridge, Mass., to provide science and operational support for the Chandra X-ray Observatory, a powerful tool used to better understand the structure and evolution of the universe. The contract extension with the Smithsonian Astrophysical Observatory provides continued science and operations support to Chandra. This approximately $172 million modification brings the total value of the contract to approximately $545 million for the base effort. The base effort period of performance will continue through Sept. 30, 2013, except for the work associated with the administration of scientific research grants, which will extend through Feb. 28, 2016. The contract type is cost reimbursement with no fee. In addition to the base effort, the contract includes two options for three years each to extend the period of performance for an additional six years. Option 1 is priced at approximately $177 million and Option 2 at approximately $191 million, for a total possible contract value of about $913 million. The contract covers mission operations and data analysis, which includes observatory operations, science data processing, and astronomer support. The operations tasks include monitoring the health and status of the observatory and developing and uplinking the observation sequences during Chandra's communication coverage periods. The science data processing tasks include the competitive selection, planning, and coordination of science observations and the processing and delivery of the resulting scientific data. NASA's Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program for the agency's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory controls Chandra's science and flight operations. For more information about the Chandra X-ray Observatory visit: http://chandra.nasa.gov
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, Humberto E.; Simpson, Michael F.; Lin, Wen-Chiao
In this paper, we apply an advanced safeguards approach and associated methods for process monitoring to a hypothetical nuclear material processing system. The assessment regarding the state of the processing facility is conducted at a system-centric level formulated in a hybrid framework. This utilizes an architecture for integrating both time- and event-driven data and analysis for decision making. While the time-driven layers of the proposed architecture encompass more traditional process monitoring methods based on time-series data and analysis, the event-driven layers encompass operation monitoring methods based on discrete event data and analysis. By integrating process- and operation-related information and methodologies within a unified framework, the task of anomaly detection is greatly improved. This is because decision making can benefit not only from known time-series relationships among measured signals but also from known event-sequence relationships among generated events. This available knowledge at both the time-series and discrete-event layers can then be effectively used to synthesize observation solutions that optimally balance sensor and data processing requirements. The application of the proposed approach is then implemented on an illustrative monitored system based on pyroprocessing, and results are discussed.
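The hybrid idea can be illustrated by combining a time-driven check (a signal staying within expected bounds) with an event-driven check (operations occurring in a permitted order). The thresholds and event grammar below are invented for the sketch, not taken from the paper:

```python
# Expected order of process operations; an anomaly is flagged if either the
# time-series layer or the discrete-event layer reports a violation.
EXPECTED_SEQUENCE = ["load", "heat", "separate", "unload"]

def time_driven_ok(temps, low=400.0, high=550.0):
    # Time-driven layer: all temperature samples within (invented) bounds.
    return all(low <= t <= high for t in temps)

def event_driven_ok(events):
    # Event-driven layer: observed events must respect the expected order.
    it = iter(EXPECTED_SEQUENCE)
    return all(e in it for e in events)  # consuming the iterator enforces order

def anomaly(temps, events):
    return not (time_driven_ok(temps) and event_driven_ok(events))

print(anomaly([480.0, 495.0], ["load", "heat"]))   # False: both layers agree
print(anomaly([480.0, 495.0], ["heat", "load"]))   # True: out-of-order events
```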
A multiprocessing architecture for real-time monitoring
NASA Technical Reports Server (NTRS)
Schmidt, James L.; Kao, Simon M.; Read, Jackson Y.; Weitzenkamp, Scott M.; Laffey, Thomas J.
1988-01-01
A multitasking architecture for performing real-time monitoring and analysis using knowledge-based problem solving techniques is described. To handle asynchronous inputs and perform in real time, the system consists of three or more distributed processes which run concurrently and communicate via a message passing scheme. The Data Management Process acquires, compresses, and routes the incoming sensor data to other processes. The Inference Process consists of a high performance inference engine that performs a real-time analysis on the state and health of the physical system. The I/O Process receives sensor data from the Data Management Process and status messages and recommendations from the Inference Process, updates its graphical displays in real time, and acts as the interface to the console operator. The distributed architecture has been interfaced to an actual spacecraft (NASA's Hubble Space Telescope) and is able to process the incoming telemetry in real time (i.e., several hundred data changes per second). The system is being used in two locations for different purposes: (1) in Sunnyvale, California, at the Space Telescope Test Control Center, it is used in the preflight testing of the vehicle; and (2) in Greenbelt, Maryland, at NASA/Goddard, it is being used on an experimental basis in flight operations for health and safety monitoring.
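The three-process pattern described above can be sketched with threads and message queues standing in for the distributed processes. The "compression" step (forwarding only changed values), the sensor names, and the limit table are illustrative assumptions, not details of the actual system.

```python
import queue
import threading

def data_management(raw, to_inference, to_io):
    """Acquire sensor samples, drop unchanged values, route the rest."""
    last = {}
    for name, value in raw:
        if last.get(name) != value:          # crude data compression
            last[name] = value
            to_inference.put((name, value))
            to_io.put(("data", name, value))
    to_inference.put(None)                   # end-of-stream marker

def inference(to_inference, to_io, limits):
    """Flag out-of-limit readings as status messages for the operator."""
    while True:
        msg = to_inference.get()
        if msg is None:
            to_io.put(None)
            return
        name, value = msg
        lo, hi = limits[name]
        if not lo <= value <= hi:
            to_io.put(("alarm", name, value))

def run(raw, limits):
    """Wire the processes together; the I/O side is modeled as a list."""
    q_inf, q_io = queue.Queue(), queue.Queue()
    t1 = threading.Thread(target=data_management, args=(raw, q_inf, q_io))
    t2 = threading.Thread(target=inference, args=(q_inf, q_io, limits))
    t1.start(); t2.start(); t1.join(); t2.join()
    out = []
    while True:
        msg = q_io.get()
        if msg is None:
            return out
        out.append(msg)
```

In the real system these are separate OS processes fed by spacecraft telemetry; the queue-based decoupling is what lets each stage run at its own rate.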
IT Operational Risk Measurement Model Based on Internal Loss Data of Banks
NASA Astrophysics Data System (ADS)
Hao, Xiaoling
Business operation of banks relies increasingly on information technology (IT), and the most important role of IT is to guarantee the operational continuity of business processes. IT risk management efforts therefore need to be seen from the perspective of operational continuity. Traditional IT risk studies have focused on IT asset-based risk analysis and qualitative, risk-matrix-based evaluation. In practice, the IT risk management of the banking industry is still limited to the IT department and is not integrated into business risk management, which causes the two departments to work in isolation. This paper presents an improved methodology for dealing with IT operational risk. It adopts a quantitative measurement method based on internal business loss data about IT events and uses Monte Carlo simulation to predict potential losses. We establish the correlation between IT resources and business processes so that IT and business risk management can work synergistically.
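The loss-measurement idea above can be sketched as a standard frequency/severity Monte Carlo: draw a number of IT incidents per year, draw a loss for each, and read a high quantile off the simulated annual totals. The Poisson-frequency/lognormal-severity choice and every parameter value here are assumptions for illustration, not the paper's calibration.

```python
import math
import random

def _poisson(rng, lam):
    """Knuth's method: count uniform draws until their product <= exp(-lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_annual_losses(n_years, freq_mean, sev_mu, sev_sigma, seed=0):
    """Simulate the total loss from IT events per year, over n_years scenarios."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_years):
        n_events = _poisson(rng, freq_mean)      # IT incidents this year
        totals.append(sum(rng.lognormvariate(sev_mu, sev_sigma)
                          for _ in range(n_events)))
    return totals

def value_at_risk(losses, q=0.999):
    """Empirical q-quantile of the simulated annual loss distribution."""
    return sorted(losses)[int(q * len(losses)) - 1]
```

Mapping each simulated loss back to the business processes that the failed IT resource supports is the correlation step the paper argues for; this sketch only covers the quantitative core.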
’Huts and Nuts’ or ’Hearts and Minds?’ -- Anthropologists and Operational Art
2008-12-06
...what does operational art provide anthropology? A historical look at the use of anthropologists and their data/analysis is featured in Appendix A... or undermine their efforts. Anthropologists, by introducing and using their fieldwork process of mapping out and data-basing these cultural...
Bringing the CMS distributed computing system into scalable operations
NASA Astrophysics Data System (ADS)
Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.
2010-04-01
Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis, including the sites and the workload and data management tools; validating the distributed production system by performing functionality, reliability and scale tests; helping sites to commission, configure and optimize their networking and storage through scale-testing data transfers and data processing; and improving the efficiency of accessing data across the CMS computing system, from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing, as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems; site commissioning procedures and tools to monitor and improve site availability and reliability; and activities targeted at the commissioning of the distributed production, user analysis and monitoring systems.
New Processing of Spaceborne Imaging Radar-C (SIR-C) Data
NASA Astrophysics Data System (ADS)
Meyer, F. J.; Gracheva, V.; Arko, S. A.; Labelle-Hamer, A. L.
2017-12-01
The Spaceborne Imaging Radar-C (SIR-C) was a radar system which successfully operated on two separate shuttle missions in April and October 1994. During these two missions, a total of 143 hours of radar data were recorded. SIR-C was the first multifrequency and polarimetric spaceborne radar system, operating at dual frequencies (L- and C-band) and with quad polarization. SIR-C had a variety of different operating modes, which are innovative even from today's point of view. Depending on the mode, it was possible to acquire data with different polarizations and carrier frequency combinations. Additionally, different swaths and bandwidths could be used during data collection, and it was possible to receive data with two antennas in the along-track direction. The United States Geological Survey (USGS) distributes the synthetic aperture radar (SAR) images as single-look complex (SLC) and multi-look complex (MLC) products. Unfortunately, since June 2005 the SIR-C processor has been inoperable and not repairable. All acquired SLC and MLC images were processed at a coarse resolution of 100 m with the goal of generating quick looks. These images are, however, not well suited for scientific analysis. Only a small percentage of the acquired data has been processed as full-resolution SAR images, and the unprocessed high-resolution data cannot currently be processed at all. At the Alaska Satellite Facility (ASF), a new processor was developed to process binary SIR-C data to full-resolution SAR images. ASF is planning to process the entire recoverable SIR-C archive to full-resolution SLCs, MLCs and high-resolution geocoded image products. ASF will make these products available to the science community through its existing data archiving and distribution system. The final paper will describe the new processor and analyze the challenges of reprocessing the SIR-C data.
Operational Management System for Regulated Water Systems
NASA Astrophysics Data System (ADS)
van Loenen, A.; van Dijk, M.; van Verseveld, W.; Berger, H.
2012-04-01
Most of the Dutch large rivers, canals and lakes are controlled by the Dutch water authorities. The main reasons concern safety, navigation and fresh water supply. Historically, the separate water bodies have been controlled locally. For optimizing the management of these water systems, an integrated approach was required. Presented is a platform which integrates data from all control objects for monitoring and control purposes. The Operational Management System for Regulated Water Systems (IWP) is an implementation of Delft-FEWS which supports operational control of water systems and actively gives advice. One of the main characteristics of IWP is that it collects, transforms and presents different types of data in real time, all of which contribute to operational water management. In addition, hydrodynamic models and intelligent decision support tools are included to support the water managers during their daily control activities. An important advantage of IWP is that it uses the Delft-FEWS framework, so that processes like central data collection, transformation, data processing and presentation are simply configured. At all control locations the same information is readily available. The operational water management itself gains from this information, but it can also contribute to cost efficiency (no unnecessary pumping), better use of available storage, and advice during (water pollution) calamities.
Embedded parallel processing based ground control systems for small satellite telemetry
NASA Technical Reports Server (NTRS)
Forman, Michael L.; Hazra, Tushar K.; Troendly, Gregory M.; Nickum, William G.
1994-01-01
The use of networked terminals which utilize embedded processing techniques results in totally integrated, flexible, high-speed, reliable, and scalable systems suitable for telemetry and data processing applications such as mission operations centers (MOC). The synergy of these terminals, coupled with the capability of each terminal to receive incoming data, allows any defined display to be viewed by any terminal from the start of data acquisition. There is no single point of failure (other than the network input), such as exists in configurations where all input data go through a single front-end processor and then to a serial string of workstations. Missions dedicated to NASA's ozone measurements program utilize the methodologies discussed, resulting in a multimission configuration of low-cost, scalable hardware and software which can be run by one flight operations team with low risk.
Opals: Mission System Operations Architecture for an Optical Communications Demonstration on the ISS
NASA Technical Reports Server (NTRS)
Abrahamson, Matthew J.; Sindiy, Oleg V.; Oaida, Bogdan V.; Fregoso, Santos; Bowles-Martinez, Jessica N.; Kokorowski, Michael; Wilkerson, Marcus W.; Konyha, Alexander L.
2014-01-01
In April of 2014, the Optical PAyload for Lasercomm Science (OPALS) Flight System (FS) launched to the International Space Station (ISS) to demonstrate space-to-ground optical communications. During a planned 90-day baseline mission, the OPALS FS will downlink high quality, short duration videos to the Optical Communications Telescope Laboratory (OCTL) ground station in Wrightwood, California. Interfaces to the ISS payload operations infrastructure have been established to facilitate activity planning, hazardous laser operations, commanding, and telemetry transmission. In addition, internal processes, such as pointing prediction and data processing, satisfy the technical requirements of the mission. The OPALS operations team participates in Operational Readiness Tests (ORTs) with external partners to exercise coordination processes and train for the overall mission. The ORTs have provided valuable insight into operational considerations for the instrument on the ISS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, M.; Anderson, D.P.
1988-01-01
Marionette is a system for distributed parallel programming in an environment of networked heterogeneous computer systems. It is based on a master/slave model. The master process can invoke worker operations (asynchronous remote procedure calls to single slaves) and context operations (updates to the state of all slaves). The master and slaves also interact through shared data structures that can be modified only by the master. The master and slave processes are programmed in a sequential language. The Marionette runtime system manages slave process creation, propagates shared data structures to slaves as needed, queues and dispatches worker and context operations, and manages recovery from slave processor failures. The Marionette system also includes tools for automated compilation of program binaries for multiple architectures and for distributing binaries to remote file systems. A UNIX-based implementation of Marionette is described.
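The master/slave vocabulary above — worker operations as asynchronous calls to a single slave, context operations as broadcast state updates — can be sketched in-process. Real Marionette slaves are remote processes on heterogeneous hosts; all class and method names below are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

class Slave:
    """Stand-in for a remote slave process holding master-pushed state."""
    def __init__(self):
        self.context = {}                 # shared state pushed by the master

    def work(self, task):
        # A worker operation: compute using the propagated shared context.
        return task * self.context.get("scale", 1)

class Master:
    """Dispatches worker operations asynchronously and broadcasts context."""
    def __init__(self, n_slaves):
        self.slaves = [Slave() for _ in range(n_slaves)]
        self.pool = ThreadPoolExecutor(max_workers=n_slaves)
        self.next = 0

    def context_op(self, key, value):
        # Context operation: propagate a shared-state update to all slaves.
        for s in self.slaves:
            s.context[key] = value

    def worker_op(self, task):
        # Worker operation: asynchronous call to a single slave, round-robin.
        s = self.slaves[self.next % len(self.slaves)]
        self.next += 1
        return self.pool.submit(s.work, task)
```

The runtime concerns the paper lists — slave creation, lazy propagation of shared structures, and recovery from slave failures — sit outside this sketch.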
Using perceptual rules in interactive visualization
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Treinish, Lloyd A.
1994-05-01
In visualization, data are represented as variations in grayscale, hue, shape, and texture. They can be mapped to lines, surfaces, and glyphs, and can be represented statically or in animation. In modern visualization systems, the choices for representing data seem unlimited. This is both a blessing and a curse, however, since the visual impression created by the visualization depends critically on which dimensions are selected for representing the data (Bertin, 1967; Tufte, 1983; Cleveland, 1991). In modern visualization systems, the user can interactively select many different mapping and representation operations, and can interactively select processing operations (e.g., applying a color map), realization operations (e.g., generating geometric structures such as contours or streamlines), and rendering operations (e.g., shading or ray-tracing). The user can, for example, map data to a color map, then apply contour lines, then shift the viewing angle, then change the color map again, etc. In many systems, the user can vary the choices for each operation, selecting, for example, particular color maps, contour characteristics, and shading techniques. The hope is that this process will eventually converge on a visual representation which expresses the structure of the data and effectively communicates its message in a way that meets the user's goals. Sometimes, however, it results in visual representations which are confusing, misleading, and garish.
Historical data recording for process computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hale, J.C.; Sellars, H.L.
1981-11-01
Computers have been used to monitor and control chemical and refining processes for more than 15 years. During this time, there has been a steady growth in the variety and sophistication of the functions performed by these process computers. Early systems were limited to maintaining only current operating measurements, available through crude operator's consoles or noisy teletypes. The value of retaining a process history, that is, a collection of measurements over time, became apparent, and early efforts produced shift and daily summary reports. The need for improved process historians which record, retrieve and display process information has grown as process computers assume larger responsibilities in plant operations. This paper describes newly developed process historian functions that have been used on several of Du Pont's in-house process monitoring and control systems. 3 refs.
Code of Federal Regulations, 2011 CFR
2011-01-01
... REGULATIONS FOR WAREHOUSES REGULATIONS FOR THE UNITED STATES WAREHOUSE ACT Electronic Providers § 735.403... electronic data processing audit that meets the minimum requirements as provided in the applicable provider agreement. The electronic data processing audit will be used by DACO to evaluate current computer operations...
Code of Federal Regulations, 2010 CFR
2010-01-01
... REGULATIONS FOR WAREHOUSES REGULATIONS FOR THE UNITED STATES WAREHOUSE ACT Electronic Providers § 735.403... electronic data processing audit that meets the minimum requirements as provided in the applicable provider agreement. The electronic data processing audit will be used by DACO to evaluate current computer operations...
NASA Astrophysics Data System (ADS)
Ariana, I. M.; Bagiada, I. M.
2018-01-01
Development of spreadsheet-based integrated transaction processing systems and financial reporting systems is intended to optimize the capabilities of spreadsheets in accounting data processing. The purposes of this study are: 1) to describe the spreadsheet-based integrated transaction processing systems and financial reporting systems; and 2) to test their technical and operational feasibility. This is a research and development study. The main steps are: 1) needs analysis (needs assessment); 2) developing the spreadsheet-based integrated transaction processing systems and financial reporting systems; and 3) testing their feasibility. Technical feasibility covers the ability of the hardware and operating system to run the accounting application, and its simplicity and ease of use. Operational feasibility covers the ability of users to operate the accounting application, the ability of the application to produce information, and the control features of the application. The instrument used to assess technical and operational feasibility is an expert perception questionnaire. The instrument uses a 4-point Likert scale, from 1 (strongly disagree) to 4 (strongly agree). Data were analyzed using percentage analysis, comparing the actual score for each item with the ideal score for that item. The spreadsheet-based integrated transaction processing systems and financial reporting systems integrate sales, purchases, and cash transaction processing to produce financial reports (statement of profit or loss and other comprehensive income, statement of changes in equity, statement of financial position, and statement of cash flows) and other reports. The systems are feasible from the technical aspect (87.50%) and the operational aspect (84.17%).
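The percentage analysis described above reduces to a one-line computation: the sum of the actual item scores divided by the ideal (maximum) score. The sample scores below are invented.

```python
def feasibility_percentage(scores, scale_max=4):
    """Percentage of the ideal score achieved across all questionnaire items,
    where each item is rated on a Likert scale from 1 to scale_max."""
    ideal = scale_max * len(scores)
    return 100.0 * sum(scores) / ideal
```

For example, four items rated 3, 4, 3, 4 on the 4-point scale yield 14 of an ideal 16 points, i.e. 87.5%.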
Oh, Ji-Hyeon
2018-12-01
With the development of computer-aided design/computer-aided manufacturing (CAD/CAM) technology, it has become possible to reconstruct cranio-maxillofacial defects with more accurate preoperative planning, precise patient-specific implants (PSIs), and shorter operation times. The manufacturing processes include subtractive manufacturing and additive manufacturing and should be selected in consideration of the material type, available technology, post-processing, accuracy, lead time, properties, and surface quality. Materials such as titanium, polyethylene, polyetheretherketone (PEEK), hydroxyapatite (HA), poly-DL-lactic acid (PDLLA), polylactide-co-glycolide acid (PLGA), and calcium phosphate are used. Design methods for the reconstruction of cranio-maxillofacial defects include the use of a preoperative model printed from preoperative data, printing a cutting guide or template after virtual surgery, printing a model after virtual surgery from data reconstructed using a mirror image, and manufacturing PSIs from PSI data obtained directly after reconstruction using a mirror image. By selecting the appropriate design method, manufacturing process, and implant material for each case, it is possible to obtain a more accurate surgical procedure, reduced operation time, prevention of various complications that can occur with the traditional method, and more predictable results.
Software Correlator for Radioastron Mission
NASA Astrophysics Data System (ADS)
Likhachev, Sergey F.; Kostenko, Vladimir I.; Girin, Igor A.; Andrianov, Andrey S.; Rudnitskiy, Alexey G.; Zharov, Vladimir E.
In this paper, we discuss the characteristics and operation of the Astro Space Center (ASC) software FX correlator, an important component of the space-ground interferometer for the Radioastron project. This project performs joint observations of compact radio sources using a 10 m space radio telescope (SRT) together with ground radio telescopes at 92, 18, 6 and 1.3 cm wavelengths. We describe the main features of space-ground VLBI data processing for the Radioastron project using the ASC correlator. The implemented fringe search procedure provides positive results without significant losses in correlated amplitude. The ASC correlator has computational power sufficient for close-to-real-time operation. The correlator has a number of processing modes: “Continuum”, “Spectral Line”, “Pulsars”, “Giant Pulses”, “Coherent”. Special attention is paid to the peculiarities of Radioastron space-ground VLBI data processing. The algorithms for time delay and delay rate calculation are also discussed, which are of fundamental importance for data correlation of space-ground interferometers. During five years of successful Radioastron SRT operation, the ASC correlator has shown high potential for satisfying the steadily growing needs of current and future ground and space VLBI science. Results of ASC software correlator operation are demonstrated.
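The "F" and "X" steps of an FX correlator can be sketched on synthetic data: compensate the a priori delay of one station, FFT both streams frame by frame, and average the cross-power spectra. The signal model, delay value, and frame size below are toy assumptions; a real space-ground VLBI delay model (relativistic, orbit-dependent) is far more elaborate.

```python
import numpy as np

def fx_correlate(x, y, delay_samples, nfft=256):
    """Toy FX correlation: delay-compensate station y, FFT both streams
    (the "F" step), multiply one spectrum by the conjugate of the other
    (the "X" step), and average over frames."""
    y = np.roll(y, -delay_samples)        # compensate the a priori delay
    n_frames = len(x) // nfft
    acc = np.zeros(nfft, dtype=complex)
    for i in range(n_frames):
        X = np.fft.fft(x[i*nfft:(i+1)*nfft])
        Y = np.fft.fft(y[i*nfft:(i+1)*nfft])
        acc += X * np.conj(Y)             # cross-power spectrum
    return acc / n_frames

rng = np.random.default_rng(1)
source = rng.standard_normal(8192)        # common "sky" signal
delay = 5                                 # true station delay, in samples
st1 = source + 0.1 * rng.standard_normal(8192)
st2 = np.roll(source, delay) + 0.1 * rng.standard_normal(8192)

spec = fx_correlate(st1, st2, delay)      # correct delay applied
residual = np.abs(np.fft.ifft(spec))      # lag-domain correlation function
```

With the correct delay compensation, the lag-domain correlation peaks at lag zero; a fringe search sweeps trial delays and delay rates to find that peak.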
The advanced linked extended reconnaissance and targeting technology demonstration project
NASA Astrophysics Data System (ADS)
Cruickshank, James; de Villers, Yves; Maheux, Jean; Edwards, Mark; Gains, David; Rea, Terry; Banbury, Simon; Gauthier, Michelle
2007-06-01
The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing key operational needs of the future Canadian Army's Surveillance and Reconnaissance forces by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing. We discuss concepts for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as beyond line-of-sight systems such as a mini-UAV and unattended ground sensors. The authors address technical issues associated with the use of fully digital IR and day video cameras and discuss video-rate image processing developed to assist the operator to recognize poorly visible targets. Automatic target detection and recognition algorithms processing both IR and visible-band images have been investigated to draw the operator's attention to possible targets. The machine generated information display requirements are presented with the human factors engineering aspects of the user interface in this complex environment, with a view to establishing user trust in the automation. The paper concludes with a summary of achievements to date and steps to project completion.
Fuzzy-based propagation of prior knowledge to improve large-scale image analysis pipelines
Mikut, Ralf
2017-01-01
Many automatically analyzable scientific questions are well-posed and a variety of information about expected outcomes is available a priori. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept that increases the result quality awareness of image analysis operators by estimating and distributing the degree of uncertainty involved in their output based on prior knowledge. This allows the use of simple processing operators that are suitable for analyzing large-scale spatiotemporal (3D+t) microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. These concepts are illustrated on a typical bioimage analysis pipeline comprised of seed point detection, segmentation, multiview fusion and tracking. The functionality of the proposed approach is further validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploit prior knowledge to improve the result quality of image analysis pipelines. The generality of the concept makes it applicable to practically any field with processing strategies that are arranged as linear pipelines. The automated analysis of terabyte-scale microscopy data will especially benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. 
PMID:29095927
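One way to picture the core idea above — encoding prior knowledge as a fuzzy membership function and attaching the resulting degree of uncertainty to each detection — is a trapezoidal membership over an expected value range. The trapezoid parameters and detections below are invented examples, not values from the paper.

```python
def trapezoid(x, a, b, c, d):
    """Fuzzy membership: 0 below a, ramp up on [a, b], 1 on [b, c],
    ramp down on [c, d], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def prior(diameter):
    # Prior knowledge (invented): valid cell diameters lie roughly in
    # [4, 10] microns, with soft boundaries out to [2, 14].
    return trapezoid(diameter, 2.0, 4.0, 10.0, 14.0)

# Each detection keeps a membership degree instead of a hard accept/reject,
# so downstream operators (fusion, tracking) can weigh it accordingly.
detections = [3.0, 6.5, 12.0, 20.0]
scored = [(d, prior(d)) for d in detections]
```

Propagating these degrees through a pipeline, rather than thresholding early, is what lets simple operators remain useful on large 3D+t data.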
The purpose of this SOP is to describe the flow of field data forms through the data processing system and to define who is responsible for the data at any time. It applies to field data forms collected and processed by Arizona NHEXAS. This procedure was followed to ensure cons...
MRNIDX - Marine Data Index: Database Description, Operation, Retrieval, and Display
Paskevich, Valerie F.
1982-01-01
A database referencing the location and content of data stored on magnetic medium was designed to assist in the indexing of time-series and spatially dependent marine geophysical data collected or processed by the U. S. Geological Survey. The database was designed and created for input to the Geologic Retrieval and Synopsis Program (GRASP) to allow selective retrievals of information pertaining to location of data, data format, cruise, geographical bounds and collection dates of data. This information is then used to locate the stored data for administrative purposes or further processing. Database utilization is divided into three distinct operations. The first is the inventorying of the data and the updating of the database, the second is the retrieval of information from the database, and the third is the graphic display of the geographical boundaries to which the retrieved information pertains.
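The three operations the database supports — inventory/update, selective retrieval, and retrieval for display — can be sketched with SQLite standing in for GRASP. The schema, cruise identifiers, and rows below are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE mrnidx (
    cruise TEXT, fmt TEXT, lat_min REAL, lat_max REAL,
    lon_min REAL, lon_max REAL, collected TEXT, tape TEXT)""")

# Operation 1: inventory the data and update the index.
con.executemany("INSERT INTO mrnidx VALUES (?,?,?,?,?,?,?,?)", [
    ("CRUISE-82-1", "seismic", 40.1, 41.9, -70.9, -69.0, "1982-03-10", "T-014"),
    ("CRUISE-82-2", "gravity", 38.0, 39.5, -74.0, -72.2, "1982-05-02", "T-015"),
])

# Operation 2: selective retrieval by data format and geographic bounds,
# locating the storage medium for further processing.
rows = con.execute("""SELECT cruise, tape FROM mrnidx
    WHERE fmt = 'seismic' AND lat_min >= 39.0""").fetchall()
```

Operation 3, graphic display, would plot the retrieved `lat_min`/`lat_max`/`lon_min`/`lon_max` bounds on a map.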
NASA Astrophysics Data System (ADS)
Meyer, F. J.; Webley, P.; Dehn, J.; Arko, S. A.; McAlpin, D. B.
2013-12-01
Volcanic eruptions are among the most significant hazards to human society, capable of triggering natural disasters on regional to global scales. In the last decade, remote sensing techniques have become established in operational forecasting, monitoring, and managing of volcanic hazards. Monitoring organizations, like the Alaska Volcano Observatory (AVO), are nowadays heavily relying on remote sensing data from a variety of optical and thermal sensors to provide time-critical hazard information. Despite the high utilization of these remote sensing data to detect and monitor volcanic eruptions, the presence of clouds and a dependence on solar illumination often limit their impact on decision making processes. Synthetic Aperture Radar (SAR) systems are widely believed to be superior to optical sensors in operational monitoring situations, due to the weather and illumination independence of their observations and the sensitivity of SAR to surface changes and deformation. Despite these benefits, the contributions of SAR to operational volcano monitoring have been limited in the past due to (1) high SAR data costs, (2) traditionally long data processing times, and (3) the low temporal sampling frequencies inherent to most SAR systems. In this study, we present improved data access, data processing, and data integration techniques that mitigate some of the above mentioned limitations and allow, for the first time, a meaningful integration of SAR into operational volcano monitoring systems. We will introduce a new database interface that was developed in cooperation with the Alaska Satellite Facility (ASF) and allows for rapid and seamless data access to all of ASF's SAR data holdings. We will also present processing techniques that improve the temporal frequency with which hazard-related products can be produced. 
These techniques take advantage of modern signal processing technology as well as new radiometric normalization schemes, both enabling the combination of multiple observation geometries in change detection procedures. Additionally, it will be shown how SAR-based hazard information can be integrated with data from optical satellites, thermal sensors, webcams and models to create near-real-time volcano hazard information. We will introduce a prototype monitoring system that integrates SAR-based hazard information into the near-real-time volcano hazard monitoring system of the Alaska Volcano Observatory. This prototype system was applied to historic eruptions of the volcanoes Okmok and Augustine, both located in the North Pacific. We will show that for these historic eruptions, the addition of SAR data led to a significant improvement in activity detection and eruption monitoring, and improved the accuracy and timeliness of eruption alerts.
Shen, Jiacheng; Agblevor, Foster A
2010-03-01
An operable batch model of simultaneous saccharification and fermentation (SSF) for ethanol production from cellulose has been developed. The model comprises four ordinary differential equations that describe the changes of cellobiose, glucose, yeast, and ethanol concentrations with respect to time. These equations were used to simulate the experimental data of the four main components in the SSF process of ethanol production from microcrystalline cellulose (Avicel PH101). The model parameters at 95% confidence intervals were determined by a MATLAB program based on the batch experimental data of the SSF. Both experimental data and model simulations showed that cell growth was the rate-controlling step in the initial period of the series of reactions from cellulose to ethanol, and that, later, the conversion of cellulose to cellobiose controlled the process. The batch model was extended to continuous and fed-batch operating modes. For continuous operation in the SSF, the ethanol productivity increased with increasing dilution rate until a maximum value was attained, and rapidly decreased as the dilution rate approached the washout point. The model also predicted a higher ethanol mass for fed-batch operation than for batch operation.
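The four-ODE batch structure described above can be sketched with a forward-Euler integration. The rate expressions (first-order hydrolysis steps plus Monod growth) and every parameter value below are invented placeholders; the paper's actual kinetics and 95%-confidence fitted constants differ.

```python
def ssf_batch(t_end, dt=0.01):
    """Toy SSF batch simulation: cellulose -> cellobiose -> glucose,
    Monod growth of yeast on glucose, ethanol tied to glucose uptake."""
    C, B, G, X, P = 50.0, 0.0, 2.0, 0.1, 0.0   # g/L: cellulose, cellobiose,
                                               # glucose, yeast, ethanol
    k1, k2, mu_max, Ks, Yx, Yp = 0.08, 0.5, 0.25, 0.5, 0.1, 0.45
    t = 0.0
    while t < t_end:
        r1 = k1 * C                    # cellulose -> cellobiose (enzymatic)
        r2 = k2 * B                    # cellobiose -> glucose (enzymatic)
        mu = mu_max * G / (Ks + G)     # Monod specific growth rate
        dC = -r1
        dB = r1 - r2
        dG = r2 - (mu / Yx) * X        # produced by hydrolysis, consumed by yeast
        dX = mu * X
        dP = Yp * (mu / Yx) * X        # ethanol yield on consumed glucose
        C += dC * dt; B += dB * dt
        G = max(G + dG * dt, 0.0)      # keep concentration non-negative
        X += dX * dt; P += dP * dt
        t += dt
    return C, B, G, X, P
```

A stiff ODE solver and parameter fitting (as in the paper's MATLAB workflow) would replace the Euler loop in any serious use; this only illustrates the model's shape.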
Petri net model for analysis of concurrently processed complex algorithms
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1986-01-01
This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.
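A place/transition Petri net of the kind referenced above can be simulated in a few lines: a marking maps places to token counts, and a transition fires by consuming its input tokens and producing its output tokens. The example net — two operations competing for one processor token — is an invented illustration of data-driven concurrency, not the paper's model.

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        """A transition is enabled when every input place holds enough tokens."""
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        """Consume input tokens, produce output tokens."""
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Two decomposed operations competing for a single processor token.
net = PetriNet({"ready_a": 1, "ready_b": 1, "cpu": 1})
net.add_transition("start_a", {"ready_a": 1, "cpu": 1}, {"run_a": 1})
net.add_transition("end_a", {"run_a": 1}, {"done_a": 1, "cpu": 1})
net.add_transition("start_b", {"ready_b": 1, "cpu": 1}, {"run_b": 1})
net.add_transition("end_b", {"run_b": 1}, {"done_b": 1, "cpu": 1})
```

Reachability analysis over such markings is what makes the formalism useful both for describing an algorithm's data/control flow and for specifying the data-driven architecture itself.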
Data Management Facility Operations Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keck, Nicole N
2014-06-30
The Data Management Facility (DMF) is the data center that houses several critical Atmospheric Radiation Measurement (ARM) Climate Research Facility services, including first-level data processing for the ARM Mobile Facilities (AMFs), Eastern North Atlantic (ENA), North Slope of Alaska (NSA), Southern Great Plains (SGP), and Tropical Western Pacific (TWP) sites, as well as Value-Added Product (VAP) processing, development systems, and other network services.
The purpose of this SOP is to define the procedure to provide a standard method for correcting electronic data errors. The procedure defines (1) when electronic data may be corrected and by whom, (2) the process of correcting the data, and (3) the process of documenting the corr...
2014-09-25
CAPE CANAVERAL, Fla. – Coupled Florida East Coast Railway, or FEC, locomotives No. 433 and No. 428 make the first run past the Orbiter Processing Facility and Thermal Protection System Facility in Launch Complex 39 at NASA’s Kennedy Space Center in Florida during the Rail Vibration Test for the Canaveral Port Authority. Seismic monitors are collecting data as the train passes by. The purpose of the test is to collect amplitude, frequency and vibration test data utilizing two Florida East Coast locomotives operating on KSC tracks to ensure that future railroad operations will not affect launch vehicle processing at the center. Buildings instrumented for the test include the Rotation Processing Surge Facility, Thermal Protection Systems Facility, Vehicle Assembly Building, Orbiter Processing Facility and Booster Fabrication Facility. Photo credit: NASA/Daniel Casper
CADC and CANFAR: Extending the role of the data centre
NASA Astrophysics Data System (ADS)
Gaudet, Severin
2015-12-01
Over the past six years, the CADC has moved beyond the astronomy archive data centre to a multi-service system for the community. This evolution is based on two major initiatives. The first is the adoption of International Virtual Observatory Alliance (IVOA) standards in both the system and data architecture of the CADC, including a common characterization data model. The second is the Canadian Advanced Network for Astronomical Research (CANFAR), a digital infrastructure combining the Canadian national research network (CANARIE), cloud processing and storage resources (Compute Canada) and a data centre (Canadian Astronomy Data Centre) into a unified ecosystem for storage and processing for the astronomy community. This talk will describe the architecture and integration of IVOA and CANFAR services into CADC operations, the operational experiences, the lessons learned and future directions.
The Power Plant Operating Data Based on Real-time Digital Filtration Technology
NASA Astrophysics Data System (ADS)
Zhao, Ning; Chen, Ya-mi; Wang, Hui-jie
2018-03-01
Real-time monitoring of thermal power plant data is the basis for accurately analyzing thermal economy and accurately reconstructing the operating state. Because noise interference is inevitable, the real-time monitoring data must be filtered to obtain accurate information about the operation of the plant's units and equipment. Unlike traditional filtering algorithms, a real-time algorithm cannot use future data to correct the current data, which imposes significant constraints. First-order lag filtering and weighted recursive average filtering can both be used for real-time filtering. This paper analyzes the characteristics of the two filtering methods and applies them to real-time processing of simulated data and of thermal power plant operating data. The analysis revealed that the weighted recursive average filtering method achieved very good results on both the simulated and the real-time plant data.
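The two real-time filters the paper compares can be sketched as follows. Both are causal, using only past samples; `alpha` and the window weights are illustrative defaults, not values from the paper:

```python
def first_order_lag(samples, alpha=0.3):
    """First-order lag filter: y[n] = alpha*x[n] + (1-alpha)*y[n-1], alpha in (0, 1]."""
    out = []
    y = samples[0]  # initialize the filter state at the first sample
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

def weighted_recursive_average(samples, weights=(1, 2, 3, 4)):
    """Weighted average over a sliding window; newer samples get larger weights."""
    out = []
    window = []
    for x in samples:
        window.append(x)
        if len(window) > len(weights):
            window.pop(0)
        w = weights[-len(window):]  # align weights so the newest sample gets the largest
        out.append(sum(wi * xi for wi, xi in zip(w, window)) / sum(w))
    return out
```

Both filters pass a constant signal through unchanged, which is a quick sanity check on any causal smoother.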
NASA Technical Reports Server (NTRS)
Paulson, R. W.
1974-01-01
The Earth Resources Technology Satellite Data Collection System has been shown to be, from the user's vantage point, a reliable and simple system for collecting data from U.S. Geological Survey operational field instrumentation. It is technically feasible to expand the ERTS system into an operational polar-orbiting data collection system to gather data from the Geological Survey's Hydrologic Data Network. This could permit more efficient internal management of the Network, and could enable the Geological Survey to make data available to cooperating agencies in near-real time. The Geological Survey is conducting an analysis of the costs and benefits of satellite data-relay systems.
Research on Holographic Evaluation of Service Quality in Power Data Network
NASA Astrophysics Data System (ADS)
Wei, Chen; Jing, Tao; Ji, Yutong
2018-01-01
With the rapid development of power data networks and the continued growth of power data application service systems, more and more service systems are being put into operation. This raises higher requirements for network quality and quality of service in actual network operation and maintenance. This paper describes the current status of the electric power data network and its data services. A holographic assessment model is presented to achieve a comprehensive, intelligent assessment of the power data network and its quality of service during operation and maintenance. This evaluation method avoids the problems caused by traditional approaches, which assess network performance quality along a single dimension, and can improve the efficiency of network operation and maintenance while guaranteeing the quality of real-time services in the power data network.
Divide and Recombine for Large Complex Data
2017-12-01
Empirical Methods in Natural Language Processing, October 2014. ...low-latency data processing systems. Declarative Languages for Interactive Visualization: The Reactive Vega Stack. Another thread of XDATA research...for array processing operations embedded in the R programming language. Vector virtual machines work well for long vectors. One of the most
NASA Technical Reports Server (NTRS)
1974-01-01
The specifications and functions of the Central Data Processing (CDPF) Facility which supports the Earth Observatory Satellite (EOS) are discussed. The CDPF will receive the EOS sensor data and spacecraft data through the Spaceflight Tracking and Data Network (STDN) and the Operations Control Center (OCC). The CDPF will process the data and produce high density digital tapes, computer compatible tapes, film and paper print images, and other data products. The specific aspects of data inputs and data processing are identified. A block diagram of the CDPF to show the data flow and interfaces of the subsystems is provided.
Ku-band signal design study. [space shuttle orbiter data processing network
NASA Technical Reports Server (NTRS)
Rubin, I.
1978-01-01
Analytical tools, methods, and techniques for assessing the design and performance of the space shuttle orbiter data processing system (DPS) are provided. The computer data processing network is evaluated in the key areas of queueing behavior, synchronization, and network reliability. The structure of the data processing network is described, as are the system operation principles and the network configuration. The characteristics of the computer systems are indicated. System reliability measures are defined and studied. System and network invulnerability measures are computed. Communication path and network failure analysis techniques are included.
NASA Astrophysics Data System (ADS)
Cerchiari, G.; Croccolo, F.; Cardinaux, F.; Scheffold, F.
2012-10-01
We present an implementation of the analysis of dynamic near field scattering (NFS) data using a graphics processing unit. We introduce an optimized data management scheme thereby limiting the number of operations required. Overall, we reduce the processing time from hours to minutes, for typical experimental conditions. Previously the limiting step in such experiments, the processing time is now comparable to the data acquisition time. Our approach is applicable to various dynamic NFS methods, including shadowgraph, Schlieren and differential dynamic microscopy.
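As a rough illustration of the kind of computation being accelerated, here is a minimal CPU sketch of a differential-dynamic-microscopy structure function: the average power spectrum of frame differences at a fixed time lag. The function name and array layout are assumptions; the paper's GPU implementation and optimized data management differ:

```python
import numpy as np

def ddm_structure_function(frames, lag):
    """Average 2D power spectrum of frame differences at a fixed lag.

    frames: array of shape (n_frames, height, width)
    lag:    time lag in frames (1 <= lag < n_frames)
    """
    diffs = frames[lag:] - frames[:-lag]          # all frame pairs at this lag
    spectra = np.abs(np.fft.fft2(diffs)) ** 2     # per-pair power spectra
    return spectra.mean(axis=0)                   # ensemble average over pairs
```

A static image sequence yields an identically zero structure function, since every frame difference vanishes.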
DSN command system Mark III-78. [data processing
NASA Technical Reports Server (NTRS)
Stinnett, W. G.
1978-01-01
The Deep Space Network command Mark III-78 data processing system includes a capability for a store-and-forward handling method. The functions of (1) storing the command files at a Deep Space station; (2) attaching the files to a queue; and (3) radiating the commands to the spacecraft are straightforward. However, the total data processing capability is a result of assuming worst case, failure-recovery, or nonnominal operating conditions. Optional data processing functions include: file erase, clearing the queue, suspend radiation, command abort, resume command radiation, and close window time override.
VASP-4096: a very high performance programmable device for digital media processing applications
NASA Astrophysics Data System (ADS)
Krikelis, Argy
2001-03-01
Over the past few years, technology drivers for microprocessors have changed significantly. Media data delivery and processing, such as telecommunications, networking, video processing, speech recognition, and 3D graphics, is increasing in importance and will soon dominate the processing cycles consumed in computer-based systems. This paper presents the architecture of the VASP-4096 processor. VASP-4096 provides high media performance with low energy consumption by integrating associative SIMD parallel processing with embedded microprocessor technology. The major innovation in VASP-4096 is the integration of thousands of processing units on a single chip, capable of supporting software-programmable high-performance mathematical functions as well as abstract data processing. In addition to the 4096 processing units, VASP-4096 integrates on a single chip a RISC controller implementing the SPARC architecture, 128 Kbytes of data memory, and I/O interfaces. The SIMD processing in VASP-4096 implements the ASProCore architecture, a proprietary implementation of SIMD processing, and operates at 266 MHz with program instructions issued by the RISC controller. The device also integrates a 64-bit synchronous main memory interface operating at 133 MHz (double data rate) and a 64-bit 66 MHz PCI interface. Compared with other processor architectures that support media processing, VASP-4096 offers true performance scalability, support for deterministic and non-deterministic data processing on a single device, and software programmability that can be reused in future chip generations.
Astrophysics science operations - Near-term plans and vision
NASA Technical Reports Server (NTRS)
Riegler, Guenter R.
1991-01-01
Astrophysics science operations planned by the Science Operations branch of the NASA Astrophysics Division for the 1990s, for the purpose of gathering spaceborne astronomical data, are described. The paper describes the near-future plans of Science Operations in the areas of proposal preparation; the planning and execution of spaceborne observations; the collection, processing, and analysis of data; and the dissemination of results. Also presented are concepts planned for introduction at the beginning of the 21st century, including open communications, transparent instrument and observatory operations, a spiral requirements development method, and an automated research assistant.
Cyber Situational Awareness through Operational Streaming Analysis
2011-04-07
Our system makes use of two specific data sources from network traffic: raw packet data and NetFlow connection summary records (described below)...implemented an operational prototype system using the following two data feeds. a) NetFlow Data: Our system processes the NetFlow records of all...Internet gateway traffic for a large enterprise network. It uses the standard Cisco NetFlow version 5 protocol, which defines a flow as a
Volcanic Ash Data Assimilation System for Atmospheric Transport Model
NASA Astrophysics Data System (ADS)
Ishii, K.; Shimbori, T.; Sato, E.; Tokumoto, T.; Hayashi, Y.; Hashimoto, A.
2017-12-01
The Japan Meteorological Agency (JMA) has two operations for volcanic ash forecasts: Volcanic Ash Fall Forecast (VAFF) and Volcanic Ash Advisory (VAA). In these operations, the forecasts are calculated by atmospheric transport models that include the advection process, the turbulent diffusion process, the gravitational fall process, and the (wet/dry) deposition process. The initial distribution of volcanic ash in the models is the most important but most uncertain factor. In operations, the model of Suzuki (1983), with its many empirical assumptions, is adopted for the initial distribution. This adversely affects the reconstruction of actual eruption plumes. We are developing a volcanic ash data assimilation system using weather radars and meteorological satellite observations, in order to improve the initial distribution for the atmospheric transport models. Our data assimilation system is based on the three-dimensional variational data assimilation method (3D-Var). The analysis variables are ash concentration and size distribution parameters, which are mutually independent. The radar observations are expected to provide three-dimensional parameters such as ash concentration and the parameters of the ash particle size distribution. The satellite observations, in turn, are anticipated to provide two-dimensional parameters of ash clouds such as mass loading, top height, and particle effective radius. In this study, we estimate the thickness of ash clouds using the vertical wind shear from JMA numerical weather prediction and apply it in the volcanic ash data assimilation system.
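For a linear observation operator, the 3D-Var analysis that such a system minimizes, J(x) = (x-xb)^T B^-1 (x-xb) + (y-Hx)^T R^-1 (y-Hx), has a closed-form solution. The sketch below uses standard notation (background xb, background and observation error covariances B and R, observation operator H) and is purely illustrative, not the JMA implementation:

```python
import numpy as np

def three_dvar(xb, B, H, R, y):
    """Closed-form 3D-Var analysis for a linear observation operator H:
    xa = xb + B H^T (H B H^T + R)^-1 (y - H xb)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # Kalman-style gain matrix
    return xb + K @ (y - H @ xb)                  # background plus weighted innovation
```

With equal background and observation error covariances and direct observations (H = I), the analysis lands halfway between background and observation, which is a standard sanity check.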
NASA Astrophysics Data System (ADS)
Green, J. C.; Rodriguez, J. V.; Denig, W. F.; Redmon, R. J.; Blake, J. B.; Mazur, J. E.; Fennell, J. F.; O'Brien, T. P.; Guild, T. B.; Claudepierre, S. G.; Singer, H. J.; Onsager, T. G.; Wilkinson, D. C.
2013-12-01
NOAA space weather sensors have monitored the near Earth space radiation environment for more than three decades providing one of the only long-term records of these energetic particles that can disable satellites and pose a threat to astronauts. These data have demonstrated their value for operations for decades, but they are also invaluable for scientific discovery. Here we describe the development of new NOAA tools for assessing radiation impacts to satellites and astronauts working in space. In particular, we discuss the new system implemented for processing and delivering near real time particle radiation data from the POES/MetOp satellites. We also describe the development of new radiation belt indices from the POES/MetOp data that capture significant global changes in the environment needed for operational decision making. Lastly, we investigate the physical processes responsible for dramatic changes of the inner proton belt region and the potential consequences these new belts may have for satellite operations.
AIRSAR Web-Based Data Processing
NASA Technical Reports Server (NTRS)
Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne
2007-01-01
The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. Also, it provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and anomalous data compensation. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system to automatically generate co-registered multi-frequency images from both polarimetric and interferometric data collection modes in 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensor community. Features include Survey Automation Processing in which the software can automatically generate a quick-look image from an entire 90-GB SAR raw data 32-MB/s tape overnight without operator intervention. Also, the software allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates the products according to each data processing request stored in the database via a Queue management system. Users are able to have automatic generation of coregistered multi-frequency images as the software generates polarimetric and/or interferometric SAR data processing in ground and/or slant projection according to user processing requests for one of the 12 radar modes.
ERBE and CERES broadband scanning radiometers
NASA Technical Reports Server (NTRS)
Weaver, William L.; Cooper, John E.
1990-01-01
Broadband scanning radiometers have been used extensively on earth-orbiting satellites to measure the Earth's outgoing radiation. The resulting estimates of longwave and shortwave fluxes have played an important role in helping to understand the Earth's radiant energy balance or budget. The Clouds and the Earth's Radiant Energy System (CERES) experiment is expected to include instruments with three broadband scanning radiometers. The design of the CERES instrument will draw heavily from the flight-proven Earth Radiation Budget Experiment (ERBE) scanner instrument technology and will benefit from the several years of ERBE experience in mission operations and data processing. The discussion starts with a description of the scientific objectives of ERBE and CERES. The design and operational characteristics of the ERBE and CERES instruments are compared, as are the two ground-based data processing systems. Finally, aspects of the CERES data processing that might be performed in near real time aboard a spacecraft platform are discussed, and the types of algorithms and input data requirements for the onboard processing system are identified.
David Florida Laboratory Thermal Vacuum Data Processing System
NASA Technical Reports Server (NTRS)
Choueiry, Elie
1994-01-01
During 1991, the Space Simulation Facility conducted a survey to assess the requirements and analyze the merits for purchasing a new thermal vacuum data processing system for its facilities. A new, integrated, cost effective PC-based system was purchased which uses commercial off-the-shelf software for operation and control. This system can be easily reconfigured and allows its users to access a local area network. In addition, it provides superior performance compared to that of the former system which used an outdated mini-computer and peripheral hardware. This paper provides essential background on the old data processing system's features, capabilities, and the performance criteria that drove the genesis of its successor. This paper concludes with a detailed discussion of the thermal vacuum data processing system's components, features, and its important role in supporting our space-simulation environment and our capabilities for spacecraft testing. The new system was tested during the ANIK E spacecraft test, and was fully operational in November 1991.
NASA Technical Reports Server (NTRS)
Blackwell, R. J.
1982-01-01
Remote sensing data analysis for water quality monitoring is evaluated. Data analysis and image processing techniques are applied to LANDSAT remote sensing data to produce an effective operational tool for lake water quality surveying and monitoring. Digital image processing and analysis techniques were designed, developed, tested, and applied to LANDSAT multispectral scanner (MSS) data and conventional surface-acquired data. Utilization of these techniques facilitates the surveying and monitoring of large numbers of lakes in an operational manner. Supervised multispectral classification, when used in conjunction with surface-acquired water quality indicators, is used to characterize water body trophic status. Unsupervised multispectral classification, when interpreted by lake scientists familiar with a specific water body, yields classifications of equal validity with supervised methods and in a more cost-effective manner. Image data base technology is used to great advantage in characterizing other contributing effects to water quality, including drainage basin configuration, terrain slope, soil, precipitation, and land cover characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Richard L; Poole, Stephen W; Shamis, Pavel
2010-01-01
This paper introduces the newly developed InfiniBand (IB) Management Queue capability, used by the Host Channel Adapter (HCA) to manage network task data flow dependencies and progress the communications associated with such flows. These tasks include sends, receives, and the newly supported wait task, and are scheduled by the HCA based on a data dependency description provided by the user. This functionality is supported by the ConnectX-2 HCA and provides the means for delegating collective communication management and progress to the HCA, also known as collective communication offload. This provides a means for overlapping collective communications managed by the HCA and computation on the Central Processing Unit (CPU), thus making it possible to reduce the impact of system noise on parallel applications using collective operations. This paper further describes how this new capability can be used to implement scalable Message Passing Interface (MPI) collective operations, describing the high-level details of how this new capability is used to implement the MPI Barrier collective operation, focusing on the latency-sensitive performance aspects of this new capability. This paper concludes with small-scale benchmark experiments comparing implementations of the barrier collective operation using the new network offload capabilities with established point-to-point based implementations of these same algorithms, which manage the data flow using the central processing unit. These early results demonstrate the promise this new capability provides to improve the scalability of high performance applications using collective communications. The latency of the HCA-based implementation of the barrier is similar to that of the best performing point-to-point based implementation managed by the central processing unit, starting to outperform these as the number of processes involved in the collective operation increases.
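Point-to-point barrier algorithms of the kind the HCA offload is benchmarked against commonly follow a recursive-doubling message pattern: in round k, each rank exchanges a zero-byte message with the rank obtained by flipping bit k. The helper below merely computes that communication schedule (for a power-of-two process count) and is an illustrative sketch, not the paper's MPI code:

```python
import math

def barrier_schedule(rank, nprocs):
    """Partner ranks, round by round, for a recursive-doubling barrier.

    Assumes nprocs is a power of two; round k pairs rank with rank XOR 2^k,
    so the barrier completes in log2(nprocs) rounds.
    """
    rounds = int(math.log2(nprocs))
    return [rank ^ (1 << k) for k in range(rounds)]
```

The logarithmic round count is why the CPU-managed implementations remain competitive at small scale, while per-round software overhead is what the HCA offload amortizes as process counts grow.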
Lights Out Operations of a Space, Ground, Sensorweb
NASA Technical Reports Server (NTRS)
Chien, Steve; Tran, Daniel; Johnston, Mark; Davies, Ashley Gerard; Castano, Rebecca; Rabideau, Gregg; Cichy, Benjamin; Doubleday, Joshua; Pieri, David; Scharenbroich, Lucas;
2008-01-01
We have been operating an autonomous, integrated sensorweb linking numerous space and ground sensors in 24/7 operations since 2004. This sensorweb includes elements of space data acquisition (MODIS, GOES, and EO-1), space asset retasking (EO-1), integration of data acquired from ground sensor networks with on-demand ground processing of data into science products. These assets are being integrated using web service standards from the Open Geospatial Consortium. Future plans include extension to fixed and mobile surface and subsurface sea assets as part of the NSF's ORION Program.
Process Mining Methodology for Health Process Tracking Using Real-Time Indoor Location Systems.
Fernandez-Llatas, Carlos; Lizondo, Aroa; Monton, Eduardo; Benedi, Jose-Miguel; Traver, Vicente
2015-11-30
The definition of efficient and accurate health processes in hospitals is crucial for ensuring an adequate quality of service. Knowing and improving the behavior of the surgical processes in a hospital can increase the number of patients that can be operated on with the same resources. However, the measurement of these processes is usually made in an obtrusive way, forcing nurses to record information and time data, interfering with the process itself and generating inaccurate data due to human errors during the stressful workday of health staff in the operating theater. Indoor location systems can capture time information about the process in an unobtrusive way, freeing nurses to engage in purely welfare work. However, it is necessary to present these data in an understandable way for health professionals, who cannot deal with large amounts of historical localization log data. Process mining techniques can address this problem, offering an easily understandable view of the process. In this paper, we present a tool and a process mining-based methodology that, using indoor location systems, enables health staff not only to represent the process, but to know precise information about the deployment of the process in an unobtrusive and transparent way. We have successfully tested this tool in a real surgical area with 3613 patients during February, March and April of 2015.
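The directly-follows relation at the heart of most process-discovery algorithms can be computed from location-derived event traces in a few lines. The trace format below (one list of activity labels per patient case) is a hypothetical simplification of what an indoor location system would yield:

```python
from collections import Counter

def directly_follows(traces):
    """Count directly-follows pairs across event traces (one trace per case).

    The resulting counts are the edge weights of a directly-follows graph,
    the basic structure behind many process-discovery algorithms.
    """
    pairs = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):  # consecutive events within one case
            pairs[(a, b)] += 1
    return pairs
```

Rendering these pair counts as a weighted graph gives the kind of easily understandable process view the abstract describes.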
Next Generation Global Navigation Satellite Systems (GNSS) Processing at NASA CDDIS
NASA Astrophysics Data System (ADS)
Michael, B. P.; Noll, C. E.
2016-12-01
The Crustal Dynamics Data Information System (CDDIS) has been providing access to space geodesy and related data sets since 1982, and in particular to Global Navigation Satellite Systems (GNSS) data and derived products since 1992. The CDDIS became one of the Earth Observing System Data and Information System (EOSDIS) archive centers in 2007. As such, CDDIS has evolved to offer a broad range of data ingest services, from data upload and quality control to documentation, metadata extraction, and ancillary information. With a growing understanding of the needs and goals of its science users, CDDIS continues to improve these services. Owing to the importance of GNSS data and derived products in scientific studies over the last decade, CDDIS has seen its ingest volume grow to over 30 million files per year, more than one file per second, from hundreds of simultaneous data providers. To accommodate this increase and to streamline operations and fully automate the workflow, CDDIS has recently updated the data submission process and GNSS processing. This poster will cover the new ingest infrastructure and workflow, and the agile techniques applied in its development and current operations.
Modeling of aircraft deicing fluids deposition
DOT National Transportation Integrated Search
2000-06-18
Glycol deposition near aircraft during deicing operations has become an important consideration at major airports. A sampling process was used to quantify glycol deposition from deicing operations at a major international airport. The resulting data ...
Division 1137 property control system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pastor, D.J.
1982-01-01
An automated data processing property control system was developed by Mobile and Remote Range Division 1137. This report describes the operation of the system and examines ways of using it in operational planning and control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.H. Frantz Jr; K.G. Brown; W.K. Sawyer
2006-03-01
This report summarizes the work performed under contract DE-FC26-03NT41743. The primary objective of this study was to develop tools that would allow Underground Gas Storage (UGS) operators to use wellhead electronic flow measurement (EFM) data to quickly and efficiently identify trends in well damage over time, thus aiding in the identification of potential causes of the damage. Secondary objectives of this work included: (1) to assist UGS operators in the evaluation of hardware and software requirements for implementing an EFM system similar to the one described in this report, and (2) to provide a cost-benefit analysis framework UGS operators can use to evaluate the economic benefits of installing wellhead EFM systems in their particular fields. Assessment of the EFM data available for use and selection of the specific study field are reviewed. The various EFM data processing tasks, including data collection, organization, extraction, processing, and interpretation, are discussed. The process of damage assessment via pressure transient analysis of EFM data is outlined and demonstrated, including such tasks as quality control, semi-log analysis, and log-log analysis of pressure transient test data extracted from routinely collected EFM data. Output from pressure transient test analyses for 21 wells is presented, and the interpretation of these analyses to determine the timing of damage development is demonstrated using output from specific study wells. Development of processing and interpretation modules to handle EFM data interpretation in horizontal wells is also presented and discussed. A spreadsheet application developed to aid underground gas storage operators in the selection of EFM equipment is presented, discussed, and used to determine the cost benefit of installing EFM equipment in a gas storage field. Recommendations for future work related to EFM in gas storage fields are presented and discussed.
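At its simplest, the semi-log analysis mentioned above reduces to reading a slope off a pressure-versus-log-time plot; changes in that slope over repeated tests are one indicator of developing damage. The sketch below shows only that elementary step, with illustrative inputs, and is not the report's toolset:

```python
import numpy as np

def semilog_slope(times, pressures):
    """Least-squares slope of pressure versus log10(time), the basic
    quantity read from a semi-log pressure transient plot."""
    m, _ = np.polyfit(np.log10(times), pressures, 1)
    return m
```

For field data, the same fit would be restricted to the radial-flow portion of the transient after quality control, as the report's workflow describes.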
NASA Technical Reports Server (NTRS)
Weise, Timothy M
2012-01-01
NASA's Dawn mission to the asteroid Vesta and the dwarf planet Ceres launched September 27, 2007 and arrived at Vesta in July of 2011. The mission uses ion propulsion to achieve the delta-V necessary to reach and maneuver at Vesta and Ceres. This paper shows how the evolution of ground system automation and process improvement allowed a relatively small engineering team to transition from cruise operations to asteroid operations while maintaining robust processes. The cruise to Vesta lasted almost four years and consisted of activities that were built with software tools, but each tool was open loop and required engineers to review the output to ensure consistency. The same period saw the evolution from manually retrieved and reviewed data products to automatically generated data products and data value checking. Furthermore, the team originally took three to four weeks to design and build about four weeks of spacecraft activities, with spacecraft contacts only once a week. Operations around Vesta increased the tempo dramatically, transitioning from one contact a week to three or four contacts a week, and then to fourteen contacts a week (every 12 hours). This was accompanied by a similar increase in activity complexity, along with very fast turnaround of the activity design and build cycles. The design process became more automated and the tools became closed loop, allowing the team to build more activities without sacrificing rigor. Additionally, these activities depended on the results of flight system performance, so more automation was added to analyze the flight data and provide results in a timely fashion to feed the design cycle. All of this automation and process improvement enabled the engineers to focus on other aspects of spacecraft operations, including spacecraft health monitoring and anomaly resolution.
JPSS Common Ground System Multimission Support
NASA Astrophysics Data System (ADS)
Jamilkowski, M. L.; Miller, S. W.; Grant, K. D.
2013-12-01
NOAA & NASA jointly acquire the next-generation civilian operational weather satellite: Joint Polar Satellite System (JPSS). JPSS contributes the afternoon orbit & restructured NPOESS ground system (GS) to replace the current Polar-orbiting Operational Environmental Satellite (POES) system run by NOAA. JPSS sensors will collect meteorological, oceanographic, climatological & solar-geophysical observations of the earth, atmosphere & space. The JPSS GS is the Common Ground System (CGS), consisting of Command, Control, & Communications (C3S) and Interface Data Processing (IDPS) segments, both developed by Raytheon Intelligence, Information & Services (IIS). CGS now flies the Suomi National Polar-orbiting Partnership (S-NPP) satellite, transfers its mission data between ground facilities and processes its data into Environmental Data Records for NOAA & Defense (DoD) weather centers. CGS will expand to support JPSS-1 in 2017. The JPSS CGS currently does data processing (DP) for S-NPP, creating multiple TBs/day across over two dozen environmental data products (EDPs). The workload doubles after JPSS-1 launch. But CGS goes well beyond S-NPP & JPSS mission management & DP by providing data routing support to operational centers & missions worldwide. The CGS supports several other missions: It also provides raw data acquisition, routing & some DP for GCOM-W1. The CGS does data routing for numerous other missions & systems, including USN's Coriolis/Windsat, NASA's SCaN network (including EOS), NSF's McMurdo Station communications, Defense Meteorological Satellite Program (DMSP), and NOAA's POES & EUMETSAT's MetOp satellites. Each of these satellite systems orbits the Earth 14 times/day, downlinking data once or twice/orbit at up to 100s of MBs/second, to support the creation of 10s of TBs of data/day across 100s of EDPs. Raytheon and the US government invested much in Raytheon's mission-management, command & control and data-processing products & capabilities. 
CGS's flexible, multimission capabilities offer major opportunities for cost reduction & improved information integration across missions. Raytheon has a unique ability to provide complex, highly secure, multi-mission GSs. As disaggregation, hosted CGS multimission payloads, and other space-architecture trades are implemented and new sensors come on line that collect orders of magnitude more data, the importance of a flexible, expandable and virtualized modern GS architecture increases. The CGS offers that solution. JPSS CGS supports 5 global ground stations that can receive S-NPP & JPSS-1 mission data. These, linked with high-bandwidth commercial fiber, quickly transport data to the IDPS for EDP creation & delivery. CGS will process & deliver JPSS-1 data to US operational users in < 80 minutes from time of collection. And CGS leverages this fiber network to provide added data routing for a wide array of global missions. The JPSS CGS is a mature, tested solution for support to operational weather forecasting for civil, military and international partners and climate research. It features a flexible design handling order-of-magnitude increases in data over legacy satellite GSs and meets demanding science accuracy needs. The Raytheon-built JPSS CGS provides the full GS capability, from design & development through operations & sustainment. This lays the foundation for CGS future evolution to support additional missions like Polar Free Flyers.
Mashup Model and Verification Using Mashup Processing Network
NASA Astrophysics Data System (ADS)
Zahoor, Ehtesham; Perrin, Olivier; Godart, Claude
Mashups are lightweight Web applications that aggregate data from different Web services; they are built using ad-hoc composition and are not concerned with long-term stability and robustness. In this paper we present a pattern-based approach, called Mashup Processing Network (MPN). The idea is based on Event Processing Networks and facilitates the creation, modeling and verification of mashups. MPN provides a view of how the different actors in mashup development interact, namely the producer, consumer, mashup processing agent and the communication channels. It also supports modeling transformations and validations of data and offers validation of both functional and non-functional requirements, such as reliable messaging and security, which are key issues within the enterprise context. We have enriched the model with a set of processing operations and categorized them into data composition, transformation and validation categories. These processing operations can be seen as a set of patterns for facilitating the mashup development process. MPN also paves the way for realizing a Mashup Oriented Architecture, where mashups along with services are used as building blocks for application development.
Steyn, Rachelle; Boniaszczuk, John; Geldenhuys, Theodore
2014-01-01
To determine how two software packages, supplied by Siemens and Hermes, for processing gated blood pool (GBP) studies should be used in our department and whether the use of different cameras for the acquisition of raw data influences the results. The study had two components. For the first component, 200 studies were acquired on a General Electric (GE) camera and processed three times by three operators using the Siemens and Hermes software packages. For the second part, 200 studies were acquired on two different cameras (GE and Siemens). The matched pairs of raw data were processed by one operator using the Siemens and Hermes software packages. The Siemens method consistently gave estimates that were 4.3% higher than the Hermes method (p < 0.001). The differences were not associated with any particular level of left ventricular ejection fraction (LVEF). There was no difference in the estimates of LVEF obtained by the three operators (p = 0.1794). The reproducibility of estimates was good. In 95% of patients, using the Siemens method, the SD of the three estimates of LVEF by operator 1 was ≤ 1.7, operator 2 was ≤ 2.1 and operator 3 was ≤ 1.3. The corresponding values for the Hermes method were ≤ 2.5, ≤ 2.0 and ≤ 2.1. There was no difference in the results of matched pairs of data acquired on different cameras (p = 0.4933). Conclusion: Software packages for processing GBP studies are not interchangeable. The report should include the name and version of the software package used. Wherever possible, the same package should be used for serial studies. If this is not possible, the report should include the limits of agreement of the different packages. Data acquisition on different cameras did not influence the results.
NASA Astrophysics Data System (ADS)
Surace, J.; Laher, R.; Masci, F.; Grillmair, C.; Helou, G.
2015-09-01
The Palomar Transient Factory (PTF) is a synoptic sky survey in operation since 2009. PTF utilizes a 7.1 square degree camera on the Palomar 48-inch Schmidt telescope to survey the sky primarily at a single wavelength (R-band) at a rate of 1000-3000 square degrees a night. The data are used to detect and study transient and moving objects such as gamma ray bursts, supernovae and asteroids, as well as variable phenomena such as quasars and Galactic stars. The data processing system at IPAC handles realtime processing and detection of transients, solar system object processing, high photometric precision processing and light curve generation, and long-term archiving and curation. This was developed under an extremely limited budget profile in an unusually agile development environment. Here we discuss the mechanics of this system and our overall development approach. Although a significant scientific installation in and of itself, PTF also serves as the prototype for our next generation project, the Zwicky Transient Facility (ZTF). Beginning operations in 2017, ZTF will feature a 50 square degree camera which will enable scanning of the entire northern visible sky every night. ZTF in turn will serve as a stepping stone to the Large Synoptic Survey Telescope (LSST), a major NSF facility scheduled to begin operations in the early 2020s.
Guo, Hansong; Huang, He; Huang, Liusheng; Sun, Yu-E
2016-01-01
As the size of smartphone touchscreens has become larger and larger in recent years, operability with a single hand is getting worse, especially for female users. We envision that user experience can be significantly improved if smartphones are able to recognize the current operating hand, detect the hand-changing process and then adjust the user interfaces subsequently. In this paper, we proposed, implemented and evaluated two novel systems. The first one leverages the user-generated touchscreen traces to recognize the current operating hand, and the second one utilizes the accelerometer and gyroscope data of all kinds of activities in the user’s daily life to detect the hand-changing process. These two systems are based on two supervised classifiers constructed from a series of refined touchscreen trace, accelerometer and gyroscope features. As opposed to existing solutions that all require users to select the current operating hand or confirm the hand-changing process manually, our systems follow much more convenient and practical methods and allow users to change the operating hand frequently without any harm to the user experience. We conduct extensive experiments on Samsung Galaxy S4 smartphones, and the evaluation results demonstrate that our proposed systems can recognize the current operating hand and detect the hand-changing process with 94.1% and 93.9% precision and 94.1% and 93.7% True Positive Rates (TPR) respectively, when deciding with a single touchscreen trace or accelerometer-gyroscope data segment, and the False Positive Rates (FPR) are as low as 2.6% and 0.7% accordingly. These two systems can either work completely independently and achieve pretty high accuracies or work jointly to further improve the recognition accuracy. PMID:27556461
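The operating-hand recognizer described above rests on a supervised classifier over refined touchscreen-trace features. As a rough illustration of that idea only — the feature names and the nearest-centroid model below are illustrative stand-ins, not the authors' actual feature set or classifier — a minimal sketch:

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

class NearestCentroidClassifier:
    """Minimal supervised classifier: label = class with the nearest centroid."""

    def fit(self, samples):
        # samples: {label: list of feature vectors}
        self.centroids = {lbl: centroid(vs) for lbl, vs in samples.items()}
        return self

    def predict(self, x):
        return min(self.centroids,
                   key=lambda lbl: math.dist(x, self.centroids[lbl]))

# Hypothetical per-trace features: (normalized start x, mean curvature)
train = {
    "left":  [(0.20, 0.30), (0.25, 0.28), (0.18, 0.33)],
    "right": [(0.80, -0.31), (0.75, -0.27), (0.85, -0.29)],
}
clf = NearestCentroidClassifier().fit(train)
print(clf.predict((0.22, 0.29)))   # a left-thumb-like trace
```

A real system of this kind would use many more features and a stronger classifier, but the fit/predict structure is the same.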
NASA Technical Reports Server (NTRS)
Morrison, D. B. (Editor); Scherer, D. J.
1977-01-01
Papers are presented on a variety of techniques for the machine processing of remotely sensed data. Consideration is given to preprocessing methods such as the correction of Landsat data for the effects of haze, sun angle, and reflectance and to the maximum likelihood estimation of signature transformation algorithm. Several applications of machine processing to agriculture are identified. Various types of processing systems are discussed such as ground-data processing/support systems for sensor systems and the transfer of remotely sensed data to operational systems. The application of machine processing to hydrology, geology, and land-use mapping is outlined. Data analysis is considered with reference to several types of classification methods and systems.
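The maximum-likelihood classification mentioned above assigns each pixel to the class whose trained spectral signature makes the observation most probable. A hedged sketch of that idea, assuming per-class Gaussian signatures with diagonal covariance; the two-band training values are invented for illustration, not drawn from the papers:

```python
import math

def fit_gaussian(samples):
    """Per-band mean and variance (diagonal covariance) for one class."""
    n = len(samples)
    dims = len(samples[0])
    mean = [sum(s[d] for s in samples) / n for d in range(dims)]
    var = [sum((s[d] - mean[d]) ** 2 for s in samples) / n for d in range(dims)]
    return mean, var

def log_likelihood(x, mean, var):
    """Log-density of pixel x under an independent-band Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def ml_classify(x, classes):
    """Assign pixel x to the class with the highest likelihood."""
    return max(classes, key=lambda c: log_likelihood(x, *classes[c]))

# Hypothetical 2-band training signatures (e.g. red, near-IR reflectance)
classes = {
    "water": fit_gaussian([(10, 5), (12, 6), (11, 4)]),
    "crop":  fit_gaussian([(40, 80), (42, 78), (39, 83)]),
}
print(ml_classify((41, 79), classes))   # classifies as "crop"
```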
nStudy: A System for Researching Information Problem Solving
ERIC Educational Resources Information Center
Winne, Philip H.; Nesbit, John C.; Popowich, Fred
2017-01-01
A bottleneck in gathering big data about learning is instrumentation designed to record data about processes students use to learn and information on which those processes operate. The software system nStudy fills this gap. nStudy is an extension to the Chrome web browser plus a server side database for logged trace data plus peripheral modules…
Competitive Parallel Processing For Compression Of Data
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Fender, Antony R. H.
1990-01-01
Momentarily-best compression algorithm selected. Proposed competitive-parallel-processing system compresses data for transmission in channel of limited band-width. Likely application for compression lies in high-resolution, stereoscopic color-television broadcasting. Data from information-rich source like color-television camera compressed by several processors, each operating with different algorithm. Referee processor selects momentarily-best compressed output.
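The scheme above — several processors compressing the same data with different algorithms while a referee selects the momentarily-best output — can be sketched as follows. General-purpose codecs stand in for the video-oriented algorithms the proposal envisions, and threads stand in for dedicated processors:

```python
import bz2
import lzma
import zlib
from concurrent.futures import ThreadPoolExecutor

# Competing "processors", each running a different algorithm
CODECS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

def competitive_compress(block: bytes):
    """Run every codec on the same block in parallel; the 'referee'
    keeps whichever output is momentarily smallest."""
    with ThreadPoolExecutor(max_workers=len(CODECS)) as pool:
        futures = {name: pool.submit(fn, block) for name, fn in CODECS.items()}
    outputs = {name: fut.result() for name, fut in futures.items()}
    winner = min(outputs, key=lambda name: len(outputs[name]))
    return winner, outputs[winner]

frame = bytes(range(256)) * 64          # stand-in for one image frame
name, payload = competitive_compress(frame)
print(name, len(payload), "<", len(frame))
```

In a streaming setting the referee would repeat this choice per frame, so the selected algorithm can change as the source statistics change.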
The microcomputer scientific software series 1: the numerical information manipulation system.
Harold M. Rauscher
1983-01-01
The Numerical Information Manipulation System extends the versatility provided by word processing systems for textual data manipulation to mathematical or statistical data in numeric matrix form. Numeric data, stored and processed in the matrix form, may be manipulated in a wide variety of ways. The system allows operations on single elements, entire rows, or columns...
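The element, row, and column operations such a system performs on numeric matrices can be illustrated with a minimal sketch; the function names here are hypothetical, not the system's actual commands:

```python
def scale_row(m, i, k):
    """Multiply row i of matrix m in place by scalar k."""
    m[i] = [k * x for x in m[i]]

def column(m, j):
    """Extract column j as a list."""
    return [row[j] for row in m]

def add_to_column(m, j, values):
    """Add values element-wise to column j in place."""
    for row, v in zip(m, values):
        row[j] += v

data = [[1.0, 2.0],
        [3.0, 4.0]]
scale_row(data, 0, 10)           # row operation
add_to_column(data, 1, [1, 1])   # column operation
print(data)                      # [[10.0, 21.0], [3.0, 5.0]]
```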
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pochan, M.J.; Massey, M.J.
1979-02-01
This report discusses the results of actual raw product gas sampling efforts and includes: Rationale for raw product gas sampling efforts; design and operation of the CMU gas sampling train; development and analysis of a sampling train data base; and conclusions and future application of results. The results of sampling activities at the CO/sub 2/-Acceptor and Hygas pilot plants proved that: The CMU gas sampling train is a valid instrument for characterization of environmental parameters in coal gasification gas-phase process streams; depending on the particular process configuration, the CMU gas sampling train can reduce gasifier effluent characterization activity to a single location in the raw product gas line; and in contrast to the slower operation of the EPA SASS Train, CMU's gas sampling train can collect representative effluent data at a rapid rate (approx. 2 points per hour) consistent with the rate of change of process variables, and thus function as a tool for process engineering-oriented analysis of environmental characteristics.
A graphics subsystem retrofit design for the bladed-disk data acquisition system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Carney, R. R.
1983-01-01
A graphics subsystem retrofit design for the turbojet blade vibration data acquisition system is presented. The graphics subsystem will operate in two modes permitting the system operator to view blade vibrations on an oscilloscope type of display. The first mode is a real-time mode that displays only gross blade characteristics, such as maximum deflections and standing waves. This mode is used to aid the operator in determining when to collect detailed blade vibration data. The second mode of operation is a post-processing mode that will animate the actual blade vibrations using the detailed data collected on an earlier data collection run. The operator can vary the rate of playback to view differing characteristics of blade vibrations. The heart of the graphics subsystem is a modified version of AMD's "Super Sixteen" computer, called the graphics preprocessor computer (GPC). This computer is based on AMD's 2900 series of bit-slice components.
NASA Astrophysics Data System (ADS)
Hyer, E. J.; Schmidt, C. C.; Hoffman, J.; Giglio, L.; Peterson, D. A.
2013-12-01
Polar and geostationary satellites are used operationally for fire detection and smoke source estimation by many near-real-time operational users, including operational forecast centers around the globe. The input satellite radiance data are processed by data providers to produce Level-2 and Level-3 fire detection products, but processing these data into spatially and temporally consistent estimates of fire activity requires a substantial amount of additional processing. The most significant processing steps are correction for variable coverage of the satellite observations, and correction for conditions that affect the detection efficiency of the satellite sensors. We describe a system developed by the Naval Research Laboratory (NRL) that uses the full raster information from the entire constellation to diagnose detection opportunities, calculate corrections for factors such as angular dependence of detection efficiency, and generate global estimates of fire activity at spatial and temporal scales suitable for atmospheric modeling. By incorporating these improved fire observations, smoke emissions products, such as NRL's FLAMBE, are able to produce improved estimates of global emissions. This talk provides an overview of the system, demonstrates the achievable improvement over older methods, and describes challenges for near-real-time implementation.
Real time data acquisition of a countrywide commercial microwave link network
NASA Astrophysics Data System (ADS)
Chwala, Christian; Keis, Felix; Kunstmann, Harald
2015-04-01
Research in recent years has shown that data from commercial microwave link networks can provide very valuable precipitation information. Since these networks comprise the backbone of the cell phone network, they provide countrywide coverage. However, acquiring the necessary data from the network operators is still difficult. Data is usually made available to researchers with a large time delay and often on an irregular basis. This of course hinders the exploitation of commercial microwave link data in operational applications like QPE forecasts running at national meteorological services. To overcome this, we have developed custom software in joint cooperation with our industry partner Ericsson. The software is installed on a dedicated server at Ericsson and is capable of acquiring data from the countrywide microwave link network in Germany. In its current first operational testing phase, data from several hundred microwave links in southern Germany is recorded. All data is instantaneously sent to our server where it is stored and organized in an emerging database. Time resolution for the Ericsson data is one minute. The custom acquisition software, however, is capable of processing higher sampling rates. Additionally we acquire and manage 1 Hz data from four microwave links operated by the skiing resort in Garmisch-Partenkirchen. We will present the concept of the data acquisition and show details of the custom-built software. Additionally we will showcase the accessibility and basic processing of real time microwave link data via our database web frontend.
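Deriving rain rates from such link data conventionally inverts the k-R power law, k = a·R^b, where k is the specific attenuation in dB/km. A minimal sketch of that inversion follows; the coefficient values are illustrative placeholders only, since real coefficients depend on link frequency and polarization, and this is not the acquisition software's own processing:

```python
def rain_rate_from_attenuation(attenuation_db, path_km, a=0.33, b=1.1):
    """Invert the k-R power law k = a * R**b.

    attenuation_db: rain-induced attenuation over the whole link (dB)
    path_km: link path length (km)
    a, b: frequency/polarization-dependent coefficients (placeholders here)
    Returns the path-averaged rain rate R (mm/h).
    """
    k = attenuation_db / path_km          # specific attenuation, dB/km
    if k <= 0:
        return 0.0                        # no rain-induced attenuation
    return (k / a) ** (1.0 / b)

# A 2 km link seeing 6.6 dB of rain-induced attenuation:
print(rain_rate_from_attenuation(6.6, 2.0))
```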
Processing module operating methods, processing modules, and communications systems
McCown, Steven Harvey; Derr, Kurt W.; Moore, Troy
2014-09-09
A processing module operating method includes using a processing module physically connected to a wireless communications device, requesting that the wireless communications device retrieve encrypted code from a web site and receiving the encrypted code from the wireless communications device. The wireless communications device is unable to decrypt the encrypted code. The method further includes using the processing module, decrypting the encrypted code, executing the decrypted code, and preventing the wireless communications device from accessing the decrypted code. Another processing module operating method includes using a processing module physically connected to a host device, executing an application within the processing module, allowing the application to exchange user interaction data communicated using a user interface of the host device with the host device, and allowing the application to use the host device as a communications device for exchanging information with a remote device distinct from the host device.
Dynamic Modeling of Yield and Particle Size Distribution in Continuous Bayer Precipitation
NASA Astrophysics Data System (ADS)
Stephenson, Jerry L.; Kapraun, Chris
Process engineers at Alcoa's Point Comfort refinery are using a dynamic model of the Bayer precipitation area to evaluate options in operating strategies. The dynamic model, a joint development effort between Point Comfort and the Alcoa Technical Center, predicts process yields, particle size distributions and occluded soda levels for various flowsheet configurations of the precipitation and classification circuit. In addition to rigorous heat, material and particle population balances, the model includes mechanistic kinetic expressions for particle growth and agglomeration and semi-empirical kinetics for nucleation and attrition. The kinetic parameters have been tuned to Point Comfort's operating data, with excellent matches between the model results and plant data. The model is written for the ACSL dynamic simulation program with specifically developed input/output graphical user interfaces to provide a user-friendly tool. Features such as a seed charge controller enhance the model's usefulness for evaluating operating conditions and process control approaches.
Administrative Uses of Microcomputers.
ERIC Educational Resources Information Center
Crawford, Chase
1987-01-01
This paper examines the administrative uses of the microcomputer, stating that high performance educational managers are likely to have microcomputers in their organizations. Four situations that would justify the use of a computer are: (1) when massive amounts of data are processed through well-defined operations; (2) when data processing is…
ERIC Educational Resources Information Center
Crowe, Jacquelyn
This study investigated computer and word processing operator skills necessary for employment in today's high technology office. The study was comprised of seven major phases: (1) identification of existing community college computer operator programs in the state of Washington; (2) attendance at an information management seminar; (3) production…
Cyclic growth in Atlantic region continental crust
NASA Technical Reports Server (NTRS)
Goodwin, A. M.
1986-01-01
Atlantic region continental crust evolved in successive stages under the influence of regular, approximately 400 Ma-long tectonic cycles. Data point to a variety of operative tectonic processes ranging from widespread ocean floor consumption (Wilson cycle) to entirely ensialic (Ampferer-style subduction or simple crustal attenuation-compression). Different processes may have operated concurrently in some or different belts. Resolving this remains the major challenge.
F3D Image Processing and Analysis for Many - and Multi-core Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
F3D is written in OpenCL, so it achieves platform-portable parallelism on modern multi-core CPUs and many-core GPUs. The interface and mechanisms to access the F3D core are written in Java as a plugin for Fiji/ImageJ to deliver several key image-processing algorithms necessary to remove artifacts from micro-tomography data. The algorithms consist of data-parallel-aware filters that can efficiently utilize resources, work on out-of-core datasets, and scale efficiently across multiple accelerators. Optimizing for data parallel filters, streaming out-of-core datasets, and efficient resource, memory and data management over complex execution sequences of filters greatly expedites any scientific workflow with image processing requirements. F3D performs several different types of 3D image processing operations, such as non-linear filtering using bilateral filtering and/or median filtering and/or morphological operators (MM). F3D gray-level MM operators are one-pass constant time methods that can perform morphological transformations with a line-structuring element oriented in discrete directions. Additionally, MM operators can be applied to gray-scale images, and consist of two parts: (a) a reference shape or structuring element, which is translated over the image, and (b) a mechanism, or operation, that defines the comparisons to be performed between the image and the structuring element. This tool provides a critical component within many complex pipelines such as those for performing automated segmentation of image stacks. F3D is also a "descendant" of Quant-CT, another software package we developed in the past. These two modules are to be integrated in a next version. Further details were reported in: D.M. Ushizima, T. Perciano, H. Krishnan, B. Loring, H. Bale, D. Parkinson, and J. Sethian. Structure recognition from high-resolution images of ceramic composites. IEEE International Conference on Big Data, October 2014.
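The gray-level morphological operators described above can be illustrated in one dimension, where a flat line structuring element slides over a signal. This naive O(n·k) sketch is for illustration only; it is not F3D's one-pass constant-time implementation:

```python
def erode(signal, length):
    """Gray-level erosion with a flat line structuring element:
    each sample becomes the minimum over a window of `length` samples."""
    half = length // 2
    n = len(signal)
    return [min(signal[max(0, i - half):min(n, i + half + 1)])
            for i in range(n)]

def dilate(signal, length):
    """Gray-level dilation: the maximum over the same window."""
    half = length // 2
    n = len(signal)
    return [max(signal[max(0, i - half):min(n, i + half + 1)])
            for i in range(n)]

def opening(signal, length):
    """Morphological opening = erosion then dilation; suppresses bright
    artifacts narrower than the structuring element."""
    return dilate(erode(signal, length), length)

row = [0, 0, 9, 0, 0, 5, 5, 5, 0, 0]   # a 1-sample spike and a 3-sample plateau
print(opening(row, 3))                  # spike removed, plateau kept
```

Extending this to 2D or 3D with oriented line elements amounts to applying the same windowed min/max along discrete directions through the volume.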
Data processing for a cosmic ray experiment onboard the solar probes Helios 1 and 2: Experiment 6
NASA Technical Reports Server (NTRS)
Mueller-Mellin, R.; Green, G.; Iwers, B.; Kunow, H.; Wibberenz, G.; Fuckner, J.; Hempe, H.; Witte, M.
1982-01-01
The data processing system for the Helios experiment 6, measuring energetic charged particles of solar, planetary and galactic origin in the inner solar system, is described. The aim of this experiment is to extend knowledge on origin and propagation of cosmic rays. The different programs for data reduction, analysis, presentation, and scientific evaluation are described as well as hardware and software of the data processing equipment. A chronological presentation of the data processing operation is given. Procedures and methods for data analysis which were developed can be used with minor modifications for analysis of other space research experiments.
The moderate resolution imaging spectrometer (MODIS) science and data system requirements
NASA Technical Reports Server (NTRS)
Ardanuy, Philip E.; Han, Daesoo; Salomonson, Vincent V.
1991-01-01
The Moderate Resolution Imaging Spectrometer (MODIS) has been designated as a facility instrument on the first NASA polar orbiting platform as part of the Earth Observing System (EOS) and is scheduled for launch in the late 1990s. The near-global daily coverage of MODIS, combined with its continuous operation, broad spectral coverage, and relatively high spatial resolution, makes it central to the objectives of EOS. The development, implementation, production, and validation of the core MODIS data products define a set of functional, performance, and operational requirements on the data system that operate between the sensor measurements and the data products supplied to the user community. The science requirements guiding the processing of MODIS data are reviewed, and the aspects of an operations concept for the production of data products from MODIS for use by the scientific community are discussed.
Processing EOS MLS Level-2 Data
NASA Technical Reports Server (NTRS)
Snyder, W. Van; Wu, Dong; Read, William; Jiang, Jonathan; Wagner, Paul; Livesey, Nathaniel; Schwartz, Michael; Filipiak, Mark; Pumphrey, Hugh; Shippony, Zvi
2006-01-01
A computer program performs level-2 processing of thermal-microwave-radiance data from observations of the limb of the Earth by the Earth Observing System (EOS) Microwave Limb Sounder (MLS). The purpose of the processing is to estimate the composition and temperature of the atmosphere versus altitude from approximately 8 to 90 km. "Level-2" as used here is a specialists' term signifying both vertical profiles of geophysical parameters along the measurement track of the instrument and processing performed by this or other software to generate such profiles. Designed to be flexible, the program is controlled via a configuration file that defines all aspects of processing, including contents of state and measurement vectors, configurations of forward models, measurement and calibration data to be read, and the manner of inverting the models to obtain the desired estimates. The program can operate in a parallel form in which one instance of the program acts as a master, coordinating the work of multiple slave instances on a cluster of computers, each slave operating on a portion of the data. Optionally, the configuration file can be made to instruct the software to produce files of simulated radiances based on state vectors formed from sets of geophysical data-product files taken as input.
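The master/slave arrangement described above — one instance coordinating others that each process a portion of the measurement track — can be sketched as follows. Threads stand in for the cluster of computers, and the retrieval itself is reduced to a trivial placeholder transform; none of this is the MLS software's actual code:

```python
from concurrent.futures import ThreadPoolExecutor

def retrieve_chunk(chunk):
    """Placeholder for a level-2 retrieval: turn a chunk of radiances
    into 'profiles' (here just a trivial transform)."""
    return [r * 0.5 for r in chunk]

def split(data, n):
    """Split data into at most n contiguous chunks."""
    k = (len(data) + n - 1) // n
    return [data[i:i + k] for i in range(0, len(data), k)]

def master(radiances, workers=4):
    """The master splits the measurement track, workers each process a
    portion, and the master reassembles the profiles in order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(retrieve_chunk, split(radiances, workers)))
    return [profile for part in parts for profile in part]

print(master([1.0, 2.0, 3.0, 4.0]))   # [0.5, 1.0, 1.5, 2.0]
```

Because chunks are mapped in order, results arrive reassembled along the measurement track regardless of which worker finished first.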
NASA Technical Reports Server (NTRS)
Gaulene, P.
1986-01-01
The SEE data processing system, developed in 1985, manages and processes test results. General information is provided on the SEE system: objectives, characteristics, basic principles, general organization, and operation. Full documentation is accessible by computer using the HELP SEE command.
Viirs Land Science Investigator-Led Processing System
NASA Astrophysics Data System (ADS)
Devadiga, S.; Mauoka, E.; Roman, M. O.; Wolfe, R. E.; Kalb, V.; Davidson, C. C.; Ye, G.
2015-12-01
The objective of the NASA's Suomi National Polar Orbiting Partnership (S-NPP) Land Science Investigator-led Processing System (Land SIPS), housed at the NASA Goddard Space Flight Center (GSFC), is to produce high quality land products from the Visible Infrared Imaging Radiometer Suite (VIIRS) to extend the Earth System Data Records (ESDRs) developed from NASA's heritage Earth Observing System (EOS) Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the EOS Terra and Aqua satellites. In this paper we will present the functional description and capabilities of the S-NPP Land SIPS, including system development phases and production schedules, timeline for processing, and delivery of land science products based on coordination with the S-NPP Land science team members. The Land SIPS processing stream is expected to be operational by December 2016, generating land products either using the NASA science team delivered algorithms, or the "best-of" science algorithms currently in operation at NASA's Land Product Evaluation and Algorithm Testing Element (PEATE). In addition to generating the standard land science products through processing of the NASA's VIIRS Level 0 data record, the Land SIPS processing system is also used to produce a suite of near-real time products for NASA's application community. Land SIPS will also deliver the standard products, ancillary data sets, software and supporting documentation (ATBDs) to the assigned Distributed Active Archive Centers (DAACs) for archival and distribution. 
Quality assessment and validation will be an integral part of the Land SIPS processing system; the former is performed at the Land Data Operational Product Evaluation (LDOPE) facility, while the latter falls under the auspices of the CEOS Working Group on Calibration & Validation (WGCV) Land Product Validation (LPV) Subgroup, adopting the best practices and tools used to assess the quality of heritage EOS-MODIS products generated at the MODIS Adaptive Processing System (MODAPS).
Method and apparatus for monitoring plasma processing operations
Smith, Jr., Michael Lane; Ward, Pamela Denise Peardon; Stevenson, Joel O'Don
2002-01-01
The invention generally relates to various aspects of a plasma process, and more specifically the monitoring of such plasma processes. One aspect relates in at least some manner to calibrating or initializing a plasma monitoring assembly. This type of calibration may be used to address wavelength shifts, intensity shifts, or both associated with optical emissions data obtained on a plasma process. A calibration light may be directed at a window through which optical emissions data is being obtained to determine the effect, if any, that the inner surface of the window is having on the optical emissions data being obtained therethrough, the operation of the optical emissions data gathering device, or both. Another aspect relates in at least some manner to various types of evaluations which may be undertaken of a plasma process which was run, and more typically one which is currently being run, within the processing chamber. Plasma health evaluations and process identification through optical emissions analysis are included in this aspect. Yet another aspect associated with the present invention relates in at least some manner to the endpoint of a plasma process (e.g., plasma recipe, plasma clean, conditioning wafer operation) or discrete/discernible portion thereof (e.g., a plasma step of a multiple step plasma recipe). Another aspect associated with the present invention relates to how one or more of the above-noted aspects may be implemented into a semiconductor fabrication facility, such as the distribution of wafers to a wafer production system. A final aspect of the present invention relates to networking a plurality of plasma monitoring systems, including with remote capabilities (i.e., outside of the clean room).
Georgia resource assessment project: Institutionalizing LANDSAT and geographic data base techniques
NASA Technical Reports Server (NTRS)
Pierce, R. R.; Rado, B. Q.; Faust, N.
1981-01-01
Digital data from LANDSAT for each 1.1-acre cell in Georgia were processed and the land cover conditions were categorized. Several test cases were completed and an operational hardware and software processing capability was established at the Georgia Institute of Technology. The operational capability was developed to process the entire state (60,000 sq. miles and 14 LANDSAT scenes) in a cooperative project between eleven divisions and agencies at the regional, state, and federal levels. Products were developed for state agencies in both mapped and statistical formats. A computerized geographical data base was developed for management programs. To a large extent the applications of the data base evolved as users of LANDSAT information requested that other data (e.g., soils, slope, land use) be made compatible with LANDSAT for management programs. To date, geographic data bases incorporating LANDSAT and other spatial data deal with elements of the municipal solid waste management program and reservoir management for the Corps of Engineers. LANDSAT data are also being used for applications in wetland, wildlife, and forestry management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulvatunyou, Boonserm; Wysk, Richard A.; Cho, Hyunbo
2004-06-01
In today's global manufacturing environment, manufacturing functions are distributed as never before. Design, engineering, fabrication, and assembly of new products are done routinely in many different enterprises scattered around the world. Successful business transactions require the sharing of design and engineering data on an unprecedented scale. This paper describes a framework that facilitates the collaboration of engineering tasks, particularly process planning and analysis, to support such globalized manufacturing activities. The information models of data and the software components that integrate those information models are described. The integration framework uses an Integrated Product and Process Data (IPPD) representation called a Resource Independent Operation Summary (RIOS) to facilitate the communication of business and manufacturing requirements. Hierarchical process modeling, process planning decomposition and an augmented AND/OR directed graph are used in this representation. The Resource Specific Process Planning (RSPP) module assigns required equipment and tools, selects process parameters, and determines manufacturing costs based on two-level hierarchical RIOS data. The shop floor knowledge (resource and process knowledge) and a hybrid approach (heuristic and linear programming) to linearize the AND/OR graph provide the basis for the planning. Finally, a prototype system is developed and demonstrated with an exemplary part. Java and XML (Extensible Markup Language) are used to ensure software and information portability.
NASA Astrophysics Data System (ADS)
Lyness, E.; Franz, H. B.; Prats, B.
2017-12-01
The Sample Analysis at Mars (SAM) instrument is a suite of instruments aboard the Mars Science Laboratory rover. Centered on a mass spectrometer, SAM delivers its data to the PDS Atmospheres node in PDS3 format. Over five years on Mars, the process of operating SAM has evolved and extended significantly from the plan in place at the time the PDS3 delivery specification was written. For instance, SAM commonly receives double or even triple sample aliquots from the rover's drill. SAM also stores samples in spare cups for long periods of time for future analysis. These unanticipated operational changes mean that the PDS data deliveries are missing some valuable metadata without which the data can be confusing. The Mars Organic Molecule Analyzer (MOMA) instrument is another suite of instruments centered on a mass spectrometer bound for Mars. MOMA is part of the European ExoMars rover mission scheduled to arrive on Mars in 2021. While SAM and MOMA differ in some important scientific ways - MOMA uses a linear ion trap compared to the SAM quadrupole mass spectrometer, and MOMA has a laser desorption experiment that SAM lacks - the data content from the PDS point of view is comparable. Both instruments produce data containing mass spectra acquired from solid samples collected on the surface of Mars. The MOMA PDS delivery will make use of PDS4 improvements to provide a metadata context for the data. The MOMA PDS4 specification makes few assumptions about the operational processes. Instead, it provides a means for the MOMA operators to supply the important contextual metadata that was unanticipated during specification development. Further, the software tools being developed for instrument operators will provide a means for the operators to add this crucial metadata at the time it is best known - during operations.
Kychakoff, George [Maple Valley, WA; Afromowitz, Martin A [Mercer Island, WA; Hogle, Richard E [Olympia, WA
2008-10-14
A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit monitoring sensors operating in infrared regions of about 4 or 8.7 microns, either directly producing images of the interior of the boiler or feeding signals to a data processing system that provides information enabling the distributed control system by which the boilers are operated to run them more efficiently. The data processing system includes an image pre-processing circuit in which a 2-D image formed from the video data input is captured, and includes a low pass filter for noise filtering of the video input. It also includes an image compensation system for array compensation, correcting for pixel variation, dead cells, and the like, and for correcting geometric distortion. An image segmentation module receives a cleaned image from the image pre-processing circuit and separates the image of the recovery boiler interior into background, pendant tubes, and deposition. It also performs thresholding/clustering on gray scale/texture, applies morphological transforms to smooth regions, and identifies regions by connected components. An image-understanding unit receives the segmented image from the image segmentation module and matches the derived regions to a 3-D model of the boiler. It derives a 3-D structure of the deposition on the pendant tubes and provides the information about deposits to the plant distributed control system for more efficient operation of the plant pendant tube cleaning and operating systems.
Xray: N-dimensional, labeled arrays for analyzing physical datasets in Python
NASA Astrophysics Data System (ADS)
Hoyer, S.
2015-12-01
Efficient analysis of geophysical datasets requires tools that both preserve and utilize metadata, and that transparently scale to process large datasets. Xray is such a tool, in the form of an open source Python library for analyzing the labeled, multi-dimensional array (tensor) datasets that are ubiquitous in the Earth sciences. Xray's approach pairs Python data structures based on the data model of the netCDF file format with the proven design and user interface of pandas, the popular Python data analysis library for labeled tabular data. On top of the NumPy array, xray adds labeled dimensions (e.g., "time") and coordinate values (e.g., "2015-04-10"), which it uses to enable a host of operations powered by these labels: selection, aggregation, alignment, broadcasting, split-apply-combine, interoperability with pandas and serialization to netCDF/HDF5. Many of these operations are enabled by xray's tight integration with pandas. Finally, to allow for easy parallelism and to enable its labeled data operations to scale to datasets that do not fit into memory, xray integrates with the parallel processing library dask.
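The core idea of label-based selection can be sketched in a few lines of plain Python. This is an illustrative toy, not xray's actual API: the `LabeledArray` class, its `sel` method, and the sample temperature values are all invented for the example.

```python
# Minimal sketch of label-based selection in the spirit of xray's
# dims/coords data model (illustrative only, not the real library).
class LabeledArray:
    def __init__(self, data, dims, coords):
        self.data = data          # nested lists, one nesting level per dim
        self.dims = dims          # e.g. ("time", "lat")
        self.coords = coords      # dim name -> list of coordinate labels

    def sel(self, **indexers):
        """Select by coordinate label instead of integer position."""
        out = self.data
        remaining = list(self.dims)
        for dim in list(self.dims):
            if dim in indexers:
                i = self.coords[dim].index(indexers[dim])
                axis = remaining.index(dim)
                # index along the first or second remaining axis (2-D sketch)
                out = out[i] if axis == 0 else [row[i] for row in out]
                remaining.remove(dim)
        return out

temps = LabeledArray(
    [[280.1, 281.5], [279.8, 282.0]],
    dims=("time", "lat"),
    coords={"time": ["2015-04-10", "2015-04-11"], "lat": [40.0, 45.0]},
)
print(temps.sel(time="2015-04-11", lat=45.0))  # 282.0
```

The real xray generalizes this to N dimensions and returns labeled results, but the lookup-by-coordinate principle is the same.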
Operation plan for the data 100/LARS terminal system
NASA Technical Reports Server (NTRS)
Bowen, A. J., Jr.
1980-01-01
The Data 100/LARS terminal system provides an interface for processing on the IBM 3031 computer system at Purdue University's Laboratory for Applications of Remote Sensing. The environment in which the system is operated and supported is discussed. The general support responsibilities, procedural mechanisms, and training established for the benefit of the system users are defined.
System integration of marketable subsystems
NASA Technical Reports Server (NTRS)
1978-01-01
These monthly reports, covering the period February 1978 through June 1978, describe the progress made in the major areas of the program. The areas covered are: systems integration of marketable subsystems; development, design, and building of site data acquisition subsystems; development and operation of the central data processing system; operation of the MSFC Solar Test Facility; and systems analysis.
Operations research methods improve chemotherapy patient appointment scheduling.
Santibáñez, Pablo; Aristizabal, Ruben; Puterman, Martin L; Chow, Vincent S; Huang, Wenhai; Kollmannsberger, Christian; Nordin, Travis; Runzer, Nancy; Tyldesley, Scott
2012-12-01
Clinical complexity, scheduling restrictions, and outdated manual booking processes resulted in frequent clerical rework, long waitlists for treatment, and late appointment notification for patients at a chemotherapy clinic in a large cancer center in British Columbia, Canada. A 17-month study was conducted to address booking, scheduling and workload issues and to develop, implement, and evaluate solutions. A review of scheduling practices included process observation and mapping, analysis of historical appointment data, creation of a new performance metric (final appointment notification lead time), and a baseline patient satisfaction survey. Process improvement involved discrete event simulation to evaluate alternative booking practice scenarios, development of an optimization-based scheduling tool to improve scheduling efficiency, and change management for implementation of process changes. Results were evaluated through analysis of appointment data, a follow-up patient survey, and staff surveys. Process review revealed a two-stage scheduling process. Long waitlists and late notification resulted from an inflexible first-stage process. The second-stage process was time consuming and tedious. After a revised, more flexible first-stage process and an automated second-stage process were implemented, the median percentage of appointments exceeding the final appointment notification lead time target of one week was reduced by 57% and median waitlist size decreased by 83%. Patient surveys confirmed increased satisfaction while staff feedback reported reduced stress levels. Significant operational improvements can be achieved through process redesign combined with operations research methods.
Towards operational multisensor registration
NASA Technical Reports Server (NTRS)
Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.
1991-01-01
To use data from a number of different remote sensors in a synergistic manner, a multidimensional analysis of the data is necessary. However, prior to this analysis, processing to correct for the systematic geometric distortion characteristic of each sensor is required. Furthermore, the registration process must be fully automated to handle a large volume of data and high data rates. A conceptual approach towards an operational multisensor registration algorithm is presented. The performance requirements of the algorithm are first formulated given the spatially, temporally, and spectrally varying factors that influence the image characteristics and the science requirements of various applications. Several registration techniques that fit within the structure of this algorithm are also presented. Their performance was evaluated using a multisensor test data set assembled from LANDSAT TM, SEASAT, SIR-B, Thermal Infrared Multispectral Scanner (TIMS), and SPOT sensors.
Advanced information processing system
NASA Technical Reports Server (NTRS)
Lala, J. H.
1984-01-01
Design and performance details of the advanced information processing system (AIPS) for fault and damage tolerant data processing on aircraft and spacecraft are presented. AIPS comprises several computers distributed throughout the vehicle and linked by a damage tolerant data bus. Most I/O functions are available to all the computers, which run in a TDMA mode. Each computer performs separate specific tasks in normal operation and assumes other tasks in degraded modes. Redundant software assures that all fault monitoring, logging and reporting are automated, together with control functions. Redundant duplex links and damage-spread limitation provide the fault tolerance. Details of an advanced design of a laboratory-scale proof-of-concept system are described, including functional operations.
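The fault-masking behavior of redundant channels described above rests on majority voting. The following sketch shows the voting idea in isolation; the function name and sample values are illustrative, not taken from the AIPS design.

```python
# Hedged sketch of majority voting across redundant computer channels:
# the voted output masks a single faulty channel's result.
from collections import Counter

def majority_vote(replica_outputs):
    """Return the value reported by most replicas, masking minority faults."""
    value, count = Counter(replica_outputs).most_common(1)[0]
    if count <= len(replica_outputs) // 2:
        raise RuntimeError("no majority: fault cannot be masked")
    return value

# Three redundant channels; one has failed and reports garbage.
print(majority_vote([42, 42, 17]))  # 42
```

With triple redundancy, any single faulty channel is outvoted; detecting which channel failed (for fault logging and reporting) follows by comparing each replica against the voted value.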
NASA Astrophysics Data System (ADS)
Hung, Nguyen Trong; Thuan, Le Ba; Thanh, Tran Chi; Nhuan, Hoang; Khoai, Do Van; Tung, Nguyen Van; Lee, Jin-Young; Jyothi, Rajesh Kumar
2018-06-01
Modeling of the uranium dioxide pellet process from ammonium uranyl carbonate-derived uranium dioxide powder (UO2 ex-AUC powder) and prediction of the fuel rod temperature distribution are reported in this paper. Response surface methodology (RSM) and the FRAPCON-4.0 code were used to model the process and to predict the fuel rod temperature under steady-state operating conditions. The fuel rod design of the AP-1000, designed by Westinghouse Electric Corporation, with pellet fabrication parameters taken from this study, served as input data for the code. The predicted data suggest a relationship between the fabrication parameters of the UO2 pellets and their temperature profile in the nuclear reactor.
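Response surface methodology fits a low-order polynomial to designed experimental points so the response can be interpolated between them. A minimal one-factor sketch is below; the sintering-temperature and density numbers are made up for illustration, not taken from the paper.

```python
# Illustrative RSM-style quadratic fit y = b0 + b1*x + b2*x^2 through
# three (factor, response) design points, via expanded Lagrange basis.
# The design points are hypothetical, not the paper's data.
def fit_quadratic(pts):
    (x0, y0), (x1, y1), (x2, y2) = pts

    def basis(xa, xb, xc, ya):
        # coefficients of ya * (x - xb)(x - xc) / ((xa - xb)(xa - xc))
        denom = (xa - xb) * (xa - xc)
        return (ya / denom * xb * xc,
                -ya / denom * (xb + xc),
                ya / denom)

    terms = [basis(x0, x1, x2, y0), basis(x1, x0, x2, y1), basis(x2, x0, x1, y2)]
    return tuple(sum(t[i] for t in terms) for i in range(3))

# hypothetical (sintering temperature C, % theoretical density) points
b0, b1, b2 = fit_quadratic([(1600, 92.0), (1700, 95.5), (1800, 95.0)])
predict = lambda x: b0 + b1 * x + b2 * x * x
print(round(predict(1700), 2))  # 95.5, by construction
```

Real RSM uses least squares over more runs and several factors, but the fitted surface plays the same role: a cheap surrogate for interpolating the response.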
NASA Technical Reports Server (NTRS)
Dehghani, Navid; Tankenson, Michael
2006-01-01
This viewgraph presentation reviews the architectural description of the Mission Data Processing and Control System (MPCS). MPCS is an event-driven, multi-mission ground data processing system providing uplink, downlink, and data management capabilities, which will support the Mars Science Laboratory (MSL) project as its first target mission. MPCS is designed around these factors: (1) an enabling plug-and-play architecture; (2) strong inheritance from GDS components that have been developed for other flight projects (MER, MRO, DAWN, MSAP) and are currently being used in operations and ATLO; and (3) Java-based, platform-independent components designed to consume and produce XML-formatted data.
CMOS serial link for fully duplexed data communication
NASA Astrophysics Data System (ADS)
Lee, Kyeongho; Kim, Sungjoon; Ahn, Gijung; Jeong, Deog-Kyoon
1995-04-01
This paper describes a CMOS serial link allowing fully duplexed 500 Mbaud serial data communication. The CMOS serial link is a robust and low-cost solution to high data rate requirements. A central charge pump PLL generating multiphase clocks for oversampling is shared by several serial link channels. Fully duplexed serial data communication is realized in the bidirectional bridge by separating incoming data from the mixed signal on the cable end. The digital PLL accomplishes process-independent data recovery by using low-ratio oversampling, majority voting, and a parallel data recovery scheme. The mostly digital approach could extend its bandwidth further with scaled CMOS technology. A single-channel serial link and a charge pump PLL are integrated in a test chip using 1.2 micron CMOS process technology. The test chip confirms up to 500 Mbaud unidirectional mode operation and 320 Mbaud fully duplexed mode operation with pseudorandom data patterns.
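The oversampling-plus-majority-voting recovery scheme can be sketched in software. This toy assumes a fixed 3x oversampling ratio and ideal bit-cell alignment; the sample pattern is invented, and a real receiver would also track cell boundaries from data transitions.

```python
# Sketch of 3x-oversampled data recovery with per-cell majority voting.
def recover_bits(samples, ratio=3):
    """Group oversampled line samples into bit cells and majority-vote each."""
    bits = []
    for i in range(0, len(samples) - ratio + 1, ratio):
        cell = samples[i:i + ratio]
        bits.append(1 if sum(cell) > ratio // 2 else 0)
    return bits

# Each transmitted bit is sampled 3 times; one sample is corrupted by edge jitter.
line = [1, 1, 1,  0, 0, 1,  0, 0, 0,  1, 0, 1]
print(recover_bits(line))  # [1, 0, 0, 1]
```

Voting over multiple samples per cell is what makes the recovery tolerant of single-sample errors near bit edges, which is the process-independence property the abstract emphasizes.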
NASA Astrophysics Data System (ADS)
Benedetto, J.; Cloninger, A.; Czaja, W.; Doster, T.; Kochersberger, K.; Manning, B.; McCullough, T.; McLane, M.
2014-05-01
Successful performance of a radiological search mission depends on effective utilization of a mixture of signals. Example modalities include EO imagery and gamma radiation data, or radiation data collected during multiple events. In addition, elevation data or spatial proximity can be used to enhance the performance of acquisition systems. State-of-the-art techniques for processing and exploiting complex information manifolds rely on diffusion operators. Our approach involves machine learning techniques based on analysis of joint data-dependent graphs and their associated diffusion kernels. The significant eigenvectors of the derived fused graph Laplace and Schroedinger operators then form the new representation, which provides integrated features from the heterogeneous input data. The families of data-dependent Laplace and Schroedinger operators on joint data graphs are integrated by means of appropriately designed fusion metrics. These fused representations are used for target and anomaly detection.
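The object at the center of such diffusion methods is the graph Laplacian built from a data-dependent similarity matrix. A minimal construction is sketched below with a toy 3-node similarity matrix; the fusion metrics and Schroedinger potential of the actual method are beyond this sketch.

```python
# Minimal sketch: unnormalized graph Laplacian L = D - W from a symmetric
# similarity (weight) matrix W.  W here is a toy example, not real
# EO/gamma similarity data.
def laplacian(W):
    n = len(W)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        degree = sum(W[i])           # D[i][i]: total similarity of node i
        for j in range(n):
            L[i][j] = (degree if i == j else 0.0) - W[i][j]
    return L

W = [[0, 1, 1],
     [1, 0, 0],
     [1, 0, 0]]
L = laplacian(W)
print([sum(row) for row in L])  # each row sums to 0, as required of a Laplacian
```

The eigenvectors of this matrix (computed with a numerical library in practice) supply the low-dimensional embedding in which the fused features live.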
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisterson, D. L.
2011-02-01
Individual raw datastreams from instrumentation at the Atmospheric Radiation Measurement (ARM) Climate Research Facility fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near-real time. Raw and processed data are then sent approximately daily to the ARM Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of processed data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual datastream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the first quarter of FY2010 for the Southern Great Plains (SGP) site is 2097.60 hours (0.95 x 2208 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1987.20 hours (0.90 x 2208) and for the Tropical Western Pacific (TWP) locale is 1876.80 hours (0.85 x 2208). The first ARM Mobile Facility (AMF1) deployment in Graciosa Island, the Azores, Portugal, continued through this quarter, so the OPSMAX time this quarter is 2097.60 hours (0.95 x 2208). The second ARM Mobile Facility (AMF2) began deployment this quarter to Steamboat Springs, Colorado. The experiment officially began November 15, but most of the instruments were up and running by November 1. Therefore, the OPSMAX time for the AMF2 was 1390.80 hours (0.95 x 1464 hours) for November and December (61 days).
The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or datastream. Data availability reported here refers to the average of the individual, continuous datastreams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter. Summary. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period October 1-December 31, 2010, for the fixed sites. Because the AMFs operate episodically, the AMF statistics are reported separately and not included in the aggregate average with the fixed sites. This first quarter comprises a total of 2,208 possible hours for the fixed sites and the AMF1 and 1,464 possible hours for the AMF2. The average of the fixed sites exceeded our goal this quarter. The AMF1 has essentially completed its mission and is shutting down to pack up for its next deployment to India. Although all the raw data from the operational instruments are in the Archive for the AMF2, only the processed data are tabulated. Approximately half of the AMF2 instruments had data that were fully processed, resulting in 46% of all possible data made available to users through the Archive for this first quarter. Typically, raw data are not made available to users unless specifically requested.
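The report's time-based metrics reduce to two formulas, OPSMAX = goal fraction x possible hours and VARIANCE = 1 - (ACTUAL/OPSMAX). The sketch below reproduces the SGP OPSMAX figure quoted above; the ACTUAL hours value is an invented placeholder, since the report tabulates those in Table 1.

```python
# The two time-based operating metrics defined in the text.
def opsmax(goal_fraction, possible_hours):
    """Uptime goal: planned-downtime-adjusted maximum operating hours."""
    return goal_fraction * possible_hours

def variance(actual_hours, opsmax_hours):
    """Unplanned-downtime fraction: 1 - (ACTUAL/OPSMAX)."""
    return 1 - actual_hours / opsmax_hours

sgp_opsmax = opsmax(0.95, 2208)
print(round(sgp_opsmax, 2))  # 2097.6 hours, as reported for SGP
# ACTUAL below is illustrative, not a value from the report.
print(round(variance(2000.0, sgp_opsmax), 3))
```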
Closing data gaps for LCA of food products: estimating the energy demand of food processing.
Sanjuán, Neus; Stoessel, Franziska; Hellweg, Stefanie
2014-01-21
Food is one of the most energy and CO2-intensive consumer goods. While environmental data on primary agricultural products are increasingly becoming available, there are large data gaps concerning food processing. Bridging these gaps is important; for example, the food industry can use such data to optimize processes from an environmental perspective, and retailers may use this information for purchasing decisions. Producers and retailers can then market sustainable products and deliver the information demanded by governments and consumers. Finally, consumers are increasingly interested in the environmental information of foods in order to lower their consumption impacts. This study provides estimation tools for the energy demand of a representative set of food process unit operations such as dehydration, evaporation, or pasteurization. These operations are used to manufacture a variety of foods and can be combined, according to the product recipe, to quantify the heat and electricity demand during processing. In combination with inventory data on the production of the primary ingredients, this toolbox will be a basis to perform life cycle assessment studies of a large number of processed food products and to provide decision support to the stakeholders. Furthermore, a case study is performed to illustrate the application of the tools.
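The "toolbox" idea, per-unit-operation energy intensities combined according to a product recipe, can be sketched as follows. The energy figures and the recipe are placeholders for illustration, not the paper's estimates.

```python
# Sketch of combining unit-operation energy demands by recipe.
# All MJ/kg values below are hypothetical, not from the study.
UNIT_OPS_MJ_PER_KG = {
    "pasteurization": 0.30,
    "evaporation": 2.10,
    "dehydration": 4.80,
}

def process_energy(recipe):
    """Sum energy demand over the (operation, mass in kg) steps of a recipe."""
    return sum(UNIT_OPS_MJ_PER_KG[op] * kg for op, kg in recipe)

# hypothetical recipe: pasteurize 1 kg of juice, then evaporate 0.8 kg of it
juice_concentrate = [("pasteurization", 1.0), ("evaporation", 0.8)]
print(round(process_energy(juice_concentrate), 2))  # 1.98 MJ
```

In the study's framework, such processing energy would then be added to the inventory data for the primary ingredients to complete the life cycle assessment.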
NASA Astrophysics Data System (ADS)
Yoon, S.
2016-12-01
To define a geodetic reference frame using GPS data collected by the Continuously Operating Reference Stations (CORS) network, historical GPS data need to be reprocessed regularly. Reprocessing the GPS data collected by up to 2,000 CORS sites over the last two decades requires substantial computational resources. At the National Geodetic Survey (NGS), one reprocessing was completed in 2011, and a second reprocessing is currently underway. For the first reprocessing effort, in-house computing resources were utilized. In the current second reprocessing effort, an outsourced cloud computing platform is being utilized. In this presentation, the data processing strategy at NGS is outlined, as well as the effort to parallelize the data processing procedure in order to maximize the benefit of cloud computing. The time and cost savings realized by the cloud computing approach are also discussed.
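Reprocessing parallelizes naturally because the archive splits into independent chunks. The sketch below farms station-day chunks out to worker processes; the chunk list and the per-chunk "solution" are stand-ins for the real GPS processing, not NGS's actual pipeline.

```python
# Hedged sketch of chunk-level parallelism for GPS reprocessing.
from multiprocessing import Pool

def process_chunk(chunk):
    station, day = chunk
    # placeholder for computing the actual GPS solution for one station-day
    return (station, day, "solved")

if __name__ == "__main__":
    # hypothetical station names and day indices
    chunks = [(s, d) for s in ("CORS_A", "CORS_B") for d in range(3)]
    with Pool(processes=4) as pool:
        results = pool.map(process_chunk, chunks)
    print(len(results))  # 6 chunks processed
```

On a cloud platform the same decomposition maps onto many machines rather than local processes, which is what makes renting compute for a bounded reprocessing campaign cost-effective.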
Controlling Real-Time Processes On The Space Station With Expert Systems
NASA Astrophysics Data System (ADS)
Leinweber, David; Perry, John
1987-02-01
Many aspects of space station operations involve continuous control of real-time processes. These processes include electrical power system monitoring, propulsion system health and maintenance, environmental and life support systems, space suit checkout, on-board manufacturing, and servicing of attached vehicles such as satellites, shuttles, orbital maneuvering vehicles, orbital transfer vehicles and remote teleoperators. Traditionally, monitoring of these critical real-time processes has been done by trained human experts monitoring telemetry data. However, the long duration of space station missions and the high cost of crew time in space creates a powerful economic incentive for the development of highly autonomous knowledge-based expert control procedures for these space stations. In addition to controlling the normal operations of these processes, the expert systems must also be able to quickly respond to anomalous events, determine their cause and initiate corrective actions in a safe and timely manner. This must be accomplished without excessive diversion of system resources from ongoing control activities and any events beyond the scope of the expert control and diagnosis functions must be recognized and brought to the attention of human operators. Real-time sensor based expert systems (as opposed to off-line, consulting or planning systems receiving data via the keyboard) pose particular problems associated with sensor failures, sensor degradation and data consistency, which must be explicitly handled in an efficient manner. A set of these systems must also be able to work together in a cooperative manner. This paper describes the requirements for real-time expert systems in space station control, and presents prototype implementations of space station expert control procedures in PICON (process intelligent control). PICON is a real-time expert system shell which operates in parallel with distributed data acquisition systems. 
It incorporates a specialized inference engine with a scheduling component specifically designed to match the allocation of system resources with the operational requirements of real-time control systems. Innovative knowledge engineering techniques used in PICON to facilitate the development of real-time, sensor-based expert systems that use the special features of the inference engine are illustrated in the prototype examples.
NASA Technical Reports Server (NTRS)
Cronin, A. G.; Delaney, J. R.
1973-01-01
The system is discussed which was developed to process digitized telemetry data from the intensity monitoring spectrometer flown on the Orbiting Geophysical Observatory (OGO-F) Satellite. Functional descriptions and operating instructions are included for each program in the system.
Barbagallo, Simone; Corradi, Luca; de Ville de Goyet, Jean; Iannucci, Marina; Porro, Ivan; Rosso, Nicola; Tanfani, Elena; Testi, Angela
2015-05-17
The Operating Room (OR) is a key resource of all major hospitals, but it also accounts for up to 40% of resource costs. Improving cost effectiveness while maintaining the quality of care is a universal objective. These goals imply an optimization of the planning and scheduling of the activities involved. This is highly challenging due to the inherently variable and unpredictable nature of surgery. Business Process Modeling Notation (BPMN 2.0) was used for the representation of the "OR process" (defined as the sequence of all of the elementary steps between "patient ready for surgery" and "patient operated upon") as a general pathway ("path"). The path was then standardized as much as possible while keeping all of the key elements that allow one to address or define the other steps of planning, as well as the inherent, wide variability in terms of patient specificity. The path was used to schedule OR activity, room by room and day by day, feeding the process from a "waiting list database" and using a mathematical optimization model with the objective of arriving at an optimized plan. The OR process was defined with special attention paid to flows, timing, and resource involvement. Standardization addressed the dynamics of each operation and defined an expected operating time for each one. The optimization model was implemented and tested on real clinical data. Comparison of the results with the real data shows that the optimization model allows for the scheduling of about 30% more patients than in actual practice, as well as better exploitation of OR efficiency, increasing the average operating room utilization rate by up to 20%. The optimization of OR activity planning is essential in order to manage the hospital's waiting list. Optimal planning is facilitated by defining the operation as a standard pathway in which all variables are taken into account.
By allowing precise scheduling, it feeds the process of planning and, further upstream, the management of the waiting list in an interactive and bi-directional dynamic process.
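The scheduling step can be illustrated with a simple greedy heuristic: fill each OR session from the waiting list without exceeding its available minutes. This is a deliberately crude stand-in; the paper uses a mathematical optimization model, and all durations below are invented.

```python
# Illustrative greedy fill of OR sessions from a waiting list
# (longest operations first).  Not the paper's optimization model.
def schedule(waiting_list, session_minutes):
    """waiting_list: (patient, expected operating time in minutes).
    Returns one list of scheduled patients per OR session."""
    sessions = [[] for _ in session_minutes]
    remaining = list(session_minutes)
    for patient, duration in sorted(waiting_list, key=lambda p: -p[1]):
        for i, free in enumerate(remaining):
            if duration <= free:
                sessions[i].append(patient)
                remaining[i] -= duration
                break   # patient placed; unplaced patients stay on the list
    return sessions

wl = [("A", 120), ("B", 90), ("C", 60), ("D", 240)]
print(schedule(wl, [240, 240]))  # [['D'], ['A', 'B']]; C waits for the next day
```

An exact model additionally handles priorities, due dates, and resource constraints, which is why the authors report substantially better utilization than manual practice.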
Post-processing for improving hyperspectral anomaly detection accuracy
NASA Astrophysics Data System (ADS)
Wu, Jee-Cheng; Jiang, Chi-Ming; Huang, Chen-Liang
2015-10-01
Anomaly detection is an important topic in the exploitation of hyperspectral data. Based on the Reed-Xiaoli (RX) detector and a morphology operator, this research proposes a novel technique for improving the accuracy of hyperspectral anomaly detection. First, the RX-based detector is used to process a given input scene. Then, a post-processing scheme using a morphology operator is employed to detect those pixels around high-scoring anomaly pixels. Tests were conducted using two real hyperspectral images with ground truth information, and the results, based on receiver operating characteristic curves, illustrate that the proposed method reduced the false alarm rates of the RX-based detector.
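The post-processing step amounts to thresholding the RX score map and then dilating the resulting mask so pixels adjacent to strong detections are also flagged. The sketch below uses a 3x3 structuring element on a toy score grid; the threshold and scores are invented, and the paper's exact operator may differ.

```python
# Sketch of morphological post-processing on a thresholded anomaly mask:
# 3x3 binary dilation flags neighbors of high-scoring pixels.
def dilate(mask):
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and mask[rr][cc]:
                        out[r][c] = 1
    return out

# toy RX score map with one strong anomaly in the center
scores = [[0.1, 0.2, 0.1],
          [0.2, 9.5, 0.3],
          [0.1, 0.2, 0.1]]
mask = [[1 if s > 5.0 else 0 for s in row] for row in scores]
print(dilate(mask))  # all nine pixels flagged around the single detection
```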
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisterson, D. L.
2009-10-15
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near-real time. Raw and processed data are then sent approximately daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the fourth quarter of FY 2009 for the Southern Great Plains (SGP) site is 2,097.60 hours (0.95 x 2,208 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,987.20 hours (0.90 x 2,208) and for the Tropical Western Pacific (TWP) locale is 1,876.8 hours (0.85 x 2,208). The ARM Mobile Facility (AMF) was officially operational May 1 in Graciosa Island, the Azores, Portugal, so the OPSMAX time this quarter is 2,097.60 hours (0.95 x 2,208). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive result from downtime (scheduled or unplanned) of the individual instruments.
Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period July 1 - September 30, 2009, for the fixed sites. Because the AMF operates episodically, the AMF statistics are reported separately and not included in the aggregate average with the fixed sites. The fourth quarter comprises a total of 2,208 hours for the fixed and mobile sites. The average of the fixed sites well exceeded our goal this quarter. The AMF data statistic requires explanation. Since the AMF radar data ingest software is being modified, the data are being stored in the DMF for data processing. Hence, the data are not at the Archive; they are anticipated to become available by the next report.
Scientific and Operational Requirements for TOMS Data
NASA Technical Reports Server (NTRS)
Krueger, Arlin J. (Editor)
1987-01-01
Global total ozone and sulfur dioxide data from the Nimbus 7 Total Ozone Mapping Spectrometer (TOMS) instrument have applications in a broad range of disciplines. The presentations of 29 speakers who are using the data in research or who have operational needs for the data are summarized. Five sessions addressed topics in stratospheric processes, tropospheric dynamics and chemistry, remote sensing, volcanology, and future instrument requirements. Stratospheric and some volcanology requirements can be met by a continuation of polar-orbit satellites using a slightly modified TOMS, but weather-related research, tropospheric sulfur budget studies, and most operational needs require the time resolution of a geostationary instrument.
NASA Astrophysics Data System (ADS)
Pinner, J. W., IV
2016-02-01
Data from shipboard oceanographic sensors come in various formats, and collection typically requires multiple data acquisition software packages running on multiple workstations throughout the vessel. Technicians must then corral all or a subset of the resulting data files so that they may be used by shipboard scientists. On many vessels the process of corralling files into a single cruise data package may change from cruise to cruise or even from technician to technician. It is these inconsistencies in the final cruise data packages that pose the greatest challenge when attempting to automate the process of cataloging cruise data for submission to data archives. A second challenge with the management of shipboard data is ensuring its quality. Problems with sensors may go unnoticed simply because the technician/scientist was unaware that data from a sensor were absent, invalid, or out of range. The Open Vessel Data Management project (OpenVDM) is a ship-wide data management solution developed to address these issues. In the past three years OpenVDM has successfully demonstrated its ability to adapt to the needs of vessels with different capabilities/missions while delivering a consistent cruise data package to scientists and adhering to the recommendations and best practices set forth by third-party data management groups such as R2R. In the last year OpenVDM has implemented a plugin architecture for monitoring data quality. This allows vessel operators to develop custom data quality tests tailored to their vessel's unique raw datasets. Data quality tests are performed in near-real time and the results are readily available within a web interface. This plugin architecture also allows third-party data quality workgroups such as SAMOS to migrate their data quality tests to the vessel and provide immediate determination of data quality. OpenVDM is currently operating aboard three vessels.
The R/V Endeavor, operated by the University of Rhode Island, is a regional-class UNOLS research vessel operating under the traditional NSF, P.I.-driven model. The E/V Nautilus, operated by the Ocean Exploration Trust, specializes in ROV-based, telepresence-enabled oceanographic research. The R/V Falkor, operated by the Schmidt Ocean Institute, is an ocean research platform focusing on cutting-edge technology development.
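The plugin architecture described in the abstract above, in which vessel operators register custom data quality tests per raw data stream, can be sketched as follows. This is a hypothetical illustration of the pattern, not OpenVDM's actual plugin API; all names, streams, and physical ranges below are assumptions.

```python
# Hypothetical sketch of a plugin-style data quality registry of the
# kind OpenVDM's abstract describes: operators register per-stream
# tests, and each test returns a verdict for the latest readings.
from typing import Callable, Dict, List

# A quality test takes a list of sensor readings and returns a verdict.
QualityTest = Callable[[List[float]], str]

def range_test(lo: float, hi: float) -> QualityTest:
    """Build a test flagging readings outside an expected physical range."""
    def test(readings: List[float]) -> str:
        if not readings:
            return "absent"  # no data received from the sensor
        if all(lo <= r <= hi for r in readings):
            return "pass"
        return "out_of_range"
    return test

# Registry pattern: operators add tests tailored to their vessel's streams.
registry: Dict[str, QualityTest] = {
    "sea_surface_temp_c": range_test(-2.0, 40.0),
    "salinity_psu": range_test(0.0, 42.0),
}

def run_tests(cruise_data: Dict[str, List[float]]) -> Dict[str, str]:
    """Run every registered test against the supplied readings."""
    return {stream: registry[stream](cruise_data.get(stream, []))
            for stream in registry}

results = run_tests({"sea_surface_temp_c": [18.2, 18.4, 55.0],
                     "salinity_psu": [35.1, 35.0]})
# results -> {"sea_surface_temp_c": "out_of_range", "salinity_psu": "pass"}
```

The registry indirection is what makes the scheme a plugin architecture: a third-party group such as SAMOS could contribute its own `QualityTest` callables without touching the code that runs them.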