Sample records for process operating performance

  1. 19 CFR 10.178 - Direct costs of processing operations performed in the beneficiary developing country.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 1 2011-04-01 2011-04-01 false Direct costs of processing operations performed in... processing operations performed in the beneficiary developing country. (a) Items included in the direct costs of processing operations. As used in § 10.176, the words “direct costs of processing operations...

  2. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad [Rochester, MN]

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
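
    A minimal sketch of the two-phase scheme, assuming a sum reduction and plain Python lists standing in for compute nodes and processing cores (the function names are illustrative, not from the patent):

        # Two-phase allreduce from the abstract, with a sum reduction:
        # phase 1 reduces around logical rings (ring k = core k of every
        # node), phase 2 combines the ring results locally on each node.

        def ring_allreduce(values):
            # Stand-in for the ring pass: every ring member gets the ring sum.
            total = sum(values)
            return [total] * len(values)

        def two_phase_allreduce(contribs):
            # contribs[node][core] is one processing core's contribution.
            n_nodes, n_cores = len(contribs), len(contribs[0])

            # Establish one logical ring per core index.
            rings = [[contribs[node][k] for node in range(n_nodes)]
                     for k in range(n_cores)]

            # Global phase: allreduce within each ring.
            ring_results = [ring_allreduce(ring) for ring in rings]

            # Local phase: each node reduces over its cores' global results,
            # which yields the grand total on every node.
            return [sum(ring_results[k][node] for k in range(n_cores))
                    for node in range(n_nodes)]

        # Three nodes with two cores each; every node gets the grand total.
        print(two_phase_allreduce([[1, 2], [3, 4], [5, 6]]))   # -> [21, 21, 21]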

  3. 19 CFR 10.197 - Direct costs of processing operations performed in a beneficiary country or countries.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 1 2011-04-01 2011-04-01 false Direct costs of processing operations performed in... TO A REDUCED RATE, ETC. Caribbean Basin Initiative § 10.197 Direct costs of processing operations... operations. As used in § 10.195 and § 10.198, the words “direct costs of processing operations” mean those...

  4. Statistical process control as a tool for controlling operating room performance: retrospective analysis and benchmarking.

    PubMed

    Chen, Tsung-Tai; Chang, Yun-Jau; Ku, Shei-Ling; Chung, Kuo-Piao

    2010-10-01

    There is much research using statistical process control (SPC) to monitor surgical performance, including comparisons among groups to detect small process shifts, but few of these studies have included a stabilization process. This study aimed to analyse the performance of surgeons in the operating room (OR) and to set a benchmark by SPC after a stabilization process. The OR profiles of 499 patients who underwent laparoscopic cholecystectomy performed by 16 surgeons at a tertiary hospital in Taiwan during 2005 and 2006 were recorded. SPC was applied to analyse operative and non-operative times using the following five steps: first, the times were divided into two segments; second, they were normalized; third, they were evaluated as individual processes; fourth, the ARL(0) was calculated; and fifth, the different groups (surgeons) were compared. Outliers were excluded to ensure stability for each group and to facilitate inter-group comparison. The results showed that in the stabilized process, only one surgeon exhibited a significantly shorter total process time (including operative time and non-operative time). In this study, we use five steps to demonstrate how to control surgical and non-surgical time in phase I. There are some measures that can be taken to prevent skew and instability in the process. Also, using SPC, one surgeon can be shown to be a real benchmark. © 2010 Blackwell Publishing Ltd.
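
    For illustration, a minimal phase-I individuals chart of the kind used in step three might look as follows; the 3-sigma limits and the d2 = 1.128 constant are standard SPC conventions rather than values taken from the study:

        import numpy as np

        def individuals_chart(times):
            """Phase-I individuals (X) chart: estimate sigma from the average
            moving range and flag points outside the 3-sigma control limits."""
            x = np.asarray(times, dtype=float)
            mr = np.abs(np.diff(x))            # moving ranges |x[i] - x[i-1]|
            sigma_hat = mr.mean() / 1.128      # d2 = 1.128 for subgroups of 2
            center = x.mean()
            ucl = center + 3.0 * sigma_hat
            lcl = center - 3.0 * sigma_hat
            outliers = np.where((x > ucl) | (x < lcl))[0]
            return center, lcl, ucl, outliers

        # Example: flag unusually long operative times before benchmarking.
        center, lcl, ucl, out = individuals_chart([52, 55, 49, 61, 58, 95, 53])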

  5. Stochastic availability analysis of operational data systems in the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Issa, T. N.

    1991-01-01

    Existing availability models of standby redundant systems consider only an operator's performance and its interaction with the hardware performance. In the case of operational data systems in the Deep Space Network (DSN), in addition to an operator-system interface, a controller reconfigures the system and links a standby unit into the network data path upon failure of the operating unit. A stochastic (Markovian) process technique is used to model and analyze the availability performance, and the occurrence of degradation due to partial failures is quantitatively incorporated into the model. Exact expressions for the steady-state availability and the proportion of degraded performance are derived for the systems under study. The interaction among the hardware, operator, and controller performance parameters, and that interaction's effect on data availability, are evaluated and illustrated for an operational data processing system.
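
    As a hedged sketch of the modeling approach, the following solves a three-state continuous-time Markov chain (up, degraded, down) for its steady-state probabilities; the transition rates are invented, not the paper's DSN parameters:

        import numpy as np

        # Three-state continuous-time Markov model: 0 = fully up, 1 = degraded
        # (partial failure), 2 = down. All rates (per hour) are illustrative.
        lam_d, lam_f = 0.02, 0.005     # partial- and full-failure rates
        mu_d, mu_r = 0.2, 0.5          # recovery from degraded, repair from down

        Q = np.array([
            [-(lam_d + lam_f),  lam_d,            lam_f],
            [ mu_d,            -(mu_d + lam_f),   lam_f],
            [ mu_r,             0.0,             -mu_r ],
        ])

        # Steady state: solve pi @ Q = 0 subject to sum(pi) = 1.
        A = np.vstack([Q.T, np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)

        availability = pi[0] + pi[1]   # system delivers data when up or degraded
        proportion_degraded = pi[1]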

  6. Global tree network for computing structures enabling global processing operations

    DOEpatents

    Blumrich; Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2010-01-19

    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in an asynchronous or synchronized manner, and is physically and logically partitionable.
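
    A toy software sketch of the reduce-upstream and broadcast-downstream pattern such a tree supports (the patent describes hardware routers; the array-encoded binary tree and sum reduction here are purely illustrative):

        # Implicit binary tree in an array: node 0 is the root, the children
        # of node i are nodes 2i+1 and 2i+2. Sum is the reduction operator.

        def tree_reduce(values, i=0):
            """Reduce upstream: each node combines its value with its subtrees'."""
            if i >= len(values):
                return 0
            return values[i] + tree_reduce(values, 2 * i + 1) \
                             + tree_reduce(values, 2 * i + 2)

        def tree_broadcast(result, n):
            """Broadcast downstream: the root's result reaches every node."""
            return [result] * n

        vals = [3, 1, 4, 1, 5, 9, 2]
        total = tree_reduce(vals)                   # reduction toward the root
        everywhere = tree_broadcast(total, len(vals))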

  7. Performing process migration with allreduce operations

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Wallenfelt, Brian Paul

    2010-12-14

    Compute nodes perform allreduce operations that swap processes at nodes. A first allreduce operation generates a first result and uses a first process from a first compute node, a second process from a second compute node, and zeros from other compute nodes. The first compute node replaces the first process with the first result. A second allreduce operation generates a second result and uses the first result from the first compute node, the second process from the second compute node, and zeros from others. The second compute node replaces the second process with the second result, which is the first process. A third allreduce operation generates a third result and uses the first result from the first compute node, the second result from the second compute node, and zeros from others. The first compute node replaces the first result with the third result, which is the second process.
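
    The abstract does not name the reduction operator, but bitwise XOR satisfies the three stated identities (r1 = p1 ^ p2, then r1 ^ p2 = p1, then r1 ^ r2 = p2), so this hedged sketch uses XOR; the helper names are invented for illustration:

        from functools import reduce
        from operator import xor

        def allreduce(contribs):
            # Every node receives the XOR of all contributions.
            return reduce(xor, contribs)

        def migrate(nodes, a, b):
            """Swap the processes held at nodes a and b using three allreduce
            operations; other nodes contribute zeros (the identity for XOR)."""
            masked = lambda held: [held[i] if i in (a, b) else 0
                                   for i in range(len(held))]
            r1 = allreduce(masked(nodes))        # p_a ^ p_b
            nodes[a] = r1
            r2 = allreduce(masked(nodes))        # r1 ^ p_b == p_a
            nodes[b] = r2
            r3 = allreduce(masked(nodes))        # r1 ^ r2 == p_b
            nodes[a] = r3
            return nodes

        # Processes encoded as integers; nodes 0 and 1 swap their processes.
        print(migrate([0b1010, 0b0110, 0, 0], 0, 1))  # -> [6, 10, 0, 0]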

  8. Influences of operational parameters on phosphorus removal in batch and continuous electrocoagulation process performance.

    PubMed

    Nguyen, Dinh Duc; Yoon, Yong Soo; Bui, Xuan Thanh; Kim, Sung Su; Chang, Soon Woong; Guo, Wenshan; Ngo, Huu Hao

    2017-11-01

    Performance of an electrocoagulation (EC) process in batch and continuous operating modes was thoroughly investigated and evaluated for enhancing wastewater phosphorus removal under various operating conditions, individually or combined with initial phosphorus concentration, wastewater conductivity, current density, and electrolysis times. The results revealed excellent phosphorus removal (72.7-100%) for both processes within 3-6 min of electrolysis, with relatively low energy requirements, i.e., less than 0.5 kWh/m³ of treated wastewater. However, the removal efficiency of phosphorus in the continuous EC operation mode was better than that in batch mode within the scope of the study. Additionally, the rate and efficiency of phosphorus removal strongly depended on operational parameters, including wastewater conductivity, initial phosphorus concentration, current density, and electrolysis time. Based on experimental data, a statistical model using response surface methodology (RSM) (multiple-factor optimization) was also established and verified to provide further insights and accurately describe the interactive relationship between the process variables, thus optimizing the EC process performance. The EC process using iron electrodes is promising for improving wastewater phosphorus removal efficiency, and RSM can be a sustainable tool for predicting the performance of the EC process and explaining the influence of the process variables.
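
    As a rough illustration of the RSM step, a two-factor second-order response surface can be fitted by ordinary least squares; the factor levels and responses below are invented, not the study's measurements:

        import numpy as np

        # Second-order response-surface fit for two factors, e.g. current
        # density (x1) and electrolysis time (x2) against phosphorus removal
        # (y, %). The data points here are made up for illustration.
        x1 = np.array([5.0, 5.0, 10.0, 10.0, 7.5, 7.5, 7.5])
        x2 = np.array([3.0, 6.0, 3.0, 6.0, 4.5, 3.0, 6.0])
        y  = np.array([75.0, 88.0, 90.0, 99.0, 93.0, 85.0, 96.0])

        # Design matrix for:
        # y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
        X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)

        def predict(c_density, e_time):
            return beta @ np.array([1.0, c_density, e_time,
                                    c_density * e_time,
                                    c_density**2, e_time**2])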

  9. 40 CFR Table 8 to Subpart Sssss of... - Continuous Compliance with Operating Limits

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... recent performance test; and ii. Conducting annually an inspection of all duct work, vents, and capture... process operating parameters within the limits established during the most recent performance test i... processing rate at or below the maximum organic HAP processing rate established during the most recent...

  10. 40 CFR Table 8 to Subpart Sssss of... - Continuous Compliance with Operating Limits

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... recent performance test; and ii. Conducting annually an inspection of all duct work, vents, and capture... process operating parameters within the limits established during the most recent performance test i... processing rate at or below the maximum organic HAP processing rate established during the most recent...

  11. 40 CFR Table 8 to Subpart Sssss of... - Continuous Compliance with Operating Limits

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... recent performance test; and ii. Conducting annually an inspection of all duct work, vents, and capture... process operating parameters within the limits established during the most recent performance test i... processing rate at or below the maximum organic HAP processing rate established during the most recent...

  12. Application of agent-based system for bioprocess description and process improvement.

    PubMed

    Gao, Ying; Kipling, Katie; Glassey, Jarka; Willis, Mark; Montague, Gary; Zhou, Yuhong; Titchener-Hooker, Nigel J

    2010-01-01

    Modeling plays an important role in bioprocess development for design and scale-up. Predictive models can also be used in biopharmaceutical manufacturing to assist decision-making either to maintain process consistency or to identify optimal operating conditions. To predict the whole bioprocess performance, the strong interactions present in a processing sequence must be adequately modeled. Traditionally, bioprocess modeling considers process units separately, which makes it difficult to capture the interactions between units. In this work, a systematic framework is developed to analyze bioprocesses based on a whole-process understanding and considering the interactions between process operations. An agent-based approach is adopted to provide a flexible infrastructure for the necessary integration of process models. This enables the prediction of overall process behavior, which can then be applied during process development or once manufacturing has commenced, in both cases leading to the capacity for fast evaluation of process improvement options. The multi-agent system comprises a process knowledge base, process models, and a group of functional agents. In this system, agent components co-operate with each other in performing their tasks. These tasks include describing the whole process behavior, evaluating process operating conditions, monitoring the operating processes, predicting critical process performance, and providing guidance for decision-making when coping with process deviations. During process development, the system can be used to evaluate the design space for process operation. During manufacture, the system can be applied to identify abnormal process operation events and then to provide suggestions as to how best to cope with the deviations. In all cases, the function of the system is to ensure an efficient manufacturing process. The implementation of the agent-based approach is illustrated via selected application scenarios, which demonstrate how such a framework may enable the better integration of process operations by providing a plant-wide process description to facilitate process improvement. Copyright 2009 American Institute of Chemical Engineers

  13. A noncoherent optical analog image processor.

    PubMed

    Swindell, W

    1970-11-01

    The description of a machine that performs a variety of image processing operations is given, together with a theoretical discussion of its operation. Spatial processing is performed by corrective convolution techniques. Density processing is achieved by means of an electrical transfer function generator included in the video circuit. Examples of images processed for removal of image motion blur, defocus, and atmospheric seeing blur are shown.

  14. Global interrupt and barrier networks

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E; Heidelberger, Philip; Kopcsay, Gerard V.; Steinmacher-Burow, Burkhard D.; Takken, Todd E.

    2008-10-28

    A system and method for generating global asynchronous signals in a computing structure. In particular, a global interrupt and barrier network is provided that implements logic for generating global interrupt and barrier signals for controlling global asynchronous operations performed by processing elements at selected processing nodes of a computing structure in accordance with a processing algorithm; and includes the physical interconnecting of the processing nodes for communicating the global interrupt and barrier signals to the elements via low-latency paths. The global asynchronous signals respectively initiate interrupt and barrier operations at the processing nodes at times selected for optimizing performance of the processing algorithms. In one embodiment, the global interrupt and barrier network is implemented in a scalable, massively parallel supercomputing device structure comprising a plurality of processing nodes interconnected by multiple independent networks, with each node including one or more processing elements for performing computation or communication activity as required when performing parallel algorithm operations. One of the multiple independent networks is a global tree network for enabling high-speed global tree communications among global tree network nodes or sub-trees thereof. The global interrupt and barrier network may operate in parallel with the global tree network for providing global asynchronous sideband signals.

  15. Distributed performance counters

    DOEpatents

    Davis, Kristan D; Evans, Kahn C; Gara, Alan; Satterfield, David L

    2013-11-26

    A plurality of first performance counter modules is coupled to a plurality of processing cores. The plurality of first performance counter modules is operable to collect performance data associated with the plurality of processing cores respectively. A plurality of second performance counter modules are coupled to a plurality of L2 cache units, and the plurality of second performance counter modules are operable to collect performance data associated with the plurality of L2 cache units respectively. A central performance counter module may be operable to coordinate counter data from the plurality of first performance counter modules and the plurality of second performance counter modules, with the central performance counter module, the plurality of first performance counter modules, and the plurality of second performance counter modules connected by a daisy chain connection.

  16. 40 CFR 60.473 - Monitoring of operations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Asphalt Processing and Asphalt Roofing Manufacture § 60.473 Monitoring of operations. (a) The owner or operator...

  17. 40 CFR 60.473 - Monitoring of operations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Asphalt Processing and Asphalt Roofing Manufacture § 60.473 Monitoring of operations. (a) The owner or operator...

  18. 40 CFR 60.473 - Monitoring of operations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Asphalt Processing and Asphalt Roofing Manufacture § 60.473 Monitoring of operations. (a) The owner or operator...

  19. Measuring, managing and maximizing refinery performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bascur, O.A.; Kennedy, J.P.

    1996-01-01

    Implementing continuous quality improvement is a confluence of total quality management, people empowerment, performance indicators and information engineering. Supporting information technologies allow a refiner to narrow the gap between management objectives and the process control level. Dynamic performance monitoring benefits come from production cost savings, improved communications and enhanced decision making. A refinery workgroup information flow model helps automate continuous improvement of processes, performance and the organization. The paper discusses the rethinking of refinery operations, dynamic performance monitoring, continuous process improvement, the knowledge coordinator and repository manager, an integrated plant operations workflow, and successful implementation.

  20. User's manual SIG: a general-purpose signal processing program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lager, D.; Azevedo, S.

    1983-10-25

    SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time- and frequency-domain signals. However, it has been designed to ultimately accommodate other representations for data such as multiplexed signals and complex matrices. Many of the basic operations one would perform on digitized data are contained in the core SIG package. Out of these core commands, more powerful signal processing algorithms may be built. Many different operations on time- and frequency-domain signals can be performed by SIG. They include operations on the samples of a signal, such as adding a scalar to each sample, operations on the entire signal such as digital filtering, and operations on two or more signals such as adding two signals. Signals may be simulated, such as a pulse train or a random waveform. Graphics operations display signals and spectra.
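
    SIG itself is a command-driven program; as a hedged modern analogue, the numpy/scipy lines below mirror the three classes of operations the abstract lists (per-sample, whole-signal, and two-signal operations):

        import numpy as np
        from scipy import signal

        fs = 1000.0                                   # sample rate, Hz
        t = np.arange(0, 1, 1 / fs)
        pulse_train = signal.square(2 * np.pi * 5 * t)        # simulated signal
        noise = np.random.default_rng(0).normal(size=t.size)  # random waveform

        # Per-sample operation: add a scalar to each sample.
        shifted = pulse_train + 0.5

        # Whole-signal operation: digital low-pass filtering.
        b, a = signal.butter(4, 50, fs=fs)            # 4th order, 50 Hz cutoff
        filtered = signal.filtfilt(b, a, pulse_train + 0.1 * noise)

        # Two-signal operation: add two signals.
        combined = pulse_train + noise

        # Frequency-domain view (a spectrum "display" without graphics).
        freqs, psd = signal.welch(combined, fs=fs)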

  1. Manufacturing Execution Systems: Examples of Performance Indicator and Operational Robustness Tools.

    PubMed

    Gendre, Yannick; Waridel, Gérard; Guyon, Myrtille; Demuth, Jean-François; Guelpa, Hervé; Humbert, Thierry

    Manufacturing Execution Systems (MES) are computerized systems used to measure production performance in terms of productivity, yield, and quality. In the first part, performance indicators and overall equipment effectiveness (OEE), process robustness tools, and statistical process control are described. The second part details some tools that help operators keep the process robust and under control by preventing deviations from target control charts. MES was developed by Syngenta together with CIMO for automation.

  2. General Recommendations on Fatigue Risk Management for the Canadian Forces

    DTIC Science & Technology

    2010-04-01

    missions performed in aviation require an individual(s) to process a large amount of information in a short period of time and to do this on a continuous...information processing required during sustained operations can deteriorate an individual's ability to perform a task. Given the high operational tempo...memory, which, in turn, is utilized to perform human thought processes (Baddeley, 2003). While various versions of this theory exist, they all share

  3. Real-Time Embedded High Performance Computing: Communications Scheduling.

    DTIC Science & Technology

    1995-06-01

    real-time operating system must explicitly limit the degradation of the timing performance of all processes as the number of processes...adequately supported by a real-time operating system, could compound the development problems encountered in the past. Many experts feel that the... real-time operating system support for an MPP, although they all provide some support for distributed real-time applications. A distributed real

  4. Group interaction and flight crew performance

    NASA Technical Reports Server (NTRS)

    Foushee, H. Clayton; Helmreich, Robert L.

    1988-01-01

    The application of human-factors analysis to the performance of aircraft-operation tasks by the crew as a group is discussed in an introductory review and illustrated with anecdotal material. Topics addressed include the function of a group in the operational environment, the classification of group performance factors (input, process, and output parameters), input variables and the flight crew process, and the effect of process variables on performance. Consideration is given to aviation safety issues, techniques for altering group norms, ways of increasing crew effort and coordination, and the optimization of group composition.

  5. Amplitude image processing by diffractive optics.

    PubMed

    Cagigal, Manuel P; Valle, Pedro J; Canales, V F

    2016-02-22

    In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, like low-pass or high-pass filtering, is carried out using diffractive optics elements (DOE), since these allow operating on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we present an analysis of amplitude image processing performance. In particular, a DOE Laplacian filter is applied to simulated astronomical images to detect two stars one Airy ring apart. We also verify by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing.
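
    The paper implements the filter optically with a DOE; as a numerical stand-in, the same Laplacian operation on a complex field amplitude can be sketched as a Fourier-domain multiplication by -(u^2 + v^2) before intensity detection (grid size and source positions are arbitrary):

        import numpy as np

        def laplacian_amplitude_filter(field):
            """Apply a Laplacian filter to a complex field amplitude, i.e.
            multiply the spectrum by -(u^2 + v^2), then detect |field|^2."""
            ny, nx = field.shape
            u = np.fft.fftfreq(nx)
            v = np.fft.fftfreq(ny)
            uu, vv = np.meshgrid(u, v)
            spectrum = np.fft.fft2(field)
            filtered = np.fft.ifft2(-(uu**2 + vv**2) * spectrum)
            return np.abs(filtered) ** 2        # detected intensity

        # Two nearby point sources represented as a complex amplitude.
        field = np.zeros((64, 64), dtype=complex)
        field[32, 30] = field[32, 34] = 1.0
        image = laplacian_amplitude_filter(field)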

  6. The Kepler Science Data Processing Pipeline Source Code Road Map

    NASA Technical Reports Server (NTRS)

    Wohler, Bill; Jenkins, Jon M.; Twicken, Joseph D.; Bryson, Stephen T.; Clarke, Bruce Donald; Middour, Christopher K.; Quintana, Elisa Victoria; Sanderfer, Jesse Thomas; Uddin, Akm Kamal; Sabale, Anima

    2016-01-01

    We give an overview of the operational concepts and architecture of the Kepler Science Processing Pipeline. Designed, developed, operated, and maintained by the Kepler Science Operations Center (SOC) at NASA Ames Research Center, the Science Processing Pipeline is a central element of the Kepler Ground Data System. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center which hosts the computers required to perform data analysis. The SOC's charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Processing Pipeline, including the software algorithms. We present the high-performance, parallel computing software modules of the pipeline that perform transit photometry, pixel-level calibration, systematic error correction, attitude determination, stellar target management, and instrument characterization.

  7. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform.

    PubMed

    Cao, Jianfang; Chen, Lichao; Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up the system by approximately 3.4 times when processing large-scale datasets, which demonstrates the clear superiority of our method. The proposed algorithm in this study demonstrates both better edge detection performance and improved time performance.
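
    A single-node sketch of the Otsu-seeded Canny step using OpenCV; the low = high/2 rule is a common heuristic and not necessarily the paper's exact scheme, and the MapReduce distribution across Hadoop nodes is omitted:

        import cv2

        def otsu_canny(gray_image):
            """Use Otsu's threshold to set Canny's dual thresholds."""
            # cv2.threshold returns the Otsu-optimal threshold as its
            # first value when THRESH_OTSU is requested.
            otsu_thresh, _ = cv2.threshold(gray_image, 0, 255,
                                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            high = otsu_thresh
            low = 0.5 * otsu_thresh   # common heuristic for the low threshold
            return cv2.Canny(gray_image, low, high)

        img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # path illustrative
        edges = otsu_canny(img)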

  8. Kepler Science Operations Center Architecture

    NASA Technical Reports Server (NTRS)

    Middour, Christopher; Klaus, Todd; Jenkins, Jon; Pletcher, David; Cote, Miles; Chandrasekaran, Hema; Wohler, Bill; Girouard, Forrest; Gunter, Jay P.; Uddin, Kamal

    2010-01-01

    We give an overview of the operational concepts and architecture of the Kepler Science Data Pipeline. Designed, developed, operated, and maintained by the Science Operations Center (SOC) at NASA Ames Research Center, the Kepler Science Data Pipeline is a central element of the Kepler Ground Data System. The SOC charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Data Pipeline, including the hardware infrastructure, scientific algorithms, and operational procedures. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center that hosts the computers required to perform data analysis. We discuss the high-performance, parallel computing software modules of the Kepler Science Data Pipeline that perform transit photometry, pixel-level calibration, systematic error-correction, attitude determination, stellar target management, and instrument characterization. We explain how data processing environments are divided to support operational processing and test needs. We explain the operational timelines for data processing and the data constructs that flow into the Kepler Science Data Pipeline.

  9. Understanding facilities design parameters for a remanufacturing system

    NASA Astrophysics Data System (ADS)

    Topcu, Aysegul; Cullinane, Thomas

    2005-11-01

    Remanufacturing is rapidly becoming a very important element in the economies of the world. Products such as washing machines, clothes driers, automobile parts, cell phones and a wide range of consumer durable goods are being reclaimed and sent through processes that restore these products to levels of operating performance that are as good as or better than their new-product performance. The operations involved in the remanufacturing process add several new dimensions to the work that must be performed. Disassembly is an operation that rarely appears on the operations chart of a typical production facility. The inspection and test functions in remanufacturing most often involve several more tasks than those involved in the first-time manufacturing cycle. A close evaluation of almost any remanufacturing operation reveals several points in the process at which parts must be cleaned, tested and stored. Although several researchers have focused their work on optimizing the disassembly function and the inspection, test and store functions, very little research has been devoted to studying the impact of the facilities design on the effectiveness of the remanufacturing process. The purpose of this paper is to delineate the differences between first-time manufacturing operations and remanufacturing operations for durable goods and to identify the features of the facilities design that must be considered if the remanufacturing operations are to be effective.

  10. Aerobic Digestion. Biological Treatment Process Control. Instructor's Guide.

    ERIC Educational Resources Information Center

    Klopping, Paul H.

    This unit on aerobic sludge digestion covers the theory of the process, system components, factors that affect the process performance, standard operational concerns, indicators of steady-state operations, and operational problems. The instructor's guide includes: (1) an overview of the unit; (2) lesson plan; (3) lecture outline (keyed to a set of…

  11. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing.

    PubMed

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2014-10-01

    A push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels.
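
    The paper's fitted model is not reproduced here; a toy occupancy "wave" model conveys the underlying idea that runtime grows stepwise once a kernel's blocks exceed what the GPU can hold concurrently (all numbers are placeholders):

        import math

        def predicted_kernel_time(n_blocks, blocks_per_sm, n_sm, wave_time_ms):
            """Toy occupancy model: blocks execute in 'waves' of at most
            blocks_per_sm * n_sm concurrent blocks; each wave costs
            wave_time_ms. An illustrative stand-in for the paper's model."""
            concurrent = blocks_per_sm * n_sm
            waves = math.ceil(n_blocks / concurrent)
            return waves * wave_time_ms

        # A kernel of 3,000 blocks on a 15-SM GPU holding 8 blocks per SM:
        t = predicted_kernel_time(3000, 8, 15, wave_time_ms=0.4)  # -> 10.0 ms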

  12. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform

    PubMed Central

    Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up the system by approximately 3.4 times when processing large-scale datasets, which demonstrates the clear superiority of our method. The proposed algorithm in this study demonstrates both better edge detection performance and improved time performance. PMID:29861711

  13. Examination of a carton sealing line using a thermographic scanner

    NASA Astrophysics Data System (ADS)

    Kleinfeld, Jack M.

    1999-03-01

    A study of the operation and performance of natural-gas-fired sealing lines for polyethylene-coated beverage containers was performed. Both thermal and geometric data were abstracted from the thermal scans and used to characterize the performance of the sealing line. The impact of process operating variables such as line speed and carton-to-carton spacing was studied. Recommendations for system improvements, instrumentation, and process control were made.

  14. An open system approach to process reengineering in a healthcare operational environment.

    PubMed

    Czuchry, A J; Yasin, M M; Norris, J

    2000-01-01

    The objective of this study is to examine the applicability of process reengineering in a healthcare operational environment. The intake process of a mental healthcare service delivery system is analyzed systematically to identify process-related problems. A methodology that couples an open system orientation with process reengineering is used to overcome operational and patient-related problems associated with the pre-reengineered intake process. The systematic redesign of the intake process resulted in performance improvements in terms of cost, quality, service, and timing.

  15. Performance assessment of membrane distillation for skim milk and whey processing.

    PubMed

    Hausmann, Angela; Sanciolo, Peter; Vasiljevic, Todor; Kulozik, Ulrich; Duke, Mikel

    2014-01-01

    Membrane distillation is an emerging membrane process based on evaporation of a volatile solvent. One of its often stated advantages is the low flux sensitivity toward concentration of the processed fluid, in contrast to reverse osmosis. In the present paper, we looked at 2 high-solids applications of the dairy industry: skim milk and whey. Performance was assessed under various hydrodynamic conditions to investigate the feasibility of fouling mitigation by changing the operating parameters and to compare performance to widespread membrane filtration processes. Whereas filtration processes are hydraulic pressure driven, membrane distillation uses vapor pressure from heat to drive separation and, therefore, operating parameters have a different bearing on the process. Experimental and calculated results identified factors influencing heat and mass transfer under various operating conditions using polytetrafluoroethylene flat-sheet membranes. Linear velocity was found to influence performance during skim milk processing but not during whey processing. Lower feed and higher permeate temperature was found to reduce fouling in the processing of both dairy solutions. Concentration of skim milk and whey by membrane distillation has potential, as it showed high rejection (>99%) of all dairy components and can operate using low electrical energy and pressures (<10 kPa). At higher cross-flow velocities (around 0.141 m/s), fluxes were comparable to those found with reverse osmosis, achieving a sustainable flux of approximately 12 kg/(h·m²) for skim milk of 20% dry matter concentration and approximately 20 kg/(h·m²) after 18 h of operation with whey at 20% dry matter concentration. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
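
    A hedged sketch of the vapor-pressure driving force that distinguishes membrane distillation from pressure-driven filtration, using standard Antoine coefficients for water; the membrane coefficient B is illustrative, not a value measured in the study:

        def p_water_kpa(t_celsius):
            """Antoine equation for water (valid roughly 1-100 C), in kPa.
            Coefficients are standard textbook values (mmHg form, converted)."""
            p_mmhg = 10 ** (8.07131 - 1730.63 / (t_celsius + 233.426))
            return p_mmhg * 0.133322

        # Flux ~ membrane coefficient x vapor-pressure difference across the
        # membrane; B below is illustrative, not a fitted value from the paper.
        B = 1.5  # kg/(h*m^2*kPa)
        flux = B * (p_water_kpa(60.0) - p_water_kpa(25.0))   # kg/(h*m^2)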

  16. Microcomponent chemical process sheet architecture

    DOEpatents

    Wegeng, Robert S.; Drost, M. Kevin; Call, Charles J.; Birmingham, Joseph G.; McDonald, Carolyn Evans; Kurath, Dean E.; Friedrich, Michele

    1998-01-01

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation.

  17. Microcomponent chemical process sheet architecture

    DOEpatents

    Wegeng, R.S.; Drost, M.K.; Call, C.J.; Birmingham, J.G.; McDonald, C.E.; Kurath, D.E.; Friedrich, M.

    1998-09-22

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation. 26 figs.

  18. Image enhancement software for underwater recovery operations: User's manual

    NASA Astrophysics Data System (ADS)

    Partridge, William J.; Therrien, Charles W.

    1989-06-01

    This report describes software for performing image enhancement on live or recorded video images. The software was developed for operational use during underwater recovery operations at the Naval Undersea Warfare Engineering Station. The image processing is performed on an IBM-PC/AT compatible computer equipped with hardware to digitize and display video images. The software provides the capability to provide contrast enhancement and other similar functions in real time through hardware lookup tables, to automatically perform histogram equalization, to capture one or more frames and average them or apply one of several different processing algorithms to a captured frame. The report is in the form of a user manual for the software and includes guided tutorial and reference sections. A Digital Image Processing Primer in the appendix serves to explain the principal concepts that are used in the image processing.
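
    For illustration, the two core enhancement operations the manual describes, histogram equalization and frame averaging, can be sketched with numpy (stand-ins, not the original IBM-PC implementation):

        import numpy as np

        def equalize(frame):
            """Histogram equalization for an 8-bit grayscale frame."""
            hist = np.bincount(frame.ravel(), minlength=256)
            cdf = hist.cumsum()
            lut = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
            return lut.astype(np.uint8)[frame]   # lookup-table remapping

        def average_frames(frames):
            """Average captured frames to suppress video noise."""
            return np.mean(np.stack(frames), axis=0).astype(np.uint8)

        # Example on synthetic low-contrast frames (stand-ins for video).
        rng = np.random.default_rng(1)
        frames = [rng.integers(40, 90, (480, 640), dtype=np.uint8)
                  for _ in range(8)]
        enhanced = equalize(average_frames(frames))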

  19. 19 CFR 10.178 - Direct costs of processing operations performed in the beneficiary developing country.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ..., design, engineering, and blueprint costs insofar as they are allocable to the specific merchandise; and... 19 Customs Duties 1 2013-04-01 2013-04-01 false Direct costs of processing operations performed in... TO A REDUCED RATE, ETC. General Provisions Generalized System of Preferences § 10.178 Direct costs of...

  20. 19 CFR 10.178 - Direct costs of processing operations performed in the beneficiary developing country.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ..., design, engineering, and blueprint costs insofar as they are allocable to the specific merchandise; and... 19 Customs Duties 1 2014-04-01 2014-04-01 false Direct costs of processing operations performed in... TO A REDUCED RATE, ETC. General Provisions Generalized System of Preferences § 10.178 Direct costs of...

  1. 19 CFR 10.197 - Direct costs of processing operations performed in a beneficiary country or countries.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... specific merchandise, including fringe benefits, on-the-job training, and the cost of engineering..., engineering, and blueprint costs insofar as they are allocable to the specific merchandise; and (4) Costs of... 19 Customs Duties 1 2013-04-01 2013-04-01 false Direct costs of processing operations performed in...

  2. 19 CFR 10.178 - Direct costs of processing operations performed in the beneficiary developing country.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ..., design, engineering, and blueprint costs insofar as they are allocable to the specific merchandise; and... 19 Customs Duties 1 2012-04-01 2012-04-01 false Direct costs of processing operations performed in... TO A REDUCED RATE, ETC. General Provisions Generalized System of Preferences § 10.178 Direct costs of...

  3. 19 CFR 10.197 - Direct costs of processing operations performed in a beneficiary country or countries.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... specific merchandise, including fringe benefits, on-the-job training, and the cost of engineering..., engineering, and blueprint costs insofar as they are allocable to the specific merchandise; and (4) Costs of... 19 Customs Duties 1 2014-04-01 2014-04-01 false Direct costs of processing operations performed in...

  4. Collective network for computer structures

    DOEpatents

    Blumrich, Matthias A; Coteus, Paul W; Chen, Dong; Gara, Alan; Giampapa, Mark E; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd E; Steinmacher-Burow, Burkhard D; Vranas, Pavlos M

    2014-01-07

    A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.

  5. Collective network for computer structures

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Coteus, Paul W [Yorktown Heights, NY; Chen, Dong [Croton On Hudson, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Takken, Todd E [Brewster, NY; Steinmacher-Burow, Burkhard D [Wernau, DE; Vranas, Pavlos M [Bedford Hills, NY

    2011-08-16

    A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.

  6. Performance of high intensity fed-batch mammalian cell cultures in disposable bioreactor systems.

    PubMed

    Smelko, John Paul; Wiltberger, Kelly Rae; Hickman, Eric Francis; Morris, Beverly Janey; Blackburn, Tobias James; Ryll, Thomas

    2011-01-01

    The adoption of disposable bioreactor technology as an alternative to traditional nondisposable technology is gaining momentum in the biotechnology industry. The ability of current disposable bioreactor systems to sustain high-intensity fed-batch mammalian cell culture processes needs to be explored. In this study, an assessment was performed comparing single-use bioreactor (SUB) systems of 50-, 250-, and 1,000-L operating scales with traditional stainless steel (SS) and glass vessels using four distinct mammalian cell culture processes. This comparison focuses on expansion and production stage performance. The SUB performance was evaluated based on three main areas: operability, process scalability, and process performance. The process performance and operability aspects were assessed over time, and product quality performance was compared on the day of harvest. Expansion stage results showed disposable bioreactors mirror traditional bioreactors in terms of cellular growth and metabolism. Set-up and disposal times were dramatically reduced using the SUB systems when compared with traditional systems. Production stage runs for both Chinese hamster ovary and NS0 cell lines in the SUB system were able to model SS bioreactor runs at 100-, 200-, 2,000-, and 15,000-L scales. A single 1,000-L SUB run applying a high-intensity fed-batch process was able to generate 7.5 kg of antibody with comparable product quality. Copyright © 2011 American Institute of Chemical Engineers (AIChE).

  7. 19 CFR 10.206 - Value content requirement.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... countries, plus the direct costs of processing operations performed in a beneficiary country or countries...)(1) of this part. Any cost or value of materials or direct costs of processing operations...) combining or packaging operations, or mere dilution with water or mere dilution with another substance that...

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bickford, D.F.

    During the first two years of radioactive operation of the Defense Waste Processing Facility, several areas for improvement in melter design were identified. Because the process requires continuous melter operation, the downtime associated with disruption to melter operation and pouring has a significant cost impact. A major objective of this task is to address performance limitations and deficiencies identified by the user.

  9. Introducing the CERT (Trademark) Resiliency Engineering Framework: Improving the Security and Sustainability Processes

    DTIC Science & Technology

    2007-05-01

    business processes and services. 4. Security operations management addresses the day-to-day activities that the organization performs to protect the...Management TM – Technology Management Security Operations Management SOM – Security Operations Management 5.7.2 Important Operations Competency...deals with the provision of access rights to information and technical assets SOM – Security Operations Management, which addresses the fundamental

  10. 40 CFR Table 2 to Subpart Sssss of... - Operating Limits

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... hour) at or below the maximum organic HAP processing rate established during the most recent... allowable operating temperature for the oxidizer established during the most recent performance test. 6... operating temperature for the oxidizer established during the most recent performance test; and b. Check the...

  11. 40 CFR Table 2 to Subpart Sssss of... - Operating Limits

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... hour) at or below the maximum organic HAP processing rate established during the most recent... allowable operating temperature for the oxidizer established during the most recent performance test. 6... operating temperature for the oxidizer established during the most recent performance test; and b. Check the...

  12. 40 CFR Table 2 to Subpart Sssss of... - Operating Limits

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... hour) at or below the maximum organic HAP processing rate established during the most recent... allowable operating temperature for the oxidizer established during the most recent performance test. 6... operating temperature for the oxidizer established during the most recent performance test; and b. Check the...

  13. Effects of a malfunctional column on conventional and FeedCol-simulated moving bed chromatography performance.

    PubMed

    Song, Ji-Yeon; Oh, Donghoon; Lee, Chang-Ha

    2015-07-17

    The effects of a malfunctional column on the performance of a simulated moving bed (SMB) process were studied experimentally and theoretically. The experimental results of conventional four-zone SMB (2-2-2-2 configuration) and FeedCol operation (2-2-2-2 configuration with one feed column) with one malfunctional column were compared with simulation results of the corresponding SMB processes with a normal column configuration. The malfunctional column in SMB processes significantly deteriorated raffinate purity. However, the extract purity was equivalent or slightly improved compared with the corresponding normal SMB operation because the complete separation zone of the malfunctional column moved to a lower flow rate range in zones II and III. With the malfunctional column configuration, FeedCol operation gave better experimental performance (up to 7%) than conventional SMB operation because controlling product purity with FeedCol operation was more flexible through the use of two additional operating variables, injection time and injection length. Thus, compared with conventional SMB separation, extract with equivalent or slightly better purity could be produced from FeedCol operation even with a malfunctional column, while minimizing the decrease in raffinate purity (less than 2%). Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing

    PubMed Central

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2015-01-01

    A push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA’s CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels. PMID:26566545

  15. The Automation of Reserve Processing.

    ERIC Educational Resources Information Center

    Self, James

    1985-01-01

    Describes an automated reserve processing system developed locally at Clemons Library, University of Virginia. Discussion covers developments in the reserve operation at Clemons Library, automation of the processing and circulation functions of reserve collections, and changes in reserve operation performance and staffing needs due to automation.…

  16. 40 CFR 60.255 - Performance tests and other compliance requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Preparation and Processing Plants § 60.255 Performance tests and other compliance requirements. (a) An owner... within a 60-minute period of) PM performance tests. (c) If any affected coal processing and conveying...) when the coal preparation and processing plant is in operation. Each observation must be recorded as...

  17. 40 CFR 60.255 - Performance tests and other compliance requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Preparation and Processing Plants § 60.255 Performance tests and other compliance requirements. (a) An owner... within a 60-minute period of) PM performance tests. (c) If any affected coal processing and conveying...) when the coal preparation and processing plant is in operation. Each observation must be recorded as...

  18. 40 CFR 60.255 - Performance tests and other compliance requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Preparation and Processing Plants § 60.255 Performance tests and other compliance requirements. (a) An owner... within a 60-minute period of) PM performance tests. (c) If any affected coal processing and conveying...) when the coal preparation and processing plant is in operation. Each observation must be recorded as...

  19. 40 CFR 60.255 - Performance tests and other compliance requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Preparation and Processing Plants § 60.255 Performance tests and other compliance requirements. (a) An owner... within a 60-minute period of) PM performance tests. (c) If any affected coal processing and conveying...) when the coal preparation and processing plant is in operation. Each observation must be recorded as...

  20. Operator performance evaluation using multi criteria decision making methods

    NASA Astrophysics Data System (ADS)

    Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Razali, Siti Fatihah

    2014-06-01

    Operator performance evaluation is a very important operation in the labor-intensive manufacturing industry because the company's productivity depends on the performance of its operators. The aims of operator performance evaluation are to give feedback to operators on their performance, to increase the company's productivity, and to identify the strengths and weaknesses of each operator. In this paper, six multi-criteria decision making methods, Analytical Hierarchy Process (AHP), fuzzy AHP (FAHP), ELECTRE, PROMETHEE II, Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), are used to evaluate the operators' performance and to rank the operators. The performance evaluation is based on six main criteria: competency, experience and skill, teamwork and time punctuality, personal characteristics, capability, and outcome. The study was conducted at one of the SME food manufacturing companies in Selangor. The study found that AHP and FAHP yielded the "outcome" criterion as the most important. The results of the operator performance evaluation showed that the same operator is ranked first by all six methods.
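
    As a hedged sketch of one of the six methods, here is a compact TOPSIS implementation; the scores and weights are invented, though the heavier "outcome" weight echoes the paper's AHP/FAHP finding:

        import numpy as np

        def topsis(scores, weights, benefit):
            """Rank alternatives by closeness to the ideal solution.
            scores: (n_alternatives, n_criteria); benefit[j] is True if
            higher is better for criterion j. Returns closeness in [0, 1],
            where higher is better."""
            norm = scores / np.linalg.norm(scores, axis=0)  # vector normalization
            v = norm * weights
            ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
            anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
            d_pos = np.linalg.norm(v - ideal, axis=1)
            d_neg = np.linalg.norm(v - anti, axis=1)
            return d_neg / (d_pos + d_neg)

        # Three operators scored on the paper's six criteria (values invented).
        scores = np.array([[7, 8, 6, 7, 8, 9],
                           [8, 6, 7, 8, 7, 7],
                           [6, 7, 8, 6, 6, 8]], dtype=float)
        weights = np.array([0.15, 0.15, 0.15, 0.15, 0.15, 0.25])  # "outcome" heaviest
        closeness = topsis(scores, weights, benefit=np.ones(6, dtype=bool))
        ranking = np.argsort(-closeness)        # best operator first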

  1. Solar industrial process heat systems: An assessment of standards for materials and components

    NASA Astrophysics Data System (ADS)

    Rossiter, W. J.; Shipp, W. E.

    1981-09-01

    A study was conducted to obtain information on the performance of materials and components in operational solar industrial process heat (IPH) systems, and to provide recommendations for the development of standards including evaluative test procedures for materials and components. An assessment of the needs for standards for evaluating the long-term performance of materials and components of IPH systems was made. The assessment was based on the availability of existing standards, and information obtained from a field survey of operational systems, the literature, and discussions with individuals in the industry. Field inspections of 10 operational IPH systems were performed.

  2. The Efficiency and the Scalability of an Explicit Operator on an IBM POWER4 System

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present an evaluation of the efficiency and the scalability of an explicit CFD operator on an IBM POWER4 system. The POWER4 architecture exhibits a common trend in HPC architectures: boosting CPU processing power by increasing the number of functional units, while hiding the latency of memory access by increasing the depth of the memory hierarchy. The overall machine performance depends on the ability of the caches-buses-fabric-memory to feed the functional units with the data to be processed. In this study we evaluate the efficiency and scalability of one explicit CFD operator on an IBM POWER4. This operator performs computations at the points of a Cartesian grid and involves a few dozen floating point numbers and on the order of 100 floating point operations per grid point. The computations in all grid points are independent. Specifically, we estimate the efficiency of the RHS operator (SP of NPB) on a single processor as the observed/peak performance ratio. Then we estimate the scalability of the operator on a single chip (2 CPUs), a single MCM (8 CPUs), 16 CPUs, and the whole machine (32 CPUs). Finally, we perform the same measurements for a cache-optimized version of the RHS operator. For our measurements we use the HPM (Hardware Performance Monitor) counters available on the POWER4. These counters allow us to analyze the obtained performance results.
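
    By way of illustration only (the counter values and the assumed clock rate below are invented, not measurements from the paper), the single-processor observed/peak efficiency described here is simply a ratio of floating-point rates:

        # Hypothetical HPM-style counter readings for one run of the RHS operator.
        flop_count = 3.2e9      # floating-point operations counted
        elapsed_seconds = 1.6   # wall-clock time of the measured region

        # Peak rate for one assumed POWER4 core: 2 floating-point units, each
        # retiring a fused multiply-add (2 ops) per cycle, at an assumed 1.3 GHz.
        peak_flops = 2 * 2 * 1.3e9

        efficiency = (flop_count / elapsed_seconds) / peak_flops
        print(f"observed/peak efficiency: {efficiency:.1%}")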

  3. Evaluation of phase separator number in hydrodesulfurization (HDS) unit

    NASA Astrophysics Data System (ADS)

    Jayanti, A. D.; Indarto, A.

    2016-11-01

    The removal of acid gases such as H2S in the natural gas processing industry is required in order to meet sales gas specifications. Hydrodesulfurization (HDS) is one of the refinery processes dedicated to reducing sulphur. In an HDS unit, the phase separator plays an important role in removing H2S from hydrocarbons, operating at a certain pressure and temperature. The number of separators in the system was optimized and then evaluated to understand the performance and economics. The evaluation shows that all systems were able to meet the H2S specification of the desired product. However, the one-separator system resulted in the highest capital and operational costs. The H2S removal process with the two-separator system showed the best performance in terms of energy efficiency, with the lowest capital and operating cost. The two-separator system is therefore recommended as a reference for removing H2S from natural gas in the HDS unit.

  4. LANDSAT-D Mission Operations Review (MOR)

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Portions of the LANDSAT-D systems operation plan are presented. An overview of the data processing operations, logistics and other operations support, prelaunch and post-launch activities, thematic mapper operations during the scrounge period, and LANDSAT-D performance evaluation is given.

  5. Performance measurement: integrating quality management and activity-based cost management.

    PubMed

    McKeon, T

    1996-04-01

    The development of an activity-based management system provides a framework for developing performance measures integral to quality and cost management. Performance measures that cross operational boundaries and embrace core processes provide a mechanism to evaluate operational results related to strategic intention and internal and external customers. The author discusses this measurement process that allows managers to evaluate where they are and where they want to be, and to set a course of action that closes the gap between the two.

  6. Development and testing of operational incident detection algorithms : executive summary

    DOT National Transportation Integrated Search

    1997-09-01

    This report describes the development of operational surveillance data processing algorithms and software for application to urban freeway systems, conforming to a framework in which data processing is performed in stages: sensor malfunction detectio...

  7. OR2020: The Operating Room of the Future

    DTIC Science & Technology

    2004-05-01

    3.3 Technical Requirements: Standards and Tools for Improved Operating Room Process Integration...Image processing and visualization tools must be made available to the operating room. 5. Communications issues must be addressed and aim toward...protocols for effectively performing advanced surgeries and using telecommunications-ready tools as needed. The following recommendations were made

  8. An intelligent factory-wide optimal operation system for continuous production process

    NASA Astrophysics Data System (ADS)

    Ding, Jinliang; Chai, Tianyou; Wang, Hongfeng; Wang, Junwei; Zheng, Xiuping

    2016-03-01

    In this study, a novel intelligent factory-wide operation system for a continuous production process is designed to optimise the entire production process, which consists of multiple units; furthermore, this system is developed using process operational data to avoid the complexity of mathematical modelling of the continuous production process. The data-driven approach aims to specify the structure of the optimal operation system; in particular, the operational data of the process are used to formulate each part of the system. In this context, the domain knowledge of process engineers is utilised, and a closed-loop dynamic optimisation strategy, which combines feedback, performance prediction, feed-forward, and dynamic tuning schemes into a framework, is employed. The effectiveness of the proposed system has been verified using industrial experimental results.

  9. Using quantum process tomography to characterize decoherence in an analog electronic device

    NASA Astrophysics Data System (ADS)

    Ostrove, Corey; La Cour, Brian; Lanham, Andrew; Ott, Granville

    The mathematical structure of a universal gate-based quantum computer can be emulated faithfully on a classical electronic device using analog signals to represent a multi-qubit state. We describe a prototype device capable of performing a programmable sequence of single-qubit and controlled two-qubit gate operations on a pair of voltage signals representing the real and imaginary parts of a two-qubit quantum state. Analog filters and true-RMS voltage measurements are used to perform unitary and measurement gate operations. We characterize the degradation of the represented quantum state with successive gate operations by formally performing quantum process tomography to estimate the equivalent decoherence channel. Experimental measurements indicate that the performance of the device may be accurately modeled as an equivalent quantum operation closely resembling a depolarizing channel with a fidelity of over 99%. This work was supported by the Office of Naval Research under Grant No. N00014-14-1-0323.
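
    To make the reported depolarizing-channel comparison concrete, here is a minimal single-qubit sketch (not the authors' tomography code; the depolarizing probability is an assumed value):

        import numpy as np

        def depolarize(rho, p):
            """Depolarizing channel: mix rho with the maximally mixed state."""
            return (1 - p) * rho + p * np.eye(2) / 2

        # Pure input state |+> = (|0> + |1>)/sqrt(2).
        plus = np.array([1.0, 1.0]) / np.sqrt(2)
        rho_out = depolarize(np.outer(plus, plus), p=0.02)  # assumed p

        # For a pure input, fidelity reduces to <psi|rho_out|psi> = 1 - p/2.
        fidelity = np.real(plus @ rho_out @ plus)
        print(f"fidelity after channel: {fidelity:.4f}")  # 0.9900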

  10. Linking performance benchmarking of refinery process chemicals to refinery key performance indicators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, J.M.; Nieman, L.D.

    In 1977 Solomon Associates, Inc. issued its first study of refining in the US, entitled Comparative Performance Analysis for Fuel Product Refineries and most commonly referred to as the Solomon Study or the Fuels Study. In late 1993, both the Water and Waste Water Management and Petroleum Divisions of Nalco Chemical Company came to the same conclusion: that they must have a better understanding of the Solomon Study process and have some input to this system of measurement. The authors first approached Solomon Associates with the idea that a specific study should be done of specialty chemicals used in the refinery. They felt that this would result in two studies, one for water treatment applications and one for process. The water treatment study came first and was completed in 1993 with the United States Petroleum Refineries Water Treatment Performance Analysis for Operating Year 1993. The process study, entitled United States Petroleum Refinery Process Treatment Performance Analysis for Operating Years 1994-95, will be issued in the 2nd quarter of this year by Nalco/Exxon Energy Chemicals, L.P., which includes the combined resources of the former Petroleum Division of Nalco Chemical Company (including the petroleum-related portions of most of its overseas companies) and the petroleum-related specialty chemical operations of Exxon Chemical on a global basis. What follows is a recap of the process study focus, some examples of output, and comment on both the linkage to key refinery operating indicators and the perception of the effect of such measurement on the supplier relationship of the future.

  11. Implementing asynchronous collective operations in a multi-node processing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Eisley, Noel A.; Heidelberger, Philip

    A method, system, and computer program product are disclosed for implementing an asynchronous collective operation in a multi-node data processing system. In one embodiment, the method comprises sending data to a plurality of nodes in the data processing system, broadcasting a remote get to the plurality of nodes, and using this remote get to implement asynchronous collective operations on the data by the plurality of nodes. In one embodiment, each of the nodes performs only one task in the asynchronous operations, and each node sets up a base address table with an entry for a base address of a memory buffer associated with that node. In another embodiment, each of the nodes performs a plurality of tasks in said collective operations, and each task of each node sets up a base address table with an entry for a base address of a memory buffer associated with the task.

  12. International Space Station Increment Operations Services

    NASA Astrophysics Data System (ADS)

    Michaelis, Horst; Sielaff, Christian

    2002-01-01

    The Industrial Operator (IO) has defined End-to-End services to perform all required operations tasks efficiently for the Manned Space Program (MSP) as agreed during the Ministerial Council in Edinburgh in November 2001. Those services are the result of a detailed task analysis based on the operations processes as derived from the Space Station Program Implementation Plans (SPIP) and defined in the Operations Processes Documents (OPD). These services are related to ISS Increment Operations and ATV Mission Operations. Each of these End-to-End services is typically characterised by the following properties: It has a clearly defined starting point, where all requirements on the end product are fixed and the associated performance metrics of the customer are well defined. It has a clearly defined ending point, when the product or service is delivered to the customer and accepted by him, according to the performance metrics defined at the start point. The implementation of the process might be restricted by external boundary conditions and constraints mutually agreed with the customer. As far as those are respected, the IO has the free choice to select methods and means of implementation. The ISS Increment Operations Service (IOS) activities required for the MSP Exploitation program cover the complete increment-specific cycle, starting with the support to strategic planning and ending with the post-increment evaluation. These activities are divided into sub-services including the following tasks: ISS Planning Support (covering the support to strategic and tactical planning up to the generation), Development & Payload Integration Support, ISS Increment Preparation, and ISS Increment Execution. These processes are tied together by the Increment Integration Management, which provides the planning and scheduling of all activities as well as the technical management of the overall process. The paper describes the entire End-to-End ISS Increment Operations service and the implementation to support the Columbus Flight 1E related increment and subsequent ISS increments. Special attention is paid to the implications caused by long-term operations on hardware, software and operations personnel.

  13. A performance comparison of the IBM RS/6000 and the Astronautics ZS-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, W.M.; Abraham, S.G.; Davidson, E.S.

    1991-01-01

    Concurrent uniprocessor architectures, of which vector and superscalar are two examples, are designed to capitalize on fine-grain parallelism. The authors have developed a performance evaluation method for comparing and improving these architectures, and in this article they present the methodology and a detailed case study of two machines. The runtime of many programs is dominated by time spent in loop constructs - for example, Fortran Do-loops. Loops generally comprise two logical processes: The access process generates addresses for memory operations while the execute process operates on floating-point data. Memory access patterns typically can be generated independently of the data in the execute process. This independence allows the access process to slip ahead, thereby hiding memory latency. The IBM 360/91 was designed in 1967 to achieve slip dynamically, at runtime. One CPU unit executes integer operations while another handles floating-point operations. Other machines, including the VAX 9000 and the IBM RS/6000, use a similar approach.

  14. Cold Test Operation of the German VEK Vitrification Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleisch, J.; Schwaab, E.; Weishaupt, M.

    2008-07-01

    In 2007 the German High-Level Liquid Waste (HLLW) vitrification plant VEK (Verglasungseinrichtung Karlsruhe) passed a three-month integral cold test operation as the final step before entering the hot phase. The overall performance of the vitrification process equipment, with a liquid-fed ceramic glass melter as the main component, proved to be completely in line with the requirements of the regulatory body. The retention efficiency of the main radioactivity-bearing elements across the melter and wet off-gas treatment system distinctly exceeded the design values. The strategy to produce a specified waste glass could be successfully demonstrated. The results of the cold test operation allow entering the next step of hot commissioning, i.e. processing of approximately 2 m³ of diluted HLLW. In summary: an important step of the VEK vitrification plant towards hot operation has been the performance of the cold test operation from April to July 2007. This first integral operation was carried out under boundary conditions and rules established for radioactive operation. Operation and process control were carried out following the procedures as documented in the licensed operational manuals. The function of the process technology and the safe operation could be demonstrated. No severe problems were encountered. Based on the positive results of the cold test, application for the license for hot operation has been initiated and approval is expected in the near future. (authors)

  15. Advancing metropolitan planning for operations : an objectives-driven, performance-based approach : a guidebook

    DOT National Transportation Integrated Search

    2010-02-01

    This guidebook presents an approach for integrating management and operations (M&O) strategies into the metropolitan transportation planning process that is designed to maximize the performance of the existing and planned transportation system. This ...

  16. Statistical process control: separating signal from noise in emergency department operations.

    PubMed

    Pimentel, Laura; Barrueto, Fermin

    2015-05-01

    Statistical process control (SPC) is a visually appealing and statistically rigorous methodology very suitable to the analysis of emergency department (ED) operations. We demonstrate that the control chart is the primary tool of SPC; it is constructed by plotting data measuring the key quality indicators of operational processes in rationally ordered subgroups such as units of time. Control limits are calculated using formulas reflecting the variation in the data points from one another and from the mean. SPC allows managers to determine whether operational processes are controlled and predictable. We review why the moving range chart is most appropriate for use in the complex ED milieu, how to apply SPC to ED operations, and how to determine when performance improvement is needed. SPC is an excellent tool for operational analysis and quality improvement for these reasons: 1) control charts make large data sets intuitively coherent by integrating statistical and visual descriptions; 2) SPC provides analysis of process stability and capability rather than simple comparison with a benchmark; 3) SPC allows distinction between special cause variation (signal), indicating an unstable process requiring action, and common cause variation (noise), reflecting a stable process; and 4) SPC keeps the focus of quality improvement on process rather than individual performance. Because data have no meaning apart from their context, and every process generates information that can be used to improve it, we contend that SPC should be seriously considered for driving quality improvement in emergency medicine. Copyright © 2015 Elsevier Inc. All rights reserved.
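
    As a generic sketch of the individuals/moving-range (XmR) chart recommended here (standard SPC constants; the daily values are invented, not ED data from the article):

        import numpy as np

        # Hypothetical daily median ED length-of-stay values, in minutes.
        data = np.array([182, 175, 190, 168, 201, 185, 177, 240, 188, 179])

        mean = data.mean()
        mr_bar = np.abs(np.diff(data)).mean()   # average moving range

        # Standard XmR constants: 2.66 for individuals, 3.267 for ranges.
        ucl, lcl = mean + 2.66 * mr_bar, mean - 2.66 * mr_bar
        mr_ucl = 3.267 * mr_bar

        # Points outside the limits indicate special cause variation (signal);
        # points within them reflect common cause variation (noise).
        print(f"UCL={ucl:.1f}, LCL={lcl:.1f}", data[(data > ucl) | (data < lcl)])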

  17. 40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...

  18. 40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...

  19. 40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...

  20. "Measuring Operational Effectiveness of Information Technology Infrastructure Library (IIL) and the Impact of Critical Facilities Inclusion in the Process."

    ERIC Educational Resources Information Center

    Woodell, Eric A.

    2013-01-01

    Information Technology (IT) professionals use the Information Technology Infrastructure Library (ITIL) process to better manage their business operations, measure performance, improve reliability and lower costs. This study examined the operational results of those data centers using ITIL against those that do not, and whether the results change…

  1. Operational Control Procedures for the Activated Sludge Process, Part I - Observations, Part II - Control Tests.

    ERIC Educational Resources Information Center

    West, Alfred W.

    This is the first in a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. Part I of this document deals with physical observations which should be performed during each routine control test. Part II…

  2. Assessing hemispheric specialization for processing arithmetic skills in adults: A functional transcranial doppler ultrasonography (fTCD) study.

    PubMed

    Connaughton, Veronica M; Amiruddin, Azhani; Clunies-Ross, Karen L; French, Noel; Fox, Allison M

    2017-05-01

    A major model of the cerebral circuits that underpin arithmetic calculation is the triple-code model of numerical processing. This model proposes that the lateralization of mathematical operations is organized across three circuits: a left-hemispheric dominant verbal code, a bilateral magnitude representation of numbers, and a bilateral Arabic number code. This study simultaneously measured the blood flow of both middle cerebral arteries using functional transcranial Doppler ultrasonography to assess hemispheric specialization during the performance of both language and arithmetic tasks. The propositions of the triple-code model were assessed in a non-clinical adult group by measuring cerebral blood flow during the performance of multiplication and subtraction problems. Participants were 17 adults aged between 18 and 27 years. We obtained laterality indices for each type of mathematical operation and compared these in participants with left-hemispheric language dominance. It was hypothesized that blood flow would lateralize to the left hemisphere during the performance of multiplication operations, but would not lateralize during the performance of subtraction operations. Hemispheric blood flow was significantly left lateralized during the multiplication task, but was not lateralized during the subtraction task. Compared to high spatial resolution neuroimaging techniques previously used to measure cerebral lateralization, functional transcranial Doppler ultrasonography is a cost-effective measure that provides a superior temporal representation of arithmetic cognition. These results provide support for the triple-code model of arithmetic processing and offer complementary evidence that multiplication operations are processed differently in the adult brain compared to subtraction operations. Copyright © 2017 Elsevier B.V. All rights reserved.
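
    For context, one common fTCD convention (a sketch of a standard formula, not the study's analysis pipeline; the velocity values are invented) computes a laterality index from task-related left and right middle cerebral artery velocities:

        import numpy as np

        def laterality_index(left_v, right_v):
            """Positive LI indicates left-hemisphere dominance for the epoch."""
            left_v, right_v = np.asarray(left_v), np.asarray(right_v)
            return 100 * (left_v - right_v).mean() / (left_v + right_v).mean()

        # Hypothetical event-related velocity increases (cm/s) per trial.
        li_multiplication = laterality_index([3.1, 2.8, 3.4], [2.2, 2.0, 2.5])
        li_subtraction = laterality_index([2.6, 2.9, 2.7], [2.5, 2.8, 2.8])
        print(f"multiplication LI={li_multiplication:.1f}, "
              f"subtraction LI={li_subtraction:.1f}")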

  3. Onboard Processing and Autonomous Operations on the IPEX Cubesat

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Doubleday, Joshua; Ortega, Kevin; Flatley, Tom; Crum, Gary; Geist, Alessandro; Lin, Michael; Williams, Austin; Bellardo, John; Puig-Suari, Jordi

    2012-01-01

    IPEX is a 1U CubeSat sponsored by the NASA Earth Science Technology Office (ESTO), the goals of which are to: (1) flight validate high performance flight computing, (2) flight validate onboard instrument data processing and product generation software, (3) flight validate autonomous operations for instrument processing, and (4) enhance NASA outreach and university ties.

  4. 40 CFR 63.138 - Process wastewater provisions-performance standards for treatment processes managing Group 1...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... each treatment process. (b) Control options: Group 1 wastewater streams for Table 9 compounds. The... section. (c) Control options: Group 1 wastewater streams for Table 8 compounds. The owner or operator...) Residuals. For each residual removed from a Group 1 wastewater stream, the owner or operator shall control...

  5. 40 CFR 63.138 - Process wastewater provisions-performance standards for treatment processes managing Group 1...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... each treatment process. (b) Control options: Group 1 wastewater streams for Table 9 compounds. The... section. (c) Control options: Group 1 wastewater streams for Table 8 compounds. The owner or operator...) Residuals. For each residual removed from a Group 1 wastewater stream, the owner or operator shall control...

  6. 40 CFR 63.138 - Process wastewater provisions-performance standards for treatment processes managing Group 1...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... each treatment process. (b) Control options: Group 1 wastewater streams for Table 9 compounds. The... section. (c) Control options: Group 1 wastewater streams for Table 8 compounds. The owner or operator...) Residuals. For each residual removed from a Group 1 wastewater stream, the owner or operator shall control...

  7. Information Assurance Tasks Supporting the Processing of Electronic Records Archives

    DTIC Science & Technology

    2007-03-01

    ...operation of necessary security features and compare the network performance under OpenVPN (openvpn.net) operation with the network performance under no-VPN operation (non-VPN) in a gigabit network environment. The reason for selecting the OpenVPN product was based on the previous findings of Khanvilkar

  8. Formal Multilevel Hierarchical Verification of Synchronous MOS VLSI Circuits.

    DTIC Science & Technology

    1987-06-01

    ...depend on whether it is performing flat verification or hierarchical verification. The primary operations of Silica Pithecus when performing flat...signals never arise. The primary operation of Silica Pithecus when performing hierarchical verification is processing constraints to show they hold

  9. Light Water Reactor Sustainability Program Operator Performance Metrics for Control Room Modernization: A Practical Guide for Early Design Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald Boring; Roger Lew; Thomas Ulrich

    2014-03-01

    As control rooms are modernized with new digital systems at nuclear power plants, it is necessary to evaluate operator performance using these systems as part of a verification and validation process. There are no standard, predefined metrics available for assessing what constitutes satisfactory operator interaction with new systems, especially during the early design stages of a new system. This report identifies the process and metrics for evaluating human system interfaces as part of control room modernization. The report includes background information on design and evaluation, a thorough discussion of human performance measures, and a practical example of how the process and metrics have been used as part of a turbine control system upgrade during the formative stages of design. The process and metrics are geared toward generalizability to other applications and serve as a template for utilities undertaking their own control room modernization activities.

  10. Hydrogen peroxide concentration by pervaporation of a ternary liquid solution in microfluidics.

    PubMed

    Ziemecka, Iwona; Haut, Benoît; Scheid, Benoit

    2015-01-21

    Pervaporation in a microfluidic device is performed on liquid ternary solutions of hydrogen peroxide-water-methanol in order to concentrate hydrogen peroxide (H2O2) by removing methanol. A quantitative analysis of the pervaporation of solutions with different initial compositions is performed while varying the operating temperature of the microfluidic device. Experimental results together with a mathematical model of the separation process are used to understand the effect of the operating conditions on the efficiency of the microfluidic device. The parameters that significantly influence the performance of pervaporation in the microfluidic device are determined, and the limitations of the process are discussed. For the analysed system, the operating temperature of the chip has to be below the temperature at which H2O2 decomposes. Therefore, the choice of an adequate reduced operating pressure is required, depending on the expected separation efficiency.

  11. [Performance development of a university operating room after implementation of a central operating room management].

    PubMed

    Waeschle, R M; Sliwa, B; Jipp, M; Pütz, H; Hinz, J; Bauer, M

    2016-08-01

    The difficult financial situation of German hospitals requires measures to improve process quality. Associated increases in revenues in the high-income field "operating room (OR) area" are increasingly the responsibility of OR management, but it has not been shown that the introduction of efficiency-oriented management leads to an increase in process quality and revenues in the operating theatre. Therefore, the performance of the operating theatre of the University Medical Center Göttingen was analyzed for working days in the core operating time from 7.45 a.m. to 3.30 p.m. from 2009 to 2014. The achievement of process target times for the morning surgery start time and the turnover times of anesthesia and OR nurses were calculated as indicators of process quality. The number of operations and the cumulative incision-suture time were also analyzed as aggregated performance indicators. In order to assess the development of revenues in the operating theatre, the revenues from diagnosis-related groups (DRG) for all inpatient and occupational accident cases, adjusted for the regional basic case value from 2009, were calculated for each year. The development of revenues was also analyzed after deduction of revenues resulting from altered economic case weighting. It could be shown that the achievement of process target values for the morning surgery start time improved by 40 %, and that the turnover times were reduced by 50 % for anesthesia and by 36 % for the OR nurses. Together with the introduction of central planning for reallocation, an increase of 21 % in the number of operations and of 12 % in cumulative incision-suture time could be realized. Due to these additional operations, the DRG revenues in 2014 increased to 132 % of the 2009 level, or 127 % if the revenues resulting from altered economic case weighting are excluded. The personnel complement in anesthesia (-1.7 %) and OR nurses (+2.6 %) as well as anesthetists (+6.7 %) increased less than the revenues or was slightly reduced. This improvement in process quality and cumulative incision-suture times, as well as the increase in revenues, reflects the positive impact of an efficiency-oriented central OR management. Through measures of process optimization, the OR management releases the necessary personnel and time resources and thereby achieves the basic prerequisites for increased revenues of the surgical disciplines. The method presented can be used by other hospitals as a guideline to analyze performance development.

  12. Modeling and optimum time performance for concurrent processing

    NASA Technical Reports Server (NTRS)

    Mielke, Roland R.; Stoughton, John W.; Som, Sukhamoy

    1988-01-01

    The development of a new graph theoretic model for describing the relation between a decomposed algorithm and its execution in a data flow environment is presented. Called ATAMM, the model consists of a set of Petri net marked graphs useful for representing decision-free algorithms having large-grained, computationally complex primitive operations. Performance time measures which determine computing speed and throughput capacity are defined, and the ATAMM model is used to develop lower bounds for these times. A concurrent processing operating strategy for achieving optimum time performance is presented and illustrated by example.

  13. Contributions to Executive Dysfunction in Operation Enduring Freedom/Operation Iraqi Freedom Veterans With Posttraumatic Stress Disorder and History of Mild Traumatic Brain Injury.

    PubMed

    Jurick, Sarah M; Crocker, Laura D; Sanderson-Cimino, Mark; Keller, Amber V; Trenova, Liljana S; Boyd, Briana L; Twamley, Elizabeth W; Rodgers, Carie S; Schiehser, Dawn M; Aupperle, Robin L; Jak, Amy J

    Posttraumatic stress disorder (PTSD), history of mild traumatic brain injury (mTBI), and executive function (EF) difficulties are prevalent in Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OIF) Veterans. We evaluated the contributions of injury variables, lower-order cognitive component processes (processing speed/attention), and psychological symptoms to EF. OEF/OIF Veterans (N = 65) with PTSD and history of mTBI were administered neuropsychological tests of EF and self-report assessments of PTSD and depression. Those impaired on one or more EF measures had higher PTSD and depression symptoms and lower processing speed/attention performance than those with intact performance on all EF measures. Across participants, poorer attention/processing speed performance and higher psychological symptoms were associated with worse performance on specific aspects of EF (e.g., inhibition and switching) even after accounting for injury variables. Although direct relationships between EF and injury variables were equivocal, there was an interaction between measures of injury burden and processing speed/attention such that those with greater injury burden exhibited significant and positive relationships between processing speed/attention and inhibition/switching, whereas those with lower injury burden did not. Psychological symptoms as well as lower-order component processes of EF (attention and processing speed) contribute significantly to executive dysfunction in OEF/OIF Veterans with PTSD and history of mTBI. However, there may be equivocal relationships between injury variables and EF that warrant further study. Results provide groundwork for more fully understanding cognitive symptoms in OEF/OIF Veterans with PTSD and history of mTBI that can inform psychological and cognitive interventions in this population.

  14. Vocational Education Operations Analysis Process.

    ERIC Educational Resources Information Center

    California State Dept. of Education, Sacramento. Vocational Education Services.

    This manual on the vocational education operations analysis process is designed to provide vocational administrators/coordinators with an internal device to collect, analyze, and display vocational education performance data. The first section describes the system and includes the following: analysis worksheet, data sources, utilization, system…

  15. 40 CFR 61.31 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... associated elements. (b) Extraction plant means a facility chemically processing beryllium ore to beryllium..., electrochemical machining, etching, or other similar operations. (e) Ceramic plant means a manufacturing plant... compounds used or generated during any process or operation performed by a source subject to this subpart...

  16. 40 CFR 61.31 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... associated elements. (b) Extraction plant means a facility chemically processing beryllium ore to beryllium..., electrochemical machining, etching, or other similar operations. (e) Ceramic plant means a manufacturing plant... compounds used or generated during any process or operation performed by a source subject to this subpart...

  17. 40 CFR 61.31 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... associated elements. (b) Extraction plant means a facility chemically processing beryllium ore to beryllium..., electrochemical machining, etching, or other similar operations. (e) Ceramic plant means a manufacturing plant... compounds used or generated during any process or operation performed by a source subject to this subpart...

  18. Technology’s present situation and the development prospects of energy efficiency monitoring as well as performance testing & analysis for process flow compressors

    NASA Astrophysics Data System (ADS)

    Li, L.; Zhao, Y.; Wang, L.; Yang, Q.; Liu, G.; Tang, B.; Xiao, J.

    2017-08-01

    In this paper, the background of performance testing of in-service process flow compressors in the user field is introduced, the main technical barriers faced in field testing are summarized, and the factors that result in the real efficiencies of most process flow compressors being lower than those guaranteed by the manufacturer are analysed. The authors investigated the present operational situation of process flow compressors in China and found that low-efficiency operation occurs because the compressed gas is generally forced to flow back into the inlet pipe to adapt to varying process parameters; for a centrifugal compressor, for example, the anti-surge valve is always open. To improve the operating efficiency of process compressors, energy efficiency monitoring technology is reviewed and some suggestions are proposed, which form the basis of research on energy efficiency evaluation and/or labelling of process compressors.

  19. DNA Bipedal Motor Achieves a Large Number of Steps Due to Operation Using Microfluidics-Based Interface.

    PubMed

    Tomov, Toma E; Tsukanov, Roman; Glick, Yair; Berger, Yaron; Liber, Miran; Avrahami, Dorit; Gerber, Doron; Nir, Eyal

    2017-04-25

    Realization of bioinspired molecular machines that can perform many and diverse operations in response to external chemical commands is a major goal in nanotechnology, but current molecular machines respond to only a few sequential commands. Lack of effective methods for introduction and removal of command compounds and low efficiencies of the reactions involved are major reasons for the limited performance. We introduce here a user interface based on a microfluidics device and single-molecule fluorescence spectroscopy that allows efficient introduction and removal of chemical commands and enables detailed study of the reaction mechanisms involved in the operation of synthetic molecular machines. The microfluidics provided 64 consecutive DNA strand commands to a DNA-based motor system immobilized inside the microfluidics, driving a bipedal walker to perform 32 steps on a DNA origami track. The microfluidics enabled removal of redundant strands, resulting in a 6-fold increase in processivity relative to an identical motor operated without strand removal and significantly more operations than previously reported for user-controlled DNA nanomachines. In the motor operated without strand removal, redundant strands interfere with motor operation and reduce its performance. The microfluidics also enabled computer control of motor direction and speed. Furthermore, analysis of the reaction kinetics and motor performance in the absence of redundant strands, made possible by the microfluidics, enabled accurate modeling of the walker processivity. This enabled identification of dynamic boundaries and provided an explanation, based on the "trap state" mechanism, for why the motor did not perform an even larger number of steps. This understanding is very important for the development of future motors with significantly improved performance. Our universal interface enables two-way communication between user and molecular machine and, relying on concepts similar to that of solid-phase synthesis, removes limitations on the number of external stimuli. This interface, therefore, is an important step toward realization of reliable, processive, reproducible, and useful externally controlled DNA nanomachines.

  20. Passive serialization in a multitasking environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hennessey, J.P.; Osisek, D.L.; Seigh, J.W. II

    1989-02-28

    In a multiprocessing system having a control program in which data objects are shared among processes, this patent describes a method for serializing references to a data object by the processes so as to prevent invalid references to the data object by any process when an operation requiring exclusive access is performed by another process, comprising the steps of: permitting the processes to reference data objects on a shared access basis without obtaining a shared lock; monitoring a point of execution of the control program which is common to all processes in the system, which occurs regularly in the process' execution and across which no references to any data object can be maintained by any process, except references using locks; establishing a system reference point which occurs after each process in the system has passed the point of execution at least once since the last such system reference point; requesting an operation requiring exclusive access on a selected data object; preventing subsequent references by other processes to the selected data object; waiting until two of the system reference points have occurred; and then performing the requested operation.
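
    The claimed method anticipates what is now commonly called quiescent-state-based reclamation (as in RCU). The sketch below is a loose Python analogy under that reading, not the patented implementation: readers use shared objects without locks, every thread periodically announces a quiescent point, and a writer delays its exclusive operation until two "system reference points" have elapsed.

        import threading
        import time

        class QuiescentStateTracker:
            """Toy analogue of the patent's system reference point (illustrative only)."""

            def __init__(self, n_threads):
                self.counts = [0] * n_threads  # quiescent points passed per thread
                self.lock = threading.Lock()

            def quiescent_point(self, tid):
                # Each thread calls this at a point where it holds no
                # unlocked references to shared data objects.
                with self.lock:
                    self.counts[tid] += 1

            def _snapshot(self):
                with self.lock:
                    return list(self.counts)

            def wait_for_reference_points(self, n=2):
                # Each reference point requires every thread to pass at least
                # one quiescent point; the patent waits for two of them.
                for _ in range(n):
                    start = self._snapshot()
                    while any(c <= s for c, s in zip(self._snapshot(), start)):
                        time.sleep(0.001)

        # Writer side (sketch): make the object unreachable to new readers,
        # call wait_for_reference_points(), then perform the exclusive operation.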

  1. Dynamic Exergy Method for Evaluating the Control and Operation of Oxy-Combustion Boiler Island Systems.

    PubMed

    Jin, Bo; Zhao, Haibo; Zheng, Chuguang; Liang, Zhiwu

    2017-01-03

    Exergy-based methods are widely applied to assess the performance of energy conversion systems; however, these methods mainly focus on a given steady state and have limited applicability for evaluating the impact of control on system operation. To dynamically obtain the thermodynamic behavior and reveal the influence of control structures, layers, and loops on system energy performance, a dynamic exergy method is developed, improved, and applied to a complex oxy-combustion boiler island system for the first time. The three most common operating scenarios are studied, and the results show that the flow rate change process leads to less energy consumption than the oxygen purity and air in-leakage change processes. The variation of oxygen purity produces the largest impact on system operation, and the operating parameter sensitivity is not affected by the presence of process control. The control system saves energy during the flow rate and oxygen purity change processes, while it consumes energy during the air in-leakage change process. More attention should be paid to oxygen purity changes because they require the largest control cost. Within the control system, the supervisory control layer requires the greatest energy consumption and the largest control cost to maintain operating targets, while the steam control loops cause the main energy consumption.
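
    For orientation, the steady-flow specific exergy that such analyses build on can be written as e = (h - h0) - T0(s - s0); the sketch below uses this textbook definition (not the paper's dynamic formulation) with invented stream properties:

        # Dead state (T0, p0) and stream properties; all values are invented.
        T0 = 298.15             # dead-state temperature, K
        h, h0 = 3230.0, 104.9   # specific enthalpy: stream / dead state, kJ/kg
        s, s0 = 6.921, 0.367    # specific entropy: stream / dead state, kJ/(kg K)

        exergy = (h - h0) - T0 * (s - s0)   # specific flow exergy, kJ/kg
        print(f"specific flow exergy: {exergy:.1f} kJ/kg")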

  2. Development of Airport Surface Required Navigation Performance (RNP)

    NASA Technical Reports Server (NTRS)

    Cassell, Rick; Smith, Alex; Hicok, Dan

    1999-01-01

    The U.S. and international aviation communities have adopted the Required Navigation Performance (RNP) process for defining aircraft performance when operating in the en-route, approach, and landing phases of flight. RNP consists primarily of the following key parameters: accuracy, integrity, continuity, and availability. The processes and analytical techniques employed to define en-route, approach, and landing RNP have been applied in the development of RNP for the airport surface. To validate the proposed RNP requirements, several methods were used. Operational and flight demonstration data were analyzed for conformance with the proposed requirements, as were several aircraft flight simulation studies. The pilot failure risk component was analyzed through several hypothetical scenarios. Additional simulator studies are recommended to better quantify crew reactions to failures, as well as additional simulator and field testing to validate achieved accuracy performance. This research was performed in support of the NASA Low Visibility Landing and Surface Operations Programs.

  3. Graph processing platforms at scale: practices and experiences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Lee, Sangkeun; Brown, Tyler C

    2015-01-01

    Graph analysis unveils hidden associations of data in many phenomena and artifacts, such as road networks, social networks, genomic information, and scientific collaboration. Unfortunately, the wide diversity in the characteristics of graphs and graph operations makes it challenging to find the right combination of tools and implementation of algorithms to discover desired knowledge from a target data set. This study presents an extensive empirical study of three representative graph processing platforms: Pegasus, GraphX, and Urika. Each system represents a combination of options in data model, processing paradigm, and infrastructure. We benchmarked each platform using three popular graph operations, degree distribution, connected components, and PageRank, over a variety of real-world graphs. Our experiments show that each graph processing platform has different strengths depending on the type of graph operation. While Urika performs best on non-iterative operations like degree distribution, GraphX outperforms the others on iterative operations like connected components and PageRank. In addition, we discuss challenges in optimizing the performance of each platform over large-scale real-world graphs.
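
    As a point of reference for one of the benchmarked operations (a textbook power-iteration sketch at toy scale, not the code run on Pegasus, GraphX, or Urika):

        import numpy as np

        def pagerank(adj, damping=0.85, tol=1e-9, max_iter=100):
            """Power-iteration PageRank on a dense adjacency matrix."""
            n = adj.shape[0]
            out_deg = adj.sum(axis=1)
            # Column-stochastic transition matrix; dangling nodes spread uniformly.
            trans = np.where(out_deg[:, None] > 0,
                             adj / np.maximum(out_deg, 1)[:, None],
                             1.0 / n).T
            rank = np.full(n, 1.0 / n)
            for _ in range(max_iter):
                new = (1 - damping) / n + damping * trans @ rank
                if np.abs(new - rank).sum() < tol:
                    return new
                rank = new
            return rank

        # 4-node toy graph: edge i -> j wherever adj[i, j] == 1.
        adj = np.array([[0, 1, 1, 0],
                        [0, 0, 1, 0],
                        [1, 0, 0, 1],
                        [0, 0, 1, 0]], dtype=float)
        print(pagerank(adj))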

  4. Informatics in radiology: automated Web-based graphical dashboard for radiology operational business intelligence.

    PubMed

    Nagy, Paul G; Warnock, Max J; Daly, Mark; Toland, Christopher; Meenan, Christopher D; Mezrich, Reuben S

    2009-11-01

    Radiology departments today are faced with many challenges to improve operational efficiency, performance, and quality. Many organizations rely on antiquated, paper-based methods to review their historical performance and understand their operations. With increased workloads, geographically dispersed image acquisition and reading sites, and rapidly changing technologies, this approach is increasingly untenable. A Web-based dashboard was constructed to automate the extraction, processing, and display of indicators and thereby provide useful and current data for twice-monthly departmental operational meetings. The feasibility of extracting specific metrics from clinical information systems was evaluated as part of a longer-term effort to build a radiology business intelligence architecture. Operational data were extracted from clinical information systems and stored in a centralized data warehouse. Higher-level analytics were performed on the centralized data, a process that generated indicators in a dynamic Web-based graphical environment that proved valuable in discussion and root cause analysis. Results aggregated over a 24-month period since implementation suggest that this operational business intelligence reporting system has provided significant data for driving more effective management decisions to improve productivity, performance, and quality of service in the department.

  5. Statistical process control in Deep Space Network operation

    NASA Technical Reports Server (NTRS)

    Hodder, J. A.

    2002-01-01

    This report describes how the Deep Space Mission System (DSMS) Operations Program Office at the Jet Propulsion Laboratory (JPL) uses Statistical Process Control (SPC) to monitor performance and evaluate initiatives for improving processes on the National Aeronautics and Space Administration's (NASA) Deep Space Network (DSN).

  6. Performing an allreduce operation using shared memory

    DOEpatents

    Archer, Charles J [Rochester, MN; Dozsa, Gabor [Ardsley, NY; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit.
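
    To convey the flavor of the disclosed approach (a simplified sketch using Python's multiprocessing shared memory, not the patented implementation), the code below splits an allreduce into work units that any available core may claim, with a shared counter standing in for the job status object:

        import multiprocessing as mp
        import numpy as np

        N_CORES, N_UNITS, LEN = 4, 8, 1024
        CHUNK = LEN // N_UNITS

        def worker(contribs, result, next_unit):
            """Claim and perform shared-memory allreduce work units until none remain."""
            while True:
                with next_unit.get_lock():
                    unit = next_unit.value
                    if unit >= N_UNITS:
                        return
                    next_unit.value += 1
                lo, hi = unit * CHUNK, (unit + 1) * CHUNK
                # Sum this slice across every core's contribution buffer.
                total = np.zeros(hi - lo)
                for buf in contribs:
                    total += np.frombuffer(buf.get_obj())[lo:hi]
                np.frombuffer(result.get_obj())[lo:hi] = total  # disjoint slices

        if __name__ == "__main__":
            contribs = [mp.Array("d", np.random.rand(LEN)) for _ in range(N_CORES)]
            result = mp.Array("d", LEN)   # shared allreduce result, visible to all
            next_unit = mp.Value("i", 0)  # next unclaimed work unit
            procs = [mp.Process(target=worker, args=(contribs, result, next_unit))
                     for _ in range(N_CORES)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()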

  7. Performing an allreduce operation using shared memory

    DOEpatents

    Archer, Charles J; Dozsa, Gabor; Ratterman, Joseph D; Smith, Brian E

    2014-06-10

    Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit.

  8. 49 CFR 240.129 - Criteria for monitoring operational performance of certified engineers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... certified engineers. 240.129 Section 240.129 Transportation Other Regulations Relating to Transportation... LOCOMOTIVE ENGINEERS Component Elements of the Certification Process § 240.129 Criteria for monitoring operational performance of certified engineers. (a) Each railroad's program shall include criteria and...

  9. 49 CFR 240.129 - Criteria for monitoring operational performance of certified engineers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... certified engineers. 240.129 Section 240.129 Transportation Other Regulations Relating to Transportation... LOCOMOTIVE ENGINEERS Component Elements of the Certification Process § 240.129 Criteria for monitoring operational performance of certified engineers. (a) Each railroad's program shall include criteria and...

  10. 49 CFR 240.129 - Criteria for monitoring operational performance of certified engineers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... certified engineers. 240.129 Section 240.129 Transportation Other Regulations Relating to Transportation... LOCOMOTIVE ENGINEERS Component Elements of the Certification Process § 240.129 Criteria for monitoring operational performance of certified engineers. (a) Each railroad's program shall include criteria and...

  11. Nonlinear Performance Seeking Control using Fuzzy Model Reference Learning Control and the Method of Steepest Descent

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    1997-01-01

    Performance Seeking Control (PSC) attempts to find and control the process at the operating condition that will generate maximum performance. In this paper a nonlinear multivariable PSC methodology will be developed, utilizing the Fuzzy Model Reference Learning Control (FMRLC) and the method of Steepest Descent or Gradient (SDG). This PSC control methodology employs the SDG method to find the operating condition that will generate maximum performance. This operating condition is in turn passed to the FMRLC controller as a set point for the control of the process. The conventional SDG algorithm is modified in this paper in order for convergence to occur monotonically. For the FMRLC control, the conventional fuzzy model reference learning control methodology is utilized, with guidelines generated here for effective tuning of the FMRLC controller.
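
    A minimal sketch of the outer PSC loop described above (the quadratic performance map and all names are invented stand-ins, not the paper's process model):

        import numpy as np

        def performance(x):
            # Hypothetical performance map with its maximum at (1.0, -0.5).
            return -(x[0] - 1.0) ** 2 - 2.0 * (x[1] + 0.5) ** 2

        def gradient(f, x, h=1e-4):
            # Central-difference estimate of the gradient.
            g = np.zeros_like(x)
            for i in range(len(x)):
                e = np.zeros_like(x)
                e[i] = h
                g[i] = (f(x + e) - f(x - e)) / (2 * h)
            return g

        setpoint = np.array([0.0, 0.0])
        step = 0.2
        for _ in range(50):
            # Steepest ascent proposes a better operating condition...
            setpoint = setpoint + step * gradient(performance, setpoint)
            # ...which would be handed to the FMRLC loop as its set point here.

        print(setpoint)  # converges near the optimum (1.0, -0.5)

    The paper's modification of conventional SDG for monotonic convergence could be approximated in this sketch by halving `step` whenever a proposed update would decrease the measured performance.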

  12. Modified Universal Design Survey: Enhancing Operability of Launch Vehicle Ground Crew Worksites

    NASA Technical Reports Server (NTRS)

    Blume, Jennifer L.

    2010-01-01

    Operability is a driving requirement for next-generation space launch vehicles. Launch site ground operations include numerous operator tasks to prepare the vehicle for launch or to perform preflight maintenance. Ensuring that components requiring operator interaction at the launch site are designed for optimal human use is a high priority for operability. To promote operability, a Design Quality Evaluation Survey based on the Universal Design framework was developed to support Human Factors Engineering (HFE) evaluation for NASA's launch vehicles. Universal Design per se is not a priority for launch vehicle processing; however, applying principles of Universal Design will increase the probability of an error-free and efficient design, which promotes operability. The Design Quality Evaluation Survey incorporates and tailors the seven Universal Design Principles and adds new measures for Safety and Efficiency. Adapting an approach proven to measure Universal Design performance in products, each principle is associated with multiple performance measures which are rated with the degree to which the statement is true. The Design Quality Evaluation Survey was employed for several launch vehicle ground processing worksite analyses. The tool was found to be most useful for comparative judgments as opposed to assessment of a single design option. It provided a useful piece of additional data when assessing possible operator interfaces or worksites for operability.

  13. SIG. Signal Processing, Analysis, & Display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, J.; Lager, D.; Azevedo, S.

    1992-01-22

    SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time- and frequency-domain signals. However, it has been designed to ultimately accommodate other representations for data such as multiplexed signals and complex matrices. Two user interfaces are provided in SIG: a menu mode for the unfamiliar user and a command mode for more experienced users. In both modes errors are detected as early as possible and are indicated by friendly, meaningful messages. An on-line HELP package is also included. A variety of operations can be performed on time- and frequency-domain signals including operations on the samples of a signal, operations on the entire signal, and operations on two or more signals. Signal processing operations that can be performed are digital filtering (median, Bessel, Butterworth, and Chebychev), ensemble average, resample, auto and cross spectral density, transfer function and impulse response, trend removal, convolution, Fourier transform and inverse, window functions (Hamming, Kaiser-Bessel), simulation (ramp, sine, pulsetrain, random), and read/write signals. User-definable signal processing algorithms are also featured. SIG has many options including multiple commands per line, command files with arguments, commenting lines, defining commands, and automatic execution for each item in a `repeat` sequence. Graphical operations on signals and spectra include: x-y plots of time signals; real, imaginary, magnitude, and phase plots of spectra; scaling of spectra for continuous or discrete domain; cursor zoom; families of curves; and multiple viewports.
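
    As an indicative example of the kinds of operations listed (using SciPy rather than SIG itself; the signal and cutoff are invented):

        import numpy as np
        from scipy import signal

        fs = 1000.0                     # sampling rate, Hz
        t = np.arange(0, 1, 1 / fs)
        x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)

        # 4th-order Butterworth low-pass at 100 Hz, applied forward-backward.
        b, a = signal.butter(4, 100, btype="low", fs=fs)
        filtered = signal.filtfilt(b, a, x)

        # Hamming-windowed magnitude spectrum of the filtered signal.
        spectrum = np.abs(np.fft.rfft(filtered * np.hamming(len(filtered))))
        freqs = np.fft.rfftfreq(len(filtered), d=1 / fs)
        print(freqs[spectrum.argmax()])  # ~50 Hz; the 300 Hz tone is attenuated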

  14. Signal Processing, Analysis, & Display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lager, Darrell; Azevado, Stephen

    1986-06-01

    SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time- and frequency-domain signals. However, it has been designed to ultimately accommodate other representations for data such as multiplexed signals and complex matrices. Two user interfaces are provided in SIG: a menu mode for the unfamiliar user and a command mode for more experienced users. In both modes errors are detected as early as possible and are indicated by friendly, meaningful messages. An on-line HELP package is also included. A variety of operations can be performed on time- and frequency-domain signals, including operations on the samples of a signal, operations on the entire signal, and operations on two or more signals. Signal processing operations that can be performed are digital filtering (median, Bessel, Butterworth, and Chebychev), ensemble average, resample, auto and cross spectral density, transfer function and impulse response, trend removal, convolution, Fourier transform and inverse, window functions (Hamming, Kaiser-Bessel), simulation (ramp, sine, pulsetrain, random), and read/write signals. User-definable signal processing algorithms are also featured. SIG has many options, including multiple commands per line, command files with arguments, commenting lines, defining commands, and automatic execution for each item in a repeat sequence. Graphical operations on signals and spectra include: x-y plots of time signals; real, imaginary, magnitude, and phase plots of spectra; scaling of spectra for continuous or discrete domain; cursor zoom; families of curves; and multiple viewports.

  15. Operator Performance Support System (OPSS)

    NASA Technical Reports Server (NTRS)

    Conklin, Marlen Z.

    1993-01-01

    In the complex and fast-reaction world of military operations, present technologies, combined with tactical situations, have flooded the operator with assorted information that he is expected to process instantly. As technologies progress, this flow of data and information has both guided and overwhelmed the operator. However, the technologies that have confounded many operators today can also be used to assist them -- thus the Operator Performance Support System. In this paper we propose an operator support station that incorporates the elements of video and image databases, productivity software, interactive computer-based training, hypertext/hypermedia databases, expert programs, and human factors engineering. The Operator Performance Support System will provide the operator with an integrated on-line information/knowledge system that will guide the expert or novice to correct systems operations. Although the OPSS is being developed for the Navy, the performance of the workforce in today's competitive industry is of major concern, and the concepts presented in this paper, which address ASW systems software design issues, are also directly applicable to industry. The OPSS will propose practical applications showing how to more closely align the relationships between technical knowledge and equipment operator performance.

  16. High temperature process steam application at the Southern Union Refining Company, Hobbs, New Mexico. Solar energy in the oil patch. Final report, Phase III: operation, maintenance, and performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, L.E.; McGuire, D.R.

    1984-05-01

    This final report summarizes the technical reports for Phase III of this project. The third phase included the operation, maintenance, upgrade, and performance reporting of a 10,080 square foot Solar Industrial Process Heat System installed at the Famariss Energy Refinery of Southern Union Refining Company near Hobbs, New Mexico. This report contains a description of the upgraded system and a summary of the overall operation, maintenance, and performance of the installed system. The results of the upgrade activities can be seen in the last two months of operational data. Steam production was significantly greater in peak flow and monthly total than at any previous time. Monthly total cost savings also improved greatly, even though natural gas costs remained much lower than originally anticipated.

  17. Life Testing of the Vapor Compression Distillation/Urine Processing Assembly (VCD/UPA) at the Marshall Space Flight Center (1993 to 1997)

    NASA Technical Reports Server (NTRS)

    Wieland, P.; Hutchens, C.; Long, D.; Salyer, B.

    1998-01-01

    Wastewater and urine generated on the International Space Station will be processed to recover pure water using vapor compression distillation (VCD). To verify the long-term reliability and performance of the VCD Urine Processor Assembly (UPA), life testing was performed at the Marshall Space Flight Center (MSFC) from January 1993 to April 1996. Two UPAs, the VCD-5 and VCD-5A, were tested for 204 days and 665 days, respectively. The compressor gears and the distillation centrifuge drive belt were found to have operating lives of approximately 4,800 hours, equivalent to 3.9 years of operation on ISS for a crew of three at an average processing rate of 1.76 kg/h (3.97 lb/h). Precise alignment of the flex-splines of the fluids and purge pump motor drives is essential to avoid premature failure after about 400 hours of operation. Results indicate that, with some design and procedural modifications and suitable quality control, the required performance and operational life can be met with the VCD/UPA.

  18. CSER 01-008 Canning of Thermally Stabilized Plutonium Oxide Powder in PFP Glovebox HC-21A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ERICKSON, D.G.

    This document presents the analysis performed to support the canning operation in HC-21A. Most of the actual analysis was performed for the operation in HC-18M and HA-20MB, and is documented in HNF-2707 Rev 1a (Erickson 2001a). This document will reference Erickson (2001a) as necessary to support the operation in HC-21A. The plutonium stabilization program at the Plutonium Finishing Plant (PFP) uses heat to convert plutonium-bearing materials into dry powder that is chemically stable for long-term storage. The stabilized plutonium is transferred into one of several gloveboxes for the canning process; Gloveboxes HC-18M in Room 228-2, HA-20MB in Room 235B, and HC-21A in Room 230B are to be used for this process. Evaluation of this operation included normal operations, base cases, and contingencies. The base cases took the normal operations for each type of feed material and added the likely off-normal events. Each contingency is evaluated assuming the unlikely event happens to the conservative base case. Each contingency was shown to meet the double contingency requirement; that is, at least two unlikely, independent, and concurrent changes in process conditions are required before a criticality is possible.

  1. Baseline Description and Analysis of the Operations Related to Warehouse Controlled Documents at the Navy Publications and Forms Center, Philadelphia, Pennsylvania. Volume I. Phase I.

    DTIC Science & Technology

    1980-03-06

    performing the present NPFC tasks. Potential automation technologies may include order processing mechanization, demand printing from micrographic or...effort and documented in this volume included the following: a. Functional description of the order processing activities as they currently operate. b...covered under each analysis area. i It is obvious from the exhibit that the functional description of order processing operations was to include COG I

  2. Process for operating equilibrium controlled reactions

    DOEpatents

    Nataraj, Shankar; Carvill, Brian Thomas; Hufton, Jeffrey Raymond; Mayorga, Steven Gerard; Gaffney, Thomas Richard; Brzozowski, Jeffrey Richard

    2001-01-01

    A cyclic process for operating an equilibrium controlled reaction in a plurality of reactors containing an admixture of an adsorbent and a reaction catalyst suitable for performing the desired reaction which is operated in a predetermined timed sequence wherein the heating and cooling requirements in a moving reaction mass transfer zone within each reactor are provided by indirect heat exchange with a fluid capable of phase change at temperatures maintained in each reactor during sorpreaction, depressurization, purging and pressurization steps during each process cycle.

  3. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF).

    PubMed

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2012-02-23

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same, and the operation on one pixel does not depend upon the result of the operation on any other, allowing the entire image to be processed in parallel. GPU hardware was developed for exactly this kind of massively parallel implementation. Thus, for an algorithm with a high degree of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented in the CAPIDS system include flat-field correction, temporal filtering, image subtraction, roadmap mask generation, and display window and leveling. A comparison between the previous and the upgraded versions of CAPIDS is presented to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure, and automatic image windowing and leveling during each frame.
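
    The key property the abstract relies on, pixel independence, is what makes the GPU port profitable. The sketch below shows a flat-field correction written as a pure elementwise operation in NumPy; it is not the CAPIDS code, and the frame statistics are invented, but the same per-pixel structure is what maps onto one GPU thread per pixel.

    ```python
    import numpy as np

    def flat_field_correct(raw, dark, flat, eps=1e-6):
        """Pixel-independent correction: each output pixel depends only on
        the matching pixel of the raw, dark, and flat-field frames."""
        gain = np.mean(flat - dark) / np.maximum(flat - dark, eps)
        return (raw - dark) * gain

    rng = np.random.default_rng(0)
    dark = rng.normal(10, 1, (512, 512))          # detector dark frame
    flat = dark + rng.normal(100, 5, (512, 512))  # uniform-exposure frame
    raw = dark + rng.normal(60, 5, (512, 512))    # frame to be corrected
    print(flat_field_correct(raw, dark, flat).mean())
    ```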

  4. 40 CFR 63.138 - Process wastewater provisions-performance standards for treatment processes managing Group 1...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... applicable. (4) Performance tests and design evaluations. If design steam stripper option (§ 63.138(d)) or..., neither a design evaluation nor a performance test is required. For any other non-biological treatment... or operator shall conduct either a design evaluation as specified in § 63.138(j), or a performance...

  5. Modeling operators' emergency response time for chemical processing operations.

    PubMed

    Murray, Susan L; Harputlu, Emrah; Mentzer, Ray A; Mannan, M Sam

    2014-01-01

    Operators have a crucial role during emergencies at a variety of facilities such as chemical processing plants. When an abnormality occurs in the production process, the operator often has limited time to either take corrective actions or evacuate before the situation becomes deadly. It is crucial that system designers and safety professionals be able to estimate the time required for a response before procedures and facilities are designed and operations are initiated. There are existing industrial engineering techniques to establish time standards for tasks performed at a normal working pace. However, it is reasonable to expect the time required to take action in emergency situations to differ from the time required at a normal production pace; in an emergency, operators will likely act faster. It would be useful for system designers to be able to establish a time range for operators' response times in emergency situations. This article develops a modeling approach to estimate the time standard range for operators taking corrective actions or following evacuation procedures in emergency situations, which will aid engineers and managers in establishing time requirements for operators in emergencies. The methodology combines a well-established industrial engineering technique for determining time requirements (the predetermined time standard system) with adjustment coefficients for emergency situations developed by the authors. Numerous videos of workers performing well-established tasks at a maximum pace were studied; as an example, one of the tasks analyzed was pit crew workers changing tires as quickly as they could during a race. The operations in these videos were decomposed into basic, fundamental motions (such as walking, reaching for a tool, and bending over) by studying the videos frame by frame. A comparison analysis was then performed between the emergency-pace and normal-pace operations to determine performance coefficients. These coefficients represent the decrease in time required for various basic motions in emergency situations and were used to model an emergency response. This approach will make hazardous operations requiring operator response, alarm management, and evacuation processes easier to design and predict. An application of this methodology is included in the article; the time required for an emergency response was roughly one-third shorter than the normal response time.
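
    A minimal sketch of the article's approach, with every number invented for illustration: decompose a corrective action into basic motions, price each motion with a predetermined time standard, then scale by an emergency-pace coefficient (values below 1.0 mean faster than the normal pace).

    ```python
    # Hypothetical predetermined times (seconds) and emergency coefficients.
    baseline = {"walk_3m": 2.6, "reach": 0.45, "grasp": 0.25, "bend": 1.1, "turn_valve": 3.0}
    coeff = {"walk_3m": 0.65, "reach": 0.80, "grasp": 0.85, "bend": 0.70, "turn_valve": 0.75}

    task = ["walk_3m", "bend", "reach", "grasp", "turn_valve"]  # decomposed action

    normal = sum(baseline[m] for m in task)
    emergency = sum(baseline[m] * coeff[m] for m in task)
    print(f"normal: {normal:.2f} s, emergency: {emergency:.2f} s, "
          f"{1 - emergency / normal:.0%} faster")
    ```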

  6. Mission Operations Working Group (MOWG) Report to the OMI Science Team

    NASA Technical Reports Server (NTRS)

    Fisher, Dominic M.

    2017-01-01

    This PowerPoint presentation will discuss Aura's current spacecraft and OMI instrument status, highlight performance trends and impacts to OMI operations, identify operational changes, and express concerns or potential process improvements.

  7. Methodology for the systems engineering process. Volume 3: Operational availability

    NASA Technical Reports Server (NTRS)

    Nelson, J. H.

    1972-01-01

    A detailed description and explanation of the operational availability parameter is presented. The fundamental mathematical basis for operational availability is developed, and its relationship to a system's overall performance effectiveness is illustrated within the context of identifying specific availability requirements. Thus, in attempting to provide a general methodology for treating both hypothetical and existing availability requirements, the concept of an availability state, in conjunction with the more conventional probability-time capability, is investigated. In this respect, emphasis is focused upon a balanced analytical and pragmatic treatment of operational availability within the system design process. For example, several applications of operational availability to typical aerospace systems are presented, encompassing the techniques of Monte Carlo simulation, system performance availability trade-off studies, analytical modeling of specific scenarios, as well as the determination of launch-on-time probabilities. Finally, an extensive bibliography is provided to indicate further levels of depth and detail of the operational availability parameter.
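
    To make the parameter concrete, here is a small sketch (not from the report) of the textbook steady-state availability, A = MTBF / (MTBF + MTTR), cross-checked with a Monte Carlo estimate in the spirit of the simulations mentioned above; the failure and repair times are illustrative.

    ```python
    import random

    mtbf, mttr = 400.0, 8.0                 # hours, illustrative values
    a_steady = mtbf / (mtbf + mttr)

    def up_at(t):
        """Is an alternating up/down renewal process (exponential failure
        and repair times) in the up state at time t?"""
        clock, up = 0.0, True
        while clock < t:
            clock += random.expovariate(1 / (mtbf if up else mttr))
            if clock < t:
                up = not up
        return up

    trials = 20000
    hits = sum(up_at(random.uniform(0.0, 1000.0)) for _ in range(trials))
    print(f"analytic A = {a_steady:.4f}, simulated = {hits / trials:.4f}")
    ```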

  8. Overview of the Smart Network Element Architecture and Recent Innovations

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M.; Mata, Carlos T.; Oostdyk, Rebecca L.

    2008-01-01

    In industrial environments, system operators rely on the availability and accuracy of sensors to monitor processes and detect failures of components and/or processes. The sensors must be networked in such a way that their data is reported to a central human interface, where operators are tasked with making real-time decisions based on the state of the sensors and the components that are being monitored. Incorporating health management functions at this central location aids the operator by automating the decision-making process to suggest, and sometimes perform, the action required by current operating conditions. Integrated Systems Health Management (ISHM) aims to incorporate data from many sources, including real-time and historical data and user input, and extract information and knowledge from that data to diagnose failures and predict future failures of the system. By distributing health management processing to lower levels of the architecture, less bandwidth is required for ISHM, data fusion is enhanced, systems and processes become more robust, and the resolution for detecting and isolating failures in a system, subsystem, component, or process is improved. The Smart Network Element (SNE) has been developed at NASA Kennedy Space Center to perform intelligent functions at the sensor and actuator level in support of ISHM.

  9. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; then the result of the processing application is finally re-compressed for further transfer or storage. The change of data representation is resource-consuming in terms of computation, time, and memory usage. In the wide-format printing industry, this problem becomes an important issue: e.g., a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning processing operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it de-noises the image and enhances its contours.
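
    The screening operation itself is just a per-pixel comparison against a tiled threshold mask; the paper's contribution is computing that threshold operation in the DCT domain so the JPEG data never has to be fully decompressed. The sketch below shows only the plain spatial-domain version, with a standard 4x4 Bayer mask, as a reference point.

    ```python
    import numpy as np

    # 4x4 Bayer dither matrix, normalized to thresholds in [0, 1)
    bayer4 = np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) / 16.0

    def screen(img, mask=bayer4):
        """Halftone a grayscale image in [0, 1] by thresholding each
        pixel against the screening mask tiled over the image."""
        h, w = img.shape
        reps = (-(-h // mask.shape[0]), -(-w // mask.shape[1]))  # ceil division
        return (img > np.tile(mask, reps)[:h, :w]).astype(np.uint8)

    ramp = np.tile(np.linspace(0, 1, 64), (16, 1))  # test gradient
    print(screen(ramp).mean())  # dot density tracks mean intensity
    ```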

  10. Advanced multivariable control of a turboexpander plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altena, D.; Howard, M.; Bullin, K.

    1998-12-31

    This paper describes an application of advanced multivariable control on a natural gas plant and compares its performance to the previous conventional feedback control. This control algorithm utilizes simple models from existing plant data and/or plant tests to hold the process at the desired operating point in the presence of disturbances and changes in operating conditions. The control software is able to accomplish this due to effective handling of process variable interaction, constraint avoidance, and feed-forward of measured disturbances. The economic benefit of improved control lies in operating closer to the process constraints while avoiding significant violations. The South Texas facility where this controller was implemented experienced reduced variability in process conditions, which increased liquids recovery because the plant was able to operate much closer to the customer-specified impurity constraint. An additional benefit of this implementation of multivariable control is the ability to set performance criteria beyond simple setpoints, including process variable constraints, relative variable merit, and optimizing use of manipulated variables. The paper also details the control scheme applied to the complex turboexpander process and some of the safety features included to improve reliability.

  11. Implementation of an adaptive controller for the startup and steady-state running of a biomethanation process operated in the CSTR mode.

    PubMed

    Renard, P; Van Breusegem, V; Nguyen, M T; Naveau, H; Nyns, E J

    1991-10-20

    An adaptive control algorithm has been implemented on a biomethanation process to maintain propionate concentration, a state variable, at a given low value by steering the dilution rate. It was thereby expected to ensure the stability of the process during startup and during steady-state running with acceptable performance. The methane pilot reactor was operated in the completely mixed, once-through mode and computer-controlled for 161 days. The results provided real-life validation of the adaptive control algorithm and documented the expected stability and acceptable performance.

  12. Aircraft Engine-Monitoring System And Display

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.; Person, Lee H., Jr.

    1992-01-01

    Proposed Engine Health Monitoring System and Display (EHMSD) provides enhanced means for pilot to control and monitor performances of engines. Processes raw sensor data into information meaningful to pilot. Provides graphical information about performance capabilities, current performance, and operational conditions in components or subsystems of engines. Provides means to control engine thrust directly and innovative means to monitor performance of engine system rapidly and reliably. Features reduce pilot workload and increase operational safety.

  13. Improving Program Performance through Management Information. A Workbook.

    ERIC Educational Resources Information Center

    Bienia, Nancy

    Designed specifically for state and local managers and supervisors who plan, direct, and operate child support enforcement programs, this workbook provides a four-part, step-by-step process for identifying needed information and methods of using the information to operate an effective program. The process consists of: (1) determining what…

  14. Fast ion swapping for quantum-information processing

    NASA Astrophysics Data System (ADS)

    Kaufmann, H.; Ruster, T.; Schmiegelow, C. T.; Luda, M. A.; Kaushal, V.; Schulz, J.; von Lindenfels, D.; Schmidt-Kaler, F.; Poschinger, U. G.

    2017-05-01

    We demonstrate a swap gate between laser-cooled ions in a segmented microtrap via fast physical swapping of the ion positions. This operation is used in conjunction with qubit initialization, manipulation, and readout, and with other types of shuttling operations such as linear transport and crystal separation and merging. Combining these operations, we perform quantum process tomography of the swap gate, obtaining a mean process fidelity of 99.5(5)%. The swap operation is demonstrated with motional excitations below 0.05(1) quanta for all six collective modes of a two-ion crystal, for a process duration of 42 μs. Extending these techniques to three ions, we reverse the order of a three-ion crystal and reconstruct the truth table for this operation, resulting in a mean process fidelity of 99.96(13)% in the logical basis.

  15. Collective Framework and Performance Optimizations to Open MPI for Cray XT Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ladd, Joshua S; Gorentla Venkata, Manjunath; Shamis, Pavel

    2011-01-01

    The performance and scalability of collective operations play a key role in the performance and scalability of many scientific applications. Within the Open MPI code base we have developed a general-purpose hierarchical collective operations framework called Cheetah, and applied it at large scale on the Oak Ridge Leadership Computing Facility's (OLCF) Jaguar platform, obtaining better performance and scalability than the native MPI implementation. This paper discusses Cheetah's design and implementation, and optimizations to the framework for Cray XT5 platforms. Our results show that Cheetah's Broadcast and Barrier perform better than the native MPI implementation. For medium data, Cheetah's Broadcast outperforms the native MPI implementation by 93% at 49,152 processes. For small and large data, it outperforms the native MPI implementation by 10% and 9%, respectively, at 24,576 processes. Cheetah's Barrier performs 10% better than the native MPI implementation at 12,288 processes.
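
    Cheetah itself is an Open MPI framework, but the collectives it accelerates are the standard MPI ones. As a point of reference, here is a minimal mpi4py sketch of the two operations benchmarked above, Broadcast and Barrier (script name and buffer size are arbitrary):

    ```python
    # Run with: mpiexec -n 4 python bcast_demo.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Broadcast: the root fills a buffer and every rank receives a copy.
    buf = np.arange(8, dtype="d") if rank == 0 else np.empty(8, dtype="d")
    comm.Bcast(buf, root=0)

    comm.Barrier()                     # synchronize before timing
    t0 = MPI.Wtime()
    comm.Bcast(buf, root=0)
    comm.Barrier()
    if rank == 0:
        print(f"broadcast: {MPI.Wtime() - t0:.6f} s on {comm.Get_size()} ranks")
    ```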

  16. Improving Operational System Performance of Internet of Things (IoT) in Indonesia Telecomunication Company

    NASA Astrophysics Data System (ADS)

    Dachyar, M.; Risky, S. A.

    2014-06-01

    Telecommunications companies have to improve their business performance even as their customer base grows every year. In Indonesia, the telecommunications company studied here has sought to provide the best service by improving its operational systems, designing a framework for the operational systems of the Internet of Things (IoT), also known as Machine to Machine (M2M). This study was conducted with expert opinions, which were further processed by the Analytic Hierarchy Process (AHP) to obtain the important factors for the organization's operational systems, and by Interpretive Structural Modeling (ISM) to determine which organizational factors have the greatest driving power. The study gave the greatest weight to SLA & KPI problem handling. The current M2M dashboard and current M2M connectivity have the power to affect other factors and serve an important function for an M2M operations system that can be carried out effectively.
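
    For readers unfamiliar with the AHP step, the factor weights come from the principal eigenvector of a pairwise comparison matrix. A minimal sketch with an invented 3x3 matrix (the study's actual factors and judgments are not reproduced here):

    ```python
    import numpy as np

    # Hypothetical pairwise comparisons: a[i, j] = importance of factor i
    # relative to factor j on Saaty's 1-9 scale.
    a = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    vals, vecs = np.linalg.eig(a)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                       # priority weights

    ci = (vals[k].real - a.shape[0]) / (a.shape[0] - 1)
    # 0.58 is Saaty's random index for n = 3; CR < 0.1 is acceptable
    print("weights:", np.round(w, 3), " consistency ratio:", round(ci / 0.58, 3))
    ```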

  17. Measuring Information Technology Performance: Operational Efficiency and Operational Effectiveness

    ERIC Educational Resources Information Center

    Moore, Annette G.

    2012-01-01

    This dissertation provides a practical approach for measuring operational efficiency and operational effectiveness for IT organizations introducing the ITIL process framework. The intent of the study was to assist Chief Information Officers (CIOs) in explaining the impact of introducing the Information Technology Infrastructure Library (ITIL)…

  18. 29 CFR 784.128 - Requirements for exemption of first processing, etc., at sea.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... an incident to, or in conjunction with “such” fishing operations—that is, the fishing operations of... packing, (b) such operations are performed as an incident to, or in conjunction with, fishing operations...

  19. Man-machine interactive imaging and data processing using high-speed digital mass storage

    NASA Technical Reports Server (NTRS)

    Alsberg, H.; Nathan, R.

    1975-01-01

    The role of vision in teleoperation has been recognized as an important element in the man-machine control loop. In most applications of remote manipulation, direct vision cannot be used. To overcome this handicap, the human operator's control capabilities are augmented by a television system. This medium provides a practical and useful link between the workspace and the control station from which the operator performs his tasks. Human performance deteriorates when the images are degraded as a result of instrumental and transmission limitations. Image enhancement is used to bring out selected qualities in a picture to increase the perception of the observer. A general-purpose digital computer with an extensive special-purpose software system is used to perform an almost unlimited repertoire of processing operations.

  20. Foster Wheeler's Solutions for Large Scale CFB Boiler Technology: Features and Operational Performance of Łagisza 460 MWe CFB Boiler

    NASA Astrophysics Data System (ADS)

    Hotta, Arto

    During recent years, once-through supercritical (OTSC) CFB technology has been developed, enabling CFB technology to proceed to medium-scale (500 MWe) utility projects such as the Łagisza Power Plant in Poland, owned by Poludniowy Koncern Energetyczny S.A. (PKE), with net efficiency of nearly 44%. The Łagisza power plant is currently under commissioning and reached full-load operation in March 2009. The initial operation shows very good performance and confirms that the CFB process has no problems with scaling up to this size. The once-through steam cycle utilizing Siemens' vertical-tube Benson technology has also performed as predicted in the CFB process. Foster Wheeler has developed the CFB design further, up to 800 MWe with net efficiency of ≥45%.

  1. An experimental investigation of the effects of alarm processing and display on operator performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O`Hara, J.; Brown, W.; Hallbert, B.

    1998-03-01

    This paper describes a research program sponsored by the US Nuclear Regulatory Commission to address the human factors engineering (HFE) aspects of nuclear power plant alarm systems. The overall objective of the program is to develop HFE review guidance for advanced alarm systems. As part of this program, guidance has been developed based on a broad base of technical and research literature. In the course of guidance development, aspects of alarm system design for which the technical basis was insufficient to support complete guidance development were identified. The primary purpose of the research reported in this paper was to evaluate the effects of three of these alarm system design characteristics on operator performance, in order to contribute to the understanding of potential safety issues and to provide data to support the development of design review guidance in these areas. The three alarm system design characteristics studied were (1) alarm processing (degree of alarm reduction), (2) alarm availability (dynamic prioritization and suppression), and (3) alarm display (a dedicated tile format, a mixed tile and message list format, and a format in which alarm information is integrated into the process displays). A secondary purpose was to provide confirmatory evidence for selected alarm system guidance developed in an earlier phase of the project. The alarm characteristics were combined into eight separate experimental conditions. Six two-person crews of professional nuclear power plant operators participated in the study. Following training, each crew completed 16 test trials, consisting of two trials in each of the eight experimental conditions (one with a low-complexity scenario and one with a high-complexity scenario). Measures of process performance, operator task performance, situation awareness, and workload were obtained. In addition, operator opinions and evaluations of the alarm processing and display conditions were collected. No deficient performance was observed in any of the experimental conditions, providing confirmatory support for many design review guidelines. The operators identified numerous strengths and weaknesses associated with individual alarm design characteristics.

  2. NASA's Evolutionary Xenon Thruster (NEXT) Prototype Model 1R (PM1R) Ion Thruster and Propellant Management System Wear Test Results

    NASA Technical Reports Server (NTRS)

    VanNoord, Jonathan L.; Soulas, George C.; Sovey, James S.

    2010-01-01

    The results of the NEXT wear test are presented. This test was conducted with a 36-cm ion engine (designated PM1R) and an engineering model propellant management system. The thruster operated with beam extraction for a total of 1680 hr and processed 30.5 kg of xenon during the wear test, which included performance testing and some operation with an engineering model power processing unit. A total of 1312 hr was accumulated at full power, 277 hr at low power, and the remainder was at intermediate throttle levels. Overall ion engine performance, which includes thrust, thruster input power, specific impulse, and thrust efficiency, was steady with no indications of performance degradation. The propellant management system performed without incident during the wear test. The ion engine and propellant management system were also inspected following the test with no indication of anomalous hardware degradation from operation.

  3. Robust fusion-based processing for military polarimetric imaging systems

    NASA Astrophysics Data System (ADS)

    Hickman, Duncan L.; Smith, Moira I.; Kim, Kyung Su; Choi, Hyun-Jin

    2017-05-01

    Polarisation information within a scene can be exploited in military systems to give enhanced automatic target detection and recognition (ATD/R) performance. However, the performance gain achieved is highly dependent on factors such as the geometry, viewing conditions, and the surface finish of the target. Such performance sensitivities are highly undesirable in many tactical military systems, where operational conditions can vary significantly and rapidly during a mission. Within this paper, a range of processing architectures and fusion methods is considered in terms of their practical viability and operational robustness for systems requiring ATD/R. It is shown that polarisation information can give useful performance gains but, to retain system robustness, the introduction of polarimetric processing should be done in such a way as not to compromise other discriminatory scene information in the spectral and spatial domains. The analysis concludes that polarimetric data can be effectively integrated with conventional intensity-based ATD/R either by adapting the ATD/R processing function based on the scene polarisation or by detection-level fusion. Both of these approaches avoid the introduction of processing bottlenecks and limit the impact of processing on system latency.

  4. Life Testing of the Vapor Compression Distillation Urine Processing Assembly (VCD/UPA) at the Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Wieland, Paul O.

    1998-01-01

    Wastewater and urine generated on the International Space Station will be processed to recover pure water. The method selected is vapor compression distillation (VCD). To verify the long-term reliability and performance of the VCD Urine Processing Assembly (UPA), accelerated life testing was performed at the Marshall Space Flight Center (MSFC) from January 1993 to April 1996. Two UPAs, the VCD-5 and VCD-5A, were tested for 204 days and 665 days, respectively. The compressor gears and the distillation centrifuge drive belt were found to have an operating life of approximately 4800 hours. Precise alignment of the flex-spline of the fluids pump is essential to avoid failure of the pump after about 400 hours of operation. Also, leakage around the seals of the drive shaft of the fluids pump and purge pump must be eliminated for continued good performance. Results indicate that, with some design and procedural modifications and suitable quality control, the required performance and operational life can be met with the VCD/UPA.

  5. Dynamic Systems Analysis for Turbine Based Aero Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.

    2016-01-01

    The aircraft engine design process seeks to optimize the overall system-level performance, weight, and cost for a given concept. Steady-state simulations and data are used to identify trade-offs that should be balanced to optimize the system in a process known as systems analysis. These systems analysis simulations and data may not adequately capture the true performance trade-offs that exist during transient operation. Dynamic systems analysis provides the capability for assessing the dynamic trade-offs at an earlier stage of the engine design process. The dynamic systems analysis concept, developed tools, and potential benefits are presented in this paper. To provide this capability, the Tool for Turbine Engine Closed-loop Transient Analysis (TTECTrA) was developed to provide the user with an estimate of the closed-loop performance (response time) and operability (high pressure compressor surge margin) for a given engine design and set of control design requirements. TTECTrA, along with engine deterioration information, can be used to develop a more generic relationship between performance and operability that can impact the engine design constraints and potentially lead to a more efficient engine.

  6. Particle Engineering in Pharmaceutical Solids Processing: Surface Energy 
Considerations

    PubMed Central

    Williams, Daryl R.

    2015-01-01

    During the past 10 years particle engineering in the pharmaceutical industry has become a topic of increasing importance. Engineers and pharmacists need to understand and control a range of key unit manufacturing operations, such as milling, granulation, crystallisation, powder mixing, and dry powder inhaled drugs, which can be very challenging. It has now become very clear that in many of these particle processing operations, the surface energy of the starting, intermediate, or final products is a key factor in understanding the processing operation and/or the final product performance. This review will consider the surface energy and surface energy heterogeneity of crystalline solids, methods for the measurement of surface energy, effects of milling on powder surface energy, adhesion and cohesion in powder mixtures, crystal habits and surface energy, surface energy and powder granulation processes, performance of DPI systems, and finally crystallisation conditions and surface energy. This review concludes that the importance of surface energy as a significant factor in understanding the performance of many particulate pharmaceutical products and processes has now been clearly established. It is, nevertheless, still work in progress, both in terms of the development of methods and of establishing the limits for when surface energy is the key variable of relevance. PMID:25876912

  7. SAR operational aspects

    NASA Astrophysics Data System (ADS)

    Holmdahl, P. E.; Ellis, A. B. E.; Moeller-Olsen, P.; Ringgaard, J. P.

    1981-12-01

    The basic requirements of the SAR ground segment of ERS-1 are discussed. A system configuration for the real time data acquisition station and the processing and archive facility is depicted. The functions of a typical SAR processing unit (SPU) are specified, and inputs required for near real time and full precision, deferred time processing are described. Inputs and the processing required for provision of these inputs to the SPU are dealt with. Data flow through the systems, and normal and nonnormal operational sequence, are outlined. Prerequisites for maintaining overall performance are identified, emphasizing quality control. The most demanding tasks to be performed by the front end are defined in order to determine types of processors and peripherals which comply with throughput requirements.

  8. Operation, Modeling and Analysis of the Reverse Water Gas Shift Process

    NASA Technical Reports Server (NTRS)

    Whitlow, Jonathan E.

    2001-01-01

    The Reverse Water Gas Shift process is a candidate technology for water and oxygen production on Mars under the In-Situ Propellant Production project. This report focuses on the operation and analysis of the Reverse Water Gas Shift (RWGS) process, which has been constructed at Kennedy Space Center. A summary of results from the initial operation of the RWGS process, along with an analysis of these results, is included in this report. In addition, an evaluation of a material balance model developed from the work performed previously under the summer program is included, along with recommendations for further experimental work.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mays, Jeff

    One-step hydrogen generation, using Sorption Enhanced Reforming (SER) technology, is an innovative means of providing critical energy and environmental improvements to US manufacturing processes. The Gas Technology Institute (GTI) is developing a Compact Hydrogen Generator (CHG) process, based on SER technology, which successfully integrates previously independent process steps, achieves superior energy efficiency by lowering reaction temperatures, and provides pathways to doubling energy productivity with less environmental pollution. GTI's prior CHG process development efforts have culminated in an operational pilot plant. During the initial pilot testing, GTI identified two operating risks: (1) catalyst coating with calcium aluminate compounds, and (2) limited solids handling of the sorbent. Under this contract GTI evaluated alternative materials (one catalyst and two sorbents) to mitigate both risks. The alternate catalyst met performance targets and did not experience coating with calcium aluminate compounds of any kind. The alternate sorbent materials demonstrated viable operation, with one material enabling a three-fold increase in sorbent flow. The testing also demonstrated operation at 90% of rated capacity. Lastly, a carbon dioxide co-production study was performed to assess the advantage of the solid-phase separation of carbon dioxide inherent in the CHG process. Approximately 70% lower capital cost is achievable compared to SMR-based hydrogen production with CO2 capture, as well as improved operating costs.

  10. A Formalized Design Process for Bacterial Consortia That Perform Logic Computing

    PubMed Central

    Sun, Rui; Xi, Jingyi; Wen, Dingqiao; Feng, Jingchen; Chen, Yiwei; Qin, Xiao; Ma, Yanrong; Luo, Wenhan; Deng, Linna; Lin, Hanchi; Yu, Ruofan; Ouyang, Qi

    2013-01-01

    The concept of microbial consortia is of great attractiveness in synthetic biology. Despite all its benefits, however, problems remain for large-scale multicellular gene circuits, for example, how to reliably design and distribute the circuits in microbial consortia with a limited number of well-behaved genetic modules and wiring quorum-sensing molecules. To manage this problem, here we propose a formalized design process: (i) determine the basic logic units (AND, OR, and NOT gates) based on mathematical and biological considerations; (ii) establish rules to search for and distribute the simplest logic design; (iii) assemble the assigned basic logic units in each logic operating cell; and (iv) fine-tune the circuiting interface between logic operators. We analyzed gene circuits in silico with inputs ranging from two to four, comparing our method with pre-existing ones. Results showed that this formalized design process is more feasible in terms of the number of cells required. Furthermore, as a proof of principle, an Escherichia coli consortium that performs the XOR function, a typical complex computing operation, was designed. The construction and characterization of logic operators is independent of "wiring" and provides predictive information for fine-tuning. This formalized design process provides guidance for the design of microbial consortia that perform distributed biological computation. PMID:23468999
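
    The XOR decomposition the authors demonstrate can be written out directly. A toy in-silico sketch (the two-cell assignment below is illustrative, not the paper's exact distribution): each "cell" hosts one AND unit, NOT gates act on its inputs, and a shared quorum-sensing wire ORs the cell outputs.

    ```python
    NOT = lambda a: 1 - a
    AND = lambda a, b: a & b
    OR = lambda a, b: a | b

    def xor_consortium(a, b):
        cell1 = AND(a, NOT(b))     # first logic-operating cell
        cell2 = AND(NOT(a), b)     # second logic-operating cell
        return OR(cell1, cell2)    # quorum-sensing "wire" merges outputs

    # Truth table for the distributed XOR computation
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor_consortium(a, b))
    ```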

  11. A Cognitive Architecture for Human Performance Process Model Research

    DTIC Science & Technology

    1992-11-01

    individually defined, updatable world representation which is a description of the world as the operator knows it. It contains rules for decisions, an...operate it), and rules of engagement (knowledge about the operator’s expected behavior). The HPP model works in the following way. Information enters...based models depict the problem-solving processes of experts. The experts’ knowledge is represented in symbol structures, along with rules for

  12. Improving perioperative performance: the use of operations management and the electronic health record.

    PubMed

    Foglia, Robert P; Alder, Adam C; Ruiz, Gardito

    2013-01-01

    Perioperative services require the orchestration of multiple staff, space, and equipment. Our aim was to identify whether the implementation of operations management and an electronic health record (EHR) improved perioperative performance. We compared 2006, before operations management and EHR implementation, to 2010, after implementation. Operations management consisted of: communication of the perioperative vision and metrics to staff, obtaining credible data and analysis, and the implementation of performance improvement processes. The EHR allows: identification of delays and the accountable service or person, and collection and collation of data for analysis in multiple venues, including operational, financial, and quality. Metrics assessed included: operative cases, first case on time starts, reason for delay, and operating revenue. In 2006, 19,148 operations were performed (13,545 in the Main Operating Room (OR) area and 5,603 at satellite locations); first case on time starts were 12%; reasons for first case delay were not identifiable; and operating revenue was $115.8M overall, with $78.1M in the Main OR area. In 2010, cases increased to 25,856 (+35%); the Main OR area increased to 13,986 (+3%); first case on time starts improved to 46%; operations outside the Main OR area increased to 11,870 (+112%); case delays were ascribed to nurses 7%, anesthesiologists 22%, surgeons 33%, and other (patient, hospital) 38%. Five surgeons (7%) accounted for 29% of surgical delays and 4 anesthesiologists (8%) for 45% of anesthesiology delays; operating revenue increased to $177.3M (+53%) overall, and in the Main OR area rose to $101.5M (+30%). The use of operations management and the EHR resulted in improved processes, credible data, prompt sharing of the metrics, and pinpointing of individual provider performance. Implementation of these strategies allowed us to shift cases between facilities, reallocate OR blocks, and increase first case on time starts fourfold and operative cases by 35%, and these changes were associated with a 53% increase in operating revenue. The fact that the revenue increase was greater than the case volume increase (53% vs. 35%) speaks to improved performance. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. An Analysis of the Navy Regional Data Automation Center (NARDAC) chargeback System

    DTIC Science & Technology

    1986-09-01

    addition, operational control is concerned with performing predefined activities whereas management control relates to the organization's goals and...In effect, the management control system monitors the progress of operations and alerts the "appropriate management level" when performance as measured...architecture, the financial control processes, and the audit function (Brandon, 1978; Anderson, 1983). In an operating DP environment, however, non-financial

  14. Study of operator's information in the course of complicated and combined activity

    NASA Astrophysics Data System (ADS)

    Krylova, N. V.; Bokovikov, A. K.

    1982-08-01

    Speech characteristics of operators performing control and observation operations, information reception, transmission, and processing, and decision making while exposed to the real stress of parachute jumps were investigated. Form and content speech characteristics whose variations reflect the level of operators' adaptation to stressful activities are reported.

  15. An Approach to Realizing Process Control for Underground Mining Operations of Mobile Machines

    PubMed Central

    Song, Zhen; Schunnesson, Håkan; Rinne, Mikael; Sturgul, John

    2015-01-01

    The excavation and production in underground mines are complicated processes consisting of many different operations. The process of underground mining is considerably constrained by the geometry and geology of the mine. The various mining operations are normally performed in series at each working face. The delay of a single operation will lead to a domino effect, delaying the starting time for the next process and the completion time of the entire process. This paper presents a new approach to process control for underground mining operations, e.g., drilling, bolting, and mucking. This approach can estimate the working time and its probability for each operation more efficiently and objectively by improving the existing PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method). If the delay of the critical operation (one on a critical path) inevitably affects the productivity of mined ore, the approach can rapidly assign mucking machines new jobs to keep this amount at a maximum level by using a new mucking algorithm under external constraints. PMID:26062092
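
    The PERT side of the approach is easy to illustrate. A minimal sketch with invented three-point estimates for serial operations at one working face: expected time te = (a + 4m + b) / 6, variance ((b - a) / 6)^2, and a normal approximation for the probability of finishing by a deadline.

    ```python
    import math

    # Hypothetical (optimistic, most likely, pessimistic) times in minutes.
    ops = {"drilling": (45, 60, 90), "bolting": (30, 40, 65), "mucking": (50, 70, 110)}

    te = {k: (a + 4 * m + b) / 6 for k, (a, m, b) in ops.items()}   # expected times
    var = {k: ((b - a) / 6) ** 2 for k, (a, m, b) in ops.items()}   # variances

    mean = sum(te.values())             # operations run in series
    sigma = math.sqrt(sum(var.values()))
    deadline = 190.0                    # minutes before the next cycle
    p = 0.5 * (1 + math.erf((deadline - mean) / (sigma * math.sqrt(2))))
    print(f"expected {mean:.1f} min, sd {sigma:.1f}, P(<= {deadline:.0f} min) = {p:.2f}")
    ```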

  16. Safety Verification of the Small Aircraft Transportation System Concept of Operations

    NASA Technical Reports Server (NTRS)

    Carreno, Victor; Munoz, Cesar

    2005-01-01

    A critical factor in the adoption of any new aeronautical technology or concept of operation is safety. Traditionally, safety is established through a rigorous process that involves human factors, low- and high-fidelity simulations, and flight experiments. As this process is usually performed on final products or functional prototypes, concept modifications resulting from it are very expensive to implement. This paper describes an approach to system safety that can take place at early stages of a concept design. It is based on a set of mathematical techniques and tools known as formal methods. In contrast to testing and simulation, formal methods provide the capability of exhaustive state-exploration analysis. We present the safety analysis and verification performed for the Small Aircraft Transportation System (SATS) Concept of Operations (ConOps). The concept of operations is modeled using discrete and hybrid mathematical models, which are then analyzed using formal methods. The objective of the analysis is to show, in a mathematical framework, that the concept of operations complies with a set of safety requirements. It is also shown that the ConOps has desirable characteristics such as liveness and absence of deadlock. The analysis and verification are performed in the Prototype Verification System (PVS), a computer-based specification language and theorem-proving assistant.

  17. A Semi-Preemptive Garbage Collector for Solid State Drives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Junghee; Kim, Youngjae; Shipman, Galen M

    2011-01-01

    NAND flash memory is a preferred storage medium for various platforms ranging from embedded systems to enterprise-scale systems. Flash devices do not have any mechanical moving parts and provide low-latency access. They also require less power compared to rotating media. Unlike hard disks, flash devices use out-of-place update operations, and they require a garbage collection (GC) process to reclaim invalid pages and create free blocks. This GC process is a major cause of performance degradation when running concurrently with other I/O operations, as internal bandwidth is consumed to reclaim these invalid pages. The invocation of the GC process is generally governed by a low watermark on free blocks and other internal device metrics that different workloads meet at different intervals. This results in I/O performance that is highly dependent on workload characteristics. In this paper, we examine the GC process and propose a semi-preemptive GC scheme that can preempt on-going GC processing and service pending I/O requests in the queue. Moreover, we further enhance flash performance by pipelining internal GC operations and merging them with pending I/O requests whenever possible. Our experimental evaluation of this semi-preemptive GC scheme with realistic workloads demonstrates both improved performance and reduced performance variability. Write-dominant workloads show up to a 66.56% improvement in average response time with an 83.30% reduced variance in response time compared to the non-preemptive GC scheme.
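
    The core idea is that GC becomes preemptible at page granularity. A toy sketch (not the paper's flash translation layer, just the scheduling idea): between relocating valid pages out of a victim block, the collector drains any I/O requests that arrived in the queue, so host latency is bounded by one page move rather than a whole block's cleaning.

    ```python
    from collections import deque

    def semi_preemptive_gc(valid_pages, io_queue, service, relocate):
        for page in valid_pages:
            while io_queue:                  # preemption point between moves
                service(io_queue.popleft())  # pending host I/O goes first
            relocate(page)                   # copy one valid page, then recheck
        # caller may erase the victim block once all valid pages are moved

    io_queue = deque(["read A", "write B"])
    semi_preemptive_gc([f"page{i}" for i in range(4)], io_queue,
                       service=lambda r: print("serviced", r),
                       relocate=lambda p: print("relocated", p))
    ```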

  18. OPERATOR BURDEN IN METAL ADDITIVE MANUFACTURING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, Amy M; Love, Lonnie J

    2016-01-01

    Additive manufacturing (AM) is an emerging manufacturing process that creates usable machine parts via layer-by-layer joining of a stock material. With this layer-wise approach, high-performance geometries can be created that are impossible with traditional manufacturing methods. Metal AM technology has the potential to significantly reduce the manufacturing burden of developing custom hardware; however, a major consideration in choosing a metal AM system is the required amount of operator involvement (i.e., operator burden) in the manufacturing process. The operator burden determines not only the amount of operator training and specialization required but also the usability of the system in a facility. As operators of several metal AM processes, the Manufacturing Demonstration Facility (MDF) at Oak Ridge National Laboratory is uniquely poised to provide insight into the requirements for operator involvement in each of the three major metal AM processes. The paper covers an overview of each of the three metal AM technologies, focusing on the burden on the operator to complete the build cycle, process the part for final use, and reset the AM equipment for future builds.

  19. Optimizing the availability of a buffered industrial process

    DOEpatents

    Martz, Jr., Harry F.; Hamada, Michael S.; Koehler, Arthur J.; Berg, Eric C.

    2004-08-24

    A computer-implemented process determines optimum configuration parameters for a buffered industrial process. A population size is initialized by randomly selecting a first set of design and operation values associated with subsystems and buffers of the buffered industrial process to form a set of operating parameters for each member of the population. An availability discrete event simulation (ADES) is performed on each member of the population to determine the product-based availability of each member. A new population is formed having members with a second set of design and operation values related to the first set of design and operation values through a genetic algorithm and the product-based availability determined by the ADES. Subsequent population members are then determined by iterating the genetic algorithm with product-based availability determined by ADES to form improved design and operation values from which the configuration parameters are selected for the buffered industrial process.
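
    A compact sketch of the claimed loop: a genetic algorithm whose fitness function calls an availability simulation. The "ADES" here is a crude stand-in formula, and every constant is invented; only the select/crossover/mutate/re-evaluate structure mirrors the patent's description.

    ```python
    import random

    def ades(cfg):
        """Toy stand-in for the availability discrete event simulation."""
        mtbf, mttr, buffer_size = cfg
        availability = mtbf / (mtbf + mttr)
        buffering = min(1.0, 0.9 + 0.01 * buffer_size)   # buffers mask downtime
        cost_penalty = 0.0001 * (mtbf + 10 * buffer_size)
        return availability * buffering - cost_penalty

    def random_cfg():
        return [random.uniform(100, 500), random.uniform(2, 20), random.randint(0, 10)]

    def crossover(p, q):
        return [random.choice(genes) for genes in zip(p, q)]

    def mutate(cfg):
        cfg = list(cfg)
        i = random.randrange(3)
        cfg[i] = random_cfg()[i]   # resample one design/operation value
        return cfg

    pop = [random_cfg() for _ in range(30)]
    for generation in range(50):
        pop.sort(key=ades, reverse=True)
        elite = pop[:10]           # selection by simulated availability
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(20)]
    print("best configuration:", [round(x, 2) for x in max(pop, key=ades)])
    ```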

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roger Lew; Ronald L. Boring; Thomas A. Ulrich

    Operators of critical processes, such as nuclear power production, must contend with highly complex systems, procedures, and regulations. Developing human-machine interfaces (HMIs) that better support operators is a high priority for ensuring the safe and reliable operation of critical processes. Human factors engineering (HFE) provides a rich and mature set of tools for evaluating the performance of HMIs, but the set of tools for developing and designing HMIs is still in its infancy. Here we propose that Microsoft Windows Presentation Foundation (WPF) is well suited for many roles in the research and development of HMIs for process control.

  2. Optimizing Blocking and Nonblocking Reduction Operations for Multicore Systems: Hierarchical Design and Implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorentla Venkata, Manjunath; Shamis, Pavel; Graham, Richard L

    2013-01-01

    Many scientific simulations, using the Message Passing Interface (MPI) programming model, are sensitive to the performance and scalability of reduction collective operations such as MPI Allreduce and MPI Reduce. These operations are the most widely used abstractions to perform mathematical operations over all processes that are part of the simulation. In this work, we propose a hierarchical design to implement the reduction operations on multicore systems. This design aims to improve the efficiency of reductions by 1) tailoring the algorithms and customizing the implementations for various communication mechanisms in the system, 2) providing the ability to configure the depth of the hierarchy to match the system architecture, and 3) providing the ability to independently progress each level of the hierarchy. Using this design, we implement MPI Allreduce and MPI Reduce operations (and their nonblocking variants MPI Iallreduce and MPI Ireduce) for all message sizes, and evaluate them on multiple architectures including InfiniBand and Cray XT5. We leverage and enhance our existing infrastructure, Cheetah, a framework for implementing hierarchical collective operations, to implement these reductions. The experimental results show that the Cheetah reduction operations outperform production-grade MPI implementations such as Open MPI default, Cray MPI, and MVAPICH2, demonstrating their efficiency, flexibility, and portability. On InfiniBand systems, with a microbenchmark, a 512-process Cheetah nonblocking Allreduce and Reduce achieve speedups of 23x and 10x, respectively, compared to the default Open MPI reductions. The blocking variants of the reduction operations show similar performance benefits. A 512-process nonblocking Cheetah Allreduce achieves a speedup of 3x compared to the default MVAPICH2 Allreduce implementation. On a Cray XT5 system, a 6144-process Cheetah Allreduce outperforms Cray MPI by 145%. The evaluation with an application kernel, a Conjugate Gradient solver, shows that the Cheetah reductions speed up total time to solution by 195%, demonstrating the potential benefits for scientific simulations.
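
    The hierarchical pattern (reduce within a node over shared memory, allreduce across node leaders over the network, then fan out) can be illustrated with mpi4py communicator splits. This is a sketch of the general idea only, not the Cheetah implementation.

        from mpi4py import MPI
        import numpy as np

        world = MPI.COMM_WORLD
        node = world.Split_type(MPI.COMM_TYPE_SHARED)   # intra-node communicator
        color = 0 if node.rank == 0 else MPI.UNDEFINED
        leaders = world.Split(color, world.rank)        # one leader per node

        contribution = np.random.rand(4)
        partial = np.zeros_like(contribution)
        node.Reduce(contribution, partial, op=MPI.SUM, root=0)  # level 1: shared memory
        if node.rank == 0:
            total = np.zeros_like(partial)
            leaders.Allreduce(partial, total, op=MPI.SUM)       # level 2: network
            partial = total
        node.Bcast(partial, root=0)                             # level 3: fan-out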

  3. Materials management: stretching the "household" budget.

    PubMed

    Carpe, R H; Carroll, P E

    1987-11-01

    As CFOs assume responsibility for the materials management function because of the potential to maximize cash flow, achieve economies of scale, decrease costs, and streamline operations, they look for guidelines to evaluate performance. Conducting a systems operations audit can aid in assessing that performance. CFOs can determine whether materials management processes are working "smarter, not harder."

  4. Identification of High Performance, Low Environmental Impact Materials and Processes Using Systematic Substitution (SyS)

    NASA Technical Reports Server (NTRS)

    Dhooge, P. M.; Nimitz, J. S.

    2001-01-01

    Process analysis can identify opportunities for efficiency improvement including cost reduction, increased safety, improved quality, and decreased environmental impact. A thorough, systematic approach to materials and process selection is valuable in any analysis. New operations and facilities design offer the best opportunities for proactive cost reduction and environmental improvement, but existing operations and facilities can also benefit greatly. Materials and processes that have been used for many years may be sources of excessive resource use, waste generation, pollution, and cost burden that should be replaced. Operational and purchasing personnel may not recognize some materials and processes as problems. Reasons for materials or process replacement may include quality and efficiency improvements, excessive resource use and waste generation, materials and operational costs, safety (flammability or toxicity), pollution prevention, compatibility with new processes or materials, and new or anticipated regulations.

  5. Approach for Configuring a Standardized Vessel for Processing Radioactive Waste Slurries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bamberger, Judith A.; Enderlin, Carl W.; Minette, Michael J.

    2015-09-10

    A standardized vessel design is being considered at the Waste Treatment and Immobilization Plant (WTP) that is under construction at Hanford, Washington. The standardized vessel design will be used for storing, blending, and chemical processing of slurries that exhibit variable process feeds ranging from Newtonian to non-Newtonian rheologies over a range of solids loadings. Developing a standardized vessel is advantageous and reduces the testing required to evaluate the performance of the design. The objectives of this paper are to: 1) present a design strategy for developing a standard vessel mixing system design for the pretreatment portion of the waste treatment plant that must process rheologically and physically challenging process streams, 2) identify performance criteria that the design for the standard vessel must satisfy, 3) present parameters that are to be used for assessing the performance criteria, and 4) describe operation of the selected technology. Vessel design performance will be assessed for both Newtonian and non-Newtonian simulants which represent the range of waste types expected during operation. Desired conditions for vessel operations are the ability to shear the slurry so that flammable gas does not accumulate within the vessel, that settled solids will be mobilized, that contents can be blended, and that contents can be transferred from the vessel. A strategy is presented for adjusting the vessel configuration to ensure that all these conditions are met.

  6. 40 CFR 63.1312 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ....111) Owner or operator (§ 63.2) Performance evaluation (§ 63.2) Performance test (§ 63.2) Permitting...) Research and development facility (§ 63.101) Routed to a process or route to a process (§ 63.161) Run (§ 63... vessel (§ 63.161) Temperature monitoring device (§ 63.111) Test method (§ 63.2) Treatment process (§ 63...

  7. Objective assessment of operator performance during ultrasound-guided procedures.

    PubMed

    Tabriz, David M; Street, Mandie; Pilgram, Thomas K; Duncan, James R

    2011-09-01

    Simulation permits objective assessment of operator performance in a controlled and safe environment. Image-guided procedures often require accurate needle placement, and we designed a system to monitor how ultrasound guidance is used to monitor needle advancement toward a target. The results were correlated with other estimates of operator skill. The simulator consisted of a tissue phantom, ultrasound unit, and electromagnetic tracking system. Operators were asked to guide a needle toward a visible point target. Performance was video-recorded and synchronized with the electromagnetic tracking data. A series of algorithms based on motor control theory and human information processing were used to convert raw tracking data into different performance indices. Scoring algorithms converted the tracking data into efficiency, quality, task difficulty, and targeting scores that were aggregated to create performance indices. After initial feasibility testing, a standardized assessment was developed. Operators (N = 12) with a broad spectrum of skill and experience were enrolled and tested. Overall scores were based on performance during ten simulated procedures. Prior clinical experience was used to independently estimate operator skill. When summed, the performance indices correlated well with estimated skill. Operators with minimal or no prior experience scored markedly lower than experienced operators. The overall score tended to increase according to operator's clinical experience. Operator experience was linked to decreased variation in multiple aspects of performance. The aggregated results of multiple trials provided the best correlation between estimated skill and performance. A metric for the operator's ability to maintain the needle aimed at the target discriminated between operators with different levels of experience. This study used a highly focused task model, standardized assessment, and objective data analysis to assess performance during simulated ultrasound-guided needle placement. The performance indices were closely related to operator experience.

  8. Operator Performance Measures for Assessing Voice Communication Effectiveness

    DTIC Science & Technology

    1989-07-01

    performance and workload assessment techniques have been based. Broadbent (1958) described a limited capacity filter model of human information processing. [Report excerpt; the table of contents covers auditory information processing (auditory attention, auditory memory), models of information processing (capacity theories), and operator topics including learning, attention, language specialization, decision making, and problem solving.]

  9. Comparative Effects of Antihistamines on Aircrew Mission Effectiveness under Sustained Operations

    DTIC Science & Technology

    1992-06-01

    measures consist mainly of process measures. Process measures are measures of activities used to accomplish the mission and produce the final results... They include task completion times, response variability, and information processing rates as they relate to unique task assignments. Performance... contains process measures that assess the individual contributions of hardware/software and human components to overall system performance.

  10. Strategies for Maximizing Successful Drug Substance Technology Transfer Using Engineering, Shake-Down, and Wet Test Runs.

    PubMed

    Abraham, Sushil; Bain, David; Bowers, John; Larivee, Victor; Leira, Francisco; Xie, Jasmina

    2015-01-01

    The technology transfer of biological products is a complex process requiring control of multiple unit operations and parameters to ensure product quality and process performance. To achieve product commercialization, the technology transfer sending unit must successfully transfer knowledge about both the product and the process to the receiving unit. A key strategy for maximizing successful scale-up and transfer efforts is the effective use of engineering and shake-down runs to confirm operational performance and product quality prior to embarking on good manufacturing practice runs such as process performance qualification runs. We discuss key factors to consider when deciding whether to perform shake-down or engineering runs. We also present industry benchmarking results on how engineering runs are used in drug substance technology transfers, alongside the main themes and best practices that have emerged. Our goal is to provide companies with a framework for ensuring "right first time" technology transfers with effective deployment of resources within increasingly aggressive timeline constraints. © PDA, Inc. 2015.

  11. Multidimensional Profiling of Task Stress States for Human Factors: A Brief Review.

    PubMed

    Matthews, Gerald

    2016-09-01

    This article advocates multidimensional assessment of task stress in human factors and reviews the use of the Dundee Stress State Questionnaire (DSSQ) for evaluation of systems and operators. Contemporary stress research has progressed from an exclusive focus on environmental stressors to transactional perspectives on the stress process. Performance impacts of stress reflect the operator's dynamic attempts to understand and cope with task demands. Multidimensional stress assessments are necessary to gauge the different forms of system-operator interaction. This review discusses the theoretical and practical use of the DSSQ in evaluating multidimensional patterns of stress response. It presents psychometric evidence for the multidimensional perspective and illustrative profiles of subjective state response to task stressors and environments. Evidence is also presented on stress state correlations with related variables, including personality, stress process measures, psychophysiological response, and objective task performance. Evidence supports the validity of the DSSQ as a task stress measure. Studies of various simulated environments show that different tasks elicit different profiles of stress state response. Operator characteristics such as resilience predict individual differences in state response to stressors. Structural equation modeling may be used to understand performance impacts of stress states. Multidimensional assessment affords insight into the stress process in a variety of human factors contexts. Integrating subjective and psychophysiological assessment is a priority for future research. Stress state measurement contributes to evaluating system design, countermeasures to stress and fatigue, and performance vulnerabilities. It may also support personnel selection and diagnostic monitoring of operators. © 2016, Human Factors and Ergonomics Society.

  12. DMA engine for repeating communication patterns

    DOEpatents

    Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard; Vranas, Pavlos

    2010-09-21

    A parallel computer system is constructed as a network of interconnected compute nodes to operate a global message-passing application for performing communications across the network. Each of the compute nodes includes one or more individual processors with memories which run local instances of the global message-passing application operating at each compute node to carry out local processing operations independent of processing operations carried out at other compute nodes. Each compute node also includes a DMA engine constructed to interact with the application via Injection FIFO Metadata describing multiple Injection FIFOs, where each Injection FIFO may contain an arbitrary number of message descriptors, in order to process messages with a fixed processing overhead irrespective of the number of message descriptors included in the Injection FIFO.
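
    The fixed-overhead property comes from the engine consuming a whole FIFO of descriptors per activation. A toy model is sketched below; the class and field names are illustrative, not the patent's.

        from collections import deque
        from dataclasses import dataclass

        @dataclass
        class Descriptor:                # one message descriptor
            dest: int
            payload: bytes

        class InjectionFifo:
            def __init__(self):
                self.descriptors = deque()
            def inject(self, d):
                self.descriptors.append(d)

        def dma_process(fifo, send):
            # One activation drains however many descriptors are present,
            # so per-activation overhead is independent of their count.
            while fifo.descriptors:
                d = fifo.descriptors.popleft()
                send(d.dest, d.payload)

        fifo = InjectionFifo()
        fifo.inject(Descriptor(dest=3, payload=b"hello"))
        dma_process(fifo, lambda dest, data: print("send", dest, data))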

  13. Real-time processing of dual band HD video for maintaining operational effectiveness in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Parker, Steve C. J.; Hickman, Duncan L.; Smith, Moira I.

    2015-05-01

    Effective reconnaissance, surveillance and situational awareness, using dual band sensor systems, require the extraction, enhancement and fusion of salient features, with the processed video being presented to the user in an ergonomic and interpretable manner. HALO™ is designed to meet these requirements and provides an affordable, real-time, and low-latency image fusion solution on a low size, weight and power (SWAP) platform. The system has been progressively refined through field trials to increase its operating envelope and robustness. The result is a video processor that improves detection, recognition and identification (DRI) performance, whilst lowering operator fatigue and reaction times in complex and highly dynamic situations. This paper compares the performance of HALO™, both qualitatively and quantitatively, with conventional blended fusion for operation in degraded visual environments (DVEs), such as those experienced during ground and air-based operations. Although image blending provides a simple fusion solution, which explains its common adoption, the results presented demonstrate that its performance is poor compared to the HALO™ fusion scheme in DVE scenarios.

  14. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2013-02-12

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: performing, for each node, a local reduction operation using allreduce contribution data for the cores of that node, yielding, for each node, a local reduction result for one or more representative cores for that node; establishing one or more logical rings among the nodes, each logical ring including only one of the representative cores from each node; performing, for each logical ring, a global allreduce operation using the local reduction result for the representative cores included in that logical ring, yielding a global allreduce result for each representative core included in that logical ring; and performing, for each node, a local broadcast operation using the global allreduce results for each representative core on that node.
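
    A toy model of one logical ring from this patent text is sketched below: each position holds a node's local reduction result, and after n-1 exchange steps every representative core holds the global sum. Buffering and communication overlap are omitted.

        def ring_allreduce(values):
            n = len(values)
            totals = list(values)
            for step in range(n - 1):
                # each position accumulates the value arriving from its neighbor
                totals = [totals[i] + values[(i - step - 1) % n] for i in range(n)]
            return totals

        # Local reduction results from four compute nodes -> global sum everywhere.
        print(ring_allreduce([1, 2, 3, 4]))   # [10, 10, 10, 10]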

  15. Range Process Simulation Tool

    NASA Technical Reports Server (NTRS)

    Phillips, Dave; Haas, William; Barth, Tim; Benjamin, Perakath; Graul, Michael; Bagatourova, Olga

    2005-01-01

    Range Process Simulation Tool (RPST) is a computer program that assists managers in rapidly predicting and quantitatively assessing the operational effects of proposed technological additions to, and/or upgrades of, complex facilities and engineering systems such as the Eastern Test Range. Originally designed for application to space transportation systems, RPST is also suitable for assessing effects of proposed changes in industrial facilities and large organizations. RPST follows a model-based approach that includes finite-capacity schedule analysis and discrete-event process simulation. A component-based, scalable, open architecture makes RPST easily and rapidly tailorable for diverse applications. Specific RPST functions include: (1) definition of analysis objectives and performance metrics; (2) selection of process templates from a process-template library; (3) configuration of process models for detailed simulation and schedule analysis; (4) design of operations-analysis experiments; (5) schedule and simulation-based process analysis; and (6) optimization of performance by use of genetic algorithms and simulated annealing. The main benefits afforded by RPST are provision of information that can be used to reduce costs of operation and maintenance, and the capability for affordable, accurate, and reliable prediction and exploration of the consequences of many alternative proposed decisions.
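
    To give a flavor of the discrete-event portion, here is a minimal single-resource simulation sketch (illustrative only; RPST's models are far richer): jobs arrive at random, queue for one resource, and the mean wait serves as the performance metric for a candidate configuration.

        import random

        def simulate(arrival_rate=1.0, service_time=0.8, horizon=1000.0, seed=0):
            rng = random.Random(seed)
            t, busy_until, waits = 0.0, 0.0, []
            while t < horizon:
                t += rng.expovariate(arrival_rate)   # next arrival time
                start = max(t, busy_until)           # wait if resource is busy
                waits.append(start - t)
                busy_until = start + service_time    # occupy the resource
            return sum(waits) / len(waits)           # mean wait as the metric

        print(f"mean wait: {simulate():.2f}")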

  16. Insertion of operation-and-indicate instructions for optimized SIMD code

    DOEpatents

    Eichenberger, Alexander E; Gara, Alan; Gschwind, Michael K

    2013-06-04

    Mechanisms are provided for inserting indicated instructions for tracking and indicating exceptions in the execution of vectorized code. A portion of first code is received for compilation. The portion of first code is analyzed to identify non-speculative instructions performing designated non-speculative operations in the first code that are candidates for replacement by replacement operation-and-indicate instructions that perform the designated non-speculative operations and further perform an indication operation for indicating any exception conditions corresponding to special exception values present in vector register inputs to the replacement operation-and-indicate instructions. The replacement is performed and second code is generated based on the replacement of the at least one non-speculative instruction. The data processing system executing the compiled code is configured to store special exception values in vector output registers, in response to a speculative instruction generating an exception condition, without initiating exception handling.

  17. Primary and Secondary Lithium Batteries Capable of Operating at Low Temperatures for Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Smart, M. C.; Ratnakumar, B. V.; West, W. C.; Brandon, E. J.

    2011-01-01

    Objectives and Approach: (1) Develop advanced Li-ion electrolytes that enable cell operation over a wide temperature range (i.e., -60 to +60 C). Improve the high-temperature stability and lifetime characteristics of wide operating temperature electrolytes. (2) Define the performance limitations at low and high temperature extremes, as well as life-limiting processes. (3) Demonstrate the performance of advanced electrolytes in large-capacity prototype cells.

  18. Comparison of performance and operation of side-by-side integrated fixed-film and conventional activated sludge processes at demonstration scale.

    PubMed

    Stricker, Anne-Emmanuelle; Barrie, Ashley; Maas, Carol L A; Fernandes, William; Lishman, Lori

    2009-03-01

    A full-scale demonstration of an integrated fixed-film activated sludge (IFFAS) process with floating carriers has been conducted in Ontario, Canada, since August 2003. In this study, data collected on-site from July 2005 to December 2006 are analyzed and compared with the performance of a conventional activated sludge train operated in parallel. Both trains received similar loadings and maintained comparable mixed liquor concentrations; however, the IFFAS had 50% more biomass when the attached growth was considered. In the winter, the conventional train operated at the critical solids retention time (SRT) and had fluctuating partial nitrification. The IFFAS nitrified more consistently and had double the average capacity. In the summer, the suspended SRT was less limiting, and the benefit of IFFAS for nitrification was marginal. The lessons learned from the operational requirements and challenges of the IFFAS process (air flow, carrier management, and seasonal foaming) are discussed, and design recommendations are proposed for whole plant retrofit.

  19. Rejuvenation of automotive fuel cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Yu Seung; Langlois, David A.

    A process for rejuvenating fuel cells has been demonstrated to improve the performance of polymer electrolyte membrane fuel cells with platinum/ionomer electrodes. The process involves dehydrating a fuel cell and exposing at least the cathode of the fuel cell to dry gas (nitrogen, for example) at a temperature higher than the operating temperature of the fuel cell. The process may be used to prolong the operating lifetime of an automotive fuel cell.

  20. [Monitoring method for macroporous resin column chromatography process of salvianolic acids based on near infrared spectroscopy].

    PubMed

    Hou, Xiang-Mei; Zhang, Lei; Yue, Hong-Shui; Ju, Ai-Chun; Ye, Zheng-Liang

    2016-07-01

    To study and establish a monitoring method for the macroporous resin column chromatography process of salvianolic acids using near infrared spectroscopy (NIR) as a process analytical technology (PAT). The multivariate statistical process control (MSPC) model was developed based on 7 normal operation batches, and 2 test batches (one normal operation batch and one abnormal operation batch) were used to verify the monitoring performance of this model. The results showed that the MSPC model had a good monitoring ability for the column chromatography process. Meanwhile, an NIR quantitative calibration model was established for three key quality indexes (rosmarinic acid, lithospermic acid, and salvianolic acid B) by using the partial least squares (PLS) algorithm. The verification results demonstrated that this model had satisfactory prediction performance. The combined application of the two models could effectively achieve real-time monitoring of the macroporous resin column chromatography process of salvianolic acids and could be used to conduct on-line analysis of the key quality indexes. The established process monitoring method could serve as a reference for developing process analytical technology for traditional Chinese medicine manufacturing. Copyright© by the Chinese Pharmaceutical Association.
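
    The pairing of a PLS calibration with a simple residual-based control limit can be sketched as follows; the data are synthetic stand-ins for NIR spectra, and the 3-sigma residual limit is a simplification of full MSPC charting.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(70, 200))     # 70 calibration spectra x 200 wavelengths
        y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=70)  # a quality index

        pls = PLSRegression(n_components=5).fit(X, y)
        residuals = y - pls.predict(X).ravel()
        ucl = 3 * residuals.std()          # naive 3-sigma residual control limit

        x_new = rng.normal(size=(1, 200))  # one in-process spectrum
        y_hat = pls.predict(x_new).ravel()[0]
        print(f"predicted index {y_hat:.2f}; alarm if |residual| > {ucl:.2f}")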

  1. Optimization of startup and shutdown operation of simulated moving bed chromatographic processes.

    PubMed

    Li, Suzhou; Kawajiri, Yoshiaki; Raisch, Jörg; Seidel-Morgenstern, Andreas

    2011-06-24

    This paper presents new multistage optimal startup and shutdown strategies for simulated moving bed (SMB) chromatographic processes. The proposed concept allows transient operating conditions to be adjusted stage-wise, and provides the capability to improve transient performance and fulfill product quality specifications simultaneously. A specially tailored decomposition algorithm is developed to ensure computational tractability of the resulting dynamic optimization problems. By examining the transient operation of a literature separation example characterized by a nonlinear competitive isotherm, the feasibility of the solution approach is demonstrated, and the performance of the conventional and multistage optimal transient regimes is evaluated systematically. The quantitative results clearly show that the optimal operating policies not only significantly reduce both the duration of the transient phase and desorbent consumption, but also enable on-spec production even during startup and shutdown periods. With the aid of the developed transient procedures, short-term separation campaigns with small batch sizes can be performed more flexibly and efficiently by SMB chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Bench Scale Process for Low Cost CO 2 Capture Using a Phase-Changing Absorbent: Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westendorf, Tiffany; Buddle, Stanlee; Caraher, Joel

    The objective of this project is to design and build a bench-scale process for a novel phase-changing aminosilicone-based CO2-capture solvent. The project will establish scalability and technical and economic feasibility of using a phase-changing CO2-capture absorbent for post-combustion capture of CO2 from coal-fired power plants. The U.S. Department of Energy's goal for Transformational Carbon Capture Technologies is the development of technologies available for demonstration by 2025 that can capture 90% of emitted CO2 with at least 95% CO2 purity for less than $40/tonne of CO2 captured. In the first budget period of the project, the bench-scale phase-changing CO2 capture process was designed using data and operating experience generated under a previous project (ARPA-e project DE-AR0000084). Sizing and specification of all major unit operations was completed, including detailed process and instrumentation diagrams. The system was designed to operate over a wide range of operating conditions to allow for exploration of the effect of process variables on CO2 capture performance. In the second budget period of the project, individual bench-scale unit operations were tested to determine the performance of each unit. Solids production was demonstrated in dry simulated flue gas across a wide range of absorber operating conditions, with single-stage CO2 conversion rates up to 75 mol%. Desorber operation was demonstrated in batch mode, resulting in desorption performance consistent with the equilibrium isotherms for the GAP-0/CO2 reaction. Important risks associated with the impact of gas humidity on solids consistency and of desorber temperature on thermal degradation were explored, and adjustments to the bench-scale process were made to address those effects. Corrosion experiments were conducted to support selection of suitable materials of construction for the major unit operations in the process. The bench-scale unit operations were assembled into a continuous system to support steady-state system testing. In the third budget period of the project, continuous system testing was conducted, including closed-loop operation of the absorber and desorber systems. Slurries of GAP-0/GAP-0 carbamate/water mixtures produced in the absorber were pumped successfully to the desorber unit, and regenerated solvent was returned to the absorber. A techno-economic analysis, EH&S risk assessment, and solvent manufacturability study were completed.

  3. Virtual fixtures as tools to enhance operator performance in telepresence environments

    NASA Astrophysics Data System (ADS)

    Rosenberg, Louis B.

    1993-12-01

    This paper introduces the notion of virtual fixtures for use in telepresence systems and presents an empirical study which demonstrates that such virtual fixtures can greatly enhance operator performance within remote environments. Just as tools and fixtures in the real world can enhance human performance by guiding manual operations, providing localizing references, and reducing the mental processing required to perform a task, virtual fixtures are computer generated percepts overlaid on top of the reflection of a remote workspace which can provide similar benefits. Like a ruler guiding a pencil in a real manipulation task, a virtual fixture overlaid on top of a remote workspace can act to reduce the mental processing required to perform a task, limit the workload of certain sensory modalities, and most of all allow precision and performance to exceed natural human abilities. Because such perceptual overlays are virtual constructions they can be diverse in modality, abstract in form, and custom tailored to individual task or user needs. This study investigates the potential of virtual fixtures by implementing simple combinations of haptic and auditory sensations as perceptual overlays during a standardized telemanipulation task.

  4. Workload Capacity: A Response Time-Based Measure of Automation Dependence.

    PubMed

    Yamani, Yusuke; McCarley, Jason S

    2016-05-01

    An experiment used the workload capacity measure C(t) to quantify the processing efficiency of human-automation teams and identify operators' automation usage strategies in a speeded decision task. Although response accuracy rates and related measures are often used to measure the influence of an automated decision aid on human performance, aids can also influence response speed. Mean response times (RTs), however, conflate the influence of the human operator and the automated aid on team performance and may mask changes in the operator's performance strategy under aided conditions. The present study used a measure of parallel processing efficiency, or workload capacity, derived from empirical RT distributions as a novel gauge of human-automation performance and automation dependence in a speeded task. Participants performed a speeded probabilistic decision task with and without the assistance of an automated aid. RT distributions were used to calculate two variants of a workload capacity measure, COR(t) and CAND(t). Capacity measures gave evidence that a diagnosis from the automated aid speeded human participants' responses, and that participants did not moderate their own decision times in anticipation of diagnoses from the aid. Workload capacity provides a sensitive and informative measure of human-automation performance and operators' automation dependence in speeded tasks. © 2016, Human Factors and Ergonomics Society.
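
    In the workload capacity literature, the OR capacity coefficient is commonly computed as C_OR(t) = H_team(t) / (H_human(t) + H_aid(t)), with H the cumulative hazard estimated from empirical RT distributions. The sketch below uses -log of the empirical survivor function as that estimator; whether this matches the paper's exact computation is an assumption.

        import numpy as np

        def cumulative_hazard(rts, t):
            survivor = np.mean(np.asarray(rts) > t)     # P(RT > t)
            return -np.log(survivor) if survivor > 0 else np.inf

        def c_or(team_rts, human_rts, aid_rts, t):
            denom = cumulative_hazard(human_rts, t) + cumulative_hazard(aid_rts, t)
            return cumulative_hazard(team_rts, t) / denom

        # Synthetic RTs in seconds; C_OR > 1 suggests efficient parallel processing,
        # C_OR < 1 suggests a cost of dependence on the aid.
        team = [0.42, 0.51, 0.55, 0.63]
        human = [0.55, 0.61, 0.70, 0.74]
        aid = [0.50, 0.58, 0.66, 0.72]
        print(round(c_or(team, human, aid, 0.6), 2))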

  5. Performance analysis of Supply Chain Management with Supply Chain Operation reference model

    NASA Astrophysics Data System (ADS)

    Hasibuan, Abdurrozzaq; Arfah, Mahrani; Parinduri, Luthfi; Hernawati, Tri; Suliawati; Harahap, Bonar; Rahmah Sibuea, Siti; Krianto Sulaiman, Oris; purwadi, Adi

    2018-04-01

    This research was conducted at PT. Shamrock Manufacturing Corpora; the company is required to think creatively and implement a competition strategy by producing goods and services that are higher in quality and cheaper. It is therefore necessary to measure Supply Chain Management performance in order to improve competitiveness, and the company is required to optimize its production output to meet export quality standards. This research begins with the creation of initial dimensions based on the Supply Chain Management processes, i.e., Plan, Source, Make, Deliver, and Return, with a hierarchy based on the Supply Chain Operations Reference model: Reliability, Responsiveness, Agility, Cost, and Asset. Key Performance Indicator identification becomes the benchmark in performance measurement, while Snorm De Boer normalization serves to equalize Key Performance Indicator values. An Analytical Hierarchy Process is carried out to assist in determining priority criteria. Measurement of Supply Chain Management performance at PT. Shamrock Manufacturing Corpora shows that Responsiveness (0.649) has a higher weight (priority) than the other alternatives. The performance analysis using the Supply Chain Operations Reference model indicates that Supply Chain Management performance at PT. Shamrock Manufacturing Corpora is good, as its monitoring scores fall between 50 and 100, which is classified as good.
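
    The Snorm (De Boer) normalization named above maps each KPI onto a common 0-100 scale so dissimilar indicators can be aggregated, with 50-100 then read as good performance. A small sketch with an illustrative KPI:

        def snorm(actual, s_min, s_max, larger_is_better=True):
            # Map a KPI onto 0-100; direction depends on whether larger is better.
            if larger_is_better:
                return (actual - s_min) / (s_max - s_min) * 100
            return (s_max - actual) / (s_max - s_min) * 100

        # Example: a delivery-reliability KPI of 92% against a 70-100% band.
        print(round(snorm(92, 70, 100), 1))   # 73.3, inside the 50-100 "good" range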

  6. Experimental analysis of robot-assisted needle insertion into porcine liver.

    PubMed

    Wang, Wendong; Shi, Yikai; Goldenberg, Andrew A; Yuan, Xiaoqing; Zhang, Peng; He, Lijing; Zou, Yingjie

    2015-01-01

    How to improve the placement accuracy of needle insertion into liver tissue is of paramount interest to physicians. A robot-assisted system was developed to experimentally demonstrate its advantages in needle insertion surgeries. In this study, experiments of needle insertion into porcine liver tissue were performed with a conic tip needle (diameter 8 mm) and a bevel tip needle (diameter 1.5 mm). Manual operation was used as a baseline against which to compare the performance of the presented robot-assisted system. The real-time force curves show clear advantages of robot-assisted operation in improving the controllability and stability of the needle insertion process compared with manual operation. The statistics of maximum force and average force further demonstrate that robot-assisted operation causes less oscillation. The difference in liver deformation created by manual operation and robot-assisted operation is very low: 1 mm for average deformation and 2 mm for maximum deformation. To conclude, the presented robot-assisted system can improve needle placement accuracy through stable control of the insertion process.

  7. Effects of Selected Task Performance Criteria at Initiating Adaptive Task Reallocations

    NASA Technical Reports Server (NTRS)

    Montgomery, Demaris A.

    2001-01-01

    In the current report various performance assessment methods used to initiate mode transfers between manual control and automation for adaptive task reallocation were tested. Participants monitored two secondary tasks for critical events while actively controlling a process in a fictional system. One of the secondary monitoring tasks could be automated whenever operators' performance was below acceptable levels. Automation of the secondary task and transfer of the secondary task back to manual control were either human- or machine-initiated. Human-initiated transfers were based on the operator's assessment of the current task demands while machine-initiated transfers were based on the operators' performance. Different performance assessment methods were tested in two separate experiments.

  8. Methods of Formation of Students Technological Competence in the Speciality "Garment Industry and Fashion Design"

    ERIC Educational Resources Information Center

    Zholdasbekova, S.; Karataev, G.; Yskak, A.; Zholdasbekov, A.; Nurzhanbaeva, J.

    2015-01-01

    This article describes the major components of required technological skills (TS) for future designers taught during the academic process of a college. It considers the choices in terms of the various logical operations required by the fashion industry including fabric processing, assembly charts, performing work operations, etc. The article…

  9. 49 CFR 7.44 - Services performed without charge or at a reduced charge.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... charged to any requestor making a request under subpart C of this part for the first two hours of search... search is required two hours of search time will be considered spent when the hourly costs of operating the central processing unit used to perform the search added to the computer operator's salary cost...

  10. 49 CFR 7.44 - Services performed without charge or at a reduced charge.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... charged to any requestor making a request under subpart C of this part for the first two hours of search... search is required two hours of search time will be considered spent when the hourly costs of operating the central processing unit used to perform the search added to the computer operator's salary cost...

  11. 49 CFR 7.44 - Services performed without charge or at a reduced charge.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... charged to any requestor making a request under subpart C of this part for the first two hours of search... search is required two hours of search time will be considered spent when the hourly costs of operating the central processing unit used to perform the search added to the computer operator's salary cost...

  12. The Endurance of Children's Working Memory: A Recall Time Analysis

    ERIC Educational Resources Information Center

    Towse, John N.; Hitch, Graham J.; Hamilton, Z.; Pirrie, Sarah

    2008-01-01

    We analyze the timing of recall as a source of information about children's performance in complex working memory tasks. A group of 8-year-olds performed a traditional operation span task in which sequence length increased across trials and an operation period task in which processing requirements were extended across trials of constant sequence…

  13. A multiprocessing architecture for real-time monitoring

    NASA Technical Reports Server (NTRS)

    Schmidt, James L.; Kao, Simon M.; Read, Jackson Y.; Weitzenkamp, Scott M.; Laffey, Thomas J.

    1988-01-01

    A multitasking architecture for performing real-time monitoring and analysis using knowledge-based problem solving techniques is described. To handle asynchronous inputs and perform in real time, the system consists of three or more distributed processes which run concurrently and communicate via a message passing scheme. The Data Management Process acquires, compresses, and routes the incoming sensor data to other processes. The Inference Process consists of a high performance inference engine that performs a real-time analysis on the state and health of the physical system. The I/O Process receives sensor data from the Data Management Process and status messages and recommendations from the Inference Process, updates its graphical displays in real time, and acts as the interface to the console operator. The distributed architecture has been interfaced to an actual spacecraft (NASA's Hubble Space Telescope) and is able to process the incoming telemetry in real time (i.e., several hundred data changes per second). The system is being used in two locations for different purposes: (1) in Sunnyvale, California, at the Space Telescope Test Control Center, it is used in the preflight testing of the vehicle; and (2) in Greenbelt, Maryland, at NASA/Goddard, it is being used on an experimental basis in flight operations for health and safety monitoring.
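
    The three-process, message-passing structure described above can be sketched with Python's multiprocessing queues; the telemetry source and the limit-check rule are placeholders, not the deployed system's logic.

        from multiprocessing import Process, Queue

        def data_management(raw, to_inference, to_io):
            for sample in iter(raw.get, None):       # acquire and route telemetry
                to_inference.put(sample)
                to_io.put(("data", sample))
            to_inference.put(None)                   # propagate shutdown

        def inference(to_inference, to_io):
            for sample in iter(to_inference.get, None):
                if sample > 100:                     # placeholder health rule
                    to_io.put(("alert", sample))
            to_io.put(None)

        def io_console(to_io):
            for kind, payload in iter(to_io.get, None):
                print(kind, payload)                 # operator display stand-in

        if __name__ == "__main__":
            raw, ti, tio = Queue(), Queue(), Queue()
            procs = [Process(target=data_management, args=(raw, ti, tio)),
                     Process(target=inference, args=(ti, tio)),
                     Process(target=io_console, args=(tio,))]
            for p in procs:
                p.start()
            for v in (42, 120, None):                # two samples, then shutdown
                raw.put(v)
            for p in procs:
                p.join()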

  14. Investigation of Capabilities and Technologies Supporting Rapid UAV Launch System Development

    DTIC Science & Technology

    2015-06-01

    Livesay, Patrick Alan (Naval Postgraduate School, Monterey, CA 93943). [Report excerpt] ...to operate. This enabled the launcher design team to more clearly determine and articulate system requirements and performance parameters. Next, an Analytic Hierarchy Process (AHP) was performed to prioritize the capabilities and assist in the decision-making process [1]. The AHP decision-analysis technique is...

  15. [Operational costs and control of performance in surgical clinics between marketing and planning economics. Risk or perhaps quadrature of the circle].

    PubMed

    Kraus, T W; Weber, W; Mieth, M; Funk, H; Klar, E; Herfarth, C

    2000-03-01

    Surgical hospitals can be seen as operational or even industrial production systems. Doctors have a major impact on both medical performance and costs. For active participation in the management process, knowledge of industrial controlling mechanisms is required. German hospitals currently receive no procedure-related financial revenues, such as prices or tariffs for defined medical treatment activities. Maximum clinical revenues are, furthermore, limited by principles of planned economy and can be increased only slightly by greater medical performance. Costs are the only target that can be autonomously influenced by management. Operative controlling in hospitals aims at horizontal and vertical coordination of subunits and decentralization of process regulations. Hospital medical performance is not clearly defined, and its quantitative measurement is highly problematic; process-oriented clinical activities are not taken into account. A high percentage of hospital costs are fixed and can be influenced only by major structural interventions in the long term. Variable costs are primarily dependent on the quantity of clinical activities, but are also heavily influenced by patient structure (comorbidity and risk profile). The various forms of industrial cost calculation, such as internal budgeting, internal markets, or flexible plan-cost balancing, cannot be directly applied in hospital management. Based on these analyses, current operational concepts and strategic trends are listed to describe cost-management options in hospitals, with a focus on the German health reforms.

  16. Numerical Investigation of Novel Oxygen Blast Furnace Ironmaking Processes

    NASA Astrophysics Data System (ADS)

    Li, Zhaoyang; Kuang, Shibo; Yu, Aibing; Gao, Jianjun; Qi, Yuanhong; Yan, Dingliu; Li, Yuntao; Mao, Xiaoming

    2018-04-01

    The oxygen blast furnace (OBF) ironmaking process has the potential to realize "zero carbon footprint" production, but suffers from the "thermal shortage" problem. This paper presents three novel OBF processes, featured by belly injection of reformed coke oven gas, burden hot-charge operation, and their combination, respectively. These processes were studied with a multifluid process model. The applicability of the model was confirmed by comparing the numerical results against the measured key performance indicators of an experimental OBF operated with or without injection of reformed coke oven gas. Then, the different OBF processes, together with a pure OBF, were numerically examined with respect to in-furnace states and global performance, assuming that burden quality can be maintained during hot-charge operation. The numerical results show that under the present conditions, belly injection and hot charge, as auxiliary measures, are useful for reducing the fuel rate and increasing productivity for OBFs, but in different manners. Hot charge should be more suitable for OBFs of different sizes because it improves the thermochemical states throughout the dry zone rather than within a narrow region, as in the case of belly injection. The simultaneous application of belly injection and hot charge leads to the best process performance while lowering the hot-charge temperature needed to achieve the same carbon consumption and hot metal temperature as hot charge alone. This feature will be practically beneficial in the application of hot-charge operation. In addition, a systematic study of hot-charge temperature reveals that optimal hot-charge temperatures can be identified according to the utilization efficiency of the sensible heat of the hot burden.

  17. ConnectX-2 InfiniBand Management Queues: First Investigation of the New Support for Network Offloaded Collective Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Richard L; Poole, Stephen W; Shamis, Pavel

    2010-01-01

    This paper introduces the newly developed InfiniBand (IB) Management Queue capability, used by the Host Channel Adapter (HCA) to manage network task data flow dependencies and progress the communications associated with such flows. These tasks include sends, receives, and the newly supported wait task, and are scheduled by the HCA based on a data dependency description provided by the user. This functionality is supported by the ConnectX-2 HCA, and provides the means for delegating collective communication management and progress to the HCA, also known as collective communication offload. This provides a means for overlapping collective communications managed by the HCA and computation on the Central Processing Unit (CPU), thus making it possible to reduce the impact of system noise on parallel applications using collective operations. This paper further describes how this new capability can be used to implement scalable Message Passing Interface (MPI) collective operations, describing the high-level details of how it is used to implement the MPI Barrier collective operation and focusing on the latency-sensitive performance aspects of this new capability. The paper concludes with small-scale benchmark experiments comparing implementations of the barrier collective operation using the new network offload capabilities with established point-to-point based implementations of the same algorithms, which manage the data flow using the central processing unit. These early results demonstrate the promise this new capability provides to improve the scalability of high performance applications using collective communications. The latency of the HCA-based implementation of the barrier is similar to that of the best performing point-to-point based implementation managed by the central processing unit, and starts to outperform it as the number of processes involved in the collective operation increases.

  18. Improved biogas production from whole stillage by co-digestion with cattle manure.

    PubMed

    Westerholm, Maria; Hansson, Mikael; Schnürer, Anna

    2012-06-01

    Whole stillage, as sole substrate or co-digested with cattle manure, was evaluated as a substrate for biogas production in five mesophilic laboratory-scale biogas reactors operating semi-continuously for 640 days. Process performance was monitored by chemical parameters and by quantitative analysis of the methanogenic and acetogenic populations. With whole stillage as sole substrate, the process showed clear signs of instability after 120 days of operation. However, co-digestion with manure clearly improved biogas productivity and process stability and indicated increased methane yield compared with theoretical values. The methane yield at an organic loading rate (OLR) of 2.8 g VS/(L×day) and a hydraulic retention time (HRT) of 45 days with a substrate mixture of 85% whole stillage and 15% manure (based on volatile solids [VS]) was 0.31 NL CH4/g VS. Surprisingly, the abundance of the methanogenic and acetogenic populations remained relatively stable throughout the whole operation and was not influenced by process performance. Copyright © 2012 Elsevier Ltd. All rights reserved.
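
    As a back-of-the-envelope check implied by the reported figures (an illustration, not a calculation from the paper), the specific yield and loading rate give the volumetric methane productivity:

        methane_yield = 0.31   # NL CH4 per g VS
        olr = 2.8              # g VS per litre of reactor per day
        # volumetric productivity = specific yield x organic loading rate
        print(f"{methane_yield * olr:.2f} NL CH4 per litre reactor per day")  # ~0.87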

  19. Space processing applications rocket project SPAR 4, engineering report

    NASA Technical Reports Server (NTRS)

    Reeves, F. (Compiler)

    1980-01-01

    The materials processing experiments in space, conducted on the SPAR 4 Black Brant VC rocket, are described and discussed. The SPAR 4 payload configuration, the rocket performance, and the flight sequence are reported. The results, analyses, and anomalies of the four experiments are discussed. The experiments conducted were the uniform dispersions of crystallization processing, the contained polycrystalline solidification in low gravity, the containerless processing of ferromagnetic materials, and the containerless processing technology. The instrumentation operations, payload power relay anomaly, relay postflight operational test, and relay postflight shock test are reported.

  20. An Application of Six Sigma to Reduce Supplier Quality Cost

    NASA Astrophysics Data System (ADS)

    Gaikwad, Lokpriya Mohanrao; Teli, Shivagond Nagappa; Majali, Vijay Shashikant; Bhushi, Umesh Mahadevappa

    2016-01-01

    This article presents an application of Six Sigma to reduce supplier quality costs in the manufacturing industry. Although there is wide acceptance of Six Sigma in many organizations today, there is still a lack of in-depth case studies of Six Sigma. For the present research, the case study methodology was used. The company decided to reduce quality costs and improve selected processes using Six Sigma methodologies. Given the lack of case studies dealing with Six Sigma, especially in individual manufacturing organizations, this article could also be of value to practitioners. This paper discusses quality and productivity improvement in a supplier enterprise through a case study. It deals with an application of the Six Sigma define-measure-analyze-improve-control methodology in an industry setting, which provides a framework to identify, quantify, and eliminate sources of variation in the operational process in question, to optimize the operation variables, and to improve and sustain performance, viz., process yield, with well-executed control plans. Six Sigma improves the process performance (process yield) of the critical operational process, leading to better utilization of resources, decreased variation, and consistent quality of the process output.
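
    Alongside DMAIC, process yield is conventionally expressed as defects per million opportunities (DPMO) and a corresponding sigma level with the customary 1.5-sigma shift. The sketch below shows that generic arithmetic with illustrative figures, not data from this case study:

        from statistics import NormalDist

        def sigma_level(defects, units, opportunities_per_unit):
            dpmo = defects / (units * opportunities_per_unit) * 1_000_000
            # short-term sigma level with the conventional 1.5-sigma shift
            return dpmo, NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

        dpmo, sigma = sigma_level(defects=120, units=5_000, opportunities_per_unit=4)
        print(f"DPMO = {dpmo:.0f}, sigma level = {sigma:.2f}")   # 6000, ~4.01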

  1. 12 CFR 7.5007 - Correspondent services.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... provision of computer networking packages and related hardware; (b) Data processing services; (c) The sale of software that performs data processing functions; (d) The development, operation, management, and...

  2. 12 CFR 7.5007 - Correspondent services.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... provision of computer networking packages and related hardware; (b) Data processing services; (c) The sale of software that performs data processing functions; (d) The development, operation, management, and...

  3. 12 CFR 7.5007 - Correspondent services.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... provision of computer networking packages and related hardware; (b) Data processing services; (c) The sale of software that performs data processing functions; (d) The development, operation, management, and...

  4. 12 CFR 7.5007 - Correspondent services.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... provision of computer networking packages and related hardware; (b) Data processing services; (c) The sale of software that performs data processing functions; (d) The development, operation, management, and...

  5. Use of lean and six sigma methodology to improve operating room efficiency in a high-volume tertiary-care academic medical center.

    PubMed

    Cima, Robert R; Brown, Michael J; Hebl, James R; Moore, Robin; Rogers, James C; Kollengode, Anantha; Amstutz, Gwendolyn J; Weisbrod, Cheryl A; Narr, Bradly J; Deschamps, Claude

    2011-07-01

    Operating rooms (ORs) are resource-intensive and costly hospital units. Maximizing OR efficiency is essential to maintaining an economically viable institution. OR efficiency projects often focus on a limited number of ORs or cases; efforts across an entire OR suite have not been reported. Lean and Six Sigma methodologies were developed in the manufacturing industry to increase efficiency by eliminating non-value-added steps. We applied Lean and Six Sigma methodologies across an entire surgical suite to improve efficiency. A multidisciplinary surgical process improvement team constructed a value stream map of the entire surgical process, from the decision for surgery to discharge. Each process step was analyzed in 3 domains, i.e., personnel, information processed, and time. Multidisciplinary teams addressed 5 work streams to increase value at each step: minimizing volume variation; streamlining the preoperative process; reducing nonoperative time; eliminating redundant information; and promoting employee engagement. Process improvements were implemented sequentially in surgical specialties. Key performance metrics were collected before and after implementation. Across 3 surgical specialties, process redesign resulted in substantial improvements in on-time starts and a reduction in the number of cases running past 5 pm. Substantial gains were achieved in nonoperative time, staff overtime, and ORs saved. These changes resulted in substantial increases in margin/OR/day. Use of Lean and Six Sigma methodologies increased OR efficiency and financial performance across an entire operating suite. Process mapping, leadership support, staff engagement, and sharing performance metrics are keys to enhancing OR efficiency. The performance gains were substantial, sustainable, positive financially, and transferable to other specialties. Copyright © 2011 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Hara, J.M.; Gunther, W.; Martinez-Guridi, G.

    New and advanced reactors will use integrated digital instrumentation and control (I&C) systems to support operators in their monitoring and control functions. Even though digital systems are typically highly reliable, their potential for degradation or failure could significantly affect operator performance and, consequently, impact plant safety. The U.S. Nuclear Regulatory Commission (NRC) supported this research project to investigate the effects of degraded I&C systems on human performance and plant operations. The objective was to develop human factors engineering (HFE) review guidance addressing the detection and management of degraded digital I&C conditions by plant operators. We reviewed pertinent standards and guidelines, empirical studies, and plant operating experience. In addition, we conducted an evaluation of the potential effects of selected failure modes of the digital feedwater system on human-system interfaces (HSIs) and operator performance. The results indicated that I&C degradations are prevalent in plants employing digital systems and the overall effects on plant behavior can be significant, such as causing a reactor trip or causing equipment to operate unexpectedly. I&C degradations can impact the HSIs used by operators to monitor and control the plant. For example, sensor degradations can make displays difficult to interpret and can sometimes mislead operators by making it appear that a process disturbance has occurred. We used the information obtained as the technical basis upon which to develop HFE review guidance. The guidance addresses the treatment of degraded I&C conditions as part of the design process and the HSI features and functions that support operators to monitor I&C performance and manage I&C degradations when they occur. In addition, we identified topics for future research.

  7. Hybrid performance measurement of a business process outsourcing - A Malaysian company perspective

    NASA Astrophysics Data System (ADS)

    Oluyinka, Oludapo Samson; Tamyez, Puteri Fadzline; Kie, Cheng Jack; Freida, Ayodele Ozavize

    2017-05-01

    Customer-perceived value for products and services is now greatly influenced by their psychological and social advantages. To cope with rising operational costs and increasing demands on response time, quality, and innovative capability, many companies have turned their fixed operational costs into variable costs through outsourcing. The researcher therefore explored different underlying outsourcing theories and inferred that these theories are essential to performance improvement. In this study, the researcher evaluates the performance of a business process outsourcing company using a combination of lean and agile methods. To test the hypotheses, we analyze the different variabilities that a business process outsourcing company faces and how lean and agile methods have been used in other industries to address such variability, and discuss the results using a predictive multiple regression analysis on data collected from companies in Malaysia. The findings reveal that, while each method has its own advantages, a business process outsourcing company could achieve up to an 87% increase in performance level by developing a strategy that focuses on an appropriate mixture of lean and agile improvement methods. Secondly, this study shows that performance indicators can be better evaluated with the non-metric variables of the agile method. Thirdly, this study also shows that a business process outsourcing company performs better when it concentrates on strengthening the internal process integration of employees.
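
    The predictive multiple regression named above can be sketched as follows; the variable names and data are synthetic stand-ins, not the study's survey data.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        lean = rng.uniform(1, 5, 100)       # hypothetical lean-adoption scores
        agile = rng.uniform(1, 5, 100)      # hypothetical agile-adoption scores
        performance = 0.5 * lean + 0.4 * agile + rng.normal(scale=0.3, size=100)

        X = np.column_stack([lean, agile])
        model = LinearRegression().fit(X, performance)
        print(model.coef_, model.score(X, performance))   # weights and R^2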

  8. Landsat-8 Operational Land Imager (OLI) radiometric performance on-orbit

    USGS Publications Warehouse

    Morfitt, Ron; Barsi, Julia A.; Levy, Raviv; Markham, Brian L.; Micijevic, Esad; Ong, Lawrence; Scaramuzza, Pat; Vanderwerff, Kelly

    2015-01-01

    Expectations of the Operational Land Imager (OLI) radiometric performance onboard Landsat-8 have been met or exceeded. The calibration activities that occurred prior to launch provided calibration parameters that enabled ground processing to produce imagery that met most requirements when data were transmitted to the ground. Since launch, calibration updates have improved the image quality further, so that all requirements are now met. These updates range from detector gain coefficients, which reduce striping and banding, to alignment parameters, which improve geometric accuracy. This paper concentrates on the on-orbit radiometric performance of the OLI, with the exception of radiometric calibration performance. Topics discussed in this paper include: signal-to-noise ratios that are an order of magnitude higher than previous Landsat missions; radiometric uniformity that shows little residual banding and striping, and continues to improve; a dynamic range that limits saturation to extremely high radiance levels; extremely stable detectors; slight nonlinearity that is corrected in ground processing; detectors that are stable and 100% operable; and few image artifacts.

  9. Version pressure feedback mechanisms for speculative versioning caches

    DOEpatents

    Eichenberger, Alexandre E.; Gara, Alan; O'Brien, Kathryn M.; Ohmacht, Martin; Zhuang, Xiaotong

    2013-03-12

    Mechanisms are provided for controlling version pressure on a speculative versioning cache. Raw version pressure data is collected based on one or more threads accessing cache lines of the speculative versioning cache. One or more statistical measures of version pressure are generated based on the collected raw version pressure data. A determination is made as to whether one or more modifications to an operation of a data processing system are to be performed based on the one or more statistical measures of version pressure, the one or more modifications affecting version pressure exerted on the speculative versioning cache. An operation of the data processing system is modified based on the one or more determined modifications, in response to a determination that one or more modifications to the operation of the data processing system are to be performed, to affect the version pressure exerted on the speculative versioning cache.
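
    The feedback loop described in this abstract can be pictured with a short sketch: raw pressure samples are gathered from the cache lines, summary statistics are formed, and a policy decides whether to modify operation. The field names, thresholds, and throttling action below are illustrative assumptions, not details from the patent.

    # Illustrative version-pressure feedback loop. Thresholds and the
    # throttling action are hypothetical stand-ins for the patented policy.
    from statistics import mean, pstdev

    def collect_raw_pressure(cache_lines):
        """Raw pressure per cache line = versions held / version capacity."""
        return [line["versions"] / line["capacity"] for line in cache_lines]

    def decide_modification(samples, high_water=0.8):
        avg, spread = mean(samples), pstdev(samples)
        # Hypothetical policy: throttle speculation when pressure runs high.
        if avg + spread > high_water:
            return "reduce_speculative_threads"
        return "no_change"

    cache_lines = [{"versions": 3, "capacity": 4}, {"versions": 4, "capacity": 4}]
    samples = collect_raw_pressure(cache_lines)
    print(decide_modification(samples))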

  10. ICC '86; Proceedings of the International Conference on Communications, Toronto, Canada, June 22-25, 1986, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    Papers are presented on ISDN, mobile radio systems and techniques for digital connectivity, centralized and distributed algorithms in computer networks, communications networks, quality assurance and impact on cost, adaptive filters in communications, the spread spectrum, signal processing, video communication techniques, and digital satellite services. Topics discussed include performance evaluation issues for integrated protocols, packet network operations, the computer network theory and multiple-access, microwave single sideband systems, switching architectures, fiber optic systems, wireless local communications, modulation, coding, and synchronization, remote switching, software quality, transmission, and expert systems in network operations. Consideration is given to wide area networks, image and speech processing, office communications application protocols, multimedia systems, customer-controlled network operations, digital radio systems, channel modeling and signal processing in digital communications, earth station/on-board modems, computer communications system performance evaluation, source encoding, compression, and quantization, and adaptive communications systems.

  11. Process concept of retorting of Julia Creek oil shale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitnai, O.

    1984-06-01

    A process is proposed for the above-ground retorting of the Julia Creek oil shale in Queensland. The oil shale characteristics, process description, chemical reactions of the oil shale components, and the effects of process variables and operating conditions on process performance are discussed. The process contains a fluidized bed combustor which performs both as a combustor of the spent shale and as a heat carrier generator for the pyrolysis step. 12 references, 5 figures, 5 tables.

  12. Historical data recording for process computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hale, J.C.; Sellars, H.L.

    1981-11-01

    Computers have been used to monitor and control chemical and refining processes for more than 15 years. During this time, there has been a steady growth in the variety and sophistication of the functions performed by these process computers. Early systems were limited to maintaining only current operating measurements, available through crude operator's consoles or noisy teletypes. The value of retaining a process history, that is, a collection of measurements over time, became apparent, and early efforts produced shift and daily summary reports. The need for improved process historians which record, retrieve and display process information has grown as process computers assume larger responsibilities in plant operations. This paper describes newly developed process historian functions that have been used on several of Du Pont's in-house process monitoring and control systems. 3 refs.
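
    As an illustration of the record/retrieve/summarize functions a process historian provides, the following toy sketch stores timestamped measurements and produces a shift-style summary. It is a conceptual example only, not the Du Pont implementation described in the paper.

    # Toy process historian: record timestamped readings, retrieve a time
    # window, and summarize it for a shift report. Illustrative only.
    import bisect
    import time

    class Historian:
        def __init__(self):
            self._times, self._values = [], []  # kept sorted by time

        def record(self, value, t=None):
            t = time.time() if t is None else t
            i = bisect.bisect(self._times, t)
            self._times.insert(i, t)
            self._values.insert(i, value)

        def retrieve(self, t_start, t_end):
            lo = bisect.bisect_left(self._times, t_start)
            hi = bisect.bisect_right(self._times, t_end)
            return list(zip(self._times[lo:hi], self._values[lo:hi]))

        def summary(self, t_start, t_end):
            vals = [v for _, v in self.retrieve(t_start, t_end)]
            if not vals:
                return {"n": 0}
            return {"n": len(vals), "min": min(vals), "max": max(vals),
                    "avg": sum(vals) / len(vals)}

    h = Historian()
    for i, temp in enumerate([310.2, 311.0, 309.8]):
        h.record(temp, t=i * 60.0)        # one reading per minute
    print(h.summary(0.0, 180.0))          # shift-style summary report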

  13. Advanced information processing system

    NASA Technical Reports Server (NTRS)

    Lala, J. H.

    1984-01-01

    Design and performance details of the advanced information processing system (AIPS) for fault and damage tolerant data processing on aircraft and spacecraft are presented. AIPS comprises several computers distributed throughout the vehicle and linked by a damage tolerant data bus. Most I/O functions are available to all the computers, which run in a TDMA mode. Each computer performs separate specific tasks in normal operation and assumes other tasks in degraded modes. Redundant software assures that all fault monitoring, logging and reporting are automated, together with control functions. Redundant duplex links and damage-spread limitation provide the fault tolerance. Details of an advanced design of a laboratory-scale proof-of-concept system are described, including functional operations.

  14. Designing Security-Hardened Microkernels For Field Devices

    NASA Astrophysics Data System (ADS)

    Hieb, Jeffrey; Graham, James

    Distributed control systems (DCSs) play an essential role in the operation of critical infrastructures. Perimeter field devices are important DCS components that measure physical process parameters and perform control actions. Modern field devices are vulnerable to cyber attacks due to their increased adoption of commodity technologies and the fact that control networks are no longer isolated. This paper describes an approach for creating security-hardened field devices using operating system microkernels that isolate vital field device operations from untrusted network-accessible applications. The approach, which is influenced by the MILS and Nizza architectures, is implemented in a prototype field device. Whereas previous microkernel-based implementations have been plagued by poor inter-process communication (IPC) performance, the prototype exhibits an average IPC overhead for protected device calls of 64.59 μs. The overall performance of field devices is influenced by several factors; nevertheless, the observed IPC overhead is low enough to encourage the continued development of the prototype.

  15. High performance Si nanowire field-effect-transistors based on a CMOS inverter with tunable threshold voltage.

    PubMed

    Van, Ngoc Huynh; Lee, Jae-Hyun; Sohn, Jung Inn; Cha, Seung Nam; Whang, Dongmok; Kim, Jong Min; Kang, Dae Joon

    2014-05-21

    We successfully fabricated nanowire-based complementary metal-oxide semiconductor (NWCMOS) inverter devices by utilizing n- and p-type Si nanowire field-effect-transistors (NWFETs) via a low-temperature fabrication processing technique. We demonstrate that NWCMOS inverter devices can be operated at less than 1 V, a significantly lower voltage than that of typical thin-film based complementary metal-oxide semiconductor (CMOS) inverter devices. This low-voltage operation was accomplished by controlling the threshold voltage of the n-type Si NWFETs through effective management of the nanowire (NW) doping concentration, while realizing high voltage gain (>10) and ultra-low static power dissipation (≤3 pW) for high-performance digital inverter devices. This result offers a viable means of fabricating high-performance, low-operation voltage, and high-density digital logic circuits using a low-temperature fabrication processing technique suitable for next-generation flexible electronics.

  16. Integrating Data Sources for Process Sustainability Assessments (presentation)

    EPA Science Inventory

    To perform a chemical process sustainability assessment requires significant data about chemicals, process design specifications, and operating conditions. The required information includes the identity of the chemicals used, the quantities of the chemicals within the context of ...

  17. Modeling and Evaluating Pilot Performance in NextGen: Review of and Recommendations Regarding Pilot Modeling Efforts, Architectures, and Validation Studies

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Sebok, Angelia; Keller, John; Peters, Steve; Small, Ronald; Hutchins, Shaun; Algarin, Liana; Gore, Brian Francis; Hooey, Becky Lee; Foyle, David C.

    2013-01-01

    NextGen operations are associated with a variety of changes to the national airspace system (NAS) including changes to the allocation of roles and responsibilities among operators and automation, the use of new technologies and automation, additional information presented on the flight deck, and the entire concept of operations (ConOps). In the transition to NextGen airspace, aviation and air operations designers need to consider the implications of design or system changes on human performance and the potential for error. To ensure continued safety of the NAS, it will be necessary for researchers to evaluate design concepts and potential NextGen scenarios well before implementation. One approach for such evaluations is through human performance modeling. Human performance models (HPMs) provide effective tools for predicting and evaluating operator performance in systems. HPMs offer significant advantages over empirical, human-in-the-loop testing in that (1) they allow detailed analyses of systems that have not yet been built, (2) they offer great flexibility for extensive data collection, and (3) they do not require experimental participants, and thus can offer cost and time savings. HPMs differ in their ability to predict performance and safety with NextGen procedures, equipment and ConOps. Models also vary in terms of how they approach human performance (e.g., some focus on cognitive processing, others focus on discrete tasks performed by a human, while others consider perceptual processes), and in terms of their associated validation efforts. The objectives of this research effort were to support the Federal Aviation Administration (FAA) in identifying HPMs that are appropriate for predicting pilot performance in NextGen operations, to provide guidance on how to evaluate the quality of different models, and to identify gaps in pilot performance modeling research that could guide future research opportunities. This research effort is intended to help the FAA evaluate pilot modeling efforts and select the appropriate tools for future modeling efforts to predict pilot performance in NextGen operations.

  18. The role of strategies in motor learning

    PubMed Central

    Taylor, Jordan A.; Ivry, Richard B.

    2015-01-01

    There has been renewed interest in the role of strategies in sensorimotor learning. The combination of new behavioral methods and computational methods has begun to unravel the interaction between processes related to strategic control and processes related to motor adaptation. These processes may operate on very different error signals. Strategy learning is sensitive to goal-based performance error. In contrast, adaptation is sensitive to prediction errors between the desired and actual consequences of a planned movement. The former guides what the desired movement should be, whereas the latter guides how to implement the desired movement. Whereas traditional approaches have favored serial models in which an initial strategy-based phase gives way to more automatized forms of control, it now seems that strategic and adaptive processes operate with considerable independence throughout learning, although the relative weight given the two processes will shift with changes in performance. As such, skill acquisition involves the synergistic engagement of strategic and adaptive processes. PMID:22329960

  19. Low-thrust mission risk analysis, with application to a 1980 rendezvous with the comet Encke

    NASA Technical Reports Server (NTRS)

    Yen, C. L.; Smith, D. B.

    1973-01-01

    A computerized failure process simulation procedure is used to evaluate the risk in a solar electric space mission. The procedure uses currently available thrust-subsystem reliability data and performs approximate simulations of the thrust subsystem burn operation, the system failure processes, and the retargeting operations. The method is applied to assess the risks in carrying out a 1980 rendezvous mission to the comet Encke. Analysis of the results and evaluation of the effects of various risk factors on the mission show that system component failure rates are the limiting factors in attaining a high mission reliability. It is also shown that a well-designed trajectory and system operation mode can be used effectively to partially compensate for unreliable thruster performance.

  20. The FOT tool kit concept

    NASA Technical Reports Server (NTRS)

    Fatig, Michael

    1993-01-01

    Flight operations and the preparation for them have become increasingly complex as mission complexity increases. Further, the mission model dictates that a significant increase in flight operations activities is upon us. Finally, there is a need for process improvement and economy in the operations arena. It is therefore time that we recognize flight operations as a complex process requiring a defined, structured, and life cycle approach vitally linked to space segment, ground segment, and science operations processes. With this recognition, an FOT Tool Kit consisting of six major components was developed, designed to provide tools that guide flight operations activities throughout the mission life cycle. The major components of the FOT Tool Kit and the concepts behind the flight operations life cycle process as developed at NASA's GSFC for GSFC-based missions are addressed. The Tool Kit is therefore intended to improve the productivity, quality, cost, and schedule performance of the flight operations tasks through the use of documented, structured methodologies; knowledge of past lessons learned and upcoming new technology; and through reuse and sharing of key products and special application programs made possible through the development of standardized key products and special program directories.

  1. 29 CFR 784.152 - Operations performed on byproducts.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... resulting from processing or canning operations, to produce fish oil or meal, would come within the..., since fish oil is nonperishable in the sense that it may be held for a substantial period of time...

  2. 29 CFR 784.152 - Operations performed on byproducts.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... resulting from processing or canning operations, to produce fish oil or meal, would come within the..., since fish oil is nonperishable in the sense that it may be held for a substantial period of time...

  3. DFT algorithms for bit-serial GaAs array processor architectures

    NASA Technical Reports Server (NTRS)

    Mcmillan, Gary B.

    1988-01-01

    Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.

  4. Reliability and performance of a system-on-a-chip by predictive wear-out based activation of functional components

    DOEpatents

    Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong

    2013-10-01

    A processor-implemented method for determining aging of a processing unit in a processor, the method comprising: calculating an effective aging profile for the processing unit, wherein the effective aging profile quantifies the effects of aging on the processing unit; combining the effective aging profile with process variation data, actual workload data and operating conditions data for the processing unit; and determining aging through an aging sensor of the processing unit using the effective aging profile, the process variation data, the actual workload data, architectural characteristics and redundancy data, and the operating conditions data for the processing unit.

  5. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    PubMed

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
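
    To make the three operational constraints concrete, the sketch below shows a discrete PI step that clamps the manipulated variable's magnitude and rate of change, with simple anti-windup. The gains and limits are hypothetical, and clamping is a crude stand-in for the paper's analytical constrained design, which additionally handles the process-variable constraint in the design stage.

    # Discrete PI step with manipulated-variable magnitude and rate limits
    # plus naive anti-windup. Gains and limits are hypothetical.
    def pi_step(error, state, kc=1.2, ti=8.0, dt=0.1,
                u_min=0.0, u_max=10.0, du_max=0.5):
        integral, u_prev = state
        integral_new = integral + error * dt
        u = kc * (error + integral_new / ti)

        # Rate-of-change constraint on the manipulated variable.
        u = max(u_prev - du_max, min(u_prev + du_max, u))
        # Magnitude constraint.
        u_clamped = max(u_min, min(u_max, u))
        # Anti-windup: freeze the integrator when the output saturates.
        if u_clamped == u:
            integral = integral_new
        return u_clamped, (integral, u_clamped)

    u, state = 0.0, (0.0, 0.0)
    for sp, pv in [(1.0, 0.0), (1.0, 0.3), (1.0, 0.7)]:
        u, state = pi_step(sp - pv, state)
        print(round(u, 3))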

  6. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2013-07-09

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: establishing, for each node, a plurality of logical rings, each ring including a different set of at least one core on that node, each ring including the cores on at least two of the nodes; iteratively for each node: assigning each core of that node to one of the rings established for that node to which the core has not previously been assigned, and performing, for each ring for that node, a global allreduce operation using contribution data for the cores assigned to that ring or any global allreduce results from previous global allreduce operations, yielding current global allreduce results for each core; and performing, for each node, a local allreduce operation using the global allreduce results.
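
    A much-simplified simulation of the two-phase scheme reads as follows: one logical ring per core index spans all nodes, a sum-allreduce runs around each ring, and each node then combines its cores' global results locally. The naive "core k of every node" ring construction here is an illustrative assumption rather than the patented iterative assignment strategy.

    # Simplified two-phase allreduce: per-ring global phase, then a local
    # per-node combination. Illustrative only.
    def ring_allreduce(values):
        """Sum-allreduce around one logical ring (conceptually reduce-scatter
        then allgather; modeled here as a simple accumulation)."""
        total = 0
        for v in values:      # token circulates the ring accumulating the sum
            total += v
        return [total] * len(values)

    def node_allreduce(nodes):
        n_cores = len(nodes[0])
        # Global phase: one ring per core index, spanning all nodes.
        global_results = [ring_allreduce([node[k] for node in nodes])
                          for k in range(n_cores)]
        # Local phase: each node combines the global results of its own cores.
        return [sum(global_results[k][i] for k in range(n_cores))
                for i, _ in enumerate(nodes)]

    nodes = [[1, 2], [3, 4], [5, 6]]   # 3 nodes x 2 cores of contribution data
    print(node_allreduce(nodes))       # every node ends with the full sum: 21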

  7. Evaluation of an Atmosphere Revitalization Subsystem for Deep Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Perry, Jay L.; Abney, Morgan B.; Conrad, Ruth E.; Frederick, Kenneth R.; Greenwood, Zachary W.; Kayatin, Matthew J.; Knox, James C.; Newton, Robert L.; Parrish, Keith J.; Takada, Kevin C.; et al.

    2015-01-01

    An Atmosphere Revitalization Subsystem (ARS) suitable for deployment aboard deep space exploration mission vehicles has been developed and functionally demonstrated. This modified ARS process design architecture was derived from the International Space Station's (ISS) basic ARS. Primary functions considered in the architecture include trace contaminant control, carbon dioxide removal, carbon dioxide reduction, and oxygen generation. Candidate environmental monitoring instruments were also evaluated. The process architecture rearranges unit operations and employs equipment operational changes to reduce mass, simplify, and improve the functional performance for trace contaminant control, carbon dioxide removal, and oxygen generation. Results from integrated functional demonstration are summarized and compared to the performance observed during previous testing conducted on an ISS-like subsystem architecture and a similarly evolved process architecture. Considerations for further subsystem architecture and process technology development are discussed.

  8. Bench-Scale Development of a Non-Aqueous Solvent (NAS) CO2 Capture Process for Coal-Fired Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lail, Marty

    The project aimed to advance RTI's non-aqueous amine solvent technology by improving the solvent to reduce volatility, demonstrating long-term continuous operation at lab- (0.5 liters solvent) and bench-scale (~120 liters solvent), showing low reboiler heat duty measured during bench-scale testing, evaluating degradation products, building a rate-based process model, and evaluating the techno-economic performance of the process. The project team (RTI, SINTEF, Linde Engineering) and the technology performed well in each area of advancement. The modifications incorporated throughout the project enabled the attainment of target absorber and regenerator conditions for the process. Reboiler duties below 2,000 kJt/kg CO2 were observed in a bench-scale test unit operated at RTI.

  9. Exact Performance Analysis of Two Distributed Processes with Multiple Synchronization Points.

    DTIC Science & Technology

    1987-05-01

    ...number of processes with straight-line sequences of semaphore operations. We use the geometric model for performance analysis, in contrast to proving... (Report CS-TR-1845, University of Maryland; monitoring organization: Office of Naval Research.)

  10. Advancements in Risk-Informed Performance-Based Asset Management for Commercial Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liming, James K.; Ravindra, Mayasandra K.

    2006-07-01

    Over the past several years, ABSG Consulting Inc. (ABS Consulting) and the South Texas Project Nuclear Operating Company (STPNOC) have developed a decision support process and associated software for risk-informed, performance-based asset management (RIPBAM) of nuclear power plant facilities. RIPBAM applies probabilistic risk assessment (PRA) tools and techniques in the realm of plant physical and financial asset management. The RIPBAM process applies a tiered set of models and supporting performance measures (or metrics) that can ultimately be applied to support decisions affecting the allocation and management of plant resources (e.g., funding, staffing, scheduling, etc.). In general, the ultimate goal of the RIPBAM process is to continually support decision-making to maximize a facility's net present value (NPV) and long-term profitability for its owners. While the initial applications of RIPBAM have been for nuclear power stations, the methodology can easily be adapted to other types of power station or complex facility decision-making support. RIPBAM can also be designed to focus on performance metrics other than NPV and profitability (e.g., mission reliability, operational availability, probability of mission success per dollar invested, etc.). Recent advancements in the RIPBAM process focus on expanding the scope of previous RIPBAM applications to include not only operations, maintenance, and safety issues, but also broader risk perception components affecting plant owner (stockholder), operator, and regulator biases. Conceptually, RIPBAM is a comprehensive risk-informed cash flow model for decision support. It originated as a tool to help manage plant refueling outage scheduling, and was later expanded to include the full spectrum of operations and maintenance decision support. However, it differs from conventional business modeling tools in that it employs a systems engineering approach with broadly based probabilistic analysis of organizational 'value streams'. The scope of value stream inclusion in the process can be established by the user, but in its broadest applications, RIPBAM can be used to address how risk perceptions of plant owners and regulators are impacted by plant performance. Plant staffs can expand and refine the scope of RIPBAM models via a phased program of activities over time. This paper shows how the multi-metric uncertainty analysis feature of RIPBAM can apply a wide spectrum of decision-influencing factors to support decisions designed to maximize the probability of achieving, maintaining, and improving upon plant goals and objectives. In this paper, the authors show how this approach can be extremely valuable to plant owners and operators in supporting plant value-impacting decision-making processes. (authors)
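
    A toy version of the probabilistic NPV comparison at the heart of such decision support can be sketched as follows; the cash-flow figures, distributions, and discount rate are hypothetical and not taken from the RIPBAM software.

    # Monte Carlo NPV comparison of two hypothetical resource-allocation
    # options, in the spirit of probabilistic cash-flow decision support.
    import numpy as np

    def npv_samples(mean_cash_flows, sigma, rate, n=10_000, seed=1):
        rng = np.random.default_rng(seed)
        years = np.arange(1, len(mean_cash_flows) + 1)
        flows = rng.normal(mean_cash_flows, sigma, size=(n, len(years)))
        return (flows / (1 + rate) ** years).sum(axis=1)

    base = npv_samples([10, 10, 10, 10], sigma=3.0, rate=0.07)    # do nothing
    upgrade = npv_samples([4, 12, 12, 12], sigma=2.0, rate=0.07)  # invest early
    print("P(upgrade beats base) =", (upgrade > base).mean())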

  11. Morphology evolution in high-performance polymer solar cells processed from nonhalogenated solvent

    DOE PAGES

    Cai, Wanzhu; Liu, Peng; Jin, Yaocheng; ...

    2015-05-26

    A new processing protocol based on a non-halogenated solvent and additive is developed to produce polymer solar cells with power conversion efficiencies better than those of cells processed from a commonly used halogenated solvent-additive pair. Morphology studies show that good performance correlates with a finely distributed nanomorphology with a well-defined polymer fibril network structure, which leads to balanced charge transport in device operation.

  12. Improving Team Performance: Proceedings of the Rand Team Performance Workshop.

    DTIC Science & Technology

    1980-08-01

    ...organization theory, small group processes, cognitive psychology, training and instruction, decision theory, artificial intelligence, and human engineering... theory, small group processes, cognitive psychology, training and instruction, heuristic modeling, decision theory, and human engineering. Within... interact with. The operators are taught about the equipment and how it works; the actual job is left to be learned aboard ship. The cognitive processes the...

  13. Application of data mining in performance measures

    NASA Astrophysics Data System (ADS)

    Chan, Michael F. S.; Chung, Walter W.; Wong, Tai Sun

    2001-10-01

    This paper proposes a structured framework for exploiting data mining applications for performance measures. The context is set in an airline company, which is used to illustrate the use of the framework. The framework considers how a knowledge worker interacts with performance information at the enterprise level to support informed decisions in managing the effectiveness of operations. A case study applying data mining technology to performance data in an airline company is presented, with performance measures applied specifically to assist the aircraft delay management process. The increasingly dispersed and complex nature of airline operations puts much strain on knowledge workers as they search for, acquire, and analyze information to manage performance. One major problem faced by knowledge workers is the identification of root causes of performance deficiency. Analyzing the large number of factors involved can be time consuming, and the objective of applying data mining technology is to reduce the time and resources needed for this process. Increasing market competition for better performance management across industries gives rise to the need for intelligent use of data. Because of this, the framework proposed here generalizes readily to industries such as manufacturing. It can assist knowledge workers who are constantly looking for ways to improve operational effectiveness through new initiatives, where the effort must be completed quickly to gain competitive advantage in the marketplace.

  14. VASP-4096: a very high performance programmable device for digital media processing applications

    NASA Astrophysics Data System (ADS)

    Krikelis, Argy

    2001-03-01

    Over the past few years, technology drivers for microprocessors have changed significantly. Media data delivery and processing--such as telecommunications, networking, video processing, speech recognition and 3D graphics--is increasing in importance and will soon dominate the processing cycles consumed in computer-based systems. This paper presents the architecture of the VASP-4096 processor. VASP-4096 provides high media performance with low energy consumption by integrating associative SIMD parallel processing with embedded microprocessor technology. The major innovation in the VASP-4096 is the integration of thousands of processing units on a single chip, capable of supporting software-programmable high-performance mathematical functions as well as abstract data processing. In addition to 4096 processing units, VASP-4096 integrates on a single chip a RISC controller that is an implementation of the SPARC architecture, 128 Kbytes of Data Memory, and I/O interfaces. The SIMD processing in VASP-4096 implements the ASProCore architecture, a proprietary implementation of SIMD processing, and operates at 266 MHz with program instructions issued by the RISC controller. The device also integrates a 64-bit synchronous main memory interface operating at 133 MHz (double-data rate), and a 64-bit 66 MHz PCI interface. VASP-4096, compared with other processor architectures that support media processing, offers true performance scalability, support for deterministic and non-deterministic data processing on a single device, and software programmability that can be reused in future chip generations.

  15. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  16. Process mapping as a framework for performance improvement in emergency general surgery.

    PubMed

    DeGirolamo, Kristin; D'Souza, Karan; Hall, William; Joos, Emilie; Garraway, Naisan; Sing, Chad Kim; McLaughlin, Patrick; Hameed, Morad

    2017-12-01

    Emergency general surgery conditions are often thought of as being too acute for the development of standardized approaches to quality improvement. However, process mapping, a concept that has been applied extensively in manufacturing quality improvement, is now being used in health care. The objective of this study was to create process maps for small bowel obstruction in an effort to identify potential areas for quality improvement. We used the American College of Surgeons Emergency General Surgery Quality Improvement Program pilot database to identify patients who received nonoperative or operative management of small bowel obstruction between March 2015 and March 2016. This database, patient charts and electronic health records were used to create process maps from the time of presentation to discharge. Eighty-eight patients with small bowel obstruction (33 operative; 55 nonoperative) were identified. Patients who received surgery had a complication rate of 32%. The processes of care from the time of presentation to the time of follow-up were highly elaborate and variable in terms of duration; however, the sequences of care were found to be consistent. We used data visualization strategies to identify bottlenecks in care, and they showed substantial variability in terms of operating room access. Variability in the operative care of small bowel obstruction is high and represents an important improvement opportunity in general surgery. Process mapping can identify common themes, even in acute care, and suggest specific performance improvement measures.

  17. Performance indicators for quality in surgical and laboratory services at Muhimbili National Hospital (MNH) in Tanzania.

    PubMed

    Mbembati, Naboth A; Mwangu, Mugwira; Muhondwa, Eustace P Y; Leshabari, Melkizedek M

    2008-04-01

    Muhimbili National Hospital (MNH), a teaching and national referral hospital, is undergoing major reforms to improve the quality of health care. We performed a retrospective descriptive study using a set of performance indicators for the surgical and laboratory services of MNH in years 2001 and 2002, to help monitor and evaluate the impact of reforms on the quality of health care during and after the reform process. Hospital records were reviewed and information recorded for planned and postponed operations, laboratory equipment, reagents, laboratory tests and quality assurance programmes. In the year 2001 a total of 4332 non-emergency operations were planned, 3313 operations were performed and 1019 (23.5%) operations were postponed. In the year 2002, 4301 non-emergency operations were planned, 3046 were performed and 1255 (29%) were postponed. The most common reasons for operation postponement were "time-barred", interference by emergency operations, no show of patients and inoperable anaesthetic machines. Equipment problems and supply and staff shortages together accounted for one quarter of postponements. In the laboratory, a lack of equipment prevented some tests, but quality assurance was performed for most tests. Current surgical services at MNH are inadequate; operating theatres require modern, functioning equipment and adequate supplies of consumables to provide satisfactory care.

  18. Characterization and multivariate analysis of physical properties of processing peaches

    USDA-ARS?s Scientific Manuscript database

    Characterization of physical properties of fruits represents the first vital step to ensure optimal performance of fruit processing operations and is also a prerequisite in the development of new processing equipment. In this study, physical properties of engineering significance to processing of th...

  19. Status quo and current trends of operating room management in Germany.

    PubMed

    Baumgart, André; Schüpfer, Guido; Welker, Andreas; Bender, Hans-Joachim; Schleppers, Alexander

    2010-04-01

    Ongoing healthcare reforms in Germany have required strenuous efforts to adapt hospital and operating room organizations to the needs of patients, new technological developments, and social and economic demands. This review addresses the major developments in German operating room management research and current practice. The introduction of the diagnosis-related group system in 2003 has changed the incentive structure of German hospitals to redesign their operating room units. The role of operating room managers has been gradually changing in hospitals in response to the change in the reimbursement system. Operating room managers are today specifically qualified and increasingly externally hired staff. They are more and more empowered with authority to plan and control operating rooms as profit centers. For measuring performance, common perioperative performance indicators are still scarcely implemented in German hospitals. In 2008, a concerted time glossary was established to enable consistent monitoring of operating room performance with generally accepted process indicators. These key performance indicators are a consistent way to make a procedure or case - and also the effectiveness of the operating room management - more transparent. In the presence of increasing financial pressure, a hospital's executives need to empower an independent operating room management function to achieve the hospital's economic goals. Operating room managers need to adopt evidence-based methods also from other scientific fields, for example management science and information technology, to further sustain operating room performance.

  1. Marshall Space Flight Center Ground Systems Development and Integration

    NASA Technical Reports Server (NTRS)

    Wade, Gina

    2016-01-01

    Ground Systems Development and Integration performs a variety of tasks in support of the Mission Operations Laboratory (MOL) and other Center and Agency projects. These tasks include systems engineering processes such as system requirements development, system architecture design, integration, verification and validation, software development, and sustaining engineering of mission operations systems; this work has evolved the Huntsville Operations Support Center (HOSC) into a leader in remote operations for current and future NASA space projects. The group is also responsible for developing and managing telemetry and command configuration and calibration databases. Personnel are responsible for maintaining and enhancing their disciplinary skills in the areas of project management, software engineering, software development, software process improvement, telecommunications, networking, and systems management. Domain expertise in the ground systems area is also maintained and includes detailed proficiency in the areas of real-time telemetry systems, command systems, voice, video, data networks, and mission planning systems.

  2. Energy and Water Conservation Assessment of the Radiochemical Processing Laboratory (RPL) at Pacific Northwest National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Stephanie R.; Koehler, Theresa M.; Boyd, Brian K.

    2014-05-31

    This report summarizes the results of an energy and water conservation assessment of the Radiochemical Processing Laboratory (RPL) at Pacific Northwest National Laboratory (PNNL). The assessment was performed in October 2013 by engineers from the PNNL Building Performance Team with the support of the dedicated RPL staff and several Facilities and Operations (F&O) department engineers. The assessment was completed for the F&O department at PNNL in support of the requirements within Section 432 of the Energy Independence and Security Act (EISA) of 2007.

  3. Philosophy of ATHEANA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bley, D.C.; Cooper, S.E.; Forester, J.A.

    ATHEANA, a second-generation Human Reliability Analysis (HRA) method, integrates advances in psychology with engineering, human factors, and Probabilistic Risk Analysis (PRA) disciplines to provide an HRA quantification process and PRA modeling interface that can accommodate and represent human performance in real nuclear power plant events. The method uses the characteristics of serious accidents identified through retrospective analysis of serious operational events to set priorities in a search process for significant human failure events, unsafe acts, and error-forcing context (unfavorable plant conditions combined with negative performance-shaping factors). ATHEANA has been tested in a demonstration project at an operating pressurized water reactor.

  4. Review of performance, medical, and operational data on pilot aging issues

    NASA Technical Reports Server (NTRS)

    Stoklosa, J. H.

    1992-01-01

    An extensive review of the literature and studies relating to performance, medical, operational, and legal data regarding pilot aging issues was performed in order to determine what evidence there is, if any, to support mandatory pilot retirement. Popular misconceptions about aging, including the failure to distinguish between the normal aging process and disease processes that occur more frequently in older individuals, continue to contribute to much of the misunderstanding and controversy that surround this issue. Results: Review of medical data related to the pilot aging issue indicates that recent improvements in medical diagnostics and treatment technology have made it possible to identify to a high degree individuals who are at risk for developing sudden incapacitating illness and to treat those with disqualifying medical conditions. Performance studies revealed that after controlling for the presence of disease states, older pilots are able to perform as well as younger pilots on many performance tasks. Review of accident data showed that older, healthy pilots do not have higher accident rates than younger pilots, and indeed, evidence suggests that older pilots have an advantage in the cockpit due to higher experience levels. The Man-Machine-Mission-Environment interface of factors can be managed through structured, supervised, and enhanced operations, maintenance, flight reviews, and safety procedures in order to ensure safe and productive operations by reducing the margin of error and by increasing the margin of safety. Conclusions: There is no evidence indicating any specific age as an arbitrary cut-off point for pilots to perform their flight duties. A combination of regular medical screening, performance evaluation, enhanced operational maintenance, and safety procedures can ensure a safe pilot population more effectively than a mandatory retirement policy based on arbitrary age restrictions.

  5. Determining the microwave coupling and operational efficiencies of a microwave plasma assisted chemical vapor deposition reactor under high pressure diamond synthesis operating conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nad, Shreya; Gu, Yajun

    2015-07-15

    The microwave coupling efficiency of the 2.45 GHz, microwave plasma assisted diamond synthesis process is investigated by experimentally measuring the performance of a specific single mode excited, internally tuned microwave plasma reactor. Plasma reactor coupling efficiencies (η) > 90% are achieved over the entire 100–260 Torr pressure range and 1.5–2.4 kW input power diamond synthesis regime. When operating at a specific experimental operating condition, small additional internal tuning adjustments can be made to achieve η > 98%. When the plasma reactor has low empty cavity losses, i.e., the empty cavity quality factor is >1500, then overall microwave discharge coupling efficiencies (η_coup) of >94% can be achieved. A large, safe, and efficient experimental operating regime is identified. Both substrate hot spots and the formation of microwave plasmoids are eliminated when operating within this regime. This investigation suggests that both the reactor design and the reactor process operation must be considered when attempting to lower diamond synthesis electrical energy costs while still enabling a very versatile and flexible operation performance.

  6. Influence of sludge properties and hydraulic loading on the performance of secondary settling tanks--full-scale operational results.

    PubMed

    Vestner, R J; Günthert, F Wolfgang

    2004-01-01

    Full-scale investigations at a WWTP with a two-stage secondary settling tank process revealed relationships between significant operating parameters and performance in terms of effluent suspended solids concentration. Besides common parameters (e.g. surface overflow rate and sludge volume loading rate), feed SS concentration and flocculation time must be considered. The concentration of the return activated sludge may help to estimate the performance of existing secondary settling tanks.

  7. Deterministic mechanisms define the long-term anaerobic digestion microbiome and its functionality regardless of the initial microbial community.

    PubMed

    Peces, M; Astals, S; Jensen, P D; Clarke, W P

    2018-05-17

    The impact of the starting inoculum on long-term anaerobic digestion performance, process functionality and microbial community composition remains unclear. To understand the impact of the starting inoculum, active microbial communities from four different full-scale anaerobic digesters were each used to inoculate four continuous lab-scale anaerobic digesters, which were operated identically for 295 days. Digesters were operated at a 15-day solids retention time, an organic loading rate of 1 g COD L_r^-1 d^-1 (75:25 cellulose:casein) and 37 °C. Results showed that long-term process performance, metabolic rates (hydrolytic, acetogenic, and methanogenic) and microbial community are independent of the inoculum source. Digester process performance converged after 80 days, while metabolic rates and microbial communities converged after 120-145 days. The convergence of the different microbial communities towards a core community proves that the deterministic factors (process operational conditions) were a stronger driver than the initial microbial community composition. Indeed, the core community represented 72% of the relative abundance among the four digesters. Moreover, a number of positive correlations were observed between higher metabolic rates and the relative abundance of specific microbial groups. These correlations showed that both substrate consumers and suppliers trigger higher metabolic rates, expanding the knowledge of the nexus between microorganisms and functionality. Overall, these results support that deterministic factors control microbial communities in bioreactors independently of the inoculum source. Hence, it seems plausible that a desired microbial composition and functionality can be achieved by tuning process operational conditions. Copyright © 2018. Published by Elsevier Ltd.

  8. Applying lessons from commercial aviation safety and operations to resuscitation.

    PubMed

    Ornato, Joseph P; Peberdy, Mary Ann

    2014-02-01

    Both commercial aviation and resuscitation are complex activities in which team members must respond to unexpected emergencies in a consistent, high quality manner. Lives are at stake in both activities and the two disciplines have similar leadership structures, standard setting processes, training methods, and operational tools. Commercial aviation crews operate with remarkable consistency and safety, while resuscitation team performance and outcomes are highly variable. This commentary provides the perspective of two physician-pilots showing how commercial aviation training, operations, and safety principles can be adapted to resuscitation team training and performance. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  9. Electrolytes for Use in High Energy Lithium-Ion Batteries with Wide Operating Temperature Range

    NASA Technical Reports Server (NTRS)

    Smart, Marshall C.; Ratnakumar, B. V.; West, W. C.; Whitcanack, L. D.; Huang, C.; Soler, J.; Krause, F. C.

    2011-01-01

    Objectives of this work are: (1) Develop advanced Li-ion electrolytes that enable cell operation over a wide temperature range (i.e., -30 to +60 °C). (2) Improve the high temperature stability and lifetime characteristics of wide operating temperature electrolytes. (3) Improve the high voltage stability of these candidate electrolyte systems to enable operation up to 5 V with high specific energy cathode materials. (4) Define the performance limitations at low and high temperature extremes, as well as life-limiting processes. (5) Demonstrate the performance of advanced electrolytes in large-capacity prototype cells.

  10. A novelty detection diagnostic methodology for gearboxes operating under fluctuating operating conditions using probabilistic techniques

    NASA Astrophysics Data System (ADS)

    Schmidt, S.; Heyns, P. S.; de Villiers, J. P.

    2018-02-01

    In this paper, a fault diagnostic methodology is developed which is able to detect, locate and trend gear faults under fluctuating operating conditions when only vibration data from a single transducer, measured on a healthy gearbox are available. A two-phase feature extraction and modelling process is proposed to infer the operating condition and based on the operating condition, to detect changes in the machine condition. Information from optimised machine and operating condition hidden Markov models are statistically combined to generate a discrepancy signal which is post-processed to infer the condition of the gearbox. The discrepancy signal is processed and combined with statistical methods for automatic fault detection and localisation and to perform fault trending over time. The proposed methodology is validated on experimental data and a tacholess order tracking methodology is used to enhance the cost-effectiveness of the diagnostic methodology.
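
    The discrepancy-signal idea can be illustrated with a deliberately simplified sketch: fit a model to features from healthy vibration data, then score new windows by their negative log-likelihood against a threshold set on healthy data. The published methodology conditions on the inferred operating state using hidden Markov models; the single Gaussian model and simulated features below are stand-ins to show the discrepancy and alarm logic only.

    # Simplified discrepancy signal for novelty detection. The Gaussian model
    # and simulated features are illustrative stand-ins, not the paper's HMMs.
    import numpy as np

    def fit_healthy(features):                 # features: (n_windows, n_feats)
        return features.mean(axis=0), features.std(axis=0) + 1e-9

    def discrepancy(features, model):
        mu, sd = model
        # Negative log-likelihood under the healthy model, summed over features.
        nll = 0.5 * (((features - mu) / sd) ** 2 + np.log(2 * np.pi * sd ** 2))
        return nll.sum(axis=1)

    rng = np.random.default_rng(0)
    healthy = rng.normal(0.0, 1.0, size=(200, 4))   # hypothetical features
    faulty = rng.normal(1.5, 1.0, size=(200, 4))
    model = fit_healthy(healthy)
    threshold = np.percentile(discrepancy(healthy, model), 99)
    print("fault alarm rate:", (discrepancy(faulty, model) > threshold).mean())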

  11. Effectiveness of facilitated introduction of a standard operating procedure into routine processes in the operating theatre: a controlled interrupted time series.

    PubMed

    Morgan, Lauren; New, Steve; Robertson, Eleanor; Collins, Gary; Rivero-Arias, Oliver; Catchpole, Ken; Pickering, Sharon P; Hadi, Mohammed; Griffin, Damian; McCulloch, Peter

    2015-02-01

    Standard operating procedures (SOPs) should improve safety in the operating theatre, but controlled studies evaluating the effect of staff-led implementation are needed. In a controlled interrupted time series, we evaluated three team process measures (compliance with WHO surgical safety checklist, non-technical skills and technical performance) and three clinical outcome measures (length of hospital stay, complications and readmissions) before and after a 3-month staff-led development of SOPs. Process measures were evaluated by direct observation, using Oxford Non-Technical Skills II for non-technical skills and the 'glitch count' for technical performance. All staff in two orthopaedic operating theatres were trained in the principles of SOPs and then assisted to develop standardised procedures. Staff in a control operating theatre underwent the same observations but received no training. The change in difference between active and control groups was compared before and after the intervention using repeated measures analysis of variance. We observed 50 operations before and 55 after the intervention and analysed clinical data on 1022 and 861 operations, respectively. The staff chose to structure their efforts around revising the 'whiteboard' which documented and prompted tasks, rather than directly addressing specific task problems. Although staff preferred and sustained the new system, we found no significant differences in process or outcome measures before/after intervention in the active versus the control group. There was a secular trend towards worse outcomes in the postintervention period, seen in both active and control theatres. SOPs when developed and introduced by frontline staff do not necessarily improve operative processes or outcomes. The inherent tension in improvement work between giving staff ownership of improvement and maintaining control of direction needs to be managed, to ensure staff are engaged but invest energy in appropriate change. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
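
    For readers unfamiliar with interrupted time series analysis, the sketch below fits a segmented regression with baseline-trend, post-intervention level-change, and slope-change terms to simulated weekly glitch counts. It illustrates the study design only; the study's actual repeated-measures analysis and data are not reproduced here.

    # Segmented regression for an interrupted time series on simulated data.
    import numpy as np

    weeks = np.arange(40)
    post = (weeks >= 20).astype(float)           # intervention at week 20
    rng = np.random.default_rng(2)
    glitches = (10 - 0.05 * weeks - 1.5 * post - 0.1 * post * (weeks - 20)
                + rng.normal(0, 0.8, weeks.size))  # simulated glitch counts

    # Design: intercept, time, level change, slope change after intervention.
    X = np.column_stack([np.ones_like(weeks, dtype=float), weeks,
                         post, post * (weeks - 20)])
    beta, *_ = np.linalg.lstsq(X, glitches, rcond=None)
    print("level change:", round(beta[2], 2), "slope change:", round(beta[3], 3))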

  12. Development of Chemical Process Design and Control for ...

    EPA Pesticide Factsheets

    This contribution describes a novel process systems engineering framework that couples advanced control with sustainability evaluation and decision making for the optimization of process operations to minimize environmental impacts associated with products, materials, and energy. The implemented control strategy combines a biologically inspired method with optimal control concepts for finding more sustainable operating trajectories. The sustainability assessment of process operating points is carried out by using the U.S. E.P.A.'s Gauging Reaction Effectiveness for the ENvironmental Sustainability of Chemistries with a multi-Objective Process Evaluator (GREENSCOPE) tool, which provides scores for the selected indicators in the economic, material efficiency, environmental and energy areas. The indicator scores describe process performance on a sustainability measurement scale, effectively determining which operating point is more sustainable when more than one steady state exists for manufacturing a specific product. Through comparisons between a representative benchmark and the optimal steady states obtained through implementation of the proposed controller, a systematic decision can be made in terms of whether the implementation of the controller is moving the process towards a more sustainable operation. The effectiveness of the proposed framework is illustrated through a case study of a continuous fermentation process for fuel production, whose materi

  13. Delving into sensible measures to enhance the environmental performance of biohydrogen: A quantitative approach based on process simulation, life cycle assessment and data envelopment analysis.

    PubMed

    Martín-Gamboa, Mario; Iribarren, Diego; Susmozas, Ana; Dufour, Javier

    2016-08-01

    A novel approach is developed to evaluate quantitatively the influence of operational inefficiency in biomass production on the life-cycle performance of hydrogen from biomass gasification. Vine-growers and process simulation are used as key sources of inventory data. The life cycle assessment of biohydrogen according to current agricultural practices for biomass production is performed, as well as that of target biohydrogen according to agricultural practices optimised through data envelopment analysis. Only 20% of the vineyards assessed operate efficiently, and the benchmarked reduction percentages of operational inputs range from 45% to 73% in the average vineyard. The fulfilment of operational benchmarks avoiding irregular agricultural practices is concluded to improve significantly the environmental profile of biohydrogen (e.g., impact reductions above 40% for eco-toxicity and global warming). Finally, it is shown that this type of bioenergy system can be an excellent replacement for conventional hydrogen in terms of global warming and non-renewable energy demand. Copyright © 2016 Elsevier Ltd. All rights reserved.
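
    The sketch below illustrates the kind of input-oriented CCR data envelopment analysis used here to benchmark operational efficiency; it is a minimal sketch assuming SciPy is available, and the input/output data are made-up stand-ins for the real vineyard inventory.

```python
# Hedged sketch of input-oriented CCR data envelopment analysis (DEA).
# Inputs/outputs below are illustrative, not the study's vineyard data.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 6.0],    # inputs (rows) x DMUs (columns),
              [1.0, 4.0, 2.0]])   # e.g. diesel, fertiliser per vineyard
Y = np.array([[5.0, 6.0, 7.0]])   # outputs, e.g. grape yield

n_dmus = X.shape[1]
for o in range(n_dmus):
    # Variables: [theta, lambda_1 .. lambda_n]; minimise theta.
    c = np.zeros(n_dmus + 1)
    c[0] = 1.0
    # Inputs:  sum_j lambda_j * x_ij <= theta * x_io
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(X.shape[0])
    # Outputs: sum_j lambda_j * y_rj >= y_ro
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n_dmus + 1))
    print(f"DMU {o}: efficiency = {res.x[0]:.3f}")  # 1.0 means efficient
```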

  14. A discrepancy within primate spatial vision and its bearing on the definition of edge detection processes in machine vision

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1990-01-01

    The visual perception of form information is considered to be based on the functioning of simple and complex neurons in the primate striate cortex. However, a review of the physiological data on these brain cells cannot be harmonized with either the perceptual spatial frequency performance of primates or the performance necessary for form perception in humans. This discrepancy, together with recent interest in cortical-like and perceptual-like processing in image coding and machine vision, prompted a series of image processing experiments intended to help define the selection of image operators. The experiments were aimed at determining operators which could be used to detect edges in a computational manner consistent with the visual perception of structure in images. Fundamental issues were the selection of size (peak spatial frequency) and circular versus oriented operators (or some combination). In a previous study, circular difference-of-Gaussian (DOG) operators, with peak spatial frequency responses at about 11 and 33 cyc/deg, were found to capture the primary structural information in images. Here, larger-scale circular DOG operators were explored; they led to severe loss of image structure and introduced spatial dislocations (due to blur) that are not consistent with visual perception. Orientation-sensitive operators (akin to one class of simple cortical neurons) introduced ambiguities of edge extent regardless of the scale of the operator. For machine vision schemes which are functionally similar to natural visual form perception, two circularly symmetric, very high spatial frequency channels appear to be necessary and sufficient for a wide range of natural images. Such a machine vision scheme is most similar to the physiological performance of the primate lateral geniculate nucleus rather than the striate cortex.
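
    A minimal sketch of circular DOG filtering with zero-crossing edge marking, assuming NumPy/SciPy; the sigma values are illustrative stand-ins for the paper's ~11 and ~33 cyc/deg channels rather than calibrated equivalents.

```python
# Hedged sketch: circular difference-of-Gaussian (DOG) band-pass filtering
# followed by thresholded zero-crossing detection. Sigmas are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_channel(image, sigma_center, ratio=1.6):
    """Band-pass the image with a circularly symmetric DOG operator."""
    return (gaussian_filter(image, sigma_center)
            - gaussian_filter(image, sigma_center * ratio))

def zero_crossing_edges(response, threshold=0.01):
    """Mark edges where the DOG response changes sign with enough contrast."""
    edges = np.zeros_like(response, dtype=bool)
    sign = response > 0
    edges[:-1, :] |= (sign[:-1, :] != sign[1:, :]) & \
                     (np.abs(response[:-1, :] - response[1:, :]) > threshold)
    edges[:, :-1] |= (sign[:, :-1] != sign[:, 1:]) & \
                     (np.abs(response[:, :-1] - response[:, 1:]) > threshold)
    return edges

rng = np.random.default_rng(0)
image = np.zeros((64, 64)); image[:, 32:] = 1.0   # a vertical step edge
image += 0.02 * rng.standard_normal(image.shape)  # mild sensor noise
edges = zero_crossing_edges(dog_channel(image, sigma_center=1.0))
print(edges.sum(), "edge pixels detected")
```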

  15. Analytic and heuristic processing influences on adolescent reasoning and decision-making.

    PubMed

    Klaczynski, P A

    2001-01-01

    The normative/descriptive gap is the discrepancy between actual reasoning and traditional standards for reasoning. The relationship between age and the normative/descriptive gap was examined by presenting adolescents with a battery of reasoning and decision-making tasks. Middle adolescents (N = 76) performed closer to normative ideals than early adolescents (N = 66), although the normative/descriptive gap was large for both groups. Correlational analyses revealed that (1) normative responses correlated positively with each other, (2) nonnormative responses were positively interrelated, and (3) normative and nonnormative responses were largely independent. Factor analyses suggested that performance was based on two processing systems. The "analytic" system operates on "decontextualized" task representations and underlies conscious, computational reasoning. The "heuristic" system operates on "contextualized," content-laden representations and produces "cognitively cheap" responses that sometimes conflict with traditional norms. Analytic processing was more clearly linked to age and to intelligence than heuristic processing. Implications for cognitive development, the competence/performance issue, and rationality are discussed.

  16. Business process re-engineering in the logistics industry: a study of implementation, success factors, and performance

    NASA Astrophysics Data System (ADS)

    Shen, Chien-wen; Chou, Ching-Chih

    2010-02-01

    As business process re-engineering (BPR) is an important foundation for ensuring the success of enterprise systems, this study investigates the relationships among BPR implementation, BPR success factors, and business performance for logistics companies. Our empirical findings show that BPR companies outperformed non-BPR companies, not only on information processing, technology applications, organisational structure, and co-ordination, but also on all of the major logistics operations. Comparing the different perceptions of the success factors for BPR, non-BPR companies place greater emphasis on the importance of employee involvement, while BPR companies are more concerned about the influence of risk management. Our findings also suggest that management attitude towards BPR success factors could affect performance with regard to technology applications and logistics operations. Logistics companies which have not yet implemented the BPR approach could refer to our findings to evaluate the advantages of such an undertaking and to address those BPR success factors affecting performance before conducting BPR projects.

  17. Missile signal processing common computer architecture for rapid technology upgrade

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain comprises two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to respond rapidly to new threats. A new design approach is made possible by three developments: Moore's Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. Use of standardized development tools and third-party software upgrades is enabled, as is rapid upgrade of processing components as improved algorithms are developed. The resulting weapon system will have superior processing capability over a custom approach at the time of deployment, as a result of shorter development cycles and the use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system and, because modification is simple, can migrate between weapon system variants. This paper presents a reference design using the new approach that utilizes an AltiVec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS) and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of an interceptor algorithm operating on this real-time platform are provided.
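
    As an illustration of the front-end video processing stage, the sketch below shows a classic two-point non-uniformity correction (a standard NUC technique, not necessarily the one used in any particular interceptor); the sensor model and calibration levels are synthetic assumptions.

```python
# Hedged sketch: two-point non-uniformity correction (NUC). Gains and
# offsets come from synthetic calibration frames, not a real IR sensor.
import numpy as np

rng = np.random.default_rng(1)
shape = (128, 128)
gain_true = 1.0 + 0.05 * rng.standard_normal(shape)   # fixed-pattern gain
offset_true = 2.0 * rng.standard_normal(shape)        # fixed-pattern offset

def sensor(radiance):
    """Model a detector with per-pixel gain/offset non-uniformity."""
    return gain_true * radiance + offset_true

# Calibration: view two uniform reference sources at known levels.
low, high = 10.0, 100.0
frame_low, frame_high = sensor(low), sensor(high)
gain = (high - low) / (frame_high - frame_low)   # per-pixel correction gain
offset = low - gain * frame_low                  # per-pixel correction offset

corrected = gain * sensor(55.0) + offset         # apply NUC to a scene frame
print("max residual non-uniformity:", np.abs(corrected - 55.0).max())
```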

  18. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models derived from real error data collected on a multiprocessor system are described. Model development from the raw error data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource-usage data collected on an IBM 3081 system during its normal operation, in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
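
    The sketch below illustrates why non-exponential holding times force a semi-Markov treatment: it draws heavy-tailed holding times as a stand-in for the measured data, fits an exponential (Markov) model with the same mean, and compares tail probabilities. The Weibull parameters are illustrative, not taken from the IBM 3081 measurements.

```python
# Hedged sketch: an exponential (Markov) fit misrepresents heavy-tailed
# holding times. The Weibull "data" is a stand-in for measured state
# holding times, not the study's data.
import numpy as np

rng = np.random.default_rng(2)
holding_times = 100.0 * rng.weibull(0.5, size=100_000)  # heavy-tailed "data"

mean = holding_times.mean()
for t in (mean, 5 * mean, 10 * mean):
    empirical = (holding_times > t).mean()
    exponential_fit = np.exp(-t / mean)       # Markov-model tail prediction
    print(f"P(T > {t:9.1f}):  empirical {empirical:.4f}   "
          f"exponential fit {exponential_fit:.4f}")
```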

  19. Overview of nanofluid application through minimum quantity lubrication (MQL) in metal cutting process

    NASA Astrophysics Data System (ADS)

    Sharif, Safian; Sadiq, Ibrahim Ogu; Suhaimi, Mohd Azlan; Rahim, Shayfull Zamree Abd

    2017-09-01

    Pollution-related activities, in addition to the handling cost of conventional cutting fluid application in the metal cutting industry, have generated much concern over time. The desire for a green machining environment, which will preserve the environment through reduction or elimination of machining-related pollution, reduction in oil consumption, and safety of the machine operators, without compromising an efficient machining process, led to a search for alternatives to conventional cutting fluid. Amongst the alternatives of dry machining, cryogenic cooling, high-pressure cooling, and near-dry or minimum quantity lubrication (MQL), MQL has shown remarkable performance in terms of cost, machining output, and the safety of the environment and machine operators. However, MQL under aggressive or very-high-speed machining poses certain restrictions, as the lubrication medium cannot perform efficiently at elevated temperatures. To compensate for the shortcomings of the MQL technique, high-thermal-conductivity nanoparticles are introduced into cutting fluids for use in the MQL lubrication process. They have shown enhanced machining performance and significantly reduced loads on the environment. The present work evaluates the application and performance of nanofluids in metal cutting through the MQL lubrication technique, highlighting their impacts and prospects as a lubrication strategy in metal cutting for sustainable green manufacturing. Enhanced performance of vegetable-oil-based nanofluids over mineral-oil-based nanofluids has been reported and is thus highlighted.

  20. The Research and Implementation of MUSER CLEAN Algorithm Based on OpenCL

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Chen, K.; Deng, H.; Wang, F.; Mei, Y.; Wei, S. L.; Dai, W.; Yang, Q. P.; Liu, Y. B.; Wu, J. P.

    2017-03-01

    High-performance data processing on a single machine is an urgent need in the development of astronomical software. However, due to differing machine configurations, traditional programming techniques such as multi-threading and CUDA (Compute Unified Device Architecture)+GPU (Graphic Processing Unit) have obvious limitations in portability and seamlessness between different operating systems. The OpenCL (Open Computing Language) used in the development of the MUSER (MingantU SpEctral Radioheliograph) data processing system is introduced, and the Högbom CLEAN algorithm is re-implemented as a parallel CLEAN algorithm using Python and the PyOpenCL extension package. The experimental results show that the CLEAN algorithm based on OpenCL has approximately equal operating efficiency compared with the former CLEAN algorithm based on CUDA. More importantly, the data processing of this system in a CPU-only (Central Processing Unit) environment can also achieve high performance, which solves the problem of the environmental dependence of CUDA+GPU. Overall, the research improves the adaptability of the system, with emphasis on the performance of MUSER image clean computing. Meanwhile, the realization of OpenCL in MUSER proves its availability in scientific data processing. In view of the high-performance computing features of OpenCL in heterogeneous environments, it will probably become a preferred technology in future high-performance astronomical software development.
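
    For reference, a serial NumPy sketch of the Högbom CLEAN loop that the study parallelises with PyOpenCL; this shows only the algorithm's structure (find peak, subtract scaled beam, accumulate components), not the OpenCL implementation.

```python
# Serial NumPy sketch of Hoegbom CLEAN (structure only; the paper's
# contribution is the PyOpenCL parallelisation, not reproduced here).
import numpy as np
from scipy.signal import fftconvolve

def hogbom_clean(dirty, psf, gain=0.1, threshold=1e-3, max_iter=500):
    """Iteratively subtract the scaled, shifted PSF at the residual peak."""
    half = psf.shape[0] // 2
    residual = np.pad(dirty.astype(float), half)   # pad so subtraction fits
    interior = (slice(half, half + dirty.shape[0]),
                slice(half, half + dirty.shape[1]))
    model = np.zeros_like(dirty, dtype=float)
    for _ in range(max_iter):
        r = residual[interior]
        y, x = np.unravel_index(np.abs(r).argmax(), r.shape)
        peak = r[y, x]
        if abs(peak) < threshold:
            break
        model[y, x] += gain * peak                 # accumulate component
        residual[y:y + psf.shape[0], x:x + psf.shape[1]] -= gain * peak * psf
    return model, residual[interior]

# Demo: two point sources observed through a Gaussian beam.
psf = np.exp(-(np.mgrid[-7:8, -7:8] ** 2).sum(axis=0) / 8.0)
sky = np.zeros((64, 64)); sky[20, 20] = 1.0; sky[40, 45] = 0.6
dirty = fftconvolve(sky, psf, mode="same")
model, resid = hogbom_clean(dirty, psf)
print("residual peak after CLEAN:", np.abs(resid).max())
```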

  1. Electrokinetic remediation prefield test methods

    NASA Technical Reports Server (NTRS)

    Hodko, Dalibor (Inventor)

    2000-01-01

    Methods are disclosed for determining the parameters critical to designing an electrokinetic soil remediation process, including electrode well spacing, operating current/voltage, electroosmotic flow rate, electrode well wall design, and the amount of buffering or neutralizing solution needed in the electrode wells at operating conditions. These methods are preferably performed prior to initiating a full-scale electrokinetic remediation process in order to obtain efficient remediation of the contaminants.

  2. Performance of the Landsat-Data Collection System in a Total System Context

    NASA Technical Reports Server (NTRS)

    Paulson, R. W. (Principal Investigator); Merk, C. F.

    1975-01-01

    The author has identified the following significant results. This experiment was, and continues to be, an integration of the LANDSAT-DCS with the data collection and processing system of the Geological Survey. Although an experimental demonstration, it successfully integrated a satellite relay system capable of continental data collection with an existing governmental nationwide operational data processing and distribution network. The Survey's data processing system uses a large general-purpose computer with insufficient redundancy for 24-hour-a-day, 7-day-a-week operation. This is a significant, but soluble, obstacle to converting the experimental integration of the systems into an operational one.

  3. Ensuring Quality

    ERIC Educational Resources Information Center

    Erickson, Paul W.

    2009-01-01

    This article discusses why building commissioning for education institutions is needed. School facilities owners and operators should confirm whether their building systems are performing as expected. The more comprehensive the confirmation process, the greater opportunity there is for reducing operations and maintenance costs, and improving…

  4. Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.

    1997-12-01

    Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to the other, and performance often falls short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool, which enables application programmers to specify at a high level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables efficiently combining parallel storage access routines with sequential image processing operations. This paper shows how processing- and I/O-intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP-specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP-specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.
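
    A minimal sketch of the pipelining idea CAP expresses declaratively, using a plain Python thread pool to overlap (simulated) tile I/O with processing; the tile size, fake read delay, and "filter" are illustrative assumptions, not CAP code.

```python
# Hedged sketch: overlapping tile I/O with tile processing, the kind of
# data-access/processing pipelining discussed above. All numbers are toy.
import time
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def read_tile(i):
    time.sleep(0.05)                      # stands in for disk/network I/O
    return np.full((256, 256), float(i))

def process_tile(tile):
    return tile.mean()                    # stands in for a filtering step

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    # map() keeps tiles flowing: later tiles are being read while earlier
    # ones are processed, instead of read-all-then-process-all.
    results = [process_tile(t) for t in pool.map(read_tile, range(16))]
print(f"{len(results)} tiles in {time.perf_counter() - t0:.2f}s")
```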

  5. Process control systems at Homer City coal preparation plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shell, W.P.

    1983-03-01

    An important part of process control engineering is the implementation of the basic control system design, through commissioning, to routine operation. This is a period when basic concepts can be reviewed and improvements either implemented or recorded for application in future systems. The experience of commissioning the process control systems in the Homer City coal cleaning plant is described and discussed. The current level of operating control performance, in individual sections and in the overall system, is also reported and discussed.

  6. Fuzzy simulation in concurrent engineering

    NASA Technical Reports Server (NTRS)

    Kraslawski, A.; Nystrom, L.

    1992-01-01

    Concurrent engineering is becoming a very important practice in manufacturing. A problem in concurrent engineering is the uncertainty associated with the values of the input variables and operating conditions. The problem discussed in this paper concerns the simulation of processes where the raw materials and the operational parameters possess fuzzy characteristics. The processing of fuzzy input information is performed by the vertex method and the commercial simulation packages POLYMATH and GEMS. Examples are presented to illustrate the usefulness of the method in the simulation of chemical engineering processes.
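
    A minimal sketch of the vertex method at a single alpha-cut, assuming a toy reactor model in place of the POLYMATH/GEMS simulations: each fuzzy input becomes an interval, and the function is evaluated at every vertex of the resulting input box (which brackets the output exactly when the function is monotonic in each input).

```python
# Hedged sketch of the vertex method for fuzzy inputs at one alpha-cut.
# The reactor model and intervals are illustrative, not the paper's cases.
import math
from itertools import product

def vertex_method(func, alpha_cuts):
    """alpha_cuts: one (lo, hi) interval per fuzzy input at a given alpha
    level. Evaluates func at every vertex of the input box and returns the
    resulting output interval (exact for monotonic functions)."""
    values = [func(*vertex) for vertex in product(*alpha_cuts)]
    return min(values), max(values)

def conversion(temperature, residence_time):
    """Toy first-order reactor conversion, X = 1 - exp(-k(T) * tau)."""
    k = 0.05 * math.exp((temperature - 300.0) / 40.0)
    return 1.0 - math.exp(-k * residence_time)

# Fuzzy inputs at some alpha level: T = 300 +/- 10 K, tau = 8 +/- 2 min.
lo, hi = vertex_method(conversion, [(290.0, 310.0), (6.0, 10.0)])
print(f"conversion interval at this alpha-cut: [{lo:.3f}, {hi:.3f}]")
```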

  7. Monitoring cognitive and emotional processes through pupil and cardiac response during dynamic versus logical task.

    PubMed

    Causse, Mickaël; Sénard, Jean-Michel; Démonet, Jean François; Pastor, Josette

    2010-06-01

    The paper deals with the links between physiological measurements and cognitive and emotional functioning. Since the operator is a key agent in charge of complex systems, the definition of metrics able to predict operator performance is a great challenge. Measurement of the physiological state is a very promising approach, but a precise understanding is required; in particular, few studies compare autonomic nervous system reactivity according to specific cognitive processes during task performance, and task-related psychological stress is often ignored. We compared physiological parameters recorded from 24 healthy subjects facing two neuropsychological tasks: a dynamic task that requires problem solving in a world that continually evolves over time, and a logical task representative of the cognitive processes performed by operators facing everyday problem solving. Results showed that the mean pupil diameter change was higher during the dynamic task; conversely, the heart rate was higher during the logical task. Finally, systolic blood pressure seemed to be strongly sensitive to psychological stress. Better accounting for the precise influence of a given cognitive activity, and for both workload and task-induced psychological stress during task performance, is a promising way to better monitor operators in complex working situations and to detect mental overload or stress factors conducive to error.

  8. Predictive displays for a process-control schematic interface.

    PubMed

    Yin, Shanqing; Wickens, Christopher D; Helander, Martin; Laberge, Jason C

    2015-02-01

    Our objective was to examine the extent to which increasing precision of predictive (rate of change) information in process control will improve performance on a simulated process-control task. Predictive displays have been found to be useful in process control (as well as aviation and maritime industries). However, authors of prior research have not examined the extent to which predictive value is increased by increasing predictor resolution, nor has such research tied potential improvements to changes in process control strategy. Fifty nonprofessional participants each controlled a simulated chemical mixture process (honey mixer simulation) that simulated the operations found in process control. Participants in each of five groups controlled with either no predictor or a predictor ranging in the resolution of prediction of the process. Increasing detail resolution generally increased the benefit of prediction over the control condition although not monotonically so. The best overall performance, combining quality and predictive ability, was obtained by the display of intermediate resolution. The two displays with the lowest resolution were clearly inferior. Predictors with higher resolution are of value but may trade off enhanced sensitivity to variable change (lower-resolution discrete state predictor) with smoother control action (higher-resolution continuous predictors). The research provides guidelines to the process-control industry regarding displays that can most improve operator performance.

  9. Bioreactor performance: a more scientific approach for practice.

    PubMed

    Lübbert, A; Bay Jørgensen, S

    2001-02-13

    In practice, the performance of a biochemical conversion process, i.e. the bioreactor performance, is essentially determined by the benefit/cost ratio. The benefit is generally defined in terms of the amount of the desired product produced and its market price. Cost reduction is the major objective in biochemical engineering. There are two essential engineering approaches to minimizing the cost of creating a particular product in an existing plant. One is to find a control path or operational procedure that optimally uses the dynamics of the process and copes with the many constraints restricting production. The other is to remove or lower the constraints by constructive improvements of the equipment and/or the microorganisms. This paper focuses on the first approach, dealing with optimization of the operational procedure and the measures by which one can ensure that the process adheres to the predetermined path. In practice, feedforward control is the predominant control mode applied. However, as it is frequently inadequate for optimal performance, feedback control may also be employed. Relevant aspects of such performance optimization are discussed.
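
    The sketch below contrasts the two control modes discussed above on a toy substrate-concentration loop: a fixed feedforward feed rate versus PI feedback on the measured concentration. All dynamics, gains, and the disturbance are illustrative assumptions, not a real bioreactor model.

```python
# Hedged sketch: feedforward vs PI feedback on a toy substrate loop.
# Rates, gains, and the disturbance are assumptions for illustration.
def simulate(feedback, steps=200, dt=0.1, setpoint=2.0):
    s, integral = setpoint, 0.0       # substrate conc. (g/L), PI integral
    worst = 0.0
    for k in range(steps):
        if feedback:
            error = setpoint - s
            integral += error * dt
            feed = max(0.0, 1.6 + 2.0 * error + 0.5 * integral)  # PI law
        else:
            feed = 1.6                # fixed, precomputed feedforward rate
        uptake = 0.8 * s              # toy microbial consumption
        disturbance = 0.4 if 80 <= k < 120 else 0.0  # unmodelled uptake burst
        s += dt * (feed - uptake - disturbance)
        worst = max(worst, abs(s - setpoint))
    return worst

for mode, label in ((False, "feedforward"), (True, "feedback   ")):
    print(label, "max deviation from setpoint:", round(simulate(mode), 3))
```

    The point of the toy comparison: the feedforward schedule tracks the precomputed path perfectly until the unmodelled disturbance arrives, while the feedback loop rejects it, which is why the paper notes feedback may be needed when feedforward alone is inadequate.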

  10. F3D Image Processing and Analysis for Many - and Multi-core Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    F3D is written in OpenCL, so it achieves platform-portable parallelism on modern multi-core CPUs and many-core GPUs. The interface and mechanisms to access the F3D core are written in Java as a plugin for Fiji/ImageJ to deliver several key image-processing algorithms necessary to remove artifacts from micro-tomography data. The algorithms consist of data-parallel-aware filters that efficiently utilize resources, can work on out-of-core datasets, and scale efficiently across multiple accelerators. Optimizing for data-parallel filters, streaming out-of-core datasets, and efficient resource, memory, and data management over complex execution sequences of filters greatly expedites any scientific workflow with image processing requirements. F3D performs several different types of 3D image processing operations, such as non-linear filtering using bilateral filtering and/or median filtering and/or morphological operators (MM). F3D gray-level MM operators are one-pass, constant-time methods that can perform morphological transformations with a line structuring element oriented in discrete directions. Additionally, MM operators can be applied to gray-scale images, and consist of two parts: (a) a reference shape, or structuring element, which is translated over the image, and (b) a mechanism, or operation, that defines the comparisons to be performed between the image and the structuring element. This tool provides a critical component within many complex pipelines such as those for performing automated segmentation of image stacks. F3D is also a "descendant" of Quant-CT, software we developed previously; the two modules are to be integrated in a future version. Further details were reported in: D.M. Ushizima, T. Perciano, H. Krishnan, B. Loring, H. Bale, D. Parkinson, and J. Sethian. Structure recognition from high-resolution images of ceramic composites. IEEE International Conference on Big Data, October 2014.
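
    A hedged sketch of a gray-level morphological opening with an oriented line structuring element, in the spirit of F3D's MM operators; it uses SciPy's generic (not one-pass constant-time) implementation on a synthetic image.

```python
# Hedged sketch: gray-level opening with a discrete-direction line
# structuring element, via scipy rather than F3D's OpenCL kernels.
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def line_element(length, orientation):
    """Binary footprint: a horizontal, vertical, or 45-degree line."""
    if orientation == "horizontal":
        return np.ones((1, length), dtype=bool)
    if orientation == "vertical":
        return np.ones((length, 1), dtype=bool)
    return np.eye(length, dtype=bool)          # 45-degree diagonal

def opening(image, footprint):
    """Erosion then dilation: removes bright features thinner than the
    structuring element along its orientation."""
    return grey_dilation(grey_erosion(image, footprint=footprint),
                         footprint=footprint)

rng = np.random.default_rng(3)
image = rng.random((64, 64))
image[32, :] = 2.0                              # a bright 1-pixel streak
opened = opening(image, line_element(9, "vertical"))
print("streak suppressed:", opened[32].max() < 2.0)
```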

  11. Auditory Working Memory Load Impairs Visual Ventral Stream Processing: Toward a Unified Model of Attentional Load

    ERIC Educational Resources Information Center

    Klemen, Jane; Buchel, Christian; Buhler, Mira; Menz, Mareike M.; Rose, Michael

    2010-01-01

    Attentional interference between tasks performed in parallel is known to have strong and often undesired effects. As yet, however, the mechanisms by which interference operates remain elusive. A better knowledge of these processes may facilitate our understanding of the effects of attention on human performance and the debilitating consequences…

  12. 40 CFR 63.117 - Process vent provisions-reporting and recordkeeping requirements for group and TRE determinations...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... and recordkeeping requirements for group and TRE determinations and performance tests. (a) Each owner or operator subject to the control provisions for Group 1 process vents in § 63.113(a) or the... recordkeeping requirements for group and TRE determinations and performance tests. 63.117 Section 63.117...

  13. 40 CFR 63.117 - Process vent provisions-reporting and recordkeeping requirements for group and TRE determinations...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... and recordkeeping requirements for group and TRE determinations and performance tests. (a) Each owner or operator subject to the control provisions for Group 1 process vents in § 63.113(a) or the... recordkeeping requirements for group and TRE determinations and performance tests. 63.117 Section 63.117...

  14. ConnectX-2 InfiniBand Management Queues: New Support for Network Offloaded Collective Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Richard L; Poole, Stephen W; Shamis, Pavel

    2010-01-01

    This paper introduces the newly developed InfiniBand (IB) Management Queue capability, used by the Host Channel Adapter (HCA) to manage network task data flow dependencies and progress the communications associated with such flows. These tasks include sends, receives, and the newly supported wait task, and are scheduled by the HCA based on a data dependency description provided by the user. This functionality is supported by the ConnectX-2 HCA, and provides the means for delegating collective communication management and progress to the HCA, also known as collective communication offload. This provides a means for overlapping collective communications managed by the HCA and computation on the Central Processing Unit (CPU), thus making it possible to reduce the impact of system noise on parallel applications using collective operations. This paper further describes how this new capability can be used to implement scalable Message Passing Interface (MPI) collective operations, describing the high-level details of how it is used to implement the MPI Barrier collective operation and focusing on the latency-sensitive performance aspects of this new capability. This paper concludes with small-scale benchmark experiments comparing implementations of the barrier collective operation, using the new network offload capabilities, with established point-to-point based implementations of these same algorithms, which manage the data flow using the central processing unit. These early results demonstrate the promise this new capability provides for improving the scalability of high-performance applications using collective communications. The latency of the HCA-based implementation of the barrier is similar to that of the best performing point-to-point based implementation managed by the central processing unit, and begins to outperform it as the number of processes involved in the collective operation increases.
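
    For context, the sketch below shows the kind of point-to-point dissemination barrier the offloaded implementation is compared against, with the CPU progressing each of the ceil(log2 N) rounds; mpi4py is assumed available, and this is not the paper's code.

```python
# Hedged sketch: a CPU-progressed dissemination barrier, the point-to-point
# baseline that HCA-offloaded barriers are typically compared against.
# Run with, e.g.: mpirun -n 4 python barrier_sketch.py
from math import ceil, log2
from mpi4py import MPI

def dissemination_barrier(comm):
    rank, size = comm.Get_rank(), comm.Get_size()
    for k in range(ceil(log2(size))):
        partner_send = (rank + 2 ** k) % size
        partner_recv = (rank - 2 ** k) % size
        # Every rank both sends and receives each round; the CPU drives
        # this progress, which is exactly what the HCA offload removes.
        comm.Sendrecv(sendbuf=bytearray(1), dest=partner_send,
                      recvbuf=bytearray(1), source=partner_recv)

comm = MPI.COMM_WORLD
dissemination_barrier(comm)
if comm.Get_rank() == 0:
    print("all ranks passed the barrier")
```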

  15. A Systematic Approach for Obtaining Performance on Matrix-Like Operations

    NASA Astrophysics Data System (ADS)

    Veras, Richard Michael

    Scientific computation plays a critical role in the scientific process because it allows us to ask complex queries and test predictions that would otherwise be infeasible to perform experimentally. Because of its power, scientific computing has helped drive advances in many fields, ranging from Engineering and Physics to Biology and Sociology to Economics and Drug Development, and even to Machine Learning and Artificial Intelligence. Common among these domains is the desire for timely computational results, and thus a considerable amount of human expert effort is spent towards obtaining performance for these scientific codes. However, this is no easy task because each of these domains presents its own unique set of challenges to software developers, such as domain-specific operations, structurally complex data and ever-growing datasets. Compounding these problems are the myriad constantly changing, complex and unique hardware platforms that an expert must target. Unfortunately, an expert is typically forced to reproduce their effort across multiple problem domains and hardware platforms. In this thesis, we demonstrate the automatic generation of expert-level high-performance scientific codes for Dense Linear Algebra (DLA), Structured Mesh (Stencil), Sparse Linear Algebra and Graph Analytics. In particular, this thesis seeks to address the issue of obtaining performance on many complex platforms for a certain class of matrix-like operations that span many scientific, engineering and social fields. We do this by automating a method used for obtaining high performance in DLA and extending it to structured, sparse and scale-free domains. We argue that it is the use of the underlying structure found in the data from these domains that enables this process. Thus, obtaining performance for most operations does not occur in isolation from the data being operated on, but instead depends significantly on the structure of the data.
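
    As a small example of the kind of structure-aware transformation such expert effort produces in DLA, the sketch below shows a cache-blocked matrix multiply; the block size is a stand-in for what an autotuner or expert would select.

```python
# Hedged sketch: a cache-blocked matrix multiply, the classic DLA
# transformation. The block size is an illustrative tuning knob.
import numpy as np

def blocked_matmul(A, B, block=64):
    """C = A @ B computed block-by-block so each working set of three
    blocks fits in cache (loop order chosen for locality)."""
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i0 in range(0, n, block):
        for p0 in range(0, k, block):
            for j0 in range(0, m, block):
                C[i0:i0 + block, j0:j0 + block] += (
                    A[i0:i0 + block, p0:p0 + block]
                    @ B[p0:p0 + block, j0:j0 + block])
    return C

rng = np.random.default_rng(4)
A, B = rng.random((200, 300)), rng.random((300, 160))
print("max abs error vs NumPy:", np.abs(blocked_matmul(A, B) - A @ B).max())
```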

  16. In situ steam enhanced recovery process, Hughes Environmental Systems, Inc. innovative technology evaluation report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, K.

    1995-01-01

    This Innovative Technology Evaluation report summarizes the findings of an evaluation of the in situ Steam Enhanced Recovery Process (SERP) operated by Hughes Environmental Systems, Inc. at the Rainbow Disposal facility in Huntington Beach, California. The technology demonstration was conducted concurrently with a full-scale remedial action using the technology on an underground diesel leak. From this demonstration, it was concluded that the SERP process did not achieve the remedial goals desired at this site, and there were significant operational problems. It is believed that these operational problems can be solved and substantially better performance can be attained. The cost of treatment was quite low, as expected for an in situ process.

  17. Coal liquefaction process solvent characterization and evaluation: Progress report, 1 April--30 June 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winschel, R. A.; Robbins, G. A.; Burke, F. P.

    1986-11-01

    Conoco Coal Research Division is characterizing samples of direct coal liquefaction process oils using a variety of analytical techniques to provide a detailed description of the chemical composition of the oils, to more fully understand the interrelationship of process oil composition and process operations, to aid in plant operation, and to lead to process improvements. The approach taken is to obtain analyses of a large number of well-defined process oils taken during periods of known operating conditions and known process performance. A set of thirty-one process oils from the Hydrocarbon Research, Inc. (HRI) Catalytic Two-Stage Liquefaction (CTSL) bench unit was analyzed to provide information on process performance. The Fourier-transform infrared (FTIR) spectroscopic method for the determination of phenolics in coal liquids was further verified. A set of four tetrahydrofuran-soluble products from Purdue Research Foundation's reactions of coal/potassium/crown ether, analyzed by GC/MS and FTIR, was found to consist primarily of paraffins (excluding contaminants). Characterization data (elemental analyses, ¹H-NMR and phenolic concentrations) were obtained on a set of twenty-seven two-stage liquefaction oils. Two activities were begun but not completed. First, analyses were started on oils from Wilsonville Run 250 (close-coupled ITSL). Also, a carbon isotopic method is being examined for its utility in determining the relative proportions of coal and petroleum products in coprocessing oils.

  18. Dynamic modeling and analyses of simultaneous saccharification and fermentation process to produce bio-ethanol from rice straw.

    PubMed

    Ko, Jordon; Su, Wen-Jun; Chien, I-Lung; Chang, Der-Ming; Chou, Sheng-Hsin; Zhan, Rui-Yu

    2010-02-01

    Rice straw, an agricultural waste from Asia's staple crop, was collected as feedstock to convert cellulose into ethanol through enzymatic hydrolysis followed by fermentation. When the two process steps are performed sequentially, this is referred to as separate hydrolysis and fermentation (SHF). The steps can also be performed simultaneously, i.e., simultaneous saccharification and fermentation (SSF). In this research, the kinetic model parameters of the cellulose saccharification step using rice straw as feedstock are obtained from real experimental data of cellulase hydrolysis. Furthermore, this model can be combined with a fermentation model valid at high glucose and ethanol concentrations to form an SSF model. The fermentation model is based on the cybernetic approach from a paper in the literature, extended to include both glucose and ethanol inhibition terms so as to better approximate actual plant behaviour. Dynamic effects of the operating variables in the enzymatic hydrolysis and fermentation models are analyzed, and the operation of the SSF process is compared to the SHF process. It is shown that the SSF process is better at reducing the processing time when the product (ethanol) concentration is high. The means to improve the productivity of the overall SSF process by properly using aeration during batch operation is also discussed.
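
    A minimal sketch of an SSF-style model: enzymatic hydrolysis feeding Monod growth with a linear ethanol-inhibition term. All rate constants and yields are invented for illustration and are not the fitted rice-straw parameters.

```python
# Hedged sketch of a minimal SSF model (hydrolysis + inhibited growth).
# All parameters are illustrative assumptions, not the study's fit.
import numpy as np
from scipy.integrate import solve_ivp

def ssf_rhs(t, y):
    C, G, X, P = y                # cellulose, glucose, biomass, ethanol (g/L)
    r_hyd = 0.8 * C / (10.0 + C)                # saccharification rate
    inhibition = max(0.0, 1.0 - P / 80.0)       # linear ethanol inhibition
    mu = 0.3 * G / (0.5 + G) * inhibition       # specific growth rate
    dC = -r_hyd
    dG = 1.1 * r_hyd - mu * X / 0.4             # 1.1: hydration gain; 0.4: yield
    dX = mu * X
    dP = 0.45 * mu * X / 0.4                    # ethanol coupled to glucose uptake
    return [dC, dG, dX, dP]

sol = solve_ivp(ssf_rhs, (0.0, 72.0), [80.0, 0.0, 0.5, 0.0], max_step=0.5)
C, G, X, P = sol.y[:, -1]
print(f"after 72 h: cellulose {C:.1f}, glucose {G:.1f}, "
      f"biomass {X:.1f}, ethanol {P:.1f} g/L")
```

    Because hydrolysis keeps glucose low while fermentation consumes it, a combined model like this reproduces why SSF shortens processing time relative to SHF when product concentrations are high.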

  19. Industrial wastewater platform: upgrading of the biological process and operative configurations for best performance.

    PubMed

    Eusebi, Anna Laura; Massi, Alessandro; Sablone, Emiliano; Santinelli, Martina; Battistoni, Paolo

    2012-01-01

    The treatment of industrial liquid wastes involves a wide range of technologies and must cope with high variability in the influent physical-chemical characteristics. Under these conditions, achieving satisfactory biological unit efficiency can be complicated. An alternate-cycle (AC) process, with aerobic and anoxic phases fed in a continuous way, was evaluated as an operative solution to optimize the performance of the biological reactor in a platform for the treatment of industrial liquid wastes. Application of the process produced a stable-quality effluent with an average concentration of 25 mg TN L⁻¹, in compliance with legal limits. The use of discharged wastewaters as rapid carbon sources to support the anoxic phase of the alternate cycle achieves a 95% reduction in TN without impact on total operating costs. Evaluation of micro-pollutant behaviour highlighted a bio-adsorption phenomenon in the first reactor. Implementation of the process yielded energy savings of 31% during period 1 and 19% during periods 2, 3 and 4.

  20. Influence of operating conditions on the optimum design of electric vehicle battery cooling plates

    NASA Astrophysics Data System (ADS)

    Jarrett, Anthony; Kim, Il Yong

    2014-01-01

    The efficiency of cooling plates for electric vehicle batteries can be improved by optimizing the geometry of internal fluid channels. In practical operation, a cooling plate is exposed to a range of operating conditions dictated by the battery, environment, and driving behaviour. To formulate an efficient cooling plate design process, the optimum design sensitivity with respect to each boundary condition is desired. This determines which operating conditions must be represented in the design process, and therefore the complexity of designing for multiple operating conditions. The objective of this study is to determine the influence of different operating conditions on the optimum cooling plate design. Three important performance measures were considered: temperature uniformity, mean temperature, and pressure drop. It was found that of these three, temperature uniformity was most sensitive to the operating conditions, especially with respect to the distribution of the input heat flux, and also to the coolant flow rate. An additional focus of the study was the distribution of heat generated by the battery cell: while it is easier to assume that heat is generated uniformly, this study found that by using an accurate distribution for design optimization, cooling plate performance could be significantly improved.

  1. 40 CFR 461.73 - New source performance standards. (NSPS).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS BATTERY MANUFACTURING POINT SOURCE CATEGORY Zinc Subcategory § 461.73 New... times. (b) There shall be no discharge allowance for process wastewater pollutants from any battery manufacturing operation other than those battery manufacturing operations listed above. ...

  2. Systems Operation Studies for Automated Guideway Transit Systems : Detailed Station Model Functional Specifications

    DOT National Transportation Integrated Search

    1981-07-01

    The Detailed Station Model (DSM) is a discrete event model representing the interrelated queueing processes associated with vehicle and passenger activities in an AGT station. The DSM will provide operational and performance measures of alternative s...

  3. System Operations Studies for Automated Guideway Transit Systems : Detailed Station Model User's Manual

    DOT National Transportation Integrated Search

    1981-07-01

    The Detailed Station Model (DSM) is a discrete event model representing the interrelated queueing processes associated with vehicle and passenger activities in an AGT station. The DSM will provide operational and performance measures of alternative s...

  4. Hanford’s Supplemental Treatment Project: Full-Scale Integrated Testing of In-Container-Vitrification and a 10,000-Liter Dryer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witwer, Keith S.; Dysland, Eric J.; Garfield, J. S.

    2008-02-22

    The GeoMelt® In-Container Vitrification™ (ICV™) process was selected by the U.S. Department of Energy (DOE) in 2004 for further evaluation as the supplemental treatment technology for Hanford’s low-activity waste (LAW). Also referred to as “bulk vitrification,” this process combines glass-forming minerals, LAW, and chemical amendments; dries the mixture; and then vitrifies the material in a refractory-lined steel container. AMEC Nuclear Ltd. (AMEC) is adapting its GeoMelt ICV™ technology for this application with technical and analytical support from Pacific Northwest National Laboratory (PNNL). The Demonstration Bulk Vitrification Project (DBVS) was initiated to engineer, construct, and operate a full-scale bulk vitrification pilot-plant to treat up to 750,000 liters of LAW from Waste Tank 241-S-109 at the DOE Hanford Site; the project is funded by the DOE Office of River Protection and administered by CH2M HILL Hanford Group, Inc. Since the beginning of the DBVS project in 2004, testing has used laboratory, crucible-scale, and engineering-scale equipment to help establish process limitations of selected glass formulations and identify operational issues. Full-scale testing has provided critical design verification of the ICV™ process before operating the Hanford pilot-plant. In 2007, the project’s fifth full-scale test, called FS-38D (also known as the Integrated Dryer Melter Test, or IDMT), was performed. This test had three primary objectives: 1) demonstrate the simultaneous and integrated operation of the ICV™ melter with a 10,000-liter dryer; 2) demonstrate the effectiveness of a new feed reformulation and change in process methodology towards reducing the production and migration of molten ionic salts (MIS); and 3) demonstrate that an acceptable glass product is produced under these conditions. Testing was performed from August 8 to 17, 2007. Process and analytical results demonstrated that the primary test objectives, along with a dozen supporting objectives, were successfully met. Glass performance exceeded all disposal performance criteria. A previous issue with MIS containment was successfully resolved in FS-38D, and the ICV™ melter was integrated with a full-scale, 10,000-liter dryer. This paper describes the rationale for performing the test, the purpose and outcome of the scale-up tests preceding it, and the performance and outcome of FS-38D.

  5. The role of NASA for aerospace information

    NASA Technical Reports Server (NTRS)

    Chandler, G. P., Jr.

    1980-01-01

    The NASA Scientific and Technical Information Program operations are performed by two contractor operated facilities. The NASA STI Facility, located near Baltimore, Maryland, employs about 210 people who process report literature, operate the computer complex, and provide support for software maintenance and developments. A second contractor, the Technical Information Services of the American Institute of Aeronautics and Astronautics, employs approximately 80 people in New York City and processes the open literature such as journals, magazines, and books. Features of these programs include online access via RECON, announcement services, and international document exchange.

  6. Modeling and experimental performance of an intermediate temperature reversible solid oxide cell for high-efficiency, distributed-scale electrical energy storage

    NASA Astrophysics Data System (ADS)

    Wendel, Christopher H.; Gao, Zhan; Barnett, Scott A.; Braun, Robert J.

    2015-06-01

    Electrical energy storage is expected to be a critical component of the future world energy system, performing load-leveling operations to enable increased penetration of renewable and distributed generation. Reversible solid oxide cells, operating sequentially between power-producing fuel cell mode and fuel-producing electrolysis mode, have the capability to provide highly efficient, scalable electricity storage. However, challenges ranging from cell performance and durability to system integration must be addressed before widespread adoption. One central challenge of the system design is establishing effective thermal management in the two distinct operating modes. This work leverages an operating strategy to use carbonaceous reactant species and operate at intermediate stack temperature (650 °C) to promote exothermic fuel-synthesis reactions that thermally self-sustain the electrolysis process. We present performance of a doped lanthanum-gallate (LSGM) electrolyte solid oxide cell that shows high efficiency in both operating modes at 650 °C. A physically based electrochemical model is calibrated to represent the cell performance and used to simulate roundtrip operation for conditions unique to these reversible systems. Design decisions related to system operation are evaluated using the cell model including current density, fuel and oxidant reactant compositions, and flow configuration. The analysis reveals tradeoffs between electrical efficiency, thermal management, energy density, and durability.
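
    A hedged sketch of the roundtrip bookkeeping for a reversible cell, assuming a simple linear voltage model (OCV plus area-specific resistance); the parameter values are illustrative, not the measured LSGM-cell characteristics. When the same charge passes in each direction, the discharge-to-charge voltage ratio approximates the roundtrip electrical efficiency.

```python
# Hedged sketch: roundtrip efficiency of a reversible cell from a linear
# V = OCV - i*ASR model. OCV and ASR values are illustrative assumptions.
def cell_voltage(current_density, ocv=0.95, asr=0.25):
    """i > 0: discharging (fuel cell mode); i < 0: charging (electrolysis).
    Units: A/cm^2 for i, ohm*cm^2 for ASR, volts for the result."""
    return ocv - current_density * asr

i = 0.4                                   # A/cm^2, same magnitude each mode
v_fc = cell_voltage(+i)                   # power-producing voltage
v_ec = cell_voltage(-i)                   # power-consuming voltage
roundtrip = v_fc / v_ec                   # same charge in and out
print(f"fuel cell {v_fc:.3f} V, electrolysis {v_ec:.3f} V, "
      f"roundtrip efficiency ~ {roundtrip:.1%}")
```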

  7. Integrated Main Propulsion System Performance Reconstruction Process/Models

    NASA Technical Reports Server (NTRS)

    Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael

    2013-01-01

    The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for postflight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate integrated propulsion systems, including propellant tanks, feed systems, rocket engine, and pressurization systems performance throughout ascent based on flight pressure and temperature data. The latest revision incorporates new methods based on main engine power balance model updates to model higher mixture ratio operation at lower engine power levels.

  8. Implementing Lumberjacks and Black Swans Into Model-Based Tools to Support Human-Automation Interaction.

    PubMed

    Sebok, Angelia; Wickens, Christopher D

    2017-03-01

    The objectives were to (a) implement theoretical perspectives regarding human-automation interaction (HAI) into model-based tools to assist designers in developing systems that support effective performance and (b) conduct validations to assess the ability of the models to predict operator performance. Two key concepts in HAI, the lumberjack analogy and black swan events, have been studied extensively. The lumberjack analogy describes the effects of imperfect automation on operator performance. In routine operations, an increased degree of automation supports performance, but in failure conditions, increased automation results in more significantly impaired performance. Black swans are the rare and unexpected failures of imperfect automation. The lumberjack analogy and black swan concepts have been implemented into three model-based tools that predict operator performance in different systems. These tools include a flight management system, a remotely controlled robotic arm, and an environmental process control system. Each modeling effort included a corresponding validation. In one validation, the software tool was used to compare three flight management system designs, which were ranked in the same order as predicted by subject matter experts. The second validation compared model-predicted operator complacency with empirical performance in the same conditions. The third validation compared model-predicted and empirically determined time to detect and repair faults in four automation conditions. The three model-based tools offer useful ways to predict operator performance in complex systems. The three tools offer ways to predict the effects of different automation designs on operator performance.

  9. Radiology operations: what you don't know could be costing you millions.

    PubMed

    Joffe, Sam; Drew, Donna; Bansal, Manju; Hase, Michael

    2007-01-01

    Rapid growth in advanced imaging procedures has left hospital radiology departments struggling to keep up with demand, resulting in loss of patients to facilities that can offer service more quickly. While the departments appear to be working at full capacity, an operational analysis of over 400 hospital radiology departments in the US by GE Healthcare has determined that, paradoxically, many departments are in fact underutilized and operating far below their potential capacity. While CT cycle time in the hospitals studied averaged 35 minutes, top-performing hospitals operated the same equipment at a cycle time of 15 minutes, yielding approximately double the throughput volume. Factors leading to suboptimal performance include accounting metrics that mask true performance, leadership focus on capital investment rather than operations, understaffing, underscheduling, poorly aligned incentives, a fragmented view of operations, lack of awareness of latent opportunities, and lack of sufficient skills and processes to implement improvements. The study showed how modest investments in radiology operations can dramatically improve access to services and profitability.

  10. Effects of extended lay-off periods on performance and operator trust under adaptable automation.

    PubMed

    Chavaillaz, Alain; Wastell, David; Sauer, Jürgen

    2016-03-01

    Little is known about the long-term effects of system reliability when operators do not use a system during an extended lay-off period. To examine threats to skill maintenance, 28 participants operated twice a simulation of a complex process control system for 2.5 h, with an 8-month retention interval between sessions. Operators were provided with an adaptable support system, which operated at one of the following reliability levels: 60%, 80% or 100%. Results showed that performance, workload, and trust remained stable at the second testing session, but operators lost self-confidence in their system management abilities. Finally, the effects of system reliability observed at the first testing session were largely found again at the second session. The findings overall suggest that adaptable automation may be a promising means to support operators in maintaining their performance at the second testing session. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  11. Slipstream pilot-scale demonstration of a novel amine-based post-combustion technology for carbon dioxide capture from coal-fired power plant flue gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamurthy, Krish R.

    Post-combustion CO2 capture (PCC) technology offers flexibility to treat the flue gas from both existing and new coal-fired power plants and can be applied to treat all or a portion of the flue gas. Solvent-based technologies are today the leading option for PCC from commercial coal-fired power plants, as they have been applied at large scale in other applications. Linde and BASF have been working together to develop and further improve a PCC process incorporating BASF’s novel aqueous amine-based solvent technology. This technology offers significant benefits compared to other solvent-based processes, as it aims to reduce the regeneration energy requirements using novel solvents that are very stable under the coal-fired power plant feed gas conditions. BASF has developed the desired solvent based on the evaluation of a large number of candidates. In addition, long-term small pilot-scale testing of the BASF solvent has been performed on a lignite-fired flue gas. In coordination with BASF, Linde has evaluated a number of options for capital cost reduction in large engineered systems for solvent-based PCC technology. This report provides a summary of the work performed and results from a project supported by the US DOE (DE-FE0007453) for the pilot-scale demonstration of a Linde-BASF PCC technology using coal-fired power plant flue gas at a 1-1.5 MWe scale in Wilsonville, AL at the National Carbon Capture Center (NCCC). Following a project kick-off meeting in November 2011 and the conclusion of pilot plant design and engineering in February 2013, mechanical completion of the pilot plant was achieved in July 2014, and final commissioning activities were completed to enable start-up of operations in January 2015. Parametric tests were performed from January to December 2015 to determine optimal test conditions and evaluate process performance over a variety of operating parameters. A long-duration 1500-hour continuous test campaign was performed from May to August 2016 at a selected process condition to evaluate process performance and solvent stability over a longer period, similar to how the process would operate as a continuously running large-scale PCC plant. The pilot plant integrated a number of unique features of the Linde-BASF technology aimed at lowering overall energy consumption and capital costs. During the overall test period, including startup, parametric testing and long-duration testing, the pilot plant was operated for a total of 6,764 hours, of which testing with flue gas accounted for 4,109 hours. The pilot plant testing demonstrated all of the performance targets, including a CO2 capture rate exceeding 90%, CO2 purity exceeding 99.9 mol% (dry), flue gas processing capacity up to 15,500 lbs/hr (equivalent to a 1.5 MWe capacity slipstream), regeneration energy as low as 2.7 GJ/tonne CO2, and regenerator operating pressure up to 3.4 bara. Excellent solvent stability performance data was measured and verified by Linde and BASF during both test campaigns. In addition to process data, significant operational learnings were gained from pilot tests that will contribute greatly to the commercial success of PCC.
Based on a thorough techno-economic assessment (TEA) of the Linde-BASF PCC process integrated with a 550 MWe supercritical coal-fired power plant, the net efficiency of the integrated power plant with CO2 capture is increased from 28.4% with the DOE/NETL Case 12 reference to 30.9% with the Linde-BASF PCC plant previously presented utilizing the BASF OASE® blue solvent [Ref. 4], and is further increased to 31.4% using a Linde-BASF PCC plant with BASF OASE® blue solvent and an advanced stripper interstage heater (SIH) configuration. The Linde-BASF PCC plant incorporating the BASF OASE® blue solvent also results in significantly lower overall capital costs, thereby reducing the cost of electricity (COE) and cost of CO2 captured from $147.25/MWh and $56.49/MT CO2, respectively, for the reference DOE/NETL Case 12 plant, to $128.49/MWh and $41.85/MT CO2, respectively, for process case LB1, and to $126.65/MWh and $40.66/MT CO2, respectively, for process case SIH. With additional innovative Linde-BASF PCC process configuration improvements, the COE and cost of CO2 captured can be further reduced to $125.51/MWh and $39.90/MT CO2 for a further optimized PCC process defined as LB1-CREB. Most notably, the Linde-BASF process options assessed have already demonstrated the potential to lower the cost of CO2 captured below the DOE target of $40/MT CO2 at the 550 MWe scale for second-generation PCC technologies. Project organization, structure, goals, tasks, accomplishments, process criteria and milestones are presented in this report, along with highlights and key results from parametric and long-duration testing of the Linde-BASF PCC pilot. The parametric and long-duration testing campaigns were aimed at validating the performance of the PCC technology against targets determined from a preliminary techno-economic assessment. The stability of the solvent with extended operation in a realistic power plant setting was measured, with performance verified. Additionally, general solvent classification information, process operating conditions, normalized solvent performance data, solvent stability test results, flue gas conditions data, CO2 purity data in the gaseous product stream, steam requirements and process flow diagrams, and updated process economic data for a scaled-up 550 MWe supercritical power plant with CO2 capture are presented and discussed in this report.

  12. How does processing affect storage in working memory tasks? Evidence for both domain-general and domain-specific effects.

    PubMed

    Jarrold, Christopher; Tam, Helen; Baddeley, Alan D; Harvey, Caroline E

    2011-05-01

    Two studies that examine whether the forgetting caused by the processing demands of working memory tasks is domain-general or domain-specific are presented. In each, separate groups of adult participants were asked to carry out either verbal or nonverbal operations on exactly the same processing materials while maintaining verbal storage items. The imposition of verbal processing tended to produce greater forgetting, even though verbal processing operations took no longer to complete than nonverbal processing operations. However, nonverbal processing did cause forgetting relative to baseline control conditions, and evidence from the timing of individuals' processing responses suggests that individuals in both processing groups slowed their responses in order to "refresh" the memoranda. Taken together, the data suggest that processing has a domain-general effect on working memory performance by impeding refreshment of memoranda, but can also cause effects that appear domain-specific and that result from either blocking of rehearsal or interference.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chevallier, J.J.; Quetier, F.P.; Marshall, D.W.

    Sedco Forex has developed an integrated computer system to enhance the technical performance of the company at various operational levels and to increase the understanding and knowledge of the drill crews. This paper describes the system and how it is used for recording and processing drilling data at the rig site, for associated technical analyses, and for well design, planning, and drilling performance studies at the operational centers. Some capabilities related to the statistical analysis of the company's operational records are also described, and future development of rig computing systems for drilling applications and management tasks is discussed.

  14. Microcomponent sheet architecture

    DOEpatents

    Wegeng, Robert S.; Drost, M. Kevin; McDonald, Carolyn E.

    1997-01-01

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections, or it may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent, or plurality of like microcomponents, performs at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents, thereby combining at least two unit operations to achieve a system operation.

  15. Orbital transfer vehicle concept definition and system analysis study. Volume 4, Appendix A: Space station accommodations. Revision 1

    NASA Technical Reports Server (NTRS)

    Randall, Roger M.

    1987-01-01

    Orbit Transfer Vehicle (OTV) processing at the space station is divided into two major categories: OTV processing and assembly operations, and support operations. These categories are further subdivided into major functional areas to allow development of detailed OTV processing procedures and timelines. These procedures and timelines are used to derive the specific space station accommodations necessary to support OTV activities. The overall objective is to limit the impact of OTV processing requirements on space station operations, crew involvement, and associated crew training and skill requirements. The operational concept maximizes the use of automated and robotic systems to perform all required OTV servicing and maintenance tasks. Only potentially critical activities would require direct crew involvement or supervision. EVA operations are considered strictly a contingency backup in case of failure of the automated and robotic systems, with the exception of the initial assembly of Space-Based OTV accommodations at the space station, which will require manned involvement.

  16. Application of genetic algorithm in integrated setup planning and operation sequencing

    NASA Astrophysics Data System (ADS)

    Kafashi, Sajad; Shakeri, Mohsen

    2011-01-01

    Process planning is an essential component linking design and the manufacturing process. Setup planning and operation sequencing are two main tasks in process planning, and many studies have solved these two problems separately. Given that the two functions are complementary, it is necessary to integrate them more tightly so that the performance of a manufacturing system can be improved economically and competitively. This paper presents a generative system and a genetic algorithm (GA) approach to generate a process plan for a given part. The proposed approach and optimization methodology analyse the TAD (tool approach direction), tolerance relations between features, and feature precedence relations to generate all possible setups and operations using a workshop resource database. Based on these technological constraints, the GA approach, which adopts a feature-based representation, optimizes the setup plan and the sequence of operations using cost indices. A case study shows that the developed system can generate satisfactory results, optimizing setup planning and operation sequencing simultaneously under feasible conditions.
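
    As a rough illustration of the kind of integrated encoding the paper describes, the sketch below evolves a single permutation chromosome that fixes both the operation sequence and, implicitly, the setups (via tool approach direction changes). The operations, TADs, precedence pairs and cost weights are hypothetical, invented for illustration; this is not the authors' implementation.

```python
import random

# Hypothetical data: operation -> tool approach direction (TAD), plus precedence pairs.
TAD = {"op1": "+Z", "op2": "+Z", "op3": "-X", "op4": "-X", "op5": "+Z"}
PRECEDENCE = [("op1", "op2"), ("op3", "op4")]  # (a, b): a must precede b
OPS = list(TAD)

def cost(seq):
    """Setup cost = number of TAD changes; precedence violations heavily penalized."""
    setups = sum(TAD[a] != TAD[b] for a, b in zip(seq, seq[1:]))
    violations = sum(seq.index(a) > seq.index(b) for a, b in PRECEDENCE)
    return setups + 100 * violations

def crossover(p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    hole = p1[i:j]
    rest = [g for g in p2 if g not in hole]
    return rest[:i] + hole + rest[i:]

def mutate(seq, rate=0.2):
    seq = seq[:]
    if random.random() < rate:
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
    return seq

pop = [random.sample(OPS, len(OPS)) for _ in range(30)]
for _ in range(200):
    pop.sort(key=cost)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]

best = min(pop, key=cost)
print(best, "cost:", cost(best))
```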

  17. Testing and checkout experiences in the National Transonic Facility since becoming operational

    NASA Technical Reports Server (NTRS)

    Bruce, W. E., Jr.; Gloss, B. B.; Mckinney, L. W.

    1988-01-01

    The U.S. National Transonic Facility, constructed by NASA to meet the national needs for high Reynolds number testing, has been operational in a checkout and test mode since the operational readiness review (ORR) in late 1984. During this time, there have been problems centered on the effect of large temperature excursions on the mechanical movement of large components, the reliable performance of instrumentation systems, and an unexpected moisture problem with dry insulation. The more significant efforts since the ORR are reviewed, and NTF status concerning hardware, instrumentation and process control systems, operating constraints imposed by the cryogenic environment, and data quality is summarized.

  18. Principles of Temporal Processing Across the Cortical Hierarchy.

    PubMed

    Himberger, Kevin D; Chien, Hsiang-Yun; Honey, Christopher J

    2018-05-02

    The world is richly structured on multiple spatiotemporal scales. In order to represent spatial structure, many machine-learning models repeat a set of basic operations at each layer of a hierarchical architecture. These iterated spatial operations - including pooling, normalization and pattern completion - enable these systems to recognize and predict spatial structure while remaining robust to changes in the spatial scale, contrast and noisiness of the input signal. Because our brains also process temporal information that is rich and occurs across multiple time scales, might the brain employ an analogous set of operations for temporal information processing? Here we define a candidate set of temporal operations, and we review evidence that they are implemented in the mammalian cerebral cortex in a hierarchical manner. We conclude that multiple consecutive stages of cortical processing can be understood to perform temporal pooling, temporal normalization and temporal pattern completion. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
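
    A minimal numpy sketch of the three candidate temporal operations named above. The implementations (moving-average pooling, windowed z-scoring, nearest-neighbour pattern completion) are illustrative stand-ins chosen for brevity, not models proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100)  # toy 1-D time series

def temporal_pool(signal, width=5):
    """Temporal pooling: summarize a sliding window (here, a moving average)."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="valid")

def temporal_normalize(signal, width=20, eps=1e-8):
    """Temporal normalization: rescale each sample by recent signal statistics."""
    out = np.empty_like(signal)
    for t in range(len(signal)):
        window = signal[max(0, t - width):t + 1]
        out[t] = (signal[t] - window.mean()) / (window.std() + eps)
    return out

def complete(signal, context, width=3):
    """Temporal pattern completion: predict the next value by matching the
    recent context against previously seen windows (nearest neighbour)."""
    windows = np.lib.stride_tricks.sliding_window_view(signal, width + 1)
    key = np.asarray(context[-width:])
    best = windows[np.argmin(np.abs(windows[:, :width] - key).sum(axis=1))]
    return best[-1]

print(temporal_pool(x)[:3], temporal_normalize(x)[:3], complete(x, x[:10]))
```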

  19. Relationship between operational variables, fundamental physics and foamed cement properties in lab and field generated foamed cement slurries

    DOE PAGES

    Glosser, D.; Kutchko, B.; Benge, G.; ...

    2016-03-21

    Foamed cement is a critical component for wellbore stability. The mechanical performance of a foamed cement depends on its microstructure, which in turn depends on the preparation method and attendant operational variables. Determination of cement stability for field use is based on laboratory testing protocols governed by API Recommended Practice 10B-4 (API RP 10B-4, 2015). However, laboratory and field operational variables contrast considerably in terms of scale, as well as slurry mixing and foaming processes. In this paper, laboratory and field operational processes are characterized within a physics-based framework. It is shown that the "atomization energy" imparted by the high-pressure injection of nitrogen gas into the field-mixed foamed cement slurry is - by a significant margin - the highest-energy process, and has a major impact on the void system in the cement slurry. There is no analog for this high energy exchange in current laboratory cement preparation and testing protocols. Quantifying the energy exchanges across the laboratory and field processes provides a basis for understanding the relative impacts of these variables on cement structure, and can ultimately lead to the development of practices to improve cement testing and performance.

  20. Digitally programmable microfluidic automaton for multiscale combinatorial mixing and sample processing

    PubMed Central

    Jensen, Erik C.; Stockton, Amanda M.; Chiesl, Thomas N.; Kim, Jungkyu; Bera, Abhisek; Mathies, Richard A.

    2013-01-01

    A digitally programmable microfluidic Automaton consisting of a 2-dimensional array of pneumatically actuated microvalves is programmed to perform new multiscale mixing and sample processing operations. Large (µL-scale) volume processing operations are enabled by precise metering of multiple reagents within individual nL-scale valves followed by serial repetitive transfer to programmed locations in the array. A novel process exploiting new combining valve concepts is developed for continuous rapid and complete mixing of reagents in less than 800 ms. Mixing, transfer, storage, and rinsing operations are implemented combinatorially to achieve complex assay automation protocols. The practical utility of this technology is demonstrated by performing automated serial dilution for quantitative analysis as well as the first demonstration of on-chip fluorescent derivatization of biomarker targets (carboxylic acids) for microchip capillary electrophoresis on the Mars Organic Analyzer. A language is developed to describe how unit operations are combined to form a microfluidic program. Finally, this technology is used to develop a novel microfluidic 6-sample processor for combinatorial mixing of large sets (>26 unique combinations) of reagents. The digitally programmable microfluidic Automaton is a versatile programmable sample processor for a wide range of process volumes, for multiple samples, and for different types of analyses. PMID:23172232
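
    The abstract mentions a language for describing how unit operations combine into a microfluidic program. A toy sketch of what such a composition layer could look like follows; the class, operation names and the serial-dilution program are invented for illustration and are not the authors' actual language.

```python
# Hypothetical unit operations composed into a "microfluidic program".
class Automaton:
    def __init__(self):
        self.log = []

    def meter(self, reagent, valve):
        self.log.append(f"meter {reagent} into valve {valve}")

    def mix(self, *valves):
        self.log.append(f"mix {valves} via combining valve")

    def transfer(self, src, dst):
        self.log.append(f"transfer {src} -> {dst}")

    def rinse(self, valve):
        self.log.append(f"rinse valve {valve}")

def serial_dilution(chip, stock, diluent, steps):
    """Program: repeated 1:1 dilutions, each built from meter/mix/transfer/rinse ops."""
    for i in range(steps):
        chip.meter(stock if i == 0 else f"dilution {i-1}", valve=(0, i))
        chip.meter(diluent, valve=(1, i))
        chip.mix((0, i), (1, i))
        chip.transfer((0, i), f"dilution {i}")
        chip.rinse((1, i))

chip = Automaton()
serial_dilution(chip, stock="fluorescein", diluent="buffer", steps=3)
print("\n".join(chip.log))
```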

  1. Process Management inside ATLAS DAQ

    NASA Astrophysics Data System (ADS)

    Alexandrov, I.; Amorim, A.; Badescu, E.; Burckhart-Chromek, D.; Caprini, M.; Dobson, M.; Duval, P. Y.; Hart, R.; Jones, R.; Kazarov, A.; Kolos, S.; Kotov, V.; Liko, D.; Lucio, L.; Mapelli, L.; Mineev, M.; Moneta, L.; Nassiakou, M.; Pedro, L.; Ribeiro, A.; Roumiantsev, V.; Ryabov, Y.; Schweiger, D.; Soloviev, I.; Wolters, H.

    2002-10-01

    The Process Management component of the online software of the future ATLAS experiment data acquisition system is presented. The purpose of the Process Manager is to perform basic job control of the software components of the data acquisition system. It is capable of starting, stopping and monitoring the status of those components on the data acquisition processors, independent of the underlying operating system. Its architecture is designed on the basis of a client-server model using CORBA-based communication. The server part relies on C++ software agent objects acting as an interface between the local operating system and client applications. Major design challenges for the software agents included achieving the maximum possible degree of autonomy and creating processes that are aware of dynamic conditions in their environment and able to determine corresponding actions. Issues such as the performance of the agents in terms of the time needed for process creation and destruction, the scalability of the system with respect to the final ATLAS configuration, and minimizing the use of hardware resources were also of critical importance. Besides the details given on the architecture and the implementation, we also present scalability and performance test results for the Process Manager system.
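
    As a loose, local analogue of the basic job control described (start, stop, status monitoring), here is a minimal Python sketch using the subprocess module. The real system uses distributed C++ agents with CORBA-based communication, which this does not attempt to reproduce.

```python
import subprocess

class ProcessAgent:
    """Toy stand-in for a per-node software agent: basic job control only."""
    def __init__(self):
        self.jobs = {}

    def start(self, name, argv):
        self.jobs[name] = subprocess.Popen(argv)

    def status(self, name):
        proc = self.jobs.get(name)
        if proc is None:
            return "unknown"
        return "running" if proc.poll() is None else f"exited({proc.returncode})"

    def stop(self, name):
        proc = self.jobs.get(name)
        if proc and proc.poll() is None:
            proc.terminate()
            proc.wait(timeout=5)

agent = ProcessAgent()
agent.start("sleeper", ["sleep", "2"])  # assumes a Unix-like "sleep" command
print(agent.status("sleeper"))
agent.stop("sleeper")
print(agent.status("sleeper"))
```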

  2. Preliminary design review package for the solar heating and cooling central data processing system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The Central Data Processing System (CDPS) is designed to transform the raw data collected at remote sites into performance evaluation information for assessing the performance of solar heating and cooling systems. Software requirements for the CDPS are described. The programming standards to be used in development, documentation, and maintenance of the software are discussed along with the CDPS operations approach in support of daily data collection and processing.

  3. Design of a robust fuzzy controller for the arc stability of CO(2) welding process using the Taguchi method.

    PubMed

    Kim, Dongcheol; Rhee, Sehun

    2002-01-01

    CO(2) welding is a complex process. Weld quality depends on arc stability and on minimizing the effects of disturbances or changes in the operating conditions that commonly occur during welding. In order to minimize these effects, a controller can be used. In this study, a fuzzy controller was used to stabilize the arc during CO(2) welding. The input variable of the controller was the Mita index, which quantitatively estimates the arc stability influenced by many welding process parameters. Because the welding process is complex, a mathematical model of the Mita index was difficult to derive. Therefore, the parameter settings of the fuzzy controller were determined by performing actual control experiments without a mathematical model of the controlled process, and the Taguchi method was used to determine the optimal control parameter settings of the fuzzy controller, making the control performance robust and insensitive to changes in the operating conditions.
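
    A minimal single-input fuzzy controller sketch in the spirit of the description above, mapping a stability index to a corrective output through triangular membership functions and centroid defuzzification. The rule base, ranges and outputs are illustrative assumptions, not the paper's tuned settings.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_correction(stability_index):
    """Map a stability index in [0, 1] to a corrective gain (centroid defuzzification)."""
    # Rule base: unstable -> large correction, marginal -> small, stable -> none.
    rules = [
        (tri(stability_index, -0.5, 0.0, 0.5), 1.0),   # unstable
        (tri(stability_index,  0.0, 0.5, 1.0), 0.4),   # marginal
        (tri(stability_index,  0.5, 1.0, 1.5), 0.0),   # stable
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

for s in (0.1, 0.5, 0.9):
    print(f"stability {s:.1f} -> correction {fuzzy_correction(s):.2f}")
```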

  4. Application of dragonfly algorithm for optimal performance analysis of process parameters in turn-mill operations- A case study

    NASA Astrophysics Data System (ADS)

    Vikram, K. Arun; Ratnam, Ch; Lakshmi, VVK; Kumar, A. Sunny; Ramakanth, RT

    2018-02-01

    Meta-heuristic multi-response optimization methods are widely used to solve multi-objective problems and obtain Pareto-optimal solutions. This work focuses on the optimal multi-response evaluation of process parameters governing responses such as surface roughness (Ra), surface hardness (H) and tool vibration displacement amplitude (Vib) in tangential and orthogonal turn-mill processes on an A-axis Computer Numerical Control vertical milling center. Tool speed, feed rate and depth of cut are considered as the process parameters; brass specimens were machined under dry conditions with high-speed steel end-milling cutters using a Taguchi design of experiments (DOE). The dragonfly algorithm, a meta-heuristic, is used to optimize the objectives Ra, H and Vib and identify the optimal multi-response process parameter combination. The results obtained from the multi-objective dragonfly algorithm (MODA) are then compared with another multi-response optimization technique, grey relational analysis (GRA).
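
    Grey relational analysis, the comparison technique mentioned above, follows a standard three-step computation that can be sketched compactly. The response matrix below is toy data, and treating Ra and Vib as smaller-the-better and H as larger-the-better is an assumption made for illustration.

```python
import numpy as np

# Toy response matrix: rows = experiments, cols = (Ra, H, Vib).
Y = np.array([[1.2, 180.0, 0.020],
              [0.9, 175.0, 0.035],
              [1.5, 190.0, 0.015]])
smaller_better = [True, False, True]  # assumed directions for Ra, H, Vib

# Step 1: normalize each response to [0, 1] according to its preferred direction.
Z = np.empty_like(Y)
for j in range(Y.shape[1]):
    lo, hi = Y[:, j].min(), Y[:, j].max()
    Z[:, j] = (hi - Y[:, j]) / (hi - lo) if smaller_better[j] else (Y[:, j] - lo) / (hi - lo)

# Step 2: grey relational coefficients against the ideal sequence (all ones).
delta = np.abs(1.0 - Z)
zeta = 0.5  # distinguishing coefficient, conventionally 0.5
gamma = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3: grey relational grade = mean coefficient per experiment; higher is better.
grade = gamma.mean(axis=1)
print("grades:", grade.round(3), "best experiment:", int(grade.argmax()))
```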

  5. Apply TQM to E-Government Outsourcing Management

    NASA Astrophysics Data System (ADS)

    Huai, Jinmei

    This paper develops an approach to e-government outsourcing quality management. E-government initiatives have increased rapidly in recent decades, and the success of these activities largely depends on their operational quality. As an instrument to improve operational quality, outsourcing can be applied to e-government. This paper inspects the process of e-government outsourcing and discusses how to improve outsourcing performance through total quality management (TQM). The characteristics and special requirements of e-government outsourcing are analyzed as the basis for discussion, the principles and application of total quality management are then interpreted, and finally the process of improving e-government performance is analyzed in the context of outsourcing.

  6. Ground Robotic Hand Applications for the Space Program study (GRASP)

    NASA Astrophysics Data System (ADS)

    Grissom, William A.; Rafla, Nader I.

    1992-04-01

    This document reports on a NASA-STDP effort to address research interests of the NASA Kennedy Space Center (KSC) through a study entitled, Ground Robotic-Hand Applications for the Space Program (GRASP). The primary objective of the GRASP study was to identify beneficial applications of specialized end-effectors and robotic hand devices for automating any ground operations which are performed at the Kennedy Space Center. Thus, operations for expendable vehicles, the Space Shuttle and its components, and all payloads were included in the study. Typical benefits of automating operations, or augmenting human operators performing physical tasks, include: reduced costs; enhanced safety and reliability; and reduced processing turnaround time.

  7. Ground Robotic Hand Applications for the Space Program study (GRASP)

    NASA Technical Reports Server (NTRS)

    Grissom, William A.; Rafla, Nader I. (Editor)

    1992-01-01

    This document reports on a NASA-STDP effort to address research interests of the NASA Kennedy Space Center (KSC) through a study entitled, Ground Robotic-Hand Applications for the Space Program (GRASP). The primary objective of the GRASP study was to identify beneficial applications of specialized end-effectors and robotic hand devices for automating any ground operations which are performed at the Kennedy Space Center. Thus, operations for expendable vehicles, the Space Shuttle and its components, and all payloads were included in the study. Typical benefits of automating operations, or augmenting human operators performing physical tasks, include: reduced costs; enhanced safety and reliability; and reduced processing turnaround time.

  8. Impact of Electrostatics on Processing and Product Performance of Pharmaceutical Solids.

    PubMed

    Desai, Parind Mahendrakumar; Tan, Bernice Mei Jin; Liew, Celine Valeria; Chan, Lai Wah; Heng, Paul Wan Sia

    2015-01-01

    Manufacturing of pharmaceutical solids involves different unit operations and processing steps such as powder blending, fluidization, sieving, powder coating, pneumatic conveying and spray drying. During these operations, particles come in contact with other particles and with metallic, glass or polymer surfaces, and can become electrically charged. Electrostatic charging often carries a negative connotation, as it creates sticking, jamming, segregation and other issues during tablet manufacturing, capsule filling, film packaging and other pharmaceutical operations. A thorough and fundamental appreciation of the current knowledge of mechanisms and potential outcomes is essential in order to minimize the risks resulting from this phenomenon. The intent of this review is to discuss the electrostatic properties of pharmaceutical powders, equipment surfaces and devices affecting pharmaceutical processing and product performance. Furthermore, the underlying mechanisms responsible for electrostatic charging are described, and the factors affecting it are reviewed in detail. The feasibility of different methods used in the laboratory and the pharmaceutical industry to measure charge propensity and decay is summarized. Computational and experimental studies have shown that particle charging is a very complex phenomenon and that controlling it is extremely important for achieving reliable manufacturing and reproducible product performance.

  9. Mental Workload and Performance Experiment (MWPE) Team in the Spacelab Payload Operations Control

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The primary payload for Space Shuttle Mission STS-42, launched January 22, 1992, was the International Microgravity Laboratory-1 (IML-1), a pressurized manned Spacelab module. The goal of IML-1 was to explore in depth the complex effects of weightlessness on living organisms and materials processing. Around-the-clock research was performed on the human nervous system's adaptation to low gravity and on the effects of microgravity on other life forms such as shrimp eggs, lentil seedlings, fruit fly eggs, and bacteria. Materials processing experiments were also conducted, including crystal growth from a variety of substances such as enzymes, mercury iodide, and a virus. The Huntsville Operations Support Center (HOSC) Spacelab Payload Operations Control Center (SL POCC) at the Marshall Space Flight Center (MSFC) was the air/ground communication channel used between the astronauts and ground control teams during the Spacelab missions. Featured is the Mental Workload and Performance Experiment (MWPE) team in the SL POCC during the STS-42, IML-1 mission.

  10. Mental Workload and Performance Experiment (MWPE) Team in the Spacelab Payload Operations Control

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The primary payload for Space Shuttle Mission STS-42, launched January 22, 1992, was the International Microgravity Laboratory-1 (IML-1), a pressurized manned Spacelab module. The goal of IML-1 was to explore in depth the complex effects of weightlessness on living organisms and materials processing. Around-the-clock research was performed on the human nervous system's adaptation to low gravity and on the effects of microgravity on other life forms such as shrimp eggs, lentil seedlings, fruit fly eggs, and bacteria. Materials processing experiments were also conducted, including crystal growth from a variety of substances such as enzymes, mercury iodide, and a virus. The Huntsville Operations Support Center (HOSC) Spacelab Payload Operations Control Center (SL POCC) at the Marshall Space Flight Center (MSFC) was the air/ground communication channel used between the astronauts and ground control teams during the Spacelab missions. Featured are activities of the Mental Workload and Performance Experiment (MWPE) team in the SL POCC during the IML-1 mission.

  11. Effect of Gas Pressure on Polarization of SOFC Cathode Prepared by Plasma Spray

    NASA Astrophysics Data System (ADS)

    Li, Cheng-Xin; Wang, Zhun-Zhun; Liu, Shuai; Li, Chang-Jiu

    2013-06-01

    A cermet-supported tubular SOFC was fabricated using thermal spray. Cell performance was investigated at temperatures from 750 to 900 °C and pressures from 0.1 to 0.5 MPa to examine the effect of operating gas pressure on cell performance. The influence of gas pressure on cathodic polarization was studied through the electrochemical impedance approach to identify the controlling electrochemical processes during cell operation. Results show that increasing the operating gas pressure improves the power output significantly: when the gas pressure is increased from 0.1 to 0.3 MPa, the maximum power density increases by 32% at a temperature of 800 °C. The cathode polarization decreases significantly with increasing gas pressure. The electrochemical analysis shows that the main controlling processes of the cathode reaction are oxygen species transfer at the three-phase boundary and oxygen diffusion on the surface or in the bulk of the cathode, both of which are enhanced with increasing gas pressure.

  12. DSN system performance test software

    NASA Technical Reports Server (NTRS)

    Martin, M.

    1978-01-01

    The system performance test software is currently being modified to include additional capabilities and enhancements. Additional software programs are currently being developed for the Command Store and Forward System and the Automatic Total Recall System. The test executive is the main program. It controls the input and output of the individual test programs by routing data blocks and operator directives to those programs. It also processes data block dump requests from the operator.

  13. Safety in the operating theatre--part 1: interpersonal relationships and team performance

    NASA Technical Reports Server (NTRS)

    Schaefer, H. G.; Helmreich, R. L.; Scheidegger, D.

    1995-01-01

    The authors examine the application of interpersonal human factors training on operating room (OR) personnel. Mortality studies of OR deaths and critical incident studies of anesthesia are examined to determine the role of human error in OR incidents. Theoretical models of system vulnerability to accidents are presented with emphasis on a systems approach to OR performance. Input, process, and outcome factors are discussed in detail.

  14. From Prime to Extended Mission: Evolution of the MER Tactical Uplink Process

    NASA Technical Reports Server (NTRS)

    Mishkin, Andrew H.; Laubach, Sharon

    2006-01-01

    To support a 90-day surface mission for two robotic rovers, the Mars Exploration Rover mission designed and implemented an intensive tactical operations process, enabling daily commanding of each rover. Using a combination of new processes, custom software tools, a Mars-time staffing schedule, and seven-day-a-week operations, the MER team was able to compress the traditional weeks-long command-turnaround for a deep space robotic mission to about 18 hours. However, the pace of this process was never intended to be continued indefinitely. Even before the end of the three-month prime mission, MER operations began evolving towards greater sustainability. A combination of continued software tool development, increasing team experience, and availability of reusable sequences first reduced the mean process duration to approximately 11 hours. The number of workshifts required to perform the process dropped, and the team returned to a modified 'Earth-time' schedule. Additional process and tool adaptation eventually provided the option of planning multiple Martian days of activity within a single workshift, making 5-day-a-week operations possible. The vast majority of the science team returned to their home institutions, continuing to participate fully in the tactical operations process remotely. MER has continued to operate for over two Earth-years as many of its key personnel have moved on to other projects, the operations team and budget have shrunk, and the rovers have begun to exhibit symptoms of aging.

  15. Modeling, control, and dynamic performance analysis of a reverse osmosis desalination plant integrated within hybrid energy systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jong Suk; Chen, Jun; Garcia, Humberto E.

    An RO (reverse osmosis) desalination plant is proposed as an effective FLR (flexible load resource) to be integrated into HES (hybrid energy systems) to support various types of ancillary services to the electric grid under variable operating conditions. To study the dynamic (transient) behavior of such a system, special attention is given, among the various unit operations within the HES, to the detailed dynamic modeling and control design of the RO desalination process with a spiral-wound membrane module. The model incorporates key physical phenomena, previously investigated individually, into a dynamic integrated model framework. In particular, the solution-diffusion model modified with the concentration polarization theory is applied to predict RO performance over a large range of operating conditions. Simulation results involving several case studies suggest that an RO desalination plant, acting as an FLR, can provide operational flexibility to participate in energy management at the utility scale by dynamically optimizing the use of excess electrical energy. The incorporation of an additional commodity (fresh water) produced by an FLR allows a broader range of HES operations for maximizing overall system performance and profitability. To assess the incorporation of health assessment into process operations, an online condition monitoring approach for RO membrane fouling supervision is addressed in the case study presented.

  16. Modeling, control, and dynamic performance analysis of a reverse osmosis desalination plant integrated within hybrid energy systems

    DOE PAGES

    Kim, Jong Suk; Chen, Jun; Garcia, Humberto E.

    2016-06-17

    An RO (reverse osmosis) desalination plant is proposed as an effective FLR (flexible load resource) to be integrated into HES (hybrid energy systems) to support various types of ancillary services to the electric grid under variable operating conditions. To study the dynamic (transient) behavior of such a system, special attention is given, among the various unit operations within the HES, to the detailed dynamic modeling and control design of the RO desalination process with a spiral-wound membrane module. The model incorporates key physical phenomena, previously investigated individually, into a dynamic integrated model framework. In particular, the solution-diffusion model modified with the concentration polarization theory is applied to predict RO performance over a large range of operating conditions. Simulation results involving several case studies suggest that an RO desalination plant, acting as an FLR, can provide operational flexibility to participate in energy management at the utility scale by dynamically optimizing the use of excess electrical energy. The incorporation of an additional commodity (fresh water) produced by an FLR allows a broader range of HES operations for maximizing overall system performance and profitability. To assess the incorporation of health assessment into process operations, an online condition monitoring approach for RO membrane fouling supervision is addressed in the case study presented.
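
    The solution-diffusion model with concentration polarization mentioned above has a compact textbook form: water flux J = A(ΔP − Δπ), with the membrane-wall osmotic pressure amplified by exp(J/k) relative to the bulk. A sketch solving this by fixed-point iteration, with illustrative parameter values rather than the authors' calibrated ones:

```python
import math

# Illustrative parameters (not from the paper).
A = 3e-12          # water permeability, m/(s*Pa)
dP = 55e5          # applied transmembrane pressure, Pa
pi_bulk = 28e5     # bulk feed osmotic pressure, Pa
k = 2e-5           # mass-transfer coefficient, m/s

# Solve J = A*(dP - pi_bulk*exp(J/k)) by fixed-point iteration.
J = A * dP  # initial guess: no polarization
for _ in range(100):
    J_new = A * (dP - pi_bulk * math.exp(J / k))
    if abs(J_new - J) < 1e-15:
        break
    J = J_new

print(f"flux with polarization: {J:.3e} m/s "
      f"(vs {A*(dP - pi_bulk):.3e} m/s ignoring polarization)")
```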

  17. A definition of high-level decisions in the engineering of systems

    NASA Astrophysics Data System (ADS)

    Powell, Robert Anthony

    The role of the systems engineer requires that he or she be proactive and guide the program manager and customers through their decisions to enhance the effectiveness of system development, producing faster, better, and cheaper systems. The present lack of coverage in the literature of what these decisions are and how they relate to each other may be a contributing factor in the high rate of failure among system projects. At the onset of the system development process, decisions have an integral role in the design of a system that meets stakeholders' needs. This is apparent during the design and qualification of both the Development System and the Operational System. The performance, cost and schedule of the Development System affect the performance of the Operational System and are affected by decisions that influence physical elements of the Development System. The performance, cost, and schedule of the Operational System are affected by decisions that influence physical elements of the Operational System. Traditionally, product and process have been designed using know-how and trial and error. However, the empiricism of engineers and program managers is limited, which can lead, and has led, to costly mistakes. To date, very little research has explored the decisions made in the engineering of a system. In government, literature exists on procurement processes for major system development; but in general, literature on the decisions, how they relate to each other, and the key information requirements within and across the two systems is not readily available. This research aims to improve the processes inherent in the engineering of systems. The primary focus is on Department of Defense (DoD) military systems, specifically aerospace systems, though the results may generalize more broadly. The result of this research is a process tool, a Decision System Model, which can be used by systems engineers to guide the program manager and customers through the decisions involved in concurrently designing and qualifying both the Development and Operational systems.

  18. The Joint Distribution Process Analysis Center (JDPAC): Background and Current Capability

    DTIC Science & Technology

    2007-06-12

    Briefing outline (recovered from slide text): JDPAC "101"; systems integration and data management; JDDE analysis and global distribution performance assessment; futures/transformation analysis; balancing operational art and science; USTRANSCOM Future Operations Center; SDDC-TEA (Army SES, dual hat) covering transportability engineering and other Title 10 functions.

  19. Movement Processes as Observable Behavior.

    ERIC Educational Resources Information Center

    Harrington, Wilma M.

    The operations for achieving skill in motor performance are perceiving, patterning, adapting, refining, varying, improvising, and composing. These operations are readily observable in physical education classes. An observation record containing the seven categories was used to classify teacher feedback to students. The teachers observed were…

  20. NASA Conjunction Assessment Organizational Approach and the Associated Determination of Screening Volume Sizes

    NASA Technical Reports Server (NTRS)

    Newman, Lauri K.; Hejduk, Matthew D.

    2015-01-01

    NASA is committed to safety of flight for all of its operational assets. Conjunction Assessment (CA) is the process of identifying close approaches between two orbiting objects, sometimes called conjunction screening. For human spaceflight, CA is performed by TOPO at NASA JSC; for robotic satellites, the focus of this briefing, it is performed by the Conjunction Assessment Risk Analysis (CARA) organization at NASA GSFC. CARA was stood up to offer this service to all NASA robotic satellites and currently provides service to 70 operational satellites, including NASA unmanned operational assets, other USG assets (USGS, USAF, NOAA), and international partner assets. The Joint Space Operations Center (JSpOC), a USAF unit at Vandenberg AFB, maintains the high-accuracy catalog of space objects, screens CARA-supported assets against the catalog, performs OD tasking, and generates close approach data.

  1. Troubleshooting crude vacuum tower overhead ejector systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lines, J.R.; Frens, L.L.

    1995-03-01

    Routinely surveying tower overhead vacuum systems can improve performance and product quality. These vacuum systems normally provide reliable and consistent operation. However, process conditions, supplied utilities, corrosion, erosion and fouling all have an impact on ejector system performance. Refinery vacuum distillation towers use ejector systems to maintain tower top pressure and remove overhead gases. However, as with virtually all refinery equipment, performance may be affected by a number of variables, which may act independently or concurrently. It is important to understand the basic operating principles of vacuum systems and how performance is affected by utilities, corrosion and erosion, fouling, and process conditions. Reputable vacuum-system suppliers have service engineers who will come to a refinery to survey the system and troubleshoot performance or offer suggestions for improvement. A skilled vacuum-system engineer may be needed to diagnose and remedy system problems. The effect of these variables on performance is discussed, and a case history of a vacuum system on a crude tower in a South American refinery is described.

  2. Application of quality by design principles to the development and technology transfer of a major process improvement for the manufacture of a recombinant protein.

    PubMed

    Looby, Mairead; Ibarra, Neysi; Pierce, James J; Buckley, Kevin; O'Donovan, Eimear; Heenan, Mary; Moran, Enda; Farid, Suzanne S; Baganz, Frank

    2011-01-01

    This study describes the application of quality by design (QbD) principles to the development and implementation of a major manufacturing process improvement for a commercially distributed therapeutic protein produced in Chinese hamster ovary cell culture. The intent of this article is to focus on QbD concepts, and provide guidance and understanding on how the various components combine together to deliver a robust process in keeping with the principles of QbD. A fed-batch production culture and a virus inactivation step are described as representative examples of upstream and downstream unit operations that were characterized. A systematic approach incorporating QbD principles was applied to both unit operations, involving risk assessment of potential process failure points, small-scale model qualification, design and execution of experiments, definition of operating parameter ranges and process validation acceptance criteria followed by manufacturing-scale implementation and process validation. Statistical experimental designs were applied to the execution of process characterization studies evaluating the impact of operating parameters on product quality attributes and process performance parameters. Data from process characterization experiments were used to define the proven acceptable range and classification of operating parameters for each unit operation. Analysis of variance and Monte Carlo simulation methods were used to assess the appropriateness of process design spaces. Successful implementation and validation of the process in the manufacturing facility and the subsequent manufacture of hundreds of batches of this therapeutic protein verifies the approaches taken as a suitable model for the development, scale-up and operation of any biopharmaceutical manufacturing process. Copyright © 2011 American Institute of Chemical Engineers (AIChE).
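
    As a toy illustration of the Monte Carlo design-space assessment mentioned above, the sketch below samples two operating parameters uniformly within hypothetical proven acceptable ranges and estimates the probability that a made-up linear response model for a quality attribute exceeds its acceptance limit. All names, ranges and coefficients are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical proven acceptable ranges for two operating parameters.
TEMP_RANGE = (35.0, 38.0)   # culture temperature, degC
PH_RANGE = (6.8, 7.2)       # culture pH

def quality_attribute(temp, ph):
    """Toy response-surface model, as if fitted from characterization runs."""
    return 2.0 + 0.8 * (temp - 36.5) + 3.0 * (ph - 7.0) + random.gauss(0, 0.15)

LIMIT = 3.5  # acceptance limit for the attribute
n, fails = 100_000, 0
for _ in range(n):
    t = random.uniform(*TEMP_RANGE)
    p = random.uniform(*PH_RANGE)
    if quality_attribute(t, p) > LIMIT:
        fails += 1

print(f"estimated out-of-spec probability: {fails / n:.4%}")
```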

  3. In-situ plasma processing to increase the accelerating gradients of SRF cavities

    DOE PAGES

    Doleans, Marc; Afanador, Ralph; Barnhart, Debra L.; ...

    2015-12-31

    A new in-situ plasma processing technique is being developed at the Spallation Neutron Source (SNS) to improve the performance of the cavities in operation. The technique utilizes a low-density reactive oxygen plasma at room temperature to remove top-surface hydrocarbons. Plasma processing increases the work function of the cavity surface and reduces the overall amount of vacuum and electron activity during cavity operation; in particular, it raises the field emission onset, which enables cavity operation at higher accelerating gradients. Experimental evidence also suggests that the secondary electron yield (SEY) of the Nb surface decreases after plasma processing, which helps mitigate multipacting issues. This article discusses the main developments and results from the plasma processing R&D and presents experimental results for in-situ plasma processing of dressed cavities in the SNS horizontal test apparatus.

  4. Analysis of the potential application of the Davenport/Short information technology model to a research and development organization

    NASA Technical Reports Server (NTRS)

    Decker, Deron R.

    1991-01-01

    Part of the role of the Mission Operations Lab is the development of budget inputs for Huntsville Operations/Payload Crew Training Center/Payload Operations Control Center (HOSC/PCTC/POCC) activity. These budget inputs are part of the formal Program Operating Plan (POP) process, which occurs twice yearly, and of the formal creation of the yearly operating plan. Both POPs and the operation plan serve the purpose of mapping out planned expenditures for the next fiscal year and for a number of outlying years. Based on these plans, the various Project Offices at the Center fund the HOSC/PCTC/POCC activity. Because of Mission Operations Lab's role in budget development, some of the Project Offices have begun looking to Mission Operations, and specifically the EO02 branch, to track expenditures and explain/justify any deviations from plans. EO02 has encountered difficulties acquiring the necessary information to perform this function. It appears that the necessary linkages with other units had not been fully developed and integrated with the flow of information in budget implementation. The purpose of this study is to document the budget process from the point of view of EO02 and to identify the steps necessary for it to effectively perform this role on a continuous basis.

  5. Methods and apparatuses for information analysis on shared and distributed computing systems

    DOEpatents

    Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA

    2011-02-22

    Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
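
    A minimal sketch of the pattern the patent describes: each worker computes local term statistics for its own distinct set of documents in parallel, and the local sets are then contributed to a global set from which major terms are drawn. This uses Python's multiprocessing for brevity and is illustrative only, not the patented implementation.

```python
from collections import Counter
from multiprocessing import Pool

def local_term_stats(docs):
    """Per-process operation: term statistics for one distinct set of documents."""
    stats = Counter()
    for doc in docs:
        stats.update(doc.lower().split())
    return stats

if __name__ == "__main__":
    document_sets = [
        ["the quick brown fox", "the lazy dog"],
        ["term statistics example", "the fox again"],
    ]
    # Each distinct document set is processed substantially in parallel.
    with Pool(len(document_sets)) as pool:
        local_sets = pool.map(local_term_stats, document_sets)

    # Contribute local statistics to the global set, then pick top "major" terms.
    global_stats = Counter()
    for s in local_sets:
        global_stats += s
    print(global_stats.most_common(3))
```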

  6. Network acceleration techniques

    NASA Technical Reports Server (NTRS)

    Crowley, Patricia (Inventor); Maccabe, Arthur Barney (Inventor); Awrach, James Michael (Inventor)

    2012-01-01

    Splintered offloading techniques with receive batch processing are described for network acceleration. Such techniques offload specific functionality to a NIC while maintaining the bulk of the protocol processing in the host operating system ("OS"). The resulting protocol implementation allows the application to bypass the protocol processing of the received data. This can be accomplished by moving data from the NIC directly to the application through direct memory access ("DMA") and batch processing the receive headers in the host OS when the host OS is interrupted to perform other work. Batch processing receive headers allows the data path to be separated from the control path. Unlike operating system bypass, however, the operating system still fully manages the network resource and has relevant feedback about traffic and flows. Embodiments of the present disclosure can therefore address the challenges of networks with extreme bandwidth-delay products (BWDP).

  7. A Big Spatial Data Processing Framework Applying to National Geographic Conditions Monitoring

    NASA Astrophysics Data System (ADS)

    Xiao, F.

    2018-04-01

    In this paper, a novel framework for spatial data processing is proposed and applied to the National Geographic Conditions Monitoring project of China. It includes four layers: spatial data storage, spatial RDDs, spatial operations, and a spatial query language. The spatial data storage layer uses HDFS to store large volumes of spatial vector/raster data in a distributed cluster. The spatial RDDs are abstract logical datasets of spatial data types and can be transferred to the Spark cluster for Spark transformations and actions. The spatial operations layer provides a series of operations on spatial RDDs, such as range query, k-nearest-neighbor and spatial join. The spatial query language is a user-friendly interface that gives users unfamiliar with Spark a convenient way to invoke spatial operations. Compared with other spatial frameworks, this framework integrates a comprehensive set of technologies for big spatial data processing. Extensive experiments on real datasets show that the framework achieves better performance than traditional processing methods.
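
    To make the spatial operations layer concrete, here is a small self-contained sketch of a grid-partitioned range query, the same partition-then-prune idea a spatial RDD layer relies on, written in plain Python so it runs without a Spark cluster. The grid scheme and data are illustrative.

```python
from collections import defaultdict

# Toy point dataset: (id, x, y).
points = [("a", 1.0, 1.5), ("b", 4.2, 3.3), ("c", 2.1, 2.2), ("d", 8.0, 9.0)]
CELL = 2.0  # grid cell size used to partition space

# Build the grid index: cell -> points, analogous to a spatial partitioner.
grid = defaultdict(list)
for pid, x, y in points:
    grid[(int(x // CELL), int(y // CELL))].append((pid, x, y))

def range_query(xmin, ymin, xmax, ymax):
    """Scan only the grid cells that overlap the query rectangle."""
    hits = []
    for cx in range(int(xmin // CELL), int(xmax // CELL) + 1):
        for cy in range(int(ymin // CELL), int(ymax // CELL) + 1):
            for pid, x, y in grid.get((cx, cy), []):
                if xmin <= x <= xmax and ymin <= y <= ymax:
                    hits.append(pid)
    return hits

print(range_query(0.0, 0.0, 3.0, 3.0))  # -> ['a', 'c']
```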

  8. Storing and managing information artifacts collected by information analysts using a computing device

    DOEpatents

    Pike, William A; Riensche, Roderick M; Best, Daniel M; Roberts, Ian E; Whyatt, Marie V; Hart, Michelle L; Carr, Norman J; Thomas, James J

    2012-09-18

    Systems and computer-implemented processes for storage and management of information artifacts collected by information analysts using a computing device. The processes and systems can capture a sequence of interactive operation elements that are performed by the information analyst, who is collecting an information artifact from at least one of the plurality of software applications. The information artifact can then be stored together with the interactive operation elements as a snippet on a memory device, which is operably connected to the processor. The snippet comprises a view from an analysis application, data contained in the view, and the sequence of interactive operation elements stored as a provenance representation comprising operation element class, timestamp, and data object attributes for each interactive operation element in the sequence.
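
    A rough data-structure sketch of the snippet described above: a captured view, the data it contains, and a provenance trail of interactive operation elements, each carrying an operation-element class, a timestamp and a data-object attribute. Field names and example values are illustrative, not taken from the patent.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OperationElement:
    element_class: str          # e.g. "copy", "highlight", "query"
    timestamp: datetime
    data_object: str            # attribute identifying the object acted upon

@dataclass
class Snippet:
    view: str                   # view from the analysis application
    data: str                   # data contained in the view
    provenance: list[OperationElement] = field(default_factory=list)

    def record(self, element_class, data_object):
        """Append one interactive operation element to the provenance trail."""
        self.provenance.append(OperationElement(
            element_class, datetime.now(timezone.utc), data_object))

snip = Snippet(view="report_page_3", data="collected passage of interest")
snip.record("highlight", "paragraph:7")
snip.record("copy", "sentence:2")
print(len(snip.provenance), snip.provenance[0].element_class)
```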

  9. Performance evaluation approach for the supercritical helium cold circulators of ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaghela, H.; Sarkar, B.; Bhattacharya, R.

    2014-01-29

    The ITER project design foresees Supercritical Helium (SHe) forced-flow cooling for the main cryogenic components, namely the superconducting (SC) magnets and cryopumps (CP). Therefore, cold circulators have been selected to provide the required SHe mass flow rate to cope with specific operating conditions and technical requirements. Considering the availability impacts of such machines, it has been decided to perform evaluation tests of the cold circulators at operating conditions prior to series production in order to minimize the project's technical risks. A proposal has been conceptualized, evaluated and simulated to perform representative tests of the full-scale SHe cold circulators. The objectives of the performance tests include the validation of normal operating conditions, transient and off-design operating modes, as well as efficiency measurement. A suitable process and instrumentation diagram of the test valve box (TVB) has been developed to implement the tests at the required thermodynamic conditions. The conceptual engineering design of the TVB has been developed, along with the thermal analysis required for normal operating conditions, to support the performance evaluation of the SHe cold circulator.

  10. Opcode counting for performance measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gara, Alan; Satterfield, David L.; Walkup, Robert E.

    Methods, systems and computer program products are disclosed for measuring a performance of a program running on a processing unit of a processing system. In one embodiment, the method comprises informing a logic unit of each instruction in the program that is executed by the processing unit, assigning a weight to each instruction, assigning the instructions to a plurality of groups, and analyzing the plurality of groups to measure one or more metrics. In one embodiment, each instruction includes an operating code portion, and the assigning includes assigning the instructions to the groups based on the operating code portions of the instructions. In an embodiment, each type of instruction is assigned to a respective one of the plurality of groups. These groups may be combined into a plurality of sets of the groups.

  11. Opcode counting for performance measurement

    DOEpatents

    Gara, Alan; Satterfield, David L; Walkup, Robert E

    2013-10-29

    Methods, systems and computer program products are disclosed for measuring a performance of a program running on a processing unit of a processing system. In one embodiment, the method comprises informing a logic unit of each instruction in the program that is executed by the processing unit, assigning a weight to each instruction, assigning the instructions to a plurality of groups, and analyzing the plurality of groups to measure one or more metrics. In one embodiment, each instruction includes an operating code portion, and the assigning includes assigning the instructions to the groups based on the operating code portions of the instructions. In an embodiment, each type of instruction is assigned to a respective one of the plurality of groups. These groups may be combined into a plurality of sets of the groups.

  12. Opcode counting for performance measurement

    DOEpatents

    Gara, Alan; Satterfield, David L.; Walkup, Robert E.

    2015-08-11

    Methods, systems and computer program products are disclosed for measuring a performance of a program running on a processing unit of a processing system. In one embodiment, the method comprises informing a logic unit of each instruction in the program that is executed by the processing unit, assigning a weight to each instruction, assigning the instructions to a plurality of groups, and analyzing the plurality of groups to measure one or more metrics. In one embodiment, each instruction includes an operating code portion, and the assigning includes assigning the instructions to the groups based on the operating code portions of the instructions. In an embodiment, each type of instruction is assigned to a respective one of the plurality of groups. These groups may be combined into a plurality of sets of the groups.

  13. Opcode counting for performance measurement

    DOEpatents

    Gara, Alan; Satterfield, David L.; Walkup, Robert E.

    2016-10-18

    Methods, systems and computer program products are disclosed for measuring a performance of a program running on a processing unit of a processing system. In one embodiment, the method comprises informing a logic unit of each instruction in the program that is executed by the processing unit, assigning a weight to each instruction, assigning the instructions to a plurality of groups, and analyzing the plurality of groups to measure one or more metrics. In one embodiment, each instruction includes an operating code portion, and the assigning includes assigning the instructions to the groups based on the operating code portions of the instructions. In an embodiment, each type of instruction is assigned to a respective one of the plurality of groups. These groups may be combined into a plurality of sets of the groups.
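
    A software analogue of the opcode-counting idea in these patents: assign each executed instruction to a group based on its operating code portion, weight it, aggregate per group, and combine groups into sets to form metrics. The instruction trace, groups and weights below are invented for illustration.

```python
from collections import Counter

# Illustrative opcode -> group and opcode -> weight tables.
GROUP = {"add": "integer", "mul": "integer", "fadd": "float", "fmul": "float",
         "ld": "memory", "st": "memory"}
WEIGHT = {"add": 1, "mul": 3, "fadd": 2, "fmul": 4, "ld": 2, "st": 2}

trace = ["ld", "fadd", "fmul", "st", "add", "ld", "fmul", "mul"]  # toy trace

# Assign each executed instruction to a group based on its operating code portion.
counts = Counter()
weighted = Counter()
for opcode in trace:
    g = GROUP[opcode]
    counts[g] += 1
    weighted[g] += WEIGHT[opcode]

# Combine groups into a set and report metrics.
arithmetic_set = weighted["integer"] + weighted["float"]
print(dict(counts), "weighted:", dict(weighted), "arithmetic set:", arithmetic_set)
```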

  14. High-performance recombinant protein production with Escherichia coli in continuously operated cascades of stirred-tank reactors.

    PubMed

    Schmideder, Andreas; Weuster-Botz, Dirk

    2017-07-01

    The microbial expression of intracellular recombinant proteins in continuous bioprocesses suffers from low product concentrations. Hence, a process for the intracellular production of photoactivatable mCherry with Escherichia coli in a continuously operated cascade of two stirred-tank reactors was established to separate biomass formation (first reactor) and protein expression (second reactor) spatially. Cascades of miniaturized stirred-tank reactors were implemented, which enable the 24-fold parallel characterization of cascade processes and the direct scale-up of results to the liter scale. With PAmCherry concentrations of 1.15 g L-1, cascades of stirred-tank reactors improved process performance significantly compared to production in chemostats, and an optimized fed-batch process was outperformed with respect to space-time yield (149 mg L-1 h-1). This study indicates that continuous cascade processes are a promising alternative to fed-batch processes for microbial protein production and demonstrates that miniaturized stirred-tank reactors can reduce the timeline and costs of cascade process characterization.

  15. Comparative performance evaluation of full-scale anaerobic and aerobic wastewater treatment processes in Brazil.

    PubMed

    von Sperling, M; Oliveira, S C

    2009-01-01

    This article evaluates and compares the actual behavior of 166 full-scale anaerobic and aerobic wastewater treatment plants in operation in Brazil, providing information on the performance of the processes in terms of the quality of the generated effluent and the removal efficiency achieved. The observed results of effluent concentrations and removal efficiencies of the constituents BOD, COD, TSS (total suspended solids), TN (total nitrogen), TP (total phosphorus) and FC (faecal or thermotolerant coliforms) have been compared with the typical expected performance reported in the literature. The treatment technologies selected for study were: (a) predominantly anaerobic: (i) septic tank + anaerobic filter (ST + AF), (ii) UASB reactor without post-treatment (UASB) and (iii) UASB reactor followed by several post-treatment processes (UASB + POST); (b) predominantly aerobic: (iv) facultative pond (FP), (v) anaerobic pond followed by facultative pond (AP + FP) and (vi) activated sludge (AS). The results, confirmed by statistical tests, showed that, in general, the best performance was achieved by AS, but closely followed by UASB reactor, when operating with any kind of post-treatment. The effluent quality of the anaerobic processes ST + AF and UASB reactor without post-treatment was very similar to the one presented by facultative pond, a simpler aerobic process, regarding organic matter.

  16. Maine Facility Research Summary : Dynamic Sign Systems for Narrow Bridges

    DOT National Transportation Integrated Search

    1997-09-01

    This report describes the development of operational surveillance data processing algorithms and software for application to urban freeway systems, conforming to a framework in which data processing is performed in stages: sensor malfunction detectio...

  17. Age Differences in the Speed of Processing: A Critique

    ERIC Educational Resources Information Center

    Chi, Michelen T. H.

    1977-01-01

    This paper questions the assumption that a central processing deficit exists in the speed of performing mental operations by children as compared to adults. Two hypotheses are proposed and data are cited as evidence. (JMB)

  18. Features of an effective operative dentistry learning environment: students' perceptions and relationship with performance.

    PubMed

    Suksudaj, N; Lekkas, D; Kaidonis, J; Townsend, G C; Winning, T A

    2015-02-01

    Students' perceptions of their learning environment influence the quality of outcomes they achieve. Learning dental operative techniques in a simulated clinic environment is characterised by reciprocal interactions between skills training, staff- and student-related factors. However, few studies have examined how students perceive their operative learning environments and whether there is a relationship between their perceptions and subsequent performance. Therefore, this study aimed to clarify which learning activities and interactions students perceived as supporting their operative skills learning and to examine relationships with their outcomes. Longitudinal data about examples of operative laboratory sessions that were perceived as effective or ineffective for learning were collected twice a semester, using written critical incidents and interviews. Emergent themes from these data were identified using thematic analysis. Associations between perceptions of learning effectiveness and performance were analysed using chi-square tests. Students indicated that an effective learning environment involved interactions with tutors and peers. This included tutors arranging group discussions to clarify processes and outcomes, providing demonstrations and constructive feedback. Feedback focused on mistakes, and not improvement, was reported as being ineffective for learning. However, there was no significant association between students' perceptions of the effectiveness of their learning experiences and subsequent performance. It was clear that learning in an operative technique setting involved various factors related not only to social interactions and observational aspects of learning but also to cognitive, motivational and affective processes. Consistent with studies that have demonstrated complex interactions between students, their learning environment and outcomes, other factors need investigation. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. Parallel Transport Quantum Logic Gates with Trapped Ions.

    PubMed

    de Clercq, Ludwig E; Lo, Hsiang-Yu; Marinelli, Matteo; Nadlinger, David; Oswald, Robin; Negnevitsky, Vlad; Kienzler, Daniel; Keitch, Ben; Home, Jonathan P

    2016-02-26

    We demonstrate single-qubit operations by transporting a beryllium ion with a controlled velocity through a stationary laser beam. We use these to perform coherent sequences of quantum operations, and to perform parallel quantum logic gates on two ions in different processing zones of a multiplexed ion trap chip using a single recycled laser beam. For the latter, we demonstrate individually addressed single-qubit gates by local control of the speed of each ion. The fidelities we observe are consistent with operations performed using standard methods involving static ions and pulsed laser fields. This work therefore provides a path to scalable ion trap quantum computing with reduced requirements on the optical control complexity.

  20. Timeliner: Automating Procedures on the ISS

    NASA Technical Reports Server (NTRS)

    Brown, Robert; Braunstein, E.; Brunet, Rick; Grace, R.; Vu, T.; Zimpfer, Doug; Dwyer, William K.; Robinson, Emily

    2002-01-01

    Timeliner has been developed as a tool to automate procedural tasks. These tasks may be sequential tasks that would typically be performed by a human operator, or precisely ordered sequencing tasks that allow autonomous execution of a control process. The Timeliner system includes elements for compiling and executing sequences that are defined in the Timeliner language. The Timeliner language was specifically designed to allow easy definition of scripts that provide sequencing and control of complex systems. The execution environment provides real-time monitoring and control based on the commands and conditions defined in the Timeliner language. The Timeliner sequence control may be preprogrammed, compiled from Timeliner "scripts," or it may consist of real-time, interactive inputs from system operators. In general, the Timeliner system lowers the workload for mission or process control operations. In a mission environment, scripts can be used to automate spacecraft operations including autonomous or interactive vehicle control, performance of preflight and post-flight subsystem checkouts, or handling of failure detection and recovery. Timeliner may also be used for mission payload operations, such as stepping through pre-defined procedures of a scientific experiment.
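
    The execute-and-monitor pattern that such sequencers provide can be illustrated with a short, generic sketch. The Python below is purely illustrative and is not Timeliner's language, runtime, or API; the telemetry dictionary, the wait_until helper, and the valve command are invented for the example.

      import time

      def wait_until(condition, timeout_s=5.0, poll_s=0.1):
          """Block until condition() is true, as a scripted sequence step would."""
          deadline = time.monotonic() + timeout_s
          while time.monotonic() < deadline:
              if condition():
                  return True
              time.sleep(poll_s)
          return False

      telemetry = {"valve_open": False}

      def open_valve():
          telemetry["valve_open"] = True     # stand-in for a real command

      # A "script": an ordered list of (command, completion-condition) steps.
      sequence = [(open_valve, lambda: telemetry["valve_open"])]
      for command, done in sequence:
          command()
          assert wait_until(done), "step failed -- trigger failure recovery"
      print("sequence complete")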

  1. Operation of electrothermal and electrostatic MUMPs microactuators underwater

    NASA Astrophysics Data System (ADS)

    Sameoto, Dan; Hubbard, Ted; Kujath, Marek

    2004-10-01

    Surface-micromachined actuators made in multi-user MEMS processes (MUMPs) have been operated underwater without modifying the manufacturing process. Such actuators have generally been either electro-thermally or electro-statically actuated and both actuator styles are tested here for suitability underwater. This is believed to be the first time that thermal and electrostatic actuators have been compared for deflection underwater relative to air performance. A high-frequency ac square wave is used to replicate a dc-driven actuator output without the associated problem of electrolysis in water. This method of ac activation, with frequencies far above the mechanical resonance frequencies of the MEMS actuators, has been termed root mean square (RMS) operation. Both thermal and electrostatic actuators have been tested and proved to work using RMS control. Underwater performance has been evaluated by using in-air operation of these actuators as a benchmark. When comparing deflection per volt applied, thermal actuators operate between 5 and 9% of in-air deflection and electrostatic actuators show an improvement in force per volt applied of upwards of 6000%. These results agree with predictions based on the physical properties of the surrounding medium.
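
    The equivalence that RMS operation relies on is easy to check numerically: a zero-mean square wave of amplitude V has an RMS value of V, so it delivers the same average power to a resistive heater as a DC drive at V while its time average stays near zero, which is what suppresses electrolysis. The values below are arbitrary illustrations, not the paper's drive parameters.

      import numpy as np

      V = 5.0                    # drive amplitude in volts (illustrative)
      f = 1.0e6                  # square-wave frequency in Hz, far above mechanical resonance
      t = np.linspace(0.0, 10 / f, 10_000)
      v = V * np.sign(np.sin(2 * np.pi * f * t))   # bipolar square wave

      rms = np.sqrt(np.mean(v ** 2))
      print(f"RMS = {rms:.3f} V (matches the DC level), mean = {np.mean(v):.2e} V")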

  2. Total Quality Management Implementation Plan.

    DTIC Science & Technology

    1989-06-01

    Report documentation page only; recoverable subject terms: TQM (Total Quality Management), Continuous Process Improvement, Depot Operations, Supply Support.

  3. Long-term pavement performance program manual for profile measurements and processing

    DOT National Transportation Integrated Search

    2008-11-01

    This manual describes operational procedures for measuring longitudinal pavement profiles for the Long-Term Pavement Performance (LTPP) Program using the International Cybernetics Corporation (ICC) road profiler, Face Company Dipstick, and the rod an...

  4. K-Channel: A Multifunctional Architecture for Dynamically Reconfigurable Sample Processing in Droplet Microfluidics.

    PubMed

    Doonan, Steven R; Bailey, Ryan C

    2017-04-04

    By rapidly creating libraries of thousands of unique, miniaturized reactors, droplet microfluidics provides a powerful method for automating high-throughput chemical analysis. In order to engineer in-droplet assays, microfluidic devices must add reagents into droplets, remove fluid from droplets, and perform other necessary operations, each typically provided by a unique, specialized geometry. Unfortunately, modifying device performance or changing operations usually requires re-engineering the device among these specialized geometries, a time-consuming and costly process when optimizing in-droplet assays. To address this challenge in implementing droplet chemistry, we have developed the "K-channel," which couples a cross-channel flow to the segmented droplet flow to enable a range of operations on passing droplets. K-channels perform reagent injection (0-100% of droplet volume), fluid extraction (0-50% of droplet volume), and droplet splitting (1:1-1:5 daughter droplet ratio). Instead of modifying device dimensions or channel configuration, adjusting external conditions, such as applied pressure and electric field, selects the K-channel process and tunes its magnitude. Finally, interfacing a device-embedded magnet allows selective capture of 96% of droplet-encapsulated superparamagnetic beads during 1:1 droplet splitting events at ∼400 Hz. Addition of a second K-channel for injection (after the droplet splitting K-channel) enables integrated washing of magnetic beads within rapidly moving droplets. Ultimately, the K-channel provides an exciting opportunity to perform many useful droplet operations across a range of magnitudes without requiring architectural modifications. Therefore, we envision the K-channel as a versatile, easy to use microfluidic component enabling diverse, in-droplet (bio)chemical manipulations.

  5. Development of a plan for automating integrated circuit processing

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The operations analysis and equipment evaluations pertinent to the design of an automated production facility capable of manufacturing beam-lead CMOS integrated circuits are reported. The overall plan shows approximate cost of major equipment, production rate and performance capability, flexibility, and special maintenance requirements. Direct computer control is compared with supervisory-mode operations. The plan is limited to wafer processing operations from the starting wafer to the finished beam-lead die after separation etching. The work already accomplished in implementing various automation schemes, and the type of equipment which can be found for instant automation are described. The plan is general, so that small shops or large production units can perhaps benefit. Examples of major types of automated processing machines are shown to illustrate the general concepts of automated wafer processing.

  6. Range pattern matching with layer operations and continuous refinements

    NASA Astrophysics Data System (ADS)

    Tseng, I.-Lun; Lee, Zhao Chuan; Li, Yongfu; Perez, Valerio; Tripathi, Vikas; Ong, Jonathan Yoong Seang

    2018-03-01

    At advanced and mainstream process nodes (e.g., the 7nm, 14nm, 22nm, and 55nm process nodes), lithography hotspots can exist in layouts of integrated circuits even if the layouts pass design rule checking (DRC). The existence of lithography hotspots in a layout can cause manufacturability issues, which can result in yield losses of manufactured integrated circuits. In order to detect lithography hotspots existing in physical layouts, pattern matching (PM) algorithms and commercial PM tools have been developed. However, there is still a need to use DRC tools to perform PM operations. In this paper, we propose a PM synthesis methodology, which uses a continuous refinement technique, for the automatic synthesis of a given lithography hotspot pattern into a DRC deck consisting of layer operation commands, so that an equivalent PM operation can be performed by executing the synthesized deck with a DRC tool. Note that the proposed methodology can deal not only with exact patterns, but also with range patterns. Lithography hotspot patterns containing multiple layers can also be processed. Experimental results show that the proposed methodology can accurately and efficiently detect lithography hotspots in physical layouts.

  7. A strategic value management approach for energy and maintenance management in a building

    NASA Astrophysics Data System (ADS)

    Nawi, Mohd Nasrun Mohd; Dahlan, Nofri Yenita; Nadarajan, Santhirasegaran

    2015-05-01

    The fragmentation process has long been highlighted by stakeholders in the construction industry as a 'critical' issue that diminishes the opportunity for stakeholders involved during the operation and maintenance stage to influence design decisions. The failure of design professionals to consider how a maintenance contractor or facility manager will realize the design thus results in higher operating cost, wastage, and defects during the maintenance and operation process. Moving towards team integration is considered a significant strategy for overcoming the issue. Value Management is a style of management dedicated to guiding people and promoting innovation with the aim of improving overall building performance through structured, team-oriented exercises which make explicit, and appraise, subsequent decisions by reference to the value requirements of the clients. Accordingly, this paper discusses the fragmentation issue in more detail, including its definition, causes and effects on the maintenance and operation of buildings, and at the same time highlights the potential of the VM integrated team approach as a strategic management approach for overcoming that issue. It also explores how the team integration strategy alleviates scheduling problems, delays and disputes during the construction process and, hence, prevents harm to overall building performance.

  8. JWST Wavefront Sensing and Control: Operations Plans, Demonstrations, and Status

    NASA Astrophysics Data System (ADS)

    Perrin, Marshall; Acton, D. Scott; Lajoie, Charles-Philippe; Knight, J. Scott; Myers, Carey; Stark, Chris; JWST Wavefront Sensing & Control Team

    2018-01-01

    After JWST launches and unfolds in space, its telescope optics will be aligned through a complex series of wavefront sensing and control (WFSC) steps to achieve diffraction-limited performance. This iterative process will comprise about half of the observatory commissioning time (~ 3 out of 6 months). We summarize the JWST WFSC process, schedule, and expectations for achieved performance, and discuss our team’s activities to prepare for an effective & efficient telescope commissioning. During the recently-completed OTIS cryo test at NASA JSC, WFSC demonstrations showed the flight-like operation of the entire JWST active optics and WFSC system from end to end, including all hardware and software components. In parallel, the same test data were processed through the JWST Mission Operations Center at STScI to demonstrate the readiness of ground system components there (such as the flight operations system, data pipelines, archives, etc). Moreover, using the Astronomer’s Proposal Tool (APT), the entire telescope commissioning program has been implemented, reviewed, and is ready for execution. Between now and launch our teams will continue preparations for JWST commissioning, including further rehearsals and testing, to ensure a successful alignment of JWST’s telescope optics.

  9. PERFORMANCE EVALUATION AT A LONG-TERM FOOD PROCESSING LAND TREATMENT SITE

    EPA Science Inventory

    The objective of this project was to determine the performance of a full scale, operating overland flow land (GEL) treatment system treating nonhazardous waste. Performance was evaluated in terms of treatment of the applied waste and the environmental impact of the system, partic...

  10. Preliminary sizing and performance of aircraft

    NASA Technical Reports Server (NTRS)

    Fetterman, D. E., Jr.

    1985-01-01

    The basic processes of a program that performs sizing operations on a baseline aircraft and determines their subsequent effects on aerodynamics, propulsion, weights, and mission performance are described. Input requirements are defined and output listings explained. Results obtained by applying the method to several types of aircraft are discussed.

  11. 40 CFR 98.124 - Monitoring and QA/QC requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... efficiency test provided that the design, operation, or maintenance of the destruction device has not changed... the last emissions test), you must repeat the emission characterization. Perform the emission... process vent, previous test results, provided the tests are representative of current operating conditions...

  12. 40 CFR 98.124 - Monitoring and QA/QC requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... efficiency test provided that the design, operation, or maintenance of the destruction device has not changed... the last emissions test), you must repeat the emission characterization. Perform the emission... process vent, previous test results, provided the tests are representative of current operating conditions...

  13. 40 CFR 461.13 - New source performance standards (NSPS).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... GUIDELINES AND STANDARDS (CONTINUED) BATTERY MANUFACTURING POINT SOURCE CATEGORY Cadmium Subcategory § 461.13... allowance for process wastewater pollutants from any battery manufacturing operation other than those battery manufacturing operations listed above. [49 FR 9134, Mar. 9, 1984; 49 FR 13879, Apr. 9, 1984] ...

  14. 40 CFR 461.13 - New source performance standards (NSPS).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... GUIDELINES AND STANDARDS (CONTINUED) BATTERY MANUFACTURING POINT SOURCE CATEGORY Cadmium Subcategory § 461.13... allowance for process wastewater pollutants from any battery manufacturing operation other than those battery manufacturing operations listed above. [49 FR 9134, Mar. 9, 1984; 49 FR 13879, Apr. 9, 1984] ...

  15. 40 CFR 461.13 - New source performance standards (NSPS).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... GUIDELINES AND STANDARDS (CONTINUED) BATTERY MANUFACTURING POINT SOURCE CATEGORY Cadmium Subcategory § 461.13... allowance for process wastewater pollutants from any battery manufacturing operation other than those battery manufacturing operations listed above. [49 FR 9134, Mar. 9, 1984; 49 FR 13879, Apr. 9, 1984] ...

  16. RLV/X-33 operations overview

    NASA Astrophysics Data System (ADS)

    Black, Stephen T.; Eshleman, Wally

    1997-01-01

    This paper describes the VentureStar™ SSTO RLV and X-33 operations concepts. Applications of advanced technologies, automated ground support systems, and lessons learned from advanced aircraft and launch vehicles have been integrated to develop the streamlined vehicle and mission processing concept necessary to meet the goals of a commercial SSTO RLV. These concepts will be validated by the X-33 flight test program, where financial and technical risk mitigation are required. The X-33 flight test program fully demonstrates vehicle performance, technology, and efficient ground operations at the lowest possible cost. The Skunk Works' test program approach and the test site's proximity to the production plant are key. The X-33 integrated flight and ground test program incrementally expands the knowledge base of the overall system, allowing minimum-risk progression to the next flight test program milestone. Subsequent X-33 turnaround processing flows will be performed with an aircraft operations philosophy. The differences will be based on research and development, component reliability, and flight test requirements.

  17. An overview of safety assessment, regulation, and control of hazardous material use at NREL

    NASA Astrophysics Data System (ADS)

    Nelson, B. P.; Crandall, R. S.; Moskowitz, P. D.; Fthenakis, V. M.

    1992-12-01

    This paper summarizes the methodology we use to ensure the safe use of hazardous materials at the National Renewable Energy Laboratory (NREL). First, we analyze the processes and the materials used in those processes to identify the hazards presented. Then we study federal, state, and local regulations and apply the relevant requirements to our operations. When necessary, we generate internal safety documents to consolidate this information. We design research operations and support systems to conform to these requirements. Before we construct the systems, we perform a semiquantitative risk analysis on likely accident scenarios. All scenarios presenting an unacceptable risk require system or procedural modifications to reduce the risk. Following these modifications, we repeat the risk analysis to ensure that the respective accident scenarios present an acceptable risk. Once all risks are acceptable, we conduct an operational readiness review (ORR). A management-appointed panel performs the ORR ensuring compliance with all relevant requirements. After successful completion of the ORR, operations can begin.

  18. H∞ filtering for stochastic systems driven by Poisson processes

    NASA Astrophysics Data System (ADS)

    Song, Bo; Wu, Zheng-Guang; Park, Ju H.; Shi, Guodong; Zhang, Ya

    2015-01-01

    This paper investigates the H∞ filtering problem for stochastic systems driven by Poisson processes. By utilising martingale theory, in particular the predictable projection operator and the dual predictable projection operator, this paper transforms the expectation of a stochastic integral with respect to the Poisson process into the expectation of a Lebesgue integral. Then, based on this, this paper designs an H∞ filter such that the filtering error system is mean-square asymptotically stable and satisfies a prescribed H∞ performance level. Finally, a simulation example is given to illustrate the effectiveness of the proposed filtering scheme.
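
    The transformation referred to above has a standard form. For a Poisson process $N_t$ with intensity $\lambda$ and a predictable integrand $f$, the dual predictable projection (the compensator $\lambda t$) converts the expectation of the stochastic integral into that of a Lebesgue integral:

      \[ \mathbb{E}\Big[\int_0^T f(t)\,\mathrm{d}N_t\Big] \;=\; \mathbb{E}\Big[\int_0^T \lambda\,f(t)\,\mathrm{d}t\Big] \]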

  19. Development of modified FT (MFT) process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jinglai Zhou; Zhixin Zhang; Wenjie Shen

    1995-12-31

    A two-stage Modified FT (MFT) process has been developed for producing high-octane gasoline from coal-based syngas. The main R&D effort focused on the development of catalysts and process technologies. Duration tests were completed in a single-tube reactor, a pilot plant (100 T/y), and an industrial demonstration plant (2000 T/y). A series of satisfactory results has been obtained in terms of the operating reliability of the equipment, the performance of the catalysts, the purification of the coal-based syngas, optimum operating conditions, the properties of the gasoline, economics, etc. Further scale-up to a commercial plant is being considered.

  20. Central Data Processing System (CDPS) user's manual: Solar heating and cooling program

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The software and data base management system required to assess the performance of solar heating and cooling systems installed at multiple sites are presented. The instrumentation data associated with these systems are collected, processed, and presented in a form which supports continuity of performance evaluation across all applications. The CDPS consists of three major elements: the communication interface computer, the central data processing computer, and the performance evaluation data base. Users of the performance data base are identified, and procedures for operation and guidelines for software maintenance are outlined. The manual also defines the output capabilities of the CDPS in support of external users of the system.

  1. The Effects of Operational Parameters on a Mono-wire Cutting System: Efficiency in Marble Processing

    NASA Astrophysics Data System (ADS)

    Yilmazkaya, Emre; Ozcelik, Yilmaz

    2016-02-01

    Mono-wire block cutting machines that cut with a diamond wire can be used for squaring natural stone blocks and the slab-cutting process. The efficient use of these machines reduces operating costs by ensuring less diamond wire wear and longer wire life at high speeds. The high investment costs of these machines will lead to their efficient use and reduce production costs by increasing plant efficiency. Therefore, there is a need to investigate the cutting performance parameters of mono-wire cutting machines in terms of rock properties and operating parameters. This study aims to investigate the effects of the wire rotational speed (peripheral speed) and wire descending speed (cutting speed), which are the operating parameters of a mono-wire cutting machine, on unit wear and unit energy, which are the performance parameters in mono-wire cutting. By using the obtained results, cuttability charts for each natural stone were created on the basis of unit wear and unit energy values, cutting optimizations were performed, and the relationships between some physical and mechanical properties of rocks and the optimum cutting parameters obtained as a result of the optimization were investigated.

  2. Modeling and Advanced Control for Sustainable Process ...

    EPA Pesticide Factsheets

    This book chapter introduces a novel process systems engineering framework that integrates process control with sustainability assessment tools for the simultaneous evaluation and optimization of process operations. The implemented control strategy consists of a biologically-inspired, multi-agent-based method. The sustainability and performance assessment of process operating points is carried out using the U.S. E.P.A.’s GREENSCOPE assessment tool that provides scores for the selected economic, material management, environmental and energy indicators. The indicator results supply information on whether the implementation of the controller is moving the process towards a more sustainable operation. The effectiveness of the proposed framework is illustrated through a case study of a continuous bioethanol fermentation process whose dynamics are characterized by steady-state multiplicity and oscillatory behavior. This book chapter contribution demonstrates the application of novel process control strategies for sustainability by increasing material management, energy efficiency, and pollution prevention, as needed for SHC Sustainable Uses of Wastes and Materials Management.

  3. Inputs requested from earth resources remote sensing data users regarding LANDSAT-C mission requirements and data needs

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Inputs from prospective LANDSAT-C data users are requested to aid NASA in defining LANDSAT-C mission and data requirements and in making decisions regarding the scheduling of satellite operations and ground data processing operations. Design specifications, multispectral band scanner performance characteristics, satellite schedule operations, and types of available data products are briefly described.

  4. GaAs Supercomputing: Architecture, Language, And Algorithms For Image Processing

    NASA Astrophysics Data System (ADS)

    Johl, John T.; Baker, Nick C.

    1988-10-01

    The application of high-speed GaAs processors in a parallel system matches the demanding computational requirements of image processing. The architecture of the McDonnell Douglas Astronautics Company (MDAC) vector processor is described along with the algorithms and language translator. Most image and signal processing algorithms can utilize parallel processing and show a significant performance improvement over sequential versions. The parallelization performed by this system is within each vector instruction. Since each vector has many elements, each requiring some computation, useful concurrent arithmetic operations can easily be performed. Balancing the memory bandwidth with the computation rate of the processors is an important design consideration for high efficiency and utilization. The architecture features a bus-based execution unit consisting of four to eight 32-bit GaAs RISC microprocessors running at a 200 MHz clock rate for a peak performance of 1.6 BOPS. The execution unit is connected to a vector memory with three buses capable of transferring two input words and one output word every 10 nsec. The address generators inside the vector memory perform different vector addressing modes and feed the data to the execution unit. The functions discussed in this paper include basic MATRIX OPERATIONS, 2-D SPATIAL CONVOLUTION, HISTOGRAM, and FFT. For each of these algorithms, assembly language programs were run on a behavioral model of the system to obtain performance figures.
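
    The element-level independence that makes such image operations vectorizable is easiest to see in 2-D spatial convolution: every output pixel is a separate sum of products. The NumPy sketch below only illustrates that data flow with invented inputs; it is not the MDAC assembly implementation.

      import numpy as np

      def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
          """Valid-mode 2-D convolution; each out[i, j] is independent of the rest."""
          kh, kw = kernel.shape
          ih, iw = image.shape
          out = np.zeros((ih - kh + 1, iw - kw + 1))
          for i in range(out.shape[0]):
              for j in range(out.shape[1]):
                  # Independent sums of products: all of these can run concurrently.
                  out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
          return out

      image = np.random.rand(64, 64)
      kernel = np.ones((3, 3)) / 9.0          # 3x3 box blur
      print(convolve2d(image, kernel).shape)  # (62, 62)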

  5. High Performance Computing Operations Review Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cupps, Kimberly C.

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  6. System reliability, performance and trust in adaptable automation.

    PubMed

    Chavaillaz, Alain; Wastell, David; Sauer, Jürgen

    2016-01-01

    The present study examined the effects of reduced system reliability on operator performance and automation management in an adaptable automation environment. 39 operators were randomly assigned to one of three experimental groups: low (60%), medium (80%), and high (100%) reliability of automation support. The support system provided five incremental levels of automation which operators could freely select according to their needs. After 3 h of training on a simulated process control task (AutoCAMS) in which the automation worked infallibly, operator performance and automation management were measured during a 2.5-h testing session. Trust and workload were also assessed through questionnaires. Results showed that although reduced system reliability resulted in lower levels of trust towards automation, there were no corresponding differences in the operators' reliance on automation. While operators showed overall a noteworthy ability to cope with automation failure, there were, however, decrements in diagnostic speed and prospective memory with lower reliability. Copyright © 2015. Published by Elsevier Ltd.

  7. Performance of a novel baffled osmotic membrane bioreactor-microfiltration hybrid system under continuous operation for simultaneous nutrient removal and mitigation of brine discharge.

    PubMed

    Pathak, Nirenkumar; Chekli, Laura; Wang, Jin; Kim, Youngjin; Phuntsho, Sherub; Li, Sheng; Ghaffour, Noreddine; Leiknes, TorOve; Shon, Hokyong

    2017-09-01

    The present study investigated the performance of an integrated osmotic and microfiltration membrane bioreactor system for wastewater treatment employing baffles in the reactor. This reactor design enables both aerobic and anoxic processes in an attempt to reduce the process footprint and the energy costs associated with continuous aeration. The process performance was evaluated in terms of water flux, salinity build-up in the bioreactor, organic and nutrient removal, and microbial activity, using synthetic reverse osmosis (RO) brine as draw solution (DS). The incorporation of the MF membrane was effective in maintaining a reasonable salinity level (612-1434 mg/L) in the reactor, which resulted in a much lower flux decline (from 11.48 to 6.98 LMH) than in previous studies. The stable operation of the osmotic membrane bioreactor-forward osmosis (OMBR-FO) process resulted in effective removal of both organic matter (97.84%) and nutrients (phosphate 87.36% and total nitrogen 94.28%). Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bornea, A.; Zamfirache, M.; Stefan, L.

    ICIT (Institute for Cryogenics and Isotopic Technologies) has used its experience in the cryogenic distillation of water to propose a similar process for hydrogen distillation that can be used in detritiation technologies. This process relies on the same packages, but a stainless steel filling is tested instead of the phosphorous bronze filling used for water distillation. This paper presents two types of packages developed for hydrogen distillation; both have a stainless steel filling but differ in terms of density, exchange surface and specific volume. Performance data have been obtained on the laboratory scale. In order to determine the characteristics of the packages, the installation was operated in total reflux mode at different liquid flow rates. Several experiments were carried out under different operating conditions. Samples extracted at the top and bottom of the cryogenic distillation column allowed mathematical processing to determine the separation performance. The experiments show better efficiency for the package whose exchange surface was higher, and there were no relevant differences between the two packages as the operating pressure of the cryogenic column increased. For a complete characterization of the packages, future experiments will be considered to determine performance at various velocities in the column and their correlation with the pressure in the column. We plan further experiments to separate tritium from a mixture of DT isotopes, with the goal of applying these results to a detritiation plant.

  9. Process economics of renewable biorefineries: butanol and ethanol production in integrated bioprocesses from lignocellulosics and other industrial by-products

    USDA-ARS?s Scientific Manuscript database

    This chapter provides process economic details on production of butanol from lignocellulosic biomass and glycerol in integrated bioreactors where numerous unit operations are combined. In order to compare various processes, economic evaluations were performed using SuperPro Designer Software (versio...

  10. Data Processing Technology, A Suggested 2-Year Post High School Curriculum.

    ERIC Educational Resources Information Center

    Central Texas Coll., Killeen.

    This guide identifies technicians, states specific job requirements, and describes special problems in defining, initiating, and operating post-high school programs in data processing technology. The following are discussed: (1) the program (employment opportunities, the technician, work performed by data processing personnel, the faculty, student…

  11. Survey of Munitions Response Technologies

    DTIC Science & Technology

    2006-06-01

    Recoverable fragment of the abstract: "... signatures, surface clutter, variances in operator technique, target selection, and data processing all degrade and affect optimum performance."

  12. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

    The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if no tradeoff has to be made to move to a new operating point. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios: tuning of process controllers to meet specified performance objectives and tuning of process inputs to meet specified quality objectives. Five case studies are presented.
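
    Step (1) of the algorithm, deciding whether every quality criterion can be improved simultaneously, amounts to testing whether a common descent direction exists, which can be posed as a small linear program. The sketch below is a generic illustration with invented gradients, not the thesis's controller; it assumes each criterion is to be minimized and uses scipy.optimize.linprog.

      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical gradients of two quality criteria w.r.t. two process inputs.
      grads = np.array([
          [1.0, -0.5],
          [0.2,  0.8],
      ])
      n = grads.shape[1]

      # Variables x = [d_1 .. d_n, s]; maximize s subject to grads @ d <= -s,
      # with |d_j| <= 1 so the problem stays bounded.
      c = np.zeros(n + 1)
      c[-1] = -1.0                                           # minimize -s
      A_ub = np.hstack([grads, np.ones((grads.shape[0], 1))])
      b_ub = np.zeros(grads.shape[0])
      bounds = [(-1, 1)] * n + [(0, None)]
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

      if res.x[-1] > 1e-9:
          print("all criteria improvable; move inputs along d =", res.x[:n])
      else:
          print("Pareto point reached; a tradeoff must be chosen")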

  13. National Centers for Environmental Prediction

    Science.gov Websites


  14. Object as a model of intelligent robot in the virtual workspace

    NASA Astrophysics Data System (ADS)

    Foit, K.; Gwiazda, A.; Banas, W.; Sekala, A.; Hryniewicz, P.

    2015-11-01

    Contemporary industry requires that every element of a production line fit into a global schema, which is connected with the global structure of the business. There is a need to find practical and effective ways of designing and managing the production process. The term "effective" should be understood to mean that there exists a method which allows building a system of nodes and relations in order to describe the role of a particular machine in the production process. Among all the machines involved in the manufacturing process, industrial robots are the most complex ones. This complexity is reflected in the realization of elaborate tasks involving handling, transporting or orienting objects in a workspace, and even performing simple machining processes, such as deburring, grinding, painting, applying adhesives and sealants, etc. The robot also performs activities connected with automatic tool changing and with operating the equipment mounted on its wrist. Because it has a programmable control system, the robot also performs additional activities connected with sensors, vision systems, operating the storages of manipulated objects, tools or grippers, measuring stands, etc. For this reason the description of the robot as a part of a production system should take into account the specific nature of this machine: the robot is a substitute for a worker who performs his tasks in a particular environment. In this case, the model should be able to characterize the essence of "employment" sufficiently. One possible approach to this problem is to treat the robot as an object, in the sense often used in computer science. This allows one both to describe operations performed on the object and to describe operations performed by the object. This paper focuses mainly on the definition of the object as the model of the robot. This model is confronted with other possible descriptions. The results can be further used during the design of a complete manufacturing system which takes into account all the machines involved and has the form of an object-oriented model.
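
    The object view described above can be conveyed with a minimal sketch. The class and method names below are hypothetical stand-ins, not the authors' model; they only show that the robot-object both performs operations and has operations performed on it.

      from dataclasses import dataclass, field

      @dataclass
      class Robot:
          name: str
          tool: str = "gripper"
          task_queue: list = field(default_factory=list)

          # Operations performed BY the robot
          def change_tool(self, tool: str) -> None:
              print(f"{self.name}: tool change {self.tool} -> {tool}")
              self.tool = tool

          def handle(self, workpiece: str) -> None:
              print(f"{self.name}: handling {workpiece} with {self.tool}")

          # Operation performed ON the robot by the surrounding production system
          def assign_task(self, task: str) -> None:
              self.task_queue.append(task)

      robot = Robot("R1")
      robot.assign_task("deburr housing")    # the production line acts on the object
      robot.change_tool("deburring spindle")
      robot.handle("housing #42")            # the object acts on the process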

  15. Modelling Tradeoffs Evolution in Multipurpose Water Systems Operation in Response to Extreme Events

    NASA Astrophysics Data System (ADS)

    Mason, E.; Gazzotti, P.; Amigoni, F.; Giuliani, M.; Castelletti, A.

    2015-12-01

    Multipurpose water resource systems are usually operated on a tradeoff of the operating objectives, which - under steady state climatic and socio-economic boundary conditions - is supposed to ensure a fair and/or efficient balance among the conflicting interests. Extreme variability in the system's drivers might affect operators' risk aversion and force a change in the tradeoff. Properly accounting for these shifts is key to any rigorous retrospective assessment of operators' behavior and the associated system's performance. In this study, we explore how the selection of different optimal tradeoffs among the operating objectives is linked to variations in the boundary conditions, such as, for example, a drifting rainfall season or remarkable changes in crop and energy prices. We argue that tradeoff selection is driven by recent, extreme variations in system performance: underperforming against one operating objective's target value should push the tradeoff toward the disadvantaged objective. To test this assumption, we developed a rational procedure to simulate the operators' tradeoff selection process. We map the selection onto a multilateral negotiation process, where multiple virtual agents optimize different operating objectives. The agents periodically negotiate a compromise on the operating policy. The agent's rigidity in each negotiation round is determined by the recent system performance on the specific objective it represents. The negotiation follows a set-based egocentric monotonic concession protocol: at each negotiation step an agent incrementally adds options to the set of its acceptable compromises and (possibly) accepts lower and lower satisfying policies until an agreement is achieved. We apply this reiterated negotiation framework to the regulated Lake Como, Italy, simulating the lake dam operation and its recurrent updates over the last 50 years. The operation aims to balance shoreline flood prevention and irrigation deficit control in the downstream irrigated areas. The results of our simulated negotiations accurately capture the operator's risk-aversion changes as driven by extreme wet and dry situations, and reproduce the observed release data well.
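
    A toy rendering of the set-based monotonic concession idea is sketched below. The policies, scores, and concession rule are invented and far simpler than the Lake Como framework: each agent ranks candidate operating policies by its own objective, enlarges its acceptable set round by round, and agreement is the first non-empty intersection.

      policies = ["P1", "P2", "P3", "P4", "P5"]
      # Lower is better: e.g., flood damage vs. irrigation deficit per policy (invented).
      scores = {
          "flood":      {"P1": 1, "P2": 2, "P3": 3, "P4": 4, "P5": 5},
          "irrigation": {"P1": 5, "P2": 4, "P3": 2, "P4": 1, "P5": 3},
      }
      # Concession step per round; an agent that recently underperformed on its
      # objective could set its step to 0 (maximum rigidity) for that round.
      step = {"flood": 1, "irrigation": 1}

      ranked = {a: sorted(policies, key=scores[a].get) for a in scores}
      accepted = {a: 1 for a in scores}        # each agent starts with its top choice

      for rnd in range(1, len(policies) + 1):
          sets = {a: set(ranked[a][:accepted[a]]) for a in scores}
          common = set.intersection(*sets.values())
          if common:
              print(f"round {rnd}: agreement on {sorted(common)}")
              break
          for a in scores:                     # monotone concession: sets only grow
              accepted[a] += step[a]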

  16. Cooperative optimization of reconfigurable machine tool configurations and production process plan

    NASA Astrophysics Data System (ADS)

    Xie, Nan; Li, Aiping; Xue, Wei

    2012-09-01

    The design of the production process plan and the configurations of a reconfigurable machine tool (RMT) interact with each other. Reasonable process plans with suitable RMT configurations help to improve product quality and reduce production cost. Therefore, a cooperative strategy is needed to solve both issues concurrently. In this paper, a cooperative optimization model for RMT configurations and the production process plan is presented. Its objectives take into account the impacts of both the process and the configuration. Moreover, a novel genetic algorithm is also developed to provide optimal or near-optimal solutions: firstly, its chromosome is redesigned and is composed of three parts: operations, process plan, and RMT configurations; secondly, new selection, crossover and mutation operators are developed to deal with the process constraints from the operation processes (OP) graph, which these operators would otherwise violate by generating illegal solutions; eventually, the optimal RMT configurations under the optimal process plan design can be obtained. Finally, a case study of a manufacturing line composed of three RMTs is presented. The case shows that the optimal process plan and RMT configurations are obtained concurrently, production cost decreases by 6.28% and nonmonetary performance increases by 22%. The proposed method can determine both the RMT configurations and the production process, improving production capacity, functionality and equipment utilization for RMTs.
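
    The precedence constraints that the customized operators must respect can be illustrated with a toy repair step. The operations, precedence pairs, and repair-by-topological-order strategy below are invented for illustration; they are not the paper's constraint-aware crossover and mutation operators.

      import random

      random.seed(3)
      ops = ["op1", "op2", "op3", "op4", "op5"]
      precedes = {("op1", "op2"), ("op1", "op3"), ("op3", "op5")}   # a before b

      def repair(seq):
          """Reorder a chromosome into a legal topological order of the OP graph."""
          legal, pending = [], list(seq)
          while pending:
              for op in pending:
                  if all(a in legal for (a, b) in precedes if b == op):
                      legal.append(op)
                      pending.remove(op)
                      break
          return legal

      chromosome = random.sample(ops, k=len(ops))   # e.g., an offspring of crossover
      print("raw:     ", chromosome)
      print("repaired:", repair(chromosome))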

  17. Electrospun amplified fiber optics.

    PubMed

    Morello, Giovanni; Camposeo, Andrea; Moffa, Maria; Pisignano, Dario

    2015-03-11

    All-optical signal processing is the focus of much research aiming to obtain effective alternatives to existing data transmission platforms. Amplification of light in fiber optics, such as in Erbium-doped fiber amplifiers, is especially important for efficient signal transmission. However, the complex fabrication methods involving high-temperature processes performed in a highly pure environment slow the fabrication process and make amplified components expensive with respect to an ideal, high-throughput, room temperature production. Here, we report on near-infrared polymer fiber amplifiers working over a band of ∼20 nm. The fibers are cheap, spun with a process entirely carried out at room temperature, and shown to have amplified spontaneous emission with good gain coefficients and low levels of optical losses (a few cm(-1)). The amplification process is favored by high fiber quality and low self-absorption. The found performance metrics appear to be suitable for short-distance operations, and the large variety of commercially available doping dyes might allow for effective multiwavelength operations by electrospun amplified fiber optics.

  18. Meta-control of combustion performance with a data mining approach

    NASA Astrophysics Data System (ADS)

    Song, Zhe

    Large-scale combustion processes are complex and pose challenges for optimizing their performance. Traditional approaches based on thermal dynamics have limitations in finding optimal operational regions due to the time-shifting nature of the process. Recent advances in information technology enable people to collect large volumes of process data easily and continuously. The collected process data contain rich information about the process and, to some extent, represent a digital copy of the process over time. Although large volumes of data exist in industrial combustion processes, they are not fully utilized to the level where the process can be optimized. Data mining is an emerging science which finds patterns or models in large data sets. It has found many successful applications in business marketing, medical and manufacturing domains. The focus of this dissertation is on applying data mining to industrial combustion processes and ultimately optimizing combustion performance. However, the philosophy, methods and frameworks discussed in this research can also be applied to other industrial processes. Optimizing an industrial combustion process presents two major challenges. One is that the underlying process model changes over time, and obtaining an accurate process model is nontrivial. The other is that a process model with high fidelity is usually highly nonlinear, so solving the optimization problem needs efficient heuristics. This dissertation sets out to solve these two major challenges. The major contribution of this four-year research is a data-driven solution for optimizing the combustion process, in which a process model or knowledge is identified from the process data, and optimization is then executed by evolutionary algorithms searching for optimal operating regions.
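
    The two-step loop described, identifying a model from operating data and then searching it with an evolutionary heuristic, can be compressed into a short sketch. The synthetic data, nearest-neighbour surrogate, and simple evolution strategy below are illustrative stand-ins, not the dissertation's actual model or algorithm.

      import numpy as np

      rng = np.random.default_rng(0)

      # (1) "Historical" data: efficiency as an unknown function of two inputs,
      #     synthesized here purely for illustration.
      X = rng.uniform(0, 1, size=(500, 2))
      y = 1 - (X[:, 0] - 0.3) ** 2 - (X[:, 1] - 0.7) ** 2 + rng.normal(0, 0.01, 500)

      def surrogate(x, k=15):
          """Predict efficiency at x from the k nearest historical points."""
          d = np.linalg.norm(X - x, axis=1)
          return y[np.argsort(d)[:k]].mean()

      # (2) Simple evolutionary search over the surrogate model.
      best = rng.uniform(0, 1, 2)
      for _ in range(60):
          children = np.clip(best + rng.normal(0, 0.05, size=(10, 2)), 0, 1)
          cand = max(children, key=surrogate)
          if surrogate(cand) > surrogate(best):
              best = cand

      print("estimated optimal operating point:", np.round(best, 2))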

  19. Challenges in building intelligent systems for space mission operations

    NASA Technical Reports Server (NTRS)

    Hartman, Wayne

    1991-01-01

    The purpose here is to provide a top-level look at the stewardship functions performed in space operations, and to identify the major issues and challenges that must be addressed to build intelligent systems that can realistically support operations functions. The focus is on decision support activities involving monitoring, state assessment, goal generation, plan generation, and plan execution. The bottom line is that problem solving in the space operations domain is a very complex process. A variety of knowledge constructs, representations, and reasoning processes are necessary to support effective human problem solving. Emulating these kinds of capabilities in intelligent systems offers major technical challenges that the artificial intelligence community is only beginning to address.

  20. Technical Challenges and Opportunities of Centralizing Space Science Mission Operations (SSMO) at NASA Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Ido, Haisam; Burns, Rich

    2015-01-01

    The NASA Goddard Space Science Mission Operations project (SSMO) is performing a technical cost-benefit analysis for centralizing and consolidating operations of a diverse set of missions into a unified and integrated technical infrastructure. The presentation will focus on the notion of normalizing spacecraft operations processes, workflows, and tools. It will also show the processes of creating a standardized open architecture, creating common security models and implementations, interfaces, services, automations, notifications, alerts, logging, publish, subscribe and middleware capabilities. The presentation will also discuss how to leverage traditional capabilities, along with virtualization, cloud computing services, control groups and containers, and possibly Big Data concepts.

  1. Assessing performance in complex team environments.

    PubMed

    Whitmore, Jeffrey N

    2005-07-01

    This paper provides a brief introduction to team performance assessment. It highlights some critical aspects leading to the successful measurement of team performance in realistic console operations; discusses the idea of process and outcome measures; presents two types of team data collection systems; and provides an example of team performance assessment. Team performance assessment is a complicated endeavor relative to assessing individual performance. Assessing team performance necessitates a clear understanding of each operator's task, both at the individual and team level, and requires planning for efficient data capture and analysis. Though team performance assessment requires considerable effort, the results can be very worthwhile. Most tasks performed in Command and Control environments are team tasks, and understanding this type of performance is becoming increasingly important to the evaluation of mission success and for overall system optimization.

  2. Study and Analysis of The Robot-Operated Material Processing Systems (ROMPS)

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.

    1996-01-01

    This is a report presenting the progress of a research grant funded by NASA for work performed during 1 Oct. 1994 - 31 Sep. 1995. The report deals with the development, and the investigation of the potential use, of software for data processing for the Robot-Operated Material Processing System (ROMPS). It reports on the progress of data processing of calibration samples processed by ROMPS in space and on earth. The first data were retrieved using the I/O software and manually processed using Microsoft Excel. Data retrieval and processing were then automated using a program written in C, which is able to read the telemetry data and produce plots of the time responses of sample temperatures and other desired variables. LabView was also employed to automatically retrieve and process the telemetry data.

  3. 40 CFR 240.210-1 - Requirement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Requirement. 240.210-1 Section 240.210... THE THERMAL PROCESSING OF SOLID WASTES Requirements and Recommended Procedures § 240.210-1 Requirement... the design requirements. An operations manual describing the various tasks to be performed, operating...

  4. 40 CFR 240.210-1 - Requirement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Requirement. 240.210-1 Section 240.210... THE THERMAL PROCESSING OF SOLID WASTES Requirements and Recommended Procedures § 240.210-1 Requirement... the design requirements. An operations manual describing the various tasks to be performed, operating...

  5. 40 CFR 240.210-1 - Requirement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Requirement. 240.210-1 Section 240.210... THE THERMAL PROCESSING OF SOLID WASTES Requirements and Recommended Procedures § 240.210-1 Requirement... the design requirements. An operations manual describing the various tasks to be performed, operating...

  6. 40 CFR 60.665 - Reporting and recordkeeping requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Distillation Operations § 60.665 Reporting and recordkeeping requirements. (a) Each owner or operator subject... of recovery equipment or a distillation unit; (2) Any recalculation of the TRE index value performed... distillation process unit containing the affected facility. These must be reported as soon as possible after...

  7. 40 CFR 60.665 - Reporting and recordkeeping requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Distillation Operations § 60.665 Reporting and recordkeeping requirements. (a) Each owner or operator subject... of recovery equipment or a distillation unit; (2) Any recalculation of the TRE index value performed... distillation process unit containing the affected facility. These must be reported as soon as possible after...

  8. 7 CFR 274.4 - Reconciliation and reporting.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... basis and consist of: (1) Information on how the system operates relative to its performance standards..., shall be submitted by each State agency operating an issuance system. The report shall be prepared at... reconciliation process. The EBT system shall provide reports and documentation pertaining to the following: (1...

  9. System-on-chip architecture and validation for real-time transceiver optimization: APC implementation on FPGA

    NASA Astrophysics Data System (ADS)

    Suarez, Hernan; Zhang, Yan R.

    2015-05-01

    New radar applications need to perform complex algorithms and process large quantities of data to generate useful information for the users. This situation has motivated the search for better processing solutions that include low-power high-performance processors, efficient algorithms, and high-speed interfaces. In this work, hardware implementations of adaptive pulse compression for real-time transceiver optimization are presented; they are based on a System-on-Chip architecture for Xilinx devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve the computation of computing-intensive tasks such as matrix multiplication and matrix inversion, which are essential units for solving the covariance matrix. The tradeoffs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through high-performance AXI buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operations using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.
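
    The covariance-matrix solve that those coprocessors accelerate reduces, in generic form, to forming a sample covariance and solving a linear system for adaptive filter weights. The NumPy sketch below shows only that math on invented data, with an MVDR-style unit-gain normalization assumed; it is not the paper's FPGA implementation.

      import numpy as np

      rng = np.random.default_rng(1)
      N, M = 64, 8                        # snapshots, filter taps (illustrative)

      snaps = (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))) / np.sqrt(2)
      R = snaps.conj().T @ snaps / N      # sample covariance matrix (M x M)
      R += 1e-3 * np.eye(M)               # diagonal loading for numerical stability

      s = np.exp(1j * np.pi * 0.2 * np.arange(M))   # expected waveform vector
      w = np.linalg.solve(R, s)           # R^{-1} s without forming the inverse
      w /= s.conj() @ w                   # unit gain on the expected waveform

      print("adaptive weights:", np.round(w, 3))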

  10. Extending i-line capabilities through variance characterization and tool enhancement

    NASA Astrophysics Data System (ADS)

    Miller, Dan; Salinas, Adrian; Peterson, Joel; Vickers, David; Williams, Dan

    2006-03-01

    Continuous economic pressures have moved a large percentage of integrated device manufacturing (IDM) operations either overseas or to foundry operations over the last 10 years. These pressures have left the IDM fabs in the U.S. with required COO improvements in order to maintain operations domestically. While the assets of many of these factories are at a very favorable point in the depreciation life cycle, the equipment and processes are constrained by the quality of the equipment in its original state and its degradation over its installed life. With the objective of enhancing output and improving process performance, this factory and its primary lithography process tool supplier have been able to extend the usable life of the existing process tools, increase the output of the tool base, and improve the distribution of the CDs on the product produced. Texas Instruments Incorporated led an investigation with the POLARIS® Systems & Services business of FSI International to determine the sources of variance in the i-line processing of a wide array of IC device types. Sources of variance such as PEB temperature, PEB delay time, develop recipe, develop time, and develop programming were investigated. While PEB processes are a primary driver for acid-catalyzed resists, the develop mode is shown in this work to have an overwhelming impact on the wafer-to-wafer and across-wafer CD performance of these i-line processes. These changes have improved the wafer-to-wafer CD distribution by more than 80%, and the within-wafer CD distribution by more than 50%, while enabling a greater-than-50% increase in lithography cluster throughput. The paper discusses the contribution from each of the sources of variance and their importance to overall system performance.

  11. Microcomponent sheet architecture

    DOEpatents

    Wegeng, R.S.; Drost, M.K..; McDonald, C.E.

    1997-03-18

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation. 14 figs.

  12. Reusable Rocket Engine Operability Modeling and Analysis

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Komar, D. R.

    1998-01-01

    This paper describes the methodology, model, input data, and analysis results of a reusable launch vehicle engine operability study conducted with the goal of supporting design from an operations perspective. Paralleling performance analyses in schedule and method, this requires the use of metrics in a validated operations model useful for design, sensitivity, and trade studies. Operations analysis in this view is one of several design functions. An operations concept was developed given an engine concept and the predicted operations and maintenance processes incorporated into simulation models. Historical operations data at a level of detail suitable to model objectives were collected, analyzed, and formatted for use with the models, the simulations were run, and results collected and presented. The input data used included scheduled and unscheduled timeline and resource information collected into a Space Transportation System (STS) Space Shuttle Main Engine (SSME) historical launch operations database. Results reflect upon the importance not only of reliable hardware but upon operations and corrective maintenance process improvements.

  13. Progress towards an Optimization Methodology for Combustion-Driven Portable Thermoelectric Power Generation Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Shankar; Karri, Naveen K.; Gogna, Pawan K.

    2012-03-13

    Enormous military and commercial interests exist in developing quiet, lightweight, and compact thermoelectric (TE) power generation systems. This paper investigates design integration and analysis of an advanced TE power generation system implementing JP-8 fueled combustion and thermal recuperation. Design and development of a portable TE power system using a JP-8 combustor as a high-temperature heat source, with process flows that depend on efficient heat generation, transfer, and recovery within the system, are explored. Design optimization of the system required considering the combustion system efficiency and TE conversion efficiency simultaneously. The combustor performance and TE sub-system performance were coupled directly through exhaust temperatures, fuel and air mass flow rates, heat exchanger performance, subsequent hot-side temperatures, and cold-side cooling techniques and temperatures. Systematic investigation of this system relied on accurate thermodynamic modeling of complex, high-temperature combustion processes concomitantly with detailed thermoelectric converter thermal/mechanical modeling. To this end, this work reports on design integration of system-level process flow simulations using the commercial software CHEMCAD™ with in-house thermoelectric converter and module optimization, and heat exchanger analyses using COMSOL™ software. High-performance, high-temperature TE materials and segmented TE element designs are incorporated in coupled design analyses to achieve predicted TE subsystem-level conversion efficiencies exceeding 10%. These TE advances are integrated with a high-performance microtechnology combustion reactor based on recent advances at the Pacific Northwest National Laboratory (PNNL). Predictions from this coupled simulation established a basis for optimal selection of fuel and air flow rates, thermoelectric module design and operating conditions, and microtechnology heat-exchanger design criteria. This paper discusses the simulation process that leads directly to system efficiency power maps defining potentially available optimal system operating conditions and regimes. This coupled simulation approach enables pathways for integrated use of high-performance combustor components, high-performance TE devices, and microtechnologies to produce a compact, lightweight, combustion-driven TE power system prototype that operates on common fuels.

  14. Simulation of mass storage systems operating in a large data processing facility

    NASA Technical Reports Server (NTRS)

    Holmes, R.

    1972-01-01

    A mass storage simulation program was written to aid system designers in the design of a data processing facility. It acts as a tool for measuring the overall effect on the facility of on-line mass storage systems, and it provides the means of measuring and comparing the performance of competing mass storage systems. The performance of the simulation program is demonstrated.

  15. Total Quality Management Implementation Plan: Defense Depot, Ogden

    DTIC Science & Technology

    1989-07-01

    Total Quality Management Implementation Plan, Defense Depot Ogden. Subject terms: TQM (Total Quality Management), Continuous Process Improvement, Depot Operations, Process Action Teams. From "A Message From The Commander On Total Quality Management": "I fully support the DLA approach to Total Quality Management. As stated by General

  16. Motivation for documentation.

    PubMed

    Graham, Denise H

    2004-11-01

    The quality improvement plan relies on controlling quality of care through improving the process or system as a whole. Your ongoing data collection is paramount to the process of system-wide improvement and performance, enhancement of financial performance, operational performance and overall service performance and satisfaction. The threat of litigation and having to defend yourself from a claim of wrongdoing still looms every time your wheels turn. Your runsheet must serve and protect you. Look at the NFPA 1710 standard, which was enacted to serve and protect firefighters. This standard was enacted with their personal safety and well-being as the principle behind staffing requirements. At what stage of draft do you suppose the NFPA 1710 standard would be today if the relevant data were collected sporadically or were not tracked for each service-related death? It may have taken many more service-related deaths to effect change for a system-wide improvement in operational performance. Every call merits documentation and data collection. Your data are catalysts for change.

  17. Application of System Operational Effectiveness Methodology to Space Launch Vehicle Development and Operations

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Kelley, Gary W.

    2012-01-01

    The Department of Defense (DoD) defined System Operational Effectiveness (SOE) model provides an exceptional framework for an affordable approach to the development and operation of space launch vehicles and their supporting infrastructure. The SOE model provides a focal point from which to direct and measure technical effectiveness and process efficiencies of space launch vehicles. The application of the SOE model to a space launch vehicle's development and operation effort leads to very specific approaches and measures that require consideration during the design phase. This paper provides a mapping of the SOE model to the development of space launch vehicles for human exploration by addressing the SOE model key points of measurement including System Performance, System Availability, Technical Effectiveness, Process Efficiency, System Effectiveness, Life Cycle Cost, and Affordable Operational Effectiveness. In addition, the application of the SOE model to the launch vehicle development process is defined providing the unique aspects of space launch vehicle production and operations in lieu of the traditional broader SOE context that examines large quantities of fielded systems. The tailoring and application of the SOE model to space launch vehicles provides some key insights into the operational design drivers, capability phasing, and operational support systems.

  18. Design and Performance of the Astro-E/XRS Signal Processing System

    NASA Technical Reports Server (NTRS)

    Boyce, Kevin R.; Audley, M. D.; Baker, R. G.; Dumonthier, J. J.; Fujimoto, R.; Gendreau, K. C.; Ishisaki, Y.; Kelley, R. L.; Stahle, C. K.; Szymkowiak, A. E.

    1999-01-01

    We describe the signal processing system of the Astro-E XRS instrument. The Calorimeter Analog Processor (CAP) provides bias and power for the detectors and amplifies the detector signals by a factor of 20,000. The Calorimeter Digital Processor (CDP) performs the digital processing of the calorimeter signals, detecting X-ray pulses and analyzing them by optimal filtering. We describe the operation of pulse detection, pulse height analysis, and risetime determination. We also discuss performance, including the three event grades (hi-res, mid-res, and low-res), anticoincidence detection, counting rate dependence, and noise rejection.
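    For readers unfamiliar with optimal filtering, the sketch below is a minimal frequency-domain version of the amplitude (pulse height) estimate of the kind the CDP is described as performing; it assumes a known pulse template and noise power spectrum, and is not the flight implementation.

        import numpy as np

        def pulse_amplitude(record, template, noise_psd):
            # record and template are equal-length time series; noise_psd
            # holds the noise power in each rfft frequency bin.
            R = np.fft.rfft(record)
            T = np.fft.rfft(template)
            w = np.conj(T) / noise_psd          # optimal-filter weights
            return (w @ R).real / (w @ T).real  # least-squares pulse height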

  19. Implementation of in-line infrared monitor in full-scale anaerobic digestion process.

    PubMed

    Spanjers, H; Bouvier, J C; Steenweg, P; Bisschops, I; van Gils, W; Versprille, B

    2006-01-01

    During start up but also during normal operation, anaerobic reactor systems should be run and monitored carefully to secure trouble-free operation, because the process is vulnerable to disturbances such as temporary overloading, biomass wash out and influent toxicity. The present method of monitoring is usually by manual sampling and subsequent laboratory analysis. Data collection, processing and feedback to system operation is manual and ad hoc, and involves high-level operator skills and attention. As a result, systems tend to be designed at relatively conservative design loading rates resulting in significant over-sizing of reactors and thus increased systems cost. It is therefore desirable to have on-line and continuous access to performance data on influent and effluent quality. Relevant variables to indicate process performance include VFA, COD, alkalinity, sulphate, and, if aerobic post-treatment is considered, total nitrogen, ammonia and nitrate. Recently, mid-IR spectrometry was demonstrated on a pilot scale to be suitable for in-line simultaneous measurement of these variables. This paper describes a full-scale application of the technique to test its ability to monitor continuously and without human intervention the above variables simultaneously in two process streams. For VFA, COD, sulphate, ammonium and TKN good agreement was obtained between in-line and manual measurements. During a period of six months the in-line measurements had to be interrupted several times because of clogging. It appeared that the sample pre-treatment unit was not able to cope with high solids concentrations all the time.

  20. Introduction to the scientific application system of DAMPE (On behalf of DAMPE collaboration)

    NASA Astrophysics Data System (ADS)

    Zang, Jingjing

    2016-07-01

    The Dark Matter Particle Explorer (DAMPE) is a high-energy particle physics experiment satellite, launched on 17 Dec 2015. The science data processing and payload operation maintenance for DAMPE will be provided by the DAMPE Scientific Application System (SAS) at the Purple Mountain Observatory (PMO) of the Chinese Academy of Sciences. SAS consists of three subsystems: the scientific operation subsystem, the science data and user management subsystem, and the science data processing subsystem. In cooperation with the Ground Support System (Beijing), the scientific operation subsystem is responsible for proposing observation plans, monitoring the health of the satellite, generating payload control commands, and participating in all activities related to payload operation. Several databases developed by the science data and user management subsystem methodically manage all collected and reconstructed science data, downlinked housekeeping data, and payload configuration and calibration data. Under the leadership of the DAMPE Scientific Committee, this subsystem is also responsible for the publication of high-level science data and for supporting all science activities of the DAMPE collaboration. The science data processing subsystem has developed a series of physics analysis software packages to reconstruct basic information about detected cosmic-ray particles. This subsystem also maintains the high-performance computing system of SAS to process all downlinked science data and automatically monitors the quality of all produced data. In this talk, we describe the functionality of the whole DAMPE SAS and present the main performance of its data processing capability.

  1. Electrophysiologically dissociating episodic preretrieval processing.

    PubMed

    Bridger, Emma K; Mecklinger, Axel

    2012-06-01

    Contrasts between ERPs elicited by new items from tests with distinct episodic retrieval requirements index preretrieval processing. Preretrieval operations are thought to facilitate the recovery of task-relevant information because they have been shown to correlate with response accuracy in tasks in which prioritizing the retrieval of this information could be a useful strategy. This claim was tested here by contrasting new item ERPs from two retrieval tasks, each designed to explicitly require the recovery of a different kind of mnemonic information. New item ERPs differed from 400 msec poststimulus, but the distribution of these effects varied markedly, depending upon participants' response accuracy: A protracted posteriorly located effect was present for higher performing participants, whereas an anteriorly distributed effect occurred for lower performing participants. The magnitude of the posterior effect from 400 to 800 msec correlated with response accuracy, supporting the claim that preretrieval processes facilitate the recovery of task-relevant information. Additional contrasts between ERPs from these tasks and an old/new recognition task operating as a relative baseline revealed task-specific effects with nonoverlapping scalp topographies, in line with the assumption that these new item ERP effects reflect qualitatively distinct retrieval operations. Similarities in these effects were also used to reason about preretrieval processes related to the general requirement to recover contextual details. These insights, alongside the distinct pattern of effects for the two accuracy groups, reveal the multifarious nature of preretrieval processing while indicating that only some of these classes of operation are systematically related to response accuracy in recognition memory tasks.

  2. Decision support for operations and maintenance (DSOM) system

    DOEpatents

    Jarrell, Donald B [Kennewick, WA; Meador, Richard J [Richland, WA; Sisk, Daniel R [Richland, WA; Hatley, Darrel D [Kennewick, WA; Brown, Daryl R [Richland, WA; Keibel, Gary R [Richland, WA; Gowri, Krishnan [Richland, WA; Reyes-Spindola, Jorge F [Richland, WA; Adams, Kevin J [San Bruno, CA; Yates, Kenneth R [Lake Oswego, OR; Eschbach, Elizabeth J [Fort Collins, CO; Stratton, Rex C [Richland, WA

    2006-03-21

    A method for minimizing the life cycle cost of processes such as heating a building. The method utilizes sensors to monitor various pieces of equipment used in the process, for example, boilers, turbines, and the like. The method then performs the steps of identifying a set of optimal operating conditions for the process; identifying and measuring the parameters necessary to characterize the actual operating condition of the process; validating the data generated by measuring those parameters; characterizing the actual condition of the process; identifying the optimal condition corresponding to the actual condition; comparing said optimal condition with the actual condition and identifying variances between the two; drawing, from a set of pre-defined algorithms created using best engineering practices, an explanation of at least one likely source and at least one recommended remedial action for selected variances; and providing said explanation as an output to at least one user.
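    A minimal sketch of the compare-and-recommend step described in the claim is given below; the parameter names, tolerance, and rules table are illustrative assumptions, not taken from the patent.

        # Illustrative rules table: parameter -> (likely source, remedial action).
        RULES = {
            "boiler_efficiency": ("fouled heat-transfer surfaces",
                                  "schedule tube cleaning"),
        }

        def assess(actual, optimal, tolerance=0.05):
            findings = []
            for name, ideal in optimal.items():
                # Flag parameters whose actual value varies from the optimal.
                if abs(actual[name] - ideal) / ideal > tolerance:
                    source, action = RULES.get(name, ("unknown", "investigate"))
                    findings.append((name, source, action))
            return findings  # explanation provided as output to the user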

  3. M-DAS: System for multispectral data analysis. [in Saginaw Bay, Michigan

    NASA Technical Reports Server (NTRS)

    Johnson, R. H.

    1975-01-01

    M-DAS is a ground data processing system designed for analysis of multispectral data. M-DAS operates on multispectral data from LANDSAT, S-192, M2S and other sources in CCT form. Interactive training by operator-investigators using a variable cursor on a color display was used to derive optimum processing coefficients and data on cluster separability. An advanced multivariate normal-maximum likelihood processing algorithm was used to produce output in various formats: color-coded film images, geometrically corrected map overlays, moving displays of scene sections, coverage tabulations and categorized CCTs. The analysis procedure for M-DAS involves three phases: (1) screening and training, (2) analysis of training data to compute performance predictions and processing coefficients, and (3) processing of multichannel input data into categorized results. Typical M-DAS applications involve iteration between each of these phases. A series of photographs of the M-DAS display are used to illustrate M-DAS operation.

  4. Intra-operative disruptions, surgeon's mental workload, and technical performance in a full-scale simulated procedure.

    PubMed

    Weigl, Matthias; Stefan, Philipp; Abhari, Kamyar; Wucherer, Patrick; Fallavollita, Pascal; Lazarovici, Marc; Weidert, Simon; Euler, Ekkehard; Catchpole, Ken

    2016-02-01

    Surgical flow disruptions occur frequently and jeopardize perioperative care and surgical performance. So far, insights into subjective and cognitive implications of intra-operative disruptions for surgeons and inherent consequences for performance are inconsistent. This study aimed to investigate the effect of surgical flow disruption on surgeon's intra-operative workload and technical performance. In a full-scale OR simulation, 19 surgeons were randomly allocated to either of the two disruption scenarios (telephone call vs. patient discomfort). Using a mixed virtual reality simulator with a computerized, high-fidelity mannequin, all surgeons were trained in performing a vertebroplasty procedure and subsequently performed such a procedure under experimental conditions. Standardized measures on subjective workload and technical performance (trocar positioning deviation from expert-defined standard, number, and duration of X-ray acquisitions) were collected. Intra-operative workload during simulated disruption scenarios was significantly higher compared to training sessions (p < .01). Surgeons in the telephone call scenario experienced significantly more distraction compared to their colleagues in the patient discomfort scenario (p < .05). However, workload tended to be increased in surgeons who coped with distractions due to patient discomfort. Technical performance was not significantly different between both disruption scenarios. We found a significant association between surgeons' intra-operative workload and technical performance such that surgeons with increased mental workload tended to perform worse (β = .55, p = .04). Surgical flow disruptions affect surgeons' intra-operative workload. Increased mental workload was associated with inferior technical performance. Our simulation-based findings emphasize the need to establish smooth surgical flow which is characterized by a low level of process deviations and disruptions.

  5. The Triangle of the Space Launch System Operations

    NASA Astrophysics Data System (ADS)

    Fayolle, Eric

    2010-09-01

    Firemen know it as the “fire triangle”, mathematicians know it as the “golden triangle”, sailors know it as the “Bermuda triangle”, politicians know it as the “Weimar triangle”… This article presents a new instance of that geometry in the space launch system world: “the triangle of the space launch system operations”. This triangle is composed of the three following topics, which have to be taken into account for any space launch system operation: design, safety, and operational use. Design performance is of course taken into account from the early preliminary phase of a system development. This design performance is matured throughout the development phases, thanks to consecutive iterations intended to respect the financial and schedule constraints imposed on the development of the system. This process leads to a detailed and precise design that meets the required performance. The operational use phase then brings its own batch of constraints during the use of the system. This phase is conducted by specific procedures for each operation. Each procedure has sequences for each sub-system, which have to be conducted in a very precise chronological order. These procedures can be processed automatically or manually, with or without the involvement of operators, and in a determined environment. Safeguard aims to verify that the specific constraints imposed to guarantee the safety of persons and property, the protection of public health, and the environment are respected. Safeguard has to be considered above the operational constraints of any space operation, without forgetting the highest safety level for the operators of the space operation, and of course without damaging the facilities or disturbing the external environment. All space operations are the result of a “win-win” compromise between these three topics. Contrary to the fire triangle, where one of the elements has to be suppressed in order to prevent combustion, none of the topics should be suppressed in the triangle of the space launch system operations. Indeed, if safeguard is not considered from the beginning of the development phase, the development will not take safeguard constraints into account. The operational phase will then become very difficult, because it will be impossible to respect the safety rules required for the operational use of the system. Taking safeguard constraints into account in late project phases leads to very high operational constraints, sometimes quite disturbing for the operator, and may even prevent the operational use phase from being considered mature and optimized. On the contrary, if design performance is sacrificed in order to favor the safeguard aspect in the operational use phase, the system design will not be optimized, which will lead to significant planning and schedule impacts. The examples detailed in this article show the compromises that each designer must confront during the development of any system dealing with the safety of persons and property, the protection of public health, and the environment.

  6. The Preparation for and Execution of Engineering Operations for the Mars Curiosity Rover Mission

    NASA Technical Reports Server (NTRS)

    Samuels, Jessica A.

    2013-01-01

    The Mars Science Laboratory Curiosity Rover mission is the most complex and scientifically packed rover mission that has ever been operated on the surface of Mars. The preparation leading up to the surface mission involved various tests, contingency planning, and integration of plans between various teams and scientists to determine how operation of the spacecraft (s/c) would be facilitated. In addition, a focused initial set of health checks had to be defined and created in order to ensure successful operation of rover subsystems before embarking on a two-year science journey. This paper defines the role and responsibilities of the Engineering Operations team, the process involved in preparing the team for rover surface operations, the predefined engineering activities performed during the early portion of the mission, and the evaluation process used for initial and day-to-day spacecraft operational assessment.

  7. Real-time thermal imaging of solid oxide fuel cell cathode activity in working condition.

    PubMed

    Montanini, Roberto; Quattrocchi, Antonino; Piccolo, Sebastiano A; Amato, Alessandra; Trocino, Stefano; Zignani, Sabrina C; Faro, Massimiliano Lo; Squadrito, Gaetano

    2016-09-01

    Electrochemical methods such as voltammetry and electrochemical impedance spectroscopy are effective for quantifying solid oxide fuel cell (SOFC) operational performance, but not for identifying and monitoring the chemical processes that occur on the electrodes' surface, which are thought to be strictly related to the SOFCs' efficiency. Because of their high operating temperature, mechanical failure or cathode delamination is a common shortcoming of SOFCs that severely affects their reliability. Infrared thermography may provide a powerful tool for probing in situ SOFC electrode processes and the materials' structural integrity, but, due to the typical design of pellet-type cells, a complete optical access to the electrode surface is usually prevented. In this paper, a specially designed SOFC is introduced, which allows temperature distribution to be measured over all the cathode area while still preserving the electrochemical performance of the device. Infrared images recorded under different working conditions are then processed by means of a dedicated image processing algorithm for quantitative data analysis. Results reported in the paper highlight the effectiveness of infrared thermal imaging in detecting the onset of cell failure during normal operation and in monitoring cathode activity when the cell is fed with different types of fuels.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glosser, D.; Kutchko, B.; Benge, G.

    Foamed cement is a critical component for wellbore stability. The mechanical performance of a foamed cement depends on its microstructure, which in turn depends on the preparation method and attendant operational variables. Determination of cement stability for field use is based on laboratory testing protocols governed by API Recommended Practice 10B-4 (API RP 10B-4, 2015). However, laboratory and field operational variables contrast considerably in terms of scale, as well as slurry mixing and foaming processes. In this paper, laboratory and field operational processes are characterized within a physics-based framework. It is shown that the “atomization energy” imparted by the high-pressure injection of nitrogen gas into the field-mixed foamed cement slurry is, by a significant margin, the highest-energy process, and has a major impact on the void system in the cement slurry. There is no analog for this high energy exchange in current laboratory cement preparation and testing protocols. Quantifying the energy exchanges across the laboratory and field processes provides a basis for understanding the relative impacts of these variables on cement structure, and can ultimately lead to the development of practices to improve cement testing and performance.

  9. Deterministic processes guide long-term synchronised population dynamics in replicate anaerobic digesters

    PubMed Central

    Vanwonterghem, Inka; Jensen, Paul D; Dennis, Paul G; Hugenholtz, Philip; Rabaey, Korneel; Tyson, Gene W

    2014-01-01

    A replicate long-term experiment was conducted using anaerobic digestion (AD) as a model process to determine the relative role of niche and neutral theory on microbial community assembly, and to link community dynamics to system performance. AD is performed by a complex network of microorganisms and process stability relies entirely on the synergistic interactions between populations belonging to different functional guilds. In this study, three independent replicate anaerobic digesters were seeded with the same diverse inoculum, supplied with a model substrate, α-cellulose, and operated for 362 days at a 10-day hydraulic residence time under mesophilic conditions. Selective pressure imposed by the operational conditions and model substrate caused large reproducible changes in community composition including an overall decrease in richness in the first month of operation, followed by synchronised population dynamics that correlated with changes in reactor performance. This included the synchronised emergence and decline of distinct Ruminococcus phylotypes at day 148, and emergence of a Clostridium and Methanosaeta phylotype at day 178, when performance became stable in all reactors. These data suggest that many dynamic functional niches are predictably filled by phylogenetically coherent populations over long time scales. Neutral theory would predict that a complex community with a high degree of recognised functional redundancy would lead to stochastic changes in populations and community divergence over time. We conclude that deterministic processes may play a larger role in microbial community dynamics than currently appreciated, and under controlled conditions it may be possible to reliably predict community structural and functional changes over time. PMID:24739627

  10. Performing Surgery: Commonalities with Performers Outside Medicine

    PubMed Central

    Kneebone, Roger L.

    2016-01-01

    This paper argues for the inclusion of surgery within the canon of performance science. The world of medicine presents rich, complex but relatively under-researched sites of performance. Performative aspects of clinical practice are overshadowed by a focus on the processes and outcomes of medical care, such as diagnostic accuracy and the results of treatment. The primacy of this “clinical” viewpoint—framed by clinical professionals as the application of medical knowledge—hides resonances with performance in other domains. Yet the language of performance is embedded in the culture of surgery—surgeons “perform” operations, work in an operating “theater” and use “instruments.” This paper asks what might come into view if we take this performative language at face value and interrogate surgery from the perspective of performance science. PMID:27630587

  11. Biological monitoring of benzene exposure for process operators during ordinary activity in the upstream petroleum industry.

    PubMed

    Bråtveit, Magne; Kirkeleit, Jorunn; Hollund, Bjørg Eli; Moen, Bente E

    2007-07-01

    This study characterized the exposure of crude oil process operators to benzene and related aromatics during ordinary activity and investigated whether the operators take up benzene at this level of exposure. We performed the study on a fixed, integrated oil and gas production facility on Norway's continental shelf. The study population included 12 operators and 9 referents. We measured personal exposure to benzene, toluene, ethylbenzene and xylene during three consecutive 12-h work shifts using organic vapour passive dosimeter badges. We sampled blood and urine before departure to the production facility (pre-shift), immediately after the work shift on Day 13 of the work period (post-shift) and immediately before the following work shift (pre-next shift). We also measured the exposure to hydrocarbons during short-term tasks by active sampling using Tenax tubes. The arithmetic mean exposure over the 3 days was 0.042 ppm for benzene (range <0.001-0.69 ppm), 0.05 ppm for toluene, 0.02 ppm for ethylbenzene and 0.03 ppm for xylene. Full-shift personal exposure was significantly higher when the process operators performed flotation work during the shift versus other tasks. Work in the flotation area was associated with short-term (6-15 min) arithmetic mean exposure to benzene of 1.06 ppm (range 0.09-2.33 ppm). The concentrations of benzene in blood and urine did not differ between operators and referents at any time point. When we adjusted for current smoking in regression analysis, benzene exposure was significantly associated with the post-shift concentration of benzene in blood (P = 0.01) and urine (P = 0.03), respectively. Although these operators perform tasks with relatively high short-term exposure to benzene, the full-shift mean exposure is low during ordinary activity. Some evidence indicates benzene uptake within this range of exposure.

  12. Energy Conversion Alternatives Study (ECAS), Westinghouse phase 1. Volume 3: Combustors, furnaces and low-BTU gasifiers. [used in coal gasification and coal liquefaction (equipment specifications)

    NASA Technical Reports Server (NTRS)

    Hamm, J. R.

    1976-01-01

    Information is presented on the design, performance, operating characteristics, cost, and development status of coal preparation equipment, combustion equipment, furnaces, low-Btu gasification processes, low-temperature carbonization processes, desulfurization processes, and pollution particulate removal equipment. The information was compiled for use by the various cycle concept leaders in determining the performance, capital costs, energy costs, and natural resource requirements of each of their system configurations.

  13. Common Ada (tradename) Missile Package (CAMP) Project. Missile Software Parts. Volume 8. Detail Design Document

    DTIC Science & Technology

    1988-03-01

    PACKAGE BODY) TLCSC P661 (CATALOG #P106-0). This package contains the CAMP parts required to do the waypoint steering portion of navigation. The ... 3.3.4.1.6 PROCESSING: The following describes the processing performed by this part: within package body WaypointSteering, the package bodies Steering_Vector_Operations and Steering_Vector_Operations_with_Arcsin are declared separate, along with procedure Compute_Turn_Angle_and_Direction (UnitNormal

  14. Autonomous Vehicle Systems: Implications for Maritime Operations, Warfare Capabilities, and Command and Control

    DTIC Science & Technology

    2010-06-01

    speed doubles approximately every 18 months, Nick Bostrom published a study in 1998 that equated computer processing power to that of the human ... bits, this equates to 10^17 operations per second, or 10^11 million instructions per second (MIPS), for human brain performance (Bostrom, 1998). In ... estimates based off Moore's Law put realistic, affordable computer processing power equal to that of humans somewhere in the 2020–2025 timeframe (Bostrom

  15. Power partial-discard strategy to obtain improved performance for simulated moving bed chromatography.

    PubMed

    Chung, Ji-Woo; Kim, Kyung-Min; Yoon, Tae-Ung; Kim, Seung-Ik; Jung, Tae-Sung; Han, Sang-Sup; Bae, Youn-Sang

    2017-12-22

    A novel power partial-discard (PPD) strategy was developed as a variant of the partial-discard (PD) operation to further improve the separation performance of the simulated moving bed (SMB) process. The PPD operation varied the flow rates of discard streams by introducing a new variable, the discard amount (DA) as well as varying the reported variable, discard length (DL), while the conventional PD used fixed discard flow rates. The PPD operations showed significantly improved purities in spite of losses in recoveries. Remarkably, the PPD operation could provide more enhanced purity for a given recovery or more enhanced recovery for a given purity than the PD operation. The two variables, DA and DL, in the PPD operation played a key role in achieving the desired purity and recovery. The PPD operations will be useful for attaining high-purity products with reasonable recoveries. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Safety Assurance in NextGen

    NASA Technical Reports Server (NTRS)

    Fleming, Cody Harrison; Spencer, Melissa; Leveson, Nancy; Wilkinson, Chris

    2012-01-01

    The generation of minimum operational, safety, performance, and interoperability requirements is an important aspect of safely integrating new NextGen components into the Communication Navigation Surveillance and Air Traffic Management (CNS/ATM) system. These requirements are used as part of the implementation and approval processes. In addition, they provide guidance to determine the levels of design assurance and performance that are needed for each element of the new NextGen procedures, including aircraft, operator, and Air Navigation and Service Provider. Using the enhanced Airborne Traffic Situational Awareness for InTrail Procedure (ATSA-ITP) as an example, this report describes some limitations of the current process used for generating safety requirements and levels of required design assurance. An alternative process is described, as well as the argument for why the alternative can generate more comprehensive requirements and greater safety assurance than the current approach.

  17. Calibration and simulation of two large wastewater treatment plants operated for nutrient removal.

    PubMed

    Ferrer, J; Morenilla, J J; Bouzas, A; García-Usach, F

    2004-01-01

    Control and optimisation of plant processes has become a priority for WWTP managers. The calibration and verification of a mathematical model provides an important tool for the investigation of advanced control strategies that may assist in the design or optimization of WWTPs. This paper describes the calibration of the ASM2d model for two full scale biological nitrogen and phosphorus removal plants in order to characterize the biological process and to upgrade the plants' performance. Results from simulation showed a good correspondence with experimental data demonstrating that the model and the calibrated parameters were able to predict the behaviour of both WWTPs. Once the calibration and simulation process was finished, a study for each WWTP was done with the aim of improving its performance. Modifications focused on reactor configuration and operation strategies were proposed.

  18. Shared address collectives using counter mechanisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blocksome, Michael; Dozsa, Gabor; Gooding, Thomas M

    A shared address space on a compute node stores data received from a network and data to transmit to the network. The shared address space includes an application buffer that can be directly operated upon by a plurality of processes, for instance, running on different cores on the compute node. A shared counter is used for one or more of signaling arrival of the data across the plurality of processes running on the compute node, signaling completion of an operation performed by one or more of the plurality of processes, obtaining reservation slots by one or more of the plurality of processes, or combinations thereof.
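    The patent targets cores sharing an address space on a compute node; as a loose analogy only, the sketch below uses a shared-memory counter across Python processes to signal completion, which is the signalling pattern the abstract describes.

        from multiprocessing import Process, Value

        def worker(done):
            # ... operate directly on the shared application buffer ...
            with done.get_lock():
                done.value += 1          # signal completion via the shared counter

        if __name__ == "__main__":
            done = Value("i", 0)         # counter living in shared memory
            procs = [Process(target=worker, args=(done,)) for _ in range(4)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            assert done.value == 4       # every process has signalled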

  19. Assessment, Planning, and Execution Considerations for Conjunction Risk Assessment and Mitigation Operations

    NASA Technical Reports Server (NTRS)

    Frigm, Ryan C.; Levi, Joshua A.; Mantziaras, Dimitrios C.

    2010-01-01

    An operational Conjunction Assessment Risk Analysis (CARA) concept is the real-time process of assessing risk posed by close approaches and reacting to those risks if necessary. The most effective way to completely mitigate conjunction risk is to perform an avoidance maneuver. The NASA Goddard Space Flight Center has implemented a routine CARA process since 2005. Over this period, considerable experience has been gained and many lessons have been learned. This paper identifies and presents these experiences as general concepts in the description of the Conjunction Assessment, Flight Dynamics, and Flight Operations methodologies and processes. These general concepts will be tied together and will be exemplified through a case study of an actual high risk conjunction event for the Aura mission.

  20. Performance evaluation of cross-flow membrane system for wastewater reuse from the wood-panels industry.

    PubMed

    Dizge, Nadir

    2014-01-01

    The objectives of this investigation were to perform a series of lab-scale membrane separation experiments under various operating conditions to investigate the performance behaviour of nanofiltration membrane (NF 270) for wastewater reuse from the wood-panels industry. The operating condition effects, e.g. cross-flow velocity (CFV), trans membrane pressure (TMP) and temperature, on the permeate flux and contaminant rejection efficiency were investigated. Moreover, three different samples: (1) raw wastewater collected from the wood-panels industry; (2) ultrafiltration pre-treated wastewater (UF-NF); and (3) coagulation/flocculation pre-treated wastewater (CF-NF) were employed in this study. The UF-NF was proposed as a pre-treatment process because it could reduce the chemical oxygen demand (COD) effectively with lower energy consumption than CF-NF. The performance of NF 270 membrane was assessed by measurements of the many parameters (pH, conductivity, total dissolved solids, COD, suspended solids, total nitrogen, nitrite, nitrate, and total phosphate) under various operating conditions. It was noted that the contaminant rejection was affected by changing TMP and CFV. It was concluded that the purified water stream can be recycled into the process for water reuse or safely disposed to the river.

  1. Advancing Forward-Looking Metrics: A Linear Program Optimization and Robust Variable Selection for Change in Stock Levels as a Result of Recurring MICAP Parts

    DTIC Science & Technology

    2013-06-01

    Kobu, 2007) Gunasekaran and Kobu also presented six observations as they relate to these key performance indicators (KPI), as follows: 1. ... Internal business process (50% of the KPI) and customers (50% of the KPI) play a significant role in SC environments. This implies that internal business ... process PMs have a significant impact on operational performance. 2. The most widely used PM is financial performance (38% of the KPI). This

  2. Hadamard Transform Time-of-Flight Mass Spectrometry

    DTIC Science & Technology

    2004-11-30

    computer. This rather slow process allowed us to evaluate different methods of processing the data prior to performing the inverse transform. The ... DSK6713 is capable of performing the inverse transform and this would be the preferred mode of operation since ... treating the raw data prior to performing the inverse transform. We expected that noise associated with the pulsing of the Bradbury

  3. Enhanced visualization of inner ear structures

    NASA Astrophysics Data System (ADS)

    Niemczyk, Kazimierz; Kucharski, Tomasz; Kujawinska, Malgorzata; Bruzgielewicz, Antoni

    2004-07-01

    Recently surgery requires extensive support from imaging technologies in order to increase the effectiveness and safety of operations. One important task is to enhance the visualisation of quasi-phase (transparent) 3D structures. Those structures are characterized by very low contrast, which makes differentiation of tissues in the field of view very difficult. For that reason the surgeon may be extremely uncertain during an operation. This problem arises in supporting operations on the inner ear, during which the physician has to perform cuts at specific places on quasi-transparent velums. Conventionally, during such operations the medical doctor views the operating field through a stereoscopic microscope. In this paper we propose a 3D visualisation system based on a Helmet Mounted Display. Two CCD cameras placed at the output of the microscope perform acquisition of stereo pairs of images. The images are processed in real time with the goal of enhancing quasi-phase structures. The main task is to create an algorithm that is not sensitive to changes in intensity distribution; the disadvantage of existing algorithms is their lack of adaptation to reflexes and shadows occurring in the field of view. The processed images from both the left and right channels are overlaid on the actual images exported and displayed on the LCDs of the Helmet Mounted Display. A physician observes through the HMD (Helmet Mounted Display) a stereoscopic operating scene with indication of the places of special interest. The authors present the hardware, the procedures applied, and initial results of inner ear structure visualisation. Several problems connected with the processing of stereo-pair images are discussed.
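    The paper's enhancement algorithm is not reproduced here; as one illustration of an illumination-robust local-contrast step of the kind described, the sketch below applies contrast-limited adaptive histogram equalization (CLAHE) with OpenCV. The clip limit and tile size are illustrative defaults, not the authors' values.

        import cv2

        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

        def enhance(gray_frame):
            # Per-tile equalization raises the local contrast of low-contrast
            # structures; the clip limit bounds the influence of reflexes
            # and shadows on the mapping.
            return clahe.apply(gray_frame)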

  4. Monitoring and predicting cognitive state and performance via physiological correlates of neuronal signals.

    PubMed

    Russo, Michael B; Stetz, Melba C; Thomas, Maria L

    2005-07-01

    Judgment, decision making, and situational awareness are higher-order mental abilities critically important to operational cognitive performance. Higher-order mental abilities rely on intact functioning of multiple brain regions, including the prefrontal, thalamus, and parietal areas. Real-time monitoring of individuals for cognitive performance capacity via an approach based on sampling multiple neurophysiologic signals and integrating those signals with performance prediction models potentially provides a method of supporting warfighters' and commanders' decision making and other operationally relevant mental processes and is consistent with the goals of augmented cognition. Cognitive neurophysiological assessments that directly measure brain function and subsequent cognition include positron emission tomography, functional magnetic resonance imaging, mass spectroscopy, near-infrared spectroscopy, magnetoencephalography, and electroencephalography (EEG); however, most direct measures are not practical to use in operational environments. More practical, albeit indirect measures that are generated by, but removed from the actual neural sources, are movement activity, oculometrics, heart rate, and voice stress signals. The goal of the papers in this section is to describe advances in selected direct and indirect cognitive neurophysiologic monitoring techniques as applied for the ultimate purpose of preventing operational performance failures. These papers present data acquired in a wide variety of environments, including laboratory, simulator, and clinical arenas. The papers discuss cognitive neurophysiologic measures such as digital signal processing wrist-mounted actigraphy; oculometrics including blinks, saccadic eye movements, pupillary movements, the pupil light reflex; and high-frequency EEG. These neurophysiological indices are related to cognitive performance as measured through standard test batteries and simulators with conditions including sleep loss, time on task, and aviation flight-induced fatigue.

  5. Quantum key distillation from Gaussian states by Gaussian operations.

    PubMed

    Navascués, M; Bae, J; Cirac, J I; Lewenstein, M; Sanpera, A; Acín, A

    2005-01-14

    We study the secrecy properties of Gaussian states under Gaussian operations. Although such operations are useless for quantum distillation, we prove that it is possible to distill a secret key secure against any attack from sufficiently entangled Gaussian states with nonpositive partial transposition. Moreover, all such states allow for key distillation, when Eve is assumed to perform finite-size coherent attacks before the reconciliation process.

  6. Deep Space Network equipment performance, reliability, and operations management information system

    NASA Technical Reports Server (NTRS)

    Cooper, T.; Lin, J.; Chatillon, M.

    2002-01-01

    The Deep Space Mission System (DSMS) Operations Program Office and the Deep Space Network (DSN) facilities utilize the Discrepancy Reporting Management System (DRMS) to collect, process, communicate, and manage data discrepancies, equipment resets, and physical equipment status, and to maintain an internal Station Log. A collaborative development effort between JPL and the Canberra Deep Space Communication Complex delivered a system to support DSN Operations.

  7. Effects of TEL Confusers on Operator Target Acquisition Performance with SAR Imagery

    DTIC Science & Technology

    1998-12-01

    processing known as the theory of signal detection (TSD) (Gescheider, 1985; Green & Swets, 1966; Macmillan & Creelman, 1991; Wilson, 1992). A TSD ... localizations (Hacker & Ratcliff, 1979; Macmillan & Creelman, 1991). The index of bias in a target localization task provides a measure of the operator's ... of correct localizations substituted for hits (Macmillan & Creelman, 1991). Receiver Operating Characteristic Curves. In addition to the calculation

  8. Microwave induced plasma for solid fuels and waste processing: A review on affecting factors and performance criteria.

    PubMed

    Ho, Guan Sem; Faizal, Hasan Mohd; Ani, Farid Nasir

    2017-11-01

    High temperature thermal plasma has a major drawback which consumes high energy. Therefore, non-thermal plasma which uses comparatively lower energy, for instance, microwave plasma is more attractive to be applied in gasification process. Microwave-induced plasma gasification also carries the advantages in terms of simplicity, compactness, lightweight, uniform heating and the ability to operate under atmospheric pressure that gains attention from researchers. The present paper synthesizes the current knowledge available for microwave plasma gasification on solid fuels and waste, specifically on affecting parameters and their performance. The review starts with a brief outline on microwave plasma setup in general, and followed by the effect of various operating parameters on resulting output. Operating parameters including fuel characteristics, fuel injection position, microwave power, addition of steam, oxygen/fuel ratio and plasma working gas flow rate are discussed along with several performance criteria such as resulting syngas composition, efficiency, carbon conversion, and hydrogen production rate. Based on the present review, fuel retention time is found to be the key parameter that influences the gasification performance. Therefore, emphasis on retention time is necessary in order to improve the performance of microwave plasma gasification of solid fuels and wastes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. NASA Processes and Requirements for Conducting Human-in-the-Loop Closed Chamber Tests

    NASA Technical Reports Server (NTRS)

    Barta, Daniel J.; Montz, Michael E.

    2004-01-01

    NASA has specific processes and requirements that must be followed for tests involving human subjects to be conducted in a safe and effective manner. There are five distinct phases of test operations. Phase one, the test request phase, consists of those activities related to initiating, processing, reviewing, and evaluating the test request. Phase two, the test preparation phase, consists of those activities related to planning, coordinating, documenting, and building up the test. Phase three, the test readiness phase, consists of those activities related to verifying and reviewing the planned test operations. Phase four, the test activity phase, consists of all pretest operations, functional checkouts, emergency drills, and test operations. Phase five, the post-test activity phase, consists of those activities performed once the test is completed, including briefings, documentation of anomalies, data reduction and archiving, and reporting. Project management processes must be followed for facility modifications and major test buildup; these include six phases: initiation and assessment, requirements evaluation, preliminary design, detailed design, use readiness review (URR), and acceptance. Compliance with requirements for safety and quality assurance is documented throughout the test buildup and test operation processes. Tests involving human subjects must be reviewed by the applicable Institutional Review Board (IRB).

  10. An order insertion scheduling model of logistics service supply chain considering capacity and time factors.

    PubMed

    Liu, Weihua; Yang, Yi; Wang, Shuqing; Liu, Yang

    2014-01-01

    Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), which disturbs normal time scheduling, especially in the environment of mass customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process, and then establishes an order insertion scheduling model of an LSSC that considers service capacity and time factors. This model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. In order to verify the viability and effectiveness of our model, a specific example is numerically analyzed. Some interesting conclusions are obtained. First, as the completion time delay coefficient permitted by customers increases, the possible inserting order volume first increases and then stabilizes. Second, supply chain performance reaches its best when the volume of the inserting order is equal to the surplus volume of the normal operation capacity of the mass service process. Third, the larger the normal operation capacity of the mass service process is, the larger the possible inserting order volume will be. Moreover, compared to increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful.
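    The first three conclusions can be read as a simple capacity relation; the sketch below is a hypothetical illustration of that relation (the names and the saturation bound are assumptions, not the paper's model).

        def max_insertable_volume(normal_capacity, scheduled_volume,
                                  delay_coeff, hard_limit):
            # Surplus capacity of the mass-service process, expanded by the
            # completion-time delay customers tolerate, saturating at a hard
            # capacity limit (cf. the first conclusion above).
            surplus = max(normal_capacity - scheduled_volume, 0.0)
            return min(surplus * (1.0 + delay_coeff), hard_limit)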

  11. ICD-10: from assessment to remediation to strategic opportunity.

    PubMed

    Dugan, John K

    2012-02-01

    Healthcare finance teams should perform an enterprisewide assessment to determine what ICD-10 means to their organization, strategically, operationally, and financially. CFOs should strategically evaluate the impact of ICD-10 on the organization's entire financial operation. Organizations should have a contingency plan in place across all processes.

  12. Aerobic Digestion. Student Manual. Biological Treatment Process Control.

    ERIC Educational Resources Information Center

    Klopping, Paul H.

    This manual contains the textual material for a single-lesson unit on aerobic sludge digestion. Topic areas addressed include: (1) theory of aerobic digestion; (2) system components; (3) performance factors; (4) indicators of stable operation; and (5) operational problems and their solutions. A list of objectives, glossary of key terms, and…

  13. 10 CFR 63.112 - Requirements for preclosure safety analysis of the geologic repository operations area.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... emergency power to instruments, utility service systems, and operating systems important to safety if there... include: (a) A general description of the structures, systems, components, equipment, and process... of the performance of the structures, systems, and components to identify those that are important to...

  14. 10 CFR 63.112 - Requirements for preclosure safety analysis of the geologic repository operations area.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... emergency power to instruments, utility service systems, and operating systems important to safety if there... include: (a) A general description of the structures, systems, components, equipment, and process... of the performance of the structures, systems, and components to identify those that are important to...

  15. 10 CFR 63.112 - Requirements for preclosure safety analysis of the geologic repository operations area.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... emergency power to instruments, utility service systems, and operating systems important to safety if there... include: (a) A general description of the structures, systems, components, equipment, and process... of the performance of the structures, systems, and components to identify those that are important to...

  16. 10 CFR 63.112 - Requirements for preclosure safety analysis of the geologic repository operations area.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... emergency power to instruments, utility service systems, and operating systems important to safety if there... include: (a) A general description of the structures, systems, components, equipment, and process... of the performance of the structures, systems, and components to identify those that are important to...

  17. Aerobic Digestion. Sludge Treatment and Disposal Course #166. Instructor's Guide [and] Student Workbook.

    ERIC Educational Resources Information Center

    Klopping, Paul H.

    This lesson is a basic description of aerobic digestion. Topics presented include a general process overview discussion of a typical digester's components, factors influencing performance, operational controls, and biological considerations for successful operation. The lesson includes an instructor's guide and student workbook. The instructor's…

  18. ITS data quality control and the calculation of mobility performance measures

    DOT National Transportation Integrated Search

    2000-09-01

    This report describes the results of research on the use of intelligent transportation system (ITS) data in calculating mobility performance measures for ITS operations. The report also describes a data quality control process developed for the Trans...

  19. 40 CFR 60.661 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Volatile Organic Compound (VOC) Emissions From Synthetic Organic Chemical Manufacturing Industry (SOCMI) Distillation Operations § 60.661... for destroying organic compounds and does not extract energy in the form of steam or process heat...

  20. 40 CFR 60.661 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Volatile Organic Compound (VOC) Emissions From Synthetic Organic Chemical Manufacturing Industry (SOCMI) Distillation Operations § 60.661... for destroying organic compounds and does not extract energy in the form of steam or process heat...

  1. 40 CFR 60.661 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Volatile Organic Compound (VOC) Emissions From Synthetic Organic Chemical Manufacturing Industry (SOCMI) Distillation Operations § 60.661... for destroying organic compounds and does not extract energy in the form of steam or process heat...

  2. Development of in situ product removal strategies in biocatalysis applying scaled-down unit operations.

    PubMed

    Heintz, Søren; Börner, Tim; Ringborg, Rolf H; Rehn, Gustav; Grey, Carl; Nordblad, Mathias; Krühne, Ulrich; Gernaey, Krist V; Adlercreutz, Patrick; Woodley, John M

    2017-03-01

    An experimental platform based on scaled-down unit operations combined in a plug-and-play manner enables easy and highly flexible testing of advanced biocatalytic process options such as in situ product removal (ISPR) process strategies. In such a platform, it is possible to compartmentalize different process steps while operating it as a combined system, giving the possibility to test and characterize the performance of novel process concepts and biocatalysts with minimal influence of inhibitory products. Here the capabilities of performing process development by applying scaled-down unit operations are highlighted through a case study investigating the asymmetric synthesis of 1-methyl-3-phenylpropylamine (MPPA) using ω-transaminase, an enzyme in the sub-family of amino transferases (ATAs). An on-line HPLC system was applied to avoid manual sample handling and to semi-automatically characterize ω-transaminases in a scaled-down packed-bed reactor (PBR) module, showing MPPA as a strong inhibitor. To overcome the inhibition, a two-step liquid-liquid extraction (LLE) ISPR concept was tested using scaled-down unit operations combined in a plug-and-play manner. Through the tested ISPR concept, it was possible to continuously feed the main substrate benzylacetone (BA) and extract the main product MPPA throughout the reaction, thereby overcoming the challenges of low substrate solubility and product inhibition. The tested ISPR concept achieved a product concentration of 26.5 g MPPA/L, a purity of up to 70% (g MPPA per g total), and a recovery of about 80% (mol/mol) of MPPA in 20 h, with the possibility to increase the concentration, purity, and recovery further. Biotechnol. Bioeng. 2017;114: 600-609. © 2016 Wiley Periodicals, Inc.

  3. Stable and verifiable state estimation methods and systems with spacecraft applications

    NASA Technical Reports Server (NTRS)

    Li, Rongsheng (Inventor); Wu, Yeong-Wei Andy (Inventor)

    2001-01-01

    The stability of a recursive estimator process (e.g., a Kalman filter) is assured for long time periods by periodically resetting an error covariance P(t_n) of the system to a predetermined reset value P_r. The recursive process is thus repetitively forced to start from a selected covariance and continue for a time period that is short compared to the system's total operational time period. The time period in which the process must maintain its numerical stability is significantly reduced, as is the demand on the system's numerical stability. The process stability for an extended operational time period T_o is verified by performing the resetting step at the end of at least one reset time period T_r whose duration is less than the operational time period T_o and then confirming stability of the process over the reset time period T_r. Because the recursive process starts from a selected covariance at the beginning of each reset time period T_r, confirming stability of the process over at least one reset time period substantially confirms stability over the longer operational time period T_o.
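
    As a rough illustration of the resetting scheme this record describes, the sketch below runs a scalar Kalman filter whose error covariance is forced back to a preset value P_r every RESET_PERIOD steps, so numerical stability only has to hold over one reset interval T_r. All names and constants are illustrative assumptions, not values from the patent.

        # Scalar Kalman filter with periodic covariance resetting (illustrative).
        a, h = 1.0, 1.0          # state-transition and measurement scalars
        q, r = 1e-4, 1e-2        # process and measurement noise variances
        P_RESET = 1.0            # predetermined reset value P_r (assumed)
        RESET_PERIOD = 1000      # steps per reset interval T_r (assumed)

        def run_filter(measurements):
            x, P = 0.0, P_RESET
            estimates = []
            for n, z in enumerate(measurements):
                # Force the covariance back to the reset value each interval.
                if n > 0 and n % RESET_PERIOD == 0:
                    P = P_RESET
                x, P = a * x, a * P * a + q                    # predict
                k = P * h / (h * P * h + r)                    # Kalman gain
                x, P = x + k * (z - h * x), (1.0 - k * h) * P  # update
                estimates.append(x)
            return estimates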

  4. Hanford's Supplemental Treatment Project: Full-Scale Integrated Testing of In-Container-Vitrification and a 10,000-Liter Dryer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witwer, K.S.; Dysland, E.J.; Garfield, J.S.

    2008-07-01

    The GeoMelt® In-Container Vitrification™ (ICV™) process was selected by the U.S. Department of Energy (DOE) in 2004 for further evaluation as the supplemental treatment technology for Hanford's low-activity waste (LAW). Also referred to as 'bulk vitrification', this process combines glass forming minerals, LAW, and chemical amendments; dries the mixture; and then vitrifies the material in a refractory-lined steel container. AMEC Nuclear Ltd. (AMEC) is adapting its GeoMelt ICV™ technology for this application with technical and analytical support from Pacific Northwest National Laboratory (PNNL). The DBVS project is funded by the DOE Office of River Protection and administered by CH2M HILL Hanford Group, Inc. The Demonstration Bulk Vitrification Project (DBVS) was initiated to engineer, construct, and operate a full-scale bulk vitrification pilot-plant to treat up to 750,000 liters of LAW from Waste Tank 241-S-109 at the DOE Hanford Site. Since the beginning of the DBVS project in 2004, testing has used laboratory, crucible-scale, and engineering-scale equipment to help establish process limitations of selected glass formulations and identify operational issues. Full-scale testing has provided critical design verification of the ICV™ process before operating the Hanford pilot-plant. In 2007, the project's fifth full-scale test, called FS-38D (also known as the Integrated Dryer Melter Test, or IDMT), was performed. This test had three primary objectives: 1) Demonstrate the simultaneous and integrated operation of the ICV™ melter with a 10,000-liter dryer, 2) Demonstrate the effectiveness of a new feed reformulation and change in process methodology towards reducing the production and migration of molten ionic salts (MIS), and 3) Demonstrate that an acceptable glass product is produced under these conditions. Testing was performed from August 8 to 17, 2007. Process and analytical results demonstrated that the primary test objectives, along with a dozen supporting objectives, were successfully met. Glass performance exceeded all disposal performance criteria. A previous issue with MIS containment was successfully resolved in FS-38D, and the ICV™ melter was integrated with a full-scale, 10,000-liter dryer. This paper describes the rationale for performing the test, the purpose and outcome of scale-up tests preceding it, and the performance and outcome of FS-38D. (authors)

  5. State machine analysis of sensor data from dynamic processes

    DOEpatents

    Cook, William R.; Brabson, John M.; Deland, Sharon M.

    2003-12-23

    A state machine model analyzes sensor data from dynamic processes at a facility to identify the actual processes that were performed at the facility during a period of interest for the purpose of remote facility inspection. An inspector can further input the expected operations into the state machine model and compare the expected, or declared, processes to the actual processes to identify undeclared processes at the facility. The state machine analysis enables the generation of knowledge about the state of the facility at all levels, from location of physical objects to complex operational concepts. Therefore, the state machine method and apparatus may benefit any agency or business with sensored facilities that store or manipulate expensive, dangerous, or controlled materials or information.
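
    A minimal sketch of the declared-versus-actual comparison described above, assuming a hypothetical transition table and sensor events; none of the states or events are from the patent.

        # Replay sensor events through a transition table to infer the
        # operations actually performed, then compare with declared ones.
        TRANSITIONS = {
            ("idle", "door_open"): "material_access",
            ("material_access", "scale_active"): "weighing",
            ("weighing", "door_closed"): "idle",
        }

        def infer_operations(events, start="idle"):
            state, visited = start, []
            for ev in events:
                state = TRANSITIONS.get((state, ev), state)  # ignore irrelevant events
                visited.append(state)
            return visited

        declared = ["material_access", "weighing", "idle"]
        actual = infer_operations(["door_open", "scale_active", "door_closed"])
        print([s for s in actual if s not in declared])  # [] -> nothing undeclared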

  6. Solar-powered Rankine heat pump for heating and cooling

    NASA Technical Reports Server (NTRS)

    Rousseau, J.

    1978-01-01

    The design, operation and performance of a family of solar heating and cooling systems are discussed. The systems feature a reversible heat pump operating with R-11 as the working fluid and using a motor-driven centrifugal compressor. In the cooling mode, solar energy provides the heat source for a Rankine power loop. The system is operational with heat source temperatures ranging from 155 to 220 F; the estimated coefficient of performance is 0.7. In the heating mode, the vapor-cycle heat pump processes solar energy collected at low temperatures (40 to 80 F). The speed of the compressor can be adjusted so that the heat pump capacity matches the load, allowing a seasonal coefficient of performance of about 8 to be attained.

  7. Cost-performance analysis of nutrient removal in a full-scale oxidation ditch process based on kinetic modeling.

    PubMed

    Li, Zheng; Qi, Rong; Wang, Bo; Zou, Zhe; Wei, Guohong; Yang, Min

    2013-01-01

    A full-scale oxidation ditch process for treating sewage was simulated with the ASM2d model and optimized for minimal cost with acceptable performance in terms of ammonium and phosphorus removal. A unified index was introduced by integrating operational costs (aeration energy and sludge production) with effluent violations for performance evaluation. Scenario analysis showed that, in comparison with the baseline (all of the 9 aerators activated), the strategy of activating 5 aerators could save aeration energy significantly with an ammonium violation below 10%. Sludge discharge scenario analysis showed that a sludge discharge flow of 250-300 m3/day (solid retention time (SRT), 13-15 days) was appropriate for the enhancement of phosphorus removal without excessive sludge production. The proposed optimal control strategy was: activating 5 rotating disks operated with a mode of "111100100" ("1" represents activation and "0" represents inactivation) for aeration and sludge discharge flow of 200 m3/day (SRT, 19 days). Compared with the baseline, this strategy could achieve ammonium violation below 10% and TP violation below 30% with substantial reduction of aeration energy cost (46%) and minimal increment of sludge production (< 2%). This study provides a useful approach for the optimization of process operation and control.
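
    The scenario scoring described above can be sketched as follows; the unified-index formula and its weights are illustrative assumptions rather than the paper's calibrated ASM2d model, but the aeration mask notation follows the abstract.

        # Score operating scenarios: aeration cost scales with active aerators
        # in the mask ("1" = on), and effluent violations are penalized.
        def unified_index(aeration_mask, sludge_m3_per_day, violation_frac,
                          w_air=1.0, w_sludge=0.01, w_viol=10.0):
            n_active = aeration_mask.count("1")
            return (w_air * n_active
                    + w_sludge * sludge_m3_per_day
                    + w_viol * violation_frac)

        baseline = unified_index("111111111", 200, 0.05)  # all 9 aerators on
        proposed = unified_index("111100100", 200, 0.09)  # 5 aerators, <10% violation
        print(proposed < baseline)                        # True: fewer aerators wins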

  8. Criticality Safety Evaluation of Standard Criticality Safety Requirements #1-520 g Operations in PF-4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamanaka, Alan Joseph Jr.

    Guidance has been requested from the Nuclear Criticality Safety Division (NCSD) regarding processes that involve 520 grams of fissionable material or less. This Level-3 evaluation was conducted and documented in accordance with NCS-AP-004 (Ref. 1), formerly NCS-GUIDE-01. This evaluation is being written as a generic evaluation for all operations that will be able to operate using a 520-gram mass limit. Implementation for specific operations will be performed using a Level 1 CSED, which will confirm and document that this CSED can be used for the specific operation as discussed in NCS-MEMO-17-007 (Ref. 2). This Level 3 CSED updates and supersedes the analysis performed in NCS-TECH-14-014 (Ref. 3).

  9. The Alaska SAR processor - Operations and control

    NASA Technical Reports Server (NTRS)

    Carande, Richard E.

    1989-01-01

    The Alaska SAR (synthetic-aperture radar) Facility (ASF) will be capable of receiving, processing, archiving, and producing a variety of SAR image products from three satellite-borne SARs: E-ERS-1 (ESA), J-ERS-1 (NASDA) and Radarsat (Canada). Crucial to the success of the ASF is the Alaska SAR processor (ASP), which will be capable of processing over 200 100-km x 100-km (Seasat-like) frames per day from the raw SAR data, at a ground resolution of about 30 m x 30 m. The processed imagery is of high geometric and radiometric accuracy, and is geolocated to within 500 m. Special-purpose hardware has been designed to execute a SAR processing algorithm to achieve this performance. This hardware is currently undergoing acceptance testing for delivery to the University of Alaska. Particular attention has been devoted to making the operations semi-automated and to providing a friendly operator interface via a computer workstation. The operations and control of the Alaska SAR processor are described.

  10. Cooling Performance Analysis of the Primary Cooling System of Reactor TRIGA-2000 Bandung

    NASA Astrophysics Data System (ADS)

    Irianto, I. D.; Dibyo, S.; Bakhri, S.; Sunaryo, G. R.

    2018-02-01

    The conversion of the reactor fuel type affects the heat transfer from the reactor core to the cooling system. This conversion results in changes to the cooling system performance and to the operating and design parameters of key components of the reactor coolant system, especially the primary cooling system. The operating parameters of the primary cooling system of the TRIGA 2000 Bandung reactor were calculated using the ChemCad Package 6.1.4, based on mass and energy balances over each coolant flow path and unit component. The calculated outputs are the temperature, pressure, and flow rate of the coolant used in the cooling process. The simulation results for the primary cooling system indicate that if it operates with a single pump (coolant mass flow rate of 60 kg/s), the reactor inlet and outlet temperatures are 32.2 °C and 40.2 °C, respectively; if it operates with two pumps at 75% capacity (coolant mass flow rate of 90 kg/s), they are 32.9 °C and 38.2 °C. Both configurations qualify, since the primary coolant temperature stays below the permitted limit of 49.0 °C.
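
    As a quick cross-check, the reported temperature rises are consistent with a roughly 2 MW core heat load under a simple steady-state heat balance Q = ṁ · cp · ΔT, assuming liquid water with cp ≈ 4186 J/(kg·K):

        # Heat balance check for the two reported pump configurations.
        CP_WATER = 4186.0  # J/(kg.K), assumed constant over 32-41 degC

        def core_heat_mw(m_dot_kg_s, t_in_c, t_out_c):
            return m_dot_kg_s * CP_WATER * (t_out_c - t_in_c) / 1e6

        print(core_heat_mw(60.0, 32.2, 40.2))  # ~2.01 MW, one pump
        print(core_heat_mw(90.0, 32.9, 38.2))  # ~2.00 MW, two pumps at 75%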

  11. Specialized operating room for cesarean section in the perinatal care unit: a review of the opening process and operating room management.

    PubMed

    Kasagi, Yoshihiro; Okutani, Ryu; Oda, Yutaka

    2015-02-01

    We have opened an operating room in the perinatal care unit (PNCU), separate from our existing central operating rooms, to be used exclusively for cesarean sections. The purpose is to meet the increasing need for both emergency cesarean sections and non-obstetric surgeries. It is equipped with the same surgical instruments, anesthesia machine, monitoring system, rapid infusion system and airway devices as the central operating rooms. An anesthesiologist and a nurse from the central operating rooms trained the nurses working in the new operating room, and discussed solutions to numerous problems that arose before and after its opening. Currently most of the elective and emergency cesarean sections carried out during the daytime on weekdays are performed in the PNCU operating room. A total of 328 and 347 cesarean sections were performed in our hospital during 2011 and 2012, respectively, of which 192 (55.5 %) and 254 (73.2 %) were performed in the PNCU operating room. The mean occupancy rate of the central operating rooms also increased from 81 % in 2011 to 90 % in 2012. The PNCU operating room was built with the support of motivated personnel and multidisciplinary teamwork, and has been found to be beneficial for both surgeons and anesthesiologists, while it also contributes to hospital revenue.

  12. Introducing a laparoscopic simulation training and credentialing program in gynaecology: an observational study.

    PubMed

    Janssens, Sarah; Beckmann, Michael; Bonney, Donna

    2015-08-01

    Simulation training in laparoscopic surgery has been shown to improve surgical performance. To describe the implementation of a laparoscopic simulation training and credentialing program for gynaecology registrars. A pilot program consisting of protected, supervised laparoscopic simulation time, a tailored curriculum and a credentialing process, was developed and implemented. Quantitative measures assessing simulated surgical performance were measured over the simulation training period. Laparoscopic procedures requiring credentialing were assessed for both the frequency of a registrar being the primary operator and the duration of surgery and compared to a presimulation cohort. Qualitative measures regarding quality of surgical training were assessed pre- and postsimulation. Improvements were seen in simulated surgical performance in efficiency domains. Operative time for procedures requiring credentialing was reduced by 12%. Primary operator status in the operating theatre for registrars was unchanged. Registrar assessment of training quality improved. The introduction of a laparoscopic simulation training and credentialing program resulted in improvements in simulated performance, reduced operative time and improved registrar assessment of the quality of training. © 2015 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.

  13. The Effects of Lower-Level Processing Skills on FL Reading Performance: Implications for Instruction.

    ERIC Educational Resources Information Center

    Koda, Keiko

    1992-01-01

    The relationship between lower-level verbal processing skills and foreign language reading proficiency was investigated with U.S. college students learning Japanese. Focus was on the specific effects of letter identification and word recognition. Findings suggest that efficient lower-level verbal processing operations are essential in foreign…

  14. 40 CFR 63.1260 - Reporting requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... process vents as required in § 63.1257(d)(2)(ii). (6) Data and other information supporting the determination of annual average concentrations by process simulation as required in § 63.1257(e)(1)(ii). (7... must be performed while a process with a vent subject to § 63.1254(a)(3) will be operating. (g...

  15. Processing Fluency in Education: How Metacognitive Feelings Shape Learning, Belief Formation, and Affect

    ERIC Educational Resources Information Center

    Reber, Rolf; Greifeneder, Rainer

    2017-01-01

    Processing fluency--the experienced ease with which a mental operation is performed--has attracted little attention in educational psychology, despite its relevance. The present article reviews and integrates empirical evidence on processing fluency that is relevant to school education. Fluency is important, for instance, in learning,…

  16. Early Benchmarks of Product Generation Capabilities of the GOES-R Ground System for Operational Weather Prediction

    NASA Astrophysics Data System (ADS)

    Kalluri, S. N.; Haman, B.; Vititoe, D.

    2014-12-01

    The ground system under development for the Geostationary Operational Environmental Satellite-R (GOES-R) series of weather satellites has completed a key milestone in implementing the science algorithms that process raw sensor data into higher level products in preparation for launch. Real-time observations from GOES-R are expected to make significant contributions to Earth and space weather prediction, and there are stringent requirements to produce weather products at very low latency to meet NOAA's operational needs. Simulated test data from all six GOES-R sensors are being processed by the system to test and verify performance of the fielded system. Early results show that the system development is on track to meet functional and performance requirements to process science data. Comparison of science products generated by the ground system from simulated data with those generated by the algorithm developers shows close agreement among data sets, which demonstrates that the algorithms are implemented correctly. Successful delivery of products to AWIPS and the Product Distribution and Access (PDA) system from the core system demonstrates that the external interfaces are working.

  17. Evaluation of Process Performance for Sustainable Hard Machining

    NASA Astrophysics Data System (ADS)

    Rotella, Giovanna; Umbrello, Domenico; Dillon, Oscar W., Jr.; Jawahir, I. S.

    This paper aims to evaluate the sustainability performance of machining operations on through-hardened AISI 52100 steel, taking into account the impact of the material removal process in its various aspects. Experiments were performed under dry and cryogenic cooling conditions using chamfered cubic boron nitride (CBN) tool inserts at varying cutting parameters (cutting speed and feed rate). Cutting forces, mechanical power, tool wear, white layer thickness, surface roughness and residual stresses were investigated in order to evaluate the effects of extreme in-process cooling on the machined surface. The results indicate that cryogenic cooling has the potential to be used for surface integrity enhancement for improved product life and more sustainable functional performance.

  18. The Effects of Transcranial Direct Current Stimulation (tDCS) on Multitasking Throughput Capacity

    PubMed Central

    Nelson, Justin; McKinley, Richard A.; Phillips, Chandler; McIntire, Lindsey; Goodyear, Chuck; Kreiner, Aerial; Monforton, Lanie

    2016-01-01

    Background: Multitasking has become an integral attribute associated with military operations within the past several decades. As the amount of information that needs to be processed during these high level multitasking environments exceeds the human operators' capabilities, the information throughput capacity reaches an asymptotic limit. At this point, the human operator can no longer effectively process and respond to the incoming information resulting in a plateau or decline in performance. The objective of the study was to evaluate the efficacy of a non-invasive brain stimulation technique known as transcranial direct current stimulation (tDCS) applied to a scalp location over the left dorsolateral prefrontal cortex (lDLPFC) to improve information processing capabilities during a multitasking environment. Methods: The study consisted of 20 participants from Wright-Patterson Air Force Base (16 male and 4 female) with an average age of 31.1 (SD = 4.5). Participants were randomly assigned into two groups, each consisting of eight males and two females. Group one received 2 mA of anodal tDCS and group two received sham tDCS over the lDLPFC on their testing day. Results: The findings indicate that anodal tDCS significantly improves the participants' information processing capability resulting in improved performance compared to sham tDCS. For example, the multitasking throughput capacity for the sham tDCS group plateaued near 1.0 bits/s at the higher baud input (2.0 bits/s) whereas the anodal tDCS group plateaued near 1.3 bits/s. Conclusion: The findings provided new evidence that tDCS has the ability to augment and enhance multitasking capability in a human operator. Future research should be conducted to determine the longevity of the enhancement of transcranial direct current stimulation on multitasking performance, which has yet to be accomplished. PMID:27965553

  19. The Effects of Transcranial Direct Current Stimulation (tDCS) on Multitasking Throughput Capacity.

    PubMed

    Nelson, Justin; McKinley, Richard A; Phillips, Chandler; McIntire, Lindsey; Goodyear, Chuck; Kreiner, Aerial; Monforton, Lanie

    2016-01-01

    Background: Multitasking has become an integral attribute associated with military operations within the past several decades. As the amount of information that needs to be processed during these high level multitasking environments exceeds the human operators' capabilities, the information throughput capacity reaches an asymptotic limit. At this point, the human operator can no longer effectively process and respond to the incoming information resulting in a plateau or decline in performance. The objective of the study was to evaluate the efficacy of a non-invasive brain stimulation technique known as transcranial direct current stimulation (tDCS) applied to a scalp location over the left dorsolateral prefrontal cortex (lDLPFC) to improve information processing capabilities during a multitasking environment. Methods: The study consisted of 20 participants from Wright-Patterson Air Force Base (16 male and 4 female) with an average age of 31.1 (SD = 4.5). Participants were randomly assigned into two groups, each consisting of eight males and two females. Group one received 2 mA of anodal tDCS and group two received sham tDCS over the lDLPFC on their testing day. Results: The findings indicate that anodal tDCS significantly improves the participants' information processing capability resulting in improved performance compared to sham tDCS. For example, the multitasking throughput capacity for the sham tDCS group plateaued near 1.0 bits/s at the higher baud input (2.0 bits/s) whereas the anodal tDCS group plateaued near 1.3 bits/s. Conclusion: The findings provided new evidence that tDCS has the ability to augment and enhance multitasking capability in a human operator. Future research should be conducted to determine the longevity of the enhancement of transcranial direct current stimulation on multitasking performance, which has yet to be accomplished.

  20. The simulation study on optical target laser active detection performance

    NASA Astrophysics Data System (ADS)

    Li, Ying-chun; Hou, Zhao-fei; Fan, Youchen

    2014-12-01

    Based on the working principle of a laser active detection system, this paper establishes an optical target laser active detection simulation system and carries out a simulation study of the system's detection process and detection performance. Performance models include laser emission, laser propagation in the atmosphere, reflection from the optical target, the receiver detection system, and signal processing and recognition. The analysis focuses on modeling the relationship between the laser emission angle, the defocus amount, and the "cat eye" effect echo reflected from the optical target. Performance indices of the system, such as operating range, SNR and detection probability, are then simulated. The parameters of the laser emission, the reflection from the optical target, and the laser propagation in the atmosphere strongly influence the performance of the optical target laser active detection system. Finally, using object-oriented software design methods, a laser active detection simulation platform with an open architecture, complete functionality and an operating interface is realized; it simulates the process by which the detection system detects and recognizes the optical target, performs the performance simulation of each subsystem, and generates data reports and graphs. The visible simulation process makes the performance models of the laser active detection system more intuitive, and the simulation data obtained from the system provide a reference for adjusting system parameters. The work provides theoretical and technical support for the top-level design of optical target laser active detection systems and for performance index optimization.
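
    For orientation, the kind of range model such a simulation chain needs can be sketched with a simplified laser radar range equation for a point target with two-way atmospheric loss; this textbook form and its constants are illustrative, not the paper's models, which additionally treat the "cat eye" echo enhancement of retroreflecting optics.

        import math

        def echo_power(p_t, sigma, a_r, r, alpha):
            """Received power for a point target of cross-section sigma at range r."""
            t_atm = math.exp(-2.0 * alpha * r)  # two-way atmospheric transmittance
            return p_t * sigma * a_r * t_atm / ((4.0 * math.pi) ** 2 * r ** 4)

        # Example: 1 kW peak power, 0.1 m^2 cross-section, 0.01 m^2 receiver
        # aperture, 0.1 /km extinction, target at 2 km (all values assumed).
        print(echo_power(1e3, 0.1, 0.01, 2000.0, 1e-4))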

  1. Can spectro-temporal complexity explain the autistic pattern of performance on auditory tasks?

    PubMed

    Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter

    2006-01-01

    To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, available behavioural, ERP and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism were reviewed. Tasks involving simple material (pure tones) and/or low-level operations (detection, labelling, chord disembedding, detection of pitch changes) show a superior level of performance and shorter ERP latencies. In contrast, tasks involving spectrally- and temporally-dynamic material and/or complex operations (evaluation, attention) are poorly performed by autistics, or generate inferior ERP activity or brain activation. The neural complexity required to perform auditory tasks may therefore explain the pattern of performance and activation of autistic individuals during auditory tasks.

  2. Augmenting team cognition in human-automation teams performing in complex operational environments.

    PubMed

    Cuevas, Haydee M; Fiore, Stephen M; Caldwell, Barrett S; Strater, Laura

    2007-05-01

    There is a growing reliance on automation (e.g., intelligent agents, semi-autonomous robotic systems) to effectively execute increasingly cognitively complex tasks. Successful team performance for such tasks has become even more dependent on team cognition, addressing both human-human and human-automation teams. Team cognition can be viewed as the binding mechanism that produces coordinated behavior within experienced teams, emerging from the interplay between each team member's individual cognition and team process behaviors (e.g., coordination, communication). In order to better understand team cognition in human-automation teams, team performance models need to address issues surrounding the effect of human-agent and human-robot interaction on critical team processes such as coordination and communication. Toward this end, we present a preliminary theoretical framework illustrating how the design and implementation of automation technology may influence team cognition and team coordination in complex operational environments. Integrating constructs from organizational and cognitive science, our proposed framework outlines how information exchange and updating between humans and automation technology may affect lower-level (e.g., working memory) and higher-level (e.g., sense making) cognitive processes as well as teams' higher-order "metacognitive" processes (e.g., performance monitoring). Issues surrounding human-automation interaction are discussed and implications are presented within the context of designing automation technology to improve task performance in human-automation teams.

  3. Heat for film processing from solar energy

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Report describes solar water heating system for laboratory in Mill Valley, California. System furnishes 59 percent of hot water requirements for photographic film processing. Text of report discusses system problems and modifications, analyzes performance and economics, and supplies drawings and operation/maintenance manual.

  4. Launch processing system transition from development to operation

    NASA Technical Reports Server (NTRS)

    Paul, H. C.

    1977-01-01

    The Launch Processing System has been under development at Kennedy Space Center since 1973. A prototype system was developed and delivered to Marshall Space Flight Center for Solid Rocket Booster checkout in July 1976. The first production hardware arrived in late 1976. The System uses a distributed computer network for command and monitoring and is supported by a dual large scale computer system for 'off line' processing. A high level of automation is anticipated for Shuttle and Payload testing and launch operations to gain the advantages of short turnaround capability, repeatability of operations, and minimization of operations and maintenance (O&M) manpower. Learning how to efficiently apply the system is our current problem. We are searching for more effective ways to convey LPS system performance characteristics from the designer to a large number of users. Once we have done this, we can realize the advantages of LPS system design.

  5. Strengthening the revenue cycle: a 4-step method for optimizing payment.

    PubMed

    Clark, Jonathan J

    2008-10-01

    Four steps for enhancing the revenue cycle to ensure optimal payment are: (1) establish key performance indicator dashboards in each department that compare current with targeted performance; (2) create proper organizational structures for each department; (3) ensure that high-performing leaders are hired in all management and supervisory positions; and (4) implement efficient processes in underperforming operations.

  6. ASIC For Complex Fixed-Point Arithmetic

    NASA Technical Reports Server (NTRS)

    Petilli, Stephen G.; Grimm, Michael J.; Olson, Erlend M.

    1995-01-01

    Application-specific integrated circuit (ASIC) performs 24-bit, fixed-point arithmetic operations on arrays of complex-valued input data. High-performance, wide-band arithmetic logic unit (ALU) designed for use in computing fast Fourier transforms (FFTs) and for performing digital filtering functions. Other applications include general computations involved in analysis of spectra and digital signal processing.

  7. Research to Operations: The Critical Transition

    NASA Technical Reports Server (NTRS)

    Fogarty, Jennifer A.

    2009-01-01

    The Space Life Sciences Directorate (SLSD) specializes in transitioning technology and knowledge to medical operations. This activity encompasses funding a spectrum of research and technology efforts, such as understanding fundamental biological mechanisms altered by microgravity and executing technology watches for state-of-the-art diagnostic imaging equipment. This broad-spectrum approach to fulfilling the need to protect crewmember health and performance during long and short duration missions to the International Space Station, moon and Mars is made possible by having a line of sight between research and operations. Currently, SLSD's line of sight is articulated in a transition to medical practice (TMP) process. This process is designed to shepherd information and knowledge gained through fundamental and mechanistic research toward the development of an operational solution such as pre-flight selection criteria; an in-flight countermeasure, monitoring capability or treatment; or a post-flight reconditioning program. The TMP process is also designed to assist with the customization of mature hardware or technology for NASA-specific use. The benefits of this process are that the concept of operational usability is interjected early in the research, design, or acquisition phase, and stakeholders are involved early to identify requirements and are also periodically asked to assess the requirements compliance of research or technology development projects. Currently a device known as the Actiwatch is being assessed for the final transition to operational use. Specific examples of research-to-operations transition successes help to illustrate the process and bolster communication between the research and medical operations communities.

  8. Planar-Processed Polymer Transistors.

    PubMed

    Xu, Yong; Sun, Huabin; Shin, Eul-Yong; Lin, Yen-Fu; Li, Wenwu; Noh, Yong-Young

    2016-10-01

    Planar-processed polymer transistors are proposed where the effective charge injection and the split unipolar charge transport are all on the top surface of the polymer film, showing ideal device characteristics with unparalleled performance. This technique provides a great solution to the problem of fabrication limitations, the ambiguous operating principle, and the performance improvements in practical applications of conjugated-polymer transistors. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. "Chemical transformers" from nanoparticle ensembles operated with logic.

    PubMed

    Motornov, Mikhail; Zhou, Jian; Pita, Marcos; Gopishetty, Venkateshwarlu; Tokarev, Ihor; Katz, Evgeny; Minko, Sergiy

    2008-09-01

    The pH-responsive nanoparticles were coupled with information-processing enzyme-based systems to yield "smart" signal-responsive hybrid systems with built-in Boolean logic. The enzyme systems performed AND/OR logic operations, transducing biochemical input signals into reversible structural changes (signal-directed self-assembly) of the nanoparticle assemblies, thus resulting in the processing and amplification of the biochemical signals. The hybrid system mimics biological systems in effective processing of complex biochemical information, resulting in reversible changes of the self-assembled structures of the nanoparticles. The bioinspired approach to the nanostructured morphing materials could be used in future self-assembled molecular robotic systems.
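
    A toy sketch of the signal flow described above, assuming that a gate output of 1 acidifies the medium and triggers particle assembly; the gate wiring and threshold behavior are illustrative, not the paper's chemistry.

        # Boolean gates on biochemical inputs drive the particle assembly state.
        def and_gate(a, b): return a and b
        def or_gate(a, b): return a or b

        def particle_state(gate_output):
            return "aggregated" if gate_output else "dispersed"

        for inputs in [(0, 0), (0, 1), (1, 1)]:
            print(inputs,
                  particle_state(and_gate(*inputs)),
                  particle_state(or_gate(*inputs)))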

  10. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, Richard B.; Gross, Kenneth C.; Wegerich, Stephan W.

    1998-01-01

    A method and system for performing surveillance of transient signals of an industrial device to ascertain the operating state. The method and system involve the steps of reading training data into a memory and determining neural network weighting values until achieving target outputs close to the neural network output. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs; signals characteristic of an industrial process are then provided and the neural network output is compared to the industrial process signals to evaluate the operating state of the industrial process.

  11. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain the operating state. The method and system involve the steps of reading training data into a memory and determining neural network weighting values until achieving target outputs close to the neural network output. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs; signals characteristic of an industrial process are then provided and the neural network output is compared to the industrial process signals to evaluate the operating state of the industrial process. 33 figs.
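
    A hedged sketch of the surveillance idea common to the two records above: a model trained on nominal signals is compared against live process signals, and a large residual flags an off-normal operating state. The trivial mean model and fixed threshold below stand in for the patents' neural network and wavelet machinery.

        # Train a stand-in model on nominal data, then flag large residuals.
        def train_mean(training_signals):
            return sum(training_signals) / len(training_signals)

        def surveil(model_mean, live_signals, threshold=3.0):
            return ["anomalous" if abs(x - model_mean) > threshold else "nominal"
                    for x in live_signals]

        nominal = train_mean([10.1, 9.8, 10.0, 10.2])
        print(surveil(nominal, [10.1, 9.9, 14.5]))  # ['nominal', 'nominal', 'anomalous']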

  12. Bench-Scale Process for Low-Cost Carbon Dioxide (CO2) Capture Using a Phase-Changing Absorbent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westendorf, Tiffany; Caraher, Joel; Chen, Wei

    2015-03-31

    The objective of this project is to design and build a bench-scale process for a novel phase-changing aminosilicone-based CO2-capture solvent. The project will establish scalability and technical and economic feasibility of using a phase-changing CO2-capture absorbent for post-combustion capture of CO2 from coal-fired power plants with 90% capture efficiency and 95% CO2 purity at a cost of $40/tonne of CO2 captured by 2025 and a cost of <$10/tonne of CO2 captured by 2035. In the first budget period of this project, the bench-scale phase-changing CO2 capture process was designed using data and operating experience generated under a previous project (ARPA-E project DE-AR0000084). Sizing and specification of all major unit operations was completed, including detailed process and instrumentation diagrams. The system was designed to operate over a wide range of operating conditions to allow for exploration of the effect of process variables on CO2 capture performance.

  13. How to pass a sensor acceptance test: using the gap between acceptance criteria and operational performance

    NASA Astrophysics Data System (ADS)

    Bijl, Piet

    2016-10-01

    When acquiring a new imaging system for which operational task performance is a critical factor for success, it is necessary to specify minimum acceptance requirements that need to be met, using a sensor performance model and/or performance tests. Currently, there exists a variety of models and tests from different origins (defense, security, road safety, optometry), and they all make different predictions. This study reviews a number of frequently used methods and shows the effects that small changes in procedure or threshold criteria can have on the outcome of a test. For example, a system may meet the acceptance requirements but not satisfy the needs for the operational task, or the choice of test may determine the rank order of candidate sensors. The goal of the paper is to make people aware of the pitfalls associated with the acquisition process, by i) illustrating potential tricks to have a system accepted that is actually not suited for the operational task, and ii) providing tips to avoid this unwanted situation.

  14. Long-term performance evaluation of EBPR process in tropical climate: start-up, process stability, and the effect of operational pH and influent C:P ratio.

    PubMed

    Ong, Y H; Chua, A S M; Lee, B P; Ngoh, G C

    2013-01-01

    To date, little information is known about the operation of the enhanced biological phosphorus removal (EBPR) process in tropical climates. Along with the global concerns on nutrient pollution and the increasing array of local regulatory requirements, the applicability and compliance accountability of the EBPR process for sewage treatment in tropical climates is being evaluated. A sequencing batch reactor (SBR) inoculated with seed sludge from a conventional activated sludge (CAS) process was successfully acclimatized to EBPR conditions at 28 °C after 13 days' operation. Enrichment of Candidatus Accumulibacter phosphatis in the SBR was confirmed through fluorescence in situ hybridization (FISH). The effects of operational pH and influent C:P ratio on EBPR were then investigated. At pH 7 or pH 8, phosphorus removal rates of the EBPR processes were relatively higher when operated at a C:P ratio of 3 than at a C:P ratio of 10, with 0.019-0.020 and 0.011-0.012 g-P/g-MLVSS·day respectively. One-year operation of the 28 °C EBPR process at a C:P ratio of 3 and pH 8 demonstrated a stable phosphorus removal rate of 0.020 ± 0.003 g-P/g-MLVSS·day, corresponding to effluent with a phosphorus concentration <0.5 mg/L. This study provides the first evidence of good EBPR activity at relatively high temperature, indicating its applicability in a tropical climate.

  15. A Practice-Oriented Bifurcation Analysis for Pulse Energy Converters. Part 2: An Operating Regime

    NASA Astrophysics Data System (ADS)

    Kolokolov, Yury; Monovskaya, Anna

    The paper continues the discussion of bifurcation analysis for practice-oriented solutions in pulse energy conversion systems (PEC-systems). Since a PEC-system represents a nonlinear object with a variable structure, the description of its dynamics evolution involves bifurcation analysis conceptions. This makes it necessary to resolve the conflict between the notions used to describe natural evolution (i.e. evolution of the operating process towards nonoperating processes and vice versa) and the notions used to describe a desirable artificial regime (i.e. an operating regime). We consider cause-effect relations in the following sequence: nonlinear dynamics, output signal, operating characteristics, where these characteristics include stability and performance. Regularities of nonlinear dynamics should then be translated into regularities of the output signal dynamics and, after that, into an evolutional picture of each operating characteristic. In order to make the translation without losses, we first take into account heterogeneous properties within the structures of the operating process in the parametrical (P-) and phase (X-) spaces, and analyze regularities of the operating stability and performance on a common basis by use of the modified bifurcation diagrams built in the joint PX-space. The correspondence between causes (degradation of the operating process stability) and effects (changes of the operating characteristics) is then decomposed into three groups of abnormalities: conditionally unavoidable abnormalities (CU-abnormalities), conditionally probable abnormalities (CP-abnormalities), and conditionally regular abnormalities (CR-abnormalities). Within each of these groups the evolutional homogeneity is retained. The resultant evolution of each operating characteristic is then naturally aggregated through the superposition of cause-effect relations in accordance with each of the abnormalities. We demonstrate that practice-oriented bifurcation analysis has fundamentally specific purposes and tools, distinct from those of computer-based bifurcation analysis and experimental bifurcation analysis; from our viewpoint, it therefore seems to be a rather novel direction in the general context of bifurcation analysis conceptions. We believe that the discussion could be of interest for pioneering research intended for the design of promising systems of pulse energy conversion.

  16. Freeway performance measurement system : an operational analysis tool

    DOT National Transportation Integrated Search

    2001-07-30

    PeMS is a freeway performance measurement system for all of California. It processes 2 GB/day of 30-second loop detector data in real time to produce useful information. Managers at any time can have a uniform and comprehensive assessment of fre...

  17. 10 CFR 71.121 - Internal inspection.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... performed for each work operation where necessary to assure quality. If direct inspection of processed... REGULATORY COMMISSION (CONTINUED) PACKAGING AND TRANSPORTATION OF RADIOACTIVE MATERIAL Quality Assurance § 71... execute a program for inspection of activities affecting quality by or for the organization performing the...

  18. 10 CFR 71.121 - Internal inspection.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... performed for each work operation where necessary to assure quality. If direct inspection of processed... REGULATORY COMMISSION (CONTINUED) PACKAGING AND TRANSPORTATION OF RADIOACTIVE MATERIAL Quality Assurance § 71... execute a program for inspection of activities affecting quality by or for the organization performing the...

  19. 10 CFR 71.121 - Internal inspection.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... performed for each work operation where necessary to assure quality. If direct inspection of processed... REGULATORY COMMISSION (CONTINUED) PACKAGING AND TRANSPORTATION OF RADIOACTIVE MATERIAL Quality Assurance § 71... execute a program for inspection of activities affecting quality by or for the organization performing the...

  20. 10 CFR 71.121 - Internal inspection.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... performed for each work operation where necessary to assure quality. If direct inspection of processed... REGULATORY COMMISSION (CONTINUED) PACKAGING AND TRANSPORTATION OF RADIOACTIVE MATERIAL Quality Assurance § 71... execute a program for inspection of activities affecting quality by or for the organization performing the...

  1. 10 CFR 71.121 - Internal inspection.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... performed for each work operation where necessary to assure quality. If direct inspection of processed... REGULATORY COMMISSION (CONTINUED) PACKAGING AND TRANSPORTATION OF RADIOACTIVE MATERIAL Quality Assurance § 71... execute a program for inspection of activities affecting quality by or for the organization performing the...

  2. Bench-scale performance testing and economic analyses of electrostatic dry coal cleaning. Final report, October 1980-July 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rich, S.R.

    1987-02-01

    The report gives results of preliminary performance evaluations and economic analyses of the Advanced Energy Dynamics (AED) electrostatic dry coal-cleaning process. Grab samples of feed and product coals were obtained from 25 operating physical coal-cleaning (PCC) plants. These samples were analyzed for ash, sulfur, and energy content, and splits of the original samples of feed run-of-mine coal were provided for bench-scale testing in an electrostatic separation apparatus. The process showed superior sulfur-removal performance at equivalent cost and energy-recovery levels. The ash-removal capability of the process was not evaluated completely; overall, ash-removal results indicated that the process did not perform as well as the PCC plants.

  3. Hydrogen Production via a High-Efficiency Low-Temperature Reformer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul KT Liu; Theo T. Tsotsis

    2006-05-31

    Fuel cells are promoted by the US government as a viable alternative for clean and efficient energy generation. It is anticipated that the fuel cell market will rise if the key technical barriers can be overcome. One of them is certainly fuel processing and purification. Existing fuel reforming processes are energy intensive, extremely complicated and capital intensive; these disadvantages handicap the scale-down of existing reforming processes targeting distributed or on-board/stationary hydrogen production applications. Our project involves the bench-scale demonstration of a high-efficiency low-temperature steam reforming process. Hydrogen production can be operated at 350 to 400°C with our invention, as opposed to >800°C for existing reforming. In addition, our proposed process improves the start-up deficiency of conventional reforming due to its low temperature operation. The objective of this project is to demonstrate the invented process concept via a bench scale unit and verify mathematical simulation for future process optimization study. Under this project, we have performed the experimental work to determine the adsorption isotherm, reaction kinetics, and membrane permeances required to perform the process simulation based upon the mathematical model developed by us. A ceramic membrane coated with palladium thin film fabricated by us was employed in this study. The adsorption isotherm for a selected hydrotalcite adsorbent was determined experimentally. Further, the capacity loss under cyclic adsorption/desorption was confirmed to be negligible. Finally a commercial steam reforming catalyst was used to produce the reaction kinetic parameters required for the proposed operating condition. With these input parameters, a mathematical simulation was performed to predict the performance of the invented process. According to our simulation, our invented hybrid process can deliver 35 to 55% methane conversion, in comparison with the 12 and 18-21% conversion of the packed bed and an adsorptive reactor respectively. In addition, CO contamination of <10 to 120 ppm is predicted for the invented process depending upon the cycle time for the PSA type operation. In comparison, the adsorption reactor can also deliver a similar CO contaminant level at the low end; however, its high end reaches as high as 300 ppm based upon the simulation of our proposed operating condition. Our experimental results for the packed bed and the membrane reactor deliver 12 and 18% conversion at 400°C, approaching the conversion given by the mathematical simulation. Due to the time constraint, the experimental study on the conversion of the invented process has not been completed. However, our in-house study using a similar process concept for the water gas shift reaction has demonstrated the reliability of our mathematical simulation for the invented process. In summary, we are confident that the invented process can efficiently deliver high purity hydrogen at a low temperature (~400°C). According to our projection, the invented process can further achieve 5% energy savings and ~50% capital savings over conventional reforming for fuel cell applications. The pollution abatement potential associated with the implementation of fuel cells, including the elimination of nitrogen oxides and CO, and the reduction in volatile organics and CO2, can thus be realized with the implementation of this invented process. The projected total market size for equipment sales for the proposed process in the US is $1.5 billion annually.

  4. Integration of image capture and processing: beyond single-chip digital camera

    NASA Astrophysics Data System (ADS)

    Lim, SukHwan; El Gamal, Abbas

    2001-05-01

    An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high speed imaging capability of CMOS image sensors to enable new applications such as multiple capture for enhancing dynamic range and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application specific data at standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated to be able to not only perform the functions of a conventional camera system but also to perform applications such as real time optical flow estimation.
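
    The multiple-capture idea mentioned above can be sketched as follows: several exposures are taken within one frame time, and for each pixel the longest unsaturated capture is kept and rescaled to a common exposure, extending dynamic range beyond a single readout. The saturation level and sample values are illustrative, not from the paper.

        SAT = 255  # assumed sensor ADC saturation level

        def combine(captures):
            """captures: list of (exposure_time, pixel_value), shortest exposure first."""
            best = None
            for t, v in captures:
                if v < SAT:  # the longest unsaturated capture wins
                    best = v * (captures[-1][0] / t)  # rescale to longest exposure
            return best

        # Bright pixel: saturated at 4 ms and 8 ms, still valid at 1 ms.
        print(combine([(1, 200), (4, SAT), (8, SAT)]))  # 1600.0, beyond 8-bit range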

  5. Launch Processing System. [for Space Shuttle

    NASA Technical Reports Server (NTRS)

    Byrne, F.; Doolittle, G. V.; Hockenberger, R. W.

    1976-01-01

    This paper presents a functional description of the Launch Processing System, which provides automatic ground checkout and control of the Space Shuttle launch site and airborne systems, with emphasis placed on the Checkout, Control, and Monitor Subsystem. Hardware and software modular design concepts for the distributed computer system are reviewed relative to performing system tests, launch operations control, and status monitoring during ground operations. The communication network design, which uses a Common Data Buffer interface to all computers to allow computer-to-computer communication, is discussed in detail.

  6. The Impact of Pictorial Display on Operator Learning and Performance. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Miller, R. A.; Messing, L. J.; Jagacinski, R. J.

    1984-01-01

    The effects of pictorially displayed information on human learning and performance of a simple control task were investigated. The controlled system was a harmonic oscillator and the system response was displayed to subjects as either an animated pendulum or a horizontally moving dot. Results indicated that the pendulum display did not affect performance scores but did significantly affect the learning processes of individual operators. The subjects with the pendulum display demonstrated more veridical internal models early in the experiment, and the manner in which their internal models were tuned with practice showed increased variability between subjects.

  7. Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howe, Gary; Albritton, John; Denton, David

    In September 2010, RTI and the DOE/NETL signed a cooperative agreement (DE-FE000489) to design, build, and operate a pre-commercial syngas cleaning system that would capture up to 90% of the CO2 in the syngas slipstream, and demonstrate the ability to reduce syngas contaminants to meet DOE's specifications for chemical production application. This pre-commercial syngas cleaning system is operated at Tampa Electric Company's (TEC) 250-MWe integrated gasification combined cycle (IGCC) plant at Polk Power Station (PPS), located near Tampa, Florida. The syngas cleaning system consists of the following units: Warm Gas Desulfurization Process (WDP) - this unit processes a syngas flow equivalent of 50 MWe of power (50 MWe equivalent corresponds to about 2.0 MM scfh of syngas on a dry basis) to produce a desulfurized syngas with a total sulfur (H2S+COS) concentration of ~10 ppmv. Water Gas Shift (WGS) Reactor - this unit converts sufficient CO into CO2 to enable 90% capture of the CO2 in the syngas slipstream. This reactor uses conventional commercial shift catalyst technologies. Low Temperature Gas Cooling (LTGC) - this unit cools the syngas for the low temperature activated MDEA process and separates any condensed water. Activated MDEA Process (aMDEA) - this unit employs a non-selective separation for the CO2 and H2S present in the raw syngas stream. Because of the selective sulfur removal by the upstream WDP unit, the CO2 capture target of 90% can be achieved with the added benefit that total sulfur concentration in the CO2 product is <100 ppmv. An additional advantage of the activated MDEA process is that the non-selective sulfur removal from the treated syngas reduces sulfur in the treated gas to very low sub-ppmv concentrations, which are required for chemical production applications. Testing to date of this pre-commercial syngas cleaning system has shown that the technology has great potential to provide clean syngas from coal and petcoke-based gasification at increased efficiency and at significantly lower capital and operating costs than conventional syngas cleanup technologies. However, before the technology can be deemed ready for scale-up to a full commercial-scale demonstration, additional R&D testing is needed at the site to address the following critical technical risks: WDP sorbent stability and performance; impact of WDP on downstream cleanup and conversion steps; metallurgy and refractory; syngas cleanup performance and controllability; and carbon capture performance and additional syngas cleanup. The proposed plan to acquire this additional R&D data involves: operation of the units to achieve an additional 3,000 hours of operation of the system within the performance period, with a target of achieving 1,000 of those hours via continuous operation of the entire integrated pre-commercial demonstration system; rapid turnaround of repairs and/or modifications required as necessary to return any specific unit to operating status, with documentation and lessons learned to support technology maturation; and proactive performance of maintenance activities during any unplanned outages and, if possible, while operating.

  8. Improved Anomaly Detection using Integrated Supervised and Unsupervised Processing

    NASA Astrophysics Data System (ADS)

    Hunt, B.; Sheppard, D. G.; Wetterer, C. J.

    There are two broad signal processing technologies applicable to space object feature identification using nonresolved imagery: supervised processing analyzes a large set of data for common characteristics that can then be used to identify, transform, and extract information from new data taken of the same class (e.g. a support vector machine); unsupervised processing utilizes detailed physics-based models that generate comparison data that can then be used to estimate parameters presumed to be governed by the same models (e.g. estimation filters). Both processes have been used in non-resolved space object identification and yield similar results, yet they arrive at them through vastly different means. The goal of integrating the two is to achieve even greater performance by building on this process diversity. Specifically, supervised and unsupervised processing jointly operate on the analysis of brightness (radiometric flux intensity) measurements reflected by space objects and observed by a ground station, to determine whether a particular day conforms to a nominal operating mode (as determined from a training set) or exhibits anomalous behavior in which a particular parameter (e.g. attitude or solar panel articulation angle) has changed in some way. It is demonstrated in a variety of scenarios that the integrated process achieves greater performance than either of the separate processes alone.
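
    As a rough illustration of this fusion, the following Python sketch (assuming numpy and scikit-learn are available) combines a supervised one-class SVM score with a residual against a stand-in physics model; the feature set, weights, and threshold are all hypothetical.

        # Minimal sketch of supervised/unsupervised fusion for anomaly detection.
        import numpy as np
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(0)
        nominal = rng.normal(0.0, 1.0, size=(200, 8))  # nominal days, 8 brightness features
        today = rng.normal(0.5, 1.0, size=(1, 8))      # day under test

        # Supervised branch: one-class SVM trained on nominal days.
        svm = OneClassSVM(nu=0.05).fit(nominal)
        svm_score = -svm.decision_function(today)[0]   # larger => more anomalous

        # Unsupervised branch: residual against a physics-based prediction
        # (here just the nominal mean, as a placeholder model).
        residual = np.linalg.norm(today - nominal.mean(axis=0))

        # Integrated decision: weighted combination against a threshold.
        combined = 0.5 * svm_score + 0.5 * residual
        print("anomalous" if combined > 2.0 else "nominal")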

  9. Characteristics of membrane fouling in submerged membrane bioreactor under sub-critical flux operation.

    PubMed

    Su, Y C; Huang, C P; Pan, Jill R; Lee, H C

    2008-01-01

    The membrane bioreactor (MBR) process has recently become one of the novel technologies used to enhance the performance of biological wastewater treatment. The MBR process uses a membrane unit in place of a sedimentation tank, which can greatly enhance treatment performance. However, membrane fouling in MBRs restricts their widespread application because it leads to permeate flux decline, making more frequent membrane cleaning and replacement necessary, which in turn increases operating and maintenance costs. This study investigated the role of sludge characteristics in membrane fouling under sub-critical flux operation and also assessed the effect of shear stress on membrane fouling. Membrane fouling was slow under sub-critical flux operation. However, as filamentous microbes became dominant in the reactor, membrane fouling increased dramatically due to increased viscosity and polysaccharide levels. A close link was found between membrane fouling and the amount of polysaccharides in soluble EPS. The predominant resistance was the cake resistance, which could be minimized by increasing the shear stress. However, the resistance due to colloids and solutes was not appreciably reduced by increasing shear stress. Therefore, smaller particles such as macromolecules (e.g. polysaccharides) may play an important role in membrane fouling under sub-critical flux operation.
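
    The resistance picture above can be made concrete with a resistance-in-series flux estimate, J = ΔP/(μ·R_total); in the following Python sketch all numerical values are illustrative assumptions, with only the cake term responding to shear.

        # Resistance-in-series flux sketch; values are assumptions, not study data.
        dP = 30e3        # transmembrane pressure, Pa (assumed)
        mu = 1.0e-3      # permeate viscosity, Pa.s (water at ~20 C)

        R_membrane = 1.0e12   # 1/m, assumed intrinsic membrane resistance
        R_cake     = 4.0e12   # cake layer: dominant, reducible by shear
        R_colloid  = 0.8e12   # colloids/solutes: largely shear-insensitive

        for shear_factor in (1.0, 0.5):   # higher crossflow shear thins the cake
            R_total = R_membrane + shear_factor * R_cake + R_colloid
            J = dP / (mu * R_total)       # permeate flux, m3/m2/s
            print(f"shear factor {shear_factor}: flux = {J*3.6e6:.1f} L/m2/h")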

  10. High-gain 1.3 μm GaInNAs semiconductor optical amplifier with enhanced temperature stability for all-optical signal processing at 10 Gb/s.

    PubMed

    Fitsios, D; Giannoulis, G; Korpijärvi, V-M; Viheriälä, J; Laakso, A; Iliadis, N; Dris, S; Spyropoulou, M; Avramopoulos, H; Kanellos, G T; Pleros, N; Guina, M

    2015-01-01

    We report on the complete experimental evaluation of a GaInNAs/GaAs (dilute nitride) semiconductor optical amplifier that operates at 1.3 μm and exhibits 28 dB gain and a gain recovery time of 100 ps. Successful wavelength conversion operation is demonstrated using pseudorandom bit sequence (PRBS 2^7-1) non-return-to-zero bit streams at 5 and 10 Gb/s, yielding error-free performance and showing feasibility for implementation in various signal processing functionalities. The operational credentials of the device are analyzed in various operational regimes, while its nonlinear performance is examined in terms of four-wave mixing. Moreover, characterization results reveal enhanced temperature stability with almost no gain variation around the 1320 nm region for a temperature range from 20°C to 50°C. The operational characteristics of the device, along with the cost and energy benefits of dilute nitride technology, make it very attractive for application in optical access networks and dense photonic integrated circuits.

  11. Experience Transitioning Models and Data at the NOAA Space Weather Prediction Center

    NASA Astrophysics Data System (ADS)

    Berger, Thomas

    2016-07-01

    The NOAA Space Weather Prediction Center has a long history of transitioning research data and models into operations, along with the validation activities this requires. The first stage in this process involves demonstrating that the capability has sufficient value to customers to justify the cost of transitioning it and running it continuously and reliably in operations. Once the overall value is demonstrated, a substantial effort is then required to develop operational software from the research codes. The next stage is to implement and test the software and product generation on the operational computers. Finally, effort must be devoted to establishing long-term measures of performance, maintaining the software, and working with forecasters, customers, and researchers to improve the operational capabilities over time. This multi-stage process of identifying, transitioning, and improving operational space weather capabilities will be discussed using recent examples. Plans for future activities will also be described.

  12. Retrieving Historical Electrorefining Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheeler, Meagan Daniella

    Pyrochemical operations began at Los Alamos National Laboratory (LANL) in 1962 (1). Electrorefining (ER) has been run as a routine process since the 1980s. Process data from the ER operation were recorded but had never been logged in an online database, and without one, new staff members are hindered in their work by the lack of accessible information. To address this issue, a Microsoft Access database was created to collect the historical data. Records from 2000 onward were entered, and queries were created to analyze trends. These trends will aid engineering and operations staff in reaching optimal performance for the startup of the new lines.
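
    The kind of trend query such a database enables can be sketched in a few lines of Python; the table layout (run_date, yield_pct) is hypothetical, since the abstract does not describe the actual schema.

        # Hypothetical trend query over electrorefining run records.
        import pandas as pd

        # Stand-in for runs exported from the Access database.
        runs = pd.DataFrame({
            "run_date": pd.to_datetime(
                ["2000-03-01", "2000-09-12", "2001-07-15", "2002-11-20"]),
            "yield_pct": [88.5, 89.0, 90.2, 91.1],
        })

        # Yearly mean yield: the sort of query used to spot performance trends.
        trend = runs.groupby(runs["run_date"].dt.year)["yield_pct"].mean()
        print(trend)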

  13. Operating boundaries of full-scale advanced water reuse treatment plants: many lessons learned from pilot plant experience.

    PubMed

    Bele, C; Kumar, Y; Walker, T; Poussade, Y; Zavlanos, V

    2010-01-01

    Three Advanced Water Treatment Plants (AWTPs) have recently been built in South East Queensland as part of the Western Corridor Recycled Water Project (WCRWP), producing Purified Recycled Water from secondary-treated wastewater for the purpose of indirect potable reuse. At Luggage Point, a demonstration plant was primarily operated by the design team for design verification. The investigation program was then extended so that the operating team could investigate possible process optimisation and operational flexibility. Extending the demonstration plant investigation program enabled monitoring of the long-term performance of the microfiltration and reverse osmosis membranes, which did not appear to foul even after more than a year of operation. The investigation identified several ways to optimise the process and highlighted areas of risk for treated water quality, such as total nitrogen. Wide and rapid salinity swings, from 850 to 3,000 mg/L TDS, were predicted to affect day-to-day operation and monitoring of the RO process. Most of the setpoints used for monitoring under HACCP were determined during the pilot plant trials.
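
    The operational significance of those salinity swings can be seen from a simple van't Hoff estimate of feed osmotic pressure; the following Python sketch treats TDS as NaCl, which is an assumption made only for illustration.

        # Van't Hoff estimate: osmotic pressure roughly scales with salinity.
        R, T = 8.314, 298.0          # J/(mol.K), K
        M_NaCl, i = 58.44e-3, 2      # kg/mol, van't Hoff factor for NaCl (assumed)

        for tds_mg_l in (850, 3000):
            c = (tds_mg_l * 1e-3) / M_NaCl   # mol/m3 (1 mg/L = 1e-3 kg/m3)
            pi_bar = i * c * R * T / 1e5     # osmotic pressure, bar
            print(f"{tds_mg_l} mg/L TDS -> ~{pi_bar:.2f} bar osmotic pressure")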

  14. Cross-industry benchmarking: is it applicable to the operating room?

    PubMed

    Marco, A P; Hart, S

    2001-01-01

    The use of benchmarking has been growing in nonmedical industries. The concept is increasingly being applied to medicine as the industry strives to improve quality and financial performance. Benchmarks can be either internal (set by the institution) or external (using others' performance as a goal). In some industries, benchmarking has crossed industry lines to identify breakthroughs in thinking. In this article, we examine whether the airline industry can be used as a source of external process benchmarking for the operating room.

  15. Conceptual design of a piloted Mars sprint life support system

    NASA Technical Reports Server (NTRS)

    Cullingford, H. S.; Novara, M.

    1988-01-01

    This paper presents the conceptual design of a life support system sustaining a crew of six in a piloted Mars sprint. The requirements and constraints of the system are discussed along with its baseline performance parameters. An integrated operation is achieved with air, water, and waste processing and supplemental food production. The design philosophy includes maximized reliability considerations, regenerative operations, reduced expendables, and fresh harvest capability. The life support system performance is described, along with the characteristics of the associated physical-chemical subsystems and a greenhouse.

  16. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    NASA Technical Reports Server (NTRS)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
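
    To make the decomposition idea concrete, the following Python sketch evaluates a series-parallel task tree bottom-up; it uses deterministic task times (series = sum, parallel fork/join = max) as a deliberate simplification, whereas the paper's procedure substitutes queueing network submodels at each level of the hierarchy.

        # Hierarchical evaluation of a series-parallel task system (simplified).

        def response_time(node) -> float:
            """Evaluate a series-parallel task tree bottom-up."""
            kind = node[0]
            if kind == "task":                 # ("task", service_time)
                return node[1]
            children = node[1:]
            if kind == "series":               # stages run one after another
                return sum(response_time(c) for c in children)
            if kind == "parallel":             # fork/join: wait for slowest branch
                return max(response_time(c) for c in children)
            raise ValueError(kind)

        # A small task system: t1, then (t2 || (t3 ; t4)), then t5.
        system = ("series", ("task", 2.0),
                  ("parallel", ("task", 4.0),
                               ("series", ("task", 1.5), ("task", 3.0))),
                  ("task", 1.0))
        print(response_time(system))   # 2.0 + max(4.0, 4.5) + 1.0 = 7.5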

  17. Operational training for the mission operations at the Brazilian National Institute for Space Research (INPE)

    NASA Technical Reports Server (NTRS)

    Rozenfeld, Pawel

    1993-01-01

    This paper describes the selection and training process for satellite controllers and data network operators carried out at INPE's Satellite Tracking and Control Center in order to prepare them for mission operations of INPE's first satellite (SCD1). An overview of the ground control system and of the SCD1 architecture and mission is given. The different training phases are described, taking into account that the applicants had no previous knowledge of space operations, therefore requiring training that started from the basics.

  18. Cognitive consequences of clumsy automation on high workload, high consequence human performance

    NASA Technical Reports Server (NTRS)

    Cook, Richard I.; Woods, David D.; Mccolligan, Elizabeth; Howie, Michael B.

    1991-01-01

    The growth of computational power has fueled attempts to automate more of the human role in complex problem-solving domains, especially those where system faults have high consequences and where periods of high workload may saturate the performance capacity of human operators. Examples of these domains include flight decks, space stations, air traffic control, nuclear power operation, ground satellite control rooms, and surgical operating rooms. Automation efforts may have unanticipated effects on human performance, particularly if they increase the workload at peak-workload times or change the practitioners' strategies for coping with workload. Smooth and effective changes in automation require a detailed understanding of the cognitive tasks confronting the user, an approach that has been called user-centered automation. The introduction of a new computerized technology in a group of hospital operating rooms used for heart surgery was observed. The study revealed how automation, especially 'clumsy automation', affects practitioner work patterns and suggests that clumsy automation constrains users in specific and significant ways. Users tailor both the new system and their tasks in order to accommodate the needs of process and production. The study of this tailoring may prove a powerful tool for exposing previously hidden patterns of user data processing, integration, and decision making, which may, in turn, be useful in the design of more effective human-machine systems.

  19. Defining process design space for monoclonal antibody cell culture.

    PubMed

    Abu-Absi, Susan Fugett; Yang, LiYing; Thompson, Patrick; Jiang, Canping; Kandula, Sunitha; Schilling, Bernhard; Shukla, Abhinav A

    2010-08-15

    The concept of design space has been taking root as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. During mapping of the process design space, the multidimensional combination of operational variables is studied to quantify the impact on process performance in terms of productivity and product quality. An efficient methodology to map the design space for a monoclonal antibody cell culture process is described. A failure modes and effects analysis (FMEA) was used as the basis for the process characterization exercise. This was followed by an integrated study of the inoculum stage of the process, which includes progressive shake flask and seed bioreactor steps. The operating conditions for the seed bioreactor were studied together with those of the production bioreactor using a two-stage design of experiments (DOE) methodology to enable optimization of operating conditions. A two-level Resolution IV design was followed by a central composite design (CCD). These experiments enabled identification of the edge of failure and classification of the operational parameters as non-key, key, or critical. In addition, the models generated from the data provide further insight into balancing the productivity of the cell culture process with product quality considerations. Finally, process- and product-related impurity clearance was evaluated in studies linking the upstream process with downstream purification. Production bioreactor parameters that directly influence antibody charge variants and glycosylation in CHO systems were identified.
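
    The two-stage DOE described above culminates in a central composite design; as a concrete illustration, the following Python sketch constructs a face-centered CCD for two coded factors. The factor names (pH and temperature codes) and the number of center replicates are illustrative assumptions, not values from the study.

        # Face-centered central composite design for two coded factors.
        from itertools import product

        alpha, n_center = 1.0, 3    # face-centered CCD; 3 center replicates (assumed)

        factorial = list(product((-1.0, 1.0), repeat=2))         # 2-level corners
        axial = [(a, 0.0) for a in (-alpha, alpha)] + \
                [(0.0, a) for a in (-alpha, alpha)]              # star points
        center = [(0.0, 0.0)] * n_center

        design = factorial + axial + center
        for run, (x1, x2) in enumerate(design, 1):
            print(f"run {run:2d}: pH code = {x1:+.1f}, temp code = {x2:+.1f}")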

  20. Right-Brain/Left-Brain Integrated Associative Processor Employing Convertible Multiple-Instruction-Stream Multiple-Data-Stream Elements

    NASA Astrophysics Data System (ADS)

    Hayakawa, Hitoshi; Ogawa, Makoto; Shibata, Tadashi

    2005-04-01

    A very large scale integrated circuit (VLSI) architecture for a multiple-instruction-stream multiple-data-stream (MIMD) associative processor has been proposed. The processor employs an architecture that enables seamless switching from associative operations to arithmetic operations. The MIMD element is convertible to a regular central processing unit (CPU) while maintaining its high performance as an associative processor. Therefore, the MIMD associative processor can perform not only on-chip perception, i.e., searching for the vector most similar to an input vector throughout the on-chip cache memory, but also arithmetic and logic operations similar to those in ordinary CPUs, both simultaneously in parallel processing. Three key technologies have been developed to realize the MIMD element: calculation units switchable between associative and arithmetic operations, a versatile register control scheme within the MIMD element for flexible operations, and a short instruction set for minimizing the memory size for program storage. Key circuit blocks were designed and fabricated using 0.18 μm complementary metal-oxide-semiconductor (CMOS) technology. As a result, the full-featured MIMD element is estimated to be 3 mm², showing the feasibility of an 8-parallel-MIMD-element associative processor in a single chip of 5 mm × 5 mm.
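
    As a software analogue of the associative operation described above, the following Python sketch performs the "most similar vector" search with numpy standing in for the parallel MIMD elements; the Manhattan distance metric and the 1,024-template cache size are assumptions for illustration, not details from the paper.

        # Software analogue of the associative search, plus a follow-on
        # arithmetic step of the kind a converted MIMD element could run.
        import numpy as np

        rng = np.random.default_rng(1)
        cache = rng.integers(0, 256, size=(1024, 16))   # on-chip template vectors
        query = rng.integers(0, 256, size=16)           # input feature vector

        # Associative search: score all templates in parallel.
        distances = np.abs(cache - query).sum(axis=1)   # Manhattan distance (assumed)
        best = int(distances.argmin())
        print(f"best match: template {best}, distance {distances[best]}")

        # The same element can then switch roles and run ordinary arithmetic,
        # e.g. normalizing the winning template like a conventional CPU would.
        normalized = cache[best] / np.linalg.norm(cache[best])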
