Sample records for process operating time

  1. 9 CFR 381.304 - Operations in the thermal processing area.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... establishment at the time the processing cycle begins to assure that the temperature of the contents of every... processing operation times. Temperature/time recording devices shall correspond within 15 minutes to the time... (or operating process schedules) for daily production, including minimum initial temperatures and...

  2. 9 CFR 381.304 - Operations in the thermal processing area.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... establishment at the time the processing cycle begins to assure that the temperature of the contents of every... processing operation times. Temperature/time recording devices shall correspond within 15 minutes to the time... (or operating process schedules) for daily production, including minimum initial temperatures and...

  3. 9 CFR 381.304 - Operations in the thermal processing area.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... establishment at the time the processing cycle begins to assure that the temperature of the contents of every... processing operation times. Temperature/time recording devices shall correspond within 15 minutes to the time... (or operating process schedules) for daily production, including minimum initial temperatures and...

  4. 9 CFR 381.304 - Operations in the thermal processing area.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... establishment at the time the processing cycle begins to assure that the temperature of the contents of every... processing operation times. Temperature/time recording devices shall correspond within 15 minutes to the time... (or operating process schedules) for daily production, including minimum initial temperatures and...

  5. 9 CFR 318.304 - Operations in the thermal processing area.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... factor over the specified thermal processing operation times. Temperature/time recording devices shall... minimum initial temperatures and operating procedures for thermal processing equipment, shall be posted in... available to the thermal processing system operator and the inspector. (b) Process indicators and retort...

  6. 9 CFR 318.304 - Operations in the thermal processing area.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... factor over the specified thermal processing operation times. Temperature/time recording devices shall... minimum initial temperatures and operating procedures for thermal processing equipment, shall be posted in... available to the thermal processing system operator and the inspector. (b) Process indicators and retort...

  7. 9 CFR 318.304 - Operations in the thermal processing area.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... factor over the specified thermal processing operation times. Temperature/time recording devices shall... minimum initial temperatures and operating procedures for thermal processing equipment, shall be posted in... available to the thermal processing system operator and the inspector. (b) Process indicators and retort...

  8. 9 CFR 318.304 - Operations in the thermal processing area.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... factor over the specified thermal processing operation times. Temperature/time recording devices shall... minimum initial temperatures and operating procedures for thermal processing equipment, shall be posted in... available to the thermal processing system operator and the inspector. (b) Process indicators and retort...

  9. Statistical process control as a tool for controlling operating room performance: retrospective analysis and benchmarking.

    PubMed

    Chen, Tsung-Tai; Chang, Yun-Jau; Ku, Shei-Ling; Chung, Kuo-Piao

    2010-10-01

    There is much research using statistical process control (SPC) to monitor surgical performance, including comparisons among groups to detect small process shifts, but few of these studies have included a stabilization process. This study aimed to analyse the performance of surgeons in the operating room (OR) and to set a benchmark using SPC after the process had been stabilized. The OR profiles of 499 patients who underwent laparoscopic cholecystectomy performed by 16 surgeons at a tertiary hospital in Taiwan during 2005 and 2006 were recorded. SPC was applied to analyse operative and non-operative times using the following five steps: first, the times were divided into two segments; second, they were normalized; third, they were evaluated as individual processes; fourth, the ARL(0) was calculated; and fifth, the different groups (surgeons) were compared. Outliers were excluded to ensure stability for each group and to facilitate inter-group comparison. The results showed that in the stabilized process, only one surgeon exhibited a significantly shorter total process time (including operative time and non-operative time). In this study, we use five steps to demonstrate how to control surgical and non-surgical time in phase I. There are some measures that can be taken to prevent skew and instability in the process. Also, using SPC, one surgeon can be shown to be a real benchmark. © 2010 Blackwell Publishing Ltd.
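
    The individuals-chart step of the five-step procedure can be sketched in a few lines of Python; the times below and the simple outlier-exclusion rule are illustrative assumptions, not the study's data or exact method.

```python
# Sketch of an individuals (I-MR) control chart for operative times,
# with illustrative data; limits use the mean moving range (d2 = 1.128).
from statistics import mean

def imr_limits(times):
    """Return (center, lcl, ucl) for an individuals chart."""
    mrs = [abs(b - a) for a, b in zip(times, times[1:])]  # moving ranges
    center = mean(times)
    sigma_hat = mean(mrs) / 1.128          # d2 constant for subgroups of 2
    return center, center - 3 * sigma_hat, center + 3 * sigma_hat

def stable_points(times):
    """Drop out-of-control points, mimicking the stabilization step."""
    _, lcl, ucl = imr_limits(times)
    return [t for t in times if lcl <= t <= ucl]

times = [62, 58, 65, 60, 59, 61, 95, 63, 57, 60]  # minutes, hypothetical
center, lcl, ucl = imr_limits(times)
print(round(center, 1), round(lcl, 1), round(ucl, 1))
print(stable_points(times))  # the 95-minute case falls outside the limits
```

    In the study's terms, the points surviving `stable_points` form the stabilized phase-I process on which the benchmark comparison is made.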

  10. Stable and verifiable state estimation methods and systems with spacecraft applications

    NASA Technical Reports Server (NTRS)

    Li, Rongsheng (Inventor); Wu, Yeong-Wei Andy (Inventor)

    2001-01-01

    The stability of a recursive estimator process (e.g., a Kalman filter) is assured for long time periods by periodically resetting the error covariance P(t_n) of the system to a predetermined reset value P_r. The recursive process is thus repetitively forced to start from a selected covariance and continue for a time period that is short compared to the system's total operational time period. The time period in which the process must maintain its numerical stability is significantly reduced, as is the demand on the system's numerical stability. The process stability for an extended operational time period T_o is verified by performing the resetting step at the end of at least one reset time period T_r whose duration is less than the operational time period T_o and then confirming stability of the process over the reset time period T_r. Because the recursive process starts from a selected covariance at the beginning of each reset time period T_r, confirming stability of the process over at least one reset time period substantially confirms stability over the longer operational time period T_o.
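
    The covariance-reset scheme can be sketched with a minimal one-dimensional Kalman filter; the noise values, reset covariance, and reset period below are illustrative assumptions, not values from the patent.

```python
# Minimal 1-D Kalman filter with the periodic covariance reset described
# above; q, r, p_reset and reset_every are invented for illustration.
def kalman_with_reset(measurements, q=1e-4, r=0.25, p_reset=1.0, reset_every=50):
    x, p = 0.0, p_reset
    estimates = []
    for k, z in enumerate(measurements):
        p += q                       # predict (identity dynamics)
        kgain = p / (p + r)          # Kalman gain
        x += kgain * (z - x)         # measurement update
        p *= (1.0 - kgain)
        if (k + 1) % reset_every == 0:
            p = p_reset              # force a fresh, known covariance
        estimates.append(x)
    return estimates

# Constant true value 5.0: the estimate converges despite periodic resets.
est = kalman_with_reset([5.0] * 200)
print(round(est[-1], 3))
```

    Each reset restarts the recursion from the selected covariance `p_reset`, so stability only ever needs to hold over one short reset period rather than the whole mission.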

  11. Real-Time Embedded High Performance Computing: Communications Scheduling.

    DTIC Science & Technology

    1995-06-01

    real-time operating system must explicitly limit the degradation of the timing performance of all processes as the number of processes...adequately supported by a real-time operating system, could compound the development problems encountered in the past. Many experts feel that the... real-time operating system support for an MPP, although they all provide some support for distributed real-time applications. A distributed real

  12. An Approach to Realizing Process Control for Underground Mining Operations of Mobile Machines

    PubMed Central

    Song, Zhen; Schunnesson, Håkan; Rinne, Mikael; Sturgul, John

    2015-01-01

    The excavation and production in underground mines are complicated processes consisting of many different operations. The process of underground mining is considerably constrained by the geometry and geology of the mine. The various mining operations are normally performed in series at each working face. The delay of a single operation will lead to a domino effect, thus delaying the starting time of the next process and the completion time of the entire process. This paper presents a new approach to process control for underground mining operations, e.g. drilling, bolting, mucking. This approach can estimate the working time and its probability for each operation more efficiently and objectively by improving the existing PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method). If the delay of the critical operation (which is on a critical path) inevitably affects the productivity of mined ore, the approach can rapidly assign mucking machines new jobs to increase this amount to a maximum level by using a new mucking algorithm under external constraints. PMID:26062092

  13. An Approach to Realizing Process Control for Underground Mining Operations of Mobile Machines.

    PubMed

    Song, Zhen; Schunnesson, Håkan; Rinne, Mikael; Sturgul, John

    2015-01-01

    The excavation and production in underground mines are complicated processes consisting of many different operations. The process of underground mining is considerably constrained by the geometry and geology of the mine. The various mining operations are normally performed in series at each working face. The delay of a single operation will lead to a domino effect, thus delaying the starting time of the next process and the completion time of the entire process. This paper presents a new approach to process control for underground mining operations, e.g. drilling, bolting, mucking. This approach can estimate the working time and its probability for each operation more efficiently and objectively by improving the existing PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method). If the delay of the critical operation (which is on a critical path) inevitably affects the productivity of mined ore, the approach can rapidly assign mucking machines new jobs to increase this amount to a maximum level by using a new mucking algorithm under external constraints.
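
    The CPM reasoning the approach builds on can be sketched with a plain forward/backward pass; the face-cycle operations and durations below are hypothetical, not taken from the paper.

```python
# A compact critical-path (CPM) pass over a face cycle of mining operations.
def topo_order(tasks):
    order, seen = [], set()
    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for p in tasks[n][1]:
            visit(p)
        order.append(n)
    for n in tasks:
        visit(n)
    return order

def critical_path(tasks):
    """tasks: {name: (duration, [predecessors])} -> (makespan, critical set)."""
    start, finish = {}, {}
    for name in topo_order(tasks):                 # forward pass
        dur, preds = tasks[name]
        start[name] = max((finish[p] for p in preds), default=0)
        finish[name] = start[name] + dur
    makespan = max(finish.values())
    late = {}                                      # backward pass: latest finish
    for name in reversed(topo_order(tasks)):
        succs = [s for s, (_, ps) in tasks.items() if name in ps]
        late[name] = min((late[s] - tasks[s][0] for s in succs), default=makespan)
    critical = {n for n in tasks if finish[n] == late[n]}  # zero-slack ops
    return makespan, critical

ops = {"drill": (3, []), "charge": (2, ["drill"]), "blast": (1, ["charge"]),
       "muck": (4, ["blast"]), "bolt": (2, ["blast"])}
makespan, crit = critical_path(ops)
print(makespan, sorted(crit))
```

    A delay on any zero-slack operation (here everything except bolting) pushes back the completion of the whole face cycle, which is the domino effect the abstract describes.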

  14. A Proof of Factorization Theorem of Drell-Yan Process at Operator Level

    NASA Astrophysics Data System (ADS)

    Zhou, Gao-Liang

    2016-02-01

    An alternative proof of the factorization theorem for the Drell-Yan process that works at operator level is presented in this paper. Contributions of interactions after the hard collision for such inclusive processes are proved to cancel at operator level according to the unitarity of the time evolution operator. After this cancellation, there are no longer leading pinch singular surfaces in the Glauber region in the time evolution of electromagnetic currents. Effects of soft gluons are absorbed into Wilson lines of scalar-polarized gluons. Cancellation of soft gluons is attributed to the unitarity of the time evolution operator and such Wilson lines. Supported by the National Natural Science Foundation of China under Grant No. 11275242

  15. Testing single point incremental forming molds for thermoforming operations

    NASA Astrophysics Data System (ADS)

    Afonso, Daniel; de Sousa, Ricardo Alves; Torcato, Ricardo

    2016-10-01

    Low-pressure polymer processing processes such as thermoforming or rotational molding use much simpler molds than high-pressure processes like injection molding. However, despite the low forces involved, mold manufacturing for these operations is still a very material-, energy- and time-consuming operation. The goal of the research is to develop and validate a method for manufacturing plastically formed sheet-metal molds by the single point incremental forming (SPIF) operation for thermoforming operations. Stewart-platform-based SPIF machines allow the forming of thick metal sheets, granting the required structural stiffness for the mold surface while keeping the short manufacturing lead time and low thermal inertia.

  16. Modeling operators' emergency response time for chemical processing operations.

    PubMed

    Murray, Susan L; Harputlu, Emrah; Mentzer, Ray A; Mannan, M Sam

    2014-01-01

    Operators have a crucial role during emergencies at a variety of facilities such as chemical processing plants. When an abnormality occurs in the production process, the operator often has limited time to either take corrective actions or evacuate before the situation becomes deadly. It is crucial that system designers and safety professionals can estimate the time required for a response before procedures and facilities are designed and operations are initiated. There are existing industrial engineering techniques to establish time standards for tasks performed at a normal working pace. However, it is reasonable to expect the time required to take action in emergency situations will be different than working at a normal production pace. It is possible that in an emergency, operators will act faster compared to a normal pace. It would be useful for system designers to be able to establish a time range for operators' response times for emergency situations. This article develops a modeling approach to estimate the time standard range for operators taking corrective actions or following evacuation procedures in emergency situations. This will aid engineers and managers in establishing time requirements for operators in emergency situations. The methodology used for this study combines a well-established industrial engineering technique for determining time requirements (predetermined time standard system) and adjustment coefficients for emergency situations developed by the authors. Numerous videos of workers performing well-established tasks at a maximum pace were studied. As an example, one of the tasks analyzed was pit crew workers changing tires as quickly as they could during a race. The operations in these videos were decomposed into basic, fundamental motions (such as walking, reaching for a tool, and bending over) by studying the videos frame by frame. 
A comparison analysis was then performed between the emergency pace and the normal working pace operations to determine performance coefficients. These coefficients represent the decrease in time required for various basic motions in emergency situations and were used to model an emergency response. This approach will make hazardous operations requiring operator response, alarm management, and evacuation processes easier to design and predict. An application of this methodology is included in the article. The time required for an emergency response was roughly one-third shorter than the normal response time.
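
    The coefficient-scaling idea can be shown with a toy calculation; the motions, normal times, and coefficients below are invented for illustration, not the authors' measured values.

```python
# Scaling a predetermined-time-standard estimate with per-motion emergency
# coefficients. Each motion: (description, normal seconds, coefficient < 1).
motions = [
    ("walk to valve",   6.0, 0.70),
    ("reach for wheel", 1.2, 0.80),
    ("turn wheel",      4.5, 0.85),
    ("bend and check",  2.3, 0.75),
]

normal = sum(t for _, t, _ in motions)            # normal-pace standard
emergency = sum(t * c for _, t, c in motions)     # emergency-pace estimate
print(f"normal {normal:.1f}s, emergency {emergency:.2f}s")
```

    Summing per-motion times at a normal pace gives the conventional time standard; applying a coefficient to each basic motion before summing yields the emergency-range estimate.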

  17. An order insertion scheduling model of logistics service supply chain considering capacity and time factors.

    PubMed

    Liu, Weihua; Yang, Yi; Wang, Shuqing; Liu, Yang

    2014-01-01

    Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), which disturbs normal time scheduling, especially in the environment of mass customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process and then establishes an order insertion scheduling model of LSSC with service capacity and time factors considered. This model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. In order to verify the viability and effectiveness of our model, a specific example is numerically analyzed. Some interesting conclusions are obtained. First, along with the increase of the completion time delay coefficient permitted by customers, the possible inserting order volume first increases and then tends to be stable. Second, supply chain performance reaches its best when the volume of the inserting order is equal to the surplus volume of the normal operation capacity in the mass service process. Third, the larger the normal operation capacity in the mass service process is, the bigger the possible inserting order's volume will be. Moreover, compared to increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful.
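
    The capacity logic behind the second and third conclusions can be sketched as a simple feasibility check; the function, its linear delay stretch, and the numbers are assumptions made for illustration, not the paper's model.

```python
# Illustrative check: an inserted order fits the mass-service process when
# its volume is at most the surplus capacity, optionally stretched by the
# customer's permitted completion-time delay coefficient.
def max_insertable(normal_capacity, committed_volume, delay_coeff):
    surplus = normal_capacity - committed_volume
    return max(0.0, surplus * (1.0 + delay_coeff))

print(max_insertable(100.0, 80.0, 0.0))   # no delay permitted
print(max_insertable(100.0, 80.0, 0.25))  # delay stretches effective capacity
```

    Larger normal capacity raises the surplus and hence the insertable volume, matching the paper's third conclusion in this simplified form.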

  18. [Performance development of a university operating room after implementation of a central operating room management].

    PubMed

    Waeschle, R M; Sliwa, B; Jipp, M; Pütz, H; Hinz, J; Bauer, M

    2016-08-01

    The difficult financial situation in German hospitals requires measures for improvement in process quality. Associated increases in revenues in the high-income field "operating room (OR) area" are increasingly the responsibility of OR management, but it has not been shown that the introduction of an efficiency-oriented management leads to an increase in process quality and revenues in the operating theatre. Therefore the performance in the operating theatre of the University Medical Center Göttingen was analyzed for working days in the core operating time from 7.45 a.m. to 3.30 p.m. from 2009 to 2014. The achievement of process target times for the morning surgery start time and the turnover times of anesthesia and OR-nurses were calculated as indicators of process quality. The number of operations and cumulative incision-suture time were also analyzed as aggregated performance indicators. In order to assess the development of revenues in the operating theatre, the revenues from diagnosis-related groups (DRG) in all inpatient and occupational accident cases, adjusted for the regional basic case value from 2009, were calculated for each year. The development of revenues was also analyzed after deduction of revenues resulting from altered economic case weighting. It could be shown that the achievement of process target values for the morning surgery start time could be improved by 40 %, the turnover times for anesthesia reduced by 50 % and for the OR-nurses by 36 %. Together with the introduction of central planning for reallocation, an increase in operation numbers of 21 % and in cumulative incision-suture times of 12 % could be realized. Due to these additional operations, the DRG revenues in 2014 could be increased to 132 % compared to 2009, or 127 % if the revenues caused by economic case weighting were excluded.
The staffing levels for anesthesia nurses (-1.7 %), OR-nurses (+2.6 %) and anesthetists (+6.7 %) grew far less than revenues or were slightly reduced. This improvement in process quality and cumulative incision-suture times, as well as the increase in revenues, reflects the positive impact of an efficiency-oriented central OR management. Through process optimization measures, OR management frees up the necessary personnel and time resources and thereby establishes the basic prerequisites for increased revenues in the surgical disciplines. The method presented can be used by other hospitals as a guideline to analyze performance development.

  19. On-Line Real-Time Management Information Systems and Their Impact Upon User Personnel and Organizational Structure in Aviation Maintenance Activities.

    DTIC Science & Technology

    1979-12-01

    the functional management level, a real-time production control system and an order processing system at the operational level. SIDMS was designed...at any one time. An overview of the major software systems in operation is listed below: a. Major Software Systems: Order processing system • Order ... processing for the supply support center/AWP locker. • Order processing for the airwing squadron material controls. • Order processing for the IMA

  20. Process for using surface strain measurements to obtain operational loads for complex structures

    NASA Technical Reports Server (NTRS)

    Ko, William L. (Inventor); Richards, William Lance (Inventor)

    2010-01-01

    The invention is an improved process for using surface strain data to obtain real-time, operational loads data for complex structures that significantly reduces the time and cost versus current methods.

  21. PILOT: An intelligent distributed operations support system

    NASA Technical Reports Server (NTRS)

    Rasmussen, Arthur N.

    1993-01-01

    The Real-Time Data System (RTDS) project is exploring the application of advanced technologies to the real-time flight operations environment of the Mission Control Centers at NASA's Johnson Space Center. The system, based on a network of engineering workstations, provides services such as delivery of real time telemetry data to flight control applications. To automate the operation of this complex distributed environment, a facility called PILOT (Process Integrity Level and Operation Tracker) is being developed. PILOT comprises a set of distributed agents cooperating with a rule-based expert system; together they monitor process operation and data flows throughout the RTDS network. The goal of PILOT is to provide unattended management and automated operation under user control.

  22. An Order Insertion Scheduling Model of Logistics Service Supply Chain Considering Capacity and Time Factors

    PubMed Central

    Yang, Yi; Wang, Shuqing; Liu, Yang

    2014-01-01

    Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), which disturbs normal time scheduling, especially in the environment of mass customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process and then establishes an order insertion scheduling model of LSSC with service capacity and time factors considered. This model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. In order to verify the viability and effectiveness of our model, a specific example is numerically analyzed. Some interesting conclusions are obtained. First, along with the increase of the completion time delay coefficient permitted by customers, the possible inserting order volume first increases and then tends to be stable. Second, supply chain performance reaches its best when the volume of the inserting order is equal to the surplus volume of the normal operation capacity in the mass service process. Third, the larger the normal operation capacity in the mass service process is, the bigger the possible inserting order's volume will be. Moreover, compared to increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful. PMID:25276851

  23. Proposed algorithm to improve job shop production scheduling using ant colony optimization method

    NASA Astrophysics Data System (ADS)

    Pakpahan, Eka KA; Kristina, Sonna; Setiawan, Ari

    2017-12-01

    This paper deals with the determination of a job shop production schedule in an automated environment. In this particular environment, machines and the material handling system are integrated and controlled by a computer center where schedules are created and then used to dictate the movement of parts and the operations at each machine. This setting is usually designed to allow an unmanned production process for a specified time interval. We consider here parts with various operation requirements. Each operation requires specific cutting tools. These parts are to be scheduled on machines each having identical capability, meaning that each machine is equipped with a similar set of cutting tools and is therefore capable of processing any operation. The availability of a particular machine to process a particular operation is determined by the remaining lifetime of its cutting tools. We propose an algorithm based on the ant colony optimization method, implemented in MATLAB, to generate a production schedule which minimizes the total processing time of the parts (makespan). We tested the algorithm on data provided by a real industry, and it shows a very short computation time. This contributes a lot to the flexibility and timeliness targeted in an automated environment.
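
    A toy version of the ant-colony idea, reduced to assigning operations to identical machines so as to minimize makespan, might look like the sketch below; the instance, pheromone rule, and parameters are illustrative assumptions, not the authors' algorithm.

```python
# Toy ant-colony assignment of operations to identical machines.
import random

def aco_assign(durations, n_machines, n_ants=20, n_iters=40, evap=0.3, seed=1):
    rng = random.Random(seed)
    n = len(durations)
    # pheromone[j][m]: desirability of putting operation j on machine m
    tau = [[1.0] * n_machines for _ in range(n)]
    best, best_span = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            loads = [0.0] * n_machines
            assign = []
            for j in range(n):
                # bias towards pheromone and lightly loaded machines
                w = [tau[j][m] / (1.0 + loads[m]) for m in range(n_machines)]
                m = rng.choices(range(n_machines), weights=w)[0]
                assign.append(m)
                loads[m] += durations[j]
            span = max(loads)                  # makespan of this ant's schedule
            if span < best_span:
                best, best_span = assign, span
        # evaporate, then reinforce the best assignment found so far
        for j in range(n):
            for m in range(n_machines):
                tau[j][m] *= (1.0 - evap)
            tau[j][best[j]] += 1.0 / best_span
    return best, best_span

durations = [4, 7, 2, 5, 3, 6, 1]   # minutes per operation, hypothetical
assign, span = aco_assign(durations, n_machines=3)
print(assign, span)
```

    The real algorithm additionally tracks remaining cutting-tool life per machine; this sketch keeps only the pheromone-guided construction and best-solution reinforcement that characterize ACO.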

  24. Development of Targeting UAVs Using Electric Helicopters and Yamaha RMAX

    DTIC Science & Technology

    2007-05-17

    including the QNX real-time operating system. The video overlay board is useful to display the onboard camera’s image with important information such as... real-time operating system. Fully utilizing the built-in multi-processing architecture with inter-process synchronization and communication

  25. IDSP- INTERACTIVE DIGITAL SIGNAL PROCESSOR

    NASA Technical Reports Server (NTRS)

    Mish, W. H.

    1994-01-01

    The Interactive Digital Signal Processor, IDSP, consists of a set of time series analysis "operators" based on the various algorithms commonly used for digital signal analysis work. The processing of a digital time series to extract information is usually achieved by the application of a number of fairly standard operations. However, it is often desirable to "experiment" with various operations and combinations of operations to explore their effect on the results. IDSP is designed to provide an interactive and easy-to-use system for this type of digital time series analysis. The IDSP operators can be applied in any sensible order (even recursively), and can be applied to single time series or to simultaneous time series. IDSP is being used extensively to process data obtained from scientific instruments onboard spacecraft. It is also an excellent teaching tool for demonstrating the application of time series operators to artificially-generated signals. IDSP currently includes over 43 standard operators. Processing operators provide for Fourier transformation operations, design and application of digital filters, and Eigenvalue analysis. Additional support operators provide for data editing, display of information, graphical output, and batch operation. User-developed operators can be easily interfaced with the system to provide for expansion and experimentation. Each operator application generates one or more output files from an input file. The processing of a file can involve many operators in a complex application. IDSP maintains historical information as an integral part of each file so that the user can display the operator history of the file at any time during an interactive analysis. IDSP is written in VAX FORTRAN 77 for interactive or batch execution and has been implemented on a DEC VAX-11/780 operating under VMS. The IDSP system generates graphics output for a variety of graphics systems. 
The program requires the use of Versaplot and Template plotting routines and IMSL Math/Library routines. These software packages are not included in IDSP. The virtual memory requirement for the program is approximately 2.36 MB. The IDSP system was developed in 1982 and was last updated in 1986. Versaplot is a registered trademark of Versatec Inc. Template is a registered trademark of Template Graphics Software Inc. IMSL Math/Library is a registered trademark of IMSL Inc.
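
    The operator-chaining and history-tracking ideas described above can be sketched in Python (IDSP itself is VAX FORTRAN 77; the `Series` class and the two operators here are invented for illustration).

```python
# Minimal sketch of IDSP-style chainable time-series operators that carry
# an operator history along with the data.
class Series:
    def __init__(self, data, history=()):
        self.data = list(data)
        self.history = tuple(history)   # applied operators, in order

    def apply(self, name, func):
        # Each application yields a new Series with an extended history.
        return Series(func(self.data), self.history + (name,))

def detrend(xs):
    m = sum(xs) / len(xs)
    return [x - m for x in xs]          # remove the mean

def boxcar(xs, k=3):
    half = k // 2                       # centered moving average
    return [sum(xs[max(0, i - half):i + half + 1]) /
            len(xs[max(0, i - half):i + half + 1]) for i in range(len(xs))]

s = Series([1.0, 2.0, 3.0, 4.0, 5.0])
out = s.apply("DETREND", detrend).apply("SMOOTH", boxcar)
print(out.history)
print(out.data)
```

    Keeping the history tuple with each output mirrors IDSP's ability to display the operator history of a file at any point in an interactive analysis.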

  26. The embedded operating system project

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.

    1985-01-01

    The design and construction of embedded operating systems for real-time advanced aerospace applications was investigated. The applications require reliable operating system support that must accommodate computer networks. Problems that arise in the construction of such operating systems, reconfiguration, consistency and recovery in a distributed system, and the issues of real-time processing are reported. A thesis that provides theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based system is included. The following items are addressed: (1) atomic actions and fault-tolerance issues; (2) operating system structure; (3) program development; (4) a reliable compiler for path Pascal; and (5) mediators, a mechanism for scheduling distributed system processes.

  27. The X-33 range Operations Control Center

    NASA Technical Reports Server (NTRS)

    Shy, Karla S.; Norman, Cynthia L.

    1998-01-01

    This paper describes the capabilities and features of the X-33 Range Operations Center at NASA Dryden Flight Research Center. All the unprocessed data will be collected and transmitted over fiber optic lines to the Lockheed Operations Control Center for real-time flight monitoring of the X-33 vehicle. By using the existing capabilities of the Western Aeronautical Test Range, the Range Operations Center will provide the ability to monitor all down-range tracking sites for the Extended Test Range systems. In addition to radar tracking and aircraft telemetry data, the Telemetry and Radar Acquisition and Processing System is being enhanced to acquire vehicle command data, differential Global Positioning System corrections and telemetry receiver signal level status. The Telemetry and Radar Acquisition Processing System provides the flexibility to satisfy all X-33 data processing requirements quickly and efficiently. Additionally, the Telemetry and Radar Acquisition Processing System will run a real-time link margin analysis program. The results of this model will be compared in real-time with actual flight data. The hardware and software concepts presented in this paper describe a method of merging all types of data into a common database for real-time display in the Range Operations Center in support of the X-33 program. All types of data will be processed for real-time analysis and display of the range system status to ensure public safety.

  28. Time-critical multirate scheduling using contemporary real-time operating system services

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.

    1983-01-01

    Although real-time operating systems provide many of the task control services necessary to process time-critical applications (i.e., applications with fixed, invariant deadlines), it may still be necessary to provide a scheduling algorithm at a level above the operating system in order to coordinate a set of synchronized, time-critical tasks executing at different cyclic rates. This paper examines the scheduling requirements for such applications and develops scheduling algorithms using services provided by contemporary real-time operating systems.
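
    The coordination of synchronized tasks at different cyclic rates can be sketched as a table-driven cyclic schedule above a hypothetical RTOS tick; the task names and tick periods below are illustrative assumptions.

```python
# Build a cyclic schedule over one hyperperiod for multirate periodic tasks.
from math import gcd
from functools import reduce

def hyperperiod(periods):
    # lcm of all task periods: the schedule repeats after this many ticks
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

def build_schedule(tasks):
    """tasks: {name: period_in_ticks} -> list of task activations per tick."""
    h = hyperperiod(list(tasks.values()))
    return [[n for n, p in tasks.items() if t % p == 0] for t in range(h)]

tasks = {"nav": 2, "control": 4, "telemetry": 6}
sched = build_schedule(tasks)
print(len(sched))            # length of one hyperperiod in ticks
print(sched[0], sched[4])
```

    A dispatcher layered above the RTOS would walk this table each tick, releasing the listed tasks, which keeps the differently-rated tasks synchronized at every common multiple of their periods.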

  29. On Real-Time Operating Systems.

    DTIC Science & Technology

    1987-04-01

    ...and processes. In each instance the abstraction takes the form of some non-physical resource and benefits both the system and the user. ...The...service, which is important as an inter-process service (for physical synchronization) as well as an internal service for a process. A time service in a

  30. An open system approach to process reengineering in a healthcare operational environment.

    PubMed

    Czuchry, A J; Yasin, M M; Norris, J

    2000-01-01

    The objective of this study is to examine the applicability of process reengineering in a healthcare operational environment. The intake process of a mental healthcare service delivery system is analyzed systematically to identify process-related problems. A methodology which utilizes an open system orientation coupled with process reengineering is utilized to overcome operational and patient related problems associated with the pre-reengineered intake process. The systematic redesign of the intake process resulted in performance improvements in terms of cost, quality, service and timing.

  31. Continuous bind-and-elute protein A capture chromatography: Optimization under process scale column constraints and comparison to batch operation.

    PubMed

    Kaltenbrunner, Oliver; Diaz, Luis; Hu, Xiaochun; Shearer, Michael

    2016-07-08

    Recently, continuous downstream processing has become a topic of discussion and analysis at conferences while no industrial applications of continuous downstream processing for biopharmaceutical manufacturing have been reported. There is significant potential to increase the productivity of a Protein A capture step by converting the operation to simulated moving bed (SMB) mode. In this mode, shorter columns are operated at higher process flow and corresponding short residence times. The ability to significantly shorten the product residence time during loading without appreciable capacity loss can dramatically increase productivity of the capture step and consequently reduce the amount of Protein A resin required in the process. Previous studies have not considered the physical limitations of how short columns can be packed and the flow rate limitations due to pressure drop of stacked columns. In this study, we are evaluating the process behavior of a continuous Protein A capture column cycling operation under the known pressure drop constraints of a compressible media. The results are compared to the same resin operated under traditional batch operating conditions. We analyze the optimum system design point for a range of feed concentrations, bed heights, and load residence times and determine achievable productivity for any feed concentration and any column bed height. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:938-948, 2016. © 2016 American Institute of Chemical Engineers.
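
    The productivity argument above can be illustrated with a back-of-the-envelope calculation; the capacities, load times, and cycle overheads below are invented numbers, not the study's data.

```python
# Crude per-cycle productivity model: capacity captured divided by cycle time.
def productivity(dbc, load_min, overhead_min):
    """g product per L resin per hour for one load/wash/elute/regen cycle."""
    cycle_h = (load_min + overhead_min) / 60.0
    return dbc / cycle_h

# Long residence time: high dynamic binding capacity but a long load phase.
batch = productivity(dbc=40.0, load_min=120.0, overhead_min=90.0)
# Short residence time: some capacity lost, but the load phase shrinks more.
smb = productivity(dbc=28.0, load_min=30.0, overhead_min=90.0)
print(round(batch, 1), round(smb, 1))
```

    Even with an appreciable capacity penalty, the shorter residence time wins on productivity in this toy model, which is the qualitative effect the continuous (SMB-style) operation exploits.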

  12. Cardiac surgery productivity and throughput improvements.

    PubMed

    Lehtonen, Juha-Matti; Kujala, Jaakko; Kouri, Juhani; Hippeläinen, Mikko

    2007-01-01

    The high variability in cardiac surgery length is one of the main challenges for staff managing productivity. This study aims to evaluate the impact of six interventions on open-heart surgery operating theatre productivity. A discrete operating theatre event simulation model with empirical operation time input data from 2603 patients is used to evaluate the effect that these process interventions have on surgery output and overtime work. A linear regression model was used to produce operation time forecasts for surgery scheduling; it can also be used to explain operation time. A forecasting model based on linear regression of variables available before the surgery explains 46 per cent of operating time variance. The main factors influencing operation length were the type of operation, reoperation, and the head surgeon. Reduction of changeover time between surgeries, by inducing anaesthesia outside the operating theatre and by reducing slack time at the end of the day after a second surgery, has the strongest effect on surgery output and productivity. A more accurate operation time forecast did not have any effect on output, although it did decrease overtime work. A reduction in the operation time itself is not studied in this article. However, the forecasting model can also be applied to discover which factors are most significant in explaining variation in the length of open-heart surgery. The challenge of scheduling two open-heart surgeries in one day can be partly resolved by increasing the length of the day, decreasing the time between two surgeries, or by improving patient scheduling procedures so that two short surgeries can be paired. A linear regression model is created in the paper to increase the accuracy of operation time forecasting and to identify the factors that have the most influence on operation time. A simulation model is used to analyse the impact of improved surgical length forecasting and five selected process interventions on productivity in cardiac surgery.
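
    The regression-based forecast described above can be sketched as follows. This is a minimal illustration on synthetic data, assuming one-hot encoding of the reported factors (operation type, reoperation, head surgeon); the coefficients and the resulting R^2 are invented, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
op_type = rng.integers(0, 3, n)      # hypothetical: 3 operation types
redo = rng.integers(0, 2, n)         # reoperation flag
surgeon = rng.integers(0, 4, n)      # 4 head surgeons

# Synthetic "true" durations in minutes, with noise (invented effects)
base = np.array([180.0, 240.0, 300.0])[op_type]
dur = base + 45.0 * redo + np.array([0.0, 15.0, -10.0, 25.0])[surgeon] \
      + rng.normal(0, 40.0, n)

def one_hot(v, k):
    return np.eye(k)[v]

# Design matrix: intercept + dummy-coded factors (first level dropped)
X = np.column_stack([np.ones(n), one_hot(op_type, 3)[:, 1:],
                     redo, one_hot(surgeon, 4)[:, 1:]])
beta, *_ = np.linalg.lstsq(X, dur, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((dur - pred) ** 2) / np.sum((dur - dur.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```

    The explained-variance figure depends entirely on the noise level chosen here; the paper reports 46 per cent for its real data.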

  13. Virtual time and time warp on the JPL hypercube. [operating system implementation for distributed simulation]

    NASA Technical Reports Server (NTRS)

    Jefferson, David; Beckman, Brian

    1986-01-01

    This paper describes the concept of virtual time and its implementation in the Time Warp Operating System at the Jet Propulsion Laboratory. Virtual time is a distributed synchronization paradigm that is appropriate for distributed simulation, database concurrency control, real time systems, and coordination of replicated processes. The Time Warp Operating System is targeted toward the distributed simulation application and runs on a 32-node JPL Mark II Hypercube.
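
    The rollback at the heart of virtual time can be illustrated in a few lines. This toy sketch (our own, not the Time Warp Operating System's code) saves state snapshots and rolls back when a straggler message arrives; re-execution of undone events and anti-messages are omitted for brevity.

```python
# Toy illustration of Time Warp's rollback (our sketch, not TWOS code).
# Re-execution of undone events and anti-messages are omitted.

class LogicalProcess:
    def __init__(self):
        self.lvt = 0                  # local virtual time
        self.state = 0
        self.snapshots = [(0, 0)]     # (virtual_time, state) history

    def handle(self, timestamp, value):
        if timestamp < self.lvt:      # straggler message: roll back
            while self.snapshots and self.snapshots[-1][0] >= timestamp:
                self.snapshots.pop()
            self.lvt, self.state = self.snapshots[-1]
        self.lvt = timestamp          # execute forward
        self.state += value
        self.snapshots.append((self.lvt, self.state))

lp = LogicalProcess()
lp.handle(10, 5)
lp.handle(20, 7)
lp.handle(15, 1)    # late arrival forces rollback past virtual time 15
print(lp.lvt, lp.state)
```

    Optimistic execution proceeds at full speed, and consistency is restored only when a causality violation is actually detected.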

  14. Cyclically optimized electrochemical processes

    NASA Astrophysics Data System (ADS)

    Ruedisueli, Robert Louis

    It has been frequently observed in experiment and industry practice that electrochemical processes (deposition, dissolution, fuel cells) operated in an intermittent or cyclic (AC) mode show improvements in efficiency and/or quality and yield over their steady (DC) mode of operation. Whether rationally invoked by design or empirically tuned-in, the optimal operating frequency and duty cycle is dependent upon the dominant relaxation time constant for the process in question. The electrochemical relaxation time constant is a function of: double-layer and reaction intermediary pseudo-capacitances, ion (charge) transport via electrical migration (mobility), and diffusion across a concentration gradient to electrode surface reaction sites where charge transfer and species incorporation or elimination occur. The rate determining step dominates the time constant for the reaction or process. Electrochemical impedance spectroscopy (EIS) and piezoelectric crystal electrode (PCE) response analysis have proven to be useful tools in the study and identification of reaction mechanisms. This work explains and demonstrates, with the electro-deposition of copper, the application of EIS and PCE measurement and analysis to the selection of an optimum cyclic operating schedule: an optimum driving frequency and duty cycle for efficient, sustained cyclic (pulsed) operation.
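
    The dependence of the optimal operating frequency on the relaxation time constant can be shown with a worked example. The EIS-derived values below are hypothetical, not taken from the dissertation; the point is only that tau = R_ct * C_dl sets a characteristic frequency near 1/(2*pi*tau).

```python
import math

# Hypothetical EIS-derived values (assumptions, not the dissertation's data):
R_ct = 12.0        # charge-transfer resistance, ohm
C_dl = 40e-6       # double-layer capacitance, F

tau = R_ct * C_dl                    # dominant relaxation time constant, s
f_c = 1.0 / (2.0 * math.pi * tau)    # characteristic frequency, Hz

print(f"tau = {tau * 1e6:.0f} us, characteristic frequency ~ {f_c:.0f} Hz")
```

    Pulsed operation tuned near this characteristic frequency is the kind of "optimum cyclic operating schedule" the work selects from EIS and PCE data.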

  15. Historical data recording for process computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hale, J.C.; Sellars, H.L.

    1981-11-01

    Computers have been used to monitor and control chemical and refining processes for more than 15 years. During this time, there has been a steady growth in the variety and sophistication of the functions performed by these process computers. Early systems were limited to maintaining only current operating measurements, available through crude operator's consoles or noisy teletypes. The value of retaining a process history, that is, a collection of measurements over time, became apparent, and early efforts produced shift and daily summary reports. The need for improved process historians which record, retrieve and display process information has grown as process computers assume larger responsibilities in plant operations. This paper describes newly developed process historian functions that have been used on several in-house process monitoring and control systems in Du Pont factories. 3 refs.

  16. Interactive Digital Signal Processor

    NASA Technical Reports Server (NTRS)

    Mish, W. H.

    1985-01-01

    The Interactive Digital Signal Processor, IDSP, consists of a set of time series analysis "operators" based on various algorithms commonly used for digital signal analysis. Processing of a digital signal time series to extract information is usually achieved by application of a number of fairly standard operations. IDSP is also an excellent teaching tool for demonstrating the application of time series operators to artificially generated signals.
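
    The operator-composition idea can be sketched as follows; the operator names and the test signal are our own illustration, not IDSP's actual command set.

```python
import numpy as np

# A few composable time-series "operators" in the spirit of IDSP
# (names are ours, not IDSP commands):
def detrend(x):            # remove the mean (DC offset)
    return x - x.mean()

def window(x):             # Hann window before spectral analysis
    return x * np.hanning(len(x))

def spectrum(x):           # magnitude spectrum of a real signal
    return np.abs(np.fft.rfft(x))

t = np.linspace(0, 1, 256, endpoint=False)
sig = 2.0 + np.sin(2 * np.pi * 16 * t)        # DC offset + 16 Hz tone

# More powerful analyses are built by chaining the core operators:
mag = spectrum(window(detrend(sig)))
print("dominant bin:", int(np.argmax(mag)))
```

    Chaining small, standard operators like this is exactly how IDSP builds more powerful analyses out of its core command set.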

  17. Landsat-5 bumper-mode geometric correction

    USGS Publications Warehouse

    Storey, James C.; Choate, Michael J.

    2004-01-01

    The Landsat-5 Thematic Mapper (TM) scan mirror was switched from its primary operating mode to a backup mode in early 2002 in order to overcome internal synchronization problems arising from long-term wear of the scan mirror mechanism. The backup bumper mode of operation removes the constraints on scan start and stop angles enforced in the primary scan angle monitor operating mode, requiring additional geometric calibration effort to monitor the active scan angles. It also eliminates scan timing telemetry used to correct the TM scan geometry. These differences require changes to the geometric correction algorithms used to process TM data. A mathematical model of the scan mirror's behavior when operating in bumper mode was developed. This model includes a set of key timing parameters that characterize the time-varying behavior of the scan mirror bumpers. To simplify the implementation of the bumper-mode model, the bumper timing parameters were recast in terms of the calibration and telemetry data items used to process normal TM imagery. The resulting geometric performance, evaluated over 18 months of bumper-mode operations, though slightly reduced from that achievable in the primary operating mode, is still within the Landsat specifications when the data are processed with the most up-to-date calibration parameters.

  18. A Distributed Operating System for BMD Applications.

    DTIC Science & Technology

    1982-01-01

    Defense) applications executing on distributed hardware with local and shared memories. The objective was to develop real-time operating system functions...make the Basic Real-Time Operating System, and the set of new EPL language primitives that provide BMD application processes with efficient mechanisms

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bickford, D.F.

    During the first two years of radioactive operation of the Defense Waste Processing Facility process, several areas for improvement in melter design were identified. Because the process requires continuous melter operation, the downtime associated with disruptions to melter operation and pouring has significant cost impact. A major objective of this task is to address performance limitations and deficiencies identified by the user.

  20. Infantry Small-Unit Mountain Operations

    DTIC Science & Technology

    2011-02-01

    expended to traverse it.  Unique sustainment solutions. Sustainment in a mountain environment is a challenging and time-consuming process. Terrain...a particular environment during the intelligence preparation of the battlefield (IPB) process and provide the analysis to the company. The IPB...consists of a four-step process that includes—  Defining the operational environment.  Describing environmental effects on operations.  Evaluating the

  1. Developing infrared array controller with software real time operating system

    NASA Astrophysics Data System (ADS)

    Sako, Shigeyuki; Miyata, Takashi; Nakamura, Tomohiko; Motohara, Kentaro; Uchimoto, Yuka Katsuno; Onaka, Takashi; Kataza, Hirokazu

    2008-07-01

    Real-time capabilities are required for a controller of a large-format array to reduce the dead time attributable to readout and data transfer. Real-time processing has been achieved with dedicated processors, including DSP, CPLD, and FPGA devices. However, the dedicated processors have problems with memory resources, inflexibility, and high cost. Meanwhile, a recent PC has sufficient CPU and memory resources to control an infrared array and to process a large amount of frame data in real time. In this study, we have developed an infrared array controller with a software real-time operating system (RTOS) instead of dedicated processors. A Linux PC equipped with an RTAI extension and a dual-core CPU is used as the main computer, and one of the CPU cores is allocated to real-time processing. A digital I/O board with DMA functions is used as the I/O interface. The signal-processing cores are integrated into the OS kernel as a real-time driver module, which is composed of two virtual devices: the clock-processor and frame-processor tasks. The array controller with the RTOS realizes complicated operations easily, flexibly, and at low cost.
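
    The two virtual devices described above can be mimicked in user space with a bounded buffer between a producer and a consumer task. This is a schematic sketch only (a real RTAI driver runs as kernel tasks, not Python threads); the frame sizes and counts are invented.

```python
import queue
import threading
import numpy as np

# Schematic stand-in for the clock-processor / frame-processor pair:
# a producer generates raw frames, a consumer co-adds them.
frames = queue.Queue(maxsize=4)    # bounded buffer between the two tasks
N_FRAMES, SHAPE = 8, (32, 32)

def clock_processor():
    rng = np.random.default_rng(1)
    for _ in range(N_FRAMES):
        frames.put(rng.normal(100.0, 5.0, SHAPE))   # simulated readout
    frames.put(None)                                # end-of-sequence marker

result = {}
def frame_processor():
    acc, n = np.zeros(SHAPE), 0
    while (f := frames.get()) is not None:
        acc += f
        n += 1
    result["coadd"] = acc / n

t1 = threading.Thread(target=clock_processor)
t2 = threading.Thread(target=frame_processor)
t1.start(); t2.start(); t1.join(); t2.join()
print("co-added mean:", round(float(result["coadd"].mean()), 1))
```

    The bounded queue plays the role the DMA-backed buffer plays in the real controller: readout never blocks on downstream processing for long.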

  2. Transitioning to Integrated Modular Avionics with a Mission Management System

    DTIC Science & Technology

    2000-10-01

    software structure, which is based on the use of interchangeable processing modules of a limited COTS Real-Time Operating System. number of...open standardised interfaces system hardware or the Real-Time Operating System directly supports the use of COTS components, which implementation, to...RTOS Real-Time Operating System SMBP System Management Blueprint Interface SMOS System Management to Operating System Interface Figure 2: The ASAAC

  3. User's manual SIG: a general-purpose signal processing program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lager, D.; Azevedo, S.

    1983-10-25

    SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time- and frequency-domain signals. However, it has been designed to ultimately accommodate other representations for data such as multiplexed signals and complex matrices. Many of the basic operations one would perform on digitized data are contained in the core SIG package. Out of these core commands, more powerful signal processing algorithms may be built. Many different operations on time- and frequency-domain signals can be performed by SIG. They include operations on the samples of a signal, such as adding a scalar to each sample, operations on the entire signal such as digital filtering, and operations on two or more signals such as adding two signals. Signals may be simulated, such as a pulse train or a random waveform. Graphics operations display signals and spectra.

  4. Operating Room Time Savings with the Use of Splint Packs: A Randomized Controlled Trial

    PubMed Central

    Gonzalez, Tyler A.; Bluman, Eric M.; Palms, David; Smith, Jeremy T.; Chiodo, Christopher P.

    2016-01-01

    Background: The most expensive variable in the operating room (OR) is time. Lean Process Management is being used in the medical field to improve efficiency in the OR. Streamlining individual processes within the OR is crucial to a comprehensive time-saving and cost-cutting health care strategy. At our institution, one hour of OR time costs approximately $500, exclusive of supply and personnel costs. Commercially prepared splint packs (SP) contain all components necessary for plaster-of-Paris short-leg splint application and have the potential to decrease splint application time and overall costs by making it a leaner process. We conducted a randomized controlled trial comparing OR time savings between SP use and bulk supply (BS) splint application. Methods: Fifty consecutive adult operative patients on whom post-operative short-leg splint immobilization was indicated were randomized to either a control group using BS or an experimental group using SP. One orthopaedic surgeon (EMB) prepared and applied all of the splints in a standardized fashion. Retrieval time, preparation time, splint application time, and total splinting time for both groups were measured and statistically analyzed. Results: The retrieval time, preparation time and total splinting time were significantly less (p<0.001) in the SP group compared with the BS group. There was no significant difference in application time between the SP group and BS group. Conclusion: The use of SP made the splinting process leaner, saving an average of 2 minutes 52 seconds in total splinting time compared to BS and making it an effective cost-cutting and time-saving technique. For high-volume ORs, use of splint packs may contribute to substantial time and cost savings without impacting patient safety. PMID:26894212

  5. Real-time optical image processing techniques

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1988-01-01

    Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse-frequency-modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). Micro-channel spatial light modulators are modified via the Fabry-Perot method to achieve the high-gamma operation required for nonlinear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness of the thresholding and also the need for a higher space-bandwidth product (SBP) for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low-frequency, low-bias mode. Cascading two LCLVs should also provide enough gamma for nonlinear processing. In this case, the SBP of the LCLV is sufficient, but the uniformity of the LCLV needs improvement. Applications investigated include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for dynamic range compression of an input image in real time, using GaAs photorefractive crystals, is reported. Finally, a new architecture for nonlinear multiple-sensory neural processing is suggested.

  6. Modeling the Technological Process for Harvesting of Agricultural Produce

    NASA Astrophysics Data System (ADS)

    Shepelev, S. D.; Shepelev, V. D.; Almetova, Z. V.; Shepeleva, N. P.; Cheskidov, M. V.

    2018-01-01

    Substantiating the efficiency and the parameters of harvesting as a technological process makes it possible to reduce the cost of production and increase the profit of enterprises. The efficiency of combine harvesters can be increased, even when the level of technical equipment declines, through efficient daily and seasonal operating modes. Accordingly, the correlation between daily operational time and the seasonal load of combine harvesters is established, showing that an increase in seasonal load prolongs the daily operational time of the harvesters. The effective seasonal load can be increased by a reasonable ratio of crop varieties according to their ripening periods, thereby reducing the necessary number of machines by up to 40%. Through timing studies and field testing, the utilization factor of useful shift time of combine harvesters and the efficient operating modes of the machines are determined, and alternatives for improving the technical readiness of combine harvesters are identified.

  7. Controlling Real-Time Processes On The Space Station With Expert Systems

    NASA Astrophysics Data System (ADS)

    Leinweber, David; Perry, John

    1987-02-01

    Many aspects of space station operations involve continuous control of real-time processes. These processes include electrical power system monitoring, propulsion system health and maintenance, environmental and life support systems, space suit checkout, on-board manufacturing, and servicing of attached vehicles such as satellites, shuttles, orbital maneuvering vehicles, orbital transfer vehicles and remote teleoperators. Traditionally, monitoring of these critical real-time processes has been done by trained human experts monitoring telemetry data. However, the long duration of space station missions and the high cost of crew time in space creates a powerful economic incentive for the development of highly autonomous knowledge-based expert control procedures for these space stations. In addition to controlling the normal operations of these processes, the expert systems must also be able to quickly respond to anomalous events, determine their cause and initiate corrective actions in a safe and timely manner. This must be accomplished without excessive diversion of system resources from ongoing control activities and any events beyond the scope of the expert control and diagnosis functions must be recognized and brought to the attention of human operators. Real-time sensor based expert systems (as opposed to off-line, consulting or planning systems receiving data via the keyboard) pose particular problems associated with sensor failures, sensor degradation and data consistency, which must be explicitly handled in an efficient manner. A set of these systems must also be able to work together in a cooperative manner. This paper describes the requirements for real-time expert systems in space station control, and presents prototype implementations of space station expert control procedures in PICON (process intelligent control). PICON is a real-time expert system shell which operates in parallel with distributed data acquisition systems. 
It incorporates a specialized inference engine with a scheduling component designed to match the allocation of system resources to the operational requirements of real-time control systems. Innovative knowledge engineering techniques used in PICON to facilitate the development of real-time sensor-based expert systems that use the special features of the inference engine are illustrated in the prototype examples.

  8. Design of an automatic production monitoring system on job shop manufacturing

    NASA Astrophysics Data System (ADS)

    Prasetyo, Hoedi; Sugiarto, Yohanes; Rosyidi, Cucuk Nur

    2018-02-01

    Every production process requires a monitoring system so that the desired efficiency and productivity can be monitored at any time. Such a system is also needed in job-shop manufacturing, which is mainly influenced by the manufacturing lead time. Processing time is one of the factors that affect the manufacturing lead time. In a conventional company, the recording of processing time is done manually by the operator on a sheet of paper, a method that is prone to errors. This paper aims to overcome this problem by creating a system that records and monitors the processing time automatically. The solution is realized by utilizing an electric current sensor, barcodes, RFID, a wireless network, and a Windows-based application. An automatic monitoring device is attached to the production machine and equipped with a touch-screen LCD so that the operator can use it easily. Operator identity is recorded through RFID embedded in the operator's ID card. The workpiece data are retrieved from the database by scanning the barcode listed on its monitoring sheet. A sensor is mounted on the machine to measure the actual machining time. The system's outputs are the actual processing time and machine capacity information. The system is connected wirelessly to a workshop planning application belonging to the firm. Test results indicated that all functions of the system run properly. The system successfully enables supervisors, PPIC, or higher-level management staff to monitor processing time quickly and with better accuracy.
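
    The current-sensor approach described above amounts to thresholding the motor-current signal and accumulating the time the machine spends above threshold. A minimal sketch, with invented samples and threshold:

```python
import numpy as np

# Hypothetical 1 Hz samples of spindle-motor current (amps); values invented.
current = np.array([0.2, 0.3, 6.5, 7.1, 6.8, 7.0, 0.4, 6.9, 7.2, 0.3])

THRESHOLD = 2.0                       # above this, the machine is cutting
machining = current > THRESHOLD
actual_time_s = int(machining.sum())  # one sample per second

print(f"actual machining time: {actual_time_s} s of {len(current)} s logged")
```

    In the real system this accumulated time, tagged with the operator's RFID and the workpiece barcode, is what replaces the manual paper record.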

  9. Spacelab Mission Implementation Cost Assessment (SMICA)

    NASA Technical Reports Server (NTRS)

    Guynes, B. V.

    1984-01-01

    A total savings of approximately 20 percent is attainable if: (1) mission management and ground processing schedules are compressed; (2) the equipping, staffing, and operating of the Payload Operations Control Center are revised; and (3) methods of working with experiment developers are changed. The development of a new mission implementation technique, which includes mission definition, experiment development, and mission integration/operations, is examined. The Payload Operations Control Center is to relocate and utilize new computer equipment to produce cost savings. Methods of reducing costs by minimizing the Spacelab and payload processing time during pre- and post-mission operations at KSC are analyzed. The changes required to reduce costs in the analytical integration process are studied. The influence of time, requirements accountability, and risk on costs is discussed. Recommendations for cost reductions developed by the Spacelab Mission Implementation Cost Assessment study are listed.

  10. 40 CFR 63.1542 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... materials are introduced into a sinter machine, blast furnace, or dross furnace. Dross furnace means any... which material is prepared for charging to a sinter machine or smelting furnace or other lead processing operation. Operating time means the period of time in hours that an affected source is in operation...

  11. 40 CFR 63.1542 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... materials are introduced into a sinter machine, blast furnace, or dross furnace. Dross furnace means any... which material is prepared for charging to a sinter machine or smelting furnace or other lead processing operation. Operating time means the period of time in hours that an affected source is in operation...

  12. 40 CFR 63.1542 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... materials are introduced into a sinter machine, blast furnace, or dross furnace. Dross furnace means any... which material is prepared for charging to a sinter machine or smelting furnace or other lead processing operation. Operating time means the period of time in hours that an affected source is in operation...

  13. Data Telemetry and Acquisition System for Acoustic Signal Processing Investigations.

    DTIC Science & Technology

    1996-02-20

    were VME-based computer systems operating under the VxWorks real-time operating system. Each system shared a common hardware and software...real-time operating system. It interfaces to the Berg PCM Decommutator board, which searches for the embedded synchronization word in the data and re...software were built on top of this architecture. The multi-tasking, message queue and memory management facilities of the VxWorks real-time operating system are

  14. Time Warp Operating System, Version 2.5.1

    NASA Technical Reports Server (NTRS)

    Bellenot, Steven F.; Gieselman, John S.; Hawley, Lawrence R.; Peterson, Judy; Presley, Matthew T.; Reiher, Peter L.; Springer, Paul L.; Tupman, John R.; Wedel, John J., Jr.; Wieland, Frederick P.; hide

    1993-01-01

    Time Warp Operating System, TWOS, is special purpose computer program designed to support parallel simulation of discrete events. Complete implementation of Time Warp software mechanism, which implements distributed protocol for virtual synchronization based on rollback of processes and annihilation of messages. Supports simulations and other computations in which both virtual time and dynamic load balancing used. Program utilizes underlying resources of operating system. Written in C programming language.

  15. Hurricane Wave Topography and Directional Wave Spectra in Near Real-Time

    DTIC Science & Technology

    2005-09-30

    Develop and/or modify the real-time operating system and analysis techniques and programs of the NASA Scanning Radar Altimeter (SRA) to process the...Wayne Wright is responsible for the real-time operating system of the SRA and making whatever modifications are required to enable near real-time

  16. From Prime to Extended Mission: Evolution of the MER Tactical Uplink Process

    NASA Technical Reports Server (NTRS)

    Mishkin, Andrew H.; Laubach, Sharon

    2006-01-01

    To support a 90-day surface mission for two robotic rovers, the Mars Exploration Rover mission designed and implemented an intensive tactical operations process, enabling daily commanding of each rover. Using a combination of new processes, custom software tools, a Mars-time staffing schedule, and seven-day-a-week operations, the MER team was able to compress the traditional weeks-long command-turnaround for a deep space robotic mission to about 18 hours. However, the pace of this process was never intended to be continued indefinitely. Even before the end of the three-month prime mission, MER operations began evolving towards greater sustainability. A combination of continued software tool development, increasing team experience, and availability of reusable sequences first reduced the mean process duration to approximately 11 hours. The number of workshifts required to perform the process dropped, and the team returned to a modified 'Earth-time' schedule. Additional process and tool adaptation eventually provided the option of planning multiple Martian days of activity within a single workshift, making 5-day-a-week operations possible. The vast majority of the science team returned to their home institutions, continuing to participate fully in the tactical operations process remotely. MER has continued to operate for over two Earth-years as many of its key personnel have moved on to other projects, the operations team and budget have shrunk, and the rovers have begun to exhibit symptoms of aging.

  17. Implementation of Canny and Isotropic Operator with Power Law Transformation to Identify Cervical Cancer

    NASA Astrophysics Data System (ADS)

    Amalia, A.; Rachmawati, D.; Lestari, I. A.; Mourisa, C.

    2018-03-01

    Colposcopy has been used primarily to diagnose pre-cancerous and cancerous lesions because the procedure gives a magnified view of the tissues of the vagina and the cervix. However, the poor quality of colposcopy images sometimes makes it challenging for physicians to recognize and analyze them. Generally, image-processing approaches to identifying cervical cancer implement a complex classification or clustering method. In this study, we wanted to prove that cervical cancer can be identified by applying only edge detection to the colposcopy image. We implement and compare two edge-detection operators: the isotropic and Canny operators. The research methodology in this paper is composed of image-processing, training, and testing stages. In the image-processing step, the colposcopy image is transformed by an nth-root power-law transformation to obtain a better detection result, followed by the edge-detection process. Training is the process of labelling all dataset images with the cervical cancer stage; this process involved a pathology doctor as an expert in diagnosing the colposcopy images as a reference. Testing is the process of deciding the cancer-stage classification by comparing the similarity of a colposcopy image in the testing stage with the images resulting from the training process. We used 30 images as a dataset. Both operators achieve the same accuracy of 80%. The average running time is 0.3619206 ms for the Canny operator and 1.49136262 ms for the isotropic operator. The results show that the Canny operator is better than the isotropic operator because it generates more precise edges in less time.
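
    The paper's pipeline (power-law transform, then edge detection) can be sketched with NumPy. The sketch below uses an isotropic-style Sobel gradient magnitude rather than full Canny (no non-maximum suppression or hysteresis), and the image and threshold are invented.

```python
import numpy as np

def power_law(img, n=2.0):
    """Nth-root power-law transform s = r**(1/n) on a [0, 1] image."""
    return img ** (1.0 / n)

def sobel_edges(img, thresh=0.5):
    """Isotropic-style gradient-magnitude edge map (not full Canny:
    no non-maximum suppression or hysteresis thresholding)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):                     # small explicit correlation
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

img = np.zeros((16, 16))
img[:, 8:] = 0.81                          # synthetic vertical step edge
edges = sobel_edges(power_law(img))
print("edge columns:", sorted({int(c) for c in np.where(edges)[1]}))
```

    The power-law step brightens the dark side of weak edges before the gradient is taken, which is the role the nth-root transform plays in the paper.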

  18. The Implementation of Payload Safety in an Operational Environment

    NASA Technical Reports Server (NTRS)

    Cissom, R. D.; Horvath, Tim J.; Watson, Kristi S.; Rogers, Mark N. (Technical Monitor); Vanhooser, T. (Technical Monitor)

    2002-01-01

    The objective of this paper is to define the safety life-cycle process for a payload beginning with the output of the Payload Safety Review Panel and continuing through the life of the payload on-orbit. It focuses on the processes and products of the operations safety implementation through the increment preparations and real-time operations processes. In addition, the paper addresses the role of the Payload Operations and Integration Center and the interfaces to the International Partner Payload Control Centers.

  19. Study on Operation Optimization of Pumping Station's 24 Hours Operation under Influences of Tides and Peak-Valley Electricity Prices

    NASA Astrophysics Data System (ADS)

    Yi, Gong; Jilin, Cheng; Lihua, Zhang; Rentian, Zhang

    2010-06-01

    According to different processes of tides and peak-valley electricity prices, this paper determines the optimal start-up time for a pumping station's 24-hour operation in both the rated state and the adjusted-blade-angle state, based on the optimization objective function and optimization model for a single-unit pump's 24-hour operation, taking JiangDu No. 4 Pumping Station as an example. The paper also identifies the following regularities between the optimal start-up time of the pumping station and the daily processes of tides and peak-valley electricity prices within a month: (1) In both the rated and adjusted-blade-angle states, the optimal start-up time for the pumping station's 24-hour operation depends on the time of tide generation on the same day and varies with the tidal process. There are mainly two optimal start-up times: the time of tide generation and 12 hours after it. (2) In the rated state, the optimal start-up time on each day of a month exhibits symmetry from the 29th of one month to the 28th of the next in the lunar calendar. The time of tide generation usually falls in a period of peak or valley electricity price. A higher electricity price corresponds to a higher minimum unit cost of water pumping; that is, the minimum unit cost depends on the peak-valley electricity price at the time of tide generation on the same day. (3) In the adjusted-blade-angle state, the minimum unit cost of water pumping in 24-hour operation depends on the process of peak-valley electricity prices, and 4.85%-5.37% of the minimum unit cost is saved compared with the rated state.
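
    The start-up-time optimization reduces to evaluating the pumping cost for each candidate start hour against the tariff profile. The tariff, pump power, and run length below are invented for illustration, not JiangDu No. 4 data:

```python
# Toy peak/valley tariff: price per kWh for each hour of the day (invented).
price = [0.3] * 8 + [1.0] * 4 + [0.6] * 5 + [1.0] * 5 + [0.3] * 2  # 24 hours

POWER_KW = 800.0      # hypothetical single-unit pump power
RUN_HOURS = 10        # pump must run 10 consecutive hours

def cost(start):
    """Electricity cost of a run beginning at the given start hour."""
    return sum(POWER_KW * price[(start + h) % 24] for h in range(RUN_HOURS))

best = min(range(24), key=cost)
print(f"cheapest start hour: {best:02d}:00, cost = {cost(best):.0f}")
```

    The real problem adds the tidal head to the objective, so the optimum shifts with tide generation time as the abstract's regularities describe.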

  20. Environmental assessment of alternative pasteurization technologies for fluid milk production using process simulation

    USDA-ARS?s Scientific Manuscript database

    Fluid milk processing (FMP) has significant environmental impact because of its high energy use. High temperature short time (HTST) pasteurization is the third most energy intensive operation comprising about 16% of total energy use, after clean-in-place operations and packaging. Nonthermal processe...

  1. SAR image formation with azimuth interpolation after azimuth transform

    DOEpatents

    Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]

    2008-07-08

    Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.
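
    The order of operations described here, transform first and interpolate afterwards, can be sketched in one dimension: the data are Fourier-transformed, then re-gridded with a simple 1-D interpolation. The grids and signal below are illustrative, not an actual SAR geometry.

```python
# Minimal 1-D sketch of "transform first, interpolate after": apply the
# azimuth FFT, then re-grid the spectrum onto a denser rectangular grid.
import numpy as np

def transform_then_regrid(signal, old_grid, new_grid):
    spectrum = np.fft.fft(signal)          # azimuth transform
    # np.interp handles real arrays, so interpolate the parts separately.
    re = np.interp(new_grid, old_grid, spectrum.real)
    im = np.interp(new_grid, old_grid, spectrum.imag)
    return re + 1j * im

n = 64
old = np.arange(n, dtype=float)
new = np.linspace(0.0, n - 1, 2 * n)       # denser output grid
sig = np.cos(2 * np.pi * 4 * old / n)
regridded = transform_then_regrid(sig, old, new)
```

    Because the interpolation acts on an already-transformed spectrum, any interpolation error perturbs only the final image domain rather than being amplified through the transform, which is the benefit the patent abstract describes.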

  2. Influences of operational parameters on phosphorus removal in batch and continuous electrocoagulation process performance.

    PubMed

    Nguyen, Dinh Duc; Yoon, Yong Soo; Bui, Xuan Thanh; Kim, Sung Su; Chang, Soon Woong; Guo, Wenshan; Ngo, Huu Hao

    2017-11-01

    Performance of an electrocoagulation (EC) process in batch and continuous operating modes was thoroughly investigated and evaluated for enhancing wastewater phosphorus removal under various operating conditions, considered individually and in combination: initial phosphorus concentration, wastewater conductivity, current density, and electrolysis time. The results revealed excellent phosphorus removal (72.7-100%) for both modes within 3-6 min of electrolysis, with relatively low energy requirements, i.e., less than 0.5 kWh/m³ of treated wastewater. However, within the scope of the study, phosphorus removal efficiency in the continuous EC operation mode was better than in batch mode. Additionally, the rate and efficiency of phosphorus removal strongly depended on the operational parameters, including wastewater conductivity, initial phosphorus concentration, current density, and electrolysis time. Based on the experimental data, a statistically verified response surface methodology (RSM) model (multiple-factor optimization) was also established to provide further insight and accurately describe the interactive relationships between the process variables, thus optimizing EC process performance. The EC process using iron electrodes is promising for improving wastewater phosphorus removal efficiency, and RSM can be a sustainable tool for predicting EC process performance and explaining the influence of the process variables.
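
    A minimal sketch of the RSM step, fitting a second-order response surface in two factors by least squares, might look like the following. The factor ranges and response coefficients are synthetic stand-ins, not the paper's measurements.

```python
# Illustrative response-surface fit: a second-order polynomial in two
# factors (current density, electrolysis time) fitted by least squares.
import numpy as np

def fit_quadratic_surface(x1, x2, y):
    # Design matrix columns: 1, x1, x2, x1^2, x2^2, x1*x2
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
x1 = rng.uniform(5, 30, 40)   # current density (synthetic units)
x2 = rng.uniform(1, 6, 40)    # electrolysis time, min (synthetic)
# Synthetic noiseless response with known coefficients:
y = 50 + 2*x1 + 8*x2 - 0.03*x1**2 - 0.6*x2**2 + 0.05*x1*x2
beta = fit_quadratic_surface(x1, x2, y)
```

    With real data the fitted surface is then examined for stationary points and factor interactions, which is the "multiple-factor optimization" role RSM plays in the study.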

  3. Modeling and optimum time performance for concurrent processing

    NASA Technical Reports Server (NTRS)

    Mielke, Roland R.; Stoughton, John W.; Som, Sukhamoy

    1988-01-01

    The development of a new graph theoretic model for describing the relation between a decomposed algorithm and its execution in a data flow environment is presented. Called ATAMM, the model consists of a set of Petri net marked graphs useful for representing decision-free algorithms having large-grained, computationally complex primitive operations. Performance time measures which determine computing speed and throughput capacity are defined, and the ATAMM model is used to develop lower bounds for these times. A concurrent processing operating strategy for achieving optimum time performance is presented and illustrated by example.
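
    The flavor of the lower bounds the model yields can be illustrated with the standard marked-graph result: the minimum achievable time between outputs (TBO) is bounded below by the worst circuit's time-per-token ratio. The circuit values below are invented, not taken from the ATAMM paper.

```python
# Toy computation of a throughput lower bound for a marked graph:
# TBO >= max over directed circuits of (total operation time in the
# circuit) / (number of tokens in the circuit).

def tbo_lower_bound(circuits):
    """circuits: iterable of (total_time, token_count) pairs."""
    return max(t / m for t, m in circuits)

# Three hypothetical circuits of a decomposed algorithm's graph:
circuits = [(12.0, 2), (9.0, 1), (20.0, 4)]
bound = tbo_lower_bound(circuits)
```

    In this sketch the 9.0-time, single-token circuit dominates, so no operating strategy can produce outputs faster than one every 9.0 time units; an optimum strategy is one that attains the bound.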

  4. Use of high performance networks and supercomputers for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  5. SAR operational aspects

    NASA Astrophysics Data System (ADS)

    Holmdahl, P. E.; Ellis, A. B. E.; Moeller-Olsen, P.; Ringgaard, J. P.

    1981-12-01

    The basic requirements of the SAR ground segment of ERS-1 are discussed. A system configuration for the real time data acquisition station and the processing and archive facility is depicted. The functions of a typical SAR processing unit (SPU) are specified, and inputs required for near real time and full precision, deferred time processing are described. Inputs and the processing required for provision of these inputs to the SPU are dealt with. Data flow through the systems, and normal and nonnormal operational sequence, are outlined. Prerequisites for maintaining overall performance are identified, emphasizing quality control. The most demanding tasks to be performed by the front end are defined in order to determine types of processors and peripherals which comply with throughput requirements.

  6. Real-Time Nonlinear Optical Information Processing.

    DTIC Science & Technology

    1979-06-01

    operations are presented. One approach realizes the halftone method of nonlinear optical processing in real time by replacing the conventional ... photographic recording medium with a real-time image transducer. In the second approach halftoning is eliminated and the real-time device is used directly

  7. Double-time correlation functions of two quantum operations in open systems

    NASA Astrophysics Data System (ADS)

    Ban, Masashi

    2017-10-01

    A double-time correlation function of two arbitrary quantum operations is studied for a nonstationary open quantum system which is in contact with a thermal reservoir. It includes a usual correlation function, a linear response function, and a weak value of an observable. Time evolution of the correlation function can be derived by means of the time-convolution and time-convolutionless projection operator techniques. For this purpose, a quasidensity operator accompanied by a fictitious field is introduced, which makes it possible to derive explicit formulas for calculating a double-time correlation function in the second-order approximation with respect to the system-reservoir interaction. The derived formula explicitly shows that the quantum regression theorem for calculating the double-time correlation function cannot be used if the thermal reservoir has a finite correlation time. Furthermore, the formula is applied to a pure dephasing process and a linear dissipative process. The quantum regression theorem and the Leggett-Garg inequality are investigated for an open two-level system. The results are compared with those obtained by exact calculation to examine whether the formula is a good approximation.
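
    For orientation, the usual double-time correlation function that these formulas generalize can be written schematically (closed-system form, with a unitary propagator; symbols here are generic, not the paper's notation):

```latex
\langle B(t_2)\,A(t_1)\rangle
  = \operatorname{Tr}\!\bigl[\,B\,U(t_2,t_1)\,A\,\rho(t_1)\,U^{\dagger}(t_2,t_1)\bigr],
\qquad t_2 \ge t_1 .
```

    The quantum regression theorem approximates the exact propagation between $t_1$ and $t_2$ by the reduced (master-equation) propagator of the open system; as the abstract notes, this step fails when the reservoir correlation time is finite.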

  8. Real-Time, General-Purpose, High-Speed Signal Processing Systems for Underwater Research. Proceedings of a Working Level Conference held at Supreme Allied Commander, Atlantic Anti-Submarine Warfare Research Center (SACLANTCEN) on 18-21 September 1979. Part 2. Sessions IV to VI.

    DTIC Science & Technology

    1979-12-01

    ... ACTIVATED, SYSTEM OPERATION AND TESTING. MASCOT PROVIDES: 1. SYSTEM BUILD SOFTWARE COMPILE-TIME CHECKS, 2. RUN-TIME SUPERVISOR KERNEL, 3. MONITOR AND ... Table of Contents (Cont'd): Signal processing language and operating system, by S. Weinstein, pp. 23-1 to 23-12; A modular signal ...

  9. Improving a Dental School's Clinic Operations Using Lean Process Improvement.

    PubMed

    Robinson, Fonda G; Cunningham, Larry L; Turner, Sharon P; Lindroth, John; Ray, Deborah; Khan, Talib; Yates, Audrey

    2016-10-01

    The term "lean production," also known as "Lean," describes a process of operations management pioneered at the Toyota Motor Company that contributed significantly to the success of the company. Although developed by Toyota, the Lean process has been implemented at many other organizations, including those in health care, and should be considered by dental schools in evaluating their clinical operations. Lean combines engineering principles with operations management and improvement tools to optimize business and operating processes. One of the core concepts is relentless elimination of waste (non-value-added components of a process). Another key concept is utilization of the individuals closest to the actual work to analyze and improve the process. When the medical center of the University of Kentucky adopted the Lean process for improving clinical operations, members of the College of Dentistry trained in the process applied the techniques to improve inefficient operations at the Walk-In Dental Clinic. The purpose of this project was to reduce patients' average in-the-door-to-out-the-door time from over four hours to three hours within 90 days. Achievement of this goal was realized by streamlining patient flow and strategically relocating key phases of the process. This initiative resulted in patient benefits such as shortening average in-the-door-to-out-the-door time by over an hour, improving satisfaction by 21%, and reducing negative comments by 24%, as well as providing the opportunity to implement the electronic health record, improving teamwork, and enhancing educational experiences for students. These benefits were achieved while maintaining high-quality patient care with zero adverse outcomes during and two years following the process improvement project.

  10. Sampling Operations on Big Data

    DTIC Science & Technology

    2015-11-29

    ... categories. These include edge sampling methods, where edges are selected by predetermined criteria, and snowball sampling methods, where algorithms start ... Vijay Gadepally, Taylor Herr, Luke Johnson, Lauren Milechin, Maja Milosavljevic, Benjamin A. Miller (Lincoln ...) ... process and disseminate information for discovery and exploration under real-time constraints. Common signal processing operations such as sampling and ...

  11. Integrated payload and mission planning, phase 3. Volume 3: Ground real-time mission operations

    NASA Technical Reports Server (NTRS)

    White, W. J.

    1977-01-01

    The payloads tentatively planned to fly on the first two Spacelab missions were analyzed to examine the cost relationships of providing mission operations support from onboard vs the ground-based Payload Operations Control Center (POCC). The quantitative results indicate that use of a POCC, with data processing capability, to support real-time mission operations is the most cost effective case.

  12. Use telecommunications for real-time process control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zilberman, I.; Bigman, J.; Sela, I.

    1996-05-01

    Process operators desire accurate real-time information to monitor and control product streams and to optimize unit operations. The challenge is how to cost-effectively install sophisticated analytical equipment in harsh environments such as process areas and maintain system reliability. Incorporating telecommunications technology with near-infrared (NIR) spectroscopy may be the bridge that helps operations achieve their online control goals. Coupling communications fiber optics with NIR analyzers enables the probe and sampling system to remain in the field while the crucial analytical equipment is remotely located in a general-purpose area without specialized protection provisions. The case histories show how two refineries used NIR spectroscopy online to track octane levels for reformate streams.

  13. Understanding facilities design parameters for a remanufacturing system

    NASA Astrophysics Data System (ADS)

    Topcu, Aysegul; Cullinane, Thomas

    2005-11-01

    Remanufacturing is rapidly becoming a very important element in the economies of the world. Products such as washing machines, clothes driers, automobile parts, cell phones and a wide range of consumer durable goods are being reclaimed and sent through processes that restore them to levels of operating performance as good as or better than their new-product performance. The operations involved in the remanufacturing process add several new dimensions to the work that must be performed. Disassembly is an operation that rarely appears on the operations chart of a typical production facility. The inspection and test functions in remanufacturing most often involve several more tasks than those involved in the first-time manufacturing cycle. A close evaluation of almost any remanufacturing operation reveals several points in the process at which parts must be cleaned, tested and stored. Although several researchers have focused their work on optimizing the disassembly function and the inspection, test and storage functions, very little research has been devoted to studying the impact of the facilities design on the effectiveness of the remanufacturing process. The purpose of this paper is to delineate the differences between first-time manufacturing operations and remanufacturing operations for durable goods and to identify the features of the facilities design that must be considered if the remanufacturing operations are to be effective.

  14. Sono-leather technology with ultrasound: a boon for unit operations in leather processing - review of our research work at Central Leather Research Institute (CLRI), India.

    PubMed

    Sivakumar, Venkatasubramanian; Swaminathan, Gopalaraman; Rao, Paruchuri Gangadhar; Ramasami, Thirumalachari

    2009-01-01

    Ultrasound is a sound wave with a frequency above the human audible range of 16 Hz to 16 kHz. In recent years, numerous unit operations involving physical as well as chemical processes have been reported to be enhanced by ultrasonic irradiation. The benefits include improvement in process efficiency, reduction of process time, performing the processes under milder conditions and avoiding the use of some toxic chemicals to achieve cleaner processing; ultrasound could thus serve as an advanced technique for augmenting these processes. The important point here is that ultrasonic irradiation is a physical method of activation rather than the use of chemical entities. Detailed studies have been made of unit operations related to leather, such as diffusion rate enhancement through the porous leather matrix, cleaning, degreasing, tanning, dyeing, fatliquoring, the oil-water emulsification process and solid-liquid tannin extraction from vegetable tanning materials, as well as of precipitation reactions in wastewater treatment. The fundamental mechanism involved in these processes is ultrasonic cavitation in liquid media, complemented by some process-specific mechanisms. For instance, possible real-time reversible pore-size changes during ultrasound propagation through the skin/leather matrix could be a reason for the diffusion rate enhancement in leather processing, as reported for the first time. Exhaustive scientific research work has been carried out in this area by our group in the Chemical Engineering Division of CLRI, and most of these benefits have been proven, with publications in valued peer-reviewed international journals. The overall results indicate about a 2-5-fold increase in process efficiency under the given process conditions for various unit operations, with additional benefits. Scale-up studies are underway for converting these concepts into viable larger-scale operations.
In the present paper, summary of our research findings from employing this technique in various unit operations such as cleaning, diffusion, emulsification, particle-size reduction, solid-liquid leaching (tannin and natural dye extraction) as well as precipitation has been presented.

  15. THE WASHINGTON DATA PROCESSING TRAINING STORY.

    ERIC Educational Resources Information Center

    MCKEE, R.L.

    A DATA PROCESSING TRAINING PROGRAM IN WASHINGTON HAD 10 DATA PROCESSING CENTERS IN OPERATION AND EIGHT MORE IN VARIOUS STAGES OF PLANNING IN 1963. THESE CENTERS WERE FULL-TIME DAY PREPARATORY 2-YEAR POST-HIGH SCHOOL TECHNICIAN TRAINING PROGRAMS, OPERATED AND ADMINISTERED BY THE LOCAL BOARDS OF EDUCATION. EACH SCHOOL HAD A COMPLETE DATA PROCESSING…

  16. Vortex information display system program description manual. [data acquisition from laser Doppler velocimeters and real time operation

    NASA Technical Reports Server (NTRS)

    Conway, R.; Matuck, G. N.; Roe, J. M.; Taylor, J.; Turner, A.

    1975-01-01

    A vortex information display system is described which provides flexible control through system-user interaction for collecting wing-tip-trailing vortex data, processing this data in real time, displaying the processed data, storing raw data on magnetic tape, and post processing raw data. The data is received from two asynchronous laser Doppler velocimeters (LDV's) and includes position, velocity, and intensity information. The raw data is written onto magnetic tape for permanent storage and is also processed in real time to locate vortices and plot their positions as a function of time. The interactive capability enables the user to make real time adjustments in processing data and provides a better definition of vortex behavior. Displaying the vortex information in real time produces a feedback capability to the LDV system operator allowing adjustments to be made in the collection of raw data. Both raw data and processing can be continually upgraded during flyby testing to improve vortex behavior studies. The post-analysis capability permits the analyst to perform in-depth studies of test data and to modify vortex behavior models to improve transport predictions.

  17. Real-time solar magnetograph operation system software design and user's guide

    NASA Technical Reports Server (NTRS)

    Wang, C.

    1984-01-01

    The Real Time Solar Magnetograph (RTSM) Operation system software design on PDP11/23+ is presented along with the User's Guide. The RTSM operation software is for real time instrumentation control, data collection and data management. The data is used for vector analysis, plotting or graphics display. The processed data is then easily compared with solar data from other sources, such as the Solar Maximum Mission (SMM).

  18. A real-time programming system.

    PubMed

    Townsend, H R

    1979-03-01

    The paper describes a Basic Operating and Scheduling System (BOSS) designed for a small computer. User programs are organised as self-contained modular 'processes' and the way in which the scheduler divides the time of the computer equally between them, while arranging for any process which has to respond to an interrupt from a peripheral device to be given the necessary priority, is described in detail. Next the procedures provided by the operating system to organise communication between processes are described, and how they are used to construct dynamically self-modifying real-time systems. Finally, the general philosophy of BOSS and applications to a multi-processor assembly are discussed.
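
    A toy sketch of the scheduling policy described, equal time slices with interrupt-driven processes given priority, might look like the following. The process names and interrupt timing are invented, and this is only an illustration of the policy, not BOSS itself.

```python
# Toy round-robin scheduler in the spirit of BOSS: processes share time
# equally, but a process whose interrupt arrives jumps the ready queue.
from collections import deque

def schedule(processes, slices, interrupts):
    """processes: list of names; interrupts: {slice_index: name}.
    Returns the run order over `slices` time slices."""
    ready = deque(processes)
    order = []
    for tick in range(slices):
        urgent = interrupts.get(tick)
        if urgent in ready:
            ready.remove(urgent)      # interrupt handler jumps the queue
            ready.appendleft(urgent)
        current = ready.popleft()
        order.append(current)
        ready.append(current)         # equal sharing otherwise
    return order

# Hypothetical processes; "tty" receives an interrupt at slice 1.
order = schedule(["editor", "logger", "tty"], 5, {1: "tty"})
```

    Without the interrupt, "tty" would have waited its turn; with it, the scheduler runs it immediately and then resumes equal sharing.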

  19. Study on reservoir time-varying design flood of inflow based on Poisson process with time-dependent parameters

    NASA Astrophysics Data System (ADS)

    Li, Jiqing; Huang, Jing; Li, Jianchang

    2018-06-01

    The time-varying design flood makes full use of the measured data and can provide the reservoir with a basis for both flood control and operation scheduling. This paper adopts the peak-over-threshold method for flood sampling in unit periods and a Poisson process with time-dependent parameters for simulating a reservoir's time-varying design flood. Considering the relationship between the model parameters and the underlying hypotheses, the paper takes the over-threshold intensity, the goodness of fit of the Poisson distribution, and the design flood parameters as the basis for selecting the unit period and threshold, and derives the Longyangxia reservoir's time-varying design flood processes at 9 design frequencies. The time-varying design flood of inflow is closer to the reservoir's actual inflow conditions and can be used to adjust the operating water level in flood season and to plan the utilization of flood resources in the basin.
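
    A standard way to simulate a Poisson process with a time-dependent rate, such as the flood-arrival model here, is Lewis-Shedler thinning. The seasonal rate function below is an illustrative assumption, not the paper's calibrated model.

```python
# Sketch: sample arrival times from a nonhomogeneous Poisson process by
# thinning. Candidates are drawn at a constant majorizing rate rate_max
# and kept with probability rate(t) / rate_max.
import math
import random

def sample_nhpp(rate, t_max, rate_max, rng):
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)   # next candidate arrival
        if t > t_max:
            return events
        if rng.random() < rate(t) / rate_max:
            events.append(t)             # accepted event

# Hypothetical seasonal intensity (events/day) over a 120-day season;
# it must stay below rate_max for thinning to be valid.
rate = lambda t: 0.1 + 0.15 * math.sin(math.pi * t / 120.0) ** 2
rng = random.Random(42)
events = sample_nhpp(rate, 120.0, 0.25, rng)
```

    Over-threshold flood peaks sampled per unit period would play the role of the accepted events, with the fitted time-dependent intensity replacing the toy rate function.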

  20. Overview of the DART project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, K.R.; Hansen, F.R.; Napolitano, L.M.

    1992-01-01

    DART (DSP Array for Reconfigurable Tasks) is a parallel architecture of two high-performance DSP (digital signal processing) chips with the flexibility to handle a wide range of real-time applications. Each of the 32-bit floating-point DSP processors in DART is programmable in a high-level language ("C" or Ada). We have added extensions to the real-time operating system used by DART in order to support parallel processing. The combination of high-level language programmability, a real-time operating system, and parallel processing support significantly reduces the development cost of application software for signal processing and control applications. We have demonstrated this capability by using DART to reconstruct images in the prototype VIP (Video Imaging Projectile) groundstation.

  2. A Versatile Image Processor For Digital Diagnostic Imaging And Its Application In Computed Radiography

    NASA Astrophysics Data System (ADS)

    Blume, H.; Alexandru, R.; Applegate, R.; Giordano, T.; Kamiya, K.; Kresina, R.

    1986-06-01

    In a digital diagnostic imaging department, the majority of operations for handling and processing of images can be grouped into a small set of basic operations, such as image data buffering and storage, image processing and analysis, image display, image data transmission and image data compression. These operations occur in almost all nodes of the diagnostic imaging communications network of the department. An image processor architecture was developed in which each of these functions has been mapped into hardware and software modules. The modular approach has advantages in terms of economics, service, expandability and upgradeability. The architectural design is based on the principles of hierarchical functionality and distributed and parallel processing, and aims at real-time response. Parallel processing and real-time response are facilitated in part by a dual bus system: a VME control bus and a high-speed image data bus consisting of 8 independent parallel 16-bit busses, capable of handling a combined rate of up to 144 MBytes/sec. The presented image processor is versatile enough to meet the video-rate processing needs of digital subtraction angiography, the large pixel matrix processing requirements of static projection radiography, or the broad range of manipulation and display needs of a multi-modality diagnostic work station. Several hardware modules are described in detail. For illustrating the capabilities of the image processor, processed 2000 x 2000 pixel computed radiographs are shown and estimated computation times for executing the processing operations are presented.

  3. Future electro-optical sensors and processing in urban operations

    NASA Astrophysics Data System (ADS)

    Grönwall, Christina; Schwering, Piet B.; Rantakokko, Jouni; Benoist, Koen W.; Kemp, Rob A. W.; Steinvall, Ove; Letalick, Dietmar; Björkert, Stefan

    2013-10-01

    In the electro-optical sensors and processing in urban operations (ESUO) study we pave the way for the European Defence Agency (EDA) group of Electro-Optics experts (IAP03) for a common understanding of the optimal distribution of processing functions between the different platforms. Combinations of local, distributed and centralized processing are proposed. In this way one can match processing functionality to the required power, and available communication systems data rates, to obtain the desired reaction times. In the study, three priority scenarios were defined. For these scenarios, present-day and future sensors and signal processing technologies were studied. The priority scenarios were camp protection, patrol and house search. A method for analyzing information quality in single and multi-sensor systems has been applied. A method for estimating reaction times for transmission of data through the chain of command has been proposed and used. These methods are documented and can be used to modify scenarios, or be applied to other scenarios. Present day data processing is organized mainly locally. Very limited exchange of information with other platforms is present; this is performed mainly at a high information level. Main issues that arose from the analysis of present-day systems and methodology are the slow reaction time due to the limited field of view of present-day sensors and the lack of robust automated processing. Efficient handover schemes between wide and narrow field of view sensors may however reduce the delay times. The main effort in the study was in forecasting the signal processing of EO-sensors in the next ten to twenty years. Distributed processing is proposed between hand-held and vehicle based sensors. This can be accompanied by cloud processing on board several vehicles. 
Additionally, to perform sensor fusion on sensor data originating from different platforms, and making full use of UAV imagery, a combination of distributed and centralized processing is essential. There is a central role for sensor fusion of heterogeneous sensors in future processing. The changes that occur in the urban operations of the future due to the application of these new technologies will be the improved quality of information, with shorter reaction time, and with lower operator load.

  4. Addressing the medicinal chemistry bottleneck: a lean approach to centralized purification.

    PubMed

    Weller, Harold N; Nirschl, David S; Paulson, James L; Hoffman, Steven L; Bullock, William H

    2012-09-10

    The use of standardized lean manufacturing principles to improve drug discovery productivity is often thought to be at odds with fostering innovation. This manuscript describes how selective implementation of a lean optimized process, in this case centralized purification for medicinal chemistry, can improve operational productivity and increase scientist time available for innovation. A description of the centralized purification process is provided along with both operational and impact (productivity) metrics, which indicate lower cost, higher output, and presumably more free time for innovation as a result of the process changes described.

  5. [Improving operating room efficiency: an observational and multidimensional approach in the San Camillo-Forlanini Hospital, Rome].

    PubMed

    Mitello, Lucia; D'Alba, Fabrizio; Milito, Francesca; Monaco, Cinzia; Orazi, Daniela; Battilana, Daniela; Marucci, Anna Rita; Longo, Angelo; Latina, Roberto

    2017-01-01

    The management of operating rooms (ORs) is a complex process which requires an effective organizational scheme. For a more convenient allocation of resources, a rigorous plan for monitoring OR performance is needed, and all necessary actions should be taken to improve the quality of the planning and scheduling procedure. Between April and December 2016, an organizational analysis was carried out on the performance of the operating block of the A.O. San Camillo-Forlanini Hospital, applying the "process management" approach to OR efficiency. The project involved two different surgical areas of the same operating block: multi-specialist and elective surgery (BOE) and cardio-vascular surgery (CCH). The processes were analyzed through the product, patient and safety approaches and from different points of view: the "as-is", process and stakeholder perspectives. Descriptive statistics were used to process the raw data, and Student's t-distribution was used to assess the difference between the two means (significant p value <0.05). The coefficient of variation (CV) was used to describe the variability among the data. The as-is approach allowed us to describe the ORs' inbound activities. For both operating blocks, the most time-demanding weekly commitments turned out to be the inventory management procedures for controlling and stocking medicines, general medical supplies and instruments (130 minutes [SD=±14] for BOE and 30 minutes [SD=±18] for CCH). The average time spent preparing the operating room, calculated separately for the first surgical case, was 27 minutes (SD=±17), while preparation time for the following surgical procedures decreased to 15 minutes (SD=±10), a meaningful difference of 12 minutes. Great variability was registered in CCH due to the unpredictability of these operations (CV 82%).
    The stakeholders' perspective revealed a reasonable level of satisfaction among nurses and surgeons (2.9 vs 2.3, respectively) and among anesthesiologists (2.8 for BOE vs 2.4 for CCH). Being brought to the surgical suite from an external unit seems to have negatively influenced the patients' perception: preparation time turned out to be significantly lower for CCH patients than for BOE ones (p<0.001). The results of the safety-procedure approach highlighted a moderate criticality in terms of cleaning-up time and delay in the starting time of the first surgical case. More effort should be made to avoid slowdowns during the whole process. It is advisable to implement a lean system that may improve the efficiency and quality of the service and reduce waste and unproductive time. This would inevitably generate more positive outcomes.
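
    The variability measure used in the study, the coefficient of variation (CV = SD / mean), can be sketched as follows; the preparation-time samples are hypothetical, not the hospital's data.

```python
# Coefficient of variation: sample standard deviation divided by the
# mean, a unitless measure that lets runs with different average
# durations be compared for relative variability.
import statistics

def coefficient_of_variation(samples):
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical room-preparation times (minutes) for two suites:
stable = [25, 27, 29, 26, 28]
erratic = [10, 45, 22, 70, 15]
cv_stable = coefficient_of_variation(stable)
cv_erratic = coefficient_of_variation(erratic)
```

    A CV of 82%, as reported for CCH, indicates that preparation times routinely deviate from the mean by nearly their own average size, which is why the authors flag those operations as hard to plan.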

  6. Process mapping as a framework for performance improvement in emergency general surgery.

    PubMed

    DeGirolamo, Kristin; D'Souza, Karan; Hall, William; Joos, Emilie; Garraway, Naisan; Sing, Chad Kim; McLaughlin, Patrick; Hameed, Morad

    2017-12-01

    Emergency general surgery conditions are often thought of as being too acute for the development of standardized approaches to quality improvement. However, process mapping, a concept that has been applied extensively in manufacturing quality improvement, is now being used in health care. The objective of this study was to create process maps for small bowel obstruction in an effort to identify potential areas for quality improvement. We used the American College of Surgeons Emergency General Surgery Quality Improvement Program pilot database to identify patients who received nonoperative or operative management of small bowel obstruction between March 2015 and March 2016. This database, patient charts and electronic health records were used to create process maps from the time of presentation to discharge. Eighty-eight patients with small bowel obstruction (33 operative; 55 nonoperative) were identified. Patients who received surgery had a complication rate of 32%. The processes of care from the time of presentation to the time of follow-up were highly elaborate and variable in terms of duration; however, the sequences of care were found to be consistent. We used data visualization strategies to identify bottlenecks in care, and they showed substantial variability in terms of operating room access. Variability in the operative care of small bowel obstruction is high and represents an important improvement opportunity in general surgery. Process mapping can identify common themes, even in acute care, and suggest specific performance improvement measures.

  8. Current and Future Flight Operating Systems

    NASA Technical Reports Server (NTRS)

    Cudmore, Alan

    2007-01-01

    This viewgraph presentation reviews the current real time operating system (RTOS) type in use with current flight systems. A new RTOS model is described, i.e. the process model. Included is a review of the challenges of migrating from the classic RTOS to the Process Model type.

  9. Operations research methods improve chemotherapy patient appointment scheduling.

    PubMed

    Santibáñez, Pablo; Aristizabal, Ruben; Puterman, Martin L; Chow, Vincent S; Huang, Wenhai; Kollmannsberger, Christian; Nordin, Travis; Runzer, Nancy; Tyldesley, Scott

    2012-12-01

    Clinical complexity, scheduling restrictions, and outdated manual booking processes resulted in frequent clerical rework, long waitlists for treatment, and late appointment notification for patients at a chemotherapy clinic in a large cancer center in British Columbia, Canada. A 17-month study was conducted to address booking, scheduling and workload issues and to develop, implement, and evaluate solutions. A review of scheduling practices included process observation and mapping, analysis of historical appointment data, creation of a new performance metric (final appointment notification lead time), and a baseline patient satisfaction survey. Process improvement involved discrete event simulation to evaluate alternative booking practice scenarios, development of an optimization-based scheduling tool to improve scheduling efficiency, and change management for implementation of process changes. Results were evaluated through analysis of appointment data, a follow-up patient survey, and staff surveys. Process review revealed a two-stage scheduling process. Long waitlists and late notification resulted from an inflexible first-stage process. The second-stage process was time consuming and tedious. After a revised, more flexible first-stage process and an automated second-stage process were implemented, the median percentage of appointments exceeding the final appointment notification lead time target of one week was reduced by 57% and median waitlist size decreased by 83%. Patient surveys confirmed increased satisfaction while staff feedback reported reduced stress levels. Significant operational improvements can be achieved through process redesign combined with operations research methods.
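The automated second-stage booking step can be caricatured as packing appointment requests into a chair/slot grid. The sketch below is a greedy toy, not the clinic's optimization-based tool, and all patient names, durations, and capacities are invented.

```python
# Toy appointment scheduler: greedily assign each booking request to the
# earliest chair slot window that fits its duration.

def schedule(requests, n_chairs, slots_per_day):
    # requests: list of (patient, duration_in_slots); place longest first
    chairs = [[None] * slots_per_day for _ in range(n_chairs)]
    plan = {}
    for patient, duration in sorted(requests, key=lambda r: -r[1]):
        placed = False
        for c, chair in enumerate(chairs):
            for start in range(slots_per_day - duration + 1):
                if all(chair[s] is None for s in range(start, start + duration)):
                    for s in range(start, start + duration):
                        chair[s] = patient
                    plan[patient] = (c, start)
                    placed = True
                    break
            if placed:
                break
        if not placed:
            plan[patient] = None  # goes to the waitlist
    return plan

plan = schedule([("A", 4), ("B", 2), ("C", 3)], n_chairs=2, slots_per_day=6)
```

A real booking engine adds clinical constraints (drug preparation lead times, nurse workload), which is where the discrete event simulation and optimization described above earn their keep.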

  10. Evaluation of Improved Pushback Forecasts Derived from Airline Ground Operations Data

    NASA Technical Reports Server (NTRS)

    Carr, Francis; Theis, Georg; Feron, Eric; Clarke, John-Paul

    2003-01-01

    Accurate and timely predictions of airline pushbacks can potentially lead to improved performance of automated decision-support tools for airport surface traffic, thus reducing the variability and average duration of costly airline delays. One factor which affects the realization of these benefits is the level of uncertainty inherent in the turn processes. To characterize this inherent uncertainty, three techniques are developed for predicting time-to-go until pushback as a function of available ground-time; elapsed ground-time; and the status (not-started/in-progress/completed) of individual turn processes (cleaning, fueling, etc.). These techniques are tested against a large and detailed dataset covering approximately 10^4 real-world turn operations obtained through collaboration with Deutsche Lufthansa AG. Even after the dataset is filtered to obtain a sample of turn operations with minimal uncertainty, the standard deviation of forecast error for all three techniques is lower-bounded away from zero, indicating that turn operations have a significant stochastic component. This lower-bound result shows that decision-support tools must be designed to incorporate robust mechanisms for coping with pushback demand stochasticity, rather than treating the pushback demand process as a known deterministic input.
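The residual-uncertainty point can be illustrated in miniature: even a predictor that knows the scheduled ground time leaves scatter in its forecast error. The turn data below are invented, not Lufthansa records.

```python
import statistics

# Predict time-to-go until pushback as scheduled ground time minus elapsed
# time, then measure the spread of the forecast error.
scheduled = 45  # planned ground time, minutes (invented)
turns = [(10, 37), (20, 24), (30, 17), (40, 8), (10, 33), (20, 28)]
errors = [(scheduled - elapsed) - actual for elapsed, actual in turns]
spread = statistics.stdev(errors)  # stays nonzero: the stochastic component
```

The paper's stronger claim is that this spread stays bounded away from zero even with much richer predictors and filtered data.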

  11. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform.

    PubMed

    Cao, Jianfang; Chen, Lichao; Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up processing by approximately 3.4 times on large-scale datasets, which demonstrates the obvious superiority of our method. The proposed algorithm in this study demonstrates both better edge detection performance and improved time performance.
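The Otsu step named above, choosing the threshold that maximizes between-class variance, is compact enough to sketch in pure Python on a synthetic histogram. The MapReduce parallelization and the Canny gradient stages are not reproduced here.

```python
# Minimal Otsu threshold on grayscale pixel values (pure Python sketch).
# In the paper, this threshold feeds the Canny operator's dual thresholds.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(levels):
        w_b += hist[t]                # background weight (pixels <= t)
        if w_b == 0 or w_b == total:
            continue
        sum_b += t * hist[t]
        m_b = sum_b / w_b             # background mean
        m_f = (total_sum - sum_b) / (total - w_b)  # foreground mean
        var_between = w_b * (total - w_b) * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated intensity clusters: the threshold lands between them
img = [10] * 50 + [200] * 50
```

Because the histogram pass is independent per image, this is exactly the kind of work a map task can do in parallel.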

  12. Process for operating equilibrium controlled reactions

    DOEpatents

    Nataraj, Shankar; Carvill, Brian Thomas; Hufton, Jeffrey Raymond; Mayorga, Steven Gerard; Gaffney, Thomas Richard; Brzozowski, Jeffrey Richard

    2001-01-01

    A cyclic process for operating an equilibrium controlled reaction in a plurality of reactors containing an admixture of an adsorbent and a reaction catalyst suitable for performing the desired reaction which is operated in a predetermined timed sequence wherein the heating and cooling requirements in a moving reaction mass transfer zone within each reactor are provided by indirect heat exchange with a fluid capable of phase change at temperatures maintained in each reactor during sorpreaction, depressurization, purging and pressurization steps during each process cycle.

  13. Optimization and planning of operating theatre activities: an original definition of pathways and process modeling.

    PubMed

    Barbagallo, Simone; Corradi, Luca; de Ville de Goyet, Jean; Iannucci, Marina; Porro, Ivan; Rosso, Nicola; Tanfani, Elena; Testi, Angela

    2015-05-17

    The Operating Room (OR) is a key resource of all major hospitals, but it also accounts for up to 40% of resource costs. Improving cost effectiveness while maintaining quality of care is a universal objective. These goals imply an optimization of the planning and scheduling of the activities involved. This is highly challenging due to the inherently variable and unpredictable nature of surgery. Business Process Modeling Notation (BPMN 2.0) was used for the representation of the "OR Process" (defined as the sequence of all elementary steps from "patient ready for surgery" to "patient operated upon") as a general pathway ("path"). The path was then standardized as much as possible while keeping all of the key elements that allow one to address the other steps of planning and the wide, inherent variability in terms of patient specificity. The path was used to schedule OR activity, room-by-room and day-by-day, feeding the process from a "waiting list database" and using a mathematical optimization model with the objective of arriving at an optimized plan. The OR process was defined with special attention paid to flows, timing and resource involvement. Standardization defined an expected operating time for each operation. The optimization model has been implemented and tested on real clinical data. Comparison of the results with the real data shows that using the optimization model allows for the scheduling of about 30% more patients than in actual practice, as well as better exploitation of OR efficiency, increasing the average operating room utilization rate by up to 20%. The optimization of OR activity planning is essential in order to manage the hospital's waiting list. Optimal planning is facilitated by defining the operation as a standard pathway where all variables are taken into account. By allowing precise scheduling, it feeds the process of planning and, further upstream, the management of the waiting list in an interactive and bi-directional dynamic process.
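The waiting-list-to-session step can be sketched as filling a session's capacity from expected operating times, the "standard pathway" durations above. This greedy toy (longest-waiting first) is not the paper's mathematical optimization model, and all case names and durations are invented.

```python
# Toy OR session planner: fill one session's minutes from a waiting list
# without exceeding capacity, prioritizing the longest-waiting patients.

def plan_session(waiting_list, capacity_min):
    # waiting_list: (case_id, expected_minutes, days_waiting)
    chosen, used = [], 0
    for case_id, minutes, _wait in sorted(waiting_list, key=lambda c: -c[2]):
        if used + minutes <= capacity_min:
            chosen.append(case_id)
            used += minutes
    return chosen, used

cases = [("hernia", 60, 40), ("chole", 90, 75), ("thyroid", 120, 10)]
chosen, used = plan_session(cases, capacity_min=180)
```

An exact model would instead search over all subsets (and rooms, and days) to maximize utilization, which is where the reported 30% gain comes from.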

  14. EnergySolution's Clive Disposal Facility Operational Research Model - 13475

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nissley, Paul; Berry, Joanne

    2013-07-01

    EnergySolutions owns and operates a licensed, commercial low-level radioactive waste disposal facility located in Clive, Utah. The Clive site receives low-level radioactive waste from various locations within the United States via bulk truck, containerised truck, enclosed truck, bulk rail-cars, rail boxcars, and rail inter-modals. Waste packages are unloaded, characterized, processed, and disposed of at the Clive site. Examples of low-level radioactive waste arriving at Clive include, but are not limited to, contaminated soil/debris, spent nuclear power plant components, and medical waste. Generators of low-level radioactive waste typically include nuclear power plants, hospitals, national laboratories, and various United States government operated waste sites. Over the past few years, poor economic conditions have significantly reduced the number of shipments to Clive. With less revenue coming in from processing shipments, Clive needed to keep its expenses down if it was going to maintain past levels of profitability. The Operational Research group of EnergySolutions was asked to develop a simulation model to help identify any improvement opportunities that would increase overall operating efficiency and reduce costs at the Clive Facility. The Clive operations research model simulates the receipt, movement, and processing requirements of shipments arriving at the facility. The model includes shipment schedules, processing times of various waste types, labor requirements, shift schedules, and site equipment availability. The Clive operations research model has been developed using the WITNESS™ process simulation software, which is developed by the Lanner Group. The major goals of this project were to: - identify processing bottlenecks that could reduce the turnaround time from shipment arrival to disposal; - evaluate the use (or idle time) of labor and equipment; - project future operational requirements under different forecasted scenarios. 
By identifying processing bottlenecks and unused equipment and/or labor, improvements to operating efficiency could be determined and appropriate cost-saving measures implemented. Model runs forecasting various scenarios helped illustrate potential impacts of certain conditions (e.g. 20% decrease in shipments arrived), variables (e.g. 20% decrease in labor), or other possible situations. (authors)
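The shipment-flow simulation described above can be caricatured as a single processing bay served first-in-first-out: shipments that arrive while the bay is busy queue up, which is exactly how bottlenecks surface. The arrival times and the one-hour processing duration are invented; the actual WITNESS model covers many resources, shifts, and waste types.

```python
# Minimal discrete-event sketch: one processing bay, FIFO queue.
# Returns the completion time of each shipment.

def simulate(arrivals, service_min):
    # arrivals: arrival times in ascending order (hours); fixed service time
    free_at, done = 0.0, {}
    for i, t in enumerate(arrivals):
        start = max(t, free_at)       # wait if the bay is still busy
        free_at = start + service_min
        done[i] = free_at
    return done

finish = simulate([0.0, 0.5, 3.0], service_min=1.0)
```

Comparing completion times against arrival times immediately shows queueing delay, the signature of a bottleneck worth investigating.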

  15. Spitzer Space Telescope Sequencing Operations Software, Strategies, and Lessons Learned

    NASA Technical Reports Server (NTRS)

    Bliss, David A.

    2006-01-01

    The Space Infrared Telescope Facility (SIRTF) was launched in August 2003, and renamed the Spitzer Space Telescope in 2004. Two years of observing the universe in the wavelength range from 3 to 180 microns has yielded enormous scientific discoveries. Since this magnificent observatory has a limited lifetime, maximizing science viewing efficiency (i.e., maximizing time spent executing activities directly related to science observations) was the key operational objective. The strategy employed for maximizing science viewing efficiency was to optimize spacecraft flexibility, adaptability, and use of observation time. The selected approach involved implementation of a multi-engine sequencing architecture coupled with nondeterministic spacecraft and science execution times. This approach, though effective, added much complexity to uplink operations and sequence development. The Jet Propulsion Laboratory (JPL) manages Spitzer's operations. As part of the uplink process, Spitzer's Mission Sequence Team (MST) was tasked with processing observatory inputs from the Spitzer Science Center (SSC) into efficiently integrated, constraint-checked, and modeled review and command products which accommodated the complexity of non-deterministic spacecraft and science event executions without increasing operations costs. The MST developed processes and scripts, and participated in the adaptation of multi-mission core software to enable rapid processing of complex sequences. The MST was also tasked with developing a Downlink Keyword File (DKF) which could instruct Deep Space Network (DSN) stations on how and when to configure themselves to receive Spitzer science data. As MST and uplink operations developed, important lessons were learned that should be applied to future missions, especially those missions which employ command-intensive operations via a multi-engine sequence architecture.

  16. Operational concepts and implementation strategies for the design configuration management process.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trauth, Sharon Lee

    2007-05-01

    This report describes operational concepts and implementation strategies for the Design Configuration Management Process (DCMP). It presents a process-based systems engineering model for the successful configuration management of the products generated during the operation of the design organization as a business entity. The DCMP model focuses on Pro/E and associated activities and information. It can serve as the framework for interconnecting all essential aspects of the product design business. A design operation scenario offers a sense of how to do business at a time when DCMP is second nature within the design organization.

  17. LABORATORY PROCESS CONTROLLER USING NATURAL LANGUAGE COMMANDS FROM A PERSONAL COMPUTER

    NASA Technical Reports Server (NTRS)

    Will, H.

    1994-01-01

    The complex environment of the typical research laboratory requires flexible process control. This program provides natural language process control from an IBM PC or compatible machine. Sometimes process control schedules require changes frequently, even several times per day. These changes may include adding, deleting, and rearranging steps in a process. This program sets up a process control system that can either run without an operator, or be run by workers with limited programming skills. The software system includes three programs. Two of the programs, written in FORTRAN77, record data and control research processes. The third program, written in Pascal, generates the FORTRAN subroutines used by the other two programs to identify the user commands with the user-written device drivers. The software system also includes an input data set which allows the user to define the user commands which are to be executed by the computer. To set the system up the operator writes device driver routines for all of the controlled devices. Once set up, this system requires only an input file containing natural language command lines which tell the system what to do and when to do it. The operator can make up custom commands for operating and taking data from external research equipment at any time of the day or night without the operator in attendance. This process control system requires a personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. The program requires a FORTRAN77 compiler and user-written device drivers. This program was developed in 1989 and has a memory requirement of about 62 Kbytes.
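The core dispatch idea above, mapping natural-language command verbs onto user-written device drivers, can be sketched in a few lines. The verbs, driver behavior, and the 21.5 reading below are all invented, and the original system generated FORTRAN glue code rather than anything like this Python.

```python
# Sketch: a table maps command verbs to user-written driver callables;
# each schedule line is split into a verb plus arguments and dispatched.

def make_controller(drivers):
    def run(line):
        verb, *args = line.split()
        return drivers[verb.lower()](*args)
    return run

log = []
drivers = {
    "open": lambda valve: log.append(f"open {valve}"),
    "read": lambda sensor: 21.5,      # pretend temperature reading
    "wait": lambda secs: log.append(f"wait {secs}s"),
}
run = make_controller(drivers)
run("OPEN valve3")
reading = run("read thermocouple1")
```

Feeding such a controller a file of command lines gives unattended operation, which is the behavior the abstract describes.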

  18. Testing single point incremental forming moulds for rotomoulding operations

    NASA Astrophysics Data System (ADS)

    Afonso, Daniel; de Sousa, Ricardo Alves; Torcato, Ricardo

    2017-10-01

    Low pressure polymer processes as thermoforming or rotational moulding use much simpler moulds than high pressure processes like injection. However, despite the low forces involved in the process, moulds manufacturing for these applications is still a very material, energy and time consuming operation. Particularly in rotational moulding there is no standard for the mould manufacture and very different techniques are applicable. The goal of this research is to develop and validate a method for manufacturing plastically formed sheet metal moulds by single point incremental forming (SPIF) for rotomoulding and rotocasting operations. A Stewart platform based SPIF machine allow the forming of thick metal sheets, granting the required structural stiffness for the mould surface, and keeping a short manufacture lead time and low thermal inertia. The experimental work involves the proposal of a hollow part, design and fabrication of a sheet metal mould using dieless incremental forming techniques and testing its operation in the production of prototype parts.

  19. SIG. Signal Processing, Analysis, & Display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, J.; Lager, D.; Azevedo, S.

    1992-01-22

    SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time- and frequency-domain signals. However, it has been designed to ultimately accommodate other representations for data such as multiplexed signals and complex matrices. Two user interfaces are provided in SIG: a menu mode for the unfamiliar user and a command mode for more experienced users. In both modes errors are detected as early as possible and are indicated by friendly, meaningful messages. An on-line HELP package is also included. A variety of operations can be performed on time- and frequency-domain signals including operations on the samples of a signal, operations on the entire signal, and operations on two or more signals. Signal processing operations that can be performed are digital filtering (median, Bessel, Butterworth, and Chebychev), ensemble average, resample, auto and cross spectral density, transfer function and impulse response, trend removal, convolution, Fourier transform and inverse window functions (Hamming, Kaiser-Bessel), simulation (ramp, sine, pulsetrain, random), and read/write signals. User definable signal processing algorithms are also featured. SIG has many options including multiple commands per line, command files with arguments, commenting lines, defining commands, and automatic execution for each item in a repeat sequence. Graphical operations on signals and spectra include: x-y plots of time signals; real, imaginary, magnitude, and phase plots of spectra; scaling of spectra for continuous or discrete domain; cursor zoom; families of curves; and multiple viewports.
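As a taste of the operations listed, here is a median filter, one of SIG's digital filters, sketched in pure Python. The clamped-window handling at the signal edges is one plausible choice, not necessarily SIG's.

```python
# Median filter: each output sample is the median of a sliding window.
# Good at removing isolated spikes while preserving step edges.

def median_filter(signal, width=3):
    half = width // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        window = sorted(signal[lo:hi])
        out.append(window[len(window) // 2])
    return out

# A single spike is removed while the step edge survives
clean = median_filter([0, 0, 9, 0, 0, 5, 5, 5])
```

Nonlinear filters like this are why SIG lists "median" alongside the classical Bessel/Butterworth/Chebychev designs: no linear filter removes a spike this cleanly.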

  1. Signal Processing, Analysis, & Display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lager, Darrell; Azevado, Stephen

    1986-06-01

    SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time- and frequency-domain signals. However, it has been designed to ultimately accommodate other representations for data such as multiplexed signals and complex matrices. Two user interfaces are provided in SIG - a menu mode for the unfamiliar user and a command mode for more experienced users. In both modes errors are detected as early as possible and are indicated by friendly, meaningful messages. An on-line HELP package is also included. A variety of operations can be performed on time- and frequency-domain signals including operations on the samples of a signal, operations on the entire signal, and operations on two or more signals. Signal processing operations that can be performed are digital filtering (median, Bessel, Butterworth, and Chebychev), ensemble average, resample, auto and cross spectral density, transfer function and impulse response, trend removal, convolution, Fourier transform and inverse window functions (Hamming, Kaiser-Bessel), simulation (ramp, sine, pulsetrain, random), and read/write signals. User definable signal processing algorithms are also featured. SIG has many options including multiple commands per line, command files with arguments, commenting lines, defining commands, and automatic execution for each item in a repeat sequence. Graphical operations on signals and spectra include: x-y plots of time signals; real, imaginary, magnitude, and phase plots of spectra; scaling of spectra for continuous or discrete domain; cursor zoom; families of curves; and multiple viewports.

  3. Process control strategy for ITER central solenoid operation

    NASA Astrophysics Data System (ADS)

    Maekawa, R.; Takami, S.; Iwamoto, A.; Chang, H.-S.; Forgeas, A.; Chalifour, M.

    2016-12-01

    ITER Central Solenoid (CS) pulse operation induces significant flow disturbance in the forced-flow Supercritical Helium (SHe) cooling circuit, which could primarily impact the operation of the cold circulator (SHe centrifugal pump) in the Auxiliary Cold Box (ACB). Numerical studies using Venecia®, SUPERMAGNET and 4C have identified reverse flow at the CS module inlet due to the substantial thermal energy deposition at the inner-most winding. To assess the reliable operation of ACB-CS (the dedicated ACB for the CS), process analyses have been conducted with a dynamic process simulation model developed with the Cryogenic Process REal-time SimulaTor (C-PREST). To implement process control of the hydrodynamic instability, several strategies have been applied and their feasibility evaluated. The paper discusses a control strategy to protect the centrifugal-type cold circulator/compressor operations and its impact on CS cooling.

  4. Numerical Simulation of Nonperiodic Rail Operation Diagram Characteristics

    PubMed Central

    Qian, Yongsheng; Wang, Bingbing; Zeng, Junwei; Wang, Xin

    2014-01-01

    This paper utilizes a cellular automata (CA) model to simulate the process of train operation under the four-aspect color-light signaling system and to obtain the nonperiodic diagram of mixed passenger and freight tracks. The models simulate well how trains are prevented from colliding when stopping and restarting, capture real-time changes in train speeds and displacements, and track the current train states at departure and arrival. Finally, the model produces a train diagram that simulates train operation under different ratios of freight wagons and analyzes parameters of the train-running process, such as time, speed, through capacity, interval departing time, and departing numbers. PMID:25435863
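The CA idea in its barest form: cells along the track hold at most one train, and a train advances only if the cell ahead is free, the collision-prevention rule. The looped 6-cell track and two trains below are invented; the paper's four-aspect model adds signal aspects, speeds, and braking.

```python
# Minimal cellular-automaton step for trains on a looped track.
# Each cell is None (empty) or a train label; all cells update
# synchronously against the old state.

def step(track):
    n = len(track)
    new = [None] * n
    for i, train in enumerate(track):
        if train is None:
            continue
        nxt = (i + 1) % n
        if track[nxt] is None:
            new[nxt] = train      # move forward one cell
        else:
            new[i] = train        # held because the cell ahead is occupied
    return new

track = [None] * 6
track[0], track[1] = "freight", "passenger"
track = step(track)
```

Iterating `step` and logging each train's cell over time is precisely how a simulated operation diagram (position vs. time) is built.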

  5. Systemic Operational Design: An Alternative to Estimate Planning

    DTIC Science & Technology

    2009-05-04

    relationships found in the COE. Framing and campaign design, with emphasis on systems theory, have therefore made their way to the forefront of doctrinal... short explanation of the systems theory behind SOD, examines how the SOD process happens, and compares SOD with the time-proven "Commander's Estimate... Keywords: Systems theory, Campaign planning, Contemporary Operating Environment, Commander's Estimate Process, Operational design

  6. Proportional reasoning as a heuristic-based process: time constraint and dual task considerations.

    PubMed

    Gillard, Ellen; Van Dooren, Wim; Schaeken, Walter; Verschaffel, Lieven

    2009-01-01

    The present study interprets the overuse of proportional solution methods from a dual process framework. Dual process theories claim that analytic operations involve time-consuming executive processing, whereas heuristic operations are fast and automatic. In two experiments to test whether proportional reasoning is heuristic-based, the participants solved "proportional" problems, for which proportional solution methods provide correct answers, and "nonproportional" problems known to elicit incorrect answers based on the assumption of proportionality. In Experiment 1, the available solution time was restricted. In Experiment 2, the executive resources were burdened with a secondary task. Both manipulations induced an increase in proportional answers and a decrease in correct answers to nonproportional problems. These results support the hypothesis that the choice for proportional methods is heuristic-based.

  7. A multiprocessing architecture for real-time monitoring

    NASA Technical Reports Server (NTRS)

    Schmidt, James L.; Kao, Simon M.; Read, Jackson Y.; Weitzenkamp, Scott M.; Laffey, Thomas J.

    1988-01-01

    A multitasking architecture for performing real-time monitoring and analysis using knowledge-based problem solving techniques is described. To handle asynchronous inputs and perform in real time, the system consists of three or more distributed processes which run concurrently and communicate via a message-passing scheme. The Data Management Process acquires, compresses, and routes the incoming sensor data to other processes. The Inference Process consists of a high-performance inference engine that performs a real-time analysis of the state and health of the physical system. The I/O Process receives sensor data from the Data Management Process and status messages and recommendations from the Inference Process, updates its graphical displays in real time, and acts as the interface to the console operator. The distributed architecture has been interfaced to an actual spacecraft (NASA's Hubble Space Telescope) and is able to process the incoming telemetry in real time (i.e., several hundred data changes per second). The system is being used in two locations for different purposes: (1) in Sunnyvale, California, at the Space Telescope Test Control Center, it is used in the preflight testing of the vehicle; and (2) in Greenbelt, Maryland, at NASA/Goddard, it is being used on an experimental basis in flight operations for health and safety monitoring.
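
The three-process pipeline above can be sketched with threads and queues standing in for the distributed processes and their message-passing scheme. All names (`data_management`, `inference`, `io_process`) and the out-of-range "health" rule are illustrative, not the actual system.

```python
# Toy pipeline: Data Management -> Inference -> I/O, via message queues.
import queue
import threading

def data_management(raw, to_inference, to_io):
    """Acquire and 'compress' sensor data, then route it onward."""
    for sample in raw:
        compressed = round(sample, 1)   # stand-in for compression
        to_inference.put(compressed)
        to_io.put(("data", compressed))
    to_inference.put(None)              # end-of-stream marker
    to_io.put(None)

def inference(from_dm, to_io):
    """Flag out-of-range readings, standing in for the inference engine."""
    while (sample := from_dm.get()) is not None:
        if sample > 100.0:
            to_io.put(("alert", sample))
    to_io.put(None)

def io_process(from_any, results, producers=2):
    """Collect messages for 'display' until all producers finish."""
    done = 0
    while done < producers:
        msg = from_any.get()
        if msg is None:
            done += 1
        else:
            results.append(msg)

def run(raw):
    dm_q, io_q, results = queue.Queue(), queue.Queue(), []
    threads = [
        threading.Thread(target=data_management, args=(raw, dm_q, io_q)),
        threading.Thread(target=inference, args=(dm_q, io_q)),
        threading.Thread(target=io_process, args=(io_q, results)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```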

  8. Optimization and Improvement of Test Processes on a Production Line

    NASA Astrophysics Data System (ADS)

    Sujová, Erika; Čierna, Helena

    2018-06-01

    The paper deals with increasing process efficiency on a production line for engine cylinder heads at a company operating in the automotive industry. The goal is to improve and optimize the test processes on the production line. It analyzes options for improving the capacity, availability, and productivity of the output-test processes by using modern technology available on the market. We focused on analyzing operation times before and after optimization of the test processes at specific production sections. By analyzing the measured results, we determined the differences in time before and after the process improvement. We determined the Overall Equipment Effectiveness (OEE) coefficient and, by comparing outputs, confirmed a real improvement in the output-test process for cylinder heads.

  9. Overview of the Smart Network Element Architecture and Recent Innovations

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M.; Mata, Carlos T.; Oostdyk, Rebecca L.

    2008-01-01

    In industrial environments, system operators rely on the availability and accuracy of sensors to monitor processes and detect failures of components and/or processes. The sensors must be networked in such a way that their data is reported to a central human interface, where operators are tasked with making real-time decisions based on the state of the sensors and the components that are being monitored. Incorporating health management functions at this central location aids the operator by automating the decision-making process to suggest, and sometimes perform, the action required by current operating conditions. Integrated Systems Health Management (ISHM) aims to incorporate data from many sources, including real-time and historical data and user input, and extract information and knowledge from that data to diagnose failures and predict future failures of the system. Distributing health management processing to lower levels of the architecture reduces the bandwidth required for ISHM, enhances data fusion, makes systems and processes more robust, and improves resolution for the detection and isolation of failures in a system, subsystem, component, or process. The Smart Network Element (SNE) has been developed at NASA Kennedy Space Center to perform intelligent functions at the sensor and actuator level in support of ISHM.

  10. An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System

    NASA Astrophysics Data System (ADS)

    Helmy, Tarek; Fatai, Anifowose; Sallam, El-Sayed

    PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency-intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have associated drawbacks. An efficient alternative algorithm based on a randomized selection policy has been proposed, demonstrated, confirmed to be efficient and fair on average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the lowest AWT and ATT on average among the non-preemptive scheduling algorithms implemented in this paper.
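
The AWT/ATT metrics and the randomized selection policy can be illustrated with a toy non-preemptive simulation (this is an illustration of the metrics, not the PicOS scheduler; burst times are invented and all jobs are assumed ready at time zero):

```python
# Compare FCFS against a randomized selection order on AWT and ATT.
import random

def schedule_metrics(burst_times, order):
    """AWT and ATT for a non-preemptive run in the given job order."""
    t, waits, turnarounds = 0, [], []
    for job in order:
        waits.append(t)            # job waited until the CPU freed up
        t += burst_times[job]      # run to completion (non-preemptive)
        turnarounds.append(t)      # finish time = turnaround from t=0
    n = len(order)
    return sum(waits) / n, sum(turnarounds) / n

def randomized_order(n, rng):
    """Randomized policy: pick the next job uniformly at random."""
    order = list(range(n))
    rng.shuffle(order)
    return order

bursts = [8, 3, 12, 5]
fcfs_awt, fcfs_att = schedule_metrics(bursts, [0, 1, 2, 3])

rng = random.Random(0)
runs = [schedule_metrics(bursts, randomized_order(4, rng)) for _ in range(1000)]
avg_rand_awt = sum(r[0] for r in runs) / len(runs)
avg_rand_att = sum(r[1] for r in runs) / len(runs)
```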

  11. Methodology for the systems engineering process. Volume 3: Operational availability

    NASA Technical Reports Server (NTRS)

    Nelson, J. H.

    1972-01-01

    A detailed description and explanation of the operational availability parameter is presented. The fundamental mathematical basis for operational availability is developed, and its relationship to a system's overall performance effectiveness is illustrated within the context of identifying specific availability requirements. Thus, in attempting to provide a general methodology for treating both hypothetical and existing availability requirements, the concept of an availability state, in conjunction with the more conventional probability-time capability, is investigated. In this respect, emphasis is focused upon a balanced analytical and pragmatic treatment of operational availability within the system design process. For example, several applications of operational availability to typical aerospace systems are presented, encompassing the techniques of Monte Carlo simulation, system performance availability trade-off studies, analytical modeling of specific scenarios, as well as the determination of launch-on-time probabilities. Finally, an extensive bibliography is provided to indicate further levels of depth and detail of the operational availability parameter.
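
Two of the techniques the abstract names can be sketched briefly: the steady-state availability formula and a small Monte Carlo simulation of alternating up/repair cycles. The parameter values below are illustrative, not from the report.

```python
# Steady-state availability and a Monte Carlo estimate of the same quantity.
import random

def steady_state_availability(mtbf, mttr):
    """A = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

def monte_carlo_availability(mtbf, mttr, cycles=20000, seed=1):
    """Estimate availability by sampling exponential up and repair times."""
    rng = random.Random(seed)
    up = sum(rng.expovariate(1 / mtbf) for _ in range(cycles))
    down = sum(rng.expovariate(1 / mttr) for _ in range(cycles))
    return up / (up + down)
```

With MTBF = 99 h and MTTR = 1 h, both estimates agree on an availability of about 0.99, as expected.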

  12. Metallurgical Plant Optimization Through the use of Flowsheet Simulation Modelling

    NASA Astrophysics Data System (ADS)

    Kennedy, Mark William

    Modern metallurgical plants typically have complex flowsheets and operate on a continuous basis. Real time interactions within such processes can be complex and the impacts of streams such as recycles on process efficiency and stability can be highly unexpected prior to actual operation. Current desktop computing power, combined with state-of-the-art flowsheet simulation software like Metsim, allow for thorough analysis of designs to explore the interaction between operating rate, heat and mass balances and in particular the potential negative impact of recycles. Using plant information systems, it is possible to combine real plant data with simple steady state models, using dynamic data exchange links to allow for near real time de-bottlenecking of operations. Accurate analytical results can also be combined with detailed unit operations models to allow for feed-forward model-based-control. This paper will explore some examples of the application of Metsim to real world engineering and plant operational issues.
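
The recycle interactions described above are the classic reason flowsheet solvers iterate on "tear" streams. A minimal successive-substitution sketch (the unit and numbers are hypothetical; this is not Metsim):

```python
# Steady-state recycle balance: a separator returns a fraction of its
# feed to the mixer, so the mixer feed is fresh_feed + recycle. Iterate
# the tear stream until the mass balance converges.

def solve_recycle(fresh_feed, recycle_frac, tol=1e-9, max_iter=1000):
    """Return (mixer_out, recycle) at steady state."""
    recycle = 0.0
    for _ in range(max_iter):
        mixer_out = fresh_feed + recycle
        new_recycle = recycle_frac * mixer_out
        if abs(new_recycle - recycle) < tol:
            return mixer_out, new_recycle
        recycle = new_recycle
    raise RuntimeError("recycle loop did not converge")
```

The converged mixer flow matches the analytic value fresh_feed / (1 - recycle_frac), and the iteration shows how a recycle silently inflates internal flows well above the fresh feed rate.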

  13. Rise Time. Operational Control Tests for Wastewater Treatment Facilities. Instructor's Manual [and] Student Workbook.

    ERIC Educational Resources Information Center

    Carnegie, John W.

    The rise time test (along with the settleometer procedure) is used to monitor sludge behavior in the secondary clarifier of an activated sludge system. The test monitors the effect of the nitrification/denitrification process and aids the operator in determining optimum clarifier sludge detention time and, to some extent, optimum degree of…

  14. Intelligence Community Forum

    DTIC Science & Technology

    2008-11-05

    EEG measures electrical activity in the brain; a practical tool for applications such as real-time monitoring or...Cognitive Systems Device Development & Processing Methods: brain activity can be monitored in real time in operational environments with EEG...biological and cognitive findings about the user to customize the learning environment. Neurofeedback: present the user with real-time feedback

  15. US GEOLOGICAL SURVEY'S NATIONAL SYSTEM FOR PROCESSING AND DISTRIBUTION OF NEAR REAL-TIME HYDROLOGICAL DATA.

    USGS Publications Warehouse

    Shope, William G.; ,

    1987-01-01

    The US Geological Survey is utilizing a national network of more than 1000 satellite data-collection stations, four satellite-relay direct-readout ground stations, and more than 50 computers linked together in a private telecommunications network to acquire, process, and distribute hydrological data in near real-time. The four Survey offices operating a satellite direct-readout ground station provide near real-time hydrological data to computers located in other Survey offices through the Survey's Distributed Information System. The computerized distribution system permits automated data processing and distribution to be carried out in a timely manner under the control and operation of the Survey office responsible for the data-collection stations and for the dissemination of hydrological information to the water-data users.

  16. Structure-oriented versus process-oriented approach to enhance efficiency for emergency room operations: what lessons can we learn?

    PubMed

    Hwang, Taik Gun; Lee, Younsuk; Shin, Hojung

    2011-01-01

    The efficiency and quality of a healthcare system can be defined as interactions among the system structure, processes, and outcome. This article examines the effect of structural adjustment (change in floor plan or layout) and process improvement (critical pathway implementation) on performance of emergency room (ER) operations for acute cerebral infarction patients. Two large teaching hospitals participated in this study: Korea University (KU) Guro Hospital and KU Anam Hospital. The administration of Guro adopted a structure-oriented approach in improving its ER operations while the administration of Anam employed a process-oriented approach, facilitating critical pathways and protocols. To calibrate improvements, the data for time interval, length of stay, and hospital charges were collected, before and after the planned changes were implemented at each hospital. In particular, time interval is the most essential measure for handling acute stroke patients because patients' survival and recovery are affected by the promptness of diagnosis and treatment. Statistical analyses indicated that both redesign of layout at Guro and implementation of critical pathways at Anam had a positive influence on most of the performance measures. However, reduction in time interval was not consistent at Guro, demonstrating delays in processing time for a few processes. The adoption of critical pathways at Anam appeared more effective in reducing time intervals than the structural rearrangement at Guro, mainly as a result of the extensive employee training required for a critical pathway implementation. Thus, hospital managers should combine structure-oriented and process-oriented strategies to maximize effectiveness of improvement efforts.

  17. 40 CFR 63.1159 - Operational and equipment standards for existing, new, or reconstructed sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Pollutants for Steel Pickling-HCl Process Facilities and Hydrochloric Acid Regeneration Plants § 63.1159... regeneration plant. The owner or operator of an affected plant must operate the affected plant at all times...

  18. 40 CFR 63.1159 - Operational and equipment standards for existing, new, or reconstructed sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Pollutants for Steel Pickling-HCl Process Facilities and Hydrochloric Acid Regeneration Plants § 63.1159... regeneration plant. The owner or operator of an affected plant must operate the affected plant at all times...

  19. 40 CFR 63.1159 - Operational and equipment standards for existing, new, or reconstructed sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Pollutants for Steel Pickling-HCl Process Facilities and Hydrochloric Acid Regeneration Plants § 63.1159... regeneration plant. The owner or operator of an affected plant must operate the affected plant at all times...

  20. A 3D THz image processing methodology for a fully integrated, semi-automatic and near real-time operational system

    NASA Astrophysics Data System (ADS)

    Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.

    2012-05-01

    The present study proposes a fully integrated, semi-automatic, near real-time image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The main focus of this work is the quality control of aeronautic multi-layered composite materials and structures using Non-Destructive Testing. Image processing is applied to the 3-D images to extract useful information. The data is processed by extracting areas of interest. The detected areas are subjected to image analysis for more detailed investigation managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.

  1. Distributed Computing for Signal Processing: Modeling of Asynchronous Parallel Computation. Appendix G. On the Design and Modeling of Special Purpose Parallel Processing Systems.

    DTIC Science & Technology

    1985-05-01

    unit in the data base, with knowing one generic assembly language. The 5-tuple describing single operation execution time of the operations...computing machinery capable of performing these tasks within a given time constraint. Because the majority of the available computing machinery is general

  2. Missile signal processing common computer architecture for rapid technology upgrade

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain comprises two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's-Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements.
This application may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. Standardized development tools and third-party software upgrades are enabled, as is rapid upgrade of processing components as improved algorithms are developed. The resulting weapon system will have superior processing capability over a custom approach at the time of deployment, as a result of shorter development cycles and use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system and, because modifications are simple, can migrate between weapon system variants. This paper presents a reference design using the new approach that utilizes an AltiVec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS) and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of an interceptor algorithm operating on this real-time platform are provided.

  3. Time Warp Operating System (TWOS)

    NASA Technical Reports Server (NTRS)

    Bellenot, Steven F.

    1993-01-01

    Designed to support parallel discrete-event simulation, TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual-time synchronization based on process rollback and message annihilation.

  4. Image processing operations achievable with the Microchannel Spatial Light Modulator

    NASA Astrophysics Data System (ADS)

    Warde, C.; Fisher, A. D.; Thackara, J. I.; Weiss, A. M.

    1980-01-01

    The Microchannel Spatial Light Modulator (MSLM) is a versatile, optically-addressed, highly-sensitive device that is well suited for low-light-level, real-time, optical information processing. It consists of a photocathode, a microchannel plate (MCP), a planar acceleration grid, and an electro-optic plate in proximity focus. A framing rate of 20 Hz with full modulation depth, and 100 Hz with 20% modulation depth has been achieved in a vacuum-demountable LiTaO3 device. A halfwave exposure sensitivity of 2.2 mJ/sq cm and an optical information storage time of more than 2 months have been achieved in a similar gridless LiTaO3 device employing a visible photocathode. Image processing operations such as analog and digital thresholding, real-time image hard clipping, contrast reversal, contrast enhancement, image addition and subtraction, and binary-level logic operations such as AND, OR, XOR, and NOR can be achieved with this device. This collection of achievable image processing characteristics makes the MSLM potentially useful for a number of smart sensor applications.

  5. The Cassini project: Lessons learned through operations

    NASA Astrophysics Data System (ADS)

    McCormick, Egan D.

    1998-01-01

    The Cassini space probe requires 180 238Pu Light-weight Radioisotopic Heater Units (LWRHU) and 216 238Pu General Purpose Heat Source (GPHS) pellets. Additional LWRHU and GPHS pellets required for non-destructive (NDA) and destructive assay purposes were fabricated bringing the original pellet requirement to 224 LWRHU and 252 GPHS. Due to rejection of pellets resulting from chemical impurities in the fuel and/or failure to meet dimensional specifications a total of 320 GPHS pellets were fabricated for the mission. Initial plans called for LANL to process a total of 30 kg of oxide powder for pressing into monolithic ceramic pellets. The original 30 kg commitment was processed within the time frame allotted; an additional 8 kg were required to replace fuel lost due to failure to meet Quality Assurance specifications for impurities and dimensions. During the time frame allotted for pellet production, operations were impacted by equipment failure, unacceptable fuel impurities levels, and periods of extended down time, >30 working days during which little or no processing occurred. Throughout the production process, the reality of operations requirements varied from the theory upon which production schedules were based.

  6. Development of a Low-Latency, High Data Rate, Differential GPS Relative Positioning System for UAV Formation Flight Control

    DTIC Science & Technology

    2006-09-01

    spiral development cycle involved transporting the software processes from a Windows XP / MATLAB environment to a Linux / C++ environment. This...tested on. Additionally, in the case of the GUMSTIX PC boards, the LINUX operating system is burned into the read-only memory. Lastly, both PC-104 and...both the real-time environment and the post-processed environment. When the system operates in real-time mode, an output file is generated which

  7. A Study on Human Oriented Autonomous Distributed Manufacturing System —Real-time Scheduling Method Based on Preference of Human Operators

    NASA Astrophysics Data System (ADS)

    Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro

    Recently, new distributed architectures of manufacturing systems have been proposed, aiming at realizing more flexible control structures. Much research has been carried out on distributed architectures for planning and control of manufacturing systems. However, human operators have not yet been considered as autonomous components of distributed manufacturing systems. A real-time scheduling method is proposed in this research to select suitable combinations of human operators, resources, and jobs for the manufacturing processes. The proposed scheduling method consists of the following three steps. In the first step, the human operators select the manufacturing processes they will carry out in the next time period, based on their preferences. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps use the utility-value-based method and the dispatching-rule-based method proposed in previous research. Case studies have been carried out to verify the effectiveness of the proposed method.
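
The three steps above can be sketched as a toy selection procedure. The preference lists, process names, and utility values below are hypothetical stand-ins, and the greedy utility matching is only one simple way to realize steps two and three.

```python
# Step 1: operators claim their most preferred open process.
def assign_operators(preferences, open_processes):
    assignment, taken = {}, set()
    for operator, ranked in preferences.items():
        for proc in ranked:
            if proc in open_processes and proc not in taken:
                assignment[operator] = proc
                taken.add(proc)
                break
    return assignment

# Steps 2-3: pair resources (machine tools or AGVs) with jobs,
# greedily taking the highest utility value first.
def match_by_utility(utilities):
    pairs, used_r, used_j = [], set(), set()
    for (res, job), u in sorted(utilities.items(), key=lambda kv: -kv[1]):
        if res not in used_r and job not in used_j:
            pairs.append((res, job))
            used_r.add(res)
            used_j.add(job)
    return pairs
```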

  8. 12 CFR 614.4200 - General requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM LOAN POLICIES AND OPERATIONS Loan Terms...-related business, a marketing or processing operation, a rural residence, or real estate used as an... titles I or II of the Act shall be provided to the borrower at the time of execution and at any time...

  9. Complexity and the Fractional Calculus

    DTIC Science & Technology

    2013-01-01

    these trajectories over the entire Lotka-Volterra cycle thereby generating the mistaken impression that the resulting average trajectory reaches...interpreted as a form of phase decorrelation process rather than one with friction. The fractional version of the popular Lotka-Volterra ecological...trajectory is an ordinary Lotka-Volterra cycle in the operational time. Transitioning from the operational time to the chronological time spreads

  10. A Comparison of Two Fat Grafting Methods on Operating Room Efficiency and Costs.

    PubMed

    Gabriel, Allen; Maxwell, G Patrick; Griffin, Leah; Champaneria, Manish C; Parekh, Mousam; Macarios, David

    2017-02-01

    Centrifugation (Cf) is a common method of fat processing but may be time consuming, especially when processing large volumes. To determine the effects on fat grafting time, volume efficiency, reoperations, and complication rates of Cf vs an autologous fat processing system (Rv) that incorporates fat harvesting and processing in a single unit. We performed a retrospective cohort study of consecutive patients who underwent autologous fat grafting during reconstructive breast surgery with Rv or Cf. Endpoints measured were volume of fat harvested (lipoaspirate) and volume injected after processing, time to complete processing, reoperations, and complications. A budget impact model was used to estimate cost of Rv vs Cf. Ninety-eight patients underwent fat grafting with Rv, and 96 patients received Cf. Mean volumes of lipoaspirate (506.0 vs 126.1 mL) and fat injected (177.3 vs 79.2 mL) were significantly higher (P < .0001) in the Rv vs Cf group, respectively. Mean time to complete fat grafting was significantly shorter in the Rv vs Cf group (34.6 vs 90.1 minutes, respectively; P < .0001). Proportions of patients with nodule and cyst formation and/or who received reoperations were significantly less in the Rv vs Cf group. Based on these outcomes and an assumed per minute operating room cost, an average per patient cost savings of $2,870.08 was estimated with Rv vs Cf. Compared to Cf, the Rv fat processing system allowed for a larger volume of fat to be processed for injection and decreased operative time in these patients, potentially translating to cost savings. LEVEL OF EVIDENCE 3. © 2016 The American Society for Aesthetic Plastic Surgery, Inc.
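
The budget-impact arithmetic above reduces to a time difference multiplied by an operating-room rate. The abstract does not state the assumed per-minute cost, so the rate below is a hypothetical placeholder:

```python
# Back-of-envelope per-patient savings from shorter fat-grafting time.
def or_cost_savings(minutes_cf, minutes_rv, rate_per_min):
    """Cost difference = (Cf time - Rv time) * OR cost per minute."""
    return (minutes_cf - minutes_rv) * rate_per_min

# Mean times from the study (90.1 vs 34.6 min); $50/min is hypothetical.
saved = or_cost_savings(90.1, 34.6, 50.0)
```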

  11. Hitchhiker mission operations: Past, present, and future

    NASA Technical Reports Server (NTRS)

    Anderson, Kathryn

    1995-01-01

    What is mission operations? Mission operations is an iterative process aimed at achieving the greatest possible mission success with the resources available. The process involves understanding of the science objectives, investigation of which system capabilities can best meet these objectives, integration of the objectives and resources into a cohesive mission operations plan, evaluation of the plan through simulations, and implementation of the plan in real-time. In this paper, the authors present a comprehensive description of what the Hitchhiker mission operations approach is and why it is crucial to mission success. The authors describe the significance of operational considerations from the beginning and throughout the experiment ground and flight systems development. The authors also address the necessity of training and simulations. Finally, the authors cite several examples illustrating the benefits of understanding and utilizing the mission operations process.

  12. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform

    PubMed Central

    Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper we propose a parallel design and implementation of an Otsu-optimized Canny operator using the MapReduce parallel programming model running on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve edge detection performance, while the MapReduce parallel programming model facilitates parallel processing of the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up processing by approximately 3.4 times on large-scale datasets, demonstrating the clear superiority of our method. The proposed algorithm demonstrates both better edge detection performance and improved time performance. PMID:29861711
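
The Otsu step that optimizes the Canny dual threshold can be sketched in pure Python: pick the grayscale level that maximizes between-class variance, then derive the high/low Canny thresholds from it. Taking low = high / 2 is a common heuristic; the paper's exact mapping may differ.

```python
# Otsu's method: exhaustive search for the threshold maximizing
# between-class variance of the grayscale histogram.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(levels):
        w_b += hist[t]                  # background pixel count
        if w_b == 0:
            continue
        w_f = total - w_b               # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (total_sum - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def canny_thresholds(pixels):
    """Derive Canny's dual threshold from Otsu (low = high / 2 heuristic)."""
    high = otsu_threshold(pixels)
    return high, high // 2
```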

  13. Application of process monitoring to anomaly detection in nuclear material processing systems via system-centric event interpretation of data from multiple sensors of varying reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Humberto E.; Simpson, Michael F.; Lin, Wen-Chiao

    In this paper, we apply an advanced safeguards approach and associated methods for process monitoring to a hypothetical nuclear material processing system. The assessment regarding the state of the processing facility is conducted at a system-centric level formulated in a hybrid framework. This utilizes an architecture for integrating both time- and event-driven data and analysis for decision making. While the time-driven layers of the proposed architecture encompass more traditional process monitoring methods based on time-series data and analysis, the event-driven layers encompass operation monitoring methods based on discrete-event data and analysis. By integrating process- and operation-related information and methodologies within a unified framework, the task of anomaly detection is greatly improved. This is because decision-making can benefit not only from known time-series relationships among measured signals but also from known event-sequence relationships among generated events. This available knowledge at both time-series and discrete-event layers can then be effectively used to synthesize observation solutions that optimally balance sensor and data processing requirements. The application of the proposed approach is then implemented on an illustrative monitored system based on pyroprocessing, and results are discussed.
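
A toy version of the hybrid idea can make the two layers concrete: a time-driven check on signal residuals combined with an event-driven check on allowed operation sequences. The thresholds, expected values, and transition table below are invented for illustration.

```python
# Hybrid anomaly check: flag either a time-series residual violation or
# an operation-sequence violation.

ALLOWED_TRANSITIONS = {("idle", "load"), ("load", "process"),
                       ("process", "unload"), ("unload", "idle")}

def time_driven_anomaly(signal, expected, threshold):
    """Indices of samples whose residual from the expected value is too large."""
    return [i for i, x in enumerate(signal) if abs(x - expected) > threshold]

def event_driven_anomaly(events):
    """Event pairs that violate the allowed operation sequence."""
    return [(a, b) for a, b in zip(events, events[1:])
            if (a, b) not in ALLOWED_TRANSITIONS]

def assess(signal, expected, threshold, events):
    """Anomalous if either the time-driven or event-driven layer fires."""
    return bool(time_driven_anomaly(signal, expected, threshold)
                or event_driven_anomaly(events))
```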

  14. Fractional Number Operator and Associated Fractional Diffusion Equations

    NASA Astrophysics Data System (ADS)

    Rguigui, Hafedh

    2018-03-01

    In this paper, we study the fractional number operator as an analog of the finite-dimensional fractional Laplacian. An important relation with the Ornstein-Uhlenbeck process is given. Using a semigroup approach, the solution of the Cauchy problem associated to the fractional number operator is presented. By means of the Mittag-Leffler function and the Laplace transform, we give the solution of the Caputo time fractional diffusion equation and Riemann-Liouville time fractional diffusion equation in infinite dimensions associated to the fractional number operator.
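
The Mittag-Leffler representation referred to above can be written in its standard semigroup form (a sketch, with A standing abstractly for the generator, here the fractional number operator; this is the textbook form, not copied from the paper): for the Caputo time-fractional Cauchy problem

```latex
D_t^{\alpha} u(t) = -A\,u(t), \qquad u(0) = u_0, \qquad 0 < \alpha \le 1,
```

the solution is expressed through the Mittag-Leffler function:

```latex
u(t) = E_{\alpha}\!\left(-t^{\alpha} A\right) u_0,
\qquad
E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)} .
```

For \(\alpha = 1\), \(E_1(z) = e^z\) and the formula reduces to the usual semigroup solution \(u(t) = e^{-tA} u_0\).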

  15. Development of a prototype real-time automated filter for operational deep space navigation

    NASA Technical Reports Server (NTRS)

    Masters, W. C.; Pollmeier, V. M.

    1994-01-01

    Operational deep space navigation has been in the past, and is currently, performed using systems whose architecture requires constant human supervision and intervention. A prototype for a system which allows relatively automated processing of radio metric data received in near real-time from NASA's Deep Space Network (DSN) without any redesign of the existing operational data flow has been developed. This system can allow for more rapid response as well as much reduced staffing to support mission navigation operations.

  16. The California Integrated Seismic Network

    NASA Astrophysics Data System (ADS)

    Hellweg, M.; Given, D.; Hauksson, E.; Neuhauser, D.; Oppenheimer, D.; Shakal, A.

    2007-05-01

    The mission of the California Integrated Seismic Network (CISN) is to operate a reliable, modern system to monitor earthquakes throughout the state; to generate and distribute information in real-time for emergency response, for the benefit of public safety, and for loss mitigation; and to collect and archive data for seismological and earthquake engineering research. To meet these needs, the CISN operates data processing and archiving centers, as well as more than 3000 seismic stations. Furthermore, the CISN is actively developing and enhancing its infrastructure, including its automated processing and archival systems. The CISN integrates seismic and strong motion networks operated by the University of California Berkeley (UCB), the California Institute of Technology (Caltech), and the United States Geological Survey (USGS) offices in Menlo Park and Pasadena, as well as the USGS National Strong Motion Program (NSMP), and the California Geological Survey (CGS). The CISN operates two earthquake management centers (the NCEMC and SCEMC) where statewide, real-time earthquake monitoring takes place, and an engineering data center (EDC) for processing strong motion data and making it available in near real-time to the engineering community. These centers employ redundant hardware to minimize disruptions to the earthquake detection and processing systems. At the same time, dual feeds of data from a subset of broadband and strong motion stations are telemetered in real-time directly to both the NCEMC and the SCEMC to ensure the availability of statewide data in the event of a catastrophic failure at one of these two centers. The CISN uses a backbone T1 ring (with automatic backup over the internet) to interconnect the centers and the California Office of Emergency Services. The T1 ring enables real-time exchange of selected waveforms, derived ground motion data, phase arrivals, earthquake parameters, and ShakeMaps.
With the goal of operating similar and redundant statewide earthquake processing systems at both real-time EMCs, the CISN is currently adopting and enhancing the database-centric earthquake processing and analysis software originally developed for the Caltech/USGS Pasadena TriNet project. Earthquake data and waveforms are made available to researchers and to the public in near real-time through the CISN's Northern and Southern California Earthquake Data Centers (NCEDC and SCEDC) and through the USGS Earthquake Notification System (ENS). The CISN partners have developed procedures to automatically exchange strong motion data, both waveforms and peak parameters, for use in ShakeMap and in the rapid engineering reports that are available in near real-time through the strong motion EDC.

  17. 12 CFR 1070.22 - Fees for processing requests for CFPB records.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... CFPB shall charge the requester for the actual direct cost of the search, including computer search time, runs, and the operator's salary. The fee for computer output will be the actual direct cost. For... and the cost of operating the computer to process a request) equals the equivalent dollar amount of...

  18. Telemetry distribution and processing for the second German Spacelab Mission D-2

    NASA Technical Reports Server (NTRS)

    Rabenau, E.; Kruse, W.

    1994-01-01

    For the second German Spacelab Mission D-2, all activities related to operating, monitoring and controlling the experiments on board the Spacelab were conducted from the German Space Operations Control Center (GSOC) operated by the Deutsche Forschungsanstalt für Luft- und Raumfahrt (DLR) in Oberpfaffenhofen, Germany. The operational requirements imposed new concepts on the transfer of data between Germany and the NASA centers and on the processing of data at the GSOC itself. Highlights were the upgrade of the Spacelab Data Processing Facility (SLDPF) to real-time data processing, the introduction of packet telemetry, and the development of the high-rate data handling front end, data processing and display systems at GSOC. For the first time, a robot on board the Spacelab was to be controlled from the ground in a closed-loop environment. A dedicated forward channel was implemented to transfer the robot manipulation commands originating from the robotics experiment ground station to the Spacelab via the Orbiter's text and graphics system interface. The capability to perform telescience from an external user center was implemented. All interfaces proved successful during the course of the D-2 mission and are described in detail in this paper.

  19. Fickian dispersion is anomalous

    DOE PAGES

    Cushman, John H.; O’Malley, Dan

    2015-06-22

    The thesis put forward here is that the occurrence of Fickian dispersion in geophysical settings is a rare event and consequently should be labeled as anomalous. What people classically call anomalous is really the norm. In a Lagrangian setting, a process with mean square displacement proportional to time is generally labeled as Fickian dispersion. With a number of counterexamples we show why this definition is fraught with difficulty. In a related discussion, we show an infinite second moment does not necessarily imply the process is super dispersive. By employing a rigorous mathematical definition of Fickian dispersion we illustrate why it is so hard to find a Fickian process. We go on to employ a number of renormalization group approaches to classify non-Fickian dispersive behavior. Scaling laws for the probability density function for a dispersive process, the distribution for the first passage times, the mean first passage time, and the finite-size Lyapunov exponent are presented for fixed points of both deterministic and stochastic renormalization group operators. The fixed points of the renormalization group operators are p-self-similar processes. A generalized renormalization group operator is introduced whose fixed points form a set of generalized self-similar processes. Finally, power-law clocks are introduced to examine multi-scaling behavior. Several examples of these ideas are presented and discussed.
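
    The Lagrangian criterion discussed in this abstract — mean square displacement growing linearly in time — can be illustrated with an ordinary unbiased random walk, the textbook case of Fickian behavior. A toy sketch (walker count, step count, and seed are arbitrary choices, not the paper's examples):

```python
import random

def msd_curve(n_walkers=2000, n_steps=100, seed=1):
    """Mean square displacement over time for 1-D unit-step random walkers."""
    rng = random.Random(seed)
    positions = [0.0] * n_walkers
    msd = []
    for _ in range(n_steps):
        positions = [x + rng.choice((-1.0, 1.0)) for x in positions]
        msd.append(sum(x * x for x in positions) / n_walkers)
    return msd

m = msd_curve()
# Fickian scaling: MSD(t) ~ t, so MSD(t)/t hovers near the step variance (1 here)
print(m[99] / 100)
```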

  20. Development and operation of a real-time simulation at the NASA Ames Vertical Motion Simulator

    NASA Technical Reports Server (NTRS)

    Sweeney, Christopher; Sheppard, Shirin; Chetelat, Monique

    1993-01-01

    The Vertical Motion Simulator (VMS) facility at the NASA Ames Research Center combines the largest vertical motion capability in the world with a flexible real-time operating system allowing research to be conducted quickly and effectively. Due to the diverse nature of the aircraft simulated and the large number of simulations conducted annually, the challenge for the simulation engineer is to develop an accurate real-time simulation in a timely, efficient manner. The SimLab facility and the software tools necessary for an operating simulation will be discussed. Subsequent sections will describe the development process through operation of the simulation; this includes acceptance of the model, validation, integration and production phases.

  1. Co-operation as a strategy for provision of welfare services--a study of a rehabilitation project in Sweden.

    PubMed

    Norman, Christina; Axelsson, Runo

    2007-10-01

    During the past 15 years, there have been many initiatives to improve the integration between different welfare agencies. This study describes and analyses the co-operation between agencies involved in a rehabilitation project in Sweden, and discusses such inter-agency co-operation as a strategy for the provision of complex welfare services. The study is based on a process evaluation, in which the co-operation between the agencies was followed and documented during the time of the project. Different kinds of data were collected through interviews, focus groups and diaries. The contents of these data were analysed in order to evaluate the process of co-operation. In addition, there was also an evaluation of the effects of the co-operation, based on official documents, statistics, etc. The evaluation shows that it was possible to co-operate across the organizational boundaries of the different agencies, but there were obstacles related to the organizational and cultural differences of the agencies, the divided loyalties of the officials, and the limited resources available to deal with the complex needs of the clients. At the same time, the commitment of the officials and the relations between them facilitated the co-operation. Based on the evaluation of this project, it seems that co-operation could be an effective strategy for dealing with clients who need services from different welfare agencies. At the same time, however, it is clear that inter-agency co-operation requires a lot of time and energy and should therefore be used with caution.

  2. Transient overexpression of striatal D2 receptors impairs operant motivation and interval timing.

    PubMed

    Drew, Michael R; Simpson, Eleanor H; Kellendonk, Christoph; Herzberg, William G; Lipatova, Olga; Fairhurst, Stephen; Kandel, Eric R; Malapani, Chara; Balsam, Peter D

    2007-07-18

    The striatum receives prominent dopaminergic innervation that is integral to appetitive learning, performance, and motivation. Signaling through the dopamine D2 receptor is critical for all of these processes. For instance, drugs with high affinity for the D2 receptor potently alter timing of operant responses and modulate motivation. Recently, in an attempt to model a genetic abnormality encountered in schizophrenia, mice were generated that reversibly overexpress D2 receptors specifically in the striatum (Kellendonk et al., 2006). These mice have impairments in working memory and behavioral flexibility, components of the cognitive symptoms of schizophrenia, that are not rescued when D2 overexpression is reversed in the adult. Here we report that overexpression of striatal D2 receptors also profoundly affects operant performance, a potential index of negative symptoms. Mice overexpressing D2 exhibited impairments in the ability to time food rewards in an operant interval timing task and reduced motivation to lever press for food reward in both the operant timing task and a progressive ratio schedule of reinforcement. The motivational deficit, but not the timing deficit, was rescued in adult mice by reversing D2 overexpression with doxycycline. These results suggest that early D2 overexpression alters the organization of interval timing circuits and confirm that striatal D2 signaling in the adult regulates motivational processes. Moreover, overexpression of D2 under pathological conditions such as schizophrenia and Parkinson's disease could give rise to motivational and timing deficits.

  3. Remotely Operated Aircraft (ROA) Impact on the National Airspace System (NAS) Work Package: Automation Impacts of ROA's in the NAS

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The purpose of this document is to analyze the impact of Remotely Operated Aircraft (ROA) operations on current and planned Air Traffic Control (ATC) automation systems in the En Route, Terminal, and Traffic Flow Management domains. The operational aspects of ROA flight, while similar, are not entirely identical to those of their manned counterparts and may not have been considered within the time-horizons of the automation tools. This analysis was performed to determine whether the flight characteristics of ROAs would be compatible with current and future NAS automation tools. Improvements to existing systems and processes are recommended that would give Air Traffic Controllers an indication that a particular aircraft is an ROA, along with modifications to IFR flight plan processing algorithms and/or the designation of airspace where an ROA will operate for long periods of time.

  4. Dynamic Quantum Allocation and Swap-Time Variability in Time-Sharing Operating Systems.

    ERIC Educational Resources Information Center

    Bhat, U. Narayan; Nance, Richard E.

    The effects of dynamic quantum allocation and swap-time variability on central processing unit (CPU) behavior are investigated using a model that allows both quantum length and swap-time to be state-dependent random variables. Effective CPU utilization is defined to be the proportion of a CPU busy period that is devoted to program processing, i.e.…

  5. Associative architecture for image processing

    NASA Astrophysics Data System (ADS)

    Adar, Rutie; Akerib, Avidan

    1997-09-01

    This article presents a new generation in parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the Xium™-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can be programmed to implement a wide range of color image processing, computer vision and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz, in a 0.6-micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the Xium™-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.

  6. RTEMS CENTRE- RTEMS Improvement

    NASA Astrophysics Data System (ADS)

    Silva, Helder; Constantino, Alexandre; Freitas, Daniel; Coutinho, Manuel; Faustino, Sergio; Sousa, Jose; Dias, Luis; Zulianello, Marco

    2010-08-01

    During the last two years, EDISOFT's RTEMS CENTRE team [1], jointly with the European Space Agency and with the support of the worldwide RTEMS community [2], has been developing an activity to facilitate the qualification of the real-time operating system RTEMS (Real-Time Operating System for Multiprocessor Systems). This paper intends to give high-level visibility into the progress and the results obtained in the RTEMS Improvement [3] activity. The primary objective [4] of the project is to improve the RTEMS product and its documentation and to facilitate the qualification of RTEMS for future space missions, taking into consideration the specific operational requirements. The sections below provide a brief overview of the RTEMS operating system and the activities performed in the RTEMS Improvement project, which include the selection of API managers to be qualified, the tailoring process, the requirements analysis, the reverse engineering and design of RTEMS, the quality assurance process, the ISVV activities, the test campaign, the results obtained, the criticality analysis and the facilitation of the qualification process.

  7. NPITxt, a 21st-Century Reporting System: Engaging Residents in a Lean-Inspired Process.

    PubMed

    Raja, Pushpa V; Davis, Michael C; Bales, Alicia; Afsarmanesh, Nasim

    2015-05-01

    Operational waste, or workflow processes that do not add value, is a frustrating but nonetheless largely tolerated barrier to efficiency and morale for medical trainees. In this article, the authors tested a novel reporting system using several submission formats (text messaging, e-mail, Web form, mobile application) to allow residents to report various types of operational waste in real time. This system informally promoted "lean" principles of waste identification and continuous improvement. In all, 154 issues were submitted between March 30, 2011, and June 30, 2012, and categorized as closely as possible into lean categories of operational waste; 131 issues were completely addressed with the requested outcome partially or fully implemented or with successful clarification of existing policies. A real-time, voluntary reporting system can effectively capture trainee observations of waste in health care and training processes, give trainees a voice in a hierarchical system, and lead to meaningful operations improvement. © 2014 by the American College of Medical Quality.

  8. Note: Quasi-real-time analysis of dynamic near field scattering data using a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Cerchiari, G.; Croccolo, F.; Cardinaux, F.; Scheffold, F.

    2012-10-01

    We present an implementation of the analysis of dynamic near field scattering (NFS) data using a graphics processing unit. We introduce an optimized data management scheme thereby limiting the number of operations required. Overall, we reduce the processing time from hours to minutes, for typical experimental conditions. Previously the limiting step in such experiments, the processing time is now comparable to the data acquisition time. Our approach is applicable to various dynamic NFS methods, including shadowgraph, Schlieren and differential dynamic microscopy.

  9. COLA: Optimizing Stream Processing Applications via Graph Partitioning

    NASA Astrophysics Data System (ADS)

    Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra

    In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
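
    The fusion objective described here — minimize inter-PE stream traffic while balancing load across hosts — reduces, in its simplest two-way form, to a ratio-cut problem over the operator graph. A brute-force sketch on a hypothetical operator graph (the operator names and traffic weights are invented; COLA itself uses a hierarchical minimum-ratio-cut subroutine, not exhaustive search):

```python
from itertools import combinations

def min_ratio_cut(nodes, edges):
    """Brute-force two-way minimum-ratio cut: cut weight / (|A| * |B|).
    edges: {(u, v): traffic}. Feasible only for small operator graphs."""
    best = None
    nodes = list(nodes)
    for r in range(1, len(nodes) // 2 + 1):
        for group in combinations(nodes, r):
            a = set(group)
            # Sum the traffic on streams crossing the partition boundary
            cut = sum(w for (u, v), w in edges.items() if (u in a) != (v in a))
            ratio = cut / (len(a) * (len(nodes) - len(a)))
            if best is None or ratio < best[0]:
                best = (ratio, a, set(nodes) - a)
    return best

# Hypothetical operator graph: two tightly coupled pairs joined by a light stream
edges = {("src", "parse"): 10, ("parse", "join"): 1, ("join", "sink"): 10}
ratio, a, b = min_ratio_cut(["src", "parse", "join", "sink"], edges)
print(sorted(a), sorted(b))  # the light parse->join stream is the one cut
```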

  10. Recent advances in the reconstruction of cranio-maxillofacial defects using computer-aided design/computer-aided manufacturing.

    PubMed

    Oh, Ji-Hyeon

    2018-12-01

    With the development of computer-aided design/computer-aided manufacturing (CAD/CAM) technology, it has been possible to reconstruct the cranio-maxillofacial defect with more accurate preoperative planning, precise patient-specific implants (PSIs), and shorter operation times. The manufacturing processes include subtractive manufacturing and additive manufacturing and should be selected in consideration of the material type, available technology, post-processing, accuracy, lead time, properties, and surface quality. Materials such as titanium, polyethylene, polyetheretherketone (PEEK), hydroxyapatite (HA), poly-DL-lactic acid (PDLLA), polylactide-co-glycolide acid (PLGA), and calcium phosphate are used. Design methods for the reconstruction of cranio-maxillofacial defects include the use of a pre-operative model printed with pre-operative data, printing a cutting guide or template after virtual surgery, a model after virtual surgery printed with reconstructed data using a mirror image, and manufacturing PSIs by directly obtaining PSI data after reconstruction using a mirror image. By selecting the appropriate design method, manufacturing process, and implant material according to the case, it is possible to obtain a more accurate surgical procedure, reduced operation time, the prevention of various complications that can occur using the traditional method, and predictive results compared to the traditional method.

  11. 47 CFR 36.123 - Operator systems equipment-Category 1.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... apportioned on the basis of the relative processor real time (i.e., actual seconds) required to process TSPS... relative processor real time (i.e., actual seconds) for the entire TSPS complex. [52 FR 17229, May 6, 1987... 47 Telecommunication 2 2014-10-01 2014-10-01 false Operator systems equipment-Category 1. 36.123...

  12. 47 CFR 36.123 - Operator systems equipment-Category 1.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... apportioned on the basis of the relative processor real time (i.e., actual seconds) required to process TSPS... relative processor real time (i.e., actual seconds) for the entire TSPS complex. [52 FR 17229, May 6, 1987... 47 Telecommunication 2 2013-10-01 2013-10-01 false Operator systems equipment-Category 1. 36.123...

  13. 47 CFR 36.123 - Operator systems equipment-Category 1.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... apportioned on the basis of the relative processor real time (i.e., actual seconds) required to process TSPS... relative processor real time (i.e., actual seconds) for the entire TSPS complex. [52 FR 17229, May 6, 1987... 47 Telecommunication 2 2012-10-01 2012-10-01 false Operator systems equipment-Category 1. 36.123...

  14. 47 CFR 36.123 - Operator systems equipment-Category 1.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... apportioned on the basis of the relative processor real time (i.e., actual seconds) required to process TSPS... relative processor real time (i.e., actual seconds) for the entire TSPS complex. [52 FR 17229, May 6, 1987... 47 Telecommunication 2 2011-10-01 2011-10-01 false Operator systems equipment-Category 1. 36.123...

  15. BIOREACTOR ECONOMICS, SIZE AND TIME OF OPERATION (BEST) COMPUTER SIMULATOR FOR DESIGNING SULFATE-REDUCING BACTERIA FIELD BIOREACTORS

    EPA Science Inventory

    BEST (bioreactor economics, size and time of operation) is an Excel™ spreadsheet-based model that is used in conjunction with the public domain geochemical modeling software, PHREEQCI. The BEST model is used in the design process of sulfate-reducing bacteria (SRB) field bioreacto...

  16. 47 CFR 36.123 - Operator systems equipment-Category 1.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... apportioned on the basis of the relative processor real time (i.e., actual seconds) required to process TSPS... relative processor real time (i.e., actual seconds) for the entire TSPS complex. [52 FR 17229, May 6, 1987... 47 Telecommunication 2 2010-10-01 2010-10-01 false Operator systems equipment-Category 1. 36.123...

  17. Improving Overall Equipment Effectiveness Using CPM and MOST: A Case Study of an Indonesian Pharmaceutical Company

    NASA Astrophysics Data System (ADS)

    Omega, Dousmaris; Andika, Aditya

    2017-12-01

    This paper discusses the results of research conducted on the production process of an Indonesian pharmaceutical company. The company is experiencing low performance on the Overall Equipment Effectiveness (OEE) metric: the OEE of the company's machines is below the world-class standard, and the machine with the lowest OEE is the filler machine. Through observation and analysis, it was found that the cleaning process of the filler machine consumes a significant amount of time. The long duration of the cleaning process arises because there is no structured division of jobs among the cleaning operators, because of differences in the operators' abilities, and because of the operators' inability to utilize the available cleaning equipment. The company therefore needs to improve the cleaning process. Critical Path Method (CPM) analysis is conducted to find out which activities are critical, in order to shorten the cleaning process and simplify the division of tasks. Afterwards, the Maynard Operation Sequence Technique (MOST) is used to reduce ineffective movement and specify the standard time for the cleaning process. From CPM and MOST, the shortest time obtained for the cleaning process is 1 hour 28 minutes, and the standard time is 1 hour 38.826 minutes.
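
    The CPM step described above amounts to a longest-path computation over the activity precedence graph. A minimal sketch with invented activities and durations (these are hypothetical cleaning tasks for illustration, not the company's actual activity data):

```python
def critical_path(durations, preds):
    """Earliest finish times and critical path via longest-path DP over a DAG.
    durations: {task: minutes}; preds: {task: [prerequisite tasks]}."""
    finish = {}
    def ef(t):
        if t not in finish:
            finish[t] = durations[t] + max((ef(p) for p in preds.get(t, [])), default=0)
        return finish[t]
    for t in durations:
        ef(t)
    end = max(finish, key=finish.get)
    # Walk back through the predecessor with the latest finish at each step
    path = [end]
    while preds.get(path[-1]):
        path.append(max(preds[path[-1]], key=lambda p: finish[p]))
    return finish[end], list(reversed(path))

# Hypothetical cleaning activities (minutes) and precedence constraints
durations = {"drain": 10, "dismantle": 25, "prep_solution": 20,
             "wash": 30, "sanitize": 15, "reassemble": 20}
preds = {"dismantle": ["drain"], "prep_solution": ["drain"],
         "wash": ["dismantle", "prep_solution"],
         "sanitize": ["wash"], "reassemble": ["sanitize"]}
total, path = critical_path(durations, preds)
print(total, path)  # 100 ['drain', 'dismantle', 'wash', 'sanitize', 'reassemble']
```

    Here "prep_solution" runs in parallel with "dismantle" and stays off the critical path, so shortening it would not reduce the total time — the kind of insight CPM provides.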

  18. Decision Support Systems for Launch and Range Operations Using Jess

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rajkumar

    2007-01-01

    The virtual test bed for launch and range operations developed at NASA Ames Research Center consists of various independent expert systems advising on weather effects, toxic gas dispersions and human health risk assessment during space-flight operations. An individual dedicated server supports each expert system, and the master system gathers information from the dedicated servers to support the launch decision-making process. Since the test bed is web-based, reducing network traffic and optimizing the knowledge base are critical to the success of real-time or near real-time operations. Jess, a fast rule engine and powerful scripting environment developed at Sandia National Laboratory, has been adopted to build the expert systems, providing robustness and scalability. Jess also supports XML representation of the knowledge base with forward and backward chaining inference mechanisms. Facts added to working memory during run-time operations facilitate analyses of multiple scenarios. The knowledge base can be distributed, with one inference engine performing the inference process. This paper discusses details of the knowledge base and inference engine using Jess for a launch and range virtual test bed.

  19. Real-time processing of dual band HD video for maintaining operational effectiveness in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Parker, Steve C. J.; Hickman, Duncan L.; Smith, Moira I.

    2015-05-01

    Effective reconnaissance, surveillance and situational awareness, using dual band sensor systems, require the extraction, enhancement and fusion of salient features, with the processed video being presented to the user in an ergonomic and interpretable manner. HALO™ is designed to meet these requirements and provides an affordable, real-time, and low-latency image fusion solution on a low size, weight and power (SWAP) platform. The system has been progressively refined through field trials to increase its operating envelope and robustness. The result is a video processor that improves detection, recognition and identification (DRI) performance, whilst lowering operator fatigue and reaction times in complex and highly dynamic situations. This paper compares the performance of HALO™, both qualitatively and quantitatively, with conventional blended fusion for operation in degraded visual environments (DVEs), such as those experienced during ground and air-based operations. Although image blending provides a simple fusion solution, which explains its common adoption, the results presented demonstrate that its performance is poor compared to the HALO™ fusion scheme in DVE scenarios.

  20. Design and simulation of optoelectronic complementary dual neural elements for realizing a family of normalized vector 'equivalence-nonequivalence' operations

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Aleksandr I.; Lazarev, Alexander A.; Magas, Taras E.

    2010-04-01

    The advantages of equivalence models (EMs) of neural networks (NNs) are shown in this paper. EMs are based on vector-matrix procedures with the basic operations of continuous neurologic: the normalized vector operations "equivalence", "nonequivalence", "autoequivalence" and "autononequivalence". The capacity of NNs based on EMs and their modifications, including auto- and heteroassociative memories for 2D images, exceeds the number of neurons several times over. Such neuroparadigms are very promising for processing, recognizing and storing large, strongly correlated images. A family of "normalized equivalence-nonequivalence" neuro-fuzzy logic operations based on the generalized operations fuzzy negation, t-norm and s-norm is elaborated. A biologically motivated concept and time-pulse encoding principles of continuous-logic photocurrent reflexions, together with sample-storage devices with pulse-width photoconverters, have allowed us to design generalized structures for realizing the family of normalized linear vector operations "equivalence"-"nonequivalence". Simulation results show that the processing time in such circuits does not exceed a few microseconds. The circuits are simple, have a low supply voltage (1-3 V), low power consumption (milliwatts), low input signal levels (microwatts) and an integrated construction, and they address the problems of interconnection and cascading.

  1. Navigation Operations with Prototype Components of an Automated Real-Time Spacecraft Navigation System

    NASA Technical Reports Server (NTRS)

    Cangahuala, L.; Drain, T. R.

    1999-01-01

    At present, ground navigation support for interplanetary spacecraft requires human intervention for data pre-processing, filtering, and post-processing activities; these actions must be repeated each time a new batch of data is collected by the ground data system.

  2. Computer program compatible with a laser nephelometer

    NASA Technical Reports Server (NTRS)

    Paroskie, R. M.; Blau, H. H., Jr.; Blinn, J. C., III

    1975-01-01

    The laser nephelometer data system was updated to provide magnetic tape recording of data, and real time or near real time processing of data to provide particle size distribution and liquid water content. Digital circuits were provided to interface the laser nephelometer to a Data General Nova 1200 minicomputer. Communications are via a teletypewriter. A dual Linc Magnetic Tape System is used for program storage and data recording. Operational programs utilize the Data General Real-Time Operating System (RTOS) and the ERT AIRMAP Real-Time Operating System (ARTS). The programs provide for acquiring data from the laser nephelometer, acquiring data from auxiliary sources, keeping time, performing real time calculations, recording data and communicating with the teletypewriter.

  3. Burnishing of rotatory parts to improve surface quality

    NASA Astrophysics Data System (ADS)

    Celaya, A.; López de Lacalle, L. N.; Albizuri, J.; Alberdi, R.

    2009-11-01

    In this paper, the use of the rolling burnishing process to improve the final quality of railway and automotive workpieces is studied. The results are focused on improving the manufacturing processes of rotary workpieces used in the railway and automotive industries, with the generic target of achieving "maximum surface quality with minimal process time". Burnishing is a finishing operation in which plastic deformation of surface irregularities occurs by applying pressure through a very hard element, a roller or a ceramic ball. This process gives additional advantages to the workpiece, such as good surface roughness, increased hardness and high compressive residual stresses. The effect of the initial turning conditions on the final burnishing operation has also been studied. The results show that the feeds used in the initial rough turning have little influence on the surface finish of the burnished workpieces. Thus, the process times of the combined turning and burnishing processes can be reduced, optimizing the shaft's machining process.

  4. Profitability Analysis of Soybean Oil Processes.

    PubMed

    Cheng, Ming-Hsun; Rosentrater, Kurt A

    2017-10-07

    Soybean oil production is the basic process for soybean applications. Cash flow analysis is used to estimate the profitability of a manufacturing venture. Besides capital investments, operating costs, and revenues, the interest rate is a key factor in estimating the net present value (NPV), break-even points, and payback time, which are benchmarks for profitability evaluation. A positive NPV and a reasonable payback time indicate a profitable process and provide an acceptable projection for real operation. Additionally, the capacity of the process is another critical factor. The extruding-expelling process and hexane extraction are the two typical approaches used in industry. When the capacities of annual oil production are larger than 12 and 173 million kg, respectively, these two processes are profitable. The solvent-free approach, known as the enzyme-assisted aqueous extraction process (EAEP), is profitable when the capacity is larger than 17 million kg of annual oil production.
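
    The benchmarks named above (NPV and payback time) follow standard discounted-cash-flow formulas; a minimal sketch with hypothetical cash flows (the figures below are illustrative, not the paper's plant data):

```python
def npv(rate, cashflows):
    """Net present value: cashflows[t] discounted at the given interest rate,
    with cashflows[0] occurring at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_time(cashflows):
    """First period at which cumulative (undiscounted) cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None  # the venture never pays back

# Hypothetical venture: -100 initial investment, +30 per year for 5 years
flows = [-100.0] + [30.0] * 5
print(round(npv(0.10, flows), 2))  # 13.72 -> positive NPV at a 10% interest rate
print(payback_time(flows))         # 4 -> cumulative flow turns positive in year 4
```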

  5. Profitability Analysis of Soybean Oil Processes

    PubMed Central

    2017-01-01

    Soybean oil production is the basic process underlying soybean applications. Cash flow analysis is used to estimate the profitability of a manufacturing venture. Besides capital investments, operating costs, and revenues, the interest rate is a key factor in estimating the net present value (NPV), break-even points, and payback time, which are benchmarks for profitability evaluation. A positive NPV and a reasonable payback time indicate a profitable process and provide an acceptable projection for real operation. Additionally, the capacity of the process is another critical factor. The extruding-expelling process and hexane extraction are the two typical approaches used in industry; they are profitable when annual oil production capacities are larger than 12 and 173 million kg, respectively. The solvent-free approach, known as the enzyme-assisted aqueous extraction process (EAEP), is profitable when the capacity is larger than 17 million kg of annual oil production. PMID:28991168

  6. A novel weight determination method for time series data aggregation

    NASA Astrophysics Data System (ADS)

    Xu, Paiheng; Zhang, Rong; Deng, Yong

    2017-09-01

    Aggregation in time series is of great importance in time series smoothing, prediction, and other analysis processes, which makes it crucial to determine the weights in a time series correctly and reasonably. In this paper, a novel method to obtain the weights in a time series is proposed, which adopts the induced ordered weighted aggregation (IOWA) operator and the visibility graph averaging (VGA) operator and linearly combines the weights separately generated by the two operators. The IOWA operator is introduced to the weight determination of the time series, through which a time decay factor is taken into consideration. The VGA operator generates weights according to the degree distribution in the visibility graph constructed from the corresponding time series, which reflects the relative importance of vertices in the time series. The proposed method is applied to two practical datasets to illustrate its merits. The aggregation of the Construction Cost Index (CCI) demonstrates the ability of the proposed method to smooth a time series, while the aggregation of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) illustrates how the proposed method maintains the variation tendency of the original data.
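    The two-operator weighting scheme can be sketched as follows. The exponential decay standing in for the IOWA induction, the mixing coefficient `alpha`, and the toy series are illustrative assumptions, not the paper's exact formulation:

```python
def visibility_degrees(series):
    """Degree of each point in the natural visibility graph: points i < j are
    linked if every point between them lies below the line joining them."""
    n = len(series)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                deg[i] += 1
                deg[j] += 1
    return deg

def combined_weights(series, alpha=0.5, decay=0.9):
    """Linearly combine time-decay weights (standing in for the IOWA-induced
    ordering) with visibility-graph degree weights (the VGA side)."""
    n = len(series)
    w_time = [decay ** (n - 1 - t) for t in range(n)]   # newer points weigh more
    s = sum(w_time)
    w_time = [w / s for w in w_time]
    deg = visibility_degrees(series)
    d = sum(deg)
    w_vga = [x / d for x in deg]
    return [alpha * a + (1 - alpha) * b for a, b in zip(w_time, w_vga)]

weights = combined_weights([3.0, 1.0, 4.0, 1.5, 5.0, 2.0])
print(round(sum(weights), 6))  # 1.0 -- both weight sets are normalized
```

    Because both ingredient weight vectors are normalized, any convex combination of them is again a valid aggregation weight vector.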

  7. NASA Headquarters Space Operations Center: Providing Situational Awareness for Spaceflight Contingency Response

    NASA Technical Reports Server (NTRS)

    Maxwell, Theresa G.; Bihner, William J.

    2010-01-01

    This paper discusses the NASA Headquarters mishap response process for the Space Shuttle and International Space Station programs, and how the process has evolved based on lessons learned from the Space Shuttle Challenger and Columbia accidents. It also describes the NASA Headquarters Space Operations Center (SOC) and its special role in facilitating senior management's overall situational awareness of critical spaceflight operations, before, during, and after a mishap, to ensure a timely and effective contingency response.

  8. EOS: A project to investigate the design and construction of real-time distributed embedded operating systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Essick, R. B.; Grass, J.; Johnston, G.; Kenny, K.; Russo, V.

    1986-01-01

    The EOS project is investigating the design and construction of a family of real-time distributed embedded operating systems for reliable, distributed aerospace applications. Using the real-time programming techniques developed in cooperation with NASA in earlier research, the project staff is building a kernel for a multiple-processor networked system. The first six months of the grant included a study of scheduling in an object-oriented system, the design philosophy of the kernel, and an architectural overview of the operating system. In this report, the operating system and kernel concepts are described. An environment for the experiments has been built, and several of the key concepts of the system have been prototyped. The kernel and operating system are intended to support future experimental studies in multiprocessing, load balancing, routing, software fault tolerance, distributed database design, and real-time processing.

  9. The FOT tool kit concept

    NASA Technical Reports Server (NTRS)

    Fatig, Michael

    1993-01-01

    Flight operations and the preparation for it have become increasingly complex as mission complexities increase. Further, the mission model dictates that a significant increase in flight operations activities is upon us. Finally, there is a need for process improvement and economy in the operations arena. It is therefore time that we recognize flight operations as a complex process requiring a defined, structured, life-cycle approach vitally linked to space segment, ground segment, and science operations processes. With this recognition, an FOT Tool Kit was developed, consisting of six major components designed to provide tools that guide flight operations activities throughout the mission life cycle. The major components of the FOT Tool Kit and the concepts behind the flight operations life-cycle process, as developed at NASA's GSFC for GSFC-based missions, are addressed. The Tool Kit is intended to improve the productivity, quality, cost, and schedule performance of flight operations tasks through the use of documented, structured methodologies; knowledge of past lessons learned and upcoming new technology; and the reuse and sharing of key products and special application programs, made possible through the development of standardized key products and special program directories.

  10. Benefits to blood banks of a sales and operations planning process.

    PubMed

    Keal, Donald A; Hebert, Phil

    2010-12-01

    A formal sales and operations planning (S&OP) process is a decision-making and communication process that balances supply and demand while integrating all business operational components with customer-focused business plans, linking high-level strategic plans to day-to-day operations. Furthermore, S&OP can assist in managing change across the organization, as it provides the opportunity to be proactive in the face of problems and opportunities while establishing a plan for everyone to follow. Some of the key outcomes of a robust S&OP process in blood banking would include: higher customer satisfaction (donors and health care providers), balanced inventory across product lines and customers, more stable production rates and higher productivity, more cooperation across the entire operation, and timely updates to the business plan, resulting in better forecasting and fewer surprises that negatively impact the bottom line. © 2010 American Association of Blood Banks.

  11. Operating room scheduling using hybrid clustering priority rule and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Santoso, Linda Wahyuni; Sinawan, Aisyah Ashrinawati; Wijaya, Andi Rahadiyan; Sudiarso, Andi; Masruroh, Nur Aini; Herliansyah, Muhammad Kusumawan

    2017-11-01

    The operating room is a bottleneck resource in most hospitals, so the operating room scheduling system influences the whole performance of the hospital. This research develops a mathematical model of operating room scheduling for elective patients that considers patient priority with a limited number of surgeons, operating rooms, and nurse teams. Clustering analysis was conducted on the surgery duration data using hierarchical and non-hierarchical methods. The priority rule of each resulting cluster was determined using the Shortest Processing Time method. A genetic algorithm was used to generate the daily operating room schedule that yielded the lowest values of patient waiting time and nurse overtime. The computational results show that the proposed model reduced patient waiting time by approximately 32.22% and nurse overtime by approximately 32.74% compared to the actual schedule.
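    As a rough illustration of the priority-rule half of this approach, the sketch below orders surgeries by Shortest Processing Time and assigns each to the earliest-free room. The durations and room count are hypothetical, and the greedy assignment merely stands in for the paper's genetic algorithm:

```python
import heapq

def spt_schedule(durations, n_rooms):
    """Order surgeries by SPT, then assign each to the earliest-free room.
    Returns (start time per surgery index, makespan)."""
    order = sorted(range(len(durations)), key=lambda i: durations[i])
    rooms = [(0.0, r) for r in range(n_rooms)]  # (free-at time, room id)
    heapq.heapify(rooms)
    starts = {}
    for i in order:
        free_at, r = heapq.heappop(rooms)       # room that frees up first
        starts[i] = free_at
        heapq.heappush(rooms, (free_at + durations[i], r))
    makespan = max(t for t, _ in rooms)
    return starts, makespan

# Five elective surgeries (minutes), two operating rooms -- toy instance.
starts, makespan = spt_schedule([90, 30, 120, 45, 60], n_rooms=2)
print(makespan)  # 210
```

    SPT minimizes average waiting time within a priority cluster; balancing nurse overtime across rooms is what the genetic algorithm layer then optimizes.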

  12. U. S. GEOLOGICAL SURVEY'S NATIONAL REAL-TIME HYDROLOGIC INFORMATION SYSTEM USING GOES SATELLITE TECHNOLOGY.

    USGS Publications Warehouse

    Shope, William G.

    1987-01-01

    The U. S. Geological Survey maintains the basic hydrologic data collection system for the United States. The Survey is upgrading the collection system with electronic communications technologies that acquire, telemeter, process, and disseminate hydrologic data in near real-time. These technologies include satellite communications via the Geostationary Operational Environmental Satellite, Data Collection Platforms in operation at over 1400 Survey gaging stations, Direct-Readout Ground Stations at nine Survey District Offices, and a network of powerful minicomputers that allows data to be processed and disseminated quickly.

  13. Markovian limit for a reduced operation-valued stochastic process

    NASA Astrophysics Data System (ADS)

    Barchielli, Alberto

    1987-04-01

    Operation-valued stochastic processes give a formalization of the concept of continuous (in time) measurements in quantum mechanics. In this article, a first stage M of a measuring apparatus coupled to the system S is explicitly introduced, and continuous measurement of some observables of M is considered (one can speak of an indirect continuous measurement on S). When the degrees of freedom of the measuring apparatus M are eliminated and the weak coupling limit is taken, it is shown that an operation-valued stochastic process describing a direct continuous observation of the system S is obtained.

  14. Time operators in stroboscopic wave-packet basis and the time scales in tunneling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bokes, P.

    2011-03-15

    We demonstrate that the time operator that measures the time of arrival of a quantum particle into a chosen state can be defined as a self-adjoint quantum-mechanical operator using periodic boundary conditions and applied to wave functions in the energy representation. The time becomes quantized into discrete eigenvalues, and the eigenstates of the time operator, i.e., the stroboscopic wave packets introduced recently [Phys. Rev. Lett. 101, 046402 (2008)], form an orthogonal system of states. The formalism provides a simple physical interpretation of the time-measurement process and a direct construction of a normalized, positive-definite probability distribution for the quantized values of the arrival time. The average value of the time is equal to the phase time but in general depends on the choice of the zero-time eigenstate, whereas the uncertainty of the average is related to the traversal time and is independent of this choice. The general formalism is applied to a particle tunneling through a resonant tunneling barrier in one dimension.

  15. General Recommendations on Fatigue Risk Management for the Canadian Forces

    DTIC Science & Technology

    2010-04-01

    missions performed in aviation require an individual(s) to process large amount of information in a short period of time and to do this on a continuous...information processing required during sustained operations can deteriorate an individual’s ability to perform a task. Given the high operational tempo...memory, which, in turn, is utilized to perform human thought processes (Baddeley, 2003). While various versions of this theory exist, they all share

  16. Multivariate statistical monitoring as applied to clean-in-place (CIP) and steam-in-place (SIP) operations in biopharmaceutical manufacturing.

    PubMed

    Roy, Kevin; Undey, Cenk; Mistretta, Thomas; Naugle, Gregory; Sodhi, Manbir

    2014-01-01

    Multivariate statistical process monitoring (MSPM) is becoming increasingly utilized to further enhance process monitoring in the biopharmaceutical industry. MSPM can play a critical role when there are many measurements and these measurements are highly correlated, as is typical for many biopharmaceutical operations. Specifically, for processes such as cleaning-in-place (CIP) and steaming-in-place (SIP, also known as sterilization-in-place), control systems typically oversee the execution of the cycles, and verification of the outcome is based on offline assays. These offline assays add delays, and corrective actions may require additional setup times. Moreover, this conventional approach does not take interactive effects of process variables into account, and cycle optimization opportunities as well as salient trends in the process may be missed. Therefore, more proactive and holistic online continued-verification approaches are desirable. This article demonstrates the application of real-time MSPM to processes such as CIP and SIP with industrial examples. The proposed approach has significant potential for facilitating enhanced continuous verification, improved process understanding, abnormal situation detection, and predictive monitoring, as applied to CIP and SIP operations. © 2014 American Institute of Chemical Engineers.
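    A common way to realize real-time MSPM of the kind described is a PCA model with a Hotelling T² control limit. The sketch below uses synthetic data, an illustrative component count, and an empirical percentile limit; it is not the article's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 in-control cycles of 5 correlated measurements (synthetic stand-in
# for CIP/SIP cycle summary data: two latent factors plus noise).
base = rng.normal(size=(200, 2))
X = base @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(200, 5))

mean, std = X.mean(axis=0), X.std(axis=0)
Z = (X - mean) / std

# PCA via SVD; keep k components and compute Hotelling T^2 per cycle.
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
k = 2
scores = Z @ Vt[:k].T
var = S[:k] ** 2 / (len(Z) - 1)          # score variance per component
t2 = (scores ** 2 / var).sum(axis=1)     # Hotelling T^2 for each cycle
limit = np.percentile(t2, 99)            # empirical 99% control limit

# A faulty cycle constructed 6 sigma out along the first principal
# direction (in standardized units), so its T^2 is 36 exactly.
z_fault = 6.0 * np.sqrt(var[0]) * Vt[0]
t2_fault = ((z_fault @ Vt[:k].T) ** 2 / var).sum()
print(t2_fault > limit)  # True: the faulty cycle trips the T^2 alarm
```

    In production use the model would be fit once on validated in-control cycles, and each new cycle scored online as it completes.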

  17. Multi-Dimensional Signal Processing Research Program

    DTIC Science & Technology

    1981-09-30

    applications to real-time image processing and analysis. A specific long-range application is the automated processing of aerial reconnaissance imagery...Non-supervised image segmentation is a potentially important operation in the automated processing of aerial reconnaissance photographs since it

  18. Development of a real-time microchip PCR system for portable plant disease diagnosis.

    PubMed

    Koo, Chiwan; Malapi-Wight, Martha; Kim, Hyun Soo; Cifci, Osman S; Vaughn-Diaz, Vanessa L; Ma, Bo; Kim, Sungman; Abdel-Raziq, Haron; Ong, Kevin; Jo, Young-Ki; Gross, Dennis C; Shim, Won-Bo; Han, Arum

    2013-01-01

    Rapid and accurate detection of plant pathogens in the field is crucial to prevent the proliferation of infected crops. The polymerase chain reaction (PCR) is the most reliable and accepted method for plant pathogen diagnosis; however, current conventional PCR machines are not portable and require additional post-processing steps to detect the amplified DNA (amplicon) of pathogens. Real-time PCR can directly quantify the amplicon during DNA amplification without the need for post-processing, making it more suitable for field operations, but it still takes time and requires large instruments that are costly and not portable. Microchip PCR systems have emerged in the past decade to miniaturize conventional PCR systems and to reduce operation time and cost. Real-time microchip PCR systems have also emerged, but unfortunately all reported portable real-time microchip PCR systems require various auxiliary instruments. Here we present a stand-alone real-time microchip PCR system composed of a PCR reaction chamber microchip with an integrated thin-film heater, a compact fluorescence detector to detect the amplified DNA, a microcontroller to control the entire thermocycling operation with data acquisition capability, and a battery. The entire system is 25 × 16 × 8 cm(3) in size and 843 g in weight. The disposable microchip requires only an 8-µl sample volume, and a single PCR run consumes 110 mAh of power. A DNA extraction protocol, notably without the use of liquid nitrogen, chemicals, or other large lab equipment, was developed for field operations. The developed real-time microchip PCR system and the DNA extraction protocol were used to successfully detect six different fungal and bacterial plant pathogens with a 100% success rate, down to a detection limit of 5 ng per 8-µl sample.

  19. Development of a Real-Time Microchip PCR System for Portable Plant Disease Diagnosis

    PubMed Central

    Kim, Hyun Soo; Cifci, Osman S.; Vaughn-Diaz, Vanessa L.; Ma, Bo; Kim, Sungman; Abdel-Raziq, Haron; Ong, Kevin; Jo, Young-Ki; Gross, Dennis C.; Shim, Won-Bo; Han, Arum

    2013-01-01

    Rapid and accurate detection of plant pathogens in the field is crucial to prevent the proliferation of infected crops. The polymerase chain reaction (PCR) is the most reliable and accepted method for plant pathogen diagnosis; however, current conventional PCR machines are not portable and require additional post-processing steps to detect the amplified DNA (amplicon) of pathogens. Real-time PCR can directly quantify the amplicon during DNA amplification without the need for post-processing, making it more suitable for field operations, but it still takes time and requires large instruments that are costly and not portable. Microchip PCR systems have emerged in the past decade to miniaturize conventional PCR systems and to reduce operation time and cost. Real-time microchip PCR systems have also emerged, but unfortunately all reported portable real-time microchip PCR systems require various auxiliary instruments. Here we present a stand-alone real-time microchip PCR system composed of a PCR reaction chamber microchip with an integrated thin-film heater, a compact fluorescence detector to detect the amplified DNA, a microcontroller to control the entire thermocycling operation with data acquisition capability, and a battery. The entire system is 25×16×8 cm3 in size and 843 g in weight. The disposable microchip requires only an 8-µl sample volume, and a single PCR run consumes 110 mAh of power. A DNA extraction protocol, notably without the use of liquid nitrogen, chemicals, or other large lab equipment, was developed for field operations. The developed real-time microchip PCR system and the DNA extraction protocol were used to successfully detect six different fungal and bacterial plant pathogens with a 100% success rate, down to a detection limit of 5 ng per 8-µl sample. PMID:24349341

  20. How operational issues impact science peer review

    NASA Astrophysics Data System (ADS)

    Blacker, Brett S.; Golombek, Daniel; Macchetto, Duccio

    2006-06-01

    In some eyes, the Phase I proposal selection process is the most important activity handled by the Space Telescope Science Institute (STScI). Proposing for HST and other missions consists of requesting observing time and/or archival research funding. This step is called Phase I, in which the scientific merit of a proposal is considered by a community-based peer-review process. Accepted proposals then proceed through Phase II, where the observations are specified in sufficient detail to enable scheduling on the telescope. Each cycle, the Hubble Space Telescope (HST) Telescope Allocation Committee (TAC) reviews proposals and awards observing time that is valued at $0.5B when the total expenditures for HST over its lifetime are figured on an annual basis. This is in fact a very important endeavor that we continue to fine-tune and tweak. The process is open to the science community, and we constantly receive comments and praise for it. In the last year we have had to deal with the loss of the Space Telescope Imaging Spectrograph (STIS) and the move from 3-gyro operations to 2-gyro operations. This paper outlines how operational issues impact the HST science peer review process. We discuss the process that was used to recover from the loss of the STIS instrument and how we dealt with the loss of one third of the current science observations. We also discuss the issues relating to 3-gyro vs. 2-gyro operations and how those changes impacted proposers, our in-house processing, and the TAC.

  1. 76 FR 30246 - Loan Policies and Operations; Loan Purchases From FDIC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-25

    ... source of credit and liquidity to borrowers whose operations are financed with System eligible... sound operation of System business. \\6\\ See NationsBank of North Carolina, N.A. v. Variable Annuity Life... provided restructuring financing, because the restructuring process takes time. One System institution...

  2. Time-dependent constitutive modeling of drive belts—II. The effect of the shape of material retardation spectrum on the strain accumulation process

    NASA Astrophysics Data System (ADS)

    Zupančič, B.; Emri, I.

    2009-11-01

    This is the second paper in the series addressing the constitutive modeling of dynamically loaded elastomeric products such as power transmission belts. During the normal operation of such belts, certain segments of the belt structure are loaded via tooth-like cyclical loading. When the time-dependent properties of the elastomeric material “match” the time scale of the dynamic loading, a strain accumulation (incrementation) process occurs. It was shown that the location of a critical rotation speed strongly depends on the distribution (shape) of the retardation spectrum, whereas the magnitude of the accumulated strain is governed by the strength of the corresponding spectrum lines. These interrelations are extremely non-linear. The strain accumulation process is most intensive at the beginning of the drive belt operation, and is less intensive for longer belts. The strain accumulation process is governed by the spectrum lines that are positioned within a certain region, which we call the Strain Accumulation Window (SAW). An SAW is always located to the right of the spectrum line L_i at log(ωλ_i) = 0, where ω is the operational angular velocity. The width of the SAW depends on the width of the material spectrum. Based on this analysis, a new design criterion is proposed for use in engineering applications for selecting a proper material for general drive-belt operations.

  3. [Use of four kinds of three-dimensional printing guide plate in bone tumor resection and reconstruction operation].

    PubMed

    Fu, Jun; Guo, Zheng; Wang, Zhen; Li, Xiangdong; Fan, Hongbin; Li, Jing; Pei, Yanjun; Pei, Guoxian; Li, Dan

    2014-03-01

    To explore the effectiveness of excision and reconstruction of bone tumors using operation guide plates made by a variety of three-dimensional (3-D) printing techniques, and to compare the advantages and disadvantages of different 3-D printing techniques in the manufacture and application of operation guide plates. Between September 2012 and January 2014, 31 patients with bone tumors underwent excision and reconstruction of bone tumors using operation guide plates. There were 19 males and 12 females, aged 6-67 years (median, 23 years). The disease duration ranged from 15 days to 12 months (median, 2 months). There were 13 cases of malignant tumor and 18 cases of benign tumor. The tumors were located in the femur (9 cases), the spine (7 cases), the tibia (6 cases), the pelvis (5 cases), the humerus (3 cases), and the fibula (1 case). Four kinds of 3-D printing technique were used in processing the operation guide plates: fused deposition modeling (FDM) in 9 cases, stereo lithography appearance (SLA) in 14 cases, 3-D printing technique in 5 cases, and selective laser sintering (SLS) in 3 cases; the materials were ABS resin, photosensitive resin, plaster, and aluminum alloy, respectively. Before operation, all patients underwent thin-layer CT scanning (0.625 mm) in addition to conventional imaging. The data were collected for tumor resection design, and the operation guide plate was designed on the basis of the excision plan. Preoperatively, the operation guide plates were made by 3-D printing equipment. After sterilization, the guide plates were used for excision and reconstruction of the bone tumor. The processing cycle time of the plates was recorded to analyse the efficiency of the 4 kinds of 3-D printing techniques. The time for design and operation and the intraoperative fluoroscopy frequency were recorded. Twenty-eight patients who underwent similar operations during the same period served as the control group.
The processing time of the operation guide plate was (19.3 ± 6.5) hours for FDM, (5.2 ± 1.3) hours for SLA, (8.6 ± 1.9) hours for the 3-D printing technique, and (51.7 ± 12.9) hours for SLS. The preoperative designs and operation guide plates were successfully made and used for excision and reconstruction of bone tumors in 31 cases. Except for 3 failures (operation guide plate fracture), the resection and reconstruction operations followed the preoperative design in the other 28 cases. The patients had longer design times, shorter operation times, and lower fluoroscopy frequencies than the patients of the control group, with significant differences (P < 0.05). The follow-up time was 1-12 months (mean, 3.7 months). Postoperative X-ray and CT showed complete tumor resection and stable reconstruction. 3-D printing operation guide plates are well adapted to the requirements of individual operations for bone tumor resection and reconstruction. The 4 kinds of 3-D printing techniques have their own advantages and should be chosen according to the needs of the operation.

  4. Online total organic carbon (TOC) monitoring for water and wastewater treatment plants processes and operations optimization

    NASA Astrophysics Data System (ADS)

    Assmann, Céline; Scott, Amanda; Biller, Dondra

    2017-08-01

    Organic measurements, such as biological oxygen demand (BOD) and chemical oxygen demand (COD), were developed decades ago to measure organics in water. Today, these time-consuming measurements are still used as parameters to check water treatment quality; however, the time required to generate a result, ranging from hours to days, does not allow COD or BOD to be useful process control parameters; see (1) Standard Method 5210 B, 5-day BOD Test, 1997, and (2) ASTM D1252, COD Test, 2012. Online organic carbon monitoring allows for effective process control because results are generated every few minutes. Though it does not replace the BOD or COD measurements still required for compliance reporting, it allows for smart, data-driven, and rapid decision-making to improve process control and optimization or to meet compliance requirements. Thanks to smart interpretation of the generated data and the capability to take real-time actions, municipal drinking water and wastewater treatment facility operators can positively impact their OPEX (operational expenditure) efficiency and their ability to meet regulatory requirements. This paper describes how three municipal wastewater and drinking water plants gained process insights and identified optimization opportunities through the implementation of online total organic carbon (TOC) monitoring.

  5. The Power Plant Operating Data Based on Real-time Digital Filtration Technology

    NASA Astrophysics Data System (ADS)

    Zhao, Ning; Chen, Ya-mi; Wang, Hui-jie

    2018-03-01

    Real-time monitoring of thermal power plant data is the basis for accurately analyzing thermal economy and accurately reconstructing the operating state. Because noise interference is inevitable, real-time monitoring data must be filtered to obtain accurate information about the operating data of the plant's units and equipment. A real-time filtering algorithm cannot correct current data with future data, so compared with traditional filtering algorithms it faces many constraints. The first-order lag filtering method and the weighted recursive average filtering method can both be used for real-time filtering. This paper analyzes the characteristics of the two filtering methods and applies them to real-time processing of simulated data and thermal power plant operating data. The analysis reveals that the weighted recursive average filtering method achieved very good results when applied to both the simulated and real-time plant data.
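    Both filters compared in the paper are short recurrences that need only past samples, which is what makes them usable in real time. A minimal sketch with illustrative coefficients (not the paper's tuning):

```python
def first_order_lag(samples, a=0.8):
    """y[n] = a*y[n-1] + (1-a)*x[n]; larger `a` smooths more but lags more."""
    y, out = samples[0], []
    for x in samples:
        y = a * y + (1 - a) * x
        out.append(y)
    return out

def weighted_recursive_average(samples, weights=(0.5, 0.3, 0.2)):
    """Weighted average over the most recent len(weights) samples,
    with the newest sample weighted most heavily."""
    out = []
    for n in range(len(samples)):
        window = samples[max(0, n - len(weights) + 1): n + 1][::-1]  # newest first
        w = weights[:len(window)]
        out.append(sum(wi * xi for wi, xi in zip(w, window)) / sum(w))
    return out

# A noisy step in a (hypothetical) plant measurement:
noisy = [0, 0.1, -0.1, 0.05, 1.0, 0.9, 1.1, 0.95, 1.05, 1.0]
lag_out = first_order_lag(noisy)
avg_out = weighted_recursive_average(noisy)
print([round(v, 2) for v in lag_out])
print([round(v, 2) for v in avg_out])
```

    The short finite window of the recursive average tracks the step faster than the lag filter, which is consistent with the paper's preference for it on plant data.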

  6. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.
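    For reference, the DFA detrending operation whose frequency response is analyzed here can be sketched in a few lines (first-order detrending, i.e. DFA1); applied to white noise, the fitted scaling exponent should come out near 0.5:

```python
import numpy as np

def dfa(x, scales):
    """DFA1: integrate the series, remove a least-squares line per window,
    and fit the scaling of the RMS fluctuation F(n) against window size n."""
    y = np.cumsum(x - np.mean(x))            # profile (integrated series)
    F = []
    for n in scales:
        n_win = len(y) // n
        segs = y[: n_win * n].reshape(n_win, n)
        t = np.arange(n)
        trends = np.array([np.polyval(np.polyfit(t, s, 1), t) for s in segs])
        F.append(np.sqrt(np.mean((segs - trends) ** 2)))
    # slope of log F(n) vs log n is the scaling exponent alpha
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(1)
alpha = dfa(rng.normal(size=4096), scales=[16, 32, 64, 128, 256])
print(round(alpha, 2))  # close to 0.5 for uncorrelated noise
```

    The frequency-response results in the paper explain, among other things, why the smallest window sizes bias this log-log fit and are often excluded.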

  7. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses.

    PubMed

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.

  8. Process improvement of knives production in a small scale industry

    NASA Astrophysics Data System (ADS)

    Ananto, Gamawan; Muktasim, Irfan

    2017-06-01

A small-scale industry that produces several kinds of knives should increase its capacity due to demand from the market. Qualitatively, this case study consisted of formulating the problems, collecting and analyzing the necessary data, and determining possible recommendations for improvement. While the current capacity is only 9 (nine) units per month, 20 units of knife are expected to be produced per month. The process sequence is: profiling (a), truing (b), beveling (c), heat treatment (d), polishing (e), assembly (f), sharpening (g) and finishing (h). The first process (a) is performed by an out-house vendor company while the other steps from (b) to (g) are executed in-house. However, there is a high dependency upon the highly skilled operator who executes the in-house processes, which are mostly performed manually with several unbalanced successive tasks, where the processing time of one or two tasks requires a longer duration than the others since the operation relies solely on the operator's skill. The idea is the improvement or change of the profiling and beveling processes. Due to the poor surface quality and suboptimal hardness resulting from the laser cut machine used for profiling, it is considered to substitute this process with wire cutting, which is capable of obtaining good surface quality within a certain range of roughness levels. Through simple cutting experiments on the samples, it is expected that the generated surface quality is adequate to omit the truing process (b). In addition, the cutting experiments on one, two, and four test samples showed that the shortest time was obtained with four pieces in one cut. The technical parameters were set according to the machine's standard recommendations with reference to sample conditions such as thickness and path length that affected the rate of wear. Meanwhile, in order to guarantee the uniformity of the knife angles formed in the beveling process (c), a grinding fixture was created. This kind of tool also diminishes the dependency upon the operator's skill. The main conclusions are: the substitution of the laser cut machine with a wire cut machine for the first task (a) could reduce the operation time from 60 to 39.26 minutes with good surface quality, and the truing process (b) could be omitted; the additional grinding fixture in the beveling process (c) is required, and two workstations have to be assigned instead of one as in the previous condition. These changes lead to improvements including guaranteed uniformity of the knives' angles, reduced dependency on operator skill, a shortening of the cycle time from 855 to 420 minutes, and an increase in productivity from 9 units/month to 20 units/month.
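The headline figures are consistent with a simple throughput calculation. As a rough sketch (the 8400 productive minutes per month is an assumption for illustration; the abstract does not state the available working time):

```python
import math

def monthly_capacity(available_min, cycle_time_min):
    # whole knives finished within the available time
    return math.floor(available_min / cycle_time_min)

AVAILABLE_MIN = 8400  # assumed productive minutes per month

before = monthly_capacity(AVAILABLE_MIN, 855)  # original cycle time
after = monthly_capacity(AVAILABLE_MIN, 420)   # after the improvements
```

With that assumed time budget, the calculation returns 9 units/month before the improvements and 20 after, matching the reported productivity figures.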

  9. Mission Operations of the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Bass, Deborah; Lauback, Sharon; Mishkin, Andrew; Limonadi, Daniel

    2007-01-01

A document describes a system of processes involved in planning, commanding, and monitoring operations of the rovers Spirit and Opportunity of the Mars Exploration Rover mission. The system is designed to minimize command turnaround time, given that inherent uncertainties in terrain conditions and in successful completion of planned landed spacecraft motions preclude planning of some spacecraft activities until the results of prior activities are known by the ground-based operations team. The processes are partitioned into those (designated as tactical) that must be tied to the Martian clock and those (designated strategic) that can, without loss, be completed in a more leisurely fashion. The tactical processes include assessment of downlinked data, refinement and validation of activity plans, sequencing of commands, and integration and validation of sequences. Strategic processes include communications planning and generation of long-term activity plans. The primary benefit of this partition is to enable the tactical portion of the team to focus solely on tasks that contribute directly to meeting the deadlines for commanding the rovers each sol (1 sol = 1 Martian day), achieving a turnaround time of 18 hours or less, while facilitating strategic team interactions with other organizations that do not work on a Mars time schedule.

  10. Implementing a combined polar-geostationary algorithm for smoke emissions estimation in near real time

    NASA Astrophysics Data System (ADS)

    Hyer, E. J.; Schmidt, C. C.; Hoffman, J.; Giglio, L.; Peterson, D. A.

    2013-12-01

Polar and geostationary satellites are used operationally for fire detection and smoke source estimation by many near-real-time operational users, including operational forecast centers around the globe. The input satellite radiance data are processed by data providers to produce Level-2 and Level-3 fire detection products, but processing these data into spatially and temporally consistent estimates of fire activity requires a substantial amount of additional processing. The most significant processing steps are correction for variable coverage of the satellite observations, and correction for conditions that affect the detection efficiency of the satellite sensors. We describe a system developed by the Naval Research Laboratory (NRL) that uses the full raster information from the entire constellation to diagnose detection opportunities, calculate corrections for factors such as angular dependence of detection efficiency, and generate global estimates of fire activity at spatial and temporal scales suitable for atmospheric modeling. By incorporating these improved fire observations, smoke emissions products, such as NRL's FLAMBE, are able to produce improved estimates of global emissions. This talk provides an overview of the system, demonstrates the achievable improvement over older methods, and describes challenges for near-real-time implementation.

  11. New Process Controls for the Hera Cryogenic Plant

    NASA Astrophysics Data System (ADS)

    Böckmann, T.; Clausen, M.; Gerke, Chr.; Prüß, K.; Schoeneburg, B.; Urbschat, P.

    2010-04-01

The cryogenic plant built for the HERA accelerator at DESY in Hamburg (Germany) has now been in operation for more than two decades. The commercial process control system for the cryogenic plant has been in operation for the same time period. Since then, the operator stations, the control network and the CPU boards in the process controllers have gone through several upgrade stages. Only the centralized Input/Output system was kept unchanged. Many components have been running beyond their expected lifetime. The control system for one of the three parts of the cryogenic plant has recently been replaced by a distributed I/O system. The I/O nodes are connected to several Profibus-DP field buses. Profibus provides the infrastructure to attach intelligent sensors and actuators directly to the process controllers, which run the open source process control software EPICS. This paper describes the modification process on all levels, from cabling through I/O configuration and the process control software up to the operator displays.

  12. Prognostics using Engineering and Environmental Parameters as Applied to State of Health (SOH) Radionuclide Aerosol Sampler Analyzer (RASA) Real-Time Monitoring

    NASA Astrophysics Data System (ADS)

    Hutchenson, K. D.; Hartley-McBride, S.; Saults, T.; Schmidt, D. P.

    2006-05-01

The International Monitoring System (IMS) is composed in part of radionuclide particulate and gas monitoring systems. Monitoring the operational status of these systems is an important aspect of nuclear weapon test monitoring. Quality data, process control techniques, and predictive models are necessary to detect and predict system component failures. Predicting failures in advance provides time to mitigate them, thus minimizing operational downtime. The Provisional Technical Secretariat (PTS) requires that IMS radionuclide systems be operational 95 percent of the time. The United States National Data Center (US NDC) offers contributing components to the IMS. This effort focuses on the initial research and process development using prognostics for monitoring and predicting failures of the RASA two (2) days into the future. The predictions, using time series methods, are input to an expert decision system called SHADES (State of Health Airflow and Detection Expert System). The results enable personnel to make informed judgments about the health of the RASA system. Data are read from a relational database, processed, and displayed to the user in a GIS as a prototype GUI. This procedure mimics the real-time application process that could be implemented as an operational system. This initial proof-of-concept effort developed predictive models focused on RASA components for a single site (USP79). Future work shall include the incorporation of other RASA systems, as well as the environmental conditions that play a significant role in their performance. Similarly, SHADES currently accommodates specific component behaviors at this one site. Future work shall also include important environmental variables that play an important part in the prediction algorithms.
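The short-horizon forecasting step can be illustrated with a deliberately simple time-series method. The airflow values below are invented and SHADES's actual predictive models are more elaborate; this only shows the shape of a two-day-ahead prediction:

```python
def ses_forecast(series, alpha=0.5, steps=2):
    """Simple exponential smoothing; project the final level `steps` ahead."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return [level] * steps  # SES forecasts are flat beyond the data

daily_airflow = [100.0, 98.0, 97.5, 95.0, 94.0]  # hypothetical SOH metric
prediction = ses_forecast(daily_airflow)         # two days into the future
```

A downstream expert system could then compare such predictions against alarm thresholds to warn of impending component failure.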

  13. Operating efficiency of an emergency Burns theatre: An eight month analysis.

    PubMed

    Mohan, Arvind; Lutterodt, Christopher; Leon-Villapalos, Jorge

    2017-11-01

The efficient use of operating theatres is important to ensure optimum cost-benefit for the hospital. We used the emergency Burns theatre as a model to assess theatre efficiency at our institution. Data was collected retrospectively on every operation performed in the Burns theatre between 01/04/15 and 30/11/15. Each component of the operating theatre process was considered and integrated to calculate values for surgical/anaesthetic time, changeover time and ultimately theatre efficiency. A total of 426 operations were carried out over 887 h of allocated theatre time (ATT). Actual operating time represented 67.7%, anaesthetic time 8.8% and changeover time 14.2% of ATT. The average changeover time between patients was 30.1 min. Lists started on average 27.7 min late each day. There were a total of 5.8 h of overruns and 9.6 h of no useful activity. Operating theatre efficiency was 69.3% for the 8-month period. Our study highlights areas where theatre efficiency can be improved. We suggest various strategies to improve this that may be applied universally. Copyright © 2017 Elsevier Ltd and ISBI. All rights reserved.
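The reported breakdown can be reproduced from logged session times. In the sketch below, the segment durations for one 8-hour list are chosen to mirror the paper's percentages, and the metric definitions are a common convention rather than necessarily the authors' exact formula:

```python
def utilisation(allocated_min, segments):
    """Each segment's share of allocated theatre time, in percent."""
    return {name: round(100 * mins / allocated_min, 1)
            for name, mins in segments.items()}

allocated = 8 * 60  # one 8-hour list, in minutes
day = {"operating": 325, "anaesthetic": 42, "changeover": 68}

shares = utilisation(allocated, day)
idle = round(100 - sum(shares.values()), 1)  # unaccounted time
```

These illustrative durations yield 67.7% operating, 8.8% anaesthetic and 14.2% changeover time, with the remainder unaccounted for, which is how gaps such as late starts and no useful activity become visible.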

  14. The Endurance of Children's Working Memory: A Recall Time Analysis

    ERIC Educational Resources Information Center

    Towse, John N.; Hitch, Graham J.; Hamilton, Z.; Pirrie, Sarah

    2008-01-01

    We analyze the timing of recall as a source of information about children's performance in complex working memory tasks. A group of 8-year-olds performed a traditional operation span task in which sequence length increased across trials and an operation period task in which processing requirements were extended across trials of constant sequence…

  15. Real time explosive hazard information sensing, processing, and communication for autonomous operation

    DOEpatents

    Versteeg, Roelof J; Few, Douglas A; Kinoshita, Robert A; Johnson, Doug; Linda, Ondrej

    2015-02-24

Methods, computer readable media, and apparatuses provide robotic explosive hazard detection. A robot intelligence kernel (RIK) includes a dynamic autonomy structure with two or more autonomy levels between operator intervention and robot initiative. A mine sensor and processing module (ESPM) operating separately from the RIK perceives environmental variables indicative of a mine using subsurface perceptors. The ESPM processes mine information to determine a likelihood of a presence of a mine. A robot can autonomously modify behavior responsive to an indication of a detected mine. The behavior is modified between detection of mines, detailed scanning and characterization of the mine, developing mine indication parameters, and resuming detection. Real time messages are passed between the RIK and the ESPM. A combination of ESPM bound messages and RIK bound messages causes the robot platform to switch between modes including a calibration mode, the mine detection mode, and the mine characterization mode.

  16. Real time explosive hazard information sensing, processing, and communication for autonomous operation

    DOEpatents

    Versteeg, Roelof J.; Few, Douglas A.; Kinoshita, Robert A.; Johnson, Douglas; Linda, Ondrej

    2015-12-15

Methods, computer readable media, and apparatuses provide robotic explosive hazard detection. A robot intelligence kernel (RIK) includes a dynamic autonomy structure with two or more autonomy levels between operator intervention and robot initiative. A mine sensor and processing module (ESPM) operating separately from the RIK perceives environmental variables indicative of a mine using subsurface perceptors. The ESPM processes mine information to determine a likelihood of a presence of a mine. A robot can autonomously modify behavior responsive to an indication of a detected mine. The behavior is modified between detection of mines, detailed scanning and characterization of the mine, developing mine indication parameters, and resuming detection. Real time messages are passed between the RIK and the ESPM. A combination of ESPM bound messages and RIK bound messages causes the robot platform to switch between modes including a calibration mode, the mine detection mode, and the mine characterization mode.

  17. Use of lean and six sigma methodology to improve operating room efficiency in a high-volume tertiary-care academic medical center.

    PubMed

    Cima, Robert R; Brown, Michael J; Hebl, James R; Moore, Robin; Rogers, James C; Kollengode, Anantha; Amstutz, Gwendolyn J; Weisbrod, Cheryl A; Narr, Bradly J; Deschamps, Claude

    2011-07-01

    Operating rooms (ORs) are resource-intense and costly hospital units. Maximizing OR efficiency is essential to maintaining an economically viable institution. OR efficiency projects often focus on a limited number of ORs or cases. Efforts across an entire OR suite have not been reported. Lean and Six Sigma methodologies were developed in the manufacturing industry to increase efficiency by eliminating non-value-added steps. We applied Lean and Six Sigma methodologies across an entire surgical suite to improve efficiency. A multidisciplinary surgical process improvement team constructed a value stream map of the entire surgical process from the decision for surgery to discharge. Each process step was analyzed in 3 domains, ie, personnel, information processed, and time. Multidisciplinary teams addressed 5 work streams to increase value at each step: minimizing volume variation; streamlining the preoperative process; reducing nonoperative time; eliminating redundant information; and promoting employee engagement. Process improvements were implemented sequentially in surgical specialties. Key performance metrics were collected before and after implementation. Across 3 surgical specialties, process redesign resulted in substantial improvements in on-time starts and reduction in number of cases past 5 pm. Substantial gains were achieved in nonoperative time, staff overtime, and ORs saved. These changes resulted in substantial increases in margin/OR/day. Use of Lean and Six Sigma methodologies increased OR efficiency and financial performance across an entire operating suite. Process mapping, leadership support, staff engagement, and sharing performance metrics are keys to enhancing OR efficiency. The performance gains were substantial, sustainable, positive financially, and transferrable to other specialties. Copyright © 2011 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  18. Using Multiple FPGA Architectures for Real-time Processing of Low-level Machine Vision Functions

    Treesearch

    Thomas H. Drayer; William E. King; Philip A. Araman; Joseph G. Tront; Richard W. Conners

    1995-01-01

    In this paper, we investigate the use of multiple Field Programmable Gate Array (FPGA) architectures for real-time machine vision processing. The use of FPGAs for low-level processing represents an excellent tradeoff between software and special purpose hardware implementations. A library of modules that implement common low-level machine vision operations is presented...

  19. Pre-operative CT angiography and three-dimensional image post processing for deep inferior epigastric perforator flap breast reconstructive surgery.

    PubMed

    Lam, D L; Mitsumori, L M; Neligan, P C; Warren, B H; Shuman, W P; Dubinsky, T J

    2012-12-01

Autologous breast reconstructive surgery with deep inferior epigastric artery (DIEA) perforator flaps has become the mainstay of breast reconstructive surgery. CT angiography and three-dimensional image post-processing can depict the number, size, course and location of the DIEA perforating arteries for the pre-operative selection of the best artery to use for the tissue flap. Knowledge of the location, and selection, of the optimal perforating artery shortens operative times and decreases patient morbidity.

  20. Final technical report. In-situ FT-IR monitoring of a black liquor recovery boiler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James Markham; Joseph Cosgrove; David Marran

    1999-05-31

This project developed and tested advanced Fourier transform infrared (FT-IR) instruments for process monitoring of black liquor recovery boilers. The state-of-the-art FT-IR instruments successfully operated in the harsh environment of a black liquor recovery boiler and provided a wealth of real-time process information. Concentrations of multiple gas species were simultaneously monitored in-situ across the combustion flow of the boiler and extractively at the stack. Sensitivity to changes of particulate fume and carryover levels in the process flow was also demonstrated. Boiler set-up and operation is a complex balance of conditions that influence the chemical and physical processes in the combustion flow. Operating parameters include black liquor flow rate, liquor temperature, nozzle pressure, primary air, secondary air, tertiary air, boiler excess oxygen and others. The in-process information provided by the FT-IR monitors can be used as a boiler control tool, since species indicative of combustion efficiency (carbon monoxide, methane) and pollutant emissions (sulfur dioxide, hydrochloric acid and fume) were monitored in real time and observed to fluctuate as operating conditions were varied. A high priority need of the U.S. industrial boiler market is improved measurement and control technology. The sensor technology demonstrated in this project is applicable to this need of industry.

  1. Temporal Proof Methodologies for Real-Time Systems,

    DTIC Science & Technology

    1990-09-01

real-time systems that communicate either through shared variables or by message passing, and real-time issues such as time-outs, process priorities (interrupts) and process scheduling. The authors exhibit two styles for the specification of real-time systems. While the first approach uses bounded versions of temporal operators, the second approach allows explicit references to time through a special clock variable. Corresponding to the two styles of specification, the authors present and compare two fundamentally different proof

  2. 29 CFR 784.152 - Operations performed on byproducts.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... resulting from processing or canning operations, to produce fish oil or meal, would come within the..., since fish oil is nonperishable in the sense that it may be held for a substantial period of time...

  3. 29 CFR 784.152 - Operations performed on byproducts.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... resulting from processing or canning operations, to produce fish oil or meal, would come within the..., since fish oil is nonperishable in the sense that it may be held for a substantial period of time...

  4. A customizable system for real-time image processing using the Blackfin DSProcessor and the MicroC/OS-II real-time kernel

    NASA Astrophysics Data System (ADS)

    Coffey, Stephen; Connell, Joseph

    2005-06-01

    This paper presents a development platform for real-time image processing based on the ADSP-BF533 Blackfin processor and the MicroC/OS-II real-time operating system (RTOS). MicroC/OS-II is a completely portable, ROMable, pre-emptive, real-time kernel. The Blackfin Digital Signal Processors (DSPs), incorporating the Analog Devices/Intel Micro Signal Architecture (MSA), are a broad family of 16-bit fixed-point products with a dual Multiply Accumulate (MAC) core. In addition, they have a rich instruction set with variable instruction length and both DSP and MCU functionality thus making them ideal for media based applications. Using the MicroC/OS-II for task scheduling and management, the proposed system can capture and process raw RGB data from any standard 8-bit greyscale image sensor in soft real-time and then display the processed result using a simple PC graphical user interface (GUI). Additionally, the GUI allows configuration of the image capture rate and the system and core DSP clock rates thereby allowing connectivity to a selection of image sensors and memory devices. The GUI also allows selection from a set of image processing algorithms based in the embedded operating system.

  5. Cycle time and cost reduction in large-size optics production

    NASA Astrophysics Data System (ADS)

    Hallock, Bob; Shorey, Aric; Courtney, Tom

    2005-09-01

Optical fabrication process steps have remained largely unchanged for decades. Raw glass blanks have been rough-machined, generated to near net shape, loose-abrasive or fine bound-diamond ground, and then polished. This set of processes is sequential and each subsequent operation removes the damage and micro-cracking induced by the prior operational step. One of the long-lead aspects of this process has been the glass polishing. Primarily, this has been driven by the need to remove relatively large volumes of glass material compared to the polishing removal rate to ensure complete damage removal. The secondary time driver has been poor convergence to final figure and the corresponding polish-metrology cycles. The overall cycle time and resultant cost due to labor, equipment utilization and shop efficiency is increased, often significantly, when the optical prescription is aspheric. In addition to the long polishing cycle times, the duration of the polishing time is often very difficult to predict given that current polishing processes are not deterministic. This paper will describe a novel approach to large optics finishing, relying on several innovative technologies to be presented and illustrated through a variety of examples. The cycle time reductions enabled by this approach promise to result in significant cost and lead-time reductions for large-size optics. In addition, corresponding increases in throughput will provide for less capital expenditure per square meter of optic produced. This process, comparative cycle time estimates and preliminary results will be discussed.

  6. Operation of the solvent-refined-coal pilot plant, Wilsonville, Alabama. Annual technical report, January-December 1980

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, H.E.

    1981-08-01

The plant was in operation for the equivalent of 247 days, an on-stream factor of 67.7%. Kentucky 9 coals from the Lafayette, Dotiki and Fies mines were processed. During 1980, the operating conditions and equipment were adjusted to evaluate potential process improvements. These experiments produced significant results in the following areas: operating the V103 High Pressure Separator in the hot mode; varying the T102 Vacuum Column operating temperature; adding light SRC (LSRC), a product of the third stage of the Critical Solvent Deashing (CSD) unit, to the process solvent; investigating the effects of the chlorine content of the feed coal on corrosion in the process vessels; evaluating the effects of adding sodium carbonate on corrosion rates; operating under conditions of low severity, i.e., low reactor temperature and long residence time; and testing an alternate CSD deashing solvent. A series of simulation runs investigating the design operating conditions for a planned 6000 ton per day SRC-I demonstration plant was also completed. Numerous improvements were made in the CSD processing area, and the components for a hydrotreating unit were installed.

  7. A novelty detection diagnostic methodology for gearboxes operating under fluctuating operating conditions using probabilistic techniques

    NASA Astrophysics Data System (ADS)

    Schmidt, S.; Heyns, P. S.; de Villiers, J. P.

    2018-02-01

In this paper, a fault diagnostic methodology is developed which is able to detect, locate and trend gear faults under fluctuating operating conditions when only vibration data from a single transducer, measured on a healthy gearbox, are available. A two-phase feature extraction and modelling process is proposed to infer the operating condition and, based on the operating condition, to detect changes in the machine condition. Information from optimised machine and operating condition hidden Markov models is statistically combined to generate a discrepancy signal which is post-processed to infer the condition of the gearbox. The discrepancy signal is processed and combined with statistical methods for automatic fault detection and localisation and to perform fault trending over time. The proposed methodology is validated on experimental data, and a tacholess order tracking methodology is used to enhance the cost-effectiveness of the diagnostic methodology.
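The discrepancy-signal idea can be sketched with a far simpler probabilistic model than the paper's hidden Markov models: score each feature window by its negative log-likelihood under a model fitted to healthy data, so windows the healthy model explains poorly stand out. All numbers below are illustrative:

```python
import math

def gaussian_nll(x, mu, sigma):
    """Negative log-likelihood of x under a Gaussian healthy model."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (x - mu) ** 2 / (2 * sigma ** 2)

healthy_mu, healthy_sigma = 1.0, 0.1   # fitted on healthy vibration features
windows = [1.02, 0.98, 1.05, 1.60]     # last window drifts from healthy behaviour

discrepancy = [gaussian_nll(w, healthy_mu, healthy_sigma) for w in windows]
```

The final window produces by far the largest discrepancy value, which a thresholding or trending step could then flag as a possible fault.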

  8. Autoheated thermophilic aerobic digestion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deeny, K.; Hahn, H.; Leonhard, D.

    1991-10-01

    Autothermal thermophilic aerobic digestion (ATAD) is first and foremost a digestion process, the primary purpose of which is to decompose a portion of the waste organic solids generated from wastewater treatment. As a result of the high operating temperature, digestion is expected to occur within a short time period (6 days) and accomplish a high degree of pathogen reduction. ATAD systems are two-stage aerobic digestion processes that operate under thermophilic temperature conditions (40 to 80C) without supplemental heat. Like composting, the systems rely on the conservation of heat released during digestion itself to attain and sustain the desired operating temperature.more » Typical ATAD systems operate at 55C and may reach temperatures of 60 to 65C in the second-stage reactor. Perhaps because of the high operating temperature, this process has been referred to as Liquid Composting.' Major advantages associated with thermophilic operation include high biological reaction rates and a substantial degree of pathogen reduction.« less

  9. Assessment of Manual Operation Time for the Manufacturing of Thin Film Transistor Liquid Crystal Display: A Bayesian Approach

    NASA Astrophysics Data System (ADS)

    Shen, Chien-wen

    2009-01-01

During the processes of TFT-LCD manufacturing, steps like visual inspection of panel surface defects still rely heavily on manual operations. As the manual inspection time in TFT-LCD manufacturing can range from 4 hours to 1 day, the reliability of time forecasting is thus important for production planning, scheduling and customer response. This study proposes a practical and easy-to-implement prediction model, based on Bayesian networks, for time estimation of manually operated procedures in TFT-LCD manufacturing. Given the lack of prior knowledge about manual operation time, algorithms of necessary path condition and expectation-maximization are used for structural learning and estimation of conditional probability distributions, respectively. This study also applied Bayesian inference to evaluate the relationships between explanatory variables and manual operation time. In empirical applications of the proposed forecasting model, the Bayesian network approach demonstrates its practicability and prediction accountability.
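The inference step can be illustrated in miniature. The variables and probabilities below are invented (the paper learns its conditional probability tables from data via expectation-maximization); a single Bayes update turns a prior over operation time into a posterior given observed evidence:

```python
prior = {"short": 0.5, "medium": 0.3, "long": 0.2}       # P(time class)
likelihood = {"short": 0.1, "medium": 0.4, "long": 0.8}  # P(high defect count | time class)

# P(evidence) by total probability, then Bayes' rule for the posterior
evidence = sum(prior[t] * likelihood[t] for t in prior)
posterior = {t: prior[t] * likelihood[t] / evidence for t in prior}
```

Observing a high defect count shifts the most probable class from "short" to "long", mirroring how inspection evidence updates the predicted manual operation time.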

  10. Impact of lean six sigma process improvement methodology on cardiac catheterization laboratory efficiency.

    PubMed

    Agarwal, Shikhar; Gallo, Justin J; Parashar, Akhil; Agarwal, Kanika K; Ellis, Stephen G; Khot, Umesh N; Spooner, Robin; Murat Tuzcu, Emin; Kapadia, Samir R

    2016-03-01

    Operational inefficiencies are ubiquitous in several healthcare processes. To improve the operational efficiency of our catheterization laboratory (Cath Lab), we implemented a lean six sigma process improvement initiative, starting in June 2010. We aimed to study the impact of lean six sigma implementation on improving the efficiency and the patient throughput in our Cath Lab. All elective and urgent cardiac catheterization procedures including diagnostic coronary angiography, percutaneous coronary interventions, structural interventions and peripheral interventions performed between June 2009 and December 2012 were included in the study. Performance metrics utilized for analysis included turn-time, physician downtime, on-time patient arrival, on-time physician arrival, on-time start and manual sheath-pulls inside the Cath Lab. After implementation of lean six sigma in the Cath Lab, we observed a significant improvement in turn-time, physician downtime, on-time patient arrival, on-time physician arrival, on-time start as well as sheath-pulls inside the Cath Lab. The percentage of cases with optimal turn-time increased from 43.6% in 2009 to 56.6% in 2012 (p-trend<0.001). Similarly, the percentage of cases with an aggregate on-time start increased from 41.7% in 2009 to 62.8% in 2012 (p-trend<0.001). In addition, the percentage of manual sheath-pulls performed in the Cath Lab decreased from 60.7% in 2009 to 22.7% in 2012 (p-trend<0.001). The current longitudinal study illustrates the impact of successful implementation of a well-known process improvement initiative, lean six sigma, on improving and sustaining efficiency of our Cath Lab operation. After the successful implementation of this continuous quality improvement initiative, there was a significant improvement in the selected performance metrics namely turn-time, physician downtime, on-time patient arrival, on-time physician arrival, on-time start as well as sheath-pulls inside the Cath Lab. 
Copyright © 2016 Elsevier Inc. All rights reserved.

  11. An expert systems application to space base data processing

    NASA Technical Reports Server (NTRS)

    Babb, Stephen M.

    1988-01-01

The advent of space vehicles with their increased data requirements is reflected in the complexity of future telemetry systems. Space-based operations, with their immense operating costs, will shift the burden of data processing and routine analysis from the space station to the Orbital Transfer Vehicle (OTV). A research and development project is described which addresses the real-time onboard data processing tasks associated with a space-based vehicle, specifically focusing on an implementation of an expert system.

  12. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.
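The payoff of multithreaded I/O can be sketched outside ImageJ. This Python stand-in (not ImageJ's API) hands a slow file read to a worker thread so other processing can continue until the data is actually needed:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

# create a small stand-in "image" file for the demonstration
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".raw")
tmp.write(b"\x00" * 1024)
tmp.close()

def load(path):
    with open(path, "rb") as f:
        return f.read()

with ThreadPoolExecutor(max_workers=2) as pool:
    future = pool.submit(load, tmp.name)  # read proceeds on a worker thread
    # ... other processing could continue here ...
    data = future.result()                # block only when the data is needed

os.unlink(tmp.name)
```

This overlap of file reading with other work is the same design choice ImageJ makes by running time-consuming operations on separate threads.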

  13. Effectiveness of facilitated introduction of a standard operating procedure into routine processes in the operating theatre: a controlled interrupted time series.

    PubMed

    Morgan, Lauren; New, Steve; Robertson, Eleanor; Collins, Gary; Rivero-Arias, Oliver; Catchpole, Ken; Pickering, Sharon P; Hadi, Mohammed; Griffin, Damian; McCulloch, Peter

    2015-02-01

    Standard operating procedures (SOPs) should improve safety in the operating theatre, but controlled studies evaluating the effect of staff-led implementation are needed. In a controlled interrupted time series, we evaluated three team process measures (compliance with WHO surgical safety checklist, non-technical skills and technical performance) and three clinical outcome measures (length of hospital stay, complications and readmissions) before and after a 3-month staff-led development of SOPs. Process measures were evaluated by direct observation, using Oxford Non-Technical Skills II for non-technical skills and the 'glitch count' for technical performance. All staff in two orthopaedic operating theatres were trained in the principles of SOPs and then assisted to develop standardised procedures. Staff in a control operating theatre underwent the same observations but received no training. The change in difference between active and control groups was compared before and after the intervention using repeated measures analysis of variance. We observed 50 operations before and 55 after the intervention and analysed clinical data on 1022 and 861 operations, respectively. The staff chose to structure their efforts around revising the 'whiteboard' which documented and prompted tasks, rather than directly addressing specific task problems. Although staff preferred and sustained the new system, we found no significant differences in process or outcome measures before/after intervention in the active versus the control group. There was a secular trend towards worse outcomes in the postintervention period, seen in both active and control theatres. SOPs when developed and introduced by frontline staff do not necessarily improve operative processes or outcomes. The inherent tension in improvement work between giving staff ownership of improvement and maintaining control of direction needs to be managed, to ensure staff are engaged but invest energy in appropriate change. 

  14. Photonics-based real-time ultra-high-range-resolution radar with broadband signal generation and processing.

    PubMed

    Zhang, Fangzheng; Guo, Qingshui; Pan, Shilong

    2017-10-23

    Real-time and high-resolution target detection is highly desirable in modern radar applications. Electronic techniques have encountered grave difficulties in the development of such radars, which strictly rely on a large instantaneous bandwidth. In this article, a photonics-based real-time high-range-resolution radar is proposed with optical generation and processing of broadband linear frequency modulation (LFM) signals. A broadband LFM signal is generated in the transmitter by photonic frequency quadrupling, and the received echo is de-chirped to a low-frequency signal by photonic frequency mixing. The system can operate at a high frequency and a large bandwidth while enabling real-time processing by low-speed analog-to-digital conversion and digital signal processing. A conceptual radar is established. Real-time processing of an 8-GHz LFM signal is achieved with a sampling rate of 500 MSa/s. Accurate distance measurement is implemented with a maximum error of 4 mm within a range of ~3.5 meters. Detection of two targets is demonstrated with a range resolution as high as 1.875 cm. We believe the proposed radar architecture is a reliable solution to overcome the limitations of current radars in operating bandwidth and processing speed, and we expect it to be used in future radars for real-time, high-resolution target detection and imaging.
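    As a rough numerical companion to the de-chirp scheme described above: after photonic mixing, the echo collapses to a beat tone whose frequency is proportional to range, so a low-speed FFT recovers distance. In the sketch below, only the 8-GHz bandwidth and 500-MSa/s sampling rate come from the abstract; the sweep duration and target range are assumed for illustration.

```python
import numpy as np

# System parameters; only B and fs come from the abstract.
c = 3e8        # speed of light, m/s
B = 8e9        # LFM bandwidth, Hz (from the abstract)
fs = 500e6     # post-de-chirp sampling rate, Sa/s (from the abstract)
T = 100e-6     # sweep duration, s (assumed)

R_true = 2.0                          # target range, m (assumed)
f_beat = 2 * R_true * B / (c * T)     # de-chirped beat frequency

t = np.arange(int(T * fs)) / fs
x = np.cos(2 * np.pi * f_beat * t)    # idealised de-chirped echo

# Low-speed DSP: the FFT peak of the de-chirped tone maps back to range.
spec = np.abs(np.fft.rfft(x))
k = np.argmax(spec[1:]) + 1           # skip the DC bin
f_est = k * fs / len(x)
R_est = f_est * c * T / (2 * B)
```

    The FFT-bin quantisation of this estimate works out to c/(2B) ≈ 1.875 cm, consistent with the range resolution quoted in the abstract.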

  15. High efficiency endocrine operation protocol: From design to implementation.

    PubMed

    Mascarella, Marco A; Lahrichi, Nadia; Cloutier, Fabienne; Kleiman, Simcha; Payne, Richard J; Rosenberg, Lawrence

    2016-10-01

    We developed a high efficiency endocrine operative protocol based on a mathematical programming approach, process reengineering, and value-stream mapping to increase the number of operations completed per day without increasing operating room time at a tertiary-care, academic center. Using this protocol, in a case-control study, 72 patients undergoing endocrine operations during high efficiency days were age-, sex-, and procedure-matched to 72 patients undergoing operations during standard days. The demographic profile, operative times, and perioperative complications were noted. The average number of cases per 8-hour workday in the high efficiency and standard operating rooms was 7 and 5, respectively. Mean procedure times in both groups were similar. The turnaround time (mean ± standard deviation) in the high efficiency group was 8.5 (±2.7) minutes as compared with 15.4 (±4.9) minutes in the standard group (P < .001). Transient postoperative hypocalcemia was 6.9% (5/72) and 8.3% (6/72) for the high efficiency and standard groups, respectively (P = .99). In this study, patients undergoing high efficiency endocrine operation had similar procedure times and perioperative complications compared with the standard group. The proposed high efficiency protocol seems to better utilize operative time and decrease the backlog of patients waiting for endocrine operation in a country with a universal national health care program. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. The SARVIEWS Project: Automated SAR Processing in Support of Operational Near Real-time Volcano Monitoring

    NASA Astrophysics Data System (ADS)

    Meyer, F. J.; Webley, P. W.; Dehn, J.; Arko, S. A.; McAlpin, D. B.; Gong, W.

    2016-12-01

    Volcanic eruptions are among the most significant hazards to human society, capable of triggering natural disasters on regional to global scales. In the last decade, remote sensing has become established in operational volcano monitoring. Centers like the Alaska Volcano Observatory rely heavily on remote sensing data from optical and thermal sensors to provide time-critical hazard information. Despite this high use of remote sensing data, the presence of clouds and a dependence on solar illumination often limit their impact on decision making. Synthetic Aperture Radar (SAR) systems are widely considered superior to optical sensors in operational monitoring situations, due to their weather and illumination independence. Still, the contribution of SAR to operational volcano monitoring has been limited in the past due to high data costs, long processing times, and low temporal sampling rates of most SAR systems. In this study, we introduce the automatic SAR processing system SARVIEWS, whose advanced data analysis and data integration techniques allow, for the first time, a meaningful integration of SAR into operational monitoring systems. We will introduce the SARVIEWS database interface that allows for automatic, rapid, and seamless access to the data holdings of the Alaska Satellite Facility. We will also present a set of processing techniques designed to automatically generate a set of SAR-based hazard products (e.g. change detection maps, interferograms, geocoded images). The techniques take advantage of modern signal processing and radiometric normalization schemes, enabling the combination of data from different geometries. Finally, we will show how SAR-based hazard information is integrated in existing multi-sensor decision support tools to enable joint hazard analysis with data from optical and thermal sensors. 
We will showcase the SAR processing system using a set of recent natural disasters (both earthquakes and volcanic eruptions) to demonstrate its robustness. We will also show the benefit of integrating SAR with data from other sensors to support volcano monitoring. For historic eruptions at Okmok and Augustine volcano, both located in the North Pacific, we will demonstrate that the addition of SAR can lead to a significant improvement in activity detection and eruption forecasting.

  17. The impact of preventable disruption on the operative time for minimally invasive surgery.

    PubMed

    Al-Hakim, Latif

    2011-10-01

    Current ergonomic studies show that disruption exposes surgical teams to stress and musculoskeletal disorders. This study considers minimally invasive surgery as a sociotechnical process subjected to a variety of disruption events other than those recognized by ergonomic science. The research takes into consideration the impact of preventable disruption on operating time rather than on the physical and emotional status of the surgical team. Events inside operating rooms that disturbed operative time were recorded for 17 minimally invasive surgeries. The disruption events were classified into four main areas: prerequisite requirements, work design, communication during surgery, and other. Each area was further classified according to sources of disruption. Altogether, 11 sources of disruption were identified: patient record, protocol and policy, surgical requirements and surgeon preferences, operating table and patient positioning, arrangement of instruments, lighting, monitor, clothing, surgical teamwork, coordination, and other. Disruption prolonged operative time by more than 32%. Teamwork forms the main source of disruption followed by operating table and patient positioning and arrangement of instruments. These three sources represented approximately 20% of operative time. Failure to follow principles of work design had a significant negative impact, lengthening operative time by approximately 15%. Although lighting and monitors had a relatively small impact on operative time, these factors could create inconvenience and stress within the surgical teams. In addition, the effect of failure to follow surgical protocols and policies or having incomplete patient records may have a limited effect on operative time but could have serious consequences. This report demonstrates that preventable disruption caused an increase in operative time and forced surgeons and patients to endure unnecessary delay of more than 32%. 
Such additional time could be used to deal with the pressure of emergency cases and to reduce waiting lists for elective surgery.

  18. A Logical Process Calculus

    NASA Technical Reports Server (NTRS)

    Cleaveland, Rance; Luettgen, Gerald; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    This paper presents the Logical Process Calculus (LPC), a formalism that supports heterogeneous system specifications containing both operational and declarative subspecifications. Syntactically, LPC extends Milner's Calculus of Communicating Systems with operators from the alternation-free linear-time mu-calculus (LT(mu)). Semantically, LPC is equipped with a behavioral preorder that generalizes Hennessy's and DeNicola's must-testing preorder as well as LT(mu)'s satisfaction relation, while being compositional for all LPC operators. From a technical point of view, the new calculus is distinguished by the inclusion of: (1) both minimal and maximal fixed-point operators and (2) an unimplementability predicate on process terms, which tags inconsistent specifications. The utility of LPC is demonstrated by means of an example highlighting the benefits of heterogeneous system specification.

  19. Optical computing using optical flip-flops in Fourier processors: use in matrix multiplication and discrete linear transforms.

    PubMed

    Ando, S; Sekine, S; Mita, M; Katsuo, S

    1989-12-15

    An architecture and algorithms for matrix multiplication using optical flip-flops (OFFs) in optical processors are proposed, based on residue arithmetic. The proposed system can process all elements of the matrices in parallel by exploiting the information-retrieving ability of optical Fourier processors. The use of OFFs enables bidirectional data flow, leading to a simpler architecture, and the contribution of residue-to-decimal (or residue-to-binary) conversion to operation time can be greatly reduced by processing all elements in parallel. The calculated operation-time characteristics suggest that the system is promising for real-time 2-D linear transforms.
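    The residue-arithmetic idea can be sketched in ordinary software: each residue channel multiplies matrices independently modulo a small modulus (this channel-level parallelism is what the optical processor exploits), and a Chinese-remainder step recovers the conventional result. The moduli and matrices below are arbitrary illustrations, not the paper's parameters.

```python
import numpy as np
from math import prod

moduli = [5, 7, 9, 11, 13]            # pairwise coprime
M = prod(moduli)                      # dynamic range of the residue system

def to_residues(A):
    """Represent an integer matrix as one residue matrix per modulus."""
    return [A % m for m in moduli]

def crt(residues):
    """Chinese-remainder reconstruction, applied elementwise."""
    x = np.zeros_like(residues[0], dtype=np.int64)
    for r, m in zip(residues, moduli):
        Mi = M // m
        inv = pow(Mi, -1, m)          # modular inverse of Mi mod m
        x = (x + r.astype(np.int64) * Mi * inv) % M
    return x

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Each residue channel multiplies independently modulo its modulus.
prod_residues = [(ra @ rb) % m
                 for ra, rb, m in zip(to_residues(A), to_residues(B), moduli)]
C = crt(prod_residues)                # equals A @ B when entries fit in M
```

    Because matrix multiplication commutes with reduction mod m, the reconstructed C matches the ordinary product whenever all results fit within the dynamic range M.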

  20. Integrated electrocoagulation-electrooxidation process for the treatment of soluble coffee effluent: Optimization of COD degradation and operation time analysis.

    PubMed

    Ibarra-Taquez, Harold N; GilPavas, Edison; Blatchley, Ernest R; Gómez-García, Miguel-Ángel; Dobrosz-Gómez, Izabela

    2017-09-15

    Soluble coffee production generates wastewater containing complex mixtures of organic macromolecules. In this work, a sequential Electrocoagulation-Electrooxidation (EC-EO) process, using aluminum and graphite electrodes, was proposed as an alternative way for the treatment of soluble coffee effluent. Process operational parameters were optimized, achieving total decolorization, as well as 74% and 63.5% of COD and TOC removal, respectively. The integrated EC-EO process yielded a highly oxidized (AOS = 1.629) and biocompatible (BOD₅/COD ≈ 0.6) effluent. The Molecular Weight Distribution (MWD) analysis showed that during the EC-EO process, EC effectively decomposed contaminants with molecular weight in the range of 10-30 kDa. In contrast, EO was quite efficient in mineralization of contaminants with molecular weight higher than 30 kDa. A kinetic analysis allowed determination of the time required to meet Colombian permissible discharge limits. Finally, a comprehensive operational cost analysis was performed. The integrated EC-EO process was demonstrated as an efficient alternative for the treatment of industrial effluents resulting from soluble coffee production. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Automatic-heuristic and executive-analytic processing during reasoning: Chronometric and dual-task considerations.

    PubMed

    De Neys, Wim

    2006-06-01

    Human reasoning has been shown to overly rely on intuitive, heuristic processing instead of a more demanding analytic inference process. Four experiments tested the central claim of current dual-process theories that analytic operations involve time-consuming executive processing whereas the heuristic system would operate automatically. Participants solved conjunction fallacy problems and indicative and deontic selection tasks. Experiment 1 established that making correct analytic inferences demanded more processing time than did making heuristic inferences. Experiment 2 showed that burdening the executive resources with an attention-demanding secondary task decreased correct, analytic responding and boosted the rate of conjunction fallacies and indicative matching card selections. Results were replicated in Experiments 3 and 4 with a different secondary-task procedure. Involvement of executive resources for the deontic selection task was less clear. Findings validate basic processing assumptions of the dual-process framework and complete the correlational research programme of K. E. Stanovich and R. F. West (2000).

  2. Fossil fuel furnace reactor

    DOEpatents

    Parkinson, William J.

    1987-01-01

    A fossil fuel furnace reactor is provided for simulating a continuous processing plant with a batch reactor. An internal reaction vessel contains a batch of shale oil, with the vessel having a relatively thin wall thickness for a heat transfer rate effective to simulate a process temperature history in the selected continuous processing plant. A heater jacket is disposed about the reactor vessel and defines a number of independent controllable temperature zones axially spaced along the reaction vessel. Each temperature zone can be energized to simulate a time-temperature history of process material through the continuous plant. A pressure vessel contains both the heater jacket and the reaction vessel at an operating pressure functionally selected to simulate the continuous processing plant. The process yield from the oil shale may be used as feedback information to software simulating operation of the continuous plant to provide operating parameters, i.e., temperature profiles, ambient atmosphere, operating pressure, material feed rates, etc., for simulation in the batch reactor.

  3. Progress in Operational Analysis of Launch Vehicles in Nonstationary Flight

    NASA Technical Reports Server (NTRS)

    James, George; Kaouk, Mo; Cao, Timothy

    2013-01-01

    This paper presents recent results in an ongoing effort to understand and develop techniques to process launch vehicle data, which is extremely challenging for modal parameter identification. The primary source of difficulty is due to the nonstationary nature of the situation. The system is changing, the environment is not steady, and there is an active control system operating. Hence, the primary tool for producing clean operational results (significant data lengths and data averaging) is not available to the user. This work reported herein uses a correlation-based two step operational modal analysis approach to process the relevant data sets for understanding and development of processes. A significant drawback for such processing of short time histories is a series of beating phenomena due to the inability to average out random modal excitations. A recursive correlation process coupled to a new convergence metric (designed to mitigate the beating phenomena) is the object of this study. It has been found in limited studies that this process creates clean modal frequency estimates but numerically alters the damping.
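    The correlation step that this approach builds on can be illustrated on synthetic data: under the NExT assumption, the autocorrelation of a randomly excited response decays like a free vibration, so a modal frequency can be read from its spectrum even though the excitation is never measured. Everything below (the resonator, damping, and record length) is an invented stand-in, not the paper's launch-vehicle data, and the recursive correlation and convergence metric themselves are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0      # sampling rate, Hz (assumed)
f_mode = 5.0    # true modal frequency, Hz (assumed)
zeta = 0.02     # damping ratio (assumed)

# Response of a lightly damped resonator to unmeasured white-noise
# excitation -- a stand-in for an operational response record.
theta = 2 * np.pi * f_mode / fs
r = np.exp(-zeta * theta)             # discrete pole radius
e = rng.standard_normal(100_000)
y = np.zeros_like(e)
for n in range(2, len(e)):
    y[n] = 2 * r * np.cos(theta) * y[n - 1] - r**2 * y[n - 2] + e[n]

# Autocorrelation via Wiener-Khinchin; it decays like a free vibration,
# so the modal frequency appears as the peak of its spectrum.
nlag = 2048
Y = np.fft.rfft(y, n=2 * len(y))
acf = np.fft.irfft(np.abs(Y) ** 2)[:nlag]
spec = np.abs(np.fft.rfft(acf * np.hanning(nlag)))
f_est = (np.argmax(spec[1:]) + 1) * fs / nlag   # skip the DC bin
```

    With short records, the random modulation of this correlation estimate is exactly the beating phenomenon the paper's convergence metric is designed to mitigate.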

  4. System for monitoring an industrial or biological process

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan W.; Vilim, Rick B.; White, Andrew M.

    1998-01-01

    A method and apparatus for monitoring and responding to conditions of an industrial process. Industrial process signals, such as repetitive manufacturing, testing and operational machine signals, are generated by a system. Sensor signals characteristic of the process are generated over a time length and compared to reference signals over the time length. The industrial signals are adjusted over the time length relative to the reference signals, the phase shift of the industrial signals is optimized to the reference signals and the resulting signals output for analysis by systems such as SPRT.
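    The SPRT-style analysis the patent refers to can be illustrated with a minimal Wald sequential probability ratio test on sensor residuals; the Gaussian hypotheses and error rates below are generic textbook choices, not the patent's implementation.

```python
import numpy as np

def sprt(residuals, m=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald SPRT: H0 residual mean 0 vs H1 mean m, Gaussian noise."""
    upper = np.log((1 - beta) / alpha)   # crossing -> declare degraded
    lower = np.log(beta / (1 - alpha))   # crossing -> declare nominal
    llr = 0.0
    for i, x in enumerate(residuals, start=1):
        llr += m * (x - m / 2) / sigma**2   # Gaussian log-likelihood ratio
        if llr >= upper:
            return "degraded", i
        if llr <= lower:
            return "nominal", i
    return "undecided", len(residuals)

rng = np.random.default_rng(1)
decision, n_used = sprt(rng.normal(1.5, 1.0, 500))  # clearly shifted data
```

    The appeal of the sequential test for monitoring is that it usually decides after only a handful of samples while keeping both error rates bounded by alpha and beta.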

  5. System for monitoring an industrial or biological process

    DOEpatents

    Gross, K.C.; Wegerich, S.W.; Vilim, R.B.; White, A.M.

    1998-06-30

    A method and apparatus are disclosed for monitoring and responding to conditions of an industrial process. Industrial process signals, such as repetitive manufacturing, testing and operational machine signals, are generated by a system. Sensor signals characteristic of the process are generated over a time length and compared to reference signals over the time length. The industrial signals are adjusted over the time length relative to the reference signals, the phase shift of the industrial signals is optimized to the reference signals and the resulting signals output for analysis by systems such as SPRT. 49 figs.

  6. Reductive stripping process for uranium recovery from organic extracts

    DOEpatents

    Hurst, F.J. Jr.

    1983-06-16

    In the reductive stripping of uranium from an organic extractant in a uranium recovery process, the use of phosphoric acid having a molarity in the range of 8 to 10 increases the efficiency of the reductive stripping and allows the strip step to operate with lower aqueous to organic recycle ratios and shorter retention time in the mixer stages. Under these operating conditions, less solvent is required in the process, and smaller, less expensive process equipment can be utilized. The high strength H₃PO₄ is available from the evaporator stage of the process.

  7. Reductive stripping process for uranium recovery from organic extracts

    DOEpatents

    Hurst, Jr., Fred J.

    1985-01-01

    In the reductive stripping of uranium from an organic extractant in a uranium recovery process, the use of phosphoric acid having a molarity in the range of 8 to 10 increases the efficiency of the reductive stripping and allows the strip step to operate with lower aqueous to organic recycle ratios and shorter retention time in the mixer stages. Under these operating conditions, less solvent is required in the process, and smaller, less expensive process equipment can be utilized. The high strength H₃PO₄ is available from the evaporator stage of the process.

  8. PLAT: An Automated Fault and Behavioural Anomaly Detection Tool for PLC Controlled Manufacturing Systems.

    PubMed

    Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun; Wang, Gi-Nam

    2016-01-01

    Operational faults and behavioural anomalies associated with PLC control processes take place often in a manufacturing system. Real time identification of these operational faults and behavioural anomalies is necessary in the manufacturing industry. In this paper, we present an automated tool, called PLC Log-Data Analysis Tool (PLAT) that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash table based indexing and searching scheme to satisfy those purposes. Our experiments show that PLAT is significantly fast, provides real time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel to the data logging system to identify operational faults and behavioural anomalies effectively.
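    PLAT's hash-table scheme can be caricatured in a few lines: hash the signal transitions observed in nominal log records into a set, then flag any transition in a new log that the nominal model never contained. The log format and state names below are invented for illustration and are not PLAT's actual data model.

```python
def build_nominal_model(logs):
    """Hash every (prev_state, next_state) transition seen in nominal runs."""
    model = set()
    for run in logs:
        for prev, nxt in zip(run, run[1:]):
            model.add((prev, nxt))
    return model

def detect_anomalies(model, run):
    """Flag transitions absent from the nominal model (index, prev, next)."""
    return [(i, prev, nxt)
            for i, (prev, nxt) in enumerate(zip(run, run[1:]))
            if (prev, nxt) not in model]

# Hypothetical PLC signal logs for a clamp-and-weld cell.
nominal = [["idle", "clamp", "weld", "release", "idle"],
           ["idle", "clamp", "weld", "weld", "release", "idle"]]
model = build_nominal_model(nominal)

# A run that skips welding is reported as a behavioural anomaly.
faults = detect_anomalies(model, ["idle", "clamp", "release", "idle"])
```

    Set membership tests are O(1) on average, which is the property that lets this style of check keep up with real-time log streams in a small memory footprint.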

  9. PLAT: An Automated Fault and Behavioural Anomaly Detection Tool for PLC Controlled Manufacturing Systems

    PubMed Central

    Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun

    2016-01-01

    Operational faults and behavioural anomalies associated with PLC control processes take place often in a manufacturing system. Real time identification of these operational faults and behavioural anomalies is necessary in the manufacturing industry. In this paper, we present an automated tool, called PLC Log-Data Analysis Tool (PLAT) that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash table based indexing and searching scheme to satisfy those purposes. Our experiments show that PLAT is significantly fast, provides real time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel to the data logging system to identify operational faults and behavioural anomalies effectively. PMID:27974882

  10. Autonomous calibration of single spin qubit operations

    NASA Astrophysics Data System (ADS)

    Frank, Florian; Unden, Thomas; Zoller, Jonathan; Said, Ressa S.; Calarco, Tommaso; Montangero, Simone; Naydenov, Boris; Jelezko, Fedor

    2017-12-01

    Fully autonomous precise control of qubits is crucial for quantum information processing, quantum communication, and quantum sensing applications. It requires minimal human intervention and relies on the ability to model, to predict, and to anticipate the quantum dynamics, as well as to precisely control and calibrate single qubit operations. Here, we demonstrate single qubit autonomous calibrations via closed-loop optimisations of electron spin quantum operations in diamond. The operations are examined by quantum state and process tomographic measurements at room temperature, and their performance against systematic errors is iteratively rectified by an optimal pulse engineering algorithm. We achieve an autonomously calibrated fidelity of up to 1.00 on a time scale of minutes for a spin population inversion and up to 0.98 on a time scale of hours for a single qubit π/2-rotation, within the experimental error of 2%. These results demonstrate the full potential of this approach for versatile quantum technologies.

  11. Integrating SAR and derived products into operational volcano monitoring and decision support systems

    NASA Astrophysics Data System (ADS)

    Meyer, F. J.; McAlpin, D. B.; Gong, W.; Ajadi, O.; Arko, S.; Webley, P. W.; Dehn, J.

    2015-02-01

    Remote sensing plays a critical role in operational volcano monitoring due to the often remote locations of volcanic systems and the large spatial extent of potential eruption precursor signals. Despite the all-weather capabilities of radar remote sensing and its high performance in monitoring of change, the contribution of radar data to operational monitoring activities has been limited in the past. This is largely due to: (1) the high costs associated with radar data; (2) traditionally slow data processing and delivery procedures; and (3) the limited temporal sampling provided by spaceborne radars. With this paper, we present new data processing and data integration techniques that mitigate some of these limitations and allow for a meaningful integration of radar data into operational volcano monitoring decision support systems. Specifically, we present fast data access procedures as well as new approaches to multi-track processing that improve near real-time data access and temporal sampling of volcanic systems with SAR data. We introduce phase-based (coherent) and amplitude-based (incoherent) change detection procedures that are able to extract dense time series of hazard information from these data. For a demonstration, we present an integration of our processing system with an operational volcano monitoring system that was developed for use by the Alaska Volcano Observatory (AVO). Through an application to a historic eruption, we show that the integration of SAR into systems such as AVO can significantly improve the ability of operational systems to detect eruptive precursors. Therefore, the developed technology is expected to improve operational hazard detection, alerting, and management capabilities.
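    Amplitude-based (incoherent) change detection of the kind mentioned above is commonly realised as a log-ratio test between two co-registered SAR amplitude images: the log-ratio sits near zero where backscatter is unchanged and grows large where the surface changed. The sketch below uses synthetic multilooked speckle and an arbitrary threshold; it is not the authors' processing chain.

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (64, 64)

# Synthetic pre-/post-eruption amplitude images: constant backscatter
# times multilooked speckle (gamma-distributed with mean 1).
speckle = lambda: rng.gamma(16.0, 1.0 / 16.0, shape)
pre = 100.0 * speckle()
post = 100.0 * speckle()
post[20:40, 20:40] *= 8.0             # simulated new surface deposit

# Incoherent change detection: threshold the absolute log-ratio.
log_ratio = np.log(post / pre)
changed = np.abs(log_ratio) > np.log(3.0)   # illustrative threshold
```

    The log-ratio form is preferred over a plain difference because SAR speckle is multiplicative; taking logarithms makes the no-change statistics roughly symmetric around zero.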

  12. The Four-Day Operational Week Experience at Florida Junior College (FJC): Report of Evaluation Process Study.

    ERIC Educational Resources Information Center

    Stuckman, Jeffrey A.

    An evaluation was conducted at Florida Junior College (FJC) of the four-day operational week implemented during May through August, 1981. Surveys were administered in May and July to day students, full-time teaching faculty, and full-time noninstructional staff to determine their level of satisfaction or dissatisfaction with the four-day week.…

  13. Serial and Parallel Processes in Working Memory after Practice

    ERIC Educational Resources Information Center

    Oberauer, Klaus; Bialkova, Svetlana

    2011-01-01

    Six young adults practiced for 36 sessions on a working-memory updating task in which 2 digits and 2 spatial positions were continuously updated. Participants either did 1 updating operation at a time, or attempted 1 numerical and 1 spatial operation at the same time. In contrast to previous research using the same paradigm with a single digit and…

  14. MIRIADS: miniature infrared imaging applications development system description and operation

    NASA Astrophysics Data System (ADS)

    Baxter, Christopher R.; Massie, Mark A.; McCarley, Paul L.; Couture, Michael E.

    2001-10-01

    A cooperative effort between the U.S. Air Force Research Laboratory, Nova Research, Inc., the Raytheon Infrared Operations (RIO) and Optics 1, Inc. has successfully produced a miniature infrared camera system that offers significant real-time signal and image processing capabilities by virtue of its modular design. This paper will present an operational overview of the system as well as results from initial testing of the 'Miniature Infrared Imaging Applications Development System' (MIRIADS) configured as a missile early-warning detection system. The MIRIADS device can operate virtually any infrared focal plane array (FPA) that currently exists. Programmable on-board logic applies user-defined processing functions to the real-time digital image data for a variety of functions. Daughterboards may be plugged onto the system to expand the digital and analog processing capabilities of the system. A unique full hemispherical infrared fisheye optical system designed and produced by Optics 1, Inc. is utilized by the MIRIADS in a missile warning application to demonstrate the flexibility of the overall system to be applied to a variety of current and future AFRL missions.

  15. Lean principles optimize on-time vascular surgery operating room starts and decrease resident work hours.

    PubMed

    Warner, Courtney J; Walsh, Daniel B; Horvath, Alexander J; Walsh, Teri R; Herrick, Daniel P; Prentiss, Steven J; Powell, Richard J

    2013-11-01

    Lean process improvement techniques are used in industry to improve efficiency and quality while controlling costs. These techniques are less commonly applied in health care. This study assessed the effectiveness of Lean principles on first case on-time operating room starts and quantified effects on resident work hours. Standard process improvement techniques (DMAIC methodology: define, measure, analyze, improve, control) were used to identify causes of delayed vascular surgery first case starts. Value stream maps and process flow diagrams were created. Process data were analyzed with Pareto and control charts. High-yield changes were identified and simulated in computer and live settings prior to implementation. The primary outcome measure was the proportion of on-time first case starts; secondary outcomes included hospital costs, resident rounding time, and work hours. Data were compared with existing benchmarks. Prior to implementation, 39% of first cases started on time. Process mapping identified late resident arrival in preoperative holding as a cause of delayed first case starts. Resident rounding process inefficiencies were identified and changed through the use of checklists, standardization, and elimination of nonvalue-added activity. Following implementation of process improvements, first case on-time starts improved to 71% at 6 weeks (P = .002). Improvement was sustained with an 86% on-time rate at 1 year (P < .001). Resident rounding time was reduced by 33% (from 70 to 47 minutes). At 9 weeks following implementation, these changes generated an opportunity cost potential of $12,582. Use of Lean principles allowed rapid identification and implementation of perioperative process changes that improved efficiency and resulted in significant cost savings. This improvement was sustained at 1 year. Downstream effects included improved resident efficiency with decreased work hours. Copyright © 2013 Society for Vascular Surgery. Published by Mosby, Inc. 

  16. Mission Engineering of a Rapid Cycle Spacecraft Logistics Fleet

    NASA Technical Reports Server (NTRS)

    Holladay, Jon; McClendon, Randy (Technical Monitor)

    2002-01-01

    The requirement for logistics re-supply of the International Space Station has provided a unique opportunity for engineering the implementation of NASA's first dedicated pressurized logistics carrier fleet. The NASA fleet is comprised of three Multi-Purpose Logistics Modules (MPLM) provided to NASA by the Italian Space Agency in return for operations time aboard the International Space Station. Marshall Space Flight Center was responsible for oversight of the hardware development from preliminary design through acceptance of the third flight unit, and currently manages the flight hardware sustaining engineering and mission engineering activities. The actual MPLM Mission began prior to NASA acceptance of the first flight unit in 1999 and will continue until the de-commission of the International Space Station that is planned for 20xx. Mission engineering of the MPLM program requires a broad focus on three distinct yet inter-related operations processes: pre-flight, flight operations, and post-flight turn-around. Within each primary area exist several complex subsets of distinct and inter-related activities. Pre-flight processing includes the evaluation of carrier hardware readiness for space flight. This includes integration of payload into the carrier, integration of the carrier into the launch vehicle, and integration of the carrier onto the orbital platform. Flight operations include the actual carrier operations during flight and any required real-time ground support. Post-flight processing includes de-integration of the carrier hardware from the launch vehicle, de-integration of the payload, and preparation for returning the carrier to pre-flight staging. Typical space operations are engineered around the requirements and objectives of a dedicated mission on a dedicated operational platform (i.e. Launch or Orbiting Vehicle). 
The MPLM, however, has expanded this envelope by requiring operations with both vehicles during flight as well as pre-launch and post-landing operations. These unique requirements combined with a success-oriented schedule of four flights within a ten-month period have provided numerous opportunities for understanding and improving operations processes. Furthermore, it has increased the knowledge base of future Payload Carrier and Launch Vehicle hardware and requirement developments. Discussion of the process flows and target areas for process improvement are provided in the subject paper. Special emphasis is also placed on supplying guidelines for hardware development. The combination of process knowledge and hardware development knowledge will provide a comprehensive overview for future vehicle developments as related to integration and transportation of payloads.

  17. Using task analysis to understand the Data System Operations Team

    NASA Technical Reports Server (NTRS)

    Holder, Barbara E.

    1994-01-01

    The Data Systems Operations Team (DSOT) currently monitors the Multimission Ground Data System (MGDS) at JPL. The MGDS currently supports five spacecraft and within the next five years, it will support ten spacecraft simultaneously. The ground processing element of the MGDS consists of a distributed UNIX-based system of over 40 nodes and 100 processes. The MGDS system provides operators with little or no information about the system's end-to-end processing status or end-to-end configuration. The lack of system visibility has become a critical issue in the daily operation of the MGDS. A task analysis was conducted to determine what kinds of tools were needed to provide DSOT with useful status information and to prioritize the tool development. The analysis provided the formality and structure needed to get the right information exchange between development and operations. How even a small task analysis can improve developer-operator communications is described, and the challenges associated with conducting a task analysis in a real-time mission operations environment are examined.

  18. Natural gas operations: considerations on process transients, design, and control.

    PubMed

    Manenti, Flavio

    2012-03-01

    This manuscript highlights tangible benefits deriving from the dynamic simulation and control of operational transients of natural gas processing plants. Relevant improvements in safety, controllability, operability, and flexibility are obtained not only within the traditional applications, i.e., plant start-up and shutdown, but also in fields that appear time-independent, such as feasibility studies of gas processing plant layouts and process design. Specifically, this paper exposes the main shortcomings of the myopic steady-state approach with respect to more detailed studies that take non-steady-state behaviors into consideration. A portion of a gas processing facility is considered as a case study. Process transient, design, and control solutions that appear more appealing from a steady-state standpoint are compared with the corresponding dynamic simulation solutions. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  19. The Bicycle Assembly Line Game

    ERIC Educational Resources Information Center

    Klotz, Dorothy

    2011-01-01

    "The Bicycle Assembly Line Game" is a team-based, in-class activity that helps students develop a basic understanding of continuously operating processes. Each team of 7-10 students selects one of seven prefigured bicycle assembly lines to operate. The lines are run in real-time, and the team that operates the line that yields the…

  20. Science--A Process Approach, Product Development Report No. 8.

    ERIC Educational Resources Information Center

    Sanderson, Barbara A.; Kratochvil, Daniel W.

    Science - A Process Approach, a science program for grades kindergarten through sixth, mainly focuses on scientific processes: observing, classifying, using numbers, measuring, space/time relationships, communicating, predicting, inferring, defining operationally, formulating hypotheses, interpreting data, controlling variables, and experimenting.…

  1. Mechanical Serial-Sectioning Data Assistant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poulter, Gregory A.; Madison, Jonathan D.

    Mechanical Serial-Sectioning Data Assistant (MECH-SSDA) is real-time data analytics software with a graphical user interface that: (1) tracks and visualizes material removal rates for mechanical serial-sectioning experiments using at least two height measurement methods; (2) tracks process time for specific segments of the serial-sectioning experiment; and (3) alerts the user to anomalies in expected removal rate, process time, or unanticipated operational pauses.

  2. Proposing a new iterative learning control algorithm based on a non-linear least square formulation - Minimising draw-in errors

    NASA Astrophysics Data System (ADS)

    Endelt, B.

    2017-09-01

    Forming operations are subject to external disturbances and changing operating conditions, e.g., a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually over time. Thus, an in-process feedback control scheme may not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm, which can learn from previously produced parts, i.e., a self-learning system which gradually reduces error based on historical process information. This paper proposes a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-square error between the current flange geometry and a reference geometry using a non-linear least square algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet'08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
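
    The abstract does not give the full update law, so as a minimal sketch of a part-to-part iterative learning update of this kind, here is a gradient-style variant with a hypothetical linear plant (the `plant` function, `gain`, and all numbers below are invented for illustration, not taken from the paper):

```python
def ilc_run(reference, u0, plant, gain=0.5, iterations=20):
    """Iterative learning control sketch: after each produced part,
    correct the process input from the measured flange-edge error,
    u_{k+1} = u_k + gain * (reference - y_k)."""
    u = list(u0)
    for _ in range(iterations):
        y = plant(u)  # measure the flange geometry of this part
        u = [ui + gain * (r - yi) for ui, r, yi in zip(u, reference, y)]
    return u

def plant(u):
    """Hypothetical linear 'forming process': gain 0.8 plus a fixed offset."""
    return [0.8 * ui + 0.1 for ui in u]
```

    For a linear plant this simple update drives the squared error between the produced and reference geometry toward zero over successive parts; the paper's non-linear least-square formulation generalizes the same idea to parameters that enter the process non-linearly.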

  3. Energy Consumption of Die Casting Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jerald Brevick; clark Mount-Campbell; Carroll Mobley

    2004-03-15

    Molten metal processing is inherently energy intensive, and roughly 25% of the cost of die-cast products can be traced to some form of energy consumption [1]. The obvious major energy requirements are for melting and holding molten alloy in preparation for casting. The proper selection and maintenance of melting and holding equipment are clearly important factors in minimizing energy consumption in die-casting operations [2]. In addition to energy consumption, furnace selection also influences metal loss due to oxidation, metal quality, and maintenance requirements. Other important factors influencing energy consumption in a die-casting facility include geographic location, alloy(s) cast, starting form of alloy (solid or liquid), overall process flow, casting yield, scrap rate, cycle times, number of shifts per day, days of operation per month, type and size of die-casting machine, related equipment (robots, trim presses), and downstream processing (machining, plating, assembly, etc.). Each of these factors also may influence the casting quality and productivity of a die-casting enterprise. In a die-casting enterprise, decisions regarding these issues are made frequently and are based on a large number of factors. Therefore, it is not surprising that energy consumption can vary significantly from one die-casting enterprise to the next, and within a single enterprise as a function of time.

  4. Predictable turn-around time for post tape-out flow

    NASA Astrophysics Data System (ADS)

    Endo, Toshikazu; Park, Minyoung; Ghosh, Pradiptya

    2012-03-01

    A typical post-tape-out flow data path at an IC fabrication facility has the following major components of software-based processing: Boolean operations before the application of resolution enhancement techniques (RET) and optical proximity correction (OPC); the RET and OPC step [etch retargeting, sub-resolution assist feature (SRAF) insertion, and OPC]; post-OPC/RET Boolean operations; and sometimes, in the same flow, simulation-based verification. There are two objectives that an IC fabrication tapeout flow manager wants to achieve with the flow: predictable completion time and fastest turn-around time (TAT). At times they may be competing. There have been studies in the literature modeling the turnaround time from historical data for runs with the same recipe and later using that model to derive the resource allocation for subsequent runs [3]. This approach is more feasible in predominantly simulation-dominated flows, but for edge-operation-dominated flows it may not be possible, especially if processing acceleration methods like pattern matching or hierarchical processing are involved. In this paper, we suggest an alternative method of providing a target turnaround time and managing the priority of jobs without doing any upfront resource modeling and planning. The methodology then systematically either meets the turnaround time target or lets the user know as soon as possible that it will not. This builds on top of the Calibre Cluster Management (CalCM) resource management work previously published [1][2]. The paper describes the initial demonstration of the concept.

  5. Integration of multiple research disciplines on the International Space Station

    NASA Technical Reports Server (NTRS)

    Penley, N. J.; Uri, J.; Sivils, T.; Bartoe, J. D.

    2000-01-01

    The International Space Station will provide an extremely high-quality, long-duration microgravity environment for the conduct of research. In addition, the ISS offers a platform for performing observations of Earth and Space from a high-inclination orbit, outside of the Earth's atmosphere. This unique environment and observational capability offers the opportunity for advancement in a diverse set of research fields. Many of these disciplines do not relate to one another, and present widely differing approaches to study, as well as different resource and operational requirements. Significant challenges exist to ensure the highest quality research return for each investigation. Requirements from different investigations must be identified, clarified, integrated and communicated to ISS personnel in a consistent manner. Resources such as power, crew time, etc. must be apportioned to allow the conduct of each investigation. Decisions affecting research must be made at the strategic level as well as at a very detailed execution level. The timing of the decisions can range from years before an investigation to real-time operations. The international nature of the Space Station program adds to the complexity. Each participating country must be assured that their interests are represented during the entire planning and operations process. A process for making decisions regarding research planning, operations, and real-time replanning is discussed. This process ensures adequate representation of all research investigators. It provides a means for timely decisions, and it includes a means to ensure that all ISS International Partners have their programmatic interests represented. © 2000 Published by Elsevier Science Ltd. All rights reserved.

  6. Acceleration of linear stationary iterative processes in multiprocessor computers. II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romm, Ya.E.

    1982-05-01

    For pt. I, see Kibernetika, vol. 18, no. 1, p. 47 (1982); Cybernetics, vol. 18, no. 1, p. 54 (1982). Considers a reduced system of linear algebraic equations x = Ax + b, where A = (a_ij) is a real n×n matrix and b is a real vector. Existence and uniqueness of the solution are assumed, i.e., det(E − A) ≠ 0, where E is the unit matrix. The linear iterative process converging to x is x^(k+1) = Fx^(k), k = 0, 1, 2, ..., where the operator F maps R^n into R^n. In considering implementation of the iterative process (IP) in a multiprocessor system, it is assumed that the number of processors is constant, and various values of this number are investigated; it is assumed, in addition, that the processors perform elementary binary arithmetic operations of addition and multiplication, and the time estimates include only the execution time of arithmetic operations. With any paralleling of an individual iteration, the execution time of the IP is proportional to the number of sequential steps k+1. The author sets the task of reducing the number of sequential steps in the IP so as to execute it in a time proportional to a value smaller than k+1. He also sets the goal of formulating a method of accelerated bit serial-parallel execution of each successive step of the IP with, in the modification sought, a reduced number of steps in a time comparable to the operation time of logical elements. 6 references.
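
    The stationary iteration x^(k+1) = Ax^(k) + b described above can be sketched in a few lines (a plain sequential version; the paper's accelerated bit serial-parallel scheme is not reproduced here):

```python
def iterate(A, b, x0=None, steps=50):
    """Run the stationary iteration x^(k+1) = A x^(k) + b on a small
    dense matrix; converges when the spectral radius of A is below 1."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(steps):
        x = [sum(A[i][j] * x[j] for j in range(n)) + b[i] for i in range(n)]
    return x
```

    Each of the k+1 sequential steps costs one matrix-vector product and depends on the previous step's result; the paper's goal is precisely to break this sequential dependence so the total time grows slower than k+1.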

  7. How does processing affect storage in working memory tasks? Evidence for both domain-general and domain-specific effects.

    PubMed

    Jarrold, Christopher; Tam, Helen; Baddeley, Alan D; Harvey, Caroline E

    2011-05-01

    Two studies that examine whether the forgetting caused by the processing demands of working memory tasks is domain-general or domain-specific are presented. In each, separate groups of adult participants were asked to carry out either verbal or nonverbal operations on exactly the same processing materials while maintaining verbal storage items. The imposition of verbal processing tended to produce greater forgetting even though verbal processing operations took no longer to complete than did nonverbal processing operations. However, nonverbal processing did cause forgetting relative to baseline control conditions, and evidence from the timing of individuals' processing responses suggests that individuals in both processing groups slowed their responses in order to "refresh" the memoranda. Taken together the data suggest that processing has a domain-general effect on working memory performance by impeding refreshment of memoranda but can also cause effects that appear domain-specific and that result from either blocking of rehearsal or interference.

  8. KSC ground operations planning for Space Station

    NASA Technical Reports Server (NTRS)

    Lyon, J. R.; Revesz, W., Jr.

    1993-01-01

    At the Kennedy Space Center (KSC) in Florida, processing facilities are being built and activated to support the processing, checkout, and launch of Space Station elements. The generic capability of these facilities will be utilized to support resupply missions for payloads, life support services, and propellants for the 30-year life of the program. Special Ground Support Equipment (GSE) is being designed for Space Station hardware special handling requirements, and a Test, Checkout, and Monitoring System (TCMS) is under development to verify that the flight elements are ready for launch. The facilities and equipment used at KSC, along with the testing required to accomplish the mission, are described in detail to provide an understanding of the complexity of operations at the launch site. Assessments of hardware processing flows through KSC are being conducted to minimize the processing flow times for each hardware element. Baseline operations plans and the changes made to improve operations and reduce costs are described, recognizing that efficient ground operations are a major key to success of the Space Station.

  9. Realization of process improvement at a diagnostic radiology department with aid of simulation modeling.

    PubMed

    Oh, Hong-Choon; Toh, Hong-Guan; Giap Cheong, Eddy Seng

    2011-11-01

    Using the classical process improvement framework of Plan-Do-Study-Act (PDSA), the diagnostic radiology department of a tertiary hospital identified several patient cycle time reduction strategies. Experimentation of these strategies (which included procurement of new machines, hiring of new staff, redesign of queue system, etc.) through pilot scale implementation was impractical because it might incur substantial expenditure or be operationally disruptive. With this in mind, simulation modeling was used to test these strategies via performance of "what if" analyses. Using the output generated by the simulation model, the team was able to identify a cost-free cycle time reduction strategy, which subsequently led to a reduction of patient cycle time and achievement of a management-defined performance target. As healthcare professionals work continually to improve healthcare operational efficiency in response to rising healthcare costs and patient expectation, simulation modeling offers an effective scientific framework that can complement established process improvement framework like PDSA to realize healthcare process enhancement. © 2011 National Association for Healthcare Quality.

  10. Scheduling job shop - A case study

    NASA Astrophysics Data System (ADS)

    Abas, M.; Abbas, A.; Khan, W. A.

    2016-08-01

    Scheduling in a job shop is important for efficient utilization of machines in the manufacturing industry. A number of algorithms are available for scheduling of jobs, which depend on the machine tools, indirect consumables, and the jobs to be processed. In this paper a case study is presented for scheduling of jobs when parts are processed on available machines. Through a time and motion study, setup time and operation time are measured as total processing time for a variety of products having different manufacturing processes. Based on due dates, different levels of priority are assigned to the jobs, and the jobs are scheduled on the basis of priority. In view of the measured processing times, the processing times of some new jobs are estimated, and an algorithm for efficient utilization of the available machines is proposed and validated.
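
    The abstract does not state which priority rule is used; a common baseline for due-date-driven priorities on a single machine is the earliest-due-date (EDD) rule, sketched here with hypothetical job data:

```python
def schedule_edd(jobs):
    """Earliest-due-date priority scheduling on one machine (sketch).
    jobs: list of (name, processing_time, due_date), where processing
    time is the measured setup time plus operation time.
    Returns (name, completion_time, tardiness) in scheduled order."""
    ordered = sorted(jobs, key=lambda j: j[2])  # earlier due date = higher priority
    t, plan = 0, []
    for name, p, due in ordered:
        t += p
        plan.append((name, t, max(0, t - due)))
    return plan
```

    EDD minimizes the maximum lateness on a single machine; a real job shop adds routing across multiple machines, which is where the heavier algorithms surveyed in the paper come in.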

  11. Parareal algorithms with local time-integrators for time fractional differential equations

    NASA Astrophysics Data System (ADS)

    Wu, Shu-Lin; Zhou, Tao

    2018-04-01

    It is challenging to design parareal algorithms for time-fractional differential equations due to the history effect of the fractional operator: a direct extension of the classical parareal method to such equations leads to unbalanced computational time in each process. In this work, we present an efficient parareal iteration scheme that overcomes this issue by adopting two recently developed local time-integrators for time-fractional operators. In both approaches, one introduces auxiliary variables to localize the fractional operator. To this end, we propose a new strategy to perform the coarse grid correction so that the auxiliary variables and the solution variable are corrected separately in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
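
    For orientation, the classical parareal correction that the paper adapts, U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k), can be sketched for a scalar model problem y' = λy (an ordinary, not fractional, derivative, so no auxiliary variables are needed; all parameter values are illustrative):

```python
import math

lam = -1.0        # model problem y' = lam * y, y(0) = 1
T, N = 1.0, 10    # time horizon and number of coarse intervals
dT = T / N

def coarse(y, dt):            # cheap propagator G: one explicit Euler step
    return y * (1 + lam * dt)

def fine(y, dt, m=20):        # accurate propagator F: m small Euler steps
    h = dt / m
    for _ in range(m):
        y = y * (1 + lam * h)
    return y

# initial coarse sweep over the whole horizon
U = [1.0]
for n in range(N):
    U.append(coarse(U[n], dT))

# parareal iterations
for k in range(5):
    F = [fine(U[n], dT) for n in range(N)]        # independent per interval
    G_old = [coarse(U[n], dT) for n in range(N)]
    newU = [1.0]
    for n in range(N):
        newU.append(coarse(newU[n], dT) + F[n] - G_old[n])
    U = newU

# sequential fine solution for comparison
ref = 1.0
for n in range(N):
    ref = fine(ref, dT)
```

    The fine evaluations in each iteration are independent across intervals, which is where the parallel speed-up comes from; the fractional case is harder precisely because the history term couples the intervals, motivating the localized auxiliary-variable formulations discussed above.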

  12. Automated Interactive Simulation Model (AISIM) VAX Version 5.0 Training Manual.

    DTIC Science & Technology

    1987-05-29

    1.3.2 ENTITIES OF THE MODELED SYSTEM. 1.3.2.1 The Process Entity. A Process is used to represent the operations, decisions, actions or activities that can be decomposed. ... 1.3.2.4 The Action Entity. An Action represents an action, activity, decision, etc. that consumes time; the entity is automatically created by the system when an ACTION Primitive is placed, and is included in Process definitions to indicate the time a certain action (or process, decision) consumes.

  13. 40 CFR Table 3 to Subpart Ddddd of... - Operating Limits for Boilers and Process Heaters With Mercury Emission Limits and Boilers and...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... minimum pressure drop and liquid flow-rate at or above the operating levels established during the... leak detection system alarm does not sound more than 5 percent of the operating time during a 6-month... control Maintain the minimum sorbent or carbon injection rate at or above the operating levels established...

  14. 40 CFR Table 3 to Subpart Ddddd of... - Operating Limits for Boilers and Process Heaters With Mercury Emission Limits and Boilers and...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... minimum pressure drop and liquid flow-rate at or above the operating levels established during the... leak detection system alarm does not sound more than 5 percent of the operating time during a 6-month... control Maintain the minimum sorbent or carbon injection rate at or above the operating levels established...

  15. Principles of Temporal Processing Across the Cortical Hierarchy.

    PubMed

    Himberger, Kevin D; Chien, Hsiang-Yun; Honey, Christopher J

    2018-05-02

    The world is richly structured on multiple spatiotemporal scales. In order to represent spatial structure, many machine-learning models repeat a set of basic operations at each layer of a hierarchical architecture. These iterated spatial operations - including pooling, normalization and pattern completion - enable these systems to recognize and predict spatial structure while remaining robust to changes in the spatial scale, contrast and noisiness of the input signal. Because our brains also process temporal information that is rich and occurs across multiple time scales, might the brain employ an analogous set of operations for temporal information processing? Here we define a candidate set of temporal operations, and we review evidence that they are implemented in the mammalian cerebral cortex in a hierarchical manner. We conclude that multiple consecutive stages of cortical processing can be understood to perform temporal pooling, temporal normalization and temporal pattern completion. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
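
    As a toy illustration, the first two candidate operations can be written as windowed transforms of a discrete signal (these causal sliding-window forms are illustrative stand-ins, not the cortical mechanisms themselves):

```python
def temporal_pool(x, w=3):
    """Causal temporal pooling: average over the last w samples."""
    out = []
    for t in range(len(x)):
        window = x[max(0, t - w + 1):t + 1]
        out.append(sum(window) / len(window))
    return out

def temporal_normalize(x, w=5, eps=1e-9):
    """Causal temporal normalization: divide each sample by the mean
    magnitude of the recent past (a simple temporal gain control)."""
    out = []
    for t in range(len(x)):
        window = x[max(0, t - w + 1):t + 1]
        scale = sum(abs(v) for v in window) / len(window)
        out.append(x[t] / (scale + eps))
    return out
```

    Pooling makes the output robust to fast fluctuations in the input; normalization makes it robust to overall changes in signal contrast, mirroring the spatial operations cited above.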

  16. The traverse planning process for D-RATS 2010

    NASA Astrophysics Data System (ADS)

    Hörz, Friedrich; Lofgren, Gary E.; Gruener, John E.; Eppler, Dean B.; Skinner, James A.; Fortezzo, Corey M.; Graf, Jodi S.; Bluethmann, William J.; Seibert, Marc A.; Bell, Ernest R.

    2013-10-01

    This report describes the traverse planning process for the Desert Research and Technology Studies (D-RATS) 2010 field simulation of a conceptual 14-day planetary mission. This activity took place between August 23 and September 17, 2010 in the San Francisco Volcanic Field, Arizona. It focused on the utilization of two pressurized rovers and a ground-based communication system, as well as on the development of mission operation concepts for long duration, dual-rover missions. The early planning process began some 12 months prior to the actual field tests and defined the first-order engineering, flight operations, and science objectives. The detailed implementation and refinement of these objectives took place over the ensuing 10 months, resulting in a large number of technical and operational constraints that affected the actual traverse route or the cumulative Extravehicular Activity (EVA) time available for detailed field observations. The science planning proceeded from the generation of photogeologic maps of the test area, to the establishment of prioritized science objectives and associated candidate sites for detailed field exploration. The combination of operational constraints and science objectives resulted in the final design of traverse routes and time lines for each of the 24 traverses needed to support 12 field days by two rovers. Examples of daily traverses are given that illustrate that the design of long duration, long distance planetary traverses is a highly interdisciplinary and time-consuming collaboration between diverse engineers, flight operations personnel, human factors interests, and planetary scientists.

  17. A novel spatter detection algorithm based on typical cellular neural network operations for laser beam welding processes

    NASA Astrophysics Data System (ADS)

    Nicolosi, L.; Abt, F.; Blug, A.; Heider, A.; Tetzlaff, R.; Höfler, H.

    2012-01-01

    Real-time monitoring of laser beam welding (LBW) has increasingly gained importance in several manufacturing processes ranging from automobile production to precision mechanics. In this work, a novel algorithm for the real-time detection of spatters was implemented in a camera based on cellular neural networks. The camera can be connected to the optics of commercially available laser machines, enabling real-time monitoring of LBW processes at rates up to 15 kHz. Such high monitoring rates allow the integration of other image evaluation tasks, such as detection of the full penetration hole for real-time control of process parameters.

  18. A key to success: optimizing the planning process

    NASA Astrophysics Data System (ADS)

    Turk, Huseyin; Karakaya, Kamil

    2014-05-01

    By adopting the NATO Strategic Concept document in 2010, some important changes in the perception of threat and the management of crises were introduced. This new concept, named ''Comprehensive Approach'', includes the precautions of pre-crisis management, the applications of crisis-duration management, and the reconstruction phase of post-intervention management. NATO will be interested not only in political and military options, but also in the social, economic, and informational aspects of a crisis, and will take part in all phases of a conflict. Conflicts that occur outside the borders of NATO nations, and terrorism, are perceived as threat sources for peace and stability. In addition to conventional threats, cyber attacks that threaten network-supported communication systems, capabilities that deny access to space (which will be used in many fields of life), and electronic warfare capabilities that can affect us negatively have been added to the threat list as new threats. When a military option is under consideration, a harder planning phase awaits NATO's decision makers, who struggle to keep peace and security. The operation planning process based on the comprehensive approach contains these steps: situational awareness of the battlefield, evaluation of military intervention options, orientation, development of an operation plan, review of the plan, and transition phases [1]. To be successful in a theater that is always changing with technological advances, accurate and timely planning must be on the table, so the time spent on planning can be seen as one of the biggest problems. In addition, sustaining situational awareness (which is important for the whole operation planning process), technical command and control hitches, the human factor, and the inability to determine the opponent's center of gravity in asymmetrical threat situations can be described as some of the difficulties in operation planning. 
In this study, a possible air operation planning process is analyzed according to a comprehensive approach. The difficulties of planning are identified. Consequently, to optimize the decision-making process of an air operation, a planning process is identified within a virtual command and control structure.

  19. Time lens assisted photonic sampling extraction

    NASA Astrophysics Data System (ADS)

    Petrillo, Keith Gordon

    Telecommunication bandwidth demands have dramatically increased in recent years due to Internet-based services like cloud computing and storage, large file sharing, and video streaming. Additionally, sensing systems such as wideband radar and magnetic resonance imaging systems, and the complex modulation formats used to handle large data transfers in telecommunications, require high-speed, high-resolution analog-to-digital converters (ADCs) to interpret the data. Accurately processing and acquiring information at next-generation data rates from these systems has become challenging for electronic systems. The largest contributors to the electronic bottleneck are bandwidth and timing jitter, which limit speed and reduce accuracy. Optical systems have been shown to have at least three orders of magnitude more bandwidth capability, and state-of-the-art mode-locked lasers have reduced timing jitter to thousands of attoseconds. Such features have encouraged processing signals without the use of electronics, or using photonics to assist electronics. All-optical signal processing has allowed the processing of telecommunication line rates up to 1.28 Tb/s and high-resolution analog-to-digital conversion in the tens of gigahertz. The major drawback of these optical systems is the high cost of the components. The application of all-optical processing techniques such as a time lens and chirped processing can greatly reduce the bandwidth and cost requirements of optical serial-to-parallel converters and push photonically assisted ADCs into the hundreds of gigahertz. In this dissertation, the building blocks of a high-speed photonically assisted ADC are demonstrated, each providing benefits to its own respective application. A serial-to-parallel converter using a continuously operating time lens as an optical Fourier processor is demonstrated to fully convert a 160-Gb/s optical time division multiplexed signal to 16 10-Gb/s channels with error-free operation. 
Using chirped processing, an optical sample-and-hold concept is demonstrated and analyzed as a resolution improvement to existing photonically assisted ADCs. Simulations indicate that applying a continuously operating time lens to a photonically assisted sampling system can improve photonically sampled systems by an order of magnitude while acquiring properties similar to an optical sample-and-hold system.

  20. Local active information storage as a tool to understand distributed neural information processing

    PubMed Central

    Wibral, Michael; Lizier, Joseph T.; Vögler, Sebastian; Priesemann, Viola; Galuske, Ralf

    2013-01-01

    Every act of information processing can in principle be decomposed into the component operations of information storage, transfer, and modification. Yet, while this is easily done for today's digital computers, the application of these concepts to neural information processing was hampered by the lack of proper mathematical definitions of these operations on information. Recently, definitions were given for the dynamics of these information processing operations on a local scale in space and time in a distributed system, and the specific concept of local active information storage was successfully applied to the analysis and optimization of artificial neural systems. However, no attempt to measure the space-time dynamics of local active information storage in neural data has been made to date. Here we measure local active information storage on a local scale in time and space in voltage sensitive dye imaging data from area 18 of the cat. We show that storage reflects neural properties such as stimulus preferences and surprise upon unexpected stimulus change, and in area 18 reflects the abstract concept of an ongoing stimulus despite the locally random nature of this stimulus. We suggest that LAIS will be a useful quantity to test theories of cortical function, such as predictive coding. PMID:24501593
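
    Local active information storage at time t is the pointwise mutual information between a sample and its length-k past, a(t) = log2 p(x_t | x_{t-k..t-1}) - log2 p(x_t); a minimal plug-in estimator for a discrete series might look like this (a conceptual sketch only, not the estimators used on the imaging data in the paper):

```python
from collections import Counter
from math import log2

def local_active_info_storage(series, k=2):
    """Plug-in estimate of local active information storage:
    a(t) = log2 p(x_t | past_k) - log2 p(x_t), in bits per sample."""
    joint, past_c, next_c = Counter(), Counter(), Counter()
    samples = []
    for t in range(k, len(series)):
        past, nxt = tuple(series[t - k:t]), series[t]
        joint[(past, nxt)] += 1
        past_c[past] += 1
        next_c[nxt] += 1
        samples.append((past, nxt))
    n = len(samples)
    return [log2((joint[(past, nxt)] / past_c[past]) / (next_c[nxt] / n))
            for past, nxt in samples]
```

    For a perfectly predictable alternating series, every sample stores exactly 1 bit, since the past fully determines an otherwise 50/50 outcome; unexpected samples yield negative local values, which is how storage can register surprise at a stimulus change.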

  1. Properties of the internal clock.

    PubMed

    Church, R M

    1984-01-01

    Evidence has been cited for the following properties of the parts of the psychological process used for timing intervals: The pacemaker has a mean rate that can be varied by drugs, diet, and stress. The switch has a latency to operate and it can be operated in various modes, such as run, stop, and reset. The accumulator times up, in absolute, arithmetic units. Working memory can be reset on command or, after lesions have been created in the fimbria fornix, when there is a gap in a signal. The transformation from the accumulator to reference memory is done with a multiplicative constant that is affected by drugs, lesions, and individual differences. The comparator uses a ratio between the value in the accumulator (or working memory) and reference memory. Finally, there must be multiple switch-accumulator modules to handle simultaneous temporal processing; and the psychological timing process may be used on some occasions and not on others.
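
    The components listed above can be caricatured as an idealized, noise-free pacemaker-accumulator model; the rate, latency, and threshold values below are arbitrary illustrative numbers, not estimates from the timing literature:

```python
def accumulate(duration, rate=10.0, latency=0.05):
    """Accumulator reading for a signal of the given duration: pulses
    arrive at a fixed pacemaker rate once the switch, after its
    latency, begins operating in 'run' mode."""
    return rate * max(0.0, duration - latency)

def remember(count, k=1.0):
    """Transfer from accumulator to reference memory with a
    multiplicative constant k (the constant affected by drugs,
    lesions, and individual differences)."""
    return k * count

def ratio_compare(current, reference, threshold=0.1):
    """Comparator: respond when the accumulator-to-memory ratio is
    within a threshold of 1 (a ratio rule, not a difference rule)."""
    return abs(current / reference - 1.0) <= threshold
```

    The ratio rule in the comparator is what gives the model its scalar (Weber-law) timing property: tolerance scales with the remembered interval rather than being a fixed absolute difference.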

  2. Testing and checkout experiences in the National Transonic Facility since becoming operational

    NASA Technical Reports Server (NTRS)

    Bruce, W. E., Jr.; Gloss, B. B.; Mckinney, L. W.

    1988-01-01

    The U.S. National Transonic Facility, constructed by NASA to meet the national needs for high Reynolds number testing, has been operational in a checkout and test mode since the operational readiness review (ORR) in late 1984. During this time, there have been problems centered around the effect of large temperature excursions on the mechanical movement of large components, the reliable performance of instrumentation systems, and an unexpected moisture problem with dry insulation. The more significant efforts since the ORR are reviewed, and NTF status concerning hardware, instrumentation and process control systems, operating constraints imposed by the cryogenic environment, and data quality is summarized.

  3. Characteristics of process oils from HTI coal/plastics co-liquefaction runs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robbins, G.A.; Brandes, S.D.; Winschel, R.A.

    1995-12-31

    The objective of this project is to provide timely analytical support to DOE's liquefaction development effort. Specific objectives of the work reported here are presented. During a few operating periods of Run POC-2, HTI co-liquefied mixed plastics with coal, and tire rubber with coal. Although steady-state operation was not achieved during these brief test periods, the results indicated that a liquefaction plant could operate with these waste materials as feedstocks. CONSOL analyzed 65 process stream samples from coal-only and coal/waste portions of the run. Some results obtained from characterization of samples from the Run POC-2 coal/plastics operation are presented.

  4. 9 CFR 318.307 - Record review and maintenance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... temperature/time recording devices shall be identified by production date, container code, processing vessel... made available to Program employees for review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and recordkeeping systems shall be designed and operated in a manner that will...

  5. 9 CFR 318.307 - Record review and maintenance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... temperature/time recording devices shall be identified by production date, container code, processing vessel... made available to Program employees for review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and recordkeeping systems shall be designed and operated in a manner that will...

  6. 9 CFR 318.307 - Record review and maintenance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... temperature/time recording devices shall be identified by production date, container code, processing vessel... made available to Program employees for review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and recordkeeping systems shall be designed and operated in a manner that will...

  7. 9 CFR 318.307 - Record review and maintenance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... temperature/time recording devices shall be identified by production date, container code, processing vessel... made available to Program employees for review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and recordkeeping systems shall be designed and operated in a manner that will...

  8. 9 CFR 318.307 - Record review and maintenance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... temperature/time recording devices shall be identified by production date, container code, processing vessel... made available to Program employees for review. (b) Automated process monitoring and recordkeeping. Automated process monitoring and recordkeeping systems shall be designed and operated in a manner that will...

  9. The embedded operating system project

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.

    1984-01-01

    This progress report describes research towards the design and construction of embedded operating systems for real-time advanced aerospace applications. The applications concerned require reliable operating system support that must accommodate networks of computers. The report addresses the problems of constructing such operating systems, the communications media, reconfiguration, consistency and recovery in a distributed system, and the issues of realtime processing. A discussion is included on suitable theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based systems. In particular, this report addresses: atomic actions, fault tolerance, operating system structure, program development, reliability and availability, and networking issues. This document reports the status of various experiments designed and conducted to investigate embedded operating system design issues.

  10. Operations management tools to be applied for textile

    NASA Astrophysics Data System (ADS)

    Maralcan, A.; Ilhan, I.

    2017-10-01

    In this paper, basic concepts of process analysis such as flow time, inventory, bottleneck, labour cost and utilization are illustrated first. The effect of a bottleneck on the results of a business is especially emphasized. In the next section, tools for productivity measurement are introduced and exemplified: the KPI (Key Performance Indicators) Tree, OEE (Overall Equipment Effectiveness) and Takt Time. A KPI tree is a diagram on which we can visualize all the variables of an operation that drive financial results through cost and profit. OEE is a tool to measure the potential extra capacity of a piece of equipment or an employee. Takt time is a tool to determine the process flow rate according to customer demand. The KPI tree is studied through the whole process, while OEE is exemplified for a stenter frame machine, which is the most important machine (and usually the bottleneck) and the most expensive investment in a finishing plant. Takt time is exemplified for the quality control department. Finally, quality tools, six sigma, control charts and jidoka are introduced. Six sigma is a tool to measure process capability and thereby the probability of a defect. The control chart is a powerful tool to monitor the process. The idea of jidoka (detect, stop and alert) is about alerting people that there is a problem in the process.
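
    The two productivity measures have simple standard formulas: OEE is the product of the availability, performance and quality rates, and takt time is available working time divided by customer demand. A short sketch, with hypothetical stenter-frame numbers rather than the paper's data:

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness: the product of the three rates."""
    return availability * performance * quality

def takt_time(available_time_s, demand_units):
    """Takt time: available working time divided by customer demand."""
    return available_time_s / demand_units

# Hypothetical stenter frame shift: machine up 7.5 h of an 8 h shift,
# running at 90% of rated speed, with 98% of output first quality.
print(round(oee(7.5 / 8.0, 0.90, 0.98), 3))   # 0.827
# 27,000 s of working time to meet demand for 900 m of fabric:
print(takt_time(27_000, 900))                 # 30.0 s per metre
```

    Any process step slower than the takt time is, by definition, a bottleneck relative to demand, which ties these two tools back to the bottleneck analysis emphasized above.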

  11. Comparative study on the removal of COD from POME by electrocoagulation and electro-Fenton methods: Process optimization

    NASA Astrophysics Data System (ADS)

    Chairunnisak, A.; Arifin, B.; Sofyan, H.; Lubis, M. R.; Darmadi

    2018-03-01

    This research focuses on the treatment of Chemical Oxygen Demand (COD) in palm oil mill effluent by the electrocoagulation and electro-Fenton methods. Initially, the aqueous solution precipitates under acid conditions at a pH of about two. This study focuses on the degradation of palm oil mill effluent by Fe electrodes in a simple batch reactor. The work was conducted using different parameters: voltage, NaCl electrolyte concentration, H2O2 volume and operating time. The resulting data were processed using the response surface method coupled with a Box-Behnken design. The electrocoagulation method achieved an optimum COD reduction of 94.53% at an operating time of 39.28 minutes, 20 volts, and no added electrolyte. For the electro-Fenton process, experiments showed that a voltage of 15.78 volts, an electrolyte concentration of 0.06 M and an H2O2 volume of 14.79 ml over 35.92 minutes yielded 99.56% degradation. The results indicate that the electro-Fenton process was more effective at degrading COD in palm oil mill effluent than the electrocoagulation process.
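
    As background on the design used here, a Box-Behnken design places runs at the midpoints of the edges of the factor cube (every two-factor combination of coded levels -1 and +1, with all remaining factors held at the centre) plus replicated centre points. A sketch of the coded design-point generation; the choice of four factors in the example merely mirrors the four parameters varied in this study:

```python
from itertools import combinations, product

def box_behnken(n_factors, center_runs=3):
    """Generate coded (-1, 0, +1) points of a Box-Behnken design:
    all +-1 combinations for each pair of factors (other factors at the
    centre), plus replicated centre points."""
    points = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * n_factors
            row[i], row[j] = a, b
            points.append(tuple(row))
    points += [(0,) * n_factors] * center_runs
    return points

# Four factors (e.g. voltage, NaCl, H2O2 volume, time):
design = box_behnken(4)
print(len(design))   # C(4,2)*4 + 3 = 27 runs
```

    The coded levels are then mapped to real factor ranges, and a quadratic response surface fitted to the measured COD removals yields the optimum reported in the abstract.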

  12. A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer

    NASA Astrophysics Data System (ADS)

    Luckman, Adrian J.; Allinson, Nigel M.

    1989-03-01

    A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system comprises real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen, and program control is directed mostly by pop-up menus.

  13. Enhanced round robin CPU scheduling with burst time based time quantum

    NASA Astrophysics Data System (ADS)

    Indusree, J. R.; Prabadevi, B.

    2017-11-01

    Process scheduling is a very important function of an operating system. The best-known process-scheduling algorithms are the First Come First Serve (FCFS) algorithm, the Round Robin (RR) algorithm, the Priority scheduling algorithm and the Shortest Job First (SJF) algorithm. Compared to its peers, the Round Robin (RR) algorithm has the advantage that it gives a fair share of the CPU to the processes that are already in the ready queue. The effectiveness of the RR algorithm depends greatly on the chosen time quantum value. In this paper, we propose an enhanced algorithm called Enhanced Round Robin with Burst-time based Time Quantum (ERRBTQ), which calculates the time quantum from the burst times of the processes already in the ready queue. The experimental results and analysis of the ERRBTQ algorithm clearly indicate improved performance compared with conventional RR and its variants.
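
    The idea of deriving the quantum from the ready queue's burst times can be sketched as follows. The exact ERRBTQ formula is defined in the paper; this hedged stand-in simply recomputes the quantum each pass as the mean of the remaining burst times, assuming all processes arrive at t = 0:

```python
from collections import deque

def round_robin_dynamic_quantum(bursts):
    """Round robin in which the time quantum is recomputed on every pass
    over the ready queue as the mean of the remaining burst times
    (a stand-in for the paper's ERRBTQ rule). Returns each process's
    completion time; all processes are assumed to arrive at t = 0."""
    remaining = dict(enumerate(bursts))   # pid -> remaining burst time
    queue = deque(remaining)
    t, finish = 0, {}
    while queue:
        quantum = max(1, round(sum(remaining.values()) / len(remaining)))
        for _ in range(len(queue)):       # one pass over the ready queue
            pid = queue.popleft()
            run = min(quantum, remaining[pid])
            t += run
            remaining[pid] -= run
            if remaining[pid] == 0:
                finish[pid] = t           # process done
                del remaining[pid]
            else:
                queue.append(pid)         # back to the end of the queue
    return finish

# process 0 finishes at t=5, process 2 at t=25, process 1 at t=30
print(round_robin_dynamic_quantum([5, 15, 10]))
```

    Because the quantum tracks the remaining workload, short processes finish in the first pass while context switches stay low, which is the effect the paper measures against fixed-quantum RR.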

  14. InSAR data for monitoring land subsidence: time to think big

    NASA Astrophysics Data System (ADS)

    Ferretti, A.; Colombo, D.; Fumagalli, A.; Novali, F.; Rucci, A.

    2015-11-01

    Satellite interferometric synthetic aperture radar (InSAR) data have proven effective and valuable in the analysis of urban subsidence phenomena based on multi-temporal radar images. Results obtained by processing data acquired by different radar sensors have shown the potential of InSAR and highlighted the key points for an operational use of this technology, namely: (1) regular acquisition over large areas of interferometric data stacks; (2) use of advanced processing algorithms, capable of estimating and removing atmospheric disturbances; (3) access to significant processing power for a regular update of the information over large areas. In this paper, we show how the operational potential of InSAR has been realized thanks to the recent advances in InSAR processing algorithms, the advent of cloud computing and the launch of new satellite platforms specifically designed for InSAR analyses (e.g. Sentinel-1A operated by ESA and ALOS-2 operated by JAXA). The processing of thousands of SAR scenes to cover an entire nation has been performed successfully in Italy in a project financed by the Italian Ministry of the Environment. The challenge for the future is to pass from the historical analysis of SAR scenes already acquired in digital archives to a near real-time monitoring program where up-to-date deformation data are routinely provided to final users and decision makers.

  15. Development of a Patient-Based Model for Estimating Operative Times for Robot-Assisted Radical Prostatectomy.

    PubMed

    Huben, Neil; Hussein, Ahmed; May, Paul; Whittum, Michelle; Kraswowki, Collin; Ahmed, Youssef; Jing, Zhe; Khan, Hijab; Kim, Hyung; Schwaab, Thomas; Underwood III, Willie; Kauffman, Eric; Mohler, James L; Guru, Khurshid A

    2018-04-10

    To develop a methodology for predicting operative times for robot-assisted radical prostatectomy (RARP) using preoperative patient, disease, procedural and surgeon variables to facilitate operating room (OR) scheduling. The model included preoperative metrics: BMI, ASA score, clinical stage, National Comprehensive Cancer Network (NCCN) risk, prostate weight, nerve-sparing status, extent and laterality of lymph node dissection, and operating surgeon (6 surgeons were included in the study). A binary decision tree was fit using a conditional inference tree method to predict operative times. The variables most associated with operative time were determined using permutation tests. The data were split at the value of the variable that resulted in the largest difference in mean surgical time across the split. This process was repeated recursively on the resultant data. A total of 1709 RARPs were included. The variable most strongly associated with operative time was the surgeon (surgeons 2 and 4 were 102 minutes shorter than surgeons 1, 3, 5, and 6, p<0.001). Among surgeons 2 and 4, BMI had the strongest association with surgical time (p<0.001). Among patients operated on by surgeons 1, 3, 5 and 6, RARP time was again most strongly associated with the surgeon performing RARP. Surgeons 1, 3, and 6 were on average 76 minutes faster than surgeon 5 (p<0.001). The regression tree output, in the form of box plots, showed operative time medians and ranges according to patient, disease, procedural and surgeon metrics. We developed a methodology that can predict operative times for RARP based on patient, disease and surgeon variables. This methodology can be utilized for quality control, to facilitate OR scheduling, and to maximize OR efficiency.
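
    The recursive splitting step the abstract describes (split at the value giving the largest difference in mean operative time) can be sketched for a single predictor. This is an illustrative greedy split only, not the full conditional inference tree with permutation tests, and the BMI and operative-time numbers are hypothetical:

```python
def best_split(xs, ys):
    """Find the threshold on one predictor that maximises the absolute
    difference in mean outcome between the two sides of the split."""
    pairs = sorted(zip(xs, ys))
    best = (None, 0.0)                      # (threshold, mean difference)
    for i in range(1, len(pairs)):
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        diff = abs(sum(left) / len(left) - sum(right) / len(right))
        if diff > best[1]:
            # place the threshold halfway between adjacent sorted x values
            thr = (pairs[i - 1][0] + pairs[i][0]) / 2
            best = (thr, diff)
    return best

# Hypothetical BMI vs. operative-time data: times jump above BMI ~30.
bmi  = [22, 25, 28, 31, 34, 37]
mins = [150, 155, 160, 240, 250, 245]
print(best_split(bmi, mins))   # -> (29.5, 90.0)
```

    A regression tree applies this search over every candidate variable, keeps the strongest split, and recurses on each side; the conditional inference variant additionally gates each split on a permutation-test p-value to avoid overfitting.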

  16. Brief Report. Educated Adults Are Still Affected by Intuitions about the Effect of Arithmetical Operations: Evidence from a Reaction-Time Study

    ERIC Educational Resources Information Center

    Vamvakoussi, Xenia; Van Dooren, Wim; Verschaffel, Lieven

    2013-01-01

    This study tested the hypothesis that intuitions about the effect of operations, e.g., "addition makes bigger" and "division makes smaller", are still present in educated adults, even after years of instruction. To establish the intuitive character, we applied a reaction time methodology, grounded in dual process theories of reasoning. Educated…

  17. Application study of evolutionary operation methods in optimization of process parameters for mosquito coils industry

    NASA Astrophysics Data System (ADS)

    Ginting, E.; Tambunanand, M. M.; Syahputri, K.

    2018-02-01

    Evolutionary Operation (EVOP) is a method designed to be used during routine plant operation to enable high productivity. Quality is one of the critical factors for a company to win the competition. Under these conditions, product quality was investigated by gathering the company's production data and making direct observations on the factory floor, especially in the drying department, to identify the problem of high water content in the mosquito coils. PT. X, which produces mosquito coils, attempted to reduce product defects caused by inaccurate operating conditions. Water content is a key quality parameter for mosquito coils: if it is too high, the product moulds and breaks easily; if it is too low, the product breaks easily and burns for fewer hours. Three factors affect the optimal water content: stirring time, drying temperature and drying time. EVOP was used to obtain the required conditions; it is an efficient technique for optimizing two or three experimental variables using two-level factorial designs with a center point. The optimal operating conditions found in the experiment were a stirring time of 20 minutes, a drying temperature of 65°C and a drying time of 130 minutes. Based on the EVOP analysis, the optimum water content is 6.90%, which approaches the plant's target of 7%.
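
    One EVOP cycle can be illustrated in code: evaluate the 2^k corner points around the current operating centre plus the centre itself, then estimate each factor's main effect as the difference between the means at the high and low levels. The moisture-content model and step sizes below are hypothetical stand-ins, not the plant's data:

```python
from itertools import product

def evop_cycle(response_fn, center, steps):
    """One EVOP cycle: run the 2^k corner points around the current
    operating centre (plus the centre point) and estimate each factor's
    main effect as mean(+1 level) - mean(-1 level)."""
    k = len(center)
    runs = {}
    for levels in product((-1, 1), repeat=k):
        point = [c + l * s for c, l, s in zip(center, levels, steps)]
        runs[levels] = response_fn(*point)
    runs[(0,) * k] = response_fn(*center)   # centre point
    effects = []
    for f in range(k):
        hi = [y for lv, y in runs.items() if lv[f] == 1]
        lo = [y for lv, y in runs.items() if lv[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

def moisture(stir_min, temp_c, dry_min):
    """Hypothetical linear moisture-content response around the optimum."""
    return 7.0 + 0.05 * (stir_min - 20) - 0.04 * (temp_c - 65) - 0.01 * (dry_min - 130)

effects = evop_cycle(moisture, center=(20, 65, 130), steps=(2, 5, 10))
print([round(e, 3) for e in effects])   # [0.2, -0.4, -0.2]
```

    In practice the centre is then shifted a small step in the direction the effects indicate and the cycle repeats, so the plant creeps toward the optimum without interrupting production.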

  18. 40 CFR 461.73 - New source performance standards. (NSPS).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS BATTERY MANUFACTURING POINT SOURCE CATEGORY Zinc Subcategory § 461.73 New... times. (b) There shall be no discharge allowance for process wastewater pollutants from any battery manufacturing operation other than those battery manufacturing operations listed above. ...

  19. 40 CFR 63.5320 - How does my affected major source comply with the HAP emission standards?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... all times, including periods of startup, shutdown, and malfunction. (b) You must always operate and... record monthly the pounds of each type of finish applied for each leather product process operation and...

  20. 40 CFR 63.5320 - How does my affected major source comply with the HAP emission standards?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... all times, including periods of startup, shutdown, and malfunction. (b) You must always operate and... record monthly the pounds of each type of finish applied for each leather product process operation and...

  1. Operational summary of an electric propulsion long term test facility

    NASA Technical Reports Server (NTRS)

    Trump, G. E.; James, E. L.; Bechtel, R. T.

    1982-01-01

    An automated test facility capable of simultaneously operating three 2.5 kW, 30-cm mercury ion thrusters and their power processors is described, along with a test program conducted to document thruster characteristics as a function of time. Facility controls are analog, with full redundancy, so that in the event of a malfunction the facility automatically activates a backup mode and notifies an operator. Test data are recorded by a central data collection system and processed as daily averages. The facility has operated continuously for a period of 37 months, over which nine mercury ion thrusters and four power processor units accumulated a total of over 14,500 hours of thruster operating time.

  2. Extended Operation of Turbojet Engine with Pentaborane

    NASA Technical Reports Server (NTRS)

    Useller, James W; Jones, William L

    1957-01-01

    A full-scale turbojet engine was operated with pentaborane fuel continuously for 22 minutes at conditions simulating flight at a Mach number of 0.8 at an altitude of 50,000 feet. This period of operation is approximately three times longer than previously reported operation times. Although the specific fuel consumption was reduced from 1.3 with JP-4 fuel to 0.98 with pentaborane, a 13.2-percent reduction in net thrust was also encountered. A portion of this thrust loss is potentially recoverable with proper design of the engine components. The boron oxide deposition and erosion processes within the engine approached an equilibrium condition after approximately 22 minutes of operation with pentaborane.

  3. The role of fractional time-derivative operators on anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Tateishi, Angel A.; Ribeiro, Haroldo V.; Lenzi, Ervin K.

    2017-10-01

    Generalized diffusion equations with fractional-order derivatives have proven quite efficient in describing diffusion in complex systems, with the advantage of producing exact expressions for the underlying diffusive properties. Recently, researchers have proposed different fractional-time operators (namely, the Caputo-Fabrizio and Atangana-Baleanu operators) which, unlike the well-known Riemann-Liouville operator, are defined by non-singular memory kernels. Here we propose to use these new operators to generalize the usual diffusion equation. By analyzing the corresponding fractional diffusion equations within the continuous time random walk framework, we obtained waiting time distributions characterized by exponential, stretched exponential, and power-law functions, as well as a crossover between two behaviors. For the mean square displacement, we found crossovers between usual and confined diffusion, and between usual and sub-diffusion. We obtained the exact expressions for the probability distributions, where non-Gaussian and stationary distributions emerged. The emergence of stationary distributions is remarkable because the fractional diffusion equation is solved without external forces and subject to free diffusion boundary conditions. We have further shown that these new fractional diffusion equations are related to diffusive processes with stochastic resetting, and to fractional diffusion equations with derivatives of distributed order. Thus, our results suggest that these new operators may be a simple and efficient way of incorporating different structural aspects into the system, opening new possibilities for modeling and investigating anomalous diffusive processes.
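
    For reference, the three fractional-time operators named here have the following standard definitions for 0 < α < 1, where Γ is the gamma function, M(α) and B(α) are normalization functions, and E_α is the Mittag-Leffler function:

```latex
% Riemann-Liouville (singular power-law kernel)
{}^{RL}D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \frac{d}{dt}
  \int_0^t \frac{f(\tau)}{(t-\tau)^{\alpha}}\, d\tau

% Caputo-Fabrizio (non-singular exponential kernel)
{}^{CF}D_t^{\alpha} f(t) = \frac{M(\alpha)}{1-\alpha}
  \int_0^t f'(\tau)\, \exp\!\left(-\frac{\alpha\,(t-\tau)}{1-\alpha}\right) d\tau

% Atangana-Baleanu, Caputo sense (non-singular Mittag-Leffler kernel)
{}^{ABC}D_t^{\alpha} f(t) = \frac{B(\alpha)}{1-\alpha}
  \int_0^t f'(\tau)\, E_{\alpha}\!\left(-\frac{\alpha\,(t-\tau)^{\alpha}}{1-\alpha}\right) d\tau
```

    The exponential and Mittag-Leffler kernels remain finite at τ = t, which is what "non-singular" refers to and what changes the waiting-time distributions obtained in the CTRW analysis.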

  4. A performance comparison of the IBM RS/6000 and the Astronautics ZS-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, W.M.; Abraham, S.G.; Davidson, E.S.

    1991-01-01

    Concurrent uniprocessor architectures, of which vector and superscalar are two examples, are designed to capitalize on fine-grain parallelism. The authors have developed a performance evaluation method for comparing and improving these architectures, and in this article they present the methodology and a detailed case study of two machines. The runtime of many programs is dominated by time spent in loop constructs - for example, Fortran Do-loops. Loops generally comprise two logical processes: The access process generates addresses for memory operations while the execute process operates on floating-point data. Memory access patterns typically can be generated independently of the data in the execute process. This independence allows the access process to slip ahead, thereby hiding memory latency. The IBM 360/91 was designed in 1967 to achieve slip dynamically, at runtime. One CPU unit executes integer operations while another handles floating-point operations. Other machines, including the VAX 9000 and the IBM RS/6000, use a similar approach.

  5. Improving perioperative performance: the use of operations management and the electronic health record.

    PubMed

    Foglia, Robert P; Alder, Adam C; Ruiz, Gardito

    2013-01-01

    Perioperative services require the orchestration of multiple staff, space and equipment. Our aim was to identify whether the implementation of operations management and an electronic health record (EHR) improved perioperative performance. We compared 2006, pre operations management and EHR implementation, to 2010, post implementation. Operations management consisted of: communication to staff of the perioperative vision and metrics, obtaining credible data and analysis, and the implementation of performance improvement processes. The EHR allows: identification of delays and the accountable service or person, and collection and collation of data for analysis in multiple venues, including operational, financial, and quality. Metrics assessed included: operative cases, first case on time starts, reason for delay, and operating revenue. In 2006, 19,148 operations were performed (13,545 in the Main Operating Room (OR) area, and 5603 at satellite locations); first case on time starts were 12%; reasons for first case delay were not identifiable; and operating revenue was $115.8M overall, with $78.1M in the Main OR area. In 2010, cases increased to 25,856 (+35%); Main OR area cases increased to 13,986 (+3%); first case on time starts improved to 46%; operations outside the Main OR area increased to 11,870 (+112%); case delays were ascribed to nurses 7%, anesthesiologists 22%, surgeons 33%, and other (patient, hospital) 38%. Five surgeons (7%) accounted for 29% of surgical delays and 4 anesthesiologists (8%) for 45% of anesthesiology delays; operating revenue increased to $177.3M (+53%) overall, and in the Main OR area rose to $101.5M (+30%). The use of operations management and the EHR resulted in improved processes, credible data, prompt sharing of the metrics, and pinpointing of individual provider performance. Implementation of these strategies allowed us to shift cases between facilities, reallocate OR blocks, increase first case on time starts four fold and operative cases by 35%, and these changes were associated with a 53% increase in operating revenue. The fact that the revenue increase was greater than the case volume increase (53% vs. 35%) speaks for improved performance. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Process Mining Methodology for Health Process Tracking Using Real-Time Indoor Location Systems.

    PubMed

    Fernandez-Llatas, Carlos; Lizondo, Aroa; Monton, Eduardo; Benedi, Jose-Miguel; Traver, Vicente

    2015-11-30

    The definition of efficient and accurate health processes in hospitals is crucial for ensuring an adequate quality of service. Knowing and improving the behavior of the surgical processes in a hospital can improve the number of patients that can be operated on using the same resources. However, the measurement of this process is usually made in an obtrusive way, forcing nurses to gather information and time data, affecting the process itself and generating inaccurate data due to human errors during the stressful journey of health staff in the operating theater. The use of indoor location systems can capture time information about the process in an unobtrusive way, freeing nurses and allowing them to engage in purely welfare work. However, it is necessary to present these data in an understandable way for health professionals, who cannot deal with large amounts of historical localization log data. The use of process mining techniques can deal with this problem, offering an easily understandable view of the process. In this paper, we present a tool and a process mining-based methodology that, using indoor location systems, enables health staff not only to represent the process, but to know precise information about the deployment of the process in an unobtrusive and transparent way. We have successfully tested this tool in a real surgical area with 3613 patients during February, March and April of 2015.
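
    As an illustration of the simplest abstraction process mining builds on, a directly-follows graph counts how often one activity immediately follows another within each case. The event log below, with invented activity names, mimics what an indoor location system might emit; it is not the hospital's data:

```python
from collections import defaultdict

def directly_follows(event_log):
    """Build a directly-follows graph from an event log of
    (case_id, activity, timestamp) triples: for each case, order events
    by time and count each consecutive activity pair."""
    traces = defaultdict(list)
    for case, activity, ts in sorted(event_log, key=lambda e: (e[0], e[2])):
        traces[case].append(activity)
    graph = defaultdict(int)
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):
            graph[(a, b)] += 1
    return dict(graph)

# Two hypothetical surgical cases tracked by room sensors:
log = [
    (1, "admission", 0), (1, "operating_theatre", 10), (1, "recovery", 95),
    (2, "admission", 5), (2, "operating_theatre", 20),
    (2, "operating_theatre", 60), (2, "recovery", 130),
]
print(directly_follows(log))
```

    Process-discovery algorithms then turn such counts into a readable process model, and the timestamps additionally yield waiting and service times per transition, which is the unobtrusive timing information the abstract emphasizes.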

  7. Process Mining Methodology for Health Process Tracking Using Real-Time Indoor Location Systems

    PubMed Central

    Fernandez-Llatas, Carlos; Lizondo, Aroa; Monton, Eduardo; Benedi, Jose-Miguel; Traver, Vicente

    2015-01-01

    The definition of efficient and accurate health processes in hospitals is crucial for ensuring an adequate quality of service. Knowing and improving the behavior of the surgical processes in a hospital can improve the number of patients that can be operated on using the same resources. However, the measurement of this process is usually made in an obtrusive way, forcing nurses to gather information and time data, affecting the process itself and generating inaccurate data due to human errors during the stressful journey of health staff in the operating theater. The use of indoor location systems can capture time information about the process in an unobtrusive way, freeing nurses and allowing them to engage in purely welfare work. However, it is necessary to present these data in an understandable way for health professionals, who cannot deal with large amounts of historical localization log data. The use of process mining techniques can deal with this problem, offering an easily understandable view of the process. In this paper, we present a tool and a process mining-based methodology that, using indoor location systems, enables health staff not only to represent the process, but to know precise information about the deployment of the process in an unobtrusive and transparent way. We have successfully tested this tool in a real surgical area with 3613 patients during February, March and April of 2015. PMID:26633395

  8. Investigation of Ignition and Combustion Processes of Diesel Engines Operating with Turbulence and Air-storage Chambers

    NASA Technical Reports Server (NTRS)

    Petersen, Hans

    1938-01-01

    The flame photographs obtained with combustion-chamber models of engines operating, respectively, with a turbulence chamber and with air-storage chambers or cells provide an insight into the air and fuel movements that take place before and during combustion in the combustion chamber. The relation between air velocity, start of injection, and time of combustion was determined for the combustion process employing a turbulence chamber.

  9. Commander’s Handbook for Strategic Communication and Communication Strategy

    DTIC Science & Technology

    2010-06-24

    designed to gather SC educators and key practitioners for thoughtful discussions on SC education and training issues. KLE is not about engaging key...operational design and early joint operation planning process to identify indicators that will enable us to detect when it is time to “reframe” the problem...integrating process across DOD, included in concept and doctrine development, strategy and plan design, execution, and assessment, and incorporated

  10. Workflow and maintenance characteristics of five automated laboratory instruments for the diagnosis of sexually transmitted infections.

    PubMed

    Ratnam, Sam; Jang, Dan; Gilchrist, Jodi; Smieja, Marek; Poirier, Andre; Hatchette, Todd; Flandin, Jean-Frederic; Chernesky, Max

    2014-07-01

    The choice of a suitable automated system for a diagnostic laboratory depends on various factors. Comparative workflow studies provide quantifiable and objective metrics to determine hands-on time during specimen handling and processing, reagent preparation, return visits and maintenance, and test turnaround time and throughput. Using objective time study techniques, workflow characteristics for processing 96 and 192 tests were determined on m2000 RealTime (Abbott Molecular), Viper XTR (Becton Dickinson), cobas 4800 (Roche Molecular Diagnostics), Tigris (Hologic Gen-Probe), and Panther (Hologic Gen-Probe) platforms using second-generation assays for Chlamydia trachomatis and Neisseria gonorrhoeae. A combination of operational and maintenance steps requiring manual labor showed that Panther had the shortest overall hands-on times and Viper XTR the longest. Both Panther and Tigris showed greater efficiency whether 96 or 192 tests were processed. Viper XTR and Panther had the shortest times to results and m2000 RealTime the longest. Sample preparation and loading time was the shortest for Panther and longest for cobas 4800. Mandatory return visits were required only for m2000 RealTime and cobas 4800 when 96 tests were processed, and both required substantially more hands-on time than the other systems due to increased numbers of return visits when 192 tests were processed. These results show that there are substantial differences in the amount of labor required to operate each system. Assay performance, instrumentation, testing capacity, workflow, maintenance, and reagent costs should be considered in choosing a system. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  11. Embedded real-time operating system micro kernel design

    NASA Astrophysics Data System (ADS)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require real-time behavior. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical-section handling, task scheduling, interrupt handling, semaphore and message-mailbox communication, clock management, and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has quick response while operating in an application system.
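
    The scheduling idea above, allocating the CPU among ready tasks by importance and urgency, can be sketched language-agnostically. The fragment below orders hypothetical tasks by a (priority, deadline) key; the task names and numbers are invented for illustration, and the paper's actual kernel targets an 8051 in a systems language, not Python.

```python
import heapq

def schedule(tasks):
    """Return task names in dispatch order, picking the most important
    (lowest priority number), earliest-deadline task first -- a toy model
    of how a micro kernel allocates CPU among ready tasks."""
    ready = [(priority, deadline, name) for name, priority, deadline in tasks]
    heapq.heapify(ready)
    order = []
    while ready:
        _, _, name = heapq.heappop(ready)
        order.append(name)
    return order
```

For example, an urgent ADC-sampling task would be dispatched before a UI refresh or a low-priority logging task.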

  12. Machine vision for real time orbital operations

    NASA Technical Reports Server (NTRS)

    Vinz, Frank L.

    1988-01-01

    Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputting visual data to a computer so that image processing may be achieved for real time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).

  13. Practical UXO Classification: Enhanced Data Processing Strategies for Technology Transition - Fort Ord: Dynamic and Cued Metalmapper Processing and Classification

    DTIC Science & Technology

    2017-06-06

    Keywords: Geophysical Mapping, Electromagnetic Induction, Instrument Verification Strip, Time Domain Electromagnetic, Unexploded Ordnance, Munitions Response, Quality Assurance/Quality Control, Receiver Operating Characteristic, Real-time Kinematic, Signal-to-Noise Ratio.

  14. Statistical process control: separating signal from noise in emergency department operations.

    PubMed

    Pimentel, Laura; Barrueto, Fermin

    2015-05-01

    Statistical process control (SPC) is a visually appealing and statistically rigorous methodology very suitable to the analysis of emergency department (ED) operations. We demonstrate that the control chart is the primary tool of SPC; it is constructed by plotting data measuring the key quality indicators of operational processes in rationally ordered subgroups such as units of time. Control limits are calculated using formulas reflecting the variation in the data points from one another and from the mean. SPC allows managers to determine whether operational processes are controlled and predictable. We review why the moving range chart is most appropriate for use in the complex ED milieu, how to apply SPC to ED operations, and how to determine when performance improvement is needed. SPC is an excellent tool for operational analysis and quality improvement for these reasons: 1) control charts make large data sets intuitively coherent by integrating statistical and visual descriptions; 2) SPC provides analysis of process stability and capability rather than simple comparison with a benchmark; 3) SPC allows distinction between special cause variation (signal), indicating an unstable process requiring action, and common cause variation (noise), reflecting a stable process; and 4) SPC keeps the focus of quality improvement on process rather than individual performance. Because data have no meaning apart from their context, and every process generates information that can be used to improve it, we contend that SPC should be seriously considered for driving quality improvement in emergency medicine. Copyright © 2015 Elsevier Inc. All rights reserved.
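
    The control-limit arithmetic described above can be sketched in a few lines. This is a generic individuals/moving-range (XmR) computation using the standard 2.66 constant, not code from the article, and the daily ED wait-time numbers are invented for illustration.

```python
def xmr_limits(data):
    """Return (mean, lcl, ucl) for an individuals control chart built from
    rationally ordered observations, using the average moving range."""
    n = len(data)
    mean = sum(data) / n
    moving_ranges = [abs(data[i] - data[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    # 2.66 = 3 / d2, with d2 = 1.128 for moving-range subgroups of size 2
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def special_cause_points(data):
    """Indices of points outside the control limits: special cause
    variation (signal) rather than common cause variation (noise)."""
    mean, lcl, ucl = xmr_limits(data)
    return [i for i, x in enumerate(data) if x < lcl or x > ucl]

# hypothetical daily door-to-provider times (minutes); day 9 is a spike
times = [32, 35, 31, 34, 33, 36, 30, 34, 32, 70, 33, 35]
```

Only the day-9 spike falls outside the limits; the remaining day-to-day variation is noise that, per the article's argument, should not trigger action.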

  15. Preliminary Investigation of Time Remaining Display on the Computer-based Emergency Operating Procedure

    NASA Astrophysics Data System (ADS)

    Suryono, T. J.; Gofuku, A.

    2018-02-01

    One of the important things in the mitigation of nuclear power plant accidents is time management. Accidents should be resolved as soon as possible in order to prevent core melting and the release of radioactive material to the environment. In this case, operators should follow the emergency operating procedure related to the accident, step by step and within the allowable time. Nowadays, advanced main control rooms are equipped with computer-based procedures (CBPs), which make it easier for operators to do their tasks of monitoring and controlling the reactor. However, most CBPs do not include a time remaining display feature which informs operators of the time available for them to execute procedure steps and warns them if they reach the time limit. Furthermore, such a feature would increase operators' awareness of their current situation in the procedure. This paper investigates this issue. A simplified emergency operating procedure (EOP) for a steam generator tube rupture (SGTR) accident of a PWR plant is applied. In addition, the sequence of actions in each step of the procedure is modelled using multilevel flow modelling (MFM) and influence propagation rules. The prediction of action time for each step is acquired based on similar accident cases and support vector regression. The derived time is then processed and displayed on a CBP user interface.
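
    A minimal sketch of the two ingredients described above: estimating a step's duration from similar past accident cases (a simple kernel-weighted average stands in here for the support vector regression the authors use) and rolling the estimates up into the time-remaining value a CBP display could show. The severity feature and all numbers are hypothetical.

```python
import math

def predict_step_time(past, query, bandwidth=1.0):
    """Similarity-weighted (Gaussian kernel) average of step durations from
    past accident cases; `past` is a list of (severity, seconds) pairs."""
    weights = [math.exp(-((s - query) ** 2) / (2 * bandwidth ** 2))
               for s, _ in past]
    total = sum(weights)
    return sum(w * t for w, (_, t) in zip(weights, past)) / total

def remaining_time(step_estimates, current_step, elapsed_in_step):
    """Time remaining for the whole procedure: the unfinished part of the
    current step plus every step still ahead."""
    remaining = max(step_estimates[current_step] - elapsed_in_step, 0)
    return remaining + sum(step_estimates[current_step + 1:])
```

For a three-step procedure estimated at 120, 60 and 90 seconds, an operator 20 seconds into step two would see 130 seconds remaining.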

  16. Time-Lapse Motion Picture Technique Applied to the Study of Geological Processes.

    PubMed

    Miller, R D; Crandell, D R

    1959-09-25

    Light-weight, battery-operated timers were built and coupled to 16-mm motion-picture cameras having apertures controlled by photoelectric cells. The cameras were placed adjacent to Emmons Glacier on Mount Rainier. The film obtained confirms the view that exterior time-lapse photography can be applied to the study of slow-acting geologic processes.

  17. Definition of an auxiliary processor dedicated to real-time operating system kernels

    NASA Technical Reports Server (NTRS)

    Halang, Wolfgang A.

    1988-01-01

    In order to increase the efficiency of process control data processing, it is necessary to enhance the productivity of real time high level languages and to automate the task administration, because presently 60 percent or more of the applications are still programmed in assembly languages. This may be achieved by migrating apt functions for the support of process control oriented languages into the hardware, i.e., by new architectures. Whereas numerous high level languages have already been defined or realized, there are no investigations yet on hardware assisted implementation of real time features. The requirements to be fulfilled by languages and operating systems in hard real time environment are summarized. A comparison of the most prominent languages, viz. Ada, HAL/S, LTR, Pearl, as well as the real time extensions of FORTRAN and PL/1, reveals how existing languages meet these demands and which features still need to be incorporated to enable the development of reliable software with predictable program behavior, thus making it possible to carry out a technical safety approval. Accordingly, Pearl proved to be the closest match to the mentioned requirements.

  18. Concept of Operations Visualization in Support of Ares I Production

    NASA Technical Reports Server (NTRS)

    Chilton, James H.; Smith, David Alan

    2008-01-01

    Boeing was selected in 2007 to manufacture the Ares I Upper Stage and Instrument Unit according to NASA's design, which would require the use of the latest manufacturing and integration processes to meet NASA budget and schedule targets. Past production experience has established that the majority of the life cycle cost is set during the initial design process. Concept of Operations (CONOPs) visualizations/simulations help to reduce life cycle cost during the early design stage. Production and operation visualizations can reduce tooling, factory capacity, safety, and build process risks while spreading program support across government, academic, media and public constituencies. The NASA/Boeing production visualization (DELMIA; Digital Enterprise Lean Manufacturing Interactive Application) promotes timely, concurrent and collaborative producibility analysis (Boeing) while supporting Upper Stage design cycles (NASA). The DELMIA CONOPs visualization reduced overall Upper Stage production flow time at the manufacturing facility by over 100 man-days, to 312.5 man-days, and helped to identify technical access issues. The NASA/Boeing Interactive Concept of Operations (ICON) provides interactive access to Ares using real mission parameters; allows users to configure the mission, which encourages ownership and identifies areas for improvement; allows mission operations or spacecraft detail to be added as needed; and provides an effective, low-cost advocacy, outreach and education tool.

  19. Speech Signal Processing Research. Appendices 1 thru 9

    DTIC Science & Technology

    1975-12-01

    is 2400 rpm for a maximum rotational latency of 25 ms and an average of 12.5 ms. The track to track access time is 12 ms, the average access time...in Table 1-3.

    Table 1-3. Capabilities and Limitations
      Start-Up Time: ~40 seconds
      Operating Temperature: 0°C (32°F) to +50°C (122°F) ambient
      Operating Humidity: 10% to 80% with no condensation
      Storage Conditions: Temperature 0°C (32°F) to

  20. Information systems and human error in the lab.

    PubMed

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough: to the extent that the introduction of these systems results in operators having less practice in dealing with unexpected events, or in becoming deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  1. How gamma radiation processing systems are benefiting from the latest advances in information technology

    NASA Astrophysics Data System (ADS)

    Gibson, Wayne H.; Levesque, Daniel

    2000-03-01

    This paper discusses how gamma irradiation plants are putting the latest advances in computer and information technology to use for better process control, cost savings, and strategic advantages. Some irradiator operations are gaining significant benefits by integrating computer technology and robotics with real-time information processing, multi-user databases, and communication networks. The paper reports on several irradiation facilities that are making good use of client/server LANs, user-friendly graphics interfaces, supervisory control and data acquisition (SCADA) systems, distributed I/O with real-time sensor devices, trending analysis, real-time product tracking, dynamic product scheduling, and automated dosimetry reading. These plants are lowering costs by fast and reliable reconciliation of dosimetry data, easier validation to GMP requirements, optimizing production flow, and faster release of sterilized products to market. There is a trend in the manufacturing sector towards total automation using "predictive process control". Real-time verification of process parameters "on-the-run" allows control parameters to be adjusted appropriately, before the process strays out of limits. Applying this technology to the gamma radiation process, control will be based on monitoring the key parameters such as time, and making adjustments during the process to optimize quality and throughput. Dosimetry results will be used as a quality control measurement rather than as a final monitor for the release of the product. Results are correlated with the irradiation process data to quickly and confidently reconcile variations. Ultimately, a parametric process control system utilizing responsive control, feedback and verification will not only increase productivity and process efficiency, but can also result in operating within tighter dose control set points.

  2. Beyond the spectral theorem: Spectrally decomposing arbitrary functions of nondiagonalizable operators

    NASA Astrophysics Data System (ADS)

    Riechers, Paul M.; Crutchfield, James P.

    2018-06-01

    Nonlinearities in finite dimensions can be linearized by projecting them into infinite dimensions. Unfortunately, the familiar linear operator techniques that one would then hope to use often fail since the operators cannot be diagonalized. The curse of nondiagonalizability also plays an important role even in finite-dimensional linear operators, leading to analytical impediments that occur across many scientific domains. We show how to circumvent it via two tracks. First, using the well-known holomorphic functional calculus, we develop new practical results about spectral projection operators and the relationship between left and right generalized eigenvectors. Second, we generalize the holomorphic calculus to a meromorphic functional calculus that can decompose arbitrary functions of nondiagonalizable linear operators in terms of their eigenvalues and projection operators. This simultaneously simplifies and generalizes functional calculus so that it is readily applicable to analyzing complex physical systems. Together, these results extend the spectral theorem of normal operators to a much wider class, including circumstances in which poles and zeros of the function coincide with the operator spectrum. By allowing the direct manipulation of individual eigenspaces of nonnormal and nondiagonalizable operators, the new theory avoids spurious divergences. As such, it yields novel insights and closed-form expressions across several areas of physics in which nondiagonalizable dynamics arise, including memoryful stochastic processes, open nonunitary quantum systems, and far-from-equilibrium thermodynamics. The technical contributions include the first full treatment of arbitrary powers of an operator, highlighting the special role of the zero eigenvalue. 
Furthermore, we show that the Drazin inverse, previously only defined axiomatically, can be derived as the negative-one power of singular operators within the meromorphic functional calculus and we give a new general method to construct it. We provide new formulae for constructing spectral projection operators and delineate the relations among projection operators, eigenvectors, and left and right generalized eigenvectors. By way of illustrating its application, we explore several, rather distinct examples. First, we analyze stochastic transition operators in discrete and continuous time. Second, we show that nondiagonalizability can be a robust feature of a stochastic process, induced even by simple counting. As a result, we directly derive distributions of the time-dependent Poisson process and point out that nondiagonalizability is intrinsic to it and the broad class of hidden semi-Markov processes. Third, we show that the Drazin inverse arises naturally in stochastic thermodynamics and that applying the meromorphic functional calculus provides closed-form solutions for the dynamics of key thermodynamic observables. Finally, we draw connections to the Ruelle-Frobenius-Perron and Koopman operators for chaotic dynamical systems and propose how to extract eigenvalues from a time-series.
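
    The abstract's claim that the Drazin inverse is the negative-one power of a singular operator can be illustrated numerically in the diagonalizable case, where the meromorphic prescription reduces to inverting only the nonzero eigenvalues and annihilating the zero eigenspace. The sketch below checks the defining Drazin identities; the two-state Markov generator is an invented example (such generators appear in the abstract's stochastic-thermodynamics application).

```python
import numpy as np

def drazin_via_spectrum(A, tol=1e-10):
    """Drazin inverse of a diagonalizable matrix: invert the nonzero
    eigenvalues, send the zero eigenspace to zero."""
    vals, vecs = np.linalg.eig(A)
    inv_vals = np.array([1.0 / v if abs(v) > tol else 0.0 for v in vals])
    return (vecs @ np.diag(inv_vals) @ np.linalg.inv(vecs)).real

# singular generator A = I - T of a two-state Markov chain;
# eigenvalues of A are 0 and 0.3, so A has no ordinary inverse
T = np.array([[0.9, 0.1], [0.2, 0.8]])
A = np.eye(2) - T
D = drazin_via_spectrum(A)
```

For an index-1 singular matrix such as this one, D commutes with A, D A D = D, and A² D = A, which are exactly the Drazin axioms.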

  3. Launch vehicle operations cost reduction through artificial intelligence techniques

    NASA Technical Reports Server (NTRS)

    Davis, Tom C., Jr.

    1988-01-01

    NASA's Kennedy Space Center has attempted to develop AI methods in order to reduce the cost of launch vehicle ground operations as well as to improve the reliability and safety of such operations. Attention is presently given to cost savings estimates for systems involving launch vehicle firing-room software and hardware real-time diagnostics, as well as the nature of configuration control and the real-time autonomous diagnostics of launch-processing systems by these means. Intelligent launch decisions and intelligent weather forecasting are additional applications of AI being considered.

  4. Quiet aircraft design and operational characteristics

    NASA Technical Reports Server (NTRS)

    Hodge, Charles G.

    1991-01-01

    The application of aircraft noise technology to the design and operation of aircraft is discussed. Areas of discussion include the setting of target airplane noise levels, operational considerations and their effect on noise, and the sequencing and timing of the design and development process. Primary emphasis is placed on commercial transport aircraft of the type operated by major airlines. Additionally, noise control engineering of other types of aircraft is briefly discussed.

  5. Constraints and System Primitives in Achieving Multilevel Security in Real Time Distributed System Environment

    DTIC Science & Technology

    1994-04-18

    because they represent a microkernel and a monolithic kernel approach to MLS operating system issues. TMACH is based on MACH, a distributed operating...the operating system is based on a microkernel design or a monolithic kernel design. This distinction requires some caution since monolithic operating...are provided by user-level processes, in contrast to standard UNIX, which has a large monolithic kernel that pro

  6. The Fourth Factor: The Case for Parity of Information as an Operational Factor With Space, Time & Force

    DTIC Science & Technology

    2008-10-31

    problem is devoid of a factor, it incurs additional (and sometimes unacceptable) risk. Operational art exists in an area that intermingles science ...Operational Art as defined in U.S. Joint Doctrine is an accepted process for use by operational commanders to visualize how to most efficiently and...effectively employ military capabilities to achieve a desired objective. Within Operational Art are three accepted factors in which all other

  7. Image data-processing system for solar astronomy

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.; Teuber, D. L.; Watkins, J. R.; Thomas, D. T.; Cooper, C. M.

    1977-01-01

    The paper describes an image data processing system (IDAPS), its hardware/software configuration, and interactive and batch modes of operation for the analysis of the Skylab/Apollo Telescope Mount S056 X-Ray Telescope experiment data. Interactive IDAPS is primarily designed to provide on-line interactive user control of image processing operations for image familiarization, sequence and parameter optimization, and selective feature extraction and analysis. Batch IDAPS follows the normal conventions of card control and data input and output, and is best suited where the desired parameters and sequence of operations are known and when long image-processing times are required. Particular attention is given to the way in which this system has been used in solar astronomy and other investigations. Some recent results obtained by means of IDAPS are presented.

  8. Choosing order of operations to accelerate strip structure analysis in parameter range

    NASA Astrophysics Data System (ADS)

    Kuksenko, S. P.; Akhunov, R. R.; Gazizov, T. R.

    2018-05-01

    The paper considers the use of iterative methods for solving the sequence of linear algebraic systems obtained in the quasistatic analysis of strip structures with the method of moments. Through the analysis of four strip structures, the authors show that additional acceleration (up to 2.21 times) of the iterative process can be obtained when solving linear systems repeatedly by choosing a proper order of operations and a preconditioner. The obtained results can be used to accelerate the computer-aided design of various strip structures. The choice of the order of operations to accelerate the process is simple and universal, and could be applied not only to strip structure analysis but also to a wide range of computational problems.
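
    One concrete way the order of operations matters when solving a sequence of nearby linear systems, as in a parameter sweep, is reusing each solution as the starting guess for the next system. The sketch below demonstrates the effect with a plain conjugate-gradient solver on an invented tridiagonal model problem; it illustrates the general idea only, not the authors' specific method-of-moments systems or their preconditioner.

```python
import numpy as np

def cg(A, b, x0, tol=1e-8, maxiter=500):
    """Plain conjugate gradient; returns (solution, iterations used)."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    for k in range(maxiter):
        if np.linalg.norm(r) < tol:
            return x, k
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x, maxiter

# a sequence of nearby SPD systems, as when sweeping a parameter
n = 50
base = (np.diag(4.0 * np.ones(n))
        + np.diag(-1.0 * np.ones(n - 1), 1)
        + np.diag(-1.0 * np.ones(n - 1), -1))
b = np.ones(n)

cold = warm = 0
x_prev = np.zeros(n)
for eps in (0.00, 0.01, 0.02, 0.03):
    A = base + eps * np.eye(n)
    _, k_cold = cg(A, b, np.zeros(n))       # always restart from zero
    x_prev, k_warm = cg(A, b, x_prev)       # reuse the previous solution
    cold += k_cold
    warm += k_warm
```

Warm-starting each solve from the previous system's solution reduces the total iteration count over the sweep, the same kind of "order of operations" saving the abstract quantifies for its strip-structure systems.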

  9. Link monitor and control operator assistant: A prototype demonstrating semiautomated monitor and control

    NASA Technical Reports Server (NTRS)

    Lee, L. F.; Cooper, L. P.

    1993-01-01

    This article describes the approach, results, and lessons learned from an applied research project demonstrating how artificial intelligence (AI) technology can be used to improve Deep Space Network operations. Configuring antenna and associated equipment necessary to support a communications link is a time-consuming process. The time spent configuring the equipment is essentially overhead and results in reduced time for actual mission support operations. The NASA Office of Space Communications (Code O) and the NASA Office of Advanced Concepts and Technology (Code C) jointly funded an applied research project to investigate technologies which can be used to reduce configuration time. This resulted in the development and application of AI-based automated operations technology in a prototype system, the Link Monitor and Control Operator Assistant (LMC OA). The LMC OA was tested over the course of three months in a parallel experimental mode on very long baseline interferometry (VLBI) operations at the Goldstone Deep Space Communications Center. The tests demonstrated a 44 percent reduction in pre-calibration time for a VLBI pass on the 70-m antenna. Currently, this technology is being developed further under Research and Technology Operating Plan (RTOP)-72 to demonstrate the applicability of the technology to operations in the entire Deep Space Network.

  10. Process identification of the SCR system of coal-fired power plant for de-NOx based on historical operation data.

    PubMed

    Li, Jian; Shi, Raoqiao; Xu, Chuanlong; Wang, Shimin

    2018-05-08

    The selective catalytic reduction (SCR) system, as one principal flue gas treatment method employed for the NOx emission control of the coal-fired power plant, is nonlinear and time-varying with great inertia and large time delay. It is difficult for the present SCR control system to achieve satisfactory performance with the traditional feedback and feedforward control strategies. Although some improved control strategies, such as the Smith predictor control and the model predictive control, have been proposed for this issue, a well-matched identification model is essentially required to realize a superior control of the SCR system. An industrial field experiment is an alternative way to identify the SCR system model in the coal-fired power plant. But it undesirably disturbs the operation system and is costly in time and manpower. In this paper, a process identification model of the SCR system is proposed and developed by applying the asymptotic method to the sufficiently excited data, selected from the original historical operation database of a 350 MW coal-fired power plant according to the condition number of the Fisher information matrix. Numerical simulations are carried out based on the practical historical operation data to evaluate the performance of the proposed model. Results show that the proposed model can efficiently achieve the process identification of the SCR system.
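
    The data-selection criterion mentioned above, screening historical segments by the condition number of the Fisher information matrix, can be sketched generically: a well-excited input segment yields a well-conditioned information matrix, while a nearly constant segment does not. The two-tap FIR information matrix below is an illustrative stand-in for the paper's actual model structure, and both input signals are synthetic.

```python
import numpy as np

def excitation_condition(u, order=2):
    """Condition number of the (scaled) Fisher information matrix of an
    input segment for an `order`-tap FIR-type model; a small value marks
    an informative (sufficiently excited) segment."""
    U = np.column_stack([u[order - 1 - i: len(u) - i] for i in range(order)])
    F = U.T @ U / len(U)
    return np.linalg.cond(F)

rng = np.random.default_rng(0)
rich = rng.normal(size=200)                          # broadband, well excited
poor = np.ones(200) + 1e-3 * rng.normal(size=200)    # nearly constant load
rich_cond = excitation_condition(rich)
poor_cond = excitation_condition(poor)
```

A selection rule would keep segments whose condition number falls below a threshold and discard the nearly singular ones, which carry almost no identification information.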

  11. Historical data and analysis for the first five years of KSC STS payload processing

    NASA Technical Reports Server (NTRS)

    Ragusa, J. M.

    1986-01-01

    General and specific quantitative and qualitative results were identified from a study of actual operational experience while processing 186 science, applications, and commercial payloads for the first 5 years of Space Transportation System (STS) operations at the National Aeronautics and Space Administration's (NASA) John F. Kennedy Space Center (KSC). All non-Department of Defense payloads from STS-2 through STS-33 were part of the study. Historical data and cumulative program experiences from key personnel were used extensively. Emphasis was placed on various program planning and events that affected KSC processing, payload experiences and improvements, payload hardware condition after arrival, services to customers, and the impact of STS operations and delays. From these initial considerations, operational drivers were identified, data for selected processing parameters collected and analyzed, processing criteria and options determined, and STS payload results and conclusions reached. The study showed a significant reduction in time and effort needed by STS customers and KSC to process a wide variety of payload configurations. Also of significance is the fact that even the simplest payloads required more processing resources than were initially assumed. The success to date of payload integration, testing, and mission operations, however, indicates the soundness of the approach taken and the methods used.

  12. Global interrupt and barrier networks

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E; Heidelberger, Philip; Kopcsay, Gerard V.; Steinmacher-Burow, Burkhard D.; Takken, Todd E.

    2008-10-28

    A system and method for generating global asynchronous signals in a computing structure. Particularly, a global interrupt and barrier network implements logic for generating global interrupt and barrier signals for controlling global asynchronous operations performed by processing elements at selected processing nodes of a computing structure in accordance with a processing algorithm; and includes the physical interconnecting of the processing nodes for communicating the global interrupt and barrier signals to the elements via low-latency paths. The global asynchronous signals respectively initiate interrupt and barrier operations at the processing nodes at times selected for optimizing performance of the processing algorithms. In one embodiment, the global interrupt and barrier network is implemented in a scalable, massively parallel supercomputing device structure comprising a plurality of processing nodes interconnected by multiple independent networks, with each node including one or more processing elements for performing computation or communication activity as required when performing parallel algorithm operations. One of the multiple independent networks is a global tree network for enabling high-speed global tree communications among global tree network nodes or sub-trees thereof. The global interrupt and barrier network may operate in parallel with the global tree network for providing global asynchronous sideband signals.
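
    The role of a global barrier, that no processing element proceeds past the synchronization point until all have arrived, can be sketched with ordinary threads standing in for processing nodes. This is purely conceptual: the patent's network realizes the same semantics in dedicated low-latency hardware rather than in software.

```python
import threading

N = 4                                   # stand-in for N processing nodes
barrier = threading.Barrier(N)
results = []
lock = threading.Lock()

def worker(node_id):
    partial = node_id * node_id         # local computation phase
    barrier.wait()                      # global barrier: wait for all nodes
    with lock:                          # post-barrier phase
        results.append(partial)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every partial result is produced before any node enters the post-barrier phase, which is the invariant the global barrier signal enforces.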

  13. Electronic digital display watch having solar and geographical functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salah, I.M.

    1984-10-30

    In order to provide easily accessible knowledge of the correlations between time, the geographical locale and the solar positions, the watch in question in addition to time-keeping means capable of displaying the current time also provides means capable of storing, processing in a microprocessor mode and displaying in a particular panel mode data of solar elevation and azimuth as well as date data, a computer performing correlating operations between these various values. Pushbuttons (BPH', BPM', BPB') allow using this watch in various operational and correction situations, and other pushbuttons (BPH, BPM, BPB) allow more specific commands for correction, for search operations regarding date and place based on the solar data, and for storage and recall from memory of the various processed data. This watch can easily be implemented as a small wrist watch. It will be advantageously used by those interested in knowing the solar positions, by solar facility engineers, architects, airline pilots, believers in the Moslem faith, etc.

  14. Effects of operational conditions on sludge degradation and organic acids formation in low-critical wet air oxidation.

    PubMed

    Chung, Jinwook; Lee, Mikyung; Ahn, Jaehwan; Bae, Wookeun; Lee, Yong-Woo; Shim, Hojae

    2009-02-15

    Wet air oxidation processes are used to treat highly concentrated organic compounds, including refractory materials, sludge, and night soil, and are usually operated at supercritical water conditions of high temperature and pressure. In this study, the effects of operational conditions, including temperature, pressure, and oxidant dose, on sludge degradation and conversion into subsequent intermediates such as organic acids were investigated at low-critical wet oxidation conditions. The reaction time and temperature in the wet air oxidation process were shown to be important factors affecting the liquefaction of volatile solids, with a more significant effect on the thermal hydrolysis reaction than on the oxidation reaction. The degradation efficiency of sludge and the formation of organic acids were improved with longer reaction time and higher reaction temperature. For sludge reduction and organic acids formation under wet air oxidation, the optimal reaction temperature, time, pressure, and oxidant dose were approximately 240 degrees C, 30 min, 60 atm, and 2.0 L/min, respectively.

  15. Valuing hydrological forecasts for a pumped storage assisted hydro facility

    NASA Astrophysics Data System (ADS)

    Zhao, Guangzhi; Davison, Matt

    2009-07-01

    This paper estimates the value of a perfectly accurate short-term hydrological forecast to the operator of a hydro electricity generating facility which can sell its power at time varying but predictable prices. The expected value of a less accurate forecast will be smaller. We assume a simple random model for water inflows and that the costs of operating the facility, including water charges, will be the same whether or not its operator has inflow forecasts. Thus, the improvement in value from better hydrological prediction results from the increased ability of the forecast using facility to sell its power at high prices. The value of the forecast is therefore the difference between the sales of a facility operated over some time horizon with a perfect forecast, and the sales of a similar facility operated over the same time horizon with similar water inflows which, though governed by the same random model, cannot be forecast. This paper shows that the value of the forecast is an increasing function of the inflow process variance and quantifies how much the value of this perfect forecast increases with the variance of the water inflow process. Because the lifetime of hydroelectric facilities is long, the small increase observed here can lead to an increase in the profitability of hydropower investments.
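
    The paper's core comparison, sales with a perfect inflow forecast minus sales without one, can be mimicked with a toy Monte Carlo model. Everything here (two price periods per day, unit turbine capacity, uniform mean-one inflows) is an invented simplification rather than the authors' model, but it reproduces the qualitative conclusion that the forecast's value grows with inflow variance.

```python
import random

HIGH, LOW, CAP = 2.0, 1.0, 1.0   # peak price, off-peak price, turbine capacity

def revenue_with_forecast(w):
    # perfect inflow forecast: fill the peak period first, sell the rest off-peak
    peak = min(w, CAP)
    return HIGH * peak + LOW * min(max(w - peak, 0.0), CAP)

def revenue_without_forecast(w):
    # schedule committed on the expected inflow (all capacity in the peak);
    # unforecast excess water cannot be rescheduled and is spilled
    return HIGH * min(w, CAP)

def forecast_value(spread, days=20000, seed=1):
    """Average daily revenue gain from the perfect forecast when inflows
    are uniform on [1 - spread, 1 + spread] (spread controls the variance)."""
    rng = random.Random(seed)
    gain = 0.0
    for _ in range(days):
        w = rng.uniform(1.0 - spread, 1.0 + spread)
        gain += revenue_with_forecast(w) - revenue_without_forecast(w)
    return gain / days
```

Under this toy model the forecast's value works out analytically to spread/4, so doubling the inflow variability doubles what the forecast is worth, matching the paper's monotonicity result in spirit.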

  15. Error analysis of real-time and post-processed orbit determination of GFO using GPS tracking

    NASA Technical Reports Server (NTRS)

    Schreiner, William S.

    1991-01-01

    The goal of the Navy's GEOSAT Follow-On (GFO) mission is to map the topography of the world's oceans in both real-time (operational) and post-processed modes. Currently, the best candidate for supplying the required orbit accuracy is the Global Positioning System (GPS). The purpose of this fellowship was to determine the expected orbit accuracy for GFO in both the real-time and post-processed modes when using GPS tracking. This report presents the work completed through the ending date of the fellowship.

  17. Stochastic dynamics of time correlation in complex systems with discrete time

    NASA Astrophysics Data System (ADS)

    Yulmetyev, Renat; Hänggi, Peter; Gafarov, Fail

    2000-11-01

    In this paper we present a framework for describing random processes in complex systems with discrete time. It describes the kinetics of discrete processes by means of a chain of finite-difference non-Markov equations for time correlation functions (TCFs). We introduce the dynamic (time-dependent) information Shannon entropy Si(t), where i = 0, 1, 2, 3, ..., as an information measure of the stochastic dynamics of time correlation (i = 0) and time memory (i = 1, 2, 3, ...). The set of functions Si(t) constitutes a quantitative measure of time correlation disorder (i = 0) and time memory disorder (i = 1, 2, 3, ...) in a complex system. The theory starts from a careful analysis of time correlation involving the dynamics of a set of vectors of various chaotic states. We examine in detail two stochastic processes involving the creation and annihilation of time correlation (or time memory). We analyze the vectors' dynamics employing finite-difference equations for random variables and the evolution operator describing their natural motion. The existence of a TCF allows the construction of a set of projection operators by means of the scalar product operation. Harnessing the infinite set of orthogonal dynamic random variables obtained by a Gram-Schmidt orthogonalization procedure leads to an infinite chain of finite-difference non-Markov kinetic equations for discrete TCFs and memory functions (MFs). Solving these equations yields recurrence relations between the TCFs and MFs of senior and junior orders. This offers new opportunities for detecting the power frequency spectra of the entropy functions Si(t) for time correlation (i = 0) and time memory (i = 1, 2, 3, ...). The results obtained offer considerable scope for the study of the stochastic dynamics of discrete random processes in complex systems. Application of this technique to the analysis of the stochastic dynamics of RR intervals from human ECGs shows convincing evidence for non-Markovian phenomena associated with peculiarities in short- and long-range scaling. This method may be of use in distinguishing healthy from pathologic data sets based on differences in these non-Markovian properties.
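
    As a concrete illustration of the first member of such a hierarchy, the normalized discrete-time correlation function of a scalar series can be computed directly (a generic sketch, not the authors' code):

```python
def tcf(x, max_lag):
    # Normalized discrete-time autocorrelation a(k) = <dx(t) dx(t+k)> / <dx^2>,
    # i.e. the i = 0 (time correlation) member of the hierarchy described above.
    n = len(x)
    mean = sum(x) / n
    dx = [v - mean for v in x]
    var = sum(d * d for d in dx) / n
    return [sum(dx[t] * dx[t + k] for t in range(n - k)) / ((n - k) * var)
            for k in range(max_lag + 1)]
```

    For an alternating series the function is 1 at lag 0 and oscillates in sign at higher lags, as expected for a strongly anticorrelated process.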

  18. MPI Runtime Error Detection with MUST: Advances in Deadlock Detection

    DOE PAGES

    Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; ...

    2013-01-01

    The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST), which detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real-world applications.
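
    Graph-based deadlock detection of the kind MUST builds on can be illustrated with a minimal wait-for graph check. This is a simplified sketch with AND semantics only; MUST's actual analysis handles richer AND-OR wait-for conditions for collective and wildcard MPI operations.

```python
def has_deadlock(wait_for):
    # wait_for maps each process to the set of processes it blocks on.
    # With AND semantics, any cycle in this graph is a deadlock.
    visiting, done = set(), set()

    def dfs(p):
        if p in visiting:
            return True            # back edge: cycle found
        if p in done:
            return False
        visiting.add(p)
        if any(dfs(q) for q in wait_for.get(p, ())):
            return True
        visiting.discard(p)
        done.add(p)
        return False

    return any(dfs(p) for p in wait_for)
```

    For example, two processes posting blocking receives for each other form the cycle {0 -> 1 -> 0} and are reported as deadlocked.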

  19. The stem cell laboratory: design, equipment, and oversight.

    PubMed

    Wesselschmidt, Robin L; Schwartz, Philip H

    2011-01-01

    This chapter describes some of the major issues to be considered when setting up a laboratory for the culture of human pluripotent stem cells (hPSCs). The process of establishing an hPSC laboratory can be divided into two equally important parts. One is completely administrative and includes developing protocols, seeking approval, and establishing reporting processes and documentation. The other involves the physical plant and includes design, equipment, and personnel. Proper planning of laboratory operations and proper design of the physical layout of the stem cell laboratory, so that it meets the scope of planned operations, is a major undertaking, but the time spent upfront will pay long-term returns in operational efficiency and effectiveness. A well-planned, organized, and properly equipped laboratory supports research activities by increasing efficiency and reducing lost time and wasted resources.

  20. Improving Operating Room Efficiency: First Case On-Time Start Project.

    PubMed

    Phieffer, Laura; Hefner, Jennifer L; Rahmanian, Armin; Swartz, Jason; Ellison, Christopher E; Harter, Ronald; Lumbley, Joshua; Moffatt-Bruce, Susan D

    Operating rooms (ORs) are costly to run, and multiple factors influence efficiency. The first case on-time start (FCOS) of an OR is viewed as a harbinger of efficiency for the daily schedule. Across 26 ORs of a large, academic medical center, only 49% of cases started on time in October 2011. The Perioperative Services Department engaged an interdisciplinary Operating Room Committee to apply Six Sigma tools to this problem. The steps of this project included (1) problem mapping, (2) process improvements to preoperative readiness, (3) informatics support improvements, and (4) continuous measurement and feedback. By June 2013, there was a peak of 92% first case on-time starts across service lines, decreasing to 78% through 2014, still significantly above the preintervention level of 49% (p = .000). Delay minutes also significantly decreased through the study period (p = .000). Across 2013, the most common delay owners were the patient, the surgeon, the facility, and the anesthesia department. Continuous and sustained improvement of first case on-time starts is attributed to tracking the FCOS metric, establishing embedded process improvement resources and creating transparency of data. This article highlights success factors and barriers to program success and sustainability.

  1. Timeliner: Automating Procedures on the ISS

    NASA Technical Reports Server (NTRS)

    Brown, Robert; Braunstein, E.; Brunet, Rick; Grace, R.; Vu, T.; Zimpfer, Doug; Dwyer, William K.; Robinson, Emily

    2002-01-01

    Timeliner has been developed as a tool to automate procedural tasks. These tasks may be sequential tasks that would typically be performed by a human operator, or precisely ordered sequencing tasks that allow autonomous execution of a control process. The Timeliner system includes elements for compiling and executing sequences that are defined in the Timeliner language. The Timeliner language was specifically designed to allow easy definition of scripts that provide sequencing and control of complex systems. The execution environment provides real-time monitoring and control based on the commands and conditions defined in the Timeliner language. The Timeliner sequence control may be preprogrammed, compiled from Timeliner "scripts," or it may consist of real-time, interactive inputs from system operators. In general, the Timeliner system lowers the workload for mission or process control operations. In a mission environment, scripts can be used to automate spacecraft operations including autonomous or interactive vehicle control, performance of preflight and post-flight subsystem checkouts, or handling of failure detection and recovery. Timeliner may also be used for mission payload operations, such as stepping through pre-defined procedures of a scientific experiment.

  2. Development of an Integrated, Computer-Based Bibliographical Data System for a Large University Library. Annual Report to the National Science Foundation from the University of Chicago Library, 1966/67.

    ERIC Educational Resources Information Center

    Fussler, Herman; Payne, Charles T.

    Part I is a discussion of the following project tasks: A) development of an on-line, real-time bibliographic data processing system; B) implementation in library operations; C) character sets; D) Project MARC; E) circulation; and F) processing operation studies. Part II is a brief discussion of efforts to work out cooperative library systems…

  3. Information gathering, management and transferring for geospatial intelligence

    NASA Astrophysics Data System (ADS)

    Nunes, Paulo; Correia, Anacleto; Teodoro, M. Filomena

    2017-07-01

    Information is a key asset in modern organizational operations. The success of joint and combined operations with partner organizations depends on accurate information and knowledge flow concerning the theatre of operations: provision of resources, evolution of the environment, location of markets, and where and when events occurred. Now as in the past, we cannot conceive of modern operations without maps and geo-spatial information (GI). Information and knowledge management is fundamental to the success of organizational decisions in an uncertain environment. Georeferenced information management is a knowledge management process: it begins with raw data and ends with generated knowledge. GI and intelligence systems allow us to integrate all other forms of intelligence and can be a main platform to process and display geo-spatial, time-referenced events. Combining explicit knowledge with people's know-how to generate a continuous learning cycle that supports real-time decisions mitigates the fog of everyday competition and provides knowledge supremacy. Extending the preliminary analysis done in [1], this work applies exploratory factor analysis to a questionnaire about GI and intelligence management in an organization, allowing the identification of future lines of action to improve information sharing and to exploit the full potential of this important resource.

  4. Software Correlator for Radioastron Mission

    NASA Astrophysics Data System (ADS)

    Likhachev, Sergey F.; Kostenko, Vladimir I.; Girin, Igor A.; Andrianov, Andrey S.; Rudnitskiy, Alexey G.; Zharov, Vladimir E.

    In this paper, we discuss the characteristics and operation of the Astro Space Center (ASC) software FX correlator, an important component of the space-ground interferometer for the Radioastron project. This project performs joint observations of compact radio sources using a 10 m space radio telescope (SRT) together with ground radio telescopes at 92, 18, 6, and 1.3 cm wavelengths. We describe the main features of space-ground VLBI data processing for the Radioastron project using the ASC correlator. The implemented fringe search procedure provides positive results without significant losses in correlated amplitude. The ASC correlator has computational power close to real-time operation and supports a number of processing modes: “Continuum”, “Spectral Line”, “Pulsars”, “Giant Pulses”, and “Coherent”. Special attention is paid to the peculiarities of Radioastron space-ground VLBI data processing. The algorithms for calculating time delay and delay rate are also discussed, which is of fundamental importance for data correlation in space-ground interferometers. During five years of successful operation of the Radioastron SRT, the ASC correlator has shown high potential for satisfying the steadily growing needs of current and future ground and space VLBI science. Results of ASC software correlator operation are demonstrated.
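
    The delay calculation on which such a correlator depends can be illustrated in miniature: the lag at which the cross-correlation of two station signals peaks estimates the geometric delay between them. This is a brute-force time-domain sketch; a real FX correlator works in the frequency domain with fractional-sample and delay-rate corrections.

```python
def estimate_delay(x, y, max_lag):
    # Return the integer lag that maximizes sum_i x[i] * y[i + lag],
    # i.e. the cross-correlation peak between the two sampled signals.
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        lo = max(0, -lag)
        hi = min(len(x), len(y) - lag)
        val = sum(x[i] * y[i + lag] for i in range(lo, hi))
        if val > best_val:
            best_val, best_lag = val, lag
    return best_lag
```

    Feeding in a signal and a delayed copy of itself recovers the inserted delay, which is the essence of the fringe search.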

  5. Wheelclimb Derailment Processes and Derailment Criteria

    DOT National Transportation Integrated Search

    1984-06-01

    The most widely accepted criterion for wheelclimb derailment defines an upper limit for safe operation on wheel/rail contact forces on the climbing wheel, with the limit varying with time duration of the forces. For dynamic wheelclimb processes with ...

  6. Method for identifying biochemical and chemical reactions and micromechanical processes using nanomechanical and electronic signal identification

    DOEpatents

    Holzrichter, J.F.; Siekhaus, W.J.

    1997-04-15

    A scanning probe microscope, such as an atomic force microscope (AFM) or a scanning tunneling microscope (STM), is operated in a stationary mode on a site where an activity of interest occurs to measure and identify characteristic time-varying micromotions caused by biological, chemical, mechanical, electrical, optical, or physical processes. The tip and cantilever assembly of an AFM is used as a micromechanical detector of characteristic micromotions transmitted either directly by a site of interest or indirectly through the surrounding medium. Alternatively, the exponential dependence of the tunneling current on the size of the gap in the STM is used to detect micromechanical movement. The stationary mode of operation can be used to observe dynamic biological processes in real time and in a natural environment, such as polymerase processing of DNA for determining the sequence of a DNA molecule. 6 figs.

  7. Method for identifying biochemical and chemical reactions and micromechanical processes using nanomechanical and electronic signal identification

    DOEpatents

    Holzrichter, John F.; Siekhaus, Wigbert J.

    1997-01-01

    A scanning probe microscope, such as an atomic force microscope (AFM) or a scanning tunneling microscope (STM), is operated in a stationary mode on a site where an activity of interest occurs to measure and identify characteristic time-varying micromotions caused by biological, chemical, mechanical, electrical, optical, or physical processes. The tip and cantilever assembly of an AFM is used as a micromechanical detector of characteristic micromotions transmitted either directly by a site of interest or indirectly through the surrounding medium. Alternatively, the exponential dependence of the tunneling current on the size of the gap in the STM is used to detect micromechanical movement. The stationary mode of operation can be used to observe dynamic biological processes in real time and in a natural environment, such as polymerase processing of DNA for determining the sequence of a DNA molecule.

  8. Variability in the skin exposure of machine operators exposed to cutting fluids.

    PubMed

    Wassenius, O; Järvholm, B; Engström, T; Lillienberg, L; Meding, B

    1998-04-01

    This study describes a new technique for measuring skin exposure to cutting fluids and evaluates the variability of skin exposure among machine operators performing cyclic (repetitive) work. The technique is based on video recording and subsequent analysis of the video tape by means of computer-synchronized video equipment. The time intervals at which the machine operator's hand was exposed to fluid were registered, and the total wet time of the skin was calculated by assuming different evaporation times for the fluid. The exposure of 12 operators with different work methods was analyzed in 6 different workshops, which included a range of machine types, from highly automated metal cutting machines (ie, actual cutting and chip removal machines) requiring operator supervision to conventional metal cutting machines, where the operator was required to maneuver the machine and manually exchange products. The relative wet time varied between 0% and 100%. A significant association between short cycle time and high relative wet time was noted. However, there was no relationship between the degree of automatization of the metal cutting machines and wet time. The study shows that skin exposure to cutting fluids can vary considerably between machine operators involved in manufacturing processes using different types of metal cutting machines. The machine type was not associated with dermal wetness. The technique appears to give objective information about dermal wetness.
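
    The wet-time bookkeeping described above reduces to interval arithmetic: each registered exposure keeps the skin wet for an assumed evaporation time, and overlapping wet intervals must not be double-counted. A hypothetical sketch (function name and parameters are illustrative, not the study's software):

```python
def relative_wet_time(exposure_times, evaporation_time, observation_time):
    # Each exposure at time t wets the skin over [t, t + evaporation_time);
    # merge overlapping intervals and report coverage as a fraction of the
    # observation period.
    wet = 0.0
    cur_start = cur_end = None
    for t in sorted(exposure_times):
        s, e = t, t + evaporation_time
        if cur_end is None or s > cur_end:
            if cur_end is not None:
                wet += cur_end - cur_start
            cur_start, cur_end = s, e        # start a new wet interval
        else:
            cur_end = max(cur_end, e)        # extend the current interval
    if cur_end is not None:
        wet += cur_end - cur_start
    return min(wet, observation_time) / observation_time
```

    With a short cycle time, exposures recur before the skin dries, so the merged intervals cover most of the cycle; this mirrors the study's observed association between short cycle times and high relative wet time.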

  9. Electro-optical processing of phased array data

    NASA Technical Reports Server (NTRS)

    Casasent, D.

    1973-01-01

    An on-line spatial light modulator for application as the input transducer for a real-time optical data processing system is described. The use of such a device in the analysis and processing of radar data in real time is reported. An interface from the optical processor to a control digital computer was designed, constructed, and tested. The input transducer, optical system, and computer interface have been operated in real time with real time radar data with the input data returns recorded on the input crystal, processed by the optical system, and the output plane pattern digitized, thresholded, and outputted to a display and storage in the computer memory. The correlation of theoretical and experimental results is discussed.

  10. Software engineering aspects of real-time programming concepts

    NASA Astrophysics Data System (ADS)

    Schoitsch, Erwin

    1986-08-01

    Real-time programming is a discipline of great importance not only in process control, but also in fields like communication, office automation, interactive databases, interactive graphics and operating systems development. General concepts of concurrent programming and constructs for process-synchronization are discussed in detail. Tasking and synchronization concepts, methods of process communication, interrupt and timeout handling in systems based on semaphores, signals, conditional critical regions or on real-time languages like Concurrent PASCAL, MODULA, CHILL and ADA are explained and compared with each other. The second part deals with structuring and modularization of technical processes to build reliable and maintainable real time systems. Software-quality and software engineering aspects are considered throughout the paper.
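
    The process-communication constructs surveyed above can be made concrete with a minimal semaphore-based producer/consumer pair (a generic Python sketch, not tied to any of the languages named in the abstract):

```python
import threading

buffer, consumed = [], []
items = threading.Semaphore(0)   # counts items available to the consumer
lock = threading.Lock()          # protects the shared buffer

def producer():
    for i in range(5):
        with lock:
            buffer.append(i)
        items.release()          # signal: one more item is ready

def consumer():
    for _ in range(5):
        items.acquire()          # block until an item is available
        with lock:
            consumed.append(buffer.pop(0))

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# consumed is now [0, 1, 2, 3, 4]
```

    The semaphore expresses the synchronization condition ("an item exists") while the lock expresses mutual exclusion on the shared buffer; conflating the two is a classic source of the race conditions the article warns about.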

  11. Flare forecasting at the Met Office Space Weather Operations Centre

    NASA Astrophysics Data System (ADS)

    Murray, S. A.; Bingham, S.; Sharpe, M.; Jackson, D. R.

    2017-04-01

    The Met Office Space Weather Operations Centre produces 24/7/365 space weather guidance, alerts, and forecasts to a wide range of government and commercial end-users across the United Kingdom. Solar flare forecasts are one of its products, which are issued multiple times a day in two forms: forecasts for each active region on the solar disk over the next 24 h and full-disk forecasts for the next 4 days. Here the forecasting process is described in detail, as well as first verification of archived forecasts using methods commonly used in operational weather prediction. Real-time verification available for operational flare forecasting use is also described. The influence of human forecasters is highlighted, with human-edited forecasts outperforming original model results and forecasting skill decreasing over longer forecast lead times.
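
    A verification measure of the kind commonly used in operational weather prediction is the Brier score for probabilistic forecasts, with skill measured against a climatological reference. This is an illustrative sketch; the abstract does not state exactly which metrics the Met Office verification used.

```python
def brier_score(probs, outcomes):
    # Mean squared difference between forecast probability and the
    # observed binary outcome (1 = flare occurred, 0 = it did not).
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(outcomes)

def brier_skill_score(probs, outcomes):
    # Skill relative to a constant climatological forecast: 1 is perfect,
    # 0 matches climatology, negative is worse than climatology.
    clim = sum(outcomes) / len(outcomes)
    bs_ref = brier_score([clim] * len(outcomes), outcomes)
    return 1.0 - brier_score(probs, outcomes) / bs_ref
```

    Comparing the skill score of human-edited forecasts against raw model output, as a function of lead time, is one way to quantify the forecaster influence the abstract highlights.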

  12. Impact of the time-out process on safety attitude in a tertiary neurosurgical department.

    PubMed

    McLaughlin, Nancy; Winograd, Deborah; Chung, Hallie R; Van de Wiele, Barbara; Martin, Neil A

    2014-11-01

    In July 2011, the UCLA Health System released its current time-out process protocol used across the Health System. Numerous interventions were performed to improve checklist completion and time-out process observance. This study assessed the impact of the current protocol for the time-out on healthcare providers' safety attitude and operating room safety climate. All members involved in neurosurgical procedures in the main operating room of the Ronald Reagan UCLA Medical Center were asked to anonymously complete an online survey on their overall perception of the time-out process. The survey was completed by 93 of 128 members of the surgical team. Overall, 98.9% felt that performing a pre-incision time-out improves patient safety. The majority of respondents (97.8%) felt that the team member introductions helped to promote a team spirit during the case. In addition, 93.5% felt that performing a time-out helped to ensure all team members were comfortable to voice safety concerns throughout the case. All respondents felt that the attending surgeon should be present during the time-out and 76.3% felt that he/she should lead the time-out. Unanimously, it was felt that the review of anticipated critical elements by the attending surgeon was helpful to respondents' role during the case. Responses revealed that although the time-out brings the team together physically, it does not necessarily reinforce teamwork. The time-out process favorably impacted team members' safety attitudes and perception as well as overall safety climate in neurosurgical ORs. Survey responses identified leadership training and teamwork training as two avenues for future improvement. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Variable Order and Distributed Order Fractional Operators

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Hartley, Tom T.

    2002-01-01

    Many physical processes appear to exhibit fractional-order behavior that may vary with time or space. The continuum of order in the fractional calculus allows the order of a fractional operator to be treated as a variable. This paper develops the concept of variable-order and distributed-order fractional operators. Definitions based on the Riemann-Liouville definition are introduced and the behavior of the operators is studied. Several time-domain definitions that assign different arguments to the order q in the Riemann-Liouville definition are introduced, and for each of these definitions various characteristics are determined, including time invariance of the operator, operator initialization, physical realization, linearity, operational transforms, and memory characteristics of the defining kernels. A measure (m2) of memory retentiveness of the order history is introduced. A generalized linear argument for the order q allows the concept of "tailored" variable-order fractional operators whose memory may be chosen for a particular application. Memory retentiveness (m2) and order-dynamic behavior are investigated and applications are shown. The concept of distributed-order operators, where the order of the time-based operator depends on an additional independent (spatial) variable, is also put forward. Several definitions and their Laplace transforms are developed, analysis methods with these operators are demonstrated, and examples are shown. Finally, operators of multivariable and distributed order are defined and their various applications are outlined.
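
    One minimal way to write the variable-order idea down is the Riemann-Liouville fractional integral with the order evaluated at the current time t. This is a generic form for one of the several argument conventions the paper compares (others assign q(τ) or q(t−τ)), not necessarily the paper's exact notation:

```latex
% Variable-order Riemann-Liouville fractional integral, order evaluated at t:
{}_{0}D_{t}^{-q(t)} f(t) \;=\; \frac{1}{\Gamma\!\bigl(q(t)\bigr)}
    \int_{0}^{t} (t-\tau)^{\,q(t)-1}\, f(\tau)\, d\tau
```

    Replacing the argument of q changes whether the operator weights history by the order at the present time, at the time of the event, or at the elapsed interval, which is exactly the distinction whose memory properties the paper characterizes with the measure m2.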

  14. Study and Analysis of The Robot-Operated Material Processing Systems (ROMPS)

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.

    1996-01-01

    This is a report presenting the progress of a research grant funded by NASA for work performed during 1 Oct. 1994 - 31 Sep. 1995. The report deals with the development, and the investigation of the potential use, of software for data processing for the Robot Operated Material Processing System (ROMPS). It reports on the progress of data processing of calibration samples processed by ROMPS in space and on Earth. First, data were retrieved using the I/O software and manually processed using Microsoft Excel. The retrieval and processing were then automated using a program written in C that reads the telemetry data and produces plots of the time responses of sample temperatures and other desired variables. LabView was also employed to automatically retrieve and process the telemetry data.

  15. GEOTAIL Spacecraft historical data report

    NASA Technical Reports Server (NTRS)

    Boersig, George R.; Kruse, Lawrence F.

    1993-01-01

    The purpose of this GEOTAIL Historical Report is to document ground processing operations information gathered on the GEOTAIL mission during processing activities at the Cape Canaveral Air Force Station (CCAFS). It is hoped that this report may aid management analysis, improve integration processing and forecasting of processing trends, and reduce real-time schedule changes. The GEOTAIL payload is the third Delta 2 Expendable Launch Vehicle (ELV) mission to document historical data. Comparisons of planned versus as-run schedule information are displayed. Information will generally fall into the following categories: (1) payload stay times (payload processing facility/hazardous processing facility/launch complex-17A); (2) payload processing times (planned, actual); (3) schedule delays; (4) integrated test times (experiments/launch vehicle); (5) unique customer support requirements; (6) modifications performed at facilities; (7) other appropriate information (Appendices A & B); and (8) lessons learned (reference Appendix C).

  16. Spectrometer gun

    DOEpatents

    Waechter, David A.; Wolf, Michael A.; Umbarger, C. John

    1985-01-01

    A hand-holdable, battery-operated, microprocessor-based spectrometer gun includes a low-power matrix display and sufficient memory to permit both real-time observation and extended analysis of detected radiation pulses. Universality of the incorporated signal processing circuitry permits operation with various detectors having differing pulse detection and sensitivity parameters.

  17. 40 CFR Figure 1 to Subpart Tttt of... - Example Logs for Recording Leather Finish Use and HAP Content

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Finishing Operations Part 63, Subpt. TTTT, Fig. 1 Figure 1 to Subpart TTTT of Part 63—Example Logs for Recording Leather Finish Use and HAP Content. Month: ______ Year: ______ Finish Inventory Log: finish type; finish usage (pounds); HAP content (mass fraction); date and time; operator's name; product process operation. Monthly...

  18. Impact of the reduction of anaesthesia turnover time on operating room efficiency.

    PubMed

    Sokolovic, E; Biro, P; Wyss, P; Werthemann, C; Haller, U; Spahn, D; Szucs, T

    2002-08-01

    We investigated whether an increase in anaesthesia staffing to permit induction of anaesthesia before the previous case had ended ('overlapping') would increase overall efficiency in the operating room. Hitherto, the average duration of operating sessions was too long, impeding the timely commencement of physicians' ward duties. The investigation was designed as a prospective, non-randomized, interrupted time-series analysis divided into three phases: (a) a baseline of 3.5 months, (b) a 2.5-month intervention phase, in which anaesthesia staffing was increased by one attending physician and one nurse, and (c) a further 2 months under baseline conditions. Data focused on process management were collected from operating room staff, anaesthesia personnel, and surgeons using a structured questionnaire administered daily during the entire study. Turnover time between consecutive operations decreased from 65 to 52 min per operation (95% CI: 9; 17; P = 0.0001). Operating room occupancy increased from 4:28 to 5:27 h per day (95% CI: 50; 68; P = 0.005). The surgeons began their work on the ward 35 min (95% CI: 30; 40) later than before the intervention and their overtime increased from 22:36 to 139:50 h. The time between surgical operations decreased significantly. Increased operating room efficiency owing to overlapping induction of anaesthesia allows more intense scheduling of operations. Thus, physicians and nurses can be released to spend more time with their patients on the ward. Improving the efficiency of the operating room alone is insufficient to improve human resource management at all levels of a surgical clinic.

  19. Adding the Human Element to Ship Manoeuvring Simulations

    NASA Astrophysics Data System (ADS)

    Aarsæther, Karl Gunnar; Moan, Torgeir

    Time-domain simulation of ship manoeuvring has been used in risk analysis to assess the effect of changes to the ship lane, developments in traffic volume, and the associated risk. Ship manoeuvring in its wider socio-technical context comprises the technical systems, operational procedures, the human operators, and support functions. Automated manoeuvring simulations without human operators in the simulation loop have often been preferred in simulation studies because they require little simulation time, with automatic control representing the human element and little effort devoted to explaining the relationship between the guidance and control algorithms and the human operator they replace. This paper describes the development and application of a model of the human element for autonomous time-domain manoeuvring simulations. The method is time-domain and modular, and it was found capable of reproducing observed manoeuvre patterns, though it is limited to representing intended behaviour.

  20. Analysis of dangerous area of single berth oil tanker operations based on CFD

    NASA Astrophysics Data System (ADS)

    Shi, Lina; Zhu, Faxin; Lu, Jinshu; Wu, Wenfeng; Zhang, Min; Zheng, Hailin

    2018-04-01

    Taking a single oil tanker berthed and handling liquid cargo as the research object, we analyzed the theory of VOC diffusion during single-berth tanker operations, built a mesh model of VOC diffusion with the Gambit preprocessor, set up the simulation boundary conditions, and used the Fluent software to simulate how the VOC concentration at five detection points changes with time under specific influencing factors. By analyzing the diffusion of VOCs, we delineated the dangerous area of single-berth oil tanker operations, so as to ensure the safe operation of oil tankers.

  1. The Efficacy of Cognitive Shock

    DTIC Science & Technology

    2015-05-21

    as the doctrinal forms of tempo, momentum, and simultaneity are described in self-referential terms of operating against an enemy. This exploration...will process events differently than those physically experiencing them. The manner in which time and space are processed, interpreted, and utilized...command and control nodes, logistics, and other capabilities not in direct contact with friendly forces.” Time retains an inward, self-oriented focus

  2. Department of Homeland Security Cyber Resilience Review (Case Study)

    DTIC Science & Technology

    2014-01-23

    operational stress and cues. The CRR seeks to elicit the current state of cyber security management practices from key cyber security personnel...Institutionalization in the CRR: processes are acculturated, defined, measured, and governed. Maturity indicator levels (MIL) are used in...processes that produce consistent results over time and are retained during times of stress. Level 0-Incomplete, Level 1-Performed, Level 2

  3. Real-Time Optical Image Processing Techniques

    DTIC Science & Technology

    1988-10-31

    pursued through the analysis, design, and fabrication of pulse frequency modulated halftone screens and the modification of micro-channel spatial...required for non-linear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness...

  4. Intelligent alarming

    NASA Technical Reports Server (NTRS)

    Braden, W. B.

    1992-01-01

    This talk discusses the importance of providing a process operator with concise information about a process fault including a root cause diagnosis of the problem, a suggested best action for correcting the fault, and prioritization of the problem set. A decision tree approach is used to illustrate one type of approach for determining the root cause of a problem. Fault detection in several different types of scenarios is addressed, including pump malfunctions and pipeline leaks. The talk stresses the need for a good data rectification strategy and good process models along with a method for presenting the findings to the process operator in a focused and understandable way. A real time expert system is discussed as an effective tool to help provide operators with this type of information. The use of expert systems in the analysis of actual versus predicted results from neural networks and other types of process models is discussed.
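
    The decision-tree approach mentioned above can be sketched as a chain of threshold tests that terminates in a root-cause diagnosis and a suggested best action. Everything below is hypothetical, purely for illustration: the tag names, thresholds, and diagnoses do not come from the talk.

```python
def diagnose(r):
    # r: dict of current readings for one pump (hypothetical tags).
    # Each branch narrows the root cause and pairs it with a best action.
    if r["flow"] < 0.5 * r["flow_setpoint"]:
        if r["suction_pressure"] < 1.0:
            return ("cavitation: low suction pressure",
                    "check upstream valve lineup")
        return ("possible pipeline leak downstream",
                "isolate the segment and inspect")
    if r["motor_temp"] > 90.0:
        return ("pump motor overheating",
                "reduce load and schedule maintenance")
    return ("no fault detected", "continue monitoring")
```

    In a real-time expert system the readings would first pass through data rectification and a process model, as the talk stresses, so that the tree branches on validated values rather than raw sensor noise.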

  5. Playback system designed for X-Band SAR

    NASA Astrophysics Data System (ADS)

    Yuquan, Liu; Changyong, Dou

    2014-03-01

    SAR (Synthetic Aperture Radar) has extensive applications because it operates independently of daylight and weather. In particular, the X-Band SAR strip-map system designed by the Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, provides high ground resolution images together with large spatial coverage and a short acquisition time, making it promising for many applications. When a sudden disaster strikes, emergency response requires radar signal data and imagery as soon as possible so that action can be taken quickly to reduce losses and save lives. This paper describes an X-Band SAR playback processing system designed for disaster response and scientific needs. It presents the SAR data workflow, including the payload data transmission and reception process. The playback processing system performs signal analysis on the raw data, producing SAR level 0 products and quick-look images. A gigabit network provides efficient radar signal transmission from the recorder to the computation unit, while multi-thread parallel computing and ping-pong buffering ensure computation speed. Together, these allow high-speed data transmission and processing to meet the real-time requirements of SAR radar data playback.
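The ping-pong operation described above can be sketched as double buffering: one buffer is filled with the next data block while a worker thread processes the previous one. The buffer handling and toy data blocks here are illustrative assumptions, not details from the paper.

```python
# Sketch of ping-pong (double) buffering to overlap data reception with
# processing; the two-buffer scheme alternates which buffer is being filled.

import threading

def ping_pong_pipeline(blocks, process):
    """Fill one buffer while the previously filled buffer is processed."""
    results = []
    buffers = [None, None]
    worker = None
    for i, block in enumerate(blocks):
        buffers[i % 2] = bytes(block)        # fill the "ping" buffer
        if worker:                           # wait for "pong" processing to finish
            worker.join()
        worker = threading.Thread(
            target=lambda b=buffers[i % 2]: results.append(process(b)))
        worker.start()
    if worker:
        worker.join()
    return results

print(ping_pong_pipeline([b"ab", b"cd"], lambda b: b.upper()))
# → [b'AB', b'CD']
```

Joining before the next fill keeps the results in order; a production playback system would instead use a queue of fixed-size DMA buffers.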

  6. FINAL REPORT: Transformational electrode drying process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Claus Daniel, C.; Wixom, M.

    2013-12-19

    This report includes major findings and outlook from the transformational electrode drying project performance period from January 6, 2012 to August 1, 2012. Electrode drying before cell assembly is an operational bottleneck in battery manufacturing due to long drying times and batch processing. Water taken up during shipment and other manufacturing steps needs to be removed before final battery assembly. Conventional vacuum ovens are limited in drying speed due to a temperature threshold needed to avoid damaging polymer components in the composite electrode. Roll-to-roll operation and alternative treatments can increase the water desorption and removal rate without overheating and damaging other components in the composite electrode, thus considerably reducing drying time and energy use. The objective of this project was the development of an electrode drying procedure, and the demonstration of processes with no decrease in battery performance. The benchmark for all drying data was an 80°C vacuum furnace treatment with a residence time of 18-22 hours. This report demonstrates an alternative roll-to-roll drying process with a 500-fold improvement in drying time down to 2 minutes and consumption of only 30% of the energy compared to vacuum furnace treatment.

  7. Numerical simulations on unsteady operation processes of N2O/HTPB hybrid rocket motor with/without diaphragm

    NASA Astrophysics Data System (ADS)

    Zhang, Shuai; Hu, Fan; Wang, Donghui; Okolo, Patrick N.; Zhang, Weihua

    2017-07-01

    Numerical simulations of processes within hybrid rocket motors have been conducted in the past, but most of them focused on steady-state analysis. Solid fuel regression rate strongly depends on complicated physicochemical processes and on the internal fluid dynamic behavior within the rocket motor, which change with both space and time during operation and are therefore unsteady in character. In this paper, numerical simulations of the unsteady operational processes of an N2O/HTPB hybrid rocket motor with and without a diaphragm are conducted. A numerical model is established based on the two-dimensional axisymmetric unsteady Navier-Stokes equations with turbulence, combustion, and coupled gas/solid phase formulations. A discrete phase model is used to simulate injection and vaporization of the liquid oxidizer. A dynamic mesh technique is applied to the non-uniform regression of the fuel grain, and results for the unsteady flow field, the variation of the regression rate distribution with time, the regression process of the burning surface, and the internal ballistics are obtained. Due to the presence of eddy flow, the diaphragm increases the regression rate further downstream. Peak regression rates are observed close to flow reattachment regions; these peak values decrease gradually, and the peak position shifts further downstream as time advances. Motor performance is analyzed accordingly: the case with the diaphragm shows increases in combustion efficiency and specific impulse efficiency of roughly 10%, and an increase in ground thrust of 17.8%.

  8. Modeling of human operator dynamics in simple manual control utilizing time series analysis. [tracking (position)

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Osafo-Charles, F.; O'Neill, W. D.; Gottlieb, G. L.

    1982-01-01

    Time series analysis is applied to model human operator dynamics in pursuit and compensatory tracking modes. The normalized residual criterion is used as a one-step analytical tool to encompass the processes of identification, estimation, and diagnostic checking. A parameter constraining technique is introduced to develop more reliable models of human operator dynamics. The human operator is adequately modeled by a second order dynamic system both in pursuit and compensatory tracking modes. In comparing the data sampling rates, 100 msec between samples is adequate and is shown to provide better results than 200 msec sampling. The residual power spectrum and eigenvalue analysis show that the human operator is not a generator of periodic characteristics.
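The second-order model identification described above can be illustrated by a least-squares estimate of a second-order difference equation from synthetic tracking data; the coefficients and signals below are invented for illustration, not the study's measurements.

```python
# Least-squares identification of a second-order (ARX-type) operator model
# from synthetic, noise-free input/output tracking data.

import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(200)                 # tracking input signal
y = np.zeros(200)
for k in range(2, 200):                      # "true" second-order dynamics
    y[k] = 1.2 * y[k-1] - 0.5 * y[k-2] + 0.3 * u[k-1]

# Regressor matrix: past outputs and past input; solve for the coefficients.
X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
a1, a2, b1 = np.linalg.lstsq(X, y[2:], rcond=None)[0]
print(round(a1, 2), round(a2, 2), round(b1, 2))
# → 1.2 -0.5 0.3
```

With noise-free data the estimate recovers the generating coefficients exactly; with real operator data, diagnostic checking of the residuals (as in the paper) decides whether second order is adequate.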

  9. Integrating SAR with Optical and Thermal Remote Sensing for Operational Near Real-Time Volcano Monitoring

    NASA Astrophysics Data System (ADS)

    Meyer, F. J.; Webley, P.; Dehn, J.; Arko, S. A.; McAlpin, D. B.

    2013-12-01

    Volcanic eruptions are among the most significant hazards to human society, capable of triggering natural disasters on regional to global scales. In the last decade, remote sensing techniques have become established in operational forecasting, monitoring, and managing of volcanic hazards. Monitoring organizations, like the Alaska Volcano Observatory (AVO), nowadays rely heavily on remote sensing data from a variety of optical and thermal sensors to provide time-critical hazard information. Despite the high utilization of these remote sensing data to detect and monitor volcanic eruptions, the presence of clouds and a dependence on solar illumination often limit their impact on decision making processes. Synthetic Aperture Radar (SAR) systems are widely believed to be superior to optical sensors in operational monitoring situations, due to the weather and illumination independence of their observations and the sensitivity of SAR to surface changes and deformation. Despite these benefits, the contributions of SAR to operational volcano monitoring have been limited in the past due to (1) high SAR data costs, (2) traditionally long data processing times, and (3) the low temporal sampling frequencies inherent to most SAR systems. In this study, we present improved data access, data processing, and data integration techniques that mitigate some of the above-mentioned limitations and allow, for the first time, a meaningful integration of SAR into operational volcano monitoring systems. We will introduce a new database interface that was developed in cooperation with the Alaska Satellite Facility (ASF) and allows rapid and seamless access to all of ASF's SAR data holdings. We will also present processing techniques that improve the temporal frequency with which hazard-related products can be produced.
These techniques take advantage of modern signal processing technology as well as new radiometric normalization schemes, both enabling the combination of multiple observation geometries in change detection procedures. Additionally, it will be shown how SAR-based hazard information can be integrated with data from optical satellites, thermal sensors, webcams, and models to create near-real-time volcano hazard information. We will introduce a prototype monitoring system that integrates SAR-based hazard information into the near-real-time volcano hazard monitoring system of the Alaska Volcano Observatory. This prototype system was applied to historic eruptions of the volcanoes Okmok and Augustine, both located in the North Pacific. We will show that for these historic eruptions, the addition of SAR data led to a significant improvement in activity detection and eruption monitoring, and improved the accuracy and timeliness of eruption alerts.
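As a sketch of the kind of change detection procedure mentioned above, a simple log-ratio test between two co-registered SAR backscatter images flags pixels whose intensity changed by more than a threshold. The 3 dB threshold and the toy images are assumptions for illustration, not the authors' algorithm.

```python
# Log-ratio change detection between two co-registered SAR intensity images.

import numpy as np

def change_mask(img_before, img_after, threshold_db=3.0):
    """Flag pixels whose backscatter changed by more than threshold_db."""
    ratio_db = 10.0 * np.log10(img_after / img_before)
    return np.abs(ratio_db) > threshold_db

before = np.array([[1.0, 1.0], [1.0, 1.0]])
after  = np.array([[1.0, 4.0], [1.0, 1.0]])   # one pixel brightened ~6 dB
print(int(change_mask(before, after).sum()))
# → 1
```

Radiometric normalization of the two acquisitions, as the study emphasizes, is what makes such a ratio meaningful across different observation geometries.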

  10. RTD-based Material Tracking in a Fully-Continuous Dry Granulation Tableting Line.

    PubMed

    Martinetz, M C; Karttunen, A-P; Sacher, S; Wahl, P; Ketolainen, J; Khinast, J G; Korhonen, O

    2018-06-06

    Continuous manufacturing (CM) offers quality and cost-effectiveness benefits over the currently dominant batch processing. One challenge that needs to be addressed when implementing CM is the traceability of materials through the process, which is needed for the batch/lot definition and control strategy. In this work, the residence time distributions (RTD) of single unit operations (blender, roller compactor, and tablet press) of a continuous dry granulation tableting line were captured with NIR-based methods at selected mass flow rates to create training data. RTD models for continuously operated unit operations and for the entire line were developed based on transfer functions. For the semi-continuously operated bucket conveyor and pneumatic transport, an assumption based on the operation frequency was used. For validation of the parametrized process model, a pre-defined API step change and its propagation through the manufacturing line were computed and compared to multi-scale experimental runs conducted with the fully assembled, continuously operated manufacturing line. This novel approach showed very good prediction power at the selected mass flow rates for a complete continuous dry granulation line. Furthermore, it demonstrates the capabilities of process simulation as a tool to support the development and control of pharmaceutical manufacturing processes. Copyright © 2018. Published by Elsevier B.V.
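The transfer-function composition of unit-operation RTDs can be sketched numerically: for units in series, the line RTD is the convolution of the unit RTDs, and mean residence times add. The tanks-in-series shapes and the parameters below are hypothetical stand-ins, not the paper's fitted models.

```python
# Combining unit-operation RTDs in series by discrete convolution.

import numpy as np
from math import factorial

dt = 0.01
t = np.arange(0, 60, dt)                     # minutes

def tanks_in_series_rtd(t, tau, n):
    """E(t) for n ideal tanks in series with total mean residence time tau."""
    e = t**(n - 1) * np.exp(-n * t / tau) / (factorial(n - 1) * (tau / n)**n)
    return e / (e.sum() * dt)                # renormalize on the discrete grid

blender = tanks_in_series_rtd(t, tau=3.0, n=2)     # hypothetical unit RTDs
press = tanks_in_series_rtd(t, tau=2.0, n=4)
line = np.convolve(blender, press)[:len(t)] * dt   # series connection

mean_rt = (t * line).sum() * dt              # mean residence time of the line
print(round(mean_rt, 2))
```

A step change at the inlet can then be propagated by convolving the inlet concentration profile with `line`, which is how the paper's validation computation works in outline.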

  11. Dynamic Exergy Method for Evaluating the Control and Operation of Oxy-Combustion Boiler Island Systems.

    PubMed

    Jin, Bo; Zhao, Haibo; Zheng, Chuguang; Liang, Zhiwu

    2017-01-03

    Exergy-based methods are widely applied to assess the performance of energy conversion systems; however, these methods mainly focus on a particular steady state and have limited application for evaluating the impact of control on system operation. To obtain the thermodynamic behavior dynamically and reveal the influence of control structures, layers, and loops on system energy performance, a dynamic exergy method is developed, improved, and applied to a complex oxy-combustion boiler island system for the first time. The three most common operating scenarios are studied, and the results show that the flow rate change process leads to less energy consumption than the oxygen purity and air in-leakage change processes. The variation of oxygen purity produces the largest impact on system operation, and the operating parameter sensitivity is not affected by the presence of process control. The control system saves energy during flow rate and oxygen purity change processes, while it consumes energy during the air in-leakage change process. More attention should be paid to the oxygen purity change because it requires the largest control cost. Within the control system, the supervisory control layer requires the greatest energy consumption and the largest control cost to maintain operating targets, while the steam control loops cause the main energy consumption.

  12. Operation reliability analysis of independent power plants of gas-transmission system distant production facilities

    NASA Astrophysics Data System (ADS)

    Piskunov, Maksim V.; Voytkov, Ivan S.; Vysokomornaya, Olga V.; Vysokomorny, Vladimir S.

    2015-01-01

    A new approach was developed to analyze the causes of failure in the operation of independent power supply sources (mini-CHP plants) at remote linear facilities of the gas-transmission system in the eastern part of Russia. The conditions under which the operating-substance temperature at the condenser outlet reaches its ceiling were determined using mathematical simulation of unsteady heat and mass transfer processes in the condensers of mini-CHP plants; under these conditions, the probability of failure of independent power supply sources increases. The influence of environmental factors (in particular, ambient temperature) and of the output electric capacity of the power plant on mini-CHP plant operating reliability was analyzed. Values of mean time to failure and failure density for power plants operating in different regions of Eastern Siberia and the Russian Far East were obtained using numerical simulation results for the heat and mass transfer processes during condensation of the operating substance.

  13. Polyhydroxyalkanoate production as a side stream process on a municipal waste water treatment plant.

    PubMed

    Pittmann, T; Steinmetz, H

    2014-09-01

    This work describes the production of polyhydroxyalkanoates (PHA) as a side stream process on a municipal waste water treatment plant (WWTP) under different operating conditions. Various tests were conducted with the aim of achieving high PHA production and a stable PHA composition. The influence of substrate concentration, temperature, pH, and the cycle time of the installed feast/famine regime was investigated. The results demonstrated a strong influence of the operating conditions on PHA production. A lower substrate concentration, 20°C, a neutral pH value, and a 24 h cycle time are preferable, yielding PHA contents of up to 28.4% of cell dry weight (CDW). PHA composition was influenced only by cycle time, and a stable PHA composition was reached. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Web Monitoring of EOS Front-End Ground Operations, Science Downlinks and Level 0 Processing

    NASA Technical Reports Server (NTRS)

    Cordier, Guy R.; Wilkinson, Chris; McLemore, Bruce

    2008-01-01

    This paper addresses the efforts undertaken and the technology deployed to aggregate and distribute the metadata characterizing the real-time operations associated with NASA Earth Observing Systems (EOS) high-rate front-end systems and the science data collected at multiple ground stations and forwarded to the Goddard Space Flight Center for level 0 processing. Station operators, mission project management personnel, spacecraft flight operations personnel and data end-users for various EOS missions can retrieve the information at any time from any location having access to the internet. The users are distributed and the EOS systems are distributed but the centralized metadata accessed via an external web server provide an effective global and detailed view of the enterprise-wide events as they are happening. The data-driven architecture and the implementation of applied middleware technology, open source database, open source monitoring tools, and external web server converge nicely to fulfill the various needs of the enterprise. The timeliness and content of the information provided are key to making timely and correct decisions which reduce project risk and enhance overall customer satisfaction. The authors discuss security measures employed to limit access of data to authorized users only.

  15. Enhancing The Army Operations Process Through The Incorporation of Holography

    DTIC Science & Technology

    2017-06-09

    the process and gives the user the sense of a noninvasive enhancement to quickly make decisions. Processes and information no longer create...mentally overlaying it onto the process. Data now augments reality and is a noninvasive process for decision making. This paper...environment, augmented on top of reality, decreases the amount of time needed to make decisions

  16. Perspective of Micro Process Engineering for Thermal Food Treatment

    PubMed Central

    Mathys, Alexander

    2018-01-01

    Micro process engineering as a process synthesis and intensification tool enables an ultra-short thermal treatment of foods within milliseconds (ms) using very high surface-area-to-volume ratios. The innovative application of ultra-short pasteurization and sterilization at high temperatures, but with holding times within the range of ms would allow the preservation of liquid foods with higher qualities, thereby avoiding many unwanted reactions with different temperature–time characteristics. Process challenges, such as fouling, clogging, and potential temperature gradients during such conditions need to be assessed on a case by case basis and optimized accordingly. Owing to the modularity, flexibility, and continuous operation of micro process engineering, thermal processes from the lab to the pilot and industrial scales can be more effectively upscaled. A case study on thermal inactivation demonstrated the feasibility of transferring lab results to the pilot scale. It was shown that micro process engineering applications in thermal food treatment may be relevant to both research and industrial operations. Scaling of micro structured devices is made possible through the use of numbering-up approaches; however, reduced investment costs and a hygienic design must be assured. PMID:29686990

  17. Time value of emission and technology discounting rate for off-grid electricity generation in India using intermediate pyrolysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, Amit, E-mail: amitrp@iitrpr.ac.in; Faculty of Technology and Engineering, The Maharaja Sayajirao University of Baroda, Vadodara 390001, Gujarat; Sarkar, Prabir

    The environmental impact assessment of a process over its entire operational lifespan is an important issue. Estimation of life cycle emission helps in predicting the contribution of a given process to abating (or polluting) the environmental emission scenario. Considering the diminishing and time-dependent effect of emission, assessment of the overall effect of emissions is very complex. The paper presents a generalized methodology for arriving at a single emission discounting number for a process option, using the concept of the time value of carbon emission flow. This number incorporates the effect of the emission resulting from the process over its entire operational lifespan. The advantage of this method is its quantitative as well as its flexible nature; it can be applied to any process. The method is demonstrated with the help of an Intermediate Pyrolysis process used to generate off-grid electricity while opting for the biochar route for disposing of straw residue. Scenarios ranging from very high net emission to very high net carbon sequestration are generated by careful selection of process parameters. For these different scenarios, the process discounting rate was determined and its outcome is discussed. The paper also proposes a process-specific eco-label that mentions the discounting rates. - Highlight: • A methodology to obtain the emission discounting rate for a process is proposed. • The method includes all components of life cycle emission and converts them into a time-dependent discounting number. • A case study of Intermediate Pyrolysis is used to obtain such numbers for a range of processes. • The method is useful to determine whether the operation of a process will lead to a net absorption of emission or a net accumulation of emission in the environment.
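The idea of a time value of emission can be illustrated by collapsing an emission stream into a single present-value number using a discounting rate; the emission profile and the 5% rate below are hypothetical, not the paper's results.

```python
# Present-value discounting of an annual emission stream (hypothetical data).

def discounted_emissions(annual_emissions, rate):
    """Discount the emission in year t by (1 + rate)**t and sum the stream."""
    return sum(e / (1 + rate) ** t for t, e in enumerate(annual_emissions))

# 10 years of 100 t CO2-eq/yr at a 5% discounting rate
print(round(discounted_emissions([100.0] * 10, 0.05), 2))
# → 810.78
```

A negative stream (net sequestration, e.g. via biochar) would yield a negative present value, which is how the method distinguishes net-absorbing from net-accumulating processes.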

  18. Turnaround Time Modeling for Conceptual Rocket Engines

    NASA Technical Reports Server (NTRS)

    Nix, Michael; Staton, Eric J.

    2004-01-01

    Recent years have brought about a paradigm shift within NASA and the Space Launch Community regarding the performance of conceptual design. Reliability, maintainability, supportability, and operability are no longer effects of design; they have moved to the forefront and are affecting design. A primary focus of this shift has been a planned decrease in vehicle turnaround time. Potential avenues for achieving this decrease include attacking the issues of removing, refurbishing, and replacing the engines after each flight. Regardless, it is important to understand the operational effects of an engine on turnaround time, ground support personnel, and equipment. One tool for visualizing this relationship involves the creation of a Discrete Event Simulation (DES). A DES model can be used to run a series of trade studies to determine if the engine is meeting its requirements, and, if not, what can be altered to bring it into compliance. Using DES, it is possible to look at the ways in which labor requirements, parallel maintenance versus serial maintenance, and maintenance scheduling affect the overall turnaround time. A detailed DES model of the Space Shuttle Main Engines (SSME) has been developed. Trades may be performed using the SSME Processing Model to see where maintenance bottlenecks occur and what the benefits (if any) are of increasing the number of personnel, or the number and location of facilities, in addition to the trades previously mentioned, all with the goal of optimizing the operational turnaround time and minimizing operational cost. The SSME Processing Model was developed in such a way that it can easily be used as a foundation for developing DES models of other operational or developmental reusable engines. Performing a DES on a developmental engine during the conceptual phase makes it easier to affect the design and make changes to bring about a decrease in turnaround time and costs.
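The serial-versus-parallel maintenance trade that such a DES explores can be sketched with a toy event simulation; the task durations and crew counts are hypothetical, not SSME Processing Model data.

```python
# Toy discrete-event sketch: assign maintenance tasks to the earliest-free
# crew and report the resulting turnaround (makespan).

import heapq

def turnaround_time(task_durations, crews):
    """Greedy event simulation over a min-heap of crew free times."""
    free_at = [0.0] * crews                  # one entry per crew
    heapq.heapify(free_at)
    finish = 0.0
    for d in task_durations:
        start = heapq.heappop(free_at)       # earliest available crew
        end = start + d
        finish = max(finish, end)
        heapq.heappush(free_at, end)
    return finish

tasks = [8, 6, 4, 4, 2]                      # hours per maintenance task
print(turnaround_time(tasks, crews=1))       # serial maintenance
# → 24
print(turnaround_time(tasks, crews=3))       # parallel maintenance
# → 8
```

Varying `crews` against the turnaround reproduces, in miniature, the labor-versus-schedule trade studies the abstract describes.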

  19. Implementation of in-line infrared monitor in full-scale anaerobic digestion process.

    PubMed

    Spanjers, H; Bouvier, J C; Steenweg, P; Bisschops, I; van Gils, W; Versprille, B

    2006-01-01

    During start up but also during normal operation, anaerobic reactor systems should be run and monitored carefully to secure trouble-free operation, because the process is vulnerable to disturbances such as temporary overloading, biomass wash out and influent toxicity. The present method of monitoring is usually by manual sampling and subsequent laboratory analysis. Data collection, processing and feedback to system operation is manual and ad hoc, and involves high-level operator skills and attention. As a result, systems tend to be designed at relatively conservative design loading rates resulting in significant over-sizing of reactors and thus increased systems cost. It is therefore desirable to have on-line and continuous access to performance data on influent and effluent quality. Relevant variables to indicate process performance include VFA, COD, alkalinity, sulphate, and, if aerobic post-treatment is considered, total nitrogen, ammonia and nitrate. Recently, mid-IR spectrometry was demonstrated on a pilot scale to be suitable for in-line simultaneous measurement of these variables. This paper describes a full-scale application of the technique to test its ability to monitor continuously and without human intervention the above variables simultaneously in two process streams. For VFA, COD, sulphate, ammonium and TKN good agreement was obtained between in-line and manual measurements. During a period of six months the in-line measurements had to be interrupted several times because of clogging. It appeared that the sample pre-treatment unit was not able to cope with high solids concentrations all the time.

  20. Machine-Checkable Timed CSP

    NASA Technical Reports Server (NTRS)

    Goethel, Thomas; Glesner, Sabine

    2009-01-01

    The correctness of safety-critical embedded software is crucial, whereas non-functional properties like deadlock-freedom and real-time constraints are particularly important. The real-time calculus Timed Communicating Sequential Processes (CSP) is capable of expressing such properties and can therefore be used to verify embedded software. In this paper, we present our formalization of Timed CSP in the Isabelle/HOL theorem prover, which we have formulated as an operational coalgebraic semantics together with bisimulation equivalences and coalgebraic invariants. Furthermore, we apply these techniques in an abstract specification with real-time constraints, which is the basis for current work in which we verify the components of a simple real-time operating system deployed on a satellite.

  1. Dynamic Simulation of a Helium Liquefier

    NASA Astrophysics Data System (ADS)

    Maekawa, R.; Ooba, K.; Nobutoki, M.; Mito, T.

    2004-06-01

    The dynamic behavior of a helium liquefier has been studied in detail with a Cryogenic Process REal-time SimulaTor (C-PREST) at the National Institute for Fusion Science (NIFS). C-PREST is being developed to integrate large-scale helium cryogenic plant design, operation, and maintenance for optimum process establishment. As a first step, a simulation of cooldown to 4.5 K with the helium liquefier model is conducted, which provides a plant-process validation platform. The helium liquefier consists of seven heat exchangers, a liquid-nitrogen (LN2) precooler, two expansion turbines, and a liquid-helium (LHe) reservoir. Process simulations are carried out with sequence programs, which were implemented in C-PREST based on an existing liquefier operation. The interactions of a JT valve, a JT-bypass valve, and a reservoir-return valve have been dynamically simulated. The paper discusses various aspects of refrigeration process simulation, including difficulties such as the balance between the complexity of the adopted models and CPU time.

  2. Assessment, Planning, and Execution Considerations for Conjunction Risk Assessment and Mitigation Operations

    NASA Technical Reports Server (NTRS)

    Frigm, Ryan C.; Levi, Joshua A.; Mantziaras, Dimitrios C.

    2010-01-01

    An operational Conjunction Assessment Risk Analysis (CARA) concept is the real-time process of assessing risk posed by close approaches and reacting to those risks if necessary. The most effective way to completely mitigate conjunction risk is to perform an avoidance maneuver. The NASA Goddard Space Flight Center has implemented a routine CARA process since 2005. Over this period, considerable experience has been gained and many lessons have been learned. This paper identifies and presents these experiences as general concepts in the description of the Conjunction Assessment, Flight Dynamics, and Flight Operations methodologies and processes. These general concepts will be tied together and will be exemplified through a case study of an actual high risk conjunction event for the Aura mission.

  3. Meta-control of combustion performance with a data mining approach

    NASA Astrophysics Data System (ADS)

    Song, Zhe

    A large-scale combustion process is complex and poses challenges for optimizing its performance. Traditional approaches based on thermal dynamics have limitations in finding optimal operational regions due to the time-shifting nature of the process. Recent advances in information technology enable people to collect large volumes of process data easily and continuously. The collected process data contain rich information about the process and, to some extent, represent a digital copy of the process over time. Although large volumes of data exist in industrial combustion processes, they are not fully utilized to the level where the process can be optimized. Data mining is an emerging science that finds patterns or models in large data sets. It has found many successful applications in business marketing, medical, and manufacturing domains. The focus of this dissertation is on applying data mining to industrial combustion processes, and ultimately optimizing combustion performance. The philosophy, methods, and frameworks discussed in this research can also be applied to other industrial processes. Optimizing an industrial combustion process has two major challenges. One is that the underlying process model changes over time, and obtaining an accurate process model is nontrivial. The other is that a process model with high fidelity is usually highly nonlinear, so solving the optimization problem needs efficient heuristics. This dissertation is set to solve these two major challenges. The major contribution of this four-year research is a data-driven solution to optimize the combustion process, in which a process model or knowledge is identified from the process data, and optimization is then executed by evolutionary algorithms to search for optimal operating regions.

  4. Time course of cognitive recovery after propofol anaesthesia: a level of processing approach.

    PubMed

    N'Kaoua, Bernard; Véron, Anne-Lise H; Lespinet, Véronique C; Claverie, Bernard; Sztark, François

    2002-09-01

    The aim of this study was to investigate the time course of recovery of verbal memory after general anaesthesia, as a function of the level (shallow or deep) of processing induced at the time of encoding. Thirty-one patients anaesthetized with propofol and alfentanil were compared with 28 control patients receiving only alfentanil. Memory functions were assessed the day before and 1, 6 and 24 hr after operation. Results show that for the anaesthetized group, shallow processing was impaired for 6 hr after surgery whereas the deeper processing was not recovered even at 24 hr. In addition, no specific effect of age was found.

  5. Spectrometer gun

    DOEpatents

    Waechter, D.A.; Wolf, M.A.; Umbarger, C.J.

    1981-11-03

    A hand-holdable, battery-operated, microprocessor-based spectrometer gun is described that includes a low-power matrix display and sufficient memory to permit both real-time observation and extended analysis of detected radiation pulses. Universality of the incorporated signal processing circuitry permits operation with various detectors having differing pulse detection and sensitivity parameters.

  6. The operational processing of wind estimates from cloud motions: Past, present and future

    NASA Technical Reports Server (NTRS)

    Novak, C.; Young, M.

    1977-01-01

    Current NESS winds operations provide approximately 1800 high quality wind estimates per day to about twenty domestic and foreign users. This marked improvement in NESS winds operations was the result of computer techniques development which began in 1969 to streamline and improve operational procedures. In addition, the launch of the SMS-1 satellite in 1974, the first in the second generation of geostationary spacecraft, provided an improved source of visible and infrared scanner data for the extraction of wind estimates. Currently, operational winds processing at NESS is accomplished by the automated and manual analyses of infrared data from two geostationary spacecraft. This system uses data from SMS-2 and GOES-1 to produce wind estimates valid for 00Z, 12Z and 18Z synoptic times.

  7. The Impact of Number Mismatch and Passives on the Real-Time Processing of Relative Clauses

    ERIC Educational Resources Information Center

    Contemori, Carla; Marinis, Theodoros

    2014-01-01

    Language processing plays a crucial role in language development, providing the ability to assign structural representations to input strings (e.g., Fodor, 1998). In this paper we aim at contributing to the study of children's processing routines, examining the operations underlying the auditory processing of relative clauses in children…

  8. Process for producing laser-formed video calibration markers.

    PubMed

    Franck, J B; Keller, P N; Swing, R A; Silberberg, G G

    1983-08-15

    A process for producing calibration markers directly on the photoconductive surface of video camera tubes has been developed. This process uses a Nd:YAG laser operating at 1.06 µm with a 9.5 ns pulse width (full width at half-maximum). The laser was constrained to operate in the TEM(00) spatial mode by intracavity aperturing. The use of this technology has produced up to a 50-fold increase in the accuracy of geometric measurement, accomplished by a decrease in geometric distortion and an increase in geometric scaling. The process by which these laser-formed video calibrations are made will be discussed.

  9. Superior memory efficiency of quantum devices for the simulation of continuous-time stochastic processes

    NASA Astrophysics Data System (ADS)

    Elliott, Thomas J.; Gu, Mile

    2018-03-01

    Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous-time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine.
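
    The classical memory cost the authors compare against can be made concrete with a small sketch. The simulator below is a hypothetical illustration (not the paper's quantum construction): it generates a continuous-time renewal process classically, and the only state carried between events is the clock since the last event, exactly the quantity whose precision drives the classical memory requirement.

```python
import random

def simulate_renewal(draw_interval, horizon, seed=0):
    """Classical simulation of a continuous-time renewal process: events are
    separated by i.i.d. intervals drawn from `draw_interval`. Between events,
    the simulator's only memory is the elapsed time since the last event."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += draw_interval(rng)  # time since last event resets on each draw
        if t > horizon:
            break
        events.append(t)
    return events

# Exponential intervals give the memoryless (Poisson) special case.
events = simulate_renewal(lambda rng: rng.expovariate(2.0), horizon=10.0)
```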

  10. Army Science & Technology: Problems and Challenges

    DTIC Science & Technology

    2012-03-01

    Boundary Conditions: Who: Small Units in COIN/Stability Operations. What: Provide affordable real-time translations and … Soldiers, Leaders and Units in complex tactical operations exceeds the Army's current capability for home-station … Challenge: Formulate a S&T program to capture, process and electronically disseminate near-real-time medical information on Soldier … advance trauma management.

  11. Image enhancement software for underwater recovery operations: User's manual

    NASA Astrophysics Data System (ADS)

    Partridge, William J.; Therrien, Charles W.

    1989-06-01

    This report describes software for performing image enhancement on live or recorded video images. The software was developed for operational use during underwater recovery operations at the Naval Undersea Warfare Engineering Station. The image processing is performed on an IBM-PC/AT compatible computer equipped with hardware to digitize and display video images. The software provides contrast enhancement and similar functions in real time through hardware lookup tables, automatic histogram equalization, and the ability to capture one or more frames and either average them or apply one of several processing algorithms to the captured frame. The report is in the form of a user manual for the software and includes guided tutorial and reference sections. A digital image processing primer in the appendix explains the principal concepts used in the image processing.
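
    Histogram equalization, one of the functions the software automates, reduces to building a lookup table (LUT) from the image's cumulative histogram, the same LUT mechanism the report applies in hardware. A minimal sketch in Python; the function name and 8-bit assumption are illustrative:

```python
def equalize_histogram(pixels, levels=256):
    """Contrast enhancement by histogram equalization, implemented as a
    lookup table (LUT) mapping each gray level to a new one."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution of gray levels.
    cdf, total = [0] * levels, 0
    for i, h in enumerate(hist):
        total += h
        cdf[i] = total
    cdf_min = next(c for c in cdf if c > 0)
    # LUT: stretch the occupied part of the CDF over the full range.
    lut = [round((cdf[i] - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for i in range(levels)]
    return [lut[p] for p in pixels]

# A low-contrast ramp spreads out to use the full dynamic range.
out = equalize_histogram([100, 100, 101, 101, 102, 102, 103, 103])
```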

  12. Method for conducting electroless metal-plating processes

    DOEpatents

    Petit, George S.; Wright, Ralph R.

    1978-01-01

    This invention is an improved method for conducting electroless metal-plating processes in a metal tank which is exposed to the plating bath. The invention solves a problem commonly encountered in such processes: how to determine when it is advisable to shut down the process in order to clean and/or re-passivate the tank. The new method comprises contacting the bath with a current-conducting, non-catalytic probe and, during plating operations, monitoring the gradually changing difference in electropotential between the probe and tank. It has been found that the value of this voltage is indicative of the extent to which nickel-bearing decomposition products accumulate on the tank. By utilizing the voltage to determine when shutdown for cleaning is advisable, the operator can avoid premature shutdown and at the same time avoid prolonging operations to the point that spontaneous decomposition occurs.
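
    The monitoring logic of the method can be sketched in a few lines. The threshold value and names below are hypothetical illustrations, not figures from the patent:

```python
def shutdown_advisable(voltages_mV, threshold_mV):
    """Sketch of the patent's idea: monitor the gradually changing
    probe-to-tank potential difference during plating and report the first
    reading at which it reaches a shutdown threshold (the threshold here is
    hypothetical). Returns the reading's index, or None while operation can
    safely continue."""
    for i, v in enumerate(voltages_mV):
        if v >= threshold_mV:
            return i
    return None

# Drifting potential as nickel-bearing decomposition products accumulate.
drift = [10, 12, 15, 19, 24, 31, 40]  # millivolts, illustrative readings
when = shutdown_advisable(drift, threshold_mV=30)
```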

  13. Optimizing operating parameters of a honeycomb zeolite rotor concentrator for processing TFT-LCD volatile organic compounds with competitive adsorption characteristics.

    PubMed

    Lin, Yu-Chih; Chang, Feng-Tang

    2009-05-30

    In this study, we attempted to enhance the removal efficiency of a honeycomb zeolite rotor concentrator (HZRC), operated at optimal parameters, for processing TFT-LCD volatile organic compounds (VOCs) with competitive adsorption characteristics. The results indicated that when the HZRC processed a VOC stream of mixed compounds, compounds with a high boiling point took precedence in the adsorption process, and low-boiling-point compounds already adsorbed onto the HZRC were displaced by the high-boiling-point compounds. To achieve optimal operating parameters for high VOC removal efficiency, the results suggested controlling the inlet velocity to <1.5 m/s, limiting the concentration ratio to 8, increasing the desorption temperature to 200-225 °C, and setting the rotation speed to 6.5 rpm.

  14. Strategic planning for the International Space Station

    NASA Technical Reports Server (NTRS)

    Griner, Carolyn S.

    1990-01-01

    The concept for utilization and operations planning for the International Space Station Freedom was developed in a NASA Space Station Operations Task Force in 1986. Since that time the concept has been further refined to definitize the process and products required to integrate the needs of the international user community with the operational capabilities of the Station in its evolving configuration. The keystone to the process is the development of individual plans by the partners, with the parameters and formats common to the degree that electronic communications techniques can be effectively utilized, while maintaining the proper level and location of configuration control. The integration, evaluation, and verification of the integrated plan, called the Consolidated Operations and Utilization Plan (COUP), is being tested in a multilateral environment to prove out the parameters, interfaces, and process details necessary to produce the first COUP for Space Station in 1991. This paper will describe the concept, process, and the status of the multilateral test case.

  15. IRQN award paper: Operational rounds: a practical administrative process to improve safety and clinical services in radiology.

    PubMed

    Donnelly, Lane F; Dickerson, Julie M; Lehkamp, Todd W; Gessner, Kevin E; Moskovitz, Jay; Hutchinson, Sally

    2008-11-01

    As part of a patient safety program in the authors' department of radiology, operational rounds have been instituted. This process consists of radiology leaders' visiting imaging divisions at the site of imaging and discussing frontline employees' concerns about patient safety, the quality of care, and patient and family satisfaction. Operational rounds are executed at a time to optimize the number of attendees. Minutes that describe the issues identified, persons responsible for improvement, and updated improvement plan status are available to employees online. Via this process, multiple patient safety and other issues have been identified and remedied. The authors believe that the process has improved patient safety, the quality of care, and the efficiency of operations. Since the inception of the safety program, the mean number of days between serious safety events involving radiology has doubled. The authors review the background around such walk rounds, describe their particular program, and give multiple illustrative examples of issues identified and improvement plans put in place.

  16. Studies of ZVS soft switching of dual-active-bridge isolated bidirectional DC-DC converters

    NASA Astrophysics Data System (ADS)

    Xu, Fei; Zhao, Feng; Shi, Qibiao; Wen, Xuhui

    2018-05-01

    To operate a dual-active-bridge isolated bidirectional dc-dc converter (DAB) at high efficiency, the switches of both bridges must achieve zero-voltage switching (ZVS) over as wide an operating range as possible. This paper offers a new perspective on realizing ZVS during the dead-time. An exact theoretical analysis and mathematical model are developed to explain the ZVS switching process during the dead-time under the single-phase-shift (SPS) control strategy, and every SPS switching point is analyzed to ensure that the switches of both bridges operate with soft switching. In general, the dead-time is fixed once the power electronic devices are selected; the key factor in realizing ZVS is whether the resonance ends before the dead-time does. The detailed analysis yields the conditions under which all switches achieve ZVS turn-on and turn-off. Finally, simulation validates the theoretical analysis, and practical advice is given for realizing ZVS soft switching.
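
    The key factor named above, whether the resonance finishes before the dead-time ends, can be written as a back-of-envelope check. The quarter-period estimate and the component values below are simplifying assumptions for illustration, not the paper's exact analysis:

```python
import math

def zvs_within_dead_time(L_res_H, C_oss_F, dead_time_s):
    """Rough ZVS criterion: the resonant voltage transition (taken here as a
    quarter period of the resonance between the commutating inductance and
    the switch output capacitance) must end inside the dead-time."""
    t_res = 0.5 * math.pi * math.sqrt(L_res_H * C_oss_F)  # quarter period
    return t_res <= dead_time_s, t_res

# Illustrative values: 10 uH commutating inductance, 1 nF output capacitance.
ok, t_res = zvs_within_dead_time(10e-6, 1e-9, dead_time_s=200e-9)
```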

  17. Putting ROSE to Work: A Proposed Application of a Request-Oriented Scheduling Engine for Space Station Operations

    NASA Technical Reports Server (NTRS)

    Jaap, John; Muery, Kim

    2000-01-01

    Scheduling engines are found at the core of software systems that plan and schedule activities and resources. A Request-Oriented Scheduling Engine (ROSE) is one that processes a single request (adding a task to a timeline) and then waits for another request. For the International Space Station, a robust ROSE-based system would support multiple, simultaneous users, each formulating requests (defining scheduling requirements), submitting these requests via the internet to a single scheduling engine operating on a single timeline, and immediately viewing the resulting timeline. ROSE is significantly different from the engine currently used to schedule Space Station operations. The current engine supports essentially one person at a time, with a pre-defined set of requirements from many payloads, working in either a "batch" scheduling mode or an interactive/manual scheduling mode. A planning and scheduling process that takes advantage of the features of ROSE could produce greater customer satisfaction at reduced cost and reduced flow time. This paper describes a possible ROSE-based scheduling process and identifies the additional software component required to support it. Resulting changes to the management and control of the process are also discussed.
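
    The request-oriented idea is easy to sketch: one engine, one shared timeline, one request at a time, with the result visible immediately. The class below is a hypothetical illustration of that contract, not ROSE's actual interface:

```python
class Timeline:
    """Single shared timeline for a request-oriented scheduling engine.
    Requests arrive one at a time; each is checked against every booking the
    engine has already accepted, and the caller sees the result at once."""

    def __init__(self):
        self.booked = []  # list of (start, end, task) tuples

    def request(self, task, start, end):
        # Accept only if the interval overlaps no existing booking.
        for s, e, _ in self.booked:
            if start < e and s < end:
                return False
        self.booked.append((start, end, task))
        self.booked.sort()
        return True

tl = Timeline()
a = tl.request("payload-A", 0, 4)  # accepted on the empty timeline
b = tl.request("payload-B", 2, 6)  # rejected: overlaps payload-A
c = tl.request("payload-B", 4, 6)  # accepted: touches but does not overlap
```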

  18. Influence of operational parameters on nitrogen removal efficiency and microbial communities in a full-scale activated sludge process.

    PubMed

    Kim, Young Mo; Cho, Hyun Uk; Lee, Dae Sung; Park, Donghee; Park, Jong Moon

    2011-11-01

    To improve the efficiency of total nitrogen (TN) removal, solid retention time (SRT) and internal recycling ratio controls were selected as operating parameters in a full-scale activated sludge process treating high strength industrial wastewater. Increased biomass concentration via SRT control enhanced TN removal. Also, decreasing the internal recycling ratio restored the nitrification process, which had been inhibited by phenol shock loading. Therefore, physiological alteration of the bacterial populations by application of specific operational strategies may stabilize the activated sludge process. Additionally, two dominant ammonia oxidizing bacteria (AOB) populations, Nitrosomonas europaea and Nitrosomonas nitrosa, were observed in all samples with no change in the community composition of AOB. In a nitrification tank, it was observed that the Nitrobacter populations consistently exceeded those of the Nitrospira within the nitrite oxidizing bacteria (NOB) community. Through quantitative real-time PCR (qPCR), nirS, the nitrite-reducing functional gene, was observed to predominate in the activated sludge of the anoxic tank, whereas narG, the nitrate-reducing functional gene, was least abundant. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. International online support to process optimisation and operation decisions.

    PubMed

    Onnerth, T B; Eriksson, J

    2002-01-01

    The information level at technical facilities has developed from almost nothing 30-40 years ago to advanced IT (Information Technology) systems based on both chemical and mechanical on-line sensors for process and equipment. Still, the basic requirement is to get the right data at the right time for the decision to be made. Today a large amount of operational data is available at almost any European wastewater treatment plant, from the laboratory and SCADA. The difficult part is to determine which data to keep, which to use in calculations, and how and where to make data available. With the STARcontrol system it is possible to separate out only process-relevant data for use in on-line control and reporting at the engineering level, to optimise operation. Furthermore, the use of IT makes international communication possible, with full access to the complete data set of a single plant. In this way, expert supervision can be both very local, in the local language (e.g. Polish), and at the same time very professional, with Danish experts advising on Danish processes in Poland or Sweden, where some of the 12 STARcontrol systems are running.

  20. Evaluating photo-degradation of COD and TOC in petroleum refinery wastewater by using TiO2/ZnO photo-catalyst.

    PubMed

    Aljuboury, Dheeaa Al Deen Atallah; Palaniandy, Puganeshwary; Abdul Aziz, Hamidi Bin; Feroz, Shaik; Abu Amr, Salem S

    2016-09-01

    The aim of this study is to investigate the performance of a combined solar photo-catalyst of titanium oxide/zinc oxide (TiO2/ZnO) with aeration processes to treat petroleum wastewater. Central composite design with response surface methodology was used to evaluate the relationships between the operating variables TiO2 dosage, ZnO dosage, air flow, pH, and reaction time, to identify the optimum operating conditions. Quadratic models for chemical oxygen demand (COD) and total organic carbon (TOC) removals prove to be significant with low probabilities (<0.0001). The obtained optimum conditions included a reaction time of 170 min, a TiO2 dosage of 0.5 g/L, a ZnO dosage of 0.54 g/L, an air flow of 4.3 L/min, and pH 6.8, with COD and TOC removal rates of 99% and 74%, respectively. The TOC and COD removal rates correspond well with the predicted models. The maximum removal rates for TOC and COD were 99.3% and 76%, respectively, at the optimum operational conditions of TiO2 dosage (0.5 g/L), ZnO dosage (0.54 g/L), air flow (4.3 L/min), reaction time (170 min) and pH (6.8). The new treatment process achieved higher degradation efficiencies for TOC and COD and reduced the treatment time compared with other related processes.
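
    The quadratic (central composite design) models referred to above have a standard two-factor form that can be evaluated and searched directly. A sketch with illustrative coefficients, not the paper's fitted values:

```python
def quadratic_response(x1, x2, b):
    """Two-factor slice of a CCD/response-surface quadratic model:
    y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    b0, b1, b2, b11, b22, b12 = b
    return b0 + b1*x1 + b2*x2 + b11*x1*x1 + b22*x2*x2 + b12*x1*x2

# Grid-search coded factor space for the maximum predicted removal (%).
b = (70.0, 5.0, 3.0, -4.0, -2.0, 1.0)  # illustrative fitted coefficients
best = max((quadratic_response(i / 10, j / 10, b), i / 10, j / 10)
           for i in range(-20, 21) for j in range(-20, 21))
```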

  1. Characterization of Tactical Departure Scheduling in the National Airspace System

    NASA Technical Reports Server (NTRS)

    Capps, Alan; Engelland, Shawn A.

    2011-01-01

    This paper discusses and analyzes current day utilization and performance of the tactical departure scheduling process in the National Airspace System (NAS) to understand the benefits of improving this process. The analysis used operational air traffic data from over 1,082,000 flights in January 2011. Specific metrics included the frequency of tactical departure scheduling, site specific variances in the technology's utilization, the departure time prediction compliance used in the tactical scheduling process, and the performance with which the current system can predict the airborne slot that aircraft are being scheduled into from the airport surface. Operational data analysis described in this paper indicates significant room for improvement exists in the current system, primarily in the area of reduced departure time prediction uncertainty. Results indicate that a significant number of tactically scheduled aircraft did not meet their scheduled departure slot due to departure time uncertainty. In addition to missed slots, the operational data analysis identified increased controller workload associated with tactical departures which were subject to traffic management manual re-scheduling or controller swaps. An analysis of achievable levels of departure time prediction accuracy as obtained by a new integrated surface and tactical scheduling tool is provided to assess the benefit it may provide as a solution to the identified shortfalls. A list of NAS facilities that are likely to receive the greatest benefit from the integrated surface and tactical scheduling technology is provided.
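
    A compliance metric of the kind analyzed here can be sketched simply; the two-minute window below is an assumed value for illustration, not the paper's operational definition:

```python
def slot_compliance(predicted, actual, window_min=2.0):
    """Fraction of flights whose actual off time falls within plus or minus
    `window_min` minutes of the predicted departure time (illustrative
    definition of departure time prediction compliance)."""
    hits = sum(1 for p, a in zip(predicted, actual) if abs(a - p) <= window_min)
    return hits / len(predicted)

# Minutes past the hour for five hypothetical departures.
rate = slot_compliance([10, 20, 30, 40, 50], [11, 25, 30, 43, 49])
```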

  2. The value of oxygen-isotope data and multiple discharge records in calibrating a fully-distributed, physically-based rainfall-runoff model (CRUM3) to improve predictive capability

    NASA Astrophysics Data System (ADS)

    Neill, Aaron; Reaney, Sim

    2015-04-01

    Fully-distributed, physically-based rainfall-runoff models attempt to capture some of the complexity of the runoff processes that operate within a catchment, and have been used to address a variety of issues including water quality and the effect of climate change on flood frequency. Two key issues are prevalent, however, which call into question the predictive capability of such models. The first is the issue of parameter equifinality which can be responsible for large amounts of uncertainty. The second is whether such models make the right predictions for the right reasons - are the processes operating within a catchment correctly represented, or do the predictive abilities of these models result only from the calibration process? The use of additional data sources, such as environmental tracers, has been shown to help address both of these issues, by allowing for multi-criteria model calibration to be undertaken, and by permitting a greater understanding of the processes operating in a catchment and hence a more thorough evaluation of how well catchment processes are represented in a model. Using discharge and oxygen-18 data sets, the ability of the fully-distributed, physically-based CRUM3 model to represent the runoff processes in three sub-catchments in Cumbria, NW England has been evaluated. These catchments (Morland, Dacre and Pow) are part of the of the River Eden demonstration test catchment project. The oxygen-18 data set was firstly used to derive transit-time distributions and mean residence times of water for each of the catchments to gain an integrated overview of the types of processes that were operating. A generalised likelihood uncertainty estimation procedure was then used to calibrate the CRUM3 model for each catchment based on a single discharge data set from each catchment. 
Transit-time distributions and mean residence times of water obtained from the model using the top 100 behavioural parameter sets for each catchment were then compared to those derived from the oxygen-18 data to see how well the model captured catchment dynamics. The value of incorporating the oxygen-18 data set, as well as discharge data sets from multiple as opposed to single gauging stations in each catchment, in the calibration process to improve the predictive capability of the model was then investigated. This was achieved by assessing by how much the identifiability of the model parameters and the ability of the model to represent the runoff processes operating in each catchment improved with the inclusion of the additional data sets with respect to the likely costs that would be incurred in obtaining the data sets themselves.
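
    The GLUE selection step described above, scoring parameter sets against observed discharge and keeping the behavioural ones, can be sketched as follows. The Nash-Sutcliffe efficiency is one common likelihood measure; the threshold is an illustrative assumption, and `keep=100` mirrors the study's top 100 behavioural sets:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations
    (1.0 is a perfect fit)."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def behavioural_sets(parameter_sets, obs, run_model, threshold=0.5, keep=100):
    """GLUE-style selection sketch: score each parameter set against the
    observed series and retain the best-performing (behavioural) ones."""
    scored = [(nash_sutcliffe(obs, run_model(p)), p) for p in parameter_sets]
    behavioural = [sp for sp in scored if sp[0] >= threshold]
    behavioural.sort(key=lambda sp: sp[0], reverse=True)
    return behavioural[:keep]

# Toy "model": a constant bias added to the observed series.
obs = [1.0, 2.0, 3.0]
kept = behavioural_sets([0.0, 1.0], obs, lambda p: [o + p for o in obs])
```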

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Yueying; Kruger, Albert A.

    The Hanford Tank Waste Treatment and Immobilization Plant (WTP) Statement of Work (Department of Energy Contract DE-AC27-01RV14136, Section C) requires the contractor to develop and use process models for flowsheet analyses and pre-operational planning assessments. The Dynamic (G2) Flowsheet is a discrete-time process model that enables the project to evaluate impacts to throughput from event-driven activities such as pumping, sampling, storage, recycle, separation, and chemical reactions. The model is developed by the Process Engineering (PE) department, and is based on the Flowsheet Bases, Assumptions, and Requirements Document (24590-WTP-RPT-PT-02-005), commonly called the BARD. The terminologies of Dynamic (G2) Flowsheet and Dynamic (G2) Model are interchangeable in this document. The foundation of this model is a dynamic material balance governed by prescribed initial conditions, boundary conditions, and operating logic. The dynamic material balance is achieved by tracking the storage and material flows within the plant as time increments. The initial conditions include a feed vector that represents the waste compositions and delivery sequence of the Tank Farm batches, and volumes and concentrations of solutions in process equipment before startup. The boundary conditions are the physical limits of the flowsheet design, such as piping, volumes, flowrates, operation efficiencies, and physical and chemical environments that impact separations, phase equilibriums, and reaction extents. The operating logic represents the rules and strategies of running the plant.
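
    The core of such a discrete-time flowsheet is a material balance marched through time increments, clipped to the vessel's physical limits (a boundary condition). A minimal sketch with illustrative numbers, not WTP parameters:

```python
def step_balance(volume, inflow, outflow, dt, capacity):
    """One time increment of a dynamic material balance: storage changes by
    (inflow - outflow) * dt, clipped to the vessel's physical limits."""
    v = volume + (inflow - outflow) * dt
    return min(max(v, 0.0), capacity)

# March a 100 L vessel through three half-hour increments (L/h rates).
v = 40.0
for inflow, outflow in [(30.0, 10.0), (30.0, 10.0), (0.0, 50.0)]:
    v = step_balance(v, inflow, outflow, dt=0.5, capacity=100.0)
```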

  4. Rolling element bearing defect diagnosis under variable speed operation through angle synchronous averaging of wavelet de-noised estimate

    NASA Astrophysics Data System (ADS)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2016-05-01

    Rolling element bearings are widely used in rotating machines and their faults can lead to excessive vibration levels and/or complete seizure of the machine. Under special operating conditions such as non-uniform or low speed shaft rotation, the available fault diagnosis methods cannot be applied for bearing fault diagnosis with full confidence. Fault symptoms in such operating conditions cannot be easily extracted through usual measurement and signal processing techniques. A typical example is a bearing in a heavy rolling mill with variable load and disturbance from other sources. In extremely slow speed operation, variation in speed due to speed controller transients or external disturbances (e.g., varying load) can be relatively high. To account for speed variation, instantaneous angular position instead of time is used as the base variable of signals for signal processing purposes. Even with time synchronous averaging (TSA) and well-established methods like envelope order analysis, faults in rolling element bearings cannot be easily identified during such operating conditions. In this article we propose to use order tracking on the envelope of the wavelet de-noised estimate of the short-duration angle synchronous averaged signal to diagnose faults in rolling element bearings operating under the stated special conditions. The proposed four-stage sequential signal processing method eliminates uncorrelated content, avoids signal smearing and exposes only the fault frequencies and their harmonics in the spectrum. We use experimental data
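
    The angle synchronous averaging stage is central to the proposed chain: once the signal is resampled onto a uniform shaft-angle grid, revolutions are averaged point-by-point so that content uncorrelated with shaft angle cancels. A minimal sketch of just that stage (de-noising and envelope order tracking would follow):

```python
def angle_synchronous_average(signal, samples_per_rev):
    """Average a signal already resampled to a uniform shaft-angle grid over
    complete revolutions; content uncorrelated with shaft angle cancels."""
    n_rev = len(signal) // samples_per_rev
    avg = [0.0] * samples_per_rev
    for r in range(n_rev):
        for k in range(samples_per_rev):
            avg[k] += signal[r * samples_per_rev + k]
    return [a / n_rev for a in avg]

# Two revolutions of a 4-sample pattern; the added "noise" averages out.
avg = angle_synchronous_average([1.5, 0, -1, 0, 0.5, 0, -1, 0], 4)
```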

  5. Real-time support for high performance aircraft operation

    NASA Technical Reports Server (NTRS)

    Vidal, Jacques J.

    1989-01-01

    The feasibility of real-time processing schemes using artificial neural networks (ANNs) is investigated. A rationale for digital neural nets is presented and a general processor architecture for control applications is illustrated. Research results on ANN structures for real-time applications are given. Research results on ANN algorithms for real-time control are also shown.

  6. Operational evaluation of high-throughput community-based mass prophylaxis using Just-in-time training.

    PubMed

    Spitzer, James D; Hupert, Nathaniel; Duckart, Jonathan; Xiong, Wei

    2007-01-01

    Community-based mass prophylaxis is a core public health operational competency, but staffing needs may overwhelm the local trained health workforce. Just-in-time (JIT) training of emergency staff and computer modeling of workforce requirements represent two complementary approaches to address this logistical problem. Multnomah County, Oregon, conducted a high-throughput point of dispensing (POD) exercise to test JIT training and computer modeling to validate POD staffing estimates. The POD had 84% non-health-care worker staff and processed 500 patients per hour. Post-exercise modeling replicated observed staff utilization levels and queue formation, including development and amelioration of a large medical evaluation queue caused by lengthy processing times and understaffing in the first half-hour of the exercise. The exercise confirmed the feasibility of using JIT training for high-throughput antibiotic dispensing clinics staffed largely by nonmedical professionals. Patient processing times varied over the course of the exercise, with important implications for both staff reallocation and future POD modeling efforts. Overall underutilization of staff revealed the opportunity for greater efficiencies and even higher future throughputs.
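
    The staffing arithmetic that POD planning models refine can be illustrated with an offered-load estimate; the 85% target utilization below is an assumption, not a figure from the exercise:

```python
import math

def stations_needed(arrivals_per_hour, service_min, utilization=0.85):
    """Back-of-envelope staffing estimate: offered load in station-hours per
    hour (Erlangs), divided by the utilization you will tolerate, rounded up
    to whole stations."""
    load = arrivals_per_hour * service_min / 60.0  # Erlangs of offered load
    return math.ceil(load / utilization)

# 500 patients/hour with a 2-minute medical-evaluation step:
n = stations_needed(500, 2.0)
```

    Under-staffing this step in the first half-hour is exactly how the medical evaluation queue described above forms.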

  7. Reduced-Density-Matrix Description of Decoherence and Relaxation Processes for Electron-Spin Systems

    NASA Astrophysics Data System (ADS)

    Jacobs, Verne

    2017-04-01

    Electron-spin systems are investigated using a reduced-density-matrix description. Applications of interest include trapped atomic systems in optical lattices, semiconductor quantum dots, and vacancy defect centers in solids. Complementary time-domain (equation-of-motion) and frequency-domain (resolvent-operator) formulations are self-consistently developed. The general non-perturbative and non-Markovian formulations provide a fundamental framework for systematic evaluations of corrections to the standard Born (lowest-order-perturbation) and Markov (short-memory-time) approximations. Particular attention is given to decoherence and relaxation processes, as well as spectral-line broadening phenomena, that are induced by interactions with photons, phonons, nuclear spins, and external electric and magnetic fields. These processes are treated either as coherent interactions or as environmental interactions. The environmental interactions are incorporated by means of the general expressions derived for the time-domain and frequency-domain Liouville-space self-energy operators, for which the tetradic-matrix elements are explicitly evaluated in the diagonal-resolvent, lowest-order, and Markov (short-memory time) approximations. Work supported by the Office of Naval Research through the Basic Research Program at The Naval Research Laboratory.

  8. A Coordinated Patient Transport System for ICU Patients Requiring Surgery: Impact on Operating Room Efficiency and ICU Workflow.

    PubMed

    Brown, Michael J; Kor, Daryl J; Curry, Timothy B; Marmor, Yariv; Rohleder, Thomas R

    2015-01-01

    Transfer of intensive care unit (ICU) patients to the operating room (OR) is a resource-intensive, time-consuming process that often results in patient throughput inefficiencies, deficiencies in information transfer, and suboptimal nurse to patient ratios. This study evaluates the implementation of a coordinated patient transport system (CPTS) designed to address these issues. Using data from 1,557 patient transfers covering the 2006-2010 period, interrupted time series and before-and-after designs were used to analyze the effect of implementing a CPTS at Mayo Clinic, Rochester. Using a segmented regression for the interrupted time series, on-time OR start time deviations were found to be significantly lower after the implementation of CPTS (p < .0001). The implementation resulted in a fourfold improvement in on-time OR starts (p < .01) while significantly reducing idle OR time (p < .01). A coordinated patient transfer process for moving patients from ICUs to ORs can significantly improve OR efficiency, reduce non-value-added time, and ensure quality of care by preserving appropriate care provider to patient ratios.
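
    The interrupted time series idea used here can be sketched by fitting the pre- and post-intervention segments separately and reading off the level change at the break. The data below are illustrative, not the study's:

```python
def fit_line(ts, ys):
    """Ordinary least squares for one segment: returns (intercept, slope)."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    return my - slope * mt, slope

def level_change(ts, ys, break_t):
    """Minimal segmented-regression sketch: fit each segment separately; the
    jump between the two fitted lines at `break_t` estimates the level
    change produced by the intervention."""
    pre = [(t, y) for t, y in zip(ts, ys) if t < break_t]
    post = [(t, y) for t, y in zip(ts, ys) if t >= break_t]
    a0, b0 = fit_line(*zip(*pre))
    a1, b1 = fit_line(*zip(*post))
    return (a1 + b1 * break_t) - (a0 + b0 * break_t)

# OR start-delay (minutes) drops after a CPTS-like intervention at t = 4.
change = level_change([0, 1, 2, 3, 4, 5, 6, 7],
                      [20, 19, 21, 20, 8, 7, 9, 8], break_t=4)
```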

  9. Scale-up of industrial biodiesel production to 40 m(3) using a liquid lipase formulation.

    PubMed

    Price, Jason; Nordblad, Mathias; Martel, Hannah H; Chrabas, Brent; Wang, Huali; Nielsen, Per Munk; Woodley, John M

    2016-08-01

    In this work, we demonstrate the scale-up from an 80 L fed-batch scale to 40 m³, along with the design of a 4 m³ continuous process for enzymatic biodiesel production catalyzed by NS-40116 (a liquid formulation of a modified Thermomyces lanuginosus lipase). Based on the analysis of actual pilot plant data for the transesterification of used cooking oil and brown grease, we propose a method applying first-order integral analysis to fed-batch data based on either the bound glycerol or free fatty acid content in the oil. This method greatly simplifies the modeling process and gives an indication of the effect of mixing at the various scales (80 L to 40 m³), along with a prediction of the residence time needed to reach a desired conversion in a CSTR. Suitable process metrics reflecting commercial performance, such as reaction time, enzyme efficiency, and reactor productivity, were evaluated for both the fed-batch and CSTR cases. Given similar operating conditions, the CSTR operation on average has a reaction time 1.3 times greater than the fed-batch operation. We also showed how the process metrics can be used to quickly estimate the selling price of the enzyme. Assuming a biodiesel selling price of 0.6 USD/kg and a one-time use of the enzyme (0.1% (w/w oil) enzyme dosage), the enzyme can be sold for 30 USD/kg, which ensures that the enzyme cost is not more than 5% of the biodiesel revenue. Biotechnol. Bioeng. 2016;113: 1719-1728. © 2016 Wiley Periodicals, Inc.
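
    The first-order integral analysis lends itself to a compact comparison of ideal batch and CSTR times. With illustrative (not fitted) kinetics, the ideal CSTR needs the longer residence time at equal conversion, the same direction as the reported 1.3-fold difference:

```python
import math

def batch_time(k, conversion):
    """First-order integral analysis for the (fed-)batch case:
    t = -ln(1 - X) / k."""
    return -math.log(1.0 - conversion) / k

def cstr_residence_time(k, conversion):
    """Steady-state CSTR with the same first-order kinetics:
    tau = X / (k * (1 - X))."""
    return conversion / (k * (1.0 - conversion))

# Illustrative rate constant (1/h), not a fitted NS-40116 value.
k = 0.5
ratio = cstr_residence_time(k, 0.4) / batch_time(k, 0.4)  # ~1.3 at X = 0.4
```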

  10. Development of a support software system for real-time HAL/S applications

    NASA Technical Reports Server (NTRS)

    Smith, R. S.

    1984-01-01

    Methodologies employed in defining and implementing a software support system for the HAL/S computer language for real-time operations on the Shuttle are detailed. Attention is also given to the management and validation techniques used during software development and maintenance. Utilities developed to support the real-time operating conditions are described. Produced on Cyber computers, with executable code processed through Cyber or PDP machines, the support system has production-level status and can serve as a model for other software development projects.

  11. Improved NSGA model for multi objective operation scheduling and its evaluation

    NASA Astrophysics Data System (ADS)

    Li, Weining; Wang, Fuyu

    2017-09-01

    Reasonable operation scheduling can increase hospital income and improve patient satisfaction. In this paper, a multi-objective operation scheduling method based on an improved NSGA algorithm is used to shorten operation time, reduce operation cost, and lower operation risk. A multi-objective optimization model is established for flexible operation scheduling, the Pareto solution set is obtained through MATLAB simulation, and the data are standardized. The optimal scheduling scheme is then selected using a combined entropy-weight and TOPSIS method. The results show that the algorithm is feasible for solving the multi-objective operation scheduling problem and provides a reference for hospital operation scheduling.
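
    The TOPSIS half of the selection step can be sketched directly; the entropy-derived weights are taken as given here, and all numbers are illustrative:

```python
def topsis(matrix, weights, benefit):
    """TOPSIS ranking sketch: vector-normalize each criterion, apply the
    (here, entropy-derived) weights, then score each alternative by its
    relative closeness to the ideal versus the anti-ideal alternative."""
    cols = list(zip(*matrix))
    norms = [sum(v * v for v in c) ** 0.5 for c in cols]
    weighted = [[w * v / n for v, w, n in zip(row, weights, norms)]
                for row in matrix]
    wcols = list(zip(*weighted))
    ideal = [max(c) if b else min(c) for c, b in zip(wcols, benefit)]
    anti = [min(c) if b else max(c) for c, b in zip(wcols, benefit)]
    scores = []
    for row in weighted:
        d_pos = sum((v - i) ** 2 for v, i in zip(row, ideal)) ** 0.5
        d_neg = sum((v - a) ** 2 for v, a in zip(row, anti)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Two candidate schedules scored on (cost, risk); both "lower is better".
scores = topsis([[3.0, 2.0], [1.0, 4.0]], [0.5, 0.5], [False, False])
```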

  12. HL-20 operations and support requirements for the Personnel Launch System mission

    NASA Technical Reports Server (NTRS)

    Morris, W. D.; White, Nancy H.; Caldwell, Ronald G.

    1993-01-01

    The processing, mission planning, and support requirements were defined for the HL-20 lifting-body configuration that can serve as a Personnel Launch System. These requirements were based on the assumption of an operating environment that incorporates aircraft and airline support methods and techniques that are applicable to operations. The study covered the complete turnaround process for the HL-20, including landing through launch, and mission operations, but did not address the support requirements of the launch vehicle except for the integrated activities. Support is defined in terms of manpower, staffing levels, facilities, ground support equipment, maintenance/sparing requirements, and turnaround processing time. Support results were drawn from two contracted studies, plus an in-house analysis used to define the maintenance manpower. The results of the contracted studies were used as the basis for a stochastic simulation of the support environment to determine the sufficiency of support and the effect of variance on vehicle processing. Results indicate the levels of support defined for the HL-20 through this process to be sufficient to achieve the desired flight rate of eight flights per year.

  13. Protected quantum computing: interleaving gate operations with dynamical decoupling sequences.

    PubMed

    Zhang, Jingfu; Souza, Alexandre M; Brandao, Frederico Dias; Suter, Dieter

    2014-02-07

    Implementing precise operations on quantum systems is one of the biggest challenges for building quantum devices in a noisy environment. Dynamical decoupling attenuates the destructive effect of the environmental noise, but so far, it has been used primarily in the context of quantum memories. Here, we experimentally demonstrate a general scheme for combining dynamical decoupling with quantum logical gate operations using the example of an electron-spin qubit of a single nitrogen-vacancy center in diamond. We achieve process fidelities >98% for gate times that are 2 orders of magnitude longer than the unprotected dephasing time T2.

  14. Workspace definition for navigated control functional endoscopic sinus surgery

    NASA Astrophysics Data System (ADS)

    Gessat, Michael; Hofer, Mathias; Audette, Michael; Dietz, Andreas; Meixensberger, Jürgen; Stauß, Gero; Burgert, Oliver

    2007-03-01

    For the pre-operative definition of a surgical workspace for Navigated Control® Functional Endoscopic Sinus Surgery (FESS), we developed a semi-automatic image processing system. Based on observations of surgeons using a manual system, we implemented a workflow-based engineering process that led us to the development of a system reducing the time and workload spent on the workspace definition. The system uses a feature based on local curvature to align vertices of a polygonal outline along the bone structures defining the cavities of the inner nose. An anisotropic morphologic operator was developed to solve problems arising from noise artifacts and partial volume effects. We used time measurements and NASA's TLX questionnaire to evaluate our system.

  15. Research on the Mean Logistic Delay Time of the Development Phase

    NASA Astrophysics Data System (ADS)

    Na, Hou; Yi, Li; Wang, Yi-Gang; Liu, Jun-jie; Bo, Zhang; Lv, Xue-Zhi

    MLDT (Mean Logistic Delay Time) is a key parameter affecting operational availability (Ao) through equipment design, operation, and support management. In the operation process, how to strengthen support management, rationally lay out support resources, and provide the support resources needed for equipment maintenance, so as to avoid or reduce support delays and ensure the MLDT satisfies the Ao requirements, is an urgent question in coordinating the RMS of equipment.

  16. Ignition and combustion characteristics of metallized propellants

    NASA Technical Reports Server (NTRS)

    Mueller, D. C.; Turns, Stephen R.

    1991-01-01

    Over the past six months, experimental investigations were continued and theoretical work on the secondary atomization process was begun. Final shakedown of the sizing/velocity measuring system was completed and the aluminum combustion detection system was modified and tested. Atomizer operation was improved to allow steady-state operation over long periods of time for several slurries. To validate the theoretical modeling, work involving carbon slurry atomization and combustion was begun and qualitative observations were made. Simultaneous measurements of aluminum slurry droplet size distributions and detection of burning aluminum particles were performed at several axial locations above the burner. The principal theoretical effort was the application of a rigid shell formation model to aluminum slurries and an investigation of the effects of various parameters on the shell formation process. This shell formation model was extended to include the process leading up to droplet disruption, and previously developed analytical models were applied to yield theoretical aluminum agglomerate ignition and combustion times. These theoretical times were compared with the experimental results.

  17. SciBox, an end-to-end automated science planning and commanding system

    NASA Astrophysics Data System (ADS)

    Choo, Teck H.; Murchie, Scott L.; Bedini, Peter D.; Steele, R. Josh; Skura, Joseph P.; Nguyen, Lillian; Nair, Hari; Lucks, Michael; Berman, Alice F.; McGovern, James A.; Turner, F. Scott

    2014-01-01

    SciBox is a new technology for planning and commanding science operations for Earth-orbital and planetary space missions. It has been incrementally developed since 2001 and demonstrated on several spaceflight projects. The technology has matured to the point that it is now being used to plan and command all orbital science operations for the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) mission to Mercury. SciBox encompasses the derivation of observing sequences from science objectives, the scheduling of those sequences, the generation of spacecraft and instrument commands, and the validation of those commands prior to uploading to the spacecraft. Although the process is automated, science and observing requirements are incorporated at each step by a series of rules and parameters to optimize observing opportunities, which are tested and validated through simulation and review. Except for limited special operations and tests, there is no manual scheduling of observations or construction of command sequences. SciBox reduces the lead time for operations planning by shortening the time-consuming coordination process, reduces cost by automating the labor-intensive processes of human-in-the-loop adjudication of observing priorities, reduces operations risk by systematically checking constraints, and maximizes science return by fully evaluating the trade space of observing opportunities to meet MESSENGER science priorities within spacecraft recorder, downlink, scheduling, and orbital-geometry constraints.

  18. An extended transfer operator approach to identify separatrices in open flows

    NASA Astrophysics Data System (ADS)

    Lünsmann, Benedict; Kantz, Holger

    2018-05-01

    Vortices of coherent fluid volume are considered to have a substantial impact on transport processes in turbulent media. Yet, due to their Lagrangian nature, detecting these structures is highly nontrivial. In this respect, transfer operator approaches have been proven to provide useful tools: Approximating a possibly time-dependent flow as a discrete Markov process in space and time, information about coherent structures is contained in the operator's eigenvectors, which is usually extracted by employing clustering methods. Here, we propose an extended approach that couples surrounding filaments using "mixing boundary conditions" and focuses on the separation of the inner coherent set and embedding outer flow. The approach refrains from using unsupervised machine learning techniques such as clustering and uses physical arguments by maximizing a coherence ratio instead. We show that this technique improves the reconstruction of separatrices in stationary open flows and succeeds in finding almost-invariant sets in periodically perturbed flows.

  19. The Stem Cell Laboratory: Design, Equipment, and Oversight

    PubMed Central

    Wesselschmidt, Robin L.; Schwartz, Philip H.

    2013-01-01

    This chapter describes some of the major issues to be considered when setting up a laboratory for the culture of human pluripotent stem cells (hPSCs). The process of establishing a hPSC laboratory can be divided into two equally important parts. One is completely administrative and includes developing protocols, seeking approval, and establishing reporting processes and documentation. The other part of establishing a hPSC laboratory involves the physical plant and includes design, equipment, and personnel. Proper planning of laboratory operations and proper design of the physical layout of the stem cell laboratory, so that it meets the scope of planned operations, is a major undertaking, but the time spent upfront will pay long-term returns in operational efficiency and effectiveness. A well-planned, organized, and properly equipped laboratory supports research activities by increasing efficiency and reducing lost time and wasted resources. PMID:21822863

  20. Optimal superadiabatic population transfer and gates by dynamical phase corrections

    NASA Astrophysics Data System (ADS)

    Vepsäläinen, A.; Danilin, S.; Paraoanu, G. S.

    2018-04-01

    In many quantum technologies adiabatic processes are used for coherent quantum state operations, offering inherent robustness to errors in the control parameters. The main limitation is the long operation time resulting from the requirement of adiabaticity. The superadiabatic method allows for faster operation, by applying counterdiabatic driving that corrects for excitations resulting from the violation of the adiabatic condition. In this article we show how to construct the counterdiabatic Hamiltonian in a system with forbidden transitions by using two-photon processes and how to correct for the resulting time-dependent ac-Stark shifts in order to enable population transfer with unit fidelity. We further demonstrate that superadiabatic stimulated Raman passage can realize a robust unitary NOT-gate between the ground state and the second excited state of a three-level system. The results can be readily applied to a three-level transmon with the ladder energy level structure.

  1. Real-Time Operation of the International Space Station

    NASA Astrophysics Data System (ADS)

    Suffredini, M. T.

    2002-01-01

    The International Space Station is on orbit and real-time operations are well underway. Along with the assembly challenges of building and operating the International Space Station, scientific activities are also underway. Flight control teams in three countries are working together as a team to plan, coordinate, and command the systems on the International Space Station. Preparations are being made to add the additional International Partner elements, including their operations teams and facilities. By October 2002, six Expedition crews will have lived on the International Space Station. Management of real-time operations has been key to these achievements. This includes the activities of ground teams in control centers around the world as well as the crew on orbit. Real-time planning is constantly challenged with balancing the requirements and setting the priorities for the assembly, maintenance, science, and crew health functions on the International Space Station. It requires integrating the Shuttle, Soyuz, and Progress requirements with the Station. It is also necessary to be able to respond to on-orbit anomalies and to set plans and commands in place to ensure the continued safe operation of the Station. Bringing together the International Partner operations teams has been challenging and intensely rewarding. Utilization of the assets of each partner has resulted in efficient solutions to problems. This paper will describe the management of the major real-time operations processes, significant achievements, and future challenges.

  2. Industrial process surveillance system

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan W.; Singer, Ralph M.; Mott, Jack E.

    1998-01-01

    A system and method for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy.
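
    A greatly simplified, illustrative sketch of the learned-state surveillance loop described above. The nearest-state lookup and fixed deviation threshold here are stand-ins for the patented method's actual modeling and alarm logic:

```python
import numpy as np

def learn_states(training, n_states):
    """Crude state library: evenly sampled snapshots of normal operation."""
    idx = np.linspace(0, len(training) - 1, n_states, dtype=int)
    return training[idx]

def check(observation, states, threshold):
    """Compare the observation to the closest learned normal state and
    alarm on deviation from normalcy."""
    dists = np.linalg.norm(states - observation, axis=1)
    expected = states[dists.argmin()]           # expected values from closest state
    residual = np.linalg.norm(observation - expected)
    return residual, residual > threshold

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, size=(200, 3))    # training data: 3 sensors
states = learn_states(normal, n_states=20)

_, alarm_ok = check(np.array([0.0, 0.05, -0.02]), states, threshold=1.0)
_, alarm_bad = check(np.array([5.0, 5.0, 5.0]), states, threshold=1.0)
```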

  3. Industrial process surveillance system

    DOEpatents

    Gross, K.C.; Wegerich, S.W.; Singer, R.M.; Mott, J.E.

    1998-06-09

    A system and method are disclosed for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy. 96 figs.

  4. Industrial Process Surveillance System

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan W; Singer, Ralph M.; Mott, Jack E.

    2001-01-30

    A system and method for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy.

  5. Further characterization of the time transfer capabilities of precise point positioning (PPP): the Sliding Batch Procedure.

    PubMed

    Guyennon, Nicolas; Cerretto, Giancarlo; Tavella, Patrizia; Lahaye, François

    2009-08-01

    In recent years, many national timing laboratories have installed geodetic Global Positioning System receivers together with their traditional GPS/GLONASS Common View receivers and Two Way Satellite Time and Frequency Transfer equipment. Many of these geodetic receivers operate continuously within the International GNSS Service (IGS), and their data are regularly processed by IGS Analysis Centers. From its global network of over 350 stations and its Analysis Centers, the IGS generates precise combined GPS ephemerides and station and satellite clock time series referred to the IGS Time Scale. A processing method called Precise Point Positioning (PPP) is in use in the geodetic community allowing precise recovery of GPS antenna position, clock phase, and atmospheric delays by taking advantage of these IGS precise products. Previous assessments, carried out at Istituto Nazionale di Ricerca Metrologica (INRiM; formerly IEN) with a PPP implementation developed at Natural Resources Canada (NRCan), showed PPP clock solutions have better stability over the short/medium term than GPS CV and GPS P3 methods and significantly reduce the day-boundary discontinuities when used in multi-day continuous processing, allowing time-limited, campaign-style time-transfer experiments. This paper reports on follow-on work performed at INRiM and NRCan to further characterize and develop the PPP method for time transfer applications, using data from some of the National Metrology Institutes. We develop a processing procedure that takes advantage of the improved stability of the phase-connected multi-day PPP solutions while allowing the generation of continuous clock time series, more applicable to continuous operation/monitoring of timing equipment.

  6. Evaluation of dispersive Bragg gratings (BG) structures for the processing of RF signals with large time delays and bandwidths

    NASA Astrophysics Data System (ADS)

    Kaba, M.; Zhou, F. C.; Lim, A.; Decoster, D.; Huignard, J.-P.; Tonda, S.; Dolfi, D.; Chazelas, J.

    2007-11-01

    The applications of microwave optoelectronics are extremely broad, extending from Radio-over-Fibre to homeland security and defence systems. The improved maturity of optoelectronic components operating up to 40 GHz makes it possible to consider new optical processing functions (filtering, beamforming, ...) that can operate over very wideband microwave analogue signals. Specific performance is required, implying optical delay lines able to exhibit large time-bandwidth product values. We propose to evaluate a slow-light approach through highly dispersive structures based on either uniform or chirped Bragg Gratings. We highlight the impact of the major parameters of such structures (index modulation depth, grating length, grating period, chirp coefficient) and demonstrate the high potential of Bragg Gratings for processing large-bandwidth RF signals under slow-light propagation.

  7. Real time computer controlled weld skate

    NASA Technical Reports Server (NTRS)

    Wall, W. A., Jr.

    1977-01-01

    A real-time, adaptive control, automatic welding system was developed. This system utilizes the general-case geometrical relationships between a weldment and a weld skate to precisely maintain constant weld speed and torch angle along a contoured workpiece. The system is compatible with the gas tungsten arc weld process or can be adapted to other weld processes. Heli-arc cutting and machine tool routing operations are possible applications.

  8. Ideas that Work!. Retuning the Building Automation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Steven

    A building automation system (BAS) can save considerable energy by effectively and efficiently operating building energy systems (fans, pumps, chillers, boilers, etc.), but only when the BAS is properly set up and operated. Tuning, or retuning, the BAS is a cost-effective process worthy of your time and attention.

  9. 40 CFR 63.2270 - How do I monitor and collect data to demonstrate continuous compliance?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... appropriate, monitor malfunctions, associated repairs, and required quality assurance or control activities... monitoring in continuous operation at all times that the process unit is operating. For purposes of calculating data averages, you must not use data recorded during monitoring malfunctions, associated repairs...

  10. 40 CFR 63.2270 - How do I monitor and collect data to demonstrate continuous compliance?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... control activities (including, as applicable, calibration checks and required zero and span adjustments), you must conduct all monitoring in continuous operation at all times that the process unit is operating. For purposes of calculating data averages, you must not use data recorded during monitoring...

  11. 40 CFR 63.2270 - How do I monitor and collect data to demonstrate continuous compliance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... appropriate, monitor malfunctions, associated repairs, and required quality assurance or control activities... monitoring in continuous operation at all times that the process unit is operating. For purposes of calculating data averages, you must not use data recorded during monitoring malfunctions, associated repairs...

  12. 40 CFR 63.2270 - How do I monitor and collect data to demonstrate continuous compliance?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... control activities (including, as applicable, calibration checks and required zero and span adjustments), you must conduct all monitoring in continuous operation at all times that the process unit is operating. For purposes of calculating data averages, you must not use data recorded during monitoring...

  13. 12 CFR 1402.21 - Categories of requesters-fees.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... searches made by computer, the Farm Credit System Insurance Corporation will determine the hourly cost of... the cost of search (including the operator time and the cost of operating the computer to process a... 1402.21 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for...

  14. 12 CFR 1402.22 - Fees to be charged.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for Provision of...) (i.e., basic pay plus 16 percent of that rate) of the employee(s) making the search. (c) Computer... the cost of operating the central processing unit for that portion of operating time that is directly...

  15. 12 CFR 1402.22 - Fees to be charged.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for Provision of...) (i.e., basic pay plus 16 percent of that rate) of the employee(s) making the search. (c) Computer... the cost of operating the central processing unit for that portion of operating time that is directly...

  16. 12 CFR 1402.22 - Fees to be charged.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for Provision of...) (i.e., basic pay plus 16 percent of that rate) of the employee(s) making the search. (c) Computer... the cost of operating the central processing unit for that portion of operating time that is directly...

  17. 12 CFR 1402.22 - Fees to be charged.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for Provision of...) (i.e., basic pay plus 16 percent of that rate) of the employee(s) making the search. (c) Computer... the cost of operating the central processing unit for that portion of operating time that is directly...

  18. 12 CFR 1402.21 - Categories of requesters-fees.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... searches made by computer, the Farm Credit System Insurance Corporation will determine the hourly cost of... the cost of search (including the operator time and the cost of operating the computer to process a... 1402.21 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for...

  19. 12 CFR 1402.21 - Categories of requesters-fees.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... searches made by computer, the Farm Credit System Insurance Corporation will determine the hourly cost of... the cost of search (including the operator time and the cost of operating the computer to process a... 1402.21 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for...

  20. 12 CFR 1402.21 - Categories of requesters-fees.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... searches made by computer, the Farm Credit System Insurance Corporation will determine the hourly cost of... the cost of search (including the operator time and the cost of operating the computer to process a... 1402.21 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for...

  1. 12 CFR 1402.21 - Categories of requesters-fees.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... searches made by computer, the Farm Credit System Insurance Corporation will determine the hourly cost of... the cost of search (including the operator time and the cost of operating the computer to process a... 1402.21 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for...

  2. 12 CFR 1402.22 - Fees to be charged.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for Provision of...) (i.e., basic pay plus 16 percent of that rate) of the employee(s) making the search. (c) Computer... the cost of operating the central processing unit for that portion of operating time that is directly...

  3. Path Planning For A Class Of Cutting Operations

    NASA Astrophysics Data System (ADS)

    Tavora, Jose

    1989-03-01

    Optimizing processing time in some contour-cutting operations requires solving the so-called no-load path problem. This problem is formulated and an approximate resolution method (based on heuristic search techniques) is described. Results for real-life instances (clothing layouts in the apparel industry) are presented and evaluated.
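
    As an illustration of the no-load path problem, a greedy nearest-neighbor heuristic (far simpler than the paper's heuristic-search method, and for points rather than full contours) orders cut locations to reduce unproductive travel:

```python
import math

def no_load_tour(points, start=(0.0, 0.0)):
    """Greedy nearest-neighbor ordering of cut points to shorten the
    no-load (non-cutting) travel of the tool head. Illustrative only."""
    remaining = list(points)
    tour, pos, travel = [], start, 0.0
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        travel += math.dist(pos, nxt)          # accumulate no-load distance
        remaining.remove(nxt)
        tour.append(nxt)
        pos = nxt
    return tour, travel

tour, travel = no_load_tour([(5.0, 0.0), (1.0, 0.0), (3.0, 0.0)])
# visits (1,0), (3,0), (5,0) for a total no-load travel of 5.0
```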

  4. Performance of high intensity fed-batch mammalian cell cultures in disposable bioreactor systems.

    PubMed

    Smelko, John Paul; Wiltberger, Kelly Rae; Hickman, Eric Francis; Morris, Beverly Janey; Blackburn, Tobias James; Ryll, Thomas

    2011-01-01

    The adoption of disposable bioreactor technology as an alternative to traditional nondisposable technology is gaining momentum in the biotechnology industry. The ability of current disposable bioreactor systems to sustain high intensity fed-batch mammalian cell culture processes needs to be explored. In this study, an assessment was performed comparing single-use bioreactor (SUB) systems of 50-, 250-, and 1,000-L operating scales with traditional stainless steel (SS) and glass vessels using four distinct mammalian cell culture processes. This comparison focuses on expansion and production stage performance. The SUB performance was evaluated in three main areas: operability, process scalability, and process performance. The process performance and operability aspects were assessed over time, and product quality was compared at the day of harvest. Expansion stage results showed disposable bioreactors mirror traditional bioreactors in terms of cellular growth and metabolism. Set-up and disposal times were dramatically reduced using the SUB systems when compared with traditional systems. Production stage runs for both Chinese hamster ovary and NS0 cell lines in the SUB system were able to model SS bioreactor runs at 100-, 200-, 2,000-, and 15,000-L scales. A single 1,000-L SUB run applying a high intensity fed-batch process was able to generate 7.5 kg of antibody with comparable product quality. Copyright © 2011 American Institute of Chemical Engineers (AIChE).

  5. A new intuitionistic fuzzy rule-based decision-making system for an operating system process scheduler.

    PubMed

    Butt, Muhammad Arif; Akram, Muhammad

    2016-01-01

    We present a new intuitionistic fuzzy rule-based decision-making system, based on intuitionistic fuzzy sets, for the process scheduler of a batch operating system. Our proposed intuitionistic fuzzy scheduling algorithm takes as input the nice value and burst time of all available processes in the ready queue, intuitionistically fuzzifies the input values, triggers the appropriate rules of our intuitionistic fuzzy inference engine, and finally calculates the dynamic priority (dp) of every process in the ready queue. Once the dp of every process is calculated, the ready queue is sorted in decreasing order of dp. The process with the maximum dp value is sent to the central processing unit for execution. Finally, we show the complete working of our algorithm on two different data sets and give comparisons with some standard non-preemptive process schedulers.
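
    The queue mechanics of such a scheduler can be sketched as below. The crisp scoring rule here is a hypothetical stand-in for the paper's intuitionistic fuzzy inference engine; only the input/sort/dispatch flow mirrors the abstract:

```python
def dynamic_priority(nice, burst, nice_max=19, burst_max=100):
    """Illustrative crisp dp: lower nice and shorter burst => higher priority.
    (The paper derives dp from intuitionistic fuzzy rules instead.)"""
    favor_nice = 1.0 - (nice + 20) / (nice_max + 21)   # nice in [-20, 19]
    favor_short = 1.0 - min(burst, burst_max) / burst_max
    return 0.5 * favor_nice + 0.5 * favor_short

def schedule(ready_queue):
    """ready_queue: list of (pid, nice, burst). Returns pids in dispatch order."""
    scored = [(dynamic_priority(n, b), pid) for pid, n, b in ready_queue]
    scored.sort(reverse=True)                          # highest dp runs first
    return [pid for _, pid in scored]

order = schedule([("A", 0, 80), ("B", -10, 10), ("C", 10, 50)])
# B (favored nice, short burst) is dispatched first
```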

  6. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF).

    PubMed

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2012-02-23

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same, and the operation on one pixel does not depend upon the result of the operation on another, allowing the entire image to be processed in parallel. GPU hardware was developed for exactly this kind of massively parallel processing. Thus, for an algorithm with a high degree of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat-field correction, temporal filtering, image subtraction, roadmap mask generation, and display windowing and leveling. A comparison between the previous and the upgraded versions of CAPIDS is presented to demonstrate how the improvement was achieved. By performing the image processing on a GPU, significant improvements in timing (frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure, and automatic image windowing and leveling during each frame.
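
    Flat-field correction is a good example of the pixel-independent class of operations the abstract describes: each output pixel depends only on the same pixel in the raw, dark, and flat frames, so all pixels can be computed in parallel. A NumPy sketch (standing in for the per-pixel GPU kernel; the dead-pixel guard and the gain-mean normalization are illustrative assumptions, not the CAPIDS implementation):

```python
import numpy as np

def flat_field_correct(raw, dark, flat, eps=1e-6):
    """Per-pixel flat-field correction: remove the dark offset, then divide
    out the pixel-to-pixel gain variation measured by the flat frame."""
    gain = flat - dark
    gain = np.where(np.abs(gain) < eps, eps, gain)   # guard dead pixels
    return (raw - dark) * gain.mean() / gain

raw  = np.array([[110.0, 120.0], [130.0, 140.0]])
dark = np.full((2, 2), 10.0)                         # dark-current frame
flat = np.array([[100.0, 110.0], [90.0, 100.0]])     # non-uniform response
corrected = flat_field_correct(raw, dark, flat)
```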

  7. Monitoring real-time navigation processes using the automated reasoning tool (ART)

    NASA Technical Reports Server (NTRS)

    Maletz, M. C.; Culbert, C. J.

    1985-01-01

    An expert system is described for monitoring and controlling navigation processes in real-time. The ART-based system features data-driven computation, accommodation of synchronous and asynchronous data, temporal modeling for individual time intervals and chains of time intervals, and hypothetical reasoning capabilities that consider alternative interpretations of the state of navigation processes. The concept is illustrated in terms of the NAVEX system for monitoring and controlling the high speed ground navigation console for Mission Control at Johnson Space Center. The reasoning processes are outlined, including techniques used to consider alternative data interpretations. Installation of the system has permitted using a single operator, instead of three, to monitor the ascent and entry phases of a Shuttle mission.

  8. Deterministic processes guide long-term synchronised population dynamics in replicate anaerobic digesters

    PubMed Central

    Vanwonterghem, Inka; Jensen, Paul D; Dennis, Paul G; Hugenholtz, Philip; Rabaey, Korneel; Tyson, Gene W

    2014-01-01

    A replicate long-term experiment was conducted using anaerobic digestion (AD) as a model process to determine the relative role of niche and neutral theory on microbial community assembly, and to link community dynamics to system performance. AD is performed by a complex network of microorganisms and process stability relies entirely on the synergistic interactions between populations belonging to different functional guilds. In this study, three independent replicate anaerobic digesters were seeded with the same diverse inoculum, supplied with a model substrate, α-cellulose, and operated for 362 days at a 10-day hydraulic residence time under mesophilic conditions. Selective pressure imposed by the operational conditions and model substrate caused large reproducible changes in community composition including an overall decrease in richness in the first month of operation, followed by synchronised population dynamics that correlated with changes in reactor performance. This included the synchronised emergence and decline of distinct Ruminococcus phylotypes at day 148, and emergence of a Clostridium and Methanosaeta phylotype at day 178, when performance became stable in all reactors. These data suggest that many dynamic functional niches are predictably filled by phylogenetically coherent populations over long time scales. Neutral theory would predict that a complex community with a high degree of recognised functional redundancy would lead to stochastic changes in populations and community divergence over time. We conclude that deterministic processes may play a larger role in microbial community dynamics than currently appreciated, and under controlled conditions it may be possible to reliably predict community structural and functional changes over time. PMID:24739627

  9. Immediate liposuction could shorten the time for endoscopic axillary lymphadenectomy in breast cancer patients.

    PubMed

    Shi, Fujun; Huang, Zonghai; Yu, Jinlong; Zhang, Pusheng; Deng, Jianwen; Zou, Linhan; Zhang, Cheng; Luo, Yunfeng

    2017-01-31

Endoscopic axillary lymphadenectomy (EALND) was introduced into clinical practice to reduce the side effects of conventional axillary lymphadenectomy, but the lipolysis and liposuction steps of EALND make the procedure more time-consuming. The aim of this study was to determine whether immediate liposuction after tumescent solution injection into the axilla could shorten the total time of EALND. Fifty-nine patients were enrolled in the study: 30 received EALND with the traditional liposuction method (TLM), and the remaining 29 received EALND with the immediate liposuction method (ILM). The operation time, cosmetic result, drainage amount, and hospitalization time of the two groups were compared. The median EALND operation times of the TLM and ILM groups were 68 and 46 min, respectively, a significant difference (P < 0.05); the median cosmetic results of the two groups were 6.6 and 6.4, respectively; the median drainage amounts were 366 and 385 ml, respectively; and the hospitalization times were 15 and 16 days, respectively. For the last three measures, no significant difference was found (P > 0.05). Our work suggests that immediate liposuction could shorten the endoscopic axillary lymphadenectomy process without compromising operative results. However, due to the limitations of this research, more work is needed to confirm the feasibility of immediate liposuction.

  10. Comparison of the Operative Outcomes and Learning Curves between Laparoscopic and Robotic Gastrectomy for Gastric Cancer

    PubMed Central

    Huang, Kuo-Hung; Lan, Yuan-Tzu; Fang, Wen-Liang; Chen, Jen-Hao; Lo, Su-Shun; Li, Anna Fen-Yau; Chiou, Shih-Hwa; Wu, Chew-Wun; Shyr, Yi-Ming

    2014-01-01

Background Minimally invasive surgery, including laparoscopic and robotic gastrectomy, has become more popular in the treatment of gastric cancer. However, few studies have compared the learning curves of laparoscopic and robotic gastrectomy for gastric cancer. Methods Data were prospectively collected between July 2008 and August 2014. A total of 145 patients underwent minimally invasive gastrectomy for gastric cancer by a single surgeon, including 73 laparoscopic and 72 robotic gastrectomies. The clinicopathologic characteristics, operative outcomes and learning curves were compared between the two groups. Results Compared with the laparoscopic group, the robotic group was associated with less blood loss and a longer operative time. After the surgeon's learning curve was overcome for each technique, the operative outcomes became similar between the two groups, except for a longer operative time in the robotic group. After accumulating more cases of robotic gastrectomy, the operative time in the laparoscopic group decreased dramatically. Conclusions After the learning curves were overcome, the operative outcomes of laparoscopic and robotic gastrectomy became similar. Experience with robotic gastrectomy could affect the learning process for laparoscopic gastrectomy. PMID:25360767

  11. On the energy budget in the current disruption region. [of geomagnetic tail

    NASA Technical Reports Server (NTRS)

    Hesse, Michael; Birn, Joachim

    1993-01-01

This study investigates the energy budget in the current disruption region of the magnetotail, coincident with a pre-onset thin current sheet, around substorm onset time, using published observational data and theoretical estimates. We find that the current disruption/dipolarization process typically requires energy inflow into the primary disruption region. The disruption/dipolarization process is therefore endoenergetic, i.e., it requires energy input to operate. We therefore argue that some other simultaneously operating process, possibly a large-scale magnetotail instability, is required to provide the necessary energy input into the current disruption region.

  12. Functional correlation approach to operational risk in banking organizations

    NASA Astrophysics Data System (ADS)

    Kühn, Reimer; Neu, Peter

    2003-05-01

A Value-at-Risk-based model is proposed to compute the adequate equity capital necessary to cover potential losses due to operational risks, such as human and system process failures, in banking organizations. Exploring the analogy to a lattice gas model from physics, correlations between sequential failures are modeled by functionally defined, heterogeneous couplings between mutually supportive processes. In contrast to traditional risk models for market and credit risk, where correlations are described as equal-time correlations by a covariance matrix, the dynamics of the model shows collective phenomena such as bursts and avalanches of process failures.
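    The collective dynamics described above can be illustrated with a toy simulation (this is not the authors' model; the coupling matrix, threshold, and noise scale below are arbitrary illustrative choices): each process fails when the support it receives from currently operating processes, plus idiosyncratic noise, drops below a threshold, so failures can feed back on each other and produce correlated bursts.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 200                      # number of processes, time steps
J = rng.uniform(0.0, 1.0, (N, N))   # heterogeneous support couplings (illustrative)
np.fill_diagonal(J, 0.0)            # a process does not support itself
theta, noise = 0.5, 0.3             # failure threshold and noise scale (illustrative)

state = np.zeros(N, dtype=int)      # 0 = operating, 1 = failed
failures = []
for t in range(T):
    # normalised support each process receives from currently operating processes
    support = J @ (1 - state) / J.sum(axis=1)
    # a process fails next step if support plus noise falls below the threshold
    state = (support + noise * rng.standard_normal(N) < theta).astype(int)
    failures.append(int(state.sum()))

print(len(failures), min(failures), max(failures))
```

    Because failed processes withdraw support from their neighbours, a few initial failures can lower the support of otherwise healthy processes and trigger an avalanche, which is the qualitative behaviour the abstract contrasts with static covariance-based correlation models.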

  13. 28 CFR 40.7 - Operation and decision.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... review. (e) Fixed time limits. Responses shall be made within fixed time limits at each level of decision. Time limits may vary between institutions, but expeditious processing of grievances at each level of..., written responses. Each grievance shall be answered in writing at each level of decision and review. The...

  14. Final Technical Report - Advanced Optical Sensors to Minimize Energy Consumption in Polymer Extrusion Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Susan J. Foulk

Project Objective: The objectives of this study are to develop an accurate and stable on-line sensor system to monitor color and composition in polymer melts, to develop a scheme for using the output to control extruders to eliminate the energy, material and operational costs of off-specification product, and to combine or eliminate some extrusion processes. Background: Polymer extrusion processes are difficult to control because the quality achieved in the final product is complexly affected by the properties of the extruder screw, speed of extrusion, temperature, polymer composition, strength and dispersion properties of additives, and feeder system properties. Extruder systems are engineered to be highly reproducible so that when the correct settings to produce a particular product are found, that product can be reliably produced time after time. However, market conditions often require changes in the final product, different products or grades may be processed in the same equipment, and feed materials vary from lot to lot. All of these changes require empirical adjustment of extruder settings to produce a product meeting specifications. Optical sensor systems that can continuously monitor the composition and color of the extruded polymer could detect process upsets, drift, blending oscillations, and changes in dispersion of additives. Development of an effective control algorithm using the output of the monitor would enable rapid corrections for changes in materials and operating conditions, thereby eliminating most of the scrap and recycle of current processing. This information could be used to identify extruder system issues, diagnose problem sources, and suggest corrective actions in real time to help keep extruder system settings within the optimum control region. Using these advanced optical sensor systems would give extruder operators real-time feedback from their process. They could reduce the amount of off-spec product produced and significantly reduce energy consumption. Also, because blending and dispersion of additives and components in the final product could be continuously verified, we believe that, in many cases, intermediate compounding steps could be eliminated (saving even more time and energy).

  15. 40 CFR 63.117 - Process vent provisions-reporting and recordkeeping requirements for group and TRE determinations...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Vents, Storage Vessels, Transfer Operations, and Wastewater § 63.117 Process vent provisions—reporting... incinerators, boilers or process heaters specified in table 3 of this subpart, and averaged over the same time... content determinations, flow rate measurements, and exit velocity determinations made during the...

  16. 40 CFR 63.117 - Process vent provisions-reporting and recordkeeping requirements for group and TRE determinations...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Vents, Storage Vessels, Transfer Operations, and Wastewater § 63.117 Process vent provisions—reporting... incinerators, boilers or process heaters specified in table 3 of this subpart, and averaged over the same time... content determinations, flow rate measurements, and exit velocity determinations made during the...

  17. Beyond ROC Curvature: Strength Effects and Response Time Data Support Continuous-Evidence Models of Recognition Memory

    ERIC Educational Resources Information Center

    Dube, Chad; Starns, Jeffrey J.; Rotello, Caren M.; Ratcliff, Roger

    2012-01-01

    A classic question in the recognition memory literature is whether retrieval is best described as a continuous-evidence process consistent with signal detection theory (SDT), or a threshold process consistent with many multinomial processing tree (MPT) models. Because receiver operating characteristics (ROCs) based on confidence ratings are…
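    The ROC construction the abstract refers to is standard in recognition memory research: confidence ratings are cumulated from the most to the least confident "old" response to trace out hit and false-alarm rates. A minimal sketch (the six-point rating counts below are made up purely for illustration):

```python
import numpy as np

def roc_from_ratings(old_counts, new_counts):
    """Build ROC points from confidence-rating counts ordered from the
    most confident 'old' response to the most confident 'new' response.
    Returns cumulative false-alarm rates and hit rates."""
    old = np.asarray(old_counts, dtype=float)   # responses to studied (old) items
    new = np.asarray(new_counts, dtype=float)   # responses to unstudied (new) items
    hits = np.cumsum(old) / old.sum()           # cumulative hit rate per criterion
    fas = np.cumsum(new) / new.sum()            # cumulative false-alarm rate
    return fas, hits

# hypothetical counts on a 6-point scale, "sure old" ... "sure new"
fas, hits = roc_from_ratings([61, 15, 10, 7, 4, 3], [5, 9, 12, 18, 21, 35])
print(np.round(fas, 2), np.round(hits, 2))
```

    The curvature of the resulting points (hits plotted against false alarms, or their z-transforms) is what distinguishes continuous-evidence SDT accounts from threshold MPT accounts.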

  18. Neural Correlates of Individual Differences in Strategic Retrieval Processing

    ERIC Educational Resources Information Center

    Bridger, Emma K.; Herron, Jane E.; Elward, Rachael L.; Wilding, Edward L.

    2009-01-01

    Processes engaged when information is encoded into memory are an important determinant of whether that information will be recovered subsequently. Also influential, however, are processes engaged at the time of retrieval, and these were investigated here by using event-related potentials (ERPs) to measure a specific class of retrieval operations.…

  19. [Implementation of modern operating room management -- experiences made at an university hospital].

    PubMed

    Hensel, M; Wauer, H; Bloch, A; Volk, T; Kox, W J; Spies, C

    2005-07-01

Owing to structural changes in health care, the need for cost control is evident for all hospitals. As the operating room is one of the most cost-intensive sectors of a hospital, optimisation of workflow processes in this area is of particular interest for health care providers. While modern operating room management is already established in several clinics, others are less prepared for economic challenges. We therefore present the operating room statute of the Charité university hospital, which other hospitals may find useful in developing their own concepts. In addition, we describe the experience gained in implementing the new management structures and report results obtained over the last 5 years. Whereas the total number of operative procedures increased by 15%, operating room utilization increased more markedly in terms of both time and cases. In summary, central operating room management has proved to be an effective tool for increasing the efficiency of workflow processes in the operating room.

  20. Treatment of leachate by electrocoagulation using aluminum and iron electrodes.

    PubMed

    Ilhan, Fatih; Kurt, Ugur; Apaydin, Omer; Gonullu, M Talha

    2008-06-15

In this paper, the treatment of leachate by electrocoagulation (EC) was investigated in a batch process. The leachate sample was obtained from the Odayeri Landfill Site in Istanbul. First, EC was compared with a classical chemical coagulation (CC) process in terms of COD removal. The initial comparison at a current density of 348 A/m2 showed that the EC process has higher treatment performance than the CC process. Second, the effects of process variables such as electrode material, current density (from 348 to 631 A/m2), pH, treatment cost, and operating time on the COD and NH4-N removal efficiencies of the EC process were investigated. The search for an appropriate electrode type for EC showed that aluminum provides higher COD removal (56%) than iron (35%) at the end of a 30 min operating time. Finally, EC experiments were continued to determine the efficiency of ammonia removal and the effects of current density, mixing, and aeration. All the findings of the study revealed that EC can be used as one step of a combined leachate treatment.
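    The role of electrode material and operating time in EC follows from Faraday's law: the mass of coagulant metal dissolved from the sacrificial anode is m = I·t·M/(z·F). A small sketch (the current and duration below are illustrative values, not operating conditions from the paper):

```python
# Theoretical anode dissolution in electrocoagulation via Faraday's law:
# m = I * t * M / (z * F), with current I (A), time t (s),
# molar mass M (g/mol), and valence z.
F = 96485.0  # Faraday constant, C/mol

def dissolved_mass_g(current_A, time_s, molar_mass_g_mol, z):
    return current_A * time_s * molar_mass_g_mol / (z * F)

# illustrative case: 5 A for 30 minutes
m_al = dissolved_mass_g(5.0, 30 * 60, 26.98, 3)  # aluminum: M = 26.98 g/mol, z = 3
m_fe = dissolved_mass_g(5.0, 30 * 60, 55.85, 2)  # iron: M = 55.85 g/mol, z = 2
print(round(m_al, 3), round(m_fe, 3))
```

    Note that iron dissolves more mass per coulomb than aluminum, so the higher COD removal observed with aluminum reflects coagulant chemistry rather than dose alone.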

  1. Optimization of the synthesis process of an iron oxide nanocatalyst supported on activated carbon for the inactivation of Ascaris eggs in water using the heterogeneous Fenton-like reaction.

    PubMed

    Morales-Pérez, Ariadna A; Maravilla, Pablo; Solís-López, Myriam; Schouwenaars, Rafael; Durán-Moreno, Alfonso; Ramírez-Zamora, Rosa-María

    2016-01-01

An experimental design methodology was used to optimize the synthesis of an iron-supported nanocatalyst as well as the inactivation of Ascaris eggs (Ae) using this material. A factor screening design was used to identify the significant experimental factors for the nanocatalyst support (%Fe supported (w/w), calcination temperature and time) and for the inactivation process, a heterogeneous Fenton-like reaction (H2O2 dose, Fe/H2O2 mass ratio, pH and reaction time). The significant factors were then optimized using a face-centered central composite design. The optimal operating conditions for both processes were estimated with a statistical model and implemented experimentally with five replicates. The predicted Ae inactivation rate was close to the laboratory results. At the optimal operating conditions of nanocatalyst production and the Ae inactivation process, the Ascaris ova showed genomic damage beyond the point of cell repair, showing that this advanced oxidation process was highly efficient for inactivating this pathogen.
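    A face-centered central composite design of the kind used here is straightforward to generate in coded units: 2^k factorial corners, 2k axial points placed at ±1 on the face centers (α = 1), plus a center point. A sketch for a generic number of factors (the paper's actual design may have included replicate center points):

```python
from itertools import product

import numpy as np

def face_centered_ccd(k):
    """Face-centered central composite design in coded units:
    2^k factorial corners, 2k axial points at +/-1 (alpha = 1), one center point."""
    corners = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -1.0       # low face center for factor i
        axial[2 * i + 1, i] = 1.0    # high face center for factor i
    center = np.zeros((1, k))
    return np.vstack([corners, axial, center])

# 4 factors (e.g. H2O2 dose, Fe/H2O2 ratio, pH, reaction time): 16 + 8 + 1 = 25 runs
design = face_centered_ccd(4)
print(design.shape)
```

    Because all points lie within the ±1 cube, a face-centered design keeps every run inside the experimentally feasible region, which is why it is often preferred over a rotatable design when factor limits are hard constraints.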

  2. Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a cryogenic system to be used for real-time fault isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown, using a suite of complementary software tools that alert operators to anomalies and failures in real time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a cryogenic system. Through their development and review, a set of modeling conventions and best practices was established. The prototype FFM development also served as a pathfinder for future FFM development processes. This paper documents the rationale and considerations for building robust FFMs that can easily be transitioned to a real-time operating environment.

  3. Experience Transitioning Models and Data at the NOAA Space Weather Prediction Center

    NASA Astrophysics Data System (ADS)

    Berger, Thomas

    2016-07-01

The NOAA Space Weather Prediction Center has a long history of transitioning research data and models into operations, along with the validation activities this requires. The first stage in this process involves demonstrating that the capability has sufficient value to customers to justify the cost of transitioning it and of running it continuously and reliably in operations. Once the overall value is demonstrated, a substantial effort is then required to develop operational software from the research codes. The next stage is to implement and test the software and product generation on the operational computers. Finally, effort must be devoted to establishing long-term measures of performance, maintaining the software, and working with forecasters, customers, and researchers to improve the operational capabilities over time. This multi-stage process of identifying, transitioning, and improving operational space weather capabilities will be discussed using recent examples. Plans for future activities will also be described.

  4. Oceanic Flights and Airspace: Improving Efficiency by Trajectory-Based Operations

    NASA Technical Reports Server (NTRS)

    Fernandes, Alicia Borgman; Rebollo, Juan; Koch, Michael

    2016-01-01

    Oceanic operations suffer from multiple inefficiencies, including pre-departure planning that does not adequately consider uncertainty in the proposed trajectory, restrictions on the routes that a flight operator can choose for an oceanic crossing, time-consuming processes and procedures for amending en route trajectories, and difficulties exchanging data between Flight Information Regions (FIRs). These inefficiencies cause aircraft to fly suboptimal trajectories, burning fuel and time that could be conserved. A concept to support integration of existing and emerging capabilities and concepts is needed to transition to an airspace system that employs Trajectory Based Operations (TBO) to improve efficiency and safety in oceanic operations. This paper describes such a concept and the results of preliminary activities to evaluate the concept, including a stakeholder feedback activity, user needs analysis, and high level benefits analysis.

  5. Real Time, On Line Crop Monitoring and Analysis with Near Global Landsat-class Mosaics

    NASA Astrophysics Data System (ADS)

    Varlyguin, D.; Hulina, S.; Crutchfield, J.; Reynolds, C. A.; Frantz, R.

    2015-12-01

The presentation will discuss the current status of GDA technology for the operational, automated generation of 10-30 meter near-global mosaics of Landsat-class data for visualization, monitoring, and analysis. The current version of the mosaic combines Landsat 8 and Landsat 7; Sentinel-2A imagery will be added once it is operationally available. The mosaics are calibrated to surface reflectance and are analysis ready. They offer the full spatial resolution and all multi-spectral bands of the source imagery. Each mosaic covers all major agricultural regions of the world and a 16-day time window; dates from 2014 to the present are supported. The mosaics are updated in real time, as soon as GDA downloads Landsat imagery, calibrates it to surface reflectance, and generates data gap masks (all typically under 10 minutes for a Landsat scene). The technology eliminates the complex, multi-step, hands-on process of data preparation and provides imagery ready for repetitive, field-to-country analysis of crop conditions, progress, acreages, yield, and production. The mosaics can be used for real-time, on-line interactive mapping and time-series drilling via the GeoSynergy webGIS platform. The imagery is of great value for improved, persistent monitoring of global croplands and for operational in-season analysis and mapping of crops across the globe in the USDA FAS purview, as mandated by the US government. The presentation will review operational processing of Landsat-class mosaics in support of USDA FAS efforts and will look ahead to 2015 and beyond.

  6. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this becomes an important issue: e.g., a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of halftoning by screening to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation applied to a JPEG-compressed low-quality image is also described; it de-noises the image and enhances its contours.
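    For context, the conventional spatial-domain screening operation that the paper moves into the DCT domain is simply a per-pixel comparison against a tiled threshold mask. The sketch below shows only that baseline operation with an ordinary Bayer mask; it does not reproduce the paper's DCT-domain formulation.

```python
import numpy as np

# 4x4 Bayer ordered-dither matrix, normalised to thresholds in [0, 1)
bayer4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def screen(gray, mask=bayer4):
    """Halftone a grayscale image with values in [0, 1] by comparing each
    pixel against a threshold mask tiled over the image (screening)."""
    h, w = gray.shape
    mh, mw = mask.shape
    tiled = np.tile(mask, (h // mh + 1, w // mw + 1))[:h, :w]
    return (gray > tiled).astype(np.uint8)

# horizontal gray ramp: dark pixels map to 0, light pixels to 1
gradient = np.linspace(0, 1, 64).reshape(1, -1).repeat(16, axis=0)
halftone = screen(gradient)
print(halftone.shape, halftone.dtype)
```

    The DCT-domain version computes an equivalent thresholding on the JPEG coefficient blocks directly, avoiding the decompress-process-recompress cycle the paper describes.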

  7. Real-Time Reconfigurable Adaptive Speech Recognition Command and Control Apparatus and Method

    NASA Technical Reports Server (NTRS)

    Salazar, George A. (Inventor); Haynes, Dena S. (Inventor); Sommers, Marc J. (Inventor)

    1998-01-01

An adaptive speech recognition and control system, and a method for controlling various mechanisms and systems in response to spoken instructions, are discussed; spoken commands direct the system to the appropriate memory nodes and to the memory templates corresponding to the voiced command. Spoken commands from any of a group of operators for which the system is trained may be identified, and voice templates are updated as required in response to changes in the pronunciation and voice characteristics of any of those operators over time. Provisions are made both for near-real-time retraining of the system with respect to individual terms that are determined not to be positively identified, and for an overall system training and updating process in which recognition of each command and vocabulary term is checked, and in which the memory templates are retrained if necessary for the respective commands or vocabulary terms with respect to the operator currently using the system. In one embodiment, the system includes input circuitry connected to a microphone and including signal processing and control sections for sensing the level of vocabulary recognition over a given period and, if recognition performance falls below a given level, processing audio-derived signals to enhance the recognition performance of the system.

  8. Affordable multisensor digital video architecture for 360° situational awareness displays

    NASA Astrophysics Data System (ADS)

    Scheiner, Steven P.; Khan, Dina A.; Marecki, Alexander L.; Berman, David A.; Carberry, Dana

    2011-06-01

One of the major challenges facing today's military ground combat vehicle operations is the ability to achieve and maintain full-spectrum situational awareness while under armor (i.e., closed hatch). Basic tasks such as driving, maintaining local situational awareness, surveillance, and targeting will therefore require a high-density array of real-time information to be processed, distributed, and presented to the vehicle operators and crew in near real time (i.e., with low latency). Advances in display and sensor technologies are providing never-before-seen opportunities to supply large amounts of high-fidelity imagery and video to the vehicle operators and crew in real time. To fully realize the advantages of these emerging display and sensor technologies, an underlying digital architecture must be developed that is capable of processing these large amounts of video and data from separate sensor systems and distributing them simultaneously within the vehicle to multiple vehicle operators and crew. This paper examines the systems and software engineering efforts required to overcome these challenges and addresses the development of an affordable, integrated digital video architecture. The approaches evaluated will give both current and future ground combat vehicle systems the flexibility to readily adopt emerging display and sensor technologies, while optimizing the Warfighter Machine Interface (WMI), minimizing lifecycle costs, and improving the survivability of the vehicle crew working in closed-hatch systems during complex ground combat operations.

  9. Application of underground microseismic monitoring for ground failure and secure longwall coal mining operation: A case study in an Indian mine

    NASA Astrophysics Data System (ADS)

    Ghosh, G. K.; Sivakumar, C.

    2018-03-01

The longwall mining technique has been widely used around the globe because of its safe mining process. However, mining operations are suspended when problems arise, such as roof falls, crack and fracture propagation in the roof, and complex roof strata behavior. To overcome these problems, an underground real-time microseismic monitoring technique was implemented in the working panel P2 of the Rajendra longwall underground coal mine at South Eastern Coalfields Limited (SECL), India. The target coal seams at panel P2 lie at depths of 70 m to 76 m. In this process, 10 to 15 uniaxial geophones were placed inside boreholes at depths of 40 m to 60 m over the working panel, in rock with high rock quality designation values for better seismic signal. Microseismic events were recorded with magnitudes ranging from -5 to 2 on the Richter scale. Time-series processing was carried out to obtain seismic parameters such as activity rate, potential energy, viscosity rate, seismic moment, energy index, and apparent volume with respect to time. These parameters helped trace the events, understand crack and fracture propagation, and locate both high- and low-stress distribution zones prior to roof fall occurrence. In most cases, the events could be divided into three stages: initial or preliminary, middle or building, and final or falling. The results of this study reveal that underground microseismic monitoring provides sufficient prior information on underground weighting events. The information gathered during the study was conveyed to mining personnel in advance of roof fall events, permitting appropriate action to be taken for safer mining operations and risk reduction during longwall operation.

  10. Operational Monitoring of GOME-2 and IASI Level 1 Product Processing at EUMETSAT

    NASA Astrophysics Data System (ADS)

    Livschitz, Yakov; Munro, Rosemary; Lang, Rüdiger; Fiedler, Lars; Dyer, Richard; Eisinger, Michael

    2010-05-01

The growing complexity of operational level 1 radiance products from Low Earth Orbiting (LEO) platforms such as EUMETSAT's Metop series makes near-real-time monitoring of product quality a challenging task. The main challenge is to provide a monitoring system flexible and robust enough to identify and react to anomalies that may be previously unknown to the system, and to provide all the means and parameters necessary to support efficient ad-hoc analysis of an incident. The operational monitoring system developed at EUMETSAT for GOME-2 and IASI level 1 data allows near-real-time monitoring of operational products and instrument health in a robust and flexible fashion. For effective information management, the system is based on a relational database (Oracle). An Extract, Transform, Load (ETL) process transforms products in EUMETSAT Polar System (EPS) format into relational data structures. The identification of commonalities between products and instruments allows a database structure designed in such a way that different data can be analyzed using the same business intelligence functionality. Interactive analysis software implementing modern data mining techniques is also provided for a detailed look into the data. The system is used effectively for day-to-day monitoring, long-term reporting, instrument degradation analysis, and ad-hoc queries in case of unexpected instrument or processing behaviour. Having data from different sources for a single instrument, and even from different instruments, platforms or numerical weather prediction, within the same database allows effective cross-comparison and searching for correlated parameters. Automatic alarms, raised by checking for deviations of certain parameters, data losses and other events, significantly reduce the time needed to monitor the processing on a day-to-day basis.

  12. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

Attention is given to a working vector quantization processor for speech coding based on a first-generation VLSI chip that efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbit/s. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.
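    The pattern-matching operation the chip accelerates is a full-search nearest-neighbor lookup over the codebook. A minimal software sketch (the codebook size and vector dimension below are arbitrary; the actual chip implements this search in hardware):

```python
import numpy as np

def vq_encode(frames, codebook):
    """Full-search vector quantization: for each input vector, return the
    index of the nearest codebook entry under squared Euclidean distance."""
    # ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2; the ||x||^2 term is the same
    # for every codevector, so it can be dropped from the argmin
    d = -2.0 * frames @ codebook.T + (codebook ** 2).sum(axis=1)
    return d.argmin(axis=1)

rng = np.random.default_rng(1)
codebook = rng.standard_normal((256, 8))   # 256 codevectors of dimension 8
# test frames: slightly perturbed copies of codevectors 3, 42 and 7
frames = codebook[[3, 42, 7]] + 0.01 * rng.standard_normal((3, 8))
print(vq_encode(frames, codebook))
```

    In a coder, only the resulting indices are transmitted; the decoder looks the codevectors back up in its copy of the codebook, which is what makes the search step the computational bottleneck that motivated the VLSI implementation.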

  13. Influence of thermophilic aerobic digestion as a sludge pre-treatment and solids retention time of mesophilic anaerobic digestion on the methane production, sludge digestion and microbial communities in a sequential digestion process.

    PubMed

    Jang, Hyun Min; Cho, Hyun Uk; Park, Sang Kyu; Ha, Jeong Hyub; Park, Jong Moon

    2014-01-01

In this study, the changes in sludge reduction, methane production and microbial community structure in a process combining two-stage thermophilic aerobic digestion (TAD) and mesophilic anaerobic digestion (MAD) were investigated under different solids retention times (SRTs) between 10 and 40 days. The TAD reactor (RTAD) was operated with a 1-day SRT and the MAD reactor (RMAD) was operated at three different SRTs: 39, 19 and 9 days. For comparison, a control MAD (RCONTROL) was operated at three different SRTs of 40, 20 and 10 days. Our results reveal that the sequential TAD-MAD process had an approximately 42% higher methane production rate (MPR) and 15% higher TCOD removal than RCONTROL when the SRT decreased from 40 to 20 days. Denaturing gradient gel electrophoresis (DGGE) and real-time PCR results indicate that RMAD maintained a more diverse bacterial and archaeal population than RCONTROL, owing to the biological TAD pre-treatment. In RTAD, Ureibacillus thermophiles and Bacterium thermus were the major contributors to the increase in soluble organic matter. In contrast, Methanosaeta concilii, a strictly aceticlastic methanogen, showed the highest population throughout operation at all SRTs in RMAD. Interestingly, as the SRT decreased to 20 days, the syntrophic VFA-oxidizing bacterium Clostridium ultunense sp. and the hydrogenotrophic methanogen Methanobacterium beijingense were detected in RMAD and RCONTROL. Meanwhile, the proportions of archaea to total microbes in RMAD and RCONTROL showed their highest values of 10.5% and 6.5%, respectively, at 20-day SRT operation. Collectively, these results demonstrate that the increased COD removal and methane production at different SRTs in RMAD may be attributed to increased synergism among microbial species, through improved hydrolysis of the rate-limiting step in sludge with the help of the biological TAD pre-treatment. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. An operations management system for the Space Station

    NASA Astrophysics Data System (ADS)

    Savage, Terry R.

    A description is provided of an Operations Management System (OMS) for the planned NASA Space Station. The OMS would be distributed both in space and on the ground, and provide a transparent interface to the communications and data processing facilities of the Space Station Program. The allocation of OMS responsibilities has, in the most current Space Station design, been fragmented among the Communications and Tracking Subsystem (CTS), the Data Management System (DMS), and a redefined OMS. In this current view, OMS is less of a participant in the real-time processing, and more an overseer of the health and management of the Space Station operations.

  15. Digital ultrasonic signal processing: Primary ultrasonics task and transducer characterization use and detailed description

    NASA Technical Reports Server (NTRS)

    Hammond, P. L.

    1979-01-01

    This manual describes the use of the primary ultrasonics task (PUT) and the transducer characterization system (XC) for the collection, processing, and recording of data received from a pulse-echo ultrasonic system. Both PUT and XC include five primary functions common to many real-time data acquisition systems. Some of these functions are implemented using the same code in both systems. The solicitation and acceptance of operator control input is emphasized. Those operations not under user control are explained.

  16. Microbial solubilization of phosphate

    DOEpatents

    Rogers, R.D.; Wolfram, J.H.

    1993-10-26

    A process is provided for solubilizing phosphate from phosphate containing ore by treatment with microorganisms which comprises forming an aqueous mixture of phosphate ore, microorganisms operable for solubilizing phosphate from the phosphate ore and maintaining the aqueous mixture for a period of time and under conditions operable to effect the microbial solubilization process. An aqueous solution containing soluble phosphorus can be separated from the reacted mixture by precipitation, solvent extraction, selective membrane, exchange resin or gravity methods to recover phosphate from the aqueous solution. 6 figures.

  17. Microbial solubilization of phosphate

    DOEpatents

    Rogers, Robert D.; Wolfram, James H.

    1993-01-01

A process is provided for solubilizing phosphate from phosphate-containing ore by treatment with microorganisms which comprises forming an aqueous mixture of phosphate ore, microorganisms operable for solubilizing phosphate from the phosphate ore and maintaining the aqueous mixture for a period of time and under conditions operable to effect the microbial solubilization process. An aqueous solution containing soluble phosphorus can be separated from the reacted mixture by precipitation, solvent extraction, selective membrane, exchange resin or gravity methods to recover phosphate from the aqueous solution.

  18. 40 CFR 63.984 - Fuel gas systems and processes to which storage vessel, transfer rack, or equipment leak...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... times when regulated material emissions are routed to it. (2) The owner or operator of a transfer rack... function in that process; (ii) Transformed by chemical reaction into materials that are not regulated...

  19. 40 CFR 63.984 - Fuel gas systems and processes to which storage vessel, transfer rack, or equipment leak...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... times when regulated material emissions are routed to it. (2) The owner or operator of a transfer rack... function in that process; (ii) Transformed by chemical reaction into materials that are not regulated...

  20. Nanofiber adsorbents for high productivity continuous downstream processing.

    PubMed

    Hardick, Oliver; Dods, Stewart; Stevens, Bob; Bracewell, Daniel G

    2015-11-10

    An ever increasing focus is being placed on the manufacturing costs of biotherapeutics. The drive towards continuous processing offers one opportunity to address these costs through the advantages it offers. Continuous operation presents opportunities for real-time process monitoring and automated control with potential benefits including predictable product specification, reduced labour costs, and integration with other continuous processes. Specifically to chromatographic operations continuous processing presents an opportunity to use expensive media more efficiently while reducing their size and therefore cost. Here for the first time we show how a new adsorbent material (cellulosic nanofibers) having advantageous convective mass transfer properties can be combined with a high frequency simulated moving bed (SMB) design to provide superior productivity in a simple bioseparation. Electrospun polymeric nanofiber adsorbents offer an alternative ligand support surface for bioseparations. Their non-woven fiber structure with diameters in the sub-micron range creates a remarkably high surface area material that allows for rapid convective flow operations. A proof of concept study demonstrated the performance of an anion exchange nanofiber adsorbent based on criteria including flow and mass transfer properties, binding capacity, reproducibility and life-cycle performance. Binding capacities of the DEAE adsorbents were demonstrated to be 10mg/mL, this is indeed only a fraction of what is achievable from porous bead resins but in combination with a very high flowrate, the productivity of the nanofiber system is shown to be significant. Suitable packing into a flow distribution device has allowed for reproducible bind-elute operations at flowrates of 2,400 cm/h, many times greater than those used in typical beaded systems. These characteristics make them ideal candidates for operation in continuous chromatography systems. 
A SMB system was developed and optimised to demonstrate the productivity of nanofiber adsorbents through rapid bind-elute cycle times of 7s which resulted in a 15-fold increase in productivity compared with packed bed resins. Reproducible performance of BSA purification was demonstrated using a 2-component protein solution of BSA and cytochrome c. The SMB system exploits the advantageous convective mass transfer properties of nanofiber adsorbents to provide productivities much greater than those achievable with conventional chromatography media. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  1. Integrated Logistics Support Analysis of the International Space Station Alpha: An Overview of the Maintenance Time Dependent Parameter Prediction Methods Enhancement

    NASA Technical Reports Server (NTRS)

    Sepehry-Fard, F.; Coulthard, Maurice H.

    1995-01-01

The objective of this publication is to introduce enhancements to the overall reliability and maintainability assessment methods for the International Space Station. It is essential that the process used to predict the values of maintenance time-dependent parameters, such as mean time between failures (MTBF) over time, does not itself introduce uncontrolled deviation into the results of the ILS analysis, such as life cycle costs and spares calculations. Furthermore, the very acute problems of micrometeorites, cosmic rays, flares, atomic oxygen, ionization effects, orbital plumes and all the other factors that differentiate maintainable space operations from non-maintainable space operations and/or ground operations must be accounted for. Therefore, these parameters need to be subjected to a special and complex process. Since reliability and maintainability strongly depend on the operating conditions encountered during the entire life of the International Space Station, it is important that such conditions be accurately identified at the beginning of the logistics support requirements process. Environmental conditions that exert a strong influence on the International Space Station are discussed in this report. Concurrent (combined) space environments may be more detrimental to the reliability and maintainability of the International Space Station than the effects of any single environment. In characterizing the logistics support requirements process, the developed design/test criteria must consider both single and combined environments in order to provide hardware capable of withstanding the hazards of the International Space Station profile. The effects of typical combined environments on the International Space Station are shown in a matrix relationship. 
Combinations of environments whose total effect is more damaging than the cumulative effects of the environments acting singly may include, for example, temperature, humidity, altitude, shock, and vibration while an item is being transported. The item must be examined for these effects from its acceptance through its end-of-life sequence.

  2. ISS Operations Cost Reductions Through Automation of Real-Time Planning Tasks

    NASA Technical Reports Server (NTRS)

    Hall, Timothy A.

    2011-01-01

In 2008, the Johnson Space Center's Mission Operations Directorate (MOD) management team challenged their organization to find ways to reduce the cost of International Space Station (ISS) console operations in the Mission Control Center (MCC). Each MOD organization was asked to identify projects that would help attain a goal of a 30% reduction in operating costs by 2012. The MOD Operations and Planning organization responded to this challenge by launching several software automation projects that would greatly improve ISS console operations and reduce staffing and operating costs. These projects allowed the MOD Operations organization to remove one full-time (7 x 24 x 365) ISS console position in 2010, with the plan of eliminating two full-time ISS console support positions by 2012. This accounts for an overall 10 EP reduction in staffing for the Operations and Planning organization. The automation projects focused on using software to automate many administrative and often repetitive tasks involved in processing ISS planning and daily operations information. This information was exchanged between the ground flight control teams in Houston and around the globe, as well as with the ISS astronaut crew. The tasks ranged from managing mission plan changes from around the globe, to uploading and downloading information to and from the ISS crew, to more complex tasks that required multiple decision points to process the data, track approvals and deliver it to the correct recipient across network and security boundaries. 
The software solutions leveraged several different technologies, including customized web applications and the implementation of an industry-standard web services architecture between several planning tools, as well as engaging a previously research-level technology (TRL 2-3) developed by Ames Research Center (ARC) that utilized an intelligent agent-based system to manage and automate file traffic flow, archiving of data, and generating console logs. This technology, called OCAMS (OCA (Orbital Communication System) Management System), is now considered TRL 9 and is in daily use in the Mission Control Center in support of ISS operations. These solutions have not only allowed for improved efficiency on console; since many of the previously manual data transfers are now automated, many error-prone manual steps have been removed, and the quality of the planning products has improved tremendously. This has also given the Planning Flight Controllers more time to focus on the abstract areas of the job (like the complexities of planning a mission for 6 international crew members with a global planning team), instead of being burdened with administrative tasks that took significant time each console shift to process. The resulting automation solutions have allowed the Operations and Planning organization to realize significant cost savings for the ISS program through 2020 and many of these solutions could be a viable

  3. Soft Real-Time PID Control on a VME Computer

    NASA Technical Reports Server (NTRS)

    Karayan, Vahag; Sander, Stanley; Cageao, Richard

    2007-01-01

microPID (uPID) is a computer program for real-time proportional + integral + derivative (PID) control of a translation stage in a Fourier-transform ultraviolet spectrometer. microPID implements a PID control loop over a position profile at a sampling rate of 8 kHz (sampling period 125 microseconds). The software runs in a stripped-down Linux operating system on a VersaModule Eurocard (VME) computer operating with a real-time priority queue, using an embedded controller, a 16-bit digital-to-analog converter (D/A) board, and a laser-positioning board (LPB). microPID consists of three main parts: (1) VME device-driver routines, (2) software that administers a custom protocol for serial communication with a control computer, and (3) a loop section that obtains the current position from an LPB-driver routine, calculates the ideal position from the profile, and calculates a new voltage command using an embedded PID routine, all within each sampling period. The voltage command is sent to the D/A board to control the stage. microPID uses special kernel headers to obtain microsecond timing resolution. Inasmuch as microPID implements a single-threaded process and all other processes are disabled, the Linux operating system acts as a soft real-time system.
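    The per-sample update described (read position, compare to the profile, compute a new command with a PID routine) can be sketched as follows. This is a minimal illustrative sketch, not microPID's actual code; the gains and the 125-microsecond period are assumptions for the example.

    ```python
    DT = 125e-6  # sampling period in seconds (8 kHz), as described above

    class PID:
        """Discrete PID controller evaluated once per sampling period."""

        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, setpoint, measured):
            # error between the ideal (profile) position and the measured one
            error = setpoint - measured
            # accumulate the integral term over the fixed sampling period
            self.integral += error * DT
            # backward-difference estimate of the error derivative
            derivative = (error - self.prev_error) / DT
            self.prev_error = error
            # the returned value plays the role of the voltage command
            return self.kp * error + self.ki * self.integral + self.kd * derivative
    ```

    In a real loop this `step` would run once per 125-microsecond period, with the result written to the D/A board.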

  4. Method of Data storing, collection and aggregation for definition of life-cycle resources of electromechanical equipment

    NASA Astrophysics Data System (ADS)

    Zhukovskiy, Y.; Koteleva, N.

    2017-10-01

Analysis of the technical and technological conditions under which emergencies arise during the operation of electromechanical equipment at mineral and raw materials enterprises shows that, when developing the basis for safe operation, it is necessary to take into account not only the technical condition of the equipment but also the non-stationarity of its operating conditions and of the operating parameters of the technological processes. Faults in individual machine parts that are not detected in time can lead to severe industrial accidents, as well as to unplanned downtime and loss of profit. That is why obtaining and processing the big data generated during the life cycle of electromechanical equipment is very important for assessing the current state of the equipment, timely diagnosing emergency and pre-emergency operating modes, estimating the residual resource, and predicting the technical state on the basis of machine learning. This article is dedicated to developing a method of data storing, collection and aggregation for definition of life-cycle resources of electromechanical equipment. This method can be used when working with big data and allows knowledge to be extracted from different data types: the plants' historical data and the factory's historical data. The plant data contain information about electromechanical equipment operation, and the factory data contain information about the production of electromechanical equipment.

  5. Chapter 5, "License Renewal and Aging Management for Continued Service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naus, Dan J

As of August 2011, there were 104 commercial nuclear power reactors licensed to operate in 31 states in the United States. Initial operating licenses in the United States are granted for a period of 40 years. In order to help assure an adequate energy supply, the USNRC has established a timely license renewal process and clear requirements that are needed to ensure safe plant operation for an extended plant life. The principles of license renewal and the basic requirements that address license renewal are identified, as well as additional sources of guidance that can be utilized as part of the license renewal process. Aging management program inspections and operating experience related to the concrete and steel containment structures are provided. Finally, several lessons learned are provided based on containment operating experience.

  6. Application of queueing models to multiprogrammed computer systems operating in a time-critical environment

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.

    1979-01-01

A model of a central processor (CPU) that services background applications in the presence of time-critical activity is presented. The CPU is viewed as an M/M/1 queueing system subject to periodic interrupts by a deterministic, time-critical process. The Laplace transform of the distribution of service times for the background applications is developed. The use of state-of-the-art queueing models for studying the background processing capability of time-critical computer systems is discussed, and the results of a model validation study that support this application of queueing models are presented.
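    The starting point of such a model is the classic M/M/1 mean response time, W = 1/(μ − λ). The sketch below computes it and adds a crude correction in which the periodic time-critical process simply removes a fixed fraction of CPU capacity from background work; this correction is an illustrative assumption, not the transform-based analysis of the report.

    ```python
    def mm1_mean_response(arrival_rate, service_rate):
        """Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda)."""
        if arrival_rate >= service_rate:
            raise ValueError("queue is unstable: arrival rate must be below service rate")
        return 1.0 / (service_rate - arrival_rate)

    def background_response(arrival_rate, service_rate, critical_fraction):
        # Crude approximation: the time-critical interrupts leave only a
        # (1 - critical_fraction) share of the CPU for background jobs,
        # which scales the effective service rate down accordingly.
        effective_mu = service_rate * (1.0 - critical_fraction)
        return mm1_mean_response(arrival_rate, effective_mu)
    ```

    For example, halving the available CPU (critical_fraction = 0.5) doubles the effective load seen by the background applications.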

  7. Effects of a malfunctional column on conventional and FeedCol-simulated moving bed chromatography performance.

    PubMed

    Song, Ji-Yeon; Oh, Donghoon; Lee, Chang-Ha

    2015-07-17

    The effects of a malfunctional column on the performance of a simulated moving bed (SMB) process were studied experimentally and theoretically. The experimental results of conventional four-zone SMB (2-2-2-2 configuration) and FeedCol operation (2-2-2-2 configuration with one feed column) with one malfunctional column were compared with simulation results of the corresponding SMB processes with a normal column configuration. The malfunctional column in SMB processes significantly deteriorated raffinate purity. However, the extract purity was equivalent or slightly improved compared with the corresponding normal SMB operation because the complete separation zone of the malfunctional column moved to a lower flow rate range in zones II and III. With the malfunctional column configuration, FeedCol operation gave better experimental performance (up to 7%) than conventional SMB operation because controlling product purity with FeedCol operation was more flexible through the use of two additional operating variables, injection time and injection length. Thus, compared with conventional SMB separation, extract with equivalent or slightly better purity could be produced from FeedCol operation even with a malfunctional column, while minimizing the decrease in raffinate purity (less than 2%). Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Soil formation: Chapter 6

    USGS Publications Warehouse

    Goldhaber, Martin B.; Banwart, Steven A.

    2015-01-01

Soil formation reflects the complex interaction of many factors, among the most important of which are (i) the nature of the soil parent material, (ii) regional climate, (iii) organisms, including humans, (iv) topography and (v) time. These processes operate in Earth's critical zone, the thin veneer of our planet where rock meets life. Understanding the operation of these soil-forming factors requires an interdisciplinary approach and is a necessary predicate to characterizing soil processes and functions, mitigating soil degradation and adapting soil management to environmental change. In this chapter, we discuss how these soil-forming factors operate both singly and in concert in natural and human-modified environments. We emphasize the role that soil organic matter plays in these processes to provide context for understanding the benefits that it bestows on humanity.

  9. 24 CFR 15.110 - What fees will HUD charge?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... duplicating machinery. The computer run time includes the cost of operating a central processing unit for that... Applies. (6) Computer run time (includes only mainframe search time not printing) The direct cost of... estimated fee is more than $250.00 or you have a history of failing to pay FOIA fees to HUD in a timely...

  10. Evaluation of telerobotic systems using an instrumented task board

    NASA Technical Reports Server (NTRS)

    Carroll, John D.; Gierow, Paul A.; Bryan, Thomas C.

    1991-01-01

    An instrumented task board was developed at NASA Marshall Space Flight Center (MSFC). An overview of the task board design, and current development status is presented. The task board was originally developed to evaluate operator performance using the Protoflight Manipulator Arm (PFMA) at MSFC. The task board evaluates tasks for Orbital Replacement Unit (ORU), fluid connect and transfers, electrical connect/disconnect, bolt running, and other basic tasks. The instrumented task board measures the 3-D forces and torques placed on the board, determines the robot arm's 3-D position relative to the task board using IR optics, and provides the information in real-time. The PFMA joint input signals can also be measured from a breakout box to evaluate the sensitivity or response of the arm operation to control commands. The data processing system provides the capability for post processing of time-history graphics and plots of the PFMA positions, the operator's actions, and the PFMA servo reactions in addition to real-time force/torque data presentation. The instrumented task board's most promising use is developing benchmarks for NASA centers for comparison and evaluation of telerobotic performance.

  11. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2006-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reductions in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.
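    The cited filtering approach (Taubin-style signal-processing smoothing) alternates a shrinking pass with an inflating pass so the data is smoothed without the shrinkage of pure Laplacian averaging. The following is a minimal 1-D sketch of that idea, with illustrative λ and μ values; it is not the Volume Grid Manipulator implementation.

    ```python
    import numpy as np

    def laplacian_step(x, factor):
        # Umbrella-operator update: move each interior sample toward the
        # average of its two neighbors, scaled by `factor`; endpoints fixed.
        out = x.copy()
        out[1:-1] += factor * (0.5 * (x[:-2] + x[2:]) - x[1:-1])
        return out

    def taubin_smooth(x, lam=0.33, mu=-0.34, iterations=10):
        # Alternate a shrink pass (lam > 0) with an inflate pass (mu < 0),
        # attenuating high-frequency noise while roughly preserving shape.
        for _ in range(iterations):
            x = laplacian_step(x, lam)
            x = laplacian_step(x, mu)
        return x
    ```

    Applied per coordinate in arclength space, the same filter smooths grid lines; the reverse transformation then recovers the Cartesian grid, as the abstract describes.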

  12. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2004-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reduction in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.

  13. How smart is your BEOL? productivity improvement through intelligent automation

    NASA Astrophysics Data System (ADS)

    Schulz, Kristian; Egodage, Kokila; Tabbone, Gilles; Garetto, Anthony

    2017-07-01

The back end of line (BEOL) workflow in the mask shop still has crucial issues throughout all standard steps: inspection, disposition, photomask repair and verification of repair success. All involved tools are typically run by highly trained operators or engineers who set up jobs and recipes, execute tasks, analyze data and make decisions based on the results. No matter how experienced the operators are and how well the systems perform, one aspect always limits the productivity and effectiveness of the operation: the human aspect. Human errors can range from seemingly harmless slip-ups to mistakes with serious and direct economic impact, including mask rejects, customer returns and line stops in the wafer fab. Even with the introduction of quality control mechanisms that help to reduce these critical but unavoidable faults, they can never be completely eliminated. Therefore the mask shop BEOL cannot run in the most efficient manner, as unnecessary time and money are spent on processes that remain labor intensive. The best way to address this issue is to automate critical segments of the workflow that are prone to human errors. In fact, manufacturing errors can occur at each BEOL step where operators intervene. These processes comprise image evaluation, setting up tool recipes, data handling and all the other tedious but required steps. With the help of smart solutions, operators can work more efficiently and dedicate their time to less mundane tasks. Smart solutions connect tools, taking over the data handling and analysis typically performed by operators and engineers. These solutions not only eliminate the human error factor in the manufacturing process but can provide benefits in terms of shorter cycle times, reduced bottlenecks and prediction of an optimized workflow. In addition, such software solutions consist of building blocks that seamlessly integrate applications and allow customers to use tailored solutions. 
To accommodate the variability and complexity in mask shops today, individual workflows can be supported according to the needs of any particular manufacturing line with respect to the necessary measurement and production steps. At the same time, asset efficiency is increased by avoiding unneeded cycle time and waste of resources from process steps that are not crucial for a given technology. In this paper we present details of which areas of the BEOL can benefit most from intelligent automation, what solutions exist, and a quantification of the benefits to a mask shop with full automation through the use of a back end of line model.

  14. Induction graphitizing furnace acceptance test report

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The induction furnace was designed to provide the controlled temperature and environment required for the post-cure, carbonization and graphitization processes for the fabrication of a fibrous graphite NERVA nozzle extension. The acceptance testing required six tests and a total operating time of 298 hrs. Low temperature mode operations, 120 to 850 C, were completed in one test run. High temperature mode operations, 120 to 2750 C, were completed during five tests.

  15. Operability engineering in the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Wilkinson, Belinda

    1993-01-01

Many operability problems exist at the three Deep Space Communications Complexes (DSCCs) of the Deep Space Network (DSN). Four years ago, the position of DSN Operability Engineer was created to provide the opportunity for someone to take a system-level approach to solving these problems. Since that time, a process has been developed for personnel and development engineers and for enforcing user interface standards in software designed for the DSCCs. Plans are for the participation of operations personnel in the product life cycle to expand in the future.

  16. Measurement of SIFT operating system overhead

    NASA Technical Reports Server (NTRS)

    Palumbo, D. L.; Butler, R. W.

    1985-01-01

The overhead of the software-implemented fault tolerance (SIFT) operating system was measured. Several versions of the operating system evolved, each representing a different strategy employed to improve the measured performance. Three of these versions are analyzed, and the internal data structures of the operating systems are discussed. The overhead of the SIFT operating system was found to be of two types: vote overhead and executive task overhead. Both types were significant in all versions of the system. Improvements substantially reduced this overhead; even so, the operating system consumed well over 50% of the available processing time.

  17. Optimization of the Ethanol Recycling Reflux Extraction Process for Saponins Using a Design Space Approach

    PubMed Central

    Gong, Xingchu; Zhang, Ying; Pan, Jianyang; Qu, Haibin

    2014-01-01

    A solvent recycling reflux extraction process for Panax notoginseng was optimized using a design space approach to improve the batch-to-batch consistency of the extract. Saponin yields, total saponin purity, and pigment yield were defined as the process critical quality attributes (CQAs). Ethanol content, extraction time, and the ratio of the recycling ethanol flow rate and initial solvent volume in the extraction tank (RES) were identified as the critical process parameters (CPPs) via quantitative risk assessment. Box-Behnken design experiments were performed. Quadratic models between CPPs and process CQAs were developed, with determination coefficients higher than 0.88. As the ethanol concentration decreases, saponin yields first increase and then decrease. A longer extraction time leads to higher yields of the ginsenosides Rb1 and Rd. The total saponin purity increases as the ethanol concentration increases. The pigment yield increases as the ethanol concentration decreases or extraction time increases. The design space was calculated using a Monte-Carlo simulation method with an acceptable probability of 0.90. Normal operation ranges to attain process CQA criteria with a probability of more than 0.914 are recommended as follows: ethanol content of 79–82%, extraction time of 6.1–7.1 h, and RES of 0.039–0.040 min−1. Most of the results of the verification experiments agreed well with the predictions. The verification experiment results showed that the selection of proper operating ethanol content, extraction time, and RES within the design space can ensure that the CQA criteria are met. PMID:25470598
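    The quadratic models relating the three CPPs to each CQA have the standard response-surface form y = b0 + Σ bi·xi + Σ bij·xi·xj + Σ bii·xi². A minimal two-factor least-squares sketch of such a fit is shown below; the factor levels and coefficients are illustrative, not the study's Box-Behnken data.

    ```python
    import numpy as np

    def quadratic_design_matrix(x1, x2):
        # Columns: intercept, linear terms, interaction, pure quadratic terms.
        return np.column_stack([
            np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2,
        ])

    def fit_quadratic(x1, x2, y):
        # Ordinary least squares fit of the quadratic response-surface model.
        X = quadratic_design_matrix(x1, x2)
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    def predict(coeffs, x1, x2):
        return quadratic_design_matrix(x1, x2) @ coeffs
    ```

    A design space is then obtained by evaluating such fitted models (here via Monte Carlo sampling of parameter uncertainty, per the abstract) over the CPP ranges and keeping the region where all CQA criteria are met with acceptable probability.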

  18. Optimization of a thermal hydrolysis process for sludge pre-treatment.

    PubMed

    Sapkaite, I; Barrado, E; Fdz-Polanco, F; Pérez-Elvira, S I

    2017-05-01

    At industrial scale, thermal hydrolysis (TH) is the most widely used process to enhance the biodegradability of the sludge produced in wastewater treatment plants. Through a statistically guided Box-Behnken experimental design, the present study analyses the effect of TH as a pre-treatment applied to activated sludge. The selected process variables were temperature (130-180 °C), time (5-50 min) and decompression mode (slow or steam-explosion effect), and the parameters evaluated were sludge solubilisation and methane production by anaerobic digestion. A quadratic polynomial model was generated to compare process performance across the 15 combinations of operating conditions obtained by varying the process variables. The statistical analysis showed that methane production and solubility were significantly affected by pre-treatment time and temperature. During high-intensity pre-treatment (high temperature and long times), solubility increased sharply while methane production exhibited the opposite behaviour, indicating the formation of soluble but non-biodegradable materials. Therefore, solubilisation is not a reliable parameter for quantifying the efficiency of a thermal hydrolysis pre-treatment, since it is not directly related to methane production. Based on the optimization of the operational parameters, the estimated optimal thermal hydrolysis conditions to enhance sewage sludge digestion were: 140-170 °C heating temperature, 5-35 min residence time, and one sudden decompression. Copyright © 2017 Elsevier Ltd. All rights reserved.
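The quadratic response-surface modelling used in Box-Behnken studies like this one can be sketched briefly. The coded factor settings and responses below are synthetic, purely to show the mechanics of fitting such a model and computing its determination coefficient.

```python
import numpy as np

# Two coded factors (e.g. temperature, time) at Box-Behnken-style levels.
# The response values are synthetic, not the study's measurements.
X_raw = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0],
                  [0, -1], [0, 1], [-1, 0], [1, 0]], dtype=float)
y = np.array([2.1, 2.8, 3.9, 3.5, 3.3, 2.9, 3.4, 2.5, 3.8])

def quadratic_design_matrix(X):
    """Columns for y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1*x2])

A = quadratic_design_matrix(X_raw)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit
y_hat = A @ coef

# Coefficient of determination R^2, the fit statistic reported in such studies
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

The fitted surface can then be searched for the factor combination optimizing the response, which is how optimal pre-treatment conditions are typically estimated from such designs.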

  19. Processing of Mars Exploration Rover Imagery for Science and Operations Planning

    NASA Technical Reports Server (NTRS)

    Alexander, Douglass A.; Deen, Robert G.; Andres, Paul M.; Zamani, Payam; Mortensen, Helen B.; Chen, Amy C.; Cayanan, Michael K.; Hall, Jeffrey R.; Klochko, Vadim S.; Pariser, Oleg; hide

    2006-01-01

    The twin Mars Exploration Rovers (MER) delivered an unprecedented array of image sensors to the Mars surface. These cameras were essential for operations, science, and public engagement. The Multimission Image Processing Laboratory (MIPL) at the Jet Propulsion Laboratory was responsible for the first-order processing of all of the images returned by these cameras. This processing included reconstruction of the original images, systematic and ad hoc generation of a wide variety of products derived from those images, and delivery of the data to a variety of customers, within tight time constraints. A combination of automated and manual processes was developed to meet these requirements, with significant inheritance from prior missions. This paper describes the image products generated by MIPL for MER and the processes used to produce and deliver them.

  20. A neural network strategy for end-point optimization of batch processes.

    PubMed

    Krothapally, M; Palanki, S

    1999-01-01

    The traditional way of operating batch processes has been to utilize an open-loop "golden recipe". However, there can be substantial batch-to-batch variation in process conditions, and this open-loop strategy can lead to non-optimal operation. In this paper, a new approach is presented for end-point optimization of batch processes by utilizing neural networks. This strategy involves the training of two neural networks: one to predict switching times and the other to predict the input profile in the singular region. This approach alleviates the computational problems associated with the classical Pontryagin's approach and the nonlinear programming approach. The efficacy of this scheme is illustrated via simulation of a fed-batch fermentation.
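A minimal sketch of the first of the two networks, one that maps a batch's initial condition to a switching time, is shown below. The architecture, the synthetic training data, and the target function are all invented for illustration; they are not the paper's fermentation model.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.5, 1.5, size=(200, 1))       # synthetic initial conditions
t_switch = 2.0 + 0.8 * x + 0.3 * x**2          # synthetic "optimal" switching times

# One hidden layer of 8 tanh units, trained by plain full-batch gradient descent
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(2000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - t_switch
    # backpropagation of the squared-error loss
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - t_switch) ** 2))
```

Once trained offline on solutions of the optimal-control problem, such a network predicts the switching time for a new batch directly, avoiding a fresh Pontryagin or nonlinear-programming solve at run time.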

  1. Latest processing status and quality assessment of the GOMOS, MIPAS and SCIAMACHY ESA dataset

    NASA Astrophysics Data System (ADS)

    Niro, F.; Brizzi, G.; Saavedra de Miguel, L.; Scarpino, G.; Dehn, A.; Fehr, T.; von Kuhlmann, R.

    2011-12-01

    GOMOS, MIPAS and SCIAMACHY have been successfully observing the changing Earth's atmosphere since the launch of the ENVISAT-ESA platform in March 2002. The measurements recorded by these instruments are relevant to the atmospheric-chemistry community both in their time extent and in their variety of observing geometries and techniques. In order to fully exploit these measurements, it is crucial to maintain good reliability in the data processing and distribution and to continuously improve the scientific output. The goal is to meet the evolving needs of both near-real-time and research applications. Within this frame, the ESA operational processor remains the reference code, although many scientific algorithms are now available to users. In fact, the ESA algorithm has a well-established calibration and validation scheme, a certified quality assessment process, and the ability to reach a wide user community. Moreover, the ESA algorithm upgrade procedures and the re-processing performance have improved considerably during the last two years, thanks to recent updates of the Ground Segment infrastructure and overall organization. The aim of this paper is to promote the usage and stress the quality of the ESA operational dataset for the GOMOS, MIPAS and SCIAMACHY missions. The recent upgrades to the ESA processors (GOMOS V6, MIPAS V5 and SCIAMACHY V5) will be presented, with detailed information on improvements in the scientific output and preliminary validation results. The planned algorithm evolution and ongoing re-processing campaigns, which involve the adoption of advanced set-ups such as the MIPAS V6 re-processing on a cloud-computing system, will also be described. Finally, the quality control process that guarantees a standard of quality to users will be illustrated: the operational ESA algorithm is carefully tested before being switched into operations, and the near-real-time and off-line production is thoroughly verified via automatic quality control procedures. The scientific validity of the ESA dataset will be further illustrated with examples of applications it can support, such as ozone-hole monitoring, volcanic ash detection, and analysis of atmospheric composition changes over the past years.

  2. EARLINET: potential operationality of a research network

    NASA Astrophysics Data System (ADS)

    Sicard, M.; D'Amico, G.; Comerón, A.; Mona, L.; Alados-Arboledas, L.; Amodeo, A.; Baars, H.; Belegante, L.; Binietoglou, I.; Bravo-Aranda, J. A.; Fernández, A. J.; Fréville, P.; García-Vizcaíno, D.; Giunta, A.; Granados-Muñoz, M. J.; Guerrero-Rascado, J. L.; Hadjimitsis, D.; Haefele, A.; Hervo, M.; Iarlori, M.; Kokkalis, P.; Lange, D.; Mamouri, R. E.; Mattis, I.; Molero, F.; Montoux, N.; Muñoz, A.; Muñoz Porcar, C.; Navas-Guzmán, F.; Nicolae, D.; Nisantzi, A.; Papagiannopoulos, N.; Papayannis, A.; Pereira, S.; Preißler, J.; Pujadas, M.; Rizi, V.; Rocadenbosch, F.; Sellegri, K.; Simeonov, V.; Tsaknakis, G.; Wagner, F.; Pappalardo, G.

    2015-07-01

    In the framework of the ACTRIS summer 2012 measurement campaign (8 June-17 July 2012), EARLINET organized and performed a controlled feasibility exercise to demonstrate its potential to perform operational, coordinated measurements and deliver products in near-real time. Eleven lidar stations participated in the exercise, which started on 9 July 2012 at 06:00 UT and ended 72 h later on 12 July at 06:00 UT. For the first time, the Single-Calculus Chain (SCC), the common calculus chain developed within EARLINET for the automatic evaluation of lidar data from raw signals up to the final products, was used. All stations sent measurements of 1 h duration to the SCC server in real time in a predefined NetCDF file format. The pre-processing of the data was performed in real time by the SCC, while the optical processing was performed in near-real time after the exercise ended. 98 and 84 % of the files sent to the SCC were successfully pre-processed and processed, respectively. These percentages are quite high considering that no cloud screening was performed on the lidar data. The paper shows time series of continuously and homogeneously obtained products retrieved at different levels of the SCC: range-square-corrected signals (pre-processing) and daytime backscatter and nighttime extinction coefficient profiles (optical processing), as well as combined plots of all direct and derived optical products. The derived products include backscatter- and extinction-related Ångström exponents, lidar ratios and color ratios. The combined plots prove extremely valuable for aerosol classification. The efforts made to define the measurement protocol and to properly configure the SCC pave the way for applying this protocol to specific applications such as the monitoring of special events, atmospheric modelling, climate research and calibration/validation activities of spaceborne observations.

  3. A new method for defining and managing process alarms and for correcting process operation when an alarm occurs.

    PubMed

    Brooks, Robin; Thorpe, Richard; Wilson, John

    2004-11-11

    A new mathematical treatment of alarms that considers them as multi-variable interactions between process variables has provided the first-ever method to calculate values for alarm limits. This has resulted in substantial reductions in false alarms, and hence in alarm annunciation rates, in field trials. It has also unified alarm management, process control and product quality control into a single mathematical framework, so that operational improvements, and hence economic benefits, are obtained at the same time as increased process safety. Additionally, an algorithm has been developed that advises what changes should be made to manipulable process variables to clear an alarm. The multi-variable Best Operating Zone at the heart of the method is derived from existing historical data using equation-free methods. It does not require a first-principles process model or an expensive series of process identification experiments. Integral to the method is a new-format process operator display that uses only existing variables to fully describe the multi-variable operating space. This combination of features makes it an affordable and maintainable solution for small plants and single items of equipment as well as for the largest plants. In many cases, it also provides the justification for investments about to be made, or already made, in process historian systems. Field trials of the new geometric process control (GPC) method, which improves the quality of both process operations and product by providing process alarms and alerts of much higher quality than ever before, have been and are being conducted at IneosChlor and Mallinckrodt Chemicals, both in the UK. The paper describes the methods used, including a simple visual method for alarm rationalisation that quickly delivers large sets of consistent alarm limits, and the extension to full alert management, with highlights from the field trials indicating the overall effectiveness of the method in practice.
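As a crude, hypothetical analogue of deriving a multi-variable operating zone from historical data (not the paper's equation-free GPC method), one can set an alarm limit on a distance from historical good-operation data. All values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic historical data for two interacting process variables
# (e.g. a temperature and a flow), recorded during good operation.
history = rng.multivariate_normal([50.0, 1.2],
                                  [[4.0, 0.5], [0.5, 0.09]], size=500)

mean = history.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(history, rowvar=False))

def mahalanobis(x):
    """Distance from the centre of the historical operating region,
    accounting for the correlation between the variables."""
    d = np.asarray(x, float) - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Multi-variable alarm limit: the 99th percentile of historical distances
limit = np.quantile([mahalanobis(row) for row in history], 0.99)

def alarm(x):
    """True when the sample lies outside the historical operating zone."""
    return mahalanobis(x) > limit
```

The point of such multi-variable limits, as in the paper, is that a combination of individually in-range variables can still be abnormal, and vice versa; single-variable limits cannot capture that.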

  4. Models for discrete-time self-similar vector processes with application to network traffic

    NASA Astrophysics Data System (ADS)

    Lee, Seungsin; Rao, Raghuveer M.; Narasimha, Rajesh

    2003-07-01

    The paper defines self-similarity for vector processes by employing the discrete-time continuous-dilation operation, which the authors have previously used successfully to define 1-D discrete-time stochastic self-similar processes. To define self-similarity of vector processes, it is necessary to consider the cross-correlation functions between the different constituent 1-D processes as well as the autocorrelation function of each constituent 1-D process. System models to synthesize self-similar vector processes are constructed based on this definition. With these systems, it is possible to generate self-similar vector processes from white noise inputs. An important aspect of the proposed models is that they can be used to synthesize various types of self-similar vector processes by choosing proper parameters. Additionally, the paper presents evidence of vector self-similarity in two-channel wireless LAN data and applies the aforementioned systems to simulate the corresponding network traffic traces.

  5. Future supply chains enabled by continuous processing--opportunities and challenges. May 20-21, 2014 Continuous Manufacturing Symposium.

    PubMed

    Srai, Jagjit Singh; Badman, Clive; Krumme, Markus; Futran, Mauricio; Johnston, Craig

    2015-03-01

    This paper examines the opportunities and challenges facing the pharmaceutical industry in moving to a primarily "continuous processing"-based supply chain. The current, predominantly "large batch", centralized manufacturing system designed for the "blockbuster" drug has driven a slow-paced, inventory-heavy operating model that is increasingly regarded as inflexible and unsustainable. Indeed, new markets and the rapidly evolving technology landscape will drive more product variety, shorter product life-cycles, and smaller drug volumes, which will exacerbate an already unsustainable economic model. Future supply chains will be required to enhance affordability and availability for patients and healthcare providers alike despite the increased product complexity. In this more challenging supply scenario, we examine the potential for a more pull-driven, near real-time, demand-based supply chain, utilizing continuous processing where appropriate as a key element of a more "flow-through" operating model. In this discussion paper on future supply chain models underpinned by developments in the continuous manufacture of pharmaceuticals, we set out: the significant opportunities of moving to a supply chain flow-through operating model, with substantial opportunities in inventory reduction, lead-time to patient, and radically different product assurance/stability regimes; scenarios for decentralized production models producing a greater variety of products with enhanced volume flexibility; production, supply, and value chain footprints that are radically different from today's monolithic and centralized batch manufacturing operations; clinical trial and drug product development cost savings that support more rapid scale-up and market entry models, with early involvement of supply chain designers within new product development; and the major supply chain and industrial transformational challenges that need to be addressed.
The paper recognizes that although current batch operational performance in pharma is far from optimal and not necessarily an appropriate end-state benchmark for batch technology, the adoption of continuous supply chain operating models underpinned by continuous production processing, as full or hybrid solutions in selected product supply chains, can support industry transformations to deliver right-first-time quality at substantially lower inventory profiles. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.

  6. Impact of Machine Virtualization on Timing Precision for Performance-critical Tasks

    NASA Astrophysics Data System (ADS)

    Karpov, Kirill; Fedotova, Irina; Siemens, Eduard

    2017-07-01

    In this paper we present a measurement study characterizing the impact of hardware virtualization on basic software timing, as well as on precise sleep operations of an operating system. We investigated how timer hardware is shared among heavily CPU-, I/O- and network-bound tasks on a virtual machine as well as on the host machine. VMware ESXi and QEMU/KVM were chosen as commonly used examples of the hypervisor- and host-based models. Based on statistical parameters of the retrieved distributions, our results provide a very good estimate of timing behavior. This is essential for real-time and performance-critical applications such as image processing or real-time control.
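The kind of sleep-precision probe used in such studies can be reproduced with a few lines of Python (the virtual-versus-host comparison itself is outside this sketch; one would simply run the same probe in both environments and compare the distributions):

```python
import time
import statistics

def measure_sleep_overshoot(requested_s=0.001, trials=100):
    """Measure how much an OS sleep overshoots the requested duration.
    The overshoot distribution characterizes timer precision, which
    virtualization can widen considerably."""
    overshoots = []
    for _ in range(trials):
        t0 = time.perf_counter()
        time.sleep(requested_s)
        elapsed = time.perf_counter() - t0
        overshoots.append(elapsed - requested_s)
    return statistics.median(overshoots), max(overshoots)

median_over, worst_over = measure_sleep_overshoot()
```

The median overshoot captures the typical scheduler/timer granularity, while the maximum captures the tail behavior that matters most for real-time workloads.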

  7. 40 CFR 60.482-1 - Standards: General.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operations. An owner or operator may monitor at any time during the specified monitoring period (e.g., month... shared among two or more batch process units that are subject to this subpart may be monitored at the... conducted annually, monitoring events must be separated by at least 120 calendar days. (g) If the storage...

  8. 40 CFR 60.482-1 - Standards: General.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... operations. An owner or operator may monitor at any time during the specified monitoring period (e.g., month... shared among two or more batch process units that are subject to this subpart may be monitored at the... conducted annually, monitoring events must be separated by at least 120 calendar days. (g) If the storage...

  9. 40 CFR 60.482-1 - Standards: General.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operations. An owner or operator may monitor at any time during the specified monitoring period (e.g., month... shared among two or more batch process units that are subject to this subpart may be monitored at the... conducted annually, monitoring events must be separated by at least 120 calendar days. (g) If the storage...

  10. 40 CFR 60.482-1 - Standards: General.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... operations. An owner or operator may monitor at any time during the specified monitoring period (e.g., month... are shared among two or more batch process units that are subject to this subpart may be monitored at... conducted annually, monitoring events must be separated by at least 120 calendar days. (g) If the storage...

  11. 40 CFR 60.482-1 - Standards: General.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operations. An owner or operator may monitor at any time during the specified monitoring period (e.g., month... shared among two or more batch process units that are subject to this subpart may be monitored at the... conducted annually, monitoring events must be separated by at least 120 calendar days. (g) If the storage...

  12. Image Processing Using a Parallel Architecture.

    DTIC Science & Technology

    1987-12-01

    ENG/87D-25 Abstract: This study developed a set of low-level image processing tools on a parallel computer that allows concurrent processing of images... In this environment, the set of tools offers a significant reduction in the time required to perform some commonly used image processing operations... As a step toward developing these systems, a structured set of image processing tools was implemented using a parallel computer.

  13. Assessing carbon and nitrogen removal in a novel anoxic-aerobic cyanobacterial-bacterial photobioreactor configuration with enhanced biomass sedimentation.

    PubMed

    de Godos, I; Vargas, V A; Guzmán, H O; Soto, R; García, B; García, P A; Muñoz, R

    2014-09-15

    The carbon and nitrogen removal potential of an innovative anoxic-aerobic photobioreactor configuration operated with both internal and external recycling streams was evaluated under different cyanobacterial-bacterial sludge residence times (9-31 days) during the treatment of wastewaters with low C/N ratios. Under optimal operating conditions, the two-stage photobioreactor was capable of providing organic carbon and nitrogen removals over 95% and 90%, respectively. The continuous biomass recycling from the settler resulted in the enrichment and predominance of rapidly settling cyanobacterial-bacterial flocs and effluent suspended solid concentrations lower than 35 mg VSS L(-1). These flocs exhibited sedimentation rates of 0.28-0.42 m h(-1) but sludge volumetric indexes of 333-430 ml/g. The decoupling between the hydraulic retention time and sludge retention time mediated by the external recycling also avoided the washout of nitrifying bacteria and supported process operation at biomass concentrations of 1000-1500 mg VSS L(-1). The addition of supplementary NaHCO3 to the process overcame the CO2 limitation resulting from the intense competition for inorganic carbon between cyanobacteria and nitrifying bacteria in the photobioreactor, which supported the successful implementation of a nitrification-denitrification process. Unexpectedly, this nitrification-denitrification process occurred both simultaneously in the photobioreactor alone (as a result of the negligible dissolved oxygen concentrations) and sequentially in the two-stage anoxic-aerobic configuration with internal NO3(-)/NO2(-) recycling. Copyright © 2014 Elsevier Ltd. All rights reserved.
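The decoupling of hydraulic and sludge retention times that the external recycle enables follows from two standard definitions, sketched here with illustrative numbers (not the study's values):

```python
def hrt_days(reactor_volume_m3, influent_flow_m3_d):
    """Hydraulic retention time: how long the liquid stays in the reactor."""
    return reactor_volume_m3 / influent_flow_m3_d

def srt_days(reactor_volume_m3, biomass_mg_l, waste_flow_m3_d, waste_biomass_mg_l):
    """Sludge retention time: biomass inventory divided by the biomass
    wasting rate. With a settler and external recycle, SRT can greatly
    exceed HRT, which retains slow-growing nitrifiers in the reactor."""
    return (reactor_volume_m3 * biomass_mg_l) / (waste_flow_m3_d * waste_biomass_mg_l)

# Illustrative values only: a 10 m3 reactor at 1200 mg VSS/L,
# 5 m3/d influent, wasting 0.1 m3/d of thickened sludge at 6000 mg VSS/L.
hrt = hrt_days(10.0, 5.0)                   # -> 2.0 days
srt = srt_days(10.0, 1200.0, 0.1, 6000.0)   # -> 20.0 days: SRT >> HRT
```

Because the settler returns biomass, the solids stay ten times longer than the water in this example, which is the mechanism by which washout of nitrifiers is avoided.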

  14. Parametric Study of Carbon Nanotube Production by Laser Ablation Process

    NASA Technical Reports Server (NTRS)

    Arepalli, Sivaram; Nikolaev, Pavel; Holmes, William; Hadjiev, Victor; Scott, Carl

    2002-01-01

    Carbon nanotubes form a new class of nanomaterials that are presumed to have extraordinary mechanical, electrical and thermal properties. Single wall nanotubes (SWNTs) are estimated to be 100 times stronger than steel at 1/6th the weight, with current-carrying capacity better than copper and thermal conductivity better than diamond. Applications of these SWNTs include possible weight reduction of aerospace structures, multifunctional materials, nanosensors and nanoelectronics. The double-pulse laser vaporization process produces SWNTs with the highest percentage of nanotubes in the output material. The normal operating conditions include a green laser pulse closely followed by an infrared laser pulse. The lasers ablate a metal-containing graphite target located in a flow tube maintained in an oven at 1473 K with an argon flow of 100 sccm at 500 Torr pressure. In the present work a number of production runs were carried out, changing one operating condition at a time. We have studied the effects of nine parameters, including the sequencing of the laser pulses, pulse separation times, laser energy densities, the type of buffer gas used, oven temperature, operating pressure, flow rate and inner flow tube diameters. All runs were done using the same graphite target. The collected nanotube material was characterized by a variety of analytical techniques including scanning electron microscopy (SEM), transmission electron microscopy (TEM), Raman spectroscopy and thermogravimetric analysis (TGA). Results indicate trends that could be used to optimize the process and increase the efficiency of the production process.

  15. A Description of the Development, Capabilities, and Operational Status of the Test SLATE Data Acquisition System at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Cramer, Christopher J.; Wright, James D.; Simmons, Scott A.; Bobbitt, Lynn E.; DeMoss, Joshua A.

    2015-01-01

    The paper will present a brief background of the previous data acquisition system at the National Transonic Facility (NTF) and the reasoning and goals behind the upgrade to the current Test SLATE (Test Software Laboratory and Automated Testing Environments) data acquisition system. The components, performance characteristics, and layout of the Test SLATE system within the NTF control room will be discussed. The development, testing, and integration of Test SLATE within NTF operations will be detailed. The operational capabilities of the system will be outlined including: test setup, instrumentation calibration, automatic test sequencer setup, data recording, communication between data and facility control systems, real time display monitoring, and data reduction. The current operational status of the Test SLATE system and its performance during recent NTF testing will be highlighted including high-speed, frame-by-frame data acquisition with conditional sampling post-processing applied. The paper concludes with current development work on the system including the capability for real-time conditional sampling during data acquisition and further efficiency enhancements to the wind tunnel testing process.

  16. Memory Operations That Support Language Comprehension: Evidence From Verb-Phrase Ellipsis

    PubMed Central

    Martin, Andrea E.; McElree, Brian

    2010-01-01

    Comprehension of verb-phrase ellipsis (VPE) requires reevaluation of recently processed constituents, which often necessitates retrieval of information about the elided constituent from memory. A. E. Martin and B. McElree (2008) argued that representations formed during comprehension are content addressable and that VPE antecedents are retrieved from memory via a cue-dependent direct-access pointer rather than via a search process. This hypothesis was further tested by manipulating the location of interfering material—either before the onset of the antecedent (proactive interference; PI) or intervening between antecedent and ellipsis site (retroactive interference; RI). The speed–accuracy tradeoff procedure was used to measure the time course of VPE processing. The location of the interfering material affected VPE comprehension accuracy: RI conditions engendered lower accuracy than PI conditions. Crucially, location did not affect the speed of processing VPE, which is inconsistent with both forward and backward search mechanisms. The observed time-course profiles are consistent with the hypothesis that VPE antecedents are retrieved via a cue-dependent direct-access operation. PMID:19686017

  17. Workload Capacity: A Response Time-Based Measure of Automation Dependence.

    PubMed

    Yamani, Yusuke; McCarley, Jason S

    2016-05-01

    An experiment used the workload capacity measure C(t) to quantify the processing efficiency of human-automation teams and identify operators' automation usage strategies in a speeded decision task. Although response accuracy rates and related measures are often used to measure the influence of an automated decision aid on human performance, aids can also influence response speed. Mean response times (RTs), however, conflate the influence of the human operator and the automated aid on team performance and may mask changes in the operator's performance strategy under aided conditions. The present study used a measure of parallel processing efficiency, or workload capacity, derived from empirical RT distributions as a novel gauge of human-automation performance and automation dependence in a speeded task. Participants performed a speeded probabilistic decision task with and without the assistance of an automated aid. RT distributions were used to calculate two variants of a workload capacity measure, C_OR(t) and C_AND(t). Capacity measures gave evidence that a diagnosis from the automated aid speeded human participants' responses, and that participants did not moderate their own decision times in anticipation of diagnoses from the aid. Workload capacity provides a sensitive and informative measure of human-automation performance and operators' automation dependence in speeded tasks. © 2016, Human Factors and Ergonomics Society.
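The OR capacity coefficient can be estimated from RT samples via empirical cumulative hazard functions, H(t) = -ln S(t), where S is the survivor function. The sketch below uses synthetic gamma-distributed RTs rather than the experiment's data; for an independent parallel race, theory predicts a coefficient near 1, with values above 1 indicating super-capacity and below 1 limited capacity.

```python
import numpy as np

def cumulative_hazard(rts, t):
    """Empirical cumulative hazard H(t) = -ln(S(t)) from a sample of RTs."""
    survivor = (np.asarray(rts, float) > t).mean()
    return -np.log(max(survivor, 1e-12))   # guard against log(0)

def capacity_or(rt_team, rt_human, rt_aid, t):
    """C_OR(t) = H_team(t) / (H_human(t) + H_aid(t))."""
    return cumulative_hazard(rt_team, t) / (
        cumulative_hazard(rt_human, t) + cumulative_hazard(rt_aid, t))

rng = np.random.default_rng(1)
rt_human = rng.gamma(5, 80, 1000)   # synthetic single-channel RTs (ms)
rt_aid = rng.gamma(5, 90, 1000)
# Team RTs simulated as an independent parallel race: first finisher responds
rt_team = np.minimum(rng.gamma(5, 80, 1000), rng.gamma(5, 90, 1000))
c = capacity_or(rt_team, rt_human, rt_aid, t=400.0)
```

In practice the coefficient is evaluated as a function of t across the RT range, not at a single point as in this minimal example.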

  18. DSN Resource Scheduling

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Baldwin, John

    2007-01-01

    TIGRAS is client-side software that provides tracking-station equipment planning, allocation, and scheduling services to the DSMS (Deep Space Mission System). TIGRAS provides functions for schedulers to coordinate DSN (Deep Space Network) antenna usage time and to resolve resource usage conflicts among tracking passes, antenna calibrations, maintenance, and system testing activities. TIGRAS provides a fully integrated multi-pane graphical user interface for all scheduling operations, a great improvement over the legacy VAX VMS command-line user interface. TIGRAS has the capability to handle all DSN resource scheduling aspects, from long range to real time. TIGRAS assists NASA mission operations with DSN tracking-station equipment resource request processes, from long-range load forecasts (ten years or longer) to midrange, short-range, and real-time (less than one week) emergency tracking plan changes. TIGRAS can be operated by NASA mission operations worldwide to make schedule requests for DSN station equipment.

  19. Subwavelength grating enabled on-chip ultra-compact optical true time delay line

    PubMed Central

    Wang, Junjia; Ashrafi, Reza; Adams, Rhys; Glesk, Ivan; Gasulla, Ivana; Capmany, José; Chen, Lawrence R.

    2016-01-01

    An optical true time delay line (OTTDL) is a basic photonic building block that enables many microwave photonic and optical processing operations. The conventional design for an integrated OTTDL that is based on spatial diversity uses a length-variable waveguide array to create the optical time delays, which can introduce complexities in the integrated circuit design. Here we report the first ever demonstration of an integrated index-variable OTTDL that exploits spatial diversity in an equal length waveguide array. The approach uses subwavelength grating waveguides in silicon-on-insulator (SOI), which enables the realization of OTTDLs having a simple geometry and that occupy a compact chip area. Moreover, compared to conventional wavelength-variable delay lines with a few THz operation bandwidth, our index-variable OTTDL has an extremely broad operation bandwidth practically exceeding several tens of THz, which supports operation for various input optical signals with broad ranges of central wavelength and bandwidth. PMID:27457024

  20. Subwavelength grating enabled on-chip ultra-compact optical true time delay line.

    PubMed

    Wang, Junjia; Ashrafi, Reza; Adams, Rhys; Glesk, Ivan; Gasulla, Ivana; Capmany, José; Chen, Lawrence R

    2016-07-26

    An optical true time delay line (OTTDL) is a basic photonic building block that enables many microwave photonic and optical processing operations. The conventional design for an integrated OTTDL that is based on spatial diversity uses a length-variable waveguide array to create the optical time delays, which can introduce complexities in the integrated circuit design. Here we report the first ever demonstration of an integrated index-variable OTTDL that exploits spatial diversity in an equal length waveguide array. The approach uses subwavelength grating waveguides in silicon-on-insulator (SOI), which enables the realization of OTTDLs having a simple geometry and that occupy a compact chip area. Moreover, compared to conventional wavelength-variable delay lines with a few THz operation bandwidth, our index-variable OTTDL has an extremely broad operation bandwidth practically exceeding several tens of THz, which supports operation for various input optical signals with broad ranges of central wavelength and bandwidth.

Top