Sample records for processing system performance

  1. Integrated Main Propulsion System Performance Reconstruction Process/Models

    NASA Technical Reports Server (NTRS)

    Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael

    2013-01-01

    The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for post-flight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate the performance of the integrated propulsion system, including the propellant tanks, feed systems, rocket engines, and pressurization systems, throughout ascent based on flight pressure and temperature data. The latest revision incorporates new methods based on main engine power balance model updates to model higher mixture ratio operation at lower engine power levels.

  2. Thermal performance of a photographic laboratory process: Solar Hot Water System

    NASA Technical Reports Server (NTRS)

    Walker, J. A.; Jensen, R. N.

    1982-01-01

    The thermal performance of a solar process hot water system is described. The system was designed to supply 22,000 liters (5,500 gallons) per day of 66 C (150 F) process water for photographic processing. The 328 sq m (3,528 sq. ft.) solar field has supplied 58% of the thermal energy for the system. Techniques used for analyzing various thermal values are given. Load and performance factors and the resulting solar contribution are discussed.

  3. Measuring information processing in a client with extreme agitation following traumatic brain injury using the Perceive, Recall, Plan and Perform System of Task Analysis.

    PubMed

    Nott, Melissa T; Chapparo, Christine

    2008-09-01

    Agitation following traumatic brain injury is characterised by a heightened state of activity with disorganised information processing that interferes with learning and achieving functional goals. This study aimed to identify information processing problems during task performance of a severely agitated adult using the Perceive, Recall, Plan and Perform (PRPP) System of Task Analysis. Second, this study aimed to examine the sensitivity of the PRPP System to changes in task performance over a short period of rehabilitation, and third, to evaluate the guidance provided by the PRPP in directing intervention. A case study research design was employed. The PRPP System of Task Analysis was used to assess changes in task-embedded information processing capacity during occupational therapy intervention with a severely agitated adult in a rehabilitation context. Performance was assessed on three selected tasks over a one-month period. Information processing difficulties during task performance can be clearly identified when observing a severely agitated adult following a traumatic brain injury. Processing skills involving attention, sensory processing and planning were most affected at this stage of rehabilitation. These processing difficulties are linked to established descriptions of agitated behaviour. Fluctuations in performance across three tasks of differing processing complexity were evident, leading to hypothesised relationships linking task complexity, environment and novelty to information processing errors. Changes in specific information processing capacity over time were evident based on repeated measures using the PRPP System of Task Analysis. This lends preliminary support for its utility as an outcome measure, and raises hypotheses about the type of therapy required to enhance information processing in people with severe agitation. The PRPP System is sensitive to information processing changes in severely agitated adults when used to reassess performance over short intervals and can provide direct guidance to occupational therapy intervention to improve task-embedded information processing by categorising errors under four stages of an information processing model: Perceive, Recall, Plan and Perform.

  4. Understanding product cost vs. performance through an in-depth system Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Sanson, Mark C.

    2017-08-01

    The manner in which an optical system is toleranced and compensated greatly affects the cost to build it. By having a detailed understanding of different tolerance and compensation methods, the end user can decide on the balance of cost and performance. A detailed phased-approach Monte Carlo analysis can be used to demonstrate the tradeoffs between cost and performance. In complex high performance optical systems, performance is fine-tuned by making adjustments to the optical systems after they are initially built. This process enables the overall best system performance without the need to fabricate components to stringent tolerance levels that often fall outside a fabricator's manufacturing capabilities. A good simulation of as-built performance can interrogate different steps of the fabrication and build process. Such a simulation may aid in evaluating whether the measured parameters are within the acceptable range of system performance at that stage of the build process. Finding errors before an optical system progresses further into the build process saves both time and money. Having the appropriate tolerances and compensation strategy tied to a specific performance level will optimize the overall product cost.
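
    The phased Monte Carlo approach described above lends itself to a small numerical illustration. The sketch below is not the author's model: the tolerance values, the quadrature merit function, and the compensator that absorbs most of the defocus are all invented to show how yield-versus-tolerance tradeoffs are estimated.

    ```python
    import random

    def as_built_error(tols, compensate=True):
        """Toy merit function: RMS wavefront error (waves) from random
        perturbations drawn within each tolerance; an optional focus
        adjustment stands in for a post-build compensator."""
        tilt     = random.gauss(0.0, tols["tilt"])
        decenter = random.gauss(0.0, tols["decenter"])
        defocus  = random.gauss(0.0, tols["defocus"])
        if compensate:
            defocus *= 0.05   # compensator removes ~95% of defocus
        return (tilt**2 + decenter**2 + defocus**2) ** 0.5

    def yield_estimate(tols, spec=0.07, trials=20_000, compensate=True):
        """Fraction of simulated builds meeting the wavefront spec."""
        ok = sum(as_built_error(tols, compensate) <= spec for _ in range(trials))
        return ok / trials

    loose = {"tilt": 0.03, "decenter": 0.03, "defocus": 0.08}  # cheaper parts
    tight = {"tilt": 0.02, "decenter": 0.02, "defocus": 0.03}  # costlier parts
    for name, tols in (("loose", loose), ("tight", tight)):
        print(name,
              "no comp:",   round(yield_estimate(tols, compensate=False), 3),
              "with comp:", round(yield_estimate(tols, compensate=True), 3))
    ```

    With these invented numbers, compensation substantially raises the yield at the loose tolerances, shrinking the performance gap that tighter (more expensive) fabrication would otherwise have to buy.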

  5. Spacelab Data Processing Facility (SLDPF) quality assurance expert systems development

    NASA Technical Reports Server (NTRS)

    Basile, Lisa R.; Kelly, Angelita C.

    1987-01-01

    The Spacelab Data Processing Facility (SLDPF) is an integral part of the Space Shuttle data network for missions that involve attached scientific payloads. Expert system prototypes were developed to aid in performing the quality assurance function for Spacelab and/or Attached Shuttle Payloads processed telemetry data. Two expert systems, one for the Spacelab Input Processing System (SIPS) and one for the Spacelab Output Processing System (SOPS), were developed to determine their feasibility and potential in the quality assurance of processed telemetry data. The capabilities and performance of these systems are discussed.

  6. Health-care process improvement decisions: a systems perspective.

    PubMed

    Walley, Paul; Silvester, Kate; Mountford, Shaun

    2006-01-01

    The paper seeks to investigate decision-making processes within hospital improvement activity, to understand how performance measurement systems influence decisions and potentially lead to unsuccessful or unsustainable process changes. A longitudinal study over a 33-month period investigates key events, decisions and outcomes at one medium-sized hospital in the UK. Process improvement events are monitored using process control methods and by direct observation. The authors took a systems perspective of the health-care processes, ensuring that the impacts of decisions across the health-care supply chain were appropriately interpreted. The research uncovers the ways in which measurement systems disguise failed decisions and encourage managers to take a low-risk approach of "symptomatic relief" when trying to improve performance metrics. This prevents many managers from trying higher-risk, sustainable process improvement changes. The behaviour of the health-care system is not understood by many managers, and this leads to poor analysis of problem situations. Measurement using time-series methodologies, such as statistical process control, is vital for a better understanding of the systems impact of changes. Senior managers must also be aware of the behavioural influence of similar performance measurement systems that discourage sustainable improvement. There is a risk that such experiences will tarnish the reputation of performance management as a discipline. The paper recommends process control measures as a way of creating an organizational memory of how decisions affect performance--something that is currently lacking.
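
    Statistical process control of the kind recommended above is straightforward to sketch. The individuals-chart calculation below uses the standard moving-range estimate of sigma (divisor 1.128 for ranges of two); the weekly discharge-delay figures are hypothetical.

    ```python
    def control_limits(data):
        """Individuals (X) chart: centre line and 3-sigma limits, with
        sigma estimated from the average moving range (d2 = 1.128)."""
        centre = sum(data) / len(data)
        moving_ranges = [abs(a - b) for a, b in zip(data[1:], data)]
        sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
        return centre - 3 * sigma, centre, centre + 3 * sigma

    # hypothetical weekly average discharge delays (hours)
    series = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 7.5, 4.1, 3.7, 4.3]
    lcl, centre, ucl = control_limits(series)
    signals = [(week, x) for week, x in enumerate(series) if not lcl <= x <= ucl]
    print(f"centre={centre:.2f} limits=({lcl:.2f}, {ucl:.2f}) signals={signals}")
    # the week-6 spike falls outside the limits; the rest is common-cause noise
    ```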

  7. System design package for the solar heating and cooling central data processing system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The central data processing system provides the resources required to assess the performance of solar heating and cooling systems installed at remote sites. These sites consist of residential, commercial, government, and educational types of buildings, and the solar heating and cooling systems can be hot-water, space heating, cooling, and combinations of these. The instrumentation data associated with these systems will vary according to the application and must be collected, processed, and presented in a form which supports continuity of performance evaluation across all applications. Overall software system requirements were established for use in the central integration facility which transforms raw data collected at remote sites into performance evaluation information for assessing the performance of solar heating and cooling systems.

  8. Improvement of Organizational Performance and Instructional Design: An Analogy Based on General Principles of Natural Information Processing Systems

    ERIC Educational Resources Information Center

    Darabi, Aubteen; Kalyuga, Slava

    2012-01-01

    The process of improving organizational performance through designing systemic interventions has remarkable similarities to designing instruction for improving learners' performance. Both processes deal with subjects (learners and organizations correspondingly) with certain capabilities that are exposed to novel information designed for producing…

  9. Process for predicting structural performance of mechanical systems

    DOEpatents

    Gardner, David R.; Hendrickson, Bruce A.; Plimpton, Steven J.; Attaway, Stephen W.; Heinstein, Martin W.; Vaughan, Courtenay T.

    1998-01-01

    A process for predicting the structural performance of a mechanical system represents the mechanical system by a plurality of surface elements. The surface elements are grouped according to their location in the volume occupied by the mechanical system so that contacts between surface elements can be efficiently located. The process is well suited for efficient practice on multiprocessor computers.
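
    The grouping step described in this patent abstract is, in essence, a spatial-binning broad phase for contact search. The sketch below shows the general technique under simplifying assumptions (element centroids binned on a uniform grid); it illustrates the idea, not the patented process itself.

    ```python
    from collections import defaultdict
    from itertools import product

    def candidate_contacts(elements, cell):
        """Bin surface elements into cubic cells of side `cell`, then test
        only pairs in the same or neighbouring cells instead of all
        O(n^2) pairs. `elements` maps element id -> (x, y, z) centroid."""
        grid = defaultdict(list)
        for eid, (x, y, z) in elements.items():
            grid[(int(x // cell), int(y // cell), int(z // cell))].append(eid)

        pairs = set()
        for (i, j, k), members in grid.items():
            for di, dj, dk in product((-1, 0, 1), repeat=3):
                for a in members:
                    for b in grid.get((i + di, j + dj, k + dk), ()):
                        if a < b:
                            pairs.add((a, b))
        return pairs

    elems = {0: (0.1, 0.1, 0.0), 1: (0.2, 0.1, 0.0), 2: (5.0, 5.0, 5.0)}
    print(candidate_contacts(elems, cell=1.0))   # {(0, 1)}: element 2 is far away
    ```

    Only nearby pairs survive to the expensive exact contact test, which is also what makes the work easy to partition across the processors of a parallel machine.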

  10. Central Data Processing System (CDPS) user's manual: Solar heating and cooling program

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The software and data base management system required to assess the performance of solar heating and cooling systems installed at multiple sites is presented. The instrumentation data associated with these systems is collected, processed, and presented in a form which supports continuity of performance evaluation across all applications. The CDPS consists of three major elements: the communication interface computer, the central data processing computer, and the performance evaluation data base. Users of the performance data base are identified, and procedures for operation and guidelines for software maintenance are outlined. The manual also defines the output capabilities of the CDPS in support of external users of the system.

  11. Performance of high intensity fed-batch mammalian cell cultures in disposable bioreactor systems.

    PubMed

    Smelko, John Paul; Wiltberger, Kelly Rae; Hickman, Eric Francis; Morris, Beverly Janey; Blackburn, Tobias James; Ryll, Thomas

    2011-01-01

    The adoption of disposable bioreactor technology as an alternative to traditional nondisposable technology is gaining momentum in the biotechnology industry. The ability of current disposable bioreactor systems to sustain high intensity fed-batch mammalian cell culture processes needs to be explored. In this study, an assessment was performed comparing single-use bioreactor (SUB) systems of 50-, 250-, and 1,000-L operating scales with traditional stainless steel (SS) and glass vessels using four distinct mammalian cell culture processes. This comparison focuses on expansion and production stage performance. The SUB performance was evaluated based on three main areas: operability, process scalability, and process performance. The process performance and operability aspects were assessed over time, and product quality was compared at the day of harvest. Expansion stage results showed that disposable bioreactors mirror traditional bioreactors in terms of cellular growth and metabolism. Set-up and disposal times were dramatically reduced using the SUB systems when compared with traditional systems. Production stage runs for both Chinese hamster ovary and NS0 cell lines in the SUB system were able to model SS bioreactor runs at 100-, 200-, 2,000-, and 15,000-L scales. A single 1,000-L SUB run applying a high intensity fed-batch process was able to generate 7.5 kg of antibody with comparable product quality. Copyright © 2011 American Institute of Chemical Engineers (AIChE).

  12. Process for predicting structural performance of mechanical systems

    DOEpatents

    Gardner, D.R.; Hendrickson, B.A.; Plimpton, S.J.; Attaway, S.W.; Heinstein, M.W.; Vaughan, C.T.

    1998-05-19

    A process for predicting the structural performance of a mechanical system represents the mechanical system by a plurality of surface elements. The surface elements are grouped according to their location in the volume occupied by the mechanical system so that contacts between surface elements can be efficiently located. The process is well suited for efficient practice on multiprocessor computers. 12 figs.

  13. The Interaction of Spacecraft Cabin Atmospheric Quality and Water Processing System Performance

    NASA Technical Reports Server (NTRS)

    Perry, Jay L.; Croomes, Scott D. (Technical Monitor)

    2002-01-01

    Although designed to remove organic contaminants from a variety of waste water streams, the planned U.S.- and present Russian-provided water processing systems onboard the International Space Station (ISS) have capacity limits for some of the more common volatile cleaning solvents used for housekeeping purposes. Using large quantities of volatile cleaning solvents during the ground processing and in-flight operational phases of a crewed spacecraft such as the ISS can lead to significant challenges to the water processing systems. To understand the challenges facing the management of water processing capacity, the relationship between cabin atmospheric quality and humidity condensate loading is presented. This relationship is developed as a tool to determine the cabin atmospheric loading that may compromise water processing system performance. A comparison of cabin atmospheric loading with volatile cleaning solvents from ISS, Mir, and Shuttle is presented to predict acceptable limits to maintain optimal water processing system performance.

  14. Improving Process Heating System Performance v3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2016-04-11

    Improving Process Heating System Performance: A Sourcebook for Industry is a development of the U.S. Department of Energy (DOE) Advanced Manufacturing Office (AMO) and the Industrial Heating Equipment Association (IHEA). The AMO and IHEA undertook this project as part of a series of sourcebook publications developed by AMO on energy-consuming industrial systems and opportunities to improve performance. Other topics in this series include compressed air systems, pumping systems, fan systems, steam systems, and motors and drives.

  15. Manufacturing Execution Systems: Examples of Performance Indicator and Operational Robustness Tools.

    PubMed

    Gendre, Yannick; Waridel, Gérard; Guyon, Myrtille; Demuth, Jean-François; Guelpa, Hervé; Humbert, Thierry

    Manufacturing Execution Systems (MES) are computerized systems used to measure production performance in terms of productivity, yield, and quality. The first part describes performance indicators, overall equipment effectiveness (OEE), process robustness tools, and statistical process control. The second part details some tools that help operators maintain process robustness and control by preventing deviations from target control charts. The MES was developed by Syngenta together with CIMO for automation.

  16. Robust fusion-based processing for military polarimetric imaging systems

    NASA Astrophysics Data System (ADS)

    Hickman, Duncan L.; Smith, Moira I.; Kim, Kyung Su; Choi, Hyun-Jin

    2017-05-01

    Polarisation information within a scene can be exploited in military systems to give enhanced automatic target detection and recognition (ATD/R) performance. However, the performance gain achieved is highly dependent on factors such as the geometry, viewing conditions, and the surface finish of the target. Such performance sensitivities are highly undesirable in many tactical military systems, where operational conditions can vary significantly and rapidly during a mission. Within this paper, a range of processing architectures and fusion methods is considered in terms of their practical viability and operational robustness for systems requiring ATD/R. It is shown that polarisation information can give useful performance gains but, to retain system robustness, the introduction of polarimetric processing should be done in such a way as not to compromise other discriminatory scene information in the spectral and spatial domains. The analysis concludes that polarimetric data can be effectively integrated with conventional intensity-based ATD/R either by adapting the ATD/R processing function based on the scene polarisation or by detection-level fusion. Both of these approaches avoid the introduction of processing bottlenecks and limit the impact of processing on system latency.

  17. Methodological aspects of fuel performance system analysis at raw hydrocarbon processing plants

    NASA Astrophysics Data System (ADS)

    Kulbjakina, A. V.; Dolotovskij, I. V.

    2018-01-01

    The article discusses the methodological aspects of fuel performance system analysis at raw hydrocarbon (RH) processing plants. Modern RH processing facilities are major consumers of energy resources (ER) for their own needs. Reducing ER consumption, including fuel, and developing a rational fuel system structure are complex and relevant scientific tasks that can only be accomplished using system analysis and complex system synthesis. In accordance with the principles of system analysis, the hierarchical structure of the fuel system, the block scheme for synthesizing the most efficient alternative of the fuel system using mathematical models, and the set of performance criteria have been developed for the main stages of the study. Results from the introduction of specific engineering solutions to develop in-house energy supply sources for RH processing facilities are provided.

  18. The simulation study on optical target laser active detection performance

    NASA Astrophysics Data System (ADS)

    Li, Ying-chun; Hou, Zhao-fei; Fan, Youchen

    2014-12-01

    According to the working principle of a laser active detection system, this paper establishes an optical target laser active detection simulation system and carries out a simulation study of the system's detection process and detection performance. Performance models include laser emission, laser propagation in the atmosphere, reflection from the optical target, the receiver detection system, and signal processing and recognition. The analysis and modeling focus on the relationship between the laser emission angle, the defocus amount, and the "cat eye" effect echo in the reflection from the optical target. Performance indices of the system, such as operating range, SNR, and detection probability, have also been simulated. The parameters of laser emission, reflection from the optical target, and laser propagation in the atmosphere have a great influence on the performance of the optical target laser active detection system. Finally, using object-oriented software design methods, a laser active detection simulation platform with an open architecture, complete functionality, and an operating platform was implemented; it simulates the process by which the detection system detects and recognizes the optical target, performs the performance simulation of each subsystem, and generates data reports and graphs. The visible simulation process makes the performance models of the laser active detection system more intuitive, the simulation data provide a reference for adjusting the system's structural parameters, and the work provides theoretical and technical support for the top-level design of the optical target laser active detection system and the optimization of its performance indices.

  19. Development of patient collation system by kinetic analysis for chest dynamic radiogram with flat panel detector

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Yuichiro; Kodera, Yoshie

    2006-03-01

    In the picture archiving and communication system (PACS) environment, it is important that all images be stored in the correct location. However, if information such as the patient's name or identification number has been entered incorrectly, it is difficult to notice the error. The present study was performed to develop a system for automatic patient collation in dynamic radiogram examination by kinetic analysis, and to evaluate the performance of the system. Dynamic chest radiographs during respiration were obtained using a modified flat panel detector system. The computer algorithm developed in this study consisted of two main procedures: kinetic map image processing and collation processing. Kinetic map processing is a new algorithm to visualize movement in dynamic radiography; it performs direction classification of optical flows and an intensity-density transformation. Collation processing consisted of analysis with an artificial neural network (ANN) and discrimination based on Mahalanobis' generalized distance; these procedures were performed to evaluate the similarity of image combinations from the same person. Finally, we investigated the performance of our system using radiographs of eight healthy volunteers. Performance was expressed as sensitivity and specificity, both of which were 100%. This result indicates that our system has excellent performance for recognition of a patient. Our system will be useful in PACS management for dynamic chest radiography.
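
    Discrimination with Mahalanobis' generalized distance, as used in the collation step above, takes only a few lines. In this sketch the two motion features and all sample values are hypothetical placeholders for the kinetic-map measurements the authors describe.

    ```python
    import numpy as np

    def mahalanobis(x, mean, cov):
        """Mahalanobis generalized distance of feature vector x from a
        class described by its mean vector and covariance matrix."""
        d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
        return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

    # hypothetical 2-D motion features from prior studies of one patient:
    # (dominant flow-direction score, intensity-density score)
    same_patient = np.array([[0.91, 0.62], [0.88, 0.66], [0.93, 0.60],
                             [0.90, 0.65], [0.87, 0.63]])
    mu, cov = same_patient.mean(axis=0), np.cov(same_patient.T)

    print(mahalanobis([0.89, 0.64], mu, cov))  # small: consistent with same person
    print(mahalanobis([0.40, 0.20], mu, cov))  # large: flags a possible mismatch
    ```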

  20. Space station definitions, design, and development. Task 5: Multiple arm telerobot coordination and control: Manipulator design methodology

    NASA Technical Reports Server (NTRS)

    Stoughton, R. M.

    1990-01-01

    A proposed methodology applicable to the design of manipulator systems is described. The current design process is especially weak in the preliminary design phase, since there is no accepted measure to be used in trading off different options available for the various subsystems. The design process described uses Cartesian End-Effector Impedance as a measure of performance for the system. Having this measure of performance, it is shown how it may be used to determine the trade-offs necessary in the preliminary design phase. The design process involves three main parts: (1) determination of desired system performance in terms of End-Effector Impedance; (2) trade-off of design options to achieve this desired performance; and (3) verification of system performance through laboratory testing. The design process is developed using numerous examples and experiments to demonstrate the feasibility of this approach to manipulator design.
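
    Cartesian end-effector impedance combines stiffness, damping, and inertia terms; its static stiffness part can be sketched for a planar two-link arm by mapping joint stiffness through the manipulator Jacobian, K_x = J^(-T) K_q J^(-1). This is a generic textbook construction, not the report's methodology, and the link lengths and joint stiffnesses below are illustrative.

    ```python
    import numpy as np

    def cartesian_stiffness(q1, q2, l1, l2, k_joints):
        """End-effector stiffness of a planar 2-link arm from joint
        stiffnesses: K_x = J^-T K_q J^-1 (valid away from singularities)."""
        J = np.array([
            [-l1*np.sin(q1) - l2*np.sin(q1+q2), -l2*np.sin(q1+q2)],
            [ l1*np.cos(q1) + l2*np.cos(q1+q2),  l2*np.cos(q1+q2)],
        ])
        Ji = np.linalg.inv(J)
        return Ji.T @ np.diag(k_joints) @ Ji

    K = cartesian_stiffness(q1=0.4, q2=0.9, l1=0.5, l2=0.4,
                            k_joints=(300.0, 120.0))   # N*m/rad
    print(K)   # N/m; its eigenvectors give the stiff and compliant directions
    ```

    Comparing such matrices across candidate arm geometries and drive stiffnesses is one way the trade-off step in the abstract could be made quantitative.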

  1. Business Performer-Centered Design of User Interfaces

    NASA Astrophysics Data System (ADS)

    Sousa, Kênia; Vanderdonckt, Jean

    Business Performer-Centered Design of User Interfaces is a new design methodology that adopts business process (BP) definition and a business performer perspective for managing the life cycle of user interfaces of enterprise systems. In this methodology, when the organization has a business process culture, its business processes are first defined according to a traditional methodology for this kind of artifact. These business processes are then transformed into a series of task models that represent the interactive parts of the business processes that will ultimately lead to interactive systems. When the organization has its enterprise systems but not yet its business processes modeled, the user interfaces of the systems help derive task models, which are then used to derive the business processes. The double linking between a business process and a task model, and between a task model and a user interface model, makes it possible to ensure traceability of the artifacts along multiple paths and enables more active participation of business performers in analyzing the resulting user interfaces. In this paper, we outline how this human perspective is tied to a model-driven perspective.

  2. High-efficiency high-reliability optical components for a large, high-average-power visible laser system

    NASA Astrophysics Data System (ADS)

    Taylor, John R.; Stolz, Christopher J.

    1993-08-01

    Laser system performance and reliability depend on the related performance and reliability of the optical components which define the cavity and transport subsystems. High average power and long transport lengths impose specific requirements on component performance. The complexity of the manufacturing process for optical components requires a high degree of process control and verification. Qualification has proven effective in ensuring confidence in the procurement process for these optical components. Issues related to component reliability have been studied and provide useful information to better understand the long-term performance and reliability of the laser system.

  3. High-efficiency high-reliability optical components for a large, high-average-power visible laser system

    NASA Astrophysics Data System (ADS)

    Taylor, J. R.; Stolz, C. J.

    1992-12-01

    Laser system performance and reliability depend on the related performance and reliability of the optical components which define the cavity and transport subsystems. High average power and long transport lengths impose specific requirements on component performance. The complexity of the manufacturing process for optical components requires a high degree of process control and verification. Qualification has proven effective in ensuring confidence in the procurement process for these optical components. Issues related to component reliability have been studied and provide useful information to better understand the long-term performance and reliability of the laser system.

  4. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
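
    The middleware pattern claimed here (obtain the files, convert them to objects, hand the objects to a cloud store) can be sketched generically. The toy in-memory store, manifest layout, and function names below are invented for illustration; PLFS itself is a C/C++ file-system layer, not this Python sketch.

    ```python
    import hashlib, json, os

    class ToyObjectStore:
        """Stand-in for a cloud object store offering put(key, bytes)."""
        def __init__(self):
            self.objects = {}
        def put(self, key, body):
            self.objects[key] = body

    def archive_checkpoint(store, job_id, paths):
        """Ship each per-process checkpoint file to the object store and
        record a manifest (an index, in the spirit of a log-structured
        file system) so the file set can later be reassembled."""
        manifest = {}
        for path in paths:
            with open(path, "rb") as f:
                data = f.read()
            key = f"{job_id}/{os.path.basename(path)}"
            store.put(key, data)
            manifest[key] = {"bytes": len(data),
                             "sha256": hashlib.sha256(data).hexdigest()}
        store.put(f"{job_id}/MANIFEST.json", json.dumps(manifest).encode())
        return manifest

    # usage with two fake per-process checkpoint files
    for i in range(2):
        with open(f"ckpt.{i}", "wb") as f:
            f.write(os.urandom(64))
    store = ToyObjectStore()
    print(archive_checkpoint(store, "job42", ["ckpt.0", "ckpt.1"]))
    ```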

  5. The improved dissolution performance of a post processing treated spray-dried crystalline solid dispersion of poorly soluble drugs.

    PubMed

    Chan, Siok-Yee; Toh, Seok-Ming; Khan, Nasir Hayat; Chung, Yin-Ying; Cheah, Xin-Zi

    2016-11-01

    Solution-mediated transformation has been cited as one of the main problems that deteriorate the dissolution performance of solid dispersions (SD). This is mainly attributed to the recrystallization tendency of poorly soluble drugs. Eventually, it leads to extensive agglomeration, which is a key process in reducing the dissolution performance of SD and offsets the true benefit of the SD system. Here, a post-processing treatment is suggested in order to reduce the recrystallization tendency and hence bring forth the dissolution advantage of the SD system. The current study investigates the effect of a post-processing treatment on the dissolution performance of SD in comparison to performance upon production. Two poorly soluble drugs were spray dried into SD using polyvinyl alcohol (PVA) as the hydrophilic carrier. The obtained samples were post-processing treated by exposure to high humidity, i.e. 75% RH at room temperature. The physical properties and release rate of the SD system were characterized upon production and after the post-processing treatment. XRPD, infrared and DSC results showed partial crystallinity of the fresh SD systems. Crystallinity of these products was further increased after the post-processing treatment at 75% RH. This may be attributed to the high moisture absorption of the SD system, which promotes the recrystallization process of the drug. However, dissolution efficiencies of the post-treated systems were higher and more consistent than those of the fresh SD. The unexpected dissolution trend was further supported by the results of intrinsic dissolution and solubility studies. An increase of crystallinity in a post-humidity-treated SD did not exert a detrimental effect on the dissolution profiles. A more stabilized system with a preferable enhanced dissolution rate was obtained by exposing the SD to a post-processing humidity treatment.

  6. A corpus of full-text journal articles is a robust evaluation tool for revealing differences in performance of biomedical natural language processing tools

    PubMed Central

    2012-01-01

    Background We introduce the linguistic annotation of a corpus of 97 full-text biomedical publications, known as the Colorado Richly Annotated Full Text (CRAFT) corpus. We further assess the performance of existing tools for performing sentence splitting, tokenization, syntactic parsing, and named entity recognition on this corpus. Results Many biomedical natural language processing systems demonstrated large differences between their previously published results and their performance on the CRAFT corpus when tested with the publicly available models or rule sets. Trainable systems differed widely with respect to their ability to build high-performing models based on this data. Conclusions The finding that some systems were able to train high-performing models based on this corpus is additional evidence, beyond high inter-annotator agreement, that the quality of the CRAFT corpus is high. The overall poor performance of various systems indicates that considerable work needs to be done to enable natural language processing systems to work well when the input is full-text journal articles. The CRAFT corpus provides a valuable resource to the biomedical natural language processing community for evaluation and training of new models for biomedical full text publications. PMID:22901054

  7. Inter-Annotator Agreement and the Upper Limit on Machine Performance: Evidence from Biomedical Natural Language Processing.

    PubMed

    Boguslav, Mayla; Cohen, Kevin Bretonnel

    2017-01-01

    Human-annotated data is a fundamental part of natural language processing system development and evaluation. The quality of that data is typically assessed by calculating the agreement between the annotators. It is widely assumed that this agreement between annotators is the upper limit on system performance in natural language processing: if humans can't agree with each other about the classification more than some percentage of the time, we don't expect a computer to do any better. We trace the logical positivist roots of the motivation for measuring inter-annotator agreement, demonstrate the prevalence of the widely-held assumption about the relationship between inter-annotator agreement and system performance, and present data that suggest that inter-annotator agreement is not, in fact, an upper bound on language processing system performance.
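
    The agreement figure such arguments turn on is usually Cohen's kappa or a close relative, i.e. raw agreement corrected for chance. A minimal self-contained version, with made-up entity labels from two hypothetical annotators:

    ```python
    from collections import Counter

    def cohens_kappa(a, b):
        """Chance-corrected agreement between two annotators' label lists."""
        assert len(a) == len(b) and len(a) > 0
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n
        ca, cb = Counter(a), Counter(b)
        expected = sum(ca[lab] * cb[lab] for lab in set(a) | set(b)) / (n * n)
        return (observed - expected) / (1 - expected)

    ann1 = ["GENE", "GENE", "CHEM", "O", "GENE", "O"]
    ann2 = ["GENE", "CHEM", "CHEM", "O", "GENE", "GENE"]
    print(f"kappa = {cohens_kappa(ann1, ann2):.3f}")   # ~0.48 here
    ```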

  8. [Supply services at health facilities: measuring performance].

    PubMed

    Dacosta Claro, I

    2001-01-01

    Performance measurement, in its different meanings--either balanced scorecard or output measurement--has become an essential tool in today's (world-class) organizations to improve service quality and reduce costs. This paper presents a performance measurement system for the hospital supply chain. The system is organized in different levels and groups of indicators in order to give a hierarchical, coherent and integrated vision of the processes. Thus, supply services performance is measured according to (1) financial aspects, (2) customer satisfaction aspects and (3) internal aspects of the processes performed. Since the informational needs of managers vary within the administrative structure, the performance measurement system is defined at three hierarchical levels. Firstly, the whole supply chain, with the different interrelations of activities. Secondly, the three main processes of the chain--physical management of products, purchasing and negotiation processes, and the local storage units. And finally, the performance measurement of each activity involved. The system and the indicators have been evaluated with the participation of 17 health services of Quebec (Canada); however, due to similarities in operation, the system could equally be implemented in Spanish hospitals.

  9. Evaluation methodologies for an advanced information processing system

    NASA Technical Reports Server (NTRS)

    Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.

    1984-01-01

    The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.
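
    The performability idea, weighting per-state performance by Markov steady-state probabilities, can be shown numerically. The three-state duplex model, rates, and per-state throughputs below are invented and are not the AIPS model; the steady-state vector solves pi Q = 0 with the probabilities summing to one.

    ```python
    import numpy as np

    # states: 0 = both channels up, 1 = one up (degraded), 2 = down
    lam, mu = 1e-4, 1e-2            # failure / repair rates per hour
    Q = np.array([[-2 * lam,  2 * lam,        0.0],
                  [      mu, -(mu + lam),     lam],
                  [     0.0,  2 * mu,     -2 * mu]])

    # solve pi Q = 0 subject to sum(pi) = 1 (least squares on the stacked system)
    A = np.vstack([Q.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    throughput = np.array([100.0, 55.0, 0.0])   # work units/hour in each state
    print("state probabilities:", pi.round(6))
    print("performability     :", (pi @ throughput).round(3))
    ```

    In the paper's terms, the Markov reliability model supplies the probability law and the queueing analysis supplies the per-state performance figures; the scalar above is their combination.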

  10. Image processing system performance prediction and product quality evaluation

    NASA Technical Reports Server (NTRS)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  11. Preparing systems engineering and computing science students in disciplined methods, quantitative, and advanced statistical techniques to improve process performance

    NASA Astrophysics Data System (ADS)

    McCray, Wilmon Wil L., Jr.

    The research was prompted by a need to conduct a study that assesses the process improvement, quality management, and analytical techniques taught to students in undergraduate and graduate systems engineering and computing science (e.g., software engineering, computer science, and information technology) degree programs at U.S. colleges and universities that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods, and how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High maturity process areas in the CMMI model imply the use of analytical, statistical, and quantitative management techniques, and process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost, and predict future outcomes. The research study identifies and discusses in detail the gap analysis findings on process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis identifying the gaps between the SEI's "healthy ingredients" of a process performance model and courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research also includes a Monte Carlo simulation optimization model and dashboard that demonstrates the use of statistical methods, statistical process control, sensitivity analysis, and quantitative and optimization techniques to establish a baseline and predict future customer satisfaction index scores (outcomes). The American Customer Satisfaction Index (ACSI) model and industry benchmarks were used as a framework for the simulation model.
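
    A Monte Carlo baseline-and-prediction exercise of the kind described can be sketched briefly. The driver weights, means, and spread below are placeholders, not ACSI benchmark values.

    ```python
    import random

    def simulate_index(weights, driver_means, driver_sd, runs=50_000):
        """Draw driver scores (0-100), combine them with the model weights,
        and return the simulated distribution of the satisfaction index."""
        scores = []
        for _ in range(runs):
            drivers = [min(100.0, max(0.0, random.gauss(m, driver_sd)))
                       for m in driver_means]
            scores.append(sum(w * d for w, d in zip(weights, drivers)))
        return sorted(scores)

    weights = (0.40, 0.35, 0.25)       # quality, value, expectations (invented)
    baseline = simulate_index(weights, (78, 74, 80), driver_sd=6)
    improved = simulate_index(weights, (82, 74, 80), driver_sd=6)  # +4 quality

    for name, xs in (("baseline", baseline), ("improved", improved)):
        mean = sum(xs) / len(xs)
        p05, p95 = xs[int(0.05 * len(xs))], xs[int(0.95 * len(xs))]
        print(f"{name}: mean={mean:.1f}  5th={p05:.1f}  95th={p95:.1f}")
    ```

    The baseline run plays the role of the established process-performance baseline; rerunning with shifted driver means is the sensitivity and prediction step.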

  12. Cloud object store for archive storage of high performance computing data using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  13. A second generation 50 Mbps VLSI level zero processing system prototype

    NASA Technical Reports Server (NTRS)

    Harris, Jonathan C.; Shi, Jeff; Speciale, Nick; Bennett, Toby

    1994-01-01

    Level Zero Processing (LZP) generally refers to telemetry data processing functions performed at ground facilities to remove all communication artifacts from instrument data. These functions typically include frame synchronization, error detection and correction, packet reassembly and sorting, playback reversal, merging, time-ordering, overlap deletion, and production of annotated data sets. The Data Systems Technologies Division (DSTD) at Goddard Space Flight Center (GSFC) has been developing high-performance Very Large Scale Integration Level Zero Processing Systems (VLSI LZPS) since 1989. The first VLSI LZPS prototype demonstrated 20 Megabits per second (Mbps) capability in 1992. With a new generation of high-density Application-Specific Integrated Circuits (ASIC) and a Mass Storage System (MSS) based on the High-Performance Parallel Peripheral Interface (HiPPI), a second prototype has been built that achieves full 50 Mbps performance. This paper describes the second generation LZPS prototype based upon VLSI technologies.

  14. Resource Management Scheme Based on Ubiquitous Data Analysis

    PubMed Central

    Lee, Heung Ki; Jung, Jaehee

    2014-01-01

    Resource management of the main memory and process handler is critical to enhancing the system performance of a web server. Owing to the transaction delay time that affects incoming requests from web clients, web server systems utilize several web processes to anticipate future requests. This procedure is able to decrease the web generation time because there are enough processes to handle the incoming requests from web browsers. However, inefficient process management results in low service quality for the web server system. Proper pregenerated process mechanisms are required for dealing with the clients' requests. Unfortunately, it is difficult to predict how many requests a web server system is going to receive. If a web server system builds too many web processes, it wastes a considerable amount of memory space, and thus performance is reduced. We propose an adaptive web process manager scheme based on the analysis of web log mining. In the proposed scheme, the number of web processes is controlled through prediction of incoming requests, and accordingly, the web process management scheme consumes the least possible web transaction resources. In experiments, real web trace data were used to prove the improved performance of the proposed scheme. PMID:25197692
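
    The core loop of such a scheme, forecasting the request rate from recent log windows and sizing the pre-forked process pool to match, can be sketched as follows. The moving-average forecaster, per-process capacity, and bounds are simplifications invented for illustration; the paper's predictor is based on web log mining.

    ```python
    from collections import deque

    class AdaptivePoolSizer:
        """Choose how many pre-generated web processes to keep alive from
        a short-term forecast of the request rate, bounded so that idle
        processes do not waste memory."""
        def __init__(self, per_proc_rps, lo=2, hi=64, window=5):
            self.per_proc_rps = per_proc_rps     # requests/s one process handles
            self.lo, self.hi = lo, hi
            self.history = deque(maxlen=window)  # recent log-window rates

        def observe(self, requests_per_sec):
            self.history.append(requests_per_sec)

        def target_processes(self, headroom=1.2):
            if not self.history:
                return self.lo
            forecast = sum(self.history) / len(self.history)
            need = int(forecast * headroom / self.per_proc_rps) + 1
            return max(self.lo, min(self.hi, need))

    sizer = AdaptivePoolSizer(per_proc_rps=50)
    for rate in (120, 180, 260, 240, 300):   # rates mined from access logs
        sizer.observe(rate)
    print(sizer.target_processes())          # 6 with these numbers
    ```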

  15. Application of high-throughput mini-bioreactor system for systematic scale-down modeling, process characterization, and control strategy development.

    PubMed

    Janakiraman, Vijay; Kwiatkowski, Chris; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming

    2015-01-01

    High-throughput systems and processes have typically been targeted for process development and optimization in the bioprocessing industry. For process characterization, bench scale bioreactors have been the system of choice. Due to the need for performing different process conditions for multiple process parameters, the process characterization studies typically span several months and are considered time and resource intensive. In this study, we have shown the application of a high-throughput mini-bioreactor system viz. the Advanced Microscale Bioreactor (ambr15™), to perform process characterization in less than a month and develop an input control strategy. As a pre-requisite to process characterization, a scale-down model was first developed in the ambr system (15 mL) using statistical multivariate analysis techniques that showed comparability with both manufacturing scale (15,000 L) and bench scale (5 L). Volumetric sparge rates were matched between ambr and manufacturing scale, and the ambr process matched the pCO2 profiles as well as several other process and product quality parameters. The scale-down model was used to perform the process characterization DoE study and product quality results were generated. Upon comparison with DoE data from the bench scale bioreactors, similar effects of process parameters on process yield and product quality were identified between the two systems. We used the ambr data for setting action limits for the critical controlled parameters (CCPs), which were comparable to those from bench scale bioreactor data. In other words, the current work shows that the ambr15™ system is capable of replacing the bench scale bioreactor system for routine process development and process characterization. © 2015 American Institute of Chemical Engineers.

  16. The Vehicle Integrated Performance Analysis Experience: Reconnecting With Technical Integration

    NASA Technical Reports Server (NTRS)

    McGhee, D. S.

    2006-01-01

    Very early in the Space Launch Initiative program, a small team of engineers at MSFC proposed a process for performing system-level assessments of a launch vehicle. Aimed primarily at providing insight and making NASA a smart buyer, the Vehicle Integrated Performance Analysis (VIPA) team was created. The difference between the VIPA effort and previous integration attempts is that VIPA is a process using experienced people from various disciplines, which focuses them on a technically integrated assessment. The foundations of VIPA's process are described. The VIPA team also recognized the need to target early detailed analysis toward identifying significant systems issues. This process is driven by the T-model for technical integration. VIPA's approach to performing system-level technical integration is discussed in detail. The VIPA process significantly enhances the development and monitoring of realizable project requirements. VIPA's assessment validates the concept's stated performance, identifies significant issues either with the concept or the requirements, and then reintegrates these issues to determine impacts. This process is discussed along with a description of how it may be integrated into a program's insight and review process. The VIPA process has gained favor with both engineering and project organizations for being responsive and insightful.

  17. Troubleshooting crude vacuum tower overhead ejector systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lines, J.R.; Frens, L.L.

    1995-03-01

    Routinely surveying tower overhead vacuum systems can improve performance and product quality. These vacuum systems normally provide reliable and consistent operation. However, process conditions, supplied utilities, corrosion, erosion and fouling all have an impact on ejector system performance. Refinery vacuum distillation towers use ejector systems to maintain tower top pressure and remove overhead gases. However, as with virtually all refinery equipment, performance may be affected by a number of variables. These variables may act independently or concurrently. It is important to understand basic operating principles of vacuum systems and how performance is affected by: utilities, corrosion and erosion, fouling, and process conditions. Reputable vacuum-system suppliers have service engineers that will come to a refinery to survey the system and troubleshoot performance or offer suggestions for improvement. A skilled vacuum-system engineer may be needed to diagnose and remedy system problems. The effect of these variables on performance is discussed. A case history is described of a vacuum system on a crude tower in a South American refinery.

  18. Simulation modelling of central order processing system under resource sharing strategy in demand-driven garment supply chains

    NASA Astrophysics Data System (ADS)

    Ma, K.; Thomassey, S.; Zeng, X.

    2017-10-01

    In this paper we propose a central order processing system under a resource sharing strategy for demand-driven garment supply chains to increase supply chain performance. We examined this system using simulation technology. Simulation results showed that significant improvements in various performance indicators were obtained in the new collaborative model with the proposed system.

  19. Performance measures for rural transportation systems : guidebook.

    DOT National Transportation Integrated Search

    2006-06-01

    This Performance Measures for Rural Transportation Systems Guidebook provides a standardized and supportable performance measurement process that can be applied to transportation systems in rural areas. The guidance included in this guidebook was...

  20. 40 CFR 60.254 - Standards for coal processing and conveying equipment, coal storage systems, transfer and loading...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40, Protection of Environment (2010-07-01), § 60.254: Standards for coal processing and conveying equipment, coal storage systems, transfer and loading systems, and open storage piles. Under Standards of Performance for New Stationary Sources, Standards of Performance for Coal Preparation...

  1. Intelligent Work Process Engineering System

    NASA Technical Reports Server (NTRS)

    Williams, Kent E.

    2003-01-01

    Optimizing performance on work activities and processes requires metrics of performance for management to monitor and analyze in order to support further improvements in efficiency, effectiveness, safety, reliability and cost. Information systems are therefore required to assist management in making timely, informed decisions regarding these work processes and activities. Currently, information systems regarding Space Shuttle maintenance and servicing do not exist to support such timely decisions. The work presented details a system which incorporates various automated and intelligent processes and analysis tools to capture, organize, and analyze work process related data and make the decisions necessary to meet KSC organizational goals. The advantages and disadvantages of design alternatives for the development of such a system will be discussed, including technologies which would need to be designed, prototyped, and evaluated.

  2. Preliminary design review package for the solar heating and cooling central data processing system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The Central Data Processing System (CDPS) is designed to transform the raw data collected at remote sites into performance evaluation information for assessing the performance of solar heating and cooling systems. Software requirements for the CDPS are described. The programming standards to be used in development, documentation, and maintenance of the software are discussed along with the CDPS operations approach in support of daily data collection and processing.

  3. An array processing system for lunar geochemical and geophysical data

    NASA Technical Reports Server (NTRS)

    Eliason, E. M.; Soderblom, L. A.

    1977-01-01

    A computerized array processing system has been developed to reduce, analyze, display, and correlate a large number of orbital and earth-based geochemical, geophysical, and geological measurements of the moon on a global scale. The system supports the activities of a consortium of about 30 lunar scientists involved in data synthesis studies. The system was modeled after standard digital image-processing techniques but differs in that processing is performed with floating point precision rather than integer precision. Because of flexibility in floating-point image processing, a series of techniques that are impossible or cumbersome in conventional integer processing were developed to perform optimum interpolation and smoothing of data. Recently color maps of about 25 lunar geophysical and geochemical variables have been generated.

  4. Stochastic availability analysis of operational data systems in the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Issa, T. N.

    1991-01-01

    Existing availability models of standby redundant systems consider only an operator's performance and its interaction with the hardware performance. In the case of operational data systems in the Deep Space Network (DSN), in addition to the operator-system interface, a controller reconfigures the system and links a standby unit into the network data path upon failure of the operating unit. A stochastic (Markovian) process technique is used to model and analyze availability performance, and the occurrence of degradation due to partial failures is quantitatively incorporated into the model. Exact expressions for the steady-state availability and the proportion of degraded performance are derived for the systems under study. The interaction among the hardware, operator, and controller performance parameters, and its effect on data availability, are evaluated and illustrated for an operational data processing system.
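
    A minimal numerical companion to this record, with invented transition rates: a three-state Markov model with a nominal state, a degraded state for partial failures, and a down state while the controller switches the standby unit into the data path. The steady-state probabilities yield both measures the abstract derives exactly, availability and the proportion of degraded performance.

    ```python
    import numpy as np

    # states: 0 = nominal, 1 = degraded (partial failure),
    #         2 = down while the controller reconfigures to the standby unit
    f_part, f_full, rec, sw = 2e-3, 5e-4, 0.1, 2.0   # events per hour (invented)
    Q = np.array([[-(f_part + f_full),  f_part,         f_full],
                  [ rec,              -(rec + f_full),  f_full],
                  [ sw,                0.0,             -sw   ]])

    # steady state: pi Q = 0 with sum(pi) = 1
    A = np.vstack([Q.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    print(f"availability        = {pi[0] + pi[1]:.6f}")   # any data flowing
    print(f"proportion degraded = {pi[1]:.6f}")           # partial-failure share
    ```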

  5. Evaluation of phase separator number in hydrodesulfurization (HDS) unit

    NASA Astrophysics Data System (ADS)

    Jayanti, A. D.; Indarto, A.

    2016-11-01

    The removal of acid gases such as H2S in the natural gas processing industry is required in order to meet sales gas specifications. Hydrodesulfurization (HDS) is one of the refinery processes dedicated to reducing sulphur. In the HDS unit, the phase separator plays an important role in removing H2S from hydrocarbons, operating at a certain pressure and temperature. The number of separators in the system is optimized and then evaluated for performance and economics. The evaluation shows that all systems were able to meet the H2S specification of the desired product. However, the one-separator system resulted in the highest capital and operational costs. The two-separator system showed the best performance in terms of energy efficiency, with the lowest capital and operating costs, and is therefore recommended as a reference for H2S removal from natural gas in the HDS unit.

  6. System verification and validation: a fundamental systems engineering task

    NASA Astrophysics Data System (ADS)

    Ansorge, Wolfgang R.

    2004-09-01

    Systems Engineering (SE) is the discipline in a project management team which transfers the user's operational needs and justifications for an Extremely Large Telescope (ELT)--or any other telescope--into a set of validated required system performance characteristics; it subsequently transfers these validated required system performance characteristics into a validated system configuration, and eventually into the assembled, integrated telescope system with verified performance characteristics, providing "objective evidence that the particular requirements for the specified intended use are fulfilled". The latter is the ISO Standard 8402 definition of "Validation". This presentation describes the verification and validation processes of an ELT project and outlines the key role Systems Engineering plays in these processes throughout all project phases. If these processes are implemented correctly into the project execution, are started at the proper time, namely at the very beginning of the project, and all capabilities of experienced system engineers are used, the project costs and the life-cycle costs of the telescope system can be reduced by between 25 and 50%. The intention of this article is to motivate and encourage project managers of astronomical telescopes and scientific instruments to involve the entire spectrum of Systems Engineering capabilities, performed by trained and experienced system engineers, for the benefit of the project, by explaining the importance of Systems Engineering in the AIV and validation processes.

  7. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    PubMed

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data, based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules, such as FSL.FEAT and NPAIRS.CVA, were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the rank of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.

  8. Human Engineering Operations and Habitability Assessment: A Process for Advanced Life Support Ground Facility Testbeds

    NASA Technical Reports Server (NTRS)

    Connolly, Janis H.; Arch, M.; Elfezouaty, Eileen Schultz; Novak, Jennifer Blume; Bond, Robert L. (Technical Monitor)

    1999-01-01

    Design and Human Engineering (HE) processes strive to ensure that the human-machine interface is designed for optimal performance throughout the system life cycle. Each component can be tested and assessed independently to assure optimal performance, but it is not until full integration that the system, and the inherent interactions between the system components, can be assessed as a whole. HE processes (which define and apply requirements for human interaction with missions and systems) are included in space flight activities, but also need to be included in ground activities and, specifically, in ground facility testbeds such as Bio-Plex. A unique aspect of the Bio-Plex Facility is the integral issue of Habitability, which includes the qualities of the environment that allow humans to work and live. HE is a process by which Habitability and system performance can be assessed.

  9. Fault-tolerant Control of a Cyber-physical System

    NASA Astrophysics Data System (ADS)

    Roxana, Rusu-Both; Eva-Henrietta, Dulf

    2017-10-01

    Cyber-physical systems represent a new emerging field in automatic control. Fault handling is a key component, because modern, large-scale processes must meet high standards of performance, reliability and safety. Fault propagation in large-scale chemical processes can lead to loss of production, energy and raw materials, and even to environmental hazard. The present paper develops a multi-agent fault-tolerant control architecture using robust fractional-order controllers for a (13C) cryogenic separation column cascade. The JADE (Java Agent DEvelopment Framework) platform was used to implement the multi-agent fault-tolerant control system, while the operational model of the process was implemented in the Matlab/SIMULINK environment. The MACSimJX (Multiagent Control Using Simulink with Jade Extension) toolbox was used to link the control system and the process model. To verify the performance and prove the feasibility of the proposed control architecture, several fault simulation scenarios were performed.

  10. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events are generated by system components during their execution or interaction with external objects (e.g., users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing the status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems that detects and classifies interesting local and global events and disseminates the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture therefore employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. Our work makes two main contributions: (1) it surveys and evaluates existing event filtering mechanisms for monitoring LSD systems, explaining the key characteristics and limitations of each technique, and (2) it devises an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability, and supporting the dynamic (re)configuration and optimization of event filters in large-scale distributed systems.
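
    As a rough illustration of the subscription-based filtering idea described above (not the paper's implementation), the sketch below registers per-subscriber predicates so that only matching events are forwarded, dropping uninteresting traffic close to the source; the Event fields and predicate form are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Event:
    source: str      # component that generated the event
    kind: str        # e.g., "error", "latency", "state_change"
    payload: dict = field(default_factory=dict)

class EventFilter:
    """Forwards an event only to subscribers whose predicate matches,
    so uninteresting traffic is dropped before it crosses the network."""
    def __init__(self):
        self._subs: list[tuple[Callable[[Event], bool],
                               Callable[[Event], None]]] = []

    def subscribe(self, predicate, handler):
        self._subs.append((predicate, handler))

    def publish(self, event: Event):
        for predicate, handler in self._subs:
            if predicate(event):
                handler(event)

# Usage: a debugging tool only wants error events from the scheduler.
f = EventFilter()
f.subscribe(lambda e: e.kind == "error" and e.source == "scheduler",
            lambda e: print("debug:", e.payload))
f.publish(Event("scheduler", "error", {"msg": "queue overflow"}))
```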

  11. Simulation of mass storage systems operating in a large data processing facility

    NASA Technical Reports Server (NTRS)

    Holmes, R.

    1972-01-01

    A mass storage simulation program was written to aid system designers in the design of a data processing facility. It acts as a tool for measuring the overall effect on the facility of on-line mass storage systems, and it provides the means of measuring and comparing the performance of competing mass storage systems. The performance of the simulation program is demonstrated.

  12. Investigation of Capabilities and Technologies Supporting Rapid UAV Launch System Development

    DTIC Science & Technology

    2015-06-01

    Patrick Alan Livesay, Naval Postgraduate School, Monterey, CA 93943. ...to operate. This enabled the launcher design team to more clearly determine and articulate system requirements and performance parameters. Next, an Analytic Hierarchy Process (AHP) was performed to prioritize the capabilities and assist in the decision-making process [1]. The AHP decision-analysis technique is...

  13. A novel process control method for a TT-300 E-Beam/X-Ray system

    NASA Astrophysics Data System (ADS)

    Mittendorfer, Josef; Gallnböck-Wagner, Bernhard

    2018-02-01

    This paper presents some aspects of the process control method for a TT-300 E-Beam/X-Ray system at Mediscan, Austria. The novelty of the approach is the seamless integration of routine monitoring dosimetry with process data. This makes it possible to calculate a parametric dose for each production unit and, consequently, to perform fine-grained, holistic process performance monitoring. Process performance is documented in process control charts for the analysis of individual runs as well as for historic trending of runs of specific process categories over a specified time range.
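
    A control chart of the kind mentioned here can be sketched in a few lines. The following Python fragment is a generic illustration, not the authors' method: it assumes a series of parametric doses per production unit (the sample values are invented) and flags units outside conventional three-sigma control limits.

```python
import numpy as np

def control_chart(doses):
    """Return center line, 3-sigma control limits, and indices of
    out-of-control production units for a run of parametric doses."""
    doses = np.asarray(doses, dtype=float)
    center = doses.mean()
    sigma = doses.std(ddof=1)
    lcl, ucl = center - 3 * sigma, center + 3 * sigma
    out = np.where((doses < lcl) | (doses > ucl))[0]
    return center, (lcl, ucl), out

center, (lcl, ucl), out = control_chart([25.1, 24.8, 25.3, 24.9, 27.9, 25.0])
print(f"center={center:.2f}, limits=({lcl:.2f}, {ucl:.2f}), flagged={out}")
```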

  14. Teamwork for Oversight of Processes and Systems (TOPS). Implementation guide for TOPS version 2.0, 10 August 1992

    NASA Technical Reports Server (NTRS)

    Strand, Albert A.; Jackson, Darryl J.

    1992-01-01

    As the nation redefines priorities to deal with a rapidly changing world order, both government and industry require new approaches for oversight of management systems, particularly for high-technology products. Declining defense budgets will lead to significant reductions in government contract management personnel. Concurrently, defense contractors are reducing administrative and overhead staffing to control costs. These combined pressures require bold approaches for the oversight of management systems. In the spring of 1991, the DPRO and TRW created a Process Action Team (PAT) to jointly prepare a Performance Based Management (PBM) system titled Teamwork for Oversight of Processes and Systems (TOPS). The primary goal is implementation of a performance-based management system, built on objective data, to review critical TRW processes with an emphasis on continuous improvement. The processes are: Finance and Business Systems, Engineering and Manufacturing Systems, Quality Assurance, and Software Systems. The team established a number of goals: delivery of quality products to contractual terms and conditions; ensuring that TRW management systems meet government guidance and good business practices; use of objective data to measure critical processes; elimination of wasteful and duplicative reviews and audits; emphasis on teamwork--all efforts must be perceived to add value by both sides and decisions are made by consensus; and synergy and the creation of a strong working trust between TRW and the DPRO. TOPS permits the adjustment of oversight resources when conditions change or when TRW system performance indicates that either an increase or a decrease in surveillance is appropriate. Monthly Contractor Performance Assessments (CPA) are derived from a summary of supporting system-level and process-level ratings obtained from objective process-level data. Tiered, objective, data-driven metrics have been highly successful in achieving a cooperative and effective method of measuring performance. The teamwork-based culture developed by TOPS proved an unequaled success in removing adversarial relationships and creating an atmosphere of continuous improvement in quality processes at TRW. The new working relationship does not decrease the responsibility or authority of the DPRO to ensure contract compliance, and it permits both parties to work more effectively to improve total quality and reduce cost. By emphasizing teamwork in developing a stronger approach to efficient management of the defense industrial base, TOPS is a singular success.

  16. Onboard FPGA-based SAR processing for future spaceborne systems

    NASA Technical Reports Server (NTRS)

    Le, Charles; Chan, Samuel; Cheng, Frank; Fang, Winston; Fischman, Mark; Hensley, Scott; Johnson, Robert; Jourdan, Michael; Marina, Miguel; Parham, Bruce; hide

    2004-01-01

    We present a real-time, high-performance and fault-tolerant FPGA-based hardware architecture for the processing of synthetic aperture radar (SAR) images in future spaceborne systems. In particular, we discuss the integrated design approach, from top-level algorithm specifications and system requirements, design methodology, functional verification and performance validation, down to hardware design and implementation.

  17. Photonic single nonlinear-delay dynamical node for information processing

    NASA Astrophysics Data System (ADS)

    Ortín, Silvia; San-Martín, Daniel; Pesquera, Luis; Gutiérrez, José Manuel

    2012-06-01

    An electro-optical system with a delay loop based on semiconductor lasers is investigated for information processing by performing numerical simulations. This system can replace a complex network of many nonlinear elements for the implementation of Reservoir Computing. We show that a single nonlinear-delay dynamical system has the basic properties needed to perform as a reservoir: short-term memory and the separation property. The computing performance of this system is evaluated for two prediction tasks: the Lorenz chaotic time series and the nonlinear auto-regressive moving average (NARMA) model. We sweep the parameters of the system to find the best performance. The results achieved for the Lorenz and NARMA-10 tasks are comparable to those obtained by other machine learning methods.
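
    The NARMA-10 benchmark mentioned here has a widely used standard formulation that is easy to reproduce; the sketch below generates input/target pairs under that common definition (the paper may use a slightly different variant).

```python
import numpy as np

def narma10(n_steps, seed=0):
    """Generate the standard NARMA-10 benchmark: the target y depends
    nonlinearly on its own last 10 values and on a random input u."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, n_steps)   # customary input range
    y = np.zeros(n_steps)
    for t in range(9, n_steps - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()  # sum of last 10
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

u, y = narma10(2000)   # input series and prediction target
```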

  18. A unified method for evaluating real-time computer controllers: A case study. [aircraft control

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Krishna, C. M.; Lee, Y. H.

    1982-01-01

    A real-time control system consists of a synergistic pair: a controlled process and a controller computer. Performance measures for real-time controller computers are defined on the basis of the nature of this synergistic pair. A case study of a typical critical controlled process is presented in the context of new performance measures that express the performance of both controlled processes and real-time controllers (taken as a unit) on the basis of a single variable: controller response time. Controller response time is a function of current system state, system failure rate, electrical and/or magnetic interference, etc., and is therefore a random variable. Control overhead is expressed as a monotonically nondecreasing function of the response time, and the system suffers catastrophic failure, or dynamic failure, if the response time for a control task exceeds the corresponding system hard deadline, if any. A rigorous probabilistic approach is used to estimate the performance measures. The controlled process chosen for study is an aircraft in the final stages of descent, just prior to landing. First, the performance measures for the controller are presented. Second, control algorithms for solving the landing problem are discussed, and finally the impact of the performance measures on the problem is analyzed.

  19. Aircraft Alerting Systems Standardization Study. Phase IV. Accident Implications on Systems Design.

    DTIC Science & Technology

    1982-06-01

    computing and processing to assimilate and process status information using... provided with capabilities in computing and processing, sensing, interfacing, and controlling and displaying. o Computing and Processing - Algorithms... alerting system to perform a flight status monitor function would require additional sensing, computing and processing, interfacing, and controlling

  20. Passive serialization in a multitasking environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hennessey, J.P.; Osisek, D.L.; Seigh, J.W. II

    1989-02-28

    In a multiprocessing system having a control program in which data objects are shared among processes, this patent describes a method for serializing references to a data object by the processes so as to prevent invalid references to the data object by any process when an operation requiring exclusive access is performed by another process, comprising the steps of: permitting the processes to reference data objects on a shared-access basis without obtaining a shared lock; monitoring a point of execution of the control program which is common to all processes in the system, which occurs regularly in the process' execution and across which no references to any data object can be maintained by any process, except references using locks; establishing a system reference point which occurs after each process in the system has passed the point of execution at least once since the last such system reference point; requesting an operation requiring exclusive access on a selected data object; preventing subsequent references by other processes to the selected data object; waiting until two of the system reference points have occurred; and then performing the requested operation.
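
    The mechanism claimed here resembles what is now called read-copy-update (RCU): readers take no locks, and a writer waits for a grace period defined by every process passing a quiescent execution point. The following Python fragment is a purely illustrative toy, not the patented implementation; the thread registration scheme, counter names, and busy-wait are all simplifications.

```python
import threading

class PassiveSerializer:
    """Toy grace-period tracker: readers call quiescent_point() at the
    common execution point across which they hold no object references;
    a writer calls synchronize() and waits for two system reference
    points before mutating or reclaiming the shared object."""

    def __init__(self):
        self._counts = {}                 # thread id -> passes observed
        self._writer = threading.Lock()   # serializes writers only

    def register(self):
        self._counts[threading.get_ident()] = 0

    def quiescent_point(self):
        self._counts[threading.get_ident()] += 1

    def synchronize(self):
        with self._writer:
            for _ in range(2):            # two system reference points
                snapshot = dict(self._counts)
                for tid, seen in snapshot.items():
                    while self._counts[tid] == seen:
                        pass              # real code would sleep or park
```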

  1. Process Management inside ATLAS DAQ

    NASA Astrophysics Data System (ADS)

    Alexandrov, I.; Amorim, A.; Badescu, E.; Burckhart-Chromek, D.; Caprini, M.; Dobson, M.; Duval, P. Y.; Hart, R.; Jones, R.; Kazarov, A.; Kolos, S.; Kotov, V.; Liko, D.; Lucio, L.; Mapelli, L.; Mineev, M.; Moneta, L.; Nassiakou, M.; Pedro, L.; Ribeiro, A.; Roumiantsev, V.; Ryabov, Y.; Schweiger, D.; Soloviev, I.; Wolters, H.

    2002-10-01

    The Process Management component of the online software of the future ATLAS experiment data acquisition system is presented. The purpose of the Process Manager is to perform basic job control of the software components of the data acquisition system. It is capable of starting, stopping and monitoring the status of those components on the data acquisition processors, independent of the underlying operating system. Its architecture is designed on the basis of a server-client model using CORBA-based communication. The server part relies on C++ software agent objects acting as an interface between the local operating system and client applications. Among the major design challenges for the software agents were achieving the maximum possible degree of autonomy and creating processes that are aware of dynamic conditions in their environment and able to determine corresponding actions. Issues such as the performance of the agents in terms of the time needed for process creation and destruction, the scalability of the system in view of the final ATLAS configuration, and minimizing the use of hardware resources were also of critical importance. Besides the details given on the architecture and the implementation, we also present scalability and performance test results for the Process Manager system.
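
    The basic job-control duties described, starting, stopping, and monitoring components independently of the underlying OS, can be sketched generically. This Python fragment is not the ATLAS CORBA agent; it is a hypothetical, minimal supervisor built on the standard subprocess module.

```python
import subprocess

class ProcessAgent:
    """Minimal supervisor: start, stop, and query the status of managed
    components by name (a stand-in for the DAQ agent's job control)."""

    def __init__(self):
        self._procs = {}

    def start(self, name, argv):
        self._procs[name] = subprocess.Popen(argv)

    def stop(self, name):
        proc = self._procs.pop(name)
        proc.terminate()
        proc.wait(timeout=10)

    def status(self, name):
        proc = self._procs.get(name)
        if proc is None:
            return "not running"
        return "running" if proc.poll() is None else f"exited({proc.returncode})"

agent = ProcessAgent()
agent.start("reader", ["sleep", "30"])   # hypothetical component command
print(agent.status("reader"))
agent.stop("reader")
```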

  2. Light Water Reactor Sustainability Program Operator Performance Metrics for Control Room Modernization: A Practical Guide for Early Design Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald Boring; Roger Lew; Thomas Ulrich

    2014-03-01

    As control rooms are modernized with new digital systems at nuclear power plants, it is necessary to evaluate operator performance using these systems as part of a verification and validation process. There are no standard, predefined metrics available for assessing what constitutes satisfactory operator interaction with new systems, especially during the early design stages of a new system. This report identifies the process and metrics for evaluating human system interfaces as part of control room modernization. The report includes background information on design and evaluation, a thorough discussion of human performance measures, and a practical example of how the process and metrics have been used as part of a turbine control system upgrade during the formative stages of design. The process and metrics are geared toward generalizability to other applications and serve as a template for utilities undertaking their own control room modernization activities.

  3. Probabilistic performance assessment of complex energy process systems - The case of a self-sustained sanitation system.

    PubMed

    Kolios, Athanasios; Jiang, Ying; Somorin, Tosin; Sowale, Ayodeji; Anastasopoulou, Aikaterini; Anthony, Edward J; Fidalgo, Beatriz; Parker, Alison; McAdam, Ewan; Williams, Leon; Collins, Matt; Tyrrel, Sean

    2018-05-01

    A probabilistic modelling approach was developed and applied to investigate the energy and environmental performance of an innovative sanitation system, the "Nano-membrane Toilet" (NMT). The system treats human excreta via an advanced energy and water recovery island with the aim of addressing current and future sanitation demands. Due to the complex design and inherent characteristics of the system's input material, there are a number of stochastic variables which may significantly affect the system's performance. The non-intrusive probabilistic approach adopted in this study combines a finite number of deterministic thermodynamic process simulations with an artificial neural network (ANN) approximation model and Monte Carlo simulations (MCS) to assess the effect of system uncertainties on the predicted performance of the NMT system. The joint probability distributions of the process performance indicators suggest a Stirling Engine (SE) power output in the range of 61.5-73 W with a high confidence interval (CI) of 95%. In addition, there is high probability (with 95% CI) that the NMT system can achieve positive net power output between 15.8 and 35 W. A sensitivity study reveals the system power performance is mostly affected by SE heater temperature. Investigation into the environmental performance of the NMT design, including water recovery and CO2/NOx emissions, suggests significant environmental benefits compared to conventional systems. Results of the probabilistic analysis can better inform future improvements on the system design and operational strategy and this probabilistic assessment framework can also be applied to similar complex engineering systems.
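
    The non-intrusive pattern described, a handful of deterministic simulations, a neural-network surrogate, then Monte Carlo sampling through the surrogate, can be sketched briefly. The fragment below is a generic illustration rather than the authors' model; the input names, ranges, toy response, and network size are invented for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Stand-in for a small set of deterministic process simulations:
# two uncertain inputs (e.g., feed moisture fraction, heater temperature
# in kelvin) -> net power output in watts (toy response surface).
X_train = rng.uniform([0.6, 500.0], [0.9, 650.0], size=(60, 2))
y_train = 30 - 40 * (X_train[:, 0] - 0.6) + 0.05 * (X_train[:, 1] - 500)

surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                         random_state=0).fit(X_train, y_train)

# Monte Carlo simulation through the cheap surrogate instead of the
# expensive thermodynamic model.
samples = rng.uniform([0.6, 500.0], [0.9, 650.0], size=(100_000, 2))
power = surrogate.predict(samples)
lo, hi = np.percentile(power, [2.5, 97.5])
print(f"net power 95% interval: [{lo:.1f}, {hi:.1f}] W")
```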

  4. Grammatical Aspect and Mental Simulation

    ERIC Educational Resources Information Center

    Bergen, Benjamin; Wheeler, Kathryn

    2010-01-01

    When processing sentences about perceptible scenes and performable actions, language understanders activate perceptual and motor systems to perform mental simulations of those events. But little is known about exactly what linguistic elements activate modality-specific systems during language processing. While it is known that content words, like…

  5. The effect of requirements prioritization on avionics system conceptual design

    NASA Astrophysics Data System (ADS)

    Lorentz, John

    This dissertation will provide a detailed approach and analysis of a new collaborative requirements prioritization methodology that has been used successfully on four Coast Guard avionics acquisition and development programs valued at $400M+. A statistical representation of participant study results will be discussed and analyzed in detail. Many technically compliant projects fail to deliver levels of performance and capability that the customer desires. Some of these systems completely meet "threshold" levels of performance; however, the distribution of resources in the process devoted to the development and management of the requirements does not always represent the voice of the customer. This is especially true for technically complex projects such as modern avionics systems. A simplified facilitated process for prioritization of system requirements will be described. The collaborative prioritization process, and the resulting artifacts, aid the systems engineer during early conceptual design. Not all requirements are the same in terms of customer priority. While there is a tendency to have many thresholds inside of a system design, there is usually a subset of requirements and system performance that is of the utmost importance to the design. These critical capabilities and critical levels of performance typically represent the reason the system is being built. The systems engineer needs processes to identify these critical capabilities, the associated desired levels of performance, and the risks associated with the specific requirements that define the critical capability. The facilitated prioritization exercise is designed to collaboratively draw out these critical capabilities and levels of performance so they can be emphasized in system design. Developing the purpose, scheduling, and process for prioritization events are key elements of systems engineering and modern project management. The benefits of early collaborative prioritization flow throughout the project schedule, resulting in greater success during system deployment and operational testing. This dissertation will discuss the data and findings from participant studies, present a literature review of systems engineering and design processes, and test the hypothesis that the prioritization process had no effect on stakeholder sentiment related to the conceptual design. In addition, the "Requirements Rationalization" process will be discussed in detail. Avionics, like many other fields, has transitioned from a discrete-electronics, hardware-centric engineering discipline to one that incorporates software engineering as a core process of the technology development cycle. As with other software-based systems, avionics now has significant soft-system attributes that must be considered in the design process. The boundless opportunities that exist in software design demand prioritization to focus effort on the critical functions that the software must provide. This has been a well-documented and understood phenomenon in the software development community for many years. This dissertation will attempt to link the effect of software-integrated avionics to the benefits of prioritization of requirements in the problem space and demonstrate the sociological and technical benefits of early prioritization practices.

  6. A Conceptual Framework for the Electronic Performance Support Systems within IBM Lotus Notes 6 (LN6) Example

    ERIC Educational Resources Information Center

    Bayram, Servet

    2005-01-01

    The concept of Electronic Performance Support Systems (EPSS) encompasses multimedia or computer-based instruction components that improve human performance by providing process simplification, performance information, and decision support. EPSS has become a hot topic for organizational development, human resources, performance technology,…

  7. Dynamic Systems Analysis for Turbine Based Aero Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.

    2016-01-01

    The aircraft engine design process seeks to optimize the overall system-level performance, weight, and cost for a given concept. Steady-state simulations and data are used to identify trade-offs that should be balanced to optimize the system in a process known as systems analysis. These systems analysis simulations and data may not adequately capture the true performance trade-offs that exist during transient operation. Dynamic systems analysis provides the capability for assessing the dynamic trade-offs at an earlier stage of the engine design process. The dynamic systems analysis concept, the developed tools, and the potential benefit are presented in this paper. To provide this capability, the Tool for Turbine Engine Closed-loop Transient Analysis (TTECTrA) was developed to provide the user with an estimate of the closed-loop performance (response time) and operability (high-pressure compressor surge margin) for a given engine design and set of control design requirements. TTECTrA, along with engine deterioration information, can be used to develop a more generic relationship between performance and operability that can impact the engine design constraints and potentially lead to a more efficient engine.

  8. The Systems Engineering Process for Human Support Technology Development

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2005-01-01

    Systems engineering is designing and optimizing systems. This paper reviews the systems engineering process and indicates how it can be applied in the development of advanced human support systems. Systems engineering develops the performance requirements, subsystem specifications, and detailed designs needed to construct a desired system. Systems design is difficult, requiring both art and science and balancing human and technical considerations. The essential systems engineering activity is trading off and compromising between competing objectives such as performance and cost, schedule and risk. Systems engineering is not a complete independent process. It usually supports a system development project. This review emphasizes the NASA project management process as described in NASA Procedural Requirement (NPR) 7120.5B. The process is a top down phased approach that includes the most fundamental activities of systems engineering - requirements definition, systems analysis, and design. NPR 7120.5B also requires projects to perform the engineering analyses needed to ensure that the system will operate correctly with regard to reliability, safety, risk, cost, and human factors. We review the system development project process, the standard systems engineering design methodology, and some of the specialized systems analysis techniques. We will discuss how they could apply to advanced human support systems development. The purpose of advanced systems development is not directly to supply human space flight hardware, but rather to provide superior candidate systems that will be selected for implementation by future missions. The most direct application of systems engineering is in guiding the development of prototype and flight experiment hardware. However, anticipatory systems engineering of possible future flight systems would be useful in identifying the most promising development projects.

  9. Integrating policy-based management and SLA performance monitoring

    NASA Astrophysics Data System (ADS)

    Liu, Tzong-Jye; Lin, Chin-Yi; Chang, Shu-Hsin; Yen, Meng-Tzu

    2001-10-01

    Policy-based management systems provide the configuration capability that allows system administrators to focus on the requirements of customers. The service-level agreement performance monitoring mechanism helps system administrators verify the correctness of policies. However, it is difficult for a device to process policies directly, because policies are a management-level concept. This paper proposes a mechanism to decompose a policy into rules that can be efficiently processed by a device. Thus, the device can process the rules and collect performance statistics efficiently, and the policy-based management system can collect this performance statistics information and report service-level agreement performance monitoring information to the system administrator. The proposed policy-based management system achieves both the policy configuration and service-level agreement performance monitoring requirements. A policy consists of a condition part and an action part. The condition part is a Boolean expression over a source host IP group, a destination host IP group, etc. The action part contains the parameters of services. We say that an address group is compact if it consists only of a range of IP addresses that can be denoted by a pair of IP address and corresponding IP mask. If the condition part of a policy consists only of compact address groups, we say that the policy is a rule. Since a device can efficiently process a compact address group, while a system administrator prefers to define a range of IP addresses, the policy-based management system has to translate policies into rules and bridge the gaps between policies and rules. The proposed policy-based management system builds the relationships between VPNs and policies, and between policies and rules. Since the system administrator wants to monitor the system performance information of VPNs and policies, the proposed policy-based management system downloads the relationships among VPNs, policies and rules to the SNMP agents. The SNMP agents build the management information base (MIB) of all VPNs, policies and rules according to the relationships obtained from the management server. Thus, the proposed policy-based management system can get all performance monitoring information of VPNs and policies from the agents. The proposed policy-based manager achieves two goals: (a) it provides a management environment in which the system administrator configures the network considering only policy requirement issues, and (b) it lets the device simply process packets and collect the required performance information. These two properties make the proposed management system satisfy both user and device requirements.
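
    The translation step described above, turning an administrator-friendly IP range into "compact" address/mask pairs a device can match efficiently, corresponds to summarizing a range into CIDR blocks. Python's standard ipaddress module does exactly this; the sketch below is a generic illustration with a made-up range, not the paper's implementation.

```python
import ipaddress

def range_to_rules(first, last):
    """Decompose an arbitrary IPv4 range into compact (address, mask)
    blocks, i.e., the rule form a device can process directly."""
    return list(ipaddress.summarize_address_range(
        ipaddress.IPv4Address(first), ipaddress.IPv4Address(last)))

# A policy range that is not itself compact becomes four compact rules:
for net in range_to_rules("192.168.1.10", "192.168.1.25"):
    print(net, "->", net.network_address, net.netmask)
# 192.168.1.10/31, 192.168.1.12/30, 192.168.1.16/29, 192.168.1.24/31
```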

  10. Task allocation in a distributed computing system

    NASA Technical Reports Server (NTRS)

    Seward, Walter D.

    1987-01-01

    A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.
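
    Load-balancing allocation of the kind described is often illustrated with the greedy longest-processing-time (LPT) heuristic: sort tasks by cost and always assign the next task to the least-loaded processor. The sketch below is a generic illustration of that heuristic, not the chapter's specific method; the task costs are invented.

```python
import heapq

def lpt_allocate(task_costs, n_processors):
    """Greedy LPT: assign each task (largest first) to the currently
    least-loaded processor; returns assignments and final loads."""
    heap = [(0.0, p) for p in range(n_processors)]  # (load, processor)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_processors)}
    for task, cost in sorted(enumerate(task_costs),
                             key=lambda tc: tc[1], reverse=True):
        load, p = heapq.heappop(heap)
        assignment[p].append(task)
        heapq.heappush(heap, (load + cost, p))
    return assignment, sorted(heap)

assignment, loads = lpt_allocate([7, 3, 9, 2, 4, 6], 2)
print(assignment, loads)   # loads end up close to balanced: 16 vs 15
```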

  11. Tailoring Systems Engineering Processes in a Conceptual Design Environment: A Case Study at NASA Marshall Spaceflight Center's ACO

    NASA Technical Reports Server (NTRS)

    Mulqueen, John; Maples, C. Dauphne; Fabisinski, Leo, III

    2012-01-01

    This paper provides an overview of Systems Engineering as it is applied in a conceptual design space systems department at the National Aeronautics and Space Administration (NASA) Marshall Spaceflight Center (MSFC) Advanced Concepts Office (ACO). Engineering work performed in NASA MSFC's ACO is targeted toward the Exploratory Research and Concepts Development life-cycle stages, as defined in the International Council on Systems Engineering (INCOSE) Systems Engineering Handbook. This paper addresses three ACO Systems Engineering tools that correspond to three INCOSE Technical Processes: Stakeholder Requirements Definition, Requirements Analysis, and Integration, as well as one Project Process, Risk Management. These processes are used to facilitate, streamline, and manage systems engineering processes tailored for the earliest two life-cycle stages, which is the environment in which ACO engineers work. The role of systems engineers and systems engineering as performed in ACO is explored in this paper. The need for tailoring Systems Engineering processes, tools, and products in the ever-changing engineering services ACO provides to its customers is addressed.

  12. Performance improvements of binary diffractive structures via optimization of the photolithography and dry etch processes

    NASA Astrophysics Data System (ADS)

    Welch, Kevin; Leonard, Jerry; Jones, Richard D.

    2010-08-01

    Increasingly stringent requirements on the performance of diffractive optical elements (DOEs) used in wafer scanner illumination systems are driving continuous improvements in their associated manufacturing processes. Specifically, these processes are designed to improve the output pattern uniformity of off-axis illumination systems to minimize degradation in the ultimate imaging performance of a lithographic tool. In this paper, we discuss performance improvements in both photolithographic patterning and RIE etching of fused silica diffractive optical structures. In summary, optimized photolithographic processes were developed to increase critical dimension uniformity and feature-size linearity across the substrate. The photoresist film thickness was also optimized for integration with an improved etch process. This etch process was itself optimized for pattern transfer fidelity, sidewall profile (wall angle, trench bottom flatness), and across-wafer etch depth uniformity. Improvements observed with these processes on idealized test structures (for ease of analysis) led to their implementation in product flows, with comparable increases in performance and yield on customer designs.

  13. Implementation of Lean System on Erbium Doped Fibre Amplifier Manufacturing Process to Reduce Production Time

    NASA Astrophysics Data System (ADS)

    Maneechote, T.; Luangpaiboon, P.

    2010-10-01

    The manufacturing process for erbium-doped fibre amplifiers is complicated. It must meet customers' requirements in a present economic climate in which products need to be shipped to customers as soon as possible after purchase orders are placed. This research aims to study and improve the processes and production lines of erbium-doped fibre amplifiers using lean manufacturing systems via an application of computer simulation. Three scenarios of lean toolbox systems were selected via the expert system. In the first, a production schedule based on shipment date is combined with a first-in-first-out control system. The second scenario focuses on a designed flow-process plant layout. The third combines the flow-process plant layout with the shipment-date-based production schedule and the first-in-first-out control system. Computer simulation with the limited data, via expected values, is used to observe the performance of all scenarios. The most preferable lean toolbox systems resulting from the computer simulation were selected for implementation in the real production process of erbium-doped fibre amplifiers. A comparison was carried out to determine the actual performance measures via an analysis of variance of the response, the production time per unit, achieved in each scenario. The adequacy of the linear statistical model was also checked by testing the experimental errors, or residuals, for normality, constant variance, and independence. The results show that a hybrid scenario of a lean manufacturing system with first-in-first-out control and a flow-process plant layout statistically leads to better performance in terms of the mean and variance of production times.

  14. Progress towards an Optimization Methodology for Combustion-Driven Portable Thermoelectric Power Generation Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Shankar; Karri, Naveen K.; Gogna, Pawan K.

    2012-03-13

    Enormous military and commercial interest exists in developing quiet, lightweight, and compact thermoelectric (TE) power generation systems. This paper investigates the design integration and analysis of an advanced TE power generation system implementing JP-8 fueled combustion and thermal recuperation. The design and development of a portable TE power system using a JP-8 combustor as a high-temperature heat source, with optimal process flows that depend on efficient heat generation, transfer, and recovery within the system, are explored. Design optimization of the system required considering the combustion system efficiency and TE conversion efficiency simultaneously. The combustor performance and TE subsystem performance are coupled directly through exhaust temperatures, fuel and air mass flow rates, heat exchanger performance, subsequent hot-side temperatures, and cold-side cooling techniques and temperatures. Systematic investigation of this system relied on accurate thermodynamic modeling of complex, high-temperature combustion processes concomitantly with detailed thermoelectric converter thermal/mechanical modeling. To this end, this work reports on the design integration of system-level process flow simulations using the commercial software CHEMCAD with in-house thermoelectric converter and module optimization, and heat exchanger analyses using COMSOL software. High-performance, high-temperature TE materials and segmented TE element designs are incorporated in coupled design analyses to achieve predicted TE subsystem-level conversion efficiencies exceeding 10%. These TE advances are integrated with a high-performance microtechnology combustion reactor based on recent advances at the Pacific Northwest National Laboratory (PNNL). Predictions from this coupled simulation established a basis for optimal selection of fuel and air flow rates, thermoelectric module design and operating conditions, and microtechnology heat-exchanger design criteria. This paper discusses this simulation process, which leads directly to system efficiency power maps defining potentially available optimal system operating conditions and regimes. This coupled simulation approach enables pathways for the integrated use of high-performance combustor components, high-performance TE devices, and microtechnologies to produce a compact, lightweight, combustion-driven TE power system prototype that operates on common fuels.

  15. Comparing performance in discrete and continuous comparison tasks.

    PubMed

    Leibovich, Tali; Henik, Avishai

    2014-05-01

    The approximate number system (ANS) theory suggests that all magnitudes, discrete (i.e., number of items) or continuous (i.e., size, density, etc.), are processed by a shared system and comply with Weber's law. The current study reexamined this notion by comparing performance in discrete (comparing numerosities of dot arrays) and continuous (comparisons of area of squares) tasks. We found that: (a) threshold of discrimination was higher for continuous than for discrete comparisons; (b) while performance in the discrete task complied with Weber's law, performance in the continuous task violated it; and (c) performance in the discrete task was influenced by continuous properties (e.g., dot density, dot cumulative area) of the dot array that were not predictive of numerosities or task relevant. Therefore, we propose that the magnitude processing system (MPS) is actually divided into separate (yet interactive) systems for discrete and continuous magnitude processing. Further subdivisions are discussed. We argue that cooperation between these systems results in a holistic comparison of magnitudes, one that takes into account continuous properties in addition to numerosities. Considering the MPS as two systems opens the door to new and important questions that shed light on both normal and impaired development of the numerical system.

  16. Micromagnetics on high-performance workstation and mobile computational platforms

    NASA Astrophysics Data System (ADS)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms for micromagnetic simulations is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.

  17. A Framework to Guide the Assessment of Human-Machine Systems.

    PubMed

    Stowers, Kimberly; Oglesby, James; Sonesh, Shirley; Leyva, Kevin; Iwig, Chelsea; Salas, Eduardo

    2017-03-01

    We have developed a framework for guiding measurement in human-machine systems. The assessment of safety and performance in human-machine systems often relies on direct measurement, such as tracking reaction time and accidents. However, safety and performance emerge from the combination of several variables. The assessment of precursors to safety and performance are thus an important part of predicting and improving outcomes in human-machine systems. As part of an in-depth literature analysis involving peer-reviewed, empirical articles, we located and classified variables important to human-machine systems, giving a snapshot of the state of science on human-machine system safety and performance. Using this information, we created a framework of safety and performance in human-machine systems. This framework details several inputs and processes that collectively influence safety and performance. Inputs are divided according to human, machine, and environmental inputs. Processes are divided into attitudes, behaviors, and cognitive variables. Each class of inputs influences the processes and, subsequently, outcomes that emerge in human-machine systems. This framework offers a useful starting point for understanding the current state of the science and measuring many of the complex variables relating to safety and performance in human-machine systems. This framework can be applied to the design, development, and implementation of automated machines in spaceflight, military, and health care settings. We present a hypothetical example in our write-up of how it can be used to aid in project success.

  18. Theory of constraints for publicly funded health systems.

    PubMed

    Sadat, Somayeh; Carter, Michael W; Golden, Brian

    2013-03-01

    Originally developed in the context of publicly traded for-profit companies, theory of constraints (TOC) improves system performance through leveraging the constraint(s). While the theory seems to be a natural fit for resource-constrained publicly funded health systems, there is a lack of literature addressing the modifications required to adopt TOC and define the goal and performance measures. This paper develops a system dynamics representation of the classical TOC's system-wide goal and performance measures for publicly traded for-profit companies, which forms the basis for developing a similar model for publicly funded health systems. The model is then expanded to include some of the factors that affect system performance, providing a framework to apply TOC's process of ongoing improvement in publicly funded health systems. Future research is required to more accurately define the factors affecting system performance and populate the model with evidence-based estimates for various parameters in order to use the model to guide TOC's process of ongoing improvement.

  19. Systems level test and simulation for photonic processing systems

    NASA Astrophysics Data System (ADS)

    Erteza, I. A.; Stalker, K. T.

    1995-08-01

    Photonic technology is growing in importance throughout DOD. Programs have been underway in each of the Services to demonstrate the ability of photonics to enhance current electronic performance in several prototype systems, such as the Navy's SLQ-32 radar warning receiver, the Army's multi-role survivable radar and the phased array radar controller for the Airborne Warning and Control System (AWACS) upgrade. Little, though, is known about radiation effects; the component studies do not furnish the information needed to predict overall system performance in a radiation environment. To date, no comprehensive test and analysis program has been conducted to evaluate sensitivity of overall system performance to the radiation environment. The goal of this program is to relate component level effects to system level performance through modeling and testing of a selected optical processing system, and to help direct component testing to items which can directly and adversely affect overall system performance. This report gives a broad overview of the project, highlighting key results.

  20. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  1. The Research and Implementation of MUSER CLEAN Algorithm Based on OpenCL

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Chen, K.; Deng, H.; Wang, F.; Mei, Y.; Wei, S. L.; Dai, W.; Yang, Q. P.; Liu, Y. B.; Wu, J. P.

    2017-03-01

    There is an urgent need for high-performance data processing on a single machine in the development of astronomical software. However, due to differing machine configurations, traditional programming techniques such as multi-threading and CUDA (Compute Unified Device Architecture)+GPU (Graphic Processing Unit) have obvious limitations in portability and seamlessness across different operating systems. The OpenCL (Open Computing Language) approach used in the development of the MUSER (MingantU SpEctral Radioheliograph) data processing system is introduced. The Högbom CLEAN algorithm is re-implemented as a parallel CLEAN algorithm using the Python language and the PyOpenCL extension package. The experimental results show that the CLEAN algorithm based on OpenCL has approximately equal operating efficiency compared with the former CLEAN algorithm based on CUDA. More importantly, the data processing of this system can also achieve high performance in a CPU-only environment, which solves the problem of the environmental dependence of CUDA+GPU. Overall, the research improves the adaptability of the system, with emphasis on the performance of MUSER image-clean computing. At the same time, the realization of OpenCL in MUSER demonstrates its suitability for scientific data processing. In view of the high-performance computing features of OpenCL in heterogeneous environments, it will probably become a preferred technology in future high-performance astronomical software development.
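
    As a flavor of what the Python+PyOpenCL combination mentioned here looks like, the fragment below runs the element-wise residual update at the heart of a Högbom CLEAN iteration on whatever OpenCL device is available. It is a minimal sketch, not the MUSER implementation, and assumes the PSF has already been shifted to the current peak position and flattened to match the residual map.

```python
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void subtract(__global float *residual,
                       __global const float *shifted_psf,
                       const float gain_peak) {
    int i = get_global_id(0);
    residual[i] -= gain_peak * shifted_psf[i];   /* one CLEAN subtraction */
}
"""

ctx = cl.create_some_context()        # CPU or GPU, whichever is present
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, KERNEL).build()

residual = np.random.rand(512 * 512).astype(np.float32)   # dirty map
psf = np.random.rand(512 * 512).astype(np.float32)        # shifted PSF

mf = cl.mem_flags
res_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=residual)
psf_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=psf)

# gain * peak value of 0.1 as a stand-in for one iteration's subtraction.
prog.subtract(queue, residual.shape, None, res_buf, psf_buf, np.float32(0.1))
cl.enqueue_copy(queue, residual, res_buf)   # read updated residual back
```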

  2. 40 CFR 65.164 - Performance test and flare compliance determination notifications and reports.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONSOLIDATED FEDERAL AIR RULE Closed Vent Systems, Control Devices, and Routing to a Fuel Gas System or a Process § 65.164 Performance test and flare... complete test report shall include a brief process description, sampling site description, description of...

  3. The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform

    NASA Astrophysics Data System (ADS)

    Xie, Qingyun

    2016-06-01

    This paper summarizes the general requirements and specific characteristics of both a geospatial raster database management system and a raster data processing platform, from a domain-specific perspective as well as from a computing point of view. It also discusses the need for tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global-scale and high-performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content- and context-based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high-performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.

  4. Performance measurement for information systems: Industry perspectives

    NASA Technical Reports Server (NTRS)

    Bishop, Peter C.; Yoes, Cissy; Hamilton, Kay

    1992-01-01

    Performance measurement has become a focal topic for information systems (IS) organizations. Historically, IS performance measures have dealt with the efficiency of the data processing function. Today, the function of most IS organizations goes beyond simple data processing. To understand how IS organizations have developed meaningful performance measures that reflect their objectives and activities, industry perspectives on IS performance measurement were studied. The objectives of the study were to understand the state of the practice in techniques for IS performance measurement; to gather approaches and examples of actual performance measures used in industry; and to report patterns, trends, and lessons learned about performance measurement to NASA/JSC. Examples of how some of the most forward-looking companies are shaping their IS processes through measurement are provided. Thoughts on the presence of a life cycle in performance measure development and a suggested taxonomy for performance measurements are included in the appendices.

  5. Solar industrial process heat systems: An assessment of standards for materials and components

    NASA Astrophysics Data System (ADS)

    Rossiter, W. J.; Shipp, W. E.

    1981-09-01

    A study was conducted to obtain information on the performance of materials and components in operational solar industrial process heat (IPH) systems, and to provide recommendations for the development of standards, including evaluative test procedures, for materials and components. An assessment of the needs for standards for evaluating the long-term performance of materials and components of IPH systems was made. The assessment was based on the availability of existing standards and on information obtained from a field survey of operational systems, the literature, and discussions with individuals in the industry. Field inspections of 10 operational IPH systems were performed.

  6. Application of agent-based system for bioprocess description and process improvement.

    PubMed

    Gao, Ying; Kipling, Katie; Glassey, Jarka; Willis, Mark; Montague, Gary; Zhou, Yuhong; Titchener-Hooker, Nigel J

    2010-01-01

    Modeling plays an important role in bioprocess development for design and scale-up. Predictive models can also be used in biopharmaceutical manufacturing to assist decision-making either to maintain process consistency or to identify optimal operating conditions. To predict the whole bioprocess performance, the strong interactions present in a processing sequence must be adequately modeled. Traditionally, bioprocess modeling considers process units separately, which makes it difficult to capture the interactions between units. In this work, a systematic framework is developed to analyze the bioprocesses based on a whole process understanding and considering the interactions between process operations. An agent-based approach is adopted to provide a flexible infrastructure for the necessary integration of process models. This enables the prediction of overall process behavior, which can then be applied during process development or once manufacturing has commenced, in both cases leading to the capacity for fast evaluation of process improvement options. The multi-agent system comprises a process knowledge base, process models, and a group of functional agents. In this system, agent components co-operate with each other in performing their tasks. These include the description of the whole process behavior, evaluating process operating conditions, monitoring of the operating processes, predicting critical process performance, and providing guidance to decision-making when coping with process deviations. During process development, the system can be used to evaluate the design space for process operation. During manufacture, the system can be applied to identify abnormal process operation events and then to provide suggestions as to how best to cope with the deviations. In all cases, the function of the system is to ensure an efficient manufacturing process. The implementation of the agent-based approach is illustrated via selected application scenarios, which demonstrate how such a framework may enable the better integration of process operations by providing a plant-wide process description to facilitate process improvement. Copyright 2009 American Institute of Chemical Engineers

  7. Learner Performance Accounting: A Tri-Cycle Process

    ERIC Educational Resources Information Center

    Brown, Thomas C.; McCleary, Lloyd E.

    1973-01-01

    The Tri-Cycle Process described in the model permits, for the first time, an integrated approach to designing an individualized instructional system: a rational diagnosis-prescription-evaluation system keyed to an accounting system. (Author)

  8. Thermal hydraulic feasibility assessment of the hot conditioning system and process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heard, F.J.

    1996-10-10

    The Spent Nuclear Fuel Project was established to develop engineered solutions for the expedited removal, stabilization, and storage of spent nuclear fuel from the K Basins at the U.S. Department of Energy's Hanford Site in Richland, Washington. A series of analyses have been completed investigating the thermal-hydraulic performance and feasibility of the proposed Hot Conditioning System and process for the Spent Nuclear Fuel Project. The analyses were performed using a series of thermal-hydraulic models that could respond to all process and safety-related issues that may arise pertaining to the Hot Conditioning System. The subject efforts focus on independently investigating, quantifying, and establishing the governing heat production and removal mechanisms, flow distributions within the multi-canister overpack, and performing process simulations for various purge gases under consideration for the Hot Conditioning System, as well as obtaining preliminary results for comparison with and verification of other analyses, and providing technology-based recommendations for consideration and incorporation into the Hot Conditioning System design bases.

  9. Performance analysis of the ascent propulsion system of the Apollo spacecraft

    NASA Technical Reports Server (NTRS)

    Hooper, J. C., III

    1973-01-01

    Activities involved in the performance analysis of the Apollo lunar module ascent propulsion system are discussed. A description of the ascent propulsion system, including hardware, instrumentation, and system characteristics, is included. The methods used to predict the inflight performance and to establish performance uncertainties of the ascent propulsion system are discussed. The techniques of processing the telemetered flight data and performing postflight performance reconstruction to determine actual inflight performance are discussed. Problems that have been encountered and results from the analysis of the ascent propulsion system performance during the Apollo 9, 10, and 11 missions are presented.

  10. Information theoretic analysis of edge detection in visual communication

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2010-08-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end, information-theory-based system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling and additive noise, that define the image gathering system. An edge detection algorithm is regarded as high-performing only if the information rate from the scene to the edge approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. People generally use subjective judgment to compare different edge detection methods; there has been no common tool for evaluating the performance of different algorithms and guiding the selection of the best algorithm for a given system or scene. Our information-theoretic assessment becomes this new tool, which allows us to compare different edge detection operators in a common environment.
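
    The paper's acceptance criterion is the information rate from scene to detected edges. As a loose, hypothetical illustration of that idea (not Huck et al.'s full channel analysis, which also models gathering and display), the sketch below computes the mutual information between a reference edge map and a detector output.

```python
# Simplified proxy for the information-rate criterion: mutual information
# (in bits) between a "true" edge map and a detector's output. Both maps
# here are synthetic; a real assessment would model the gathering channel.
import numpy as np

def mutual_information(a, b):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=2)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
truth = rng.random((128, 128)) > 0.9              # hypothetical scene edges
noisy = truth ^ (rng.random((128, 128)) > 0.98)   # detector with bit flips
print(f"MI = {mutual_information(truth, noisy):.3f} bits")
```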

  11. Process innovation in high-performance systems: From polymeric composites R&D to design and build of airplane showers

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Jui

    In the aerospace industry, reducing aircraft weight is key because it increases flight performance and drives down operating costs. With fierce competition in the commercial aircraft industry, companies that focused primarily on exterior aircraft performance design issues are turning more attention to the design of the aircraft interior. Simultaneously, there has been an increase in the number of new amenities offered to passengers, especially in first-class travel and executive jets. These new amenities present novel and challenging design parameters, including integration into existing aircraft systems without sacrificing flight performance. The objective of this study was to design a re-circulating shower system for an aircraft that weighs significantly less than pre-existing shower designs. This was accomplished by integrating processes from polymeric composite materials, water filtration, and project management. Carbon/epoxy laminates exposed to hygrothermal cycling conditions were evaluated and compared to model calculations. Novel materials and a variety of fabrication processes were developed to create new types of paper for honeycomb applications. Experiments were then performed on the properties and honeycomb processability of these new papers. Standard water quality tests were performed on samples taken from the re-circulating system to see if current regulatory standards were being met. These studies were executed and integrated with tools from project management to design a better shower system for commercial aircraft applications.

  12. The participatory design of a performance oriented monitoring and evaluation system in an international development environment.

    PubMed

    Guerra-López, Ingrid; Hicks, Karen

    2015-02-01

    This article illustrates the application of the impact monitoring and evaluation process for the design and development of a performance monitoring and evaluation framework in the context of human and institutional capacity development. This participative process facilitated stakeholder ownership in several areas, including the design, development, and use of a new monitoring and evaluation system, as well as its targeted results and accomplishments, through the use of timely performance data gathered through ongoing monitoring and evaluation. The process produced a performance indicator map, a comprehensive monitoring and evaluation framework, and data collection templates to promote the development, implementation, and sustainability of the monitoring and evaluation system of a farmers' trade union in an African country. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Real-Time Embedded High Performance Computing: Communications Scheduling.

    DTIC Science & Technology

    1995-06-01

    real-time operating system must explicitly limit the degradation of the timing performance of all processes as the number of processes ... adequately supported by a real-time operating system, could compound the development problems encountered in the past. Many experts feel that the ... real-time operating system support for an MPP, although they all provide some support for distributed real-time applications. A distributed real

  14. Information processing using a single dynamical node as complex system

    PubMed Central

    Appeltant, L.; Soriano, M.C.; Van der Sande, G.; Danckaert, J.; Massar, S.; Dambre, J.; Schrauwen, B.; Mirasso, C.R.; Fischer, I.

    2011-01-01

    Novel methods for information processing are highly desired in our information-driven society. Inspired by the brain's ability to process information, the recently introduced paradigm known as 'reservoir computing' shows that complex networks can efficiently perform computation. Here we introduce a novel architecture that reduces the usually required large number of elements to a single nonlinear node with delayed feedback. Through an electronic implementation, we experimentally and numerically demonstrate excellent performance in a speech recognition benchmark. Complementary numerical studies also show excellent performance for a time series prediction benchmark. These results prove that delay-dynamical systems, even in their simplest manifestation, can perform efficient information processing. This finding paves the way to feasible and resource-efficient technological implementations of reservoir computing. PMID:21915110
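
    The following is a minimal, self-contained sketch (with made-up parameters) of the delay-based reservoir idea described: a single tanh node with delayed feedback emulates many "virtual" nodes via a time-multiplexed input mask, and only a linear readout is trained.

```python
# Toy delay-based reservoir: one nonlinear node + delay line = N virtual
# nodes. Parameters (N, mask values, feedback gain) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N = 50                                   # virtual nodes along the delay line
mask = rng.choice([-0.1, 0.1], size=N)   # fixed random input mask

def reservoir_states(u):
    x = np.zeros(N)           # x[-1] is last round's final node: the feedback
    states = []
    for u_t in u:
        for i in range(N):
            # the single node mixes delayed state and masked input
            x[i] = np.tanh(0.8 * x[i - 1] + mask[i] * u_t)
        states.append(x.copy())
    return np.array(states)

# toy task: one-step-ahead prediction of a sine wave
u = np.sin(np.linspace(0, 60, 600))
X = reservoir_states(u[:-1])
w, *_ = np.linalg.lstsq(X, u[1:], rcond=None)   # train the linear readout
print("train MSE:", float(np.mean((X @ w - u[1:]) ** 2)))
```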

  15. A novel double loop control model design for chemical unstable processes.

    PubMed

    Cong, Er-Ding; Hu, Ming-Hui; Tu, Shan-Tung; Xuan, Fu-Zhen; Shao, Hui-He

    2014-03-01

    In this manuscript, based on the Smith predictor control scheme for unstable industrial processes, an improved double-loop control model is proposed for chemical unstable processes. The inner loop stabilizes the integrating or unstable process and transforms the original process into a stable first-order-plus-dead-time process. The outer loop enhances the performance of the set-point response, and a disturbance controller is designed to enhance the performance of the disturbance response. The improved control system is simple, with an exact physical meaning, and its characteristic equation is easy to stabilize. The three controllers are designed separately in the improved scheme, so each controller is easy to design and good control performance can be obtained for each closed-loop transfer function separately. The robust stability of the proposed control scheme is analyzed. Finally, case studies illustrate that the improved method can give better system performance than existing design methods. © 2013 ISA. Published by ISA. All rights reserved.
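
    As a rough numerical illustration of the double-loop idea (all gains and the plant below are assumed, not the authors' design), an inner proportional loop first stabilizes an unstable first-order plant, and an outer PI loop then shapes the set-point response of the stabilized inner loop:

```python
# Double-loop sketch: inner P-loop stabilizes dx/dt = a*x + u (a > 0),
# outer PI loop tracks the set point. Gains are hand-picked for illustration.
import numpy as np

a, dt, T = 0.5, 0.01, 20.0      # unstable pole, Euler step, horizon (s)
k_inner = 2.0                   # inner gain; any k_inner > a stabilizes
kp, ki = 1.5, 0.8               # outer PI gains

x, integ, r = 0.0, 0.0, 1.0     # plant state, PI integrator, set point
for _ in np.arange(0.0, T, dt):
    e = r - x
    integ += e * dt
    v = kp * e + ki * integ     # outer-loop command
    u = k_inner * (v - x)       # inner loop: proportional stabilization
    x += dt * (a * x + u)       # Euler step of the unstable plant
print(f"output after {T:.0f} s: {x:.3f} (set point {r})")
```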

  16. Determinants of business sustainability: an ergonomics perspective.

    PubMed

    Genaidy, Ash M; Sequeira, Reynold; Rinder, Magda M; A-Rehim, Amal D

    2009-03-01

    There is a need to integrate both macro- and micro-ergonomic approaches for the effective implementation of interventions designed to improve the root causes of problems such as work safety, quality and productivity in the enterprise system. The objective of this study was to explore from an ergonomics perspective the concept of business sustainability through optimising the worker-work environment interface. The specific aims were: (a) to assess the working conditions of a production department work process with the goal to jointly optimise work safety, quality and quantity; (b) to evaluate the enterprise-wide work process at the system level as a social entity in an attempt to trace the root causes of ergonomic issues impacting employees throughout the work process. The Work Compatibility Model was deployed to examine the experiences of workers (that is, effort, perceived risk/benefit, performance and satisfaction/dissatisfaction or psychological impact) and their associations with the complex domains of the work environment (task content, physical and non-physical work environment and conditions for learning/growth/development). This was followed by assessment of the enterprise system through detailed interviews with department managers and lead workers. A system diagnostic instrument was also constructed from information derived from the published literature to evaluate the enterprise system performance. The investigation of the production department indicated that the stress and musculoskeletal pain experienced by workers (particularly on the day shift) were derived from sources elsewhere in the work process. The enterprise system evaluation and detailed interviews allowed the research team to chart the feed-forward and feedback stress propagation loops in the work system. System improvement strategies were extracted on the basis of tacit/explicit knowledge obtained from department managers and lead workers. In certain situations concerning workplace human performance issues, a combined macro-micro ergonomic methodology is essential to solve the productivity, quality and safety issues impacting employees along the trajectory or path of the enterprise-wide work process. In this study, the symptoms associated with human performance issues in one production department work process had root causes originating in the customer service department work process. In fact, the issues found in the customer service department caused performance problems elsewhere in the enterprise-wide work process such as the traffic department. Sustainable enterprise solutions for workplace human performance require the integration of macro- and micro-ergonomic approaches.

  17. AIRSAR Automated Web-based Data Processing and Distribution System

    NASA Technical Reports Server (NTRS)

    Chu, Anhua; vanZyl, Jakob; Kim, Yunjin; Lou, Yunling; Imel, David; Tung, Wayne; Chapman, Bruce; Durden, Stephen

    2005-01-01

    In this paper, we present an integrated, end-to-end synthetic aperture radar (SAR) processing system that accepts data processing requests, submits processing jobs, performs quality analysis, delivers and archives processed data. This fully automated SAR processing system utilizes database and internet/intranet web technologies to allow external users to browse and submit data processing requests and receive processed data. It is a cost-effective way to manage a robust SAR processing and archival system. The integration of these functions has reduced operator errors and increased processor throughput dramatically.

  18. Final Report of the NASA Office of Safety and Mission Assurance Agile Benchmarking Team

    NASA Technical Reports Server (NTRS)

    Wetherholt, Martha

    2016-01-01

    With the software industry rapidly transitioning from waterfall to Agile processes, Terry Wilcutt, Chief, Safety and Mission Assurance, Office of Safety and Mission Assurance (OSMA), established the Agile Benchmarking Team (ABT) to ensure that the NASA Safety and Mission Assurance (SMA) community remains in a position to perform reliable Software Assurance (SA) on NASA's critical software (SW) systems. The Team's tasks were: 1. Research background literature on current Agile processes; 2. Perform benchmark activities with other organizations that are involved in software Agile processes to determine best practices; 3. Collect information on Agile-developed systems to enable improvements to the current NASA standards and processes to enhance their ability to perform reliable software assurance on NASA Agile-developed systems; 4. Suggest additional guidance and recommendations for updates to those standards and processes, as needed. The ABT's findings and recommendations for software management, engineering and software assurance are addressed herein.

  19. Proposed Framework for the Evaluation of Standalone Corpora Processing Systems: An Application to Arabic Corpora

    PubMed Central

    Al-Thubaity, Abdulmohsen; Alqifari, Reem

    2014-01-01

    Despite the accessibility of numerous online corpora, students and researchers engaged in the fields of Natural Language Processing (NLP), corpus linguistics, and language learning and teaching may encounter situations in which they need to develop their own corpora. Several commercial and free standalone corpora processing systems are available to process such corpora. In this study, we first propose a framework for the evaluation of standalone corpora processing systems and then use it to evaluate seven freely available systems. The proposed framework considers the usability, functionality, and performance of the evaluated systems while taking into consideration their suitability for Arabic corpora. While the results show that most of the evaluated systems exhibited comparable usability scores, the scores for functionality and performance were substantially different with respect to support for the Arabic language and N-grams profile generation. The results of our evaluation will help potential users of the evaluated systems to choose the system that best meets their needs. More importantly, the results will help the developers of the evaluated systems to enhance their systems and developers of new corpora processing systems by providing them with a reference framework. PMID:25610910

  20. Proposed framework for the evaluation of standalone corpora processing systems: an application to Arabic corpora.

    PubMed

    Al-Thubaity, Abdulmohsen; Al-Khalifa, Hend; Alqifari, Reem; Almazrua, Manal

    2014-01-01

    Despite the accessibility of numerous online corpora, students and researchers engaged in the fields of Natural Language Processing (NLP), corpus linguistics, and language learning and teaching may encounter situations in which they need to develop their own corpora. Several commercial and free standalone corpora processing systems are available to process such corpora. In this study, we first propose a framework for the evaluation of standalone corpora processing systems and then use it to evaluate seven freely available systems. The proposed framework considers the usability, functionality, and performance of the evaluated systems while taking into consideration their suitability for Arabic corpora. While the results show that most of the evaluated systems exhibited comparable usability scores, the scores for functionality and performance were substantially different with respect to support for the Arabic language and N-grams profile generation. The results of our evaluation will help potential users of the evaluated systems to choose the system that best meets their needs. More importantly, the results will help the developers of the evaluated systems to enhance their systems and developers of new corpora processing systems by providing them with a reference framework.

  1. Performance evaluation capabilities for the design of physical systems

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Wang, B. P.

    1972-01-01

    The results are presented of a study aimed at developing and formulating a capability for the limiting performance of large steady state systems. The accomplishments reported include: (1) development of a theory of limiting performance of large systems subject to steady state inputs; (2) application and modification of PERFORM, the computational capability for the limiting performance of systems with transient inputs; and (3) demonstration that use of an inherently smooth control force for a limiting performance calculation improves the system identification phase of the design process for physical systems subjected to transient loading.

  2. Controlling Ethylene for Extended Preservation of Fresh Fruits and Vegetables

    DTIC Science & Technology

    2008-12-01

    into a process simulation to determine the effects of key design parameters on the overall performance of the system. Integrating process simulation... [flattened table of commodity ethylene-sensitivity/decay ratings (Asian pears, avocados, bananas, cantaloupe, cherimoya, etc.) omitted] ...ozonolysis. Process simulation was subsequently used to understand the effect of key system parameters on EEU performance. Using this modeling work

  3. Incomplete fuzzy data processing systems using artificial neural network

    NASA Technical Reports Server (NTRS)

    Patyra, Marek J.

    1992-01-01

    In this paper, the implementation of a fuzzy data processing system using an artificial neural network (ANN) is discussed. A binary representation of fuzzy data is assumed, in which the universe of discourse is discretized into n equal intervals and the value of the membership function is represented by a binary number. It is proposed that incomplete fuzzy data processing be performed in two stages: the first stage performs the 'retrieval' of incomplete fuzzy data, and the second stage performs the desired operation on the retrieved data. The proposed method of incomplete fuzzy data retrieval is based on linear approximation of the missing values of the membership function. The ANN implementation of the proposed system is presented. The system was computationally verified and showed a relatively small total error.
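
    The retrieval stage lends itself to a very short sketch. Assuming missing membership grades are marked as NaN over the discretized universe (a representation chosen here for illustration, not taken from the paper), linear approximation reduces to interpolation between the nearest known grades:

```python
# Fill missing membership grades by linear approximation between known ones.
# The NaN encoding and the sample values are hypothetical.
import numpy as np

def retrieve(membership):
    mu = np.asarray(membership, dtype=float)
    idx = np.arange(mu.size)
    known = ~np.isnan(mu)
    mu[~known] = np.interp(idx[~known], idx[known], mu[known])
    return mu

incomplete = [0.0, 0.2, np.nan, np.nan, 1.0, np.nan, 0.4, 0.0]
print(retrieve(incomplete))  # gaps filled: ~[0, 0.2, 0.47, 0.73, 1, 0.7, 0.4, 0]
```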

  4. Remediating ethylbenzene-contaminated clayey soil by a surfactant-aided electrokinetic (SAEK) process.

    PubMed

    Yuan, Ching; Weng, Chih-Huang

    2004-10-01

    The objectives of this research are to investigate the remediation efficiency and electrokinetic behavior of ethylbenzene-contaminated clay in a surfactant-aided electrokinetic (SAEK) process under a potential gradient of 2 V cm⁻¹. Experimental results indicated that the type of processing fluid played a key role in determining the removal performance of ethylbenzene from clay in the SAEK process. A mixed surfactant system consisting of 0.5% SDS and 2.0% PANNOX 110 showed the best ethylbenzene removal performance in the SAEK system. The removal efficiency of ethylbenzene was 63-98% in the SAEK system, while only 40% was achieved in an electrokinetic system with tap water as the processing fluid. It was found that ethylbenzene accumulated in the vicinity of the anode in the electrokinetic system with tap water as the processing fluid, whereas the concentration front of ethylbenzene shifted toward the cathode in the SAEK system. The electroosmotic permeability and power consumption were 0.17 × 10⁻⁶ to 3.01 × 10⁻⁶ cm² V⁻¹ s⁻¹ and 52-123 kWh m⁻³, respectively. The cost, including the expense of energy and surfactants, was estimated to be 5.15-12.65 USD m⁻³ for the SAEK systems, which was 2.0-4.9 times greater than that of the electrokinetic system alone (2.6 USD m⁻³). Nevertheless, taking both the remediation efficiency of ethylbenzene and the energy expenditure into account in the overall process performance evaluation, the SAEK system was still a cost-effective alternative treatment method.

  5. Measuring primary care practice performance within an integrated delivery system: a case study.

    PubMed

    Stewart, Louis J; Greisler, David

    2002-01-01

    This article examines the use of an integrated performance measurement system to plan and control primary care service delivery within an integrated delivery system. We review a growing body of literature that focuses on the development and implementation of management reporting systems among healthcare providers. Our study extends the existing literature by examining the use of performance information generated by an integrated performance measurement system within a healthcare organization. We conduct our examination through a case study of the WMG Primary Care Medicine Group, the primary care medical group practice of WellSpan Health System. WellSpan Health System is an integrated delivery system that serves south central Pennsylvania and northern Maryland. Our study examines the linkage between WellSpan Health's strategic objectives and its primary care medicine group's integrated performance measurement system. The conceptual design of this integrated performance measurement system combines financial metrics with practice management and clinical operating metrics to provide a more complete picture of medical group performance. Our findings demonstrate that WellSpan Health was able to achieve superior financial results despite a weak linkage between its integrated performance measurement system and its strategic objectives. WellSpan Health achieved this objective for its primary care medicine group by linking clinical performance information to physician compensation and reporting practice management performance through the use of statistical process charts. They found that the combined mechanisms of integrated performance measurement and statistical process control charts improved organizational learning and communications between organizational stakeholders.

  6. Optimization of MLS receivers for multipath environments

    NASA Technical Reports Server (NTRS)

    Mcalpine, G. A.; Highfill, J. H., III

    1976-01-01

    The design of a microwave landing system (MLS) aircraft receiver, capable of optimal performance in the multipath environments found in air terminal areas, is reported. Special attention was given to the angle tracking problem of the receiver, including tracking system design considerations, the study and application of locally optimum estimation involving multipath-adaptive reception and envelope processing, and microcomputer system design. Results show that envelope processing is competitive performance-wise with i-f signal processing in this application, while being much simpler and cheaper. A summary of the signal model is given.

  7. Systems Engineering Knowledge Asset (SEKA) Management for Higher Performing Engineering Teams: People, Process and Technology toward Effective Knowledge-Workers

    ERIC Educational Resources Information Center

    Shelby, Kenneth R., Jr.

    2013-01-01

    Systems engineering teams' value-creation for enterprises is slower than possible due to inefficiencies in communication, learning, common knowledge collaboration and leadership conduct. This dissertation outlines the surrounding people, process and technology dimensions for higher performing engineering teams. It describes a true experiment…

  8. Improvement of the Performance of an Electrocoagulation Process System Using Fuzzy Control of pH.

    PubMed

    Demirci, Yavuz; Pekel, Lutfiye Canan; Altinten, Ayla; Alpbaz, Mustafa

    2015-12-01

    The removal efficiencies of electrocoagulation (EC) systems are highly dependent on the initial value of pH. If an EC system has an acidic influent, the pH of the effluent increases during the treatment process; conversely, if such a system has an alkaline influent, the pH of the effluent decreases during the treatment process. Thus, changes in the pH of the wastewater affect the efficiency of the EC process. In this study, we investigated the dynamic effects of pH. To evaluate approaches for preventing increases in the pH of the system, the MATLAB/Simulink program was used to develop and evaluate an on-line computer-based system for pH control. The aim of this work was to study Proportional-Integral-Derivative (PID) control and fuzzy control of the pH of a real textile wastewater purification process using EC. The performances and dynamic behaviors of these two control systems were evaluated based on determinations of COD, colour, and turbidity removal efficiencies.
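
    For contrast with the fuzzy controller studied, a bare-bones discrete PID pH loop is easy to sketch. Everything below (process model, gains, dosing effect) is invented for illustration and is not the authors' MATLAB/Simulink model:

```python
# Toy discrete PID pH control: EC drives the pH of an acidic influent upward;
# a signed reagent dose pulls it back to the set point. All values assumed.
kp, ki, kd = 2.0, 0.4, 0.1
dt, setpoint = 1.0, 7.0
pH, integ, prev_e = 6.0, 0.0, 0.0

for _ in range(200):
    e = setpoint - pH
    integ += e * dt
    deriv = (e - prev_e) / dt
    prev_e = e
    dose = kp * e + ki * integ + kd * deriv   # PID law
    pH += 0.05 + 0.1 * dose                   # toy process: drift + dosing
print(f"final pH: {pH:.2f} (set point {setpoint})")
```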

  9. Hydrothermal Gasification for Waste to Energy

    NASA Astrophysics Data System (ADS)

    Epps, Brenden; Laser, Mark; Choo, Yeunun

    2014-11-01

    Hydrothermal gasification is a promising technology for harvesting energy from waste streams. Applications range from straightforward waste-to-energy conversion (e.g. municipal waste processing, industrial waste processing), to water purification (e.g. oil spill cleanup, wastewater treatment), to biofuel energy systems (e.g. using algae as feedstock). Products of the gasification process are electricity, bottled syngas (H2 + CO), sequestered CO2, clean water, and inorganic solids; further chemical reactions can be used to create biofuels such as ethanol and biodiesel. We present a comparison of gasification system architectures, focusing on efficiency and economic performance metrics. Various system architectures are modeled computationally, using a model developed by the coauthors. The physical model tracks the mass of each chemical species, as well as energy conversions and transfers throughout the gasification process. The generic system model includes the feedstock, gasification reactor, heat recovery system, pressure reducing mechanical expanders, and electricity generation system. Sensitivity analysis of system performance to various process parameters is presented. A discussion of the key technological barriers and necessary innovations is also presented.

  10. Combining high performance simulation, data acquisition, and graphics display computers

    NASA Technical Reports Server (NTRS)

    Hickman, Robert J.

    1989-01-01

    Issues involved in the continuing development of an advanced simulation complex are discussed. This approach provides the capability to perform the majority of tests on advanced systems non-destructively. The controlled test environments can be replicated to examine the response of the systems under test to alternative treatments of the system control design, or to test the function and qualification of specific hardware. Field tests verify that the elements simulated in the laboratories are sufficient. The digital computer is hosted by a Digital Equipment Corp. MicroVAX computer, with an Aptec Computer Systems Model 24 I/O computer performing the communication function. An Applied Dynamics International AD100 performs the high-speed simulation computing, and an Evans and Sutherland PS350 performs on-line graphics display. A Scientific Computer Systems SCS40 acts as a high-performance FORTRAN program processor to support the complex by generating numerous large files, from programs coded in FORTRAN, that are required for the real-time processing. Four programming languages are involved in the process: FORTRAN, ADSIM, ADRIO, and STAPLE. FORTRAN is employed on the MicroVAX host to initialize and terminate the simulation runs on the system. The generation of the data files on the SCS40 is also performed with FORTRAN programs. ADSIM and ADRIO are used to program the processing elements of the AD100 and its IOCP processor. STAPLE is used to program the Aptec DIP and DIA processors.

  11. Job-mix modeling and system analysis of an aerospace multiprocessor.

    NASA Technical Reports Server (NTRS)

    Mallach, E. G.

    1972-01-01

    An aerospace guidance computer organization, consisting of multiple processors and memory units attached to a central time-multiplexed data bus, is described. A job mix for this type of computer is obtained by analysis of Apollo mission programs. Multiprocessor performance is then analyzed using: 1) queuing theory, under certain 'limiting case' assumptions; 2) Markov process methods; and 3) system simulation. Results of the analyses indicate: 1) Markov process analysis is a useful and efficient predictor of simulation results; 2) efficient job execution is not seriously impaired even when the system is so overloaded that new jobs are inordinately delayed in starting; 3) job scheduling is significant in determining system performance; and 4) a system having many slow processors may or may not perform better than a system of equal power having few fast processors, but will not perform significantly worse.
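
    The queuing-theory "limiting case" view admits a compact illustration. The sketch below (with invented arrival and service rates, not the Apollo job-mix figures) evaluates an M/M/c model of c identical processors sharing one job stream via the Erlang-C formula:

```python
# Erlang-C probability that an arriving job must queue in an M/M/c system.
# lam/mu values are illustrative, not derived from the Apollo job mix.
from math import factorial

def erlang_c(c, lam, mu):
    a = lam / mu                          # offered load (Erlangs)
    rho = a / c                           # per-processor utilization, must be < 1
    s = sum(a**k / factorial(k) for k in range(c))
    top = a**c / (factorial(c) * (1 - rho))
    return top / (s + top)

lam, mu = 8.0, 1.0                        # arrivals per ms, service rate each
for c in (9, 12, 16):
    print(f"{c} processors: P(wait) = {erlang_c(c, lam, mu):.3f}")
```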

  12. Study on photochemical analysis system (VLES) for EUV lithography

    NASA Astrophysics Data System (ADS)

    Sekiguchi, A.; Kono, Y.; Kadoi, M.; Minami, Y.; Kozawa, T.; Tagawa, S.; Gustafson, D.; Blackborow, P.

    2007-03-01

    A system for photo-chemical analysis of EUV lithography processes has been developed. This system consists of three units: (1) an exposure unit that uses the Z-Pinch (Energetiq Tech.) EUV light source (DPP) to carry out a flood exposure, (2) a measurement system, RDA (Litho Tech Japan), for the development rate of photoresists, and (3) a simulation unit that utilizes PROLITH (KLA-Tencor) to calculate resist profiles and process latitude using the measured development-rate data. With this system, preliminary evaluation of the performance of EUV lithography can be performed without any lithography tool (stepper or scanner system) capable of imaging and alignment. Profiles for a 32 nm line-and-space pattern were simulated with VLES for an EUV resist (Posi-2 by TOK) that has sensitivity at the 13.5 nm wavelength. The simulation successfully predicts the resist behavior. Thus it is confirmed that the system enables efficient evaluation of the performance of EUV lithography processes.

  13. Striatal GABA-MRS predicts response inhibition performance and its cortical electrophysiological correlates.

    PubMed

    Quetscher, Clara; Yildiz, Ali; Dharmadhikari, Shalmali; Glaubitz, Benjamin; Schmidt-Wilcke, Tobias; Dydak, Ulrike; Beste, Christian

    2015-11-01

    Response inhibition processes are important for performance monitoring and are mediated via a network constituted by different cortical areas and basal ganglia nuclei. At the basal ganglia level, striatal GABAergic medium spiny neurons are known to be important for response selection, but the importance of the striatal GABAergic system for response inhibition processes remains elusive. Using a novel combination of behavioral, EEG, and magnetic resonance spectroscopy (MRS) data, we examine the relevance of the striatal GABAergic system for response inhibition processes. The study shows that striatal GABA levels modulate the efficacy of response inhibition processes: higher striatal GABA levels were related to better response inhibition performance. We show that striatal GABA modulates specific subprocesses of response inhibition related to pre-motor inhibitory processes through the modulation of neuronal synchronization processes. To our knowledge, this is the first study providing direct evidence for the relevance of the striatal GABAergic system for response inhibition functions and their cortical electrophysiological correlates in humans.

  14. Automatic and controlled processing in the corticocerebellar system.

    PubMed

    Ramnani, Narender

    2014-01-01

    During learning, performance changes often involve a transition from controlled processing in which performance is flexible and responsive to ongoing error feedback, but effortful and slow, to a state in which processing becomes swift and automatic. In this state, performance is unencumbered by the requirement to process feedback, but its insensitivity to feedback reduces its flexibility. Many properties of automatic processing are similar to those that one would expect of forward models, and many have suggested that these may be instantiated in cerebellar circuitry. Since hierarchically organized frontal lobe areas can both send and receive commands, I discuss the possibility that they can act both as controllers and controlled objects and that their behaviors can be independently modeled by forward models in cerebellar circuits. Since areas of the prefrontal cortex contribute to this hierarchically organized system and send outputs to the cerebellar cortex, I suggest that the cerebellum is likely to contribute to the automation of cognitive skills, and to the formation of habitual behavior which is resistant to error feedback. An important prerequisite to these ideas is that cerebellar circuitry should have access to higher order error feedback that signals the success or failure of cognitive processing. I have discussed the pathways through which such feedback could arrive via the inferior olive and the dopamine system. Cerebellar outputs inhibit both the inferior olive and the dopamine system. It is possible that learned representations in the cerebellum use this as a mechanism to suppress the processing of feedback in other parts of the nervous system. Thus, cerebellar processes that control automatic performance may be completed without triggering the engagement of controlled processes by prefrontal mechanisms. © 2014 Elsevier B.V. All rights reserved.

  15. Processing of on-board recorded data for quick analysis of aircraft performance. [rotor systems research aircraft

    NASA Technical Reports Server (NTRS)

    Michaud, N. H.

    1979-01-01

    A system of independent computer programs for the processing of digitized pulse code modulated (PCM) and frequency modulated (FM) data is described. Information is stored in a set of random files and accessed to produce both statistical and graphical output. The software system is designed primarily to present these reports within a twenty-four hour period for quick analysis of the helicopter's performance.

  16. Adapting Wave-front Algorithms to Efficiently Utilize Systems with Deep Communication Hierarchies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerbyson, Darren J.; Lang, Michael; Pakin, Scott

    2011-09-30

    Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance, especially in hybrid systems using accelerators. Processor cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contains wavefront processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors, in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy, but at the cost of additional steps in the parallel computation and higher use of on-chip communications. This tradeoff is explored using a performance model. An implementation using the Reverse-acceleration programming model on the petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.

  17. A Framework for Performing Verification and Validation in Reuse Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1997-01-01

    Verification and Validation (V&V) is currently performed during application development for many systems, especially safety-critical and mission- critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.

  18. A multiprocessing architecture for real-time monitoring

    NASA Technical Reports Server (NTRS)

    Schmidt, James L.; Kao, Simon M.; Read, Jackson Y.; Weitzenkamp, Scott M.; Laffey, Thomas J.

    1988-01-01

    A multitasking architecture for performing real-time monitoring and analysis using knowledge-based problem solving techniques is described. To handle asynchronous inputs and perform in real time, the system consists of three or more distributed processes which run concurrently and communicate via a message passing scheme. The Data Management Process acquires, compresses, and routes the incoming sensor data to other processes. The Inference Process consists of a high-performance inference engine that performs a real-time analysis on the state and health of the physical system. The I/O Process receives sensor data from the Data Management Process and status messages and recommendations from the Inference Process, updates its graphical displays in real time, and acts as the interface to the console operator. The distributed architecture has been interfaced to an actual spacecraft (NASA's Hubble Space Telescope) and is able to process the incoming telemetry in real time (i.e., several hundred data changes per second). The system is being used in two locations for different purposes: (1) in Sunnyvale, California, at the Space Telescope Test Control Center, it is used in the preflight testing of the vehicle; and (2) in Greenbelt, Maryland, at NASA/Goddard, it is being used on an experimental basis in flight operations for health and safety monitoring.
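
    The three-process split described (data management, inference, I/O) maps naturally onto message passing between operating-system processes. Below is a toy Python sketch of that structure with synthetic telemetry and a trivial limit check standing in for the inference engine; none of it reflects the actual HST ground-system code:

```python
# Toy three-role architecture: a data process feeds an inference process
# over queues; the parent process plays the I/O/display role. Telemetry and
# the "rule" are synthetic stand-ins.
import multiprocessing as mp
import random

def data_process(out_q):
    for t in range(50):
        out_q.put(("sensor_1", t, random.gauss(0.0, 1.0)))  # acquire + route
    out_q.put(None)                                         # end of stream

def inference_process(in_q, status_q):
    while (item := in_q.get()) is not None:
        name, t, value = item
        if abs(value) > 2.0:              # trivial stand-in for rule firing
            status_q.put(f"t={t}: {name} out of limits ({value:+.2f})")
    status_q.put(None)

if __name__ == "__main__":
    data_q, status_q = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=data_process, args=(data_q,)),
               mp.Process(target=inference_process, args=(data_q, status_q))]
    for p in workers:
        p.start()
    while (msg := status_q.get()) is not None:
        print(msg)                        # the I/O process's display role
    for p in workers:
        p.join()
```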

  19. Digital processing of mesoscale analysis and space sensor data

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.; Karitani, S.

    1985-01-01

    The mesoscale analysis and space sensor (MASS) data management and analysis system, implemented on the research computer system, is presented. The research computer system provides a wide range of capabilities for processing and displaying large volumes of conventional and satellite-derived meteorological data, and consists of three primary computers (HP-1000F, Harris/6, and Perkin-Elmer 3250), each of which performs a specific function according to its unique capabilities. The software, database management, and display capabilities of the research computer system are described in terms of providing a very effective interactive research tool for the digital processing of mesoscale analysis and space sensor data.

  20. Performance evaluation of the insurance companies based on AHP

    NASA Astrophysics Data System (ADS)

    Lu, Manhong; Zhu, Kunping

    2018-04-01

    With the entry of foreign capital, China's insurance industry is under increasing competitive pressure. The performance of a company is the external manifestation of its comprehensive strength; therefore, the establishment of a scientific evaluation system is of practical significance for insurance companies. In this paper, based on the financial and non-financial indicators of the companies, a performance evaluation system is constructed by means of the analytic hierarchy process (AHP). In this system, the weights of the indicators, which represent their impact on company performance, are calculated through AHP. The evaluation system helps companies recognize their own strengths and weaknesses, so as to take steps to enhance their core competitiveness.
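
    The core AHP computation is short enough to sketch. With an invented 3-indicator pairwise-comparison matrix (not the paper's data), the weights are the normalized principal eigenvector, and the consistency index follows from the principal eigenvalue:

```python
# AHP weight extraction from a pairwise-comparison matrix (values invented).
# A[i, j] = judged importance of indicator i relative to indicator j.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)                              # principal eigenvalue
w = vecs[:, k].real
w /= w.sum()                                          # normalized weights
ci = (vals[k].real - A.shape[0]) / (A.shape[0] - 1)   # consistency index
print("weights:", np.round(w, 3), "| CI =", round(float(ci), 3))
```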

  1. Building high-performance system for processing a daily large volume of Chinese satellites imagery

    NASA Astrophysics Data System (ADS)

    Deng, Huawu; Huang, Shicun; Wang, Qi; Pan, Zhiqiang; Xin, Yubin

    2014-10-01

    The number of Earth observation satellites from China has increased dramatically in recent years, and those satellites are acquiring a large volume of imagery daily. As the main portal for image processing and distribution from those Chinese satellites, the China Centre for Resources Satellite Data and Application (CRESDA) has been working with PCI Geomatics during the last three years to solve two issues in this regard: processing the large volume of data (about 1,500 scenes, or 1 TB, per day) in a timely manner and generating geometrically accurate orthorectified products. After three years of research and development, a high-performance system has been built and successfully delivered. The system has a service-oriented architecture and can be deployed to a cluster of computers that may be configured with high-end computing power. The high performance is gained through, first, parallelizing the image processing algorithms by using high-performance graphics processing unit (GPU) cards and multiple cores from multiple CPUs, and, second, distributing processing tasks across a cluster of computing nodes. While achieving up to thirty (and even more) times faster performance compared with traditional practice, a particular methodology was developed to improve the geometric accuracy of images acquired from Chinese satellites (including HJ-1 A/B, ZY-1-02C, ZY-3, GF-1, etc.). The methodology consists of fully automatic collection of dense ground control points (GCPs) from various resources and application of those points to improve the photogrammetric model of the images. The delivered system is up and running at CRESDA for pre-operational production and has generated a good return on investment by eliminating a great amount of manual labor and increasing daily data throughput more than tenfold with fewer operators. Future work, such as development of more performance-optimized algorithms, robust image matching methods, and application workflows, is identified to improve the system in the coming years.

  2. Evaluation of computing systems using functionals of a Stochastic process

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.; Wu, L. T.

    1980-01-01

    An intermediate model was used to represent the probabilistic nature of a total system at a level which is higher than the base model and thus closer to the performance variable. A class of intermediate models, generally referred to as functionals of a Markov process, was considered. A closed-form solution of performability was developed for the case where performance is identified with the minimum value of a functional.
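
    In notation commonly used for performability (assumed here, since the abstract does not spell it out), the construction reads: with base Markov process X_t and a reward rate f on its states, the performance variable is the minimum of the functional over the utilization interval, and performability is the probability of landing in a given accomplishment set B:

```latex
\[
  Y \;=\; \min_{0 \le t \le T} f\!\left(X_t\right),
  \qquad
  \mathrm{perf}(B) \;=\; \Pr\{\, Y \in B \,\}.
\]
```

    The closed-form solution mentioned above is for this min-of-a-functional case.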

  3. GPS-based system for satellite tracking and geodesy

    NASA Technical Reports Server (NTRS)

    Bertiger, Willy I.; Thornton, Catherine L.

    1989-01-01

    High-performance receivers and data processing systems developed for GPS are reviewed. The GPS Inferred Positioning System (GIPSY) and the Orbiter Analysis and Simulation Software (OASIS) are described. The OASIS software is used to assess GPS system performance using GIPSY for data processing. Consideration is given to parameter estimation for multiday arcs, orbit repeatability, orbit prediction, daily baseline repeatability, agreement with VLBI, and ambiguity resolution. Also, the dual-frequency Rogue receiver, which can track up to eight GPS satellites simultaneously, is discussed.

  4. Technology Insertion-Engineering Services Process Characterization. Task Order No. 1. Book 1 of 3. Database Documentation Book. OO-ALC MANPGP (Overview Layouts)

    DTIC Science & Technology

    1989-12-15

    [DTIC report-documentation-page residue omitted; performing organization: McDonnell Douglas Missile Systems Company, St. Louis, Missouri 63166-0516.] TASK ORDER NO. 1, PROCESS CHARACTERIZATION: The brake assembly subunit is responsible for the assembly of brakes. Brakes enter

  5. System simulation of direct-current speed regulation based on Simulink

    NASA Astrophysics Data System (ADS)

    Yang, Meiying

    2018-06-01

    Many production machines in modern industrial processes require smooth speed adjustment over a certain range, together with good steady-state and dynamic performance. A direct-current speed regulation system offers a wide speed range, small relative speed variation, good stability, large overload capacity, tolerance of frequent impact loads, and stepless rapid starting, braking, and reversing, so it can meet many special operating requirements in automated production processes. For these reasons, direct-current drives long dominated the field of high-performance drive technology.

  6. Identification, regression and validation of an image processing degradation model to assess the effects of aeromechanical turbulence due to installation aircraft

    NASA Astrophysics Data System (ADS)

    Miccoli, M.; Usai, A.; Tafuto, A.; Albertoni, A.; Togna, F.

    2016-10-01

    The propagation environment around airborne platforms may significantly degrade the performance of Electro-Optical (EO) self-protection systems installed onboard. To ensure a sufficient level of protection, it is necessary to understand which sensor/effector installation positions best guarantee that the aeromechanical turbulence generated by the engine exhausts and the rotor downwash does not interfere with the normal operation of the imaging systems. Since radiation propagation in turbulence is a hard-to-predict process, a high-level approach was proposed in which, instead of studying the medium under turbulence, the turbulence effects on the imaging systems' processing are assessed by means of an equivalent statistical model representation, allowing the definition of a Turbulence index to classify different turbulence intensities. Hence, a general measurement methodology for the degradation of imaging system performance in turbulence conditions was developed. The analysis of the performance degradation started by evaluating the effects of turbulences with a given index on the image processing chain (i.e., thresholding, blob analysis). The processing-in-turbulence (PIT) index is then derived by combining the effects of the given turbulence on the different image processing primitive functions. By evaluating the corresponding PIT index for a sufficient number of testing directions, it is possible to map the performance degradation around the aircraft installation for a generic imaging system, and to identify the best installation position for the sensors/effectors composing the EO self-protection suite.

  7. RDD-100 and the systems engineering process

    NASA Technical Reports Server (NTRS)

    Averill, Robert D.

    1994-01-01

    An effective systems engineering approach applied through the project life cycle can help Langley produce a better product. This paper demonstrates how an enhanced systems engineering process for in-house flight projects assures that each system will achieve its goals with quality performance and within planned budgets and schedules. This paper also describes how the systems engineering process can be used in combination with available software tools.

  8. Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks

    PubMed Central

    Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B.

    2012-01-01

    The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem. PMID:22649480

  9. Computational models of neuron-astrocyte interactions lead to improved efficacy in the performance of neural networks.

    PubMed

    Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B

    2012-01-01

    The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem.

  10. Design and Performance of the Astro-E/XRS Signal Processing System

    NASA Technical Reports Server (NTRS)

    Boyce, Kevin R.; Audley, M. D.; Baker, R. G.; Dumonthier, J. J.; Fujimoto, R.; Gendreau, K. C.; Ishisaki, Y.; Kelley, R. L.; Stahle, C. K.; Szymkowiak, A. E.

    1999-01-01

    We describe the signal processing system of the Astro-E XRS instrument. The Calorimeter Analog Processor (CAP) provides bias and power for the detectors and amplifies the detector signals by a factor of 20,000. The Calorimeter Digital Processor (CDP) performs the digital processing of the calorimeter signals, detecting X-ray pulses and analyzing them by optimal filtering. We describe the operation of pulse detection, pulse-height analysis, and risetime determination. We also discuss performance, including the three event grades (hi-res, mid-res, and low-res), anticoincidence detection, counting-rate dependence, and noise rejection.

  11. Examining High-Performing Education Systems in Terms of Teacher Training: Lessons Learnt for Low-Performers

    ERIC Educational Resources Information Center

    Çer, Erkan; Solak, Ekrem

    2018-01-01

    The quality of a teacher plays one of the most important roles in the achievement of an education system. Teacher training is a multi-dimensional process which comprises the selection of teacher candidates, pre-service training, appointment, in-service training and teaching practices. Therefore, this study focuses on teacher training processes in…

  12. An Integrated Approach for Conducting a Behavioral Systems Analysis

    ERIC Educational Resources Information Center

    Diener, Lori H.; McGee, Heather M.; Miguel, Caio F.

    2009-01-01

    The aim of this paper is to illustrate how to conduct a Behavioral Systems Analysis (BSA) to aid in the design of targeted performance improvement interventions. BSA is a continuous process of analyzing the right variables to the right extent to aid in planning and managing performance at the organization, process, and job levels. BSA helps to…

  13. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions.

    PubMed

    Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A

    2015-09-21

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.

  14. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Barrett, Harrison H.; Frey, Eric C.; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A.

    2015-09-01

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.
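
    The measurement/null decomposition described in the two records above can be illustrated on a discretized operator. The sketch below is an assumption-level analogue, not the paper's continuous-to-continuous formalism: it uses the SVD of a random matrix H to split an object into a component the system measures and a component it cannot see.

      import numpy as np

      rng = np.random.default_rng(0)

      # Discretized imaging operator: m measurements of an n-pixel object,
      # with m < n so that a nontrivial null space exists (all assumed).
      m, n = 40, 100
      H = rng.normal(size=(m, n))

      U, s, Vt = np.linalg.svd(H, full_matrices=True)
      r = int(np.sum(s > 1e-10))              # numerical rank
      V = Vt.T

      f = rng.normal(size=n)                  # object being imaged
      f_meas = V[:, :r] @ (V[:, :r].T @ f)    # measurement component
      f_null = f - f_meas                     # null component, invisible to H

      print(np.linalg.norm(H @ f_null))       # ~0: H cannot see f_null
      print(np.allclose(H @ f, H @ f_meas))   # True: data come from f_meas only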

  15. A service based adaptive U-learning system using UX.

    PubMed

    Jeong, Hwa-Young; Yi, Gangman

    2014-01-01

    In recent years, traditional development techniques for e-learning systems have been changing to become more convenient and efficient. One new technology in the development of application systems includes both cloud and ubiquitous computing. Cloud computing can support learning system processes by using services, while ubiquitous computing can provide system operation and management via a high-performance technical process and network. In the cloud computing environment, a learning service application can provide a business module or process to the user via the internet. This research focuses on providing the learning material and processes of courses by learning units, using services in a ubiquitous computing environment. We also investigate functions that supply materials tailored to each user's learning style; that is, we analyzed users' data and characteristics in accordance with their user experience, and subsequently adapted the learning process to fit their learning performance and preferences. Finally, we demonstrate that the proposed system delivers better learning effects to learners than existing techniques.

  16. A Service Based Adaptive U-Learning System Using UX

    PubMed Central

    Jeong, Hwa-Young

    2014-01-01

    In recent years, traditional development techniques for e-learning systems have been changing to become more convenient and efficient. One new technology in the development of application systems includes both cloud and ubiquitous computing. Cloud computing can support learning system processes by using services, while ubiquitous computing can provide system operation and management via a high-performance technical process and network. In the cloud computing environment, a learning service application can provide a business module or process to the user via the internet. This research focuses on providing the learning material and processes of courses by learning units, using services in a ubiquitous computing environment. We also investigate functions that supply materials tailored to each user's learning style; that is, we analyzed users' data and characteristics in accordance with their user experience, and subsequently adapted the learning process to fit their learning performance and preferences. Finally, we demonstrate that the proposed system delivers better learning effects to learners than existing techniques. PMID:25147832

  17. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units.

    PubMed

    Rath, N; Kato, S; Levesque, J P; Mauel, M E; Navratil, G A; Peng, Q

    2014-04-01

    Fast digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general-purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing was done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.
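
    For a sense of the processing structure (though not the GPU implementation itself), the sketch below emulates the streaming acquire-filter-output loop of such a system in plain Python/numpy and measures per-block processing latency; the block size, channel count, and FIR filter are illustrative assumptions.

      import time
      import numpy as np

      def fir_block(state, x, taps):
          """Filter one block of streaming samples, carrying len(taps)-1
          samples of history between blocks (stand-in for the GPU kernel)."""
          buf = np.concatenate([state, x])
          return buf[-(len(taps) - 1):], np.convolve(buf, taps, mode="valid")

      taps = np.ones(8) / 8.0                    # simple moving-average filter
      state = np.zeros(len(taps) - 1)
      rng = np.random.default_rng(0)

      latencies = []
      for _ in range(1000):                      # emulate the acquisition loop
          x = rng.normal(size=40)                # one block of forty samples
          t0 = time.perf_counter()
          state, y = fir_block(state, x, taps)
          latencies.append(time.perf_counter() - t0)

      print(f"median per-block latency: {np.median(latencies) * 1e6:.1f} us")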

  18. A Framework for Performing V&V within Reuse-Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1996-01-01

    Verification and validation (V&V) is performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. Early discovery is important in order to minimize the cost and other impacts of correcting these errors. In order to provide early detection of errors, V&V is conducted in parallel with system development, often beginning with the concept phase. In reuse-based software engineering, however, decisions on the requirements, design and even implementation of domain assets can be made prior to beginning development of a specific system. In this case, V&V must be performed during domain engineering in order to have an impact on system development. This paper describes a framework for performing V&V within architecture-centric, reuse-based software engineering. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.

  19. Phased models for evaluating the performability of computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.; Meyer, J. F.

    1979-01-01

    A phase-by-phase modelling technique is introduced to evaluate a fault tolerant system's ability to execute different sets of computational tasks during different phases of the control process. Intraphase processes are allowed to differ from phase to phase. The probabilities of interphase state transitions are specified by interphase transition matrices. Based on constraints imposed on the intraphase and interphase transition probabilities, various iterative solution methods are developed for calculating system performability.
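
    A worked numpy example of the phase-by-phase calculation the record outlines: a state-probability vector is propagated through each phase's intraphase transition matrix and mapped across the phase boundary by an interphase transition matrix. The three-state model and all probabilities below are assumed for illustration.

      import numpy as np

      # States: (ok, degraded, failed); all probabilities below are assumed.
      p = np.array([1.0, 0.0, 0.0])                 # start fully operational

      P_phase1 = np.array([[0.98, 0.015, 0.005],    # intraphase transitions,
                           [0.00, 0.970, 0.030],    # one matrix per phase
                           [0.00, 0.000, 1.000]])
      P_phase2 = np.array([[0.95, 0.030, 0.020],
                           [0.00, 0.940, 0.060],
                           [0.00, 0.000, 1.000]])

      # Interphase matrix: reconfiguration at the boundary recovers a
      # degraded system with probability 0.5 (assumed).
      H_12 = np.array([[1.0, 0.0, 0.0],
                       [0.5, 0.5, 0.0],
                       [0.0, 0.0, 1.0]])

      p = p @ np.linalg.matrix_power(P_phase1, 10)  # 10 steps in phase 1
      p = p @ H_12                                  # phase boundary
      p = p @ np.linalg.matrix_power(P_phase2, 5)   # 5 steps in phase 2

      print(dict(zip(["ok", "degraded", "failed"], p.round(4))))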

  20. Probing sensorimotor integration during musical performance.

    PubMed

    Furuya, Shinichi; Furukawa, Yuta; Uehara, Kazumasa; Oku, Takanori

    2018-03-10

    An integration of afferent sensory information from the visual, auditory, and proprioceptive systems into the execution and updating of motor programs plays a crucial role in the control and acquisition of the skillful sequential movements of musical performance. However, the conventional behavioral and neurophysiological techniques that have been applied to study simple motor behaviors are of limited use for elucidating the online sensorimotor integration processes underlying skillful musical performance. Here, we propose two novel techniques developed to investigate the roles of auditory and proprioceptive feedback in piano performance. First, a closed-loop noninvasive brain stimulation system consisting of transcranial magnetic stimulation, a motion sensor, and a microcomputer enabled assessment of the time-varying cortical processes subserving auditory-motor integration during piano playing. Second, a force-field system capable of manipulating the weight of a piano key allowed movement adaptation to be characterized based on the feedback obtained, which can shed light on the formation of an internal representation of the piano. Results of neurophysiological and psychophysics experiments provided evidence validating these systems as effective means for disentangling the computational and neural processes of sensorimotor integration in musical performance. © 2018 New York Academy of Sciences.

  1. lean-ISD.

    ERIC Educational Resources Information Center

    Wallace, Guy W.

    2001-01-01

    Explains lean instructional systems design/development (ISD) as it relates to curriculum architecture design, based on Japan's lean production system. Discusses performance-based systems; ISD models; processes for organizational training and development; curriculum architecture to support job performance; and modular curriculum development. (LRW)

  2. Reading comprehension and working memory in learning-disabled readers: Is the phonological loop more important than the executive system?

    PubMed

    Swanson, H L

    1999-01-01

    This investigation explores the contribution of two working memory systems (the articulatory loop and the central executive) to the performance differences between learning-disabled (LD) and skilled readers. Performances of LD, chronological age (CA) matched, and reading level (RL) matched children were compared on measures of phonological processing accuracy and speed (articulatory system), long-term memory (LTM) accuracy and speed, and executive processing. The results indicated that (a) LD readers were inferior on measures of articulatory, LTM, and executive processing; (b) LD readers were superior to RL-matched readers on measures of executive processing, but were comparable to RL-matched readers on measures of the articulatory and LTM systems; (c) executive processing differences remained significant between LD and CA-matched children when measures of reading comprehension, articulatory processes, and LTM processes were partialed from the analysis; and (d) executive processing contributed significant variance to reading comprehension when measures of the articulatory and LTM systems were entered into a hierarchical regression model. In summary, LD readers experience constraints in the articulatory and LTM systems, but these constraints mediate only some of the influence of executive processing on reading comprehension. Further, LD readers suffer executive processing problems that are not specific to their reading comprehension problems. Copyright 1999 Academic Press.

  3. Damage modeling and statistical analysis of optics damage performance in MJ-class laser systems.

    PubMed

    Liao, Zhi M; Raymond, B; Gaylord, J; Fallejo, R; Bude, J; Wegner, P

    2014-11-17

    Modeling the lifetime of a fused silica optic is described for a multiple beam, MJ-class laser system. This entails combining optic processing data along with laser shot data to account for complete history of optic processing and shot exposure. Integrating with online inspection data allows for the construction of a performance metric to describe how an optic performs with respect to the model. This methodology helps to validate the damage model as well as allows strategic planning and identifying potential hidden parameters that are affecting the optic's performance.

  4. Miss-distance indicator for tank main guns

    NASA Astrophysics Data System (ADS)

    Bornstein, Jonathan A.; Hillis, David B.

    1996-06-01

    Tank main gun systems must possess extremely high levels of accuracy to perform successfully in battle. Under some circumstances, the first round fired in an engagement may miss the intended target, and it becomes necessary to rapidly correct fire. A breadboard automatic miss-distance indicator system was previously developed to assist in this process. The system, which would be mounted on a 'wingman' tank, consists of a charge-coupled device (CCD) camera and computer-based image-processing system, coupled with a separate infrared sensor to detect muzzle flash. For the system to be successfully employed with current-generation tanks, it must be reliable, relatively low cost, and fast enough to maintain current firing rates. Recently, the original indicator system was developed further in an effort to achieve these goals. Efforts have focused primarily upon enhanced image-processing algorithms, both to improve system reliability and to reduce processing requirements. Intelligent application of newly refined trajectory models has permitted examination of reduced areas of interest and enhanced rejection of false alarms, significantly improving system performance.

  5. David Florida Laboratory Thermal Vacuum Data Processing System

    NASA Technical Reports Server (NTRS)

    Choueiry, Elie

    1994-01-01

    During 1991, the Space Simulation Facility conducted a survey to assess the requirements and analyze the merits for purchasing a new thermal vacuum data processing system for its facilities. A new, integrated, cost effective PC-based system was purchased which uses commercial off-the-shelf software for operation and control. This system can be easily reconfigured and allows its users to access a local area network. In addition, it provides superior performance compared to that of the former system which used an outdated mini-computer and peripheral hardware. This paper provides essential background on the old data processing system's features, capabilities, and the performance criteria that drove the genesis of its successor. This paper concludes with a detailed discussion of the thermal vacuum data processing system's components, features, and its important role in supporting our space-simulation environment and our capabilities for spacecraft testing. The new system was tested during the ANIK E spacecraft test, and was fully operational in November 1991.

  6. Adapting wave-front algorithms to efficiently utilize systems with deep communication hierarchies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerbyson, Darren J; Lang, Michael; Pakin, Scott

    2009-01-01

    Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance. Processor-cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contain wave-front processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy but at the cost of additional computation and higher use of on-chip communications. This tradeoff is explored using a performance model, and an implementation on the Petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.
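
    As a minimal illustration of wave-front ordering (not the Roadrunner code), the sketch below sweeps a 2D grid by anti-diagonals: each cell depends on its north and west neighbors, so all cells on one diagonal are mutually independent and could be processed in parallel. The hierarchical variant in the record groups cells into blocks so that slower inter-node messages occur only at block boundaries.

      import numpy as np

      n = 6
      cost = np.ones((n, n))           # assumed local work per grid cell
      finish = np.zeros((n, n))        # completion "time" of each cell

      # Sweep anti-diagonals: every cell on a diagonal depends only on its
      # north and west neighbors, so the diagonal is one parallel step.
      for d in range(2 * n - 1):
          for i in range(max(0, d - n + 1), min(d + 1, n)):
              j = d - i
              north = finish[i - 1, j] if i > 0 else 0.0
              west = finish[i, j - 1] if j > 0 else 0.0
              finish[i, j] = cost[i, j] + max(north, west)

      print(finish[-1, -1])            # 2n - 1 parallel steps for an n x n grid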

  7. Performance of redundant disk array organizations in transaction processing environments

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1993-01-01

    A performance evaluation is conducted for two redundant disk-array organizations in a transaction-processing environment, relative to the performance of both mirrored disk organizations and organizations using neither striping nor redundancy. The proposed parity-striping alternative to striping with rotated parity is shown to furnish rapid recovery from failure at the same low storage cost without interleaving the data over multiple disks. Both noncached systems and systems using a nonvolatile cache as the controller are considered.

  8. Due Process in Appraisal: A Quasi-Experiment in Procedural Justice.

    ERIC Educational Resources Information Center

    Taylor, M. Susan; And Others

    1995-01-01

    Extended research on procedural justice by examining effects of a due-process performance-appraisal system on (government) employees' and managers' reactions. Employee-management pairs were randomly assigned to either a due-process appraisal system or the existing one. Although due-process employees received lower evaluations, both employees and…

  9. Automated sleep stage detection with a classical and a neural learning algorithm--methodological aspects.

    PubMed

    Schwaibold, M; Schöchlin, J; Bolz, A

    2002-01-01

    For classification tasks in biosignal processing, several strategies and algorithms can be used. Knowledge-based systems allow prior knowledge about the decision process to be integrated, both by the developer and by self-learning capabilities. For the classification stages in a sleep stage detection framework, three inference strategies were compared regarding their specific strengths: a classical signal processing approach, artificial neural networks and neuro-fuzzy systems. Methodological aspects were assessed to attain optimum performance and maximum transparency for the user. Due to their effective and robust learning behavior, artificial neural networks could be recommended for pattern recognition, while neuro-fuzzy systems performed best for the processing of contextual information.

  10. Proposal for an astronaut mass measurement device for the Space Shuttle

    NASA Technical Reports Server (NTRS)

    Beyer, Neil; Lomme, Jon; Mccollough, Holly; Price, Bradford; Weber, Heidi

    1994-01-01

    For medical reasons, astronauts in space need to have their mass measured. Currently, this measurement is performed using a mass-spring system. The current system is large, inaccurate, and uncomfortable for the astronauts. NASA is looking for new, different, and preferably better ways to perform this measurement process. After careful analysis, our design team decided on a linear acceleration process. Within the process, four possible concept variants are put forth. Among these four variants, one is suggested over the others: a motor-winch system to linearly accelerate the astronaut. From acceleration and force measurements of the process, combined with Newton's second law, the mass of an astronaut can be calculated.
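
    A worked example, with assumed numbers, of the mass calculation behind the suggested motor-winch variant: measure the applied force and the resulting linear acceleration, then apply Newton's second law.

      # Newton's second law: F = m * a  =>  m = F / a.
      force_n = 120.0        # assumed measured cable tension, in newtons
      accel_ms2 = 1.5        # assumed measured linear acceleration, in m/s^2

      mass_kg = force_n / accel_ms2
      print(f"estimated astronaut mass: {mass_kg:.1f} kg")   # 80.0 kg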

  11. Power processing for electric propulsion

    NASA Technical Reports Server (NTRS)

    Finke, R. C.; Herron, B. G.; Gant, G. D.

    1975-01-01

    The inclusion of electric thruster systems in spacecraft design is considered. The propulsion requirements of such spacecraft dictate a wide range of thruster power levels and operational lifetimes, which must be matched by lightweight, efficient, and reliable thruster power processing systems. Electron bombardment ion thruster requirements are presented, and the performance characteristics of present power processing systems are reviewed. Design philosophies and alternatives in areas such as inverter type, arc protection, and control methods are discussed, along with future performance potentials for meeting goals in the areas of power processor weight (10 kg/kW), efficiency (approaching 92 percent), reliability (0.96 for 15,000 hr), and thermal control capability (0.3 to 5 AU).

  12. An open system approach to process reengineering in a healthcare operational environment.

    PubMed

    Czuchry, A J; Yasin, M M; Norris, J

    2000-01-01

    The objective of this study is to examine the applicability of process reengineering in a healthcare operational environment. The intake process of a mental healthcare service delivery system is analyzed systematically to identify process-related problems. A methodology combining an open system orientation with process reengineering is then used to overcome the operational and patient-related problems associated with the pre-reengineered intake process. The systematic redesign of the intake process resulted in performance improvements in terms of cost, quality, service, and timing.

  13. Intelligent robotic tracker

    NASA Technical Reports Server (NTRS)

    Otaguro, W. S.; Kesler, L. O.; Land, K. C.; Rhoades, D. E.

    1987-01-01

    An intelligent tracker capable of robotic applications requiring guidance and control of platforms, robotic arms, and end effectors has been developed. This packaged system, capable of supervised autonomous robotic functions, is partitioned into a multiple-processor/parallel-processing configuration. The system currently interfaces to cameras but can also use three-dimensional inputs from scanning laser rangers. The inputs are fed into an image processing and tracking section where the camera inputs are conditioned for the multiple tracker algorithms. An executive section monitors the image processing and tracker outputs and performs all the control and decision processes. The present architecture of the system is presented with discussion of its evolutionary growth for space applications. An autonomous rendezvous demonstration of this system was performed last year; more realistic demonstrations now being planned are also discussed.

  14. Power processing systems for ion thrusters.

    NASA Technical Reports Server (NTRS)

    Herron, B. G.; Garth, D. R.; Finke, R. C.; Shumaker, H. A.

    1972-01-01

    The proposed use of ion thrusters to fulfill various communication satellite propulsion functions such as east-west and north-south stationkeeping, attitude control, station relocation and orbit raising, naturally leads to the requirement for lightweight, efficient and reliable thruster power processing systems. Collectively, the propulsion requirements dictate a wide range of thruster power levels and operational lifetimes, which must be matched by the power processing. This paper will discuss the status of such power processing systems, present system design alternatives and project expected near future power system performance.

  15. Cholinergic control of attention to cues guiding established performance versus learning: theoretical comment on Maddux, Kerfoot, Chatterjee, and Holland (2007).

    PubMed

    Sarter, Martin

    2007-02-01

    Previous views on the cognitive functions of the basal forebrain cholinergic system often suggested that this neuromodulator system influences fundamental attentional processes but not learning. The results from an elegant series of studies by J. M. Maddux, E. C. Kerfoot, S. Chatterjee, and P. Holland reveal the intricate relationships between the levels of attentional processing of stimuli and the rate of learning about such stimuli. Moreover, their results indicate a double dissociation between the role of prefrontal and posterior parietal cholinergic inputs, respectively, in attentional performance and the learning rate of stimuli that command different levels of attentional processing. Separate yet interacting components of the cortical cholinergic input system modulate the attentional processing of cues that guide well-practiced performance or that serve as conditioned stimuli during learning. Copyright (c) 2007 APA, all rights reserved.

  16. Comparing an FPGA to a Cell for an Image Processing Application

    NASA Astrophysics Data System (ADS)

    Rakvic, Ryan N.; Ngo, Hau; Broussard, Randy P.; Ives, Robert W.

    2010-12-01

    Modern advancements in configurable hardware, most notably Field-Programmable Gate Arrays (FPGAs), have provided an exciting opportunity to discover the parallel nature of modern image processing algorithms. On the other hand, PlayStation 3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to execute complex image processing algorithms with high performance. In this research project, our aim is to study the differences in performance of a modern image processing algorithm on these two hardware platforms. In particular, iris recognition systems have recently become an attractive identification method because of their extremely high accuracy. Iris matching, a repeatedly executed portion of a modern iris recognition algorithm, is parallelized on an FPGA system and a Cell processor. We demonstrate a 2.5 times speedup of the parallelized algorithm on the FPGA system when compared to a Cell processor-based version.

  17. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    PubMed

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  18. Annual ADP planning document

    NASA Technical Reports Server (NTRS)

    Mogilevsky, M.

    1973-01-01

    The Category A computer systems at KSC (A1 and A2) which perform scientific and business/administrative operations are described. This data division is responsible for scientific requirements supporting Saturn, Atlas/Centaur, Titan/Centaur, Titan III, and Delta vehicles, and includes real-time functions, the Apollo-Soyuz Test Project (ASTP), and the Space Shuttle. The work is performed chiefly on the GE-635 (A1) system located in the Central Instrumentation Facility (CIF). The A1 system can perform computations and process data in three modes: (1) real-time critical mode; (2) real-time batch mode; and (3) batch mode. The Division's IBM-360/50 (A2) system, also at the CIF, performs business/administrative data processing such as personnel, procurement, reliability, financial management and payroll, real-time inventory management, GSE accounting, preventive maintenance, and integrated launch vehicle modification status.

  19. Multispectral scanner system for ERTS: Four-band scanner system. Volume 1: System description and performance

    NASA Technical Reports Server (NTRS)

    Norwood, V. T.; Fermelia, L. R.; Tadler, G. A.

    1972-01-01

    The four-band Multispectral Scanner System (MSS) is discussed. Included is a description of the MSS with major emphasis on the flight subsystem (scanner and multiplexer), the theory for the MSS calibration system processing techniques, system calibration data, and a summary of the performance of the two four-band MSS systems.

  20. The Relationship of Learning and Performance Diagnosis at Different System Levels.

    ERIC Educational Resources Information Center

    Lubega, Khalid

    2003-01-01

    Examines learning and performance diagnosis, separately and in relation to each other, as they function in organization systems; explains the relationship between learning and performance diagnosis at the individual, process, and organizational levels using a three-level performance model; and discusses types of learning, including nonlearning,…

  1. Economic Evaluation of a Hybrid Desalination System Combining Forward and Reverse Osmosis

    PubMed Central

    Choi, Yongjun; Cho, Hyeongrak; Shin, Yonghyun; Jang, Yongsun; Lee, Sangho

    2015-01-01

    This study seeks to evaluate the performance and economic feasibility of the forward osmosis (FO)–reverse osmosis (RO) hybrid process, and to propose a guideline by which this hybrid process might become more price-competitive in the field. A solution-diffusion model modified with film theory was applied to analyze the effects of concentration polarization and of the water and salt transport coefficients on the flux and recovery of the FO process of an FO-RO hybrid system treating seawater and treated wastewater. A simple cost model was applied to analyze the effects of the flux and recovery of the FO process, and of the energy and membrane costs, on the FO-RO hybrid process. The simulation results showed that the water transport coefficient and the internal concentration polarization resistance are very important factors affecting performance of the FO process; however, the effect of the salt transport coefficient does not appear to be large. It was also found that the flux and recovery of the FO process, the FO membrane cost, and the electricity cost are very important factors influencing the water cost of an FO-RO hybrid system. This hybrid system can be price-competitive with RO systems when its recovery rate is very high, the flux and membrane cost of the FO are similar to those of the RO, and electricity is expensive. The most important step in commercializing the FO process is enhancing performance (e.g., the flux and recovery of FO membranes). PMID:26729176
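
    To illustrate the kind of solution-diffusion/film-theory flux calculation the record describes, the sketch below solves a commonly used implicit FO water-flux equation with internal concentration polarization by damped fixed-point iteration. The equation form and every parameter value are assumptions for illustration, not the paper's calibrated model.

      from math import exp

      def fo_water_flux(A, pi_draw, pi_feed, K, k, tol=1e-10):
          """Solve the implicit FO flux equation (assumed form)
              Jw = A * (pi_draw * exp(-Jw * K) - pi_feed * exp(Jw / k))
          by damped fixed-point iteration. exp(-Jw*K) models internal
          concentration polarization; exp(Jw/k) models external (film) CP."""
          Jw = 1e-6                                  # initial guess, m/s
          for _ in range(200):
              Jw_new = A * (pi_draw * exp(-Jw * K) - pi_feed * exp(Jw / k))
              if abs(Jw_new - Jw) < tol:
                  break
              Jw = 0.5 * (Jw + Jw_new)               # damping for stability
          return Jw

      # Assumed illustrative parameters (SI units):
      A = 1.0e-12         # water transport coefficient, m/(s*Pa)
      pi_draw = 2.5e6     # osmotic pressure of seawater draw, Pa
      pi_feed = 1.0e5     # osmotic pressure of treated-wastewater feed, Pa
      K = 2.0e5           # internal CP resistance, s/m
      k = 2.0e-5          # external mass-transfer coefficient, m/s

      Jw = fo_water_flux(A, pi_draw, pi_feed, K, k)
      print(f"water flux: {Jw * 3.6e6:.2f} L/(m^2*h)")   # m/s -> LMH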

  2. Utilization of optical emission endpoint in photomask dry etch processing

    NASA Astrophysics Data System (ADS)

    Faure, Thomas B.; Huynh, Cuc; Lercel, Michael J.; Smith, Adam; Wagner, Thomas

    2002-03-01

    Use of accurate and repeatable endpoint detection during dry etch processing of photomasks is very important for obtaining good mask mean-to-target and CD uniformity performance. It was found that the typical laser reflectivity endpoint detection system used on photomask dry etch systems had several key limitations that caused unnecessary scrap and non-optimum image size performance. Consequently, work was performed to develop and implement a more robust optical emission endpoint detection system for chrome dry etch processing of photomasks. Initial feasibility studies showed that the emission technique was sensitive enough to monitor pattern loadings on contact and via level masks down to 3 percent pattern coverage. Additional work was performed to further improve this to 1 percent pattern coverage by optimizing the endpoint detection parameters. Comparison studies of mask mean-to-target performance and CD uniformity were performed with the use of optical emission endpoint versus laser endpoint for masks built using TOK IP3600 and ZEP 7000 resist systems. It was found that an improvement in mean-to-target performance and CD uniformity was realized on several types of production masks. In addition, part-to-part endpoint time repeatability was found to be significantly improved with the use of optical emission endpoint.

  3. IEC 61511 and the capital project process--a protective management system approach.

    PubMed

    Summers, Angela E

    2006-03-17

    This year, the process industry has reached an important milestone in process safety: the acceptance of an internationally recognized standard for safety instrumented systems (SIS). This standard, IEC 61511, documents good engineering practice for the assessment, design, operation, maintenance, and management of SISs. The foundation of the standard is established by several requirements in Part 1, Clauses 5-7, which cover the development of a management system aimed at ensuring that functional safety is achieved. The management system includes a quality assurance process for the entire SIS lifecycle, requiring the development of procedures, identification of resources, and acquisition of tools. For maximum benefit, the deliverables and quality control checks required by the standard should be integrated into the capital project process, addressing safety, environmental, plant productivity, and asset protection. Industry has become inundated with a multitude of programs focusing on safety, quality, and cost performance. This paper introduces a protective management system, which builds upon the work process identified in IEC 61511. Typical capital project phases are integrated with the management system to yield one comprehensive program to efficiently manage process risk. Finally, the paper highlights areas where internal practices or guidelines should be developed to improve program performance and cost effectiveness.

  4. Integrated command, control, communication and computation system design study. Summary of tasks performed

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A summary of tasks performed on an integrated command, control, communication, and computation system design study is given. The Tracking and Data Relay Satellite System command and control system study, an automated real-time operations study, and image processing work are discussed.

  5. Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Wucherl; Koo, Michelle; Cao, Yu

    Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance involving terabytes or petabytes of workflow data or measurement data of the executions, from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug the performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework, using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods on the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from the genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.

  6. Information theoretic analysis of linear shift-invariant edge-detection operators

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2012-06-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influence of the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end, information-theory-based system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches its maximum possible value. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.
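
    The record's figure of merit, the information rate from scene to image, can be sketched for a one-dimensional stand-in of the gathering channel: given a scene power spectrum S(ν), a gathering transfer function T(ν), and an additive noise level N, the Shannon rate integrates (1/2) log2(1 + |T|^2 S / N) over spatial frequency. All spectra and constants below are assumptions for illustration.

      import numpy as np

      nu = np.linspace(-0.5, 0.5, 2048)          # spatial frequency, cycles/sample
      dnu = nu[1] - nu[0]

      S = 1.0 / (1.0 + (nu / 0.1) ** 2)          # assumed scene power spectrum
      T = np.exp(-0.5 * (nu / 0.2) ** 2)         # assumed gathering transfer function
      N = 1e-3                                   # assumed additive-noise level

      # Shannon information rate (bits per sample) through the gathering channel:
      rate = 0.5 * np.sum(np.log2(1.0 + (np.abs(T) ** 2) * S / N)) * dnu
      print(f"information rate: {rate:.2f} bits/sample")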

  7. Computer systems performance measurement techniques.

    DOT National Transportation Integrated Search

    1971-06-01

    Computer system performance measurement techniques, tools, and approaches are presented as a foundation for future recommendations regarding the instrumentation of the ARTS ATC data processing subsystem for purposes of measurement and evaluation.

  8. Advantages of isofocal printing in maskmaking with the ALTA 3500

    NASA Astrophysics Data System (ADS)

    Fuller, Scott E.; Pochkowski, Mike

    1999-04-01

    The ALTA 3500, an advanced scanned-laser mask lithography tool produced by Etec, was introduced to the marketplace in 1997. The system architecture was described and an initial performance evaluation presented at that time. This system, based on the ALTA 3000 system, uses a new 33.3X, 0.8 NA final reduction lens to reduce the spot size to 0.27 micrometers FWHM, thereby affording improved resolution and pattern acuity on the mask. An anisotropic chrome etch process was developed and introduced along with a TOK iP3600 resist to take advantage of the improved resolution. In this paper we describe more extensively the performance of the ALTA 3500 scanned-laser system and the performance of these new processes. In addition, the benefits of operating in the optimal isofocal print region are examined and compared to printing at the nominal process conditions.

  9. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF).

    PubMed

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2012-02-23

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel-independent; that is, the operation on each pixel is the same, and the operation on one pixel does not depend upon the result of the operation on another, allowing the entire image to be processed in parallel. GPU hardware was developed for exactly this kind of massively parallel processing. Thus, for an algorithm with a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat-field correction, temporal filtering, image subtraction, roadmap mask generation, and display windowing and leveling. A comparison between the previous and the upgraded versions of CAPIDS is presented to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements in timing and frame rate have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure, and automatic image windowing and leveling during each frame.
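
    Two of the listed upgrades, flat-field correction and temporal filtering, are easy to sketch precisely because they are pixel-independent, which is what makes them GPU-friendly: the same arithmetic is applied to every pixel with no cross-pixel dependencies. The numpy version below is an illustrative stand-in with assumed formulas and constants, not the CAPIDS code.

      import numpy as np

      def flat_field(raw, dark, flat):
          """Flat-field correction: subtract the dark frame and divide out
          per-pixel gain (a standard formulation, assumed here)."""
          gain = np.mean(flat - dark) / (flat - dark)
          return (raw - dark) * gain

      def temporal_filter(prev, frame, alpha=0.25):
          """Recursive (exponential) temporal filter for noise reduction."""
          return alpha * frame + (1.0 - alpha) * prev

      rng = np.random.default_rng(0)
      dark = 10.0 + rng.normal(0.0, 0.5, (64, 64))    # detector offset frame
      flat = 110.0 + rng.normal(0.0, 2.0, (64, 64))   # uniform-exposure frame

      filtered = np.zeros((64, 64))
      for _ in range(30):                             # a short image sequence
          raw = 60.0 + rng.normal(0.0, 5.0, (64, 64))
          filtered = temporal_filter(filtered, flat_field(raw, dark, flat))
      print(f"mean of filtered frame: {filtered.mean():.2f}")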

  10. The influence of drilling process automation on improvement of blasting works quality in open pit mining

    NASA Astrophysics Data System (ADS)

    Bodlak, Maciej; Dmytryk, Dominik; Mertuszka, Piotr; Szumny, Marcin; Tomkiewicz, Grzegorz

    2018-01-01

    The article describes a monitoring system for the blasthole drilling process, called HNS (Hole Navigation System), which was used in blasting works performed by Maxam Poland Ltd. Developed by Atlas Copco, the HNS system uses satellite data to allow very accurate mapping of the designed grid of blastholes. The article presents the results of several measurements of ground vibrations triggered by blasting that was designed and performed both with traditional technology and with the HNS system, and offers first observations on the matter.

  11. GPU real-time processing in NA62 trigger system

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.

    2017-01-01

    A commercial Graphics Processing Unit (GPU) is used to build a fast Level 0 (L0) trigger system tested parasitically with the TDAQ (Trigger and Data Acquisition) system of the NA62 experiment at CERN. In particular, the parallel computing power of the GPU is exploited to perform real-time fitting in the Ring Imaging CHerenkov (RICH) detector. Direct GPU communication using an FPGA-based board has been used to reduce the data transmission latency. The performance of the system for multi-ring reconstruction obtained during the NA62 physics run will be presented.
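
    The per-event ring fitting that the GPU parallelizes can be illustrated with a single-ring algebraic least-squares circle fit (the Kasa method); the experiment's actual multi-ring algorithm differs, and the hit data below are simulated assumptions.

      import numpy as np

      def kasa_circle_fit(x, y):
          """Algebraic least-squares circle fit (Kasa method): solve
          x^2 + y^2 = 2a*x + 2b*y + c, then radius = sqrt(c + a^2 + b^2)."""
          M = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
          rhs = x ** 2 + y ** 2
          (a, b, c), *_ = np.linalg.lstsq(M, rhs, rcond=None)
          return a, b, np.sqrt(c + a ** 2 + b ** 2)

      # Simulated Cherenkov-like ring: hits on a circle plus position noise.
      rng = np.random.default_rng(0)
      theta = rng.uniform(0.0, 2.0 * np.pi, 20)
      x = 3.0 + 5.0 * np.cos(theta) + rng.normal(0.0, 0.1, 20)
      y = -1.0 + 5.0 * np.sin(theta) + rng.normal(0.0, 0.1, 20)

      cx, cy, r = kasa_circle_fit(x, y)
      print(f"center: ({cx:.2f}, {cy:.2f}), radius: {r:.2f}")   # ~ (3, -1), 5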

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The objective of the contract is to consolidate the advances made during the previous contract in the conversion of syngas to motor fuels using Molecular Sieve-containing catalysts and to demonstrate the practical utility and economic value of the new catalyst/process systems with appropriate laboratory runs. Work on the program is divided into the following six tasks: (1) preparation of a detailed work plan covering the entire performance of the contract; (2) preliminary techno-economic assessment of the UCC catalyst/process system; (3) optimization of the most promising catalysts developed under the prior contract; (4) optimization of the UCC catalyst system in a manner that will give it the longest possible service life; (5) optimization of a UCC process/catalyst system based upon a tubular reactor with a recycle loop; and (6) economic evaluation of the optimal performance found under Task 5 for the UCC process/catalyst system. Accomplishments are reported for Tasks 2 through 5.

  13. Gamma ray spectroscopy employing divalent europium-doped alkaline earth halides and digital readout for accurate histogramming

    DOEpatents

    Cherepy, Nerine Jane; Payne, Stephen Anthony; Drury, Owen B; Sturm, Benjamin W

    2014-11-11

    A scintillator radiation detector system according to one embodiment includes a scintillator; and a processing device for processing pulse traces corresponding to light pulses from the scintillator, wherein pulse digitization is used to improve energy resolution of the system. A scintillator radiation detector system according to another embodiment includes a processing device for fitting digitized scintillation waveforms to an algorithm based on identifying rise and decay times and performing a direct integration of fit parameters. A method according to yet another embodiment includes processing pulse traces corresponding to light pulses from a scintillator, wherein pulse digitization is used to improve energy resolution of the system. A method in a further embodiment includes fitting digitized scintillation waveforms to an algorithm based on identifying rise and decay times; and performing a direct integration of fit parameters. Additional systems and methods are also presented.
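
    A minimal Python/scipy sketch of the waveform-fitting idea in this patent record: fit a pulse model with explicit rise and decay times, then integrate the fitted model analytically (here the integral is A * (tau_d - tau_r)) rather than summing noisy samples. The pulse shape, sampling, and parameter values are illustrative assumptions, not the patented algorithm.

      import numpy as np
      from scipy.optimize import curve_fit

      def pulse(t, A, t0, tau_r, tau_d):
          """Scintillation pulse model with rise and decay times (assumed form)."""
          dt = np.clip(t - t0, 0.0, None)
          return A * (np.exp(-dt / tau_d) - np.exp(-dt / tau_r))

      t = np.arange(0.0, 2000.0, 4.0)            # 4 ns sampling (assumed)
      rng = np.random.default_rng(1)
      trace = pulse(t, 50.0, 200.0, 15.0, 600.0) + rng.normal(0.0, 0.5, t.size)

      p0 = [40.0, 180.0, 10.0, 500.0]            # rough initial guesses
      (A, t0, tau_r, tau_d), _ = curve_fit(pulse, t, trace, p0=p0)

      # Direct integration of the fit parameters: the model integrates
      # analytically to A * (tau_d - tau_r), a proxy for deposited energy.
      print(f"A={A:.1f}, tau_r={tau_r:.1f}, tau_d={tau_d:.1f}, "
            f"integral={A * (tau_d - tau_r):.0f}")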

  14. ICC '86; Proceedings of the International Conference on Communications, Toronto, Canada, June 22-25, 1986, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    Papers are presented on ISDN, mobile radio systems and techniques for digital connectivity, centralized and distributed algorithms in computer networks, communications networks, quality assurance and impact on cost, adaptive filters in communications, the spread spectrum, signal processing, video communication techniques, and digital satellite services. Topics discussed include performance evaluation issues for integrated protocols, packet network operations, the computer network theory and multiple-access, microwave single sideband systems, switching architectures, fiber optic systems, wireless local communications, modulation, coding, and synchronization, remote switching, software quality, transmission, and expert systems in network operations. Consideration is given to wide area networks, image and speech processing, office communications application protocols, multimedia systems, customer-controlled network operations, digital radio systems, channel modeling and signal processing in digital communications, earth station/on-board modems, computer communications system performance evaluation, source encoding, compression, and quantization, and adaptive communications systems.

  15. Beyond feedback control: the interactive use of performance management systems. Implications for process innovation in Italian healthcare organizations.

    PubMed

    Demartini, Chiara; Mella, Piero

    2014-01-01

    This paper shows how the use of performance management systems affects managers' perceptions of satisfaction, the effectiveness of the control system, and performance related to process innovation. An exploratory empirical study was conducted on 85 managers operating in Italian healthcare organizations. The empirical findings indicate that the interactive, as opposed to diagnostic, use of performance management systems enhances managerial satisfaction with the control system and the managerial perception of effectiveness. The study also showed that it is not control itself that is an obstacle to innovation in organizations in general (and in health organizations in particular), but rather the diagnostic use of control mechanisms, which impedes interaction between the control personnel and those subject to control. Finally, the paper addresses managerial implications and further research avenues. Copyright © 2013 John Wiley & Sons, Ltd.

  16. System design of ELITE power processing unit

    NASA Astrophysics Data System (ADS)

    Caldwell, David J.

    The Electric Propulsion Insertion Transfer Experiment (ELITE) is a space mission planned for the mid 1990s in which technological readiness will be demonstrated for electric orbit transfer vehicles (EOTVs). A system-level design of the power processing unit (PPU), which conditions solar array power for the arcjet thruster, was performed to optimize performance with respect to reliability, power output, efficiency, specific mass, and radiation hardness. The PPU system consists of multiphased parallel switchmode converters, configured as current sources, connected directly from the array to the thruster. The PPU control system includes a solar array peak power tracker (PPT) to maximize the power delivered to the thruster regardless of variations in array characteristics. A stability analysis has been performed to verify that the system is stable despite the nonlinear negative impedance of the PPU input and the arcjet thruster. Performance specifications are given to provide the required spacecraft capability with existing technology.
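
    The peak power tracker (PPT) function can be illustrated with the classic perturb-and-observe scheme, a common peak-power-tracking approach; whether the ELITE PPU used this exact rule is not stated in the record, and the array model below is a toy assumption.

      def perturb_and_observe(measure_power, v0, dv=0.5, steps=60):
          """Perturb-and-observe peak power tracking: nudge the operating
          voltage and keep moving in whichever direction raises power."""
          v, p = v0, measure_power(v0)
          for _ in range(steps):
              v_next = v + dv
              p_next = measure_power(v_next)
              if p_next < p:
                  dv = -dv               # power fell: reverse perturbation
              v, p = v_next, p_next
          return v, p

      def array_power(v):
          """Toy solar-array P-V curve with a single peak (assumed shape)."""
          current = max(0.0, 10.0 * (1.0 - ((v - 20.0) / 80.0) ** 3))
          return v * current

      v_mp, p_mp = perturb_and_observe(array_power, v0=50.0)
      print(f"operating point: {v_mp:.1f} V, {p_mp:.1f} W")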

  17. Extended performance solar electric propulsion thrust system study. Volume 2: Baseline thrust system

    NASA Technical Reports Server (NTRS)

    Poeschel, R. L.; Hawthorne, E. I.

    1977-01-01

    Several thrust system design concepts were evaluated and compared using the specifications of the most advanced 30-cm engineering model thruster as the technology base. Emphasis was placed on relatively high-power missions (60 to 100 kW) such as a Halley's comet rendezvous. The extensions in thruster performance required for the Halley's comet mission were defined, and alternative thrust system concepts were designed in sufficient detail for comparing mass, efficiency, reliability, structure, and thermal characteristics. Confirmation testing and analysis of thruster and power-processing components were performed, and the feasibility of satisfying extended performance requirements was verified. A baseline design was selected from the alternatives considered, and the design analysis and documentation were refined. The baseline thrust system design features modular construction, conventional power processing, and a concentrator solar array concept, and is designed to interface with the space shuttle.

  18. Simulation of process identification and controller tuning for flow control system

    NASA Astrophysics Data System (ADS)

    Chew, I. M.; Wong, F.; Bono, A.; Wong, K. I.

    2017-06-01

    The PID controller is undeniably the most popular method for controlling industrial processes. The ability to tune its three control actions allows the controller to be matched to the specific needs of a process. This paper discusses the three control actions and how combining them in various forms improves controller robustness. A plant model is simulated using the Process Control Simulator in order to evaluate controller performance. First, the open-loop response of the plant is studied by applying a step input and collecting the output data. A first-order-plus-dead-time (FOPDT) model of the plant is then formed using both Matlab-Simulink and the process reaction curve (PRC) method. The controller settings are then calculated to find the values of Kc and τi that give satisfactory control of the closed-loop system. Closed-loop performance is analyzed in terms of setpoint tracking and disturbance rejection. To optimize overall system performance, refined tuning (or detuning) of the PID settings is further conducted to ensure a consistent closed-loop response to setpoint changes and disturbances. As a result, PB = 100% and τi = 2.0 s are preferred for setpoint tracking, while PB = 100% and τi = 2.5 s are selected for rejecting the imposed disturbance. In short, the choice of tuning values likewise depends on the control objective and the required stability of the overall physical model.
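
    For a concrete sense of the closed-loop evaluation, the sketch below simulates a PI controller on an assumed FOPDT plant (gain 1.0, time constant 5 s, dead time 1 s) with PB = 100% and τi = 2.0 s. It is a generic Euler-integration sketch, not the paper's Process Control Simulator.

      import numpy as np

      def simulate_pi(Kp, tau, theta, PB, tau_i, sp=1.0, t_end=60.0, dt=0.05):
          """PI control of a first-order-plus-dead-time (FOPDT) process,
          integrated with a simple Euler scheme. PB is the proportional
          band in percent, so the controller gain is Kc = 100 / PB."""
          Kc = 100.0 / PB
          n = int(t_end / dt)
          delay = int(theta / dt)                 # dead time in samples
          y = np.zeros(n)
          u = np.zeros(n)
          integral = 0.0
          for k in range(1, n):
              e = sp - y[k - 1]                   # control error
              integral += e * dt
              u[k] = Kc * (e + integral / tau_i)  # PI control law
              u_eff = u[max(k - delay, 0)]        # delayed input reaches plant
              y[k] = y[k - 1] + dt * (Kp * u_eff - y[k - 1]) / tau
          return y

      # Assumed FOPDT plant: gain 1.0, time constant 5 s, dead time 1 s.
      y = simulate_pi(Kp=1.0, tau=5.0, theta=1.0, PB=100.0, tau_i=2.0)
      print(f"final value: {y[-1]:.3f}, overshoot: {max(y.max() - 1.0, 0.0):.3f}")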

  19. Spacelab output processing system architectural study

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Two different system architectures are presented, derived from two different data flows within the Spacelab Output Processing System. The major difference between the architectures is the position of the decommutation function: the first architecture performs decommutation in the latter half of the system, while the second performs it in the front end. For examination, the system was divided into five stand-alone subsystems: Work Assembler, Mass Storage System, Output Processor, Peripheral Pool, and Resource Monitor. The workload of each subsystem was estimated independently of the specific devices to be used. Candidate devices were surveyed from a wide sampling of off-the-shelf devices. Analytical expressions were developed to quantify the projected workload in conjunction with typical devices that would adequately handle the subsystem tasks. All of the study efforts were then directed toward preparing performance and cost curves for each architecture's subsystems.

  20. Design and implementation of laser target simulator in hardware-in-the-loop simulation system based on LabWindows/CVI and RTX

    NASA Astrophysics Data System (ADS)

    Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong

    2016-11-01

    To satisfy real-time and generality requirements, a laser target simulator for a semi-physical (hardware-in-the-loop) simulation system based on an RTX + LabWindows/CVI platform is proposed in this paper. Compared with the upper/lower-computer platform architecture used in most current real-time systems, this system offers better maintainability and portability. The system runs on the Windows platform and uses the Windows RTX real-time extension, combined with a reflective memory network, to guarantee real-time performance for tasks such as computing the simulation model, transmitting simulation data, and maintaining real-time communication. These real-time tasks run in the RTSS process. A graphical interface built with LabWindows/CVI handles the non-real-time tasks of the simulation, such as man-machine interaction and the display and storage of simulation data, which run in a Win32 process. Through the design of RTX shared memory and a task scheduling algorithm, data are exchanged between the real-time RTSS process and the non-real-time Win32 process. Experimental results show that the system provides strong real-time performance, high stability, and high simulation accuracy, together with good human-computer interaction.

  1. Quantum Accelerators for High-performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S.; Britt, Keith A.; Mohiyaddin, Fahd A.

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  2. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing.

    PubMed

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2014-10-01

    A push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power, provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query operations and to enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA streams. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize its main kernel scheduling disciplines. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels.
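
    The occupancy-to-performance connection explored in the paper can be illustrated with a toy analytical model. The sketch below is a hypothetical simplification, not G-SDMS's actual model: the resource limits are loosely Kepler-class numbers, and kernels are assumed to execute in uniform waves of resident blocks.

    ```python
    # Simplified, hypothetical sketch of an occupancy-style performance model
    # for compute-bound kernels. Resource numbers and the wave-execution
    # assumption are illustrative, not the paper's actual model.
    from dataclasses import dataclass

    @dataclass
    class Kernel:
        name: str
        blocks: int                # total thread blocks launched
        regs_per_thread: int
        threads_per_block: int
        work_per_block_ms: float   # execution time of one block at full throughput

    # Assumed per-SM resource limits (loosely Kepler-class)
    SM_COUNT = 14
    MAX_THREADS_PER_SM = 2048
    MAX_REGS_PER_SM = 65536

    def blocks_resident_per_sm(k: Kernel) -> int:
        """How many blocks of this kernel fit on one SM, given its resource use."""
        by_threads = MAX_THREADS_PER_SM // k.threads_per_block
        by_regs = MAX_REGS_PER_SM // (k.regs_per_thread * k.threads_per_block)
        return max(1, min(by_threads, by_regs))

    def predicted_runtime_ms(k: Kernel) -> float:
        """Runtime estimate: blocks execute in waves of (resident blocks x SMs)."""
        wave_size = blocks_resident_per_sm(k) * SM_COUNT
        waves = -(-k.blocks // wave_size)      # ceiling division
        return waves * k.work_per_block_ms

    k = Kernel("hash_join_probe", blocks=4096, regs_per_thread=32,
               threads_per_block=256, work_per_block_ms=0.05)
    print(f"{k.name}: ~{predicted_runtime_ms(k):.1f} ms "
          f"({blocks_resident_per_sm(k)} blocks resident per SM)")
    ```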

  3. Modeling the Hydrologic Processes of a Permeable Pavement System

    EPA Science Inventory

    A permeable pavement system can capture stormwater to reduce runoff volume and flow rate, improve onsite groundwater recharge, and enhance pollutant controls within the site. A new unit process model for evaluating the hydrologic performance of a permeable pavement system has been developed.

  4. Heat for film processing from solar energy

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Report describes solar water heating system for laboratory in Mill Valley, California. System furnishes 59 percent of hot water requirements for photographic film processing. Text of report discusses system problems and modifications, analyzes performance and economics, and supplies drawings and operation/maintenance manual.

  5. Donabedian's structure-process-outcome quality of care model: Validation in an integrated trauma system.

    PubMed

    Moore, Lynne; Lavoie, André; Bourgeois, Gilles; Lapointe, Jean

    2015-06-01

    According to Donabedian's health care quality model, improvements in the structure of care should lead to improvements in clinical processes that should in turn improve patient outcome. This model has been widely adopted by the trauma community but has not yet been validated in a trauma system. The objective of this study was to assess the performance of an integrated trauma system in terms of structure, process, and outcome and evaluate the correlation between quality domains. Quality of care was evaluated for patients treated in a Canadian provincial trauma system (2005-2010; 57 centers, n = 63,971) using quality indicators (QIs) developed and validated previously. Structural performance was measured by transposing on-site accreditation visit reports onto an evaluation grid according to American College of Surgeons criteria. The composite process QI was calculated as the average sum of proportions of conformity to 15 process QIs derived from literature review and expert opinion. Outcome performance was measured using risk-adjusted rates of mortality, complications, and readmission as well as hospital length of stay (LOS). Correlation was assessed with Pearson's correlation coefficients. Statistically significant correlations were observed between structure and process QIs (r = 0.33), and process and outcome QIs (r = -0.33 for readmission, r = -0.27 for LOS). Significant positive correlations were also observed between outcome QIs (r = 0.37 for mortality-readmission; r = 0.39 for mortality-LOS and readmission-LOS; r = 0.45 for mortality-complications; r = 0.34 for readmission-complications; r = 0.63 for complications-LOS). Significant correlations between quality domains observed in this study suggest that Donabedian's structure-process-outcome model is a valid model for evaluating trauma care. Trauma centers that perform well in terms of structure also tend to perform well in terms of clinical processes, which in turn has a favorable influence on patient outcomes. Prognostic study, level III.
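
    The statistical machinery here is a straightforward Pearson correlation across centers. A minimal sketch with synthetic per-center QI scores (all numbers hypothetical, not the study's data) shows the computation:

    ```python
    # Sketch of the correlation analysis between quality domains, using
    # synthetic per-center quality indicator (QI) scores.
    import numpy as np

    rng = np.random.default_rng(0)
    n_centers = 57

    structure = rng.normal(70, 10, n_centers)                  # accreditation grid score
    process = 0.3 * structure + rng.normal(50, 8, n_centers)   # composite process QI
    # Better processes -> lower risk-adjusted readmission (negative correlation)
    readmission = 20 - 0.1 * process + rng.normal(0, 1.5, n_centers)

    def pearson(x, y):
        x, y = x - x.mean(), y - y.mean()
        return (x @ y) / np.sqrt((x @ x) * (y @ y))

    print(f"structure vs process:   r = {pearson(structure, process):+.2f}")
    print(f"process vs readmission: r = {pearson(process, readmission):+.2f}")
    ```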

  6. Collection, processing and dissemination of data for the national solar demonstration program

    NASA Technical Reports Server (NTRS)

    Day, R. E.; Murphy, L. J.; Smok, J. T.

    1978-01-01

    A national solar data system developed for the DOE by IBM provides for automatic gathering, conversion, transfer, and analysis of demonstration site data. NASA requirements for this system include providing solar site hardware, engineering, data collection, and analysis. The specific tasks include: (1) solar energy system design/integration; (2) developing a site data acquisition subsystem; (3) developing a central data processing system; (4) operating the test facility at Marshall Space Flight Center; (5) collecting and analyzing data. The systematic analysis and evaluation of the data from the National Solar Data System is reflected in a monthly performance report and a solar energy system performance evaluation report.

  7. A definition of high-level decisions in the engineering of systems

    NASA Astrophysics Data System (ADS)

    Powell, Robert Anthony

    The role of the systems engineer requires that he or she be proactive and guide the program manager and customers through their decisions to enhance the effectiveness of system development, producing faster, better, and cheaper systems. The present lack of coverage in the literature of what these decisions are and how they relate to each other may be a contributing factor to the high rate of failure among system projects. At the onset of the system development process, decisions play an integral role in the design of a system that meets stakeholders' needs. This is apparent during the design and qualification of both the Development System and the Operational System. The performance, cost, and schedule of the Development System affect the performance of the Operational System and are affected by decisions that influence physical elements of the Development System. Likewise, the performance, cost, and schedule of the Operational System are affected by decisions that influence physical elements of the Operational System. Traditionally, product and process have been designed using know-how and trial and error. However, the empiricism of engineers and program managers is limited, which can lead, and has led, to costly mistakes. To date, very little research has explored the decisions made in the engineering of a system. In government, literature exists on procurement processes for major system development; but in general, literature on these decisions, how they relate to each other, and the key information requirements within each of the two systems and across them is not readily available. This research aims to improve the processes inherent in the engineering of systems. The primary focus is on Department of Defense (DoD) military systems, specifically aerospace systems, though the results may generalize more broadly. The result of this research is a process tool, a Decision System Model, which can be used by systems engineers to guide the program manager and customers through the decisions about concurrently designing and qualifying both the Development and Operational systems.

  8. A performance improvement case study in aircraft maintenance and its implications for hazard identification.

    PubMed

    Ward, Marie; McDonald, Nick; Morrison, Rabea; Gaynor, Des; Nugent, Tony

    2010-02-01

    Aircraft maintenance is a highly regulated, safety critical, complex and competitive industry. There is a need to develop innovative solutions to address process efficiency without compromising safety and quality. This paper presents the case that in order to improve a highly complex system such as aircraft maintenance, it is necessary to develop a comprehensive and ecologically valid model of the operational system, which represents not just what is meant to happen, but what normally happens. This model then provides the backdrop against which to change or improve the system. A performance report, the Blocker Report, specific to aircraft maintenance and related to the model was developed gathering data on anything that 'blocks' task or check performance. A Blocker Resolution Process was designed to resolve blockers and improve the current check system. Significant results were obtained for the company in the first trial and implications for safety management systems and hazard identification are discussed. Statement of Relevance: Aircraft maintenance is a safety critical, complex, competitive industry with a need to develop innovative solutions to address process and safety efficiency. This research addresses this through the development of a comprehensive and ecologically valid model of the system linked with a performance reporting and resolution system.

  9. Application of the Tool for Turbine Engine Closed-Loop Transient Analysis (TTECTrA) for Dynamic Systems Analysis

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Zinnecker, Alicia M.

    2014-01-01

    The aircraft engine design process seeks to achieve the best overall system-level performance, weight, and cost for a given engine design. This is achieved by a complex process known as systems analysis, where steady-state simulations are used to identify trade-offs that should be balanced to optimize the system. The steady-state simulations and data on which systems analysis relies may not adequately capture the true performance trade-offs that exist during transient operation. Dynamic Systems Analysis provides the capability for assessing these trade-offs at an earlier stage of the engine design process. The concept of dynamic systems analysis and the type of information available from this analysis are presented in this paper. To provide this capability, the Tool for Turbine Engine Closed-loop Transient Analysis (TTECTrA) was developed. This tool aids a user in the design of a power management controller to regulate thrust, and a transient limiter to protect the engine model from surge at a single flight condition (defined by an altitude and Mach number). Results from simulation of the closed-loop system may be used to estimate the dynamic performance of the model. This enables evaluation of the trade-off between performance and operability, or safety, in the engine, which could not be done with steady-state data alone. A design study is presented to compare the dynamic performance of two different engine models integrated with the TTECTrA software.

  10. Information Processing Capabilities in Performers Differing in Levels of Motor Skill

    DTIC Science & Technology

    1979-01-01

    Craik, F. I. M., & Lockhart, R. S. Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 1972, 11, 671-684. ... ARI Technical Report: Information Processing Capabilities in Performers Differing in Levels of Motor Skill, by Robert N. Singer. ... INTRODUCTION: In the human behaving systems model developed by Singer, Gerson, and...

  11. Methodology for the systems engineering process. Volume 1: System functional activities

    NASA Technical Reports Server (NTRS)

    Nelson, J. H.

    1972-01-01

    Systems engineering is examined in terms of the functional activities performed in the conduct of a system definition/design, and system development is described through a parametric analysis that combines functions, performance, and design variables. Emphasis is placed on identifying activities performed by design organizations and design specialty groups, as well as by a central systems engineering organizational element. Identifying specific roles and responsibilities for performing functions, and for monitoring and controlling activities within the system development operation, is also emphasized.

  12. A parallel expert system for the control of a robotic air vehicle

    NASA Technical Reports Server (NTRS)

    Shakley, Donald; Lamont, Gary B.

    1988-01-01

    Expert systems can be used to govern the intelligent control of vehicles, for example the Robotic Air Vehicle (RAV). Due to the nature of the RAV system, the associated expert system needs to perform in a demanding real-time environment. The use of a parallel processing capability to support the expert system's computational requirements is critical in this application; thus, algorithms for parallel real-time expert systems must be designed, analyzed, and synthesized. The design process incorporates consideration of rule-set/fact-set size along with representation issues. These issues are examined in reference to information movement and various inference mechanisms. Also examined is the process of transporting the RAV expert system functions from the TI Explorer, where they are implemented in the Automated Reasoning Tool (ART), to the iPSC Hypercube, where the system is synthesized using Concurrent Common LISP (CCLISP). The transformation process for the ART to CCLISP conversion is described. The performance characteristics of the parallel implementation of these expert systems on the iPSC Hypercube are compared to the TI Explorer implementation.

  13. Modeling and analysis of power processing systems: Feasibility investigation and formulation of a methodology

    NASA Technical Reports Server (NTRS)

    Biess, J. J.; Yu, Y.; Middlebrook, R. D.; Schoenfeld, A. D.

    1974-01-01

    A review is given of future power processing systems planned for the next 20 years, and the state-of-the-art of power processing design modeling and analysis techniques used to optimize power processing systems. A methodology of modeling and analysis of power processing equipment and systems has been formulated to fulfill future tradeoff studies and optimization requirements. Computer techniques were applied to simulate power processor performance and to optimize the design of power processing equipment. A program plan to systematically develop and apply the tools for power processing systems modeling and analysis is presented so that meaningful results can be obtained each year to aid the power processing system engineer and power processing equipment circuit designers in their conceptual and detail design and analysis tasks.

  14. Nickel hydrogen battery expert system

    NASA Technical Reports Server (NTRS)

    Shiva, Sajjan G.

    1991-01-01

    The Hubble Telescope Battery Testbed at MSFC uses the Nickel Cadmium (NiCd) Battery Expert System (NICBES-2), which supports the evaluation of the performance of Hubble Telescope spacecraft batteries and provides alarm diagnosis and action advice. NICBES-2 provides a reasoning system along with a battery domain knowledge base to achieve this battery health management function. An effort to modify NICBES-2 to accommodate the Nickel Hydrogen (NiH2) battery environment now in the MSFC testbed is summarized. NICBES-2 is implemented on a Sun Microsystems workstation, is written in SunOS C and Quintus Prolog, and operates in a multitasking environment. To process the telemetry data and provide status and action advice, NICBES-2 spawns three processes: a serial port process (SPP), a data handler process (DHP), and an expert system process (ESP). NICBES-2 performs orbit data gathering, data evaluation, alarm diagnosis and action advice, and status and history display functions. Adapting NICBES-2 to the NiH2 battery environment required modification to all three component processes.

  15. Performance indicators for the efficiency analysis of urban drainage systems.

    PubMed

    Artina, S; Becciu, G; Maglionico, M; Paoletti, A; Sanfilippo, U

    2005-01-01

    Performance indicators implemented in a decision support system (DSS) for the technical, managerial and economic evaluation of urban drainage systems (UDS), called MOMA FD, are presented. Several kinds of information are collected and processed by MOMA FD to evaluate both present situation and future scenarios of development and enhancement. Particular interest is focused on the evaluation of the environmental impact, which is considered a very relevant factor in the decision making process to identify the priorities for UDS improvements.

  16. Image processing for flight crew enhanced situation awareness

    NASA Technical Reports Server (NTRS)

    Roberts, Barry

    1993-01-01

    This presentation describes the image processing work that is being performed for the Enhanced Situational Awareness System (ESAS) application. Specifically, the presented work supports the Enhanced Vision System (EVS) component of ESAS.

  17. Design of an FMCW radar baseband signal processing system for automotive application.

    PubMed

    Lin, Jau-Jr; Li, Yuan-Ping; Hsu, Wei-Chiang; Lee, Ta-Sung

    2016-01-01

    For a typical FMCW automotive radar system, a new design of the baseband signal processing architecture and algorithms is proposed to overcome ghost-target and overlap problems in multi-target detection scenarios. To satisfy the short measurement time constraint without increasing the RF front-end loading, a three-segment waveform with different slopes is utilized. By introducing a new pairing mechanism and a spatial filter design algorithm, the proposed detection architecture not only provides high accuracy and reliability but also requires little pairing time and computational loading. The proposed baseband signal processing architecture and algorithms balance performance and complexity, and are suitable for implementation in a real automotive radar system. Field measurement results demonstrate that the proposed automotive radar signal processing system performs well in a realistic application scenario.
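
    The reason multiple slopes defeat ghost targets is that each chirp segment contributes one linear equation in range and velocity, so candidate pairings that do not solve consistently can be discarded. The sketch below shows only this underlying linear system with illustrative parameters; the paper's actual pairing mechanism and spatial filter are more elaborate.

    ```python
    # Sketch of how beat frequencies from chirp segments with different slopes
    # resolve range and velocity. Sign handling of the measured beats is
    # simplified; all parameter values are illustrative.
    import numpy as np

    C = 3e8        # speed of light (m/s)
    FC = 77e9      # carrier frequency (Hz), typical automotive band

    def beat_freq(R, v, slope):
        """Beat frequency for a target at range R (m) and radial speed v (m/s)."""
        return 2 * slope * R / C + 2 * v * FC / C

    def solve_target(fb1, fb2, slope1, slope2):
        """Invert two beat-frequency measurements taken with different slopes."""
        A = np.array([[2 * slope1 / C, 2 * FC / C],
                      [2 * slope2 / C, 2 * FC / C]])
        R, v = np.linalg.solve(A, np.array([fb1, fb2]))
        return R, v

    slope1, slope2 = 50e12, -50e12      # Hz/s, up- and down-chirp segments
    R_true, v_true = 80.0, -15.0        # 80 m, closing at 15 m/s

    fb1 = beat_freq(R_true, v_true, slope1)
    fb2 = beat_freq(R_true, v_true, slope2)
    R, v = solve_target(fb1, fb2, slope1, slope2)
    print(f"recovered: R = {R:.1f} m, v = {v:.1f} m/s")
    ```

    With several targets, only beat-frequency pairings that yield physically plausible (R, v) solutions across all three segments survive, which is the essence of ghost-target rejection.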

  18. Embedded image processing engine using ARM cortex-M4 based STM32F407 microcontroller

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samaiya, Devesh, E-mail: samaiya.devesh@gmail.com

    2014-10-06

    Advances in low-cost, easily available, yet powerful hardware and the revolution in open-source software have greatly increased engineers' urge to build newer, more interactive machines and electronic systems. To make systems more interactive, designers need easy-to-use sensor systems. Giving machines the gift of vision has never been simple; although no longer impossible, it remains difficult and expensive. This work presents a low-cost, moderate-performance, programmable image processing engine. The engine can capture real-time images, store them in permanent storage, and perform preprogrammed image processing operations on the captured images.

  19. Validating the ACE Model for Evaluating Student Performance Using a Teaching-Learning Process Based on Computational Modeling Systems

    ERIC Educational Resources Information Center

    Louzada, Alexandre Neves; Elia, Marcos da Fonseca; Sampaio, Fábio Ferrentini; Vidal, Andre Luiz Pestana

    2014-01-01

    The aim of this work is to adapt and test, in a Brazilian public school, the ACE model proposed by Borkulo for evaluating student performance as a teaching-learning process based on computational modeling systems. The ACE model is based on different types of reasoning involving three dimensions. In addition to adapting the model and introducing…

  20. Evaluation of a Stirling Solar Dynamic System for Lunar Oxygen Production

    NASA Technical Reports Server (NTRS)

    Colozza, Anthony J.; Wong, Wayne A.

    2006-01-01

    An evaluation of a solar concentrator-based system for producing oxygen from the lunar regolith was performed. The system utilizes a solar concentrator mirror to provide thermal energy for the oxygen production process, as well as thermal energy to power a Stirling heat engine for the production of electricity. The electricity produced is used to operate the equipment needed in the oxygen production process. The oxygen production method used in the analysis was the hydrogen reduction of ilmenite. Using this method, a baseline system design was produced. This baseline system had an oxygen production rate of 0.6 kg/hr with a concentrator mirror size of 5 m. Variations were performed on the baseline design to show how changes in system size and process rate affected the oxygen production rate.

  1. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
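
    The block-volume idea at the heart of the platform can be illustrated with a deliberately simplified parallelization sketch (a hypothetical example using Python multiprocessing, not the platform's actual API or scheduler):

    ```python
    # Highly simplified sketch of the block-volume pattern: split a 3D volume
    # into independent blocks, process them in a worker pool, and reassemble.
    import numpy as np
    from multiprocessing import Pool

    BLOCK = 64  # block edge length in voxels; "size-adaptive" in the real system

    def threshold_block(args):
        """Example per-block operation: simple intensity threshold."""
        index, block = args
        return index, (block > 0.5).astype(np.uint8)

    def split_blocks(vol):
        for z in range(0, vol.shape[0], BLOCK):
            for y in range(0, vol.shape[1], BLOCK):
                for x in range(0, vol.shape[2], BLOCK):
                    yield (z, y, x), vol[z:z+BLOCK, y:y+BLOCK, x:x+BLOCK].copy()

    def process_volume(vol):
        out = np.empty(vol.shape, dtype=np.uint8)
        with Pool() as pool:  # load distribution across cores
            for (z, y, x), result in pool.imap_unordered(
                    threshold_block, split_blocks(vol)):
                dz, dy, dx = result.shape
                out[z:z+dz, y:y+dy, x:x+dx] = result
        return out

    if __name__ == "__main__":
        volume = np.random.rand(128, 128, 128).astype(np.float32)
        mask = process_volume(volume)
        print("foreground voxels:", int(mask.sum()))
    ```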

  2. Increased Reliability of Gas Turbine Components by Robust Coatings Manufacturing

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Dudykevych, T.; Sansom, D.; Subramanian, R.

    2017-08-01

    The expanding operational windows of advanced gas turbine components demand increasing performance from protective coating systems. This demand has led, over recent years, to the development of novel multi-functional, multi-material coating system architectures. In addition, the increasing dependency of components exposed to extreme environments on protective coatings means that a coating system failure carries more severe penalties. This emphasizes that the reliability and consistency of protective coating systems are as important as their performance. By means of examples, this paper describes the effects of scatter in material properties resulting from manufacturing variations on coating life predictions. A strong foundation in process-property-performance correlations, together with regular monitoring and control of the coating process, is essential for a robust and well-controlled coating process. Proprietary and/or commercially available diagnostic tools can help in achieving these goals, but their usage in industrial settings is still limited. Various key contributors to process variability are briefly discussed, along with the limitations of existing process and product control methods. Other aspects important for product reliability and consistency in serial manufacturing, as well as advanced testing methodologies that simplify and enhance product inspection and improve objectivity, are also briefly described.

  3. Performance evaluation method of electric energy data acquire system based on combination of subjective and objective weights

    NASA Astrophysics Data System (ADS)

    Gao, Chen; Ding, Zhongan; Deng, Bofa; Yan, Shengteng

    2017-10-01

    Based on the characteristics of the electric energy data acquire system (EEDAS), and considering the availability of each index and the relationships among them, a performance evaluation index system is established covering three aspects: the master station system, the communication channel, and the terminal equipment. Comprehensive weights for each index are determined by combining the triangular fuzzy number analytic hierarchy process with the entropy weight method, so that both subjective preference and objective attributes are taken into account, making the performance evaluation more reasonable and reliable. An example analysis shows that combining the analytic hierarchy process (AHP) with triangular fuzzy numbers (TFN) to establish a comprehensive evaluation index system based on the entropy method yields results that are not only convenient and practical but also more objective and accurate.
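
    The objective half of the weighting scheme, the entropy weight method, and one common way of fusing it with subjective weights can be sketched directly from the standard formulas. The data matrix and the subjective (AHP-style) weights below are hypothetical, and the multiplicative fusion rule is one common choice rather than necessarily the paper's:

    ```python
    # Sketch of the entropy weight method plus subjective-objective fusion.
    import numpy as np

    # rows = evaluated objects, cols = indices
    # (e.g., master station, communication channel, terminal equipment)
    X = np.array([[0.92, 0.85, 0.78],
                  [0.88, 0.90, 0.95],
                  [0.75, 0.80, 0.85],
                  [0.95, 0.70, 0.90]])

    P = X / X.sum(axis=0)                          # column-normalize to proportions

    n = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(n)   # entropy of each index
    w_obj = (1 - E) / (1 - E).sum()                # objective (entropy) weights

    w_subj = np.array([0.5, 0.3, 0.2])             # subjective weights (assumed AHP output)

    w = w_subj * w_obj                             # multiplicative fusion
    w = w / w.sum()

    scores = X @ w                                 # comprehensive evaluation score
    print("combined weights:", np.round(w, 3))
    print("scores:", np.round(scores, 3))
    ```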

  4. A perspective on future directions in aerospace propulsion system simulation

    NASA Technical Reports Server (NTRS)

    Miller, Brent A.; Szuch, John R.; Gaugler, Raymond E.; Wood, Jerry R.

    1989-01-01

    The design and development of aircraft engines is a lengthy and costly process using today's methodology. This is due, in large measure, to the fact that present methods rely heavily on experimental testing to verify the operability, performance, and structural integrity of components and systems. The potential exists for achieving significant speedups in the propulsion development process through increased use of computational techniques for simulation, analysis, and optimization. This paper outlines the concept and technology requirements for a Numerical Propulsion Simulation System (NPSS) that would provide capabilities to do interactive, multidisciplinary simulations of complete propulsion systems. By combining high performance computing hardware and software with state-of-the-art propulsion system models, the NPSS will permit the rapid calculation, assessment, and optimization of subcomponent, component, and system performance, durability, reliability, and weight, before committing to building hardware.

  5. Aircraft Engine-Monitoring System And Display

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.; Person, Lee H., Jr.

    1992-01-01

    Proposed Engine Health Monitoring System and Display (EHMSD) provides enhanced means for pilot to control and monitor performances of engines. Processes raw sensor data into information meaningful to pilot. Provides graphical information about performance capabilities, current performance, and operational conditions in components or subsystems of engines. Provides means to control engine thrust directly and innovative means to monitor performance of engine system rapidly and reliably. Features reduce pilot workload and increase operational safety.

  6. Optimized design of embedded DSP system hardware supporting complex algorithms

    NASA Astrophysics Data System (ADS)

    Li, Yanhua; Wang, Xiangjun; Zhou, Xinling

    2003-09-01

    The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition and real-time image processing. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for flexible logic control of the input channel, and an RS-485 transceiver for local network communication. Because the design employs a DSP with a high performance-price ratio (TMS320C6712) and a large FLASH, the system can load and execute complex algorithms with little algorithm optimization or code reduction. The CPLD provides flexible logic control for the whole DSP board, especially the input channel, and allows convenient interfacing between different sensors and the DSP system. The transceiver circuit transfers data between the DSP and a host computer. The paper also introduces some key technologies that make the whole system work efficiently. Given these characteristics, the hardware is a suitable platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of the paper shows how the hardware is adapted for a biometric identification system with high identification precision. The results reveal that the hardware interfaces easily with a CMOS imager and is capable of carrying out complex biometric identification algorithms that require real-time processing.

  7. Architecture Of High Speed Image Processing System

    NASA Astrophysics Data System (ADS)

    Konishi, Toshio; Hayashi, Hiroshi; Ohki, Tohru

    1988-01-01

    An architecture for a high-speed image processing system corresponding to a new shape-understanding algorithm is proposed, and a hardware system based on the architecture was developed. The main considerations of the architecture are that the processors used should match the processing sequence of the target image and that the developed system should be practical for industrial use. As a result, each processing step could be performed at a rate of 80 nanoseconds per pixel.

  8. Performance of the ALTA 3500 scanned-laser mask lithography system

    NASA Astrophysics Data System (ADS)

    Buck, Peter D.; Buxbaum, Alex H.; Coleman, Thomas P.; Tran, Long

    1998-09-01

    The ALTA 3500, an advanced scanned-laser mask lithography tool produced by Etec, was introduced to the marketplace in September 1997. The system architecture was described and an initial performance evaluation was presented. This system, based on the ALTA 3000, uses a new 33.3X, 0.8 NA final reduction lens to reduce the spot size to 0.27 micrometers FWHM, thereby affording improved resolution and pattern acuity on the mask. To take advantage of the improved resolution, a new anisotropic chrome etch process has been developed and introduced, along with a change from Olin 895i resist to TOK iP3600 resist. In this paper we describe more extensively the performance of the ALTA 3500 and of these new processes.

  9. Managing Analysis Models in the Design Process

    NASA Technical Reports Server (NTRS)

    Briggs, Clark

    2006-01-01

    Design of large, complex space systems depends on significant model-based support for exploration of the design space. Integrated models predict system performance in mission-relevant terms given design descriptions and multiple physics-based numerical models. Both the design activities and the modeling activities warrant explicit process definitions and active process management to protect the project from excessive risk. Software and systems engineering processes have been formalized and similar formal process activities are under development for design engineering and integrated modeling. JPL is establishing a modeling process to define development and application of such system-level models.

  10. System-on-chip architecture and validation for real-time transceiver optimization: APC implementation on FPGA

    NASA Astrophysics Data System (ADS)

    Suarez, Hernan; Zhang, Yan R.

    2015-05-01

    New radar applications need to perform complex algorithms and process large quantities of data to generate useful information for users. This situation has motivated the search for better processing solutions, including low-power high-performance processors, efficient algorithms, and high-speed interfaces. In this work, a hardware implementation of adaptive pulse compression for real-time transceiver optimization is presented, based on a System-on-Chip architecture for Xilinx devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve computing-intensive tasks such as matrix multiplication and matrix inversion, which are essential for solving the covariance matrix. The tradeoffs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through high-performance AXI buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operation using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.
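
    The computational heart of such a design is the covariance-matrix solve that the coprocessors accelerate. As a software reference point, a generic MMSE-style solve might look like the following; this is not the paper's adaptive-pulse-compression recursion, and all sizes and the loading factor are illustrative:

    ```python
    # Sketch of the core linear-algebra kernel: form a sample covariance
    # matrix and solve against it for an adaptive filter.
    import numpy as np

    rng = np.random.default_rng(1)
    N, M = 64, 256                          # filter length, number of snapshots

    s = np.exp(2j * np.pi * 0.1 * np.arange(N))                # assumed waveform
    x = (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(2)

    R = x.conj().T @ x / M                  # sample covariance matrix (N x N)
    R += 1e-3 * np.eye(N)                   # diagonal loading for stability

    # Solve R w = s rather than forming R^-1 explicitly (cheaper, more stable)
    w = np.linalg.solve(R, s)
    w /= s.conj() @ w                       # normalize for unit response to s

    print("filter norm:", float(np.linalg.norm(w)))
    ```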

  11. Ku-band signal design study. [space shuttle orbiter data processing network

    NASA Technical Reports Server (NTRS)

    Rubin, I.

    1978-01-01

    Analytical tools, methods, and techniques for assessing the design and performance of the space shuttle orbiter data processing system (DPS) are provided. The computer data processing network is evaluated in the key areas of queueing behavior, synchronization, and network reliability. The structure of the data processing network is described, as well as the system operating principles and the network configuration. The characteristics of the computer systems are indicated. System reliability measures are defined and studied. System and network invulnerability measures are computed. Communication path and network failure analysis techniques are included.
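
    As a flavor of the queueing analysis such a study performs, the sketch below evaluates a single M/M/1 station; the model choice and the rates are illustrative assumptions, not the report's actual network model:

    ```python
    # Illustrative queueing calculation: mean delay at an M/M/1 station.
    def mm1_metrics(lam, mu):
        """Return utilization, mean number in system, and mean time in system."""
        if lam >= mu:
            raise ValueError("unstable queue: arrival rate >= service rate")
        rho = lam / mu          # utilization
        L = rho / (1 - rho)     # mean number in system (Little's law: L = lam * W)
        W = 1 / (mu - lam)      # mean time in system
        return rho, L, W

    rho, L, W = mm1_metrics(lam=800.0, mu=1000.0)  # messages/s (assumed rates)
    print(f"utilization={rho:.0%}, mean occupancy={L:.1f}, "
          f"mean delay={W * 1e3:.1f} ms")
    ```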

  12. Non-equilibrium assembly of microtubules: from molecules to autonomous chemical robots.

    PubMed

    Hess, H; Ross, Jennifer L

    2017-09-18

    Biological systems have evolved to harness non-equilibrium processes from the molecular to the macro scale. It is currently a grand challenge of chemistry, materials science, and engineering to understand and mimic biological systems that have the ability to autonomously sense stimuli, process these inputs, and respond by performing mechanical work. New chemical systems are responding to the challenge and form the basis for future responsive, adaptive, and active materials. In this article, we describe a particular biochemical-biomechanical network based on the microtubule cytoskeletal filament - itself a non-equilibrium chemical system. We trace the non-equilibrium aspects of the system from molecules to networks and describe how the cell uses this system to perform active work in essential processes. Finally, we discuss how microtubule-based engineered systems can serve as testbeds for autonomous chemical robots composed of biological and synthetic components.

  13. Kinetic models for nitrogen inhibition in ANAMMOX process on deammonification system

    USDA-ARS?s Scientific Manuscript database

    The performance of the deammonification process depends on the microbial activity of ammonia oxidizing bacteria (AOB) and ANAMMOX bacteria, and the autotrophic organisms involved in this process have different preferences for substrate, that may cause inhibition or imbalance of the system. The aim o...

  14. Two improved coherent optical feedback systems for optical information processing

    NASA Technical Reports Server (NTRS)

    Lee, S. H.; Bartholomew, B.; Cederquist, J.

    1976-01-01

    Coherent optical feedback systems are Fabry-Perot interferometers modified to perform optical information processing. Two new systems based on plane parallel and confocal Fabry-Perot interferometers are introduced. The plane parallel system can be used for contrast control, intensity level selection, and image thresholding. The confocal system can be used for image restoration and solving partial differential equations. These devices are simpler and less expensive than previous systems. Experimental results are presented to demonstrate their potential for optical information processing.

  15. NTP comparison process

    NASA Technical Reports Server (NTRS)

    Corban, Robert

    1993-01-01

    The systems engineering process for the concept definition phase of the program involves requirements definition, system definition, and consistent concept definition. The requirements definition process involves obtaining a complete understanding of the system requirements based on customer needs, mission scenarios, and nuclear thermal propulsion (NTP) operating characteristics. A system functional analysis is performed to provide comprehensive traceability and verification of top-level requirements down to detailed system specifications, and provides significant insight into the measures of system effectiveness to be utilized in system evaluation. The second key element in the process is the definition of system concepts to meet the requirements. This part of the process involves engine system and reactor contractor teams developing alternative NTP system concepts that can be evaluated against specific attributes, as well as a reference configuration against which to compare system benefits and merits. Quality function deployment (QFD), as an excellent tool within Total Quality Management (TQM) techniques, can provide the required structure and a link to the voice of the customer in establishing critical system qualities and their relationships. The third element of the process is the consistent performance comparison. The comparison process involves validating developed concept data and quantifying system merits through analysis, computer modeling, simulation, and rapid prototyping of the proposed high-risk NTP subsystems. The maximum possible amount of quantitative data will be developed and/or validated for use in the QFD evaluation matrix. If, upon evaluation, a new concept or its associated subsystems are determined to have substantial merit, those features will be incorporated into the reference configuration for subsequent system definition and comparison efforts.

  16. Attributes and Behaviors of Performance-Centered Systems.

    ERIC Educational Resources Information Center

    Gery, Gloria

    1995-01-01

    Examines attributes, characteristics, and behaviors of performance-centered software packages that are emerging in the consumer software marketplace and compares them with large-scale systems software being designed by internal information systems staffs and vendors of large-scale software designed for financial, manufacturing, processing, and…

  17. Computer program determines performance efficiency of remote measuring systems

    NASA Technical Reports Server (NTRS)

    Merewether, E. K.

    1966-01-01

    Computer programs control and evaluate instrumentation system performance for numerous rocket engine test facilities and prescribe calibration and maintenance techniques to maintain the systems within process specifications. Similar programs can be written for other test equipment in an industry such as the petrochemical industry.

  18. Making Health System Performance Measurement Useful to Policy Makers: Aligning Strategies, Measurement and Local Health System Accountability in Ontario

    PubMed Central

    Veillard, Jeremy; Huynh, Tai; Ardal, Sten; Kadandale, Sowmya; Klazinga, Niek S.; Brown, Adalsteinn D.

    2010-01-01

    This study examined the experience of the Ontario Ministry of Health and Long-Term Care in enhancing its stewardship and performance management role by developing a health system strategy map and a strategy-based scorecard through a process of policy reviews and expert consultations, and linking them to accountability agreements. An evaluation of the implementation and of the effects of the policy intervention has been carried out through direct policy observation over three years, document analysis, interviews with decision-makers and systematic discussion of findings with other authors and external reviewers. Cascading strategies at health and local health system levels were identified, and a core set of health system and local health system performance indicators was selected and incorporated into accountability agreements with the Local Health Integration Networks. Despite the persistence of such challenges as measurement limitations and lack of systematic linkage to decision-making processes, these activities helped to strengthen substantially the ministry's performance management function. PMID:21286268

  19. A Study of Novice Systems Analysis Problem Solving Behaviors Using Protocol Analysis

    DTIC Science & Technology

    1992-09-01

    conducted. Each subject was given the same task to perform. The task involved a case study (Appendix B) of a utility company’s customer order processing system...behavior (Ramesh, 1989). The task was to design a customer order processing system that utilized a centralized telephone answering service center...of the utility company’s customer order processing system that was developed based on information obtained by a large systems consulting firm during

  20. Low energy physical activity recognition system on smartphones.

    PubMed

    Soria Morillo, Luis Miguel; Gonzalez-Abril, Luis; Ortega Ramirez, Juan Antonio; de la Concepcion, Miguel Angel Alvarez

    2015-03-03

    An innovative approach to physical activity recognition based on the use of discrete variables obtained from accelerometer sensors is presented. The system first performs a discretization process for each variable, which allows efficient recognition of activities performed by users using as little energy as possible. To this end, an innovative discretization and classification technique is presented based on the χ2 distribution. Furthermore, the entire recognition process is executed on the smartphone, which determines not only the activity performed, but also the frequency at which it is carried out. These techniques and the new classification system presented reduce energy consumption caused by the activity monitoring system. The energy saved increases smartphone usage time to more than 27 h without recharging while maintaining accuracy.
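
    The discretize-then-classify idea can be sketched as follows. This is a heavily simplified illustration: fixed bin edges and a minimum chi-squared-distance rule stand in for the paper's actual χ2-based discretizer and classifier, and all data are synthetic:

    ```python
    # Simplified sketch: bin accelerometer features, learn per-activity
    # frequency profiles, label a new window by smallest chi-squared distance.
    import numpy as np

    N_BINS = 8
    EDGES = np.linspace(-2.0, 2.0, N_BINS - 1)   # fixed bin edges (assumed)

    def profile(windows):
        """Histogram of discretized feature values for one activity class."""
        bins = np.digitize(windows.ravel(), EDGES)
        counts = np.bincount(bins, minlength=N_BINS).astype(float)
        return (counts + 1) / (counts.sum() + N_BINS)   # Laplace smoothing

    def chi2_distance(p, q):
        return float(((p - q) ** 2 / (p + q)).sum())

    rng = np.random.default_rng(2)
    train = {  # synthetic accelerometer-magnitude windows per activity
        "walking": rng.normal(0.0, 1.0, (50, 64)),
        "running": rng.normal(0.8, 1.2, (50, 64)),
        "still":   rng.normal(-0.5, 0.2, (50, 64)),
    }
    profiles = {label: profile(w) for label, w in train.items()}

    new_window = rng.normal(0.8, 1.2, 64)          # unlabeled sample
    q = profile(new_window[None, :])
    label = min(profiles, key=lambda a: chi2_distance(profiles[a], q))
    print("predicted activity:", label)
    ```

    Because both binning and the distance computation are integer-and-lookup friendly, this style of classifier keeps the per-window energy cost low, which is the point the abstract emphasizes.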

  1. PERFORMANCE EVALUATION AT A LONG-TERM FOOD PROCESSING LAND TREATMENT SITE

    EPA Science Inventory

    The objective of this project was to determine the performance of a full scale, operating overland flow land (GEL) treatment system treating nonhazardous waste. Performance was evaluated in terms of treatment of the applied waste and the environmental impact of the system, partic...

  2. Hyperswitch communication network

    NASA Technical Reports Server (NTRS)

    Peterson, J.; Pniel, M.; Upchurch, E.

    1991-01-01

    The Hyperswitch Communication Network (HCN) is a large-scale parallel computer prototype being developed at JPL, and commercial versions of the HCN computer are planned. The HCN computer being designed is a message-passing multiple instruction multiple data (MIMD) computer, and offers many advantages in price-performance ratio, reliability and availability, and manufacturing over traditional uniprocessors and bus-based multiprocessors. The design of the HCN operating system provides a uniquely flexible environment that combines both parallel processing and distributed processing. This programming paradigm can achieve a balance among the following competing factors: performance in processing and communications, user friendliness, and fault tolerance. The prototype is being designed to accommodate a maximum of 64 state-of-the-art microprocessors. The HCN is classified as a distributed supercomputer. The HCN system is described, and the performance/cost analysis and other competing factors within the system design are reviewed.

  3. Conversion of Questionnaire Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Danny H; Elwood Jr, Robert H

    During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness. So, the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near-zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log normal scale typically used in human error analysis (see A.D. Swain and H.E. Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic event risk of failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data is absent.
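
    A minimal sketch of the conversion step might look like the following. The adjectival-to-probability anchors (one decade of risk per grade) are illustrative assumptions in the spirit of the cited HRA scale, not the survey's official mapping:

    ```python
    # Sketch: convert adjectival MC&A ratings to basic-event failure
    # probabilities and combine them through a toy OR gate.
    import math

    # One decade of risk per performance grade (assumed anchors)
    RATING_TO_PFAIL = {
        "perfect":           1e-4,
        "well":              1e-3,
        "adequate":          1e-2,
        "needs improvement": 1e-1,
        "not performed":     1.0,   # task in a state of failure
    }

    def basic_event_probability(rating: str) -> float:
        return RATING_TO_PFAIL[rating.lower()]

    def or_gate(probs):
        """Failure probability of an OR gate over independent basic events."""
        p_ok = math.prod(1.0 - p for p in probs)
        return 1.0 - p_ok

    ratings = ["well", "adequate", "needs improvement"]
    probs = [basic_event_probability(r) for r in ratings]
    print("basic events:", probs)
    print("OR-gate failure probability:", round(or_gate(probs), 4))
    ```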

  4. AOIPS - An interactive image processing system. [Atmospheric and Oceanic Information Processing System

    NASA Technical Reports Server (NTRS)

    Bracken, P. A.; Dalton, J. T.; Quann, J. J.; Billingsley, J. B.

    1978-01-01

    The Atmospheric and Oceanographic Information Processing System (AOIPS) was developed to help applications investigators perform required interactive image data analysis rapidly and to eliminate the inefficiencies and problems associated with batch operation. This paper describes the configuration and processing capabilities of AOIPS and presents unique subsystems for displaying, analyzing, storing, and manipulating digital image data. Applications of AOIPS to research investigations in meteorology and earth resources are featured.

  5. Extraction of Data from a Hospital Information System to Perform Process Mining.

    PubMed

    Neira, Ricardo Alfredo Quintano; de Vries, Gert-Jan; Caffarel, Jennifer; Stretton, Erin

    2017-01-01

    The aim of this work is to share our experience in relevant data extraction from a hospital information system in preparation for a research study using process mining techniques. The steps performed were: research definition, mapping the normative processes, identification of the table and field names of the database, and extraction of data. We then offer lessons learned during the data extraction phase. Any errors made in the extraction phase will propagate and have implications for subsequent analyses. Thus, it is essential to take the time needed and devote sufficient attention to detail to perform all activities with the goal of ensuring high quality of the extracted data. We hope this work will be informative for other researchers planning and executing data extraction for process mining research studies.
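
    The end product of such an extraction is an event log with three essential attributes: case identifier, activity, and timestamp. A small pandas sketch (with hypothetical table and column names) shows the shaping and a basic quality check:

    ```python
    # Sketch: shape extracted hospital-database rows into the event-log format
    # process mining expects. Column names are hypothetical.
    import pandas as pd

    raw = pd.DataFrame({
        "patient_id": [101, 101, 101, 102, 102],
        "dept_code":  ["ER", "LAB", "ER", "ER", "RAD"],
        "event_time": ["2016-03-01 08:05", "2016-03-01 09:40",
                       "2016-03-01 11:20", "2016-03-02 14:10",
                       "2016-03-02 15:45"],
    })

    log = (raw.rename(columns={"patient_id": "case_id",
                               "dept_code": "activity",
                               "event_time": "timestamp"})
              .assign(timestamp=lambda d: pd.to_datetime(d["timestamp"]))
              .sort_values(["case_id", "timestamp"])   # order events within cases
              .reset_index(drop=True))

    # Quality check before analysis: no missing essential attributes
    assert not log[["case_id", "activity", "timestamp"]].isna().any().any()
    print(log)
    ```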

  6. Cogeneration Technology Alternatives Study (CTAS). Volume 6: Computer data. Part 2: Residual-fired nocogeneration process boiler

    NASA Technical Reports Server (NTRS)

    Knightly, W. F.

    1980-01-01

    Computer-generated data on the performance of the cogeneration energy conversion systems are presented. Performance parameters include fuel consumption and savings, capital costs, economics, and emissions of residual-fired process boilers.

  7. The effect of processing code, response modality and task difficulty on dual task performance and subjective workload in a manual system

    NASA Technical Reports Server (NTRS)

    Liu, Yili; Wickens, Christopher D.

    1987-01-01

    This paper reports on the first experiment of a series studying the effect of task structure and difficulty demand on time-sharing performance and workload in both automated and corresponding manual systems. The experimental task involves manual control time-shared with spatial and verbal decision tasks at two levels of difficulty and with two modes of response (voice or manual). The results provide strong evidence that tasks and processes competing for common processing resources are time-shared less effectively and have higher workload than tasks competing for separate resources. Subjective measures and the structure of multiple resources are used in conjunction to predict dual-task performance. The evidence comes from both single-task and dual-task performance.

  8. Problems in characterizing barrier performance

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1988-01-01

    The barrier is a synchronization construct which is useful in separating a parallel program into parallel sections which are executed in sequence. The completion of a barrier requires cooperation among all executing processes. This requirement not only introduces the 'wait for the slowest process' delay inherent in the definition of the synchronization, but also has implications for the efficient implementation and measurement of barrier performance in different systems. Types of barrier implementation and their relationship to different multiprocessor environments are described. Then the problem of measuring the performance of barrier implementations on a specific machine architecture is discussed. The fact that the barrier synchronization requires the cooperation of all processes makes the problem of performance measurement similarly global. Making non-intrusive measurements of sufficient accuracy can be tricky on systems offering only rudimentary measurement tools.
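
    The measurement difficulty the text describes can be made concrete with a simple timing loop over a barrier. The sketch below uses Python's threading.Barrier purely for illustration; absolute numbers from it are not meaningful for the multiprocessors discussed (Python's global interpreter lock alone distorts them), but the structure of the measurement carries over:

    ```python
    # Sketch: estimate per-episode barrier cost by looping all threads over a
    # barrier and dividing total elapsed time by the iteration count.
    import threading
    import time

    N_THREADS = 4
    ITERATIONS = 10_000
    barrier = threading.Barrier(N_THREADS)

    def worker():
        for _ in range(ITERATIONS):
            barrier.wait()   # all threads must arrive before any proceeds

    threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start

    print(f"~{elapsed / ITERATIONS * 1e6:.1f} us per barrier episode "
          f"({N_THREADS} threads)")
    ```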

  9. 40 CFR 420.14 - New source performance standards (NSPS).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... provided for process wastewaters from coke oven gas wet desulfurization systems, but only to the extent... wastewaters from other wet air pollution control systems (except those from coal charging and coke pushing emission controls), coal tar processing operations and coke plant groundwater remediation systems, but only...

  10. 40 CFR 420.14 - New source performance standards (NSPS).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... provided for process wastewaters from coke oven gas wet desulfurization systems, but only to the extent... wastewaters from other wet air pollution control systems (except those from coal charging and coke pushing emission controls), coal tar processing operations and coke plant groundwater remediation systems, but only...

  11. 40 CFR 420.14 - New source performance standards (NSPS).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... provided for process wastewaters from coke oven gas wet desulfurization systems, but only to the extent... wastewaters from other wet air pollution control systems (except those from coal charging and coke pushing emission controls), coal tar processing operations and coke plant groundwater remediation systems, but only...

  12. 40 CFR 420.14 - New source performance standards (NSPS).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... provided for process wastewaters from coke oven gas wet desulfurization systems, but only to the extent... wastewaters from other wet air pollution control systems (except those from coal charging and coke pushing emission controls), coal tar processing operations and coke plant groundwater remediation systems, but only...

  13. 40 CFR 420.14 - New source performance standards (NSPS).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... provided for process wastewaters from coke oven gas wet desulfurization systems, but only to the extent... wastewaters from other wet air pollution control systems (except those from coal charging and coke pushing emission controls), coal tar processing operations and coke plant groundwater remediation systems, but only...

  14. High performance embedded system for real-time pattern matching

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.

    2017-02-01

    In this paper we present an innovative, high-performance embedded system for real-time pattern matching. The system is based on the evolution of hardware and algorithms developed for High Energy Physics, specifically for extremely fast pattern matching in the tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing: it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom-designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post-processing algorithms (e.g., pixel clustering) are also implemented on the FPGA. The pattern matching can be executed in a 2D or 3D space, on black-and-white or grayscale images, depending on the application, which increases the processing requirements of the system exponentially. We present the firmware implementation of the training and pattern matching algorithms, with performance and results on a latest-generation Xilinx Kintex UltraScale FPGA device.
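
    As a rough illustration of the associative-memory style of matching described above (a sketch only, not the authors' AM chip or FPGA firmware), the following Python fragment slides windows over a binarized image and compares them against a small bank of reference patterns; the pattern bank, window size, and mismatch tolerance are all hypothetical.

        # Toy associative-memory pattern matching: slide a window over a
        # binary image and report locations matching a stored pattern.
        # Pattern bank and tolerance are illustrative assumptions only.
        import numpy as np

        def match_patterns(image, bank, max_mismatches=1):
            """Return (row, col, pattern_id) for windows matching the bank."""
            ph, pw = bank[0].shape
            hits = []
            for r in range(image.shape[0] - ph + 1):
                for c in range(image.shape[1] - pw + 1):
                    window = image[r:r + ph, c:c + pw]
                    for pid, pat in enumerate(bank):
                        # Hamming-style comparison against the stored pattern
                        if np.count_nonzero(window != pat) <= max_mismatches:
                            hits.append((r, c, pid))
            return hits

        bank = [np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])]  # a "+" contour
        image = np.zeros((8, 8), dtype=int)
        image[2:5, 2:5] = bank[0]
        print(match_patterns(image, bank))  # [(2, 2, 0)]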

  15. Visibility into the Work: TQM Work Process Analysis with HPT and ISD.

    ERIC Educational Resources Information Center

    Beagles, Charles A.; Griffin, Steven L.

    2003-01-01

    Discusses the use of total quality management (TQM), work process flow diagrams, and ISD (instructional systems development) tools with HPT (human performance technology) to address performance gaps in the Veterans Benefits Administration (VBA). Describes performance goals, which were to improve accuracy and reduce backlog of claim files. (LRW)

  16. Requirements Flowdown for Prognostics and Health Management

    NASA Technical Reports Server (NTRS)

    Goebel, Kai; Saxena, Abhinav; Roychoudhury, Indranil; Celaya, Jose R.; Saha, Bhaskar; Saha, Sankalita

    2012-01-01

    Prognostics and Health Management (PHM) principles have considerable promise to change the lifecycle-cost game for engineering systems at high safety levels by providing a reliable estimate of future system states. This estimate is key for planning and decision making in an operational setting. While technology solutions have made considerable advances, the tie-in to the systems engineering process is lagging behind, which delays the fielding of PHM-enabled systems. The derivation of specifications from high-level requirements for algorithm performance, needed to ensure quality predictions, is not well developed. From an engineering perspective, some key parameters driving the requirements for prognostics performance include: (1) the maximum allowable probability of failure (PoF) of the prognostic system, to bound the risk of losing an asset; (2) tolerable limits on proactive maintenance, to minimize the missed opportunity of asset usage; (3) lead time, to specify the amount of advance warning needed for actionable decisions; and (4) required confidence, to specify when a prognosis is good enough to be used. This paper takes a systems engineering view of the requirements specification process and presents a method for the flowdown process. A case study based on an electric Unmanned Aerial Vehicle (e-UAV) scenario demonstrates how top-level requirements for performance, cost, and safety flow down to the health management level and specify quantitative requirements for prognostic algorithm performance.
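
    To make those four requirement drivers concrete, the hedged Python sketch below checks a remaining-useful-life (RUL) prediction against quantitative limits of the kind the flowdown would produce; the threshold values and the sampled RUL distribution are invented for illustration and are not taken from the paper's e-UAV case study.

        # Check an RUL prediction against illustrative prognostic
        # requirements: lead time, confidence, and probability of failure
        # (PoF) before the planned action. All numbers are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        rul_samples = rng.normal(loc=120.0, scale=15.0, size=10_000)  # minutes

        required_lead_time = 60.0   # advance warning needed for action
        required_confidence = 0.95  # fraction of RUL mass beyond lead time
        max_pof = 0.05              # tolerable probability of early failure

        pof = np.mean(rul_samples <= required_lead_time)
        confidence = 1.0 - pof

        print(f"PoF before lead time: {pof:.3f} (limit {max_pof})")
        print(f"Confidence: {confidence:.3f} (required {required_confidence})")
        print("requirement met" if pof <= max_pof and
              confidence >= required_confidence else "requirement NOT met")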

  17. Performance and Reliability Optimization for Aerospace Systems subject to Uncertainty and Degradation

    NASA Technical Reports Server (NTRS)

    Miller, David W.; Uebelhart, Scott A.; Blaurock, Carl

    2004-01-01

    This report summarizes work performed by the Space Systems Laboratory (SSL) for NASA Langley Research Center in the field of performance optimization for systems subject to uncertainty. The objective of the research is to develop design methods and tools for the aerospace vehicle design process which take into account lifecycle uncertainties. It recognizes that discrepancies between the predictions of integrated models and data collected from the system in its operational environment are unavoidable. Given the presence of uncertainty, the goal of this work is to develop means of identifying critical sources of uncertainty and to combine these with the analytical tools used with integrated modeling. In this manner, system uncertainty analysis becomes part of the design process and can motivate redesign. The specific program objectives were: 1. To incorporate uncertainty modeling, propagation, and analysis into the integrated (controls, structures, payloads, disturbances, etc.) design process to derive the error bars associated with performance predictions. 2. To apply modern optimization tools to guide the expenditure of funds in a way that most cost-effectively improves the lifecycle productivity of the system by enhancing subsystem reliability and redundancy. The results from the second program objective are described separately. This report describes the work and results for the first objective: uncertainty modeling, propagation, and synthesis with integrated modeling.

  18. Improving educational objectives of the Industrial and Management Systems Engineering programme at Kuwait University

    NASA Astrophysics Data System (ADS)

    Aldowaisan, Tariq; Allahverdi, Ali

    2016-05-01

    This paper describes the process of developing programme educational objectives (PEOs) for the Industrial and Management Systems Engineering programme at Kuwait University, and the process of deploying these PEOs. Input from the programme's four constituencies (faculty, students, alumni, and employers) is incorporated in the development and updating of the PEOs. For each PEO, an assessment process is employed in which performance measures are defined along with target attainment levels. Results from the assessment tools are compared with the target attainment levels to measure performance with regard to the PEOs. The assessment indicates that the results meet or exceed the target attainment levels of the PEOs' performance measures.

  19. Performance Steel Castings

    DTIC Science & Technology

    2012-09-30

    Development of Sand Properties 103 Advanced Modeling Dataset.. 105 High Strength Low Alloy (HSLA) Steels 107 Steel Casting and Engineering Support...to achieve the performance goals required for new systems. The dramatic reduction in weight and increase in capability will require high performance...for improved weapon system reliability. SFSA developed innovative casting design and manufacturing processes for high performance parts. SFSA is

  20. Formal implementation of a performance evaluation model for the face recognition system.

    PubMed

    Shin, Yong-Nyuo; Kim, Jason; Lee, Yong-Jun; Shin, Woochang; Choi, Jin-Young

    2008-01-01

    Due to its usability, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. Reported recognition rates of commercialized face recognition systems cannot be accepted as official recognition rates, as they are based on assumptions that favor the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for the biometric recognition system, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed evaluations objectively by providing guidelines for the design and implementation of a performance evaluation system, formalizing the performance test process.
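
    The abstract does not spell out the model's internals, but a biometric performance test typically reduces to error rates computed from genuine and impostor score distributions; the sketch below shows that standard calculation (false accept and false reject rates swept over thresholds) on fabricated scores.

        # Standard biometric error rates from match scores: false accept
        # rate (FAR) on impostor scores and false reject rate (FRR) on
        # genuine scores. The scores here are synthetic examples.
        import numpy as np

        genuine = np.array([0.91, 0.88, 0.95, 0.79, 0.85, 0.92])
        impostor = np.array([0.30, 0.55, 0.42, 0.61, 0.25, 0.48])

        for t in (0.5, 0.7, 0.8):
            far = np.mean(impostor >= t)  # impostors wrongly accepted
            frr = np.mean(genuine < t)    # genuine users wrongly rejected
            print(f"threshold={t:.1f}  FAR={far:.2f}  FRR={frr:.2f}")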

  1. Measurement of health system performance at district level: A study protocol

    PubMed Central

    Sharma, Atul; Prinja, Shankar; Aggarwal, Arun Kumar

    2018-01-01

    Background: Limited efforts have been observed in low- and middle-income countries to undertake health system performance assessment at the district level. The absence of a comprehensive data collection tool and the lack of a standardised single summary measure defining overall performance are some of the main problems. The present study has been undertaken to develop a summary composite health system performance index at the district level. Methods: A broad range of indicators covering all six domains of the building-block framework were finalized by an expert panel. The domains were classified into twenty sub-domains, with 70 input and process indicators to measure performance. Seven sub-domains for assessing health system outputs and outcomes were identified, with a total of 28 indicators. Districts in Haryana state in north India were selected for the study. Primary and secondary data will be collected from 378 health facilities and from district and state health directorate headquarters. Indicators will be normalized and aggregated to generate a composite performance index at the district level. Domain-specific scores will present the quality of individual building-block domains in the public health system. The robustness of the results will be checked using sensitivity analysis. Expected impact for public health: The study presents a methodology for comprehensive assessment of all health system domains on the basis of input, process, output and outcome indicators, which has not previously been reported from India. Generation of this index will help identify policy and implementation areas of concern and point towards potential solutions. The results may also help in understanding the relationships between individual building blocks and their sub-components. Significance for public health: Measuring the performance of a health system is important to understand progress and challenges, and to create systems that are efficient, equitable and patient-focused. However, very few assessments of this nature have been reported from low- and middle-income countries, especially at the district level, mainly because of methodological challenges. This study presents a methodology for comprehensive assessment of all domains of the health system and generation of a composite Health System Performance Index on the basis of input, process, output and outcome indicators. It will help identify policy and implementation problems worthy of attention and point towards potential solutions to health system bottlenecks resulting in poor performance. The results may also help to better understand the relationships between individual building blocks and their sub-components and the overall performance of the health system. PMID:29441330
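
    The normalize-then-aggregate step described in the protocol can be sketched as follows; the indicator values, domain grouping, and equal weighting are placeholders, since the study defines its own indicator set and leaves weighting to its methodology.

        # Min-max normalize each indicator across districts, average into
        # domain scores, then into a composite index per district. Data,
        # domain grouping, and equal weights are illustrative assumptions;
        # direction adjustment for lower-is-better indicators is omitted.
        import numpy as np

        # rows = districts, columns = indicators
        raw = np.array([[120., 0.62, 45.],
                        [ 95., 0.75, 60.],
                        [140., 0.58, 30.]])
        domain_of = [0, 0, 1]  # indicator -> domain index (two toy domains)

        norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
        n_domains = max(domain_of) + 1
        domain_scores = np.stack(
            [norm[:, [i for i, d in enumerate(domain_of) if d == k]].mean(axis=1)
             for k in range(n_domains)], axis=1)
        composite = domain_scores.mean(axis=1)  # equal domain weights
        print(composite)  # one summary score per district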

  2. Proof-of-concept automation of propellant processing

    NASA Technical Reports Server (NTRS)

    Ramohalli, Kumar; Schallhorn, P. A.

    1989-01-01

    For space-based propellant production, automation of the process is needed. Currently, all phases of terrestrial production involve some form of human interaction. A mixer was acquired to help perform the automation tasks. A heating system to be used with the mixer was designed, built, and installed. Tests performed on the heating system verify the design criteria. An IBM PS/2 personal computer was acquired for the future automation work. It is hoped that the mixing process itself will be automated. This is a concept demonstration task, proving that propellant production can be automated reliably.

  3. Implementation of Statistical Process Control: Evaluating the Mechanical Performance of a Candidate Silicone Elastomer Docking Seal

    NASA Technical Reports Server (NTRS)

    Oravec, Heather Ann; Daniels, Christopher C.

    2014-01-01

    The National Aeronautics and Space Administration has been developing a novel docking system to meet the requirements of future exploration missions to low-Earth orbit and beyond. A dynamic gas pressure seal is located at the main interface between the active and passive mating components of the new docking system. This seal is designed to operate in the harsh space environment, but it must also perform within strict loading requirements while maintaining an acceptable leak rate. In this study, a candidate silicone elastomer seal was designed, and multiple subscale test articles were manufactured for evaluation purposes. The force required to fully compress each test article at room temperature was quantified and found to be below the maximum allowable load for the docking system. However, a significant amount of scatter was observed in the test results. Due to the stochastic nature of the mechanical performance of this candidate docking seal, a statistical process control technique was implemented to isolate unusual compression behavior from typical mechanical performance. The results of this statistical analysis indicated a lack of process control, suggesting a variation in the manufacturing phase of the process. Further investigation revealed that changes in the manufacturing molding process had occurred which may have influenced the mechanical performance of the seal. This knowledge improves the chance that this and future space seals will satisfy or exceed design specifications.
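
    A minimal version of the control-charting step, assuming an individuals (X) chart with 3-sigma limits estimated from the average moving range; the compression-load values below are fabricated stand-ins for the test-article data.

        # Individuals control chart: estimate sigma from the average moving
        # range (MR-bar / 1.128) and flag points outside 3-sigma limits.
        # Load values are fabricated; units are notional (e.g., kN).
        import numpy as np

        loads = np.array([4.1, 4.3, 4.0, 4.2, 6.5, 4.1, 4.4, 4.2, 3.9, 4.3])

        center = loads.mean()
        mr_bar = np.mean(np.abs(np.diff(loads)))   # average moving range
        sigma = mr_bar / 1.128                     # d2 constant for n = 2
        ucl, lcl = center + 3 * sigma, center - 3 * sigma

        for i, x in enumerate(loads):
            flag = "OUT OF CONTROL" if not (lcl <= x <= ucl) else ""
            print(f"sample {i}: {x:.1f}  {flag}")
        print(f"center={center:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}")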

  4. Intercomparison of the community multiscale air quality model and CALGRID using process analysis.

    PubMed

    O'Neill, Susan M; Lamb, Brian K

    2005-08-01

    This study was designed to examine the similarities and differences between two advanced photochemical air quality modeling systems: EPA Models-3/CMAQ and CALGRID/CALMET. Both modeling systems were applied to an ozone episode that occurred along the I-5 urban corridor in western Washington and Oregon during July 11-14, 1996. Both models employed the same modeling domain and used the same detailed gridded emission inventory. The CMAQ model was run using both the CB-IV and RADM2 chemical mechanisms, while CALGRID was used with the SAPRC-97 chemical mechanism. Output from the Mesoscale Meteorological Model (MM5), employed with observational nudging, was used in both models. The two modeling systems, representing three chemical mechanisms and two sets of meteorological inputs, were evaluated in terms of statistical performance measures for both 1- and 8-h average observed ozone concentrations. The results showed that the different versions of the systems were more similar than different, and all versions performed well in the Portland region and downwind of Seattle but performed poorly in the more rural region north of Seattle. Improving the meteorological input into the CALGRID/CALMET system with planetary boundary layer (PBL) parameters from the Models-3/CMAQ meteorology preprocessor (MCIP) improved the performance of the CALGRID/CALMET system. The 8-h ensemble case was often the best performer of all the cases, indicating that the models perform better over longer analysis periods. The 1-h ensemble case, derived from all runs, was not necessarily an improvement over the five individual cases, but the standard deviation about the mean provided a measure of overall modeling uncertainty. Process analysis was applied to examine the contribution of the individual processes to the species conservation equation. The process analysis results indicated that the two modeling systems arrive at similar solutions by very different means. Transport rates are faster and exhibit greater fluctuations in the CMAQ cases than in the CALGRID cases, which leads to different placement of the urban ozone plumes. The CALGRID cases, which rely on the SAPRC-97 chemical mechanism, exhibited a greater diurnal production/loss cycle of ozone concentrations per hour than either the RADM2 or CB-IV chemical mechanisms in the CMAQ cases. These results demonstrate the need for specialized process field measurements to confirm whether ozone is being modeled with valid processes.
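
    The abstract does not list the statistical performance measures used; a common minimal set for ozone model evaluation (mean bias, normalized mean bias, root-mean-square error) can be computed as in this sketch, with made-up paired observation/model values.

        # Common air-quality model evaluation statistics over paired
        # observed/modeled hourly ozone (ppb). Values are made up.
        import numpy as np

        obs = np.array([62., 71., 80., 55., 90., 68.])
        mod = np.array([58., 75., 72., 60., 84., 70.])

        bias = np.mean(mod - obs)                  # mean bias
        nmb = np.sum(mod - obs) / np.sum(obs)      # normalized mean bias
        rmse = np.sqrt(np.mean((mod - obs) ** 2))  # root-mean-square error
        print(f"MB={bias:.2f} ppb  NMB={100*nmb:.1f}%  RMSE={rmse:.2f} ppb")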

  5. Analysis and optimization of solid oxide fuel cell-based auxiliary power units using a generic zero-dimensional fuel cell model

    NASA Astrophysics Data System (ADS)

    Göll, S.; Samsun, R. C.; Peters, R.

    Fuel-cell-based auxiliary power units can help to reduce fuel consumption and emissions in transportation. For this application, the combination of solid oxide fuel cells (SOFCs) with upstream fuel processing by autothermal reforming (ATR) is seen as a highly favorable configuration. Notwithstanding the necessity to improve each single component, an optimized architecture of the fuel cell system as a whole must be achieved. To enable model-based analyses, a system-level approach is proposed in which the fuel cell system is modeled as a multi-stage thermo-chemical process using the "flowsheeting" environment PRO/II™. Therein, the SOFC stack and the ATR are characterized entirely by corresponding thermodynamic processes together with global performance parameters. The developed model is then used to achieve an optimal system layout by comparing different system architectures. A system with anode and cathode off-gas recycling was identified to have the highest electric system efficiency. Taking this system as a basis, the potential for further performance enhancement was evaluated by varying four parameters characterizing different system components. Using methods from the design and analysis of experiments, the effects of these parameters and of their interactions were quantified, leading to an overall optimized system with encouraging performance data.

  6. Information processing of earth resources data

    NASA Technical Reports Server (NTRS)

    Zobrist, A. L.; Bryant, N. A.

    1982-01-01

    Current trends in the use of remotely sensed data include integration of multiple data sources of various formats and use of complex models. These trends have placed a strain on information processing systems because an enormous number of capabilities are needed to perform a single application. A solution to this problem is to create a general set of capabilities which can perform a wide variety of applications. General capabilities for the Image-Based Information System (IBIS) are outlined in this report. They are then cross-referenced for a set of applications performed at JPL.

  7. Analytic and heuristic processing influences on adolescent reasoning and decision-making.

    PubMed

    Klaczynski, P A

    2001-01-01

    The normative/descriptive gap is the discrepancy between actual reasoning and traditional standards for reasoning. The relationship between age and the normative/descriptive gap was examined by presenting adolescents with a battery of reasoning and decision-making tasks. Middle adolescents (N = 76) performed closer to normative ideals than early adolescents (N = 66), although the normative/descriptive gap was large for both groups. Correlational analyses revealed that (1) normative responses correlated positively with each other, (2) nonnormative responses were positively interrelated, and (3) normative and nonnormative responses were largely independent. Factor analyses suggested that performance was based on two processing systems. The "analytic" system operates on "decontextualized" task representations and underlies conscious, computational reasoning. The "heuristic" system operates on "contextualized," content-laden representations and produces "cognitively cheap" responses that sometimes conflict with traditional norms. Analytic processing was more clearly linked to age and to intelligence than heuristic processing. Implications for cognitive development, the competence/performance issue, and rationality are discussed.

  8. A performability solution method for degradable nonrepairable systems

    NASA Technical Reports Server (NTRS)

    Furchtgott, D. G.; Meyer, J. F.

    1984-01-01

    The present performability model-solving algorithm identifies performance with 'reward', representing the state behavior of a system S by a finite-state stochastic process and determining reward by means of reward rates that are associated with the states of the base model. A general method is obtained for determining the probability distribution function of the performance (reward) variable, and therefore the performability, of the corresponding system. This is done for bounded utilization periods, and the result is an integral expression which is either analytically or numerically solvable.
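
    Written out in the usual notation for this reward formulation (the symbols here are assumed, following the standard reward-model definition rather than copied from the paper), the performance variable is the reward accumulated by the base-model process X(t) over the bounded utilization period [0, T], and performability is its distribution:

        Y_T = \int_0^T r_{X(t)} \, dt,
        \qquad
        F_{Y_T}(y) = \Pr\{\, Y_T \le y \,\}

    The integral expression mentioned in the abstract is then the evaluation of F_{Y_T} for the finite-state process X(t) and its associated reward rates.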

  9. Oxygen Compatibility Assessment of Components and Systems

    NASA Technical Reports Server (NTRS)

    Stoltzfus, Joel; Sparks, Kyle

    2010-01-01

    Fire hazards are inherent in oxygen systems and a storied history of fires in rocket engine propulsion components exists. To detect and mitigate these fire hazards requires careful, detailed, and thorough analyses applied during the design process. The oxygen compatibility assessment (OCA) process designed by NASA Johnson Space Center (JSC) White Sands Test Facility (WSTF) can be used to determine the presence of fire hazards in oxygen systems and the likelihood of a fire. This process may be used as both a design guide and during the approval process to ensure proper design features and material selection. The procedure for performing an OCA is a structured step-by-step process to determine the most severe operating conditions; assess the flammability of the system materials at the use conditions; evaluate the presence and efficacy of ignition mechanisms; assess the potential for a fire to breach the system; and determine the reaction effect (the potential loss of life, mission, and system functionality as the result of a fire). This process should be performed for each component in a system. The results of each component assessment, and the overall system assessment, should be recorded in a report that can be used in the short term to communicate hazards and their mitigation and to aid in system/component development and, in the long term, to solve anomalies that occur during engine testing and operation.

  10. Failure detection system design methodology. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.

    1980-01-01

    The design of a failure detection and identification (FDI) system consists of designing a robust residual generation process and a high-performance decision-making process. The designs of these two processes are examined separately. Residual generation is based on analytical redundancy. Redundancy relations that are insensitive to modelling errors and noise effects are important for designing robust residual generation processes. The characterization of the concept of analytical redundancy in terms of a generalized parity space provides a framework in which a systematic approach to the determination of robust redundancy relations is developed. The Bayesian approach is adopted for the design of high-performance decision processes. The FDI decision problem is formulated as a Bayes sequential decision problem. Since the optimal decision rule is incomputable, a methodology for designing suboptimal rules is proposed. A numerical algorithm is developed to facilitate the design and performance evaluation of suboptimal rules.
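
    As a hedged illustration of the two design pieces (residual generation from analytical redundancy, then a sequential decision rule), the sketch below uses a trivial scalar model residual and a one-sided CUSUM test in place of the thesis's parity-space and Bayes-sequential formulations; the dynamics, noise levels, and threshold are invented.

        # Toy FDI pipeline: (1) residual = measurement minus model
        # prediction, (2) a CUSUM-like sequential statistic declares a
        # failure when the residual mean shifts. All values are invented.
        import numpy as np

        rng = np.random.default_rng(1)
        n, fault_at = 100, 60
        x = 1.0                       # true (constant) state
        y = x + 0.1 * rng.standard_normal(n)
        y[fault_at:] += 0.4           # sensor bias fault injected at t = 60

        residual = y - x              # the model predicts x exactly here
        drift, threshold, s = 0.1, 1.0, 0.0
        for t, r in enumerate(residual):
            s = max(0.0, s + r - drift)   # one-sided CUSUM update
            if s > threshold:
                print(f"fault declared at t={t} (injected at {fault_at})")
                break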

  11. Maine Facility Research Summary : Dynamic Sign Systems for Narrow Bridges

    DOT National Transportation Integrated Search

    1997-09-01

    This report describes the development of operational surveillance data processing algorithms and software for application to urban freeway systems, conforming to a framework in which data processing is performed in stages: sensor malfunction detectio...

  12. Real-time Medical Emergency Response System: Exploiting IoT and Big Data for Public Health.

    PubMed

    Rathore, M Mazhar; Ahmad, Awais; Paul, Anand; Wan, Jiafu; Zhang, Daqiang

    2016-12-01

    Healthy people are important for any nation's development. The use of Internet of Things (IoT)-based body area networks (BANs) is increasing for continuous monitoring and medical healthcare in order to perform real-time actions in case of emergencies. However, when monitoring the health of all citizens or people in a country, the millions of sensors attached to human bodies generate a massive volume of heterogeneous data, called "Big Data." Processing Big Data and performing real-time actions in critical situations is a challenging task. Therefore, in order to address such issues, we propose a Real-time Medical Emergency Response System that involves IoT-based medical sensors deployed on the human body. Moreover, the proposed system consists of a data analysis building, called the "Intelligent Building," depicted by the proposed layered architecture and implementation model, which is responsible for analysis and decision-making. The data collected from millions of body-attached sensors is forwarded to the Intelligent Building for processing and for performing necessary actions using various units such as collection, Hadoop Processing (HPU), and analysis and decision. The feasibility and efficiency of the proposed system are evaluated by implementing the system on Hadoop using an Ubuntu 14.04 LTS Core i5 machine. Various medical sensory datasets and real-time network traffic are considered for evaluating the efficiency of the system. The results show that the proposed system is capable of efficiently processing WBAN sensory data from millions of users in order to perform real-time responses in case of emergencies.

  13. MFAHP: A novel method on the performance evaluation of the industrial wireless networked control system

    NASA Astrophysics Data System (ADS)

    Wu, Linqin; Xu, Sheng; Jiang, Dezhi

    2015-12-01

    Industrial wireless networked control systems have been widely used, and how to evaluate the performance of the wireless network is of great significance. In this paper, considering the shortcomings of existing performance evaluation methods, a comprehensive network performance evaluation method, multi-index fuzzy analytic hierarchy process (MFAHP), which combines fuzzy mathematics with the traditional analytic hierarchy process (AHP), is presented. The method overcomes the incompleteness and subjectivity of existing evaluations. Experiments show that the method reflects network performance under real conditions. It has a direct guiding role in protocol selection, network cabling, and node setting, and can meet the requirements of different occasions by modifying the underlying parameters.
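
    The classical AHP ingredient of MFAHP can be sketched as follows: derive index weights from a pairwise comparison matrix via its principal eigenvector and check consistency. The comparison judgments below are hypothetical, and the fuzzy-membership layer of the full method is omitted.

        # Classical AHP step used inside MFAHP: weights from the principal
        # eigenvector of a pairwise comparison matrix, plus a consistency
        # ratio check (CR < 0.1). Judgments are hypothetical.
        import numpy as np

        # e.g., delay vs. packet loss vs. throughput
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                              # normalized weights

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
        cr = ci / 0.58                            # random index = 0.58, n = 3
        print(f"weights={np.round(w, 3)}  CR={cr:.3f}")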

  14. Intermediate water recovery system

    NASA Technical Reports Server (NTRS)

    Deckman, G.; Anderson, A. R. (Editor)

    1973-01-01

    A water recovery system for collecting, storing, and processing urine, wash water, and humidity condensates from a crew of three aboard a spacecraft is described. The results of a 30-day test performed on a breadboard system are presented. The intermediate water recovery system produced clear, sterile water with a 96.4 percent recovery rate from the processed urine. Recommendations for improving the system are included.

  15. Advanced Land Imager Assessment System

    NASA Technical Reports Server (NTRS)

    Chander, Gyanesh; Choate, Mike; Christopherson, Jon; Hollaren, Doug; Morfitt, Ron; Nelson, Jim; Nelson, Shar; Storey, James; Helder, Dennis; Ruggles, Tim

    2008-01-01

    The Advanced Land Imager Assessment System (ALIAS) supports radiometric and geometric image processing for the Advanced Land Imager (ALI) instrument onboard NASA's Earth Observing-1 (EO-1) satellite. ALIAS consists of two processing subsystems for radiometric and geometric processing of the ALI's multispectral imagery. The radiometric processing subsystem characterizes and corrects, where possible, radiometric qualities including coherent, impulse, and random noise; signal-to-noise ratios (SNRs); detector operability; gain; bias; saturation levels; striping and banding; and the stability of detector performance. The geometric processing subsystem and analysis capabilities support sensor alignment calibrations, sensor chip assembly (SCA)-to-SCA alignments, and band-to-band alignment, and perform geodetic accuracy assessments, modulation transfer function (MTF) characterizations, and image-to-image characterizations. ALIAS also characterizes and corrects band-to-band registration, and performs systematic precision and terrain correction of ALI images. The system can geometrically correct, and automatically mosaic, the SCA image strips into a seamless, map-projected image. It provides a large database which enables bulk trending for all ALI image data and significant instrument telemetry. Bulk trending consists of two functions: Housekeeping Processing and Bulk Radiometric Processing. The Housekeeping function pulls telemetry and temperature information from the instrument housekeeping files and writes this information to a database for trending. The Bulk Radiometric Processing function writes statistical information from the dark data acquired before and after the Earth imagery, and from the lamp data, to the database for trending. This allows for multi-scene statistical analyses.

  16. Subsonic flight test evaluation of a propulsion system parameter estimation process for the F100 engine

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Gilyard, Glenn B.

    1992-01-01

    Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the thrust estimated by the performance seeking control system and the measured thrust.
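
    The flight estimator couples a compact engine model with an extended Kalman filter; as a hedged stand-in, here is the generic scalar Kalman-filter recursion tracking one slowly drifting deviation parameter from noisy measurements (the dynamics and noise values are invented, not the F100 formulation).

        # Generic scalar Kalman filter tracking a slowly drifting parameter
        # from noisy measurements. All dynamics/noise values are invented
        # stand-ins for the paper's five-parameter EKF formulation.
        import numpy as np

        rng = np.random.default_rng(2)
        n, q, r = 50, 1e-4, 0.04           # steps, process var, measurement var
        truth = np.cumsum(rng.normal(0, np.sqrt(q), n)) + 1.0
        z = truth + rng.normal(0, np.sqrt(r), n)

        x_hat, p = 0.0, 1.0                # initial estimate and covariance
        for zk in z:
            p += q                          # predict (state is a random walk)
            k_gain = p / (p + r)            # Kalman gain
            x_hat += k_gain * (zk - x_hat)  # update with measurement residual
            p *= (1 - k_gain)
        print(f"final estimate {x_hat:.3f} vs truth {truth[-1]:.3f}")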

  17. Process control systems at Homer City coal preparation plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shell, W.P.

    1983-03-01

    An important part of process control engineering is the implementation of the basic control system design through commissioning to routine operation. This is a period when basic concepts can be reviewed and improvements either implemented or recorded for application in future systems. The experience of commissioning the process control systems in the Homer City coal cleaning plant are described and discussed. The current level of operating control performance in individual sections and the overall system are also reported and discussed.

  18. A radar data processing and enhancement system

    NASA Technical Reports Server (NTRS)

    Anderson, K. F.; Wrin, J. W.; James, R.

    1986-01-01

    This report describes the space position data processing system of the NASA Western Aeronautical Test Range. The system is installed at the Dryden Flight Research Facility of NASA Ames Research Center. This operational radar data system (RADATS) provides simultaneous data processing for multiple data inputs and tracking and antenna pointing outputs while performing real-time monitoring, control, and data enhancement functions. Experience in support of the space shuttle and aeronautical flight research missions is described, as well as the automated calibration and configuration functions of the system.

  19. The Need for V&V in Reuse-Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1997-01-01

    V&V is currently performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to an entire domain or product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V and extends these activities into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases and provides motivation for the activities.

  20. Taming Pipelines, Users, and High Performance Computing with Rector

    NASA Astrophysics Data System (ADS)

    Estes, N. M.; Bowley, K. S.; Paris, K. N.; Silva, V. H.; Robinson, M. S.

    2018-04-01

    Rector is a high-performance job management system created by the LROC SOC team to enable processing of thousands of observations and ancillary data products as well as ad-hoc user jobs across a 634 CPU core processing cluster.

  1. Environmental performance of wastewater reuse systems: impact of system boundaries and external conditions.

    PubMed

    Baresel, Christian; Dalgren, Lena; Almemark, Mats; Lazic, Aleksandra

    2016-01-01

    Wastewater reclamation will be a significant part of future water management, and the environmental assessment of various treatment systems for reusing wastewater has become an important research field. The secondary treatment process and on-site sludge handling, especially, are electricity-demanding processes, due to the aeration, pumping, mixing, dewatering, etc. used for operation, and have been identified as the main contributors to many environmental impacts. This study discusses how the environmental performance of reuse treatment systems may be influenced by surrounding conditions. The article illustrates and discusses the importance of factors commonly treated as externalities and as such not included in optimization strategies of reuse systems, but which are necessary for environmentally assessing wastewater reclamation systems. This is illustrated by two upstream and downstream processes: electricity supply and the use of sludge as fertilizer, a practice common in regions considered for wastewater reclamation. The study shows that external conditions can have a larger impact on the overall environmental performance of reuse treatment systems than internal optimizations could compensate for. These results imply that a more holistic environmental assessment of reuse schemes could yield lower environmental impacts, as externalities could be included in measures to reduce the overall impacts.

  2. Detailed requirements document for the Interactive Financial Management System (IFMS), volume 1

    NASA Technical Reports Server (NTRS)

    Dodson, D. B.

    1975-01-01

    The detailed requirements for phase 1 (online fund control, subauthorization accounting, and accounts receivable functional capabilities) of the Interactive Financial Management System (IFMS) are described. This includes information on the following: systems requirements, performance requirements, test requirements, and production implementation. Most of the work is centered on systems requirements, and includes discussions on the following processes: resources authority, allotment, primary work authorization, reimbursable order acceptance, purchase request, obligation, cost accrual, cost distribution, disbursement, subauthorization performance, travel, accounts receivable, payroll, property, edit table maintenance, end-of-year, backup input. Other subjects covered include: external systems interfaces, general inquiries, general report requirements, communication requirements, and miscellaneous. Subjects covered under performance requirements include: response time, processing volumes, system reliability, and accuracy. Under test requirements come test data sources, general test approach, and acceptance criteria. Under production implementation come data base establishment, operational stages, and operational requirements.

  3. GLOBECOM '85 - Global Telecommunications Conference, New Orleans, LA, December 2-5, 1985, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    Various papers on global telecommunications are presented. The general topics addressed include: multiservice integration with optical fibers, multicompany owned telecommunication networks, software quality and reliability, advanced on-board processing, impact of new services and systems on operations and maintenance, analytical studies of protocols for data communication networks, topics in packet radio networking, CCITT No. 7 to support new services, document processing and communication, antenna technology and system aspects in satellite communications. Also considered are: communication systems modelling methodology, experimental integrated local area voice/data nets, spread spectrum communications, motion video at the DS-0 rate, optical and data communications, intelligent work stations, switch performance analysis, novel radio communication systems, wireless local networks, ISDN services, LAN communication protocols, user-system interface, radio propagation and performance, mobile satellite system, software for computer networks, VLSI for ISDN terminals, quality management, man-machine interfaces in switching, and local area network performance.

  4. Timing Recovery Strategies in Magnetic Recording Systems

    NASA Astrophysics Data System (ADS)

    Kovintavewat, Piya

    At some point in a digital communications receiver, the received analog signal must be sampled. Good performance requires that these samples be taken at the right times. The process of synchronizing the sampler with the received analog waveform is known as timing recovery. Conventional timing recovery techniques perform well only when operating at high signal-to-noise ratio (SNR). Nonetheless, iterative error-control codes allow reliable communication at very low SNR, where conventional techniques fail. This paper provides a detailed review of timing recovery strategies based on per-survivor processing (PSP) that are capable of working at low SNR. We also investigate their performance in magnetic recording systems, because magnetic recording is a primary method of storage for a variety of applications, including desktop, mobile, and server systems. Results indicate that the PSP-based timing recovery strategies perform better than the conventional ones and are thus worth employing in magnetic recording systems.

  5. NASA's Evolutionary Xenon Thruster (NEXT) Prototype Model 1R (PM1R) Ion Thruster and Propellant Management System Wear Test Results

    NASA Technical Reports Server (NTRS)

    VanNoord, Jonathan L.; Soulas, George C.; Sovey, James S.

    2010-01-01

    The results of the NEXT wear test are presented. This test was conducted with a 36-cm ion engine (designated PM1R) and an engineering model propellant management system. The thruster operated with beam extraction for a total of 1680 hr and processed 30.5 kg of xenon during the wear test, which included performance testing and some operation with an engineering model power processing unit. A total of 1312 hr was accumulated at full power, 277 hr at low power, and the remainder was at intermediate throttle levels. Overall ion engine performance, which includes thrust, thruster input power, specific impulse, and thrust efficiency, was steady with no indications of performance degradation. The propellant management system performed without incident during the wear test. The ion engine and propellant management system were also inspected following the test with no indication of anomalous hardware degradation from operation.

  6. Extended performance solar electric propulsion thrust system study. Volume 4: Thruster technology evaluation

    NASA Technical Reports Server (NTRS)

    Poeschel, R. L.; Hawthorne, E. I.; Weisman, Y. C.; Frisman, M.; Benson, G. C.; Mcgrath, R. J.; Martinelli, R. M.; Linsenbardt, T. L.; Beattie, J. R.

    1977-01-01

    Several thrust system design concepts were evaluated and compared using the specifications of the most advanced 30 cm engineering model thruster as the technology base. Emphasis was placed on relatively high power missions (60 to 100 kW) such as a Halley's comet rendezvous. The extensions in thruster performance required for the Halley's comet mission were defined and alternative thrust system concepts were designed in sufficient detail for comparing mass, efficiency, reliability, structure, and thermal characteristics. Confirmation testing and analysis of thruster and power processing components were performed, and the feasibility of satisfying extended performance requirements was verified. A baseline design was selected from the alternatives considered, and the design analysis and documentation were refined. The baseline thrust system design features modular construction, conventional power processing, and a concentrator solar array concept and is designed to interface with the Space Shuttle.

  7. An Interactive Graphics Program for Investigating Digital Signal Processing.

    ERIC Educational Resources Information Center

    Miller, Billy K.; And Others

    1983-01-01

    Describes development of an interactive computer graphics program for use in teaching digital signal processing. The program allows students to interactively configure digital systems on a monitor display and observe their system's performance by means of digital plots on the system's outputs. A sample program run is included. (JN)

  8. Ergonomics action research II: a framework for integrating HF into work system design.

    PubMed

    Neumann, W P; Village, J

    2012-01-01

    This paper presents a conceptual framework that can support efforts to integrate human factors (HF) into the work system design process, where improved and cost-effective application of HF is possible. The framework advocates strategies of broad stakeholder participation, linking of performance and health goals, and process focussed change tools that can help practitioners engage in improvements to embed HF into a firm's work system design process. Recommended tools include business process mapping of the design process, implementing design criteria, using cognitive mapping to connect to managers' strategic goals, tactical use of training and adopting virtual HF (VHF) tools to support the integration effort. Consistent with organisational change research, the framework provides guidance but does not suggest a strict set of steps. This allows more adaptability for the practitioner who must navigate within a particular organisational context to secure support for embedding HF into the design process for improved operator wellbeing and system performance. There has been little scientific literature about how a practitioner might integrate HF into a company's work system design process. This paper proposes a framework for this effort by presenting a coherent conceptual framework, process tools, design tools and procedural advice that can be adapted for a target organisation.

  9. An intelligent factory-wide optimal operation system for continuous production process

    NASA Astrophysics Data System (ADS)

    Ding, Jinliang; Chai, Tianyou; Wang, Hongfeng; Wang, Junwei; Zheng, Xiuping

    2016-03-01

    In this study, a novel intelligent factory-wide operation system for a continuous production process is designed to optimise the entire production process, which consists of multiple units; furthermore, this system is developed using process operational data to avoid the complexity of mathematical modelling of the continuous production process. The data-driven approach aims to specify the structure of the optimal operation system; in particular, the operational data of the process are used to formulate each part of the system. In this context, the domain knowledge of process engineers is utilised, and a closed-loop dynamic optimisation strategy, which combines feedback, performance prediction, feed-forward, and dynamic tuning schemes into a framework, is employed. The effectiveness of the proposed system has been verified using industrial experimental results.

  10. System analysis of graphics processor architecture using virtual prototyping

    NASA Astrophysics Data System (ADS)

    Hancock, William R.; Groat, Jeff; Steeves, Todd; Spaanenburg, Henk; Shackleton, John

    1995-06-01

    Honeywell has been actively involved in the definition of the next generation of display processors for military and commercial cockpits. A major concern is how to achieve super graphics workstation performance in avionics applications. Most notable are requirements for low volume, low power, harsh environmental conditions, real-time performance, and low cost. This paper describes the application of VHDL to the system analysis tasks associated with achieving these goals in a cost-effective manner. The paper describes the top-level architecture identified to provide the graphical and video processing power needed to drive future high-resolution display devices and to generate more natural panoramic 3D formats. The major discussion, however, is on the use of VHDL to model the processing elements and customized pipelines needed to realize the architecture and to conduct the complex system tradeoff studies necessary to achieve a cost-effective implementation. New software tools have been developed to allow 'virtual' prototyping in the VHDL environment. This results in a hardware/software codesign using VHDL performance and functional models. This unique architectural tool allows simulation and tradeoffs within a standard and tightly integrated toolset, which eventually will be used to specify and design the entire system from the top-level requirements and system performance down to the lowest-level individual ASICs. New processing elements, algorithms, and standard graphical inputs can be designed, tested, and evaluated without costly hardware prototyping, using the innovative 'virtual' prototyping techniques which are evolving on this project. In addition, virtual prototyping of the display processor does not bind the preliminary design to point solutions as a physical prototype would. When the development schedule is known, one can extrapolate processing element performance and design the system around the most current technology.

  11. Comparative Effects of Antihistamines on Aircrew Mission Effectiveness under Sustained Operations

    DTIC Science & Technology

    1992-06-01

    measures consist mainly of process measures. Process measures are measures of activities used to accomplish the mission and produce the final results...They include task completion times and response variability, and information processing rates as they relate to unique task assignment. Performance...contains process measures that assess the Individual contributions of hardware/software and human components to overall system performance. Measures

  12. Advanced information processing system

    NASA Technical Reports Server (NTRS)

    Lala, J. H.

    1984-01-01

    Design and performance details of the advanced information processing system (AIPS) for fault and damage tolerant data processing on aircraft and spacecraft are presented. AIPS comprises several computers distributed throughout the vehicle and linked by a damage tolerant data bus. Most I/O functions are available to all the computers, which run in a TDMA mode. Each computer performs separate specific tasks in normal operation and assumes other tasks in degraded modes. Redundant software assures that all fault monitoring, logging and reporting are automated, together with control functions. Redundant duplex links and damage-spread limitation provide the fault tolerance. Details of an advanced design of a laboratory-scale proof-of-concept system are described, including functional operations.

  13. High-performance computing with quantum processing units

    DOE PAGES

    Britt, Keith A.; Oak Ridge National Lab.; Humble, Travis S.; ...

    2017-03-01

    The prospects of quantum computing have driven efforts to realize fully functional quantum processing units (QPUs). Recent success in developing proof-of-principle QPUs has prompted the question of how to integrate these emerging processors into modern high-performance computing (HPC) systems. We examine how QPUs can be integrated into current and future HPC system architectures by accounting for functional and physical design requirements. We identify two integration pathways that are differentiated by infrastructure constraints on the QPU and the use cases expected for the HPC system. This includes a tight integration that assumes infrastructure bottlenecks can be overcome as well as a loose integration that assumes they cannot. We find that the performance of both approaches is likely to depend on the quantum interconnect that serves to entangle multiple QPUs. As a result, we also identify several challenges in assessing QPU performance for HPC, and we consider new metrics that capture the interplay between system architecture and the quantum parallelism underlying computational performance.

  14. High-performance computing with quantum processing units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Britt, Keith A.; Oak Ridge National Lab.; Humble, Travis S.

    The prospects of quantum computing have driven efforts to realize fully functional quantum processing units (QPUs). Recent success in developing proof-of-principle QPUs has prompted the question of how to integrate these emerging processors into modern high-performance computing (HPC) systems. We examine how QPUs can be integrated into current and future HPC system architectures by accounting for functional and physical design requirements. We identify two integration pathways that are differentiated by infrastructure constraints on the QPU and the use cases expected for the HPC system. This includes a tight integration that assumes infrastructure bottlenecks can be overcome as well as a loose integration that assumes they cannot. We find that the performance of both approaches is likely to depend on the quantum interconnect that serves to entangle multiple QPUs. As a result, we also identify several challenges in assessing QPU performance for HPC, and we consider new metrics that capture the interplay between system architecture and the quantum parallelism underlying computational performance.

  15. Economics of human performance and systems total ownership cost.

    PubMed

    Onkham, Wilawan; Karwowski, Waldemar; Ahram, Tareq Z

    2012-01-01

    The financial costs of investing in people, which are associated with training, acquisition, recruiting, and resolving human errors, have a significant impact on increased total ownership costs. These costs can also inflate budgets and delay schedules. The study of economic assessment of human performance in the system acquisition process enhances the visibility of hidden cost drivers and supports informed program management decisions. This paper presents a literature review of human total ownership cost (HTOC) and cost impacts on overall system performance. Economic value assessment models such as cost-benefit analysis, risk-cost tradeoff analysis, expected value of utility function analysis (EV), the growth readiness matrix, the multi-attribute utility technique, and multi-regression models are introduced to reflect the HTOC and human performance-technology tradeoffs in dollar terms. A human total ownership regression model is introduced to address the measurement of the influential human performance cost components. Results from this study will increase understanding of the relevant cost drivers in the system acquisition process over the long term.
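
    One of the techniques listed, expected value of utility function analysis, reduces to a probability-weighted sum over outcomes; the sketch below applies it to a hypothetical comparison between funding extra training and the status quo (all probabilities and dollar figures are invented).

        # Expected-value comparison of two human-performance options.
        # Outcome probabilities and lifecycle costs are invented; the
        # training option's own cost is assumed folded into its outcomes.
        options = {
            "extra_training": [(0.7, -50_000), (0.3, -240_000)],
            "status_quo":     [(0.4, -50_000), (0.6, -240_000)],
        }
        # Each tuple: (probability of outcome, cost of that outcome in $).
        for name, outcomes in options.items():
            ev = sum(p * cost for p, cost in outcomes)
            print(f"{name}: expected cost = ${-ev:,.0f}")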

  16. Performance Monitoring of Distributed Data Processing Systems

    NASA Technical Reports Server (NTRS)

    Ojha, Anand K.

    2000-01-01

    Test and checkout systems are essential components in ensuring the safety and reliability of aircraft and related systems for space missions. A variety of systems, developed over several years, are in use at NASA/KSC. Many of these systems are configured as distributed data processing systems with the functionality spread over several multiprocessor nodes interconnected through networks. To be cost-effective, a system should take the least amount of resources and perform a given testing task in the least amount of time. There are two aspects of performance evaluation: monitoring and benchmarking. While monitoring is valuable to system administrators in operating and maintaining systems, benchmarking is important in designing and upgrading computer-based systems. These two aspects of performance evaluation are the foci of this project. This paper first discusses various issues related to software, hardware, and hybrid performance monitoring as applicable to distributed systems, and specifically to the TCMS (Test Control and Monitoring System). Next, a comparison of several probing instructions is made to show that the hybrid monitoring technique developed by NIST (National Institute of Standards and Technology) is the least intrusive and takes only one-fourth of the time taken by software monitoring probes. In the rest of the paper, issues related to benchmarking a distributed system are discussed, and finally a prescription for developing a micro-benchmark for the TCMS is provided.

  17. Effects of and preference for pay for performance: an analogue analysis.

    PubMed

    Long, Robert D; Wilder, David A; Betz, Alison; Dutta, Ami

    2012-01-01

    We examined the effects of 2 payment systems on the rate of check processing and time spent on task by participants in a simulated work setting. Three participants experienced individual pay-for-performance (PFP) without base pay and pay-for-time (PFT) conditions. In the last phase, we asked participants to choose which system they preferred. For all participants, the PFP condition produced higher rates of check processing and more time spent on task than did the PFT condition, but choice of payment system varied both within and across participants.

  18. An Integrated RFID and Barcode Tagged Item Inventory System for Deployment at New Brunswick Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younkin, James R; Kuhn, Michael J; Gradle, Colleen

    New Brunswick Laboratory (NBL) has an extensive inventory containing thousands of plutonium and uranium certified reference materials. The current manual inventory process is well established but is lengthy and requires significant oversight and double-checking to ensure correctness. Oak Ridge National Laboratory has worked with NBL to develop and deploy a new inventory system, termed the Tagged Item Inventory System (TIIS), which utilizes handheld computers with barcode scanners and radio frequency identification (RFID) readers. Certified reference materials are identified by labels which incorporate RFID tags and barcodes. The label printing process and RFID tag association process are integrated into the main desktop software application. Software on the handheld computers syncs with software on designated desktop machines and the NBL inventory database to provide a seamless inventory process. This process includes: 1) identifying items to be inventoried, 2) downloading the current inventory information to the handheld computer, 3) using the handheld to read item and location labels, and 4) syncing the handheld computer with a designated desktop machine to analyze the results, print reports, etc. The security of the inventory software has been a major concern. Designated roles linked to authenticated logins are used to control access to the desktop software, while password protection and badge verification are used to control access to the handheld computers. The overall system design and deployment at NBL will be presented. The performance of the system will also be discussed with respect to a small piece of the overall inventory. Future work includes performing a full inventory at NBL with the Tagged Item Inventory System and comparing performance, cost, and radiation exposures to the current manual inventory process.

  19. Implementation of Insight Responsibilities in Process Engineering

    NASA Technical Reports Server (NTRS)

    Osborne, Deborah M.

    1997-01-01

    This report describes an approach for evaluating flight readiness (COFR) and contractor performance evaluation (award fee) as part of the insight role of NASA Process Engineering at Kennedy Space Center. Several evaluation methods are presented, including systems engineering evaluations and use of systems performance data. The transition from an oversight function to the insight function is described. The types of analytical tools appropriate for achieving the flight readiness and contractor performance evaluation goals are described and examples are provided. Special emphasis is placed upon short and small run statistical quality control techniques. Training requirements for system engineers are delineated. The approach described herein would be equally appropriate in other directorates at Kennedy Space Center.

  20. A distributed approach for optimizing cascaded classifier topologies in real-time stream mining systems.

    PubMed

    Foo, Brian; van der Schaar, Mihaela

    2010-11-01

    In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task, since changing the filtering process at one classifier alters the feature values of data arriving at classifiers further downstream, and thus affects both the classification performance achieved by the ensemble of classifiers and the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.
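    One way to picture such a utility metric is an accuracy reward discounted by a queueing-delay penalty. The sketch below uses an M/M/1 mean sojourn time as the delay model; the weighting and the specific functional form are illustrative assumptions, not the metric defined in the paper.

```python
import math

def mm1_sojourn(lam: float, mu: float) -> float:
    """Mean time a data unit spends at a classifier modeled as an M/M/1 queue;
    infinite when the classifier is overloaded (lam >= mu)."""
    return math.inf if lam >= mu else 1.0 / (mu - lam)

def utility(accuracy: float, lam: float, mu: float, delay_weight: float = 5.0) -> float:
    """Toy utility: accuracy reward minus a weighted queueing-delay penalty."""
    return accuracy - delay_weight * mm1_sojourn(lam, mu)

# A stricter upstream filter forwards fewer items (lower downstream arrival
# rate) at some cost in accuracy; the utility makes the trade-off explicit.
print(f"lenient filter: {utility(0.92, lam=80.0, mu=100.0):.3f}")
print(f"strict filter:  {utility(0.88, lam=50.0, mu=100.0):.3f}")
```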

  1. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing

    PubMed Central

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2015-01-01

    Push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a prerequisite in the design of G-SDMS. With NVIDIA’s CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels. PMID:26566545
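    The occupancy idea behind such models can be sketched in a few lines: estimate how many thread blocks fit on a streaming multiprocessor under each resource limit, and treat the achievable throughput of a compute-bound kernel as proportional to resident-thread occupancy. The device limits below are hypothetical, and the proportionality is a deliberate simplification of the paper's model.

```python
def occupancy(regs_per_thread: int, threads_per_block: int,
              regs_per_sm: int = 65536, max_threads_per_sm: int = 2048) -> float:
    """Fraction of an SM's thread slots a kernel keeps resident when register
    pressure and the thread-slot limit are the only constraints considered."""
    if regs_per_thread * threads_per_block > regs_per_sm:
        return 0.0  # a single block does not even fit
    blocks_by_regs = regs_per_sm // (regs_per_thread * threads_per_block)
    blocks_by_threads = max_threads_per_sm // threads_per_block
    resident = min(blocks_by_regs, blocks_by_threads)
    return resident * threads_per_block / max_threads_per_sm

# For a compute-bound kernel, predicted throughput ~ occupancy.
for regs in (32, 64, 128, 256):
    print(f"{regs:3d} regs/thread -> occupancy {occupancy(regs, 256):.2f}")
```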

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The objective of the contract is to consolidate the advances made during the previous contract in the conversion of syngas to motor fuels using Molecular Sieve-containing catalysts and to demonstrate the practical utility and economic value of the new catalyst/process systems with appropriate laboratory runs. Work on the program is divided into the following six tasks: (1) preparation of a detailed work plan covering the entire performance of the contract; (2) preliminary techno-economic assessment of the UCC catalyst/process system; (3) optimization of the most promising catalyst developed under prior contract; (4) optimization of the UCC catalyst system in a manner that will give it the longest possible service life; (5) optimization of a UCC process/catalyst system based upon a tubular reactor with a recycle loop containing the most promising catalyst developed under Tasks 3 and 4 studies; and (6) economic evaluation of the optimal performance found under Task 5 for the UCC process/catalyst system. Progress reports are presented for tasks 2 through 5. 232 figs., 19 tabs.

  3. A Verification Method of Inter-Task Cooperation in Embedded Real-time Systems and its Evaluation

    NASA Astrophysics Data System (ADS)

    Yoshida, Toshio

    In the software development process of embedded real-time systems, the design of the task cooperation process is very important. The cooperating process of such tasks is specified by task cooperation patterns. Adoption of unsuitable task cooperation patterns has a fatal influence on system performance, quality, and extendibility. In order to prevent repetitive work caused by a shortage of task cooperation performance, it is necessary to verify task cooperation patterns at an early software development stage. However, such verification is very difficult at an early stage, where the task program codes are not yet complete. Therefore, we propose a verification method using task skeleton program codes and a real-time kernel that has a function for recording all events during software execution, such as system calls issued by task program codes, external interrupts, and timer interrupts. In order to evaluate the proposed verification method, we applied it to the software development process of a mechatronics control system.
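    The idea of exercising a cooperation pattern with skeleton tasks can be illustrated with a small event-logged simulation. This is a generic Python sketch under assumed names (a producer/consumer pattern with a fixed period and latency), not the authors' kernel or tooling:

```python
events = []  # (time, task, event) triples, analogous to the kernel's event log

def record(t, task, event):
    events.append((t, task, event))

def run_pattern(period=10, latency=3, horizon=50):
    """Skeleton producer/consumer pattern: no task bodies, only the cooperation
    events (a send every `period` ticks, received `latency` ticks later), so
    the pattern's timing can be inspected before the real code exists."""
    t = 0
    while t < horizon:
        record(t, "producer", "send")
        record(t + latency, "consumer", "receive")
        t += period

run_pattern()
for ts, task, ev in sorted(events):
    print(f"t={ts:2d}  {task:8s} {ev}")
```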

  4. IMAGES: An interactive image processing system

    NASA Technical Reports Server (NTRS)

    Jensen, J. R.

    1981-01-01

    The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.

  5. The Advocacy of an Appraisal System for Teachers: A Case Study

    ERIC Educational Resources Information Center

    Bisschoff, Tom; Mathye, Annah

    2009-01-01

    Education systems all over the world, like all other organisations, have certain organisational goals that they set and wish to achieve. It is argued that for increased pupil performance, in the case of education systems, teachers must work harder and smarter. A performance system is regarded as part of the process to achieve this organisational…

  6. GaAs Supercomputing: Architecture, Language, And Algorithms For Image Processing

    NASA Astrophysics Data System (ADS)

    Johl, John T.; Baker, Nick C.

    1988-10-01

    The application of high-speed GaAs processors in a parallel system matches the demanding computational requirements of image processing. The architecture of the McDonnell Douglas Astronautics Company (MDAC) vector processor is described along with the algorithms and language translator. Most image and signal processing algorithms can utilize parallel processing and show a significant performance improvement over sequential versions. The parallelization performed by this system is within each vector instruction. Since each vector has many elements, each requiring some computation, useful concurrent arithmetic operations can easily be performed. Balancing the memory bandwidth with the computation rate of the processors is an important design consideration for high efficiency and utilization. The architecture features a bus-based execution unit consisting of four to eight 32-bit GaAs RISC microprocessors running at a 200 MHz clock rate for a peak performance of 1.6 BOPS. The execution unit is connected to a vector memory with three buses capable of transferring two input words and one output word every 10 nsec. The address generators inside the vector memory perform different vector addressing modes and feed the data to the execution unit. The functions discussed in this paper include basic MATRIX OPERATIONS, 2-D SPATIAL CONVOLUTION, HISTOGRAM, and FFT. For each of these algorithms, assembly language programs were run on a behavioral model of the system to obtain performance figures.
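    Of the benchmarked functions, 2-D spatial convolution best illustrates the per-element vector work. A plain Python/NumPy reference implementation (direct and unoptimized, unrelated to the MDAC assembly programs) shows the window inner products that the vector instructions spread across the processors:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Direct 2-D spatial convolution with zero padding. Each output pixel is
    an inner product over the kernel window."""
    kernel = np.flipud(np.fliplr(kernel))  # flip: convolution, not correlation
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros(image.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
print(conv2d(image, np.ones((3, 3)) / 9.0))  # 3x3 box blur
```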

  7. Real-time object tracking based on scale-invariant features employing bio-inspired hardware.

    PubMed

    Yasukawa, Shinsuke; Okuno, Hirotsugu; Ishii, Kazuo; Yagi, Tetsuya

    2016-09-01

    We developed a vision sensor system that performs a scale-invariant feature transform (SIFT) in real time. To apply the SIFT algorithm efficiently, we focus on a two-fold process performed by the visual system: whole-image parallel filtering and frequency-band parallel processing. The vision sensor system comprises an active pixel sensor, a metal-oxide semiconductor (MOS)-based resistive network, a field-programmable gate array (FPGA), and a digital computer. We employed the MOS-based resistive network for instantaneous spatial filtering and a configurable filter size. The FPGA is used to pipeline process the frequency-band signals. The proposed system was evaluated by tracking the feature points detected on an object in a video. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Systems approach provides management control of complex programs

    NASA Technical Reports Server (NTRS)

    Dudek, E. F., Jr.; Mc Carthy, J. F., Jr.

    1970-01-01

    Integrated program management process provides management visual assistance through three interrelated charts - system model that identifies each function to be performed, matrix that identifies personnel responsibilities for these functions, process chart that breaks down the functions into discrete tasks.

  9. Procedure for minimizing the cost per watt of photovoltaic systems

    NASA Technical Reports Server (NTRS)

    Redfield, D.

    1977-01-01

    A general analytic procedure is developed that provides a quantitative method for optimizing any element or process in the fabrication of a photovoltaic energy conversion system by minimizing its impact on the cost per watt of the complete system. By determining the effective value of any power loss associated with each element of the system, this procedure furnishes the design specifications that optimize the cost-performance tradeoffs for each element. A general equation is derived that optimizes the properties of any part of the system in terms of appropriate cost and performance functions, although the power-handling components are found to have a different character from the cell and array steps. Another principal result is that a fractional performance loss occurring at any cell- or array-fabrication step produces that same fractional increase in the cost per watt of the complete array. It also follows that no element or process step can be optimized correctly by considering only its own cost and performance.
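    The fractional-loss result follows from first-order arithmetic on the cost-per-watt ratio. In the notation below (ours, not the paper's), C is the total array cost and P its output power:

```latex
\[
\frac{\$}{W} = \frac{C}{P}, \qquad
P' = (1-f)\,P \;\Longrightarrow\;
\left(\frac{\$}{W}\right)' = \frac{C}{(1-f)\,P}
\approx \frac{C}{P}\,(1+f) \quad \text{for small } f,
\]
```

    so a fractional power loss f at any fabrication step raises the cost per watt of the finished array by approximately the same fraction f.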

  10. SHARP's systems engineering challenge: rectifying integrated product team requirements with performance issues in an evolutionary spiral development acquisition

    NASA Astrophysics Data System (ADS)

    Kuehl, C. Stephen

    2003-08-01

    Completing its final development and early deployment on the Navy's multi-role aircraft, the F/A-18 E/F Super Hornet, the SHAred Reconnaissance Pod (SHARP) provides the war fighter with the latest digital tactical reconnaissance (TAC Recce) Electro-Optical/Infrared (EO/IR) sensor system. The SHARP program is an evolutionary acquisition that used a spiral development process across a prototype development phase tightly coupled into overlapping Engineering and Manufacturing Development (EMD) and Low Rate Initial Production (LRIP) phases. Under a tight budget environment with a highly compressed schedule, SHARP challenged traditional acquisition strategies and systems engineering (SE) processes. Adopting tailored state-of-the-art systems engineering process models allowed the SHARP program to overcome the technical knowledge transition challenges imposed by a compressed program schedule. The program's original goal was the deployment of digital TAC Recce mission capabilities to the fleet customer by summer of 2003. Hardware and software integration technical challenges resulted from requirements definition and analysis activities performed across a government-industry led Integrated Product Team (IPT) involving Navy engineering and test sites, Boeing, and RTSC-EPS (with its subcontracted hardware and government furnished equipment vendors). Requirements development from a bottom-up approach was adopted using an electronic requirements capture environment to clarify and establish the SHARP EMD product baseline specifications as relevant technical data became available. Applying Earned-Value Management (EVM) against an Integrated Master Schedule (IMS) resulted in efficiently managing SE task assignments and product deliveries in a dynamically evolving customer requirements environment. Application of Six Sigma improvement methodologies resulted in the uncovering of root causes of errors in wiring interconnectivity drawings, pod manufacturing processes, and avionics requirements specifications. Utilizing the draft NAVAIR SE guideline handbook and the ANSI/EIA-632 standard: Processes for Engineering a System, a systems engineering tailored process approach was adopted for the accelerated SHARP EMD program. Tailoring SE processes in this accelerated product delivery environment provided unique opportunities to be technically creative in the establishment of a product performance baseline. This paper provides an historical overview of the systems engineering activities spanning the prototype phase through the EMD SHARP program phase, the performance requirement capture activities and refinement process challenges, and what SE process improvements can be applied to future SHARP-like programs adopting a compressed, evolutionary spiral development acquisition paradigm.

  11. Comparison of Direct Sequence Spread Spectrum Rake Receiver with a Maximum Ratio Combining Multicarrier Spread Spectrum Receiver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daryl Leon Wasden; Hussein Moradi; Behrouz Farhang-Broujeny

    2014-06-01

    This paper presents a theoretical analysis of the performance of a filter bank-based multicarrier spread spectrum (FB-MC-SS) system. We consider an FB-MC-SS setup where each data symbol is spread across multiple subcarriers, but there is no spreading in time. The results are then compared with those of the well-known direct sequence spread spectrum (DS-SS) system with a rake receiver for its best performance. We compare the two systems when the channel noise is white. We prove that as the processing gains of the two systems tend to infinity, both approach the same performance. However, numerical simulations show that, in practice, where processing gain is limited, FB-MC-SS outperforms DS-SS.

  12. [Image processing system of visual prostheses based on digital signal processor DM642].

    PubMed

    Xie, Chengcheng; Lu, Yanyu; Gu, Yun; Wang, Jing; Chai, Xinyu

    2011-09-01

    This paper employed a DSP platform to create the real-time and portable image processing system, and introduced a series of commonly used algorithms for visual prostheses. The results of performance evaluation revealed that this platform could afford image processing algorithms to be executed in real time.

  13. Evolving Maturation of the Series-Bosch System

    NASA Technical Reports Server (NTRS)

    Stanley, Christine; Abney, Morgan B.; Barnett, Bill

    2017-01-01

    Human exploration missions to Mars and other destinations beyond low Earth orbit require highly robust, reliable, and maintainable life support systems that maximize recycling of water and oxygen. In order to meet this requirement, NASA has continued the development of a Series-Bosch System, a two-stage reactor process that reduces carbon dioxide (CO2) with hydrogen (H2) to produce water and solid carbon. Theoretically, the Bosch process can recover 100% of the oxygen (O2) from CO2 in the form of water, making it an attractive option for long-duration missions. The Series-Bosch system includes a reverse water gas shift (RWGS) reactor, a carbon formation reactor (CFR), an H2 extraction membrane, and a CO2 extraction membrane. In 2016, the results of integrated testing of the Series-Bosch system showed great promise and resulted in design modifications to the CFR to further improve performance. This year, integrated testing was conducted with the modified reactor to evaluate its performance and compare it with the performance of the previous configuration. Additionally, a CFR with the capability to load new catalyst and remove spent catalyst in-situ was built. Flow demonstrations were performed to evaluate both the catalyst loading and removal process and the hardware performance. The results of the integrated testing with the modified CFR as well as the flow demonstrations are discussed in this paper.
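    The chemistry behind the two stages is the standard reverse water-gas shift and carbon-formation pair, whose sum is the net Bosch reaction (a textbook summary, not drawn from the paper itself):

```latex
\begin{aligned}
\text{RWGS:} &\quad \mathrm{CO_2 + H_2 \rightleftharpoons CO + H_2O} \\
\text{CFR:}  &\quad \mathrm{CO + H_2 \longrightarrow C_{(s)} + H_2O} \\
\text{Net Bosch:} &\quad \mathrm{CO_2 + 2\,H_2 \longrightarrow C_{(s)} + 2\,H_2O}
\end{aligned}
```

    The extraction membranes matter because both reactions are equilibrium-limited: recycling unreacted H2 and CO2 is what lets the system approach full oxygen recovery.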

  14. Seasonal bacterial community succession in four typical wastewater treatment plants: correlations between core microbes and process performance.

    PubMed

    Zhang, Bo; Yu, Quanwei; Yan, Guoqi; Zhu, Hubo; Xu, Xiang Yang; Zhu, Liang

    2018-03-15

    To understand the seasonal variation of the activated sludge (AS) bacterial community and identify core microbes in different wastewater processing systems, seasonal AS samples were taken from every biological treatment unit within 4 full-scale wastewater treatment plants. These plants adopted A2/O, A/O, and oxidation ditch processes and were active in the treatment of different types and sources of wastewater, some domestic and others industrial. The bacterial community composition was analyzed using high-throughput sequencing technology. The correlations among microbial community structure, dominant microbes, and process performance were investigated. Seasonal variation had a stronger impact on the AS bacterial community than any variation among the different wastewater treatment systems. Facing seasonal variation, the bacterial community within the oxidation ditch process remained more stable than those in either the A2/O or A/O processes. The core genera in domestic wastewater treatment systems were Nitrospira, Caldilineaceae, Pseudomonas and Lactococcus. The core genera in the textile dyeing and fine chemical industrial wastewater treatment systems were Nitrospira, Thauera and Thiobacillus.

  15. Overview of the Smart Network Element Architecture and Recent Innovations

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M.; Mata, Carlos T.; Oostdyk, Rebecca L.

    2008-01-01

    In industrial environments, system operators rely on the availability and accuracy of sensors to monitor processes and detect failures of components and/or processes. The sensors must be networked in such a way that their data is reported to a central human interface, where operators are tasked with making real-time decisions based on the state of the sensors and the components that are being monitored. Incorporating health management functions at this central location aids the operator by automating the decision-making process to suggest, and sometimes perform, the action required by current operating conditions. Integrated Systems Health Management (ISHM) aims to incorporate data from many sources, including real-time and historical data and user input, and extract information and knowledge from that data to diagnose failures and predict future failures of the system. Distributing health management processing to lower levels of the architecture reduces the bandwidth required for ISHM, enhances data fusion, makes systems and processes more robust, and improves the resolution for the detection and isolation of failures in a system, subsystem, component, or process. The Smart Network Element (SNE) has been developed at NASA Kennedy Space Center to perform intelligent functions at the sensor and actuator level in support of ISHM.

  16. Decentralized Control of Scheduling in Distributed Systems.

    DTIC Science & Technology

    1983-12-15

    does not perform quite as well as the 10 state system, but is less sensitive to changes in scheduling period. It performs best when scheduling is...intra-process concerns. We extend their concept of a process to include inter-process communication. That is, various forms of send and receive primitives...current busyness of each site based on some responses to requests for bids. A received bid is adjusted by incrementing it by a utilization factor.

  17. Determining the Effectiveness and Evaluating the Implementation Process of a Quality/Performance Circles System Model to Assist in Institutional Decision Making and Problem Solving at Lakeshore Technical Institute.

    ERIC Educational Resources Information Center

    Ladwig, Dennis J.

    During the 1982-83 school year, a quality/performance circles system model was implemented at Lakeshore Technical Institute (LTI) to promote greater participation by staff in decision making and problem solving. All management staff at the college (N=45) were invited to participate in the process, and 39 volunteered. Non-management staff (N=240)…

  18. Verification and Validation Methodology of Real-Time Adaptive Neural Networks for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Gupta, Pramod; Loparo, Kenneth; Mackall, Dale; Schumann, Johann; Soares, Fola

    2004-01-01

    Recent research has shown that adaptive neural based control systems are very effective in restoring stability and control of an aircraft in the presence of damage or failures. The application of an adaptive neural network with a flight-critical control system requires a thorough and proven process to ensure safe and proper flight operation. Unique testing tools have been developed as part of a process to perform verification and validation (V&V) of real-time adaptive neural networks used in recent adaptive flight control systems and to evaluate the performance of the online-trained neural networks. The tools will help in certification from the FAA and in the successful deployment of neural network based adaptive controllers in safety-critical applications. The process to perform verification and validation is evaluated against a typical neural adaptive controller, and the results are discussed.

  19. The use of algorithmic behavioural transfer functions in parametric EO system performance models

    NASA Astrophysics Data System (ADS)

    Hickman, Duncan L.; Smith, Moira I.

    2015-10-01

    The use of mathematical models to predict the overall performance of an electro-optic (EO) system is well established as a methodology and is used widely to support requirements definition and system design, and to produce performance predictions. Traditionally these models have been based upon cascades of transfer functions grounded in established physical theory, such as the calculation of signal levels from radiometry equations, as well as the use of statistical models. However, the performance of an EO system is increasingly being dominated by the on-board processing of the image data, and this automated interpretation of image content is complex in nature and presents significant modelling challenges. Models and simulations of EO systems tend either to involve processing of image data as part of a performance simulation (image-flow) or to use a series of mathematical functions that attempt to define the overall system characteristics (parametric). The former approach is generally more accurate but statistically and theoretically weak in terms of specific operational scenarios, and is also time consuming. The latter approach is generally faster but is unable to provide accurate predictions of a system's performance under operational conditions. An alternative and novel architecture is presented in this paper which combines the processing speed attributes of parametric models with the accuracy of image-flow representations in a statistically valid framework. An additional dimension needed to create an effective simulation is a robust software design whose architecture reflects the structure of the EO system and its interfaces. As such, the design of the simulator can be viewed as a software prototype of a new EO system or an abstraction of an existing design. This new approach has been used successfully to model a number of complex military systems and has been shown to combine improved performance estimation with speed of computation. The approach and architecture are described in detail within the paper, and example results based on a practical application are then given which illustrate the performance benefits. Finally, conclusions are drawn and comments given regarding the benefits and uses of the new approach.
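    As a concrete example of the transfer-function cascades that underlie parametric models, the sketch below multiplies component modulation transfer functions (MTFs) to obtain an end-to-end MTF; the two component curves are hypothetical stand-ins, not models of any particular sensor.

```python
import numpy as np

def system_mtf(freqs, components):
    """End-to-end MTF of a cascade: the product of the component MTFs at each
    spatial frequency (the classic parametric building block)."""
    mtf = np.ones_like(freqs)
    for component in components:
        mtf = mtf * component(freqs)
    return mtf

# Hypothetical components: optics with a linear roll-off to a 100 cycles/mm
# cutoff, and a 5-micron detector footprint (sinc); f is in cycles/mm.
optics = lambda f: np.clip(1.0 - f / 100.0, 0.0, None)
detector = lambda f: np.abs(np.sinc(f * 0.005))

freqs = np.linspace(0.0, 100.0, 6)
for f, m in zip(freqs, system_mtf(freqs, [optics, detector])):
    print(f"{f:5.1f} cycles/mm -> MTF {m:.3f}")
```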

  20. Programmable partitioning for high-performance coherence domains in a multiprocessor system

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Salapura, Valentina [Chappaqua, NY

    2011-01-25

    A multiprocessor computing system and a method of logically partitioning a multiprocessor computing system are disclosed. The multiprocessor computing system comprises a multitude of processing units, and a multitude of snoop units. Each of the processing units includes a local cache, and the snoop units are provided for supporting cache coherency in the multiprocessor system. Each of the snoop units is connected to a respective one of the processing units and to all of the other snoop units. The multiprocessor computing system further includes a partitioning system for using the snoop units to partition the multitude of processing units into a plurality of independent, memory-consistent, adjustable-size processing groups. Preferably, when the processor units are partitioned into these processing groups, the partitioning system also configures the snoop units to maintain cache coherency within each of said groups.

  1. Dynamic Exergy Method for Evaluating the Control and Operation of Oxy-Combustion Boiler Island Systems.

    PubMed

    Jin, Bo; Zhao, Haibo; Zheng, Chuguang; Liang, Zhiwu

    2017-01-03

    Exergy-based methods are widely applied to assess the performance of energy conversion systems; however, these methods mainly focus on a given steady state and have limited application for evaluating the impact of control on system operation. To dynamically obtain the thermodynamic behavior and reveal the influence of control structures, layers, and loops on system energy performance, a dynamic exergy method is developed, improved, and applied to a complex oxy-combustion boiler island system for the first time. The three most common operating scenarios are studied, and the results show that the flow rate change process leads to less energy consumption than the oxygen purity and air in-leakage change processes. The variation of oxygen purity produces the largest impact on system operation, and the operating parameter sensitivity is not affected by the presence of process control. The control system saves energy during the flow rate and oxygen purity change processes, while it consumes energy during the air in-leakage change process. More attention should be paid to the oxygen purity change because it requires the largest control cost. In the control system, the supervisory control layer requires the greatest energy consumption and the largest control cost to maintain operating targets, while the steam control loops cause the main energy consumption.
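    The quantities such a method tracks are the standard flow-exergy and exergy-destruction expressions, written here in common textbook notation (the paper's own formulation may differ in detail):

```latex
\[
\psi = (h - h_0) - T_0\,(s - s_0), \qquad
\dot{X}_{\mathrm{dest}} = T_0\,\dot{S}_{\mathrm{gen}},
\]
```

    where the subscript 0 denotes the ambient (dead) state. A dynamic exergy method evaluates these along the transient trajectory of each operating scenario rather than at a single steady-state point.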

  2. Low Resolution Picture Transmission (LRPT) Demonstration System

    NASA Technical Reports Server (NTRS)

    Fong, Wai; Yeh, Pen-Shu; Sank, Victor; Nyugen, Xuan; Xia, Wei; Duran, Steve; Day, John H. (Technical Monitor)

    2002-01-01

    Low-Resolution Picture Transmission (LRPT) is a proposed standard for direct broadcast transmission of satellite weather images. This standard is a joint effort by the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT) and the National Oceanic Atmospheric Administration (NOAA). As a digital transmission scheme, its purpose is to replace the current analog Automatic Picture Transmission (APT) system for use in the Meteorological Operational (METOP) satellites. Goddard Space Flight Center has been tasked to build an LRPT Demonstration System (LDS). Its main objective is to develop and demonstrate the feasibility of a low-cost receiver utilizing a Personal Computer (PC) as the primary processing component and to determine the performance of the protocol in a simulated Radio Frequency (RF) environment. The approach consists of two phases. In Phase 1, a Commercial-off-the-Shelf (COTS) Modulator-Demodulator (MODEM) board that performs RF demodulation was purchased, allowing the Central Processing Unit (CPU) to perform the Consultative Committee for Space Data Systems (CCSDS) protocol processing. Also, since the weather images are compressed, the PC performs the decompression. Phase 1 was successfully demonstrated in December 1997. Phase 2 consists of developing a high-fidelity receiver, transmitter, and environment simulator. Its goal is to find out how the METOP specification performs in a simulated noise environment with a cost-effective receiver. The approach is to produce a receiver using as much software as possible for front-end processing, taking advantage of the latest high-speed PCs. Thus, the COTS MODEM used in Phase 1 performs RF demodulation along with data acquisition, providing data to the receiving software. Also, an environment simulator is produced using the noise patterns generated by the Institute for Telecommunications Sciences (ITS) from their noise environment study.

  3. Challenges in performance of food safety management systems: a case of fish processing companies in Tanzania.

    PubMed

    Kussaga, Jamal B; Luning, Pieternel A; Tiisekwa, Bendantunguka P M; Jacxsens, Liesbeth

    2014-04-01

    This study provides insight into food safety (FS) performance in light of the current performance of core FS management system (FSMS) activities and the context riskiness of these systems, to identify opportunities for improvement of the FSMS. An FSMS diagnostic instrument was applied to assess the performance levels of FSMS activities regarding context riskiness and FS performance in 14 fish processing companies in Tanzania. Two clusters (clusters I and II) with average FSMS (level 2) operating under a moderate-risk context (score 2) were identified. Overall, cluster I had better (score 3) FS performance than cluster II (score 2 to 3). However, a majority of the fish companies need further improvement of their FSMS and reduction of context riskiness to assure good FS performance. The FSMS activity levels could be improved through hygienic design of equipment and facilities, strict raw material control, proper follow-up of critical control point analysis, developing specific sanitation procedures and company-specific sampling design and measuring plans, independent validation of preventive measures, and establishing comprehensive documentation and record-keeping systems. The risk level of the context could be reduced through automation of production processes (such as filleting, packaging, and sanitation) to restrict people's interference, recruitment of permanent high-skilled technological staff, and setting requirements on product use (storage and distribution conditions) for customers. However, such intervention measures could be taken in phases, starting with less expensive ones (such as sanitation procedures) that can be implemented in the short term, and moving to more expensive interventions (setting up assurance activities) in the long term. These measures are essential for fish processing companies to move toward more effective FSMS.

  4. A flexible tool for hydraulic and water quality performance analysis of green infrastructure

    NASA Astrophysics Data System (ADS)

    Massoudieh, A.; Alikhani, J.

    2017-12-01

    Models that allow for design considerations of green infrastructure (GI) practices to control stormwater runoff and associated contaminants have received considerable attention in recent years. To be used to evaluate the effect of design configurations on the long-term performance of GIs, models should be able to represent the processes within GIs with good fidelity. In this presentation, a sophisticated yet flexible tool for hydraulic and water quality assessment of GIs will be introduced. The tool can be used by design engineers and researchers to capture and explore the effect of design factors and properties of the media employed on the performance of GI systems at a relatively small scale. We deemed it essential to have a flexible GI modeling tool that is capable of simulating GI system components and specific biogeochemical processes affecting contaminants, such as evapotranspiration, plant uptake, reactions, and particle-associated transport, accurately, while maintaining a high degree of flexibility to account for the myriad of GI alternatives. The mathematical framework for a stand-alone GI performance assessment tool has been developed and will be demonstrated. The process-based model framework developed here can be used to model a diverse range of GI practices such as stormwater ponds, green roofs, retention ponds, bioretention systems, infiltration trenches, permeable pavement, and other custom-designed combinatory systems. An example of the application of the system to evaluate the performance of a rain-garden system will be demonstrated.
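    At its simplest, the hydraulic core of such a model is a storage (bucket) balance per GI cell. The sketch below is a deliberately minimal stand-in, with made-up first-order loss coefficients, for the much richer process representation the tool implements:

```python
def simulate_bioretention(rainfall_mm, capacity_mm=150.0, k_infil=0.2, k_et=0.02):
    """Single-bucket water balance for a GI cell: storage gains rainfall and
    loses infiltration and evapotranspiration, overflowing above capacity."""
    storage, overflow = 0.0, []
    for rain in rainfall_mm:              # one value per time step (e.g. hourly)
        storage += rain
        storage -= k_infil * storage      # infiltration, first-order in storage
        storage -= k_et * storage         # evapotranspiration, likewise
        spill = max(0.0, storage - capacity_mm)
        storage -= spill
        overflow.append(spill)
    return overflow

print(simulate_bioretention([0, 40, 80, 120, 10, 0]))  # spills only in the big storm
```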

  5. 20 years of KVH fiber optic gyro technology: the evolution from large, low performance FOGs to compact, precise FOGs and FOG-based inertial systems

    NASA Astrophysics Data System (ADS)

    Napoli, Jay

    2016-05-01

    Precision fiber optic gyroscopes (FOGs) are critical components for an array of platforms and applications ranging from stabilization and pointing orientation of payloads and platforms to navigation and control for unmanned and autonomous systems. In addition, FOG-based inertial systems provide extremely accurate data for geo-referencing systems. Significant improvements in the performance of FOGs and FOG-based inertial systems at KVH are due, in large part, to advancements in the design and manufacture of optical fiber, as well as in manufacturing operations and signal processing. Open loop FOGs, such as those developed and manufactured by KVH Industries, offer tactical-grade performance in a robust, small package. The success of KVH FOGs and FOG-based inertial systems is due to innovations in key fields, including the development of proprietary D-shaped fiber with an elliptical core, and KVH's unique ThinFiber. KVH continually improves its FOG manufacturing processes and signal processing, which result in improved accuracies across its entire FOG product line. KVH acquired its FOG capabilities, including its patented E•Core fiber, when the company purchased Andrew Corporation's Fiber Optic Group in 1997. E•Core fiber is unique in that the light-guiding core - critical to the FOG's performance - is elliptically shaped. The elliptical core produces a fiber that has low loss and high polarization-maintaining ability. In 2010, KVH developed its ThinFiber, a 170-micron diameter fiber that retains the full performance characteristics of E•Core fiber. ThinFiber has enabled the development of very compact, high-performance open-loop FOGs, which are also used in a line of FOG-based inertial measurement units and inertial navigation systems.

  6. Multiple systems for motor skill learning.

    PubMed

    Clark, Dav; Ivry, Richard B

    2010-07-01

    Motor learning is a ubiquitous feature of human competence. This review focuses on two particular classes of model tasks for studying skill acquisition. The serial reaction time (SRT) task is used to probe how people learn sequences of actions, while adaptation in the context of visuomotor or force field perturbations serves to illustrate how preexisting movements are recalibrated in novel environments. These tasks highlight important issues regarding the representational changes that occur during the course of motor learning. One important theme is that distinct mechanisms vary in their information processing costs during learning and performance. Fast learning processes may require few trials to produce large changes in performance but impose demands on cognitive resources. Slower processes are limited in their ability to integrate complex information but minimally demanding in terms of attention or processing resources. The representations derived from fast systems may be accessible to conscious processing and provide a relatively greater measure of flexibility, while the representations derived from slower systems are more inflexible and automatic in their behavior. In exploring these issues, we focus on how multiple neural systems may interact and compete during the acquisition and consolidation of new behaviors. This article is categorized under: Psychology > Motor Skill and Performance. Copyright © 2010 John Wiley & Sons, Ltd.

  7. No psychological effect of color context in a low level vision task

    PubMed Central

    Pedley, Adam; Wade, Alex R

    2013-01-01

    Background: A remarkable series of recent papers has shown that colour can influence performance in cognitive tasks. In particular, they suggest that viewing a participant number printed in red ink or other red ancillary stimulus elements improves performance in tasks requiring local processing and impedes performance in tasks requiring global processing, whilst the reverse is true for the colour blue. The tasks in these experiments require high-level cognitive processing, such as analogy solving or remote association tests, and the chromatic effect on local vs. global processing is presumed to involve widespread activation of the autonomic nervous system. If this is the case, we might expect to see similar effects on all local vs. global task comparisons. To test this hypothesis, we asked whether chromatic cues also influence performance in tasks involving low-level visual feature integration. Methods: Subjects performed either local (contrast detection) or global (form detection) tasks on achromatic dynamic Glass pattern stimuli. Coloured instructions, target frames and fixation points were used to attempt to bias performance to different task types. Based on previous literature, we hypothesised that red cues would improve performance in the (local) contrast detection task but would impede performance in the (global) form detection task. Results: A two-way, repeated-measures analysis of covariance (2×2 ANCOVA) with gender as a covariate revealed no influence of colour on either task, F(1,29) = 0.289, p = 0.595, partial η² = 0.002. Additional analysis revealed no significant differences in only the first attempts of the tasks or in the improvement in performance between trials. Discussion: We conclude that motivational processes elicited by colour perception do not influence neuronal signal processing in the early visual system, in stark contrast to their putative effects on processing in higher areas. PMID:25075280

  8. No psychological effect of color context in a low level vision task.

    PubMed

    Pedley, Adam; Wade, Alex R

    2013-01-01

    A remarkable series of recent papers has shown that colour can influence performance in cognitive tasks. In particular, they suggest that viewing a participant number printed in red ink or other red ancillary stimulus elements improves performance in tasks requiring local processing and impedes performance in tasks requiring global processing, whilst the reverse is true for the colour blue. The tasks in these experiments require high-level cognitive processing, such as analogy solving or remote association tests, and the chromatic effect on local vs. global processing is presumed to involve widespread activation of the autonomic nervous system. If this is the case, we might expect to see similar effects on all local vs. global task comparisons. To test this hypothesis, we asked whether chromatic cues also influence performance in tasks involving low-level visual feature integration. Subjects performed either local (contrast detection) or global (form detection) tasks on achromatic dynamic Glass pattern stimuli. Coloured instructions, target frames and fixation points were used to attempt to bias performance to different task types. Based on previous literature, we hypothesised that red cues would improve performance in the (local) contrast detection task but would impede performance in the (global) form detection task. A two-way, repeated-measures analysis of covariance (2×2 ANCOVA) with gender as a covariate revealed no influence of colour on either task, F(1,29) = 0.289, p = 0.595, partial η² = 0.002. Additional analysis revealed no significant differences in only the first attempts of the tasks or in the improvement in performance between trials. We conclude that motivational processes elicited by colour perception do not influence neuronal signal processing in the early visual system, in stark contrast to their putative effects on processing in higher areas.

  9. Function-based design process for an intelligent ground vehicle vision system

    NASA Astrophysics Data System (ADS)

    Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.

    2010-10-01

    An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
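    The core of such a ray-casting pass is marching rays through an occupancy grid built from the laser and camera data and steering toward the longest free ray. A hedged sketch (the grid layout, step sizes, and steering rule are assumptions, not the team's actual code):

```python
import math

def cast_ray(grid, x, y, angle, max_range=20.0, step=0.5):
    """March along a ray from (x, y) until an occupied cell or max range;
    returns the free distance along that heading."""
    d = 0.0
    while d < max_range:
        d += step
        cx, cy = int(x + d * math.cos(angle)), int(y + d * math.sin(angle))
        if not (0 <= cy < len(grid) and 0 <= cx < len(grid[0])) or grid[cy][cx]:
            return d - step
    return max_range

grid = [[0] * 10 for _ in range(10)]
grid[4][7] = 1  # a single obstacle ahead
rays = {a: cast_ray(grid, 1.0, 4.0, math.radians(a)) for a in (-30, 0, 30)}
print(rays, "steer toward:", max(rays, key=rays.get), "degrees")
```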

  10. Practical and Theoretical Requirements for Controlling Rater Stringency in Peer Review.

    ERIC Educational Resources Information Center

    Cason, Gerald J.; Cason, Carolyn L.

    This study describes a computer based, performance rating information processing system, performance rating theory, and programs for the application of the theory to obtain ratings free from the effects of reviewer stringency in reviewing abstracts of conference papers. Originally, the Performance Rating (PR) System was used to evaluate the…

  11. Performance, Process, and Costs: Managing Service Quality with the Balanced Scorecard.

    ERIC Educational Resources Information Center

    Poll, Roswitha

    2001-01-01

    Describes a cooperative project among three German libraries that used the Balanced Scorecard as a concept for an integrated quality management system. Considers performance indicators across four perspectives that will help academic libraries establish an integrated controlling system and to collect and evaluate performance as well as cost data…

  12. Analysis of the packet formation process in packet-switched networks

    NASA Astrophysics Data System (ADS)

    Meditch, J. S.

    Two new queueing system models for the packet formation process in packet-switched telecommunication networks are developed, and their applications in process stability, performance analysis, and optimization studies are illustrated. The first, an M/M/1 queueing system characterization of the process, is a highly aggregated model which is useful for preliminary studies. The second, a marked extension of an earlier M/G/1 model, permits one to investigate the stability, performance characteristics, and design of the packet formation process in terms of the details of processor architecture and of hardware and software implementations, with processor structure and as many parameters as desired as variables. The two new models, together with the earlier M/G/1 characterization, span the spectrum of modeling complexity for the packet formation process from basic to advanced.
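    The two characterizations rest on standard queueing results: the M/M/1 mean time in system and, for M/G/1, the Pollaczek-Khinchine mean waiting time. In textbook notation, with λ the packet arrival rate, μ the service rate, and S the service time:

```latex
\[
W_{M/M/1} = \frac{1}{\mu - \lambda}, \qquad \rho = \frac{\lambda}{\mu} < 1,
\]
\[
W_{M/G/1} = E[S] + \frac{\lambda\,E[S^2]}{2\,(1 - \rho)}, \qquad \rho = \lambda\,E[S] < 1.
\]
```

    The E[S²] term is what lets the M/G/1 model absorb processor-architecture detail: any implementation choice that changes the service-time distribution enters through its first two moments.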

  13. Demonstration-scale evaluation of a novel high-solids anaerobic digestion process for converting organic wastes to fuel gas and compost.

    PubMed

    Rivard, C J; Duff, B W; Dickow, J H; Wiles, C C; Nagle, N J; Gaddy, J L; Clausen, E C

    1998-01-01

    Early evaluations of the bioconversion potential for combined wastes such as tuna sludge and sorted municipal solid waste (MSW) were conducted at laboratory scale and compared conventional low-solids, stirred-tank anaerobic systems with the novel, high-solids anaerobic digester (HSAD) design. Enhanced feedstock conversion rates and yields were determined for the HSAD system. In addition, the HSAD system demonstrated superior resiliency to process failure. Utilizing relatively dry feedstocks, the HSAD system is approximately one-tenth the size of conventional low-solids systems. In addition, the HSAD system is capable of organic loading rates (OLRs) on the order of 20-25 g volatile solids per liter digester volume per day (gVS/L/d), roughly 4-5 times those of conventional systems. Current efforts involve developing a demonstration-scale (pilot-scale) HSAD system. A two-ton/d plant has been constructed in Stanton, CA and is currently in the commissioning/startup phase. The purposes of the project are to verify laboratory- and intermediate-scale process performance; test the performance of large-scale prototype mechanical systems; demonstrate the long-term reliability of the process; and generate the process and economic data required for the design, financing, and construction of full-scale commercial systems. This study presents confirmational fermentation data obtained at intermediate scale and a snapshot of the pilot-scale project.

  14. Visualization Co-Processing of a CFD Simulation

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    1999-01-01

    OVERFLOW, a widely used CFD simulation code, is combined with a visualization system, pV3, to experiment with an environment for simulation/visualization co-processing on an SGI Origin 2000 computer (O2K) system. The shared memory version of the solver is used with the O2K 'pfa' preprocessor invoked to automatically discover parallelism in the source code. No other explicit parallelism is enabled. In order to study the scaling and performance of the visualization co-processing system, sample runs are made with different processor groups in the range of 1 to 254 processors. The data exchange between the visualization system and the simulation system is rapid enough for user interactivity when the problem size is small. This shared memory version of OVERFLOW, with minimal parallelization, does not scale well to an increasing number of available processors. The visualization task takes about 18 to 30% of the total processing time and does not appear to be a major contributor to the poor scaling. Improper load balancing and inter-processor communication overhead are contributors to this poor performance. Work is in progress aimed at obtaining improved parallel performance of the solver and removing the limitations of serial data transfer to pV3 by examining various parallelization/communication strategies, including the use of explicit message passing.

  15. Exploring Systems That Support Good Clinical Care in Indigenous Primary Health-care Services: A Retrospective Analysis of Longitudinal Systems Assessment Tool Data from High-Improving Services.

    PubMed

    Woods, Cindy; Carlisle, Karen; Larkins, Sarah; Thompson, Sandra Claire; Tsey, Komla; Matthews, Veronica; Bailie, Ross

    2017-01-01

    Continuous Quality Improvement is a process for raising the quality of primary health care (PHC) across Indigenous PHC services. In addition to clinical auditing using plan, do, study, and act cycles, engaging staff in a process of reflecting on systems to support quality care is vital. The One21seventy Systems Assessment Tool (SAT) supports staff to assess systems performance in terms of five key components. This study examines quantitative and qualitative SAT data from five high-improving Indigenous PHC services in northern Australia to understand the systems used to support quality care. High-improving services selected for the study were determined by calculating quality of care indices for Indigenous health services participating in the Audit and Best Practice in Chronic Disease National Research Partnership. Services that reported continuing high improvement in quality of care delivered across two or more audit tools in three or more audits were selected for the study. Precollected SAT data (from annual team SAT meetings) are presented longitudinally using radar plots for quantitative scores for each component, and content analysis is used to describe strengths and weaknesses of performance in each systems' component. High-improving services were able to demonstrate strong processes for assessing system performance and consistent improvement in systems to support quality care across components. Key strengths in the quality support systems included adequate and orientated workforce, appropriate health system supports, and engagement with other organizations and community, while the weaknesses included lack of service infrastructure, recruitment, retention, and support for staff and additional costs. Qualitative data revealed clear voices from health service staff expressing concerns with performance, and subsequent SAT data provided evidence of changes made to address concerns. Learning from the processes and strengths of high-improving services may be useful as we work with services striving to improve the quality of care provided in other areas.

  16. Visual analysis of inter-process communication for large-scale parallel computing.

    PubMed

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.

  17. Materials Research Capabilities

    NASA Technical Reports Server (NTRS)

    Stofan, Andrew J.

    1986-01-01

    Lewis Research Center, in partnership with U.S. industry and academia, has long been a major force in developing advanced aerospace propulsion and power systems. One key aspect that made many of these systems possible has been the availability of high-performance, reliable, and long-life materials. To assure a continuing flow of new materials and processing concepts, basic understanding to guide such innovation, and technological support for development of major NASA systems, Lewis has supported a strong in-house materials research activity. Our researchers have discovered new alloys, polymers, metallic composites, ceramics, coatings, processing techniques, etc., which are now also in use by U.S. industry. This brochure highlights selected past accomplishments of our materials research and technology staff. It also provides many examples of the facilities available with which we can conduct materials research. The nation is now beginning to consider integrating technology for high-performance supersonic/hypersonic aircraft, nuclear space power systems, a space station, and new research areas such as materials processing in space. As we proceed, I am confident that our materials research staff will continue to provide important contributions which will help our nation maintain a strong technology position in these areas of growing world competition.

  18. Real-Time and Post-Processed Orbit Determination and Positioning

    NASA Technical Reports Server (NTRS)

    Harvey, Nathaniel E. (Inventor); Lu, Wenwen (Inventor); Miller, Mark A. (Inventor); Bar-Sever, Yoaz E. (Inventor); Miller, Kevin J. (Inventor); Romans, Larry J. (Inventor); Dorsey, Angela R. (Inventor); Sibthorpe, Anthony J. (Inventor); Weiss, Jan P. (Inventor); Bertiger, William I. (Inventor); hide

    2015-01-01

    Novel methods and systems for the accurate and efficient processing of real-time and latent global navigation satellite systems (GNSS) data are described. Such methods and systems can perform orbit determination of GNSS satellites, orbit determination of satellites carrying GNSS receivers, positioning of GNSS receivers, and environmental monitoring with GNSS data.

  19. Real-Time and Post-Processed Orbit Determination and Positioning

    NASA Technical Reports Server (NTRS)

    Bar-Sever, Yoaz E. (Inventor); Romans, Larry J. (Inventor); Weiss, Jan P. (Inventor); Gross, Jason (Inventor); Harvey, Nathaniel E. (Inventor); Lu, Wenwen (Inventor); Dorsey, Angela R. (Inventor); Miller, Mark A. (Inventor); Sibthorpe, Anthony J. (Inventor); Bertiger, William I. (Inventor); hide

    2016-01-01

    Novel methods and systems for the accurate and efficient processing of real-time and latent global navigation satellite systems (GNSS) data are described. Such methods and systems can perform orbit determination of GNSS satellites, orbit determination of satellites carrying GNSS receivers, positioning of GNSS receivers, and environmental monitoring with GNSS data.

  20. Stability and performance analysis of a jump linear control system subject to digital upsets

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Sun, Hui; Ma, Zhen-Yang

    2015-04-01

    This paper focuses on the methodology for analysing the stability and the corresponding tracking performance of a closed-loop digital jump linear control system with a stochastic switching signal. The method is applied to a flight control system. A distributed recoverable platform is implemented on the flight control system and subjected to independent digital upsets. The upset processes are used to simulate electromagnetic environments. Specifically, the paper presents scenarios in which the upset process is directly injected into the distributed flight control system, modeled by independent Markov upset processes and by independent and identically distributed (IID) processes. A theoretical performance analysis and simulation modelling are both presented in detail for a more complete independent digital upset injection. Specific examples are proposed to verify the methodology of tracking performance analysis. General analyses for different configurations are also proposed. Comparisons among different configurations are conducted to demonstrate the availability and the characteristics of the design. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61403395), the Natural Science Foundation of Tianjin, China (Grant No. 13JCYBJC39000), the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, China, the Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation of China (Grant No. 104003020106), and the Fund for Scholars of Civil Aviation University of China (Grant No. 2012QD21x).
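
    To make the modeling idea concrete, here is a minimal sketch of a two-mode jump linear system x(k+1) = A(theta_k) x(k) whose mode theta_k switches between nominal operation and a digital upset via a Markov chain; the matrices and transition probabilities are invented for illustration and are not the paper's:

        import numpy as np

        A = [np.array([[0.9, 0.1], [0.0, 0.8]]),   # mode 0: nominal closed-loop dynamics
             np.array([[1.1, 0.3], [0.2, 0.7]])]   # mode 1: dynamics during a digital upset
        P = np.array([[0.95, 0.05],                # P[i, j] = Prob(next mode j | current mode i)
                      [0.60, 0.40]])

        rng = np.random.default_rng(0)
        x, mode = np.array([1.0, 0.0]), 0
        for k in range(100):
            x = A[mode] @ x                        # jump linear state update
            mode = rng.choice(2, p=P[mode])        # stochastic switching signal
        print("state after 100 steps:", x)

    Replacing the Markov chain with mode draws that ignore the current mode gives the IID upset process the paper also considers.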

  1. UCMS - A new signal parameter measurement system using digital signal processing techniques. [User Constraint Measurement System

    NASA Technical Reports Server (NTRS)

    Choi, H. J.; Su, Y. T.

    1986-01-01

    The User Constraint Measurement System (UCMS) is a hardware/software package developed by NASA Goddard to measure the signal parameter constraints of the user transponder in the TDRSS environment by means of an all-digital signal sampling technique. An account is presently given of the features of UCMS design and of its performance capabilities and applications; attention is given to such important aspects of the system as RF interface parameter definitions, hardware minimization, the emphasis on offline software signal processing, and end-to-end link performance. Applications to the measurement of other signal parameters are also discussed.

  2. An integral design strategy combining optical system and image processing to obtain high resolution images

    NASA Astrophysics Data System (ADS)

    Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun

    2016-05-01

    In this paper, an integral design that combines the optical system with image processing is introduced to obtain high resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in a failure of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function during optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals which are indispensable for image processing, while the ultimate goal is to obtain high resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. The Wiener filter algorithm is then adopted to process the simulated images, and the mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit for the optical imaging system are not the best, it can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages in simplifying structure and reducing cost, as well as in obtaining high resolution images, and it has a promising perspective for industrial application.
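
    A minimal sketch of the restoration step, assuming the point spread function (PSF) is known from the optical design; the regularization constant k stands in for the noise-to-signal power ratio and is an illustrative assumption:

        import numpy as np

        def wiener_deconvolve(blurred, psf, k=0.01):
            # Frequency-domain Wiener filter: F_hat = H* G / (|H|^2 + k)
            H = np.fft.fft2(psf, s=blurred.shape)
            G = np.fft.fft2(blurred)
            F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)
            return np.real(np.fft.ifft2(F_hat))

        def mse(a, b):
            # Evaluation criterion used in the paper
            return np.mean((a - b) ** 2)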

  3. Detailed requirements document for the problem reporting data system (PDS). [space shuttle and batch processing

    NASA Technical Reports Server (NTRS)

    West, R. S.

    1975-01-01

    The system is described as a computer-based system designed to track the status of problems and corrective actions pertinent to space shuttle hardware. The input, processing, output, and performance requirements of the system are presented along with standard display formats and examples. Operational requirements, hardware requirements, and test requirements are also included.

  4. Implementing An Image Understanding System Architecture Using Pipe

    NASA Astrophysics Data System (ADS)

    Luck, Randall L.

    1988-03-01

    This paper will describe PIPE and how it can be used to implement an image understanding system. Image understanding is the process of developing a description of an image in order to make decisions about its contents. The tasks of image understanding are generally split into low level vision and high level vision. Low level vision is performed by PIPE, a high-performance parallel processor with an architecture specifically designed for processing video images at up to 60 fields per second. High level vision is performed by one of several types of serial or parallel computers, depending on the application. An additional processor called ISMAP performs the conversion from iconic image space to symbolic feature space. ISMAP plugs into one of PIPE's slots and is memory mapped into the high level processor. Thus it forms the high speed link between the low and high level vision processors. The mechanisms for bottom-up, data-driven processing and top-down, model-driven processing are discussed.

  5. Performance Optimization Control of ECH using Fuzzy Inference Application

    NASA Astrophysics Data System (ADS)

    Dubey, Abhay Kumar

    Electro-chemical honing (ECH) is a hybrid electrolytic precision micro-finishing technology that, by combining the physico-chemical actions of electro-chemical machining and conventional honing processes, provides controlled functional surface generation and fast material removal capabilities in a single operation. Multi-performance process optimization has become vital for utilizing the full potential of manufacturing processes to meet the challenging requirements placed on the surface quality, size, tolerances, and production rate of engineering components in this globally competitive scenario. This paper presents a strategy that integrates Taguchi matrix experimental design, analysis of variance, and a fuzzy inference system (FIS) to formulate a robust, practical multi-performance optimization methodology for complex manufacturing processes like ECH, which involve several control variables. Two methodologies, one using genetic algorithm tuning of the FIS (GA-tuned FIS) and the other using an adaptive network-based fuzzy inference system (ANFIS), have been evaluated in a multi-performance optimization case study of ECH. The actual experimental results confirm their potential for the wide range of machining conditions employed in ECH.

  6. Automated inspection of hot steel slabs

    DOEpatents

    Martin, R.J.

    1985-12-24

    The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes. 5 figs.
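
    A minimal sketch of the dual-path validation idea, with filter and threshold choices that are our assumptions rather than the patent's circuitry:

        import numpy as np
        from scipy import ndimage

        def segment_defects(frame, intensity_thresh, edge_thresh):
            img = frame.astype(float)
            gx = ndimage.sobel(img, axis=0)              # edge-detection path
            gy = ndimage.sobel(img, axis=1)
            edge_mask = np.hypot(gx, gy) > edge_thresh
            intensity_mask = img > intensity_thresh      # intensity-threshold path
            return edge_mask & intensity_mask            # keep only segmentation both paths confirm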

  7. Automated inspection of hot steel slabs

    DOEpatents

    Martin, Ronald J.

    1985-01-01

    The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes.

  8. Integrated microfluidic systems for cell lysis, mixing/pumping and DNA amplification

    NASA Astrophysics Data System (ADS)

    Lee, Chia-Yen; Lee, Gwo-Bin; Lin, Jr-Lung; Huang, Fu-Chun; Liao, Chia-Sheng

    2005-06-01

    The present paper reports a fully automated microfluidic system for the DNA amplification process by integrating an electroosmotic pump, an active micromixer and an on-chip temperature control system. In this DNA amplification process, the cell lysis is initially performed in a micro cell lysis reactor. Extracted DNA samples, primers and reagents are then driven electroosmotically into a mixing region where they are mixed by the active micromixer. The homogeneous mixture is then thermally cycled in a micro-PCR (polymerase chain reaction) chamber to perform DNA amplification. Experimental results show that the proposed device can successfully automate the sample pretreatment operation for DNA amplification, thereby delivering significant time and effort savings. The new microfluidic system, which facilitates cell lysis, sample driving/mixing and DNA amplification, could provide a significant contribution to ongoing efforts to miniaturize bio-analysis systems by utilizing a simple fabrication process and cheap materials.

  9. How does information congruence influence diagnosis performance?

    PubMed

    Chen, Kejin; Li, Zhizhong

    2015-01-01

    Diagnosis performance is critical for the safety of high-consequence industrial systems. It depends highly on the information provided, perceived, interpreted and integrated by operators. This article examines the influence of information congruence (congruent information vs. conflicting information vs. missing information) and its interaction with time pressure (high vs. low) on diagnosis performance on a simulated platform. The experimental results reveal that the participants confronted with conflicting information spent significantly more time generating correct hypotheses and rated the results with lower probability values than when confronted with the other two levels of information congruence and were more prone to arrive at a wrong diagnosis result than when they were provided with congruent information. This finding stresses the importance of the proper processing of non-congruent information in safety-critical systems. Time pressure significantly influenced display switching frequency and completion time. This result indicates the decisive role of time pressure. Practitioner Summary: This article examines the influence of information congruence and its interaction with time pressure on human diagnosis performance on a simulated platform. For complex systems in the process control industry, the results stress the importance of the proper processing of non-congruent information in safety-critical systems.

  10. Quicker, slicker, and better? An evaluation of a web-based human resource management system

    NASA Astrophysics Data System (ADS)

    Gibb, Stephen; McBride, Andrew

    2001-10-01

    This paper reviews the design and development of a web based Human Resource Management (HRM) system which has as its foundation a 'capability profiler' tool for analysing individual or team roles in organisations. This provides a foundation for managing a set of integrated activities in recruitment and selection, performance and career management, and training and development for individuals, teams, and whole organisations. The challenges of representing and processing information about the human side of organisation encountered in the design and implementation of such systems are evident. There is a combination of legal, practical, technical and philosophical issues to be faced in the processes of defining roles, selecting staff, monitoring and managing the performance of employees in the design and implementation of such systems. The strengths and weaknesses of web based systems in this context are evaluated. This evaluation highlights both the potential, given the evolution of broader Enterprise Resource Planning (ERP) systems and strategies in manufacturing, and concerns about the migration of HRM processes to such systems.

  11. Welding process modelling and control

    NASA Technical Reports Server (NTRS)

    Romine, Peter L.; Adenwala, Jinen A.

    1993-01-01

    The research and analysis performed, the software developed, and the hardware/software recommendations made during 1992 in the development of the PC-based data acquisition system for support of Welding Process Modeling and Control are reported. A need was identified by the Metals Processing Branch of NASA Marshall Space Flight Center for a mobile data acquisition and analysis system, customized for welding measurement and calibration. Several hardware configurations were evaluated and a PC-based system was chosen. The Welding Measurement System (WMS) is a dedicated instrument, strictly for the use of data acquisition and analysis. Although the WMS supports many of the functions associated with process control, it is not the intention for this system to be used for welding process control.

  12. Suomi NPP Ground System Performance

    NASA Astrophysics Data System (ADS)

    Grant, K. D.; Bergeron, C.

    2013-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS will replace the afternoon orbit component and ground processing system of the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA. The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological and geophysical observations of the Earth. The first satellite in the JPSS constellation, known as the Suomi National Polar-orbiting Partnership (Suomi NPP) satellite, was launched on 28 October 2011, and is currently undergoing product calibration and validation activities. As products reach a beta level of maturity, they are made available to the community through NOAA's Comprehensive Large Array-data Stewardship System (CLASS). The JPSS Common Ground System (CGS) data processing capability processes the satellite data from the Joint Polar Satellite System satellites to provide environmental data products (including Sensor Data Records (SDRs) and Environmental Data Records (EDRs)) to NOAA and Department of Defense (DoD) processing centers operated by the United States government. CGS is currently processing and delivering SDRs and EDRs for Suomi NPP and will continue through the lifetime of the Joint Polar Satellite System programs. Following the launch and sensor activation phase of the Suomi NPP mission, full volume data traffic is now flowing from the satellite through CGS's C3, data processing, and data delivery systems. Ground system performance is critical for this operational system. As part of early system checkout, Raytheon measured all aspects of data acquisition, routing, processing, and delivery to ensure operational performance requirements are met, and will continue to be met throughout the mission. Raytheon developed a tool to measure, categorize, and automatically adjudicate packet behavior across the system, and metrics collected by this tool form the basis of the information to be presented. This presentation will provide details of ground system processing performance, such as data rates through each of the CGS nodes, data accounting statistics, and retransmission rates and success, along with data processing throughput, data availability, and latency. In particular, two key metrics relating to the most important operational measures, availability (the ratio of actual granules delivered to the theoretical maximum number of granules that could be delivered over a particular period) and latency (the time from the detection of a photon by an instrument to the time a product is made available to the data consumer's interface), are provided for Raw Data Records (RDRs), SDRs, and EDRs. Specific availability metrics include Adjusted Expected Granules (the count of the theoretical maximum number of granules minus adjudicated exceptions (granules missing due to factors external to the CGS)), Data Made Available (DMA) (the number of granules provided to CLASS) and Availability Results. Latency metrics are similar, including Data Made Available Minus Exceptions, Data Made Latency, and Latency Results. Overall results, measured during a ninety-day period from October 2012 through January 2013, are excellent, with all values surpassing system requirements.
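
    A minimal sketch of the availability metric as defined above; the variable names and example numbers are ours:

        def availability(theoretical_max, external_exceptions, granules_delivered):
            # Adjusted Expected Granules excludes granules missing for reasons
            # external to the CGS; availability is delivered over adjusted expected.
            adjusted_expected = theoretical_max - external_exceptions
            return granules_delivered / adjusted_expected

        # e.g. 86,350 granules delivered against 86,500 theoretical with 100 exceptions
        print(f"{availability(86500, 100, 86350):.2%}")   # -> 99.94%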

  13. Research of real-time video processing system based on 6678 multi-core DSP

    NASA Astrophysics Data System (ADS)

    Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang

    2017-10-01

    In the information age, intelligent video processing is developing rapidly, and its complex algorithms pose a powerful challenge to processor performance. In this article, an FPGA + TMS320C6678 frame structure integrates image defogging, image fusion, and image stabilization and enhancement into an organic whole. The system achieves good real-time behavior and superior performance, breaking through the defects of traditional video processing systems, whose functions are simple and whose products are undiversified, and it addresses video applications such as security monitoring and video surveillance. This allows video monitoring to take full effect and improves enterprise economic benefits.

  14. A Methodology for Making Early Comparative Architecture Performance Evaluations

    ERIC Educational Resources Information Center

    Doyle, Gerald S.

    2010-01-01

    Complex and expensive systems' development suffers from a lack of method for making good system-architecture-selection decisions early in the development process. Failure to make a good system-architecture-selection decision increases the risk that a development effort will not meet cost, performance and schedule goals. This research provides a…

  15. Digital nonlinearity compensation in high-capacity optical communication systems considering signal spectral broadening effect.

    PubMed

    Xu, Tianhua; Karanov, Boris; Shevchenko, Nikita A; Lavery, Domaniç; Liga, Gabriele; Killey, Robert I; Bayvel, Polina

    2017-10-11

    Nyquist-spaced transmission and digital signal processing have proved effective in maximising the spectral efficiency and reach of optical communication systems. In these systems, Kerr nonlinearity determines the performance limits and leads to spectral broadening of the signals propagating in the fibre. Although digital nonlinearity compensation has been validated as promising for mitigating Kerr nonlinearities, the impact of spectral broadening on nonlinearity compensation has never been quantified. In this paper, the performance of multi-channel digital back-propagation (MC-DBP) for compensating fibre nonlinearities in Nyquist-spaced optical communication systems is investigated when the effect of signal spectral broadening is considered. It is found that accounting for the spectral broadening effect is crucial for achieving the best performance of DBP in both single-channel and multi-channel communication systems, independent of the modulation formats used. For multi-channel systems, the degradation of DBP performance due to neglecting the spectral broadening effect in the compensation is more significant for outer channels. Our work also quantifies the minimum bandwidths of optical receivers and signal processing devices required to ensure the optimal compensation of deterministic nonlinear distortions.
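
    A minimal sketch of single-channel digital back-propagation via the symmetric split-step Fourier method; the fibre parameters and sign conventions are generic illustrations, with beta2 and gamma already negated relative to the transmission fibre so that the virtual link undoes dispersion and Kerr nonlinearity:

        import numpy as np

        def dbp_span(field, fs, span_len, n_steps, beta2=21.7e-27, gamma=-1.3e-3):
            dz = span_len / n_steps
            w = 2 * np.pi * np.fft.fftfreq(field.size, d=1.0 / fs)
            half_D = np.exp(0.5j * (beta2 / 2) * w**2 * dz)          # half linear step
            for _ in range(n_steps):
                field = np.fft.ifft(np.fft.fft(field) * half_D)
                field *= np.exp(1j * gamma * np.abs(field)**2 * dz)  # nonlinear step
                field = np.fft.ifft(np.fft.fft(field) * half_D)
            return field

    The paper's point is that the receiver and DSP bandwidth fed into such a routine must cover the spectrally broadened signal, not just the transmitted bandwidth, or the compensation becomes suboptimal, especially for the outer WDM channels.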

  16. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as the multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share the common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
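
    The scalability analysis follows Amdahl's law: with a fraction p of the work parallelizable over n cores, speedup = 1 / ((1 - p) + p / n). A minimal sketch with an illustrative p (the paper's fitted value may differ):

        def amdahl_speedup(p, n):
            return 1.0 / ((1.0 - p) + p / n)

        for n in (1, 12, 48, 256):
            print(n, round(amdahl_speedup(0.99, n), 1))   # -> 1.0, 10.8, 32.7, 72.1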

  17. Does Decentralization Improve Health System Performance and Outcomes in Low- and Middle-Income Countries? A Systematic Review of Evidence From Quantitative Studies.

    PubMed

    Dwicaksono, Adenantera; Fox, Ashley M

    2018-06-01

    Policy Points: For more than 3 decades, international development agencies have advocated health system decentralization to improve health system performance in low- and middle-income countries. We found little rigorous evidence documenting the impact of decentralization processes on health system performance or outcomes in part due to challenges in measuring such far-reaching and multifaceted system-level changes. We propose a renewed research agenda that focuses on discrete definitions of decentralization and how institutional factors and mechanisms affect health system performance and outcomes within the general context of decentralized governance structures. Despite the widespread adoption of decentralization reforms as a means to improve public service delivery in developing countries since the 1980s, empirical evidence of the role of decentralization on health system improvement is still limited and inconclusive. This study reviewed studies published from 2000 to 2016 with adequate research designs to identify evidence on whether and how decentralization processes have impacted health systems. We conducted a systematic review of peer-reviewed journal articles from the public health and social science literature. We searched for articles within 9 databases using predefined search terms reflecting decentralization and health system constructs. Inclusion criteria were original research articles, low- and middle-income country settings, quantifiable outcome measures, and study designs that use comparisons or statistical adjustments. We excluded studies in high-income country settings and/or published in a non-English language. Sixteen studies met our prespecified inclusion and exclusion criteria and were grouped based on outcomes measured: health system inputs (n = 3), performance (n = 7), and health outcomes (n = 7). Numerous studies addressing conceptual issues related to decentralization but without any attempt at empirical estimation were excluded. Overall, we found mixed results regarding the effects of decentralization on health system indicators with seemingly beneficial effects on health system performance and health outcomes. Only 10 studies were considered to have relatively low risks of bias. This study reveals the limited empirical knowledge of the impact of decentralization on health system performance. Mixed empirical findings on the role of decentralization on health system performance and outcomes highlight the complexity of decentralization processes and their systemwide effects. Thus, we propose a renewed research agenda that focuses on discrete definitions of decentralization and how institutional factors and mechanisms affect health system performance and outcomes within the general context of decentralized governance structures. © 2018 Milbank Memorial Fund.

  18. FPGA cluster for high-performance AO real-time control system

    NASA Astrophysics Data System (ADS)

    Geng, Deli; Goodsell, Stephen J.; Basden, Alastair G.; Dipper, Nigel A.; Myers, Richard M.; Saunter, Chris D.

    2006-06-01

    Whilst the high throughput and low latency requirements for the next generation AO real-time control systems have posed a significant challenge to von Neumann architecture processor systems, the Field Programmable Gate Array (FPGA) has emerged as a long term solution with high performance on throughput and excellent predictability on latency. Moreover, FPGA devices have highly capable programmable interfacing, which leads to a more highly integrated system. Nevertheless, a single FPGA is still not enough: multiple FPGA devices need to be clustered to perform the required subaperture processing and the reconstruction computation. In an AO real-time control system, the memory bandwidth is often the bottleneck of the system, simply because a vast amount of supporting data, e.g. pixel calibration maps and the reconstruction matrix, need to be accessed within a short period. The cluster, as a general computing architecture, has excellent scalability in processing throughput, memory bandwidth, memory capacity, and communication bandwidth. Problems such as task distribution, node communication, and system verification are discussed.

  19. Extended testing of compression distillation.

    NASA Technical Reports Server (NTRS)

    Bambenek, R. A.; Nuccio, P. P.

    1972-01-01

    During the past eight years, the NASA Manned Spacecraft Center has supported the development of an integrated water and waste management system which includes the compression distillation process for recovering useable water from urine, urinal flush water, humidity condensate, commode flush water, and concentrated wash water. This paper describes the design of the compression distillation unit, developed for this system, and the testing performed to demonstrate its reliability and performance. In addition, this paper summarizes the work performed on pretreatment and post-treatment processes, to assure the recovery of sterile potable water from urine and treated urinal flush water.

  20. An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.

    PubMed

    Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero

    2017-04-01

    The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to the department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and by offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor required. The daily QA system is built around a phantom image taken by the radiographers at the beginning of day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
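
    A minimal sketch of one such daily time-series metric, phantom SNR from fixed regions of interest; the ROI coordinates and the SNR definition are our assumptions, and the actual workflow derives many more parameters:

        import numpy as np

        def phantom_snr(image):
            signal_roi = image[96:160, 96:160]   # assumed phantom centre of a 256x256 image
            noise_roi = image[0:32, 0:32]        # assumed background-air corner
            return signal_roi.mean() / noise_roi.std()

        # Appending (acquisition_date, phantom_snr(img)) per scanner each morning
        # yields the long-term stability curves shown in the web interface.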

  1. Evaluating Non-In-Place Update Techniques for Flash-Based Transaction Processing Systems

    NASA Astrophysics Data System (ADS)

    Wang, Yongkun; Goda, Kazuo; Kitsuregawa, Masaru

    Recently, flash memory has been emerging as a mainstream storage device. With its price sliding fast, the cost per capacity is approaching that of SATA disk drives. So far flash memory has been widely deployed in consumer electronics and, in part, in mobile computing environments. For enterprise systems, deployment has been studied by many researchers and developers. In terms of access performance characteristics, flash memory is quite different from disk drives. Without mechanical components, flash memory has very high random read performance, whereas it has limited random write performance because of the erase-before-write design. The random write performance of flash memory is comparable with, or even worse than, that of disk drives. Due to such performance asymmetry, naive deployment in enterprise systems may not exploit the potential performance of flash memory at full blast. This paper studies the effectiveness of using non-in-place-update (NIPU) techniques through the IO path of flash-based transaction processing systems. Our deliberate experiments using both an open-source DBMS and a commercial DBMS validated the potential benefits; a x3.0 to x6.6 performance improvement was confirmed by incorporating non-in-place-update techniques into the file system without any modification of applications or storage devices.
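
    A minimal sketch of the non-in-place-update idea itself (our illustration, not the paper's implementation): updates are appended sequentially, which suits flash, while an index tracks the newest version of each record:

        class AppendOnlyStore:
            def __init__(self):
                self.log = []      # append-only log: sequential, flash-friendly writes
                self.index = {}    # key -> offset of the newest version

            def put(self, key, value):
                self.index[key] = len(self.log)
                self.log.append((key, value))    # old versions are never overwritten in place

            def get(self, key):
                return self.log[self.index[key]][1]

        store = AppendOnlyStore()
        store.put("row42", "v1"); store.put("row42", "v2")
        print(store.get("row42"))   # -> v2; v1 remains until garbage collection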

  2. A KPI framework for process-based benchmarking of hospital information systems.

    PubMed

    Jahn, Franziska; Winter, Alfred

    2011-01-01

    Benchmarking is a major topic for monitoring, directing and elucidating the performance of hospital information systems (HIS). Current approaches neglect the outcome of the processes that are supported by the HIS and their contribution to the hospital's strategic goals. We suggest to benchmark HIS based on clinical documentation processes and their outcome. A framework consisting of a general process model and outcome criteria for clinical documentation processes is introduced.

  3. Innovation in Information Technology: Theoretical and Empirical Study in SMQR Section of Export Import in Automotive Industry

    NASA Astrophysics Data System (ADS)

    Edi Nugroho Soebandrija, Khristian; Pratama, Yogi

    2014-03-01

    This paper has the objective of providing innovation in information technology through both a theoretical and an empirical study. Precisely, both aspects relate to Shortage Mispacking Quality Report (SMQR) claims in export and import in the automotive industry. This paper discusses the major aspects of innovation, information technology, performance, and competitive advantage. The empirical study of PT. Astra Honda Motor (AHM) refers to SMQR claims, communication systems, and systems analysis and design. Both the major aspects and the empirical study are introduced briefly in the Introduction and discussed in more detail in the remaining sections, in particular in the Literature Review in terms of classical and current references. The increase in SMQR claims and the communication problems at PT. Astra Daihatsu Motor (PT. ADM), which still relied on email, lengthened claim settlement times and ultimately caused suppliers to reject SMQR claims. Given this problem, an integrated communication system was designed to manage the SMQR claim communication process between PT. ADM and its suppliers. The system analysed and designed is expected to facilitate the claim communication process so that it runs in accordance with the procedure, fulfills the claim settlement time target, and eliminates the difficulties and problems of the previous manual email-based communication system. The design process followed the system development life cycle method of Kendall & Kendall (2006), covering the SMQR problem communication process, the supplier judgment process, the claim process, the claim payment process, and the claim monitoring process. After suitable system designs for managing SMQR claims were obtained, the system was implemented; the claim communication process improved and settlement times became faster, achieving the target. The conclusion comprises two major aspects: the first in terms of theory and concept, the second in terms of the empirical study of one of the automotive industries in Indonesia. Both are expected to contribute to current and future research on the aspects discussed in this paper.

  4. Performance and evaluation of real-time multicomputer control systems

    NASA Technical Reports Server (NTRS)

    Shin, K. G.

    1983-01-01

    New performance measures, detailed examples, modeling of error detection process, performance evaluation of rollback recovery methods, experiments on FTMP, and optimal size of an NMR cluster are discussed.

  5. Experimental and computational investigation of Morse taper conometric system reliability for the definition of fixed connections between dental implants and prostheses.

    PubMed

    Bressan, Eriberto; Lops, Diego; Tomasi, Cristiano; Ricci, Sara; Stocchero, Michele; Carniel, Emanuele Luigi

    2014-07-01

    Nowadays, dental implantology is a reliable technique for the treatment of partially and completely edentulous patients. The achievement of stable dentition is ensured by implant-supported fixed dental prostheses. A Morse taper conometric system may provide fixed retention between implants and dental prostheses. The aim of this study was to investigate the retentive performance and mechanical strength of a Morse taper conometric system used for implant-supported fixed dental prosthesis retention. Experimental and finite element investigations were performed. Experimental tests were carried out on a specific abutment-coping system, accounting for both cemented and non-cemented situations. The results from the experimental activities were processed to identify the mechanical behavior of the coping-abutment interface. Finally, the information obtained was applied to develop reliable finite element models of different abutment-coping systems. The analyses accounted for different geometrical conformations of the abutment-coping system, such as different taper angles. The results showed that the activation process, achieved through a suitable insertion force, could provide retentive performance equal to that of a cemented system without compromising the mechanical functionality of the system. These findings suggest that a Morse taper conometric system can provide a fixed connection between implants and dental prostheses if a proper insertion force is applied. The activation process does not compromise the mechanical functionality of the system. © IMechE 2014.

  6. An Empirical Study of Combining Communicating Processes in a Parallel Discrete Event Simulation

    DTIC Science & Technology

    1990-12-01

    [OCR-garbled DTIC excerpt; only fragments are recoverable] ...dynamics of the cost/performance criteria which typically made up computer resource acquisition decisions... offering a broad range of tradeoffs in the way... [how communicating] processes [are combined] has a significant impact on simulation performance. It is the hypothesis of this [study]... A fragment of the simulator's event loop survives: while the next-event queue is not empty, next-event = pop(next-event-queue); lp-clock = next-event.time; simulate the next event (arrival/departure), consume the event, and enqueue any new events.
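
    A minimal reconstruction of that loop as runnable code, with event semantics that are our assumptions:

        import heapq

        event_queue = [(0.5, "arrival"), (1.2, "departure")]
        heapq.heapify(event_queue)
        lp_clock = 0.0

        while event_queue:                                  # while queue not EMPTY
            timestamp, kind = heapq.heappop(event_queue)    # next-event = pop(next-event-queue)
            lp_clock = timestamp                            # lp-clock = next-event.time
            if kind == "arrival":                           # simulating an event may enqueue new events
                heapq.heappush(event_queue, (lp_clock + 1.0, "departure"))
        print("logical process finished at t =", lp_clock)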

  7. On the Performance Potential of Bioelectrochemical Life Support Systems

    NASA Technical Reports Server (NTRS)

    Mansell, J. Matthew

    2013-01-01

    An area of growing multi-disciplinary research and revolutionary development for bio-processing on Earth is bioelectrochemical systems. These systems exploit the capability of many microorganisms to act as biocatalysts, enhancing the performance of electrochemical processes which convert low-value materials into valuable products. Many varieties of such processes hold potential value for space exploration as a means to recycle metabolic waste and other undesirable materials or in-situ resources into oxygen, water, and other valuable substances. However, the wide range of possible reactants, products, configurations, and operating parameters, along with the early stage of development and application on the ground, necessitates thorough consideration of which, if any, possibilities could outperform existing technologies and should thus receive investment for space applications. In turn, the decision depends on the theoretical and practical limits of performance and the value of the reactant-product conversions within spaceflight scenarios, and should, to the greatest extent possible, be examined from the perspective of a fully designed, integrated system, rather than as an isolated unit lacking critical components like valves and pumps. Herein, we select a series of possible reactant-product conversions, develop concept process flow diagrams for each, and estimate theoretical and (where sufficient literature data allows) practical performance limitations of each. The objective was to estimate the costs, benefits, and risks of each concept in order to aid strategic decisions in the early-phase technology development effort.

  8. SAR operational aspects

    NASA Astrophysics Data System (ADS)

    Holmdahl, P. E.; Ellis, A. B. E.; Moeller-Olsen, P.; Ringgaard, J. P.

    1981-12-01

    The basic requirements of the SAR ground segment of ERS-1 are discussed. A system configuration for the real time data acquisition station and the processing and archive facility is depicted. The functions of a typical SAR processing unit (SPU) are specified, and inputs required for near real time and full precision, deferred time processing are described. Inputs and the processing required for provision of these inputs to the SPU are dealt with. Data flow through the systems, and normal and nonnormal operational sequence, are outlined. Prerequisites for maintaining overall performance are identified, emphasizing quality control. The most demanding tasks to be performed by the front end are defined in order to determine types of processors and peripherals which comply with throughput requirements.

  9. Software and Hardware System for Fast Processes Study When Preparing Foundation Beds of Oil and Gas Facilities

    NASA Astrophysics Data System (ADS)

    Gruzin, A. V.; Gruzin, V. V.; Shalay, V. V.

    2018-04-01

    Analysis of existing technologies for preparing foundation beds of oil and gas buildings and structures has revealed the lack of reasoned recommendations on the selection of rational technical and technological parameters of compaction. To study the dynamics of fast processes during compaction of foundation beds of oil and gas facilities, a specialized software and hardware system was developed. The method of calculating the basic technical parameters of the equipment for recording fast processes is presented, as well as the algorithm for processing the experimental data. The preliminary studies performed confirmed the soundness of the decisions made and the calculations performed.

  10. EOS MLS Science Data Processing System: A Description of Architecture and Capabilities

    NASA Technical Reports Server (NTRS)

    Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.

    2006-01-01

    This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.

  11. Effects of straw processing and pen overstocking on the growth performance and sorting characteristics of diets offered to replacement Holstein dairy heifers

    USDA-ARS?s Scientific Manuscript database

    The effects of pen-stocking density and straw processing on the growth performance of Holstein dairy heifers housed in a free-stall system are not well understood. Our objectives were to evaluate these factors on the growth performance, feed-bunk sorting behaviors, daily behavioral traits, and hygie...

  12. Finance and supply management project execution plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BENNION, S.I.

    As a subproject of the HANDI 2000 project, the Finance and Supply Management system is intended to serve FDH and Project Hanford major subcontractors with financial processes, including general ledger, project costing, budgeting, and accounts payable, and supply management processes, including purchasing, inventory, and contracts management. Currently these functions are performed with numerous legacy information systems and suboptimized processes.

  13. Method for sequentially processing a multi-level interconnect circuit in a vacuum chamber

    NASA Technical Reports Server (NTRS)

    Routh, D. E.; Sharma, G. C. (Inventor)

    1982-01-01

    The processing of wafer devices to form multilevel interconnects for microelectronic circuits is described. The method is directed to performing the sequential steps of etching the via, removing the photoresist pattern, back-sputtering the entire wafer surface, and depositing the next layer of interconnect material under common vacuum conditions without exposure to atmospheric conditions. Apparatus for performing the method includes a vacuum system having a vacuum chamber in which wafers are processed on rotating turntables. The vacuum chamber is provided with an RF sputtering system and a DC magnetron sputtering system. A gas inlet is provided in the chamber for the introduction of various gases to the vacuum chamber and the creation of various gas plasmas during the sputtering steps.

  14. An optimal design of wind turbine and ship structure based on neuro-response surface method

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young

    2015-07-01

    The geometry of engineering systems affects their performance. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems consist of multi-objective optimization, and the performance analysis using commercial code or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using an approximation model (response surface). The Response Surface Method (RSM) is generally used to predict system performance in the engineering research field, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of the response surface, and the optimization process. To reduce the time for performance analysis and minimize prediction errors, the approximation model is generated using a Backpropagation Artificial Neural Network (BPANN), an approach referred to as the Neuro-Response Surface Method (NRSM). The optimization is performed on the generated response surface by the non-dominated sorting genetic algorithm-II (NSGA-II). Through case studies of a marine system and a ship structure (the substructure of a floating offshore wind turbine, considering hydrodynamic performance, and bulk carrier bottom stiffened panels, considering structural performance), we have confirmed the applicability of the proposed method for multi-objective side-constraint optimization problems.

  15. Oxygen Transfer in Moving Bed Biofilm Reactor and Integrated Fixed Film Activated Sludge Processes.

    PubMed

    2017-11-17

    A demonstrated approach to designing the so-called medium-bubble air diffusion network for oxygen transfer into the aerobic zone(s) of moving bed biofilm reactor (MBBR) and integrated fixed-film activated sludge (IFAS) processes is described in this paper. Operational full-scale biological water resource recovery systems treating municipal sewerage demonstrate that medium-bubble air diffusion networks designed using the method presented here provide reliable service. Further improvement is possible, however, as knowledge gaps prevent more rational process designs. Filling such knowledge gaps can potentially result in higher-performing and more economical systems. Small-scale system testing demonstrates significant enhancement of oxygen transfer capacity due to the presence of media, but quantification of such effects in full-scale systems is lacking, and is needed. Establishment of the relationship between diffuser submergence, aeration rate, and biofilm carrier fill fraction will enhance MBBR and IFAS aerobic process design, cost, and performance. Limited testing of full-scale systems is available to allow computation of alpha values. As with clean water testing of full-scale systems, further full-scale testing under actual operating conditions is required to more fully quantify MBBR and IFAS system oxygen transfer performance under a wide range of operating conditions. Control of MBBR and IFAS aerobic zone oxygen transfer systems can be optimized by recognizing that varying residual dissolved oxygen (DO) concentrations are needed, depending on operating conditions. For example, the DO concentration in the aerobic zone of nitrifying IFAS processes can be lowered during warm weather conditions when greater suspended-growth nitrification can occur, resulting in the need for reduced nitrification by the biofilm compartment. Further application of oxygen transfer control approaches used in activated sludge systems to MBBR and IFAS systems, such as ammonia-based oxygen transfer system control, has been demonstrated to further improve MBBR and IFAS system energy efficiency.
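
    For reference, the alpha values discussed above enter the standard transfer relationship OTR = alpha x kLa(clean) x (C* - C) x V; a minimal sketch with illustrative numbers, not values from the paper:

        def oxygen_transfer_rate(alpha, kla_clean, c_sat, c_op, volume):
            # alpha: process-water correction factor; kla_clean: clean-water mass-transfer
            # coefficient (1/h); concentrations in kg/m^3; volume in m^3 -> kg O2/h
            return alpha * kla_clean * (c_sat - c_op) * volume

        print(oxygen_transfer_rate(alpha=0.55, kla_clean=6.0,
                                   c_sat=9.1e-3, c_op=2.0e-3, volume=3000))  # ~70 kg O2/h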

  16. Noisy text categorization.

    PubMed

    Vinciarelli, Alessandro

    2005-12-01

    This work presents categorization experiments performed over noisy texts. By noisy, we mean any text obtained through an extraction process (affected by errors) from media other than digital texts (e.g., transcriptions of speech recordings extracted with a recognition system). The performance of a categorization system over the clean and noisy (Word Error Rate between approximately 10 and approximately 50 percent) versions of the same documents is compared. The noisy texts are obtained through handwriting recognition and simulation of optical character recognition. The results show that the performance loss is acceptable for Recall values up to 60-70 percent depending on the noise sources. New measures of the extraction process performance, allowing a better explanation of the categorization results, are proposed.
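
    The Word Error Rate cited above is the word-level edit distance between the reference and the extracted text, divided by the reference length; a minimal sketch:

        def wer(reference, hypothesis):
            r, h = reference.split(), hypothesis.split()
            d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
            for i in range(len(r) + 1):
                d[i][0] = i                      # deletions
            for j in range(len(h) + 1):
                d[0][j] = j                      # insertions
            for i in range(1, len(r) + 1):
                for j in range(1, len(h) + 1):
                    sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
                    d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
            return d[len(r)][len(h)] / len(r)

        print(wer("the cat sat", "the hat sat"))   # -> 0.333..., one substitution in three words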

  17. Learning Style and Ability Grouping in the High School System: Some Caribbean Findings.

    ERIC Educational Resources Information Center

    Richardson, Arthur G.; Fergus, Eudora E.

    1993-01-01

    The Inventory of Learning Processes assessed the learning styles of Caribbean ninth graders (47 boys, 67 girls) in 2 ability groups. The higher ability group performed better in deep processing, fact retention, and methodical study. Girls performed better in methodical study. (SK)

  18. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…

  19. Importance Of Quality Control in Reducing System Risk, a Lesson Learned From The Shuttle and a Recommendation for Future Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Messer, Bradley P.

    2006-01-01

    This paper presents lessons learned from the Space Shuttle return-to-flight experience and the importance of these lessons in the development of the new NASA Crew Launch Vehicle (CLV). Specifically, the paper discusses the relationship between process control and system risk, and the importance of process control in improving space vehicle flight safety. It uses the External Tank (ET) Thermal Protection System (TPS) experience and lessons learned from the redesign and process enhancement activities performed in preparation for Return to Flight after the Columbia accident. The paper also discusses, in some detail, the probabilistic engineering physics-based risk assessment performed by the Shuttle program to evaluate the impact of TPS failure on system risk and the application of the methodology to the CLV.

  20. Toward high fidelity spectral sensing and RF signal processing in silicon photonic and nano-opto-mechanical platforms

    NASA Astrophysics Data System (ADS)

    Siddiqui, Aleem; Reinke, Charles; Shin, Heedeuk; Jarecki, Robert L.; Starbuck, Andrew L.; Rakich, Peter

    2017-05-01

    The performance of electronic systems for radio-frequency (RF) spectrum analysis is critical for agile radar and communications systems, ISR (intelligence, surveillance, and reconnaissance) operations in challenging electromagnetic (EM) environments, and EM-environment situational awareness. While considerable progress has been made in size, weight, and power (SWaP) and performance metrics in conventional RF technology platforms, fundamental limits make continued improvements increasingly difficult. Alternatively, we propose employing cascaded transduction processes in a chip-scale nano-optomechanical system (NOMS) to achieve a spectral sensor with exceptional signal linearity, high dynamic range, narrow spectral resolution, and ultra-fast sweep times. By leveraging the optimal capabilities of photons and phonons, the system we pursue in this work has performance metrics scalable well beyond the fundamental limitations inherent to all-electronic systems. In our device architecture, information processing is performed on wide-bandwidth RF-modulated optical signals by photon-mediated phononic transduction of the modulation to the acoustical domain for narrow-band filtering, and then back to the optical domain by phonon-mediated phase modulation (the reverse process). Here, we rely on photonics to efficiently distribute signals for parallel processing, and on phononics for effective and flexible RF-frequency manipulation. This technology is used to create RF filters that are insensitive to the optical wavelength, with wide center-frequency selectivity (1-100 GHz), ultra-narrow filter bandwidth (1-100 MHz), and high dynamic range (70 dB), which we will present. Additionally, using this filter as a building block, we will discuss current results and progress toward demonstrating a multichannel filter with a bandwidth of < 10 MHz per channel, while minimizing cumulative optical/acoustic/optical transduced insertion loss to ideally < 10 dB. These proposed metrics represent significant improvements over existing RF platforms.

  1. Freeway performance measurement system : an operational analysis tool

    DOT National Transportation Integrated Search

    2001-07-30

    PeMS is a freeway performance measurement system for all of California. It processes 2 GB/day of 30-second loop detector data in real time to produce useful information. Managers at any time can have a uniform and comprehensive assessment of fre...

  2. Lane marking/striping to improve image processing lane departure warning systems.

    DOT National Transportation Integrated Search

    2007-05-01

    Vision-based Lane Departure Warning Systems (LDWS) depend on pavement marking tracking to determine that vehicles perform unintended drifts out of the travel lanes. Thus, it is expected that the performance of these LDWS is influenced by the vis...

  3. Develop Advanced Nonlinear Signal Analysis Topographical Mapping System

    NASA Technical Reports Server (NTRS)

    Jong, Jen-Yi

    1997-01-01

    During the development of the SSME, a hierarchy of advanced signal analysis techniques for mechanical signature analysis has been developed by NASA and AI Signal Research Inc. (ASRI) to improve the safety and reliability of Space Shuttle operations. These techniques can process and identify intelligent information hidden in a measured signal which is often unidentifiable using conventional signal analysis methods. Currently, due to the highly interactive processing requirements and the volume of dynamic data involved, detailed diagnostic analysis is performed manually, which requires immense man-hours and extensive human interaction. To overcome this manual process, NASA implemented this program to develop an Advanced nonlinear signal Analysis Topographical Mapping System (ATMS) to provide automatic/unsupervised engine diagnostic capabilities. The ATMS utilizes a rule-based CLIPS expert system to supervise a hierarchy of diagnostic signature analysis techniques in the Advanced Signal Analysis Library (ASAL). ASAL performs automatic signal processing, archiving, and anomaly detection/identification tasks in order to provide an intelligent and fully automated engine diagnostic capability. The ATMS has been successfully developed under this contract. In summary, the program objectives to design, develop, test, and conduct performance evaluation for an automated engine diagnostic system have been successfully achieved. Software implementation of the entire ATMS system on MSFC's OISPS computer has been completed. The significance of the ATMS developed under this program is attributed to its fully automated coherence analysis capability for anomaly detection and identification, which can greatly enhance the power and reliability of engine diagnostic evaluation. The results have demonstrated that ATMS can significantly save time and man-hours in performing engine test/flight data analysis and performance evaluation of large volumes of dynamic test data.
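
    A minimal sketch of the kind of coherence analysis ATMS automates: the magnitude-squared coherence between two sensor channels flags frequencies where a common (possibly anomalous) component is present. The signals, sampling rate, and threshold below are illustrative assumptions:

        import numpy as np
        from scipy import signal

        fs = 10240.0
        t = np.arange(0, 4.0, 1 / fs)
        rng = np.random.default_rng(1)
        x = np.sin(2 * np.pi * 600 * t) + rng.normal(size=t.size)        # channel A
        y = np.sin(2 * np.pi * 600 * t + 0.3) + rng.normal(size=t.size)  # channel B

        f, Cxy = signal.coherence(x, y, fs=fs, nperseg=1024)
        print(f[Cxy > 0.8])   # strongly coherent frequencies, here near 600 Hz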

  4. Parallel processing in a host plus multiple array processor system for radar

    NASA Technical Reports Server (NTRS)

    Barkan, B. Z.

    1983-01-01

    Host plus multiple array processor architecture is demonstrated to yield a modular, fast, and cost-effective system for radar processing. Software methodology for programming such a system is developed. Parallel processing with pipelined data flow among the host, array processors, and discs is implemented. Theoretical analysis of performance is made and experimentally verified. The broad class of problems to which the architecture and methodology can be applied is indicated.

  5. Policy to Performance: State ABE Transition Systems Report. Transitioning Adults to Opportunity

    ERIC Educational Resources Information Center

    Alamprese, Judith A.

    2012-01-01

    The U.S. Department of Education's Policy to Performance project was funded in 2009 to build the capacity of state adult basic education (ABE) staff to develop and implement policies and practices that would support an ABE transition system. Policy to Performance states were selected through a competitive process. State adult education directors…

  6. Advanced information processing system - Status report. [for fault tolerant and damage tolerant data processing for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Brock, L. D.; Lala, J.

    1986-01-01

    The Advanced Information Processing System (AIPS) is designed to provide a fault tolerant and damage tolerant data processing architecture for a broad range of aerospace vehicles. The AIPS architecture also has attributes to enhance system effectiveness such as graceful degradation, growth and change tolerance, integrability, etc. Two key building blocks being developed by the AIPS program are a fault and damage tolerant processor and communication network. A proof-of-concept system is now being built and will be tested to demonstrate the validity and performance of the AIPS concepts.

  7. Influence of lateral displacement on the levitation performance of a magnetized bulk high-Tc superconductor magnet

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wang, J. S.; Ma, G. T.; Zheng, J.; Tuo, X. G.; Li, L. L.; Ye, C. Q.; Liao, X. L.; Wang, S. Y.

    2012-03-01

    Compared with the permanent magnet, the magnetized bulk high-Tc superconductor magnet (MBSCM) can trap a higher magnetic field due to its strong flux-pinning ability, making it a good candidate to improve the levitation performance of high-Tc superconducting (HTS) maglev systems. The trapped magnetic flux of a MBSCM is sustained by the inductive superconducting current produced by the magnetizing process and is sensitive to both the current intensity and its configuration. In the HTS maglev system, lateral displacement is an important process that changes the superconducting current within a MBSCM and thus affects its levitation performance, which is essential for traffic ability on curved track, the loading capacity under lateral impact, and so on. Research on the influence of lateral displacement on the levitation performance of the MBSCM is therefore necessary when the MBSCM is applied to the HTS maglev vehicle. Experimental investigations of the influence of lateral displacement on the levitation performance of a MBSCM with different trapped fluxes and applied fields are presented in this article. The analyses and conclusions of this article are useful for the practical application of MBSCM in HTS maglev systems.

  8. Engineered Barrier System: Physical and Chemical Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. Dixon

    2004-04-26

    The conceptual and predictive models documented in this Engineered Barrier System: Physical and Chemical Environment Model report describe the evolution of the physical and chemical conditions within the waste emplacement drifts of the repository. The modeling approaches and model output data will be used in the total system performance assessment (TSPA-LA) to assess the performance of the engineered barrier system and the waste form. These models evaluate the range of potential water compositions within the emplacement drifts, resulting from the interaction of introduced materials and minerals in dust with water seeping into the drifts and with aqueous solutions forming by deliquescence of dust (as influenced by atmospheric conditions), and from thermal-hydrological-chemical (THC) processes in the drift. These models also consider the uncertainty and variability in water chemistry inside the drift and the compositions of introduced materials within the drift. This report develops and documents a set of process- and abstraction-level models that constitute the engineered barrier system: physical and chemical environment model. Where possible, these models use information directly from other process model reports as input, which promotes integration among process models used for total system performance assessment. Specific tasks and activities of modeling the physical and chemical environment are included in the technical work plan ''Technical Work Plan for: In-Drift Geochemistry Modeling'' (BSC 2004 [DIRS 166519]). As described in the technical work plan, the development of this report is coordinated with the development of other engineered barrier system analysis model reports.

  9. Marshall Space Flight Center Ground Systems Development and Integration

    NASA Technical Reports Server (NTRS)

    Wade, Gina

    2016-01-01

    Ground Systems Development and Integration performs a variety of tasks in support of the Mission Operations Laboratory (MOL) and other Center and Agency projects. These tasks include various systems engineering processes such as system requirements development, system architecture design, integration, verification and validation, software development, and sustaining engineering of mission operations systems; this work has evolved the Huntsville Operations Support Center (HOSC) into a leader in remote operations for current and future NASA space projects. The group is also responsible for developing and managing telemetry and command configuration and calibration databases. Personnel are responsible for maintaining and enhancing their disciplinary skills in the areas of project management, software engineering, software development, software process improvement, telecommunications, networking, and systems management. Domain expertise in the ground systems area is also maintained and includes detailed proficiency in the areas of real-time telemetry systems, command systems, voice, video, data networks, and mission planning systems.

  10. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload a RISC workstation of I/O-related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on the magnetic disks for fast retrieval. The optical disks are used as archive media, and the tapes are used as backup media. The storage system is managed by the IEEE mass storage reference model-based UniTree software package. UniTree keeps track of all files in the system, automatically migrates lesser-used files to archive media, and stages the files when needed by the system. The user can access the files without knowledge of their physical location. The high-performance mass storage system developed by Loral AeroSys will significantly boost system I/O performance and reduce the overall data storage cost. This storage system provides a highly flexible and cost-effective architecture for a variety of applications (e.g., real-time data acquisition with a signal and image processing requirement, long-term data archiving and distribution, and image analysis and enhancement).
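
    The migration policy described above (automatically moving lesser-used files to archive media) can be illustrated with a toy sketch; the tier paths and idle cutoff below are assumptions, not details from the paper:

    ```python
    # A toy sketch of hierarchical-storage migration: files unused longer than
    # a cutoff move from the fast (magnetic) tier to an archive tier.
    import os
    import shutil
    import time

    FAST_TIER = "/data/fast"          # illustrative paths
    ARCHIVE_TIER = "/data/archive"
    CUTOFF_SECONDS = 30 * 24 * 3600   # migrate files idle for ~30 days

    def migrate_idle_files():
        now = time.time()
        for entry in os.scandir(FAST_TIER):
            if entry.is_file() and now - entry.stat().st_atime > CUTOFF_SECONDS:
                shutil.move(entry.path, os.path.join(ARCHIVE_TIER, entry.name))

    migrate_idle_files()
    ```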

  11. Analysis of Hospital Processes with Process Mining Techniques.

    PubMed

    Orellana García, Arturo; Pérez Alfonso, Damián; Larrea Armenteros, Osvaldo Ulises

    2015-01-01

    Process mining allows for discovering, monitoring, and improving the processes identified in information systems from their event logs. In hospital environments, process analysis has been a crucial factor for cost reduction, control and proper use of resources, better patient care, and achieving service excellence. This paper presents a new component for event log generation in the Hospital Information System (HIS) developed at the University of Informatics Sciences. The event logs obtained are used for analysis of hospital processes with process mining techniques. The proposed solution aims to generate high-quality event logs in the system. The analyses performed allowed functions in the system to be redefined and a proper flow of information to be proposed. The study exposed the need to incorporate process mining techniques in hospital systems to analyze process execution. Moreover, we illustrate its application in making clinical and administrative decisions for the management of hospital activities.
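
    One primitive that process-mining discovery rests on is the directly-follows relation over an event log. The sketch below computes it in plain Python; the log layout and activity names are invented, not taken from the HIS component:

    ```python
    # Discover the directly-follows graph (DFG) from a flat event log of
    # (case id, activity) pairs, assumed already ordered by timestamp.
    from collections import Counter, defaultdict

    event_log = [
        ("case1", "admit"), ("case1", "triage"), ("case1", "treat"), ("case1", "discharge"),
        ("case2", "admit"), ("case2", "treat"), ("case2", "discharge"),
    ]

    traces = defaultdict(list)
    for case_id, activity in event_log:
        traces[case_id].append(activity)

    dfg = Counter()
    for activities in traces.values():
        for a, b in zip(activities, activities[1:]):
            dfg[(a, b)] += 1              # count each directly-follows pair

    for (a, b), n in sorted(dfg.items()):
        print(f"{a} -> {b}: {n}")
    ```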

  12. "Chemical transformers" from nanoparticle ensembles operated with logic.

    PubMed

    Motornov, Mikhail; Zhou, Jian; Pita, Marcos; Gopishetty, Venkateshwarlu; Tokarev, Ihor; Katz, Evgeny; Minko, Sergiy

    2008-09-01

    The pH-responsive nanoparticles were coupled with information-processing enzyme-based systems to yield "smart" signal-responsive hybrid systems with built-in Boolean logic. The enzyme systems performed AND/OR logic operations, transducing biochemical input signals into reversible structural changes (signal-directed self-assembly) of the nanoparticle assemblies, thus resulting in the processing and amplification of the biochemical signals. The hybrid system mimics biological systems in effective processing of complex biochemical information, resulting in reversible changes of the self-assembled structures of the nanoparticles. The bioinspired approach to the nanostructured morphing materials could be used in future self-assembled molecular robotic systems.

  13. Integration of image capture and processing: beyond single-chip digital camera

    NASA Astrophysics Data System (ADS)

    Lim, SukHwan; El Gamal, Abbas

    2001-05-01

    An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to perform applications such as real-time optical flow estimation.
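
    The multiple-capture idea can be sketched in a few lines: merge a short and a long exposure so highlights come from the short capture and shadows from the long one. The bit depth and exposure ratio below are illustrative assumptions:

    ```python
    # Merge a short and a long exposure of the same scene to extend dynamic
    # range. Values are expressed on the long-exposure axis, so merged pixels
    # may exceed the single-capture full scale.
    import numpy as np

    full_scale = 1023.0                      # 10-bit sensor, assumed
    t_short, t_long = 1.0, 8.0               # relative exposure times

    scene = np.random.uniform(0, 4000, size=(4, 4))        # radiance map
    short = np.clip(scene * t_short / 8.0, 0, full_scale)  # rarely saturates
    long_ = np.clip(scene * t_long / 8.0, 0, full_scale)   # clips highlights

    # Use the long capture where unsaturated (better SNR); otherwise fall
    # back to the short capture rescaled to the long-exposure axis.
    merged = np.where(long_ < full_scale, long_, short * (t_long / t_short))
    print(merged)
    ```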

  14. Simulation of a master-slave event set processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comfort, J.C.

    1984-03-01

    Event set manipulation may consume a considerable amount of the computation time spent in performing a discrete-event simulation. One way of minimizing this time is to allow event set processing to proceed in parallel with the remainder of the simulation computation. The paper describes a multiprocessor simulation computer, in which all non-event set processing is performed by the principal processor (called the host). Event set processing is coordinated by a front end processor (the master) and actually performed by several other functionally identical processors (the slaves). A trace-driven simulation program modeling this system was constructed, and was run with trace output taken from two different simulation programs. Output from this simulation suggests that a significant reduction in run time may be realized by this approach. Sensitivity analysis was performed on the significant parameters to the system (number of slave processors, relative processor speeds, and interprocessor communication times). A comparison between actual and simulation run times for a one-processor system was used to assist in the validation of the simulation. 7 references.
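
    The data structure at the center of this work, the event set, is essentially a priority queue keyed on event time. Below is a minimal single-processor sketch (event names invented) of the operations the master/slave processors would service:

    ```python
    # A minimal event set for discrete-event simulation: a heap ordered by
    # event time, with a counter to keep ordering stable for equal times.
    import heapq
    import itertools

    event_set = []
    tie_breaker = itertools.count()

    def schedule(time, event):
        heapq.heappush(event_set, (time, next(tie_breaker), event))

    def next_event():
        time, _, event = heapq.heappop(event_set)
        return time, event

    schedule(4.2, "arrival")
    schedule(1.7, "departure")
    schedule(3.1, "inspection")

    while event_set:
        print(next_event())               # events come back in time order
    ```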

  15. A methodology for evaluation of an interactive multispectral image processing system

    NASA Technical Reports Server (NTRS)

    Kovalick, William M.; Newcomer, Jeffrey A.; Wharton, Stephen W.

    1987-01-01

    Because of the considerable cost of an interactive multispectral image processing system, an evaluation of a prospective system should be performed to ascertain if it will be acceptable to the anticipated users. Evaluation of a developmental system indicated that the important system elements include documentation, user friendliness, image processing capabilities, and system services. The criteria and evaluation procedures for these elements are described herein. The following factors contributed to the success of the evaluation of the developmental system: (1) careful review of documentation prior to program development, (2) construction and testing of macromodules representing typical processing scenarios, (3) availability of other image processing systems for referral and verification, and (4) use of testing personnel with an applications perspective and experience with other systems. This evaluation was done in addition to and independently of program testing by the software developers of the system.

  16. Development and Implementation of a Generic Analysis Template for Structural-Thermal-Optical-Performance Modeling

    NASA Technical Reports Server (NTRS)

    Scola, Salvatore; Stavely, Rebecca; Jackson, Trevor; Boyer, Charlie; Osmundsen, Jim; Turczynski, Craig; Stimson, Chad

    2016-01-01

    Performance-related effects of system level temperature changes can be a key consideration in the design of many types of optical instruments. This is especially true for space-based imagers, which may require complex thermal control systems to maintain alignment of the optical components. Structural-Thermal-Optical-Performance (STOP) analysis is a multi-disciplinary process that can be used to assess the performance of these optical systems when subjected to the expected design environment. This type of analysis can be very time consuming, which makes it difficult to use as a trade study tool early in the project life cycle. In many cases, only one or two iterations can be performed over the course of a project. This limits the design space to best practices since it may be too difficult, or take too long, to test new concepts analytically. In order to overcome this challenge, automation, and a standard procedure for performing these studies is essential. A methodology was developed within the framework of the Comet software tool that captures the basic inputs, outputs, and processes used in most STOP analyses. This resulted in a generic, reusable analysis template that can be used for design trades for a variety of optical systems. The template captures much of the upfront setup such as meshing, boundary conditions, data transfer, naming conventions, and post-processing, and therefore saves time for each subsequent project. A description of the methodology and the analysis template is presented, and results are described for a simple telescope optical system.

  17. Development and implementation of a generic analysis template for structural-thermal-optical-performance modeling

    NASA Astrophysics Data System (ADS)

    Scola, Salvatore; Stavely, Rebecca; Jackson, Trevor; Boyer, Charlie; Osmundsen, Jim; Turczynski, Craig; Stimson, Chad

    2016-09-01

    Performance-related effects of system level temperature changes can be a key consideration in the design of many types of optical instruments. This is especially true for space-based imagers, which may require complex thermal control systems to maintain alignment of the optical components. Structural-Thermal-Optical-Performance (STOP) analysis is a multi-disciplinary process that can be used to assess the performance of these optical systems when subjected to the expected design environment. This type of analysis can be very time consuming, which makes it difficult to use as a trade study tool early in the project life cycle. In many cases, only one or two iterations can be performed over the course of a project. This limits the design space to best practices since it may be too difficult, or take too long, to test new concepts analytically. In order to overcome this challenge, automation, and a standard procedure for performing these studies is essential. A methodology was developed within the framework of the Comet software tool that captures the basic inputs, outputs, and processes used in most STOP analyses. This resulted in a generic, reusable analysis template that can be used for design trades for a variety of optical systems. The template captures much of the upfront setup such as meshing, boundary conditions, data transfer, naming conventions, and post-processing, and therefore saves time for each subsequent project. A description of the methodology and the analysis template is presented, and results are described for a simple telescope optical system.

  18. Structural health monitoring feature design by genetic programming

    NASA Astrophysics Data System (ADS)

    Harvey, Dustin Y.; Todd, Michael D.

    2014-09-01

    Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and other high-capital or life-safety critical structures. Conventional data processing involves pre-processing and extraction of low-dimensional features from in situ time series measurements. The features are then input to a statistical pattern recognition algorithm to perform the relevant classification or regression task necessary to facilitate decisions by the SHM system. Traditional design of signal processing and feature extraction algorithms can be an expensive and time-consuming process requiring extensive system knowledge and domain expertise. Genetic programming, a heuristic program search method from evolutionary computation, was recently adapted by the authors to perform automated, data-driven design of signal processing and feature extraction algorithms for statistical pattern recognition applications. The proposed method, called Autofead, is particularly suitable to handle the challenges inherent in algorithm design for SHM problems where the manifestation of damage in structural response measurements is often unclear or unknown. Autofead mines a training database of response measurements to discover information-rich features specific to the problem at hand. This study provides experimental validation on three SHM applications including ultrasonic damage detection, bearing damage classification for rotating machinery, and vibration-based structural health monitoring. Performance comparisons with common feature choices for each problem area are provided demonstrating the versatility of Autofead to produce significant algorithm improvements on a wide range of problems.

  19. A Realization of Theoretical Maximum Performance in IPSec on Gigabit Ethernet

    NASA Astrophysics Data System (ADS)

    Onuki, Atsushi; Takeuchi, Kiyofumi; Inada, Toru; Tokiniwa, Yasuhisa; Ushirozawa, Shinobu

    This paper describes an IPSec (IP Security) VPN system and how it attains the theoretical maximum performance on Gigabit Ethernet. Conventional systems are implemented in software; however, such systems have several bottlenecks which must be overcome to realize the theoretical maximum performance on Gigabit Ethernet. Thus, we newly propose an IPSec VPN system with an FPGA (Field Programmable Gate Array)-based hardware architecture, which transmits packets through pipelined flow processing and has a 6-way parallel structure of encryption and authentication engines. We show that our system attains the theoretical maximum performance even for short packets, which has been difficult to realize until now.

  20. Applications of massively parallel computers in telemetry processing

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; Pritchard, Jim; Knoble, Gordon

    1994-01-01

    Telemetry processing refers to the reconstruction of full-resolution raw instrumentation data with the artifacts of space and ground recording and transmission removed. Being the first processing phase of satellite data, this process is also referred to as level-zero processing. This study is aimed at investigating the use of massively parallel computing technology in providing level-zero processing to spaceflights that adhere to the recommendations of the Consultative Committee for Space Data Systems (CCSDS). The workload characteristics of level-zero processing are used to identify processing requirements in high-performance computing systems. An example of level-zero functions on a SIMD MPP, such as the MasPar, is discussed. The requirements in this paper are based in part on the Earth Observing System (EOS) Data and Operations System (EDOS).
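
    A representative level-zero building block is unpacking the 6-byte CCSDS space packet primary header. The field widths below follow the CCSDS packet standard; the sample bytes are fabricated for illustration:

    ```python
    # Unpack the CCSDS space packet primary header (three 16-bit words).
    import struct

    def parse_ccsds_primary_header(header: bytes):
        word0, word1, length = struct.unpack(">HHH", header[:6])
        return {
            "version":      (word0 >> 13) & 0x7,
            "type":         (word0 >> 12) & 0x1,
            "sec_hdr_flag": (word0 >> 11) & 0x1,
            "apid":          word0 & 0x7FF,
            "seq_flags":    (word1 >> 14) & 0x3,
            "seq_count":     word1 & 0x3FFF,
            "data_length":   length + 1,  # field stores (octets in data field) - 1
        }

    sample = struct.pack(">HHH", (0 << 13) | (1 << 11) | 0x123, (3 << 14) | 42, 99)
    print(parse_ccsds_primary_header(sample))
    ```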

  1. Wavelet-Based Processing for Fiber Optic Sensing Systems

    NASA Technical Reports Server (NTRS)

    Hamory, Philip J. (Inventor); Parker, Allen R., Jr. (Inventor)

    2016-01-01

    The present invention is an improved method of processing conglomerate data. The method employs a Triband Wavelet Transform that decomposes and decimates the conglomerate signal to obtain a final result. The invention may be employed to improve performance of Optical Frequency Domain Reflectometry systems.
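
    The patented Triband Wavelet Transform is not spelled out in the abstract; as a generic stand-in, this sketch shows the decompose-and-decimate step of an ordinary two-band (Haar) wavelet analysis:

    ```python
    # One level of Haar analysis: half-rate approximation and detail bands.
    import numpy as np

    def haar_step(signal):
        even, odd = signal[0::2], signal[1::2]
        approx = (even + odd) / np.sqrt(2)   # low-pass branch, decimated by 2
        detail = (even - odd) / np.sqrt(2)   # high-pass branch, decimated by 2
        return approx, detail

    x = np.sin(np.linspace(0, 8 * np.pi, 64)) + 0.1 * np.random.randn(64)
    approx, detail = haar_step(x)
    print(approx.shape, detail.shape)        # (32,) (32,)
    ```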

  2. Advanced Control Synthesis for Reverse Osmosis Water Desalination Processes.

    PubMed

    Phuc, Bui Duc Hong; You, Sam-Sang; Choi, Hyeung-Six; Jeong, Seok-Kwon

    2017-11-01

    In this study, robust control synthesis has been applied to a reverse osmosis desalination plant whose product water flow and salinity are chosen as the two controlled variables. The reverse osmosis process was selected for study because it typically uses less energy than thermal distillation. The aim of the robust design is to overcome the limitations of classical controllers in dealing with large parametric uncertainties, external disturbances, sensor noises, and unmodeled process dynamics. The analyzed desalination process is modeled as a multi-input multi-output (MIMO) system with varying parameters. The control system is decoupled using a feed-forward decoupling method to reduce the interactions between control channels. Both nominal and perturbed reverse osmosis systems have been analyzed using structured singular values for their stability and performance. Simulation results show that the system responses meet all the control requirements against various uncertainties. Finally, the reduced-order controller provides excellent robust performance, achieving decoupling, disturbance attenuation, and noise rejection. It can help to reduce membrane cleanings, increase robustness against uncertainties, and lower the energy consumption for process monitoring.
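
    The feed-forward decoupling step can be illustrated with a static sketch: pre-multiplying the plant inputs by the inverse of the DC-gain matrix reduces channel interaction at low frequency. The 2x2 gain numbers are invented for illustration:

    ```python
    # Static decoupler D = G(0)^-1 for a 2x2 plant model.
    import numpy as np

    # DC gains: rows = outputs (flow, salinity), cols = inputs (pressure, pH)
    G0 = np.array([[0.8, 0.3],
                   [0.2, 1.1]])

    decoupler = np.linalg.inv(G0)         # feed-forward compensator

    # At steady state the compensated plant G0 @ D is (numerically) the
    # identity, so each controller channel sees mostly its own loop.
    print(np.round(G0 @ decoupler, 6))
    ```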

  3. Cargo identification algorithms facilitating unmanned/unattended inspection at high throughput portals

    NASA Astrophysics Data System (ADS)

    Chalmers, Alex

    2007-10-01

    A simple model is presented of a possible inspection regimen applied to each leg of a cargo container's journey between its point of origin and destination. Several candidate modalities are proposed to be used at multiple remote locations, acting as a pre-screen inspection as the target approaches a perimeter and as the primary inspection modality at the portal. Information from multiple data sets is fused to optimize the costs and performance of a network of such inspection systems. A series of image processing algorithms is presented that automatically process X-ray images of containerized cargo. The goal of this processing is to locate the container in a real-time stream of traffic traversing a portal without impeding the flow of commerce. Such processing may facilitate the inclusion of unmanned/unattended inspection systems in such a network. Several samples of the processing applied to data collected from deployed systems are included. Simulated data from a notional cargo inspection system with multiple sensor modalities and advanced data fusion algorithms are also included to show the potential increased detection and throughput performance of such a configuration.

  4. Analytic and Heuristic Processing Influences on Adolescent Reasoning and Decision-Making.

    ERIC Educational Resources Information Center

    Klaczynski, Paul A.

    2001-01-01

    Examined the relationship between age and the normative/descriptive gap--the discrepancy between actual reasoning and traditional standards for reasoning. Found that middle adolescents performed closer to normative ideals than early adolescents. Factor analyses suggested that performance was based on two processing systems, analytic and heuristic…

  5. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance, where a program is concurrently run on a plurality of nodes to reduce total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  6. Development and Evaluation of Video Systems for Performance Testing and Student Monitoring. Final Report.

    ERIC Educational Resources Information Center

    Hayes, John; Pulliam, Robert

    A video performance monitoring system was developed by the URS/Matrix Company, under contract to the USAF Human Resources Laboratory and was evaluated experimentally in three technical training settings. Using input from 1 to 8 video cameras, the system provided a flexible combination of signal processing, direct monitor, recording and replay…

  7. Towards a Performance Data and Development System: Getting Rid of Performance Appraisal.

    ERIC Educational Resources Information Center

    Janz, Tom

    If organizations are to measure and use worker performance information effectively, they must distinguish between two components of performance appraisal: performance data (recorded information for comparing workers) and performance development (the process of improving human assets by discouraging ineffective and reinforcing effective job…

  8. An FPGA-Based Rapid Wheezing Detection System

    PubMed Central

    Lin, Bor-Shing; Yen, Tian-Shiue

    2014-01-01

    Wheezing is often treated as a crucial indicator in the diagnosis of obstructive pulmonary diseases. A rapid wheezing detection system may help physicians to monitor patients over the long term. In this study, a portable wheezing detection system based on a field-programmable gate array (FPGA) is proposed. This system accelerates wheezing detection and can be used either as a single-process system or as an integrated part of another biomedical signal detection system. The system segments sound signals into 2-second units. A short-time Fourier transform was used to determine the relationship between the time and frequency components of wheezing sound data. A spectrogram was processed using 2D bilateral filtering, edge detection, multithreshold image segmentation, morphological image processing, and image labeling to extract wheezing features according to computerized respiratory sound analysis (CORSA) standards. These features were then used to train the support vector machine (SVM) and build the classification models. The trained model was used to analyze sound data to detect wheezing. The system runs on a Xilinx Virtex-6 FPGA ML605 platform. The experimental results revealed that the system offers excellent wheezing recognition performance (0.912). The detection process runs at a clock frequency of 51.97 MHz and is able to perform rapid wheezing classification. PMID:24481034
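
    Below is a compressed, software-only sketch of the pipeline described above (synthetic data; the real system extracts CORSA-style features from the spectrogram image on the FPGA):

    ```python
    # Spectrogram features on 2-second segments feeding an SVM classifier.
    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.svm import SVC

    fs = 4000

    def segment_features(x):
        _, _, Sxx = spectrogram(x, fs=fs, nperseg=256)
        return np.log1p(Sxx).mean(axis=1)    # average log-power per frequency bin

    rng = np.random.default_rng(0)
    t = np.arange(0, 2, 1 / fs)
    normal = [rng.normal(size=t.size) for _ in range(20)]
    wheeze = [np.sin(2 * np.pi * 400 * t) + rng.normal(size=t.size) for _ in range(20)]

    X = np.array([segment_features(x) for x in normal + wheeze])
    y = np.array([0] * 20 + [1] * 20)

    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.predict(segment_features(wheeze[0]).reshape(1, -1)))  # -> [1]
    ```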

  9. High Performance MG-System Alloys For Weight Saving Applications: First Year Results From The Green Metallurgy EU Project

    NASA Astrophysics Data System (ADS)

    D'Errico, Fabrizio; Plaza, Gerardo Garces; Hofer, Markus; Kim, Shae K.

    The GREEN METALLURGY Project, a LIFE+ project co-financed by the EU Commission, has just concluded its first year. The Project seeks to establish manufacturing processes at a pre-industrial scale for nanostructured, high-performance Mg-Zn(Y) magnesium alloys. The Project's goal is the reduction of the specific energy consumed and the overall carbon footprint produced in the cradle-to-exit-gate phases. Preliminary results addressed the potential of the upstream manufacturing process pathway. Two Mg-Zn(Y) system alloys have been produced from rapidly solidified powders and directly extruded to 100% densification. Examination of the mechanical properties showed that such materials exhibit strength and elongation comparable to several high-performing aluminum alloys: average UTS values of 390 MPa and 440 MPa for the two system alloys, with elongations of 10% and 15%, respectively. These results, together with the low environmental impact targeted, make these novel Mg alloys competitive as lightweight high-performance materials for automotive components.

  10. Methodologies and systems for heterogeneous concurrent computing

    NASA Technical Reports Server (NTRS)

    Sunderam, V. S.

    1994-01-01

    Heterogeneous concurrent computing is gaining increasing acceptance as an alternative or complementary paradigm to multiprocessor-based parallel processing as well as to conventional supercomputing. While algorithmic and programming aspects of heterogeneous concurrent computing are similar to their parallel processing counterparts, system issues, partitioning and scheduling, and performance aspects are significantly different. In this paper, we discuss critical design and implementation issues in heterogeneous concurrent computing, and describe techniques for enhancing its effectiveness. In particular, we highlight the system level infrastructures that are required, aspects of parallel algorithm development that most affect performance, system capabilities and limitations, and tools and methodologies for effective computing in heterogeneous networked environments. We also present recent developments and experiences in the context of the PVM system and comment on ongoing and future work.

  11. Recent progress in solution plasma-synthesized-carbon-supported catalysts for energy conversion systems

    NASA Astrophysics Data System (ADS)

    Lun Li, Oi; Lee, Hoonseung; Ishizaki, Takahiro

    2018-01-01

    Carbon-based materials have been widely utilized as the electrode materials in energy conversion and storage technologies, such as fuel cells and metal-air batteries. In these systems, the oxygen reduction reaction is an important step that determines the overall performance. A novel synthesis route, named the solution plasma process, has been recently utilized to synthesize various types of metal-based and heteroatom-doped carbon catalysts. In this review, we summarize cutting-edge technologies involving the synthesis and modeling of carbon-supported catalysts synthesized via solution plasma process, followed by current progress on the electrocatalytic performance of these catalysts. This review provides the fundamental and state-of-the-art performance of solution-plasma-synthesized electrode materials, as well as the remaining scientific and technological challenges for this process.

  12. SPECIAL ISSUE ON OPTICAL PROCESSING OF INFORMATION: Optical signal-processing systems based on anisotropic media

    NASA Astrophysics Data System (ADS)

    Kiyashko, B. V.

    1995-10-01

    Partially coherent optical systems for signal processing are considered. The transfer functions are formed in these systems by interference of polarised light transmitted by an anisotropic medium. It is shown that such systems can perform various integral transformations of both optical and electric signals, in particular, two-dimensional Fourier and Fresnel transformations, as well as spectral analysis of weak light sources. It is demonstrated that such systems have the highest luminosity and vibration immunity among the systems with interference formation of transfer functions. An experimental investigation is reported of the application of these systems in the processing of signals from a linear hydroacoustic antenna array, and in measurements of the optical spectrum and of the intrinsic noise.

  13. Research on the Environmental Performance Evaluation of Electronic Waste Reverse Logistics Enterprise

    NASA Astrophysics Data System (ADS)

    Yang, Yu-Xiang; Chen, Fei-Yang; Tong, Tong

    Based on the characteristics of e-waste reverse logistics, an environmental performance evaluation system for electronic waste reverse logistics enterprises is proposed. We use the fuzzy analytic hierarchy process method to evaluate the system. In addition, this paper analyzes enterprise X as an example to discuss the evaluation method. It is important to point out the attributes and indexes that should be strengthened during the process of e-waste reverse logistics and to provide guidance to domestic e-waste reverse logistics enterprises.

  14. Seeing the forest for the trees: Networked workstations as a parallel processing computer

    NASA Technical Reports Server (NTRS)

    Breen, J. O.; Meleedy, D. M.

    1992-01-01

    Unlike traditional 'serial' processing computers, in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units, thereby performing several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system cannot rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms for selecting nodes to speed gains in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.

  15. Study on Circular Complex viewed from Environmental Systems

    NASA Astrophysics Data System (ADS)

    Takeguchi, Tomoo; Adachi, Katsushige; Yoshikawa, Akira; Hiratsuka, Akira; Tsujino, Ryoji; Iguchi, Manabu

    In machining processes, cutting fluids are generally used for cooling and lubricating workpieces at the cutting point. However, these fluids frequently include chlorine, sulfur, phosphorus, or other additives. The chemicals not only become a mist affecting the health of workers engaged in the processing but also make the workshop environment worse. In particular, the chlorine becomes one of the causes of global warming when waste oil is treated under high-temperature conditions. It is furthermore said that costs far beyond the purchase cost of the oil are incurred and that dioxins (carcinogens) usually exist in the waste oil. Therefore, an environmentally friendly cooling-air cutting system is required from the standpoint of green manufacturing. This system has been noted as a technique to solve the environmental issues mentioned above. In cooling-air cutting, the amount of CO2 emission shows a low value compared with conventional cutting that uses oil. It is therefore thought that the cooling-air cutting system is a very important processing technique as an environmental countermeasure. At present, under strict economic and environmental constraints, reconciling better production efficiency with environmental improvement is a central issue on the shop floor. This study deals with the test results of cooling-air drilling performance from the viewpoint of green manufacturing. The workpiece, made of die steel SKD11, was machined by cooling-air drilling at a revolution of 840 rpm and a temperature of -20°C with a high-speed steel drill (SKH56). The results were compared with those for dry cutting. The main results obtained in this study are as follows: 1) The tool life for cooling-air drilling was about 6 times as long as that for dry cutting. 2) The chip temperature for cooling-air drilling was 220°C lower than that for dry cutting.

  16. Spitzer Telemetry Processing System

    NASA Technical Reports Server (NTRS)

    Stanboli, Alice; Martinez, Elmain M.; McAuley, James M.

    2013-01-01

    The Spitzer Telemetry Processing System (SirtfTlmProc) was designed to address objectives of JPL's Multi-mission Image Processing Lab (MIPL) in processing spacecraft telemetry and distributing the resulting data to the science community. To minimize costs and maximize operability, the software design focused on automated error recovery, performance, and information management. The system processes telemetry from the Spitzer spacecraft and delivers Level 0 products to the Spitzer Science Center. SirtfTlmProc is a unique system with automated error notification and recovery, with a real-time continuous service that can go quiescent after periods of inactivity. The software can process 2 GB of telemetry and deliver Level 0 science products to the end user in four hours. It provides analysis tools so the operator can manage the system and troubleshoot problems. It automates telemetry processing in order to reduce staffing costs.

  17. Development and evaluation of an intelligent traceability system for frozen tilapia fillet processing.

    PubMed

    Xiao, Xinqing; Fu, Zetian; Qi, Lin; Mira, Trebar; Zhang, Xiaoshuan

    2015-10-01

    The main export varieties in China are brand-name, high-quality bred aquatic products. Among them, tilapia has become the most important and fastest-growing species, since extensive consumer markets in North America and Europe have evolved as a result of commodity prices, year-round availability, and the quality of fresh and frozen products. As the largest tilapia farming country, China devotes over one-third of its tilapia production to further processing to meet foreign market demand. Using tilapia fillet processing as a case study, this paper describes the development and evaluation of ITS-TF: an intelligent traceability system integrated with statistical process control (SPC) and fault tree analysis (FTA). Observations, literature review, and expert questionnaires were used for system requirements and knowledge acquisition; scenario simulation was applied to evaluate and validate ITS-TF performance. The results show that the traceability requirement has evolved from a firefighting model to a proactive model for enhancing process management capacity for food safety, and that ITS-TF acts as an intelligent system providing early-warning and process-management functions through the integrated SPC and FTA. The valuable suggestion that automatic data acquisition and communication technology should be integrated into ITS-TF emerged for further system optimization, refinement, and performance improvement. © 2014 Society of Chemical Industry.
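
    The SPC side of such monitoring can be sketched briefly: X-bar control limits computed from in-control history flag later out-of-control batches. The subgroup data and monitored variable are invented for illustration:

    ```python
    # X-bar control chart: limits from in-control history (phase I), then
    # screening of new subgroups (phase II). A2 = 0.577 is the standard
    # chart constant for subgroups of size 5.
    import numpy as np

    A2 = 0.577
    subgroups = np.array([
        [-18.1, -18.0, -18.2, -18.1, -18.0],   # freezer temperature, batch 1
        [-18.0, -18.1, -18.0, -18.1, -18.0],   # batch 2
        [-17.6, -17.9, -17.7, -17.8, -17.5],   # warmer batch, should be flagged
    ])

    baseline = subgroups[:2]                    # assumed in-control history
    center = baseline.mean()
    rbar = (baseline.max(axis=1) - baseline.min(axis=1)).mean()
    ucl, lcl = center + A2 * rbar, center - A2 * rbar

    for i, m in enumerate(subgroups.mean(axis=1)):
        if not (lcl <= m <= ucl):
            print(f"subgroup {i}: mean {m:.2f} outside ({lcl:.2f}, {ucl:.2f})")
    ```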

  18. Missile signal processing common computer architecture for rapid technology upgrade

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain is comprised of two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration, and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction, and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs, and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's-Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. Use of standardized development tools and third-party software upgrades is enabled, as well as rapid upgrade of processing components as improved algorithms are developed. The resulting weapon system will have a superior processing capability over a custom approach at the time of deployment as a result of shorter development cycles and use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system and can migrate between weapon system variants, enabled by modification simplicity. This paper presents a reference design using the new approach that utilizes an AltiVec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS), and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of an interceptor algorithm operating on this real-time platform are provided.

  19. Aquarius's Instrument Science Data System (ISDS) Automated to Acquire, Process, Trend Data and Produce Radiometric System Assessment Reports

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Aquarius Radiometer, a subsystem of the Aquarius Instrument, required a data acquisition ground system to support calibration and radiometer performance assessment. To support calibration and compose performance assessments, we developed an automated system which uploaded raw data to an FTP server and saved raw and processed data to a database. This paper details the overall functionality of the Aquarius Instrument Science Data System (ISDS) and the individual electrical ground support equipment (EGSE) which produced data files that were infused into the ISDS. Real-time EGSEs include an ICDS Simulator, Calibration GSE, a LabVIEW-controlled power supply, and a chamber data acquisition system. The ICDS Simulator serves as the test conductor's primary workstation, collecting radiometer housekeeping (HK) and science data and passing commands and HK telemetry collection requests to the radiometer. The Calibration GSE (Radiometer Active Test Source) provides a choice among multiple source targets for radiometer external calibration. The Power Supply GSE, controlled by LabVIEW, provides real-time voltage and current monitoring of the radiometer. Finally, the chamber data acquisition system produces data reflecting chamber vacuum pressure, thermistor temperatures, AVG, and watts. Each GSE system produces text-based data files every two to six minutes and automatically copies the data files to the central archiver PC. The archiver PC stores the data files, schedules automated uploads of these files to an external FTP server, and accepts requests to copy all data files to the ISDS for offline data processing and analysis. The Aquarius Radiometer ISDS contains PHP and MATLAB programs to parse, process, and save all data to a MySQL database. Analysis tools (MATLAB programs) in the ISDS are capable of displaying radiometer science, telemetry, and auxiliary data in near real time, as well as performing data analysis and producing automated performance assessment reports of the Aquarius Radiometer.

  20. Decomposability and convex structure of thermal processes

    NASA Astrophysics Data System (ADS)

    Mazurek, Paweł; Horodecki, Michał

    2018-05-01

    We present an example of a thermal process (TP) for a system of d energy levels which cannot be performed without instant access to the whole energy space. This TP is uniquely connected with a transition between certain states of the system that cannot be performed without access to the whole energy space, even when approximate transitions are allowed. Pursuing the question of the decomposability of TPs into convex combinations of compositions of processes acting non-trivially on smaller subspaces, we investigate transitions within the subspace of states diagonal in the energy basis. For three-level systems, we determine the set of extremal points of these operations, as well as the minimal set of operations needed to perform an arbitrary TP, and connect the set of TPs with the thermomajorization criterion. We show that the structure of the set depends on temperature, which is associated with the fact that TPs cannot deterministically increase the extractable work from a state, a conclusion that holds for an arbitrary d-level system. We also connect the decomposability problem with the detailed balance symmetry of extremal TPs.

  1. A Review and Analysis of Performance Appraisal Processes, Volume III. Performance Appraisal for Professional Service Employees: Non-Technical Report. Professionalism in Schools Series.

    ERIC Educational Resources Information Center

    Ondrack, D. A.; Oliver, C.

    The third of three volumes, this report summarizes the findings of, first, a review and analysis of published literature on performance appraisal in general and particularly on the use of appraisals in public education systems, and, second, a series of field-site investigations of performance appraisal systems in action. The field site studies of…

  2. SpaceCubeX: A Framework for Evaluating Hybrid Multi-Core CPU FPGA DSP Architectures

    NASA Technical Reports Server (NTRS)

    Schmidt, Andrew G.; Weisz, Gabriel; French, Matthew; Flatley, Thomas; Villalpando, Carlos Y.

    2017-01-01

    The SpaceCubeX project is motivated by the need for high-performance, modular, and scalable on-board processing to help scientists answer critical 21st century questions about global climate change, air quality, ocean health, and ecosystem dynamics, while adding new capabilities such as low-latency data products for extreme event warnings. These goals translate into on-board processing throughput requirements that are on the order of 100-1,000 times greater than those of previous Earth Science missions for standard processing, compression, storage, and downlink operations. To study possible future architectures that achieve these performance requirements, the SpaceCubeX project provides an evolvable testbed and framework that enables a focused design space exploration of candidate hybrid CPU/FPGA/DSP processing architectures. The framework includes ArchGen, an architecture generator tool populated with candidate architecture components, performance models, and IP cores, that allows an end user to specify the type, number, and connectivity of a hybrid architecture. The framework requires minimal extensions to integrate new processors, such as the anticipated High Performance Spaceflight Computer (HPSC), reducing the time to initiate benchmarking by months. To evaluate the framework, we leverage a wide suite of high-performance embedded computing benchmarks and Earth science scenarios to ensure robust architecture characterization. We report on our project's Year 1 efforts and demonstrate the capabilities across four simulation testbed models: a baseline SpaceCube 2.0 system, a dual ARM A9 processor system, a hybrid quad ARM A53 and FPGA system, and a hybrid quad ARM A53 and DSP system.

  3. Evaluating supplier quality performance using analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Kalimuthu Rajoo, Shanmugam Sundram; Kasim, Maznah Mat; Ahmad, Nazihah

    2013-09-01

    This paper elaborates the importance of evaluating supplier quality performance to an organization. Supplier quality performance evaluation reflects the actual performance of the supplier as exhibited at the customer's end. It is critical in enabling the organization to determine areas of improvement and thereafter work with the supplier to close the gaps. The customer's success partly depends on the supplier's quality performance. Key criteria such as quality, cost, delivery, technology support, and customer service are categorized as the main factors contributing to a supplier's quality performance. Eighteen suppliers who manufacture automotive application parts were evaluated in 2010 using a weighted-point system. Several suppliers received common ratings, which led to tied rankings. The Analytic Hierarchy Process (AHP), a user-friendly decision-making tool for complex and multi-criteria problems, was used to evaluate the suppliers' quality performance, challenging the weighted-point system that had been used for the 18 suppliers. The consistency ratio was checked for criteria and sub-criteria. The final AHP results contained no overlapping ratings and therefore yielded a better decision-making methodology compared with the weighted-point rating system.
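
    The AHP arithmetic behind such an evaluation reduces to a principal-eigenvector computation plus a consistency check. Below is a condensed sketch with an invented 3x3 judgment matrix:

    ```python
    # Derive AHP priority weights from a pairwise comparison matrix via its
    # principal eigenvector, then compute the consistency ratio (CR).
    import numpy as np

    A = np.array([[1.0, 3.0, 5.0],        # quality vs cost vs delivery
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()              # normalized priority vector

    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)  # consistency index
    ri = 0.58                             # Saaty's random index for n = 3
    print("weights:", np.round(weights, 3), "CR:", round(ci / ri, 3))
    # A CR below 0.1 is conventionally taken as acceptably consistent.
    ```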

  4. CORDIC-based digital signal processing (DSP) element for adaptive signal processing

    NASA Astrophysics Data System (ADS)

    Bolstad, Gregory D.; Neeld, Kenneth B.

    1995-04-01

    The High Performance Adaptive Weight Computation (HAWC) processing element is a CORDIC-based, application-specific DSP element that, when connected in a linear array, can perform extremely high-throughput (hundreds of GFLOPS) matrix arithmetic operations on linear systems of equations in real time. In particular, it very efficiently performs the numerically intense computation of optimal least squares solutions for large, over-determined linear systems. Most techniques for computing solutions to these types of problems have used either a hard-wired, non-programmable systolic array approach or, more commonly, programmable DSP or microprocessor approaches. The custom logic methods can be efficient but are generally inflexible. Approaches using multiple generic programmable DSP devices are very flexible but suffer from poor efficiency and high computation latencies, primarily due to the large number of DSP devices that must be utilized to achieve the necessary arithmetic throughput. The HAWC processor is implemented as a highly optimized systolic array, yet retains some of the flexibility of a programmable data-flow system, allowing efficient implementation of algorithm variations. This provides flexible matrix processing capabilities that are one to three orders of magnitude less expensive and more dense than the current state of the art, and, more importantly, allows a realizable solution to matrix processing problems that were previously considered impractical to physically implement. HAWC has direct applications in RADAR, SONAR, communications, and image processing, as well as in many other types of systems.
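
    The CORDIC iteration underlying such a processing element can be modeled in software: shift-and-add rotations converge to sin/cos without multipliers. A minimal sketch follows (iteration count and floating-point arithmetic are illustrative; hardware would use fixed-point shifts):

    ```python
    # CORDIC in rotation mode: iterative micro-rotations by atan(2^-i).
    import math

    ITERS = 32
    ANGLES = [math.atan(2.0 ** -i) for i in range(ITERS)]

    # Accumulated gain of the iteration; results are pre-scaled by 1/K.
    K = 1.0
    for i in range(ITERS):
        K *= math.sqrt(1.0 + 2.0 ** (-2 * i))

    def cordic_sin_cos(theta):
        """theta in radians, |theta| <= pi/2 for this simple sketch."""
        x, y, z = 1.0 / K, 0.0, theta
        for i in range(ITERS):
            d = 1.0 if z >= 0 else -1.0   # rotate toward the residual angle z
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * ANGLES[i]
        return y, x                       # (sin, cos)

    print(cordic_sin_cos(math.pi / 6))    # ~ (0.5, 0.866)
    ```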

  5. A system level model for preliminary design of a space propulsion solid rocket motor

    NASA Astrophysics Data System (ADS)

    Schumacher, Daniel M.

    Preliminary design of space propulsion solid rocket motors entails a combination of components and subsystems. Expert design tools exist to find near optimal performance of subsystems and components. Conversely, there is no system level preliminary design process for space propulsion solid rocket motors that is capable of synthesizing customer requirements into a high utility design for the customer. The preliminary design process for space propulsion solid rocket motors typically builds on existing designs and pursues feasible rather than the most favorable design. Classical optimization is an extremely challenging method when dealing with the complex behavior of an integrated system. The complexity and combinations of system configurations make the number of the design parameters that are traded off unreasonable when manual techniques are used. Existing multi-disciplinary optimization approaches generally address estimating ratios and correlations rather than utilizing mathematical models. The developed system level model utilizes the Genetic Algorithm to perform the necessary population searches to efficiently replace the human iterations required during a typical solid rocket motor preliminary design. This research augments, automates, and increases the fidelity of the existing preliminary design process for space propulsion solid rocket motors. The system level aspect of this preliminary design process, and the ability to synthesize space propulsion solid rocket motor requirements into a near optimal design, is achievable. The process of developing the motor performance estimate and the system level model of a space propulsion solid rocket motor is described in detail. The results of this research indicate that the model is valid for use and able to manage a very large number of variable inputs and constraints towards the pursuit of the best possible design.
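
    The population-search idea that replaces manual design iteration can be shown with a toy genetic algorithm; the two design variables, objective, and bounds below are invented stand-ins, not the author's model:

    ```python
    # A toy genetic algorithm minimizing a stand-in motor objective over two
    # bounded design variables (e.g., throat diameter and web fraction).
    import random

    def objective(d_throat, web):         # lower is better (mass-like proxy)
        return (d_throat - 0.12) ** 2 + 4 * (web - 0.7) ** 2

    BOUNDS = [(0.05, 0.30), (0.4, 0.9)]

    def random_design():
        return [random.uniform(lo, hi) for lo, hi in BOUNDS]

    def mutate(design, scale=0.02):
        return [min(hi, max(lo, g + random.gauss(0, scale)))
                for g, (lo, hi) in zip(design, BOUNDS)]

    population = [random_design() for _ in range(40)]
    for _ in range(100):
        population.sort(key=lambda d: objective(*d))
        parents = population[:10]         # truncation selection
        children = []
        while len(children) < 30:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            children.append(mutate(a[:cut] + b[cut:]))  # one-point crossover
        population = parents + children

    print("best design:", [round(g, 4) for g in population[0]])
    ```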

  6. Machine vision process monitoring on a poultry processing kill line: results from an implementation

    NASA Astrophysics Data System (ADS)

    Usher, Colin; Britton, Dougl; Daley, Wayne; Stewart, John

    2005-11-01

    Researchers at the Georgia Tech Research Institute designed a vision inspection system for poultry kill-line sorting, with the potential for process control at various points throughout a processing facility. This system has operated successfully in a plant for over two and a half years and has been shown to provide multiple benefits. With the introduction of HACCP-Based Inspection Models (HIMP), automated inspection systems are emerging as viable alternatives to human screening. As more plants move to HIMP, these systems have great potential for augmenting a processing facility's visual inspection process, helping to maintain a more consistent, and potentially higher, throughput while keeping the plant within HIMP performance standards. In recent years, several vision systems have been designed to analyze the exterior of a chicken and are capable of identifying Food Safety 1 (FS1) defects under HIMP regulatory specifications. This means that a reliable vision system can be used in a processing facility as a carcass sorter, automatically detecting and diverting product that is not suitable for further processing. This improves evisceration-line efficiency by reducing the set of features that human screeners must identify, which can reduce the required number of screeners or allow faster line speeds. In addition to FS1 defects, the Georgia Tech vision system can also identify multiple "Other Consumer Protection" (OCP) defects such as skin tears, bruises, broken wings, and cadavers. Monitoring these data in near real time allows the processing facility to address anomalies as soon as they occur. The system records minute-by-minute averages of the following defects: septicemia/toxemia, cadaver, over-scald, bruises, skin tears, and broken wings. It also records the length and width of the entire chicken and of parts such as the breast, legs, wings, and neck, as well as average color and mis-hung birds, which can cause problems in further processing. Other relevant production information is recorded as well, including truck arrival and offloading times, catching-crew and flock-serviceman data, the grower, the breed of chicken, and the number of dead-on-arrival (DOA) birds per truck. Several observations from the Georgia Tech vision system, which has been installed in a poultry processing plant for several years, are presented. Trend analysis has been performed on the performance of the catching crews and flock servicemen, and on the processed chickens as their dimensions relate to equipment settings in the plant. The results have allowed researchers and plant personnel to identify potential areas for improvement in the processing operation, which should result in improved efficiency and yield.

  7. Practical, Real-Time, and Robust Watermarking on the Spatial Domain for High-Definition Video Contents

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Su; Lee, Hae-Yeoun; Im, Dong-Hyuck; Lee, Heung-Kyu

    Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) videos. DRM systems use watermarking to provide copyright protection and ownership authentication of multimedia content. We propose a real-time video watermarking scheme for HD video in the uncompressed domain, designed from a practical perspective to satisfy perceptual quality, real-time processing, and robustness requirements. We simplify and optimize a human visual system mask for real-time performance and apply a dithering technique for invisibility. Extensive experiments show that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate on video processing attacks that commonly occur when HD videos are prepared for display on portable devices. These include not only scaling and low bit-rate encoding, but also malicious attacks such as format conversion and frame-rate change.
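
    As a minimal sketch of the general approach (not the authors' scheme; the local-activity mask below is a crude stand-in for their optimized human visual system mask, and all names are illustrative):

        import numpy as np

        def embed_watermark(frame, key, strength=2.0):
            """Add a keyed pseudorandom +/-1 pattern, scaled by a crude
            local-activity mask (a stand-in for a tuned HVS mask)."""
            rng = np.random.default_rng(key)
            w = rng.choice([-1.0, 1.0], size=frame.shape)
            blur = frame.astype(float)
            for axis in (0, 1):                      # cheap separable box blur
                blur = (np.roll(blur, 1, axis) + blur + np.roll(blur, -1, axis)) / 3.0
            mask = 0.5 + np.abs(frame - blur) / 255.0   # busier areas hide more
            return np.clip(frame + strength * mask * w, 0, 255), w

        def detect_watermark(frame, w):
            """Blind correlation detector: larger score -> mark likely present."""
            return float(((frame - frame.mean()) * w).mean())

        luma = np.random.randint(0, 256, (1080, 1920)).astype(float)  # stand-in frame
        marked, w = embed_watermark(luma, key=42)
        assert detect_watermark(marked, w) > detect_watermark(luma, w)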

  8. An optical processor for object recognition and tracking

    NASA Technical Reports Server (NTRS)

    Sloan, J.; Udomkesmalee, S.

    1987-01-01

    The design and development of a miniaturized optical processor that performs real-time image correlation are described. The optical correlator uses the Vander Lugt matched spatial filter technique. The correlation output, a focused beam of light, is imaged onto a CMOS photodetector array. In addition to performing target recognition, the device also tracks the target. The hardware, composed of optical and electro-optical components, occupies only 590 cu cm. A complete correlator system would also include an input imaging lens. The optical processing system is compact and rugged, requires only 3.5 watts of operating power, and weighs less than 3 kg, representing a major achievement in miniaturizing optical processors. Considered as a special-purpose processing unit, it is an attractive alternative to conventional digital image recognition processing. It is conceivable that the combined technology of optical and digital processing could yield a very advanced robot vision system.
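
    What the correlator does in light can be sketched digitally: a Vander Lugt-style matched filter multiplies the scene spectrum by the conjugate template spectrum, and the inverse transform yields a correlation plane whose peak marks, and frame to frame tracks, the target. A minimal numerical analogue (illustrative, not the flight hardware's processing):

        import numpy as np

        def matched_filter_correlate(scene, template):
            """Correlation plane = IFFT(FFT(scene) * conj(FFT(template)));
            a bright peak marks the target location."""
            S = np.fft.fft2(scene)
            T = np.fft.fft2(template, s=scene.shape)      # zero-pad the template
            plane = np.fft.ifft2(S * np.conj(T)).real
            return plane, np.unravel_index(np.argmax(plane), plane.shape)

        scene = np.zeros((128, 128))
        scene[40:48, 60:68] = 1.0                         # toy 8x8 target
        _, peak = matched_filter_correlate(scene, np.ones((8, 8)))
        print(peak)                                       # (40, 60): target found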

  9. Process mining is an underutilized clinical research tool in transfusion medicine.

    PubMed

    Quinn, Jason G; Conrad, David M; Cheng, Calvino K

    2017-03-01

    To understand inventory performance, transfusion services commonly use key performance indicators (KPIs) as summary descriptors of inventory efficiency that are graphed, trended, and used to benchmark institutions. Here, we summarize current limitations of KPI-based evaluation of blood bank inventory efficiency and propose process mining as an ideal methodology for inventory management research aimed at improving inventory flows and performance. The transit of a blood product from inventory receipt to final disposition is complex and subject to many internal and external influences, and KPIs may be inadequate to fully capture the complexity of the blood supply chain and how units interact with its processes. Process mining lends itself well to analysis of blood bank inventories, and modern laboratory information systems can track nearly all of the complex processes that occur in the blood bank. Process mining is an analytical tool already used in other industries and can be applied to blood bank inventory management and research through laboratory information system data using commercial applications. Although the current understanding of real blood bank inventories is value-centric, through KPIs, inventories can potentially be understood through a process-centric lens using process mining. © 2017 AABB.
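
    As a minimal sketch of the kind of analysis process mining performs (the event log below is hypothetical, and commercial tools add much more, e.g., conformance checking and bottleneck analysis):

        from collections import Counter
        from itertools import pairwise   # Python 3.10+

        # Hypothetical event log: (unit_id, activity), time-ordered per unit.
        event_log = [
            ("U1", "receipt"), ("U1", "crossmatch"), ("U1", "issue"), ("U1", "transfuse"),
            ("U2", "receipt"), ("U2", "crossmatch"), ("U2", "return"), ("U2", "issue"),
            ("U3", "receipt"), ("U3", "expire"),
        ]

        def directly_follows(log):
            """Count how often one activity directly follows another within
            the same case (here, the same blood unit)."""
            traces = {}
            for case, activity in log:
                traces.setdefault(case, []).append(activity)
            return Counter(edge for t in traces.values() for edge in pairwise(t))

        for (a, b), n in directly_follows(event_log).most_common():
            print(f"{a} -> {b}: {n}")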

  10. Early Benchmarks of Product Generation Capabilities of the GOES-R Ground System for Operational Weather Prediction

    NASA Astrophysics Data System (ADS)

    Kalluri, S. N.; Haman, B.; Vititoe, D.

    2014-12-01

    The ground system under development for the Geostationary Operational Environmental Satellite-R (GOES-R) series of weather satellites has completed a key milestone in implementing the science algorithms that process raw sensor data into higher-level products in preparation for launch. Real-time observations from GOES-R are expected to make significant contributions to Earth and space weather prediction, and there are stringent requirements to produce weather products at very low latency to meet NOAA's operational needs. Simulated test data from all six GOES-R sensors are being processed by the system to test and verify the performance of the fielded system. Early results show that system development is on track to meet functional and performance requirements for processing science data. Comparison of science products generated by the ground system from simulated data with those generated by the algorithm developers shows close agreement, demonstrating that the algorithms are implemented correctly. Successful delivery of products from the core system to AWIPS and the Product Distribution and Access (PDA) system demonstrates that the external interfaces are working.

  11. Bristol Ridge: A 28-nm x86 Performance-Enhanced Microprocessor Through System Power Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundaram, Sriram; Grenat, Aaron; Naffziger, Samuel

    Power management techniques can be effective at extracting more performance and energy efficiency out of mature systems on chip (SoCs). For instance, the peak performance of microprocessors is often limited by worst-case technology (Vmax), infrastructure (thermal/electrical), and microprocessor usage assumptions. Performance per watt of microprocessors also typically suffers from guard bands associated with the test and binning processes as well as worst-case aging/lifetime degradation. Similarly, on multicore processors, shared voltage rails tend to limit the peak performance achievable in low-thread-count workloads. In this paper, we describe five power management techniques that maximize per-part performance under the aforementioned constraints. Using these techniques, we demonstrate a net performance increase of up to 15%, depending on the application and TDP of the SoC, implemented on 'Bristol Ridge,' a 28-nm CMOS, dual-core x86 accelerated processing unit.

  12. Real-time implementing wavefront reconstruction for adaptive optics

    NASA Astrophysics Data System (ADS)

    Wang, Caixia; Li, Mei; Wang, Chunhong; Zhou, Luchun; Jiang, Wenhan

    2004-12-01

    The capability of real-time wavefront reconstruction is important for an adaptive optics (AO) system. The system bandwidth and the real-time processing ability of the wavefront processor are determined mainly by calculation speed. The system requires a sufficient number of subapertures and a high sampling frequency to compensate for atmospheric turbulence, and the number of reconstruction operations increases accordingly. Since the performance of an AO system improves as calculation latency decreases, it is necessary to study how to increase the speed of wavefront reconstruction. There are two methods for improving the real-time behavior of the reconstruction: one is to transform the wavefront reconstruction matrix, for example by wavelet or FFT techniques; the other is to enhance the performance of the processing element. Analysis shows that the former method reduces latency at the cost of reconstruction precision, so the latter method is adopted in this article. Based on the characteristics of the wavefront reconstruction algorithm, a systolic array implemented in an FPGA is designed for real-time wavefront reconstruction. The system delay is greatly reduced through pipelining and parallel processing; the minimum latency of reconstruction is the reconstruction calculation of a single subaperture.
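
    For context, a hedged sketch of what the reconstruction computes: zonal least-squares reconstruction as a precomputed matrix-vector multiply (the toy 1-D geometry below is illustrative, not the paper's FPGA design):

        import numpy as np

        # Toy 1-D geometry: 4 subapertures measure finite-difference slopes
        # of a 5-point phase profile.  G maps phase values to slopes.
        G = np.array([[-1, 1, 0, 0, 0],
                      [0, -1, 1, 0, 0],
                      [0, 0, -1, 1, 0],
                      [0, 0, 0, -1, 1]], dtype=float)

        R = np.linalg.pinv(G)            # reconstructor, precomputed offline
        phase_true = np.array([0.0, 0.2, 0.5, 0.4, 0.1])
        slopes = G @ phase_true          # what the wavefront sensor reports
        phase_hat = R @ slopes           # the real-time step is this multiply
        # Piston (mean phase) is unobservable from slopes, so compare
        # piston-removed profiles.
        assert np.allclose(phase_hat - phase_hat.mean(),
                           phase_true - phase_true.mean())

    It is this fixed matrix-vector multiply that maps naturally onto a pipelined systolic array, with one multiply-accumulate chain per subaperture.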

  13. Automation of orbit determination functions for National Aeronautics and Space Administration (NASA)-supported satellite missions

    NASA Technical Reports Server (NTRS)

    Mardirossian, H.; Beri, A. C.; Doll, C. E.

    1990-01-01

    The Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC) provides spacecraft trajectory determination for a wide variety of National Aeronautics and Space Administration (NASA)-supported satellite missions, using the Tracking Data Relay Satellite System (TDRSS) and Ground Spaceflight and Tracking Data Network (GSTDN). To take advantage of computerized decision making processes that can be used in spacecraft navigation, the Orbit Determination Automation System (ODAS) was designed, developed, and implemented as a prototype system to automate orbit determination (OD) and orbit quality assurance (QA) functions performed by orbit operations. Based on a machine-resident generic schedule and predetermined mission-dependent QA criteria, ODAS autonomously activates an interface with the existing trajectory determination system using a batch least-squares differential correction algorithm to perform the basic OD functions. The computational parameters determined during the OD are processed to make computerized decisions regarding QA, and a controlled recovery process is activated when the criteria are not satisfied. The complete cycle is autonomous and continuous. ODAS was extensively tested for performance under conditions resembling actual operational conditions and found to be effective and reliable for extended autonomous OD. Details of the system structure and function are discussed, and test results are presented.
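
    As a hedged sketch of the generic batch least-squares differential correction loop that such OD systems iterate (a Gauss-Newton update on a toy problem, not the FDF implementation; all names are illustrative):

        import numpy as np

        def differential_correction(x0, obs, model, jacobian, n_iter=10):
            """Generic batch least-squares differential correction: iterate
            Gauss-Newton corrections dx from the normal equations."""
            x = np.asarray(x0, dtype=float)
            for _ in range(n_iter):
                residuals = obs - model(x)        # O - C, observed minus computed
                H = jacobian(x)                   # observation partials wrt state
                dx = np.linalg.solve(H.T @ H, H.T @ residuals)
                x = x + dx
                if np.linalg.norm(dx) < 1e-12:    # converged
                    break
            return x

        # Toy problem: recover (a, b) of y = a*t + b from noisy observations.
        t = np.linspace(0.0, 10.0, 50)
        obs = 2.0 * t - 1.0 + np.random.default_rng(0).normal(0, 0.01, t.size)
        est = differential_correction(
            [0.0, 0.0], obs,
            model=lambda x: x[0] * t + x[1],
            jacobian=lambda x: np.column_stack([t, np.ones_like(t)]))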

  14. Automation of orbit determination functions for National Aeronautics and Space Administration (NASA)-supported satellite missions

    NASA Technical Reports Server (NTRS)

    Mardirossian, H.; Heuerman, K.; Beri, A.; Samii, M. V.; Doll, C. E.

    1989-01-01

    The Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC) provides spacecraft trajectory determination for a wide variety of National Aeronautics and Space Administration (NASA)-supported satellite missions, using the Tracking Data Relay Satellite System (TDRSS) and Ground Spaceflight and Tracking Data Network (GSTDN). To take advantage of computerized decision making processes that can be used in spacecraft navigation, the Orbit Determination Automation System (ODAS) was designed, developed, and implemented as a prototype system to automate orbit determination (OD) and orbit quality assurance (QA) functions performed by orbit operations. Based on a machine-resident generic schedule and predetermined mission-dependent QA criteria, ODAS autonomously activates an interface with the existing trajectory determination system using a batch least-squares differential correction algorithm to perform the basic OD functions. The computational parameters determined during the OD are processed to make computerized decisions regarding QA, and a controlled recovery process is activated when the criteria are not satisfied. The complete cycle is autonomous and continuous. ODAS was extensively tested for performance under conditions resembling actual operational conditions and found to be effective and reliable for extended autonomous OD. Details of the system structure and function are discussed, and test results are presented.

  15. Okayama optical polarimetry and spectroscopy system (OOPS) II. Network-transparent control software.

    NASA Astrophysics Data System (ADS)

    Sasaki, T.; Kurakami, T.; Shimizu, Y.; Yutani, M.

    The control system of the OOPS (Okayama Optical Polarimetry and Spectroscopy system) is designed to integrate several instruments whose controllers are distributed over a network: the OOPS instrument, a CCD camera and data acquisition unit, the 91 cm telescope, an autoguider, a weather monitor, and the image display tool SAOimage. Using message-based communication, the control processes cooperate with related processes to perform an astronomical observation under the supervisory control of a scheduler process. A logger process collects status data from all the instruments and distributes them to related processes upon request. The software structure of each process is described.

  16. Operational compatibility of 30-centimeter-diameter ion thruster with integrally regulated solar array power source

    NASA Technical Reports Server (NTRS)

    Gooder, S. T.

    1977-01-01

    System tests were performed in which Integrally Regulated Solar Arrays (IRSA's) were used to directly power the beam and accelerator loads of a 30-cm-diameter, electron bombardment, mercury ion thruster. The remaining thruster loads were supplied from conventional power-processing circuits. This combination of IRSA's and conventional circuits formed a hybrid power processor. Thruster performance was evaluated at 3/4- and 1-A beam currents with both the IRSA-hybrid and conventional power processors and was found to be identical for both systems. Power processing is significantly more efficient with the hybrid system. System dynamics and IRSA response to thruster arcs are also examined.

  17. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather; Robinson, William H.; Rech, Paolo

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high-radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. We describe the development process and report neutron test data for the hardware and software benchmarks.

  18. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE PAGES

    Quinn, Heather; Robinson, William H.; Rech, Paolo; ...

    2015-12-17

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high-radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. We describe the development process and report neutron test data for the hardware and software benchmarks.

  19. Impact of scatterometer wind (ASCAT-A/B) data assimilation on semi real-time forecast system at KIAPS

    NASA Astrophysics Data System (ADS)

    Han, H. J.; Kang, J. H.

    2016-12-01

    Since July 2015, KIAPS (Korea Institute of Atmospheric Prediction Systems) has been running a semi-real-time forecast system to assess the performance of its forecast system as an NWP model. KPOP (KIAPS Protocol for Observation Processing) is part of the KIAPS data assimilation system and has been performing well in the KIAPS semi-real-time forecast system. Since KPOP can now handle scatterometer wind data, in this study we analyze the effect of scatterometer winds (ASCAT-A/B) on the KIAPS semi-real-time forecast system. The global distribution and statistics of scatterometer-wind O-B (observation minus background) tell us two things: the differences between the background field and the observations are not too large, and KPOP processes the scatterometer wind data well. Changes in the analysis increment driven by the O-B global distribution appear most markedly in the lower atmosphere. The data also cover wide ocean areas where observations would otherwise be sparse. The impact of the scatterometer wind data can be checked through the vertical error reduction from background to analysis field, verified against IFS, and through the vertical statistics of O-A (observation minus analysis). These results indicate that scatterometer wind data have a positive effect on the lower-level performance of the semi-real-time forecast system at KIAPS. Longer-term results on the effect of scatterometer wind data will be analyzed in future work.
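
    A minimal sketch of the O-B screening statistics described above (the values are hypothetical; operational systems add quality control, thinning, and bias correction):

        import numpy as np

        def innovation_stats(obs, background):
            """O-B (observation minus background) bias and spread, used to
            screen a new observation type before assimilation."""
            d = np.asarray(obs) - np.asarray(background)
            return {"bias": float(d.mean()), "std": float(d.std(ddof=1)), "n": d.size}

        # Hypothetical collocated 10 m wind speeds (m/s).
        obs = np.array([7.2, 5.1, 9.8, 6.4, 8.0])
        bkg = np.array([7.0, 5.5, 9.1, 6.6, 7.7])
        print(innovation_stats(obs, bkg))   # small bias/std: background fits obs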

  20. Simulation of Unique Pressure Changing Steps and Situations in Psa Processes

    NASA Technical Reports Server (NTRS)

    Ebner, Armin D.; Mehrotra, Amal; Knox, James C.; LeVan, Douglas; Ritter, James A.

    2007-01-01

    A more rigorous cyclic adsorption process simulator is being developed for use in the development and understanding of new and existing PSA processes. Unique features of this new version of the simulator, which Ritter and co-workers have been developing for the past decade or so, include: multiple adsorbent layers in each bed; pressure drop in the column; valves for entering and exiting flows and for predicting real-time pressurization and depressurization rates; the ability to account for choked flow conditions; the ability to pressurize and depressurize simultaneously from both ends of the columns; the ability to equalize between multiple pairs of columns; the ability to equalize simultaneously from both ends of pairs of columns; and the ability to handle the very large pressure ratios, and hence velocities, associated with deep vacuum systems. These changes provide unique opportunities to study the effects of novel pressure-changing steps and extreme process conditions on the performance of virtually any commercial or developmental PSA process. This presentation provides an overview of the simulator's equations and algorithms and focuses primarily on the novel pressure-changing steps and their effects on the performance of a PSA system that epitomizes the extremes of PSA process design and operation. This PSA process is a sorbent-based atmosphere revitalization (SBAR) system that NASA is developing for new manned exploration vehicles. The SBAR system is a 2-bed, 3-step, 3-layer system that operates between atmospheric pressure and the vacuum of space, evacuates from both ends of the column simultaneously, experiences choked flow during pressure-changing steps, and sees a continuously changing feed composition as it removes metabolic CO2 and H2O from a closed, fixed volume, i.e., the spacecraft cabin. Important process performance indicators of this SBAR system are size, the corresponding CO2 and H2O removal efficiencies, and N2 and O2 loss rates. Results on the fundamental behavior of this PSA process during extreme operating conditions are presented and discussed.

  1. Development and application of an acceptance testing model

    NASA Technical Reports Server (NTRS)

    Pendley, Rex D.; Noonan, Caroline H.; Hall, Kenneth R.

    1992-01-01

    The process of acceptance testing large software systems for NASA has been analyzed, and an empirical planning model of the process constructed. This model gives managers accurate predictions of the staffing needed, the productivity of a test team, and the rate at which the system will pass. Applying the model to a new system shows a high level of agreement between the model and actual performance. The model also gives managers an objective measure of process improvement.

  2. PERFORM: A System for Monitoring, Assessment and Management of Patients with Parkinson's Disease

    PubMed Central

    Tzallas, Alexandros T.; Tsipouras, Markos G.; Rigas, Georgios; Tsalikakis, Dimitrios G.; Karvounis, Evaggelos C.; Chondrogiorgi, Maria; Psomadellis, Fotis; Cancela, Jorge; Pastorino, Matteo; Waldmeyer, María Teresa Arredondo; Konitsiotis, Spiros; Fotiadis, Dimitrios I.

    2014-01-01

    In this paper, we describe the PERFORM system for the continuous remote monitoring and management of Parkinson's disease (PD) patients. The PERFORM system is an intelligent closed-loop system that seamlessly integrates a wide range of wearable sensors constantly monitoring several motor signals of PD patients. Acquired data are pre-processed by advanced knowledge-processing methods and integrated by fusion algorithms to allow health professionals to remotely monitor the overall status of the patients, adjust medication schedules, and personalize treatment. The information collected by the sensors (accelerometers and gyroscopes) is processed by several classifiers. As a result, it is possible to evaluate and quantify the PD motor symptoms related to end-of-dose deterioration (tremor, bradykinesia, freezing of gait (FoG)) as well as those related to over-dose concentration (levodopa-induced dyskinesia (LID)). Based on this information, together with information derived from tests performed with a virtual reality glove and information about medication and food intake, a patient-specific profile can be built. In addition, the patient-specific profile is compared with the patient's evaluations over the last week and last month to determine whether their status is stable, improving, or worsening. Based on that, the system analyses whether a medication change is needed (always under medical supervision) and, in that case, sends information about the proposed medication change to the patient. The performance of the system has been evaluated in real-life conditions, its accuracy and acceptability to PD patients and healthcare professionals have been tested, and a comparison with the standard routine clinical evaluation performed by the patients' physicians has been carried out. PD patients use the PERFORM system in a simple, safe, and non-invasive way for long-term recording of their motor status, offering the clinician a precise, long-term, and objective view of the patient's motor status and drug/food intake. With the PERFORM system, the clinician can thus remotely receive precise information on the patient's status over previous days and define the optimal therapeutic treatment. PMID:25393786
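
    As a hedged illustration of the kind of feature extraction that precedes such classifiers (not the PERFORM classifiers themselves; the sampling rate, window length, and 4-6 Hz band are typical values, not the paper's):

        import numpy as np

        def tremor_features(accel, fs=100.0):
            """Per-window features from one accelerometer axis: RMS amplitude
            and dominant frequency (PD rest tremor is typically ~4-6 Hz)."""
            accel = accel - accel.mean()               # remove gravity/DC offset
            rms = float(np.sqrt((accel ** 2).mean()))
            spectrum = np.abs(np.fft.rfft(accel))
            freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)
            return rms, float(freqs[spectrum.argmax()])

        # Synthetic 5 s window: 5 Hz tremor-like oscillation plus noise.
        fs = 100.0
        t = np.arange(0, 5, 1 / fs)
        rng = np.random.default_rng(1)
        window = 0.3 * np.sin(2 * np.pi * 5.0 * t) + 0.05 * rng.normal(size=t.size)
        rms, f0 = tremor_features(window, fs)
        print(rms, f0, 4.0 <= f0 <= 6.0)               # tremor-band check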

  3. PERFORM: a system for monitoring, assessment and management of patients with Parkinson's disease.

    PubMed

    Tzallas, Alexandros T; Tsipouras, Markos G; Rigas, Georgios; Tsalikakis, Dimitrios G; Karvounis, Evaggelos C; Chondrogiorgi, Maria; Psomadellis, Fotis; Cancela, Jorge; Pastorino, Matteo; Waldmeyer, María Teresa Arredondo; Konitsiotis, Spiros; Fotiadis, Dimitrios I

    2014-11-11

    In this paper, we describe the PERFORM system for the continuous remote monitoring and management of Parkinson's disease (PD) patients. The PERFORM system is an intelligent closed-loop system that seamlessly integrates a wide range of wearable sensors constantly monitoring several motor signals of PD patients. Acquired data are pre-processed by advanced knowledge-processing methods and integrated by fusion algorithms to allow health professionals to remotely monitor the overall status of the patients, adjust medication schedules, and personalize treatment. The information collected by the sensors (accelerometers and gyroscopes) is processed by several classifiers. As a result, it is possible to evaluate and quantify the PD motor symptoms related to end-of-dose deterioration (tremor, bradykinesia, freezing of gait (FoG)) as well as those related to over-dose concentration (levodopa-induced dyskinesia (LID)). Based on this information, together with information derived from tests performed with a virtual reality glove and information about medication and food intake, a patient-specific profile can be built. In addition, the patient-specific profile is compared with the patient's evaluations over the last week and last month to determine whether their status is stable, improving, or worsening. Based on that, the system analyses whether a medication change is needed (always under medical supervision) and, in that case, sends information about the proposed medication change to the patient. The performance of the system has been evaluated in real-life conditions, its accuracy and acceptability to PD patients and healthcare professionals have been tested, and a comparison with the standard routine clinical evaluation performed by the patients' physicians has been carried out. PD patients use the PERFORM system in a simple, safe, and non-invasive way for long-term recording of their motor status, offering the clinician a precise, long-term, and objective view of the patient's motor status and drug/food intake. With the PERFORM system, the clinician can thus remotely receive precise information on the patient's status over previous days and define the optimal therapeutic treatment.

  4. Image processing system design for microcantilever-based optical readout infrared arrays

    NASA Astrophysics Data System (ADS)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory predicts high thermal detection sensitivity, so the technology has very broad application prospects in high-performance infrared detection. This paper focuses on an image capture and processing system for this optical-readout uncooled infrared imaging technology. The system consists of software and hardware. We build the core image processing hardware platform on TI's high-performance TMS320DM642 DSP and design the image capture board around the Micron MT9P031, a high-frame-rate, low-power CMOS image sensor. Finally, we design the network output board around Intel's LXT971A network transceiver. The software is built on the real-time operating system DSP/BIOS: the video capture driver is implemented on TI's class mini-driver model and the network output program on the NDK toolkit, covering image capture, processing, and transmission. Experiments show that the system achieves high capture resolution and fast processing, with network transmission speeds up to 100 Mbps.

  5. Facilitating Co-Design for Extreme-Scale Systems Through Lightweight Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Lauer, Frank

    This work focuses on tools for investigating algorithm performance at extreme scale, with millions of concurrent threads, and for evaluating the impact of future architecture choices to facilitate the co-design of high-performance computing (HPC) architectures and applications. The approach focuses on lightweight simulation of extreme-scale HPC systems with the needed amount of accuracy. The prototype presented in this paper provides this capability using parallel discrete event simulation (PDES), such that a Message Passing Interface (MPI) application can be executed at extreme scale and its performance properties evaluated. The results of an initial prototype are encouraging: a simple 'hello world' MPI program could be scaled up to 1,048,576 virtual MPI processes on a four-node cluster, and the performance properties of two MPI programs could be evaluated at up to 16,384 virtual MPI processes on the same system.
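
    A toy flavor of the approach (a minimal sequential discrete event simulation of virtual ranks passing a token; the real prototype is a parallel DES with far more machinery, and all names here are illustrative):

        import heapq

        def simulate_ring(n_ranks, latency=1e-6):
            """Minimal discrete event simulation: n virtual ranks forward a
            token down a line; events are (time, rank) message deliveries."""
            events = [(0.0, 0)]                 # rank 0 starts at t = 0
            clock = 0.0
            delivered = 0
            while events:
                clock, rank = heapq.heappop(events)
                delivered += 1
                if rank + 1 < n_ranks:          # forward with fixed latency
                    heapq.heappush(events, (clock + latency, rank + 1))
            return clock, delivered

        # A million virtual ranks driven by a simple event queue.
        t_end, n_events = simulate_ring(1_000_000)
        print(t_end, n_events)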

  6. Mutual information-based template matching scheme for detection of breast masses: from mammography to digital breast tomosynthesis

    PubMed Central

    Mazurowski, Maciej A; Lo, Joseph Y; Harrawood, Brian P; Tourassi, Georgia D

    2011-01-01

    Development of a computational decision aid for a new medical imaging modality is typically a long and complicated process. It consists of collecting data in the form of images and annotations, developing image processing and pattern recognition algorithms for analysis of the new images, and finally testing the resulting system. Since new imaging modalities are developed more rapidly than ever before, any effort to decrease the time and cost of this development process could maximize the benefit of the new imaging modality to patients by making computer aids quickly available to the radiologists who interpret the images. In this paper, we take a step in this direction and investigate the possibility of translating knowledge about the detection problem from one imaging modality to another. Specifically, we present a computer-aided detection (CAD) system for mammographic masses that uses a mutual information-based template matching scheme with intelligently selected templates. We have previously presented the principles of template matching with mutual information for mammography; here, we present an implementation of those principles in a complete computer-aided detection system. The proposed system, through an automatic optimization process, chooses the most useful templates (mammographic regions of interest) using a large database of previously collected and annotated mammograms. Through this process, knowledge about the task of detecting masses in mammograms is incorporated into the system. We then evaluate whether our system, developed for screen-film mammograms, can be successfully applied not only to other mammograms but also to digital breast tomosynthesis (DBT) reconstructed slices without adding any DBT cases for training. Our rationale is that since mutual information is known to be a robust intermodality image similarity measure, it has high potential for transferring knowledge between modalities in the context of the mass detection task. Experimental evaluation of the system on mammograms showed competitive performance compared with other mammography CAD systems recently published in the literature. When the system was applied "as-is" to DBT, its performance was notably worse than that for mammograms. However, with a simple additional preprocessing step, the performance of the system reached levels similar to those obtained for mammograms. In conclusion, the presented CAD system not only performed competitively on screen-film mammograms but also performed robustly on DBT, showing that direct transfer of knowledge across breast imaging modalities for mass detection is in fact possible. PMID:21554985
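
    The similarity measure at the core of the scheme can be sketched as follows (a histogram-based mutual information estimate; the bin count and names are illustrative choices, not the paper's implementation):

        import numpy as np

        def mutual_information(a, b, bins=32):
            """Mutual information of two equal-size patches, estimated from
            their joint gray-level histogram."""
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0                                   # avoid log(0)
            return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

        rng = np.random.default_rng(0)
        patch = rng.random((64, 64))
        print(mutual_information(patch, patch))                # high: identical
        print(mutual_information(patch, rng.random((64, 64)))) # ~0: unrelated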

  7. Embodiment and Performance

    ERIC Educational Resources Information Center

    Bessell, Jacquelyn; Riddell, Patricia

    2016-01-01

    Evidence suggests that some cognitive processes are based on sensorimotor systems in the brain (embodied cognition). The premise of this is that "Biological brains are first and foremost the control systems for biological bodies". It has therefore been suggested that both online cognition (processing as we move through the world) and…

  8. Evaluation and comparison of alternative designs for water/solid-waste processing systems for spacecraft

    NASA Technical Reports Server (NTRS)

    Spurlock, J. M.

    1975-01-01

    Promising candidate designs currently being considered for the management of spacecraft solid waste and waste-water materials were assessed. The candidate processes were: (1) the radioisotope thermal energy evaporation/incinerator process; (2) the dry incineration process; and (3) the wet oxidation process. The types of spacecraft waste materials that were included in the base-line computational input to the candidate systems were feces, urine residues, trash and waste-water concentrates. The performance characteristics and system requirements for each candidate process to handle this input and produce the specified acceptable output (i.e., potable water, a storable dry ash, and vapor phase products that can be handled by a spacecraft atmosphere control system) were estimated and compared. Recommendations are presented.

  9. A digital computer simulation and study of a direct-energy-transfer power-conditioning system

    NASA Technical Reports Server (NTRS)

    Burns, W. W., III; Owen, H. A., Jr.; Wilson, T. G.; Rodriguez, G. E.; Paulkovich, J.

    1974-01-01

    A digital computer simulation technique for studying composite power-conditioning systems was applied to a spacecraft direct-energy-transfer power-processing system. The results obtained duplicate actual system performance with considerable accuracy. The validity of the approach and its usefulness in studying various aspects of system performance, such as steady-state characteristics and transient responses to severely varying operating conditions, are demonstrated experimentally.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, Junhwan; Hwang, Sungui; Park, Kyihwan, E-mail: khpark@gist.ac.kr

    To utilize a time-of-flight-based laser scanner as a distance measurement sensor, the measurable distance and accuracy are the most important performance parameters to consider. For these purposes, the optical system and electronic signal processing of the laser scanner should be optimally designed to reduce distance errors caused by optical crosstalk and wide-dynamic-range inputs. An optical system design that removes the optical crosstalk problem is proposed in this work. Intensity control is also considered, to address the phase-shift variation in the signal processing circuit caused by object reflectivity. The experimental results for the optical system and signal processing design are verified using 3D measurements.

  11. Systems Engineering of Unmanned DoD Systems: Following the Joint Capabilities Integration and Development System/Defense Acquisition System Process to Develop an Unmanned Ground Vehicle System

    DTIC Science & Technology

    2015-12-01

    APAs are "Performance attributes of a system not important enough to be considered KPPs or KSAs, but still appropriate to include in the CDD or CPD" (JCIDS Manual D-A-1). The requirements are expressed using Thresholds (T) and Objectives (O). Naval Postgraduate School, Monterey, California, Systems Engineering Capstone Project Report; approved for public release, distribution unlimited.

  12. Electronic Performance Support Systems: Comparison of Types of Integration Levels on Performance Outcomes

    ERIC Educational Resources Information Center

    Phillips, Sharon A.

    2013-01-01

    Selecting appropriate performance improvement interventions is a critical component of a comprehensive model of performance improvement. Intervention selection is an interconnected process involving analysis of an organization's environment, definition of the performance problem, identification of a performance gap, and identification of causal…

  13. Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras placed in an enclosure that is mounted, forward-looking, underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real time and displayed on monitors on board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. Achieving this goal, however, requires several different stages of processing, including enhancement, registration, and fusion, as well as specialized processing hardware for real-time performance. We use a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms, then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
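
    The fusion stage reduces, in its simplest form, to a normalized weighted sum of co-registered frames; a hedged sketch (enhancement and affine registration omitted, and the weights and names are illustrative, not the flight code):

        import numpy as np

        def fuse_weighted(images, weights):
            """Weighted-sum fusion of co-registered frames; weights are
            normalized so the output stays within the input range."""
            w = np.asarray(weights, dtype=float)
            w /= w.sum()
            stack = np.stack([img.astype(float) for img in images])
            fused = np.tensordot(w, stack, axes=1)
            return np.clip(fused, 0, 255).astype(np.uint8)

        # Stand-in frames for the two infrared cameras.
        cam_a = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
        cam_b = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
        fused = fuse_weighted([cam_a, cam_b], weights=[0.6, 0.4])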

  14. Altitude deviations: Breakdowns of an error-tolerant system

    NASA Technical Reports Server (NTRS)

    Palmer, Everett A.; Hutchins, Edwin L.; Ritter, Richard D.; Vancleemput, Inge

    1993-01-01

    Pilot reports of aviation incidents to the Aviation Safety Reporting System (ASRS) provide a window on the problems occurring in today's airline cockpits. The narratives of 10 pilot reports of errors made in the automation-assisted altitude-change task are used to illustrate some of the issues of pilots interacting with automatic systems. These narratives are then used to construct a description of the cockpit as an information processing system. The analysis concentrates on the error-tolerant properties of the system and on how breakdowns can occasionally occur. An error-tolerant system can detect and correct its internal processing errors. The cockpit system consists of two or three pilots supported by autoflight, flight-management, and alerting systems. These humans and machines have distributed access to clearance information and perform redundant processing of information. Errors can be detected as deviations from either expected behavior or as deviations from expected information. Breakdowns in this system can occur when the checking and cross-checking tasks that give the system its error-tolerant properties are not performed because of distractions or other task demands. Recommendations based on the analysis for improving the error tolerance of the cockpit system are given.

  15. A Cost and Performance System (CAPS) in a Federal agency

    NASA Technical Reports Server (NTRS)

    Huseonia, W. F.; Penton, P. G.

    1994-01-01

    Cost and Performance System (CAPS) is an automated system used from the planning phase through implementation to analysis and documentation. Data are retrievable and available for analysis of cost-versus-performance anomalies. CAPS provides a uniform system across intra- and international elements; a common system is recommended throughout an entire cost or profit center. Data can be easily accumulated and aggregated into higher levels of tracking and reporting of cost and performance. The level and quality of performance or productivity is indicated in the CAPS model and its process. The CAPS model provides the necessary decision information and insight to the principal investigator/project engineer for a successful project management experience, and provides all levels of management with the appropriate detailed level of data.

  16. The ATLAS Data Acquisition System in LHC Run 2

    NASA Astrophysics Data System (ADS)

    Panduro Vazquez, William; ATLAS Collaboration

    2017-10-01

    The LHC has been providing pp collisions with record luminosity and energy since the start of Run 2 in 2015. The Trigger and Data Acquisition system of the ATLAS experiment has been upgraded to deal with the increased performance required by this new operational mode. The dataflow system and associated network infrastructure have been reshaped in order to benefit from technological progress and to maximize the flexibility and efficiency of the data selection process. The new design is radically different from the previous implementation both in terms of architecture and performance, with the previous two-level structure merged into a single processing farm, performing incremental data collection and analysis. In addition, logical farm slicing, with each slice managed by a dedicated supervisor, has been dropped in favour of global management by a single farm master operating at 100 kHz. This farm master has also been integrated with a new software-based Region of Interest builder, replacing the previous VMEbus-based system. Finally, the Readout system has been completely refitted with new higher performance, lower footprint server machines housing a new custom front-end interface card. Here we will cover the overall design of the system, along with performance results from the start-up phase of LHC Run 2.

  17. Multichannel Detection in High-Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    Miller, James C.; And Others

    1982-01-01

    A linear photodiode array is used as the photodetector element in a new ultraviolet-visible detection system for high-performance liquid chromatography (HPLC). Using a computer network, the system processes eight different chromatographic signals simultaneously in real-time and acquires spectra manually/automatically. Applications in fast HPLC…

  18. Implementing asynchronous collective operations in a multi-node processing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Eisley, Noel A.; Heidelberger, Philip

    A method, system, and computer program product are disclosed for implementing an asynchronous collective operation in a multi-node data processing system. In one embodiment, the method comprises sending data to a plurality of nodes in the data processing system, broadcasting a remote get to the plurality of nodes, and using this remote get to implement asynchronous collective operations on the data by the plurality of nodes. In one embodiment, each of the nodes performs only one task in the asynchronous operations, and each node sets up a base address table with an entry for the base address of a memory buffer associated with that node. In another embodiment, each of the nodes performs a plurality of tasks in the collective operations, and each task of each node sets up a base address table with an entry for the base address of a memory buffer associated with the task.

  19. Power processing for electric propulsion

    NASA Technical Reports Server (NTRS)

    Finke, R. C.; Herron, B. G.; Gant, G. D.

    1975-01-01

    The potential of achieving up to 30 per cent more spacecraft payload or 50 per cent more useful operating life by the use of electric propulsion in place of conventional cold gas or hydrazine systems in science, communications, and earth applications spacecraft is a compelling reason to consider the inclusion of electric thruster systems in new spacecraft design. The propulsion requirements of such spacecraft dictate a wide range of thruster power levels and operational lifetimes, which must be matched by lightweight, efficient, and reliable thruster power processing systems. This paper will present electron bombardment ion thruster requirements; review the performance characteristics of present power processing systems; discuss design philosophies and alternatives in areas such as inverter type, arc protection, and control methods; and project future performance potentials for meeting goals in the areas of power processor weight (10 kg/kW), efficiency (approaching 92 per cent), reliability (0.96 for 15,000 hr), and thermal control capability (0.3 to 5 AU).

  20. Charging and Discharging Processes of Thermal Energy Storage System Using Phase change materials

    NASA Astrophysics Data System (ADS)

    Kanimozhi, B., Dr.; Harish, Kasilanka; Sai Tarun, Bellamkonda; Saty Sainath Reddy, Pogaku; Sai Sujeeth, Padakandla

    2017-05-01

    The objective of the study is to investigate the thermal characteristics of the charging and discharging processes of a fabricated thermal energy storage system using phase change materials (PCMs). A storage tank was designed and developed to enhance the heat transfer rate from the solar tank to the PCM storage tank; heat transfer is enhanced by a number of copper tubes in the fabricated storage tank. This storage tank can hold heat energy for a much longer time than a conventional water storage system. Performance evaluations of the experimental results during the charging and discharging of paraffin wax are discussed, in which heat absorption and heat rejection were calculated at various flow rates.
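
    For reference, the energy bookkeeping behind such charging calculations can be sketched as follows (the property values are typical textbook numbers for paraffin, not the paper's measurements, and melting is idealized as isothermal):

        def pcm_charging_energy(mass_kg, t_start, t_end, t_melt,
                                cp_solid=2.1, cp_liquid=2.2, latent=200.0):
            """Energy (kJ) absorbed heating a PCM from t_start to t_end across
            its melting point: sensible heat in each phase plus latent heat.
            Defaults (kJ/kg.K and kJ/kg) are typical paraffin values."""
            q_solid = mass_kg * cp_solid * max(0.0, min(t_end, t_melt) - t_start)
            q_latent = mass_kg * latent if t_end > t_melt else 0.0
            q_liquid = mass_kg * cp_liquid * max(0.0, t_end - max(t_start, t_melt))
            return q_solid + q_latent + q_liquid

        # 10 kg of paraffin charged from 30 C to 70 C, melting near 55 C:
        # 525 kJ (solid) + 2000 kJ (latent) + 330 kJ (liquid) = 2855 kJ
        print(pcm_charging_energy(10, 30, 70, 55))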

  1. WMAP C&DH Software

    NASA Technical Reports Server (NTRS)

    Cudmore, Alan; Leath, Tim; Ferrer, Art; Miller, Todd; Walters, Mark; Savadkin, Bruce; Wu, Ji-Wei; Slegel, Steve; Stagmer, Emory

    2007-01-01

    The command-and-data-handling (C&DH) software of the Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft functions as the sole interface between (1) the spacecraft and its instrument subsystem and (2) ground operations equipment. This software includes a command-decoding and -distribution system, a telemetry/data-handling system, and a data-storage-and-playback system. It performs onboard processing of attitude sensor data and generates commands for attitude-control actuators in a closed-loop fashion. It also processes stored commands and monitors health and safety functions for the spacecraft and its instrument subsystems. The basic functionality of this software is the same as that of the older C&DH software of the Rossi X-Ray Timing Explorer (RXTE) spacecraft, the main difference being the addition of the attitude-control functionality. Previously, the C&DH and attitude-control computations were performed by different processors because a single RXTE processor did not have enough processing power. The WMAP spacecraft includes a more powerful processor capable of performing both computations.

  2. An experimental investigation of the effects of alarm processing and display on operator performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Hara, J.; Brown, W.; Hallbert, B.

    1998-03-01

    This paper describes a research program sponsored by the US Nuclear Regulatory Commission to address the human factors engineering (HFE) aspects of nuclear power plant alarm systems. The overall objective of the program is to develop HFE review guidance for advanced alarm systems. As part of this program, guidance has been developed based on a broad base of technical and research literature. In the course of guidance development, aspects of alarm system design for which the technical basis was insufficient to support complete guidance development were identified. The primary purpose of the research reported in this paper was to evaluate the effects of three of these alarm system design characteristics on operator performance, in order to contribute to the understanding of potential safety issues and to provide data to support the development of design review guidance in these areas. The three alarm system design characteristics studied were (1) alarm processing (degree of alarm reduction), (2) alarm availability (dynamic prioritization and suppression), and (3) alarm display (a dedicated tile format, a mixed tile and message list format, and a format in which alarm information is integrated into the process displays). A secondary purpose was to provide confirmatory evidence for selected alarm system guidance developed in an earlier phase of the project. The alarm characteristics were combined into eight separate experimental conditions. Six two-person crews of professional nuclear power plant operators participated in the study. Following training, each crew completed 16 test trials, consisting of two trials in each of the eight experimental conditions (one with a low-complexity scenario and one with a high-complexity scenario). Measures of process performance, operator task performance, situation awareness, and workload were obtained. In addition, operator opinions and evaluations of the alarm processing and display conditions were collected. No deficient performance was observed in any of the experimental conditions, providing confirmatory support for many design review guidelines. The operators identified numerous strengths and weaknesses associated with individual alarm design characteristics.

  3. Review of Exploration Systems Development (ESD) Integrated Hazard Development Process. Volume 1; Appendices

    NASA Technical Reports Server (NTRS)

    Smiles, Michael D.; Blythe, Michael P.; Bejmuk, Bohdan; Currie, Nancy J.; Doremus, Robert C.; Franzo, Jennifer C.; Gordon, Mark W.; Johnson, Tracy D.; Kowaleski, Mark M.; Laube, Jeffrey R.

    2015-01-01

    The Chief Engineer of the Exploration Systems Development (ESD) Office requested that the NASA Engineering and Safety Center (NESC) perform an independent assessment of the ESD's integrated hazard development process. The focus of the assessment was to review the integrated hazard analysis (IHA) process and identify any gaps/improvements in the process (e.g., missed causes, cause tree completeness, missed hazards). This document contains the outcome of the NESC assessment.

  4. Review of Exploration Systems Development (ESD) Integrated Hazard Development Process. Appendices; Volume 2

    NASA Technical Reports Server (NTRS)

    Smiles, Michael D.; Blythe, Michael P.; Bejmuk, Bohdan; Currie, Nancy J.; Doremus, Robert C.; Franzo, Jennifer C.; Gordon, Mark W.; Johnson, Tracy D.; Kowaleski, Mark M.; Laube, Jeffrey R.

    2015-01-01

    The Chief Engineer of the Exploration Systems Development (ESD) Office requested that the NASA Engineering and Safety Center (NESC) perform an independent assessment of the ESD's integrated hazard development process. The focus of the assessment was to review the integrated hazard analysis (IHA) process and identify any gaps/improvements in the process (e.g., missed causes, cause tree completeness, missed hazards). This document contains the outcome of the NESC assessment.

  5. Dynamic Resource Allocation to Improve Service Performance in Order Fulfillment Systems

    DTIC Science & Technology

    2009-01-01

    efficient system uses economies of scale at two points: orders are batched before processing, which reduces processing costs, and processed orders… The effects of batching on order picking processes is well-researched and well-understood (van den Berg and Gademann, 1999). Because orders are… a final sojourn time distribution. Our work builds on existing research in matrix-geometric methods by Neuts (1981), Asmussen and Møller (2001

  6. Motivation for documentation.

    PubMed

    Graham, Denise H

    2004-11-01

    The quality improvement plan relies on controlling quality of care through improving the process or system as a whole. Your ongoing data collection is paramount to the process of system-wide improvement and performance, enhancement of financial performance, operational performance and overall service performance and satisfaction. The threat of litigation and having to defend yourself from a claim of wrongdoing still looms every time your wheels turn. Your runsheet must serve and protect you. Look at the NFPA 1710 standard, which was enacted to serve and protect firefighters. This standard was enacted with their personal safety and well-being as the principle behind staffing requirements. At what stage of draft do you suppose the NFPA 1710 standard would be today if the relative data were collected sporadically or were not tracked for each service-related death? It may have taken many more service-related deaths to effect change for a system-wide improvement in operational performance. Every call merits documentation and data collection. Your data are catalysts for change.

  7. 77 FR 36577 - Agency Information Collection Activities; Submission for OMB Review; Comment Request; Tax...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-19

    ... PERFORMS, the performance management system for the UI program. UI PERFORMS incorporates a strategic planning process of identifying priorities; ongoing collection and monitoring of valid data to measure...

  8. Analysis And Control System For Automated Welding

    NASA Technical Reports Server (NTRS)

    Powell, Bradley W.; Burroughs, Ivan A.; Kennedy, Larry Z.; Rodgers, Michael H.; Goode, K. Wayne

    1994-01-01

    Automated variable-polarity plasma arc (VPPA) welding apparatus operates under electronic supervision by welding analysis and control system. System performs all major monitoring and controlling functions. It acquires, analyzes, and displays weld-quality data in real time and adjusts process parameters accordingly. Also records pertinent data for use in post-weld analysis and documentation of quality. System includes optoelectronic sensors and data processors that provide feedback control of welding process.

  9. Facilitating NASA's Use of GEIA-STD-0005-1, Performance Standard for Aerospace and High Performance Electronic Systems Containing Lead-Free Solder

    NASA Technical Reports Server (NTRS)

    Plante, Jeannete

    2010-01-01

GEIA-STD-0005-1 defines the objectives of, and requirements for, documenting processes that assure customers and regulatory agencies that AHP electronic systems containing lead-free solder, piece parts, and boards will satisfy the applicable requirements for performance, reliability, airworthiness, safety, and certifiability throughout the specified life of performance. It communicates requirements for a Lead-Free Control Plan (LFCP) to assist suppliers in the development of their own Plans. The Plan documents the Plan Owner's (supplier's) processes that assure their customers and all other stakeholders that the Plan Owner's products will continue to meet their requirements. The presentation reviews quality assurance requirements traceability and LFCP template instructions.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shamis, Pavel; Graham, Richard L; Gorentla Venkata, Manjunath

The scalability and performance of collective communication operations limit the scalability and performance of many scientific applications. This paper presents two new blocking and nonblocking Broadcast algorithms for communicators with arbitrary communication topology, and studies their performance. These algorithms benefit from increased concurrency and a reduced memory footprint, making them suitable for use on large-scale systems. Measuring small, medium, and large data Broadcasts on a Cray-XT5, using 24,576 MPI processes, the Cheetah algorithms outperform the native MPI on that system by 51%, 69%, and 9%, respectively, at the same process count. These results demonstrate an algorithmic approach to the implementation of the important class of collective communications, which is high performing, scalable, and also uses resources in a scalable manner.
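
    The Cheetah algorithms' topology-aware internals are not given in this abstract, but the general shape of a tree-structured broadcast built from point-to-point messages can be sketched. The following minimal binomial-tree version uses the mpi4py bindings; the helper name tree_bcast and the payload are illustrative assumptions, and this is a sketch of the generic technique, not the paper's implementation.

    ```python
    # Minimal binomial-tree broadcast over point-to-point messages
    # (mpi4py). Illustrates the log2(P)-depth tree idea only; it is
    # NOT the topology-aware Cheetah algorithm from the paper.
    from mpi4py import MPI

    def tree_bcast(comm, data, root=0):
        rank, size = comm.Get_rank(), comm.Get_size()
        vrank = (rank - root) % size      # virtual rank; root maps to 0
        mask = 1
        while mask < size:                # receive phase: wait for parent
            if vrank & mask:
                parent = (vrank - mask + root) % size
                data = comm.recv(source=parent, tag=0)
                break
            mask <<= 1
        mask >>= 1
        while mask > 0:                   # send phase: forward to children
            child = vrank + mask
            if child < size:
                comm.send(data, dest=(child + root) % size, tag=0)
            mask >>= 1
        return data

    comm = MPI.COMM_WORLD
    msg = {"payload": 42} if comm.Get_rank() == 0 else None
    msg = tree_bcast(comm, msg)           # every rank now holds the payload
    ```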

  11. Performance analysis of a generalized upset detection procedure

    NASA Technical Reports Server (NTRS)

    Blough, Douglas M.; Masson, Gerald M.

    1987-01-01

A general procedure for upset detection in complex systems, called the data block capture and analysis upset monitoring process, is described and analyzed. The process consists of repeatedly recording a fixed amount of data from a set of predetermined observation lines of the system being monitored (i.e., capturing a block of data), and then analyzing the captured block in an attempt to determine whether the system is functioning correctly. The algorithm which analyzes the data blocks can be characterized in terms of the amount of time it requires to examine a data block of given length to ascertain the existence of features/conditions that have been predetermined to characterize the upset-free behavior of the system. The performance of linear, quadratic, and logarithmic data analysis algorithms is rigorously characterized in terms of three performance measures: (1) the probability of correctly detecting an upset; (2) the expected number of false alarms; and (3) the expected latency in detecting upsets.
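
    As a rough companion to the three measures, the following Monte Carlo sketch scores a block-capture monitor under a toy per-block fault model; the probabilities p_upset, p_detect and p_false and the latency bookkeeping are assumptions for illustration, not the paper's analytical characterization of linear, quadratic, and logarithmic analyzers.

    ```python
    # Monte Carlo estimate of: P(correct detection), number of false
    # alarms, and mean detection latency for a block-capture monitor.
    import random

    def simulate(n_blocks=100_000, p_upset=0.01, p_detect=0.9,
                 p_false=0.001, seed=1):
        rng = random.Random(seed)
        upsets = detected = false_alarms = 0
        latencies, pending = [], None  # pending: first undetected upset block
        for i in range(n_blocks):
            upset = rng.random() < p_upset
            if upset:
                upsets += 1
                if pending is None:
                    pending = i
            flagged = rng.random() < (p_detect if upset else p_false)
            if flagged and upset:
                detected += 1
                latencies.append(i - pending)  # blocks until detection
                pending = None
            elif flagged:
                false_alarms += 1
        return (detected / max(upsets, 1),
                false_alarms,
                sum(latencies) / max(len(latencies), 1))

    print(simulate())
    ```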

  12. Integrating conflict detection and attentional control mechanisms.

    PubMed

    Walsh, Bong J; Buonocore, Michael H; Carter, Cameron S; Mangun, George R

    2011-09-01

    Human behavior involves monitoring and adjusting performance to meet established goals. Performance-monitoring systems that act by detecting conflict in stimulus and response processing have been hypothesized to influence cortical control systems to adjust and improve performance. Here we used fMRI to investigate the neural mechanisms of conflict monitoring and resolution during voluntary spatial attention. We tested the hypothesis that the ACC would be sensitive to conflict during attentional orienting and influence activity in the frontoparietal attentional control network that selectively modulates visual information processing. We found that activity in ACC increased monotonically with increasing attentional conflict. This increased conflict detection activity was correlated with both increased activity in the attentional control network and improved speed and accuracy from one trial to the next. These results establish a long hypothesized interaction between conflict detection systems and neural systems supporting voluntary control of visual attention.

  13. A Framework for Preliminary Design of Aircraft Structures Based on Process Information. Part 1

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    1998-01-01

This report discusses the general framework and development of a computational tool for preliminary design of aircraft structures based on process information. The described methodology is suitable for multidisciplinary design optimization (MDO) activities associated with integrated product and process development (IPPD). The framework consists of three parts: (1) product and process definitions; (2) engineering synthesis; and (3) optimization. The product and process definitions are part of the input information provided by the design team. The backbone of the system is its ability to analyze a given structural design for performance as well as manufacturability and cost assessment. The system uses a database on material systems and manufacturing processes. Based on the identified set of design variables and an objective function, the system is capable of performing optimization subject to manufacturability, cost, and performance constraints. The accuracy of the manufacturability measures and cost models discussed here depends largely on the available data on specific methods of manufacture and assembly and the associated labor requirements. As such, our focus in this research has been on the methodology itself and not so much on its accurate implementation in an industrial setting. A three-tier approach is presented for an IPPD-MDO based design of aircraft structures. The variable-complexity cost estimation methodology and an approach for integrating manufacturing cost assessment into the design process are also discussed. This report is presented in two parts. In the first part, the design methodology is presented, and the computational design tool is described. In the second part, a prototype model of the preliminary design Tool for Aircraft Structures based on Process Information (TASPI) is described. Part two also contains an example problem that applies the methodology described here for evaluation of six different design concepts for a wing spar.

  14. Faculty Performance Management System: The Faculty Development/Evaluation System at Beaufort Technical College, 1986-1987. Revised.

    ERIC Educational Resources Information Center

    Tobias, Earole; And Others

    Designed for faculty members at Beaufort Technical College (BTC) in South Carolina, this handbook describes the college's faculty evaluation process and procedures. The first sections of the handbook explain the rationale and method for the faculty evaluation process, state the purposes and objectives of the system, and offer a model which breaks…

  15. Performance analysis of gamma ray spectrometric parameters on digital signal and analog signal processing based MCA systems using NaI(Tl) detector.

    PubMed

    Kukreti, B M; Sharma, G K

    2012-05-01

Accurate and speedy estimations of ppm-range uranium and thorium in geological and rock samples are most useful for ongoing uranium investigations and the identification of favorable radioactive zones in exploration field areas. In this study, given the existing 5 in. × 4 in. NaI(Tl) detector setup and the prevailing background and time constraints, an enhanced geometrical setup has been worked out to improve the minimum detection limits for the primordial radioelements K(40), U(238) and Th(232). This geometrical setup has been integrated with the newly introduced digital signal processing based MCA system for the routine spectrometric analysis of low-concentration rock samples. The stability of the digital signal processing MCA system and of its predecessor, the NIM bin based MCA system, has been monitored during long counting hours using the concept of statistical process control. Results monitored over a time span of a few months have been quantified in terms of the spectrometer's parameters, such as the Compton stripping constants and channel sensitivities used for evaluating primordial radioelement concentrations (K(40), U(238) and Th(232)) in geological samples. Results indicate stable dMCA performance, with a tendency toward higher relative variance about the mean, particularly for the Compton stripping constants. Copyright © 2012 Elsevier Ltd. All rights reserved.
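
    For orientation, the Compton stripping constants and channel sensitivities enter the concentration estimate roughly as in the standard three-window method sketched below; the matrix entries and sensitivities are placeholder values for illustration, not the study's calibrated constants.

    ```python
    # Three-window stripping arithmetic for NaI(Tl) spectrometry
    # (illustrative numbers only).
    import numpy as np

    # Background-corrected count rates (cps) in the K, U and Th windows.
    m = np.array([12.4, 3.1, 1.8])

    # Stripping matrix: spill of each element's emissions into the
    # windows (rows: K, U, Th windows; columns: K, U, Th sources).
    A = np.array([[1.00, 0.90, 0.70],   # U and Th Compton tails under K
                  [0.00, 1.00, 0.55],   # Th tail under the U window
                  [0.00, 0.05, 1.00]])  # small U spill into the Th window

    n = np.linalg.solve(A, m)           # net, stripped count rates

    # Channel sensitivities: cps per %K, per ppm eU, per ppm eTh.
    s = np.array([2.5, 0.35, 0.20])
    conc = n / s
    print(dict(zip(("K_pct", "eU_ppm", "eTh_ppm"), conc.round(2))))
    ```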

  16. The AIS-5000 parallel processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmitt, L.A.; Wilson, S.S.

    1988-05-01

The AIS-5000 is a commercially available massively parallel processor which has been designed to operate in an industrial environment. It has fine-grained parallelism with up to 1024 processing elements arranged in a single-instruction multiple-data (SIMD) architecture. The processing elements are arranged in a one-dimensional chain that, for computer vision applications, can be as wide as the image itself. This architecture has superior cost/performance characteristics compared with two-dimensional mesh-connected systems. The design of the processing elements and their interconnections, as well as the software used to program the system, allows a wide variety of algorithms and applications to be implemented. In this paper, the overall architecture of the system is described. Various components of the system are discussed, including details of the processing elements, data I/O pathways and parallel memory organization. A virtual two-dimensional model for programming image-based algorithms for the system is presented. This model is supported by the AIS-5000 hardware and software and allows the system to be treated as a full-image-size, two-dimensional, mesh-connected parallel processor. Performance benchmarks are given for certain simple and complex functions.

  17. On the assessment of performance and emissions characteristics of a SI engine provided with a laser ignition system

    NASA Astrophysics Data System (ADS)

    Birtas, A.; Boicea, N.; Draghici, F.; Chiriac, R.; Croitoru, G.; Dinca, M.; Dascalu, T.; Pavel, N.

    2017-10-01

Performance and exhaust emissions of spark ignition engines are strongly dependent on the development of the combustion process. Controlling this process in order to improve performance and reduce emissions by ensuring rapid and robust combustion depends on how the ignition stage is achieved. An ignition system that appears able to provide such an enhanced combustion process is one based on plasma generation using a Q-switched solid-state laser that delivers pulses with high peak power (of MW-order level). The laser-spark devices used in the present investigations were realized using compact diffusion-bonded Nd:YAG/Cr4+:YAG ceramic media. The laser igniter was designed, integrated and built to resemble a classical spark plug and therefore could be mounted directly on the cylinder head of a passenger car engine. This study reports the results obtained using such an ignition system on a K7M 710 engine currently produced by Renault-Dacia, where the standard calibrations were changed towards the lean-mixture combustion zone. Results regarding the performance, exhaust emissions and combustion characteristics under optimized spark timing conditions, which demonstrate the potential of such an innovative ignition system, are presented.

  18. Evaluation of the functional performance and technical quality of an Electronic Documentation System of the Nursing Process.

    PubMed

    de Oliveira, Neurilene Batista; Peres, Heloisa Helena Ciqueto

    2015-01-01

To evaluate the functional performance and technical quality of the Electronic Documentation System of the Nursing Process of the Teaching Hospital of the University of São Paulo, an exploratory-descriptive study was carried out using the Quality Model of regulatory standard 25010 and the Evaluation Process defined under regulatory standard 25040, both of the International Organization for Standardization/International Electrotechnical Commission. The quality characteristics evaluated were: functional suitability, reliability, usability, performance efficiency, compatibility, security, maintainability and portability. The sample was made up of 37 evaluators. In the evaluation by the specialists in information technology, only the characteristic of usability obtained a rate of positive responses of less than 70%. For the nurse lecturers, all the quality characteristics obtained a rate of positive responses of over 70%. The staff nurses of the medical and surgical clinics (with experience in using the system) and staff nurses from other units of the hospital and from other health institutions (without experience in using the system) gave rates of positive responses of more than 70% for functional suitability, usability, and security. However, performance efficiency, reliability and compatibility all obtained rates below the established parameter. Overall, the software achieved rates of positive responses of over 70% for the majority of the quality characteristics evaluated.

  19. DSN system performance test software

    NASA Technical Reports Server (NTRS)

    Martin, M.

    1978-01-01

    The system performance test software is currently being modified to include additional capabilities and enhancements. Additional software programs are currently being developed for the Command Store and Forward System and the Automatic Total Recall System. The test executive is the main program. It controls the input and output of the individual test programs by routing data blocks and operator directives to those programs. It also processes data block dump requests from the operator.

  20. Electric terminal performance and characterization of solid oxide fuel cells and systems

    NASA Astrophysics Data System (ADS)

    Lindahl, Peter Allan

Solid Oxide Fuel Cells (SOFCs) are electrochemical devices which can effect efficient, clean, and quiet conversion of chemical to electrical energy. In contrast to conventional electricity generation systems which feature multiple discrete energy conversion processes, SOFCs are direct energy conversion devices. That is, they feature a fully integrated chemical to electrical energy conversion process where the electric load demanded of the cell intrinsically drives the electrochemical reactions and associated processes internal to the cell. As a result, the cell's electric terminals provide a path for interaction between load side electric demand and the conversion side processes. The implication of this is twofold. First, the magnitude and dynamic characteristics of the electric load demanded of the cell can directly impact the long-term efficacy of the cell's chemical to electrical energy conversion. Second, the electric terminal response to dynamic loads can be exploited for monitoring the cell's conversion side processes and used in diagnostic analysis and degradation-mitigating control schemes. This dissertation presents a multi-tier investigation into this electric terminal based performance characterization of SOFCs through the development of novel test systems, analysis techniques and control schemes. First, a reference-based simulation system is introduced. This system scales up the electric terminal performance of a prototype SOFC system, e.g. a single fuel cell, to that of a full power-level stack. This allows realistic stack/load interaction studies while maintaining explicit ability for post-test analysis of the prototype system. Next, a time-domain least squares fitting method for electrochemical impedance spectroscopy (EIS) is developed for reduced-time monitoring of the electrochemical and physicochemical mechanics of the fuel cell through its electric terminals. The utility of the reference-based simulator and the EIS technique is demonstrated through their combined use in the performance testing of a hybrid-source power management (HSPM) system designed to allow in-situ EIS monitoring of a stack under dynamic loading conditions. The results from the latter study suggest that an HSPM controller allows an opportunity for in-situ electric terminal monitoring and control-based mitigation of SOFC degradation. As such, an exploration of control-based SOFC degradation mitigation is presented and ideas for further work are suggested.
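
    At a single excitation frequency, the time-domain least-squares EIS idea reduces to fitting cosine/sine amplitudes to the voltage and current records and taking the phasor ratio; the sketch below uses synthetic data and omits the dissertation's multi-frequency and drift-handling details, so it illustrates the fitting principle only.

    ```python
    # Time-domain least-squares impedance estimate at one frequency.
    import numpy as np

    def impedance_at(f, t, v, i):
        # Design matrix: one tone plus a DC offset.
        X = np.column_stack([np.cos(2*np.pi*f*t),
                             np.sin(2*np.pi*f*t),
                             np.ones_like(t)])
        (av, bv, _), *_ = np.linalg.lstsq(X, v, rcond=None)
        (ai, bi, _), *_ = np.linalg.lstsq(X, i, rcond=None)
        # x(t) = a*cos + b*sin  ->  phasor a - jb (cosine reference).
        return (av - 1j*bv) / (ai - 1j*bi)

    # Synthetic check: 10 mOhm resistance in series with 5 mF capacitance.
    t = np.linspace(0.0, 1.0, 5000)
    f = 10.0
    z_true = 0.010 + 1.0/(1j*2*np.pi*f*5e-3)
    i = 0.5*np.cos(2*np.pi*f*t)
    v = np.real(z_true * 0.5 * np.exp(1j*2*np.pi*f*t))
    print(impedance_at(f, t, v, i), z_true)  # the two should agree
    ```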

  1. Reconsidering the measurement of ancillary service performance.

    PubMed

    Griffin, D T; Rauscher, J A

    1987-08-01

    Prospective payment reimbursement systems have forced hospitals to review their costs more carefully. The result of the increased emphasis on costs is that many hospitals use costs, rather than margin, to judge the performance of ancillary services. However, arbitrary selection of performance measures for ancillary services can result in managerial decisions contrary to hospital objectives. Managerial accounting systems provide models which assist in the development of performance measures for ancillary services. Selection of appropriate performance measures provides managers with the incentive to pursue goals congruent with those of the hospital overall. This article reviews the design and implementation of managerial accounting systems, and considers the impact of prospective payment systems and proposed changes in capital reimbursement on this process.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Washiya, Tadahiro; Komaki, Jun; Funasaka, Hideyuki

Japan Atomic Energy Agency (JAEA) has been developing a new aqueous reprocessing system named 'NEXT' (New Extraction system for TRU recovery), which provides many advantages such as waste volume reduction, cost savings through advanced components, and simplification of process operation. The advanced head-end systems in the 'NEXT' process consist of a fuel disassembly system, a fuel shearing system and a continuous dissolver system. We developed a reliable fuel disassembly system with an innovative procedure, and the short-length shearing system and continuous dissolver system can provide the highly concentrated dissolution needed to adapt to the uranium crystallization process. We have carried out experimental studies and fabricated engineering-scale test devices to confirm the systems' performance. In this paper, research and development of the advanced head-end systems are described. (authors)

  3. MSFC Skylab instrumentation and communication system mission evaluation

    NASA Technical Reports Server (NTRS)

    Adair, B. M.

    1974-01-01

    An evaluation of the in-orbit performance of the instrumentation and communications systems installed on Skylab is presented. Performance is compared with functional requirements and the fidelity of communications. In-orbit performance includes processing engineering, scientific, experiment, and biomedical data, implementing ground-generated commands, audio and video communication, generating rendezvous ranging information, and radio frequency transmission and reception. A history of the system evolution based on the functional requirements and a physical description of the launch configuration is included. The report affirms that the instrumentation and communication system satisfied all imposed requirements.

  4. CO2 laser ranging systems study

    NASA Technical Reports Server (NTRS)

    Filippi, C. A.

    1975-01-01

    The conceptual design and error performance of a CO2 laser ranging system are analyzed. Ranging signal and subsystem processing alternatives are identified, and their comprehensive evaluation yields preferred candidate solutions which are analyzed to derive range and range rate error contributions. The performance results are presented in the form of extensive tables and figures which identify the ranging accuracy compromises as a function of the key system design parameters and subsystem performance indexes. The ranging errors obtained are noted to be within the high accuracy requirements of existing NASA/GSFC missions with a proper system design.

  5. GPR-Based Water Leak Models in Water Distribution Systems

    PubMed Central

    Ayala-Cabrera, David; Herrera, Manuel; Izquierdo, Joaquín; Ocaña-Levario, Silvia J.; Pérez-García, Rafael

    2013-01-01

    This paper addresses the problem of leakage in water distribution systems through the use of ground penetrating radar (GPR) as a nondestructive method. Laboratory tests are performed to extract features of water leakage from the obtained GPR images. Moreover, a test in a real-world urban system under real conditions is performed. Feature extraction is performed by interpreting GPR images with the support of a pre-processing methodology based on an appropriate combination of statistical methods and multi-agent systems. The results of these tests are presented, interpreted, analyzed and discussed in this paper.

  6. An Adaptive Technique for a Redundant-Sensor Navigation System. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chien, T. T.

    1972-01-01

An on-line adaptive technique is developed to provide a self-contained redundant-sensor navigation system with a capability to utilize its full potentiality in reliability and performance. The gyro navigation system is modeled as a Gauss-Markov process, with degradation modes defined as changes in characteristics specified by parameters associated with the model. The adaptive system is formulated as a multistage stochastic process: (1) a detection system, (2) an identification system and (3) a compensation system. It is shown that the sufficient statistic for the partially observable process in the detection and identification system is the posterior measure of the state of degradation, conditioned on the measurement history.

  7. Bibliographic Post-Processing with the TIS Intelligent Gateway: Analytical and Communication Capabilities.

    ERIC Educational Resources Information Center

    Burton, Hilary D.

    TIS (Technology Information System) is an intelligent gateway system capable of performing quantitative evaluation and analysis of bibliographic citations using a set of Process functions. Originally developed by Lawrence Livermore National Laboratory (LLNL) to analyze information retrieved from three major federal databases, DOE/RECON,…

  8. Method for simultaneous overlapped communications between neighboring processors in a multiple

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1991-01-01

A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  9. Development of Entry-Level Competence Tests: A Strategy for Evaluation of Vocational Education Training Systems

    ERIC Educational Resources Information Center

    Schutte, Marc; Spottl, Georg

    2011-01-01

    Developing countries such as Malaysia and Oman have recently established occupational standards based on core work processes (functional clusters of work objects, activities and performance requirements), to which competencies (performance determinants) can be linked. While the development of work-process-based occupational standards is supposed…

  10. Finite element analysis as a design tool for thermoplastic vulcanizate glazing seals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gase, K.M.; Hudacek, L.L.; Pesevski, G.T.

    1998-12-31

There are three materials commonly used in commercial glazing seals: EPDM, silicone and thermoplastic vulcanizates (TPVs). TPVs are a high-performance class of thermoplastic elastomers (TPEs), which combine elastomeric properties with thermoplastic processability. TPVs have emerged as materials well suited for use in glazing seals due to ease of processing, economics and part design flexibility. The part design and development process is critical to ensure that the chosen TPV provides economics, quality and function in demanding environments. In the design and development process, there is great value in utilizing dual-durometer systems to capitalize on the benefits of soft and rigid materials. Computer-aided design tools, such as Finite Element Analysis (FEA), are effective in minimizing development time and predicting system performance. Examples of TPV glazing seals will illustrate the benefits of utilizing FEA to take full advantage of the material characteristics, which results in functional performance and quality while reducing development iterations. FEA will be performed on two glazing seal profiles to confirm optimum geometry.

  11. Acquisition Reform: DOD Should Streamline Its Decision-Making Process for Weapon Systems to Reduce Inefficiencies

    DTIC Science & Technology

    2015-02-01

Performing organization: U.S. Government Accountability Office, 441 G Street NW, Washington, DC 20548. ...milestone decision process; and (3) alternative processes used by some DOD programs and leading commercial firms. To perform this work, GAO... Abbreviations: ACAT, Acquisition Category; DOD, Department of Defense.

  12. Development and manufacture of visor for helmet-mounted display

    NASA Astrophysics Data System (ADS)

    Krevor, David H.; McNelly, Gregg; Skubon, John; Speirs, Robert

    2004-01-01

    The manufacturing design and process development for the Visor for the JHMCS (Joint Helmet Mounted Cueing System) are discussed. The JHMCS system is a Helmet Mounted Display (HMD) system currently flying on the F-15, F-16 and F/A-18 aircraft. The Visor manufacturing processes are essential to both system performance and economy. The Visor functions both as the system optical combiner and personal protective equipment for the pilot. The Visor material is optical polycarbonate. For a military HMD system, the mechanical and environmental properties of the Visor are as necessary as the optical properties. The visor must meet stringent dimensional requirements to assure adequate system optical performance. Injection molding can provide dimensional fidelity to the requirements, if done properly. Concurrent design of the visor and the tool (i.e., the injection mold) is essential. The concurrent design necessarily considers manufacturing operations and the use environment of the Visor. Computer modeling of the molding process is a necessary input to the mold design. With proper attention to product design and tool development, it is possible to improve upon published standard dimensional tolerances for molded polycarbonate articles.

  13. First Results From A Multi-Ion Beam Lithography And Processing System At The University Of Florida

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gila, Brent; Appleton, Bill R.; Fridmann, Joel

    2011-06-01

The University of Florida (UF) has collaborated with Raith to develop a version of the Raith ionLiNE IBL system that has the capability to deliver multiple ion species in addition to the Ga ions normally available. The UF system is currently equipped with an AuSi liquid metal alloy ion source (LMAIS) and an ExB filter, making it capable of delivering Au and Si ions and ion clusters for ion beam processing. Other LMAIS systems could be developed in the future to deliver other ion species. This system is capable of high-performance ion beam lithography, sputter profiling, maskless ion implantation, ion beam mixing, and spatial and temporal ion beam assisted writing and processing over large areas (100 mm²)--all with selected ion species at voltages from 15 to 40 kV and nanometer precision. We discuss the performance of the system with the AuSi LMAIS source and ExB mass separator. We report initial results from the basic system characterization and ion beam lithography, as well as for basic ion-solid interactions.

  14. Evaluation of a biological wastewater treatment system combining an OSA process with ultrasound for sludge reduction.

    PubMed

    Romero-Pareja, P M; Aragon, C A; Quiroga, J M; Coello, M D

    2017-05-01

Sludge production is an undesirable by-product of biological wastewater treatment. The oxic-settling-anaerobic (OSA) process constitutes one of the most promising techniques for reducing the sludge produced at the treatment plant without negative consequences for its overall performance. In the present study, the OSA process is applied in combination with ultrasound treatment, a lysis technique, in a lab-scale wastewater treatment plant to assess whether sludge reduction is enhanced as a result of mechanical treatment. Sludge reductions of 45.72% and 78.56% were obtained for the two regimes of combined treatment tested in this study, during the UO1 and UO2 stages respectively. During the UO1 stage, the general performance and nutrient removal improved, obtaining 47.28% TN removal versus 21.95% in the conventional stage. However, the performance of the system was seriously damaged during the UO2 stage. Increases in dehydrogenase and protease activities were observed during both stages. The advantages of the combined process are not necessarily economic, but operational, as US treatment acts as a contributing factor in the OSA process, inducing mechanisms that lead to sludge reduction and improving performance parameters. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Design of systems for productivity and well being.

    PubMed

    Edwards, Kasper; Jensen, Per Langaa

    2014-01-01

    It has always been an ambition within the ergonomic profession to ensure that design or redesign of production systems consider both productivity and employee well being, but there are many approaches to how to achieve this. This paper identifies the basic issues to be addressed in light of some research activities at DTU, especially by persons responsible for facilitating design processes. Four main issues must be addressed: (1) determining the limits and scope of the system to be designed; (2) identifying stakeholders related to the system and their role in the system design; (3) handling the process' different types of knowledge; and (4) emphasizing that performance management systems, key performance indicators (KPIs), and leadership are also part of the system design and must be given attention. With the examples presented, we argue that knowledge does exist to help system design facilitators address these basic issues. Copyright © 2013. Published by Elsevier Ltd.

  16. NASA End-to-End Data System /NEEDS/ information adaptive system - Performing image processing onboard the spacecraft

    NASA Technical Reports Server (NTRS)

    Kelly, W. L.; Howle, W. M.; Meredith, B. D.

    1980-01-01

The Information Adaptive System (IAS) is an element of the NASA End-to-End Data System (NEEDS) Phase II and is focused toward onboard image processing. Since the IAS is a data preprocessing system which is closely coupled to the sensor system, it serves as a first step in providing a 'Smart' imaging sensor. Some of the functions planned for the IAS include sensor response nonuniformity correction, geometric correction, data set selection, data formatting, packetization, and adaptive system control. The inclusion of these sensor data preprocessing functions onboard the spacecraft will significantly improve the extraction of information from the sensor data in a timely and cost effective manner and provide the opportunity to design sensor systems which can be reconfigured in near real time for optimum performance. The purpose of this paper is to present the preliminary design of the IAS and the plans for its development.

  17. Performance of an image analysis processing system for hen tracking in an environmental preference chamber.

    PubMed

    Kashiha, Mohammad Amin; Green, Angela R; Sales, Tatiana Glogerley; Bahr, Claudia; Berckmans, Daniel; Gates, Richard S

    2014-10-01

    Image processing systems have been widely used in monitoring livestock for many applications, including identification, tracking, behavior analysis, occupancy rates, and activity calculations. The primary goal of this work was to quantify image processing performance when monitoring laying hens by comparing length of stay in each compartment as detected by the image processing system with the actual occurrences registered by human observations. In this work, an image processing system was implemented and evaluated for use in an environmental animal preference chamber to detect hen navigation between 4 compartments of the chamber. One camera was installed above each compartment to produce top-view images of the whole compartment. An ellipse-fitting model was applied to captured images to detect whether the hen was present in a compartment. During a choice-test study, mean ± SD success detection rates of 95.9 ± 2.6% were achieved when considering total duration of compartment occupancy. These results suggest that the image processing system is currently suitable for determining the response measures for assessing environmental choices. Moreover, the image processing system offered a comprehensive analysis of occupancy while substantially reducing data processing time compared with the time-intensive alternative of manual video analysis. The above technique was used to monitor ammonia aversion in the chamber. As a preliminary pilot study, different levels of ammonia were applied to different compartments while hens were allowed to navigate between compartments. Using the automated monitor tool to assess occupancy, a negative trend of compartment occupancy with ammonia level was revealed, though further examination is needed. ©2014 Poultry Science Association Inc.
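
    A presence test of the kind described, pairing background subtraction with an ellipse-fitting step, could be sketched as follows with OpenCV; the threshold, morphology kernel and minimum contour area are placeholder choices, not the paper's calibrated pipeline.

    ```python
    # Per-compartment hen presence via background subtraction and
    # ellipse fitting (illustrative parameters).
    import cv2
    import numpy as np

    def hen_present(frame_gray, background_gray, min_area=800):
        diff = cv2.absdiff(frame_gray, background_gray)
        _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            # fitEllipse needs at least 5 contour points.
            if cv2.contourArea(c) >= min_area and len(c) >= 5:
                ellipse = cv2.fitEllipse(c)  # ((cx, cy), (w, h), angle)
                return True, ellipse
        return False, None
    ```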

  18. Development of the Data Acquisition and Processing System for a Pulsed 2-Micron Coherent Doppler Lidar System

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.; Kavaya, Michael J.

    2010-01-01

    A general overview of the development of a data acquisition and processing system is presented for a pulsed, 2-micron coherent Doppler Lidar system located in NASA Langley Research Center in Hampton, Virginia, USA. It is a comprehensive system that performs high-speed data acquisition, analysis, and data display both in real time and offline. The first flight missions are scheduled for the summer of 2010 as part of the NASA Genesis and Rapid Intensification Processes (GRIP) campaign for the study of hurricanes. The system as well as the control software is reviewed and its requirements and unique features are discussed.

  19. Performance of the Landsat-Data Collection System in a Total System Context

    NASA Technical Reports Server (NTRS)

    Paulson, R. W. (Principal Investigator); Merk, C. F.

    1975-01-01

The author has identified the following significant results. This experiment was, and continues to be, an integration of the LANDSAT-DCS with the data collection and processing system of the Geological Survey. Although an experimental demonstration, it was a successful integration of a satellite relay system capable of continental data collection with an existing governmental nationwide operational data processing and distribution network. The Survey's data processing system uses a large general purpose computer with insufficient redundancy for 24-hour a day, 7 day a week operation. This is a significant, but soluble, obstacle to converting the experimental integration of the system to an operational integration.

  20. SysML: A Language for Space System Engineering

    NASA Astrophysics Data System (ADS)

    Mazzini, S.; Strangapede, A.

    2008-08-01

This paper presents the results of an ESA/ESTEC internal study, performed with the support of INTECS, about modeling languages to support Space System Engineering activities and processes, with special emphasis on system requirements identification and analysis. The study was focused on the assessment of dedicated UML profiles, their positioning alongside the system and software life cycles, and associated methodologies. Requirements for a Space System Requirements Language were identified considering the ECSS-E-10 and ECSS-E-40 processes. The study has identified SysML as a very promising language, having as its theoretical background the reference system processes defined by ISO 15288, as well as industrial practices.

  1. Examination of a carton sealing line using a thermographic scanner

    NASA Astrophysics Data System (ADS)

    Kleinfeld, Jack M.

    1999-03-01

A study of the operation and performance of natural-gas-fired sealing lines for polyethylene-coated beverage containers was performed. Both thermal and geometric data were extracted from the thermal scans and used to characterize the performance of the sealing line. The impact of process operating variables such as line speed and carton-to-carton spacing was studied. Recommendations for system improvements, instrumentation and process control were made.

  2. Skylab technology electrical power system

    NASA Technical Reports Server (NTRS)

    Woosley, A. P.; Smith, O. B.; Nassen, H. S.

    1974-01-01

    The solar array/battery power systems for the Skylab vehicle were designed to operate in a solar inertial pointing mode to provide power continuously to the Skylab. Questions of power management are considered, taking into account difficulties caused by the reduction in power system performance due to the effects of structural failure occurring during the launching process. The performance of the solar array of the Apollo Telescope Mount Power System is discussed along with the Orbital Workshop solar array performance and the Airlock Module power conditioning group performance. A list is presented of a number of items which have been identified during mission monitoring and are recommended for electrical power system concepts, designs, and operation for future spacecraft.

  3. Hydride heat pump with heat regenerator

    NASA Technical Reports Server (NTRS)

    Jones, Jack A. (Inventor)

    1991-01-01

    A regenerative hydride heat pump process and system is provided which can regenerate a high percentage of the sensible heat of the system. A series of at least four canisters containing a lower temperature performing hydride and a series of at least four canisters containing a higher temperature performing hydride is provided. Each canister contains a heat conductive passageway through which a heat transfer fluid is circulated so that sensible heat is regenerated. The process and system are useful for air conditioning rooms, providing room heat in the winter or for hot water heating throughout the year, and, in general, for pumping heat from a lower temperature to a higher temperature.

  4. Development Research of a Teachers' Educational Performance Support System: The Practices of Design, Development, and Evaluation

    ERIC Educational Resources Information Center

    Hung, Wei-Chen; Smith, Thomas J.; Harris, Marian S.; Lockard, James

    2010-01-01

    This study adopted design and development research methodology (Richey & Klein, "Design and development research: Methods, strategies, and issues," 2007) to systematically investigate the process of applying instructional design principles, human-computer interaction, and software engineering to a performance support system (PSS) for behavior…

  5. Deriving the 12-Lead Electrocardiogram From Four Standard Leads Based on the Frank Torso Model

    DTIC Science & Technology

    2001-10-25

The University of Aizu, Graduate School of Information Systems, Fukushima Prefecture, Japan. Abstract – This paper proposes a lead method and a processing means for monitoring the 12-lead electrocardiogram...

  6. Development and Release of a GRACE-FO "Grand Simulation" Data Set by JPL

    NASA Astrophysics Data System (ADS)

    Fahnestock, E.; Yuan, D. N.; Wiese, D. N.; McCullough, C. M.; Harvey, N.; Sakumura, C.; Paik, M.; Bertiger, W. I.; Wen, H. Y.; Kruizinga, G. L. H.

    2017-12-01

    The GRACE-FO mission, to be launched early in 2018, will require several stages of data processing to be performed within its Science Data System (SDS). In an effort to demonstrate effective implementation and inter-operation of this level 1, 2, and 3 data processing, and to verify its combined ability to recover a truth Earth gravity field to within top-level requirements, the SDS team has performed a system test which it has termed the "Grand Simulation". This process starts with iteration to converge on a mutually consistent integrated truth orbit, non-gravitational acceleration time history, and spacecraft attitude time history, generated with the truth models for all elements of the integrated system (geopotential, both GRACE-FO spacecraft, constellation of GPS spacecraft, etc.). Level 1A data products are generated and then the GPS time to onboard receiver time clock error is introduced into those products according to a realistic truth clock offset model. The various data products are noised according to current best estimate noise models, and then some are used within a precision orbit determination and clock offset estimation/recovery process. Processing from level 1A to level 1B data products uses the recovered clock offset to correct back to GPS time, and performs gap-filling, compression, etc. This exercises nearly all software pathways intended for processing actual GRACE-FO science data. Finally, a monthly gravity field is recovered and compared against the truth background field. In this talk we briefly summarize the resulting performance vs. requirements, and lessons learned in the system test process. Finally, we provide information for use of the level 1B data set by the general community for gravity solution studies and software trials in anticipation of operational GRACE-FO data. ©2016 California Institute of Technology. Government sponsorship acknowledged.

  7. Facilitating cancer research using natural language processing of pathology reports.

    PubMed

    Xu, Hua; Anderson, Kristin; Grann, Victor R; Friedman, Carol

    2004-01-01

Many ongoing clinical research projects, such as projects involving studies associated with cancer, involve manual capture of information in surgical pathology reports so that the information can be used to determine the eligibility of recruited patients for the study and to provide other information, such as cancer prognosis. Natural language processing (NLP) systems offer an alternative through automated coding, but pathology reports have certain features that are difficult for NLP systems. This paper describes how a preprocessor was integrated with an existing NLP system (MedLEE) in order to reduce modification to the NLP system and to improve performance. The work was done in conjunction with an ongoing clinical research project that assesses disparities and risks of developing breast cancer for minority women. An evaluation of the system was performed using manually coded data from the research project's database as a gold standard. The evaluation outcome showed that the extended NLP system had a sensitivity of 90.6% and a precision of 91.6%. Results indicated that this system performed satisfactorily for capturing information for the cancer research project.
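
    The reported sensitivity and precision follow the usual definitions over true positives, false positives and false negatives; a minimal sketch, with hypothetical counts chosen only to reproduce similar figures:

    ```python
    # Standard evaluation arithmetic (counts below are hypothetical).
    def sensitivity_precision(tp, fp, fn):
        sensitivity = tp / (tp + fn)   # fraction of true items found
        precision = tp / (tp + fp)     # fraction of extracted items correct
        return sensitivity, precision

    print(sensitivity_precision(tp=906, fp=83, fn=94))  # ~ (0.906, 0.916)
    ```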

  8. Development and evaluation of low cost honey heating-cum-filtration system.

    PubMed

    Alam, Md Shafiq; Sharma, D K; Sehgal, V K; Arora, M; Bhatia, S

    2014-11-01

A fully mechanized honey heating-cum-filtration system was designed, developed, fabricated and evaluated for its performance. The system comprised two sections: the top heating section and the lower filtering section. The developed system was evaluated for its performance at different process conditions (25 kg and 50 kg capacity, using 50 °C and 60 °C heating temperatures with 20 and 40 min holding times, respectively), and it was found that the total time required for heating, holding and filtration of honey was 108 and 142 min for the 25 kg and 50 kg capacities of the machine, respectively, irrespective of the processing conditions. The optimum capacity of the system was found to be 50 kg, and it involved an investment of Rs 40,000 for its fabrication. The honey filtered through the developed filtration system was compared with honey filtered in a high-cost honey processing plant and with raw honey for its microbial and biochemical (reducing sugars (%), moisture, acidity and pH) quality attributes. It was observed that the process of filtering through the developed unit resulted in a reduction of microbes. The microbiological quality of honey filtered through the developed filtration system was better than that of raw honey and commercially processed honey. The treatment conditions found best in terms of microbiological counts were 60 °C for 20 min. There was a 1.97-fold reduction in the plate count and a 2.14-fold reduction in the fungal count of honey processed through the developed filtration system as compared to raw honey. No coliforms were found in the processed honey. Honey processed through the developed unit had lower moisture content and acidity and more reducing sugars than raw honey, while its quality was comparable to commercially processed honey.

  9. Microfluidic biolector-microfluidic bioprocess control in microtiter plates.

    PubMed

    Funke, Matthias; Buchenauer, Andreas; Schnakenberg, Uwe; Mokwa, Wilfried; Diederichs, Sylvia; Mertens, Alan; Müller, Carsten; Kensy, Frank; Büchs, Jochen

    2010-10-15

In industrial-scale biotechnological processes, the active control of the pH value combined with the controlled feeding of substrate solutions (fed-batch) is the standard strategy to cultivate both prokaryotic and eukaryotic cells. By contrast, for small-scale cultivations, much simpler batch experiments with no process control are performed. This lack of process control often hinders researchers from scaling fermentation experiments up or down, because the microbial metabolism, and thereby the growth and production kinetics, changes drastically depending on the cultivation strategy applied. While small-scale batches are typically performed in a highly parallel fashion and at high throughput, large-scale cultivations demand sophisticated equipment for process control, which is in most cases costly and difficult to handle. Currently, there is no technical system on the market that realizes simple process control in high throughput. The novel concept of a microfermentation system described in this work combines a fiber-optic online-monitoring device for microtiter plates (MTPs)--the BioLector technology--with microfluidic control of cultivation processes in volumes below 1 mL. In the microfluidic chip, a micropump is integrated to realize distinct substrate flow rates during fed-batch cultivation at microscale. Hence, a cultivation system with several distinct advantages could be established: (1) high information output on a microscale; (2) many experiments can be performed in parallel and be automated using MTPs; (3) the system is user-friendly and can easily be transferred to a disposable single-use system. This article elucidates this new concept and illustrates applications in fermentations of Escherichia coli under pH-controlled and fed-batch conditions in shaken MTPs. Copyright 2010 Wiley Periodicals, Inc.

  10. High-performance wavelet engine

    NASA Astrophysics Data System (ADS)

    Taylor, Fred J.; Mellot, Jonathon D.; Strom, Erik; Koren, Iztok; Lewis, Michael P.

    1993-11-01

    Wavelet processing has shown great promise for a variety of image and signal processing applications. Wavelets are also among the most computationally expensive techniques in signal processing. It is demonstrated that a wavelet engine constructed with residue number system arithmetic elements offers significant advantages over commercially available wavelet accelerators based upon conventional arithmetic elements. Analysis is presented predicting the dynamic range requirements of the reported residue number system based wavelet accelerator.
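
    The appeal of residue number system arithmetic is that additions and multiplications decompose into independent small-modulus channels with no carries between channels, which maps naturally onto parallel hardware; the following software sketch of the idea uses illustrative moduli and is not the accelerator's actual design.

    ```python
    # RNS arithmetic: channel-wise add/multiply, then Chinese Remainder
    # Theorem reconstruction (moduli are illustrative).
    from math import prod

    MODULI = (255, 256, 257)   # pairwise coprime; range = 16,776,960

    def to_rns(x):
        return tuple(x % m for m in MODULI)

    def rns_add(a, b):
        return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

    def rns_mul(a, b):
        return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

    def from_rns(r):
        # Chinese Remainder Theorem reconstruction.
        M = prod(MODULI)
        x = 0
        for ri, mi in zip(r, MODULI):
            Mi = M // mi
            x += ri * Mi * pow(Mi, -1, mi)   # modular inverse of Mi mod mi
        return x % M

    a, b = 31415, 27182
    assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % prod(MODULI)
    ```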

  11. Application of Advanced Signal Processing Techniques to Angle of Arrival Estimation in ATC Navigation and Surveillance Systems

    DTIC Science & Technology

    1982-06-23

Administration, Systems Research and Development Service, Washington, D.C. 20591. The work reported in this document was... consider sophisticated signal processing techniques as an alternative method of improving system performance. Some work in this area has already taken place... demands on the frequency spectrum. As noted in Table 1-1, there has been considerable work on advanced signal processing in the MLS context.

  12. Experimental Evaluation of Performance Feedback Using the Dismounted Infantry Virtual After Action Review System. Long Range Navy and Marine Corps Science and Technology Program

    DTIC Science & Technology

    2007-11-14

Artificial intelligence and education, Volume 1: Learning environments and tutoring systems. Hillsdale, NJ: Erlbaum. Wickens, C.D. (1984). Processing... and how to use it to best optimize the learning process. Some researchers (see Loftin & Savely, 1991) have proposed adding intelligent systems to the... is experienced as the cognitive centers in an individual's brain process visual, tactile, kinesthetic, olfactory, proprioceptive, and auditory...

  13. A web platform for integrated surface water - groundwater modeling and data management

    NASA Astrophysics Data System (ADS)

    Fatkhutdinov, Aybulat; Stefan, Catalin; Junghanns, Ralf

    2016-04-01

Model-based decision support systems are considered to be reliable and time-efficient tools for resource management in various hydrology-related fields. However, searching for and acquiring the required data, preparing the data sets for simulations, and post-processing, visualizing and publishing the simulation results often require significantly more work and time than performing the modeling itself. The purpose of the developed software is to combine data storage facilities, data processing instruments and modeling tools in a single platform, which can potentially reduce the time required for performing simulations and hence for decision making. The system is developed within the INOWAS (Innovative Web Based Decision Support System for Water Sustainability under a Changing Climate) project. The platform integrates spatially distributed catchment-scale rainfall-runoff, infiltration and groundwater flow models with data storage, processing and visualization tools. The concept is implemented in the form of a web-GIS application and is built from free and open source components, including the PostgreSQL database management system, the Python programming language for modeling purposes, MapServer for visualizing and publishing the data, and OpenLayers for building the user interface, among others. The configuration of the system allows data input, storage, pre- and post-processing and visualization to be performed in a single uninterrupted workflow. In addition, realization of the decision support system as a web service provides an opportunity to easily retrieve and share data sets as well as simulation results over the internet, which gives significant advantages for collaborative work on projects and can significantly increase the usability of the decision support system.

  14. Optimum random and age replacement policies for customer-demand multi-state system reliability under imperfect maintenance

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Luan; Chang, Chin-Chih; Sheu, Dwan-Fang

    2016-04-01

This paper proposes generalised random and age replacement policies for a multi-state system composed of multi-state elements. The degradation of each multi-state element is assumed to follow a non-homogeneous continuous-time Markov process, i.e., a continuous-time, discrete-state process. A recursive approach is presented to efficiently compute the time-dependent state probability distribution of the multi-state element. The state and performance distribution of the entire multi-state system is evaluated via a combination of the stochastic process and the Lz-transform method. A customer-centred reliability measure is developed based on the system performance and the customer demand. We develop random and age replacement policies for an aging multi-state system subject to imperfect maintenance in a failure (or unacceptable) state. For each policy, the optimum replacement schedule which minimises the mean cost rate is derived analytically and discussed numerically.
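
    In the plain single-state, perfect-maintenance special case, the trade-off generalised by these policies reduces to the classical age-replacement mean cost rate C(T) = (c_p*R(T) + c_f*F(T)) / integral_0^T R(t) dt; the sketch below minimises it numerically for an assumed Weibull lifetime with illustrative costs, and is a simplification of the paper's multi-state, imperfect-maintenance model.

    ```python
    # Numerical optimum of the classical age-replacement cost rate
    # (Weibull lifetime; costs and parameters are illustrative).
    import numpy as np

    c_p, c_f = 1.0, 5.0            # preventive vs. failure replacement cost
    beta, eta = 2.5, 100.0         # Weibull shape and scale

    t = np.linspace(1e-3, 300.0, 3000)
    R = np.exp(-(t / eta) ** beta)           # survival function R(t)
    F = 1.0 - R                              # failure probability F(t)
    # Cumulative integral of R via the trapezoidal rule.
    cumR = np.concatenate([[0.0],
                           np.cumsum((R[1:] + R[:-1]) / 2 * np.diff(t))])
    cost_rate = (c_p * R + c_f * F) / np.maximum(cumR, 1e-12)
    T_star = t[np.argmin(cost_rate)]
    print(f"optimum replacement age T* ~ {T_star:.1f}, "
          f"minimum cost rate {cost_rate.min():.4f}")
    ```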

  15. A pipelined architecture for real time correction of non-uniformity in infrared focal plane arrays imaging system using multiprocessors

    NASA Astrophysics Data System (ADS)

    Zou, Liang; Fu, Zhuang; Zhao, YanZheng; Yang, JunYan

    2010-07-01

This paper proposes a pipelined electronic architecture implemented in an FPGA, a very large scale integrated circuit (VLSI), which efficiently handles the real-time non-uniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPA). Dual Nios II soft-core processors and a DSP with a 64+ core together constitute this imaging system. Each processor undertakes its own systematic task, coordinating its work with the others. The system on programmable chip (SOPC) in the FPGA works steadily at a global clock frequency of 96 MHz. An adequate timing margin lets the FPGA perform the NUC image pre-processing algorithm with ease, which provides a favorable guarantee for the post-processing work in the DSP. In addition, this paper presents a hardware (HW) and software (SW) co-design in the FPGA. This architecture thus yields a multiprocessor image processing system and a smart solution for satisfying the system's performance requirements.
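
    The NUC step such pipelines accelerate is, in its simplest form, the standard two-point (gain/offset) correction applied per pixel; the sketch below shows that arithmetic in software, with placeholder calibration targets, and is not the paper's FPGA implementation.

    ```python
    # Two-point non-uniformity correction: per-pixel gain and offset
    # from cold/hot uniform reference frames (illustrative targets).
    import numpy as np

    def two_point_nuc(raw, low, high,
                      target_low=1000.0, target_high=3000.0):
        """raw: frame to correct; low/high: mean frames recorded against
        uniform cold and hot blackbody references (all 2-D arrays)."""
        gain = (target_high - target_low) / np.maximum(high - low, 1e-6)
        offset = target_low - gain * low
        return gain * raw + offset
    ```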

  16. Applying Hanford Tank Mixing Data to Define Pulse Jet Mixer Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, Beric E.; Bamberger, Judith A.; Recknagle, Kurtis P.

Pulse jet mixed (PJM) process vessels are being developed for storing, blending, and chemical processing of nuclear waste slurries at the Waste Treatment and Immobilization Plant (WTP) to be built at Hanford, Washington. These waste slurries exhibit variable process feed characteristics, including Newtonian to non-Newtonian rheologies over a range of solids loadings. Waste feed to the WTP from the Hanford Tank Farms will be accomplished via the Waste Feed Delivery (WFD) system, which includes million-gallon underground storage double-shell tanks (DSTs) with dual-opposed jet mixer pumps. Experience using WFD-type jet mixer pumps to mobilize actual Hanford waste in DSTs may be used to establish design threshold criteria of interest to pulse jet mixed process vessel operation. This paper describes a method to evaluate a pulse jet mixed vessel's capability to process waste based on information obtained while mobilizing and suspending waste with the WFD system jet mixer pumps in a DST. Calculations of jet velocity and wall shear stress in a specific pulse jet mixed process vessel were performed using a commercial computational fluid dynamics (CFD) code. The CFD-modelled process vessel consists of a 4.9-m- (16-ft-) diameter tank with a 2:1 semi-elliptical head, a single, 10-cm (4-in.) downward-facing 60-degree conical nozzle, and a 0.61-m (24-in.) inside diameter PJM. The PJM is located at 70% of the vessel radius with the nozzle stand-off distance 14 cm (6 in.) above the vessel head. The CFD-modeled fluid velocity and wall shear stress can be used to estimate vessel waste-processing performance by comparison with available actual WFD system process data. Test data from the operation of jet mixer pumps in the 23-m (75-ft) diameter DSTs have demonstrated both mobilization (solid particles in a sediment matrix were moved from their initial location) and suspension (mobilized solid particles were moved to a higher elevation in the vessel than their initial location) of waste solids. Jet mixer pumps were used in Hanford waste tank 241-AZ-101, and at least 95% of the 0.46-m (18-in.) deep sediment, with a shear strength of 1,500 to 4,200 Pa, was mobilized. Solids with a median particle size of 43 μm (90th percentile, 94 μm) were suspended in tank 241-AZ-101 to at least 5.5 m (216 in.) above the vessel bottom. Analytical calculations for this jet mixer pump test were used to estimate the velocities and wall shear stress that mobilized and suspended the waste. These velocities and wall shear stresses provide design threshold criteria, which are metrics for system performance that can be evaluated via testing. If the fluid motion in a specific pulse jet mixed process vessel meets or exceeds the fluid motion demonstrated in the WFD system, this provides confidence that the vessel will similarly mobilize and suspend those solids within the WTP. The single-PJM CFD-calculated jet velocity and wall shear stress compare favorably with the design threshold criterion estimated from the tank 241-AZ-101 process data. Therefore, for both mobilization and suspension, the performance data evaluated from the WFD system testing increase confidence that the performance of the pulse jet mixed process vessels will be sufficient to process that waste even if it is not fully characterized.

  17. Combined Acquisition/Processing For Data Reduction

    NASA Astrophysics Data System (ADS)

    Kruger, Robert A.

    1982-01-01

    Digital image processing systems necessarily consist of three components: acquisition, storage/retrieval, and processing. The acquisition component requires the greatest data handling rates. By coupling the acquisition with some online hardwired processing, data rates and capacities for short-term storage can be reduced. Furthermore, long-term storage requirements can be reduced further by appropriate processing and editing of image data contained in short-term memory. The net result could be reduced performance requirements for mass storage, processing, and communication systems. Reduced amounts of data should also speed later data analysis and diagnostic decision making.
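
    As a concrete illustration of the acquisition/processing coupling described above, the sketch below averages frames as they arrive and keeps only a region of interest, so a single reduced composite, rather than every raw frame, reaches short-term storage. The frame sizes, region of interest, and averaging scheme are assumptions for demonstration, not details from the paper.

        import numpy as np

        def acquire_and_reduce(frame_stream, roi=(slice(100, 400), slice(100, 400))):
            """Accumulate frames during acquisition, then store only the
            averaged region of interest (ROI)."""
            accumulator = None
            count = 0
            for frame in frame_stream:            # acquisition loop
                if accumulator is None:
                    accumulator = np.zeros(frame.shape, dtype=np.float64)
                accumulator += frame              # online accumulation; raw frame discarded
                count += 1
            averaged = accumulator / count        # noise-reduced composite frame
            return averaged[roi]                  # edit down to the diagnostic region

        # Example: ten simulated 512x512 frames reduced to one 300x300 average.
        frames = (np.random.poisson(100.0, size=(512, 512)) for _ in range(10))
        print(acquire_and_reduce(frames).shape)   # (300, 300)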

  18. Robotic Processing Of Rocket-Engine Nozzles

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Maslakowski, John E.; Gutow, David A.; Deily, David C.

    1994-01-01

    Automated manufacturing cell containing computer-controlled robotic processing system developed to implement some important related steps in fabrication of rocket-engine nozzles. Performs several tedious and repetitive fabrication, measurement, adjustment, and inspection processes and subprocesses now performed manually. Offers advantages of reduced processing time, greater consistency, excellent collection of data, objective inspections, greater productivity, and simplified fixturing. Also affords flexibility: by making suitable changes in hardware and software, possible to modify process and subprocesses. Flexibility makes work cell adaptable to fabrication of heat exchangers and other items structured similarly to rocket nozzles.

  19. Performance management of the public healthcare services in Ireland: a review.

    PubMed

    Mesabbah, Mohammed; Arisha, Amr

    2016-01-01

    Performance Management (PM) processes have become a potent part of strategic and service-quality decisions in healthcare organisations. In 2005, the management of public healthcare in Ireland was amalgamated into a single integrated management body, named the Health Service Executive (HSE). Since then, the HSE has produced a range of strategies for healthcare development and reform, and has developed a PM system as part of its strategic planning. The purpose of this paper is to review the application of PM in the Irish healthcare system, with a particular focus on Irish hospitals and emergency services. An extensive review of relevant HSE publications from 2005 to 2013 is conducted. Studies of the relevant literature on the application of PM and on international best practices in healthcare performance systems are also presented. The PM and performance measurement systems used by the HSE include many performance reports designed to monitor performance trends and strategic goals. Issues in the current PM system include inconsistency of measures and performance reporting, unclear strategy alignment, and deficiencies in reporting (e.g. feedback and corrective actions). Furthermore, PM processes have not been adequately linked into Irish public hospitals' management systems. The HSE delivers several other services, such as mental health and social inclusion; this study focuses on the HSE's PM framework, with a particular interest in acute hospitals and emergency services. This is the first comprehensive review of Irish healthcare PM since the introduction of the HSE. A critical analysis of the HSE reports identifies the shortcomings in its current PM system.

  20. Development of Plant Control Diagnosis Technology and Increasing Its Applications

    NASA Astrophysics Data System (ADS)

    Kugemoto, Hidekazu; Yoshimura, Satoshi; Hashizume, Satoru; Kageyama, Takashi; Yamamoto, Toru

    A plant control diagnosis technology was developed to improve the performance of plant-wide control and maintain high plant productivity. The control performance diagnosis system built around this technology picks out poorly performing loops, analyzes the causes, and outputs the results on a Web page. A PID tuning tool is then used to tune the loops extracted by the diagnosis system; it has the advantage of tuning safely, without changes to the process. Together, these systems are powerful tools for carrying out Kaizen (continuous improvement) step by step in coordination with operators. This paper describes a practical technique regarding the diagnosis system and its industrial applications.
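
    The screening step, picking out poorly performing loops from routine operating data, can be illustrated with a variance-based performance index of the kind commonly used in control performance assessment (e.g., a Harris-style minimum-variance benchmark). The sketch below is a generic illustration under assumed data and thresholds, not the authors' published method.

        import numpy as np

        def performance_index(error, benchmark_variance):
            """Ratio of a benchmark variance (e.g., minimum-variance control)
            to the actual error variance: near 1 is good, near 0 is poor."""
            actual_variance = float(np.var(error))
            return benchmark_variance / actual_variance if actual_variance > 0 else 1.0

        def flag_poor_loops(loop_errors, benchmark_variances, threshold=0.5):
            """Pick out loops whose index falls below the threshold, worst first."""
            flagged = {}
            for name, err in loop_errors.items():
                idx = performance_index(err, benchmark_variances[name])
                if idx < threshold:
                    flagged[name] = idx
            return dict(sorted(flagged.items(), key=lambda kv: kv[1]))

        # Example with simulated loop error data (illustrative only):
        rng = np.random.default_rng(0)
        loops = {"FC-101": rng.normal(0, 1.0, 1000),   # well-tuned loop
                 "TC-205": rng.normal(0, 3.0, 1000)}   # sluggish/oscillatory loop
        benchmarks = {"FC-101": 0.9, "TC-205": 0.9}
        print(flag_poor_loops(loops, benchmarks))      # flags only TC-205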
