Sample records for processing time requirements

  1. Evaluation of STAT medication ordering process in a community hospital.

    PubMed

    Abdelaziz, Hani; Richardson, Sandra; Walsh, Kim; Nodzon, Jessica; Schwartz, Barbara

    2016-01-01

    In most health care facilities, problems related to delays in STAT medication order processing time are of common concern. The purpose of this study was to evaluate processing time for STAT orders at Kimball Medical Center. All STAT orders were reviewed to determine processing time; order processing time was also stratified by physician order entry (physician entered (PE) orders vs. non-physician entered (NPE) orders). Collected data included medication ordered, indication, time ordered, time verified by pharmacist, time sent from pharmacy, and time charted as given to the patient. A total of 502 STAT orders were reviewed and 389 orders were included for analysis. Overall, the median time was 29 minutes (IQR 16-63; p<0.0001). The time needed to process NPE orders was significantly less than that needed for PE orders (median 27 vs. 34 minutes; p=0.026). For NPE orders, the median total time required to process STAT orders for medications available in the Automated Dispensing Devices (ADM) was within 30 minutes, while that required to process orders for medications not available in the ADM was significantly greater than 30 minutes. For PE orders, the median total time required to process orders for medications available in the ADM (i.e., not requiring pharmacy involvement) was significantly greater than 30 minutes [median time = 34 minutes (p<0.001)]. We conclude that STAT order processing time may be improved by increasing the availability of medications in the ADM and by pharmacy involvement in the verification process.

  2. Evaluation of STAT medication ordering process in a community hospital

    PubMed Central

    Walsh, Kim; Schwartz, Barbara

    Background: In most health care facilities, problems related to delays in STAT medication order processing time are of common concern. Objective: The purpose of this study was to evaluate processing time for STAT orders at Kimball Medical Center. Methods: All STAT orders were reviewed to determine processing time; order processing time was also stratified by physician order entry (physician entered (PE) orders vs. non-physician entered (NPE) orders). Collected data included medication ordered, indication, time ordered, time verified by pharmacist, time sent from pharmacy, and time charted as given to the patient. Results: A total of 502 STAT orders were reviewed and 389 orders were included for analysis. Overall, the median time was 29 minutes (IQR 16–63; p<0.0001). The time needed to process NPE orders was significantly less than that needed for PE orders (median 27 vs. 34 minutes; p=0.026). For NPE orders, the median total time required to process STAT orders for medications available in the Automated Dispensing Devices (ADM) was within 30 minutes, while that required to process orders for medications not available in the ADM was significantly greater than 30 minutes. For PE orders, the median total time required to process orders for medications available in the ADM (i.e., not requiring pharmacy involvement) was significantly greater than 30 minutes [median time = 34 minutes (p<0.001)]. Conclusion: We conclude that STAT order processing time may be improved by increasing the availability of medications in the ADM and by pharmacy involvement in the verification process. PMID:27382418

  3. SAR operational aspects

    NASA Astrophysics Data System (ADS)

    Holmdahl, P. E.; Ellis, A. B. E.; Moeller-Olsen, P.; Ringgaard, J. P.

    1981-12-01

    The basic requirements of the SAR ground segment of ERS-1 are discussed. A system configuration for the real time data acquisition station and the processing and archive facility is depicted. The functions of a typical SAR processing unit (SPU) are specified, and inputs required for near real time and full precision, deferred time processing are described. Inputs and the processing required for provision of these inputs to the SPU are dealt with. Data flow through the systems, and normal and non-normal operational sequences, are outlined. Prerequisites for maintaining overall performance are identified, emphasizing quality control. The most demanding tasks to be performed by the front end are defined in order to determine types of processors and peripherals which comply with throughput requirements.

  4. 31 CFR 357.29 - Time required for processing transaction request.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Book-Entry Securities System (Legacy Treasury Direct) § 357.29 Time required for processing transaction request. For purposes of a transaction request affecting payment instructions with respect to a security...

  5. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.

  6. 31 CFR 357.29 - Time required for processing transaction request.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 2 2013-07-01 2013-07-01 false Time required for processing transaction request. 357.29 Section 357.29 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC DEBT REGULATIONS...

  7. 31 CFR 357.29 - Time required for processing transaction request.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance:Treasury 2 2011-07-01 2011-07-01 false Time required for processing transaction request. 357.29 Section 357.29 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC DEBT REGULATIONS...

  8. 31 CFR 357.29 - Time required for processing transaction request.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 2 2012-07-01 2012-07-01 false Time required for processing transaction request. 357.29 Section 357.29 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC DEBT REGULATIONS...

  9. 31 CFR 357.29 - Time required for processing transaction request.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance: Treasury 2 2014-07-01 2014-07-01 false Time required for processing transaction request. 357.29 Section 357.29 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE FISCAL SERVICE REGULATIONS...

  10. Workflow and maintenance characteristics of five automated laboratory instruments for the diagnosis of sexually transmitted infections.

    PubMed

    Ratnam, Sam; Jang, Dan; Gilchrist, Jodi; Smieja, Marek; Poirier, Andre; Hatchette, Todd; Flandin, Jean-Frederic; Chernesky, Max

    2014-07-01

    The choice of a suitable automated system for a diagnostic laboratory depends on various factors. Comparative workflow studies provide quantifiable and objective metrics to determine hands-on time during specimen handling and processing, reagent preparation, return visits and maintenance, and test turnaround time and throughput. Using objective time study techniques, workflow characteristics for processing 96 and 192 tests were determined on m2000 RealTime (Abbott Molecular), Viper XTR (Becton Dickinson), cobas 4800 (Roche Molecular Diagnostics), Tigris (Hologic Gen-Probe), and Panther (Hologic Gen-Probe) platforms using second-generation assays for Chlamydia trachomatis and Neisseria gonorrhoeae. A combination of operational and maintenance steps requiring manual labor showed that Panther had the shortest overall hands-on times and Viper XTR the longest. Both Panther and Tigris showed greater efficiency whether 96 or 192 tests were processed. Viper XTR and Panther had the shortest times to results and m2000 RealTime the longest. Sample preparation and loading time was the shortest for Panther and longest for cobas 4800. Mandatory return visits were required only for m2000 RealTime and cobas 4800 when 96 tests were processed, and both required substantially more hands-on time than the other systems due to increased numbers of return visits when 192 tests were processed. These results show that there are substantial differences in the amount of labor required to operate each system. Assay performance, instrumentation, testing capacity, workflow, maintenance, and reagent costs should be considered in choosing a system. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  11. Space station needs, attributes, and architectural options study. Volume 1: Missions and requirements

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Science and applications, NOAA environmental observation, commercial resource observations, commercial space processing, commercial communications, national security, technology development, and GEO servicing are addressed. Approach to time phasing of mission requirements, system sizing summary, time-phased user mission payload support, space station facility requirements, and integrated time-phased system requirements are also addressed.

  12. Superior memory efficiency of quantum devices for the simulation of continuous-time stochastic processes

    NASA Astrophysics Data System (ADS)

    Elliott, Thomas J.; Gu, Mile

    2018-03-01

    Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous-time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine.

  13. Pre- and post-head processing for single- and double-scrambled sentences of a head-final language as measured by the eye tracking method.

    PubMed

    Tamaoka, Katsuo; Asano, Michiko; Miyaoka, Yayoi; Yokosawa, Kazuhiko

    2014-04-01

    Using the eye-tracking method, the present study depicted pre- and post-head processing for simple scrambled sentences of head-final languages. Three versions of simple Japanese active sentences with ditransitive verbs were used: namely, (1) SO₁O₂V canonical, (2) SO₂O₁V single-scrambled, and (3) O₁O₂SV double-scrambled order. First pass reading times indicated that the third noun phrase just before the verb in both single- and double-scrambled sentences required longer reading times compared to canonical sentences. Re-reading times (the sum of all fixations minus the first pass reading) showed that all noun phrases including the crucial phrase before the verb in double-scrambled sentences required longer re-reading times than those required for single-scrambled sentences; single-scrambled sentences had no difference from canonical ones. Therefore, a single filler-gap dependency can be resolved in pre-head anticipatory processing whereas two filler-gap dependencies require much greater cognitive loading than a single case. These two dependencies can be resolved in post-head processing using verb agreement information.

  14. The Cassini project: Lessons learned through operations

    NASA Astrophysics Data System (ADS)

    McCormick, Egan D.

    1998-01-01

    The Cassini space probe requires 180 238Pu Light-weight Radioisotopic Heater Units (LWRHU) and 216 238Pu General Purpose Heat Source (GPHS) pellets. Additional LWRHU and GPHS pellets required for non-destructive assay (NDA) and destructive assay purposes were fabricated, bringing the original pellet requirement to 224 LWRHU and 252 GPHS. Due to rejection of pellets resulting from chemical impurities in the fuel and/or failure to meet dimensional specifications, a total of 320 GPHS pellets were fabricated for the mission. Initial plans called for LANL to process a total of 30 kg of oxide powder for pressing into monolithic ceramic pellets. The original 30 kg commitment was processed within the time frame allotted; an additional 8 kg were required to replace fuel lost due to failure to meet Quality Assurance specifications for impurities and dimensions. During the time frame allotted for pellet production, operations were impacted by equipment failure, unacceptable fuel impurity levels, and periods of extended downtime (>30 working days) during which little or no processing occurred. Throughout the production process, the reality of operations requirements varied from the theory upon which production schedules were based.

  15. 21 CFR 113.81 - Product preparation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... HUMAN CONSUMPTION THERMALLY PROCESSED LOW-ACID FOODS PACKAGED IN HERMETICALLY SEALED CONTAINERS... are suitable for use in processing low-acid food. Compliance with this requirement may be accomplished... food to the required temperature, holding it at this temperature for the required time, and then either...

  16. 21 CFR 113.81 - Product preparation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... HUMAN CONSUMPTION THERMALLY PROCESSED LOW-ACID FOODS PACKAGED IN HERMETICALLY SEALED CONTAINERS... are suitable for use in processing low-acid food. Compliance with this requirement may be accomplished... food to the required temperature, holding it at this temperature for the required time, and then either...

  17. 21 CFR 113.81 - Product preparation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... HUMAN CONSUMPTION THERMALLY PROCESSED LOW-ACID FOODS PACKAGED IN HERMETICALLY SEALED CONTAINERS... are suitable for use in processing low-acid food. Compliance with this requirement may be accomplished... food to the required temperature, holding it at this temperature for the required time, and then either...

  18. 40 CFR 63.11940 - What continuous monitoring requirements must I meet for control devices required to install CPMS...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... consistent with the manufacturer's recommendations within 15 days or by the next time any process vent stream... the manufacturer's recommendations within 15 days or by the next time any process vent stream is...) Determine gas stream flow using the design blower capacity, with appropriate adjustments for pressure drop...

  19. Time process study with UML.

    PubMed

    Shiki, N; Ohno, Y; Fujii, A; Murata, T; Matsumura, Y

    2009-01-01

    We propose a new business-process analysis approach, Time Process Study (TPS), which comprises process analysis and time and motion studies (TMS). TPS offsets weaknesses of TMS: the cost of field studies and the difficulty of applying them to tasks whose time spans differ from those of usual tasks. In TPS, the job procedures are first displayed using a unified modeling language (UML). Next, time and manpower for each procedure are studied through interviews and TMS, and the information is appended to the UML diagram. We applied TPS to the hospital-based cancer registry (HCR) of a university hospital to clarify the work procedure and the time required, and investigated the applicability of TPS. Meetings for the study were held once a month from July to September in 2008, and one inquirer committed a total of eight hours to the hospital survey. TPS revealed that the HCR consisted of three tasks and 14 functions. Registration required 123 hours/month/person, quality control required 6.5 hours/6 months/person, and filing data into the population-based cancer registry required 0.5 hours/6 months/person. Of the total tasks involved in registration, 116.5 hours/month/person were undertaken by a registration worker, which shows the necessity of employing one full-time staff member. With TPS, it is straightforward to share the concept among the study team because the job procedure is first displayed using UML. Therefore, conducting the TMS and interviews requires only a modest workload. The obtained results were adopted for the review of staff assignment of the HCR by the Japanese government.

  20. Low-SWaP coincidence processing for Geiger-mode LIDAR video

    NASA Astrophysics Data System (ADS)

    Schultz, Steven E.; Cervino, Noel P.; Kurtz, Zachary D.; Brown, Myron Z.

    2015-05-01

    Photon-counting Geiger-mode lidar detector arrays provide a promising approach for producing three-dimensional (3D) video at full motion video (FMV) data rates, resolution, and image size from long ranges. However, coincidence processing required to filter raw photon counts is computationally expensive, generally requiring significant size, weight, and power (SWaP) and also time. In this paper, we describe a laboratory test-bed developed to assess the feasibility of low-SWaP, real-time processing for 3D FMV based on Geiger-mode lidar. First, we examine a design based on field programmable gate arrays (FPGA) and demonstrate proof-of-concept results. Then we examine a design based on a first-of-its-kind embedded graphical processing unit (GPU) and compare performance with the FPGA. Results indicate feasibility of real-time Geiger-mode lidar processing for 3D FMV and also suggest utility for real-time onboard processing for mapping lidar systems.
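
    As a rough illustration of the coincidence processing mentioned above, which keeps only photon returns that recur in the same spatial location, the sketch below applies a simple voxel-count threshold to a point cloud. It is illustrative only and is not the FPGA/GPU pipeline evaluated in the paper; the function name, voxel size, and threshold are assumptions.

    ```python
    # Minimal sketch of a generic coincidence filter for photon-counting lidar.
    # Illustrative only -- not the FPGA/GPU pipeline described in the paper.
    import numpy as np

    def coincidence_filter(detections, voxel_size, min_count=3):
        """detections: (N, 3) array of x, y, z photon returns (meters).
        Returns centers of voxels whose detection count reaches min_count."""
        if len(detections) == 0:
            return np.empty((0, 3))
        idx = np.floor(detections / voxel_size).astype(np.int64)      # voxel index per photon
        voxels, counts = np.unique(idx, axis=0, return_counts=True)   # histogram by voxel
        keep = voxels[counts >= min_count]                            # coincident returns only
        return (keep + 0.5) * voxel_size                              # voxel centers

    # Example: uniform noise plus a small cluster of "real" returns
    rng = np.random.default_rng(0)
    noise = rng.uniform(0, 100, size=(500, 3))
    target = rng.normal([50, 50, 50], 0.2, size=(20, 3))
    points = coincidence_filter(np.vstack([noise, target]), voxel_size=1.0)
    print(points.shape)
    ```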

  1. Simulation of Simple Controlled Processes with Dead-Time.

    ERIC Educational Resources Information Center

    Watson, Keith R.; And Others

    1985-01-01

    The determination of closed-loop response of processes containing dead-time is typically not covered in undergraduate process control, possibly because the solution by Laplace transforms requires the use of Pade approximation for dead-time, which makes the procedure lengthy and tedious. A computer-aided method is described which simplifies the…

  2. 9 CFR 318.23 - Heat-processing and stabilization requirements for uncured meat patties.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Heat-processing and stabilization...; REINSPECTION AND PREPARATION OF PRODUCTS General § 318.23 Heat-processing and stabilization requirements for... been heat processed for less time or using lower internal temperatures than are prescribed by paragraph...

  3. Modeling operators' emergency response time for chemical processing operations.

    PubMed

    Murray, Susan L; Harputlu, Emrah; Mentzer, Ray A; Mannan, M Sam

    2014-01-01

    Operators have a crucial role during emergencies at a variety of facilities such as chemical processing plants. When an abnormality occurs in the production process, the operator often has limited time to either take corrective actions or evacuate before the situation becomes deadly. It is crucial that system designers and safety professionals can estimate the time required for a response before procedures and facilities are designed and operations are initiated. There are existing industrial engineering techniques to establish time standards for tasks performed at a normal working pace. However, it is reasonable to expect the time required to take action in emergency situations will be different than working at a normal production pace. It is possible that in an emergency, operators will act faster compared to a normal pace. It would be useful for system designers to be able to establish a time range for operators' response times for emergency situations. This article develops a modeling approach to estimate the time standard range for operators taking corrective actions or following evacuation procedures in emergency situations. This will aid engineers and managers in establishing time requirements for operators in emergency situations. The methodology used for this study combines a well-established industrial engineering technique for determining time requirements (a predetermined time standard system) with adjustment coefficients for emergency situations developed by the authors. Numerous videos of workers performing well-established tasks at a maximum pace were studied. As an example, one of the tasks analyzed was pit crew workers changing tires as quickly as they could during a race. The operations in these videos were decomposed into basic, fundamental motions (such as walking, reaching for a tool, and bending over) by studying the videos frame by frame. A comparison analysis was then performed between the emergency-pace and the normal working-pace operations to determine performance coefficients. These coefficients represent the decrease in time required for various basic motions in emergency situations and were used to model an emergency response. This approach will make hazardous operations requiring operator response, alarm management, and evacuation processes easier to design and predict. An application of this methodology is included in the article. The time required for an emergency response was roughly one-third faster than the normal response time.
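
    As a rough illustration of the coefficient idea described above, the sketch below scales normal-pace predetermined motion times by emergency-pace performance coefficients and sums them over a task. All motion names and numbers are hypothetical placeholders, not values from the study.

    ```python
    # Illustrative sketch: scale normal-pace predetermined motion times by
    # emergency-pace performance coefficients. All values are hypothetical
    # placeholders, not figures from the study.
    normal_times_s = {"walk_5m": 4.0, "reach_tool": 1.2, "bend_over": 1.5, "turn_valve": 3.0}
    emergency_coeff = {"walk_5m": 0.6, "reach_tool": 0.8, "bend_over": 0.7, "turn_valve": 0.75}

    def emergency_response_time(motions):
        """Sum the coefficient-adjusted times for an ordered list of basic motions."""
        return sum(normal_times_s[m] * emergency_coeff[m] for m in motions)

    task = ["walk_5m", "bend_over", "reach_tool", "turn_valve"]
    normal = sum(normal_times_s[m] for m in task)
    emergency = emergency_response_time(task)
    print(f"normal pace: {normal:.1f} s, emergency pace: {emergency:.1f} s")
    ```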

  4. Documentation of a restart option for the U.S. Geological Survey coupled Groundwater and Surface-Water Flow (GSFLOW) model

    USGS Publications Warehouse

    Regan, R. Steve; Niswonger, Richard G.; Markstrom, Steven L.; Barlow, Paul M.

    2015-10-02

    The spin-up simulation should be run for a sufficient length of time necessary to establish antecedent conditions throughout a model domain. Each GSFLOW application can require different lengths of time to account for the hydrologic stresses to propagate through a coupled groundwater and surface-water system. Typically, groundwater hydrologic processes require many years to come into equilibrium with dynamic climate and other forcing (or stress) data, such as precipitation and well pumping, whereas runoff-dominated surface-water processes respond relatively quickly. Use of a spin-up simulation can substantially reduce execution-time requirements for applications where the time period of interest is small compared to the time for hydrologic memory; thus, use of the restart option can be an efficient strategy for forecast and calibration simulations that require multiple simulations starting from the same day.

  5. Integrated Analysis Tools for Determination of Structural Integrity and Durability of High temperature Polymer Matrix Composites

    DTIC Science & Technology

    2008-08-18

    fidelity will be used to reduce the massive experimental testing and associated time required for qualification of new materials. Tools and ... developing a model of the thermo-oxidative process for polymer systems that incorporates the effects of reaction rates, Fickian diffusion, and time-varying ... degradation processes.

  6. The Relationship between Processing and Storage in Working Memory Span: Not Two Sides of the Same Coin

    ERIC Educational Resources Information Center

    Maehara, Yukio; Saito, Satoru

    2007-01-01

    In working memory (WM) span tests, participants maintain memory items while performing processing tasks. In this study, we examined the impact of task processing requirements on memory-storage activities, looking at the stimulus order effect and the impact of storage requirements on processing activities, testing the processing time effect in WM…

  7. Choosing a software design method for real-time Ada applications: JSD process inversion as a means to tailor a design specification to the performance requirements and target machine

    NASA Technical Reports Server (NTRS)

    Withey, James V.

    1986-01-01

    The validity of real-time software is determined by its ability to execute on a computer within the time constraints of the physical system it is modeling. In many applications the time constraints are so critical that the details of process scheduling are elevated to the requirements analysis phase of the software development cycle. It is not uncommon to find specifications for a real-time cyclic executive program included in or assumed by such requirements. It was found that preliminary designs structured around this implementation obscure the data flow of the real-world system being modeled, and that it is consequently difficult and costly to maintain, update, and reuse the resulting software. A cyclic executive is a software component that schedules and implicitly synchronizes the real-time software through periodic and repetitive subroutine calls. Therefore a design method is sought that allows the deferral of process scheduling to the later stages of design. The appropriate scheduling paradigm must be chosen given the performance constraints, the target environment, and the software's lifecycle. The concept of process inversion is explored with respect to the cyclic executive.
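
    For readers unfamiliar with the construct, the sketch below is a minimal illustration of a cyclic executive as characterized above: a fixed minor frame repeatedly invokes task subroutines in a predetermined order, which implicitly synchronizes them. The frame length and task set are assumptions for illustration, not details from the paper.

    ```python
    # Minimal sketch of a cyclic executive: a fixed minor frame repeatedly invokes
    # task subroutines in a predetermined order, giving implicit synchronization.
    # Frame length and task set are illustrative only.
    import time

    def read_sensors():      pass   # placeholder task bodies
    def update_state():      pass
    def command_actuators(): pass

    MINOR_FRAME_S = 0.020           # 50 Hz minor frame (assumed)
    SCHEDULE = [read_sensors, update_state, command_actuators]

    def cyclic_executive(n_frames):
        for _ in range(n_frames):
            start = time.monotonic()
            for task in SCHEDULE:   # fixed call order = implicit synchronization
                task()
            # sleep out the remainder of the frame to keep the period constant
            time.sleep(max(0.0, MINOR_FRAME_S - (time.monotonic() - start)))

    cyclic_executive(5)
    ```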

  8. Quality Service Analysis and Improvement of Pharmacy Unit of XYZ Hospital Using Value Stream Analysis Methodology

    NASA Astrophysics Data System (ADS)

    Jonny; Nasution, Januar

    2013-06-01

    Value stream mapping is a tool that lets the business leader of XYZ Hospital see what is actually happening in the business process that has caused long lead times for self-produced medicines in its pharmacy unit. This problem has triggered many complaints filed by patients. After deploying this tool, the team found that, in processing the medicines, the pharmacy unit lacks storage and a capsule-packing tool, and this condition has caused much wasted time in the process. Therefore, the team proposed that the business leader procure the required tools in order to shorten the process. This research has shortened the lead time from 45 minutes to 30 minutes, as required by the government through the Indonesian health ministry, and increased the %VA (value-added activity), or Process Cycle Efficiency (PCE), from 66% to 68% (considered lean because it is above the required 30%). This result shows that process effectiveness has been increased by the improvement.
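
    A quick arithmetic check of the Process Cycle Efficiency figures quoted above (PCE = value-added time / total lead time): the 45-to-30 minute lead times and 66%-to-68% PCE values come from the abstract, while the implied value-added minutes are back-calculated here for illustration.

    ```python
    # Process Cycle Efficiency: PCE = value-added time / total lead time.
    # Lead times and PCE percentages are from the abstract; the value-added
    # minutes are back-calculated for illustration.
    def pce(value_added_min, lead_time_min):
        return value_added_min / lead_time_min

    before_va = 0.66 * 45   # ~29.7 value-added minutes implied before improvement
    after_va = 0.68 * 30    # ~20.4 value-added minutes implied after improvement
    print(f"before: PCE = {pce(before_va, 45):.0%}; after: PCE = {pce(after_va, 30):.0%}")
    ```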

  9. Practical, Real-Time, and Robust Watermarking on the Spatial Domain for High-Definition Video Contents

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Su; Lee, Hae-Yeoun; Im, Dong-Hyuck; Lee, Heung-Kyu

    Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. A DRM system uses watermarking to provide copyright protection and ownership authentication of multimedia contents. We propose a real-time video watermarking scheme for HD video in the uncompressed domain. In particular, our approach takes a practical perspective to satisfy perceptual quality, real-time processing, and robustness requirements. We simplify and optimize the human visual system mask for real-time performance and also apply a dithering technique for invisibility. Extensive experiments are performed to prove that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate on video processing attacks that commonly occur when HD-quality videos are displayed on portable devices. These attacks include not only scaling and low bit-rate encoding, but also malicious attacks such as format conversion and frame rate change.

  10. Optical fiber repeatered transmission systems utilizing SAW filters

    NASA Astrophysics Data System (ADS)

    Rosenberg, R. L.; Ross, D. G.; Trischitta, P. R.; Fishman, D. A.; Armitage, C. B.

    1983-05-01

    Baseband digital transmission-line systems capable of signaling rates of several hundred to several thousand Mbit/s are presently being developed around the world. The pulse regeneration process is gated by a timing wave which is synchronous with the symbol rate of the arriving pulse stream. Synchronization is achieved by extracting a timing wave from the arriving pulse stream, itself. To date, surface acoustic-wave (SAW) filters have been widely adopted for timing recovery in the in-line regenerators of high-bit-rate systems. The present investigation has the objective to acquaint the SAW community in general, and SAW filter suppliers in particular, with the requirements for timing recovery filters in repeatered digital transmission systems. Attention is given to the system structure, the timing loop function, the system requirements affecting the timing-recovery filter, the decision process, timing jitter accumulation, the filter 'ringing' requirement, and aspects of reliability.

  11. Mass production of silicon pore optics for ATHENA

    NASA Astrophysics Data System (ADS)

    Wille, Eric; Bavdaz, Marcos; Collon, Maximilien

    2016-07-01

    Silicon Pore Optics (SPO) provide high angular resolution with low effective area density as required for the Advanced Telescope for High Energy Astrophysics (Athena). The X-ray telescope consists of several hundred SPO mirror modules. During the development of the process steps of the SPO technology, specific requirements of a future mass production have been considered right from the beginning. The manufacturing methods heavily utilise off-the-shelf equipment from the semiconductor industry, robotic automation, and parallel processing. This allows the present production flow to be scaled up in a cost-effective way to produce hundreds of mirror modules per year. Considering manufacturing predictions based on the current technology status, we present an analysis of the time and resources required for the Athena flight programme. This includes the full production process, starting with Si wafers up to the integration of the mirror modules. We present the times required for the individual process steps and identify the equipment required to produce two mirror modules per day. A preliminary timeline for building and commissioning the required infrastructure, and for flight model production of about 1000 mirror modules, is presented.
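
    A back-of-envelope duration estimate follows from the figures quoted above (about 1000 mirror modules at a rate of two modules per day); the working-days-per-year value is an assumption, not a number from the paper.

    ```python
    # Back-of-envelope production-duration estimate from the abstract's figures:
    # ~1000 mirror modules at 2 modules/day. Working days per year is an assumption.
    modules_total = 1000
    modules_per_day = 2
    working_days_per_year = 230        # assumed, not stated in the abstract

    production_days = modules_total / modules_per_day
    print(f"{production_days:.0f} production days "
          f"(~{production_days / working_days_per_year:.1f} working years)")
    ```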

  12. Parallel Processing Systems for Passive Ranging During Helicopter Flight

    NASA Technical Reports Server (NTRS)

    Sridhar, Bavavar; Suorsa, Raymond E.; Showman, Robert D. (Technical Monitor)

    1994-01-01

    The complexity of rotorcraft missions involving operations close to the ground results in high pilot workload. In order to allow a pilot time to perform mission-oriented tasks, sensor-aiding and automation of some of the guidance and control functions are highly desirable. Images from an electro-optical sensor provide a covert way of detecting objects in the flight path of a low-flying helicopter. Passive ranging consists of processing a sequence of images using techniques based on optical flow computation and recursive estimation. The passive ranging algorithm has to extract obstacle information from imagery at rates varying from five to thirty or more frames per second depending on the helicopter speed. We have implemented and tested the passive ranging algorithm off-line using helicopter-collected images. However, the real-time data and computation requirements of the algorithm are beyond the capability of any off-the-shelf microprocessor or digital signal processor. This paper describes the computational requirements of the algorithm and uses parallel processing technology to meet these requirements. Various issues in the selection of a parallel processing architecture are discussed and four different computer architectures are evaluated regarding their suitability to process the algorithm in real time. Based on this evaluation, we conclude that real-time passive ranging is a realistic goal and can be achieved within a short time.

  13. Problematics of Time and Timing in the Longitudinal Study of Human Development: Theoretical and Methodological Issues

    PubMed Central

    Lerner, Richard M.; Schwartz, Seth J; Phelps, Erin

    2009-01-01

    Studying human development involves describing, explaining, and optimizing intraindividual change and interindividual differences in such change and, as such, requires longitudinal research. The selection of the appropriate type of longitudinal design requires selecting the option that best addresses the theoretical questions asked about developmental process and the use of appropriate statistical procedures to best exploit data derived from theory-predicated longitudinal research. This paper focuses on several interrelated problematics involving the treatment of time and the timing of observations that developmental scientists face in creating theory-design fit and in charting in change-sensitive ways developmental processes across life. We discuss ways in which these problematics may be addressed to advance theory-predicated understanding of the role of time in processes of individual development. PMID:19554215

  14. Incubation: A Neglected Aspect of the Writing Process.

    ERIC Educational Resources Information Center

    Krashen, Stephen

    2001-01-01

    Discusses the role of incubation in the writing process. Suggests that one secret to coming up with good ideas in writing is understanding the importance of realizing that the process entails patient revision, takes time, and often requires some time off task. (Author/VWL)

  15. An Approach for Integrating the Prioritization of Functional and Nonfunctional Requirements

    PubMed Central

    Dabbagh, Mohammad; Lee, Sai Peck

    2014-01-01

    Due to the budgetary deadlines and time to market constraints, it is essential to prioritize software requirements. The outcome of requirements prioritization is an ordering of requirements which need to be considered first during the software development process. To achieve a high quality software system, both functional and nonfunctional requirements must be taken into consideration during the prioritization process. Although several requirements prioritization methods have been proposed so far, no particular method or approach is presented to consider both functional and nonfunctional requirements during the prioritization stage. In this paper, we propose an approach which aims to integrate the process of prioritizing functional and nonfunctional requirements. The outcome of applying the proposed approach produces two separate prioritized lists of functional and non-functional requirements. The effectiveness of the proposed approach has been evaluated through an empirical experiment aimed at comparing the approach with the two state-of-the-art-based approaches, analytic hierarchy process (AHP) and hybrid assessment method (HAM). Results show that our proposed approach outperforms AHP and HAM in terms of actual time-consumption while preserving the quality of the results obtained by our proposed approach at a high level of agreement in comparison with the results produced by the other two approaches. PMID:24982987

  16. An approach for integrating the prioritization of functional and nonfunctional requirements.

    PubMed

    Dabbagh, Mohammad; Lee, Sai Peck

    2014-01-01

    Due to the budgetary deadlines and time to market constraints, it is essential to prioritize software requirements. The outcome of requirements prioritization is an ordering of requirements which need to be considered first during the software development process. To achieve a high quality software system, both functional and nonfunctional requirements must be taken into consideration during the prioritization process. Although several requirements prioritization methods have been proposed so far, no particular method or approach is presented to consider both functional and nonfunctional requirements during the prioritization stage. In this paper, we propose an approach which aims to integrate the process of prioritizing functional and nonfunctional requirements. The outcome of applying the proposed approach produces two separate prioritized lists of functional and non-functional requirements. The effectiveness of the proposed approach has been evaluated through an empirical experiment aimed at comparing the approach with the two state-of-the-art-based approaches, analytic hierarchy process (AHP) and hybrid assessment method (HAM). Results show that our proposed approach outperforms AHP and HAM in terms of actual time-consumption while preserving the quality of the results obtained by our proposed approach at a high level of agreement in comparison with the results produced by the other two approaches.
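
    Since AHP is one of the baseline methods the proposed approach is compared against, the sketch below shows the standard AHP priority-weight step (normalize the columns of a pairwise comparison matrix, then average across rows) on an illustrative 3x3 matrix. The matrix values are assumptions, and the paper's own prioritization procedure is not reproduced here.

    ```python
    # Minimal sketch of the standard AHP weighting step used as a comparison
    # baseline in the paper: normalize each column of the pairwise comparison
    # matrix, then average across rows. The 3x3 matrix is illustrative only.
    import numpy as np

    pairwise = np.array([
        [1.0, 3.0, 5.0],      # requirement A vs A, B, C (Saaty 1-9 scale)
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ])

    col_normalized = pairwise / pairwise.sum(axis=0)   # each column sums to 1
    weights = col_normalized.mean(axis=1)              # approximate principal eigenvector
    print(dict(zip("ABC", weights.round(3))))          # priority weights, sum to 1
    ```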

  17. Enabling model checking for collaborative process analysis: from BPMN to `Network of Timed Automata'

    NASA Astrophysics Data System (ADS)

    Mallek, Sihem; Daclin, Nicolas; Chapurlat, Vincent; Vallespir, Bruno

    2015-04-01

    Interoperability is a prerequisite for partners involved in performing collaboration. As a consequence, the lack of interoperability is now considered a major obstacle. The research work presented in this paper aims to develop an approach that allows specifying and verifying a set of interoperability requirements to be satisfied by each partner in the collaborative process prior to process implementation. To enable the verification of these interoperability requirements, it is necessary first and foremost to generate a model of the targeted collaborative process; for this research effort, the standardised language BPMN 2.0 is used. Afterwards, a verification technique must be introduced, and model checking is the preferred option herein. This paper focuses on application of the model checker UPPAAL in order to verify interoperability requirements for the given collaborative process model. At first, this step entails translating the collaborative process model from BPMN into a UPPAAL modelling language called 'Network of Timed Automata'. Second, it becomes necessary to formalise interoperability requirements into properties with the dedicated UPPAAL language, i.e. the temporal logic TCTL.

  18. Lossless data compression for improving the performance of a GPU-based beamformer.

    PubMed

    Lok, U-Wai; Fan, Gang-Wei; Li, Pai-Chi

    2015-04-01

    The powerful parallel computation ability of a graphics processing unit (GPU) makes it feasible to perform dynamic receive beamforming. However, a real-time GPU-based beamformer requires a high data rate to transfer radio-frequency (RF) data from hardware to software memory, as well as from central processing unit (CPU) to GPU memory. There are data compression methods (e.g., Joint Photographic Experts Group (JPEG)) available for the hardware front end to reduce data size, alleviating the data transfer requirement of the hardware interface. Nevertheless, the required decoding time may even be larger than the transmission time of the original data, in turn degrading the overall performance of the GPU-based beamformer. This article proposes and implements a lossless compression-decompression algorithm, which enables compression and decompression of data in parallel. By this means, the data transfer requirement of the hardware interface and the transmission time of CPU to GPU data transfers are reduced, without sacrificing image quality. In simulation results, the compression ratio reached around 1.7. The encoder design of our lossless compression approach requires low hardware resources and reasonable latency in a field programmable gate array. In addition, the transmission time for transferring data from CPU to GPU with the parallel decoding process improved threefold, compared with transferring the original uncompressed data. These results show that our proposed lossless compression plus parallel decoder approach not only mitigates the transmission bandwidth requirement for transferring data from the hardware front end to the software system but also reduces the transmission time for CPU-to-GPU data transfer. © The Author(s) 2014.

  19. An Ada implementation of the network manager for the advanced information processing system

    NASA Technical Reports Server (NTRS)

    Nagle, Gail A.

    1986-01-01

    From an implementation standpoint, the Ada language provided many features which facilitated the data and procedure abstraction process. The language supported a design which was dynamically flexible (despite strong typing), modular, and self-documenting. Adequate training of programmers requires access to an efficient compiler which supports full Ada. When the performance issues for real time processing are finally addressed by more stringent requirements for tasking features and the development of efficient run-time environments for embedded systems, the full power of the language will be realized.

  20. Analyzing Discrepancies in a Software Development Project Change Request (CR) Assessment Process and Recommendations for Process Improvements

    NASA Technical Reports Server (NTRS)

    Cunningham, Kenneth James

    2003-01-01

    The Change Request (CR) assessment process is essential in the display development cycle. The assessment process is performed to ensure that the changes stated in the description of the CR match the changes in the actual display requirements. If a discrepancy is found between the CR and the requirements, the CR must be returned to the originator for corrections. Data was gathered from each of the developers to determine the type of discrepancies and the amount of time spent assessing each CR. This study sought to determine the most common types of discrepancies and the amount of time required to assess those issues. The study found that even though removing a discrepancy before an assessment would save half the time needed to assess a CR with a discrepancy, the number of CRs found to have a discrepancy was very small compared to the total number of CRs assessed during the data-gathering period.

  1. Time required for institutional review board review at one Veterans Affairs medical center.

    PubMed

    Hall, Daniel E; Hanusa, Barbara H; Stone, Roslyn A; Ling, Bruce S; Arnold, Robert M

    2015-02-01

    Despite growing concern that institutional review boards (IRBs) impose burdensome delays on research, little is known about the time required for IRB review across different types of research. To measure the overall and incremental process times for IRB review as a process of quality improvement. After developing a detailed process flowchart of the IRB review process, 2 analysts abstracted temporal data from the records pertaining to all 103 protocols newly submitted to the IRB at a large urban Veterans Affairs medical center from June 1, 2009, through May 31, 2011. Disagreements were reviewed with the principal investigator to reach consensus. We then compared the review times across review types using analysis of variance and post hoc Scheffé tests after achieving normally distributed data through logarithmic transformation. Calendar days from initial submission to final approval of research protocols. Initial IRB review took 2 to 4 months, with expedited and exempt reviews requiring less time (median [range], 85 [23-631] and 82 [16-437] days, respectively) than full board reviews (median [range], 131 [64-296] days; P = .008). The median time required for credentialing of investigators was 1 day (range, 0-74 days), and review by the research and development committee took a median of 15 days (range, 0-184 days). There were no significant differences in credentialing or research and development times across review types (exempt, expedited, or full board). Of the extreme delays in IRB review, 80.0% were due to investigators' slow responses to requested changes. There were no systematic delays attributable to the information security officer, privacy officer, or IRB chair. Measuring and analyzing review times is a critical first step in establishing a culture and process of continuous quality improvement among IRBs that govern research programs. The review times observed at this IRB are substantially longer than the 60-day target recommended by expert panels. The method described here could be applied to other IRBs to begin identifying and improving inefficiencies.

  2. A Theory of Information Quality and a Framework for its Implementation in the Requirements Engineering Process

    NASA Astrophysics Data System (ADS)

    Grenn, Michael W.

    This dissertation introduces a theory of information quality to explain macroscopic behavior observed in the systems engineering process. The theory extends principles of Shannon's mathematical theory of communication [1948] and statistical mechanics to information development processes concerned with the flow, transformation, and meaning of information. The meaning of requirements information in the systems engineering context is estimated or measured in terms of the cumulative requirements quality Q, which corresponds to the distribution of the requirements among the available quality levels. The requirements entropy framework (REF) implements the theory to address the requirements engineering problem. The REF defines the relationship between requirements changes, requirements volatility, requirements quality, requirements entropy and uncertainty, and engineering effort. The REF is evaluated via simulation experiments to assess its practical utility as a new method for measuring, monitoring and predicting requirements trends and engineering effort at any given time in the process. The REF treats the requirements engineering process as an open system in which the requirements are discrete information entities that transition from initial states of high entropy, disorder and uncertainty toward the desired state of minimum entropy as engineering effort is input and requirements increase in quality. The distribution of the total number of requirements R among the N discrete quality levels is determined by the number of defined quality attributes accumulated by R at any given time. Quantum statistics are used to estimate the number of possibilities P for arranging R among the available quality levels. The requirements entropy HR is estimated using R, N and P by extending principles of information theory and statistical mechanics to the requirements engineering process. The information I increases as HR and uncertainty decrease, and the change in information ΔI needed to reach the desired state of quality is estimated from the perspective of the receiver. The HR may increase, decrease or remain steady depending on the degree to which additions, deletions and revisions impact the distribution of R among the quality levels. Current requirements trend metrics generally treat additions, deletions and revisions the same and simply measure the quantity of these changes over time. The REF evaluates the quantity of requirements changes over time, distinguishes between their positive and negative effects by calculating their impact on HR, Q, and ΔI, and forecasts when the desired state will be reached, enabling more accurate assessment of the status and progress of the requirements engineering effort. Results from random variable simulations suggest the REF is an improved leading indicator of requirements trends that can be readily combined with current methods. The increase in I, or decrease in HR and uncertainty, is proportional to the engineering effort E input into the requirements engineering process. The REF estimates the ΔE needed to transition R from their current state of quality to the desired end state or some other interim state of interest. Simulation results are compared with measured engineering effort data for Department of Defense programs published in the SE literature, and the results suggest the REF is a promising new method for estimation of ΔE.
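
    The abstract does not give the exact definition of HR, but a Shannon-style form consistent with the description (R requirements distributed over N quality levels, with the information gain ΔI measured toward the desired minimum-entropy state) would be the following; this is an illustrative reconstruction, not the dissertation's formula.

    ```latex
    % Illustrative Shannon-style reconstruction; the dissertation's exact
    % definitions of H_R and \Delta I may differ.
    H_R = -\sum_{i=1}^{N} p_i \log_2 p_i,
    \qquad p_i = \frac{r_i}{R}, \quad \sum_{i=1}^{N} r_i = R,
    \qquad \Delta I = H_R^{\mathrm{current}} - H_R^{\mathrm{desired}}
    ```

    Here r_i denotes the number of requirements currently at quality level i.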

  3. A Model of Batch Scheduling for a Single Batch Processor with Additional Setups to Minimize Total Inventory Holding Cost of Parts of a Single Item Requested at Multi-due-date

    NASA Astrophysics Data System (ADS)

    Hakim Halim, Abdul; Ernawati; Hidayat, Nita P. A.

    2018-03-01

    This paper deals with a model of batch scheduling for a single batch processor on which a number of parts of a single item are to be processed. The process needs two kinds of setups, i.e., main setups required before processing any batches, and additional setups required repeatedly after the batch processor completes a certain number of batches. The parts to be processed arrive at the shop floor at times coinciding with their respective starting times of processing, and the completed parts are to be delivered at multiple due dates. The objective adopted for the model is that of minimizing the total inventory holding cost, consisting of the holding cost per unit time for a part in completed batches and that for a part in in-process batches. The formulation of the total inventory holding cost is derived from the so-called actual flow time, defined as the interval between the arrival times of parts at the production line and the delivery times of the completed parts. The actual flow time satisfies not only minimum inventory but also just-in-time arrival and delivery. An algorithm to solve the model is proposed and a numerical example is shown.
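
    As an illustrative formalization of the objective described above (not the paper's exact notation): for a batch b containing n_b parts with arrival time a_b, completion time c_b, delivery due date d_b, and in-process and completed-part holding costs h_p and h_c per part per unit time, the actual flow time and total holding cost can be written as follows.

    ```latex
    % Illustrative formalization; the paper's notation and exact objective may differ.
    F_b^{\mathrm{act}} = d_b - a_b,
    \qquad
    \min \; Z = \sum_{b} n_b \Bigl[ h_p\,(c_b - a_b) + h_c\,(d_b - c_b) \Bigr]
    ```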

  4. DART system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.

    2005-08-01

    The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the "once-through" time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the "once-through" time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase "inner loop" iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.
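
    The trade-off among once-through time, iteration probability, and rework fraction can be illustrated with a simple expected-duration model; this is an assumption for illustration, not the DART team's actual process model. With once-through time t, backward-iteration probability p, and rework fraction r (the effort of a repeated pass relative to the first pass):

    ```latex
    % Illustrative expected-duration model, not the DART team's process model.
    \mathbb{E}[T] = t\left(1 + r\,\frac{p}{1-p}\right)
    ```

    Reducing the once-through time t, the iteration probability p, or the rework fraction r each shortens the expected step duration.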

  5. Automation and integration of components for generalized semantic markup of electronic medical texts.

    PubMed

    Dugan, J M; Berrios, D C; Liu, X; Kim, D K; Kaizer, H; Fagan, L M

    1999-01-01

    Our group has built an information retrieval system based on a complex semantic markup of medical textbooks. We describe the construction of a set of web-based knowledge-acquisition tools that expedites the collection and maintenance of the concepts required for text markup and the search interface required for information retrieval from the marked text. In the text markup system, domain experts (DEs) identify sections of text that contain one or more elements from a finite set of concepts. End users can then query the text using a predefined set of questions, each of which identifies a subset of complementary concepts. The search process matches that subset of concepts to relevant points in the text. The current process requires that the DE invest significant time to generate the required concepts and questions. We propose a new system--called ACQUIRE (Acquisition of Concepts and Queries in an Integrated Retrieval Environment)--that assists a DE in two essential tasks in the text-markup process. First, it helps her to develop, edit, and maintain the concept model: the set of concepts with which she marks the text. Second, ACQUIRE helps her to develop a query model: the set of specific questions that end users can later use to search the marked text. The DE incorporates concepts from the concept model when she creates the questions in the query model. The major benefit of the ACQUIRE system is a reduction in the time and effort required for the text-markup process. We compared the process of concept- and query-model creation using ACQUIRE to the process used in previous work by rebuilding two existing models that we previously constructed manually. We observed a significant decrease in the time required to build and maintain the concept and query models.

  6. Consolidation of lunar regolith: Microwave versus direct solar heating

    NASA Technical Reports Server (NTRS)

    Kunitzer, J.; Strenski, D. G.; Yankee, S. J.; Pletka, B. J.

    1991-01-01

    The production of construction materials on the lunar surface will require an appropriate fabrication technique. Two processing methods considered suitable for producing dense, consolidated products such as bricks are direct solar heating and microwave heating. An analysis was performed to compare the two processes in terms of the amount of power and time required to fabricate bricks of various sizes. The regolith was considered to be a mare basalt with an overall density of 60 percent of theoretical. Densification was assumed to take place by vitrification since this process requires moderate amounts of energy and time while still producing dense products. Microwave heating was shown to be significantly faster than solar furnace heating for rapid production of realistic-size bricks.

  7. Ramp Technology and Intelligent Processing in Small Manufacturing

    NASA Technical Reports Server (NTRS)

    Rentz, Richard E.

    1992-01-01

    To address the issues of excessive inventories and increasing procurement lead times, the Navy is actively pursuing flexible computer integrated manufacturing (FCIM) technologies, integrated by communication networks to respond rapidly to its requirements for parts. The Rapid Acquisition of Manufactured Parts (RAMP) program, initiated in 1986, is an integral part of this effort. The RAMP program's goal is to reduce the current average production lead times experienced by the Navy's inventory control points by a factor of 90 percent. The manufacturing engineering component of the RAMP architecture utilizes an intelligent processing technology built around a knowledge-based shell provided by ICAD, Inc. Rules and data bases in the software simulate an expert manufacturing planner's knowledge of shop processes and equipment. This expert system can use Product Data Exchange using STEP (PDES) data to determine what features the required part has, what material is required to manufacture it, what machines and tools are needed, and how the part should be held (fixtured) for machining, among other factors. The program's rule base then indicates, for example, how to make each feature, in what order to make it, and to which machines on the shop floor the part should be routed for processing. This information becomes part of the shop work order. The process planning function under RAMP greatly reduces the time and effort required to complete a process plan. Since the PDES file that drives the intelligent processing is 100 percent complete and accurate to start with, the potential for costly errors is greatly diminished.

  8. Ramp technology and intelligent processing in small manufacturing

    NASA Astrophysics Data System (ADS)

    Rentz, Richard E.

    1992-04-01

    To address the issues of excessive inventories and increasing procurement lead times, the Navy is actively pursuing flexible computer integrated manufacturing (FCIM) technologies, integrated by communication networks to respond rapidly to its requirements for parts. The Rapid Acquisition of Manufactured Parts (RAMP) program, initiated in 1986, is an integral part of this effort. The RAMP program's goal is to reduce the current average production lead times experienced by the Navy's inventory control points by a factor of 90 percent. The manufacturing engineering component of the RAMP architecture utilizes an intelligent processing technology built around a knowledge-based shell provided by ICAD, Inc. Rules and data bases in the software simulate an expert manufacturing planner's knowledge of shop processes and equipment. This expert system can use Product Data Exchange using STEP (PDES) data to determine what features the required part has, what material is required to manufacture it, what machines and tools are needed, and how the part should be held (fixtured) for machining, among other factors. The program's rule base then indicates, for example, how to make each feature, in what order to make it, and to which machines on the shop floor the part should be routed for processing. This information becomes part of the shop work order. The process planning function under RAMP greatly reduces the time and effort required to complete a process plan. Since the PDES file that drives the intelligent processing is 100 percent complete and accurate to start with, the potential for costly errors is greatly diminished.

  9. 40 CFR 98.266 - Data reporting requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... wet-process phosphoric acid process lines. (8) Number of times missing data procedures were used to... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Data reporting requirements. 98.266... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Phosphoric Acid Production § 98.266 Data reporting...

  10. Industrial Photogrammetry - Accepted Metrology Tool or Exotic Niche

    NASA Astrophysics Data System (ADS)

    Bösemann, Werner

    2016-06-01

    New production technologies like 3D printing and other additive manufacturing technologies have changed the industrial manufacturing process, a shift often referred to as the next industrial revolution or, for short, Industry 4.0. Such Cyber Physical Production Systems combine the virtual and real worlds through digitization, model building, process simulation and optimization. It is commonly understood that measurement technologies are the key to combining the real and virtual worlds (e.g. [Schmitt 2014]). This change from measurement as a quality-control tool to a fully integrated step in the production process has also changed the requirements for 3D metrology solutions. Key words like MAA (Measurement Assisted Assembly) illustrate this new position of metrology in the industrial production process. At the same time it is obvious that these processes not only require more measurements but also systems that deliver the required information at high density in a short time. Here optical solutions, including photogrammetry for 3D measurements, have big advantages over traditional mechanical CMMs. The paper describes the relevance of different photogrammetric solutions, including the state of the art, industry requirements and application examples.

  11. Developing Software Requirements for a Knowledge Management System That Coordinates Training Programs with Business Processes and Policies in Large Organizations

    ERIC Educational Resources Information Center

    Kiper, J. Richard

    2013-01-01

    For large organizations, updating instructional programs presents a challenge to keep abreast of constantly changing business processes and policies. Each time a process or policy changes, significant resources are required to locate and modify the training materials that convey the new content. Moreover, without the ability to track learning…

  12. LABORATORY PROCESS CONTROLLER USING NATURAL LANGUAGE COMMANDS FROM A PERSONAL COMPUTER

    NASA Technical Reports Server (NTRS)

    Will, H.

    1994-01-01

    The complex environment of the typical research laboratory requires flexible process control. This program provides natural language process control from an IBM PC or compatible machine. Process control schedules sometimes require frequent changes, even several times per day. These changes may include adding, deleting, and rearranging steps in a process. This program sets up a process control system that can either run without an operator, or be run by workers with limited programming skills. The software system includes three programs. Two of the programs, written in FORTRAN77, record data and control research processes. The third program, written in Pascal, generates the FORTRAN subroutines used by the other two programs to identify the user commands with the user-written device drivers. The software system also includes an input data set which allows the user to define the user commands which are to be executed by the computer. To set the system up, the operator writes device driver routines for all of the controlled devices. Once set up, this system requires only an input file containing natural language command lines which tell the system what to do and when to do it. Custom commands for operating and taking data from external research equipment can be executed at any time of the day or night without an operator in attendance. This process control system requires a personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. The program requires a FORTRAN77 compiler and user-written device drivers. This program was developed in 1989 and has a memory requirement of about 62 Kbytes.
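    A minimal sketch of this style of command dispatch is shown below: an input list of natural-language command lines is matched against user-defined commands, each bound to a device-driver routine. The command vocabulary and drivers are hypothetical, and the original FORTRAN77/Pascal implementation is not reproduced here.

```python
# Minimal sketch of natural-language-style command dispatch. The command
# phrases and driver routines below are hypothetical.

import time

def open_valve(line):
    print("driver: opening valve ->", line)

def log_temperature(line):
    print("driver: reading and logging temperature")

# User-defined command table (analogous to the input data set in the abstract).
COMMANDS = {
    "OPEN VALVE": open_valve,
    "LOG TEMPERATURE": log_temperature,
}

def run(command_lines, delay_s=0.0):
    """Execute natural-language command lines in order, unattended."""
    for line in command_lines:
        for phrase, driver in COMMANDS.items():
            if line.upper().startswith(phrase):
                driver(line)
                break
        else:
            print("unrecognized command:", line)
        time.sleep(delay_s)

run(["OPEN VALVE 3", "LOG TEMPERATURE", "CLOSE SHUTTER"])
```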

  13. Process Improvements in Training Device Acceptance Testing: A Study in Total Quality Management

    DTIC Science & Technology

    1990-12-12

    Quality Management, a small group of Government and industry specialists examined the existing training device acceptance test process for potential improvements. The agreed-to mission of the Air Force/Industry partnership was to continuously identify and promote implementable approaches to minimize the cost and time required for acceptance testing while ensuring that validated performance supports the user training requirements. Application of a Total Quality process improvement model focused on the customers and their requirements, analyzed how work was accomplished, and

  14. Intelligent Work Process Engineering System

    NASA Technical Reports Server (NTRS)

    Williams, Kent E.

    2003-01-01

    Optimizing performance on work activities and processes requires metrics of performance for management to monitor and analyze in order to support further improvements in efficiency, effectiveness, safety, reliability and cost. Information systems are therefore required to assist management in making timely, informed decisions regarding these work processes and activities. Currently, information systems that would support such timely decisions regarding Space Shuttle maintenance and servicing do not exist. The work to be presented details a system which incorporates various automated and intelligent processes and analysis tools to capture, organize and analyze work process related data, to make the necessary decisions to meet KSC organizational goals. The advantages and disadvantages of design alternatives to the development of such a system will be discussed, including technologies which would need to be designed, prototyped and evaluated.

  15. An application of computer aided requirements analysis to a real time deep space system

    NASA Technical Reports Server (NTRS)

    Farny, A. M.; Morris, R. V.; Hartsough, C.; Callender, E. D.; Teichroew, D.; Chikofsky, E.

    1981-01-01

    The entire procedure of incorporating the requirements and goals of a space flight project into integrated, time-ordered sequences of spacecraft commands is called the uplink process. The Uplink Process Control Task (UPCT) was created to examine the uplink process and determine ways to improve it. The Problem Statement Language/Problem Statement Analyzer (PSL/PSA), designed to assist the designer/analyst/engineer in the preparation of specifications of an information system, is used as a supporting tool to aid in the analysis. Attention is given to a definition of the uplink process, the definition of PSL/PSA, the construction of a PSA database, the value of analysis to the study of the uplink process, and the PSL/PSA lessons learned.

  16. Real-Time and Memory Correlation via Acousto-Optic Processing,

    DTIC Science & Technology

    1978-06-01

    acousto-optic technology as an answer to these requirements appears very attractive. Three fundamental signal-processing schemes using the acousto-optic interaction have been investigated: (i) real-time correlation and convolution, (ii) Fourier and discrete Fourier transformation, and (iii

  17. Course Development Cycle Time: A Framework for Continuous Process Improvement.

    ERIC Educational Resources Information Center

    Lake, Erinn

    2003-01-01

    Details Edinboro University's efforts to reduce the extended cycle time required to develop new courses and programs. Describes a collaborative process improvement framework, illustrated data findings, the team's recommendations for improvement, and the outcomes of those recommendations. (EV)

  18. Fat content and D-Value, A tale of two finfish

    USDA-ARS?s Scientific Manuscript database

    In the National Advisory Committee on Microbiological Criteria for Food (NACMCF) report, it was determined that the cooking process (time/temperature) requirement for seafood would be different than for meat products. NACMCF identified a need to determine the time/temperature requirements to adequ...

  19. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
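    For orientation, the standard discrete-time definitions of these two quantities are reproduced below; the paper's contribution is their generalization to continuous-time, semi-Markov processes, which these formulas do not capture.

```latex
% Standard discrete-time definitions, given only for orientation.
h_\mu = \lim_{L \to \infty} H\!\left[X_0 \mid X_{-L} \ldots X_{-1}\right],
\qquad
C_\mu = H[\mathcal{S}] = -\sum_{\sigma \in \mathcal{S}} \Pr(\sigma)\,\log_2 \Pr(\sigma)
```

    Here X_t denotes the observed process and S the set of causal states of its ε-machine.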

  20. Note: Quasi-real-time analysis of dynamic near field scattering data using a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Cerchiari, G.; Croccolo, F.; Cardinaux, F.; Scheffold, F.

    2012-10-01

    We present an implementation of the analysis of dynamic near field scattering (NFS) data using a graphics processing unit. We introduce an optimized data management scheme thereby limiting the number of operations required. Overall, we reduce the processing time from hours to minutes, for typical experimental conditions. Previously the limiting step in such experiments, the processing time is now comparable to the data acquisition time. Our approach is applicable to various dynamic NFS methods, including shadowgraph, Schlieren and differential dynamic microscopy.
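    The core operation behind this kind of analysis, the image structure function built from Fourier transforms of frame differences, can be sketched in a few lines; the NumPy version below is illustrative only and does not reflect the GPU-optimized data management of the note.

```python
# Minimal NumPy sketch of the image structure function D(q, dt) used in
# differential dynamic microscopy / near field scattering analysis.
# The random frames stand in for an experimental image stack.

import numpy as np

def structure_function(frames, dt):
    """Average |FFT(I(t+dt) - I(t))|^2 over all available frame pairs."""
    diffs = frames[dt:] - frames[:-dt]
    power = np.abs(np.fft.fft2(diffs)) ** 2
    return power.mean(axis=0)

def radial_average(image2d, n_bins=50):
    """Collapse a 2D spectrum onto scalar wavenumber bins."""
    ny, nx = image2d.shape
    ky, kx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing="ij")
    k = np.hypot(kx, ky).ravel()
    bins = np.linspace(0, k.max(), n_bins)
    idx = np.digitize(k, bins)
    vals = image2d.ravel()
    return np.array([vals[idx == i].mean() if np.any(idx == i) else 0.0
                     for i in range(1, len(bins))])

frames = np.random.rand(100, 128, 128)          # stand-in for an image stack
D_q = radial_average(structure_function(frames, dt=5))
print(D_q.shape)
```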

  1. An Open-Source Hardware and Software System for Acquisition and Real-Time Processing of Electrophysiology during High Field MRI

    PubMed Central

    Purdon, Patrick L.; Millan, Hernan; Fuller, Peter L.; Bonmassar, Giorgio

    2008-01-01

    Simultaneous recording of electrophysiology and functional magnetic resonance imaging (fMRI) is a technique of growing importance in neuroscience. Rapidly evolving clinical and scientific requirements have created a need for hardware and software that can be customized for specific applications. Hardware may require customization to enable a variety of recording types (e.g., electroencephalogram, local field potentials, or multi-unit activity) while meeting the stringent and costly requirements of MRI safety and compatibility. Real-time signal processing tools are an enabling technology for studies of learning, attention, sleep, epilepsy, neurofeedback, and neuropharmacology, yet real-time signal processing tools are difficult to develop. We describe an open source system for simultaneous electrophysiology and fMRI featuring low-noise (< 0.6 uV p-p input noise), electromagnetic compatibility for MRI (tested up to 7 Tesla), and user-programmable real-time signal processing. The hardware distribution provides the complete specifications required to build an MRI-compatible electrophysiological data acquisition system, including circuit schematics, printed circuit board (PCB) layouts, Gerber files for PCB fabrication and robotic assembly, a bill of materials with part numbers, data sheets, and vendor information, and test procedures. The software facilitates rapid implementation of real-time signal processing algorithms. This system has been used in human EEG/fMRI studies at 3 and 7 Tesla examining the auditory system, visual system, sleep physiology, and anesthesia, as well as in intracranial electrophysiological studies of the non-human primate visual system during 3 Tesla fMRI, and in human hyperbaric physiology studies at depths of up to 300 feet below sea level. PMID:18761038

  2. An open-source hardware and software system for acquisition and real-time processing of electrophysiology during high field MRI.

    PubMed

    Purdon, Patrick L; Millan, Hernan; Fuller, Peter L; Bonmassar, Giorgio

    2008-11-15

    Simultaneous recording of electrophysiology and functional magnetic resonance imaging (fMRI) is a technique of growing importance in neuroscience. Rapidly evolving clinical and scientific requirements have created a need for hardware and software that can be customized for specific applications. Hardware may require customization to enable a variety of recording types (e.g., electroencephalogram, local field potentials, or multi-unit activity) while meeting the stringent and costly requirements of MRI safety and compatibility. Real-time signal processing tools are an enabling technology for studies of learning, attention, sleep, epilepsy, neurofeedback, and neuropharmacology, yet real-time signal processing tools are difficult to develop. We describe an open-source system for simultaneous electrophysiology and fMRI featuring low-noise (<0.6 µV p-p input noise), electromagnetic compatibility for MRI (tested up to 7 T), and user-programmable real-time signal processing. The hardware distribution provides the complete specifications required to build an MRI-compatible electrophysiological data acquisition system, including circuit schematics, printed circuit board (PCB) layouts, Gerber files for PCB fabrication and robotic assembly, a bill of materials with part numbers, data sheets, and vendor information, and test procedures. The software facilitates rapid implementation of real-time signal processing algorithms. This system has been used in human EEG/fMRI studies at 3 and 7 T examining the auditory system, visual system, sleep physiology, and anesthesia, as well as in intracranial electrophysiological studies of the non-human primate visual system during 3 T fMRI, and in human hyperbaric physiology studies at depths of up to 300 feet below sea level.

  3. [CMACPAR a modified parallel neuro-controller for control processes].

    PubMed

    Ramos, E; Surós, R

    1999-01-01

    CMACPAR is a parallel neurocontroller oriented to real-time systems such as process control. Its main characteristics are a fast learning algorithm, a reduced number of calculations, a large generalization capacity, local learning and intrinsic parallelism. This type of neurocontroller is used in real-time applications required by refineries, hydroelectric plants, factories, etc. In this work we present the analysis and the parallel implementation of a modified scheme of the CMAC cerebellar model for n-dimensional space projection using a medium-granularity parallel neurocontroller. The proposed memory management allows a significant reduction in training time and in the required memory size.
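    A serial, one-dimensional toy version of the underlying CMAC idea (coarse-coded overlapping tilings with local, delta-rule learning) is sketched below; it is not the parallel CMACPAR scheme itself, and the tiling sizes and learning rate are illustrative.

```python
# Toy serial CMAC: overlapping shifted tilings, local delta-rule updates.
# Not the CMACPAR implementation; all parameters are illustrative.

import numpy as np

class TinyCMAC:
    def __init__(self, n_tilings=8, n_tiles=16, x_min=0.0, x_max=1.0, lr=0.2):
        self.n_tilings, self.n_tiles, self.lr = n_tilings, n_tiles, lr
        self.x_min, self.width = x_min, (x_max - x_min) / n_tiles
        self.w = np.zeros((n_tilings, n_tiles + 1))   # one weight table per tiling

    def _active(self, x):
        # Each tiling is shifted by a fraction of a tile width (coarse coding).
        for t in range(self.n_tilings):
            offset = t * self.width / self.n_tilings
            idx = int((x - self.x_min + offset) / self.width)
            yield t, min(idx, self.n_tiles)

    def predict(self, x):
        return sum(self.w[t, i] for t, i in self._active(x))

    def train(self, x, target):
        error = target - self.predict(x)
        for t, i in self._active(x):          # local update: only active tiles change
            self.w[t, i] += self.lr * error / self.n_tilings

# Fit a simple one-dimensional mapping, e.g. a control surface y = sin(2*pi*x).
net = TinyCMAC()
for x in np.random.rand(2000):
    net.train(x, np.sin(2 * np.pi * x))
print(round(net.predict(0.25), 3), "~", round(np.sin(2 * np.pi * 0.25), 3))
```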

  4. Cortical Specializations Underlying Fast Computations

    PubMed Central

    Volgushev, Maxim

    2016-01-01

    The time course of behaviorally relevant environmental events sets temporal constraints on neuronal processing. How does the mammalian brain make use of the increasingly complex networks of the neocortex, while making decisions and executing behavioral reactions within a reasonable time? The key parameter determining the speed of computations in neuronal networks is the time interval that neuronal ensembles need to process changes at their input and communicate the results of this processing to downstream neurons. Theoretical analysis identified basic requirements for fast processing: use of neuronal populations for encoding, background activity, and fast onset dynamics of action potentials in neurons. Experimental evidence shows that populations of neocortical neurons fulfil these requirements. Indeed, they can change firing rate in response to input perturbations very quickly, within 1 to 3 ms, and encode high-frequency components of the input by phase-locking their spiking to frequencies up to 300 to 1000 Hz. This implies that the time unit of computations by cortical ensembles is only a few milliseconds (1 to 3 ms), considerably faster than the membrane time constant of individual neurons. The ability of cortical neuronal ensembles to communicate on a millisecond time scale allows for complex, multiple-step processing and precise coordination of neuronal activity in parallel processing streams, while keeping the speed of behavioral reactions within environmentally set temporal constraints. PMID:25689988

  5. The ED-inpatient dashboard: Uniting emergency and inpatient clinicians to improve the efficiency and quality of care for patients requiring emergency admission to hospital.

    PubMed

    Staib, Andrew; Sullivan, Clair; Jones, Matt; Griffin, Bronwyn; Bell, Anthony; Scott, Ian

    2017-06-01

    Patients who require emergency admission to hospital require complex care that can be fragmented, occurring in the ED, across the ED-inpatient interface (EDii) and subsequently, in their destination inpatient ward. Our hospital had poor process efficiency with slow transit times for patients requiring emergency care. ED clinicians alone were able to improve the processes and length of stay for the patients discharged directly from the ED. However, improving the efficiency of care for patients requiring emergency admission to true inpatient wards required collaboration with reluctant inpatient clinicians. The inpatient teams were uninterested in improving time-based measures of care in isolation, but they were motivated by improving patient outcomes. We developed a dashboard showing process measures such as 4 h rule compliance rate coupled with clinically important outcome measures such as inpatient mortality. The EDii dashboard helped unite both ED and inpatient teams in clinical redesign to improve both efficiencies of care and patient outcomes. © 2016 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  6. Design of an MR image processing module on an FPGA chip

    NASA Astrophysics Data System (ADS)

    Li, Limin; Wyrwicz, Alice M.

    2015-06-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases the portability of the core. Direct matrix transposition, usually required for execution of a 2D FFT, is completely avoided using our newly designed address generation unit, which saves considerable on-chip block RAM and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments.
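    A software reference for what such a module computes is sketched below: reconstruction of an MR image from Cartesian k-space via a 2D FFT, written both directly and as separable row/column 1D transforms, the step at which a hardware design would normally need the matrix transposition that the module's address-generation unit avoids. The data are synthetic and the code is not derived from the FPGA design.

```python
# Software reference for 2D-FFT MR image reconstruction on synthetic k-space.

import numpy as np

def reconstruct(kspace):
    """Magnitude image from complex k-space (centered inverse 2D FFT)."""
    return np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace))))

def reconstruct_row_column(kspace):
    """Same result, written as separable 1D transforms along rows then columns."""
    tmp = np.fft.ifft(np.fft.ifftshift(kspace), axis=1)   # per-row 1D IFFT
    img = np.fft.ifft(tmp, axis=0)                        # per-column 1D IFFT
    return np.abs(np.fft.fftshift(img))

phantom = np.zeros((128, 128))
phantom[48:80, 48:80] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))

assert np.allclose(reconstruct(kspace), phantom)
assert np.allclose(reconstruct(kspace), reconstruct_row_column(kspace))
print("reconstruction OK:", reconstruct(kspace).shape)
```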

  7. Design of an MR image processing module on an FPGA chip

    PubMed Central

    Li, Limin; Wyrwicz, Alice M.

    2015-01-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases the portability of the core. Direct matrix transposition, usually required for execution of a 2D FFT, is completely avoided using our newly designed address generation unit, which saves considerable on-chip block RAM and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments. PMID:25909646

  8. Image/Time Series Mining Algorithms: Applications to Developmental Biology, Document Processing and Data Streams

    ERIC Educational Resources Information Center

    Tataw, Oben Moses

    2013-01-01

    Interdisciplinary research in computer science requires the development of computational techniques for practical application in different domains. This usually requires careful integration of different areas of technical expertise. This dissertation presents image and time series analysis algorithms, with practical interdisciplinary applications…

  9. A Metrics-Based Approach to Intrusion Detection System Evaluation for Distributed Real-Time Systems

    DTIC Science & Technology

    2002-04-01

    Based Approach to Intrusion Detection System Evaluation for Distributed Real-Time Systems Authors: G. A. Fink, B. L. Chappell, T. G. Turner, and...Distributed, Security. 1 Introduction Processing and cost requirements are driving future naval combat platforms to use distributed, real-time systems of...distributed, real-time systems. As these systems grow more complex, the timing requirements do not diminish; indeed, they may become more constrained

  10. An automated workflow for parallel processing of large multiview SPIM recordings

    PubMed Central

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-01-01

    Summary: Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, performed interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585

  11. An automated workflow for parallel processing of large multiview SPIM recordings.

    PubMed

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-04-01

    Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, performed interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  12. A time-driven, activity-based costing methodology for determining the costs of red blood cell transfusion in patients with beta thalassaemia major.

    PubMed

    Burns, K E; Haysom, H E; Higgins, A M; Waters, N; Tahiri, R; Rushford, K; Dunstan, T; Saxby, K; Kaplan, Z; Chunilal, S; McQuilten, Z K; Wood, E M

    2018-04-10

    To describe the methodology to estimate the total cost of administration of a single unit of red blood cells (RBC) in adults with beta thalassaemia major in an Australian specialist haemoglobinopathy centre. Beta thalassaemia major is a genetic disorder of haemoglobin associated with multiple end-organ complications and typically requiring lifelong RBC transfusion therapy. New therapeutic agents are becoming available based on advances in understanding of the disorder and its consequences. Assessment of the true total cost of transfusion, incorporating both product and activity costs, is required in order to evaluate the benefits and costs of these new therapies. We describe the bottom-up, time-driven, activity-based costing methodology used to develop process maps to provide a step-by-step outline of the entire transfusion pathway. Detailed flowcharts for each process are described. Direct observations and timing of the process maps document all activities, resources, staff, equipment and consumables in detail. The analysis will include costs associated with performing these processes, including resources and consumables. Sensitivity analyses will be performed to determine the impact of different staffing levels, timings and probabilities associated with performing different tasks. Thirty-one process maps have been developed, with over 600 individual activities requiring multiple timings. These will be used for future detailed cost analyses. Detailed process maps using bottom-up, time-driven, activity-based costing for determining the cost of RBC transfusion in thalassaemia major have been developed. These could be adapted for wider use to understand and compare the costs and complexities of transfusion in other settings. © 2018 British Blood Transfusion Society.
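    The underlying arithmetic of time-driven, activity-based costing is simple: each observed activity is costed as a capacity cost rate multiplied by its timed duration, then summed along the process map together with product and consumable costs. The sketch below illustrates this with invented roles, rates and timings rather than study data.

```python
# Toy time-driven, activity-based costing calculation. All activities, roles,
# rates and durations are invented for illustration; they are not study data.

ACTIVITIES = [
    # (activity, role, minutes)
    ("verify prescription and crossmatch", "transfusion scientist", 12.0),
    ("collect unit from blood bank",        "nurse",                 8.0),
    ("bedside check and consent",           "nurse",                 6.0),
    ("administer unit and observe",         "nurse",                60.0),
]

RATE_PER_MINUTE = {"transfusion scientist": 1.10, "nurse": 0.90}  # assumed $/min

def transfusion_cost(activities, consumables_cost=0.0, product_cost=0.0):
    activity_cost = sum(RATE_PER_MINUTE[role] * minutes
                        for _, role, minutes in activities)
    return activity_cost + consumables_cost + product_cost

print(f"cost per RBC unit: ${transfusion_cost(ACTIVITIES, 15.0, 300.0):.2f}")
```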

  13. 40 CFR 63.1427 - Process vent requirements for processes using extended cookout as an epoxide emission reduction...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... = Concentration of epoxide in the reactor liquid at the beginning of the time period, weight percent. k = Reaction rate constant, 1/hr. t = Time, hours. Note: This equation assumes a first order reaction with respect... process knowledge, reaction kinetics, and engineering knowledge, in accordance with paragraph (a)(2)(i) of...
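    The equation fragments quoted in this snippet appear to describe a first-order decay of epoxide concentration over the extended cookout period; a hedged reconstruction consistent with the quoted symbol definitions (not the exact CFR layout) is:

```latex
% Hedged reconstruction of the first-order relationship implied by the snippet.
C_t = C_0 \, e^{-k t}
```

    where C_0 is the epoxide concentration in the reactor liquid at the beginning of the time period (weight percent), k the reaction rate constant (1/hr), and t the time (hours).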

  14. 40 CFR 63.1427 - Process vent requirements for processes using extended cookout as an epoxide emission reduction...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... reactor liquid at the beginning of the time period, weight percent. k = Reaction rate constant, 1/hr. t = Time, hours. Note: This equation assumes a first order reaction with respect to epoxide concentration... measuring the concentration of the unreacted epoxide, or by using process knowledge, reaction kinetics, and...

  15. 40 CFR 63.1427 - Process vent requirements for processes using extended cookout as an epoxide emission reduction...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... = Concentration of epoxide in the reactor liquid at the beginning of the time period, weight percent. k = Reaction rate constant, 1/hr. t = Time, hours. Note: This equation assumes a first order reaction with respect... process knowledge, reaction kinetics, and engineering knowledge, in accordance with paragraph (a)(2)(i) of...

  16. Space station microscopy: Beyond the box

    NASA Technical Reports Server (NTRS)

    Hunter, N. R.; Pierson, Duane L.; Mishra, S. K.

    1993-01-01

    Microscopy aboard Space Station Freedom poses many unique challenges for in-flight investigations. Disciplines such as material processing, plant and animal research, human research, environmental monitoring, health care, and biological processing have diverse microscope requirements. The typical microscope not only does not meet the comprehensive needs of these varied users, but also tends to require excessive crew time. To assess user requirements, a comprehensive survey was conducted among investigators with experiments requiring microscopy. The survey examined requirements such as light sources, objectives, stages, focusing systems, eye pieces, video accessories, etc. The results of this survey and the application of an Intelligent Microscope Imaging System (IMIS) may address these demands for efficient microscopy service in space. The proposed IMIS can accommodate multiple users with varied requirements, operate in several modes, reduce crew time needed for experiments, and take maximum advantage of the restrictive data/instruction transmission environment on Freedom.

  17. Time Required for Institutional Review Board Review at One Veterans Affairs Medical Center

    PubMed Central

    Hall, Daniel E.; Hanusa, Barbara H.; Stone, Roslyn A.; Ling, Bruce S.; Arnold, Robert M.

    2015-01-01

    IMPORTANCE Despite growing concern that institutional review boards (IRBs) impose burdensome delays on research, little is known about the time required for IRB review across different types of research. OBJECTIVE To measure the overall and incremental process times for IRB review as a process of quality improvement. DESIGN, SETTING, AND PARTICIPANTS After developing a detailed process flowchart of the IRB review process, 2 analysts abstracted temporal data from the records pertaining to all 103 protocols newly submitted to the IRB at a large urban Veterans Affairs medical center from June 1, 2009, through May 31, 2011. Disagreements were reviewed with the principal investigator to reach consensus. We then compared the review times across review types using analysis of variance and post hoc Scheffé tests after achieving normally distributed data through logarithmic transformation. MAIN OUTCOMES AND MEASURES Calendar days from initial submission to final approval of research protocols. RESULTS Initial IRB review took 2 to 4 months, with expedited and exempt reviews requiring less time (median [range], 85 [23–631] and 82 [16–437] days, respectively) than full board reviews (median [range], 131 [64–296] days; P = .008). The median time required for credentialing of investigators was 1 day (range, 0–74 days), and review by the research and development committee took a median of 15 days (range, 0–184 days). There were no significant differences in credentialing or research and development times across review types (exempt, expedited, or full board). Of the extreme delays in IRB review, 80.0% were due to investigators' slow responses to requested changes. There were no systematic delays attributable to the information security officer, privacy officer, or IRB chair. CONCLUSIONS AND RELEVANCE Measuring and analyzing review times is a critical first step in establishing a culture and process of continuous quality improvement among IRBs that govern research programs. The review times observed at this IRB are substantially longer than the 60-day target recommended by expert panels. The method described here could be applied to other IRBs to begin identifying and improving inefficiencies. PMID:25494359

  18. Event-driven processing for hardware-efficient neural spike sorting

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Pereira, João L.; Constandinou, Timothy G.

    2018-02-01

    Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide a new, efficient means for hardware implementation that is completely activity-dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time-domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented on a low-power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining the signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting can be achieved with comparable or better accuracy than reference methods whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
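    The continuous-time level-crossing encoding exploited here can be sketched compactly: an event (time, direction) is emitted whenever the signal moves one quantization level up or down, so the data rate follows the neural activity. The threshold and synthetic spike below are illustrative, not the paper's datasets.

```python
# Level-crossing encoding sketch: emit (time, +1/-1) events at level crossings.
# The delta threshold and the synthetic test spike are illustrative only.

import numpy as np

def level_crossing_encode(signal, times, delta):
    """Return (time, +1/-1) events for each level crossing of size delta."""
    events, last = [], signal[0]
    for t, x in zip(times[1:], signal[1:]):
        while x - last >= delta:          # crossed one or more levels upward
            last += delta
            events.append((t, +1))
        while last - x >= delta:          # crossed one or more levels downward
            last -= delta
            events.append((t, -1))
    return events

t = np.linspace(0, 0.01, 300)                          # 10 ms at ~30 kHz
spike = 80e-6 * np.exp(-((t - 0.005) / 0.4e-3) ** 2)   # synthetic ~80 uV spike
events = level_crossing_encode(spike, t, delta=10e-6)
print(len(events), "events instead of", len(t), "samples")
```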

  19. Informing future NRT satellite distribution capabilities: Lessons learned from NASA's Land Atmosphere NRT capability for EOS (LANCE)

    NASA Astrophysics Data System (ADS)

    Davies, D.; Murphy, K. J.; Michael, K.

    2013-12-01

    NASA's Land Atmosphere Near real-time Capability for EOS (Earth Observing System) (LANCE) provides data and imagery from the Terra, Aqua and Aura satellites in less than 3 hours from satellite observation, to meet the needs of the near real-time (NRT) applications community. This article describes the architecture of LANCE and outlines the modifications made to achieve the 3-hour latency requirement, with a view to informing future NRT satellite distribution capabilities. It also describes how latency is determined. LANCE is a distributed system that builds on the existing EOS Data and Information System (EOSDIS) capabilities. To achieve the NRT latency requirement, many components of the EOS satellite operations, ground and science processing systems have been made more efficient without compromising the quality of science data processing. The EOS Data and Operations System (EDOS) processes the NRT stream with higher priority than the science data stream in order to minimize latency. In addition to expediting transfer times, the key difference between the NRT Level 0 products and those for standard science processing is the data used to determine the precise location and tilt of the satellite. Standard products use definitive geo-location (attitude and ephemeris) data provided daily, whereas NRT products use predicted geo-location provided by the instrument Global Positioning System (GPS) or an approximation of navigational data (depending on platform). Level 0 data are processed into higher-level products at designated Science Investigator-led Processing Systems (SIPS). The processes used by LANCE have been streamlined and adapted to work with datasets as soon as they are downlinked from satellites or transmitted from ground stations. Level 2 products that require ancillary data have modified production rules that relax the requirements for ancillary data, thereby reducing processing times. Looking to the future, experience gained from LANCE can provide valuable lessons on satellite and ground system architectures and on how the delivery of NRT products from other NASA missions might be achieved.

  20. HEVC real-time decoding

    NASA Astrophysics Data System (ADS)

    Bross, Benjamin; Alvarez-Mesa, Mauricio; George, Valeri; Chi, Chi Ching; Mayer, Tobias; Juurlink, Ben; Schierl, Thomas

    2013-09-01

    The new High Efficiency Video Coding Standard (HEVC) was finalized in January 2013. Compared to its predecessor H.264 / MPEG4-AVC, this new international standard is able to reduce the bitrate by 50% for the same subjective video quality. This paper investigates decoder optimizations that are needed to achieve HEVC real-time software decoding on a mobile processor. It is shown that HEVC real-time decoding up to high definition video is feasible using instruction extensions of the processor while decoding 4K ultra high definition video in real-time requires additional parallel processing. For parallel processing, a picture-level parallel approach has been chosen because it is generic and does not require bitstreams with special indication.

  1. Coherent diffractive imaging of time-evolving samples with improved temporal resolution

    DOE PAGES

    Ulvestad, A.; Tripathi, A.; Hruszkewycz, S. O.; ...

    2016-05-19

    Bragg coherent x-ray diffractive imaging is a powerful technique for investigating dynamic nanoscale processes in nanoparticles immersed in reactive, realistic environments. Its temporal resolution is limited, however, by the oversampling requirements of three-dimensional phase retrieval. Here, we show that incorporating the entire measurement time series, which is typically a continuous physical process, into phase retrieval allows the oversampling requirement at each time step to be reduced, leading to a subsequent improvement in the temporal resolution by a factor of 2-20 times. The increased time resolution will allow imaging of faster dynamics and of radiation-dose-sensitive samples. Furthermore, this approach, which we call "chrono CDI," may find use in improving the time resolution in other imaging techniques.

  2. Autonomous Object Characterization with Large Datasets

    DTIC Science & Technology

    2015-10-18

    desk, where a substantial amount of effort is required to transform raw photometry into a data product, minimizing the amount of time the analyst has...were used to explore concepts in satellite characterization and satellite state change. The first algorithm provides real-time stability estimation... Timely and effective space object (SO) characterization is a challenge, and requires advanced data processing techniques. Detection and identification

  3. Retail yield and fabrication times for veal as influenced by purchasing options and merchandising styles.

    PubMed

    McNeill, M S; Griffin, D B; Dockerty, T R; Walter, J P; Johnson, H K; Savell, J W

    1998-06-01

    Twenty-nine selected styles of subprimals or sections of veal were obtained from a commercial facility to assist in the development of a support program for retailers. They were fabricated into bone-in or boneless retail cuts and associated components by trained meat cutters. Each style selected (n = 6) was used to generate mean retail yields and labor requirements, which were calculated from wholesale and retail weights and processing times. Means and standard errors for veal ribs consisting of five different styles (n = 30) concluded that style #2, 7-rib 4 (10 cm) x 4 (10 cm), had the lowest percentage of total retail yield (P < .05) owing to the greatest percentage of bone. Furthermore, rib style #2 required the longest total processing time (P < .05). Rib styles #3, 7-rib chop-ready, and #5, 6-rib chop ready, yielded the greatest percentage of total retail yield and also had the shortest total processing time (P < .05). Within veal loins, style #2, 4 (10 cm) x 4 (10 cm) loin kidney fat in, had the greatest percentage fat (P < .05). Loin styles #2 and #3, 4 (10 cm) x 4 (10 cm) loin special trimmed, generated more lean and fat trimmings and bone, resulting in lower percentage of total retail yields than loin style #1, 0 (0 cm) x 1 (2.5 cm) loin special trimmed (P < .05). Results indicated that bone-in subprimals and sections required more processing time if fabricated into a boneless end point. In addition, as the number of different retail cuts increased, processing times also increased.

  4. 33 CFR 230.17 - Filing requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... supplement, district commanders will establish a time schedule for each step of the process based upon considerations listed in 40 CFR 1501.8 and upon other management considerations. The time required from the... reviews by division and the incorporation of division's comments in the EIS. HQUSACE and/or division will...

  5. Navigation Operations with Prototype Components of an Automated Real-Time Spacecraft Navigation System

    NASA Technical Reports Server (NTRS)

    Cangahuala, L.; Drain, T. R.

    1999-01-01

    At present, ground navigation support for interplanetary spacecraft requires human intervention for data pre-processing, filtering, and post-processing activities; these actions must be repeated each time a new batch of data is collected by the ground data system.

  6. Toward a Framework for Dynamic Service Binding in E-Procurement

    NASA Astrophysics Data System (ADS)

    Ashoori, Maryam; Eze, Benjamin; Benyoucef, Morad; Peyton, Liam

    In an online environment, an E-Procurement process should be able to react and adapt in near real-time to changes in suppliers, requirements, and regulations. WS-BPEL is an emerging standard for process automation, but is oriented towards design-time binding of services. This limitation can be addressed by designing an extension to WS-BPEL that supports automation of flexible e-Procurement processes. Our proposed framework will support dynamic acquisition of procurement services from different suppliers dealing with changing procurement requirements. The proposed framework is illustrated by applying it to health care, where different health insurance providers could be involved in procuring medication for patients.

  7. FPGA-based real time processing of the Plenoptic Wavefront Sensor

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, L. F.; Marín, Y.; Díaz, J. J.; Piqueras, J.; García-Jiménez, J.; Rodríguez-Ramos, J. M.

    The plenoptic wavefront sensor combines measurements at the pupil and image planes in order to obtain wavefront information simultaneously from different points of view, and is capable of sampling the volume above the telescope to extract tomographic information about the atmospheric turbulence. The advantages of this sensor are presented elsewhere at this conference (José M. Rodríguez-Ramos et al). This paper concentrates on the processing required for pupil-plane phase recovery, and its computation in real time using FPGAs (Field Programmable Gate Arrays). This technology eases the implementation of massively parallel processing and allows tailoring the system to the requirements, maintaining flexibility, speed and cost figures.

  8. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  9. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  10. Environmental Compliance Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1981-02-01

    The Guide is intended to assist Department of Energy personnel by providing information on the NEPA process, the processes of other environmental statutes that bear on the NEPA process, the timing relationships between the NEPA process and these other processes, as well as timing relationships between the NEPA process and the development process for policies, programs, and projects. This information should be helpful not only in formulating environmental compliance plans but also in achieving compliance with NEPA and various other environmental statutes. The Guide is divided into three parts with related appendices: Part I provides guidance for developing environmental compliance plans for DOE actions; Part II is devoted to NEPA with detailed flowcharts depicting the compliance procedures required by CEQ regulations and Department of Energy NEPA Guidelines; and Part III contains a series of flowcharts for other Federal environmental requirements that may apply to DOE projects.

  11. Design of a high-speed digital processing element for parallel simulation

    NASA Technical Reports Server (NTRS)

    Milner, E. J.; Cwynar, D. S.

    1983-01-01

    A prototype of a custom designed computer to be used as a processing element in a multiprocessor based jet engine simulator is described. The purpose of the custom design was to give the computer the speed and versatility required to simulate a jet engine in real time. Real time simulations are needed for closed loop testing of digital electronic engine controls. The prototype computer has a microcycle time of 133 nanoseconds. This speed was achieved by: prefetching the next instruction while the current one is executing, transporting data using high speed data busses, and using state of the art components such as a very large scale integration (VLSI) multiplier. Included are discussions of processing element requirements, design philosophy, the architecture of the custom designed processing element, the comprehensive instruction set, the diagnostic support software, and the development status of the custom design.

  12. Simulation Modeling of Software Development Processes

    NASA Technical Reports Server (NTRS)

    Calavaro, G. F.; Basili, V. R.; Iazeolla, G.

    1996-01-01

    A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri Net and Object Oriented top-down model specification. Results demonstrate the model representativeness, and its usefulness in verifying process conformance to expectations, and in performing continuous process improvement and optimization.

  13. Development of a semi-automated model identification and calibration tool for conceptual modelling of sewer systems.

    PubMed

    Wolfs, Vincent; Villazon, Mauricio Florencio; Willems, Patrick

    2013-01-01

    Applications such as real-time control, uncertainty analysis and optimization require an extensive number of model iterations. Full hydrodynamic sewer models are not sufficient for these applications due to the excessive computation time. Simplifications are consequently required. A lumped conceptual modelling approach results in a much faster calculation. The process of identifying and calibrating the conceptual model structure could, however, be time-consuming. Moreover, many conceptual models lack accuracy, or do not account for backwater effects. To overcome these problems, a modelling methodology was developed which is suited for semi-automatic calibration. The methodology is tested for the sewer system of the city of Geel in the Grote Nete river basin in Belgium, using both synthetic design storm events and long time series of rainfall input. A MATLAB/Simulink(®) tool was developed to guide the modeller through the step-wise model construction, reducing significantly the time required for the conceptual modelling process.

  14. Real-time model learning using Incremental Sparse Spectrum Gaussian Process Regression.

    PubMed

    Gijsberts, Arjan; Metta, Giorgio

    2013-05-01

    Novel applications in unstructured and non-stationary human environments require robots that learn from experience and adapt autonomously to changing conditions. Predictive models therefore not only need to be accurate, but should also be updated incrementally in real-time and require minimal human intervention. Incremental Sparse Spectrum Gaussian Process Regression is an algorithm that is targeted specifically for use in this context. Rather than developing a novel algorithm from the ground up, the method is based on the thoroughly studied Gaussian Process Regression algorithm, therefore ensuring a solid theoretical foundation. Non-linearity and a bounded update complexity are achieved simultaneously by means of a finite dimensional random feature mapping that approximates a kernel function. As a result, the computational cost for each update remains constant over time. Finally, algorithmic simplicity and support for automated hyperparameter optimization ensures convenience when employed in practice. Empirical validation on a number of synthetic and real-life learning problems confirms that the performance of Incremental Sparse Spectrum Gaussian Process Regression is superior with respect to the popular Locally Weighted Projection Regression, while computational requirements are found to be significantly lower. The method is therefore particularly suited for learning with real-time constraints or when computational resources are limited. Copyright © 2012 Elsevier Ltd. All rights reserved.
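    The idea can be sketched as follows: an RBF kernel is approximated with random Fourier features, and Bayesian linear regression is performed in that feature space with constant-cost rank-one updates. This is an illustrative reimplementation rather than the authors' code, and the feature count, lengthscale and noise level are assumed.

```python
# Illustrative incremental sparse-spectrum regression: random Fourier features
# plus rank-one updates of the normal equations. Parameters are assumed.

import numpy as np

class IncrementalSSGPR:
    def __init__(self, dim, n_features=100, lengthscale=1.0, noise=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 1.0 / lengthscale, size=(n_features, dim))
        self.b = rng.uniform(0.0, 2 * np.pi, size=n_features)
        self.A = (noise ** 2) * np.eye(n_features)   # Phi^T Phi + sigma^2 I
        self.y = np.zeros(n_features)                # Phi^T y

    def _phi(self, x):
        return np.sqrt(2.0 / len(self.b)) * np.cos(self.W @ x + self.b)

    def update(self, x, target):
        """Constant-cost (O(D^2)) incremental update with one new sample."""
        p = self._phi(x)
        self.A += np.outer(p, p)
        self.y += p * target

    def predict(self, x):
        w = np.linalg.solve(self.A, self.y)          # posterior mean weights
        return self._phi(x) @ w

model = IncrementalSSGPR(dim=1)
for x in np.random.uniform(-3, 3, 500):
    model.update(np.array([x]), np.sin(x))
print(round(model.predict(np.array([1.0])), 3), "~", round(np.sin(1.0), 3))
```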

  15. Automation and integration of components for generalized semantic markup of electronic medical texts.

    PubMed Central

    Dugan, J. M.; Berrios, D. C.; Liu, X.; Kim, D. K.; Kaizer, H.; Fagan, L. M.

    1999-01-01

    Our group has built an information retrieval system based on a complex semantic markup of medical textbooks. We describe the construction of a set of web-based knowledge-acquisition tools that expedites the collection and maintenance of the concepts required for text markup and the search interface required for information retrieval from the marked text. In the text markup system, domain experts (DEs) identify sections of text that contain one or more elements from a finite set of concepts. End users can then query the text using a predefined set of questions, each of which identifies a subset of complementary concepts. The search process matches that subset of concepts to relevant points in the text. The current process requires that the DE invest significant time to generate the required concepts and questions. We propose a new system--called ACQUIRE (Acquisition of Concepts and Queries in an Integrated Retrieval Environment)--that assists a DE in two essential tasks in the text-markup process. First, it helps her to develop, edit, and maintain the concept model: the set of concepts with which she marks the text. Second, ACQUIRE helps her to develop a query model: the set of specific questions that end users can later use to search the marked text. The DE incorporates concepts from the concept model when she creates the questions in the query model. The major benefit of the ACQUIRE system is a reduction in the time and effort required for the text-markup process. We compared the process of concept- and query-model creation using ACQUIRE to the process used in previous work by rebuilding two existing models that we previously constructed manually. We observed a significant decrease in the time required to build and maintain the concept and query models. PMID:10566457

  16. Significantly reducing the processing times of high-speed photometry data sets using a distributed computing model

    NASA Astrophysics Data System (ADS)

    Doyle, Paul; Mtenzi, Fred; Smith, Niall; Collins, Adrian; O'Shea, Brendan

    2012-09-01

    The scientific community is in the midst of a data analysis crisis. The increasing capacity of scientific CCD instrumentation and their falling costs is contributing to an explosive generation of raw photometric data. This data must go through a process of cleaning and reduction before it can be used for high precision photometric analysis. Many existing data processing pipelines either assume a relatively small dataset or are batch processed by a High Performance Computing centre. A radical overhaul of these processing pipelines is required to allow reduction and cleaning rates to process terabyte sized datasets at near capture rates using an elastic processing architecture. The ability to access computing resources and to allow them to grow and shrink as demand fluctuates is essential, as is exploiting the parallel nature of the datasets. A distributed data processing pipeline is required. It should incorporate lossless data compression, allow for data segmentation and support processing of data segments in parallel. Academic institutes can collaborate and provide an elastic computing model without the requirement for large centralized high performance computing data centers. This paper demonstrates how a base 10 order of magnitude improvement in overall processing time has been achieved using the "ACN pipeline", a distributed pipeline spanning multiple academic institutes.
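    The data-parallel pattern such a pipeline exploits can be illustrated schematically: the night's frames are split into independent segments that are cleaned and reduced in parallel, so throughput scales with the number of workers. In the sketch below, local processes stand in for machines at collaborating institutes, and the calibration step is a placeholder rather than the ACN pipeline's actual reduction.

```python
# Schematic data-parallel reduction of a photometric frame stack. The
# calibration step is a placeholder; the segment/worker counts are arbitrary.

from multiprocessing import Pool
import numpy as np

def reduce_frame(frame):
    """Placeholder cleaning step: bias subtraction and flat-field division."""
    bias, flat = 100.0, 1.02
    return (frame - bias) / flat

def reduce_segment(frames):
    return [reduce_frame(f) for f in frames]

if __name__ == "__main__":
    frames = [np.full((64, 64), 500.0) for _ in range(1000)]   # synthetic CCD frames
    segments = [frames[i::8] for i in range(8)]                # 8 independent segments
    with Pool(processes=8) as pool:
        reduced = pool.map(reduce_segment, segments)
    print(sum(len(s) for s in reduced), "frames reduced")
```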

  17. Reticles, write time, and the need for speed

    NASA Astrophysics Data System (ADS)

    Ackmann, Paul W.; Litt, Lloyd C.; Ning, Guo Xiang

    2014-10-01

    Historical data indicates reticle write times are increasing node-to-node. The cost of mask sets is increasing, driven by tighter requirements and more levels. The regular introduction of new generations of mask patterning tools with improved performance is unable to fully compensate for the increased data volume and complexity required. Write time is a primary metric that drives mask fabrication speed. Design (raw data) is only the first step in the process, and many interactions between mask and wafer technology, such as the OPC used, OPC efficiency for writers, fracture engines, and the actual field size used, drive total write time. Yield, technology, and inspection rules drive the remaining raw cycle time. Yield can be even more critical for speed of delivery as it drives re-writes and wasted time. While intrinsic process yield is important, repair capability is the reason mask delivery is still able to deliver 100% good reticles to the fab. Advanced nodes utilizing several layers of multiple patterning may require mask writer tool dedication to meet image placement specifications. This will increase the effective mask cycle time for a layer mask set and drive the need for additional mask write capability in order to deliver masks at the rate required by the wafer fab production schedules.

  18. The Turkish Medicines and Medical Devices Agency: Comparison of Its Registration Process with Australia, Canada, Saudi Arabia, and Singapore

    PubMed Central

    Mashaki Ceyhan, Emel; Gürsöz, Hakki; Alkan, Ali; Coşkun, Hacer; Koyuncu, Oğuzhan; Walker, Stuart

    2018-01-01

    Introduction: Regulatory agency comparisons can be of more value and facilitate improvements if conducted among countries with common challenges and similar health agency characteristics. A study was conducted to compare the registration review model used by the Turkish Medicines and Medical Devices Agency (Türkiye Ilaç ve Tibbi Cihaz Kurumu; TITCK) with those of four similar-sized regulatory agencies to identify areas of strength and those requiring further improvement within the TITCK in relation to the review process as well as to assess the level of adherence to good review practices (GRevP) in order to facilitate the TITCK progress toward agency goals. Methods: A questionnaire was completed and validated by the TITCK to collect data related to agency organizational structure, regulatory review process and decision-making practices. Similar questionnaires were completed and validated by Australia's Therapeutic Goods Administration (TGA), Health Canada, Singapore's Health Science Authority (HSA), and the Saudi Arabia Food and Drug Administration (SFDA). Results: The TITCK performs a full review for all new active substance (NAS) applications. Submission of a Certificate of Pharmaceutical product (CPP) with an application is not required; however, evidence of approval in another country is required for final authorization by the TITCK. Pricing data are not required by the TITCK at the time of submission; however, pricing must be completed to enable products to be commercially available. Mean approval times at the TITCK exceeded the agency's overall target time suggesting room for improved performance, consistency, and process predictability. Measures of GRevP are in place, but the implementation by the TITCK is not currently formalized. Discussion: Comparisons made through this study enabled recommendations to the TITCK that include streamlining the good manufacturing practice (GMP) process by sharing GMP inspection outcomes and certificates issued by other authorities, thus avoiding the delays by the current process; removing the requirement for prior approval or CPP; introducing shared or joint reviews with other similar regulatory authorities; formally implementing and monitoring GRevP; defining target timing for each review milestone; redefining the pricing process; and improving transparency by developing publicly available summaries for the basis of approval. PMID:29422861

  19. The Turkish Medicines and Medical Devices Agency: Comparison of Its Registration Process with Australia, Canada, Saudi Arabia, and Singapore.

    PubMed

    Mashaki Ceyhan, Emel; Gürsöz, Hakki; Alkan, Ali; Coşkun, Hacer; Koyuncu, Oğuzhan; Walker, Stuart

    2018-01-01

    Introduction: Regulatory agency comparisons can be of more value and facilitate improvements if conducted among countries with common challenges and similar health agency characteristics. A study was conducted to compare the registration review model used by the Turkish Medicines and Medical Devices Agency (Türkiye Ilaç ve Tibbi Cihaz Kurumu; TITCK) with those of four similar-sized regulatory agencies to identify areas of strength and those requiring further improvement within the TITCK in relation to the review process as well as to assess the level of adherence to good review practices (GRevP) in order to facilitate the TITCK progress toward agency goals. Methods: A questionnaire was completed and validated by the TITCK to collect data related to agency organizational structure, regulatory review process and decision-making practices. Similar questionnaires were completed and validated by Australia's Therapeutic Goods Administration (TGA), Health Canada, Singapore's Health Science Authority (HSA), and the Saudi Arabia Food and Drug Administration (SFDA). Results: The TITCK performs a full review for all new active substance (NAS) applications. Submission of a Certificate of Pharmaceutical product (CPP) with an application is not required; however, evidence of approval in another country is required for final authorization by the TITCK. Pricing data are not required by the TITCK at the time of submission; however, pricing must be completed to enable products to be commercially available. Mean approval times at the TITCK exceeded the agency's overall target time suggesting room for improved performance, consistency, and process predictability. Measures of GRevP are in place, but the implementation by the TITCK is not currently formalized. Discussion: Comparisons made through this study enabled recommendations to the TITCK that include streamlining the good manufacturing practice (GMP) process by sharing GMP inspection outcomes and certificates issued by other authorities, thus avoiding the delays by the current process; removing the requirement for prior approval or CPP; introducing shared or joint reviews with other similar regulatory authorities; formally implementing and monitoring GRevP; defining target timing for each review milestone; redefining the pricing process; and improving transparency by developing publicly available summaries for the basis of approval.

  20. Analysis of data systems requirements for global crop production forecasting in the 1985 time frame

    NASA Technical Reports Server (NTRS)

    Downs, S. W.; Larsen, P. A.; Gerstner, D. A.

    1978-01-01

    Data systems concepts that would be needed to implement the objective of the global crop production forecasting in an orderly transition from experimental to operational status in the 1985 time frame were examined. Information needs of users were converted into data system requirements, and the influence of these requirements on the formulation of a conceptual data system was analyzed. Any potential problem areas in meeting these data system requirements were identified in an iterative process.

  1. Hidden Process Models

    DTIC Science & Technology

    2009-12-18

    cannot be detected with univariate techniques, but require multivariate analysis instead (Kamitani and Tong [2005]). Two other time series analysis ... learning for time series analysis. The historical record of DBNs can be traced back to Dean and Kanazawa [1988] and Dean and Wellman [1991] ... Keywords: Hidden Process Models, probabilistic time series modeling, functional Magnetic Resonance Imaging

  2. Memory for Context becomes Less Specific with Time

    ERIC Educational Resources Information Center

    Wiltgen, Brian J.; Silva, Alcino J.

    2007-01-01

    Context memories initially require the hippocampus, but over time become independent of this structure. This shift reflects a consolidation process whereby memories are gradually stored in distributed regions of the cortex. The function of this process is thought to be the extraction of statistical regularities and general knowledge from specific…

  3. 5 CFR 1631.8 - Prompt response.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... so advise the requester within 5 work days. The time limit for processing such a request will not... receipt of the request, unless additional time is required for one of the following reasons: (1) It is... the office processing the request (e.g., the record keeper); (2) It is necessary to search for...

  4. 36 CFR 251.51 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of motion picture, videotaping, sound recording, or any other moving image or audio recording... category—A processing or monitoring category requiring more than 50 hours of agency time to process an application for a special use authorization (processing category 6 and, in certain situations, processing...

  5. 36 CFR 251.51 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of motion picture, videotaping, sound recording, or any other moving image or audio recording... category—A processing or monitoring category requiring more than 50 hours of agency time to process an application for a special use authorization (processing category 6 and, in certain situations, processing...

  6. 36 CFR 251.51 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of motion picture, videotaping, sound recording, or any other moving image or audio recording... category—A processing or monitoring category requiring more than 50 hours of agency time to process an application for a special use authorization (processing category 6 and, in certain situations, processing...

  7. Development of a real-time microchip PCR system for portable plant disease diagnosis.

    PubMed

    Koo, Chiwan; Malapi-Wight, Martha; Kim, Hyun Soo; Cifci, Osman S; Vaughn-Diaz, Vanessa L; Ma, Bo; Kim, Sungman; Abdel-Raziq, Haron; Ong, Kevin; Jo, Young-Ki; Gross, Dennis C; Shim, Won-Bo; Han, Arum

    2013-01-01

    Rapid and accurate detection of plant pathogens in the field is crucial to prevent the proliferation of infected crops. The polymerase chain reaction (PCR) is the most reliable and accepted method for plant pathogen diagnosis; however, current conventional PCR machines are not portable and require additional post-processing steps to detect the amplified DNA (amplicon) of pathogens. Real-time PCR can directly quantify the amplicon during DNA amplification without the need for post-processing and is thus more suitable for field operations; however, it still takes time and requires large instruments that are costly and not portable. Microchip PCR systems have emerged in the past decade to miniaturize conventional PCR systems and to reduce operation time and cost. Real-time microchip PCR systems have also emerged, but unfortunately all reported portable real-time microchip PCR systems require various auxiliary instruments. Here we present a stand-alone real-time microchip PCR system composed of a PCR reaction chamber microchip with an integrated thin-film heater, a compact fluorescence detector to detect amplified DNA, a microcontroller to control the entire thermocycling operation with data acquisition capability, and a battery. The entire system is 25 × 16 × 8 cm³ in size and 843 g in weight. The disposable microchip requires only an 8-µl sample volume and a single PCR run consumes 110 mAh of power. A DNA extraction protocol, notably without the use of liquid nitrogen, chemicals, and other large lab equipment, was developed for field operations. The developed real-time microchip PCR system and the DNA extraction protocol were used to successfully detect six different fungal and bacterial plant pathogens with a 100% success rate to a detection limit of 5 ng/8 µl sample.

  8. Development of a Real-Time Microchip PCR System for Portable Plant Disease Diagnosis

    PubMed Central

    Kim, Hyun Soo; Cifci, Osman S.; Vaughn-Diaz, Vanessa L.; Ma, Bo; Kim, Sungman; Abdel-Raziq, Haron; Ong, Kevin; Jo, Young-Ki; Gross, Dennis C.; Shim, Won-Bo; Han, Arum

    2013-01-01

    Rapid and accurate detection of plant pathogens in the field is crucial to prevent the proliferation of infected crops. The polymerase chain reaction (PCR) is the most reliable and accepted method for plant pathogen diagnosis; however, current conventional PCR machines are not portable and require additional post-processing steps to detect the amplified DNA (amplicon) of pathogens. Real-time PCR can directly quantify the amplicon during DNA amplification without the need for post-processing and is thus more suitable for field operations; however, it still takes time and requires large instruments that are costly and not portable. Microchip PCR systems have emerged in the past decade to miniaturize conventional PCR systems and to reduce operation time and cost. Real-time microchip PCR systems have also emerged, but unfortunately all reported portable real-time microchip PCR systems require various auxiliary instruments. Here we present a stand-alone real-time microchip PCR system composed of a PCR reaction chamber microchip with an integrated thin-film heater, a compact fluorescence detector to detect amplified DNA, a microcontroller to control the entire thermocycling operation with data acquisition capability, and a battery. The entire system is 25 × 16 × 8 cm³ in size and 843 g in weight. The disposable microchip requires only an 8-µl sample volume and a single PCR run consumes 110 mAh of power. A DNA extraction protocol, notably without the use of liquid nitrogen, chemicals, and other large lab equipment, was developed for field operations. The developed real-time microchip PCR system and the DNA extraction protocol were used to successfully detect six different fungal and bacterial plant pathogens with a 100% success rate to a detection limit of 5 ng/8 µl sample. PMID:24349341

  9. Integrated SeismoGeodetic System with High-Resolution, Real-Time GNSS and Accelerometer Observation For Earthquake Early Warning Application.

    NASA Astrophysics Data System (ADS)

    Passmore, P. R.; Jackson, M.; Zimakov, L. G.; Raczka, J.; Davidson, P.

    2014-12-01

    The key requirements for Earthquake Early Warning and other Rapid Event Notification Systems are: Quick delivery of digital data from a field station to the acquisition and processing center; Data integrity for real-time earthquake notification in order to provide warning prior to significant ground shaking in the given target area. These two requirements are met in the recently developed Trimble SG160-09 SeismoGeodetic System, which integrates both GNSS and acceleration measurements using the Kalman filter algorithm to create a new high-rate (200 sps), real-time displacement with sufficient accuracy and very low latency for rapid delivery of the acquired data to a processing center. The data acquisition algorithm in the SG160-09 System provides output of both acceleration and displacement digital data with 0.2 sec delay. This is a significant reduction in the time interval required for real-time transmission compared to data delivery algorithms available in digitizers currently used in other Earthquake Early Warning networks. Both acceleration and displacement data are recorded and transmitted to the processing site in a specially developed Multiplexed Recording Format (MRF) that minimizes the bandwidth required for real-time data transmission. In addition, a built-in algorithm calculates the τc and Pd once the event is declared. The SG160-09 System keeps track of what data has not been acknowledged and re-transmits the data, giving priority to current data. A modified REF TEK Protocol Daemon (RTPD) receives the digital data and acknowledges data received without error. It forwards this "good" data to processing clients of various real-time data processing software including Earthworm and SeisComP3. The processing clients cache packets when a data gap occurs due to a dropped packet or network outage. The cache packet time is settable, but should not exceed 0.5 sec in the Earthquake Early Warning network configuration. The rapid data transmission algorithm was tested with different communication media, including Internet, DSL, Wi-Fi, GPRS, etc. The test results show that the data latency via most communication media does not exceed 0.5 sec nominal from the first sample in the data packet. The detailed acquisition algorithm and results of data transmission via different communication media are presented.
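
    The record above describes fusing accelerometer samples with GNSS measurements via a Kalman filter to obtain high-rate displacement. The following is a generic one-dimensional Python sketch of that kind of fusion, not Trimble's SG160-09 algorithm; the state model, noise covariances, and sampling rates are illustrative assumptions only.

      # Generic 1-D sketch: 200 sps accelerometer drives the prediction step,
      # slower GNSS displacement fixes drive the update step. Illustrative only.
      import numpy as np

      dt = 1.0 / 200.0                      # accelerometer sample interval
      F = np.array([[1.0, dt], [0.0, 1.0]]) # transition for [displacement, velocity]
      B = np.array([0.5 * dt**2, dt])       # acceleration enters as a control input
      H = np.array([[1.0, 0.0]])            # GNSS observes displacement only
      Q = np.diag([1e-6, 1e-4])             # process noise (made up)
      R = np.array([[1e-4]])                # GNSS measurement noise (made up)

      x = np.zeros(2)                       # state estimate
      P = np.eye(2)                         # state covariance

      def predict(accel):
          global x, P
          x = F @ x + B * accel
          P = F @ P @ F.T + Q

      def update(gnss_disp):
          global x, P
          y = gnss_disp - H @ x                       # innovation
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
          x = x + (K @ y).ravel()
          P = (np.eye(2) - K @ H) @ P

      # Example: constant acceleration, GNSS fix every 40 accelerometer samples (5 Hz)
      rng = np.random.default_rng(1)
      for k in range(400):
          predict(0.1 + rng.normal(0, 0.01))
          if k % 40 == 0:
              true_disp = 0.5 * 0.1 * (k * dt) ** 2
              update(true_disp + rng.normal(0, 0.01))
      print("estimated displacement:", x[0])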

  10. Online total organic carbon (TOC) monitoring for water and wastewater treatment plants processes and operations optimization

    NASA Astrophysics Data System (ADS)

    Assmann, Céline; Scott, Amanda; Biller, Dondra

    2017-08-01

    Organic measurements, such as biological oxygen demand (BOD) and chemical oxygen demand (COD) were developed decades ago in order to measure organics in water. Today, these time-consuming measurements are still used as parameters to check the water treatment quality; however, the time required to generate a result, ranging from hours to days, does not allow COD or BOD to be useful process control parameters - see (1) Standard Method 5210 B; 5-day BOD Test, 1997, and (2) ASTM D1252; COD Test, 2012. Online organic carbon monitoring allows for effective process control because results are generated every few minutes. Though it does not replace BOD or COD measurements still required for compliance reporting, it allows for smart, data-driven and rapid decision-making to improve process control and optimization or meet compliances. Thanks to the smart interpretation of generated data and the capability to now take real-time actions, municipal drinking water and wastewater treatment facility operators can positively impact their OPEX (operational expenditure) efficiencies and their capabilities to meet regulatory requirements. This paper describes how three municipal wastewater and drinking water plants gained process insights, and determined optimization opportunities thanks to the implementation of online total organic carbon (TOC) monitoring.
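
    Since the point of the record above is that minute-scale TOC readings can feed process-control decisions, here is a small Python sketch of one plausible use: flagging readings that drift well above a rolling baseline. The window size, threshold factor, and data are illustrative assumptions, not values from the paper.

      # Sketch of using a high-frequency TOC stream for process control: flag
      # readings that exceed a multiple of a rolling median baseline.
      from collections import deque
      import random

      def toc_alerts(readings_mg_per_l, window=30, factor=1.5):
          """Yield (index, value) for readings exceeding factor x rolling median."""
          history = deque(maxlen=window)
          for i, value in enumerate(readings_mg_per_l):
              if len(history) == window:
                  baseline = sorted(history)[window // 2]   # rolling median
                  if value > factor * baseline:
                      yield i, value
              history.append(value)

      random.seed(0)
      stream = [2.0 + random.gauss(0, 0.1) for _ in range(120)]
      stream[90:95] = [5.0, 5.2, 5.1, 4.9, 5.3]              # simulated organic spike
      for idx, val in toc_alerts(stream):
          print(f"alert at sample {idx}: TOC = {val:.2f} mg/L")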

  11. Orthopaedic Device Approval Through the Premarket Approval Process: A Financial Feasibility Analysis for a Single Center.

    PubMed

    Yang, Brian W; Iorio, Matthew L; Day, Charles S

    2017-03-15

    The 2 main routes of medical device approval through the U.S. Food and Drug Administration are the premarket approval (PMA) process, which requires clinical trials, and the 510(k) premarket notification, which exempts devices from clinical trials if they are substantially equivalent to an existing device. Recently, there has been growing concern regarding the safety of devices approved through the 510(k) premarket notification. The PMA process decreases the potential for device recall; however, it is substantially more costly and time-consuming. Investors and medical device companies are only willing to invest in devices if they can expect to recoup their investment within a timeline of roughly 7 years. Our study utilizes financial modeling to assess the financial feasibility of approving various orthopaedic medical devices through the 510(k) and PMA processes. The expected time to recoup investment through the 510(k) process ranged from 0.585 years to 7.715 years, with an average time of 2.4 years; the expected time to recoup investment through the PMA route ranged from 2.9 years to 24.5 years, with an average time of 8.5 years. Six of the 13 orthopaedic device systems that we analyzed would require longer than our 7-year benchmark to recoup the investment costs of the PMA process. With the 510(k) premarket notification, only 1 device system would take longer than 7 years to recoup its investment costs. Although the 510(k) premarket notification has demonstrated safety concerns, broad requirements for PMA authorization may limit device innovation for less-prevalent orthopaedic conditions. As a result, new approval frameworks may be beneficial. Our report demonstrates how current regulatory policies can potentially influence orthopaedic device innovation.
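
    The study's central quantity is the time needed to recoup an upfront investment against the roughly 7-year benchmark. As a hedged illustration of that arithmetic (not the paper's actual financial model), the Python sketch below computes a simple payback period; all costs, revenues, and the discount rate are invented for the example.

      # Toy payback-period calculation in the spirit of the 7-year benchmark.
      def payback_years(upfront_cost, annual_net_revenue, discount_rate=0.0):
          """Return the (possibly fractional) number of years to recoup upfront_cost."""
          remaining = upfront_cost
          year = 0
          while remaining > 0:
              year += 1
              cash = annual_net_revenue / ((1 + discount_rate) ** year)
              if cash >= remaining:
                  return year - 1 + remaining / cash
              remaining -= cash
              if year > 200:                # guard against never recouping
                  return float("inf")

      # Hypothetical 510(k)-style vs PMA-style device programs (invented figures):
      print(payback_years(upfront_cost=5e6,  annual_net_revenue=2.5e6))   # ~2 years
      print(payback_years(upfront_cost=40e6, annual_net_revenue=4.5e6))   # ~8.9 years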

  12. Real-Time Monitoring of Scada Based Control System for Filling Process

    NASA Astrophysics Data System (ADS)

    Soe, Aung Kyaw; Myint, Aung Naing; Latt, Maung Maung; Theingi

    2008-10-01

    This paper presents a design of real-time monitoring for a filling system using Supervisory Control and Data Acquisition (SCADA). The monitoring of the production process is described in real time using Visual Basic .NET programming under Visual Studio 2005, without dedicated SCADA software. The software integrators are programmed to get the required information for the configuration screens. Simulation of components is shown on the computer screen using a parallel port between the computer and the filling devices. Real-time simulation programs for the filling process from the pure drinking water industry are provided.

  13. Real-Time Processing System for the JET Hard X-Ray and Gamma-Ray Profile Monitor Enhancement

    NASA Astrophysics Data System (ADS)

    Fernandes, Ana M.; Pereira, Rita C.; Neto, André; Valcárcel, Daniel F.; Alves, Diogo; Sousa, Jorge; Carvalho, Bernardo B.; Kiptily, Vasily; Syme, Brian; Blanchard, Patrick; Murari, Andrea; Correia, Carlos M. B. A.; Varandas, Carlos A. F.; Gonçalves, Bruno

    2014-06-01

    The Joint European Torus (JET) is currently undertaking an enhancement program which includes tests of relevant diagnostics with real-time processing capabilities for the International Thermonuclear Experimental Reactor (ITER). Accordingly, a new real-time processing system was developed and installed at JET for the gamma-ray and hard X-ray profile monitor diagnostic. The new system is connected to 19 CsI(Tl) photodiodes in order to obtain the line-integrated profiles of the gamma-ray and hard X-ray emissions. Moreover, it was designed to overcome the former data acquisition (DAQ) limitations while exploiting the required real-time features. The new DAQ hardware, based on the Advanced Telecommunication Computer Architecture (ATCA) standard, includes reconfigurable digitizer modules with embedded field-programmable gate array (FPGA) devices capable of acquiring and simultaneously processing data in real-time from the 19 detectors. A suitable algorithm was developed and implemented in the FPGAs, which are able to deliver the corresponding energy of the acquired pulses. The processed data is sent periodically, during the discharge, through the JET real-time network and stored in the JET scientific databases at the end of the pulse. The interface between the ATCA digitizers, the JET control and data acquisition system (CODAS), and the JET real-time network is provided by the Multithreaded Application Real-Time executor (MARTe). The work developed allowed attaining two of the major milestones required by next fusion devices: the ability to process and simultaneously supply high volume data rates in real-time.
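
    The digitizers described above run an embedded algorithm that delivers the energy of each acquired pulse. As a rough software analogue (not the FPGA firmware deployed at JET), the Python sketch below detects threshold crossings in a digitized trace, subtracts a local baseline, and reports the pulse height as an energy proxy; thresholds and window lengths are illustrative.

      # Simple per-pulse energy extraction from a digitized detector waveform.
      import numpy as np

      def pulse_energies(trace, threshold, baseline_len=20, window=40):
          energies = []
          i = baseline_len
          while i < len(trace) - window:
              if trace[i] > threshold:
                  baseline = trace[i - baseline_len:i].mean()
                  pulse = trace[i:i + window] - baseline
                  energies.append(pulse.max())          # height as energy proxy
                  i += window                           # skip past this pulse
              else:
                  i += 1
          return energies

      rng = np.random.default_rng(2)
      trace = rng.normal(0.0, 1.0, 5000)
      for pos in (500, 1500, 3200):                     # inject three synthetic pulses
          trace[pos:pos + 30] += 50.0 * np.exp(-np.arange(30) / 8.0)
      print(pulse_energies(trace, threshold=10.0))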

  14. Real time closed loop control of an Ar and Ar/O2 plasma in an ICP

    NASA Astrophysics Data System (ADS)

    Faulkner, R.; Soberón, F.; McCarter, A.; Gahan, D.; Karkari, S.; Milosavljevic, V.; Hayden, C.; Islyaikin, A.; Law, V. J.; Hopkins, M. B.; Keville, B.; Iordanov, P.; Doherty, S.; Ringwood, J. V.

    2006-10-01

    Real time closed loop control for plasma assisted semiconductor manufacturing has been the subject of academic research for over a decade. However, due to process complexity and the lack of suitable real time metrology, progress has been elusive and genuine real time, multi-input, multi-output (MIMO) control of a plasma assisted process has yet to be successfully implemented in an industrial setting. A 'plasma parameter control' strategy is required to be adopted, whereby process recipes, which are defined in terms of plasma properties such as critical species densities as opposed to input variables such as rf power and gas flow rates, may be transferable between different chamber types. While PIC simulations and multidimensional fluid models have contributed considerably to the basic understanding of plasmas and the design of process equipment, such models require a large amount of processing time and are hence unsuitable for testing control algorithms. In contrast, linear dynamical empirical models, obtained through system identification techniques, are ideal in some respects for control design since their computational requirements are comparatively small and their structure facilitates the application of classical control design techniques. However, such models provide little process insight and are specific to an operating point of a particular machine. An ideal first principles-based, control-oriented model would exhibit the simplicity and computational requirements of an empirical model and, in addition, despite sacrificing first principles detail, capture enough of the essential physics and chemistry of the process in order to provide reasonably accurate qualitative predictions. This paper will discuss the development of such a first-principles based, control-oriented model of a laboratory inductively coupled plasma chamber. The model consists of a global model of the chemical kinetics coupled to an analytical model of power deposition. Dynamics of actuators including mass flow controllers and exhaust throttle are included and sensor characteristics are also modelled. The application of this control-oriented model to achieve multivariable closed loop control of specific species (e.g., atomic oxygen) and ion density using the actuators rf power, oxygen and argon flow rates, and pressure/exhaust flow rate in an Ar/O2 ICP plasma will be presented.

  15. Design of an MR image processing module on an FPGA chip.

    PubMed

    Li, Limin; Wyrwicz, Alice M

    2015-06-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of a 2D FFT is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAM and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128×128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments. Copyright © 2015 Elsevier Inc. All rights reserved.
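
    The key trick above is performing the 2D FFT as row FFTs followed by column FFTs without physically transposing the matrix, which the FPGA achieves with a custom address-generation unit. The Python sketch below shows the equivalent idea in software, using strided column access instead of a transpose; it is a conceptual analogue, not the chip's implementation.

      # Software analogue of a 2D FFT done as a row pass plus a strided column
      # pass, avoiding an explicit transpose (the role of the address generator).
      import numpy as np

      def fft2_rowcol(image):
          n_rows, n_cols = image.shape
          work = image.astype(complex).copy()
          for r in range(n_rows):                 # pass 1: FFT along each row
              work[r, :] = np.fft.fft(work[r, :])
          for c in range(n_cols):                 # pass 2: FFT along each column,
              work[:, c] = np.fft.fft(work[:, c]) # accessed with a stride, no transpose
          return work

      img = np.random.default_rng(3).normal(size=(128, 128))
      assert np.allclose(fft2_rowcol(img), np.fft.fft2(img))
      print("row/column decomposition matches np.fft.fft2")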

  16. General Recommendations on Fatigue Risk Management for the Canadian Forces

    DTIC Science & Technology

    2010-04-01

    missions performed in aviation require an individual(s) to process large amount of information in a short period of time and to do this on a continuous...information processing required during sustained operations can deteriorate an individual’s ability to perform a task. Given the high operational tempo...memory, which, in turn, is utilized to perform human thought processes (Baddeley, 2003). While various versions of this theory exist, they all share

  17. Route to one-step microstructure mold fabrication for PDMS microfluidic chip

    NASA Astrophysics Data System (ADS)

    Lv, Xiaoqing; Geng, Zhaoxin; Fan, Zhiyuan; Wang, Shicai; Su, Yue; Fang, Weihao; Pei, Weihua; Chen, Hongda

    2018-04-01

    Microstructure mold fabrication for PDMS microfluidic chips remains a complex and time-consuming process requiring special equipment and protocols: photolithography and etching. Thus, a rapid and cost-effective method is highly needed. Compared with the traditional microfluidic chip fabrication process based on micro-electromechanical systems (MEMS), this method is simple and easy to implement, and the whole fabrication process requires only 1-2 h. Microstructures of different sizes, from 100 to 1000 μm, were fabricated and used to culture four breast cancer cell lines. Cell viability and morphology were assessed when the cells were cultured in the micro straight channels, micro square holes, and the bonded PDMS-glass microfluidic chip. The experimental results indicate that the microfluidic chip meets the experimental requirements. This method can greatly reduce the processing time and cost of microfluidic chips, and provides a simple and effective approach to structure design in the field of biological microfabrication and microfluidic chips.

  18. Missile signal processing common computer architecture for rapid technology upgrade

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as the sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain is comprised of two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific, and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. Use of standardized development tools and third-party software upgrades is enabled, as well as rapid upgrade of processing components as improved algorithms are developed. The resulting weapon system will have a superior processing capability over a custom approach at the time of deployment as a result of shorter development cycles and the use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system, and can migrate between weapon system variants, enabled by modification simplicity. This paper presents a reference design using the new approach that utilizes an Altivec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS), and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of an interceptor algorithm operating on this real-time platform are provided.
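
    One of the front-end video steps named above is non-uniformity correction (NUC). The Python sketch below shows a minimal two-point gain/offset NUC: per-pixel gain and offset are estimated from flat-field frames at two known radiance levels and then applied to raw imagery. It is an illustrative stand-in, not the PowerPC/PVL implementation described in the paper, and all numbers are synthetic.

      # Minimal two-point non-uniformity correction (NUC) sketch.
      import numpy as np

      def build_nuc(dark_frames, bright_frames, dark_level, bright_level):
          dark = np.mean(dark_frames, axis=0)
          bright = np.mean(bright_frames, axis=0)
          gain = (bright_level - dark_level) / (bright - dark)
          offset = dark_level - gain * dark
          return gain, offset

      def apply_nuc(raw, gain, offset):
          return gain * raw + offset

      rng = np.random.default_rng(4)
      true_gain = rng.normal(1.0, 0.05, (64, 64))         # simulated pixel response
      true_off = rng.normal(0.0, 2.0, (64, 64))

      def scene(level):
          """Synthetic detector model: per-pixel gain/offset applied to a flat scene."""
          return true_gain * level + true_off

      gain, offset = build_nuc([scene(10.0)] * 8, [scene(200.0)] * 8, 10.0, 200.0)
      corrected = apply_nuc(scene(120.0), gain, offset)
      print("max residual non-uniformity:", float(np.abs(corrected - 120.0).max()))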

  19. 36 CFR 251.51 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... recording, or any other moving image or audio recording equipment on National Forest System lands that... processing or monitoring category requiring more than 50 hours of agency time to process an application for a special use authorization (processing category 6 and, in certain situations, processing category 5) or...

  20. On the energy budget in the current disruption region. [of geomagnetic tail

    NASA Technical Reports Server (NTRS)

    Hesse, Michael; Birn, Joachim

    1993-01-01

    This study investigates the energy budget in the current disruption region of the magnetotail, coincident with a pre-onset thin current sheet, around substorm onset time using published observational data and theoretical estimates. We find that the current disruption/dipolarization process typically requires energy inflow into the primary disruption region. The disruption dipolarization process is therefore endoenergetic, i.e., requires energy input to operate. Therefore we argue that some other simultaneously operating process, possibly a large scale magnetotail instability, is required to provide the necessary energy input into the current disruption region.

  1. Descriptive Study Analyzing Discrepancies in a Software Development Project Change Request (CR) Assessment Process and Recommendations for Process Improvements

    NASA Technical Reports Server (NTRS)

    Cunningham, Kenneth J.

    2002-01-01

    The Change Request (CR) assessment process is essential in the display development cycle. The assessment process is performed to ensure that the changes stated in the description of the CR match the changes in the actual display requirements. If a discrepancy is found between the CR and the requirements, the CR must be returned to the originator for corrections. Data will be gathered from each of the developers to determine the type of discrepancies and the amount of time spent assessing each CR. This study will determine the most common types of discrepancies and the amount of time spent assessing those issues. The results of the study will provide a foundation for future improvements as well as a baseline for future studies.

  2. Practical Unitary Simulator for Non-Markovian Complex Processes

    NASA Astrophysics Data System (ADS)

    Binder, Felix C.; Thompson, Jayne; Gu, Mile

    2018-06-01

    Stochastic processes are as ubiquitous throughout the quantitative sciences as they are notorious for being difficult to simulate and predict. In this Letter, we propose a unitary quantum simulator for discrete-time stochastic processes which requires less internal memory than any classical analogue throughout the simulation. The simulator's internal memory requirements equal those of the best previous quantum models. However, in contrast to previous models, it only requires a (small) finite-dimensional Hilbert space. Moreover, since the simulator operates unitarily throughout, it avoids any unnecessary information loss. We provide a stepwise construction for simulators for a large class of stochastic processes hence directly opening the possibility for experimental implementations with current platforms for quantum computation. The results are illustrated for an example process.

  3. MNE Scan: Software for real-time processing of electrophysiological data.

    PubMed

    Esch, Lorenz; Sun, Limin; Klüber, Viktor; Lew, Seok; Baumgarten, Daniel; Grant, P Ellen; Okada, Yoshio; Haueisen, Jens; Hämäläinen, Matti S; Dinh, Christoph

    2018-06-01

    Magnetoencephalography (MEG) and Electroencephalography (EEG) are noninvasive techniques to study the electrophysiological activity of the human brain. Thus, they are well suited for real-time monitoring and analysis of neuronal activity. Real-time MEG/EEG data processing allows adjustment of the stimuli to the subject's responses for optimizing the acquired information especially by providing dynamically changing displays to enable neurofeedback. We introduce MNE Scan, an acquisition and real-time analysis software based on the multipurpose software library MNE-CPP. MNE Scan allows the development and application of acquisition and novel real-time processing methods in both research and clinical studies. The MNE Scan development follows a strict software engineering process to enable approvals required for clinical software. We tested the performance of MNE Scan in several device-independent use cases, including, a clinical epilepsy study, real-time source estimation, and Brain Computer Interface (BCI) application. Compared to existing tools we propose a modular software considering clinical software requirements expected by certification authorities. At the same time the software is extendable and freely accessible. We conclude that MNE Scan is the first step in creating a device-independent open-source software to facilitate the transition from basic neuroscience research to both applied sciences and clinical applications. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Investigating steam penetration using thermometric methods in dental handpieces with narrow internal lumens during sterilizing processes with non-vacuum or vacuum processes.

    PubMed

    Winter, S; Smith, A; Lappin, D; McDonagh, G; Kirk, B

    2017-12-01

    Dental handpieces are required to be sterilized between patient use. Vacuum steam sterilization processes with fractionated pre/post-vacuum phases or unique cycles for specified medical devices are required for hollow instruments with internal lumens to assure successful air removal. Entrapped air will compromise achievement of required sterilization conditions. Many countries and professional organizations still advocate non-vacuum sterilization processes for these devices. To investigate non-vacuum downward/gravity displacement, type-N steam sterilization of dental handpieces, using thermometric methods to measure time to achieve sterilization temperature at different handpiece locations. Measurements at different positions within air turbines were undertaken with thermocouples and data loggers. Two examples of widely used UK benchtop steam sterilizers were tested: a non-vacuum benchtop sterilizer (Little Sister 3; Eschmann, Lancing, UK) and a vacuum benchtop sterilizer (Lisa; W&H, Bürmoos, Austria). Each sterilizer cycle was completed with three handpieces and each cycle in triplicate. A total of 140 measurements inside dental handpiece lumens were recorded. The non-vacuum process failed (time range: 0-150 s) to reliably achieve sterilization temperatures within the time limit specified by the international standard (15 s equilibration time). The measurement point at the base of the handpiece failed in all test runs (N = 9) to meet the standard. No failures were detected with the vacuum steam sterilization type B process with fractionated pre-vacuum and post-vacuum phases. Non-vacuum downward/gravity displacement, type-N steam sterilization processes are unreliable in achieving sterilization conditions inside dental handpieces, and the base of the handpiece is the site most likely to fail. Copyright © 2017 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
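
    The thermometric check above amounts to asking how long after the chamber reaches sterilization temperature a lumen position also reaches it, compared with the 15 s equilibration limit in the standard. The Python sketch below computes that quantity from a logged temperature trace; the trace, target temperature, and logging interval are synthetic assumptions, not data from the study.

      # Sketch of an equilibration-time check from thermocouple logs.
      def time_to_temperature(times_s, temps_c, target_c=134.0):
          """Return the first time at which temps_c reaches target_c, or None."""
          for t, temp in zip(times_s, temps_c):
              if temp >= target_c:
                  return t
          return None

      times = list(range(0, 300, 5))                              # log every 5 s
      chamber = [20 + min(t, 120) for t in times]                 # synthetic chamber trace
      lumen = [20 + min(max(t - 60, 0), 120) for t in times]      # lumen lags the chamber

      t_chamber = time_to_temperature(times, chamber)
      t_lumen = time_to_temperature(times, lumen)
      equilibration = t_lumen - t_chamber
      print(f"equilibration time: {equilibration} s "
            f"({'PASS' if equilibration <= 15 else 'FAIL'} against the 15 s limit)")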

  5. An architecture for heuristic control of real-time processes

    NASA Technical Reports Server (NTRS)

    Raulefs, P.; Thorndyke, P. W.

    1987-01-01

    Process management combines complementary approaches of heuristic reasoning and analytical process control. Management of a continuous process requires monitoring the environment and the controlled system, assessing the ongoing situation, developing and revising planned actions, and controlling the execution of the actions. For knowledge-intensive domains, process management entails the potentially time-stressed cooperation among a variety of expert systems. By redesigning a blackboard control architecture in an object-oriented framework, researchers obtain an approach to process management that considerably extends blackboard control mechanisms and overcomes limitations of blackboard systems.

  6. Electrophysiological evidence of automatic early semantic processing.

    PubMed

    Hinojosa, José A; Martín-Loeches, Manuel; Muñoz, Francisco; Casado, Pilar; Pozo, Miguel A

    2004-01-01

    This study investigates the automatic-controlled nature of early semantic processing by means of the Recognition Potential (RP), an event-related potential response that reflects lexical selection processes. For this purpose tasks differing in their processing requirements were used. Half of the participants performed a physical task involving a lower-upper case discrimination judgement (shallow processing requirements), whereas the other half carried out a semantic task, consisting in detecting animal names (deep processing requirements). Stimuli were identical in the two tasks. Reaction time measures revealed that the physical task was easier to perform than the semantic task. However, RP effects elicited by the physical and semantic tasks did not differ in either latency, amplitude, or topographic distribution. Thus, the results from the present study suggest that early semantic processing is automatically triggered whenever a linguistic stimulus enters the language processor.

  7. X-33 Environmental Impact Statement: A Fast Track Approach

    NASA Technical Reports Server (NTRS)

    McCaleb, Rebecca C.; Holland, Donna L.

    1998-01-01

    NASA is required by the National Environmental Policy Act (NEPA) to prepare an appropriate level of environmental analysis for its major projects. Development of the X-33 Technology Demonstrator and its associated flight test program required an environmental impact statement (EIS) under the NEPA. The EIS process consists of four parts: the "Notice of Intent" to prepare an EIS and scoping; the draft EIS which is distributed for review and comment; the final EIS; and the "Record of Decision." Completion of this process normally takes from 2 - 3 years, depending on the complexity of the proposed action. Many of the agency's newest fast track, technology demonstration programs require NEPA documentation, but cannot sustain the lengthy time requirement between program concept development and implementation. Marshall Space Flight Center, in cooperation with Kennedy Space Center, accomplished the NEPA process for the X-33 Program in 13 months from Notice of Intent to Record of Decision. In addition, the environmental team implemented an extensive public involvement process, conducting a total of 23 public meetings for scoping and draft EIS comment along with numerous informal meetings with public officials, civic organizations, and Native American Indians. This paper will discuss the fast track approach used to successfully accomplish the NEPA process for X-33 on time.

  8. Development of a Real-Time Pulse Processing Algorithm for TES-Based X-Ray Microcalorimeters

    NASA Technical Reports Server (NTRS)

    Tan, Hui; Hennig, Wolfgang; Warburton, William K.; Doriese, W. Bertrand; Kilbourne, Caroline A.

    2011-01-01

    We report here a real-time pulse processing algorithm for superconducting transition-edge sensor (TES) based x-ray microcalorimeters. TES-based microcalorimeters offer ultra-high energy resolutions, but the small volume of each pixel requires that large arrays of identical microcalorimeter pixels be built to achieve sufficient detection efficiency. That in turn requires that as much pulse processing as possible be performed at the front end of the readout electronics to avoid transferring large amounts of data to a host computer for post-processing. Therefore, a real-time pulse processing algorithm that not only can be implemented in the readout electronics but also achieves satisfactory energy resolutions is desired. We have developed an algorithm that can be easily implemented in hardware. We then tested the algorithm offline using several data sets acquired with an 8 x 8 Goddard TES x-ray calorimeter array and a 2 x 16 NIST time-division SQUID multiplexer. We obtained an average energy resolution of close to 3.0 eV at 6 keV for the multiplexed pixels while preserving over 99% of the events in the data sets.
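
    For context, a common way to estimate TES pulse energy is to project each baseline-subtracted record onto a normalized average-pulse template (a matched filter under a white-noise simplification). The Python sketch below illustrates that generic technique on synthetic pulses; it is not the specific hardware algorithm reported in the paper, and the pulse shape and noise level are invented.

      # Template (matched-filter) energy estimate for synthetic calorimeter pulses.
      import numpy as np

      def make_template(records):
          avg = np.mean(records, axis=0)
          avg -= avg[:50].mean()                   # remove pretrigger baseline
          avg /= avg.max()                         # unit peak amplitude
          return avg / np.dot(avg, avg)            # so dot(template, A*pulse) ~ A

      def filtered_amplitude(record, template):
          rec = record - record[:50].mean()
          return float(np.dot(template, rec))

      rng = np.random.default_rng(5)
      n = 512
      t = np.arange(n - 100)
      shape = np.exp(-t / 60.0) - np.exp(-t / 5.0) # idealized rise/fall pulse shape
      pulse = np.zeros(n)
      pulse[100:] = shape / shape.max()

      calib = [1000.0 * pulse + rng.normal(0, 5, n) for _ in range(200)]
      template = make_template(calib)
      test = 600.0 * pulse + rng.normal(0, 5, n)
      print("estimated amplitude:", filtered_amplitude(test, template))  # ~600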

  9. Proposed algorithm to improve job shop production scheduling using ant colony optimization method

    NASA Astrophysics Data System (ADS)

    Pakpahan, Eka KA; Kristina, Sonna; Setiawan, Ari

    2017-12-01

    This paper deals with the determination of a job shop production schedule in an automatic environment. In this particular environment, machines and the material handling system are integrated and controlled by a computer center where schedules are created and then used to dictate the movement of parts and the operations at each machine. This setting is usually designed to have an unmanned production process for a specified interval of time. We consider here parts with various operation requirements. Each operation requires specific cutting tools. These parts are to be scheduled on machines each having identical capability, meaning that each machine is equipped with a similar set of cutting tools and is therefore capable of processing any operation. The availability of a particular machine to process a particular operation is determined by the remaining lifetime of its cutting tools. We propose an algorithm based on the ant colony optimization method, embedded in MATLAB software, to generate a production schedule which minimizes the total processing time of the parts (makespan). We test the algorithm on data provided by a real industry, and the process shows a very short computation time. This contributes a lot to the flexibility and timeliness targeted in an automatic environment.
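
    To indicate the flavor of ant colony optimization (ACO) for scheduling, here is a heavily simplified Python sketch: ants build job dispatch orders biased by pheromone, each order is evaluated by list-scheduling onto identical machines, and pheromone is reinforced on the best ordering found. This is a toy illustration of the general technique, not the authors' MATLAB implementation; the job data, pheromone encoding, and parameters are all invented.

      # Toy ACO for ordering jobs, evaluated by list scheduling on identical machines.
      import random

      PROC_TIMES = [4, 7, 3, 9, 5, 6, 2, 8]      # processing time of each job
      N_MACHINES = 3
      N_ANTS, N_ITER = 20, 50
      ALPHA, RHO, Q = 1.0, 0.1, 10.0             # pheromone weight, evaporation, deposit

      def makespan(order):
          loads = [0] * N_MACHINES
          for job in order:                      # list scheduling: earliest-free machine
              m = loads.index(min(loads))
              loads[m] += PROC_TIMES[job]
          return max(loads)

      def construct(pheromone, rng):
          remaining = list(range(len(PROC_TIMES)))
          order = []
          for pos in range(len(PROC_TIMES)):     # pheromone[pos][job]: job at position
              weights = [pheromone[pos][j] ** ALPHA for j in remaining]
              job = rng.choices(remaining, weights=weights, k=1)[0]
              remaining.remove(job)
              order.append(job)
          return order

      def aco(seed=0):
          rng = random.Random(seed)
          n = len(PROC_TIMES)
          pheromone = [[1.0] * n for _ in range(n)]
          best_order, best_ms = None, float("inf")
          for _ in range(N_ITER):
              for _ in range(N_ANTS):
                  order = construct(pheromone, rng)
                  ms = makespan(order)
                  if ms < best_ms:
                      best_order, best_ms = order, ms
              pheromone = [[(1 - RHO) * p for p in row] for row in pheromone]
              for pos, job in enumerate(best_order):   # reinforce the best order found
                  pheromone[pos][job] += Q / best_ms
          return best_order, best_ms

      print(aco())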

  10. Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units.

    PubMed

    Li, Jian; Bloch, Pavel; Xu, Jing; Sarunic, Marinko V; Shannon, Lesley

    2011-05-01

    Fourier domain optical coherence tomography (FD-OCT) provides faster line rates, better resolution, and higher sensitivity for noninvasive, in vivo biomedical imaging compared to traditional time domain OCT (TD-OCT). However, because the signal processing for FD-OCT is computationally intensive, real-time FD-OCT applications demand powerful computing platforms to deliver acceptable performance. Graphics processing units (GPUs) have been used as coprocessors to accelerate FD-OCT by leveraging their relatively simple programming model to exploit thread-level parallelism. Unfortunately, GPUs do not "share" memory with their host processors, requiring additional data transfers between the GPU and CPU. In this paper, we implement a complete FD-OCT accelerator on a consumer grade GPU/CPU platform. Our data acquisition system uses spectrometer-based detection and a dual-arm interferometer topology with numerical dispersion compensation for retinal imaging. We demonstrate that the maximum line rate is dictated by the memory transfer time and not the processing time due to the GPU platform's memory model. Finally, we discuss how the performance trends of GPU-based accelerators compare to the expected future requirements of FD-OCT data rates.
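
    As background for the per-line computation being accelerated, the Python sketch below walks through a generic FD-OCT A-scan chain on the CPU: background removal, spectral apodization, inverse FFT to depth space, and log-magnitude. Dispersion compensation, spectrometer resampling, and all GPU/CUDA specifics from the paper are deliberately omitted; the synthetic data and parameters are assumptions for illustration.

      # Generic FD-OCT B-scan processing sketch (CPU, NumPy).
      import numpy as np

      def process_bscan(spectra, background):
          """spectra: (n_alines, n_samples) raw spectra -> log-magnitude B-scan."""
          fringes = spectra - background                 # remove reference/DC term
          window = np.hanning(spectra.shape[1])          # spectral apodization
          depth = np.fft.ifft(fringes * window, axis=1)  # to depth space
          half = depth[:, : spectra.shape[1] // 2]       # keep positive depths
          return 20.0 * np.log10(np.abs(half) + 1e-12)

      rng = np.random.default_rng(6)
      n_alines, n_samples = 512, 2048
      k = np.arange(n_samples)
      background = 100.0 * np.ones(n_samples)
      fringe = np.cos(2 * np.pi * 200 * k / n_samples)   # reflector at depth bin 200
      raw = background + fringe[None, :] + rng.normal(0, 0.1, (n_alines, n_samples))

      bscan = process_bscan(raw, background)
      print(bscan.shape, "brightest depth bin:", int(bscan[0].argmax()))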

  11. Media processors using a new microsystem architecture designed for the Internet era

    NASA Astrophysics Data System (ADS)

    Wyland, David C.

    1999-12-01

    The demands of digital image processing, communications and multimedia applications are growing more rapidly than traditional design methods can fulfill them. Previously, only custom hardware designs could provide the performance required to meet the demands of these applications. However, hardware design has reached a crisis point. Hardware design can no longer deliver a product with the required performance and cost in a reasonable time for a reasonable risk. Software based designs running on conventional processors can deliver working designs in a reasonable time and with low risk but cannot meet the performance requirements. What is needed is a media processing approach that combines very high performance, a simple programming model, complete programmability, short time to market and scalability. The Universal Micro System (UMS) is a solution to these problems. The UMS is a completely programmable (including I/O) system on a chip that combines hardware performance with the fast time to market, low cost and low risk of software designs.

  12. Securing electronic medical record in Near Field Communication using Advanced Encryption Standard (AES).

    PubMed

    Renardi, Mikhael Bagus; Basjaruddin, Noor Cholis; Rakhman, Edi

    2018-01-01

    Doctors usually require patients' medical records before medical examinations. Nevertheless, obtaining such records may take time. Hence, Near Field Communication (NFC) could be used to store and send medical records between doctors and patients. Another issue is that there could be threats such as man-in-the-middle attacks and eavesdropping; thus, a security method is required to secure the data. Furthermore, the information regarding the key and initialisation vector in NFC cannot be sent using one data package; hence, the data transmission must be done several times. Therefore, an initialisation vector that changes with each transmission is implemented, and the key utilised is based on the component agreed by both parties. This study aims at applying a cryptography process that does not disturb or hinder the speed of data transmission. The result demonstrated that the data transmitted could be secured and the encryption process did not hinder data exchange. Also, different numbers of characters in the plaintexts required different amounts of time for encryption and decryption. It could be affected by the specifications of the devices used and the processes happening in the devices.
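
    To illustrate the idea of AES encryption with an initialisation vector that changes for every transmission, here is a small Python sketch using the third-party "cryptography" package (AES-CBC with PKCS7 padding and a fresh random IV per message). It is a generic sketch, not the paper's exact scheme: key derivation from the values both parties agree on, NFC framing, and multi-packet transfer are out of scope, and the random key here is for demonstration only.

      # AES encryption with a per-message IV (requires the "cryptography" package).
      import os
      from cryptography.hazmat.primitives import padding
      from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

      KEY = os.urandom(16)   # placeholder; the paper derives the key from agreed components

      def encrypt_record(plaintext: bytes) -> bytes:
          iv = os.urandom(16)                              # new IV for every message
          padder = padding.PKCS7(128).padder()
          padded = padder.update(plaintext) + padder.finalize()
          enc = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
          return iv + enc.update(padded) + enc.finalize()  # prepend IV for the receiver

      def decrypt_record(blob: bytes) -> bytes:
          iv, ciphertext = blob[:16], blob[16:]
          dec = Cipher(algorithms.AES(KEY), modes.CBC(iv)).decryptor()
          padded = dec.update(ciphertext) + dec.finalize()
          unpadder = padding.PKCS7(128).unpadder()
          return unpadder.update(padded) + unpadder.finalize()

      record = b"hypothetical record: allergy to penicillin; last visit 2017-11-03"
      assert decrypt_record(encrypt_record(record)) == record
      print("round trip OK, ciphertext length:", len(encrypt_record(record)))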

  13. Requirements Analysis for Large Ada Programs: Lessons Learned on CCPDS- R

    DTIC Science & Technology

    1989-12-01

    when the design had matured and the SRS role was to be the tester's contract ... This approach was not optimal from the formal testing ... on the software development process is the necessity to include sufficient testing ... CPU processing load. These constraints primarily affect algorithm ... allocations and timing requirements are by-products of the software design process when multiple CSCIs are executed within

  14. Semi-automated camera trap image processing for the detection of ungulate fence crossing events.

    PubMed

    Janzen, Michael; Visser, Kaitlyn; Visscher, Darcy; MacLeod, Ian; Vujnovic, Dragomir; Vujnovic, Ksenija

    2017-09-27

    Remote cameras are an increasingly important tool for ecological research. While remote camera traps collect field data with minimal human attention, the images they collect require post-processing and characterization before they can be ecologically and statistically analyzed, which requires a substantial input of time and money from researchers. The need for post-processing is due, in part, to a high incidence of non-target images. We developed a stand-alone semi-automated computer program to aid in image processing, categorization, and data reduction by employing background subtraction and histogram rules. Unlike previous work that uses video as input, our program uses still camera trap images. The program was developed for an ungulate fence crossing project and tested against an image dataset which had been previously processed by a human operator. Our program placed images into categories representing the confidence of a particular sequence of images containing a fence crossing event. This resulted in a reduction of 54.8% of images that required further human operator characterization while retaining 72.6% of the known fence crossing events. This program can provide researchers using remote camera data with the ability to reduce the time and cost required for image post-processing and characterization. Further, we discuss how this procedure might be generalized to situations not specifically related to animal use of linear features.
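
    The core mechanism described above is background subtraction plus histogram-style rules on still images. The Python sketch below shows that general idea: compare each image with a slowly updated background and flag frames in which the fraction of strongly changed pixels exceeds a threshold. It is an illustrative stand-in, not the published program, and all thresholds and images are synthetic.

      # Background subtraction with a simple changed-pixel-fraction rule.
      import numpy as np

      def flag_images(images, diff_thresh=25, frac_thresh=0.02, alpha=0.1):
          """images: iterable of 2-D grayscale arrays. Returns flagged indices."""
          background = None
          flagged = []
          for i, img in enumerate(images):
              frame = img.astype(float)
              if background is None:
                  background = frame
                  continue
              diff = np.abs(frame - background)
              changed_fraction = (diff > diff_thresh).mean()   # histogram-style rule
              if changed_fraction > frac_thresh:
                  flagged.append(i)
              background = (1 - alpha) * background + alpha * frame  # slow update
          return flagged

      rng = np.random.default_rng(7)
      seq = [rng.integers(100, 110, (120, 160)).astype(np.uint8) for _ in range(10)]
      seq[6][40:80, 60:100] += 60                               # simulated animal
      print("flagged image indices:", flag_images(seq))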

  15. DATA QUALITY OBJECTIVES FOR SELECTING WASTE SAMPLES FOR THE BENCH STEAM REFORMER TEST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BANNING DL

    2010-08-03

    This document describes the data quality objectives to select archived samples located at the 222-S Laboratory for Fluid Bed Steam Reformer testing. The type, quantity and quality of the data required to select the samples for Fluid Bed Steam Reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluid bed steam reformer (FBSR). A determination of the adequacy of the FBSR process to treat Hanford tank waste is required. The initial step in determining the adequacy of the FBSR process is to select archived waste samples from the 222-S Laboratory that will be used to test the FBSR process. Analyses of the selected samples will be required to confirm the samples meet the testing criteria.

  16. Distributed systems status and control

    NASA Technical Reports Server (NTRS)

    Kreidler, David; Vickers, David

    1990-01-01

    Concepts are investigated for an automated status and control system for a distributed processing environment. System characteristics, data requirements for health assessment, data acquisition methods, system diagnosis methods and control methods were investigated in an attempt to determine the high-level requirements for a system which can be used to assess the health of a distributed processing system and implement control procedures to maintain an accepted level of health for the system. A potential concept for automated status and control includes the use of expert system techniques to assess the health of the system, detect and diagnose faults, and initiate or recommend actions to correct the faults. Therefore, this research included the investigation of methods by which expert systems were developed for real-time environments and distributed systems. The focus is on the features required by real-time expert systems and the tools available to develop real-time expert systems.

  17. Technological Enhancements for Personal Computers

    DTIC Science & Technology

    1992-03-01

    quicker order processing, shortening the time required to obtain critical spare parts. Customer service and spare parts tracking are facilitated by... cards speed up order processing and filing. Bar code readers speed inventory control processing. D. DEPLOYMENT PLANNING. Many units with high mobility

  18. Cognitive abilities required in time judgment depending on the temporal tasks used: A comparison of children and adults.

    PubMed

    Droit-Volet, S; Wearden, J H; Zélanti, P S

    2015-01-01

    The aim of this study was to examine age-related differences in time judgments during childhood as a function of the temporal task used. Children aged 5 and 8 years, as well as adults, were submitted to 3 temporal tasks (bisection, generalization and reproduction) with short (0.4/0.8 s) and long durations (8/16 s). Furthermore, their cognitive capacities in terms of working memory, attentional control, and processing speed were assessed by a wide battery of neuropsychological tests. The results showed that the age-related differences in time judgment were greater in the reproduction task than in the temporal discrimination tasks. This task was indeed more demanding in terms of working memory and information processing speed. In addition, the bisection task appeared to be easier for children than the generalization task, whereas these 2 tasks were similar for the adults, although the generalization task required more attention to be paid to the processing of durations. Our study thus demonstrates that it is important to understand the different cognitive processes involved in time judgment as a function of the temporal tasks used before venturing to draw conclusions about the development of time perception capabilities.

  19. Evaluating MC&A effectiveness to verify the presence of nuclear materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, P. G.; Morzinski, J. A.; Ostenak, Carl A.

    Traditional materials accounting is focused exclusively on the material balance area (MBA), and involves periodically closing a material balance based on accountability measurements conducted during a physical inventory. In contrast, the physical inventory for Los Alamos National Laboratory's near-real-time accounting system is established around processes and looks more like an item inventory. That is, the intent is not to measure material for accounting purposes, since materials have already been measured in the normal course of daily operations. A given unit process operates many times over the course of a material balance period. The product of a given unit process may move for processing within another unit process in the same MBA or may be transferred out of the MBA. Since few materials are unmeasured the physical inventory for a near-real-time process area looks more like an item inventory. Thus, the intent of the physical inventory is to locate the materials on the books and verify information about the materials contained in the books. Closing a materials balance for such an area is a matter of summing all the individual mass balances for the batches processed by all unit processes in the MBA. Additionally, performance parameters are established to measure the program's effectiveness. Program effectiveness for verifying the presence of nuclear material is required to be equal to or greater than a prescribed performance level, process measurements must be within established precision and accuracy values, physical inventory results meet or exceed performance requirements, and inventory differences are less than a target/goal quantity. This approach exceeds DOE established accounting and physical inventory program requirements. Hence, LANL is committed to this approach and to seeking opportunities for further improvement through integrated technologies. This paper will provide a detailed description of this evaluation process.

  20. Space-time-modulated stochastic processes

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano

    2017-10-01

    Starting from the physical problem associated with the Lorentzian transformation of a Poisson-Kac process in inertial frames, the concept of space-time-modulated stochastic processes is introduced for processes possessing finite propagation velocity. This class of stochastic processes provides a two-way coupling between the stochastic perturbation acting on a physical observable and the evolution of the physical observable itself, which in turn influences the statistical properties of the stochastic perturbation during its evolution. The definition of space-time-modulated processes requires the introduction of two functions: a nonlinear amplitude modulation, controlling the intensity of the stochastic perturbation, and a time-horizon function, which modulates its statistical properties, providing irreducible feedback between the stochastic perturbation and the physical observable influenced by it. The latter property is the peculiar fingerprint of this class of models that makes them suitable for extension to generic curved space-times. Considering Poisson-Kac processes as prototypical examples of stochastic processes possessing finite propagation velocity, the balance equations for the probability density functions associated with their space-time modulations are derived. Several examples highlighting the peculiarities of space-time-modulated processes are thoroughly analyzed.

  1. A novel patterning control strategy based on real-time fingerprint recognition and adaptive wafer level scanner optimization

    NASA Astrophysics Data System (ADS)

    Cekli, Hakki Ergun; Nije, Jelle; Ypma, Alexander; Bastani, Vahid; Sonntag, Dag; Niesing, Henk; Zhang, Linmiao; Ullah, Zakir; Subramony, Venky; Somasundaram, Ravin; Susanto, William; Matsunobu, Masazumi; Johnson, Jeff; Tabery, Cyrus; Lin, Chenxi; Zou, Yi

    2018-03-01

    In addition to lithography process and equipment induced variations, processes like etching, annealing, film deposition and planarization exhibit variations, each having their own intrinsic characteristics and leaving an effect, a `fingerprint', on the wafers. With ever tighter requirements for CD and overlay, controlling these process induced variations is both increasingly important and increasingly challenging in advanced integrated circuit (IC) manufacturing. For example, the on-product overlay (OPO) requirement for future nodes is approaching <3nm, requiring the allowable budget for process induced variance to become extremely small. Process variance control is seen as a bottleneck to further shrink, which drives the need for more sophisticated process control strategies. In this context we developed a novel `computational process control strategy' which provides the capability of proactive control of each individual wafer with the aim of maximizing yield, without introducing a significant impact on metrology requirements, cycle time or productivity. The complexity of the wafer process is approached by characterizing the full wafer stack and building a fingerprint library containing key patterning performance parameters such as overlay and focus. Historical wafer metrology is decomposed into dominant fingerprints using Principal Component Analysis. By associating observed fingerprints with their origin, e.g. process steps, tools and variables, we can give an inline assessment of the strength and origin of the fingerprints on every wafer. Once the fingerprint library is established, wafer-specific fingerprint correction recipes can be determined based on each wafer's processing history. Data science techniques are used in real time to ensure that the library is adaptive. To realize this concept, ASML TWINSCAN scanners play a vital role with their on-board full wafer detection and exposure correction capabilities. High density metrology data is created by the scanner for each wafer and on every layer during the lithography steps. This metrology data will be used to obtain the process fingerprints. Also, the per-exposure and per-wafer correction potential of the scanners will be utilized for improved patterning control. Additionally, the fingerprint library will provide early detection of excursions for inline root cause analysis and process optimization guidance.
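
    As an illustration of the fingerprint-library idea described above (not the authors' implementation), the following minimal Python sketch decomposes a stack of per-wafer overlay maps into dominant spatial fingerprints with Principal Component Analysis and scores a wafer against them; the data layout, the scikit-learn dependency, the number of components and the synthetic data are all assumptions.

      # Minimal sketch of fingerprint extraction via PCA (assumed data layout:
      # one flattened overlay map per wafer, measurement sites in a fixed order).
      import numpy as np
      from sklearn.decomposition import PCA

      def build_fingerprint_library(historical_maps, n_fingerprints=5):
          """historical_maps: (n_wafers, n_sites) array of overlay residuals."""
          pca = PCA(n_components=n_fingerprints)
          pca.fit(historical_maps)
          return pca  # pca.components_ holds the dominant spatial fingerprints

      def score_wafer(pca, wafer_map):
          """Project one wafer map onto the library; large scores flag strong fingerprints."""
          return pca.transform(wafer_map.reshape(1, -1))[0]

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          history = rng.normal(0.0, 1.0, size=(200, 300))   # synthetic historical metrology
          library = build_fingerprint_library(history)
          print("fingerprint scores for wafer 0:", np.round(score_wafer(library, history[0]), 2))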

  2. Etch bias inversion during EUV mask ARC etch

    NASA Astrophysics Data System (ADS)

    Lajn, Alexander; Rolff, Haiko; Wistrom, Richard

    2017-07-01

    The introduction of EUV lithography to high volume manufacturing is now within reach for the 7nm technology node and beyond (1), at least for some steps. The scheduling is in transition from long to mid-term. Thus, all contributors need to focus their efforts on the production requirements. For the photo mask industry, these requirements include the control of defectivity, CD performance and lifetime of their masks. The mask CD performance, including CD uniformity, CD targeting, and CD linearity/resolution, is predominantly determined by the photo resist performance and by the litho and etch processes. State-of-the-art chemically amplified resists exhibit an asymmetric resolution for directly and indirectly written features, which usually results in a similarly asymmetric resolution performance on the mask. This resolution gap may reach as high as multiple tens of nanometers at the mask level, depending on the chosen processes. Depending on the printing requirements of the wafer process, a reduction or even an increase of this gap may be required. A potential way of tuning via the etch process is to control the lateral CD contribution during etch. Aside from process tuning knobs like pressure, RF powers and gases, which usually also affect CD linearity and CD uniformity, the simplest knob is the etch time itself. In the normal case, an increased over etch time results in an increased CD contribution. We found, however, that the etch CD contribution of the ARC layer etch on EUV photo masks is reduced by longer over etch times. Moreover, this effect can be demonstrated to be present for different etch chambers and photo resists.

  3. 78 FR 75571 - Independent Assessment of the Process for the Review of Device Submissions; High Priority...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-12

    ... of performing the technical analysis, management assessment, and program evaluation tasks required to.... Analysis of elements of the review process (including the presubmission process, and investigational device... time to facilitate a more efficient process. This includes analysis of root causes for inefficiencies...

  4. The MiniCell™ irradiator: A new system for a new market

    NASA Astrophysics Data System (ADS)

    Clouser, James F.; Beers, Eric W.

    1998-06-01

    Since the commissioning of the first industrial Gamma Irradiator design, designers and operators of irradiation systems have been attempting to meet the specific production requirements and challenges presented to them. This objective has resulted in many different versions of irradiators currently in service today, all of which had original charters and many of which still perform very well within even the new requirements of this industry. Continuing changes in the marketplace have, however, placed pressures on existing designs due to a combination of changing dose requirements for sterilization, increased economic pressures from the specific industry served for both time and location, and the increasing variety of product types requiring processing. Additionally, certain market areas which could never economically support a typical gamma processing facility have either not been serviced, or have forced potential gamma users to transport product long distances to one of these existing facilities. The MiniCell™ removes many of the traditional barriers previously accepted in the radiation processing industry for building a processing facility in a location. Its reduced size and cost have allowed many potential users to consider in-house processing, and its ability to be quickly assembled allows it to meet market needs in a much more timely fashion than previous designs. The MiniCell system can cost-effectively meet many of the current market needs of reducing total cost of processing and is also flexible enough to process product in a wide range of industries effectively.

  5. Development and Release of a GRACE-FO "Grand Simulation" Data Set by JPL

    NASA Astrophysics Data System (ADS)

    Fahnestock, E.; Yuan, D. N.; Wiese, D. N.; McCullough, C. M.; Harvey, N.; Sakumura, C.; Paik, M.; Bertiger, W. I.; Wen, H. Y.; Kruizinga, G. L. H.

    2017-12-01

    The GRACE-FO mission, to be launched early in 2018, will require several stages of data processing to be performed within its Science Data System (SDS). In an effort to demonstrate effective implementation and inter-operation of this level 1, 2, and 3 data processing, and to verify its combined ability to recover a truth Earth gravity field to within top-level requirements, the SDS team has performed a system test which it has termed the "Grand Simulation". This process starts with iteration to converge on a mutually consistent integrated truth orbit, non-gravitational acceleration time history, and spacecraft attitude time history, generated with the truth models for all elements of the integrated system (geopotential, both GRACE-FO spacecraft, constellation of GPS spacecraft, etc.). Level 1A data products are generated and then the GPS time to onboard receiver time clock error is introduced into those products according to a realistic truth clock offset model. The various data products are noised according to current best estimate noise models, and then some are used within a precision orbit determination and clock offset estimation/recovery process. Processing from level 1A to level 1B data products uses the recovered clock offset to correct back to GPS time, and performs gap-filling, compression, etc. This exercises nearly all software pathways intended for processing actual GRACE-FO science data. Finally, a monthly gravity field is recovered and compared against the truth background field. In this talk we briefly summarize the resulting performance vs. requirements, and lessons learned in the system test process. Finally, we provide information for use of the level 1B data set by the general community for gravity solution studies and software trials in anticipation of operational GRACE-FO data. ©2016 California Institute of Technology. Government sponsorship acknowledged.

  6. The exact fundamental solution for the Benes tracking problem

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam

    2009-05-01

    The universal continuous-discrete tracking problem requires the solution of a Fokker-Planck-Kolmogorov forward equation (FPKfe) for an arbitrary initial condition. Using results from quantum mechanics, the exact fundamental solution for the FPKfe is derived for the state model of arbitrary dimension with Benes drift that requires only the computation of elementary transcendental functions and standard linear algebra techniques; no ordinary or partial differential equations need to be solved. The measurement process may be an arbitrary, discrete-time nonlinear stochastic process, and the time step size can be arbitrary. Numerical examples are included, demonstrating its utility in practical implementation.

  7. A Requirements-Based Exploration of Open-Source Software Development Projects--Towards a Natural Language Processing Software Analysis Framework

    ERIC Educational Resources Information Center

    Vlas, Radu Eduard

    2012-01-01

    Open source projects do have requirements; they are, however, mostly informal, text descriptions found in requests, forums, and other correspondence. Understanding such requirements provides insight into the nature of open source projects. Unfortunately, manual analysis of natural language requirements is time-consuming, and for large projects,…

  8. Industrial implementation of spatial variability control by real-time SPC

    NASA Astrophysics Data System (ADS)

    Roule, O.; Pasqualini, F.; Borde, M.

    2016-10-01

    Advanced technology nodes require more and more information to get the wafer process set up well. The critical dimension of components decreases following Moore's law. At the same time, the intra-wafer dispersion linked to the spatial non-uniformity of tool processes does not decrease in the same proportion. APC (Advanced Process Control) systems are being developed in the waferfab to automatically adjust and tune wafer processing, based on a large amount of process context information. They can generate and monitor complex intra-wafer process profile corrections between different process steps. This leads us to bring spatial variability under control, in real time, with our SPC (Statistical Process Control) system. This paper will outline the architecture of an integrated process control system for shape monitoring in 3D, implemented in the waferfab.

  9. 36 CFR 251.51 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of motion picture, videotaping, sound recording, or any other moving image or audio recording.... Major category—A processing or monitoring category requiring more than 50 hours of agency time to process an application for a special use authorization (processing category 6 and, in certain situations...

  10. 76 FR 62684 - Fees

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-11

    ... The NOI asked whether the Part should include a section on fingerprint processing fees. Comments... provisions for the collection of fees for processing fingerprints. The section requires the Commission to adopt preliminary rates for processing fingerprints at the same time as the annual fee schedule is set...

  11. Amplified Self-replication of DNA Origami Nanostructures through Multi-cycle Fast-annealing Process

    NASA Astrophysics Data System (ADS)

    Zhou, Feng; Zhuo, Rebecca; He, Xiaojin; Sha, Ruojie; Seeman, Nadrian; Chaikin, Paul

    We have developed a non-biological self-replication process using templated reversible association of components and irreversible linking with annealing and UV cycles. The current method requires a long annealing time, up to several days, to achieve the specific self-assembly of DNA nanostructures. In this work, we accomplished the self-replication with a shorter time and a smaller replication rate per cycle. By decreasing the ramping time, we obtained a comparable replication yield within 90 min. Systematic studies show that the temperature and annealing time play essential roles in the self-replication process. In this manner, we can amplify the self-replication process by a factor of 20 by increasing the number of cycles within the same amount of time.

  12. A Scheduling Algorithm for Replicated Real-Time Tasks

    NASA Technical Reports Server (NTRS)

    Yu, Albert C.; Lin, Kwei-Jay

    1991-01-01

    We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under fault-tolerance requirements. Our approach incorporates both the redundancy and masking technique and the imprecise computation model. Since the tasks in hard real-time systems have stringent timing constraints, redundancy and masking techniques are more appropriate than rollback techniques, which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task against the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.
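
    To make the imprecise computation trade-off concrete, the following small Python sketch (an illustration of the general model, not the paper's scheduling algorithm) gives each task a mandatory part that must always run and an optional part that only consumes whatever slack remains in the period, so result quality degrades gracefully when processing time is scarce; the task parameters, the shortest-optional-first policy and the quality measure are illustrative assumptions.

      # Hypothetical illustration of the imprecise computation model:
      # mandatory parts always run; optional parts consume leftover slack.
      from dataclasses import dataclass

      @dataclass
      class Task:
          name: str
          mandatory: float   # time units that must be completed
          optional: float    # extra time that refines the result

      def schedule_period(tasks, period):
          """Return (optional time executed per task, overall quality) for one period."""
          slack = period - sum(t.mandatory for t in tasks)
          if slack < 0:
              raise ValueError("mandatory parts alone exceed the period")
          executed = {}
          for t in sorted(tasks, key=lambda t: t.optional):   # simple policy: shortest optional first
              run = min(t.optional, slack)
              executed[t.name] = run
              slack -= run
          total_optional = sum(t.optional for t in tasks)
          quality = 1.0 if total_optional == 0 else sum(executed.values()) / total_optional
          return executed, quality

      if __name__ == "__main__":
          tasks = [Task("T1", 2.0, 1.0), Task("T2", 3.0, 2.0), Task("T3", 1.0, 0.5)]
          print(schedule_period(tasks, period=8.0))   # some optional work is dropped, quality < 1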

  13. GEOTAIL Spacecraft historical data report

    NASA Technical Reports Server (NTRS)

    Boersig, George R.; Kruse, Lawrence F.

    1993-01-01

    The purpose of this GEOTAIL Historical Report is to document ground processing operations information gathered on the GEOTAIL mission during processing activities at the Cape Canaveral Air Force Station (CCAFS). It is hoped that this report may aid management analysis, improve integration processing and forecasting of processing trends, and reduce real-time schedule changes. The GEOTAIL payload is the third Delta 2 Expendable Launch Vehicle (ELV) mission to document historical data. Comparisons of planned versus as-run schedule information are displayed. Information will generally fall into the following categories: (1) payload stay times (payload processing facility/hazardous processing facility/launch complex-17A); (2) payload processing times (planned, actual); (3) schedule delays; (4) integrated test times (experiments/launch vehicle); (5) unique customer support requirements; (6) modifications performed at facilities; (7) other appropriate information (Appendices A & B); and (8) lessons learned (reference Appendix C).

  14. RVC-CAL library for endmember and abundance estimation in hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Lazcano López, R.; Madroñal Quintín, D.; Juárez Martínez, E.; Sanz Álvaro, C.

    2015-10-01

    Hyperspectral imaging (HI) collects information from across the electromagnetic spectrum, covering a wide range of wavelengths. Although this technology was initially developed for remote sensing and earth observation, its multiple advantages - such as high spectral resolution - led to its application in other fields, such as cancer detection. However, this new field has specific requirements; for instance, it must meet strict timing specifications, since all the potential applications - like surgical guidance or in vivo tumor detection - imply real-time requirements. Achieving these timing requirements is a great challenge, as hyperspectral images generate extremely high volumes of data to process. Thus, new research lines are studying new processing techniques, and the most relevant ones are related to system parallelization. In that line, this paper describes the construction of a new hyperspectral processing library for the RVC-CAL language, which is specifically designed for multimedia applications and allows multithreading compilation and system parallelization. This paper presents the development of the library functions required to implement two of the four stages of the hyperspectral imaging processing chain: endmember and abundance estimation. The results obtained show that the library achieves speedups of approximately 30% compared to existing hyperspectral image analysis software; specifically, the endmember estimation step reaches an average speedup of 27.6%, which saves almost 8 seconds of execution time. The results also show the existence of some bottlenecks, such as the communication interfaces among the different actors, due to the volume of data to transfer. Finally, it is shown that the library considerably simplifies the implementation process. Thus, experimental results show the potential of an RVC-CAL library for analyzing hyperspectral images in real time, as it provides enough resources to study the system performance.

  15. 8 CFR 1208.10 - Failure to appear at a scheduled hearing before an immigration judge; failure to follow...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... before an immigration judge; failure to follow requirements for biometrics and other biographical... to follow requirements for biometrics and other biographical information processing. Failure to... requirements for biometrics and other biographical information within the time allowed will result in dismissal...

  16. 8 CFR 1208.10 - Failure to appear at a scheduled hearing before an immigration judge; failure to follow...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... before an immigration judge; failure to follow requirements for biometrics and other biographical... to follow requirements for biometrics and other biographical information processing. Failure to... requirements for biometrics and other biographical information within the time allowed will result in dismissal...

  17. 8 CFR 1208.10 - Failure to appear at a scheduled hearing before an immigration judge; failure to follow...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... before an immigration judge; failure to follow requirements for biometrics and other biographical... to follow requirements for biometrics and other biographical information processing. Failure to... requirements for biometrics and other biographical information within the time allowed will result in dismissal...

  18. 8 CFR 1208.10 - Failure to appear at a scheduled hearing before an immigration judge; failure to follow...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... before an immigration judge; failure to follow requirements for biometrics and other biographical... to follow requirements for biometrics and other biographical information processing. Failure to... requirements for biometrics and other biographical information within the time allowed will result in dismissal...

  19. 33 CFR 148.107 - What additional information may be required?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... HOMELAND SECURITY (CONTINUED) DEEPWATER PORTS DEEPWATER PORTS: GENERAL Application for a License § 148.107...), in coordination with MARAD, may determine whether compliance with the requirement is important to processing the application within the time prescribed by the Act. (3) If the requirement is important to...

  20. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, generally, the same calculations are repeated at every time step. However, Do-all or Do-across techniques cannot be applied to parallel processing of the simulation, since data dependencies exist from the end of one iteration to the beginning of the next, and data input and output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block which consists of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract parallelism from the calculation and are assigned to processors using optimal static scheduling at compile time, in order to reduce the large run-time overhead that would otherwise be caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to extract advantageous features of static scheduling algorithms to the maximum extent.
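
    The following minimal Python sketch illustrates compile-time list scheduling in general, not OSCAR's actual scheduler: tasks with precedence constraints are assigned to a fixed number of processors with an earliest-start heuristic; the task graph, the costs and the longest-task-first priority are made-up assumptions.

      # Illustrative static scheduling of fine-grain tasks onto processors at "compile time".
      def list_schedule(tasks, deps, costs, n_procs):
          """tasks: iterable of task ids; deps: {task: set of predecessors};
          costs: {task: execution time}. Returns {task: (processor, start, finish)}."""
          finish, schedule = {}, {}
          proc_free = [0.0] * n_procs
          remaining = set(tasks)
          while remaining:
              ready = [t for t in remaining if deps.get(t, set()) <= finish.keys()]
              ready.sort(key=lambda t: -costs[t])              # simple priority: longest task first
              for t in ready:
                  earliest = max([finish[p] for p in deps.get(t, set())], default=0.0)
                  proc = min(range(n_procs), key=lambda i: max(proc_free[i], earliest))
                  start = max(proc_free[proc], earliest)
                  proc_free[proc] = finish[t] = start + costs[t]
                  schedule[t] = (proc, start, finish[t])
                  remaining.discard(t)
          return schedule

      if __name__ == "__main__":
          deps = {"c": {"a", "b"}, "d": {"c"}}
          costs = {"a": 2.0, "b": 1.0, "c": 1.5, "d": 0.5}
          for task, slot in list_schedule(["a", "b", "c", "d"], deps, costs, n_procs=2).items():
              print(task, slot)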

  1. Rapid visual grouping and figure-ground processing using temporally structured displays.

    PubMed

    Cheadle, Samuel; Usher, Marius; Müller, Hermann J

    2010-08-23

    We examine the time course of visual grouping and figure-ground processing. Figure (contour) and ground (random-texture) elements were flickered with different phases (i.e., contour and background are alternated), requiring the observer to group information within a pre-specified time window. It was found that this grouping has a high temporal resolution: less than 20 ms for smooth contours, and less than 50 ms for line conjunctions with sharp angles. Furthermore, the grouping process takes place without explicit knowledge of the phase of the elements, and it requires a cumulative build-up of information. The results are discussed in relation to the neural mechanism for visual grouping and figure-ground segregation. Copyright 2010 Elsevier Ltd. All rights reserved.

  2. Planning assistance for the NASA 30/20 GHz program. Network control architecture study.

    NASA Technical Reports Server (NTRS)

    Inukai, T.; Bonnelycke, B.; Strickland, S.

    1982-01-01

    Network Control Architecture for a 30/20 GHz flight experiment system operating in the Time Division Multiple Access (TDMA) was studied. Architecture development, identification of processing functions, and performance requirements for the Master Control Station (MCS), diversity trunking stations, and Customer Premises Service (CPS) stations are covered. Preliminary hardware and software processing requirements as well as budgetary cost estimates for the network control system are given. For the trunking system control, areas covered include on board SS-TDMA switch organization, frame structure, acquisition and synchronization, channel assignment, fade detection and adaptive power control, on board oscillator control, and terrestrial network timing. For the CPS control, they include on board processing and adaptive forward error correction control.

  3. 40 CFR 63.8005 - What requirements apply to my process vessels?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... temperature, as required by § 63.1257(d)(3)(iii)(B), you may elect to measure the liquid temperature in the... the daily averages specified in § 63.998(b)(3). An operating block is a period of time that is equal to the time from the beginning to end of an emission episode or sequence of emission episodes. (g...

  4. 40 CFR 63.8005 - What requirements apply to my process vessels?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... temperature, as required by § 63.1257(d)(3)(iii)(B), you may elect to measure the liquid temperature in the... the daily averages specified in § 63.998(b)(3). An operating block is a period of time that is equal to the time from the beginning to end of an emission episode or sequence of emission episodes. (g...

  5. 40 CFR 63.8005 - What requirements apply to my process vessels?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... temperature, as required by § 63.1257(d)(3)(iii)(B), you may elect to measure the liquid temperature in the... the daily averages specified in § 63.998(b)(3). An operating block is a period of time that is equal to the time from the beginning to end of an emission episode or sequence of emission episodes. (g...

  6. Age-Related Differences in Reaction Time Task Performance in Young Children

    ERIC Educational Resources Information Center

    Kiselev, Sergey; Espy, Kimberlay Andrews; Sheffield, Tiffany

    2009-01-01

    Performance of reaction time (RT) tasks was investigated in young children and adults to test the hypothesis that age-related differences in processing speed supersede a "global" mechanism and are a function of specific differences in task demands and processing requirements. The sample consisted of 54 4-year-olds, 53 5-year-olds, 59…

  7. Assessment Competence through In Situ Practice for Preservice Educators

    ERIC Educational Resources Information Center

    Hurley, Kimberly S.

    2018-01-01

    Effective assessment is the cornerstone of the teaching and learning process and a benchmark of teaching competency. P-12 assessment in physical activity can be complex and dynamic, often requiring a set of skills developed over time through trial and error. Novice teachers have limited time to hone an assessment process that can showcase their…

  8. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    PubMed Central

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.

    2017-01-01

    The most compelling requirements for visual tracking systems are a high detection accuracy and an adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses for the tracking system are more prone to errors; therefore a trade-off between accuracy and speed is required. This paper aims to achieve the two requirements together by implementing an accurate and time-efficient tracking system. In this paper, an eye-to-hand visual system that has the ability to automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) combined with the CHT algorithm to find the proper colour of the tracked target. The target was attached to the end-effector of a six degree of freedom (DOF) robot performing a pick-and-place task. Two eye-to-hand cameras with image averaging filters are used to obtain clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system, named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed for the calculation of the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot with an overall tracking error of 0.25 mm, and the effectiveness of the CRCHT technique in saving up to 60% of the overall time required for image processing. PMID:28067860
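
    As a rough illustration of this kind of pipeline (not the authors' code), the sketch below restricts the Circular Hough Transform to a region of interest around the last known target position and gates candidate circles by a colour distance computed in CIELAB space; the OpenCV dependency, the window size and all thresholds are assumptions.

      # Illustrative ROI-limited circle detection with a CIELAB colour gate (OpenCV).
      import cv2
      import numpy as np

      def track_sphere(frame_bgr, last_xy, target_lab, roi_half=80, max_colour_dist=25.0):
          """Return (x, y, r) of the detected sphere in full-frame coordinates, or None."""
          h, w = frame_bgr.shape[:2]
          x0, y0 = last_xy
          xa, xb = max(0, x0 - roi_half), min(w, x0 + roi_half)
          ya, yb = max(0, y0 - roi_half), min(h, y0 + roi_half)
          roi = frame_bgr[ya:yb, xa:xb]                      # search window around last position

          gray = cv2.medianBlur(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY), 5)
          circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                                     param1=100, param2=30, minRadius=5, maxRadius=60)
          if circles is None:
              return None

          lab = cv2.cvtColor(roi, cv2.COLOR_BGR2LAB).astype(np.float32)
          for x, y, r in np.round(circles[0]).astype(int):
              if 0 <= y < lab.shape[0] and 0 <= x < lab.shape[1]:
                  # simple Euclidean colour distance at the circle centre (Delta-E-like gate)
                  dist = float(np.linalg.norm(lab[y, x] - np.asarray(target_lab, np.float32)))
                  if dist <= max_colour_dist:
                      return (x + xa, y + ya, r)             # back to full-frame coordinates
          return None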

  9. Advanced information processing system: Local system services

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura; Alger, Linda; Whittredge, Roy; Stasiowski, Peter

    1989-01-01

    The Advanced Information Processing System (AIPS) is a multi-computer architecture composed of hardware and software building blocks that can be configured to meet a broad range of application requirements. The hardware building blocks are fault-tolerant, general-purpose computers, fault-and damage-tolerant networks (both computer and input/output), and interfaces between the networks and the computers. The software building blocks are the major software functions: local system services, input/output, system services, inter-computer system services, and the system manager. The foundation of the local system services is an operating system with the functions required for a traditional real-time multi-tasking computer, such as task scheduling, inter-task communication, memory management, interrupt handling, and time maintenance. Resting on this foundation are the redundancy management functions necessary in a redundant computer and the status reporting functions required for an operator interface. The functional requirements, functional design and detailed specifications for all the local system services are documented.

  10. Definition of an auxiliary processor dedicated to real-time operating system kernels

    NASA Technical Reports Server (NTRS)

    Halang, Wolfgang A.

    1988-01-01

    In order to increase the efficiency of process control data processing, it is necessary to enhance the productivity of real time high level languages and to automate the task administration, because presently 60 percent or more of the applications are still programmed in assembly languages. This may be achieved by migrating apt functions for the support of process control oriented languages into the hardware, i.e., by new architectures. Whereas numerous high level languages have already been defined or realized, there are no investigations yet on hardware assisted implementation of real time features. The requirements to be fulfilled by languages and operating systems in hard real time environment are summarized. A comparison of the most prominent languages, viz. Ada, HAL/S, LTR, Pearl, as well as the real time extensions of FORTRAN and PL/1, reveals how existing languages meet these demands and which features still need to be incorporated to enable the development of reliable software with predictable program behavior, thus making it possible to carry out a technical safety approval. Accordingly, Pearl proved to be the closest match to the mentioned requirements.

  11. Error analysis of real time and post processed orbit determination of GFO using GPS tracking

    NASA Technical Reports Server (NTRS)

    Schreiner, William S.

    1991-01-01

    The goal of the Navy's GEOSAT Follow-On (GFO) mission is to map the topography of the world's oceans in both real time (operational) and post processed modes. Currently, the best candidate for supplying the required orbit accuracy is the Global Positioning System (GPS). The purpose of this fellowship was to determine the expected orbit accuracy for GFO in both the real time and post-processed modes when using GPS tracking. This report presents the work completed through the ending date of the fellowship.

  12. Resource conflict detection and removal strategy for nondeterministic emergency response processes using Petri nets

    NASA Astrophysics Data System (ADS)

    Zeng, Qingtian; Liu, Cong; Duan, Hua

    2016-09-01

    Correctness of an emergency response process specification is critical to emergency mission success. Therefore, errors in the specification should be detected and corrected at build-time. In this paper, we propose a resource conflict detection approach and removal strategy for emergency response processes constrained by resources and time. In this kind of emergency response process, there are two timing functions representing the minimum and maximum execution time for each activity, respectively, and many activities require resources to be executed. Based on the RT_ERP_Net, the Petri net model of such a resource- and time-constrained emergency response process, the earliest time to start each activity and the ideal execution time of the process can be obtained. To detect and remove the resource conflicts in the process, conflict detection algorithms and a priority-activity-first resolution strategy are given. In this way, the real execution time for each activity is obtained and a conflict-free RT_ERP_Net is constructed by adding virtual activities. Experiments show that the proposed resolution strategy can substantially shorten the execution time of the whole process.
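
    The sketch below illustrates the general idea of build-time conflict detection and a priority-first resolution on a much simpler activity model (fixed start times, durations and unit-capacity resources); it is an illustrative stand-in for, not a reimplementation of, the paper's Petri-net algorithms, and all activity names and numbers are assumed.

      # Illustrative resource-conflict detection and priority-activity-first resolution.
      def detect_conflicts(activities):
          """activities: {name: (start, duration, resource, priority)}.
          Returns pairs of activities that overlap in time on the same unit-capacity resource."""
          conflicts, items = [], list(activities.items())
          for i in range(len(items)):
              for j in range(i + 1, len(items)):
                  (a, (sa, da, ra, _)), (b, (sb, db, rb, _)) = items[i], items[j]
                  if ra == rb and sa < sb + db and sb < sa + da:
                      conflicts.append((a, b))
          return conflicts

      def resolve(activities):
          """Repeatedly delay the lower-priority activity of a conflicting pair until no conflicts remain."""
          acts = dict(activities)
          while True:
              conflicts = detect_conflicts(acts)
              if not conflicts:
                  return acts
              a, b = conflicts[0]
              loser = a if acts[a][3] < acts[b][3] else b
              winner = b if loser == a else a
              start, dur, res, prio = acts[loser]
              acts[loser] = (acts[winner][0] + acts[winner][1], dur, res, prio)   # start after the winner

      if __name__ == "__main__":
          acts = {"triage": (0, 3, "ambulance", 2), "transport": (1, 4, "ambulance", 1)}
          print(resolve(acts))   # 'transport' is delayed until 'triage' releases the ambulance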

  13. Modulation and synchronization technique for MF-TDMA system

    NASA Technical Reports Server (NTRS)

    Faris, Faris; Inukai, Thomas; Sayegh, Soheil

    1994-01-01

    This report addresses modulation and synchronization techniques for a multi-frequency time division multiple access (MF-TDMA) system with onboard baseband processing. The types of synchronization techniques analyzed are asynchronous (conventional) TDMA, preambleless asynchronous TDMA, bit synchronous timing with a preamble, and preambleless bit synchronous timing. Among these alternatives, preambleless bit synchronous timing simplifies onboard multicarrier demultiplexer/demodulator designs (about 2:1 reduction in mass and power), requires smaller onboard buffers (10:1 to approximately 3:1 reduction in size), and provides better frame efficiency as well as lower onboard processing delay. Analysis and computer simulation illustrate that this technique can support a bit rate of up to 10 Mbit/s (or higher) with proper selection of design parameters. High bit rate transmission may require Doppler compensation and multiple phase error measurements. The recommended modulation technique for bit synchronous timing is coherent QPSK with differential encoding for the uplink and coherent QPSK for the downlink.

  14. Gibberellin-driven growth in elf3 mutants requires PIF4 and PIF5

    USDA-ARS?s Scientific Manuscript database

    The regulatory connections between the circadian clock and hormone signaling are essential to understand, as these two regulatory processes work together to time growth processes relative to predictable environmental events. Gibberellins (GAs) are phytohormones that control many growth processes thr...

  15. Large Composite Structures Processing Technologies for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Clinton, R. G., Jr.; Vickers, J. H.; McMahon, W. M.; Hulcher, A. B.; Johnston, N. J.; Cano, R. J.; Belvin, H. L.; McIver, K.; Franklin, W.; Sidwell, D.

    2001-01-01

    Significant efforts have been devoted to establishing the technology foundation to enable the progression to large-scale composite structures fabrication. We are not capable today of fabricating many of the composite structures envisioned for the second generation reusable launch vehicle (RLV). Conventional 'aerospace' manufacturing and processing methodologies (fiber placement, autoclave, tooling) will require substantial investment and lead time to scale up. Out-of-autoclave process techniques will require aggressive efforts to mature the selected technologies and to scale up. Focused composite processing technology development and demonstration programs utilizing the building block approach are required to enable envisioned second generation RLV large composite structures applications. Government/industry partnerships have demonstrated success in this area and represent the best combination of skills and capabilities to achieve this goal.

  16. A real-time dashboard for managing pathology processes.

    PubMed

    Halwani, Fawaz; Li, Wei Chen; Banerjee, Diponkar; Lessard, Lysanne; Amyot, Daniel; Michalowski, Wojtek; Giffen, Randy

    2016-01-01

    The Eastern Ontario Regional Laboratory Association (EORLA) is a newly established association of all the laboratory and pathology departments of Eastern Ontario that currently includes facilities from eight hospitals. All surgical specimens for EORLA are processed in one central location, the Department of Pathology and Laboratory Medicine (DPLM) at The Ottawa Hospital (TOH), where the rapid growth and influx of surgical and cytology specimens has created many challenges in ensuring the timely processing of cases and reports. Although the entire process is maintained and tracked in a clinical information system, this system lacks pre-emptive warnings that can help management address issues as they arise. Dashboard technology provides automated, real-time visual clues that could be used to alert management when a case or specimen is not being processed within predefined time frames. We describe the development of a dashboard helping pathology clinical management to make informed decisions on specimen allocation and tracking. The dashboard was designed and developed in two phases, following a prototyping approach. The first prototype of the dashboard helped monitor and manage pathology processes at the DPLM. The use of this dashboard helped to uncover operational inefficiencies and contributed to an improvement of turn-around time within The Ottawa Hospital's DPLM. It also allowed the discovery of additional requirements, leading to a second prototype that provides finer-grained, real-time information about individual cases and specimens. We successfully developed a dashboard that enables managers to address delays and bottlenecks in specimen allocation and tracking. This support ensures that pathology reports are provided within time frame standards required for high-quality patient care. Given the importance of rapid diagnostics for a number of diseases, the use of real-time dashboards within pathology departments could contribute to improving the quality of patient care beyond EORLA.
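
    As an illustration of the kind of pre-emptive warning such a dashboard can raise (not the EORLA implementation), the pandas sketch below flags specimens whose elapsed time in their current processing stage exceeds a predefined threshold; the stage names, thresholds and column layout are assumptions.

      # Illustrative overdue-case flagging for a pathology dashboard (pandas).
      import pandas as pd

      STAGE_LIMITS_H = {"accessioning": 4, "grossing": 24, "embedding": 24, "reporting": 48}  # assumed, hours

      def flag_overdue(cases: pd.DataFrame, now: pd.Timestamp) -> pd.DataFrame:
          """cases needs columns: case_id, stage, stage_entered (datetime)."""
          elapsed_h = (now - cases["stage_entered"]).dt.total_seconds() / 3600.0
          limits = cases["stage"].map(STAGE_LIMITS_H)
          out = cases.assign(elapsed_h=elapsed_h.round(1), limit_h=limits)
          return out[out["elapsed_h"] > out["limit_h"]].sort_values("elapsed_h", ascending=False)

      if __name__ == "__main__":
          now = pd.Timestamp("2016-01-05 12:00")
          cases = pd.DataFrame({
              "case_id": ["S16-001", "S16-002", "S16-003"],
              "stage": ["grossing", "reporting", "accessioning"],
              "stage_entered": pd.to_datetime(["2016-01-04 09:00", "2016-01-02 08:00", "2016-01-05 10:30"]),
          })
          print(flag_overdue(cases, now))   # overdue cases, most delayed first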

  17. Automated inspection of hot steel slabs

    DOEpatents

    Martin, R.J.

    1985-12-24

    The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes. 5 figs.

  18. Automated inspection of hot steel slabs

    DOEpatents

    Martin, Ronald J.

    1985-01-01

    The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes.

  19. Labour time required for piglet castration with isoflurane-anaesthesia using shared and stationary inhaler devices.

    PubMed

    Weber, Sabrina; Das, Gürbüz; Waldmann, Karl-Heinz; Gauly, Matthias

    2014-01-01

    Isoflurane-anaesthesia combined with an analgesic represents a welfare-friendly method of pain mitigation for castration of piglets. However, it requires an inhaler device, which is uneconomic for small farms. Sharing a device among farms may be an economical option if the shared use does not increase labour time and the resulting costs. This study aimed to investigate the amount and components of labour time required for piglet castration with isoflurane anaesthesia performed with stationary and shared devices. Piglets (N = 1579) were anaesthetised with isoflurane (using either stationary or shared devices) and castrated. The stationary devices were used in a group (n = 5) of larger farms (84 sows/farm on average), whereas smaller farms (n = 7; 32 sows/farm on average) shared one device. Each farm was visited four times and labour time for each process step was recorded. The complete process included machine set-up, anaesthesia and castration by a practitioner, and preparation, collection and transport of piglets by a farmer. Labour time for the complete process was higher (P = 0.012) on farms sharing a device (266 s/piglet) than on farms using stationary devices (177 s/piglet), due to increased time for preparation (P = 0.055), castration (P = 0.026) and packing (P = 0.010) when sharing a device. However, components of the time budget of farms using stationary or shared devices did not differ significantly (P > 0.05). Costs arising from the time spent by farmers did not differ considerably between the use of stationary (0.28 Euro per piglet) and shared (0.26 Euro) devices. It is concluded that the costs arising from the increased labour time due to sharing a device can be considered marginal, since the high expenses originating from purchasing an inhaler device are shared among several farms.

  20. Influence of heat transfer rates on pressurization of liquid/slush hydrogen propellant tanks

    NASA Technical Reports Server (NTRS)

    Sasmal, G. P.; Hochstein, J. I.; Hardy, T. L.

    1993-01-01

    A multi-dimensional computational model of the pressurization process in a liquid/slush hydrogen tank is developed and used to study the influence of heat flux rates at the ullage boundaries on the process. The new model computes these rates and performs an energy balance for the tank wall, whereas previous multi-dimensional models required a priori specification of the boundary heat flux rates. Analyses of both liquid hydrogen and slush hydrogen pressurization were performed to expose differences between the two processes. Graphical displays are presented to establish the dependence of pressurization time, pressurant mass required, and other parameters of interest on ullage boundary heat flux rates and pressurant mass flow rate. Detailed velocity fields and temperature distributions are presented for selected cases to further illuminate the details of the pressurization process. It is demonstrated that ullage boundary heat flux rates do significantly affect the pressurization process and that minimizing heat loss from the ullage and maximizing pressurant flow rate minimizes the mass of pressurant gas required to pressurize the tank. It is further demonstrated that proper dimensionless scaling of pressure and time permits all the pressure histories examined during this study to be displayed as a single curve.

  1. Dual vs. single computer monitor in a Canadian hospital Archiving Department: a study of efficiency and satisfaction.

    PubMed

    Poder, Thomas G; Godbout, Sylvie T; Bellemare, Christian

    This paper describes a comparative study of clinical coding by Archivists (also known as Clinical Coders in some other countries) using single and dual computer monitors. In the present context, processing a record corresponds to checking the available information; searching for the missing physician information; and finally, performing clinical coding. We collected data for each Archivist during her use of the single monitor for 40 hours and during her use of the dual monitor for 20 hours. During the experimental periods, Archivists did not perform other related duties, so we were able to measure the real-time processing of records. To control for the type of records and their impact on the process time required, we categorised the cases as major or minor, based on whether acute care or day surgery was involved. Overall results show that 1,234 records were processed using a single monitor and 647 records using a dual monitor. The time required to process a record was significantly higher (p = .071) with a single monitor compared to a dual monitor (19.83 vs. 18.73 minutes). However, the percentage of major cases was significantly higher (p = .000) in the single monitor group compared to the dual monitor group (78% vs. 69%). As a consequence, we adjusted our results, which reduced the difference in time required to process a record between the two systems from 1.1 to 0.61 minutes. Thus, the net real-time difference was only 37 seconds in favour of the dual monitor system. Extrapolated over a 5-year period, this would represent a time savings of 3.1% and generate a net cost savings of $7,729 CAD (Canadian dollars) for each workstation that devoted 35 hours per week to the processing of records. Finally, satisfaction questionnaire responses indicated a high level of satisfaction and support for the dual-monitor system. The implementation of a dual-monitor system in a hospital archiving department is an efficient option in the context of scarce human resources and has the strong support of Archivists.
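
    A minimal sketch reproducing the reported arithmetic from the figures quoted above (0.61 minutes saved per record on a 19.83-minute baseline, 35 hours per week over 5 years); the hourly labour cost is not stated in the abstract, so the sketch backs it out from the reported $7,729 CAD and labels it as an inferred value.

      # Reproducing the dual-monitor time-savings arithmetic from the reported figures.
      saved_min_per_record = 0.61          # adjusted difference per record, minutes
      baseline_min_per_record = 19.83      # single-monitor processing time, minutes

      print("seconds saved per record:", round(saved_min_per_record * 60))                              # ~37 s
      print("relative time savings: %.1f%%" % (100 * saved_min_per_record / baseline_min_per_record))   # ~3.1%

      hours_per_week, weeks_per_year, years = 35, 52, 5
      total_hours = hours_per_week * weeks_per_year * years
      hours_saved = total_hours * saved_min_per_record / baseline_min_per_record
      reported_savings_cad = 7729.0
      print("hours saved over 5 years: %.0f" % hours_saved)                                             # ~280 h
      print("implied labour cost: %.2f CAD/h (inferred, not reported)" % (reported_savings_cad / hours_saved))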

  2. Fully automated processing of fMRI data in SPM: from MRI scanner to PACS.

    PubMed

    Maldjian, Joseph A; Baer, Aaron H; Kraft, Robert A; Laurienti, Paul J; Burdette, Jonathan H

    2009-01-01

    Here we describe the Wake Forest University Pipeline, a fully automated method for the processing of fMRI data using SPM. The method includes fully automated data transfer and archiving from the point of acquisition, real-time batch script generation, distributed grid processing, interface to SPM in MATLAB, error recovery and data provenance, DICOM conversion and PACS insertion. It has been used for automated processing of fMRI experiments, as well as for the clinical implementation of fMRI and spin-tag perfusion imaging. The pipeline requires no manual intervention, and can be extended to any studies requiring offline processing.

  3. A Systems Approach to Nitrogen Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goins, Bobby

    A systems based approach will be used to evaluate the nitrogen delivery process. This approach involves principles found in Lean, Reliability, Systems Thinking, and Requirements. This unique combination of principles and thought process yields a very in-depth look into the system to which it is applied. By applying a systems based approach to the nitrogen delivery process there should be improvements in cycle time, efficiency, and a reduction in the required number of personnel needed to sustain the delivery process. This will in turn reduce the amount of demurrage charges that the site incurs. In addition there should be less frustration associated with the delivery process.

  4. A Systems Approach to Nitrogen Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goins, Bobby

    A systems based approach will be used to evaluate the nitrogen delivery process. This approach involves principles found in Lean, Reliability, Systems Thinking, and Requirements. This unique combination of principles and thought process yields a very in-depth look into the system to which it is applied. By applying a systems based approach to the nitrogen delivery process there should be improvements in cycle time, efficiency, and a reduction in the required number of personnel needed to sustain the delivery process. This will in turn reduce the amount of demurrage charges that the site incurs. In addition there should be less frustration associated with the delivery process.

  5. Hurricane Wave Topography and Directional Wave Spectra in Near Real-Time

    DTIC Science & Technology

    2005-09-30

    Develop and/or modify the real-time operating system and analysis techniques and programs of the NASA Scanning Radar Altimeter (SRA) to process the...Wayne Wright is responsible for the real-time operating system of the SRA and making whatever modifications are required to enable near real-time

  6. One Step at a Time: SBM as an Incremental Process.

    ERIC Educational Resources Information Center

    Conrad, Mark

    1995-01-01

    Discusses incremental SBM budgeting and answers questions regarding resource equity, bookkeeping requirements, accountability, decision-making processes, and purchasing. Approaching site-based management as an incremental process recognizes that every school system engages in some level of site-based decisions. Implementation can be gradual and…

  7. Accessing Information in Working Memory: Can the Focus of Attention Grasp Two Elements at the Same Time?

    ERIC Educational Resources Information Center

    Oberauer, Klaus; Bialkova, Svetlana

    2009-01-01

    Processing information in working memory requires selective access to a subset of working-memory contents by a focus of attention. Complex cognition often requires joint access to 2 items in working memory. How does the focus select 2 items? Two experiments with an arithmetic task and 1 with a spatial task investigate time demands for successive…

  8. On demand processing of climate station sensor data

    NASA Astrophysics Data System (ADS)

    Wöllauer, Stephan; Forteva, Spaska; Nauss, Thomas

    2015-04-01

    Large sets of climate stations with several sensors produce large amounts of fine-grained time series data. To gain value from this data, further processing and aggregation are needed. We present a flexible system to process the raw data on demand. Several aspects need to be considered so that scientists can conveniently use the processed data for their specific research interests. First of all, it is not feasible to pre-process the data in advance because of the great variety of ways it can be processed. Therefore, in this approach only the raw measurement data is archived in a database. When a scientist requires a time series, the system processes the required raw data according to the user-defined request. Depending on the type of measurement sensor, some data validation is needed, because the climate station sensors may produce erroneous data. Currently, three validation methods are integrated in the on-demand processing system and are optionally selectable. The most basic validation method checks whether measurement values are within a predefined range of possible values. For example, it may be assumed that an air temperature sensor measures values within a range of -40 °C to +60 °C. Values outside of this range are considered measurement errors by this validation method and consequently rejected. Another validation method checks for outliers in the stream of measurement values by defining a maximum change rate between subsequent measurement values. The third validation method compares measurement data to the average values of neighboring stations and rejects measurement values with a high variance. These quality checks are optional, because especially extreme climatic values may be valid but rejected by some quality check method. Another important task is the preparation of measurement data in terms of time. The observed stations measure values in intervals of minutes to hours. Scientists often need a coarser temporal resolution (days, months, years). Therefore, the interval of time aggregation is selectable for the processing. For some use cases it is desirable that the resulting time series be as continuous as possible. To meet these requirements, the processing system includes techniques to fill gaps of missing values by interpolating measurement values with data from adjacent stations, using available contemporaneous measurements from the respective stations as training datasets. Alongside processing of sensor values, we created interactive visualization techniques to get a quick overview of a big amount of archived time series data.
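
    To make the described validation and aggregation steps concrete, here is a minimal pandas sketch (not the authors' system) that applies the range check and the maximum-change-rate check to a raw temperature series and then aggregates it to daily means; the thresholds, frequencies and example values are assumptions, and the neighbour-station comparison and gap filling are omitted.

      # Illustrative on-demand validation and aggregation of raw sensor values (pandas).
      import pandas as pd

      def validate_and_aggregate(raw: pd.Series,
                                 vmin=-40.0, vmax=60.0,   # physical range check (air temperature, °C)
                                 max_step=5.0,            # max allowed change between consecutive samples
                                 freq="D"):               # target temporal resolution (daily)
          """raw: measurement values indexed by timestamp. Returns aggregated means over `freq`."""
          s = raw.astype(float).copy()
          s[(s < vmin) | (s > vmax)] = float("nan")        # range check
          s[s.diff().abs() > max_step] = float("nan")      # outlier / spike check
          return s.resample(freq).mean()                   # coarser temporal resolution

      if __name__ == "__main__":
          idx = pd.date_range("2015-01-01", periods=8, freq="h")
          raw = pd.Series([2.1, 2.3, 99.0, 2.4, 2.6, -50.0, 2.8, 3.0], index=idx)
          print(validate_and_aggregate(raw))   # implausible values are dropped before averaging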

  9. Detailed requirements document for the Interactive Financial Management System (IFMS), volume 1

    NASA Technical Reports Server (NTRS)

    Dodson, D. B.

    1975-01-01

    The detailed requirements for phase 1 (online fund control, subauthorization accounting, and accounts receivable functional capabilities) of the Interactive Financial Management System (IFMS) are described. This includes information on the following: systems requirements, performance requirements, test requirements, and production implementation. Most of the work is centered on systems requirements, and includes discussions on the following processes: resources authority, allotment, primary work authorization, reimbursable order acceptance, purchase request, obligation, cost accrual, cost distribution, disbursement, subauthorization performance, travel, accounts receivable, payroll, property, edit table maintenance, end-of-year, backup input. Other subjects covered include: external systems interfaces, general inquiries, general report requirements, communication requirements, and miscellaneous. Subjects covered under performance requirements include: response time, processing volumes, system reliability, and accuracy. Under test requirements come test data sources, general test approach, and acceptance criteria. Under production implementation come data base establishment, operational stages, and operational requirements.

  10. Real-Time Optical Image Processing Techniques

    DTIC Science & Technology

    1988-10-31

    pursued through the analysis, design, and fabrication of pulse frequency modulated halftone screens and the modification of micro-channel spatial...required for non-linear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness...pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse frequency modulated halftone screens and the

  11. Assessment of atmospheric moisture harvesting by direct cooling

    NASA Astrophysics Data System (ADS)

    Gido, Ben; Friedler, Eran; Broday, David M.

    2016-12-01

    The enormous amount of water vapor present in the atmosphere may serve as a potential water resource. An index is proposed for assessing the feasibility and energy requirements of atmospheric moisture harvesting by a direct cooling process. A climate-based analysis of different locations reveals the global potential of this process. We demonstrate that the Moisture Harvesting Index (MHI) can be used for assessing the energy requirements of atmospheric moisture harvesting. The efficiency of atmospheric moisture harvesting is highly weather and climate dependent, with the smallest estimated energy requirement found in the tropical regions of the Philippines (0.23 kWh/L). Less favorable locations have much higher energy demands for the operation of an atmospheric moisture harvesting device. In such locations, using the MHI to select the optimal operation time periods (during the day and the year) can reduce the specific energy requirements of the process dramatically. Still, using current technology the energy requirement of atmospheric moisture harvesting by a direct air cooling process is significantly higher than that of desalination by reverse osmosis.

  12. Requirements and Usage of NVM in Advanced Onboard Data Processing Systems

    NASA Technical Reports Server (NTRS)

    Some, R.

    2001-01-01

    This viewgraph presentation gives an overview of the requirements and uses of non-volatile memory (NVM) in advanced onboard data processing systems. Supercomputing in space presents the only viable approach to the downlink bandwidth problem (not all acquired data can be sent to Earth), to controlling constellations of cooperating satellites, to reducing mission operating costs, and to real-time intelligent decision making and science data gathering. Details are given on the REE vision and impact on NASA and Department of Defense missions, objectives of REE, baseline architecture, and issues. NVM uses and requirements are listed.

  13. Implications of the Turing machine model of computation for processor and programming language design

    NASA Astrophysics Data System (ADS)

    Hunter, Geoffrey

    2004-01-01

    A computational process is classified according to the theoretical model that is capable of executing it; computational processes that require a non-predeterminable amount of intermediate storage for their execution are Turing-machine (TM) processes, while those whose storage is predeterminable are Finite Automaton (FA) processes. Simple processes (such as a traffic light controller) are executable by a finite automaton, whereas the most general kind of computation requires a Turing machine for its execution. This implies that a TM process must have a non-predeterminable amount of memory allocated to it at intermediate instants of its execution, i.e. dynamic memory allocation. Many processes encountered in practice are TM processes. The implication for computational practice is that the hardware (CPU) architecture and its operating system must facilitate dynamic memory allocation, and that the programming language used to specify TM processes must have statements with the semantic attribute of dynamic memory allocation, for in Alan Turing's thesis on computation (1936) the "standard description" of a process is invariant over the most general data that the process is designed to process; i.e. the program describing the process should never have to be modified to allow for differences in the data that is to be processed in different instantiations; i.e. data-invariant programming. Any non-trivial program is partitioned into sub-programs (procedures, subroutines, functions, modules, etc.). Examination of the calls/returns between the subprograms reveals that they are nodes in a tree structure; this tree structure is independent of the programming language used to encode (define) the process. Each sub-program typically needs some memory for its own use (to store values intermediate between its received data and its computed results); this locally required memory is not needed before the subprogram commences execution, and it is not needed after its execution terminates; it may be allocated as its execution commences and deallocated as its execution terminates, and if the amount of this local memory is not known until just before execution commences, then it is essential that it be allocated dynamically as the first action of its execution. This dynamically allocated/deallocated storage of each subprogram's intermediate values conforms with the stack discipline (last allocated = first to be deallocated), an incidental benefit of which is automatic overlaying of variables. This stack-based dynamic memory allocation was a semantic implication of the nested block structure that originated in the ALGOL-60 programming language. ALGOL-60 was a TM language, because the amount of memory allocated on subprogram (block/procedure) entry (for arrays, etc.) was computable at execution time. A more general requirement of a Turing machine process is code generation at run time; this mandates access to the source language processor (compiler/interpreter) during execution of the process. This fundamental aspect of computer science is important to the future of system design, because it has been overlooked throughout the 55 years since modern computing began in 1948. The popular computer systems of this first half-century of computing were constrained by compile-time (or even operating-system boot-time) memory allocation, and were thus limited to executing FA processes. The practical effect was that the distinction between the data-invariant program and its variable data was blurred; programmers had to make trial-and-error executions, modifying the program's compile-time constants (array dimensions) to iterate towards the values required at run time by the data being processed. This era of trial-and-error computing still persists; it pervades the culture of current (2003) computing practice.
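
    As a minimal illustration of the stack-discipline allocation described above (a sketch only, not taken from the paper), the following Python fragment allocates a run-time-sized local buffer on each call and releases it on return; the data sizes are hypothetical.

    ```python
    def process(block):
        # Local storage sized by the data actually received at run time
        # (data-invariant program: no compile-time array dimension is needed).
        scratch = [0] * len(block)          # allocated as execution commences
        for i, value in enumerate(block):
            scratch[i] = value * value      # intermediate values
        return sum(scratch)                 # storage reclaimed on return

    # Nested calls obey the stack discipline: the last frame allocated is the
    # first to be released, whatever the run-time sizes happen to be.
    print(process([1, 2, 3]), process(list(range(1000))))
    ```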

  14. HL-20 operations and support requirements for the Personnel Launch System mission

    NASA Technical Reports Server (NTRS)

    Morris, W. D.; White, Nancy H.; Caldwell, Ronald G.

    1993-01-01

    The processing, mission planning, and support requirements were defined for the HL-20 lifting-body configuration that can serve as a Personnel Launch System. These requirements were based on the assumption of an operating environment that incorporates aircraft and airline support methods and techniques where they are applicable to HL-20 operations. The study covered the complete turnaround process for the HL-20, from landing through launch, and mission operations, but did not address the support requirements of the launch vehicle except for the integrated activities. Support is defined in terms of manpower, staffing levels, facilities, ground support equipment, maintenance/sparing requirements, and turnaround processing time. Support results were drawn from two contracted studies, plus an in-house analysis used to define the maintenance manpower. The results of the contracted studies were used as the basis for a stochastic simulation of the support environment to determine the sufficiency of support and the effect of variance on vehicle processing. Results indicate that the levels of support defined for the HL-20 through this process are sufficient to achieve the desired flight rate of eight flights per year.

  15. Real-time processing of radar return on a parallel computer

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1992-01-01

    NASA is working with the FAA to demonstrate the feasibility of pulse Doppler radar as a candidate airborne sensor to detect low-altitude windshears. The need to provide the pilot with timely information about possible hazards has motivated a demand for real-time processing of the radar return. Investigated here is parallel processing as a means of accommodating the high data rates required. A PC-hosted parallel computer based on the transputer is used to investigate issues in real-time concurrent processing of radar signals. A transputer network is made up of an array of single-instruction-stream processors that can be networked in a variety of ways. They are easily reconfigured, and software development is largely independent of the particular network topology. The performance of the transputer is evaluated in light of the computational requirements. A number of algorithms have been implemented on the transputers in OCCAM, a language specially designed for parallel processing. These include signal processing algorithms such as the Fast Fourier Transform (FFT), pulse-pair, and autoregressive modeling, as well as routing software to support concurrency. The most computationally intensive task is estimating the spectrum. Two approaches have been taken to this problem, the first and most conventional of which is to use the FFT. By using table look-ups for the basis functions and other optimizing techniques, an algorithm has been developed that is fast enough for real-time operation. The other approach is to model the signal as an autoregressive process and estimate the spectrum from the model coefficients. This technique is attractive because it does not suffer from the spectral leakage problem inherent in the FFT. Benchmark tests indicate that autoregressive modeling is feasible in real time.
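
    The abstract names two spectrum-estimation approaches, the FFT and autoregressive modeling; the following Python sketch shows textbook versions of both (a windowed periodogram and a Yule-Walker AR estimate). It is an illustration under assumed parameters, not the OCCAM code developed on the transputers.

    ```python
    import numpy as np

    def fft_power_spectrum(x, fs):
        """Windowed periodogram via the FFT (the conventional approach)."""
        x = np.asarray(x, dtype=float)
        spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2 / len(x)
        return np.fft.rfftfreq(len(x), d=1.0 / fs), spec

    def ar_spectrum(x, order, fs, nfreq=256):
        """Autoregressive (Yule-Walker) spectrum estimate; avoids FFT leakage."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        # Biased autocorrelation estimates r[0..order]
        r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(order + 1)])
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        a = np.linalg.solve(R, r[1:])                       # AR coefficients
        sigma2 = r[0] - np.dot(a, r[1:])                    # innovation variance
        freqs = np.linspace(0.0, fs / 2.0, nfreq)
        z = np.exp(-2j * np.pi * freqs / fs)
        denom = np.abs(1.0 - sum(a[k] * z ** (k + 1) for k in range(order))) ** 2
        return freqs, sigma2 / denom

    # Example on a synthetic Doppler-like tone in noise:
    fs = 1000.0
    t = np.arange(1024) / fs
    sig = np.cos(2 * np.pi * 120.0 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
    f_fft, p_fft = fft_power_spectrum(sig, fs)
    f_ar, p_ar = ar_spectrum(sig, order=8, fs=fs)
    ```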

  16. Temporal texture of associative encoding modulates recall processes.

    PubMed

    Tibon, Roni; Levy, Daniel A

    2014-02-01

    Binding aspects of an experience that are distributed over time is an important element of episodic memory. In the current study, we examined how the temporal complexity of an experience may govern the processes required for its retrieval. We recorded event-related potentials during episodic cued recall following pair associate learning of concurrently and sequentially presented object-picture pairs. Cued recall success effects over anterior and posterior areas were apparent in several time windows. In anterior locations, these recall success effects were similar for concurrently and sequentially encoded pairs. However, in posterior sites clustered over parietal scalp the effect was larger for the retrieval of sequentially encoded pairs. We suggest that anterior aspects of the mid-latency recall success effects may reflect working-with-memory operations or direct access recall processes, while more posterior aspects reflect recollective processes which are required for retrieval of episodes of greater temporal complexity. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. A Cartesian reflex assessment of face processing.

    PubMed

    Polewan, Robert J; Vigorito, Christopher M; Nason, Christopher D; Block, Richard A; Moore, John W

    2006-03-01

    Commands to blink were embedded within pictures of faces and simple geometric shapes or forms. The faces and shapes were conditioned stimuli (CSs), and the required responses were conditioned responses, or more properly, Cartesian reflexes (CRs). As in classical conditioning protocols, response times (RTs) were measured from CS onset. RTs provided a measure of the processing cost (PC) of attending to a CS. A PC is the extra time required to respond relative to RTs to unconditioned stimulus (US) commands presented alone. PCs reflect the interplay between attentional processing of the informational content of a CS and its signaling function with respect to the US command; this interplay resulted in longer RTs to embedded commands. Differences between the PCs of faces and geometric shapes represent a starting place for a new mental chronometry based on the traditional idea that differences in RT reflect differences in information processing.

  18. System analysis for technology transfer readiness assessment of horticultural postharvest

    NASA Astrophysics Data System (ADS)

    Hayuningtyas, M.; Djatna, T.

    2018-04-01

    Postharvest technologies are becoming abundant, but only a few are applicable and useful for wider community purposes. This problem calls for an approach that assesses the readiness level of a technology for transfer. The proposed system assesses technology readiness on levels 1-9 and aims to minimize the time spent on technology transfer at every level, so that the time required from the selection process onward can be kept to a minimum. The problem was addressed by using the Relief method to rank postharvest technologies at each level by weighting feasible criteria, and PERT (Program Evaluation and Review Technique) to build the schedule. The ranking of postharvest technologies in the field of horticulture shows that they are able to pass level 7; such technologies can be developed further to pilot scale, and the time required to reach technological readiness, estimated with PERT, has an optimistic duration of 7.9 years. Readiness level 9 indicates that a technology has been tested under actual conditions and is tied to an estimated production price compared with competitors. This system can be used to determine the readiness of technology innovations that are derived from agricultural raw materials and pass the defined stages.
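
    For the PERT component mentioned above, the classical three-point estimate can be written in a few lines of Python; the duration figures in the sketch below are hypothetical, not the study's data.

    ```python
    def pert_expected_time(optimistic, most_likely, pessimistic):
        """Classical PERT estimate: te = (o + 4m + p) / 6, variance ((p - o) / 6)^2."""
        te = (optimistic + 4.0 * most_likely + pessimistic) / 6.0
        variance = ((pessimistic - optimistic) / 6.0) ** 2
        return te, variance

    # Hypothetical duration estimates (in years) for one readiness level:
    print(pert_expected_time(7.9, 9.0, 11.0))
    ```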

  19. Autoverification process improvement by Six Sigma approach: Clinical chemistry & immunoassay.

    PubMed

    Randell, Edward W; Short, Garry; Lee, Natasha; Beresford, Allison; Spencer, Margaret; Kennell, Marina; Moores, Zoë; Parry, David

    2018-05-01

    This study examines the effectiveness of a project to enhance an autoverification (AV) system through application of Six Sigma (DMAIC) process improvement strategies. Similar AV systems set up at three sites underwent examination and modification to produce improved systems, while monitoring the proportion of samples autoverified, the time required for manual review and verification, sample processing time, and the characteristics of tests not autoverified. This information was used to identify areas for improvement and monitor the impact of changes. Use of reference-range-based criteria had the greatest impact on the proportion of tests autoverified. To improve the AV process, reference-range-based criteria were replaced with extreme value limits based on a 99.5% test result interval, delta check criteria were broadened, and new specimen consistency rules were implemented. Decision guidance tools were also developed to assist staff using the AV system. The mean proportion of tests and samples autoverified improved from <62% for samples and <80% for tests to >90% for samples and >95% for tests across all three sites. The new AV system significantly decreased turn-around time and total sample review time (to about a third); however, time spent on manual review of held samples almost tripled. There was no evidence of compromise to the quality of the testing process, and <1% of samples held for exceeding delta check or extreme limits required corrective action. The Six Sigma (DMAIC) process improvement methodology was successfully applied to AV systems, resulting in an increase in overall test and sample AV to >90%, improved turn-around time, and reduced time for manual verification, with no obvious compromise to quality or error detection. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
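
    The abstract describes extreme-value limits and delta checks as the core autoverification rules; the following Python sketch illustrates how such rules might be expressed. The analyte, limits, and delta threshold are hypothetical examples, not the criteria used at the study sites.

    ```python
    # Hypothetical thresholds; real AV criteria must come from the laboratory.
    EXTREME_LIMITS = {"sodium": (110.0, 160.0)}   # 99.5% result interval, mmol/L
    DELTA_LIMITS = {"sodium": 15.0}               # allowed change vs. previous result

    def autoverify(test, value, previous=None):
        """Return True to release automatically, False to hold for manual review."""
        low, high = EXTREME_LIMITS[test]
        if not (low <= value <= high):             # extreme-value rule
            return False
        if previous is not None and abs(value - previous) > DELTA_LIMITS[test]:
            return False                           # delta-check rule
        return True

    print(autoverify("sodium", 141.0, previous=138.0))  # True: released
    print(autoverify("sodium", 162.0))                  # False: held
    ```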

  20. DREAMS and IMAGE: A Model and Computer Implementation for Concurrent, Life-Cycle Design of Complex Systems

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.

    1995-01-01

    Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.

  1. The effects of quantity and depth of processing on children's time perception.

    PubMed

    Arlin, M

    1986-08-01

    Two experiments were conducted to investigate the effects of quantity and depth of processing on children's time perception. These experiments tested the appropriateness of two adult time-perception models (attentional and storage size) for younger ages. Children were given stimulus sets of equal time which varied by level of processing (deep/shallow) and quantity (list length). In the first experiment, 28 children in Grade 6 reproduced presentation times of various quantities of pictures under deep (living/nonliving categorization) or shallow (repeating label) conditions. Students also compared pairs of durations. In the second experiment, 128 children in Grades K, 2, 4, and 6 reproduced presentation times under similar conditions with three or six pictures and with deep or shallow processing requirements. Deep processing led to decreased estimation of time. Higher quantity led to increased estimation of time. Comparative judgments were influenced by quantity. The interaction between age and depth of processing was significant. Older children were more affected by depth differences than were younger children. Results were interpreted as supporting different aspects of each adult model as explanations of children's time perception. The processing effect supported the attentional model and the quantity effect supported the storage size model.

  2. The role of palaeoecological records in assessing ecosystem services

    NASA Astrophysics Data System (ADS)

    Jeffers, Elizabeth S.; Nogué, Sandra; Willis, Katherine J.

    2015-03-01

    Biological conservation and environmental management are increasingly focussing on the preservation and restoration of ecosystem services (i.e. the benefits that humans receive from the natural functioning of healthy ecosystems). Over the past decade there has been a rapid increase in the number of palaeoecological studies that have contributed to conservation of biodiversity and management of ecosystem processes; however, there are relatively few instances in which attempts have been made to estimate the continuity of ecosystem goods and services over time. How resistant is an ecosystem service to environmental perturbations? And, if damaged, how long does it take an ecosystem service to recover? Both questions are highly relevant to conservation and management of landscapes that are important for ecosystem service provision and require an in-depth understanding of the way ecosystems function in space and time. An understanding of time is particularly relevant for those ecosystem services - be they supporting, provisioning, regulating or cultural - that involve processes that vary over a decadal (or longer) timeframe. Most trees, for example, have generation times >50 years. Understanding the response of forested ecosystems to environmental perturbations and therefore the continuity of the ecosystem services they provide for human well-being - be it, for example, carbon draw-down (regulating service) or timber (provisioning service) - requires datasets that reflect the typical replacement rates in these systems and the lifecycle of processes that alter their trajectories of change. Therefore, data are required that span decadal to millennial time-scales. Very rarely, however, is this information available from neo-ecological datasets, and in many ecosystem service assessments this lack of a temporal record is acknowledged as a significant information gap. This review aims to address this knowledge gap by examining the type and nature of palaeoecological datasets that might be critical to assessing the persistence of ecosystem services across a variety of time scales. Specifically we examine the types of palaeoecological records that can inform on the dynamics of ecosystem processes and services over time - and their response to complex environmental changes. We focus on three key areas: a) exploring the suitability of palaeoecological records for examining variability in space and time of ecosystem processes; b) using palaeoecological data to determine the resilience and persistence of ecosystem services and goods over time in response to drivers of change; and c) how best to translate raw palaeoecological data into the relevant currencies required for ecosystem service assessments.

  3. Care processes associated with quicker door-in-door-out times for patients with ST-elevation-myocardial infarction requiring transfer: results from a statewide regionalization program.

    PubMed

    Glickman, Seth W; Lytle, Barbara L; Ou, Fang-Shu; Mears, Greg; O'Brien, Sean; Cairns, Charles B; Garvey, J Lee; Bohle, David J; Peterson, Eric D; Jollis, James G; Granger, Christopher B

    2011-07-01

    The ability to rapidly identify patients with ST-segment elevation-myocardial infarction (STEMI) at hospitals without percutaneous coronary intervention (PCI) and transfer them to hospitals with PCI capability is critical to STEMI regionalization efforts. Our objective was to assess the association of prehospital, emergency department (ED), and hospital processes of care implemented as part of a statewide STEMI regionalization program with door-in-door-out times at non-PCI hospitals. Door-in-door-out times for 436 STEMI patients at 55 non-PCI hospitals were determined before (July 2005 to September 2005) and after (January 2007 to March 2007) a year-long implementation of standardized protocols as part of a statewide regionalization program (Reperfusion of Acute Myocardial Infarction in North Carolina Emergency Departments, RACE). The association of 8 system care processes (encompassing emergency medical services [EMS], ED, and hospital settings) with door-in-door-out times was determined using multivariable linear regression. Median door-in-door-out times improved significantly with the intervention (before: 97.0 minutes, interquartile range, 56.0 to 160.0 minutes; after: 58.0 minutes, interquartile range, 35.0 to 90.0 minutes; P<0.0001). Hospital, ED, and EMS care processes were each independently associated with shorter door-in-door-out times (-17.7 [95% confidence interval, -27.5 to -7.9]; -10.1 [95% confidence interval, -19.0 to -1.1], and -7.3 [95% confidence interval, -13.0 to -1.5] minutes for each additional hospital, ED, and EMS process, respectively). Combined, adoption of EMS processes was associated with the shortest median treatment times (44 versus 138 minutes for hospitals that adopted all EMS processes versus none). Prehospital, ED, and hospital processes of care were independently associated with shorter door-in-door-out times for STEMI patients requiring transfer. Adoption of several EMS processes was associated with the largest reduction in treatment times. These findings highlight the need for an integrated, system-based approach to improving STEMI care.
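
    As an illustration of the multivariable linear regression described above, the following Python sketch fits door-in-door-out times against the number of hospital, ED, and EMS processes adopted. The data are synthetic (simulated with the effect sizes reported in the abstract); this is not the study's analysis code.

    ```python
    import numpy as np

    # Synthetic illustration only: each row is one transfer; columns hold the number
    # of hospital, ED, and EMS care processes adopted; y is door-in-door-out time (min).
    rng = np.random.default_rng(0)
    n = 200
    X = rng.integers(0, 4, size=(n, 3)).astype(float)
    y = 120.0 - 17.7 * X[:, 0] - 10.1 * X[:, 1] - 7.3 * X[:, 2] + rng.normal(0.0, 20.0, n)

    # Multivariable linear regression via ordinary least squares (intercept column added).
    A = np.column_stack([np.ones(n), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("intercept and per-process effects (minutes):", np.round(coef, 1))
    ```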

  4. Critical Review of NOAA's Observation Requirements Process

    NASA Astrophysics Data System (ADS)

    LaJoie, M.; Yapur, M.; Vo, T.; Templeton, A.; Bludis, D.

    2017-12-01

    NOAA's Observing Systems Council (NOSC) maintains a comprehensive database of user observation requirements. The requirements collection process engages NOAA subject matter experts to document and effectively communicate the specific environmental observation measurements (parameters and attributes) needed to produce operational products and pursue research objectives. Documenting user observation requirements in a structured and standardized manner and framework enables NOAA to assess its needs across organizational lines in an impartial, objective, and transparent manner. This structure provides the foundation for selecting, designing, developing, and acquiring observing technologies, systems, and architectures; for budget and contract formulation and decision-making; and for assessing in a repeatable fashion the productivity, efficiency, and optimization of NOAA's observing system enterprise. User observation requirements are captured independently from observing technologies. Therefore, they can be addressed by a variety of current or expected observing capabilities and retain the flexibility to be remapped to new and evolving technologies. NOAA's current inventory of user observation requirements was collected over a ten-year period, and there have been many changes in policies, mission priorities, and funding levels during this time. In light of these changes, the NOSC initiated a critical, in-depth review to examine all aspects of user observation requirements and associated processes during 2017. This presentation provides background on the NOAA requirements process, major milestones and outcomes of the critical review, and plans for evolving and connecting observing requirements processes in the next year.

  5. Getting to the point: Rapid point selection and variable density InSAR time series for urban deformation monitoring

    NASA Astrophysics Data System (ADS)

    Spaans, K.; Hooper, A. J.

    2017-12-01

    The short revisit time and high data acquisition rates of current satellites have resulted in increased interest in the development of deformation monitoring and rapid disaster response capability using InSAR. Fast, efficient data processing methodologies are required to deliver the timely results necessary for this, and also to limit the computing resources required to process the large quantities of data being acquired. In contrast to volcano or earthquake applications, urban monitoring requires high resolution processing in order to differentiate movements between buildings, or between buildings and the surrounding land. Here we present Rapid time series InSAR (RapidSAR), a method that can efficiently update high resolution time series of interferograms, and demonstrate its effectiveness over urban areas. The RapidSAR method estimates the coherence of pixels on an interferogram-by-interferogram basis. This allows for rapid ingestion of newly acquired images without the need to reprocess the earlier acquired part of the time series. The coherence estimate is based on ensembles of neighbouring pixels with similar amplitude behaviour through time, which are identified on an initial set of interferograms and need be re-evaluated only occasionally. By taking into account the scattering properties of points during coherence estimation, a high quality coherence estimate is achieved, allowing point selection at full resolution. The individual point selection maximizes the amount of information that can be extracted from each interferogram, as no selection compromise has to be reached between high and low coherence interferograms. In other words, points do not have to be coherent throughout the time series to contribute to the deformation time series. We demonstrate the effectiveness of our method over urban areas in the UK. We show how the algorithm successfully extracts high density time series from full resolution Sentinel-1 interferograms and distinguishes clearly between buildings and surrounding vegetation or streets. The fact that new interferograms can be processed separately from the remainder of the time series helps manage the high data volumes, both in space and time, generated by current missions.
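
    A standard sample coherence estimator, evaluated over an ensemble of amplitude-similar "sibling" pixels as described in the abstract, can be sketched in a few lines of Python; this is an illustrative formula, not the RapidSAR implementation.

    ```python
    import numpy as np

    def coherence(master, slave):
        """Sample coherence magnitude over an ensemble of sibling pixels
        (complex SLC values from the two acquisitions of one interferogram)."""
        master, slave = np.asarray(master), np.asarray(slave)
        num = np.abs(np.sum(master * np.conj(slave)))
        den = np.sqrt(np.sum(np.abs(master) ** 2) * np.sum(np.abs(slave) ** 2))
        return num / den

    # Illustration on synthetic sibling values:
    rng = np.random.default_rng(3)
    master = rng.normal(size=50) + 1j * rng.normal(size=50)
    slave = master + 0.3 * (rng.normal(size=50) + 1j * rng.normal(size=50))
    print(round(coherence(master, slave), 2))
    ```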

  6. FAWKES Information Management for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Spetka, S.; Ramseyer, G.; Tucker, S.

    2010-09-01

    Current space situational awareness assets can be fully utilized by managing their inputs and outputs in real time. Ideally, sensors are tasked to perform specific functions to maximize their effectiveness. Many sensors are capable of collecting more data than is needed for a particular purpose, leading to the potential to enhance a sensor’s utilization by allowing it to be re-tasked in real time when it is determined that sufficient data has been acquired to meet the first task’s requirements. In addition, understanding a situation involving fast-traveling objects in space may require inputs from more than one sensor, leading to a need for information sharing in real time. Observations that are not processed in real time may be archived to support forensic analysis for accidents and for long-term studies. Space Situational Awareness (SSA) requires an extremely robust distributed software platform to appropriately manage the collection and distribution for both real-time decision-making as well as for analysis. FAWKES is being developed as a Joint Space Operations Center (JSPOC) Mission System (JMS) compliant implementation of the AFRL Phoenix information management architecture. It implements a pub/sub/archive/query (PSAQ) approach to communications designed for high performance applications. FAWKES provides an easy to use, reliable interface for structuring parallel processing, and is particularly well suited to the requirements of SSA. In addition to supporting point-to-point communications, it offers an elegant and robust implementation of collective communications, to scatter, gather and reduce values. A query capability is also supported that enhances reliability. Archived messages can be queried to re-create a computation or to selectively retrieve previous publications. PSAQ processes express their role in a computation by subscribing to their inputs and by publishing their results. Sensors on the edge can subscribe to inputs by appropriately authorized users, allowing dynamic tasking capabilities. Previously, the publication of sensor data collected by mobile systems was demonstrated. Thumbnails of infrared imagery that were imaged in real time by an aircraft [1] were published over a grid. This airborne system subscribed to requests for and then published the requested detailed images. In another experiment a system employing video subscriptions [2] drove the analysis of live video streams, resulting in a published stream of processed video output. We are currently implementing an SSA system that uses FAWKES to deliver imagery from telescopes through a pipeline of processing steps that are performed on high performance computers. PSAQ facilitates the decomposition of a problem into components that can be distributed across processing assets from the smallest sensors in space to the largest high performance computing (HPC) centers, as well as the integration and distribution of the results, all in real time. FAWKES supports the real-time latency requirements demanded by all of these applications. It also enhances reliability by easily supporting redundant computation. This study shows how FAWKES/PSAQ is utilized in SSA applications, and presents performance results for latency and throughput that meet these needs.

  7. Near-Term Fetuses Process Temporal Features of Speech

    ERIC Educational Resources Information Center

    Granier-Deferre, Carolyn; Ribeiro, Aurelie; Jacquet, Anne-Yvonne; Bassereau, Sophie

    2011-01-01

    The perception of speech and music requires processing of variations in spectra and amplitude over different time intervals. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, but whether they can process complex auditory streams, such as speech sequences and more specifically their temporal variations, fast or…

  8. 44 CFR 9.8 - Public notice requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... for review and comment at the earliest possible time and throughout the decision-making process; and upon completion of this process, provide the public with an accounting of its final decisions (see § 9... for public involvement in the decision-making process through the provision of public notice upon...

  9. National Board Certification: It's Time for Preschool Teachers!

    ERIC Educational Resources Information Center

    Gillentine, Jonathan

    2010-01-01

    National Board Certification is a voluntary process by which teachers of students ages 3 through 18 demonstrate accomplished teaching. This article describes the process and certification requirements for one certificate--the Early Childhood/Generalist (ECG), for teachers of children ages 3 through 8. The National Board Certification process was…

  10. DATA QUALITY OBJECTIVES FOR SELECTING WASTE SAMPLES FOR BENCH-SCALE REFORMER TREATABILITY STUDIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BANNING DL

    2011-02-11

    This document describes the data quality objectives to select archived samples located at the 222-S Laboratory for Bench-Scale Reforming testing. The type, quantity, and quality of the data required to select the samples for Fluid Bed Steam Reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluidized bed steam reformer. A determination of the adequacy of the fluidized bed steam reformer process to treat Hanford tank waste is required. The initial step in determining the adequacy of the fluidized bed steam reformer process is to select archived waste samples from the 222-S Laboratory that will be used in bench scale tests. Analyses of the selected samples will be required to confirm the samples meet the shipping requirements and for comparison to the bench scale reformer (BSR) test sample selection requirements.

  11. Using Life-Cycle Human Factors Engineering to Avoid $2.4 Million in Costs: Lessons Learned from NASA's Requirements Verification Process for Space Payloads

    NASA Technical Reports Server (NTRS)

    Carr, Daniel; Ellenberger, Rich

    2008-01-01

    The Human Factors Implementation Team (HFIT) process has been used to verify human factors requirements for NASA International Space Station (ISS) payloads since 2003, resulting in $2.4 million in avoided costs. This cost benefit has been realized by greatly reducing the need to process time-consuming formal waivers (exceptions) for individual requirements violations. The HFIT team, which includes astronauts and their technical staff, acts as the single source for human factors requirements integration of payloads. HFIT has the authority to provide inputs during early design phases, thus eliminating many potential requirements violations in a cost-effective manner. In those instances where it is not economically or technically feasible to meet the precise metric of a given requirement, HFIT can work with the payload engineers to develop common sense solutions and formally document that the resulting payload design does not materially affect the astronaut's ability to operate and interact with the payload. The HFIT process is fully ISO 9000 compliant and works concurrently with NASA's formal systems engineering work flow. Due to its success with payloads, the HFIT process is being adapted and extended to ISS systems hardware. Key aspects of this process are also being considered for NASA's Space Shuttle replacement, the Crew Exploration Vehicle.

  12. Functional and performance requirements of the next NOAA-Kansas City computer system

    NASA Technical Reports Server (NTRS)

    Mosher, F. R.

    1985-01-01

    The development of the Advanced Weather Interactive Processing System for the 1990's (AWIPS-90) will result in more timely and accurate forecasts with improved cost effectiveness. As part of the AWIPS-90 initiative, the National Meteorological Center (NMC), the National Severe Storms Forecast Center (NSSFC), and the National Hurricane Center (NHC) are to receive upgrades of interactive processing systems. This National Center Upgrade program will support the specialized inter-center communications, data acquisition, and processing needs of these centers. The missions, current capabilities and general functional requirements for the upgrade to the NSSFC are addressed. System capabilities are discussed along with the requirements for the upgraded system.

  13. EOS Data Products Latency and Reprocessing Evaluation

    NASA Astrophysics Data System (ADS)

    Ramapriyan, H. K.; Wanchoo, L.

    2012-12-01

    NASA's Earth Observing System (EOS) Data and Information System (EOSDIS) program has been processing, archiving, and distributing EOS data since the launch of the Terra platform in 1999. The EOSDIS Distributed Active Archive Centers (DAACs) and Science-Investigator-led Processing Systems (SIPSs) are generating over 5000 unique products with a daily average volume of 1.7 Petabytes. Initially EOSDIS had requirements to process data products within 24 hours of receiving all inputs needed for generating them. Thus, generally, the latency would be slightly over 24 and 48 hours after satellite data acquisition, respectively, for Level 1 and Level 2 products. Due to budgetary constraints these requirements were relaxed, with the requirement being to avoid a growing backlog of unprocessed data. However, the data providers have been generating these products in as timely a manner as possible. The reduction in costs of computing hardware has helped considerably. It is of interest to analyze the actual latencies achieved over the past several years in processing and inserting the data products into the EOSDIS archives for the users to support various scientific studies such as land processes, oceanography, hydrology, atmospheric science, cryospheric science, etc. The instrument science teams have continuously evaluated the data products since the launches of EOS satellites and improved the science algorithms to provide high quality products. Data providers have periodically reprocessed the previously acquired data with these improved algorithms. The reprocessing campaigns run for an extended time period in parallel with forward processing, since all data starting from the beginning of the mission need to be reprocessed. Each reprocessing activity involves more data than the previous reprocessing. The historical record of the reprocessing times would be of interest to future missions, especially those involving large volumes of data and/or computational loads due to complexity of algorithms. Evaluation of latency and reprocessing times requires some of the product metadata information, such as the beginning and ending time of data acquisition, processing date, and version number. This information for each product is made available by data providers to the ESDIS Metrics System (EMS). The EMS replaced the earlier ESDIS Data Gathering and Reporting System (EDGRS) in FY2005. Since then it has collected information about data products' ingest, archive, and distribution. The analysis of latencies and reprocessing times will provide insight into the data provider process and identify potential areas of weakness in providing timely data to the user community. Delays may be caused by events such as system unavailability, disk failures, delay in level 0 data delivery, availability of input data, network problems, and power failures. Analysis of metrics will highlight areas for focused examination of root causes for delays. The purposes of this study are to: 1) perform a detailed analysis of latency of selected instrument products for the last 6 years; 2) analyze the reprocessed data from various data providers to determine the times taken for reprocessing campaigns; 3) identify potential reasons for any anomalies in these metrics.
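
    Latency in this sense can be computed directly from the product metadata fields mentioned in the abstract (end of data acquisition and archive insert time); the Python sketch below uses hypothetical granule timestamps.

    ```python
    from datetime import datetime, timezone

    def product_latency(acquisition_end, archive_insert):
        """Latency of one granule: time from end of data acquisition to archive insert."""
        return archive_insert - acquisition_end

    # Hypothetical granule metadata (the kind of fields reported to the EMS):
    acq_end = datetime(2012, 7, 1, 12, 5, tzinfo=timezone.utc)
    inserted = datetime(2012, 7, 2, 16, 40, tzinfo=timezone.utc)
    print(product_latency(acq_end, inserted))   # 1 day, 4:35:00
    ```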

  14. ITOHealth: a multimodal middleware-oriented integrated architecture for discovering medical entities.

    PubMed

    Alor-Hernández, Giner; Sánchez-Cervantes, José Luis; Juárez-Martínez, Ulises; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Aguilar-Laserre, Alberto

    2012-03-01

    Emergency healthcare is one of the emerging application domains for information services and requires highly multimodal information services. The time consumed by the pre-hospital emergency process is critical. Therefore, minimizing the time required to provide primary care and consultation to patients is one of the crucial factors when trying to improve healthcare delivery in emergency situations. In this sense, dynamic location of medical entities is a complex process that takes time, and this can be critical when a person requires medical attention. This work presents a multimodal location-based system for locating and assigning medical entities called ITOHealth. ITOHealth provides a multimodal middleware-oriented integrated architecture using a service-oriented architecture in order to provide information about medical entities in mobile devices and web browsers, with enriched interfaces providing multimodality support. ITOHealth's multimodality is based on the use of Microsoft Agent Characters, the integration of natural language voice into the characters, and multi-language and multi-character support, providing an advantage for users with visual impairments.

  15. Monitoring landscape level processes using remote sensing of large plots

    Treesearch

    Raymond L. Czaplewski

    1991-01-01

    Global and regional assessments require timely information on landscape level status (e.g., areal extent of different ecosystems) and processes (e.g., changes in land use and land cover). To measure and understand these processes at the regional level, and model their impacts, remote sensing is often necessary. However, processing massive volumes of remotely sensing...

  16. Automatic detection of health changes using statistical process control techniques on measured transfer times of elderly.

    PubMed

    Baldewijns, Greet; Luca, Stijn; Nagels, William; Vanrumste, Bart; Croonenborghs, Tom

    2015-01-01

    It has been shown that gait speed and transfer times are good measures of functional ability in elderly. However, data currently acquired by systems that measure either gait speed or transfer times in the homes of elderly people require manual reviewing by healthcare workers. This reviewing process is time-consuming. To alleviate this burden, this paper proposes the use of statistical process control methods to automatically detect both positive and negative changes in transfer times. Three SPC techniques: tabular CUSUM, standardized CUSUM and EWMA, known for their ability to detect small shifts in the data, are evaluated on simulated transfer times. This analysis shows that EWMA is the best-suited method with a detection accuracy of 82% and an average detection time of 9.64 days.
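
    Two of the three SPC techniques evaluated, the tabular CUSUM and the EWMA chart, follow standard textbook formulas; the Python sketch below implements them on simulated transfer times. The parameter choices (k, h, lambda, L) and the simulated shift are hypothetical, not those of the study.

    ```python
    import numpy as np

    def tabular_cusum(x, mu0, sigma, k=0.5, h=5.0):
        """Tabular CUSUM; returns indices where an upward or downward shift is signalled."""
        cp = cm = 0.0
        alarms = []
        for i, xi in enumerate(x):
            cp = max(0.0, xi - (mu0 + k * sigma) + cp)
            cm = max(0.0, (mu0 - k * sigma) - xi + cm)
            if cp > h * sigma or cm > h * sigma:
                alarms.append(i)
                cp = cm = 0.0          # restart after each signal
        return alarms

    def ewma(x, mu0, sigma, lam=0.2, L=3.0):
        """EWMA control chart; returns indices outside the control limits."""
        z = mu0
        alarms = []
        for i, xi in enumerate(x):
            z = lam * xi + (1 - lam) * z
            width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (i + 1))))
            if abs(z - mu0) > width:
                alarms.append(i)
        return alarms

    # Simulated transfer times (seconds) with a gradual slow-down after day 60:
    rng = np.random.default_rng(1)
    times = np.concatenate([rng.normal(20, 2, 60), rng.normal(24, 2, 40)])
    print(tabular_cusum(times, mu0=20, sigma=2), ewma(times, mu0=20, sigma=2))
    ```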

  17. Validating the Use of Deep Learning Neural Networks for Correction of Large Hydrometric Datasets

    NASA Astrophysics Data System (ADS)

    Frazier, N.; Ogden, F. L.; Regina, J. A.; Cheng, Y.

    2017-12-01

    Collection and validation of Earth systems data can be time consuming and labor intensive. In particular, high resolution hydrometric data, including rainfall and streamflow measurements, are difficult to obtain due to a multitude of complicating factors. Measurement equipment is subject to clogs, environmental disturbances, and sensor drift. Manual intervention is typically required to identify, correct, and validate these data. Weirs can become clogged and the pressure transducer may float or drift over time. We typically employ a graphical tool called Time Series Editor to manually remove clogs and sensor drift from the data. However, this process is highly subjective and requires hydrological expertise. Two different people may produce two different data sets. To use this data for scientific discovery and model validation, a more consistent method is needed to process this field data. Deep learning neural networks have proved to be excellent mechanisms for recognizing patterns in data. We explore the use of Recurrent Neural Networks (RNN) to capture the patterns in the data over time using various gating mechanisms (LSTM and GRU), network architectures, and hyper-parameters to build an automated data correction model. We also explore the amount of manually corrected training data required to train the network to reasonable accuracy. The benefits of this approach are that the time to process a data set is significantly reduced, and the results are 100% reproducible after training is complete. Additionally, we train the RNN and calibrate a physically-based hydrological model against the same portion of data. Both the RNN and the model are applied to the remaining data using a split-sample methodology. Performance of the machine learning is evaluated for plausibility by comparing with the output of the hydrological model, and this analysis identifies potential periods where additional investigation is warranted.
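
    A minimal sketch of the kind of GRU-based correction model the abstract describes is shown below, assuming PyTorch and a sequence-to-sequence setup in which manually corrected data serve as the training target. The architecture, hyper-parameters, and placeholder tensors are illustrative assumptions, not the authors' configuration.

    ```python
    import torch
    import torch.nn as nn

    class CorrectionRNN(nn.Module):
        """Sketch of a GRU that maps raw sensor readings to corrected readings."""
        def __init__(self, n_features=1, hidden=32):
            super().__init__()
            self.rnn = nn.GRU(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_features)

        def forward(self, x):                  # x: (batch, time, features)
            out, _ = self.rnn(x)
            return self.head(out)              # corrected series, same shape

    model = CorrectionRNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Placeholder tensors standing in for the manually reviewed training portion
    # of the record: 'raw' holds sensor values, 'corrected' the analyst-edited ones.
    raw = torch.randn(8, 200, 1)
    corrected = raw.clone()
    for _ in range(10):                        # a few illustrative epochs
        optimizer.zero_grad()
        loss = loss_fn(model(raw), corrected)
        loss.backward()
        optimizer.step()
    ```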

  18. International Ultraviolet Explorer Final Archive

    NASA Technical Reports Server (NTRS)

    1997-01-01

    CSC processed IUE images through the Final Archive Data Processing System. Raw images were obtained from both NDADS and the IUEGTC optical disk platters for processing on the Alpha cluster, and from the IUEGTC optical disk platters for DECstation processing. Input parameters were obtained from the IUE database. Backup tapes of data to send to VILSPA were routinely made on the Alpha cluster. IPC handled more than 263 requests for priority NEWSIPS processing during the contract. Staff members also answered various questions and requests for information and sent copies of IUE documents to requesters. CSC implemented new processing capabilities into the NEWSIPS processing systems as they became available. In addition, steps were taken to improve efficiency and throughput whenever possible. The node TORTE was reconfigured as the I/O server for Alpha processing in May. The number of Alpha nodes used for the NEWSIPS processing queue was increased to a maximum of six in measured fashion in order to understand the dependence of throughput on the number of nodes and to be able to recognize when a point of diminishing returns was reached. With Project approval, generation of the VD FITS files was dropped in July. This action not only saved processing time but, even more significantly, also reduced the archive storage media requirements, and the time required to perform the archiving, drastically. The throughput of images verified through CDIVS and processed through NEWSIPS for the contract period is summarized below. The number of images of a given dispersion type and camera that were processed in any given month reflects several factors, including the availability of the required NEWSIPS software system, the availability of the corresponding required calibrations (e.g., the LWR high-dispersion ripple correction and absolute calibration), and the occurrence of reprocessing efforts such as that conducted to incorporate the updated SWP sensitivity-degradation correction in May.

  19. Strategic planning: getting from here to there.

    PubMed

    Kaleba, Richard

    2006-11-01

    Hospitals should develop a strategic plan that defines specific actions in a realistic time frame. Hospitals can follow a five-phase process to develop a strategic plan. The strategic planning process requires a project leader and medical staff buy-in.

  20. Elimination of water pathogens with solar radiation using an automated sequential batch CPC reactor.

    PubMed

    Polo-López, M I; Fernández-Ibáñez, P; Ubomba-Jaswa, E; Navntoft, C; García-Fernández, I; Dunlop, P S M; Schmid, M; Byrne, J A; McGuigan, K G

    2011-11-30

    Solar disinfection (SODIS) of water is a well-known, effective treatment process which is practiced at household level in many developing countries. However, this process is limited by the small volume treated and there is no indication of treatment efficacy for the user. Low cost glass tube reactors, together with compound parabolic collector (CPC) technology, have been shown to significantly increase the efficiency of solar disinfection. However, these reactors still require user input to control each batch SODIS process and there is no feedback that the process is complete. Automatic operation of the batch SODIS process, controlled by UVA-radiation sensors, can provide information on the status of the process, can ensure the required UVA dose to achieve complete disinfection is received and reduces user work-load through automatic sequential batch processing. In this work, an enhanced CPC photo-reactor with a concentration factor of 1.89 was developed. The apparatus was automated to achieve exposure to a pre-determined UVA dose. Treated water was automatically dispensed into a reservoir tank. The reactor was tested using Escherichia coli as a model pathogen in natural well water. A 6-log inactivation of E. coli was achieved following exposure to the minimum uninterrupted lethal UVA dose. The enhanced reactor decreased the exposure time required to achieve the lethal UVA dose, in comparison to a CPC system with a concentration factor of 1.0. Doubling the lethal UVA dose prevented the need for a period of post-exposure dark inactivation and reduced the overall treatment time. Using this reactor, SODIS can be automatically carried out at an affordable cost, with reduced exposure time and minimal user input. Copyright © 2011 Elsevier B.V. All rights reserved.
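
    The control logic described (expose each batch until a pre-determined UVA dose has been accumulated, then dispense automatically) can be sketched as a simple integration loop; the dose target, sample period, and sensor-reading function below are hypothetical placeholders, not the reactor's actual control software.

    ```python
    import time

    DOSE_TARGET_J_PER_M2 = 2.0e5      # hypothetical lethal UVA dose for one batch
    SAMPLE_PERIOD_S = 10.0            # hypothetical sensor sampling period

    def read_uva_irradiance_w_per_m2():
        """Placeholder for the UVA radiometer driver (hypothetical)."""
        return 30.0

    def run_batch(dispense):
        """Integrate measured UVA irradiance over time until the target dose is
        reached, then dispense the treated batch so the reactor can refill."""
        dose = 0.0
        while dose < DOSE_TARGET_J_PER_M2:
            dose += read_uva_irradiance_w_per_m2() * SAMPLE_PERIOD_S
            time.sleep(SAMPLE_PERIOD_S)
        dispense()

    # run_batch(lambda: print("batch complete: dispensing valve opened"))
    # (commented out: a real exposure would take hours at these settings)
    ```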

  1. Interactive brain shift compensation using GPU based programming

    NASA Astrophysics Data System (ADS)

    van der Steen, Sander; Noordmans, Herke Jan; Verdaasdonk, Rudolf

    2009-02-01

    Processing large image files or real-time video streams requires intense computational power. Driven by the gaming industry, the processing power of graphics processing units (GPUs) has increased significantly. With pixel shader model 4.0, the GPU can be used for image processing 10x faster than the CPU. Dedicated software was developed to deform 3D MR and CT image sets for real-time brain shift correction during navigated neurosurgery, using landmarks or cortical surface traces defined by the navigation pointer. Feedback was given using orthogonal slices and an interactively raytraced 3D brain image. GPU-based programming enables real-time processing of high definition image datasets, and various applications can be developed in medicine, optics and image sciences.

  2. Comparison on Human Resource Requirement between Manual and Automated Dispensing Systems.

    PubMed

    Noparatayaporn, Prapaporn; Sakulbumrungsil, Rungpetch; Thaweethamcharoen, Tanita; Sangseenil, Wunwisa

    2017-05-01

    This study was conducted to compare human resource requirements among manual, automated, and modified automated dispensing systems. Data were collected from the pharmacy department at a 2100-bed university hospital (Siriraj Hospital, Bangkok, Thailand). Data regarding the duration of the medication distribution process were collected by using self-reported forms for 1 month. The data on the automated dispensing machine (ADM) system were obtained from 1 piloted inpatient ward, whereas those on the manual system were the average of other wards. Data on dispensing, returned unused medication, and stock management processes under the traditional manual system and the ADM system were from actual activities, whereas the modified ADM system was modeled. The full-time equivalent (FTE) of each model was estimated for comparison. The result showed that the manual system required 46.84 FTEs of pharmacists and 132.66 FTEs of pharmacy technicians. By adding pharmacist roles in screening and verification under the ADM system, the ADM system required 117.61 FTEs of pharmacists. Replacing the counting and filling of medications with the ADM decreased the number of pharmacy technicians to 55.38 FTEs. After the modified ADM system eliminated the process of returning unused medications, the FTE requirements for pharmacists and pharmacy technicians decreased to 69.78 and 51.90 FTEs, respectively. The ADM system decreased the workload of pharmacy technicians, whereas it required more time from pharmacists. However, the increased workload of pharmacists was associated with more comprehensive patient care functions, which resulted from the redesigned work process. Copyright © 2017. Published by Elsevier Inc.
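
    Full-time equivalents in studies of this kind are typically derived by dividing the measured task workload by the productive hours available per worker; the Python sketch below illustrates that calculation with hypothetical figures, not the study's data.

    ```python
    def fte(total_task_hours_per_month, productive_hours_per_person_per_month=140.0):
        """Full-time equivalents needed to cover a measured workload; the
        productive-hours figure is a hypothetical assumption."""
        return total_task_hours_per_month / productive_hours_per_person_per_month

    # Hypothetical workload measured with the self-reported forms (hours per month):
    print(round(fte(4200.0), 1))   # e.g. 30.0 FTEs for one staff category
    ```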

  3. A collaborative approach to lean laboratory workstation design reduces wasted technologist travel.

    PubMed

    Yerian, Lisa M; Seestadt, Joseph A; Gomez, Erron R; Marchant, Kandice K

    2012-08-01

    Lean methodologies have been applied in many industries to reduce waste. We applied Lean techniques to redesign laboratory workstations with the aim of reducing the number of times employees must leave their workstations to complete their tasks. At baseline, in the 68 workflows (aggregates or sequences of process steps) studied, 251 (38%) of 664 tasks required workers to walk away from their workstations. After analysis and redesign, only 59 (9%) of the 664 tasks required technologists to leave their workstations. On average, 3.4 travel events were removed for each workstation. Time studies in a single laboratory section demonstrated that workers spend 8 to 70 seconds in travel each time they step away from the workstation. The redesigned workstations will allow employees to spend less time travelling around the laboratory. Additional benefits include employee training in waste identification, improved overall laboratory layout, and identification of other process improvement opportunities in our laboratory.

  4. Temporal production and visuospatial processing.

    PubMed

    Benuzzi, Francesca; Basso, Gianpaolo; Nichelli, Paolo

    2005-12-01

    Current models of prospective timing hypothesize that estimated duration is influenced either by the attentional load or by the short-term memory requirements of a concurrent nontemporal task. In the present study, we addressed this issue with four dual-task experiments. In Exp. 1, the effect of memory load on both reaction time and temporal production was proportional to the number of items of a visuospatial pattern to hold in memory. In Exps. 2, 3, and 4, a temporal production task was combined with two visual search tasks involving either pre-attentive or attentional processing. Visual tasks interfered with temporal production: produced intervals were lengthened proportionally to the display size. In contrast, reaction times increased with display size only when a serial, effortful search was required. It appears that memory and perceptual set size, rather than nonspecific attentional or short-term memory load, can influence prospective timing.

  5. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption against the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.

  6. Real-Time On-Board Processing Validation of MSPI Ground Camera Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.

    2010-01-01

    The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using the Xilinx Virtex-5 FPGA, including PowerPC440 processors, we have implemented a least squares fitting algorithm that extracts intensity and polarimetric parameters in real-time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.
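
    For orientation, the sketch below shows a generic linear least-squares recovery of intensity and linear-polarization parameters from samples modulated at known phases. It is a conceptual illustration under an assumed modulation model and invented variable names, not the MSPI on-board (FPGA/PowerPC440) algorithm.

```python
# Illustrative least-squares sketch (not the MSPI flight algorithm): recover
# intensity I and linear polarization components Q, U from samples whose
# modulation phases phi_k are assumed known for each subframe.
import numpy as np

rng = np.random.default_rng(1)
phi = np.linspace(0, 2 * np.pi, 32, endpoint=False)   # assumed modulation phases
I_true, Q_true, U_true = 1.0, 0.12, -0.05
samples = I_true + Q_true * np.cos(2 * phi) + U_true * np.sin(2 * phi)
samples += rng.normal(0, 0.01, phi.size)              # detector noise

# Design matrix for the linear model d_k = I + Q cos(2 phi_k) + U sin(2 phi_k)
A = np.column_stack([np.ones_like(phi), np.cos(2 * phi), np.sin(2 * phi)])
(I_hat, Q_hat, U_hat), *_ = np.linalg.lstsq(A, samples, rcond=None)

dolp = np.hypot(Q_hat, U_hat) / I_hat                  # degree of linear polarization
print(I_hat, Q_hat, U_hat, dolp)
```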

  7. 48 CFR 39.106 - Year 2000 compliance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CATEGORIES OF CONTRACTING ACQUISITION OF INFORMATION TECHNOLOGY General 39.106 Year 2000 compliance. When acquiring information technology that will be required to perform date/time processing involving dates... information technology to be Year 2000 compliant; or (2) Require that non-compliant information technology be...

  8. 45 CFR 2507.5 - How does the Corporation process requests for records?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... compelled to create new records or do statistical computations. For example, the Corporation is not required... feasible way to respond to a request. The Corporation is not required to perform any research for the... duplicating all of them. For example, if it requires less time and expense to provide a computer record as a...

  9. Models for discrete-time self-similar vector processes with application to network traffic

    NASA Astrophysics Data System (ADS)

    Lee, Seungsin; Rao, Raghuveer M.; Narasimha, Rajesh

    2003-07-01

    The paper defines self-similarity for vector processes by employing the discrete-time continuous-dilation operation which has successfully been used previously by the authors to define 1-D discrete-time stochastic self-similar processes. To define self-similarity of vector processes, it is required to consider the cross-correlation functions between different 1-D processes as well as the autocorrelation function of each constituent 1-D process in it. System models to synthesize self-similar vector processes are constructed based on the definition. With these systems, it is possible to generate self-similar vector processes from white noise inputs. An important aspect of the proposed models is that they can be used to synthesize various types of self-similar vector processes by choosing proper parameters. Additionally, the paper presents evidence of vector self-similarity in two-channel wireless LAN data and applies the aforementioned systems to simulate the corresponding network traffic traces.
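
    As a point of reference only, the familiar scalar, continuous-time statement of self-similarity is sketched below; the paper's discrete-time continuous-dilation definition and its vector extension via cross-correlations differ in detail.

```latex
% Scalar, continuous-time baseline (not the paper's discrete-time definition):
% X is self-similar with Hurst exponent H if, for every a > 0,
X(at) \overset{d}{=} a^{H} X(t).
% In a wide-sense setting the correlations then scale as
R_X(at_1, at_2) = a^{2H} R_X(t_1, t_2),
% and a jointly self-similar vector process additionally satisfies, for components i, j,
R_{X_i X_j}(at_1, at_2) = a^{H_i + H_j} R_{X_i X_j}(t_1, t_2).
```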

  10. Development of a Paradigm to Assess Nutritive and Biochemical Substances in Humans: A Preliminary Report on the Effects of Tyrosine upon Altitude- and Cold-Induced Stress Responses

    DTIC Science & Technology

    1987-03-01

    3/4 hours. Performance tests evaluated simple and choice reaction time to visual stimuli, vigilance, and processing of symbolic, numerical, verbal...minimize the adverse consequences of these stressors. Tyrosine enhanced performance (e.g. complex information processing, vigilance, and reaction time... processes inherent in many real-world tasks. For example, Map Compass requires association of direction and degree

  11. Congenital syphilis investigation processes and timing in Louisiana.

    PubMed

    Bradley, Heather; Gruber, DeAnn; Introcaso, Camille E; Foxhood, Joseph; Wendell, Debbie; Rahman, Mohammad; Ewell, Joy; Kirkcaldy, Robert D; Weinstock, Hillard S

    2014-09-01

    Congenital syphilis (CS) is a potentially life-threatening yet preventable infection. State and local public health jurisdictions conduct investigations of possible CS cases to determine case status and to inform public health prevention efforts. These investigations occur when jurisdictions receive positive syphilis test results from pregnant women or from infants. We extracted data from Louisiana's electronic case management system for 328 infants investigated as possible CS cases in 2010 to 2011. Using date stamps from the case management system, we described CS investigations in terms of processes and timing. Eighty-seven investigations were prompted by positive test results from women who were known to be pregnant by the health jurisdiction, and 241 investigations were prompted by positive syphilis test results from infants. Overall, investigations required a median of 101 days to complete, although 25% were complete within 36 days. Investigations prompted by positive test results from infants required a median of 135 days to complete, and those prompted by positive test results from pregnant women required a median of 41 days. Three times as many CS investigations began with reported positive syphilis test results from infants as from pregnant women, and these investigations required more time to complete. When CS investigations begin after an infant's birth, the opportunity to ensure that women are treated during pregnancy is missed, and surveillance data cannot inform prevention efforts on a timely basis. Consistently ascertaining pregnancy status among women whose positive syphilis test results are reported to public health jurisdictions could help to assure timely CS prevention efforts.

  12. EVALUATING MC AND A EFFECTIVENESS TO VERIFY THE PRESENCE OF NUCLEAR MATERIALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, P. G.; Morzinski, J. A.; et al.

    Traditional materials accounting is focused exclusively on the material balance area (MBA), and involves periodically closing a material balance based on accountability measurements conducted during a physical inventory. In contrast, the physical inventory for Los Alamos National Laboratory's near-real-time accounting system is established around processes and looks more like an item inventory. That is, the intent is not to measure material for accounting purposes, since materials have already been measured in the normal course of daily operations. A given unit process operates many times over the course of a material balance period. The product of a given unit process may move for processing within another unit process in the same MBA or may be transferred out of the MBA. Since few materials are unmeasured the physical inventory for a near-real-time process area looks more like an item inventory. Thus, the intent of the physical inventory is to locate the materials on the books and verify information about the materials contained in the books. Closing a materials balance for such an area is a matter of summing all the individual mass balances for the batches processed by all unit processes in the MBA. Additionally, performance parameters are established to measure the program's effectiveness. Program effectiveness for verifying the presence of nuclear material is required to be equal to or greater than a prescribed performance level, process measurements must be within established precision and accuracy values, physical inventory results meet or exceed performance requirements, and inventory differences are less than a target/goal quantity. This approach exceeds DOE established accounting and physical inventory program requirements. Hence, LANL is committed to this approach and to seeking opportunities for further improvement through integrated technologies. This paper will provide a detailed description of this evaluation process.

  13. Optimization of Primary Drying in Lyophilization during Early Phase Drug Development using a Definitive Screening Design with Formulation and Process Factors.

    PubMed

    Goldman, Johnathan M; More, Haresh T; Yee, Olga; Borgeson, Elizabeth; Remy, Brenda; Rowe, Jasmine; Sadineni, Vikram

    2018-06-08

    Development of optimal drug product lyophilization cycles is typically accomplished via multiple engineering runs to determine appropriate process parameters. These runs require significant time and product investments, which are especially costly during early phase development when the drug product formulation and lyophilization process are often defined simultaneously. Even small changes in the formulation may require a new set of engineering runs to define lyophilization process parameters. In order to overcome these development difficulties, an eight-factor definitive screening design (DSD), including both formulation and process parameters, was executed on a fully human monoclonal antibody (mAb) drug product. The DSD enables evaluation of several interdependent factors to define critical parameters that affect primary drying time and product temperature. From these parameters, a lyophilization development model is defined where near optimal process parameters can be derived for many different drug product formulations. This concept is demonstrated on a mAb drug product where statistically predicted cycle responses agree well with those measured experimentally. This design of experiments (DoE) approach for early phase lyophilization cycle development offers a workflow that significantly decreases the development time of clinically and potentially commercially viable lyophilization cycles for a platform formulation that still has a variable range of compositions. Copyright © 2018. Published by Elsevier Inc.

  14. Prediction of collision events: an EEG coherence analysis.

    PubMed

    Spapé, Michiel M; Serrien, Deborah J

    2011-05-01

    A common daily-life task is the interaction with moving objects for which prediction of collision events is required. To evaluate the sources of information used in this process, this EEG study required participants to judge whether two moving objects would collide with one another or not. In addition, the effect of a distractor object is evaluated. The measurements included the behavioural decision time and accuracy, eye movement fixation times, and the neural dynamics which was determined by means of EEG coherence, expressing functional connectivity between brain areas. Collision judgment involved widespread information processing across both hemispheres. When a distractor object was present, task-related activity was increased whereas distractor activity induced modulation of local sensory processing. Also relevant were the parietal regions communicating with bilateral occipital and midline areas and a left-sided sensorimotor circuit. Besides visual cues, cognitive and strategic strategies are used to establish a decision of events in time. When distracting information is introduced into the collision judgment process, it is managed at different processing levels and supported by distinct neural correlates. These data shed light on the processing mechanisms that support judgment of collision events; an ability that implicates higher-order decision-making. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  15. AnimalFinder: A semi-automated system for animal detection in time-lapse camera trap images

    USGS Publications Warehouse

    Price Tack, Jennifer L.; West, Brian S.; McGowan, Conor P.; Ditchkoff, Stephen S.; Reeves, Stanley J.; Keever, Allison; Grand, James B.

    2017-01-01

    Although the use of camera traps in wildlife management is well established, technologies to automate image processing have been much slower in development, despite their potential to drastically reduce personnel time and cost required to review photos. We developed AnimalFinder in MATLAB® to identify animal presence in time-lapse camera trap images by comparing individual photos to all images contained within the subset of images (i.e. photos from the same survey and site), with some manual processing required to remove false positives and collect other relevant data (species, sex, etc.). We tested AnimalFinder on a set of camera trap images and compared the presence/absence results with manual-only review with white-tailed deer (Odocoileus virginianus), wild pigs (Sus scrofa), and raccoons (Procyon lotor). We compared abundance estimates, model rankings, and coefficient estimates of detection and abundance for white-tailed deer using N-mixture models. AnimalFinder performance varied depending on a threshold value that affects program sensitivity to frequently occurring pixels in a series of images. Higher threshold values led to fewer false negatives (missed deer images) but increased manual processing time; even at the highest threshold value, the program reduced the images requiring manual review by ~40% and correctly identified >90% of deer, raccoon, and wild pig images. Estimates of white-tailed deer were similar between AnimalFinder and the manual-only method (~1–2 deer difference, depending on the model), as were model rankings and coefficient estimates. Our results show that the program significantly reduced data processing time and may increase efficiency of camera trapping surveys.
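
    The general flavor of threshold-based detection against frequently occurring pixels can be sketched as follows; the per-pixel median background, grayscale frames, and threshold values are assumptions for illustration and do not reproduce the AnimalFinder (MATLAB) implementation.

```python
# Rough sketch of frequent-pixel / background differencing for time-lapse frames.
# Assumptions: grayscale frames from one site, a per-pixel median background, and a
# single sensitivity threshold: not the actual AnimalFinder implementation.
import numpy as np

def flag_candidate_frames(frames: np.ndarray, threshold: float = 12.0) -> np.ndarray:
    """frames: (n_images, h, w) uint8 array. Returns boolean mask of flagged frames."""
    background = np.median(frames.astype(np.float32), axis=0)      # "frequent" pixels
    diff = np.abs(frames.astype(np.float32) - background)
    changed_fraction = (diff > threshold).mean(axis=(1, 2))        # fraction of changed pixels
    return changed_fraction > 0.005                                # candidate animal frames

# Example with synthetic frames: frame 7 contains a bright blob standing in for an animal.
rng = np.random.default_rng(2)
frames = rng.integers(100, 110, size=(50, 120, 160), dtype=np.uint8)
frames[7, 40:70, 60:100] += 60
print(np.where(flag_candidate_frames(frames))[0])                  # -> [7]
```

    Raising the threshold trades missed detections for more frames passed to manual review, which mirrors the sensitivity trade-off reported in the abstract.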

  16. Simulation and Validation of Injection-Compression Filling Stage of Liquid Moulding with Fast Curing Resins

    NASA Astrophysics Data System (ADS)

    Martin, Ffion A.; Warrior, Nicholas A.; Simacek, Pavel; Advani, Suresh; Hughes, Adrian; Darlington, Roger; Senan, Eissa

    2018-03-01

    Very short manufacture cycle times are required if continuous carbon fibre and epoxy composite components are to be economically viable solutions for high volume composite production for the automotive industry. Here, a manufacturing process variant of resin transfer moulding (RTM) targets a reduction of in-mould manufacture time by reducing the time to inject and cure components. The process involves two stages: resin injection followed by compression. A flow simulation methodology using an RTM solver for the process has been developed. This paper compares the simulation prediction to experiments performed using industrial equipment. The issues encountered during the manufacturing are included in the simulation and their sensitivity to the process is explored.

  17. Software Design Methodology Migration for a Distributed Ground System

    NASA Technical Reports Server (NTRS)

    Ritter, George; McNair, Ann R. (Technical Monitor)

    2002-01-01

    The Marshall Space Flight Center's (MSFC) Payload Operations Center (POC) ground system has been developed and has evolved over a period of about 10 years. During this time the software processes have migrated from more traditional to more contemporary development processes. The new software processes still emphasize requirements capture, software configuration management, design documenting, and making sure the products that have been developed are accountable to initial requirements. This paper will give an overview of how the software processes have evolved, highlighting the positives as well as the negatives. In addition, we will mention the COTS tools that have been integrated into the processes and how the COTS have provided value to the project.

  18. GPU applications for data processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vladymyrov, Mykhailo, E-mail: mykhailo.vladymyrov@cern.ch; Aleksandrov, Andrey; INFN sezione di Napoli, I-80125 Napoli

    2015-12-31

    Modern experiments that use nuclear photoemulsion require fast and efficient data acquisition from the emulsion. New approaches to developing scanning systems require real-time processing of large amounts of data. Methods that use Graphics Processing Unit (GPU) computing power for emulsion data processing are presented here. It is shown how GPU-accelerated emulsion processing helped us to raise the scanning speed by a factor of nine.

  19. Noise suppression methods for robust speech processing

    NASA Astrophysics Data System (ADS)

    Boll, S. F.; Ravindra, H.; Randall, G.; Armantrout, R.; Power, R.

    1980-05-01

    Robust speech processing in practical operating environments requires effective environmental and processor noise suppression. This report describes the technical findings and accomplishments during this reporting period for the research program funded to develop real-time, compressed speech analysis-synthesis algorithms whose performance is invariant under signal contamination. Fulfillment of this requirement is necessary to ensure reliable, secure compressed speech transmission within realistic military command and control environments. Overall contributions resulting from this research program include an understanding of how environmental noise degrades narrow-band coded speech, development of appropriate real-time noise suppression algorithms, and development of speech parameter identification methods that consider signal contamination as a fundamental element in the estimation process. This report describes the current research and results in the areas of noise suppression using dual-input adaptive noise cancellation and short-time Fourier transform algorithms, articulation rate change techniques, and an experiment which demonstrated that the spectral subtraction noise suppression algorithm can improve the intelligibility of 2400 bps, LPC-10 coded helicopter speech by 10.6 points.
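
    As a rough illustration of the spectral-subtraction idea mentioned at the end of the abstract, the sketch below subtracts an average noise magnitude estimated from leading noise-only frames; the single-channel formulation and all parameters are assumptions, not the report's dual-input canceller.

```python
# Minimal spectral-subtraction sketch (single channel), assuming the first few
# STFT frames are noise-only; this illustrates the general technique, not the
# report's algorithms or parameters.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(x, fs, noise_frames=10, floor=0.05):
    f, t, X = stft(x, fs=fs, nperseg=256)
    noise_mag = np.abs(X[:, :noise_frames]).mean(axis=1, keepdims=True)
    mag = np.abs(X) - noise_mag                      # subtract estimated noise magnitude
    mag = np.maximum(mag, floor * np.abs(X))         # spectral floor to limit musical noise
    _, y = istft(mag * np.exp(1j * np.angle(X)), fs=fs, nperseg=256)
    return y

fs = 8000
rng = np.random.default_rng(3)
clean = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
noisy = clean + 0.3 * rng.standard_normal(fs)
enhanced = spectral_subtract(noisy, fs)
print(enhanced.shape)
```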

  20. A high throughput MATLAB program for automated force-curve processing using the AdG polymer model.

    PubMed

    O'Connor, Samantha; Gaddis, Rebecca; Anderson, Evan; Camesano, Terri A; Burnham, Nancy A

    2015-02-01

    Research in understanding biofilm formation is dependent on accurate and representative measurements of the steric forces related to the polymer brushes on bacterial surfaces. A MATLAB program to analyze force curves from an AFM efficiently, accurately, and with minimal user bias has been developed. The analysis is based on a modified version of the Alexander and de Gennes (AdG) polymer model, which is a function of equilibrium polymer brush length, probe radius, temperature, separation distance, and a density variable. Automating the analysis reduces the amount of time required to process 100 force curves from several days to less than 2 minutes. The use of this program to crop and fit force curves to the AdG model will allow researchers to ensure proper processing of large amounts of experimental data and reduce the time required for analysis and comparison of data, thereby enabling higher quality results in a shorter period of time. Copyright © 2014 Elsevier B.V. All rights reserved.
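
    A hedged sketch of the batch-fitting idea is shown below, using a simplified exponential form often quoted for the AdG steric model in the AFM literature, F ≈ 50·kB·T·R·L0·Γ^(3/2)·exp(-2πD/L0); the program's modified AdG model, cropping rules, and parameterization may differ, and the numbers here are invented.

```python
# Sketch of batch-fitting AFM force curves to a simplified exponential form of the
# Alexander-de Gennes (AdG) steric model often quoted in the AFM literature:
#   F(D) ~ 50 * kB * T * R * L0 * Gamma**1.5 * exp(-2*pi*D / L0)
# The paper's modified AdG model and cropping rules may differ; values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

KB = 1.380649e-23      # Boltzmann constant, J/K
T = 298.0              # temperature, K
R = 20e-9              # probe radius, m (assumed)

def adg_force(D, L0, Gamma):
    return 50.0 * KB * T * R * L0 * Gamma**1.5 * np.exp(-2.0 * np.pi * D / L0)

def fit_curve(separation_m, force_N):
    p0 = (100e-9, 1e16)                       # initial guesses: brush length, grafting density
    popt, _ = curve_fit(adg_force, separation_m, force_N, p0=p0, maxfev=10_000)
    return popt                               # (L0, Gamma)

# Synthetic example standing in for one exported force curve
D = np.linspace(5e-9, 300e-9, 200)
F = adg_force(D, 150e-9, 5e15) * (1 + 0.02 * np.random.default_rng(4).standard_normal(D.size))
print(fit_curve(D, F))
```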

  1. Semi-physical simulation test for micro CMOS star sensor

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Zhang, Guang-jun; Jiang, Jie; Fan, Qiao-yun

    2008-03-01

    A designed star sensor must be extensively tested before launch. Testing a star sensor is a complicated process that demands considerable time and resources. Even observing the sky from the ground is a challenging and time-consuming job: it requires complicated and expensive equipment and a suitable time and location, and it is prone to interference from weather. Moreover, not all stars distributed across the sky can be observed with this testing method. Semi-physical simulation in the laboratory reduces the testing cost and helps to debug, analyze, and evaluate the star sensor system while the model is being developed. The test system is composed of an optical platform, a star field simulator, a star field simulator computer, the star sensor, and a central data processing computer. The test system simulates starlight with high accuracy and good parallelism, and creates static or dynamic images in the FOV (Field of View). The test conditions are close to observing the real sky. With this system, the test of a micro star tracker designed by Beijing University of Aeronautics and Astronautics has been performed successfully. Key indices, including full-sky autonomous star identification time, attitude update frequency, and attitude precision, meet the design requirements of the star sensor. Error sources of the testing system are also analyzed. It is concluded that the testing system is cost-saving and efficient, and contributes to optimizing the embedded algorithms, shortening the development cycle, and improving engineering design processes.

  2. The medical decision model and decision maker tools for management of radiological and nuclear incidents.

    PubMed

    Koerner, John F; Coleman, C Norman; Murrain-Hill, Paula; FitzGerald, Denis J; Sullivan, Julie M

    2014-06-01

    Effective decision making during a rapidly evolving emergency such as a radiological or nuclear incident requires timely interim decisions and communications from onsite decision makers while further data processing, consultation, and review are ongoing by reachback experts. The authors have recently proposed a medical decision model for use during a radiological or nuclear disaster, which is similar in concept to that used in medical care, especially when delay in action can have disastrous effects. For decision makers to function most effectively during a complex response, they require access to onsite subject matter experts who can provide information, recommendations, and participate in public communication efforts. However, in the time before this expertise is available or during the planning phase, just-in-time tools are essential that provide critical overview of the subject matter written specifically for the decision makers. Recognizing the complexity of the science, risk assessment, and multitude of potential response assets that will be required after a nuclear incident, the Office of the Assistant Secretary for Preparedness and Response, in collaboration with other government and non-government experts, has prepared a practical guide for decision makers. This paper illustrates how the medical decision model process could facilitate onsite decision making that includes using the deliberative reachback process from science and policy experts and describes the tools now available to facilitate timely and effective incident management.

  3. High pressure processing's potential to inactivate norovirus and other fooodborne viruses

    USDA-ARS?s Scientific Manuscript database

    High pressure processing (HPP) can inactivate human norovirus. However, all viruses are not equally susceptible to HPP. Pressure treatment parameters such as required pressure levels, initial pressurization temperatures, and pressurization times substantially affect inactivation. How food matrix ...

  4. Information Fusion for Feature Extraction and the Development of Geospatial Information

    DTIC Science & Technology

    2004-07-01

    of automated processing . 2. Requirements for Geospatial Information Accurate, timely geospatial information is critical for many military...this evaluation illustrates some of the difficulties in comparing manual and automated processing results (figure 5). The automated delineation of

  5. Software Development and Test Methodology for a Distributed Ground System

    NASA Technical Reports Server (NTRS)

    Ritter, George; Guillebeau, Pat; McNair, Ann R. (Technical Monitor)

    2002-01-01

    The Marshall Space Flight Center's (MSFC) Payload Operations Center (POC) ground system has evolved over a period of about 10 years. During this time the software processes have migrated from more traditional to more contemporary development processes in an effort to minimize unnecessary overhead while maximizing process benefits. The Software processes that have evolved still emphasize requirements capture, software configuration management, design documenting, and making sure the products that have been developed are accountable to initial requirements. This paper will give an overview of how the Software Processes have evolved, highlighting the positives as well as the negatives. In addition, we will mention the COTS tools that have been integrated into the processes and how the COTS have provided value to the project.

  6. Program Helps Decompose Complicated Design Problems

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.

    1993-01-01

    Time saved by intelligent decomposition into smaller, interrelated problems. DeMAID is a knowledge-based software system for ordering the sequence of modules and identifying a possible multilevel structure for a design problem. It displays modules in an N x N matrix format. Although it requires an investment of time to generate and refine the list of modules for input, it saves a considerable amount of money and time in the total design process, particularly for new design problems in which the ordering of modules has not been defined. The program has also been applied to examine assembly-line processes or the ordering of tasks and milestones.

  7. High efficiency solar cell processing

    NASA Technical Reports Server (NTRS)

    Ho, F.; Iles, P. A.

    1985-01-01

    At the time of writing, cells made by several groups are approaching 19% efficiency. General aspects of the processing required for such cells are discussed. Most processing used for high efficiency cells is derived from space-cell or concentrator cell technology, and recent advances have been obtained from improved techniques rather than from better understanding of the limiting mechanisms. Theory and modeling are fairly well developed, and adequate to guide further asymptotic increases in performance of near conventional cells. There are several competitive cell designs with promise of higher performance ( 20%) but for these designs further improvements are required. The available cell processing technology to fabricate high efficiency cells is examined.

  8. Application of a distributed systems architecture for increased speed in image processing on an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Wright, Adam A.; Momin, Orko; Shin, Young Ho; Shakya, Rahul; Nepal, Kumud; Ahlgren, David J.

    2010-01-01

    This paper presents the application of a distributed systems architecture to an autonomous ground vehicle, Q, that participates in both the autonomous and navigation challenges of the Intelligent Ground Vehicle Competition. In the autonomous challenge the vehicle is required to follow a course, while avoiding obstacles and staying within the course boundaries, which are marked by white lines. For the navigation challenge, the vehicle is required to reach a set of target destinations, known as way points, with given GPS coordinates and avoid obstacles that it encounters in the process. Previously the vehicle utilized a single laptop to execute all processing activities including image processing, sensor interfacing and data processing, path planning and navigation algorithms and motor control. National Instruments' (NI) LabVIEW served as the programming language for software implementation. As an upgrade to last year's design, a NI compact Reconfigurable Input/Output system (cRIO) was incorporated into the system architecture. The cRIO is NI's solution for rapid prototyping that is equipped with a real time processor, an FPGA and modular input/output. Under the current system, the real time processor handles the path planning and navigation algorithms, while the FPGA gathers and processes sensor data. This setup leaves the laptop to focus on running the image processing algorithm. Image processing as previously presented by Nepal et al. is a multi-step line extraction algorithm and constitutes the largest processor load. This distributed approach results in faster image processing, which was previously Q's bottleneck. Additionally, the path planning and navigation algorithms are executed more reliably on the real time processor due to the deterministic nature of operation. The implementation of this architecture required exploration of various inter-system communication techniques. Data transfer between the laptop and the real time processor using UDP packets was established as the most reliable protocol after testing various options. Improvement can be made to the system by migrating more algorithms to the hardware based FPGA to further speed up the operations of the vehicle.
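
    A minimal sketch of the kind of UDP exchange described above is given below; the port, host, and packet layout (three little-endian floats) are invented for illustration and are not the team's actual protocol.

```python
# Minimal UDP sketch of a laptop <-> real-time-controller exchange. The port number,
# host, and packet layout (three little-endian floats: x, y, heading) are assumptions.
# Sender and receiver would run in separate processes on the two machines.
import socket
import struct

HOST, PORT = "127.0.0.1", 5005
FMT = "<fff"                        # x (m), y (m), heading (rad)

def send_pose(x: float, y: float, heading: float) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(struct.pack(FMT, x, y, heading), (HOST, PORT))

def receive_pose(timeout_s: float = 0.1):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((HOST, PORT))
        sock.settimeout(timeout_s)
        data, _addr = sock.recvfrom(64)
        return struct.unpack(FMT, data)
```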

  9. Age-related differences in reaction time task performance in young children.

    PubMed

    Kiselev, Sergey; Espy, Kimberly Andrews; Sheffield, Tiffany

    2009-02-01

    Performance of reaction time (RT) tasks was investigated in young children and adults to test the hypothesis that age-related differences in processing speed supersede a "global" mechanism and are a function of specific differences in task demands and processing requirements. The sample consisted of 54 4-year-olds, 53 5-year-olds, 59 6-year-olds, and 35 adults from Russia. Using the regression approach pioneered by Brinley and the transformation method proposed by Madden and colleagues and Ridderinkhoff and van der Molen, age-related differences in processing speed differed among RT tasks with varying demands. In particular, RTs differed between children and adults on tasks that required response suppression, discrimination of color or spatial orientation, reversal of contingencies of previously learned stimulus-response rules, and greater stimulus-response complexity. Relative costs of these RT task differences were larger than predicted by the global difference hypothesis except for response suppression. Among young children, age-related differences larger than predicted by the global difference hypothesis were evident when tasks required color or spatial orientation discrimination and stimulus-response rule complexity, but not for response suppression or reversal of stimulus-response contingencies. Process-specific, age-related differences in processing speed that support heterochronicity of brain development during childhood were revealed.
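
    The Brinley-style analysis mentioned above amounts to regressing one group's mean RTs on another's across conditions and inspecting which conditions depart from the global slowing line. A minimal sketch with made-up RT values:

```python
# Minimal Brinley-plot sketch: regress child mean RTs on adult mean RTs across
# conditions, then look for conditions whose cost exceeds the global prediction.
# The RT values below are made up for illustration only.
import numpy as np

adult_rt = np.array([350., 420., 510., 600., 690.])       # ms, per condition
child_rt = np.array([610., 760., 930., 1180., 1450.])     # ms, same conditions

slope, intercept = np.polyfit(adult_rt, child_rt, deg=1)  # global slowing line
predicted = slope * adult_rt + intercept
residual = child_rt - predicted                           # > 0: larger-than-global cost
print(slope, intercept, residual.round(1))
```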

  10. Rapid high-throughput cloning and stable expression of antibodies in HEK293 cells.

    PubMed

    Spidel, Jared L; Vaessen, Benjamin; Chan, Yin Yin; Grasso, Luigi; Kline, J Bradford

    2016-12-01

    Single-cell based amplification of immunoglobulin variable regions is a rapid and powerful technique for cloning antigen-specific monoclonal antibodies (mAbs) for purposes ranging from general laboratory reagents to therapeutic drugs. From the initial screening process involving small quantities of hundreds or thousands of mAbs through in vitro characterization and subsequent in vivo experiments requiring large quantities of only a few, having a robust system for generating mAbs from cloning through stable cell line generation is essential. A protocol was developed to decrease the time, cost, and effort required by traditional cloning and expression methods by eliminating bottlenecks in these processes. Removing the clonal selection steps from the cloning process using a highly efficient ligation-independent protocol and from the stable cell line process by utilizing bicistronic plasmids to generate stable semi-clonal cell pools facilitated an increased throughput of the entire process from plasmid assembly through transient transfections and selection of stable semi-clonal cell pools. Furthermore, the time required by a single individual to clone, express, and select stable cell pools in a high-throughput format was reduced from 4 to 6 months to only 4 to 6 weeks. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  11. The Solid Phase Curing Time Effect of Asbuton with Texapon Emulsifier at the Optimum Bitumen Content

    NASA Astrophysics Data System (ADS)

    Sarwono, D.; Surya D, R.; Setyawan, A.; Djumari

    2017-07-01

    Buton asphalt (asbuton) could not be utilized optimally in Indonesia. The asbuton utilization rate was still low because processed asbuton products were impractical to use and required high processing costs. This research aimed to obtain a practical asphalt product from asbuton through an extraction process that does not require expensive processing. The research was carried out with an experimental method in the laboratory. The emulsified asbuton was composed of asbuton 5/20 grain, premium, Texapon, HCl, and distilled water (aquades). The solid phase was the mixture of asbuton 5/20 grain and premium with a mixing time of 3 minutes. The liquid phase consisted of Texapon, HCl, and aquades. The aging process was carried out after solid-phase mixing so that the reaction and binding of the solid-phase mixture became more complete, giving a higher asphalt solubility level in the product. The aging times were 30, 60, 90, 120, and 150 minutes. The solid and liquid phases were mixed to produce emulsified asbuton, which was then extracted for 25 minutes. The asphalt solubility level, water content, and asphalt characteristics were tested on the extraction result of the emulsified asbuton with the most optimal asphalt content. The analysis showed that the asphalt solubility level of the extracted asbuton reached 94.77% at an aging time of 120 minutes. The water content test showed that the water content of the emulsified asbuton decreased as the solid-phase aging time increased. Examination of the asphalt characteristics of the extraction result with the optimum asphalt solubility level produced specimens with a rigid and strong texture, so that the ductility and penetration values obtained were not sufficient.

  12. 40 CFR 63.117 - Process vent provisions-reporting and recordkeeping requirements for group and TRE determinations...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Vents, Storage Vessels, Transfer Operations, and Wastewater § 63.117 Process vent provisions—reporting... incinerators, boilers or process heaters specified in table 3 of this subpart, and averaged over the same time... content determinations, flow rate measurements, and exit velocity determinations made during the...

  13. 40 CFR 63.117 - Process vent provisions-reporting and recordkeeping requirements for group and TRE determinations...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Vents, Storage Vessels, Transfer Operations, and Wastewater § 63.117 Process vent provisions—reporting... incinerators, boilers or process heaters specified in table 3 of this subpart, and averaged over the same time... content determinations, flow rate measurements, and exit velocity determinations made during the...

  14. r-process nucleosynthesis in dynamic helium-burning environments

    NASA Technical Reports Server (NTRS)

    Cowan, J. J.; Cameron, A. G. W.; Truran, J. W.

    1985-01-01

    The results of an extended examination of r-process nucleosynthesis in helium-burning environments are presented. Using newly calculated nuclear rates, dynamical r-process calculations have been made of thermal runaways in helium cores typical of low-mass stars and in the helium zones of stars undergoing supernova explosions. These calculations show that, for a sufficient flux of neutrons produced by the C-13 neutron source, r-process nuclei in solar proportions can be produced. The conditions required for r-process production are found to be 10^20-10^21 neutrons per cubic centimeter for times of 0.01-0.1 s and neutron number densities in excess of 10^19 per cubic centimeter for times of about 1 s. The amount of C-13 required is found to be exceedingly high - larger than is found to occur in any current stellar evolutionary model. It is thus unlikely that these helium-burning environments are responsible for producing the bulk of the r-process elements seen in the solar system.

  15. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
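
    A toy sketch of a permutation genetic algorithm for this kind of ordering problem is given below: it reorders processes to minimize feedback couplings, that is, dependencies pointing backward in the sequence. The coupling matrix, operators, and fitness are illustrative and are not DeMAID's implementation.

```python
# Toy permutation GA: order coupled processes to minimize feedback couplings
# (a process scheduled before one of its suppliers). Illustrative only.
import random

# coupling[i][j] = 1 means process i needs output from process j
coupling = [[0, 1, 0, 0, 1],
            [0, 0, 1, 0, 0],
            [1, 0, 0, 1, 0],
            [0, 0, 0, 0, 1],
            [0, 0, 1, 0, 0]]
N = len(coupling)

def feedbacks(order):
    pos = {p: i for i, p in enumerate(order)}
    return sum(1 for i in range(N) for j in range(N)
               if coupling[i][j] and pos[j] > pos[i])   # supplier scheduled after consumer

def crossover(a, b):
    cut = random.randrange(1, N)
    head = a[:cut]
    return head + [p for p in b if p not in head]       # order crossover, keeps a permutation

def mutate(order):
    i, j = random.sample(range(N), 2)
    order[i], order[j] = order[j], order[i]              # swap two positions

random.seed(0)
pop = [random.sample(range(N), N) for _ in range(30)]
for _ in range(200):
    pop.sort(key=feedbacks)
    elite = pop[:10]
    children = [crossover(random.choice(elite), random.choice(elite)) for _ in range(20)]
    for child in children:
        if random.random() < 0.3:
            mutate(child)
    pop = elite + children

best = min(pop, key=feedbacks)
print(best, feedbacks(best))
```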

  16. 40 CFR 98.266 - Data reporting requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... phosphoric acid process lines. (8) Number of times missing data procedures were used to estimate phosphate... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Data reporting requirements. 98.266... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Phosphoric Acid Production § 98.266 Data reporting...

  17. Accentra Pharmaceuticals: Thrashing through ERP Systems

    ERIC Educational Resources Information Center

    Bradds, Nathan; Hills, Emily; Masters, Kelly; Weiss, Kevin; Havelka, Douglas

    2017-01-01

    Implementing and integrating an Enterprise Resource Planning (ERP) system into an organization is an enormous undertaking that requires substantial cash outlays, time commitments, and skilled IT and business personnel. It requires careful and detailed planning, thorough testing and training, and a change management process that creates a…

  18. Yet one more dwell time algorithm

    NASA Astrophysics Data System (ADS)

    Haberl, Alexander; Rascher, Rolf

    2017-06-01

    The current demand for ever more powerful and efficient microprocessors, e.g. for deep learning, has led to an ongoing trend of reducing the feature size of integrated circuits. These processors are patterned with EUV lithography, which enables 7 nm chips [1]. Producing mirrors that satisfy the required specifications is a challenging task. Not only increasing requirements on imaging properties, but also new lens shapes, such as aspheres or lenses with free-form surfaces, call for innovative production processes. These lenses need the deterministic sub-aperture polishing methods that have been established in the past few years. Such polishing methods are characterized by an empirically determined TIF and local stock removal. One such deterministic polishing method is ion beam figuring (IBF). The beam profile of the ion beam is adjusted to a nearly ideal Gaussian shape via various parameters. With the known removal function, a dwell-time profile can be generated for each measured error profile. Such a profile is always generated pixel-accurately with respect to the predetermined error profile, with the aim of minimizing the existing surface structures up to the cut-off frequency of the tool used [2]. The success of a correction-polishing run depends decisively on the accuracy of the previously computed dwell-time profile, so the algorithm used to calculate the dwell time has to reflect reality accurately. Furthermore, the machine operator should have no influence on the dwell-time calculation; consequently, there must not be any user-set parameters that influence the calculation result. Lastly, it should take a minimum of machining time to reach a minimum of remaining error structures. Unfortunately, current dwell-time calculations are divergent, user-dependent, tend to create high processing times, and need several parameters to be set. This paper describes a realistic, convergent, and user-independent dwell-time algorithm. Typical processing times are reduced to between about 80% and 50% of those of conventional algorithms (Lucy-Richardson, Van Cittert, ...) as used in established machines. To verify its effectiveness, a plane surface was machined on an IBF.
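
    The paper's new algorithm is not reproduced in the abstract; for context, the sketch below shows a conventional Lucy-Richardson-style dwell-time iteration of the kind it is compared against, in which the dwell map is updated until its convolution with the TIF approaches the measured error map. The Gaussian TIF, grid size, and parameters are assumptions.

```python
# Sketch of a conventional Lucy-Richardson-style dwell-time iteration (one of the
# algorithm families the abstract compares against), not the paper's new algorithm.
# The dwell map d is updated so that (d * TIF) converges toward the measured error map.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_tif(size=21, sigma=3.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    tif = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return tif / tif.sum()                                   # normalized removal function

def lucy_richardson_dwell(error_map, tif, iterations=50, eps=1e-9):
    dwell = np.full_like(error_map, error_map.mean())        # flat initial guess
    tif_flipped = tif[::-1, ::-1]
    for _ in range(iterations):
        removal = fftconvolve(dwell, tif, mode="same")
        ratio = error_map / np.maximum(removal, eps)
        dwell *= fftconvolve(ratio, tif_flipped, mode="same")  # multiplicative update
    return dwell

error_map = np.abs(np.random.default_rng(5).normal(50, 10, (128, 128)))  # synthetic error, nm
dwell = lucy_richardson_dwell(error_map, gaussian_tif())
print(dwell.shape, float(dwell.min()), float(dwell.max()))
```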

  19. Tactical Miniature Crystal Oscillator.

    DTIC Science & Technology

    1980-08-01

    manufactured by this process are expected to require 30 days to achieve minimum aging rates. (4) FUNDAMENTAL CRYSTAL RETRACE MEASUREMENT. An important crystal...considerable measurement time to detect differences and characterize components. Before investing considerable time in a candidate reactive element, a

  20. 43 CFR 46.240 - Establishing time limits for the NEPA process.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...: (1) Set time limits from the start to the finish of the NEPA analysis and documentation, consistent with the requirements of 40 CFR 1501.8 and other legal obligations, including statutory and regulatory timeframes; (2) Consult with cooperating agencies in setting time limits; and (3) Encourage cooperating...

  1. 17 CFR Appendix B to Part 145 - Schedule of Fees

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... professional personnel in searching or reviewing records. (3) When searches require the expertise of a computer... shared access network servers, the computer processing time is included in the search time for the staff... equivalent of two hours of professional search time. (d) Aggregation of requests. For purposes of determining...

  2. 43 CFR 46.240 - Establishing time limits for the NEPA process.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: (1) Set time limits from the start to the finish of the NEPA analysis and documentation, consistent with the requirements of 40 CFR 1501.8 and other legal obligations, including statutory and regulatory timeframes; (2) Consult with cooperating agencies in setting time limits; and (3) Encourage cooperating...

  3. When Time Makes a Difference: Addressing Ergodicity and Complexity in Education

    ERIC Educational Resources Information Center

    Koopmans, Matthijs

    2015-01-01

    The detection of complexity in behavioral outcomes often requires an estimation of their variability over a prolonged time spectrum to assess processes of stability and transformation. Conventional scholarship typically relies on time-independent measures, "snapshots", to analyze those outcomes, assuming that group means and their…

  4. Design of neurophysiologically motivated structures of time-pulse coded neurons

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lobodzinska, Raisa F.

    2009-04-01

    This paper describes a common methodology for the biologically motivated concept of building sensor processing systems with parallel input, picture-operand processing, and time-pulse coding. The advantages of such coding for creating parallel programmable 2D-array structures for next-generation digital computers, which require untraditional numerical systems for processing analog, digital, hybrid, and neuro-fuzzy operands, are shown. Simulation and implementation results for the optoelectronic time-pulse coded intelligent neural elements (OETPCINE) over a wide set of neuro-fuzzy logic operations are considered. The simulation results confirm the engineering advantages, intelligence, and circuit flexibility of OETPCINE for creating advanced 2D structures. The developed equivalentor/nonequivalentor neural element has a power consumption of 10 mW and a processing time of about 10-100 us.

  5. Putting ROSE to Work: A Proposed Application of a Request-Oriented Scheduling Engine for Space Station Operations

    NASA Technical Reports Server (NTRS)

    Jaap, John; Muery, Kim

    2000-01-01

    Scheduling engines are found at the core of software systems that plan and schedule activities and resources. A Request-Oriented Scheduling Engine (ROSE) is one that processes a single request (adding a task to a timeline) and then waits for another request. For the International Space Station, a robust ROSE-based system would support multiple, simultaneous users, each formulating requests (defining scheduling requirements), submitting these requests via the internet to a single scheduling engine operating on a single timeline, and immediately viewing the resulting timeline. ROSE is significantly different from the engine currently used to schedule Space Station operations. The current engine supports essentially one person at a time, with a pre-defined set of requirements from many payloads, working in either a "batch" scheduling mode or an interactive/manual scheduling mode. A planning and scheduling process that takes advantage of the features of ROSE could produce greater customer satisfaction at reduced cost and reduced flow time. This paper describes a possible ROSE-based scheduling process and identifies the additional software component required to support it. Resulting changes to the management and control of the process are also discussed.
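
    The request-oriented idea, namely that each request places one task on a shared timeline or is rejected immediately, can be sketched as follows; the class and method names are invented for illustration and are not the ROSE interface.

```python
# Sketch of a request-oriented scheduling idea: each request tries to place one task on a
# shared timeline and gets an immediate accept/reject. Names are invented, not ROSE's API.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Timeline:
    booked: List[Tuple[int, int, str]] = field(default_factory=list)  # (start, end, task)

    def request(self, task: str, duration: int, earliest: int, latest: int) -> Optional[int]:
        """Place task at the first free slot within [earliest, latest]; None if rejected."""
        for start in range(earliest, latest - duration + 1):
            end = start + duration
            if all(end <= b_start or start >= b_end for b_start, b_end, _ in self.booked):
                self.booked.append((start, end, task))
                return start
        return None   # request rejected; the user may reformulate and resubmit

tl = Timeline()
print(tl.request("crystal_growth", duration=90, earliest=0, latest=300))   # -> 0
print(tl.request("camera_downlink", duration=60, earliest=0, latest=300))  # -> 90
print(tl.request("exercise", duration=200, earliest=0, latest=180))        # -> None
```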

  6. A Framework for Automating Cost Estimates in Assembly Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calton, T.L.; Peters, R.R.

    1998-12-09

    When a product concept emerges, the manufacturing engineer is asked to sketch out a production strategy and estimate its cost. The engineer is given an initial product design, along with a schedule of expected production volumes. The engineer then determines the best approach to manufacturing the product, comparing a variety of alternative production strategies. The engineer must consider capital cost, operating cost, lead-time, and other issues in an attempt to maximize profits. After making these basic choices and sketching the design of overall production, the engineer produces estimates of the required capital, operating costs, and production capacity. This process may iterate as the product design is refined in order to improve its performance or manufacturability. The focus of this paper is on the development of computer tools to aid manufacturing engineers in their decision-making processes. This computer software tool provides a framework in which accurate cost estimates can be seamlessly derived from design requirements at the start of any engineering project. The result is faster cycle times through first-pass success and lower life cycle cost due to requirements-driven design and accurate cost estimates derived early in the process.

  7. Modern Corneal Eye-Banking Using a Software-Based IT Management Solution.

    PubMed

    Kern, C; Kortuem, K; Wertheimer, C; Nilmayer, O; Dirisamer, M; Priglinger, S; Mayer, W J

    2018-01-01

    Increasing government legislation and regulations in manufacturing have led to additional documentation regarding the pharmaceutical product requirements of corneal grafts in the European Union. The aim of this project was to develop a software within a hospital information system (HIS) to support the documentation process, to improve the management of the patient waiting list and to increase informational flow between the clinic and eye bank. After an analysis of the current documentation process, a new workflow and software were implemented in our electronic health record (EHR) system. The software takes over most of the documentation and reduces the time required for record keeping. It guarantees real-time tracing of all steps during human corneal tissue processing from the start of production until allocation during surgery and includes follow-up within the HIS. Moreover, listing of the patient for surgery as well as waiting list management takes place in the same system. The new software for corneal eye banking supports the whole process chain by taking over both most of the required documentation and the management of the transplant waiting list. It may provide a standardized IT-based solution for German eye banks working within the same HIS.

  8. Spacelab Mission Implementation Cost Assessment (SMICA)

    NASA Technical Reports Server (NTRS)

    Guynes, B. V.

    1984-01-01

    A total savings of approximately 20 percent is attainable if: (1) mission management and ground processing schedules are compressed; (2) the equipping, staffing, and operating of the Payload Operations Control Center is revised, and (3) methods of working with experiment developers are changed. The development of a new mission implementation technique, which includes mission definition, experiment development, and mission integration/operations, is examined. The Payload Operations Control Center is to relocate and utilize new computer equipment to produce cost savings. Methods of reducing costs by minimizing the Spacelab and payload processing time during pre- and post-mission operation at KSC are analyzed. The changes required to reduce costs in the analytical integration process are studied. The influence of time, requirements accountability, and risk on costs is discussed. Recommendation for cost reductions developed by the Spacelab Mission Implementation Cost Assessment study are listed.

  9. The impact of a lean rounding process in a pediatric intensive care unit.

    PubMed

    Vats, Atul; Goin, Kristin H; Villarreal, Monica C; Yilmaz, Tuba; Fortenberry, James D; Keskinocak, Pinar

    2012-02-01

    Poor workflow associated with physician rounding can produce inefficiencies that decrease time for essential activities, delay clinical decisions, and reduce staff and patient satisfaction. Workflow and provider resources were not optimized when a pediatric intensive care unit increased by 22,000 square feet (to 33,000) and by nine beds (to 30). Lean methods (focusing on essential processes) and scenario analysis were used to develop and implement a patient-centric standardized rounding process, which we hypothesize would lead to improved rounding efficiency, decrease required physician resources, improve satisfaction, and enhance throughput. Human factors techniques and statistical tools were used to collect and analyze observational data for 11 rounding events before and 12 rounding events after process redesign. Actions included: 1) recording rounding events, times, and patient interactions and classifying them as essential, nonessential, or nonvalue added; 2) comparing rounding duration and time per patient to determine the impact on efficiency; 3) analyzing discharge orders for timeliness; 4) conducting staff surveys to assess improvements in communication and care coordination; and 5) analyzing customer satisfaction data to evaluate impact on patient experience. Thirty-bed pediatric intensive care unit in a children's hospital with academic affiliation. Eight attending pediatric intensivists and their physician rounding teams. Eight attending physician-led teams were observed for 11 rounding events before and 12 rounding events after implementation of a standardized lean rounding process focusing on essential processes. Total rounding time decreased significantly (157 ± 35 mins before vs. 121 ± 20 mins after), through a reduction in time spent on nonessential (53 ± 30 vs. 9 ± 6 mins) activities. The previous process required three attending physicians for an average of 157 mins (7.55 attending physician man-hours), while the new process required two attending physicians for an average of 121 mins (4.03 attending physician man-hours). Cumulative distribution of completed patient rounds by hour of day showed an improvement from 40% to 80% of patients rounded by 9:30 AM. Discharge data showed pediatric intensive care unit patients were discharged an average of 58.05 mins sooner (p < .05). Staff surveys showed a significant increase in satisfaction with the new process (including increased efficiency, improved physician identification, and clearer understanding of process). Customer satisfaction scores showed improvement after implementing the new process. Implementation of a lean-focused, patient-centric rounding structure stressing essential processes was associated with increased timeliness and efficiency of rounds, improved staff and customer satisfaction, improved throughput, and reduced attending physician man-hours.

  10. Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing

    NASA Technical Reports Server (NTRS)

    Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane

    2012-01-01

    Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, for disasters, response requires rapid access to large data volumes, substantial storage space and high performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation is of work being conducted by the Applied Sciences Program Office at NASA-Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data were developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open source software process code on a local prototype platform, and then transitioning this code with associated environment requirements into an analogous, but memory and processor enhanced cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar type data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions, and then applying them to a given cloud-enabled infrastructure to assess and compare environment setup options and enabled technologies. This project reviews findings that were observed when cloud platforms were evaluated for bulk geoprocessing capabilities based on data handling and application development requirements.
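
    The index computations named above reduce to simple band arithmetic once the imagery is loaded; a minimal sketch, assuming red, NIR, and SWIR reflectance bands are already available as arrays (data I/O and the cloud deployment itself are out of scope here):

```python
# Minimal band-math sketch for the indices named above. Assumes red, NIR, and SWIR
# reflectance bands are already loaded as float arrays; reading imagery and the cloud
# deployment are not covered.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def ndmi(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    return (nir - swir) / np.clip(nir + swir, 1e-6, None)

rng = np.random.default_rng(6)
red, nir, swir = (rng.uniform(0.0, 1.0, (512, 512)) for _ in range(3))
stack = np.dstack([ndvi(nir, red), ndmi(nir, swir)])   # simple "band stacking"
print(stack.shape)
```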

  11. A real-time spectrum acquisition system design based on quantum dots-quantum well detector

    NASA Astrophysics Data System (ADS)

    Zhang, S. H.; Guo, F. M.

    2016-01-01

    In this paper, we studied the structural characteristics of a quantum dots-quantum well photodetector with a response wavelength range from 400 nm to 1000 nm. It has high sensitivity, low dark current, and high conductance gain. Based on the properties of quantum dots-quantum well photodetectors, we designed a new type of capacitive transimpedance amplifier (CTIA) readout circuit structure with adjustable gain, wide bandwidth, and high driving ability. We packaged the CTIA-CDS readout circuit with the quantum dot detector and tested the readout response characteristics. According to the timing signal requirements of our readout circuit, we designed a real-time spectral data acquisition system based on an FPGA and an ARM processor. The parallel processing mode of the programmable devices gives the system high sensitivity and a high transmission rate. In addition, we implemented blind-pixel compensation and smoothing-filter processing of the real-time spectrum data in C++. Through fluorescence spectrum measurements of carbon quantum dots, using the signal acquisition system and computer software to collect, process, and analyze the spectrum signal, we verified the excellent characteristics of the detector. The system meets the design requirements of a quantum dot spectrum acquisition system, with short integration time, real-time operation, and portability.
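
    The two post-processing steps mentioned above, blind-pixel compensation and smoothing, can be sketched as follows for a 1-D spectrum; the linear interpolation and moving-average choices are assumptions, and the paper's system implements these steps in C++.

```python
# Sketch of blind-pixel compensation and smoothing for a 1-D spectrum readout.
# The interpolation and moving-average choices are assumptions for illustration.
import numpy as np

def compensate_blind_pixels(spectrum: np.ndarray, blind_mask: np.ndarray) -> np.ndarray:
    """Replace flagged (blind) pixels by interpolating from good neighbours."""
    out = spectrum.astype(float).copy()
    idx = np.arange(spectrum.size)
    out[blind_mask] = np.interp(idx[blind_mask], idx[~blind_mask], out[~blind_mask])
    return out

def smooth(spectrum: np.ndarray, window: int = 5) -> np.ndarray:
    """Simple moving-average smoothing filter."""
    kernel = np.ones(window) / window
    return np.convolve(spectrum, kernel, mode="same")

raw = np.random.default_rng(7).normal(1000, 30, 1024)
blind = np.zeros(1024, dtype=bool)
blind[[100, 101, 700]] = True                    # assumed blind-pixel map from calibration
clean = smooth(compensate_blind_pixels(raw, blind))
print(clean.shape)
```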

  12. Subprimal purchasing and merchandising decisions for pork: relationship to retail yield and fabrication time.

    PubMed

    Lorenzen, C L; Griffin, D B; Dockerty, T R; Walter, J P; Johnson, H K; Savell, J W

    1996-01-01

    Boxed pork was obtained to represent four different purchase specifications (different anatomical separation locations and/or external fat trim levels) common in the pork industry to conduct a study of retail yields and labor requirements. Bone-in loins (n = 180), boneless loins (n = 94), and Boston butts (n = 148) were assigned randomly to fabrication styles within subprimals. When comparing cutting styles within subprimals, it was evident that cutting style affected percentage of retail yield and cutting time. When more bone-in cuts were prepared from bone-in loin subprimals, retail yields ranged from 92.80 +/- .61 to 95.28 +/- .45%, and processing times ranged from 222.57 +/- 10.13 to 318.99 +/- 7.85 s, from the four suppliers. When more boneless cuts were prepared from bone-in loin subprimals, retail yields ranged from 71.12 +/- 1.10 to 77.92 +/- .77% and processing times ranged from 453.49 +/- 8.95 to 631.09 +/- 15.04 s from the different loins. Comparing boneless to bone-in cuts from bone-in loins resulted in lower yields and required greater processing times. Significant variations in yields and times were found within cutting styles. These differences seemed to have been the result of variation in supplier fat trim level and anatomical separation (primarily scribe length).

  13. INO340 telescope control system: middleware requirements, design, and evaluation

    NASA Astrophysics Data System (ADS)

    Shalchian, Hengameh; Ravanmehr, Reza

    2016-07-01

    The INO340 Control System (INOCS) is being designed in terms of a distributed real-time architecture. The real-time (soft and firm) nature of many processes inside INOCS causes the communication paradigm between its different components to be time-critical and sensitive. For this purpose, we have chosen the Data Distribution Service (DDS) standard as the communications middleware which is itself based on the publish-subscribe paradigm. In this paper, we review and compare the main middleware types, and then we illustrate the middleware architecture of INOCS and its specific requirements. Finally, we present the experimental results, performed to evaluate our middleware in order to ensure that it meets our requirements.

  14. The use of artificial intelligence techniques to improve the multiple payload integration process

    NASA Technical Reports Server (NTRS)

    Cutts, Dannie E.; Widgren, Brian K.

    1992-01-01

    A maximum return of science and products with a minimum expenditure of time and resources is a major goal of mission payload integration. A critical component then, in successful mission payload integration is the acquisition and analysis of experiment requirements from the principal investigator and payload element developer teams. One effort to use artificial intelligence techniques to improve the acquisition and analysis of experiment requirements within the payload integration process is described.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litchfield, J.W.; Watts, R.L.; Gurwell, W.E.

    A materials assessment methodology for identifying specific critical material requirements that could hinder the implementation of solar energy has been developed and demonstrated. The methodology involves an initial screening process, followed by a more detailed materials assessment. The detailed assessment considers such materials concerns and constraints as: process and production constraints, reserve and resource limitations, lack of alternative supply sources, geopolitical problems, environmental and energy concerns, time constraints, and economic constraints. Data for 55 bulk and 53 raw materials are currently available on the data base. These materials are required in the example photovoltaic systems. One photovoltaic system and thirteen photovoltaic cells, ten solar heating and cooling systems, and two agricultural and industrial process heat systems have been characterized to define their engineering and bulk material requirements.

  16. A revisit to the requirements methodologies for services provided by the Office of Space Communications

    NASA Technical Reports Server (NTRS)

    Fishkind, Stanley; Harris, Richard N.; Pfeiffer, William A.

    1996-01-01

    The methodologies of the NASA requirements processing system, originally designed to enhance NASA's customer interface and response time, are reviewed. The response of NASA to the problems associated with the system is presented, and it is shown what was done to facilitate the process and to improve customer relations. The requirements generation system (RGS), a computer-supported client-server system, adopted by NASA is presented. The RGS system is configurable on a per-mission basis and can be structured to allow levels of requirements. The details provided concerning the RGS include the recommended configuration, information on becoming an RGS user and network connectivity worksheets for computer users.

  17. Sustaining Public Involvement in Long Range Planning Using a Stakeholder Based Process: A Case Study from Eugene-Springfield, Oregon

    DOT National Transportation Integrated Search

    1998-09-16

    The Intermodal Surface Transportation Efficiency Act requires a proactive public involvement process that provides complete information, timely public notice, full public access to key decisions, and supports early and continuing involvement of...

  18. Investigation of Allan variance for determining noise spectral forms with application to microwave radiometry

    NASA Technical Reports Server (NTRS)

    Stanley, William D.

    1994-01-01

    An investigation of the Allan variance method as a possible means for characterizing fluctuations in radiometric noise diodes has been performed. The goal is to separate fluctuation components into white noise, flicker noise, and random-walk noise. The primary means is by discrete-time processing, and the study focused primarily on the digital processes involved. Noise satisfying the requirements was generated by direct convolution, fast Fourier transformation (FFT) processing in the time domain, and FFT processing in the frequency domain. Some of the numerous results obtained are presented along with the programs used in the study.
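
    For readers unfamiliar with the technique, the non-overlapping Allan variance at averaging time tau = m*dt is half the mean squared difference of successive m-sample bin averages. The following Python sketch (not the study's code) computes it for synthetic white noise, for which the Allan variance should fall off roughly as 1/tau.

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of samples y using bins of m samples."""
    n_bins = len(y) // m
    bin_means = y[:n_bins * m].reshape(n_bins, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(bin_means) ** 2)

# Simulated white-noise record; flicker or random-walk noise would show different slopes.
rng = np.random.default_rng(0)
dt = 1e-3                      # sample interval in seconds
y = rng.normal(size=200_000)
for m in (1, 10, 100, 1000):
    print(f"tau = {m * dt:7.3f} s   sigma_y^2 = {allan_variance(y, m):.3e}")
```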

  19. Masked multichannel analyzer

    DOEpatents

    Winiecki, A.L.; Kroop, D.C.; McGee, M.K.; Lenkszus, F.R.

    1984-01-01

    An analytical instrument and particularly a time-of-flight-mass spectrometer for processing a large number of analog signals irregularly spaced over a spectrum, with programmable masking of portions of the spectrum where signals are unlikely in order to reduce memory requirements and/or with a signal capturing assembly having a plurality of signal capturing devices fewer in number than the analog signals for use in repeated cycles within the data processing time period.

  20. Masked multichannel analyzer

    DOEpatents

    Winiecki, Alan L.; Kroop, David C.; McGee, Marilyn K.; Lenkszus, Frank R.

    1986-01-01

    An analytical instrument and particularly a time-of-flight-mass spectrometer for processing a large number of analog signals irregularly spaced over a spectrum, with programmable masking of portions of the spectrum where signals are unlikely in order to reduce memory requirements and/or with a signal capturing assembly having a plurality of signal capturing devices fewer in number than the analog signals for use in repeated cycles within the data processing time period.

  1. Real-time processing for full-range Fourier-domain optical-coherence tomography with zero-filling interpolation using multiple graphic processing units.

    PubMed

    Watanabe, Yuuki; Maeno, Seiya; Aoshima, Kenji; Hasegawa, Haruyuki; Koseki, Hitoshi

    2010-09-01

    The real-time display of full-range, 2048 axial pixel × 1024 lateral pixel, Fourier-domain optical-coherence tomography (FD-OCT) images is demonstrated. The required speed was achieved by using dual graphic processing units (GPUs) with many stream processors to realize highly parallel processing. We used a zero-filling technique, including a forward Fourier transform, a zero padding to increase the axial data-array size to 8192, an inverse-Fourier transform back to the spectral domain, a linear interpolation from wavelength to wavenumber, a lateral Hilbert transform to obtain the complex spectrum, a Fourier transform to obtain the axial profiles, and a log scaling. The data-transfer time of the frame grabber was 15.73 ms, and the processing time, which includes the data transfer between the GPU memory and the host computer, was 14.75 ms, for a total time shorter than the 36.70 ms frame-interval time using a line-scan CCD camera operated at 27.9 kHz. That is, our OCT system achieved a processed-image display rate of 27.23 frames/s.
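
    The processing chain listed above can be followed step by step on a CPU. The sketch below is a single-frame numpy/scipy illustration of that chain under assumed wavelength-sampling parameters; it is not the authors' GPU implementation and makes no attempt at their real-time performance.

```python
import numpy as np
from scipy.signal import hilbert

def process_frame(spectra, wavelengths):
    """CPU sketch of the zero-filling FD-OCT chain for one frame.

    spectra: (n_lateral, n_axial) raw interferograms sampled evenly in wavelength.
    wavelengths: (n_axial,) sampling wavelengths in metres (assumed values below).
    """
    n_lateral, n_axial = spectra.shape
    n_pad = 8192

    # 1) Forward FFT, zero-pad the spectrum to n_pad, inverse FFT: sinc (zero-filling) interpolation.
    spec_ft = np.fft.fft(spectra, axis=1)
    padded = np.zeros((n_lateral, n_pad), dtype=complex)
    half = n_axial // 2
    padded[:, :half] = spec_ft[:, :half]
    padded[:, -half:] = spec_ft[:, -half:]
    upsampled = np.fft.ifft(padded, axis=1).real * (n_pad / n_axial)

    # 2) Resample from even wavelength spacing to even wavenumber spacing.
    k_old = 2 * np.pi / np.linspace(wavelengths[0], wavelengths[-1], n_pad)
    k_new = np.linspace(k_old.min(), k_old.max(), n_pad)
    resampled = np.array([np.interp(k_new, k_old[::-1], row[::-1]) for row in upsampled])

    # 3) Lateral Hilbert transform gives the complex spectrum for full-range imaging.
    complex_spec = hilbert(resampled, axis=0)

    # 4) Axial FFT and log scaling produce the displayed B-scan.
    ascans = np.fft.fft(complex_spec, axis=1)
    return 20 * np.log10(np.abs(ascans[:, :n_pad // 2]) + 1e-12)

frame = process_frame(np.random.rand(1024, 2048), np.linspace(800e-9, 900e-9, 2048))
print(frame.shape)
```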

  2. Programmable neural processing on a smartdust for brain-computer interfaces.

    PubMed

    Yuwen Sun; Shimeng Huang; Oresko, Joseph J; Cheng, Allen C

    2010-10-01

    Brain-computer interfaces (BCIs) offer tremendous promise for improving the quality of life for disabled individuals. BCIs use spike sorting to identify the source of each neural firing. To date, spike sorting has been performed by either using off-chip analysis, which requires a wired connection penetrating the skull to a bulky external power/processing unit, or via custom application-specific integrated circuits that lack the programmability to perform different algorithms and upgrades. In this research, we propose and test the feasibility of performing on-chip, real-time spike sorting on a programmable smartdust, including feature extraction, classification, compression, and wireless transmission. A detailed power/performance tradeoff analysis using DVFS is presented. Our experimental results show that the execution time and power density meet the requirements to perform real-time spike sorting and wireless transmission on a single neural channel.
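
    As a rough illustration of the on-chip pipeline described (detection, feature extraction, classification), the Python sketch below runs a threshold detector, extracts two cheap features per spike, and assigns each spike to the nearest of two assumed unit templates on a synthetic channel. It is illustrative only and is unrelated to the authors' smartdust firmware.

```python
import numpy as np

def detect_spikes(signal, threshold, window=32):
    """Return fixed-length snippets starting at rising threshold crossings."""
    crossings = np.flatnonzero((signal[1:] > threshold) & (signal[:-1] <= threshold))
    return np.array([signal[i:i + window] for i in crossings if i + window <= len(signal)])

def features(snippets):
    """Two cheap features per spike: peak amplitude and peak position within the snippet."""
    return np.column_stack([snippets.max(axis=1), snippets.argmax(axis=1)])

def classify(feats, centroids):
    """Assign each spike to the nearest unit centroid (nearest-mean classifier)."""
    d = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Synthetic channel: Gaussian noise plus two spike shapes of different amplitude.
rng = np.random.default_rng(1)
signal = rng.normal(0.0, 0.1, 50_000)
for t in rng.choice(49_000, 60, replace=False):
    signal[t:t + 20] += np.hanning(20) * rng.choice([1.0, 2.0])

snips = detect_spikes(signal, threshold=0.5)
feats = features(snips)
centroids = np.array([[1.0, 5.0], [2.0, 5.0]])  # assumed (peak, position) unit templates
print(np.bincount(classify(feats, centroids)))
```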

  3. Measurement of the aerosol absorption coefficient with the nonequilibrium process

    NASA Astrophysics Data System (ADS)

    Li, Liang; Li, Jingxuan; Bai, Hailong; Li, Baosheng; Liu, Shanlin; Zhang, Yang

    2018-02-01

    Building on the conventional Jamin interferometer, an improved measuring method is proposed in which a polarization-type reentrant Jamin interferometer measures the atmospheric aerosol absorption coefficient via the photothermal effect. The paper studies the relationship between the absorption coefficient of atmospheric aerosol particles and the change in the refractive index of the atmosphere. In a Matlab environment, the variation curves of the interferometer output voltage for aerosol samples of different concentrations under stimulated laser irradiation were plotted. The paper also studies the relationship between aerosol concentration and the time required for the photothermal effect to reach equilibrium. The photothermal interferometry results show that this equilibration time increases with increasing aerosol particle concentration, and that during the nonequilibrium process the aerosol absorption coefficient and time vary exponentially.

  4. Parametric Cost Analysis: A Design Function

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1989-01-01

    Parametric cost analysis uses equations to map measurable system attributes into cost. The measures of the system attributes are called metrics. The equations are called cost estimating relationships (CER's), and are obtained by the analysis of cost and technical metric data of products analogous to those to be estimated. Examples of system metrics include mass, power, failure_rate, mean_time_to_repair, energy_consumed, payload_to_orbit, pointing_accuracy, manufacturing_complexity, number_of_fasteners, and percent_of_electronics_weight. The basic assumption is that a measurable relationship exists between system attributes and the cost of the system. If a function exists, the attributes are cost drivers. Candidates for metrics include system requirement metrics and engineering process metrics. Requirements are constraints on the engineering process. From optimization theory we know that any active constraint generates cost by not permitting full optimization of the objective. Thus, requirements are cost drivers. Engineering processes reflect a projection of the requirements onto the corporate culture, engineering technology, and system technology. Engineering processes are an indirect measure of the requirements and, hence, are cost drivers.
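
    A cost estimating relationship is typically fit to data from analogous products; one common form is a power law fit by least squares in log space. The sketch below does this for a single metric (mass) with invented data; the numbers are purely illustrative and are not drawn from the paper.

```python
import numpy as np

# Hypothetical analogous-product data: mass (kg) and cost (arbitrary units).
mass = np.array([120.0, 250.0, 430.0, 610.0, 900.0, 1500.0])
cost = np.array([14.0, 26.0, 41.0, 55.0, 78.0, 120.0])

# Fit the power-law CER  cost = a * mass**b  by least squares in log space.
b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
a = np.exp(log_a)

def cer(mass_kg):
    """Estimated cost from the fitted cost estimating relationship."""
    return a * mass_kg ** b

print(f"cost ~= {a:.3f} * mass^{b:.3f}")
print("estimate for a hypothetical 800 kg system:", round(cer(800.0), 1))
```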

  5. Using Six Sigma to improve once daily gentamicin dosing and therapeutic drug monitoring performance.

    PubMed

    Egan, Sean; Murphy, Philip G; Fennell, Jerome P; Kelly, Sinead; Hickey, Mary; McLean, Carolyn; Pate, Muriel; Kirke, Ciara; Whiriskey, Annette; Wall, Niall; McCullagh, Eddie; Murphy, Joan; Delaney, Tim

    2012-12-01

    Safe, effective therapy with the antimicrobial gentamicin requires good practice in dose selection and monitoring of serum levels. Suboptimal therapy occurs with breakdown in the process of drug dosing, serum blood sampling, laboratory processing and level interpretation. Unintentional underdosing may result. This improvement effort aimed to optimise this process in an academic teaching hospital using Six Sigma process improvement methodology. A multidisciplinary project team was formed. Process measures considered critical to quality were defined, and baseline practice was examined through process mapping and audit. Root cause analysis informed improvement measures. These included a new dosing and monitoring schedule, and standardised assay sampling and drug administration timing which maximised local capabilities. Three iterations of the improvement cycle were conducted over a 24-month period. The attainment of serum level sampling in the required time window improved by 85% (p≤0.0001). A 66% improvement in accuracy of dosing was observed (p≤0.0001). Unnecessary dose omission while awaiting level results and inadvertent disruption to therapy due to dosing and monitoring process breakdown were eliminated. Average daily dose administered increased from 3.39 to 4.78 mg/kg/day. Using Six Sigma methodology enhanced gentamicin usage process performance. Local process related factors may adversely affect adherence to practice guidelines for gentamicin, a drug which is complex to use. It is vital to adapt dosing guidance and monitoring requirements so that they are capable of being implemented in the clinical environment as a matter of routine. Improvement may be achieved through a structured localised approach with multidisciplinary stakeholder involvement.

  6. An architecture for real-time vision processing

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong

    1994-01-01

    To study the feasibility of developing an architecture for real time vision processing, a task queue server and parallel algorithms for two vision operations were designed and implemented on an i860-based Mercury Computing System 860VS array processor. The proposed architecture treats each vision function as a task or set of tasks which may be recursively divided into subtasks and processed by multiple processors coordinated by a task queue server accessible by all processors. Each idle processor subsequently fetches a task and associated data from the task queue server for processing and posts the result to shared memory for later use. Load balancing can be carried out within the processing system without the requirement for a centralized controller. The author concludes that real time vision processing cannot be achieved without both sequential and parallel vision algorithms and a good parallel vision architecture.

  7. Image Processing Using a Parallel Architecture.

    DTIC Science & Technology

    1987-12-01

    This study developed a set of low-level image processing tools on a parallel computer that allows concurrent processing of images... the set of tools offers a significant reduction in the time required to perform some commonly used image processing operations... As a step toward developing these systems, a structured set of image processing tools was implemented using a parallel computer.

  8. Satellite Imagery Production and Processing Using Apache Hadoop

    NASA Astrophysics Data System (ADS)

    Hill, D. V.; Werpy, J.

    2011-12-01

    The United States Geological Survey's (USGS) Earth Resources Observation and Science (EROS) Center Land Science Research and Development (LSRD) project has devised a method to fulfill its processing needs for Essential Climate Variable (ECV) production from the Landsat archive using Apache Hadoop. Apache Hadoop is the distributed processing technology at the heart of many large-scale, processing solutions implemented at well-known companies such as Yahoo, Amazon, and Facebook. It is a proven framework and can be used to process petabytes of data on thousands of processors concurrently. It is a natural fit for producing satellite imagery and requires only a few simple modifications to serve the needs of science data processing. This presentation provides an invaluable learning opportunity and should be heard by anyone doing large scale image processing today. The session will cover a description of the problem space, evaluation of alternatives, feature set overview, configuration of Hadoop for satellite image processing, real-world performance results, tuning recommendations and finally challenges and ongoing activities. It will also present how the LSRD project built a 102 core processing cluster with no financial hardware investment and achieved ten times the initial daily throughput requirements with a full time staff of only one engineer. Satellite Imagery Production and Processing Using Apache Hadoop is presented by David V. Hill, Principal Software Architect for USGS LSRD.

  9. A real-time dashboard for managing pathology processes

    PubMed Central

    Halwani, Fawaz; Li, Wei Chen; Banerjee, Diponkar; Lessard, Lysanne; Amyot, Daniel; Michalowski, Wojtek; Giffen, Randy

    2016-01-01

    Context: The Eastern Ontario Regional Laboratory Association (EORLA) is a newly established association of all the laboratory and pathology departments of Eastern Ontario that currently includes facilities from eight hospitals. All surgical specimens for EORLA are processed in one central location, the Department of Pathology and Laboratory Medicine (DPLM) at The Ottawa Hospital (TOH), where the rapid growth and influx of surgical and cytology specimens has created many challenges in ensuring the timely processing of cases and reports. Although the entire process is maintained and tracked in a clinical information system, this system lacks pre-emptive warnings that can help management address issues as they arise. Aims: Dashboard technology provides automated, real-time visual clues that could be used to alert management when a case or specimen is not being processed within predefined time frames. We describe the development of a dashboard helping pathology clinical management to make informed decisions on specimen allocation and tracking. Methods: The dashboard was designed and developed in two phases, following a prototyping approach. The first prototype of the dashboard helped monitor and manage pathology processes at the DPLM. Results: The use of this dashboard helped to uncover operational inefficiencies and contributed to an improvement of turn-around time within The Ottawa Hospital's DPLM. It also allowed the discovery of additional requirements, leading to a second prototype that provides finer-grained, real-time information about individual cases and specimens. Conclusion: We successfully developed a dashboard that enables managers to address delays and bottlenecks in specimen allocation and tracking. This support ensures that pathology reports are provided within time frame standards required for high-quality patient care. Given the importance of rapid diagnostics for a number of diseases, the use of real-time dashboards within pathology departments could contribute to improving the quality of patient care beyond EORLA's. PMID:27217974

  10. Predictability of process resource usage - A measurement-based study on UNIX

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.; Iyer, Ravishankar K.

    1989-01-01

    A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82 percent of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
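
    The prediction scheme can be illustrated with a toy version: treat each past execution's resource region (cluster label) as a state, estimate transition frequencies between consecutive executions, and predict the next region and its mean CPU time. The Python sketch below uses invented labels and region means; it is a simplification of the paper's model, not its implementation.

```python
from collections import defaultdict

# Hypothetical history of CPU-time "resource regions" (cluster labels) for one program,
# in execution order; the real study derived the regions from off-line cluster analysis.
history = [0, 0, 1, 0, 2, 2, 1, 0, 0, 1, 2, 0]
region_mean_cpu = {0: 0.4, 1: 2.5, 2: 9.0}  # assumed mean CPU seconds per region

# Build empirical transition counts between consecutive executions.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(history, history[1:]):
    counts[prev][nxt] += 1

def predict_next(current_region):
    """Most likely next region and its expected CPU time, from transition frequencies."""
    nxt_counts = counts[current_region]
    nxt = max(nxt_counts, key=nxt_counts.get)
    return nxt, region_mean_cpu[nxt]

print(predict_next(history[-1]))
```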

  11. Predictability of process resource usage: A measurement-based study of UNIX

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.; Iyer, Ravishankar K.

    1987-01-01

    A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.

  12. A framework for software fault tolerance in real-time systems

    NASA Technical Reports Server (NTRS)

    Anderson, T.; Knight, J. C.

    1983-01-01

    A classification scheme for errors and a technique for the provision of software fault tolerance in cyclic real-time systems are presented. The technique requires that the process structure of a system be represented by a synchronization graph which is used by an executive as a specification of the relative times at which processes will communicate during execution. Communication between concurrent processes is severely limited and may only take place between processes engaged in an exchange. A history of error occurrences is maintained by an error handler. When an error is detected, the error handler classifies it using the error history information and then initiates appropriate recovery action.

  13. On-board multispectral classification study

    NASA Technical Reports Server (NTRS)

    Ewalt, D.

    1979-01-01

    The factors relating to onboard multispectral classification were investigated. The functions implemented in ground-based processing systems for current Earth observation sensors were reviewed. The Multispectral Scanner, Thematic Mapper, Return Beam Vidicon, and Heat Capacity Mapper were studied. The concept of classification was reviewed and extended from the ground-based image processing functions to an onboard system capable of multispectral classification. Eight different onboard configurations, each with varying amounts of ground-spacecraft interaction, were evaluated. Each configuration was evaluated in terms of turnaround time, onboard processing and storage requirements, geometric and classification accuracy, onboard complexity, and ancillary data required from the ground.

  14. Preparation of composite materials in space. Volume 2: Technical report

    NASA Technical Reports Server (NTRS)

    Steurer, W. H.; Kaye, S.

    1973-01-01

    A study to define promising materials, significant processing criteria, and the related processing techniques and apparatus for the preparation of composite materials in space was conducted. The study also established a program for zero gravity experiments and the required developmental efforts. The following composite types were considered: (1) metal-base fiber and particle composites, including cemented compacts, (2) controlled density metals, comprising plain and reinforced metal foams, and (3) unidirectionally solidified eutectic alloys. A program of suborbital and orbital experiments for the 1972 to 1978 time period was established to identify materials, processes, and required experiment equipment.

  15. Scheduling Software for Complex Scenarios

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Preparing a vehicle and its payload for a single launch is a complex process that involves thousands of operations. Because the equipment and facilities required to carry out these operations are extremely expensive and limited in number, optimal assignment and efficient use are critically important. Overlapping missions that compete for the same resources, ground rules, safety requirements, and the unique needs of processing vehicles and payloads destined for space impose numerous constraints that, when combined, require advanced scheduling. Traditional scheduling systems use simple algorithms and criteria when selecting activities and assigning resources and times to each activity. Schedules generated by these simple decision rules are, however, frequently far from optimal. To resolve mission-critical scheduling issues and predict possible problem areas, NASA historically relied upon expert human schedulers who used their judgment and experience to determine where things should happen, whether they will happen on time, and whether the requested resources are truly necessary.

  16. Analysis of Forgery Attack on One-Time Proxy Signature and the Improvement

    NASA Astrophysics Data System (ADS)

    Wang, Tian-Yin; Wei, Zong-Li

    2016-02-01

    In a recent paper, Yang et al. (Quant. Inf. Process. 13(9), 2007-2016, 2014) analyzed the security of one-time proxy signature scheme Wang and Wei (Quant. Inf. Process. 11(2), 455-463, 2012) and pointed out that it cannot satisfy the security requirements of unforgeability and undeniability because an eavesdropper Eve can forge a valid proxy signature on a message chosen by herself. However, we find that the so-called proxy message-signature pair forged by Eve is issued by the proxy signer in fact, and anybody can obtain it as a requester, which means that the forgery attack is not considered as a successful attack. Therefore, the conclusion that this scheme cannot satisfy the security requirements of proxy signature against forging and denying is not appropriate in this sense. Finally, we study the reason for the misunderstanding and clarify the security requirements for proxy signatures.

  17. Requirements Development Issues for Advanced Life Support Systems: Solid Waste Management

    NASA Technical Reports Server (NTRS)

    Levri, Julie A.; Fisher, John W.; Alazraki, Michael P.; Hogan, John A.

    2002-01-01

    Long duration missions pose substantial new challenges for solid waste management in Advanced Life Support (ALS) systems. These possibly include storing large volumes of waste material in a safe manner, rendering wastes stable or sterilized for extended periods of time, and/or processing wastes for recovery of vital resources. This is further complicated because future missions remain ill-defined with respect to waste stream quantity, composition and generation schedule. Without definitive knowledge of this information, development of requirements is hampered. Additionally, even if waste streams were well characterized, other operational and processing needs require clarification (e.g. resource recovery requirements, planetary protection constraints). Therefore, the development of solid waste management (SWM) subsystem requirements for long duration space missions is an inherently uncertain, complex and iterative process. The intent of this paper is to address some of the difficulties in writing requirements for missions that are not completely defined. This paper discusses an approach and motivation for ALS SWM requirements development, the characteristics of effective requirements, and the presence of those characteristics in requirements that are developed for uncertain missions. Associated drivers for life support system technological capability are also presented. A general means of requirements forecasting is discussed, including successive modification of requirements and the need to consider requirements integration among subsystems.

  18. Dogs cannot bark: event-related brain responses to true and false negated statements as indicators of higher-order conscious processing.

    PubMed

    Herbert, Cornelia; Kübler, Andrea

    2011-01-01

    The present study investigated event-related brain potentials elicited by true and false negated statements to evaluate if discrimination of the truth value of negated information relies on conscious processing and requires higher-order cognitive processing in healthy subjects across different levels of stimulus complexity. The stimulus material consisted of true and false negated sentences (sentence level) and prime-target expressions (word level). Stimuli were presented acoustically and no overt behavioral response of the participants was required. Event-related brain potentials to target words preceded by true and false negated expressions were analyzed both within group and at the single subject level. Across the different processing conditions (word pairs and sentences), target words elicited a frontal negativity and a late positivity in the time window from 600-1000 msec post target word onset. Amplitudes of both brain potentials varied as a function of the truth value of the negated expressions. Results were confirmed at the single-subject level. In sum, our results support recent suggestions according to which evaluation of the truth value of a negated expression is a time- and cognitively demanding process that cannot be solved automatically, and thus requires conscious processing. Our paradigm provides insight into higher-order processing related to language comprehension and reasoning in healthy subjects. Future studies are needed to evaluate if our paradigm also proves sensitive for the detection of consciousness in non-responsive patients.

  19. Ikonos Imagery Product Nonuniformity Assessment

    NASA Technical Reports Server (NTRS)

    Ryan, Robert; Zanoni, Vicki; Pagnutti, Mary; Holekamp, Kara; Smith, Charles

    2002-01-01

    During the early stages of the NASA Scientific Data Purchase (SDP) program, three approximately equal vertical stripes were observable in the IKONOS imagery of highly spatially uniform sites. Although these effects appeared to be less than a few percent of the mean signal, several investigators requested new imagery. Over time, Space Imaging updated its processing to minimize these artifacts. This, however, produced differences in Space Imaging products derived from archive imagery processed at different times. Imagery processed before 2/22/01 is processed with one set of coefficients, while imagery processed after that date requires another set. Space Imaging produces its products from raw imagery, so changes in the ground processing over time can change the delivered digital number (DN) values, even for identical orders of a previously acquired scene. NASA Stennis initiated studies to investigate the magnitude and changes in these artifacts over the lifetime of the system and before and after processing updates.

  20. Motor Effort Alters Changes of Mind in Sensorimotor Decision Making

    PubMed Central

    Burk, Diana; Ingram, James N.; Franklin, David W.; Shadlen, Michael N.; Wolpert, Daniel M.

    2014-01-01

    After committing to an action, a decision-maker can change their mind to revise the action. Such changes of mind can even occur when the stream of information that led to the action is curtailed at movement onset. This is explained by the time delays in sensory processing and motor planning which lead to a component at the end of the sensory stream that can only be processed after initiation. Such post-initiation processing can explain the pattern of changes of mind by asserting an accumulation of additional evidence to a criterion level, termed change-of-mind bound. Here we test the hypothesis that physical effort associated with the movement required to change one's mind affects the level of the change-of-mind bound and the time for post-initiation deliberation. We varied the effort required to change from one choice target to another in a reaching movement by varying the geometry of the choice targets or by applying a force field between the targets. We show that there is a reduction in the frequency of change of mind when the separation of the choice targets would require a larger excursion of the hand from the initial to the opposite choice. The reduction is best explained by an increase in the evidence required for changes of mind and a reduced time period of integration after the initial decision. Thus the criterion to revise an initial choice is sensitive to energetic costs. PMID:24651615

  1. Additive Manufacturing in Production: A Study Case Applying Technical Requirements

    NASA Astrophysics Data System (ADS)

    Ituarte, Iñigo Flores; Coatanea, Eric; Salmi, Mika; Tuomi, Jukka; Partanen, Jouni

    Additive manufacturing (AM) is expanding the manufacturing capabilities. However, quality of AM produced parts is dependent on a number of machine, geometry and process parameters. The variability of these parameters affects the manufacturing drastically and therefore standardized processes and harmonized methodologies need to be developed to characterize the technology for end use applications and enable the technology for manufacturing. This research proposes a composite methodology integrating Taguchi Design of Experiments, multi-objective optimization and statistical process control, to optimize the manufacturing process and fulfil multiple requirements imposed on an arbitrary geometry. The proposed methodology aims to characterize AM technology depending upon manufacturing process variables as well as to perform a comparative assessment of three AM technologies (Selective Laser Sintering, Laser Stereolithography and Polyjet). Results indicate that only one machine, laser-based Stereolithography, was able to simultaneously fulfil macro- and micro-level geometrical requirements, but mechanical properties were not at the required level. Future research will study a single AM system at a time to characterize AM machine technical capabilities and stimulate pre-normative initiatives of the technology for end use applications.

  2. A Geometric Analysis to Protect Manned Assets from Newly Launched Objects - Cola Gap Analysis

    NASA Technical Reports Server (NTRS)

    Hametz, Mark E.; Beaver, Brian A.

    2013-01-01

    A safety risk was identified for the International Space Station (ISS) by The Aerospace Corporation, where the ISS would be unable to react to a conjunction with a newly launched object following the end of the launch Collision Avoidance (COLA) process. Once an object is launched, there is a finite period of time required to track, catalog, and evaluate that new object as part of standard on-orbit COLA screening processes. Additionally, should a conjunction be identified, there is an additional period of time required to plan and execute a collision avoidance maneuver. While the computed prelaunch probability of collision with any object is extremely low, NASA/JSC has requested that all US launches take additional steps to protect the ISS during this "COLA gap" period. This paper details a geometric-based COLA gap analysis method developed by the NASA Launch Services Program to determine if launch window cutouts are required to mitigate this risk. Additionally, this paper presents the results of several missions where this process has been used operationally.

  3. Effect of input data variability on estimations of the equivalent constant temperature time for microbial inactivation by HTST and retort thermal processing.

    PubMed

    Salgado, Diana; Torres, J Antonio; Welti-Chanes, Jorge; Velazquez, Gonzalo

    2011-08-01

    Consumer demand for food safety and quality improvements, combined with new regulations, requires determining the processor's confidence level that processes lowering safety risks while retaining quality will meet consumer expectations and regulatory requirements. Monte Carlo calculation procedures incorporate input data variability to obtain the statistical distribution of the output of prediction models. This advantage was used to analyze the survival risk of Mycobacterium avium subspecies paratuberculosis (M. paratuberculosis) and Clostridium botulinum spores in high-temperature short-time (HTST) milk and canned mushrooms, respectively. The results showed an estimated 68.4% probability that the 15 sec HTST process would not achieve at least 5 decimal reductions in M. paratuberculosis counts. Although estimates of the raw milk load of this pathogen are not available to estimate the probability of finding it in pasteurized milk, the wide range of the estimated decimal reductions, reflecting the variability of the experimental data available, should be a concern to dairy processors. Knowledge of the C. botulinum initial load and decimal thermal time variability was used to estimate an 8.5 min thermal process time at 110 °C for canned mushrooms reducing the risk to 10⁻⁹ spores/container with a 95% confidence. This value was substantially higher than the one estimated using average values (6.0 min) with an unacceptable 68.6% probability of missing the desired processing objective. Finally, the benefit of reducing the variability in initial load and decimal thermal time was confirmed, achieving a 26.3% reduction in processing time when standard deviation values were lowered by 90%. In spite of novel technologies, commercialized or under development, thermal processing continues to be the most reliable and cost-effective alternative to deliver safe foods. However, the severity of the process should be assessed to avoid under- and over-processing and determine opportunities for improvement. This should include a systematic approach to consider variability in the parameters for the models used by food process engineers when designing a thermal process. The Monte Carlo procedure here presented is a tool to facilitate this task for the determination of process time at a constant lethal temperature.
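
    The Monte Carlo procedure for a constant-temperature hold can be illustrated in a few lines: sample the decimal reduction time D from an assumed distribution, compute the decimal reductions t/D achieved by the hold, and report the probability of falling short of the target. The distribution and parameters below are invented for illustration and are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Hypothetical variability in the decimal reduction time D (seconds) at the HTST hold
# temperature, modelled as a lognormal distribution (illustrative values only).
d_values = rng.lognormal(mean=np.log(2.5), sigma=0.45, size=n_trials)

hold_time = 15.0                        # seconds at the constant lethal temperature
log_reductions = hold_time / d_values   # decimal reductions achieved in each trial

p_short = np.mean(log_reductions < 5.0)
print(f"P(fewer than 5 decimal reductions) ~= {p_short:.3f}")
print(f"5th-95th percentile of decimal reductions: "
      f"{np.percentile(log_reductions, 5):.1f} - {np.percentile(log_reductions, 95):.1f}")
```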

  4. Autogen Version 2.0

    NASA Technical Reports Server (NTRS)

    Gladden, Roy

    2007-01-01

    Version 2.0 of the autogen software has been released. "Autogen" (automated sequence generation) signifies both a process and software used to implement the process of automated generation of sequences of commands in a standard format for uplink to spacecraft. Autogen requires fewer workers than are needed for older manual sequence-generation processes and reduces sequence-generation times from weeks to minutes.

  5. Delimbing hybrid poplar prior to processing with a flail/chipper

    Treesearch

    Bruce Hartsough; Raffaele Spinelli; Steve Pottle

    2000-01-01

    Processing whole trees into pulp chips with chain flail delimber/debarker/chippers (DDCs) is costly. Production rates of DDCs are limited by the residence time required to remove limbs and bark. Using a pull-through delimber, we delimbed trees prior to flailing and chipping, with the objective of speeding up the latter processes. Pre-delimbing increased the...

  6. Delimbing hybrid poplar prior to processing with a flail/chipper

    Treesearch

    Bruce R. Hartsough; Raffaele Spinelli; Steve J. Pottle

    2002-01-01

    Processing whole trees into pulp chips with chain flail delimber/debarker/chippers (DDCs) is costly. Production rates of DDCs are limited by the residence time required to remove limbs and bark. Using a pull-through delimber, we delimbed trees prior to flailing and chipping, with the objective of speeding up the latter processes. Pre-delimbing increased the...

  7. A Different Approach to Studying the Charge and Discharge of a Capacitor without an Oscilloscope

    ERIC Educational Resources Information Center

    Ladino, L. A.

    2013-01-01

    A different method to study the charging and discharging processes of a capacitor is presented. The method only requires a high impedance voltmeter. The charging and discharging processes of a capacitor are usually studied experimentally using an oscilloscope and, therefore, both processes are studied as a function of time. The approach presented…

  8. Fast Pixel Buffer For Processing With Lookup Tables

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.

    1992-01-01

    Proposed scheme for buffering data on intensities of picture elements (pixels) of an image increases rate of processing beyond that attainable when data are read, one pixel at a time, from main image memory. Scheme applied in design of specialized image-processing circuitry. Intended to optimize performance of processor in which electronic equivalent of address-lookup table is used to address those pixels in main image memory required for processing.

  9. A novel time-domain signal processing algorithm for real time ventricular fibrillation detection

    NASA Astrophysics Data System (ADS)

    Monte, G. E.; Scarone, N. C.; Liscovsky, P. O.; Rotter S/N, P.

    2011-12-01

    This paper presents an application of a novel algorithm for real-time detection of ECG pathologies, especially ventricular fibrillation. It is based on a segmentation and labeling process applied to an oversampled signal. After this treatment, analyzing the sequence of segments yields global signal behaviours, much as a human reader would. The entire process can be seen as morphological filtering after a smart data sampling. The algorithm does not require any ECG digital signal pre-processing, and its computational cost is low, so it can be embedded into sensors for wearable and permanent applications. The proposed segment description could also serve as the input to expert systems or artificial intelligence software for detecting other pathologies.
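
    A toy version of the segment-and-label idea is sketched below: the signal is split into maximal rising and falling runs, each labeled with its direction, length, and amplitude, and the segment statistics differ visibly between a regular rhythm and a continuous, disorganised oscillation. This is only an illustration of the general approach, not the published detector.

```python
import numpy as np

def monotone_segments(x):
    """Split a signal into maximal rising/falling runs and label each one.

    Returns (label, start_index, length, amplitude) tuples, with '+' for rising
    and '-' for falling runs.
    """
    sign = np.sign(np.diff(x))
    segments, start = [], 0
    for i in range(1, len(sign)):
        if sign[i] != 0 and sign[i] != sign[start]:
            seg = x[start:i + 1]
            segments.append(('+' if sign[start] >= 0 else '-', start,
                             i + 1 - start, float(seg.max() - seg.min())))
            start = i
    seg = x[start:]
    segments.append(('+' if sign[start] >= 0 else '-', start,
                     len(x) - start, float(seg.max() - seg.min())))
    return segments

# Synthetic example: sparse regular peaks versus a frequency-wobbling oscillation.
t = np.linspace(0.0, 2.0, 2000)
regular = np.sin(2 * np.pi * 1.2 * t) ** 15
chaotic = np.sin(2 * np.pi * (4 + 2 * np.sin(7 * t)) * t)

for name, sig in (("regular", regular), ("chaotic", chaotic)):
    segs = monotone_segments(sig)
    print(name, "segments:", len(segs),
          "median length:", int(np.median([s[2] for s in segs])))
```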

  10. Application of enhanced modern structured analysis techniques to Space Station Freedom electric power system requirements

    NASA Technical Reports Server (NTRS)

    Biernacki, John; Juhasz, John; Sadler, Gerald

    1991-01-01

    A team of Space Station Freedom (SSF) system engineers are in the process of extensive analysis of the SSF requirements, particularly those pertaining to the electrical power system (EPS). The objective of this analysis is the development of a comprehensive, computer-based requirements model, using an enhanced modern structured analysis methodology (EMSA). Such a model provides a detailed and consistent representation of the system's requirements. The process outlined in the EMSA methodology is unique in that it allows the graphical modeling of real-time system state transitions, as well as functional requirements and data relationships, to be implemented using modern computer-based tools. These tools permit flexible updating and continuous maintenance of the models. Initial findings resulting from the application of EMSA to the EPS have benefited the space station program by linking requirements to design, providing traceability of requirements, identifying discrepancies, and fostering an understanding of the EPS.

  11. 42 CFR 457.1180 - Program specific review process: Notice.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... SERVICES (CONTINUED) STATE CHILDREN'S HEALTH INSURANCE PROGRAMS (SCHIPs) ALLOTMENTS AND GRANTS TO STATES State Plan Requirements: Applicant and Enrollee Protections § 457.1180 Program specific review process... explanation of applicable rights to review of that determination, the standard and expedited time frames for...

  12. 42 CFR 457.1180 - Program specific review process: Notice.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... SERVICES (CONTINUED) STATE CHILDREN'S HEALTH INSURANCE PROGRAMS (SCHIPs) ALLOTMENTS AND GRANTS TO STATES State Plan Requirements: Applicant and Enrollee Protections § 457.1180 Program specific review process... explanation of applicable rights to review of that determination, the standard and expedited time frames for...

  13. 42 CFR 457.1180 - Program specific review process: Notice.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) STATE CHILDREN'S HEALTH INSURANCE PROGRAMS (SCHIPs) ALLOTMENTS AND GRANTS TO STATES State Plan Requirements: Applicant and Enrollee Protections § 457.1180 Program specific review process... explanation of applicable rights to review of that determination, the standard and expedited time frames for...

  14. 42 CFR 457.1180 - Program specific review process: Notice.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... SERVICES (CONTINUED) STATE CHILDREN'S HEALTH INSURANCE PROGRAMS (SCHIPs) ALLOTMENTS AND GRANTS TO STATES State Plan Requirements: Applicant and Enrollee Protections § 457.1180 Program specific review process... explanation of applicable rights to review of that determination, the standard and expedited time frames for...

  15. 42 CFR 457.1180 - Program specific review process: Notice.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) STATE CHILDREN'S HEALTH INSURANCE PROGRAMS (SCHIPs) ALLOTMENTS AND GRANTS TO STATES State Plan Requirements: Applicant and Enrollee Protections § 457.1180 Program specific review process... explanation of applicable rights to review of that determination, the standard and expedited time frames for...

  16. 21 CFR 113.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... steam into the closed retort and the time when the retort reaches the required processing temperature..., school, penal, or other organization) processing of food, including pet food. Persons engaged in the... flames to achieve sterilization temperatures. A holding period in a heated section may follow the initial...

  17. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional segmentation techniques cannot meet the processing and storage requirements of massive remote sensing imagery. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building an inexpensive and efficient computer cluster that implements the mean shift segmentation algorithm in parallel using the MapReduce model. The approach preserves segmentation quality, improves segmentation speed, and better meets real-time requirements. The MapReduce-based parallel mean shift segmentation algorithm thus demonstrates practical significance and value.
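
    The core mean shift iteration that such a cluster parallelizes is itself compact: each feature point is repeatedly moved to the mean of the original points lying within a bandwidth of it. The serial numpy sketch below runs it on a tiny synthetic image in a joint (row, column, grey value) feature space; the MapReduce tiling and Hadoop configuration described in the paper are not reproduced here.

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=10):
    """Flat-kernel mean shift: repeatedly move each point to the mean of the
    original points that lie within `bandwidth` of its current position."""
    shifted = points.astype(np.float64).copy()
    for _ in range(n_iter):
        for i, p in enumerate(shifted):
            mask = np.linalg.norm(points - p, axis=1) < bandwidth
            shifted[i] = points[mask].mean(axis=0)
    return shifted

# Tiny synthetic "image": 20x20 pixels in two grey-level regions; per-pixel features
# are (row, col, grey) with the spatial coordinates scaled to [0, 1].
rng = np.random.default_rng(0)
grey = np.concatenate([rng.normal(0.2, 0.05, 200), rng.normal(0.8, 0.05, 200)])
rows = np.repeat(np.arange(20), 20) / 20.0
cols = np.tile(np.arange(20), 20) / 20.0
features = np.column_stack([rows, cols, grey])

modes = mean_shift(features, bandwidth=0.3)
labels = np.round(modes[:, 2], 1)   # crude grouping of converged modes by grey level
print("approximate number of segments:", len(np.unique(labels)))
```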

  18. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.

  19. Power and Performance Trade-offs for Space Time Adaptive Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino

    Computational efficiency – performance relative to power or energy – is one of the most important concerns when designing RADAR processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations for CUDA and OpenMP on two computationally efficient architectures, Intel Haswell Core I7-4770TE and NVIDIA Kayla with a GK208 GPU. We analyze the power and performance of STAP’s computationally intensive kernels across the two hardware testbeds. We also show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for efficient implementation on the Haswell CPU architecture. The GPU architecture is able to process large size data sets without increase in power requirement. The use of shared memory has a significant impact on the power requirement for the GPU. A balance between the use of shared memory and main memory access leads to an improved performance in a typical STAP application.

  20. Data acquisition for a real time fault monitoring and diagnosis knowledge-based system for space power system

    NASA Technical Reports Server (NTRS)

    Wilhite, Larry D.; Lee, S. C.; Lollar, Louis F.

    1989-01-01

    The design and implementation of the real-time data acquisition and processing system employed in the AMPERES project is described, including effective data structures for efficient storage and flexible manipulation of the data by the knowledge-based system (KBS), the interprocess communication mechanism required between the data acquisition system and the KBS, and the appropriate data acquisition protocols for collecting data from the sensors. Sensor data are categorized as critical or noncritical data on the basis of the inherent frequencies of the signals and the diagnostic requirements reflected in their values. The critical data set contains 30 analog values and 42 digital values and is collected every 10 ms. The noncritical data set contains 240 analog values and is collected every second. The collected critical and noncritical data are stored in separate circular buffers. Buffers are created in shared memory to enable other processes, i.e., the fault monitoring and diagnosis process and the user interface process, to freely access the data sets.

  1. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  2. Programming and reprogramming sequence timing following high and low contextual interference practice.

    PubMed

    Wright, David L; Magnuson, Curt E; Black, Charles B

    2005-09-01

    Individuals practiced two unique discrete sequence production tasks that differed in their relative time profile in either a blocked or random practice schedule. Each participant was subsequently administered a "precuing" protocol to examine the cost of initially compiling or modifying the plan for an upcoming movement's relative timing. The findings indicated that, in general, random practice facilitated the programming of the required movement timing, and this was accomplished while exhibiting greater accuracy in movement production. Participants exposed to random practice exhibited the greatest motor programming benefit, when a modification to an already prepared movement timing profile was required. When movement timing was only partially constructed prior to the imperative signal, the individuals who were trained in blocked and random practice formats accrued a similar cost to complete the programming process. These data provide additional support for the recent claim of Immink & Wright (2001) that at least some of the benefit from experience in a random as opposed to blocked training context can be localized to superior development and implementation of the motor programming process before executing the movement.

  3. 24 CFR 3286.307 - Process for obtaining trainer's qualification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... verification of the experience required in § 3286.305. This verification may be in the form of statements by past or present employers or a self-certification that the applicant meets those experience requirements, but HUD may contact the applicant for additional verification at any time. The applicant must...

  4. 32 CFR Enclosure 1 - Requirements for Environmental Considerations-Global Commons

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the responsible decision-making official to be informed of pertinent environmental considerations. The... making an appropriate record with respect to this requirement is for the decision-maker to sign and date...-making process. Other means of making an appropriate record are also acceptable. 9. Timing. No decision...

  5. Learning Analytics Platform, towards an Open Scalable Streaming Solution for Education

    ERIC Educational Resources Information Center

    Lewkow, Nicholas; Zimmerman, Neil; Riedesel, Mark; Essa, Alfred

    2015-01-01

    Next generation digital learning environments require delivering "just-in-time feedback" to learners and those who support them. Unlike traditional business intelligence environments, streaming data requires resilient infrastructure that can move data at scale from heterogeneous data sources, process the data quickly for use across…

  6. Cognate Facilitation in Sentence Context--Translation Production by Interpreting Trainees and Non-Interpreting Trilinguals

    ERIC Educational Resources Information Center

    Lijewska, Agnieszka; Chmiel, Agnieszka

    2015-01-01

    Conference interpreters form a special case of language users because the simultaneous interpretation practice requires very specific lexical processing. Word comprehension and production in respective languages is performed under strict time constraints and requires constant activation of the involved languages. The present experiment aimed at…

  7. Educational Computer Utilization and Computer Communications.

    ERIC Educational Resources Information Center

    Singh, Jai P.; Morgan, Robert P.

    As part of an analysis of educational needs and telecommunications requirements for future educational satellite systems, three studies were carried out. 1) The role of the computer in education was examined and both current status and future requirements were analyzed. Trade-offs between remote time sharing and remote batch process were explored…

  8. Biocatalysis in the Pharmaceutical Industry: The Need for Speed

    PubMed Central

    2017-01-01

    The use of biocatalysis in the pharmaceutical industry continues to expand as a result of increased access to enzymes and the ability to engineer those enzymes to meet the demands of industrial processes. However, we are still just scratching the surface of potential biocatalytic applications. The time pressures present in pharmaceutical process development are incompatible with the long lead times required for engineering a suitable biocatalyst. Dramatic increases in the speed of protein engineering are needed to deliver on the ever increasing opportunities for industrial biocatalytic processes. PMID:28523096

  9. Biocatalysis in the Pharmaceutical Industry: The Need for Speed.

    PubMed

    Truppo, Matthew D

    2017-05-11

    The use of biocatalysis in the pharmaceutical industry continues to expand as a result of increased access to enzymes and the ability to engineer those enzymes to meet the demands of industrial processes. However, we are still just scratching the surface of potential biocatalytic applications. The time pressures present in pharmaceutical process development are incompatible with the long lead times required for engineering a suitable biocatalyst. Dramatic increases in the speed of protein engineering are needed to deliver on the ever increasing opportunities for industrial biocatalytic processes.

  10. Climate Observing Systems: Where are we and where do we need to be in the future

    NASA Astrophysics Data System (ADS)

    Baker, B.; Diamond, H. J.

    2017-12-01

    Climate research and monitoring requires an observational strategy that blends long-term, carefully calibrated measurements with short-term, focused process studies. The operation and implementation of climate observing networks and the provision of related climate services both have a significant role to play in assisting the development of national climate adaptation policies and in facilitating national economic development. Climate observing systems will require a strong research element for a long time to come. This requires improved observations of the state variables and the ability to set them in a coherent physical (as well as chemical and biological) framework with models. Climate research and monitoring requires an integrated strategy of land/ocean/atmosphere observations, including both in situ and remote sensing platforms, together with modeling and analysis. It is clear that we still need more research and analysis on climate processes, sampling strategies, and processing algorithms.

  11. Enhanced Time Out: An Improved Communication Process.

    PubMed

    Nelson, Patricia E

    2017-06-01

    An enhanced time out is an improved communication process initiated to prevent such surgical errors as wrong-site, wrong-procedure, or wrong-patient surgery. The enhanced time out at my facility mandates participation from all members of the surgical team and requires designated members to respond to specified time out elements on the surgical safety checklist. The enhanced time out incorporated at my facility expands upon the safety measures from the World Health Organization's surgical safety checklist and ensures that all personnel involved in a surgical intervention perform a final check of relevant information. Initiating the enhanced time out at my facility was intended to improve communication and teamwork among surgical team members and provide a highly reliable safety process to prevent wrong-site, wrong-procedure, and wrong-patient surgery. Copyright © 2017 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  12. StakeMeter: value-based stakeholder identification and quantification framework for value-based software systems.

    PubMed

    Babar, Muhammad Imran; Ghazali, Masitah; Jawawi, Dayang N A; Bin Zaheer, Kashif

    2015-01-01

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for VBS systems is highly desirable. The innovative or value-based idea is realized on the basis of the stakeholder requirements. The quality of a VBS system depends on a concrete set of valuable requirements, and valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of VBS systems; their attention to valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for VBS systems. The existing approaches are time-consuming, complex, and inconsistent, which makes the initiation process difficult. Moreover, a main motivation for this research is that the existing SIQ approaches do not provide low-level implementation details for SIQ initiation or stakeholder metrics for quantification. Hence, in view of these SIQ problems, this research contributes a new SIQ framework called 'StakeMeter'. The StakeMeter framework is verified and validated through case studies. Unlike the other methods, the proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria, and an application procedure. The proposed framework addresses the issues of stakeholder quantification or prioritization, high time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for VBS systems with less judgmental error.

  13. 75 FR 68702 - Regulation SHO

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-09

    ... extended compliance period will give industry participants additional time for programming and testing for... time for programming and testing for compliance with the Rule's requirements. We have been informed that there have been some delays in the programming process, due in part to certain information, which...

  14. q-Space Deep Learning: Twelve-Fold Shorter and Model-Free Diffusion MRI Scans.

    PubMed

    Golkov, Vladimir; Dosovitskiy, Alexey; Sperl, Jonathan I; Menzel, Marion I; Czisch, Michael; Samann, Philipp; Brox, Thomas; Cremers, Daniel

    2016-05-01

    Numerous scientific fields rely on elaborate but partly suboptimal data processing pipelines. An example is diffusion magnetic resonance imaging (diffusion MRI), a non-invasive microstructure assessment method with a prominent application in neuroimaging. Advanced diffusion models providing accurate microstructural characterization so far have required long acquisition times and thus have been inapplicable for children and adults who are uncooperative, uncomfortable, or unwell. We show that the long scan time requirements are mainly due to disadvantages of classical data processing. We demonstrate how deep learning, a group of algorithms based on recent advances in the field of artificial neural networks, can be applied to reduce diffusion MRI data processing to a single optimized step. This modification allows obtaining scalar measures from advanced models at twelve-fold reduced scan time and detecting abnormalities without using diffusion models. We set a new state of the art by estimating diffusion kurtosis measures from only 12 data points and neurite orientation dispersion and density measures from only 8 data points. This allows unprecedentedly fast and robust protocols facilitating clinical routine and demonstrates how classical data processing can be streamlined by means of deep learning.

  15. Real-Time Plasma Process Condition Sensing and Abnormal Process Detection

    PubMed Central

    Yang, Ryan; Chen, Rongshun

    2010-01-01

    The plasma process is often used in the fabrication of semiconductor wafers. However, the lack of real-time etching control can result in unacceptable process performance and thus lead to significant waste and lower wafer yield. In order to maximize product wafer yield, timely and accurate detection of process faults or abnormalities in a plasma reactor is needed. Optical emission spectroscopy (OES) is one of the most frequently used metrologies for in-situ process monitoring. Although OES has the advantage of non-invasiveness, it produces a huge amount of information, so the analysis of OES data becomes a major challenge. To accomplish real-time detection, this work employed the sigma matching technique, which operates on the time series of OES full-spectrum intensity. First, a response model of a healthy plasma spectrum was developed. Then, a matching rate was defined as an indicator for comparing the difference between a tested wafer's response and the healthy sigma model. The experimental results showed that the proposed method can detect process faults in real time, even in plasma etching tools. PMID:22219683
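
    As a rough illustration of this kind of health-model comparison, the hedged sketch below builds a per-wavelength, per-time mean ± k·sigma band from healthy runs and reports the fraction of a test wafer's spectral samples that fall inside it. The exact definitions of the paper's sigma matching method and matching rate are not given in the abstract, so the band construction, the flagging threshold idea, and all names here are illustrative assumptions.

    ```python
    # Hedged sketch of a matching-rate style check for OES time-series data.
    import numpy as np

    def build_health_model(healthy_runs, k=3.0):
        """healthy_runs: array (n_runs, n_times, n_wavelengths) of OES intensities."""
        mean = healthy_runs.mean(axis=0)
        sigma = healthy_runs.std(axis=0)
        return mean - k * sigma, mean + k * sigma

    def matching_rate(test_run, lower, upper):
        """Fraction of (time, wavelength) samples inside the healthy band."""
        inside = (test_run >= lower) & (test_run <= upper)
        return inside.mean()

    rng = np.random.default_rng(0)
    healthy = rng.normal(100.0, 5.0, size=(20, 50, 128))    # simulated healthy runs
    lower, upper = build_health_model(healthy)

    good_wafer = rng.normal(100.0, 5.0, size=(50, 128))
    bad_wafer = rng.normal(120.0, 5.0, size=(50, 128))       # shifted spectrum = fault
    print(matching_rate(good_wafer, lower, upper))            # close to 1.0
    print(matching_rate(bad_wafer, lower, upper), "< threshold -> flag as abnormal")
    ```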

  16. Using lean methodology to improve productivity in a hospital oncology pharmacy.

    PubMed

    Sullivan, Peter; Soefje, Scott; Reinhart, David; McGeary, Catherine; Cabie, Eric D

    2014-09-01

    Quality improvements achieved by a hospital pharmacy through the use of lean methodology to guide i.v. compounding workflow changes are described. The outpatient oncology pharmacy of Yale-New Haven Hospital conducted a quality-improvement initiative to identify and implement workflow changes to support a major expansion of chemotherapy services. Applying concepts of lean methodology (i.e., elimination of non-value-added steps and waste in the production process), the pharmacy team performed a failure mode and effects analysis, workflow mapping, and impact analysis; staff pharmacists and pharmacy technicians identified 38 opportunities to decrease waste and increase efficiency. Three workflow processes (order verification, compounding, and delivery) accounted for 24 of 38 recommendations and were targeted for lean process improvements. The workflow was decreased to 14 steps, eliminating 6 non-value-added steps, and pharmacy staff resources and schedules were realigned with the streamlined workflow. The time required for pharmacist verification of patient-specific oncology orders was decreased by 33%; the time required for product verification was decreased by 52%. The average medication delivery time was decreased by 47%. The results of baseline and postimplementation time trials indicated a decrease in overall turnaround time to about 70 minutes, compared with a baseline time of about 90 minutes. The use of lean methodology to identify non-value-added steps in oncology order processing and the implementation of staff-recommended workflow changes resulted in an overall reduction in the turnaround time per dose. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  17. Semi-autonomous remote sensing time series generation tool

    NASA Astrophysics Data System (ADS)

    Babu, Dinesh Kumar; Kaufmann, Christof; Schmidt, Marco; Dhams, Thorsten; Conrad, Christopher

    2017-10-01

    High spatial and temporal resolution data is vital for crop monitoring and phenology change detection. Due to the lack of satellite architecture and frequent cloud cover issues, availability of daily high spatial data is still far from reality. Remote sensing time series generation of high spatial and temporal data by data fusion seems to be a practical alternative. However, it is not an easy process, since it involves multiple steps and also requires multiple tools. In this paper, a framework of Geo Information System (GIS) based tool is presented for semi-autonomous time series generation. This tool will eliminate the difficulties by automating all the steps and enable the users to generate synthetic time series data with ease. Firstly, all the steps required for the time series generation process are identified and grouped into blocks based on their functionalities. Later two main frameworks are created, one to perform all the pre-processing steps on various satellite data and the other one to perform data fusion to generate time series. The two frameworks can be used individually to perform specific tasks or they could be combined to perform both the processes in one go. This tool can handle most of the known geo data formats currently available which makes it a generic tool for time series generation of various remote sensing satellite data. This tool is developed as a common platform with good interface which provides lot of functionalities to enable further development of more remote sensing applications. A detailed description on the capabilities and the advantages of the frameworks are given in this paper.

  18. Nuclear shell model code CRUNCHER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resler, D.A.; Grimes, S.M.

    1988-05-01

    A new nuclear shell model code CRUNCHER, patterned after the code VLADIMIR, has been developed. While CRUNCHER and VLADIMIR employ the techniques of an uncoupled basis and the Lanczos process, improvements in the new code allow it to handle much larger problems than the previous code and to perform them more efficiently. Tests involving a moderately sized calculation indicate that CRUNCHER running on a SUN 3/260 workstation requires approximately one-half the central processing unit (CPU) time required by VLADIMIR running on a CRAY-1 supercomputer.

  19. Radio Frequency Identification for Space Habitat Inventory and Stowage Allocation Management

    NASA Technical Reports Server (NTRS)

    Wagner, Carole Y.

    2015-01-01

    To date, the most extensive space-based inventory management operation has been the International Space Station (ISS). Approximately 20,000 items are tracked with the Inventory Management System (IMS) software application that requires both flight and ground crews to update the database daily. This audit process is manually intensive and laborious, requiring the crew to open cargo transfer bags (CTBs), then Ziplock bags therein, to retrieve individual items. This inventory process contributes greatly to the time allocated for general crew tasks.

  20. The Algorithm Theoretical Basis Document for Level 1A Processing

    NASA Technical Reports Server (NTRS)

    Jester, Peggy L.; Hancock, David W., III

    2012-01-01

    The first process of the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software converts the Level 0 data into the Level 1A Data Products. The Level 1A Data Products are the time ordered instrument data converted from counts to engineering units. This document defines the equations that convert the raw instrument data into engineering units. Required scale factors, bias values, and coefficients are defined in this document. Additionally, required quality assurance and browse products are defined in this document.
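
    A counts-to-engineering-units conversion of the kind described here typically applies a scale factor and a bias (and, more generally, polynomial coefficients). The sketch below is a minimal, generic illustration only; the actual GLAS equations and calibration constants are those defined in the ATBD, and the numbers used here are made up.

    ```python
    # Minimal sketch of a generic counts-to-engineering-units conversion.
    # The linear form and the constants below are illustrative assumptions,
    # not the instrument's real calibration.
    import numpy as np

    def counts_to_engineering(counts, scale, bias):
        """Apply a generic linear calibration: value = scale * counts + bias."""
        return scale * np.asarray(counts, dtype=float) + bias

    raw_counts = np.array([1023, 1987, 3050])     # hypothetical raw telemetry counts
    temperature_c = counts_to_engineering(raw_counts, scale=0.0125, bias=-12.5)
    print(temperature_c)                           # time-ordered engineering units
    ```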

  1. Artificial intelligence techniques for scheduling Space Shuttle missions

    NASA Technical Reports Server (NTRS)

    Henke, Andrea L.; Stottler, Richard H.

    1994-01-01

    Planning and scheduling of NASA Space Shuttle missions is a complex, labor-intensive process requiring the expertise of experienced mission planners. We have developed a planning and scheduling system using combinations of artificial intelligence knowledge representations and planning techniques to capture mission planning knowledge and automate the multi-mission planning process. Our integrated object oriented and rule-based approach reduces planning time by orders of magnitude and provides planners with the flexibility to easily modify planning knowledge and constraints without requiring programming expertise.

  2. Certification renewal process of the American Board of Orthodontics.

    PubMed

    Castelein, Paul T; DeLeon, Eladio; Dugoni, Steven A; Chung, Chun-Hsi; Tadlock, Larry P; Barone, Nicholas D; Kulbersh, Valmy P; Sabott, David G; Kastrop, Marvin C

    2015-05-01

    The American Board of Orthodontics was established in 1929 and is the oldest specialty board in dentistry. Its goal is to protect the public by ensuring competency through the certification of eligible orthodontists. Originally, applicants for certification submitted a thesis, 5 case reports, and a set of casts with appliances. Once granted, the certification never expired. Requirements have changed over the years. In 1950, 15 cases were required, and then 10 in 1987. The Board has continued to refine and improve the certification process. In 1998, certification became time limited, and a renewal process was initiated. The Board continues to improve the recertification process. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  3. The importance of decision onset

    PubMed Central

    Grinband, Jack; Ferrera, Vincent

    2015-01-01

    The neural mechanisms of decision making are thought to require the integration of evidence over time until a response threshold is reached. Much work suggests that the response threshold can be adjusted via top-down control as a function of speed or accuracy requirements. In contrast, the time of integration onset has received less attention and is believed to be determined mostly by afferent or preprocessing delays. However, a number of influential studies over the past decade challenge this assumption and begin to paint a multifaceted view of the phenomenology of decision onset. This review highlights the challenges involved in initiating the integration of evidence at the optimal time and the potential benefits of adjusting integration onset to task demands. The review outlines behavioral and electrophysiological studies suggesting that the onset of the integration process may depend on properties of the stimulus, the task, attention, and response strategy. Most importantly, the aggregate findings in the literature suggest that integration onset may be amenable to top-down regulation, and may be adjusted much like response threshold to exert cognitive control and strategically optimize the decision process to fit immediate behavioral requirements. PMID:26609111
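
    The sketch below simulates a generic noisy evidence accumulator to make the review's central quantities concrete: evidence is integrated from an onset time until a threshold is crossed, so both the threshold and the onset shift the resulting response times. It is a toy model for illustration, not a model proposed in the review; all parameter values are arbitrary.

    ```python
    # Toy evidence-accumulation simulation: later integration onset -> longer RTs.
    import numpy as np

    def simulate_rt(onset, drift=0.8, noise=0.5, threshold=1.0,
                    dt=0.001, max_t=5.0, rng=None):
        """Accumulate noisy evidence from `onset` until |evidence| >= threshold."""
        rng = np.random.default_rng() if rng is None else rng
        evidence, t = 0.0, 0.0
        while t < max_t:
            if t >= onset:  # integration begins only after the onset delay
                evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                if abs(evidence) >= threshold:
                    return t
            t += dt
        return max_t  # no decision reached within the time limit

    rng = np.random.default_rng(1)
    early = np.mean([simulate_rt(onset=0.1, rng=rng) for _ in range(200)])
    late = np.mean([simulate_rt(onset=0.4, rng=rng) for _ in range(200)])
    print(f"mean RT with early onset: {early:.2f} s, with late onset: {late:.2f} s")
    ```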

  4. Triggering HIV polyprotein processing by light using rapid photodegradation of a tight-binding protease inhibitor.

    PubMed

    Schimer, Jiří; Pávová, Marcela; Anders, Maria; Pachl, Petr; Šácha, Pavel; Cígler, Petr; Weber, Jan; Majer, Pavel; Řezáčová, Pavlína; Kräusslich, Hans-Georg; Müller, Barbara; Konvalinka, Jan

    2015-03-09

    HIV protease (PR) is required for proteolytic maturation in the late phase of HIV replication and represents a prime therapeutic target. The regulation and kinetics of viral polyprotein processing and maturation are currently not understood in detail. Here we design, synthesize, validate and apply a potent, photodegradable HIV PR inhibitor to achieve synchronized induction of proteolysis. The compound exhibits subnanomolar inhibition in vitro. Its photolabile moiety is released on light irradiation, reducing the inhibitory potential by 4 orders of magnitude. We determine the structure of the PR-inhibitor complex, analyze its photolytic products, and show that the enzymatic activity of inhibited PR can be fully restored on inhibitor photolysis. We also demonstrate that proteolysis of immature HIV particles produced in the presence of the inhibitor can be rapidly triggered by light, thus enabling analysis of the timing, regulation and spatial requirements of viral processing in real time.

  5. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelt, Daniël M.; Gürsoy, Doǧa; Palenstijn, Willem Jan

    2016-04-28

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy's standard reconstruction method.

  6. Multi-tasking arbitration and behaviour design for human-interactive robots

    NASA Astrophysics Data System (ADS)

    Kobayashi, Yuichi; Onishi, Masaki; Hosoe, Shigeyuki; Luo, Zhiwei

    2013-05-01

    Robots that interact with humans in household environments are required to handle multiple real-time tasks simultaneously, such as carrying objects, avoiding collisions and conversing with humans. This article presents a design framework for the control and recognition processes that meets these requirements while taking stochastic human behaviour into account. The proposed design method first introduces a Petri net for the synchronisation of multiple tasks. The Petri net formulation is converted to Markov decision processes and processed in an optimal control framework. Three tasks (safety confirmation, object conveyance and conversation) interact and are expressed by the Petri net. Using the proposed framework, tasks that normally tend to be designed by integrating many if-then rules can be designed in a systematic manner, in a state estimation and optimisation framework, from the viewpoint of shortest-time optimal control. The proposed arbitration method was verified by simulations and experiments using RI-MAN, which was developed for interactive tasks with humans.
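
    To make the optimisation step concrete, the sketch below runs value iteration on a tiny, made-up Markov decision process to minimise expected time-to-goal, in the spirit of the shortest-time optimal control mentioned above. The Petri net construction, the robot tasks, and RI-MAN itself are not modelled; the states, actions, and transition probabilities are illustrative assumptions.

    ```python
    # Generic value iteration on a tiny MDP: minimise expected time to a goal state.
    import numpy as np

    n_states, n_actions = 4, 2
    goal = 3
    # P[a, s, s'] = probability of moving from s to s' under action a (hypothetical).
    P = np.zeros((n_actions, n_states, n_states))
    P[0] = [[0.8, 0.2, 0.0, 0.0],
            [0.1, 0.7, 0.2, 0.0],
            [0.0, 0.1, 0.6, 0.3],
            [0.0, 0.0, 0.0, 1.0]]
    P[1] = [[0.5, 0.4, 0.1, 0.0],
            [0.0, 0.5, 0.4, 0.1],
            [0.0, 0.0, 0.5, 0.5],
            [0.0, 0.0, 0.0, 1.0]]
    cost = np.where(np.arange(n_states) == goal, 0.0, 1.0)   # 1 time unit per step

    V = np.zeros(n_states)
    for _ in range(200):                    # value iteration for expected time-to-goal
        Q = cost[None, :] + P @ V           # Q[a, s] = immediate cost + expected future time
        V_new = Q.min(axis=0)
        V_new[goal] = 0.0
        if np.max(np.abs(V_new - V)) < 1e-9:
            break
        V = V_new

    policy = Q.argmin(axis=0)
    print("expected time-to-goal per state:", np.round(V, 2))
    print("optimal action per state:", policy)
    ```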

  7. Noncontact conductivity and dielectric measurement for high throughput roll-to-roll nanomanufacturing

    NASA Astrophysics Data System (ADS)

    Orloff, Nathan D.; Long, Christian J.; Obrzut, Jan; Maillaud, Laurent; Mirri, Francesca; Kole, Thomas P.; McMichael, Robert D.; Pasquali, Matteo; Stranick, Stephan J.; Alexander Liddle, J.

    2015-11-01

    Advances in roll-to-roll processing of graphene and carbon nanotubes have at last led to the continuous production of high-quality coatings and filaments, ushering in a wave of applications for flexible and wearable electronics, woven fabrics, and wires. These applications often require specific electrical properties, and hence precise control over material micro- and nanostructure. While such control can be achieved, in principle, by closed-loop processing methods, there are relatively few noncontact and nondestructive options for quantifying the electrical properties of materials on a moving web at the speed required in modern nanomanufacturing. Here, we demonstrate a noncontact microwave method for measuring the dielectric constant and conductivity (or geometry for samples of known dielectric properties) of materials in a millisecond. Such measurement times are compatible with current and future industrial needs, enabling real-time materials characterization and in-line control of processing variables without disrupting production.

  8. Triggering HIV polyprotein processing by light using rapid photodegradation of a tight-binding protease inhibitor

    PubMed Central

    Schimer, Jiří; Pávová, Marcela; Anders, Maria; Pachl, Petr; Šácha, Pavel; Cígler, Petr; Weber, Jan; Majer, Pavel; Řezáčová, Pavlína; Kräusslich, Hans-Georg; Müller, Barbara; Konvalinka, Jan

    2015-01-01

    HIV protease (PR) is required for proteolytic maturation in the late phase of HIV replication and represents a prime therapeutic target. The regulation and kinetics of viral polyprotein processing and maturation are currently not understood in detail. Here we design, synthesize, validate and apply a potent, photodegradable HIV PR inhibitor to achieve synchronized induction of proteolysis. The compound exhibits subnanomolar inhibition in vitro. Its photolabile moiety is released on light irradiation, reducing the inhibitory potential by 4 orders of magnitude. We determine the structure of the PR-inhibitor complex, analyze its photolytic products, and show that the enzymatic activity of inhibited PR can be fully restored on inhibitor photolysis. We also demonstrate that proteolysis of immature HIV particles produced in the presence of the inhibitor can be rapidly triggered by light, thus enabling analysis of the timing, regulation and spatial requirements of viral processing in real time. PMID:25751579

  9. Mapping CMMI Level 2 to Scrum Practices: An Experience Report

    NASA Astrophysics Data System (ADS)

    Diaz, Jessica; Garbajosa, Juan; Calvo-Manzano, Jose A.

    CMMI has been adopted advantageously in large companies for improvements in software quality, budget adherence, and customer satisfaction. However, SPI strategies based on CMMI-DEV require heavy software development processes and large investments in cost and time that medium and small companies cannot afford. The so-called light software development processes, such as Agile Software Development (ASD), address these challenges. ASD welcomes changing requirements and stresses the importance of adaptive planning, simplicity and continuous delivery of valuable software in short, time-framed iterations. ASD is becoming attractive in an increasingly global and changing software market. It would be greatly useful to be able to introduce agile methods such as Scrum in compliance with the CMMI process model. This paper intends to increase the understanding of the relationship between ASD and CMMI-DEV by reporting empirical results that confirm theoretical comparisons between ASD practices and CMMI level 2.

  10. Noncontact conductivity and dielectric measurement for high throughput roll-to-roll nanomanufacturing

    PubMed Central

    Orloff, Nathan D.; Long, Christian J.; Obrzut, Jan; Maillaud, Laurent; Mirri, Francesca; Kole, Thomas P.; McMichael, Robert D.; Pasquali, Matteo; Stranick, Stephan J.; Alexander Liddle, J.

    2015-01-01

    Advances in roll-to-roll processing of graphene and carbon nanotubes have at last led to the continuous production of high-quality coatings and filaments, ushering in a wave of applications for flexible and wearable electronics, woven fabrics, and wires. These applications often require specific electrical properties, and hence precise control over material micro- and nanostructure. While such control can be achieved, in principle, by closed-loop processing methods, there are relatively few noncontact and nondestructive options for quantifying the electrical properties of materials on a moving web at the speed required in modern nanomanufacturing. Here, we demonstrate a noncontact microwave method for measuring the dielectric constant and conductivity (or geometry for samples of known dielectric properties) of materials in a millisecond. Such measurement times are compatible with current and future industrial needs, enabling real-time materials characterization and in-line control of processing variables without disrupting production. PMID:26592441

  11. Real time software tools and methodologies

    NASA Technical Reports Server (NTRS)

    Christofferson, M. J.

    1981-01-01

    Real-time systems are characterized by high-speed processing and throughput as well as asynchronous event processing requirements. These requirements give rise to particular implementations of parallel or pipeline multitasking structures, of intertask or interprocess communication mechanisms, and of message (buffer) routing or switching mechanisms. These mechanisms and structures, along with the data structure, describe the essential character of the system. The common structural elements and mechanisms are identified, and their implementations in the form of routines, tasks or macros - in other words, tools - are formalized. The tools developed support or make available the following: reentrant task creation, generalized message routing techniques, generalized task structures/task families, standardized intertask communication mechanisms, and pipeline and parallel processing architectures in a multitasking environment. Tools development raises some interesting prospects in the areas of software instrumentation and software portability. These issues are discussed following the description of the tools themselves.

  12. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    PubMed Central

    Pelt, Daniël M.; Gürsoy, Doǧa; Palenstijn, Willem Jan; Sijbers, Jan; De Carlo, Francesco; Batenburg, Kees Joost

    2016-01-01

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy’s standard reconstruction method. PMID:27140167

  13. Dimension Reduction of Multivariable Optical Emission Spectrometer Datasets for Industrial Plasma Processes

    PubMed Central

    Yang, Jie; McArdle, Conor; Daniels, Stephen

    2014-01-01

    A new data dimension-reduction method, called Internal Information Redundancy Reduction (IIRR), is proposed for application to Optical Emission Spectroscopy (OES) datasets obtained from industrial plasma processes. For example in a semiconductor manufacturing environment, real-time spectral emission data is potentially very useful for inferring information about critical process parameters such as wafer etch rates, however, the relationship between the spectral sensor data gathered over the duration of an etching process step and the target process output parameters is complex. OES sensor data has high dimensionality (fine wavelength resolution is required in spectral emission measurements in order to capture data on all chemical species involved in plasma reactions) and full spectrum samples are taken at frequent time points, so that dynamic process changes can be captured. To maximise the utility of the gathered dataset, it is essential that information redundancy is minimised, but with the important requirement that the resulting reduced dataset remains in a form that is amenable to direct interpretation of the physical process. To meet this requirement and to achieve a high reduction in dimension with little information loss, the IIRR method proposed in this paper operates directly in the original variable space, identifying peak wavelength emissions and the correlative relationships between them. A new statistic, Mean Determination Ratio (MDR), is proposed to quantify the information loss after dimension reduction and the effectiveness of IIRR is demonstrated using an actual semiconductor manufacturing dataset. As an example of the application of IIRR in process monitoring/control, we also show how etch rates can be accurately predicted from IIRR dimension-reduced spectral data. PMID:24451453
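
    The hedged sketch below illustrates the general idea of reducing redundancy while staying in the original variable space: rank channels by a crude peak proxy and keep only those not highly correlated with channels already kept. It is not the IIRR algorithm or its MDR statistic, both of which are defined in the paper; the ranking rule, threshold, and synthetic data are assumptions for illustration only.

    ```python
    # Hedged sketch of correlation-based redundancy reduction for spectral channels.
    import numpy as np

    def reduce_channels(spectra, corr_threshold=0.95):
        """spectra: array (n_samples, n_wavelengths). Returns kept channel indices."""
        # Rank channels by mean intensity as a crude proxy for "peak" emissions.
        order = np.argsort(spectra.mean(axis=0))[::-1]
        corr = np.abs(np.corrcoef(spectra, rowvar=False))
        kept = []
        for ch in order:
            if all(corr[ch, k] < corr_threshold for k in kept):
                kept.append(ch)
        return sorted(kept)

    rng = np.random.default_rng(2)
    base = rng.normal(size=(300, 5))                      # 5 independent sources
    mixing = rng.uniform(0.0, 1.0, size=(5, 200))         # 200 correlated channels
    spectra = base @ mixing + 0.01 * rng.normal(size=(300, 200))
    kept = reduce_channels(spectra)
    print(f"kept {len(kept)} of {spectra.shape[1]} channels")
    ```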

  14. Clinical image processing engine

    NASA Astrophysics Data System (ADS)

    Han, Wei; Yao, Jianhua; Chen, Jeremy; Summers, Ronald

    2009-02-01

    Our group provides clinical image processing services to various institutes at NIH. We develop or adapt image processing programs for a variety of applications. However, each program requires a human operator to select a specific set of images and execute the program, as well as store the results appropriately for later use. To improve efficiency, we design a parallelized clinical image processing engine (CIPE) to streamline and parallelize our service. The engine takes DICOM images from a PACS server, sorts and distributes the images to different applications, multithreads the execution of applications, and collects results from the applications. The engine consists of four modules: a listener, a router, a job manager and a data manager. A template filter in XML format is defined to specify the image specification for each application. A MySQL database is created to store and manage the incoming DICOM images and application results. The engine achieves two important goals: reduce the amount of time and manpower required to process medical images, and reduce the turnaround time for responding. We tested our engine on three different applications with 12 datasets and demonstrated that the engine improved the efficiency dramatically.

  15. Aging and feature search: the effect of search area.

    PubMed

    Burton-Danner, K; Owsley, C; Jackson, G R

    2001-01-01

    The preattentive system involves the rapid parallel processing of visual information in the visual scene so that attention can be directed to meaningful objects and locations in the environment. This study used the feature search methodology to examine whether there are aging-related deficits in parallel-processing capabilities when older adults are required to visually search a large area of the visual field. Like young subjects, older subjects displayed flat, near-zero slopes for the Reaction Time x Set Size function when searching over a broad area (30 degrees radius) of the visual field, implying parallel processing of the visual display. These same older subjects exhibited impairment in another task, also dependent on parallel processing, performed over the same broad field area; this task, called the useful field of view test, has more complex task demands. Results imply that aging-related breakdowns of parallel processing over a large visual field area are not likely to emerge when required responses are simple, there is only one task to perform, and there is no limitation on visual inspection time.

  16. Risk/Requirements Trade-off Guidelines for Low Cost Satellite Systems

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Man, Kin F.

    1996-01-01

    The accelerating trend toward faster, better, cheaper missions places increasing emphasis on the trade-offs between requirements and risk to reduce cost and development times, while still improving quality and reliability. The Risk/Requirement Trade-off Guidelines discussed in this paper are part of an integrated approach to address the main issues by focusing on the sum of prevention, analysis, control, or test (PACT) processes.

  17. Minimizing Input-to-Output Latency in Virtual Environment

    NASA Technical Reports Server (NTRS)

    Adelstein, Bernard D.; Ellis, Stephen R.; Hill, Michael I.

    2009-01-01

    A method and apparatus were developed to minimize latency (time delay) in virtual environment (VE) and other discrete-time computer-based systems that require real-time display in response to sensor inputs. Latency in such systems is due to the sum of the finite time required for information processing and communication within and between sensors, software, and displays.

  18. Landsat-5 bumper-mode geometric correction

    USGS Publications Warehouse

    Storey, James C.; Choate, Michael J.

    2004-01-01

    The Landsat-5 Thematic Mapper (TM) scan mirror was switched from its primary operating mode to a backup mode in early 2002 in order to overcome internal synchronization problems arising from long-term wear of the scan mirror mechanism. The backup bumper mode of operation removes the constraints on scan start and stop angles enforced in the primary scan angle monitor operating mode, requiring additional geometric calibration effort to monitor the active scan angles. It also eliminates scan timing telemetry used to correct the TM scan geometry. These differences require changes to the geometric correction algorithms used to process TM data. A mathematical model of the scan mirror's behavior when operating in bumper mode was developed. This model includes a set of key timing parameters that characterize the time-varying behavior of the scan mirror bumpers. To simplify the implementation of the bumper-mode model, the bumper timing parameters were recast in terms of the calibration and telemetry data items used to process normal TM imagery. The resulting geometric performance, evaluated over 18 months of bumper-mode operations, though slightly reduced from that achievable in the primary operating mode, is still within the Landsat specifications when the data are processed with the most up-to-date calibration parameters.

  19. ED Triage Process Improvement: Timely Vital Signs for Less Acute Patients.

    PubMed

    Falconer, Stella S; Karuppan, Corinne M; Kiehne, Emily; Rama, Shravan

    2018-06-13

    Vital signs can result in an upgrade of patients' Emergency Severity Index (ESI) levels. It is therefore preferable to obtain vital signs early in the triage process, particularly for ESI level 3 patients. Emergency departments have an opportunity to redesign triage processes to meet required protocols while enhancing the quality and experience of care. We performed process analyses to redesign the door-to-vital signs process. We also developed spaghetti diagrams to reconfigure the patient arrival area. The door-to-vital signs time was reduced from 43.1 minutes to 6.44 minutes. Both patients and triage staff seemed more satisfied with the new process. The patient arrival area was less congested and more welcoming. Performing activities in parallel reduces flow time with no additional resources. Staff involvement in process planning, redesign, and control ensures engagement and early buy-in. One should anticipate how changes to one process might affect other processes. Copyright © 2018. Published by Elsevier Inc.

  20. A single aerobic exercise session accelerates movement execution but not central processing.

    PubMed

    Beyer, Kit B; Sage, Michael D; Staines, W Richard; Middleton, Laura E; McIlroy, William E

    2017-03-27

    Previous research has demonstrated that aerobic exercise has disparate effects on speed of processing and movement execution. In simple and choice reaction tasks, aerobic exercise appears to increase speed of movement execution while speed of processing is unaffected. In the flanker task, aerobic exercise has been shown to reduce response time on incongruent trials more than congruent trials, purportedly reflecting a selective influence on speed of processing related to cognitive control. However, it is unclear how changes in speed of processing and movement execution contribute to these exercise-induced changes in response time during the flanker task. This study examined how a single session of aerobic exercise influences speed of processing and movement execution during a flanker task using electromyography to partition response time into reaction time and movement time, respectively. Movement time decreased during aerobic exercise regardless of flanker congruence but returned to pre-exercise levels immediately after exercise. Reaction time during incongruent flanker trials decreased over time in both an aerobic exercise and non-exercise control condition indicating it was not specifically influenced by exercise. This disparate influence of aerobic exercise on movement time and reaction time indicates the importance of partitioning response time when examining the influence of aerobic exercise on speed of processing. The decrease in reaction time over time independent of aerobic exercise indicates that interpreting pre-to-post exercise changes in behavior requires caution. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  1. Collaborative Manufacturing for Small-Medium Enterprises

    NASA Astrophysics Data System (ADS)

    Irianto, D.

    2016-02-01

    Manufacturing systems involve decisions concerning production processes, capacity, planning, and control. In make-to-order (MTO) manufacturing systems, strategic decisions concerning fulfilment of customer requirements, manufacturing cost, and due date of delivery are the most important. In order to accelerate the decision-making process, research is required on the decision-making structure used when receiving orders and sequencing activities under limited capacity. An effective decision-making process is typically required by small-medium component and tool makers acting as supporting industries to large industries. On one side, metal small-medium enterprises are expected to produce parts, components or tools (i.e. jigs, fixtures, molds, and dies) with high precision, low cost, and exact delivery time. On the other side, a metal small-medium enterprise may have a weak bargaining position due to aspects such as low production capacity, a limited budget for material procurement, and limited high-precision machines and equipment. Instead of receiving orders exclusively, a small-medium enterprise can collaborate with other small-medium enterprises in order to fulfil requirements for high quality, low manufacturing cost, and just-in-time delivery. Small-medium enterprises can share their best capabilities to form effective supporting industries. An independent body, such as the community service unit at a university, can take the role of collaboration manager. The Laboratory of Production Systems at Bandung Institute of Technology has implemented a shared manufacturing system for small-medium enterprise collaboration.

  2. 77 FR 15115 - Notice of Proposed Information Collection: Comment Request; FHA-Insured Mortgage Loan Servicing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-14

    ... Conveyance Process, Property Inspection/Preservation AGENCY: Office of the Assistant Secretary for Housing... Mortgage Loan Servicing Involving the Claims and Conveyance Process, Property Inspection/Preservation. OMB... for Preservation and Protection, HUD-50012, Mortgagees Request for Extension of Time Requirements, HUD...

  3. Introduction to Neural Networks.

    DTIC Science & Technology

    1992-03-01

    parallel processing of information that can greatly reduce the time required to perform operations which are needed in pattern recognition. Keywords: neural network, artificial neural network, neural net, ANN.

  4. Mission Engineering of a Rapid Cycle Spacecraft Logistics Fleet

    NASA Technical Reports Server (NTRS)

    Holladay, Jon; McClendon, Randy (Technical Monitor)

    2002-01-01

    The requirement for logistics re-supply of the International Space Station has provided a unique opportunity for engineering the implementation of NASA's first dedicated pressurized logistics carrier fleet. The NASA fleet is comprised of three Multi-Purpose Logistics Modules (MPLM) provided to NASA by the Italian Space Agency in return for operations time aboard the International Space Station. Marshall Space Flight Center was responsible for oversight of the hardware development from preliminary design through acceptance of the third flight unit, and currently manages the flight hardware sustaining engineering and mission engineering activities. The actual MPLM Mission began prior to NASA acceptance of the first flight unit in 1999 and will continue until the de-commission of the International Space Station that is planned for 20xx. Mission engineering of the MPLM program requires a broad focus on three distinct yet inter-related operations processes: pre-flight, flight operations, and post-flight turn-around. Within each primary area exist several complex subsets of distinct and inter-related activities. Pre-flight processing includes the evaluation of carrier hardware readiness for space flight. This includes integration of payload into the carrier, integration of the carrier into the launch vehicle, and integration of the carrier onto the orbital platform. Flight operations include the actual carrier operations during flight and any required real-time ground support. Post-flight processing includes de-integration of the carrier hardware from the launch vehicle, de-integration of the payload, and preparation for returning the carrier to pre-flight staging. Typical space operations are engineered around the requirements and objectives of a dedicated mission on a dedicated operational platform (i.e. Launch or Orbiting Vehicle). The MPLM, however, has expanded this envelope by requiring operations with both vehicles during flight as well as pre-launch and post-landing operations. These unique requirements combined with a success-oriented schedule of four flights within a ten-month period have provided numerous opportunities for understanding and improving operations processes. Furthermore, it has increased the knowledge base of future Payload Carrier and Launch Vehicle hardware and requirement developments. Discussion of the process flows and target areas for process improvement are provided in the subject paper. Special emphasis is also placed on supplying guidelines for hardware development. The combination of process knowledge and hardware development knowledge will provide a comprehensive overview for future vehicle developments as related to integration and transportation of payloads.

  5. Information processing requirements for on-board monitoring of automatic landing

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Karmarkar, J. S.

    1977-01-01

    A systematic procedure is presented for determining the information processing requirements for on-board monitoring of automatic landing systems. The monitoring system detects landing anomalies through use of appropriate statistical tests. The time-to-correct aircraft perturbations is determined from covariance analyses using a sequence of suitable aircraft/autoland/pilot models. The covariance results are used to establish landing safety and a fault recovery operating envelope via an event outcome tree. This procedure is demonstrated with examples using the NASA Terminal Configured Vehicle (B-737 aircraft). The procedure can also be used to define decision height, assess monitoring implementation requirements, and evaluate alternate autoland configurations.

  6. System simulation of direct-current speed regulation based on Simulink

    NASA Astrophysics Data System (ADS)

    Yang, Meiying

    2018-06-01

    In modern industrial production, many production machines require smooth speed adjustment over a certain range, together with good steady-state and dynamic performance. Direct-current speed regulation systems offer a wide speed regulation range, small relative speed variation, good stability, and large overload capacity; they can withstand frequent impact loads and realize stepless, rapid starting, braking, and reversing, and can therefore meet the varied special operating requirements of automated production processes. For these reasons, direct-current power drive systems have long dominated the field of high-performance drive technology.

  7. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Beckman-Davies, C. S.; Benzinger, L.; Beshers, G.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.

    1986-01-01

    Research into software development is required to reduce its production cost and to improve its quality. Modern software systems, such as the embedded software required for NASA's space station initiative, stretch current software engineering techniques. The requirement to build large, reliable, and maintainable software systems increases with time. Much theoretical and practical research is in progress to improve software engineering techniques. One such technique is to build a software system or environment which directly supports the software engineering process. This is the aim of the SAGA project, which comprises the research necessary to design and build a software development environment that automates the software engineering process. Progress under SAGA is described.

  8. Rapid oxidation/stabilization technique for carbon foams, carbon fibers and C/C composites

    DOEpatents

    Tan, Seng; Tan, Cher-Dip

    2004-05-11

    An enhanced method for the post-processing, i.e., oxidation or stabilization, of carbon materials including, but not limited to, carbon foams, carbon fibers, dense carbon-carbon composites, and carbon/ceramic and carbon/metal composites, which requires much shorter and more effective processing steps. The introduction of an "oxygen spill-over catalyst" into the carbon precursor, by blending it with the carbon starting material or by exposing the carbon precursor to such a material, supplies the required oxygen at the atomic level and permits oxidation/stabilization of carbon materials in a fraction of the time and with a fraction of the energy normally required for such carbon processing steps. Carbon-based foams, solids, composites and fiber products made using this method are also described.

  9. Controlling Real-Time Processes On The Space Station With Expert Systems

    NASA Astrophysics Data System (ADS)

    Leinweber, David; Perry, John

    1987-02-01

    Many aspects of space station operations involve continuous control of real-time processes. These processes include electrical power system monitoring, propulsion system health and maintenance, environmental and life support systems, space suit checkout, on-board manufacturing, and servicing of attached vehicles such as satellites, shuttles, orbital maneuvering vehicles, orbital transfer vehicles and remote teleoperators. Traditionally, monitoring of these critical real-time processes has been done by trained human experts monitoring telemetry data. However, the long duration of space station missions and the high cost of crew time in space creates a powerful economic incentive for the development of highly autonomous knowledge-based expert control procedures for these space stations. In addition to controlling the normal operations of these processes, the expert systems must also be able to quickly respond to anomalous events, determine their cause and initiate corrective actions in a safe and timely manner. This must be accomplished without excessive diversion of system resources from ongoing control activities and any events beyond the scope of the expert control and diagnosis functions must be recognized and brought to the attention of human operators. Real-time sensor based expert systems (as opposed to off-line, consulting or planning systems receiving data via the keyboard) pose particular problems associated with sensor failures, sensor degradation and data consistency, which must be explicitly handled in an efficient manner. A set of these systems must also be able to work together in a cooperative manner. This paper describes the requirements for real-time expert systems in space station control, and presents prototype implementations of space station expert control procedures in PICON (process intelligent control). PICON is a real-time expert system shell which operates in parallel with distributed data acquisition systems. It incorporates a specialized inference engine with a specialized scheduling portion specifically designed to match the allocation of system resources with the operational requirements of real-time control systems. Innovative knowledge engineering techniques used in PICON to facilitate the development of real-time sensor-based expert systems which use the special features of the inference engine are illustrated in the prototype examples.

  10. Real-time blind image deconvolution based on coordinated framework of FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun

    2015-10-01

    Image restoration takes a crucial place in several important application domains. As computational requirements grow and algorithms become more complex, there has been a significant rise in the need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for blind iterative deconvolution by means of the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm, and an image restoration processing system based on a coordinated framework of FPGA and DSP (CoFD) is presented. Single-precision floating-point processing units with small-scale cascading and special FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, comparative experiments are carried out. The system can process a blurred image of 128×128 pixels within 32 milliseconds, and is up to three or four times faster than traditional multi-DSP systems.
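
    For reference, the sketch below is a plain NumPy/SciPy version of the (non-blind) Richardson-Lucy iteration that underlies the system described above; the paper's contribution is the coordinated FPGA/DSP implementation, not this software form. A blind variant would alternate similar updates for the point spread function, which is omitted here.

    ```python
    # Minimal software sketch of the Richardson-Lucy deconvolution iteration.
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
        """Non-blind R-L deconvolution; a blind variant also updates the PSF."""
        estimate = np.full_like(blurred, 0.5)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iter):
            reblurred = fftconvolve(estimate, psf, mode="same")
            ratio = blurred / (reblurred + eps)
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate

    # Tiny usage example with a synthetic image and a Gaussian PSF.
    x, y = np.meshgrid(np.arange(9) - 4, np.arange(9) - 4)
    psf = np.exp(-(x**2 + y**2) / 4.0)
    psf /= psf.sum()
    truth = np.zeros((64, 64))
    truth[20:30, 25:40] = 1.0
    blurred = fftconvolve(truth, psf, mode="same")
    restored = richardson_lucy(blurred, psf)
    # Restoration error should typically be smaller than the blur error here.
    print(np.abs(restored - truth).mean() < np.abs(blurred - truth).mean())
    ```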

  11. An adaptive bit synchronization algorithm under time-varying environment.

    NASA Technical Reports Server (NTRS)

    Chow, L. R.; Owen, H. A., Jr.; Wang, P. P.

    1973-01-01

    This paper presents an adaptive estimation algorithm for bit synchronization, assuming that the parameters of the incoming data process are time-varying. Experimental results have shown that this synchronizer is workable, whether judged by the amount of data required or by the speed of convergence.

  12. Donation FAQs

    MedlinePlus

    ... to-six-week period. This does not include travel time, which is defined by air travel and staying overnight in a hotel. Nearly 40% of donors will travel during the donation process. Marrow and PBSC donation require about the same total time commitment. What if I have medical complications related ...

  13. Towards a Cloud Computing Environment: Near Real-time Cloud Product Processing and Distribution for Next Generation Satellites

    NASA Astrophysics Data System (ADS)

    Nguyen, L.; Chee, T.; Minnis, P.; Palikonda, R.; Smith, W. L., Jr.; Spangenberg, D.

    2016-12-01

    The NASA LaRC Satellite ClOud and Radiative Property retrieval System (SatCORPS) processes and derives near real-time (NRT) global cloud products from operational geostationary satellite imager datasets. These products are being used in NRT to improve forecast models, aircraft icing warnings, and support for aircraft field campaigns. Next-generation satellites, such as the Japanese Himawari-8 and the upcoming NOAA GOES-R, present challenges for NRT data processing and product dissemination due to the increase in temporal and spatial resolution. The volume of data is expected to increase approximately tenfold. This increase in data volume will require additional IT resources to keep up with the processing demands needed to satisfy NRT requirements, and these resources are not readily available due to cost and other technical limitations. To anticipate and meet these computing resource requirements, we have employed a hybrid cloud computing environment to augment the generation of SatCORPS products. This paper will describe the workflow to ingest, process, and distribute SatCORPS products and the technologies used. Lessons learned from working on both AWS Cloud and GovCloud will be discussed: benefits, similarities, and differences that could impact the decision to use cloud computing and storage. A detailed cost analysis will be presented. In addition, future cloud utilization, parallelization, and architecture layout will be discussed for GOES-R.

  14. Gain Scheduling for the Orion Launch Abort Vehicle Controller

    NASA Technical Reports Server (NTRS)

    McNamara, Sara J.; Restrepo, Carolina I.; Madsen, Jennifer M.; Medina, Edgar A.; Proud, Ryan W.; Whitley, Ryan J.

    2011-01-01

    One of NASA's challenges for the Orion vehicle is the control system design for the Launch Abort Vehicle (LAV), which is required to abort safely at any time during the atmospheric ascent portion of flight. The focus of this paper is the gain design and scheduling process for a controller that covers the wide range of vehicle configurations and flight conditions experienced during the full envelope of potential abort trajectories, from the pad to exo-atmospheric flight. Several factors are taken into account in the automation process for tuning the gains, including the abort effectors, the environmental changes, and the autopilot modes. Gain scheduling is accomplished using a linear quadratic regulator (LQR) approach for the decoupled, simplified linear model throughout the operational envelope in time, altitude, and Mach number. The derived gains are then implemented into the full linear model for controller requirement validation. Finally, the gains are tested and evaluated in a non-linear simulation using the vehicle's flight software to ensure performance requirements are met. An overview of the LAV controller design and a description of the linear plant models are presented. Examples of the most significant challenges with the automation of the gain tuning process are then discussed. In conclusion, the paper considers the lessons learned throughout the process, especially with regard to automation, and examines the usefulness of the gain scheduling tool and process developed as applicable to non-Orion vehicles.
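
    As a rough sketch of the kind of LQR gain tabulation described above, the fragment below solves the continuous-time Riccati equation over a grid of flight conditions; the state-space matrices, the Mach/altitude breakpoints, and the cost weights are placeholders invented for illustration, not Orion LAV models.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def lqr_gain(A, B, Q, R):
            """Solve the continuous-time algebraic Riccati equation and return K = R^-1 B^T P."""
            P = solve_continuous_are(A, B, Q, R)
            return np.linalg.solve(R, B.T @ P)

        # Hypothetical scheduling grid: tabulate gains over Mach and altitude breakpoints.
        mach_points = [0.3, 0.8, 1.2, 2.0]
        alt_points_m = [0.0, 5000.0, 15000.0]
        Q = np.diag([10.0, 1.0])        # state weights (placeholder)
        R = np.array([[1.0]])           # control weight (placeholder)

        gain_table = {}
        for mach in mach_points:
            for alt in alt_points_m:
                # Placeholder short-period-like model whose coefficients vary with flight condition.
                A = np.array([[-0.5 * mach, 1.0],
                              [-2.0 * mach - 1e-4 * alt, -0.8]])
                B = np.array([[0.0], [1.5 * mach]])
                gain_table[(mach, alt)] = lqr_gain(A, B, Q, R)

        # At run time the controller would interpolate gain_table at the current Mach and altitude.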

  15. An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System

    NASA Astrophysics Data System (ADS)

    Helmy*, Tarek; Fatai, Anifowose; Sallam, El-Sayed

    PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support concurrency in the intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have associated drawbacks. An efficient alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed to be efficient and fair on average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version against the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the lowest AWT and ATT on average among the non-preemptive scheduling algorithms implemented in this paper.
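
    As a rough illustration of how such a comparison can be made, here is a small Python simulation that computes Average Waiting Time and Average Turn-around Time for a randomized non-preemptive selection policy versus first-come-first-served; the burst times and arrival model are invented for the example and are not the PicOS workload used in the paper.

        import random

        def simulate(jobs, pick):
            """Non-preemptive scheduling of (arrival, burst) jobs; returns (AWT, ATT)."""
            pending, t, waits, tats = list(jobs), 0.0, [], []
            while pending:
                ready = [j for j in pending if j[0] <= t] or [min(pending, key=lambda j: j[0])]
                arrival, burst = pick(ready)
                pending.remove((arrival, burst))
                start = max(t, arrival)
                finish = start + burst
                waits.append(start - arrival)
                tats.append(finish - arrival)
                t = finish
            n = len(jobs)
            return sum(waits) / n, sum(tats) / n

        random.seed(1)
        jobs = [(i * 2.0, random.uniform(1, 10)) for i in range(50)]   # synthetic workload

        fcfs = simulate(jobs, lambda ready: min(ready, key=lambda j: j[0]))
        rnd = simulate(jobs, lambda ready: random.choice(ready))
        print("FCFS AWT=%.2f ATT=%.2f" % fcfs)
        print("RAND AWT=%.2f ATT=%.2f" % rnd)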

  16. Periodical capacity setting methods for make-to-order multi-machine production systems

    PubMed Central

    Altendorfer, Klaus; Hübl, Alexander; Jodlbauer, Herbert

    2014-01-01

    The paper presents different periodical capacity setting methods for make-to-order, multi-machine production systems with stochastic customer-required lead times and stochastic processing times, with the aim of improving service level and tardiness. These methods are developed as decision support for situations in which capacity flexibility exists, such as a certain range of possible working hours per week. The methods differ in the amount of information used, although all are based on the cumulated capacity demand at each machine. In a simulation study, the methods' impact on service level and tardiness is compared to a constant provided capacity for a single-machine and a multi-machine setting. It is shown that the tested capacity setting methods can lead to an increase in service level and a decrease in average tardiness in comparison to a constant provided capacity. The methods using information on processing time and customer-required lead time distributions perform best. The results found in this paper can help practitioners make efficient use of their flexible capacity. PMID:27226649

  17. Continued Data Acquisition Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwellenbach, David

    This task focused on improving techniques for integrating data acquisition of secondary particles correlated in time with detected cosmic-ray muons. Scintillation detectors with Pulse Shape Discrimination (PSD) capability show the most promise as a detector technology based on work in FY13. Typically PSD parameters are determined prior to an experiment and the results are based on these parameters. By saving data in list mode, including the fully digitized waveform, any experiment can effectively be replayed to adjust PSD and other parameters for the best data capture. List mode requires time synchronization of two independent data acquisition (DAQ) systems: the muon tracker and the particle detector system. Techniques to synchronize these systems were studied. Two basic techniques were identified: real-time mode and sequential mode. Real-time mode is the preferred system but has proven to be a significant challenge since two FPGA systems with different clocking parameters must be synchronized. Sequential processing is expected to work with virtually any DAQ but requires more post-processing to extract the data.

  18. DSP Implementation of the Retinex Image Enhancement Algorithm

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2004-01-01

    The Retinex is a general-purpose image enhancement algorithm that is used to produce good visual representations of scenes. It performs a non-linear spatial/spectral transform that synthesizes strong local contrast enhancement and color constancy. A real-time, video frame rate implementation of the Retinex is required to meet the needs of various potential users. Retinex processing contains a relatively large number of complex computations, thus to achieve real-time performance using current technologies requires specialized hardware and software. In this paper we discuss the design and development of a digital signal processor (DSP) implementation of the Retinex. The target processor is a Texas Instruments TMS320C6711 floating point DSP. NTSC video is captured using a dedicated frame-grabber card, Retinex processed, and displayed on a standard monitor. We discuss the optimizations used to achieve real-time performance of the Retinex and also describe our future plans on using alternative architectures.
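
    For orientation, a minimal single-scale Retinex sketch in Python is shown below; the Gaussian surround scale and the log-domain formulation follow the commonly published form of the algorithm, and the names here are illustrative rather than taken from the TMS320C6711 implementation in the paper.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def single_scale_retinex(image, sigma=80.0, eps=1.0):
            """Single-scale Retinex: log(image) minus log(Gaussian surround of image)."""
            image = image.astype(np.float64) + eps           # avoid log(0)
            surround = gaussian_filter(image, sigma=sigma)   # large-scale illumination estimate
            retinex = np.log(image) - np.log(surround)
            # Rescale to a displayable 8-bit range.
            retinex = (retinex - retinex.min()) / (np.ptp(retinex) + 1e-12)
            return (255.0 * retinex).astype(np.uint8)

        # Example on a synthetic, unevenly lit frame.
        frame = (np.outer(np.linspace(20, 200, 480), np.ones(640))
                 + 30.0 * np.random.default_rng(0).random((480, 640)))
        enhanced = single_scale_retinex(frame)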

  19. Computer simulations and real-time control of ELT AO systems using graphical processing units

    NASA Astrophysics Data System (ADS)

    Wang, Lianqi; Ellerbroek, Brent

    2012-07-01

    The adaptive optics (AO) simulations at the Thirty Meter Telescope (TMT) have been carried out using the efficient, C-based multi-threaded adaptive optics simulator (MAOS, http://github.com/lianqiw/maos). By porting time-critical parts of MAOS to graphical processing units (GPU) using NVIDIA CUDA technology, we achieved a 10-fold speed-up for each GTX 580 GPU used compared to a modern quad-core CPU. Each time step of a full-scale end-to-end simulation for the TMT narrow-field infrared AO system (NFIRAOS) takes only 0.11 seconds on a desktop with two GTX 580s. We also demonstrate that the TMT minimum variance reconstructor can be assembled in matrix-vector multiply (MVM) format in 8 seconds with 8 GTX 580 GPUs, meeting the TMT requirement for updating the reconstructor. Analysis shows that it is also possible to apply the MVM using 8 GTX 580s within the required latency.
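
    The reconstructor application step is essentially a large matrix-vector multiply; the sketch below partitions an MVM across row blocks the way one might split it across several GPUs, using NumPy as a stand-in. The matrix dimensions are placeholders, not the NFIRAOS sizes.

        import numpy as np

        def blocked_mvm(matrix, vector, n_blocks):
            """Split a matrix-vector multiply into row blocks, as one would across devices."""
            blocks = np.array_split(matrix, n_blocks, axis=0)
            partial = [block @ vector for block in blocks]    # each block could live on its own GPU
            return np.concatenate(partial)

        rng = np.random.default_rng(0)
        M = rng.standard_normal((2000, 3000))   # placeholder reconstructor matrix
        g = rng.standard_normal(3000)           # placeholder gradient vector
        np.testing.assert_allclose(blocked_mvm(M, g, 8), M @ g, rtol=1e-10)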

  20. Performance enhancement of various real-time image processing techniques via speculative execution

    NASA Astrophysics Data System (ADS)

    Younis, Mohamed F.; Sinha, Purnendu; Marlowe, Thomas J.; Stoyenko, Alexander D.

    1996-03-01

    In real-time image processing, an application must satisfy a set of timing constraints while ensuring the semantic correctness of the system. Because of the natural structure of digital data, pure data and task parallelism have been used extensively in real-time image processing to accelerate the handling of image data. These types of parallelism are based on splitting the execution load performed by a single processor across multiple nodes. However, execution of all parallel threads is mandatory for correctness of the algorithm. Speculative execution, on the other hand, is an optimistic execution of part(s) of the program based on assumptions about program control flow or variable values. Rollback may be required if the assumptions turn out to be invalid. Speculative execution can improve average-case, and sometimes worst-case, execution time. In this paper, we target various image processing techniques to investigate the applicability of speculative execution. We identify opportunities for safe and profitable speculative execution in image compression, edge detection, morphological filters, and blob recognition.

  1. Real-time fMRI processing with physiological noise correction - Comparison with off-line analysis.

    PubMed

    Misaki, Masaya; Barzigar, Nafise; Zotev, Vadim; Phillips, Raquel; Cheng, Samuel; Bodurka, Jerzy

    2015-12-30

    While applications of real-time functional magnetic resonance imaging (rtfMRI) are growing rapidly, there are still limitations in real-time data processing compared to off-line analysis. We developed a proof-of-concept real-time fMRI processing (rtfMRIp) system utilizing a personal computer (PC) with a dedicated graphics processing unit (GPU) to demonstrate that it is now possible to perform intensive whole-brain fMRI data processing in real time. The rtfMRIp performs slice-timing correction, motion correction, spatial smoothing, signal scaling, and general linear model (GLM) analysis with multiple noise regressors, including physiological noise modeled with cardiac (RETROICOR) and respiration volume per time (RVT) regressors. The whole-brain data analysis, with more than 100,000 voxels and more than 250 volumes, is completed in less than 300 ms, much faster than the time required to acquire the fMRI volume. Real-time processing implementation cannot be identical to off-line analysis when time-course information is used, such as in slice-timing correction, signal scaling, and GLM. We verified that reduced slice-timing correction for real-time analysis had output comparable to off-line analysis. The real-time GLM analysis, however, showed over-fitting when the number of sampled volumes was small. Our system implemented real-time RETROICOR and RVT physiological noise corrections for the first time, and it is capable of processing these steps on all available data at a given time, without need for recursive algorithms. Comprehensive data processing in rtfMRI is possible with a PC, while the number of samples should be considered in real-time GLM.
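
    As a schematic of the real-time GLM step, the fragment below refits an ordinary least-squares model each time a new volume arrives, using NumPy; the regressor names (task, motion, RETROICOR/RVT stand-ins) and dimensions are placeholders for illustration and are not the authors' code.

        import numpy as np

        rng = np.random.default_rng(0)
        n_vols, n_vox = 260, 1000                               # placeholder scan length and voxel count
        task = (np.arange(n_vols) % 40 < 20).astype(float)      # toy block-design regressor
        noise_regs = rng.standard_normal((n_vols, 5))           # stand-ins for motion/RETROICOR/RVT
        data = rng.standard_normal((n_vols, n_vox))             # toy voxel time courses

        def realtime_glm(t):
            """Fit the GLM on the first t volumes (as would happen after each new volume)."""
            X = np.column_stack([np.ones(t), task[:t], noise_regs[:t]])
            beta, *_ = np.linalg.lstsq(X, data[:t], rcond=None)
            return beta[1]                                      # task effect per voxel

        # Over-fitting risk: with few volumes the estimate is unstable.
        for t in (20, 60, 260):
            print(t, float(np.std(realtime_glm(t))))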

  2. Maintainability Program Requirements for Space Systems

    NASA Technical Reports Server (NTRS)

    1987-01-01

    This document is established to provide common general requirements for all NASA programs to: design maintainability into all systems where maintenance is a factor in system operation and mission success; and ensure that maintainability characteristics are developed through the systems engineering process. These requirements are not new. Design for ease of maintenance and minimization of repair time have always been fundamental requirements of the systems engineering process. However, new or reusable orbital manned and in-flight maintainable unmanned space systems demand special emphasis on maintainability, and this document has been prepared to meet that need. Maintainability requirements on many NASA programs differ in phasing and task emphasis from requirements promulgated by other Government agencies. This difference is due to the research and development nature of NASA programs where quantities produced are generally small; therefore, the depth of logistics support typical of many programs is generally not warranted. The cost of excessive maintenance is very high due to the logistics problems associated with the space environment. The ability to provide timely maintenance often involves safety considerations for manned space flight applications. This document represents a basic set of requirements that will achieve a design for maintenance. These requirements are directed primarily at manned and unmanned orbital space systems. To be effective, maintainability requirements should be tailored to meet specific NASA program and project needs and constraints. NASA activities shall invoke the requirements of this document consistent with program planning in procurements or on inhouse development efforts.

  3. Time takes space: selective effects of multitasking on concurrent spatial processing.

    PubMed

    Mäntylä, Timo; Coni, Valentina; Kubik, Veit; Todorov, Ivo; Del Missier, Fabio

    2017-08-01

    Many everyday activities require coordination and monitoring of complex relations of future goals and deadlines. Cognitive offloading may provide an efficient strategy for reducing control demands by representing future goals and deadlines as a pattern of spatial relations. We tested the hypothesis that multiple-task monitoring involves time-to-space transformational processes, and that these spatial effects are selective with greater demands on coordinate (metric) than categorical (nonmetric) spatial relation processing. Participants completed a multitasking session in which they monitored four series of deadlines, running on different time scales, while making concurrent coordinate or categorical spatial judgments. We expected and found that multitasking taxes concurrent coordinate, but not categorical, spatial processing. Furthermore, males showed a better multitasking performance than females. These findings provide novel experimental evidence for the hypothesis that efficient multitasking involves metric relational processing.

  4. Optimizing process and equipment efficiency using integrated methods

    NASA Astrophysics Data System (ADS)

    D'Elia, Michael J.; Alfonso, Ted F.

    1996-09-01

    The semiconductor manufacturing industry is continually riding the edge of technology as it pushes toward higher design limits. Mature fabs must cut operating costs while increasing productivity to remain profitable, and they cannot justify large capital expenditures to improve productivity. Thus, they must push current tool production capabilities to cut manufacturing costs and remain viable. Working to continuously improve mature production methods requires innovation. Furthermore, testing and successful implementation of these ideas in modern production environments require both supporting technical data and commitment from those working with the process daily. At AMD, natural work groups (NWGs) composed of operators, technicians, engineers, and supervisors collaborate to foster innovative thinking and secure commitment. Recently, an AMD NWG improved equipment cycle time on the Genus tungsten silicide (WSi) deposition system. The team used total productive manufacturing (TPM) to identify areas for process improvement. Improved in-line equipment monitoring was achieved by constructing a real-time overall equipment effectiveness (OEE) calculator that tracked equipment down, idle, qualification, and production times. In-line monitoring results indicated that qualification time associated with slow Inspex turn-around time, together with machine downtime associated with manual cleans, contributed greatly to reduced availability. Qualification time was reduced by 75% by implementing a new Inspex monitor pre-staging technique. Downtime associated with manual cleans was reduced by implementing an in-situ plasma etch-back to extend the time between manual cleans. A designed experiment was used to optimize the process. The interval between 18-hour manual cleans was extended from every 250 cycles to every 1,500 cycles. Moreover, defect density improved threefold. Overall, the team achieved a 35% increase in tool availability. This paper details the above strategies and accomplishments.
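
    For reference, overall equipment effectiveness is conventionally the product of availability, performance, and quality; the short calculator below mirrors that textbook definition with made-up shift numbers and is not the AMD tracker itself.

        def oee(planned_time, downtime, ideal_cycle_time, units_produced, good_units):
            """Textbook OEE = availability x performance x quality."""
            run_time = planned_time - downtime
            availability = run_time / planned_time
            performance = (ideal_cycle_time * units_produced) / run_time
            quality = good_units / units_produced
            return availability * performance * quality, availability, performance, quality

        # Hypothetical shift: 480 min planned, 70 min down, 1.5 min ideal cycle, 240 units, 230 good.
        overall, a, p, q = oee(480, 70, 1.5, 240, 230)
        print(f"availability={a:.2f} performance={p:.2f} quality={q:.2f} OEE={overall:.2f}")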

  5. Manufacturing Enhancement through Reduction of Cycle Time using Different Lean Techniques

    NASA Astrophysics Data System (ADS)

    Suganthini Rekha, R.; Periyasamy, P.; Nallusamy, S.

    2017-08-01

    In modern manufacturing systems, the most important parameters in a production line are work in process, takt time, and line balancing. In this article, lean tools and techniques were implemented to reduce cycle time. The aim is to enhance the productivity of the water pump pipe line by identifying the bottleneck stations and non-value-added activities. From the initial time study, the bottleneck processes were identified, and the necessary expanding processes for each bottleneck were also identified. Subsequently, improvement actions were established and implemented using different lean tools such as value stream mapping, 5S, and line balancing. The current-state value stream map was developed to describe the existing status and to identify various problem areas. 5S was used to implement the steps to reduce process cycle time and unnecessary movements of workers and material. The improvement activities were implemented with the required suggestions, and the future-state value stream map was developed. From the results, it was concluded that the total cycle time was reduced by about 290.41 seconds and that the capacity to meet customer demand increased by about 760 units.
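
    To make the takt-time and line-balancing vocabulary concrete, here is a small calculation in Python; the station times and demand figures are invented for illustration and do not come from the water pump pipe line studied in the paper.

        # Takt time and line-balancing efficiency for a hypothetical production line.
        available_time_s = 8 * 3600            # one shift of available working time
        daily_demand = 400                     # units the customer requires per shift
        takt_time_s = available_time_s / daily_demand

        station_cycle_times_s = [62.0, 70.5, 55.0, 68.0, 48.5]   # made-up station times
        bottleneck_s = max(station_cycle_times_s)
        balance_efficiency = sum(station_cycle_times_s) / (len(station_cycle_times_s) * bottleneck_s)

        print(f"takt time            : {takt_time_s:.1f} s/unit")
        print(f"bottleneck cycle time: {bottleneck_s:.1f} s")
        print(f"line balance         : {balance_efficiency:.0%}")
        print("meets demand" if bottleneck_s <= takt_time_s else "bottleneck exceeds takt time")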

  6. Event-Based Processing of Neutron Scattering Data

    DOE PAGES

    Peterson, Peter F.; Campbell, Stuart I.; Reuter, Michael A.; ...

    2015-09-16

    Many of the world's time-of-flight spallation neutron sources are migrating to the recording of individual neutron events. This provides new opportunities in data processing, not the least of which is the ability to filter events by correlating them with logs of the sample environment and other ancillary equipment. This paper describes techniques for processing neutron scattering data acquired in event mode that preserve event information all the way to the final spectrum, including any necessary corrections or normalizations. This results in smaller final errors, while significantly reducing processing time and memory requirements in typical experiments. Results with traditional histogramming techniques are shown for comparison.
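
    A simplified picture of event filtering is sketched below: neutron events carry absolute timestamps, and a sample-environment log defines the time intervals to keep before histogramming. The field names and interval logic are assumptions for illustration, not the actual facility software.

        import numpy as np

        rng = np.random.default_rng(0)
        # Each event: wall-clock time in seconds and time-of-flight in microseconds.
        event_time = rng.uniform(0.0, 600.0, size=200_000)
        event_tof = rng.uniform(500.0, 16_000.0, size=200_000)

        # Sample-environment log: keep only intervals where conditions were in range.
        keep_intervals = [(60.0, 180.0), (300.0, 540.0)]          # hypothetical "good" periods

        mask = np.zeros_like(event_time, dtype=bool)
        for start, stop in keep_intervals:
            mask |= (event_time >= start) & (event_time < stop)

        # Histogram only at the very end, preserving event information until this step.
        counts, edges = np.histogram(event_tof[mask], bins=1000, range=(500.0, 16_000.0))
        print(f"kept {mask.sum()} of {event_time.size} events")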

  7. Testing single point incremental forming molds for thermoforming operations

    NASA Astrophysics Data System (ADS)

    Afonso, Daniel; de Sousa, Ricardo Alves; Torcato, Ricardo

    2016-10-01

    Low-pressure polymer processing methods such as thermoforming or rotational molding use much simpler molds than high-pressure processes like injection molding. However, despite the low forces involved, mold manufacturing for these operations is still a very material-, energy-, and time-consuming activity. The goal of the research is to develop and validate a method for manufacturing plastically formed sheet-metal molds by the single point incremental forming (SPIF) operation for thermoforming. Stewart-platform-based SPIF machines allow the forming of thick metal sheets, granting the required structural stiffness for the mold surface while keeping manufacturing lead times short and thermal inertia low.

  8. Extended forms of the second law for general time-dependent stochastic processes.

    PubMed

    Ge, Hao

    2009-08-01

    The second law of thermodynamics represents a universal principle applicable to all natural processes, physical systems, and engineering devices. Hatano and Sasa have recently put forward an extended form of the second law for transitions between nonequilibrium stationary states [Phys. Rev. Lett. 86, 3463 (2001)]. In this paper we further extend this form to an instantaneous interpretation, which is satisfied by quite general time-dependent stochastic processes, including master-equation models and Langevin dynamics, without requiring stationarity of the initial and final states. The theory is applied to several thermodynamic processes, and its consistency with classical thermodynamics is shown.
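
    For orientation, the Hatano-Sasa relation cited above is an integral fluctuation theorem for transitions between steady states; a commonly quoted form, stated here only as background and with notation that may differ slightly from the paper's, is (in LaTeX):

        \left\langle \exp\!\left[ -\int_0^\tau \dot{\lambda}(t)\,
            \frac{\partial \phi\bigl(x(t);\lambda(t)\bigr)}{\partial \lambda}\, dt \right] \right\rangle = 1,
        \qquad \phi(x;\lambda) \equiv -\ln \rho_{\mathrm{ss}}(x;\lambda),

    where $\rho_{\mathrm{ss}}$ is the steady-state distribution at fixed control parameter $\lambda$; Jensen's inequality then yields the extended second-law statement $\langle Y \rangle \ge 0$, with $Y$ the integral appearing in the exponent.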

  9. Architecture for Integrated Medical Model Dynamic Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Jaworske, D. A.; Myers, J. G.; Goodenow, D.; Young, M.; Arellano, J. D.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a modeling tool used to predict potential outcomes of a complex system based on a statistical understanding of many initiating events. Utilizing a Monte Carlo method, thousands of instances of the model are considered and outcomes are collected. PRA is considered static, utilizing probabilities alone to calculate outcomes. Dynamic Probabilistic Risk Assessment (dPRA) is an advanced concept where modeling predicts the outcomes of a complex system based not only on the probabilities of many initiating events, but also on a progression of dependencies brought about by progressing down a time line. Events are placed in a single time line, adding each event to a queue, as managed by a planner. Progression down the time line is guided by rules, as managed by a scheduler. The recently developed Integrated Medical Model (IMM) summarizes astronaut health as governed by the probabilities of medical events and mitigation strategies. Managing the software architecture process provides a systematic means of creating, documenting, and communicating a software design early in the development process. The software architecture process begins with establishing requirements and the design is then derived from the requirements.
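
    As a sketch of the planner/scheduler idea described above (a queue of events advanced along a time line by rules), the fragment below uses a priority queue in Python; the event names, probabilities, and rules are invented for illustration and this is not the IMM software.

        import heapq
        import random

        random.seed(0)
        # Planner: events enter a time-ordered queue as (time_in_days, name) pairs.
        timeline = [(random.uniform(0, 180), name)
                    for name in ("headache", "back_pain", "sleep_loss", "equipment_fault")]
        heapq.heapify(timeline)

        # Scheduler: advance down the time line, applying a simple rule per event.
        elapsed, untreated = 0.0, 0
        while timeline:
            t, event = heapq.heappop(timeline)
            elapsed = max(elapsed, t)
            treated = random.random() < 0.7          # rule: mitigation succeeds 70% of the time
            if not treated:
                untreated += 1
                # A rule may schedule a follow-up event later on the same time line.
                heapq.heappush(timeline, (t + 30.0, event + "_recurrence"))
            if elapsed > 365.0:                      # stop condition for the mission time line
                break

        print(f"mission ended at day {elapsed:.0f} with {untreated} unresolved events")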

  10. Numerical Analysis of Heat Transfer During Quenching Process

    NASA Astrophysics Data System (ADS)

    Madireddi, Sowjanya; Krishnan, Krishnan Nambudiripad; Reddy, Ammana Satyanarayana

    2018-04-01

    A numerical model is developed to simulate the immersion quenching process of metals. The time of quench plays an important role if the process involves a defined step-quenching schedule to obtain the desired characteristics. The lumped heat capacity analysis used for this purpose requires the value of the heat transfer coefficient, whose evaluation requires extensive experimental data. Experimentation on a sample workpiece may not represent the actual component, which may vary in dimension. A fluid-structure interaction technique with a coupled interface between the solid (metal) and liquid (quenchant) is used for the simulations. The initial stage of quenching shows a boiling heat transfer phenomenon with high values of the heat transfer coefficient (5,000-2.5 × 10⁵ W/m²K). The shape of a workpiece with equal dimensions shows less influence on the cooling rate. Non-uniformity in hardness at sharp corners can be reduced by rounding off the edges; for a square piece of 20 mm thickness with a 3 mm fillet radius, this difference is reduced by 73%. The model can be used for any metal-quenchant combination to obtain time-temperature data without the necessity of experimentation.
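
    For orientation, the lumped heat capacity analysis mentioned above reduces to a single exponential when the heat transfer coefficient is constant; the short Python check below uses made-up property values simply to show how quench time scales with the coefficient, and is not the fluid-structure interaction model of the paper.

        import numpy as np

        def lumped_quench_time(h, T0, T_quench, T_target, rho, c, volume, area):
            """Time for a lumped body to cool from T0 to T_target in a bath at T_quench."""
            tau = rho * c * volume / (h * area)                 # thermal time constant
            return tau * np.log((T0 - T_quench) / (T_target - T_quench))

        # Hypothetical 20 mm steel cube quenched from 850 C into 40 C quenchant.
        side = 0.02
        volume, area = side**3, 6 * side**2
        for h in (5e3, 5e4, 2.5e5):                             # W/m^2K, spanning the boiling regime
            t = lumped_quench_time(h, 850.0, 40.0, 200.0, rho=7800.0, c=490.0,
                                   volume=volume, area=area)
            print(f"h = {h:9.0f} W/m2K -> time to 200 C: {t:6.1f} s")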

  11. Parallel-hierarchical processing and classification of laser beam profile images based on the GPU-oriented architecture

    NASA Astrophysics Data System (ADS)

    Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan

    2017-08-01

    The paper addresses the insufficient performance of existing computing hardware for large-image processing, which does not meet the modern requirements posed by the resource-intensive computing tasks of laser beam profiling. The research concentrated on one of the profiling problems, namely, real-time processing of spot images of the laser beam profile. The development of a theory of parallel-hierarchical transformation allowed models of high-performance parallel-hierarchical processes to be produced, as well as algorithms and software for their implementation based on a GPU-oriented architecture using GPGPU technologies. The analyzed performance of the suggested computerized tools for processing and classification of laser beam profile images shows that real-time processing of dynamic images of various sizes can be achieved.

  12. Space Debris Detection on the HPDP, a Coarse-Grained Reconfigurable Array Architecture for Space

    NASA Astrophysics Data System (ADS)

    Suarez, Diego Andres; Bretz, Daniel; Helfers, Tim; Weidendorfer, Josef; Utzmann, Jens

    2016-08-01

    Stream processing, widely used in communications and digital signal processing applications, requires high-throughput data processing that is achieved in most cases using Application-Specific Integrated Circuit (ASIC) designs. Lack of programmability is an issue, especially in space applications, which use on-board components with long life cycles that require application updates. To this end, the High Performance Data Processor (HPDP) architecture integrates an array of coarse-grained reconfigurable elements to provide both flexible and efficient computational power suitable for stream-based data processing applications in space. In this work, the capabilities of the HPDP architecture are demonstrated with the implementation of a real-time image processing algorithm for space debris detection in a space-based space surveillance system. The implementation challenges and alternatives are described, making trade-offs that improve performance at the expense of a negligible degradation in detection accuracy. The proposed implementation uses over 99% of the available computational resources. Performance estimates based on simulations show that the HPDP can amply match the application requirements.

  13. HVOF repair of steering rams for the USS Saipan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dwyer, A.L.; Jones, S.A.; Wykle, R.J.

    1995-12-31

    The steering rams aboard the USS Saipan (LHA-2) were badly corroded after 18 years of service. These rams are hydraulically operated and change the angle of the ship's rudder. The corrosion allowed excessive leaking of hydraulic fluid into the machinery space. Permanent repairs were required, as the ship has more than 20 years of service life remaining. Two methods of repair were considered: chrome plating and an HVOF-applied coating. The size, 13 in. in diameter and 15 ft in length, posed a significant problem for either process. The cost of the repair was similar, but the time for completion was better with the HVOF process, since chrome plating would have had to be accomplished off yard. The HVOF process was not available within the shipyard at the time, and the process and material to be used had not been approved. Extensive testing was required to get approval to proceed, a facility to accomplish the work had to be built, and the operators and the HVOF procedure had to be qualified. After completion of spraying, single-point machining and honing were used to obtain the required surface finish. This was the largest single HVOF coating applied by the Navy and was of great interest to all concerned.

  14. Overview of the Smart Network Element Architecture and Recent Innovations

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M.; Mata, Carlos T.; Oostdyk, Rebecca L.

    2008-01-01

    In industrial environments, system operators rely on the availability and accuracy of sensors to monitor processes and detect failures of components and/or processes. The sensors must be networked in such a way that their data are reported to a central human interface, where operators are tasked with making real-time decisions based on the state of the sensors and the components being monitored. Incorporating health management functions at this central location aids the operator by automating the decision-making process to suggest, and sometimes perform, the action required by current operating conditions. Integrated Systems Health Management (ISHM) aims to incorporate data from many sources, including real-time and historical data and user input, and to extract information and knowledge from those data to diagnose failures and predict future failures of the system. Distributing health management processing to lower levels of the architecture reduces the bandwidth required for ISHM, enhances data fusion, makes systems and processes more robust, and improves resolution for the detection and isolation of failures in a system, subsystem, component, or process. The Smart Network Element (SNE) has been developed at NASA Kennedy Space Center to perform intelligent functions at the sensor and actuator level in support of ISHM.

  15. Determining the Value of Contractor Performance Assessment Reporting System (CPARS) Narratives for the Acquisition Process

    DTIC Science & Technology

    2014-05-15

    of the following tasks: sweep floors daily, mop floors with soap and water one time per week, wax floors one time per quarter, remove trash from...officer, nor does he/she hold any contracting authority. The FAR specifically states that a COR “has no authority to make any commitments or...key requirements to make sure everyone has the same view of the performance requirements. The customer states that they want the vendor to check

  16. Determining the Value of Contractor Performance Assessment Reporting System (CPARS) Narratives for the Acquisition Process

    DTIC Science & Technology

    2014-06-01

    sweep floors daily, mop floors with soap and water one time per week, wax floors one time per quarter, remove trash from garbage bins every Thursday...authority. The FAR specifically states that a COR “has no authority to make any commitments or changes that affect price, quality, quantity, delivery, or...to define key requirements to make sure everyone has the same view of the performance requirements. The customer states that they want the vendor

  17. Automated processing of whole blood units: operational value and in vitro quality of final blood components

    PubMed Central

    Jurado, Marisa; Algora, Manuel; Garcia-Sanchez, Félix; Vico, Santiago; Rodriguez, Eva; Perez, Sonia; Barbolla, Luz

    2012-01-01

    Background The Community Transfusion Centre in Madrid currently processes whole blood using a conventional procedure (Compomat, Fresenius) followed by automated processing of buffy coats with the OrbiSac system (CaridianBCT). The Atreus 3C system (CaridianBCT) automates the production of red blood cells, plasma and an interim platelet unit from a whole blood unit. Interim platelet units are pooled to produce a transfusable platelet unit. In this study the Atreus 3C system was evaluated and compared to the routine method with regard to product quality and operational value. Materials and methods Over a 5-week period 810 whole blood units were processed using the Atreus 3C system. The attributes of the automated process were compared to those of the routine method by assessing productivity, space, equipment and staffing requirements. The data obtained were evaluated in order to estimate the impact of implementing the Atreus 3C system in the routine setting of the blood centre. Yield and in vitro quality of the final blood components processed with the two systems were evaluated and compared. Results The Atreus 3C system enabled higher throughput while requiring less space and employee time by decreasing the amount of equipment and processing time per unit of whole blood processed. Whole blood units processed on the Atreus 3C system gave a higher platelet yield, a similar amount of red blood cells and a smaller volume of plasma. Discussion These results support the conclusion that the Atreus 3C system produces blood components meeting quality requirements while providing a high operational efficiency. Implementation of the Atreus 3C system could result in a large organisational improvement. PMID:22044958

  18. Automated processing of whole blood units: operational value and in vitro quality of final blood components.

    PubMed

    Jurado, Marisa; Algora, Manuel; Garcia-Sanchez, Félix; Vico, Santiago; Rodriguez, Eva; Perez, Sonia; Barbolla, Luz

    2012-01-01

    The Community Transfusion Centre in Madrid currently processes whole blood using a conventional procedure (Compomat, Fresenius) followed by automated processing of buffy coats with the OrbiSac system (CaridianBCT). The Atreus 3C system (CaridianBCT) automates the production of red blood cells, plasma and an interim platelet unit from a whole blood unit. Interim platelet units are pooled to produce a transfusable platelet unit. In this study the Atreus 3C system was evaluated and compared to the routine method with regard to product quality and operational value. Over a 5-week period 810 whole blood units were processed using the Atreus 3C system. The attributes of the automated process were compared to those of the routine method by assessing productivity, space, equipment and staffing requirements. The data obtained were evaluated in order to estimate the impact of implementing the Atreus 3C system in the routine setting of the blood centre. Yield and in vitro quality of the final blood components processed with the two systems were evaluated and compared. The Atreus 3C system enabled higher throughput while requiring less space and employee time by decreasing the amount of equipment and processing time per unit of whole blood processed. Whole blood units processed on the Atreus 3C system gave a higher platelet yield, a similar amount of red blood cells and a smaller volume of plasma. These results support the conclusion that the Atreus 3C system produces blood components meeting quality requirements while providing a high operational efficiency. Implementation of the Atreus 3C system could result in a large organisational improvement.

  19. Implementation of Super-Encryption with Trithemius Algorithm and Double Transposition Cipher in Securing PDF Files on Android Platform

    NASA Astrophysics Data System (ADS)

    Budiman, M. A.; Rachmawati, D.; Jessica

    2018-03-01

    This study combines the Trithemius algorithm and the double transposition cipher for file security in an Android-based application. The parameters examined are the real running time and the complexity. The files used are in PDF format. The overall result shows that the complexity of the two algorithms under the super-encryption scheme is Θ(n²). However, the encryption processing time using the Trithemius algorithm is much shorter than that using the double transposition cipher, and the processing time is linearly proportional to the length of the plaintext and password.
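
    For readers unfamiliar with the classical ciphers involved, the sketch below shows the progressive-shift Trithemius step and a simple columnar transposition pass in Python; it illustrates the general technique only and does not reproduce the authors' Android implementation or key handling.

        import string

        ALPHABET = string.ascii_uppercase

        def trithemius_encrypt(plaintext):
            """Progressive Caesar shift: the i-th letter is shifted by i positions."""
            out = []
            for i, ch in enumerate(c for c in plaintext.upper() if c in ALPHABET):
                out.append(ALPHABET[(ALPHABET.index(ch) + i) % 26])
            return "".join(out)

        def columnar_transposition(text, key):
            """Write text row-wise under the key, read columns in key order (one pass)."""
            cols = len(key)
            rows = [text[i:i + cols] for i in range(0, len(text), cols)]
            order = sorted(range(cols), key=lambda k: key[k])
            return "".join("".join(row[c] for row in rows if c < len(row)) for c in order)

        def super_encrypt(plaintext, key):
            """Trithemius substitution followed by a double (two-pass) transposition."""
            stage1 = trithemius_encrypt(plaintext)
            return columnar_transposition(columnar_transposition(stage1, key), key)

        print(super_encrypt("PROCESSING TIME REQUIREMENTS", "SECRET"))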

  20. ECO fill: automated fill modification to support late-stage design changes

    NASA Astrophysics Data System (ADS)

    Davis, Greg; Wilson, Jeff; Yu, J. J.; Chiu, Anderson; Chuang, Yao-Jen; Yang, Ricky

    2014-03-01

    One of the most critical factors in achieving a positive return for a design is ensuring the design not only meets performance specifications, but also produces sufficient yield to meet the market demand. The goal of design for manufacturability (DFM) technology is to enable designers to address manufacturing requirements during the design process. While new cell-based, DP-aware, and net-aware fill technologies have emerged to provide the designer with automated fill engines that support these new fill requirements, design changes that arrive late in the tapeout process (as engineering change orders, or ECOs) can have a disproportionate effect on tapeout schedules, due to the complexity of replacing fill. If not handled effectively, the impacts on file size, run time, and timing closure can significantly extend the tapeout process. In this paper, the authors examine changes to design flow methodology, supported by new fill technology, that enable efficient, fast, and accurate adjustments to metal fill late in the design process. We present an ECO fill methodology coupled with the support of advanced fill tools that can quickly locate the portion of the design affected by the change, remove and replace only the fill in that area, while maintaining the fill hierarchy. This new fill approach effectively reduces run time, contains fill file size, minimizes timing impact, and minimizes mask costs due to ECO-driven fill changes, all of which are critical factors to ensuring time-to-market schedules are maintained.

  1. Microwave processing of a dental ceramic used in computer-aided design/computer-aided manufacturing.

    PubMed

    Pendola, Martin; Saha, Subrata

    2015-01-01

    Because of their favorable mechanical properties and natural esthetics, ceramics are widely used in restorative dentistry. The conventional ceramic sintering process required for their use is usually slow, however, and the equipment has an elevated energy consumption. Sintering processes that use microwaves have several advantages compared to regular sintering: shorter processing times, lower energy consumption, and the capacity for volumetric heating. The objective of this study was to test the mechanical properties of a dental ceramic used in computer-aided design/computer-aided manufacturing (CAD/CAM) after the specimens were processed with microwave hybrid sintering. Density, hardness, and bending strength were measured. When ceramic specimens were sintered with microwaves, the processing times were reduced and protocols were simplified. Hardness was improved almost 20% compared to regular sintering, and flexural strength measurements suggested that specimens were approximately 50% stronger than specimens sintered in a conventional system. Microwave hybrid sintering may preserve or improve the mechanical properties of dental ceramics designed for CAD/CAM processing systems, reducing processing and waiting times.

  2. A State Space Modeling Approach to Mediation Analysis

    ERIC Educational Resources Information Center

    Gu, Fei; Preacher, Kristopher J.; Ferrer, Emilio

    2014-01-01

    Mediation is a causal process that evolves over time. Thus, a study of mediation requires data collected throughout the process. However, most applications of mediation analysis use cross-sectional rather than longitudinal data. Another implicit assumption commonly made in longitudinal designs for mediation analysis is that the same mediation…

  3. Context-Sensitive Adjustment of Cognitive Control in Dual-Task Performance

    ERIC Educational Resources Information Center

    Fischer, Rico; Gottschalk, Caroline; Dreisbach, Gesine

    2014-01-01

    Performing 2 highly similar tasks at the same time requires an adaptive regulation of cognitive control to shield prioritized primary task processing from between-task (cross-talk) interference caused by secondary task processing. In the present study, the authors investigated how implicitly and explicitly delivered information promotes the…

  4. 47 CFR 0.504 - Processing requests for declassification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... and review time required to process the request. A final determination shall be made within one year... warrants protection, it shall be declassified and made available to the requester, unless withholding is... appeal the denial to the Classification Review Committee, and given notice that such an appeal must be...

  5. 47 CFR 0.504 - Processing requests for declassification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... and review time required to process the request. A final determination shall be made within one year... warrants protection, it shall be declassified and made available to the requester, unless withholding is... appeal the denial to the Classification Review Committee, and given notice that such an appeal must be...

  6. Multitime correlation functions in nonclassical stochastic processes

    NASA Astrophysics Data System (ADS)

    Krumm, F.; Sperling, J.; Vogel, W.

    2016-06-01

    A general method is introduced for verifying multitime quantum correlations through the characteristic function of the time-dependent P functional that generalizes the Glauber-Sudarshan P function. Quantum correlation criteria are derived which identify quantum effects for an arbitrary number of points in time. The Magnus expansion is used to visualize the impact of the required time ordering, which becomes crucial in situations when the interaction problem is explicitly time dependent. We show that the latter affects the multi-time-characteristic function and, therefore, the temporal evolution of the nonclassicality. As an example, we apply our technique to an optical parametric process with a frequency mismatch. The resulting two-time-characteristic function yields full insight into the two-time quantum correlation properties of such a system.

  7. 40 CFR 63.7720 - What are my general requirements for complying with this subpart?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... operation and maintenance requirements in this subpart at all times, except during periods of startup... process and emissions control equipment. (c) You must develop a written startup, shutdown, and malfunction plan according to the provisions in § 63.6(e)(3). The startup, shutdown, and malfunction plan also must...

  8. 40 CFR 63.7720 - What are my general requirements for complying with this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operation and maintenance requirements in this subpart at all times, except during periods of startup... process and emissions control equipment. (c) You must develop a written startup, shutdown, and malfunction plan according to the provisions in § 63.6(e)(3). The startup, shutdown, and malfunction plan also must...

  9. 40 CFR 63.7720 - What are my general requirements for complying with this subpart?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operation and maintenance requirements in this subpart at all times, except during periods of startup... process and emissions control equipment. (c) You must develop a written startup, shutdown, and malfunction plan according to the provisions in § 63.6(e)(3). The startup, shutdown, and malfunction plan also must...

  10. 40 CFR 63.7720 - What are my general requirements for complying with this subpart?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operation and maintenance requirements in this subpart at all times, except during periods of startup... process and emissions control equipment. (c) You must develop a written startup, shutdown, and malfunction plan according to the provisions in § 63.6(e)(3). The startup, shutdown, and malfunction plan also must...

  11. 40 CFR 63.7720 - What are my general requirements for complying with this subpart?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... operation and maintenance requirements in this subpart at all times, except during periods of startup... process and emissions control equipment. (c) You must develop a written startup, shutdown, and malfunction plan according to the provisions in § 63.6(e)(3). The startup, shutdown, and malfunction plan also must...

  12. Rapid doubling of the critical current of YBa 2Cu 3O 7-δ coated conductors for viable high-speed industrial processing

    DOE PAGES

    Leroux, M.; Kihlstrom, K. J.; Holleis, S.; ...

    2015-11-09

    Here, we demonstrate that 3.5-MeV oxygen irradiation can markedly enhance the in-field critical current of commercial second-generation superconducting tapes with an exposure time of just 1 s per 0.8 cm². Furthermore, we demonstrate that this speed is now at the level required for industrial reel-to-reel post-processing. The irradiation is performed on production-line samples through the protective silver coating and does not require any modification of the growth process. From TEM imaging, we identify small clusters as the main source of increased vortex pinning.

  13. The Characteristics and Limits of Rapid Visual Categorization

    PubMed Central

    Fabre-Thorpe, Michèle

    2011-01-01

    Visual categorization appears both effortless and virtually instantaneous. The study by Thorpe et al. (1996) was the first to estimate the processing time necessary to perform fast visual categorization of animals in briefly flashed (20 ms) natural photographs. They observed a large differential EEG activity between target and distracter correct trials that developed from 150 ms after stimulus onset, a value that was later shown to be even shorter in monkeys! With such strong processing time constraints, it was difficult to escape the conclusion that rapid visual categorization was relying on massively parallel, essentially feed-forward processing of visual information. Since 1996, we have conducted a large number of studies to determine the characteristics and limits of fast visual categorization. The present chapter will review some of the main results obtained. I will argue that rapid object categorizations in natural scenes can be done without focused attention and are most likely based on coarse and unconscious visual representations activated with the first available (magnocellular) visual information. Fast visual processing proved efficient for the categorization of large superordinate object or scene categories, but shows its limits when more detailed basic representations are required. The representations for basic objects (dogs, cars) or scenes (mountain or sea landscapes) need additional processing time to be activated. This finding is at odds with the widely accepted idea that such basic representations are at the entry level of the system. Interestingly, focused attention is still not required to perform these time consuming basic categorizations. Finally we will show that object and context processing can interact very early in an ascending wave of visual information processing. We will discuss how such data could result from our experience with a highly structured and predictable surrounding world that shaped neuronal visual selectivity. PMID:22007180

  14. Controls on the Environmental Fate of Compounds Controlled by Coupled Hydrologic and Reactive Processes

    NASA Astrophysics Data System (ADS)

    Hixson, J.; Ward, A. S.; McConville, M.; Remucal, C.

    2017-12-01

    Current understanding of how compounds interact with hydrologic processes or with reactive processes is well established. However, the environmental fate of compounds that interact with hydrologic AND reactive processes is not well known, yet it is critical for evaluating environmental risk. Evaluations of risk are often simplified to homogenize processes in space and time and to assess processes independently of one another. However, spatial heterogeneity and time-variable reactivities complicate predictions of environmental transport and fate, and these predictions are further complicated by the interaction of these processes, limiting our ability to accurately predict risk. Compounds that interact with both systems, such as photolytic compounds, require that both components be fully understood in order to predict transport and fate. Release of photolytic compounds occurs through both unintentional releases and intentional loadings. Evaluating risks associated with unintentional releases and implementing best management practices for intentional releases require an in-depth understanding of the sensitivity of photolytic compounds to external controls. Lampricides, such as 3-trifluoromethyl-4-nitrophenol (TFM), are broadly applied in the Great Lakes system to control the population of invasive sea lamprey. Over-dosing can yield fish kills and other detrimental impacts. Still, planning accounts for time of passage and dilution, but not for the interaction of the physical and chemical systems (i.e., storage in the hyporheic zone and time-variable decay rates). In this study, we model a series of TFM applications to test the efficacy of dosing as a function of system characteristics. Overall, our results demonstrate the complexity associated with the transport of photo-sensitive compounds through stream-hyporheic systems and highlight the need to better understand how physical and chemical systems interact to control transport and fate in the environment.

  15. Biological reduction of chlorinated solvents: Batch-scale geochemical modeling

    NASA Astrophysics Data System (ADS)

    Kouznetsova, Irina; Mao, Xiaomin; Robinson, Clare; Barry, D. A.; Gerhard, Jason I.; McCarty, Perry L.

    2010-09-01

    Simulation of biodegradation of chlorinated solvents in dense non-aqueous phase liquid (DNAPL) source zones requires a model that accounts for the complexity of processes involved and that is consistent with available laboratory studies. This paper describes such a comprehensive modeling framework that includes microbially mediated degradation processes, microbial population growth and decay, geochemical reactions, as well as interphase mass transfer processes such as DNAPL dissolution, gas formation and mineral precipitation/dissolution. All these processes can be in equilibrium or kinetically controlled. A batch modeling example was presented where the degradation of trichloroethene (TCE) and its byproducts and concomitant reactions (e.g., electron donor fermentation, sulfate reduction, pH buffering by calcite dissolution) were simulated. Local and global sensitivity analysis techniques were applied to delineate the dominant model parameters and processes. Sensitivity analysis indicated that accurate values for parameters related to dichloroethene (DCE) and vinyl chloride (VC) degradation (i.e., DCE and VC maximum utilization rates, yield due to DCE utilization, decay rate for DCE/VC dechlorinators) are important for prediction of the overall dechlorination time. These parameters influence the maximum growth rate of the DCE and VC dechlorinating microorganisms and, thus, the time required for a small initial population to reach a sufficient concentration to significantly affect the overall rate of dechlorination. Self-inhibition of chlorinated ethenes at high concentrations and natural buffering provided by the sediment were also shown to significantly influence the dechlorination time. Furthermore, the analysis indicated that the rates of the competing, nonchlorinated electron-accepting processes relative to the dechlorination kinetics also affect the overall dechlorination time. Results demonstrated that the model developed is a flexible research tool that is able to provide valuable insight into the fundamental processes and their complex interactions during bioremediation of chlorinated ethenes in DNAPL source zones.

  16. (abstract) Application of the GPS Worldwide Network in the Study of Global Ionospheric Storms

    NASA Technical Reports Server (NTRS)

    Ho, C. M.; Mannucci, A. J.; Lindqwister, U. J.; Pi, X.; Sparks, L. C.; Rao, A. M.; Wilsion, B. D.; Yuan, D. N.; Reyes, M.

    1997-01-01

    Ionospheric storm dynamics as a response to the geomagnetic storms is a very complicated global process involving many different mechanisms. Studying ionospheric storms will help us to understand the energy coupling process between the Sun and Earth and possibly also to effectively forecast space weather changes. Such a study requires a worldwide monitoring system. The worldwide GPS network, for the first time, makes near real-time global ionospheric TEC measurements a possibility.

  17. A high-fidelity weather time series generator using the Markov Chain process on a piecewise level

    NASA Astrophysics Data System (ADS)

    Hersvik, K.; Endrerud, O.-E. V.

    2017-12-01

    A method is developed for generating a set of unique weather time-series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov Process is used, specifically by joining small pieces of random length time series together rather than joining individual weather states, each from a single time step, which is a common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.
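
    A minimal sketch of the piecewise idea is given below: the historical series is cut into short segments of random length, and segments are chained together whenever the state at the end of one segment matches the state at the start of the next. The state binning and segment lengths are illustrative choices, not the calibrated model of the paper.

        import random

        random.seed(42)
        # Hypothetical historical weather series, binned into integer states (e.g. wave-height classes).
        history = [random.choice([0, 1, 1, 2, 2, 3]) for _ in range(2000)]

        def generate_piecewise(history, length, min_len=6, max_len=24):
            """Join random-length pieces of the historical record whose endpoint states match."""
            series = history[:random.randint(min_len, max_len)]
            while len(series) < length:
                target = series[-1]
                # Candidate start positions whose state matches the current end state.
                starts = [i for i in range(len(history) - max_len) if history[i] == target]
                i = random.choice(starts)
                piece_len = random.randint(min_len, max_len)
                series.extend(history[i + 1:i + piece_len])   # skip the duplicated matching state
            return series[:length]

        synthetic = generate_piecewise(history, 8760)         # one "year" of hourly states
        print(synthetic[:24])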

  18. (abstract) A High Throughput 3-D Inner Product Processor

    NASA Technical Reports Server (NTRS)

    Daud, Tuan

    1996-01-01

    A particularly challenging image processing application is real-time scene acquisition and object discrimination. It requires spatio-temporal recognition of point and resolved objects at high speeds with parallel processing algorithms. Neural network paradigms provide fine-grain parallelism and, when implemented in hardware, offer orders-of-magnitude speed-up. However, neural networks implemented on a VLSI chip are planar architectures capable of efficient processing of linear vector signals rather than 2-D images. Therefore, for processing of images, a 3-D stack of neural-net ICs receiving planar inputs and consuming minimal power is required. Details of the circuits and chip architectures will be described, along with the need to develop ultralow-power electronics. Further, use of the architecture in a system for high-speed processing will be illustrated.

  19. Development and Applications of a Mobile Ecogenomic Sensor

    NASA Astrophysics Data System (ADS)

    Yamahara, K.; Preston, C. M.; Pargett, D.; Jensen, S.; Roman, B.; Walz, K.; Birch, J. M.; Hobson, B.; Kieft, B.; Zhang, Y.; Ryan, J. P.; Chavez, F.; Scholin, C. A.

    2016-12-01

    Modern molecular biological analytical methods have revolutionized our understanding of organism diversity in the ocean. Such advancements have profound implications for use in environmental research and resource management. However, the application of such technology to comprehensively document biodiversity and understand ecosystem processes in an ocean setting will require repeated observations over vast space and time scales. A fundamental challenge associated with meeting that requirement is acquiring discrete samples over spatial scales and frequencies necessary to document cause-and-effect relationships that link biological processes to variable physical and chemical gradients in rapidly changing water masses. Accomplishing that objective using ships alone is not practical. We are working to overcome this fundamental challenge by developing a new generation of biological instrumentation, the third generation ESP (3G ESP). The 3G ESP is a robotic device that automates sample collection, preservation, and/or in situ processing for real-time target molecule detection. Here we present the development of the 3G ESP and its integration with a Tethys-class Long Range AUV (LRAUV), and demonstrate its ability to collect and preserve material for subsequent metagenomic and quantitative PCR (qPCR) analyses. Further, we elucidate the potential of employing multiple mobile ecogenomic sensors to monitor ocean biodiversity, as well as following ecosystems over time to reveal time/space relationships of biological processes in response to changing environmental conditions.

  20. Vision readiness of the reserve forces of the U.S. Army.

    PubMed

    Weaver, J L; McAlister, W H

    2001-01-01

    In 1996 and 1997, the Army conducted an exercise to assess the ability to rapidly mobilize the reserve forces. In accordance with Army requirements, each soldier was evaluated to determine if he or she met vision and optical readiness standards. Of the 1,947 individuals processed through the optometry section, 40% met vision requirements without correction and 32% met vision requirements with their current spectacles. The remaining 28% required examination. A major impediment to processing reserve units for deployment is the lack of vision and optical readiness. In the mobilization for the Persian Gulf War, significant delays were incurred because of the time required to perform eye examinations and fabricate eyewear. However, as a result of this exercise, current prescriptions will be available in the event of mobilization. To ensure readiness, all units should perform such exercises periodically.

  1. A Fully Automated Microfluidic Femtosecond Laser Axotomy Platform for Nerve Regeneration Studies in C. elegans

    PubMed Central

    Gokce, Sertan Kutal; Guo, Samuel X.; Ghorashian, Navid; Everett, W. Neil; Jarrell, Travis; Kottek, Aubri; Bovik, Alan C.; Ben-Yakar, Adela

    2014-01-01

    Femtosecond laser nanosurgery has been widely accepted as an axonal injury model, enabling nerve regeneration studies in the small model organism, Caenorhabditis elegans. To overcome the time limitations of manual worm handling techniques, automation and new immobilization technologies must be adopted to improve throughput in these studies. While new microfluidic immobilization techniques have been developed that promise to reduce the time required for axotomies, there is a need for automated procedures to minimize the required amount of human intervention and accelerate the axotomy processes crucial for high-throughput. Here, we report a fully automated microfluidic platform for performing laser axotomies of fluorescently tagged neurons in living Caenorhabditis elegans. The presented automation process reduces the time required to perform axotomies within individual worms to ∼17 s/worm, at least one order of magnitude faster than manual approaches. The full automation is achieved with a unique chip design and an operation sequence that is fully computer controlled and synchronized with efficient and accurate image processing algorithms. The microfluidic device includes a T-shaped architecture and three-dimensional microfluidic interconnects to serially transport, position, and immobilize worms. The image processing algorithms can identify and precisely position axons targeted for ablation. There were no statistically significant differences observed in reconnection probabilities between axotomies carried out with the automated system and those performed manually with anesthetics. The overall success rate of automated axotomies was 67.4±3.2% of the cases (236/350) at an average processing rate of 17.0±2.4 s. This fully automated platform establishes a promising methodology for prospective genome-wide screening of nerve regeneration in C. elegans in a truly high-throughput manner. PMID:25470130

  2. Design and Data Management System

    NASA Technical Reports Server (NTRS)

    Messer, Elizabeth; Messer, Brad; Carter, Judy; Singletary, Todd; Albasini, Colby; Smith, Tammy

    2007-01-01

    The Design and Data Management System (DDMS) was developed to automate the NASA Engineering Order (EO) and Engineering Change Request (ECR) processes at the Propulsion Test Facilities at Stennis Space Center for efficient and effective Configuration Management (CM). Prior to the development of DDMS, the CM system was a manual, paper-based system that required an EO or ECR submitter to walk the changes through the acceptance process to obtain necessary approval signatures. This approval process could take up to two weeks, and was subject to a variety of human errors. The process also required that the CM office make copies and distribute them to the Configuration Control Board members for review prior to meetings. At any point, there was a potential for an error or loss of the change records, meaning the configuration of record was not accurate. The new Web-based DDMS eliminates unnecessary copies, reduces the time needed to distribute the paperwork, reduces time to gain the necessary signatures, and prevents the variety of errors inherent in the previous manual system. After implementation of the DDMS, all EOs and ECRs can be automatically checked prior to submittal to ensure that the documentation is complete and accurate. Much of the configuration information can be documented in the DDMS through pull-down forms to ensure consistent entries by the engineers and technicians in the field. The software also can electronically route the documents through the signature process to obtain the necessary approvals needed for work authorization. The workflow of the system allows for backups and timestamps that determine the correct routing and completion of all required authorizations in a more timely manner, as well as assuring the quality and accuracy of the configuration documents.

  3. RAPID DETECTION METHOD FOR E.COLI, ENTEROCOCCI AND BACTEROIDES IN RECREATIONAL WATER

    EPA Science Inventory

    Current methodology for determining fecal contamination of drinking water sources and recreational waters relies on the time-consuming process of bacterial multiplication and requires at least 24 hours from the time of sampling to the possible determination that the water is unsafe ...

  4. Impact of Salt Waste Processing Facility Streams on the Nitric-Glycolic Flowsheet in the Chemical Processing Cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martino, C.

    An evaluation of the previous Chemical Processing Cell (CPC) testing was performed to determine whether the planned concurrent operation, or “coupled” operations, of the Defense Waste Processing Facility (DWPF) with the Salt Waste Processing Facility (SWPF) has been adequately covered. Tests with the nitric-glycolic acid flowsheet, which were both coupled and uncoupled with salt waste streams, included several tests that required extended boiling times. This report provides the evaluation of previous testing and the testing recommendation requested by Savannah River Remediation. The focus of the evaluation was impact on flammability in CPC vessels (i.e., hydrogen generation rate, SWPF solvent components, antifoam degradation products) and processing impacts (i.e., acid window, melter feed target, rheological properties, antifoam requirements, and chemical composition).

  5. MRO DKF Post-Processing Tool

    NASA Technical Reports Server (NTRS)

    Ayap, Shanti; Fisher, Forest; Gladden, Roy; Khanampompan, Teerapat

    2008-01-01

    This software tool saves time and reduces risk by automating two labor-intensive and error-prone post-processing steps required for every DKF [DSN (Deep Space Network) Keyword File] that MRO (Mars Reconnaissance Orbiter) produces, and is being extended to post-process the corresponding TSOE (Text Sequence Of Events) as well. The need for this post-processing step stems from limitations in the seq-gen modeling, which result in incorrect DKF generation that must then be cleaned up in post-processing.

  6. Fast processing of microscopic images using object-based extended depth of field.

    PubMed

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Pannarut, Montri; Shaw, Philip J; Tongsima, Sissades

    2016-12-22

    Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie on different depths of field necessitating capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depths of field (OEDoF) to address this issue. The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast followed by identification of foreground pixels. A composite image is constructed using only these foreground pixels, which dramatically reduces the computational time. We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. The resulting composite images with all in-focus objects were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required four times less processing time. This work presents a modification of the extended depth of field approach for efficiently enhancing microscopic images. This selective object processing scheme used in OEDoF can significantly reduce the overall processing time while maintaining the clarity of important image features. The empirical results from parasite-infected red cell images revealed that our proposed method efficiently and effectively produced in-focus composite images. With the speed improvement of OEDoF, this proposed algorithm is suitable for processing large numbers of microscope images, e.g., as required for medical diagnosis.
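
    The four-module sequence described above lends itself to a compact illustration. The sketch below is a simplified, hypothetical rendition of the object-based idea (foreground detection followed by per-pixel selection of the best-focused plane) in NumPy; it is not the authors' OEDoF implementation, and the Laplacian contrast measure and threshold are assumptions.

```python
import numpy as np

def oedof_merge(stack_gray, stack_color, fg_thresh=10.0):
    """Merge a focal stack into one composite, processing only foreground pixels.

    stack_gray  : list of 2-D float arrays (grayscale focal planes)
    stack_color : list of 3-D uint8 arrays (matching colour planes)
    fg_thresh   : local-contrast level below which a pixel is treated as background
    """
    # 1) local contrast per plane: magnitude of a simple Laplacian
    def laplacian(img):
        lap = (-4 * img
               + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1))
        return np.abs(lap)

    contrast = np.stack([laplacian(g) for g in stack_gray])   # (n_planes, H, W)

    # 2) foreground = pixels that show useful contrast in at least one plane
    fg_mask = contrast.max(axis=0) > fg_thresh

    # 3) for foreground pixels, keep the plane with the best contrast;
    #    background pixels are copied from the first plane unchanged
    best_plane = contrast.argmax(axis=0)
    composite = stack_color[0].copy()
    ys, xs = np.nonzero(fg_mask)
    composite[ys, xs] = np.stack(stack_color)[best_plane[ys, xs], ys, xs]
    return composite
```

    Because only the foreground coordinates are merged, the per-pixel selection step scales with the number of in-focus object pixels rather than with the full image size, which is the source of the speed-up claimed above.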

  7. Effect of film-based versus filmless operation on the productivity of CT technologists.

    PubMed

    Reiner, B I; Siegel, E L; Hooper, F J; Glasser, D

    1998-05-01

    To determine the relative time required for a technologist to perform a computed tomographic (CT) examination in a "filmless" versus a film-based environment. Time-motion studies were performed in 204 consecutive CT examinations. Images from 96 examinations were electronically transferred to a picture archiving and communication system (PACS) without being printed to film, and 108 were printed to film. The time required to obtain and electronically transfer the images or print the images to film and make the current and previous studies available to the radiologists for interpretation was recorded. The time required for a technologist to complete a CT examination was reduced by 45% with direct image transfer to the PACS compared with the time required in the film-based mode. This reduction was due to the elimination of a number of steps in the filming process, such as the printing at multiple window or level settings. The use of a PACS can result in the elimination of multiple time-intensive tasks for the CT technologist, resulting in a marked reduction in examination time. This reduction can result in increased productivity, and, hence greater cost-effectiveness with filmless operation.

  8. The "Motor" in Implicit Motor Sequence Learning: A Foot-stepping Serial Reaction Time Task.

    PubMed

    Du, Yue; Clark, Jane E

    2018-05-03

    This protocol describes a modified serial reaction time (SRT) task used to study implicit motor sequence learning. Unlike the classic SRT task that involves finger-pressing movements while sitting, the modified SRT task requires participants to step with both feet while maintaining a standing posture. This stepping task necessitates whole body actions that impose postural challenges. The foot-stepping task complements the classic SRT task in several ways. The foot-stepping SRT task is a better proxy for the daily activities that require ongoing postural control, and thus may help us better understand sequence learning in real-life situations. In addition, response time serves as an indicator of sequence learning in the classic SRT task, but it is unclear whether response time, reaction time (RT) representing mental process, or movement time (MT) reflecting the movement itself, is a key player in motor sequence learning. The foot-stepping SRT task allows researchers to disentangle response time into RT and MT, which may clarify how motor planning and movement execution are involved in sequence learning. Lastly, postural control and cognition are interactively related, but little is known about how postural control interacts with learning motor sequences. With a motion capture system, the movement of the whole body (e.g., the center of mass (COM)) can be recorded. Such measures allow us to reveal the dynamic processes underlying discrete responses measured by RT and MT, and may aid in elucidating the relationship between postural control and the explicit and implicit processes involved in sequence learning. Details of the experimental set-up, procedure, and data processing are described. The representative data are adopted from one of our previous studies. Results are related to response time, RT, and MT, as well as the relationship between the anticipatory postural response and the explicit processes involved in implicit motor sequence learning.
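
    As a simple illustration of the decomposition described above, the snippet below splits hypothetical trial timestamps into reaction time (stimulus onset to movement onset) and movement time (movement onset to target contact). The numbers and field names are invented placeholders, not the protocol's actual data format.

```python
# Hypothetical event timestamps (seconds) for two foot-stepping trials
trials = [
    {"stimulus_on": 0.000, "movement_on": 0.412, "target_hit": 0.951},
    {"stimulus_on": 0.000, "movement_on": 0.388, "target_hit": 0.874},
]

for i, t in enumerate(trials, 1):
    rt = t["movement_on"] - t["stimulus_on"]   # reaction time: mental processing
    mt = t["target_hit"] - t["movement_on"]    # movement time: execution
    print(f"trial {i}: RT = {rt*1000:.0f} ms, MT = {mt*1000:.0f} ms, "
          f"response time = {(rt + mt)*1000:.0f} ms")
```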

  9. File Usage Analysis and Resource Usage Prediction: a Measurement-Based Study. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.-S.

    1987-01-01

    A probabilistic scheme was developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual values. The coefficient of correlation between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
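
    A minimal sketch of the idea, assuming the states are resource-usage clusters with known mean CPU times (the centroids) and that a prediction is a transition-weighted average over the observed transitions from the current state. The class and method names are illustrative, not the thesis' actual implementation.

```python
from collections import defaultdict

class ResourcePredictor:
    """State-transition predictor: states are resource-usage clusters; the
    expected CPU time of a program's next run is a transition-weighted
    average of the cluster centroids."""

    def __init__(self, centroids):
        self.centroids = centroids                            # state -> mean CPU time (s)
        self.counts = defaultdict(lambda: defaultdict(int))   # program -> (prev, next) -> count
        self.last_state = {}                                  # program -> last observed state

    def observe(self, program, state):
        prev = self.last_state.get(program)
        if prev is not None:
            self.counts[program][(prev, state)] += 1
        self.last_state[program] = state

    def predict_cpu(self, program):
        prev = self.last_state.get(program)
        if prev is None:
            return None
        trans = {s: n for (p, s), n in self.counts[program].items() if p == prev}
        total = sum(trans.values())
        if total == 0:
            return self.centroids[prev]   # no history from this state yet
        return sum(self.centroids[s] * n for s, n in trans.items()) / total

# Hypothetical usage: three clusters with mean CPU times of 0.2 s, 1.5 s, 9.0 s
pred = ResourcePredictor(centroids={0: 0.2, 1: 1.5, 2: 9.0})
for state in [0, 0, 1, 0, 2]:
    pred.observe("cc", state)
print(pred.predict_cpu("cc"))
```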

  10. Using Time-Driven Activity-Based Costing as a Key Component of the Value Platform: A Pilot Analysis of Colonoscopy, Aortic Valve Replacement and Carpal Tunnel Release Procedures.

    PubMed

    Martin, Jacob A; Mayhew, Christopher R; Morris, Amanda J; Bader, Angela M; Tsai, Mitchell H; Urman, Richard D

    2018-04-01

    Time-driven activity-based costing (TDABC) is a methodology that calculates the costs of healthcare resources consumed as a patient moves along a care process. Limited data exist on the application of TDABC from the perspective of an anesthesia provider. We describe the use of TDABC, a bottom-up costing strategy and financial outcomes for three different medical-surgical procedures. In each case, a multi-disciplinary team created process maps describing the care delivery cycle for a patient encounter using the TDABC methodology. Each step in a process map delineated an activity required for delivery of patient care. The resources (personnel, equipment and supplies) associated with each step were identified. A per minute cost for each resource expended was generated, known as the capacity cost rate, and multiplied by its time requirement. The total cost for an episode of care was obtained by adding the cost of each individual resource consumed as the patient moved along a clinical pathway. We built process maps for colonoscopy in the gastroenterology suite, calculated costs of an aortic valve replacement by comparing surgical aortic valve replacement (SAVR) versus transcatheter aortic valve replacement (TAVR) techniques, and determined the cost of carpal tunnel release in an operating room versus an ambulatory procedure room. TDABC is central to the value-based healthcare platform. Application of TDABC provides a framework to identify process improvements for health care delivery. The first case demonstrates cost-savings and improved wait times by shifting some of the colonoscopies scheduled with an anesthesiologist from the main hospital to the ambulatory facility. In the second case, we show that the deployment of an aortic valve via the transcatheter route front loads the costs compared to traditional, surgical replacement. The last case demonstrates significant cost savings to the healthcare system associated with re-organization of staff required to execute a carpal tunnel release.
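
    The costing arithmetic itself is simple to illustrate. The sketch below uses an entirely hypothetical process map and capacity cost rates; it only demonstrates the "capacity cost rate × minutes, summed along the pathway" calculation described above.

```python
# Hypothetical process map for one episode of care: each step lists the
# resource used, its capacity cost rate ($/min) and the minutes consumed.
process_map = [
    ("check-in clerk",   0.50, 5),
    ("pre-op nurse",     1.20, 15),
    ("anesthesiologist", 4.00, 20),
    ("procedure room",   6.50, 30),
    ("recovery nurse",   1.20, 40),
]

episode_cost = sum(rate * minutes for _, rate, minutes in process_map)
print(f"Total episode cost: ${episode_cost:.2f}")   # 2.5 + 18 + 80 + 195 + 48 = 343.50
```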

  11. Using Time-Driven Activity-Based Costing as a Key Component of the Value Platform: A Pilot Analysis of Colonoscopy, Aortic Valve Replacement and Carpal Tunnel Release Procedures

    PubMed Central

    Martin, Jacob A.; Mayhew, Christopher R.; Morris, Amanda J.; Bader, Angela M.; Tsai, Mitchell H.; Urman, Richard D.

    2018-01-01

    Background Time-driven activity-based costing (TDABC) is a methodology that calculates the costs of healthcare resources consumed as a patient moves along a care process. Limited data exist on the application of TDABC from the perspective of an anesthesia provider. We describe the use of TDABC, a bottom-up costing strategy and financial outcomes for three different medical-surgical procedures. Methods In each case, a multi-disciplinary team created process maps describing the care delivery cycle for a patient encounter using the TDABC methodology. Each step in a process map delineated an activity required for delivery of patient care. The resources (personnel, equipment and supplies) associated with each step were identified. A per minute cost for each resource expended was generated, known as the capacity cost rate, and multiplied by its time requirement. The total cost for an episode of care was obtained by adding the cost of each individual resource consumed as the patient moved along a clinical pathway. Results We built process maps for colonoscopy in the gastroenterology suite, calculated costs of an aortic valve replacement by comparing surgical aortic valve replacement (SAVR) versus transcatheter aortic valve replacement (TAVR) techniques, and determined the cost of carpal tunnel release in an operating room versus an ambulatory procedure room. Conclusions TDABC is central to the value-based healthcare platform. Application of TDABC provides a framework to identify process improvements for health care delivery. The first case demonstrates cost-savings and improved wait times by shifting some of the colonoscopies scheduled with an anesthesiologist from the main hospital to the ambulatory facility. In the second case, we show that the deployment of an aortic valve via the transcatheter route front loads the costs compared to traditional, surgical replacement. The last case demonstrates significant cost savings to the healthcare system associated with re-organization of staff required to execute a carpal tunnel release. PMID:29511420

  12. Advance Technology Satellites in the Commercial Environment. Volume 2: Final Report

    NASA Technical Reports Server (NTRS)

    1984-01-01

    A forecast of transponder requirements was obtained. Certain assumptions about system configurations are implicit in this process. The factors included are interpolation of baseline year values to produce yearly figures, estimation of satellite capture, effects of peak hours and the time-zone staggering of peak hours, circuit requirements for an acceptable grade of service, capacity of satellite transponders (including various compression methods where applicable), and requirements for spare transponders in orbit. The geographical distribution of traffic requirements was estimated.

  13. Age-related differences in time-based prospective memory: The role of time estimation in the clock monitoring strategy.

    PubMed

    Vanneste, Sandrine; Baudouin, Alexia; Bouazzaoui, Badiâa; Taconnat, Laurence

    2016-07-01

    Time-based prospective memory (TBPM) is required when it is necessary to remember to perform an action at a specific future point in time. This type of memory has been found to be particularly sensitive to ageing, probably because it requires a self-initiated response at a specific time. In this study, we sought to examine the involvement of temporal processes in the time monitoring strategy, which has been demonstrated to be a decisive factor in TBPM efficiency. We compared the performance of young and older adults in a TBPM task in which they had to press a response button every minute while categorising words. The design allowed participants to monitor time by checking a clock whenever they decided. Participants also completed a classic time-production task and several executive tasks assessing inhibition, updating and shifting processes. Our results confirm an age-related lack of accuracy in prospective memory performance, which seems to be related to a deficient strategic use of time monitoring. This could in turn be partially explained by age-related temporal deficits, as evidenced in the duration production task. These findings suggest that studies designed to investigate the age effect in TBPM tasks should consider the contribution of temporal mechanisms.

  14. StakeMeter: Value-Based Stakeholder Identification and Quantification Framework for Value-Based Software Systems

    PubMed Central

    Babar, Muhammad Imran; Ghazali, Masitah; Jawawi, Dayang N. A.; Zaheer, Kashif Bin

    2015-01-01

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of a VBS system is associated with a concrete set of valuable requirements, and valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of VBS systems; however, the focus on valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for VBS systems. The existing approaches are time-consuming, complex and inconsistent, which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide low-level implementation details for SIQ initiation or stakeholder metrics for quantification. Hence, keeping in view the existing SIQ problems, this research contributes a new SIQ framework called ‘StakeMeter’. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and an application procedure as compared to the other methods. The proposed framework addresses the issues of stakeholder quantification or prioritization, high time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for VBS systems with less judgmental error.

  15. Design analysis of levitation facility for space processing applications. [Skylab program, space shuttles

    NASA Technical Reports Server (NTRS)

    Frost, R. T.; Kornrumpf, W. P.; Napaluch, L. J.; Harden, J. D., Jr.; Walden, J. P.; Stockhoff, E. H.; Wouch, G.; Walker, L. H.

    1974-01-01

    Containerless processing facilities for the space laboratory and space shuttle are defined. Materials process examples representative of the most severe requirements for the facility in terms of electrical power, radio frequency equipment, and the use of an auxiliary electron beam heater were used to discuss the matters having the greatest effect upon the space shuttle pallet payload interfaces and envelopes. Improved weight, volume, and efficiency estimates for the RF generating equipment were derived. The results are particularly significant because of the reduced requirements for heat rejection from electrical equipment, one of the principal envelope problems for shuttle pallet payloads. It is shown that although experiments on containerless melting of high-temperature refractory materials make it desirable to consider the highest peak powers that can be made available on the pallet, total energy requirements are kept relatively low by the very fast processing times typical of containerless experiments; this allows heat-rejection capabilities lower than the peak power demand to be considered if energy storage in system heat capacitances is taken into account. Batteries are considered to avoid a requirement for fuel cells capable of furnishing this brief peak power demand.

  16. Scheduling algorithms for automatic control systems for technological processes

    NASA Astrophysics Data System (ADS)

    Chernigovskiy, A. S.; Tsarev, R. Yu; Kapulin, D. V.

    2017-01-01

    Wide use of automatic process control systems and of high-performance systems containing a number of computers (processors) creates opportunities for high-quality, fast production that increases the competitiveness of an enterprise. Exact and fast calculations, control computation, and processing of big data arrays all require a high level of productivity and, at the same time, minimal time for data handling and delivery of results. To achieve the best time, it is necessary not only to use computing resources optimally, but also to design and develop the software so that the time gain is maximal. For this purpose, task (job or operation) scheduling techniques for multi-machine/multiprocessor systems are applied. This paper considers some basic task scheduling methods for multi-machine process control systems, highlights their advantages and disadvantages, and offers some considerations for their use when developing software for automatic process control systems.
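
    As one representative example of this class of techniques, the sketch below implements the classic longest-processing-time (LPT) list-scheduling heuristic for assigning independent tasks to identical machines. It illustrates the general idea of multiprocessor task scheduling; it is not a specific algorithm from the paper.

```python
import heapq

def lpt_schedule(task_times, n_machines):
    """Longest-Processing-Time list scheduling: assign each task (longest
    first) to the currently least-loaded machine. Returns the per-machine
    task assignment and the makespan."""
    loads = [(0.0, m) for m in range(n_machines)]   # (current load, machine id)
    heapq.heapify(loads)
    assignment = {m: [] for m in range(n_machines)}
    for task, t in sorted(enumerate(task_times), key=lambda x: -x[1]):
        load, m = heapq.heappop(loads)              # least-loaded machine
        assignment[m].append(task)
        heapq.heappush(loads, (load + t, m))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

# Example: 7 jobs with given processing times scheduled on 3 processors
print(lpt_schedule([5, 7, 3, 9, 2, 6, 4], n_machines=3))
```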

  17. Time lens assisted photonic sampling extraction

    NASA Astrophysics Data System (ADS)

    Petrillo, Keith Gordon

    Telecommunication bandwidth demands have dramatically increased in recent years due to Internet-based services like cloud computing and storage, large file sharing, and video streaming. Additionally, sensing systems such as wideband radar, magnetic resonance imaging systems, and the complex modulation formats needed to handle large data transfers in telecommunications require high-speed, high-resolution analog-to-digital converters (ADCs) to interpret the data. Accurately acquiring and processing information at next-generation data rates from these systems has become challenging for electronic systems. The largest contributors to the electronic bottleneck are bandwidth and timing jitter, which limit speed and reduce accuracy. Optical systems have been shown to offer at least three orders of magnitude more bandwidth, and state-of-the-art mode-locked lasers have reduced timing jitter to thousands of attoseconds. Such features have encouraged processing signals without the use of electronics, or using photonics to assist electronics. All-optical signal processing has allowed the processing of telecommunication line rates up to 1.28 Tb/s and high-resolution analog-to-digital conversion in the tens of gigahertz. The major drawback of these optical systems is the high cost of the components. The application of all-optical processing techniques such as a time lens and chirped processing can greatly reduce the bandwidth and cost requirements of optical serial-to-parallel converters and push photonically assisted ADCs into the hundreds of gigahertz. In this dissertation, the building blocks of a high-speed photonically assisted ADC are demonstrated, each providing benefits to its own respective application. A serial-to-parallel converter using a continuously operating time lens as an optical Fourier processor is demonstrated to fully convert a 160-Gb/s optical time-division-multiplexed signal to 16 10-Gb/s channels with error-free operation. Using chirped processing, an optical sample-and-hold concept is demonstrated and analyzed as a resolution improvement to existing photonically assisted ADCs. Simulations indicate that applying a continuously operating time lens to a photonically assisted sampling system can increase the performance of photonically sampled systems by an order of magnitude while acquiring properties similar to an optical sample-and-hold system.

  18. Airfoil Shape Optimization based on Surrogate Model

    NASA Astrophysics Data System (ADS)

    Mukesh, R.; Lingadurai, K.; Selvakumar, U.

    2018-02-01

    Engineering design problems always require an enormous amount of real-time experiments and computational simulations in order to assess and ensure the design objectives subject to various constraints. In most cases, the computational resources and time required per simulation are large. In cases such as sensitivity analysis and design optimisation, where thousands or millions of simulations have to be carried out, this becomes prohibitively burdensome for designers. Nowadays approximation models, otherwise called surrogate models (SM), are widely employed to reduce the computational resources and time required to analyse various engineering systems. Various approaches such as Kriging, neural networks, polynomials, Gaussian processes etc. are used to construct the approximation models. The primary intention of this work is to employ the k-fold cross-validation approach to study and evaluate the influence of various theoretical variogram models on the accuracy of the surrogate model construction. Ordinary Kriging and design of experiments (DOE) approaches are used to construct the SMs by approximating panel and viscous solution algorithms, which are primarily used to solve the flow around airfoils and aircraft wings. The method of coupling the SMs with a suitable optimisation scheme to carry out an aerodynamic design optimisation process for airfoil shapes is also discussed.
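
    A compact sketch of this workflow, using scikit-learn's Gaussian process regressor as a stand-in for Ordinary Kriging and comparing two covariance models by 5-fold cross-validation. The data are synthetic and the kernels are assumptions, not the variogram models evaluated in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, RBF
from sklearn.model_selection import cross_val_score

# Hypothetical training data: design variables X (e.g., airfoil shape
# parameters) and an expensive response y (e.g., a drag coefficient).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 4))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=60)

# Compare covariance (variogram-like) models by k-fold cross-validation
for name, kernel in [("RBF", RBF()), ("Matern 3/2", Matern(nu=1.5))]:
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    scores = cross_val_score(gp, X, y, cv=5, scoring="r2")
    print(f"{name:10s}  mean R^2 = {scores.mean():.3f}")
```

    Once a covariance model is selected this way, the fitted surrogate can be handed to an optimiser in place of the expensive flow solver, which is the coupling step the abstract refers to.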

  19. Dynamic information processing states revealed through neurocognitive models of object semantics

    PubMed Central

    Clarke, Alex

    2015-01-01

    Recognising objects relies on highly dynamic, interactive brain networks to process multiple aspects of object information. To fully understand how different forms of information about objects are represented and processed in the brain requires a neurocognitive account of visual object recognition that combines a detailed cognitive model of semantic knowledge with a neurobiological model of visual object processing. Here we ask how specific cognitive factors are instantiated in our mental processes and how they dynamically evolve over time. We suggest that coarse semantic information, based on generic shared semantic knowledge, is rapidly extracted from visual inputs and is sufficient to drive rapid category decisions. Subsequent recurrent neural activity between the anterior temporal lobe and posterior fusiform supports the formation of object-specific semantic representations – a conjunctive process primarily driven by the perirhinal cortex. These object-specific representations require the integration of shared and distinguishing object properties and support the unique recognition of objects. We conclude that a valuable way of understanding the cognitive activity of the brain is through testing the relationship between specific cognitive measures and dynamic neural activity. This kind of approach allows us to move towards uncovering the information processing states of the brain and how they evolve over time. PMID:25745632

  20. High performance real-time flight simulation at NASA Langley

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1994-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be deterministic and be completed in as short a time as possible. This includes simulation mathematical model computations and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, personnel at NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to a standard input/output system to provide high-bandwidth, low-latency data acquisition and distribution. The Computer Automated Measurement and Control technology (IEEE standard 595) was extended to meet the performance requirements for real-time simulation. This technology extension increased the effective bandwidth by a factor of ten and increased the performance of modules necessary for simulator communications. This technology is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications of this technology are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC have completed the development of the use of supercomputers for simulation mathematical model computations to support real-time flight simulation. This includes the development of a real-time operating system and the development of specialized software and hardware for the CAMAC simulator network. This work, coupled with the use of an open systems software architecture, has advanced the state of the art in real-time flight simulation. The data acquisition technology innovation and experience with recent developments in this technology are described.

  1. Six Sigma process utilization in reducing door-to-balloon time at a single academic tertiary care center.

    PubMed

    Kelly, Elizabeth W; Kelly, Jonathan D; Hiestand, Brian; Wells-Kiser, Kathy; Starling, Stephanie; Hoekstra, James W

    2010-01-01

    Rapid reperfusion in patients with ST-elevation myocardial infarction (STEMI) is associated with lower mortality. Reduction in door-to-balloon (D2B) time for percutaneous coronary intervention requires multidisciplinary cooperation, process analysis, and quality improvement methodology. Six Sigma methodology was used to reduce D2B times in STEMI patients presenting to a tertiary care center. Specific steps in STEMI care were determined, time goals were established, and processes were changed to reduce each step's duration. Outcomes were tracked, and timely feedback was given to providers. After process analysis and implementation of improvements, mean D2B times decreased from 128 to 90 minutes. Improvement has been sustained; as of June 2010, the mean D2B was 56 minutes, with 100% of patients meeting the 90-minute window for the year. Six Sigma methodology and immediate provider feedback result in significant reductions in D2B times. The lessons learned may be extrapolated to other primary percutaneous coronary intervention centers. Copyright © 2010 Elsevier Inc. All rights reserved.

  2. 12 CFR 614.4200 - General requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM LOAN POLICIES AND OPERATIONS Loan Terms...-related business, a marketing or processing operation, a rural residence, or real estate used as an... titles I or II of the Act shall be provided to the borrower at the time of execution and at any time...

  3. Monitoring Satellite Data Ingest and Processing for the Atmosphere Science Investigator-led Processing Systems (SIPS)

    NASA Astrophysics Data System (ADS)

    Witt, J.; Gumley, L.; Braun, J.; Dutcher, S.; Flynn, B.

    2017-12-01

    The Atmosphere SIPS (Science Investigator-led Processing Systems) team at the Space Science and Engineering Center (SSEC), which is funded through a NASA contract, creates Level 2 cloud and aerosol products from the VIIRS instrument aboard the S-NPP satellite. In order to monitor the ingest and processing of files, we have developed an extensive monitoring system to observe every step in the process. The status grid is used for real-time monitoring and shows the current state of the system, including which files we have and whether or not we are meeting our latency requirements. Our snapshot tool displays the state of the system in the past: it shows which files were available at a given hour and is used for historical and backtracking purposes. In addition to these grid-like tools, we have created histograms and other statistical graphs for tracking processing and ingest metrics, such as total processing time, job queue time, and latency statistics.
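
    A minimal sketch of the latency bookkeeping such a monitor performs: for each granule, latency is the time from observation to product availability, summarized against a latency requirement. The granule IDs, timestamps, and six-hour requirement below are invented for illustration and are not the SIPS' actual values.

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical ingest log: (granule id, observation time, product-ready time)
log = [
    ("GRANULE_0106", datetime(2017, 7, 19, 1, 6),  datetime(2017, 7, 19, 3, 40)),
    ("GRANULE_0112", datetime(2017, 7, 19, 1, 12), datetime(2017, 7, 19, 4, 5)),
    ("GRANULE_0118", datetime(2017, 7, 19, 1, 18), datetime(2017, 7, 19, 9, 50)),
]

requirement = timedelta(hours=6)                      # assumed latency requirement
latencies = [ready - obs for _, obs, ready in log]
late = [gid for (gid, obs, ready), lat in zip(log, latencies) if lat > requirement]

print("median latency:", median(latencies))
print("granules missing the requirement:", late)
```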

  4. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
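
    The general shape of such an automated validation loop can be sketched as a bounded least-squares search for the parameter set that minimizes the model/test discrepancy. The "thermal model" below is a trivial placeholder and the temperatures are invented; a real application would wrap a full thermal solver and supply its sensitivities, as described above.

```python
import numpy as np
from scipy.optimize import least_squares

# Placeholder thermal model: predicted sensor temperatures as a function of
# dimensionless tuning parameters scaling nominal values (stand-in physics).
def thermal_model(params, nominal):
    return nominal * params

t_measured = np.array([110.2, 95.7, 102.3])   # K, made-up thermal-vacuum test data
nominal = np.array([100.0, 100.0, 100.0])     # K, made-up nominal predictions

def residuals(params):
    return thermal_model(params, nominal) - t_measured

# Search for the parameter set minimizing the discrepancy between model and
# test data, within engineering bounds on each parameter.
result = least_squares(residuals, x0=np.ones(3), bounds=(0.5, 1.5))
print("tuned parameters:", result.x, " final cost:", result.cost)
```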

  5. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  6. A web platform for integrated surface water - groundwater modeling and data management

    NASA Astrophysics Data System (ADS)

    Fatkhutdinov, Aybulat; Stefan, Catalin; Junghanns, Ralf

    2016-04-01

    Model-based decision support systems are considered to be reliable and time-efficient tools for resources management in various hydrology-related fields. However, searching for and acquiring the required data, preparing the data sets for simulations, and post-processing, visualizing and publishing the simulation results often require significantly more work and time than performing the modeling itself. The purpose of the developed software is to combine data storage facilities, data processing instruments and modeling tools in a single platform, which can potentially reduce the time required to perform simulations and hence to make decisions. The system is developed within the INOWAS (Innovative Web Based Decision Support System for Water Sustainability under a Changing Climate) project. The platform integrates spatially distributed catchment-scale rainfall-runoff, infiltration and groundwater flow models with data storage, processing and visualization tools. The concept is implemented as a web-GIS application and is built from free and open-source components, including the PostgreSQL database management system, the Python programming language for modeling purposes, MapServer for visualizing and publishing the data, OpenLayers for building the user interface, and others. The configuration of the system allows data input, storage, pre- and post-processing and visualization to be performed in a single uninterrupted workflow. In addition, realization of the decision support system as a web service makes it easy to retrieve and share data sets, as well as simulation results, over the internet, which gives significant advantages for collaborative work on projects and can significantly increase the usability of the decision support system.

  7. Acoustic Data Processing and Transient Signal Analysis for the Hybrid Wing Body 14- by 22-Foot Subsonic Wind Tunnel Test

    NASA Technical Reports Server (NTRS)

    Bahr, Christopher J.; Brooks, Thomas F.; Humphreys, William M.; Spalt, Taylor B.; Stead, Daniel J.

    2014-01-01

    An advanced vehicle concept, the HWB N2A-EXTE aircraft design, was tested in NASA Langley's 14- by 22-Foot Subsonic Wind Tunnel to study its acoustic characteristics for various propulsion system installation and airframe configurations. A significant upgrade to existing data processing systems was implemented, with a focus on portability and a reduction in turnaround time. These requirements were met by updating codes originally written for a cluster environment and transferring them to a local workstation while enabling GPU computing. Post-test, additional processing of the time series was required to remove transient hydrodynamic gusts from some of the microphone time series. A novel automated procedure was developed to analyze and reject contaminated blocks of data, under the assumption that the desired acoustic signal of interest was a band-limited stationary random process, and of lower variance than the hydrodynamic contamination. The procedure is shown to successfully identify and remove contaminated blocks of data and retain the desired acoustic signal. Additional corrections to the data, mainly background subtraction, shear layer refraction calculations, atmospheric attenuation and microphone directivity corrections, were all necessary for initial analysis and noise assessments. These were implemented for the post-processing of spectral data, and are shown to behave as expected.
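
    A simplified sketch of the block-rejection idea under the stated assumption (a stationary, band-limited acoustic signal of lower variance than the hydrodynamic gusts): blocks whose variance exceeds a robust threshold are discarded. The median-plus-MAD threshold is an assumption for illustration, not the paper's actual statistic.

```python
import numpy as np

def reject_contaminated_blocks(x, block_len, k=3.0):
    """Split a microphone time series into blocks and flag blocks whose
    variance is anomalously high, assuming the desired acoustic signal is
    stationary and of lower variance than hydrodynamic gusts."""
    n_blocks = len(x) // block_len
    blocks = x[:n_blocks * block_len].reshape(n_blocks, block_len)
    var = blocks.var(axis=1)

    # Robust threshold: median block variance plus k median absolute deviations
    med = np.median(var)
    mad = np.median(np.abs(var - med))
    keep = var <= med + k * mad
    return blocks[keep], keep   # retained data and a boolean keep mask
```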

  8. Distributed processing for features improvement in real-time portable medical devices.

    PubMed

    Mercado, Erwin John Saavedra

    2008-01-01

    Portable biomedical devices are being developed and incorporated into daily life. Nevertheless, their standalone capability is diminished by the lack of processing power required for tasks such as robustness to signal artifacts in EKG monitoring devices. This paper presents a multiprocessor architecture built from simple microcontrollers to provide increased processing performance, more efficient power consumption and lower cost.

  9. Quantum state conversion in opto-electro-mechanical systems via shortcut to adiabaticity

    NASA Astrophysics Data System (ADS)

    Zhou, Xiao; Liu, Bao-Jie; Shao, L.-B.; Zhang, Xin-Ding; Xue, Zheng-Yuan

    2017-09-01

    Adiabatic processes have found many important applications in modern physics, the distinct merit of which is that accurate control over process timing is not required. However, such processes are slow, which limits their application in quantum computation, due to the limited coherent times of typical quantum systems. Here, we propose a scheme to implement quantum state conversion in opto-electro-mechanical systems via a shortcut to adiabaticity, where the process can be greatly speeded up while precise timing control is still not necessary. In our scheme, by modifying only the coupling strength, we can achieve fast quantum state conversion with high fidelity, where the adiabatic condition does not need to be met. In addition, the population of the unwanted intermediate state can be further suppressed. Therefore, our protocol presents an important step towards practical state conversion between optical and microwave photons, and thus may find many important applications in hybrid quantum information processing.

  10. Investigation of Cepstrum Analysis for Seismic/Acoustic Signal Sensor Range Determination.

    DTIC Science & Technology

    1981-01-01

    distorted by transmission through a linear system. For example, the effect of multipath and reverberation may be modeled in terms of a signal that is ... called the short-time averaged cepstrum. To derive some analytical expressions for short-time average cepstrums we choose some functions of interest ... linear process applied to the time series or any equivalent time function. Repiod ... Period: the amount of time required for one cycle of a time series. Saphe

  11. Modelling health care processes for eliciting user requirements: a way to link a quality paradigm and clinical information system design.

    PubMed

    Staccini, P; Joubert, M; Quaranta, J F; Fieschi, D; Fieschi, M

    2000-01-01

    Hospital information systems have to support quality improvement objectives. The design issues of a health care information system can be classified into three categories: 1) time-oriented and event-labelled storage of patient data; 2) contextual support of decision-making; 3) capabilities for modular upgrading. The elicitation of the requirements has to meet users' needs in relation to both the quality (efficacy, safety) and the monitoring of all health care activities (traceability). Information analysts need methods to conceptualize clinical information systems that provide actors with individual benefits and guide behavioural changes. A methodology is proposed to elicit and structure users' requirements using a process-oriented analysis, and it is applied to the field of blood transfusion. An object-oriented data model of a process has been defined in order to identify its main components: activity, sub-process, resources, constraints, guidelines, parameters and indicators. Although some aspects of activity, such as "where", "what else", and "why", are poorly represented by the data model alone, this method of requirement elicitation fits the dynamics of data input for the process to be traced. A hierarchical representation of hospital activities has to be found for this approach to be generalised within the organisation, for the processes to be interrelated, and for their characteristics to be shared.
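
    A minimal sketch of the process components named above, expressed as plain data classes (activity, sub-process, resources, constraints, guidelines, parameters, indicators). The field choices and the blood-transfusion fragment are illustrative, not the authors' object model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Activity:
    name: str
    resources: List[str] = field(default_factory=list)     # who/what performs it
    constraints: List[str] = field(default_factory=list)   # e.g., regulatory rules
    guidelines: List[str] = field(default_factory=list)    # applicable protocols
    parameters: dict = field(default_factory=dict)         # data captured for traceability
    indicators: dict = field(default_factory=dict)         # quality/monitoring measures

@dataclass
class Process:
    name: str
    activities: List[Activity] = field(default_factory=list)
    sub_processes: List["Process"] = field(default_factory=list)

# Example fragment of a blood-transfusion process
issue = Activity("Issue blood unit",
                 resources=["blood bank technician"],
                 constraints=["ABO compatibility verified"],
                 indicators={"time_to_issue_min": None})
transfusion = Process("Blood transfusion", activities=[issue])
```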

  12. Annotation: What Electrical Brain Activity Tells Us about Brain Function that Other Techniques Cannot Tell Us--A Child Psychiatric Perspective

    ERIC Educational Resources Information Center

    Banaschewski, Tobias; Brandeis, Daniel

    2007-01-01

    Background: Monitoring brain processes in real time requires genuine subsecond resolution to follow the typical timing and frequency of neural events. Non-invasive recordings of electric (EEG/ERP) and magnetic (MEG) fields provide this time resolution. They directly measure neural activations associated with a wide variety of brain states and…

  13. Development of polymer nano composite patterns using fused deposition modeling for rapid investment casting process

    NASA Astrophysics Data System (ADS)

    Vivek, Tiwary; Arunkumar, P.; Deshpande, A. S.; Vinayak, Malik; Kulkarni, R. M.; Asif, Angadi

    2018-04-01

    Conventional investment casting is one of the oldest and most economical manufacturing techniques for producing intricate and complex part geometries. However, investment casting is considered economical only if the volume of production is large. Design iterations and design optimisations in this technique prove to be very costly due to the time and tooling cost of making dies for producing wax patterns. However, with the advent of additive manufacturing technology, plastic patterns show very good potential to replace the wax patterns. This approach can be very useful for low-volume production and lab requirements, since the cost and time required to incorporate changes in the design are very low. This paper discusses the steps involved in developing polymer nanocomposite filaments and checking their suitability for investment casting. The process parameters of the 3D printer are also optimized using the design of experiments (DOE) technique to obtain mechanically stronger plastic patterns. The study develops a framework for rapid investment casting for lab as well as industrial requirements.

  14. A generic template for automated bioanalytical ligand-binding assays using modular robotic scripts in support of discovery biotherapeutic programs.

    PubMed

    Duo, Jia; Dong, Huijin; DeSilva, Binodh; Zhang, Yan J

    2013-07-01

    Sample dilution and reagent pipetting are time-consuming steps in ligand-binding assays (LBAs). Traditional automation-assisted LBAs use assay-specific scripts that require labor-intensive script writing and user training. Five major script modules were developed on Tecan Freedom EVO liquid handling software to facilitate the automated sample preparation and LBA procedure: sample dilution, sample minimum required dilution, standard/QC minimum required dilution, standard/QC/sample addition, and reagent addition. The modular design of automation scripts allowed the users to assemble an automated assay with minimal script modification. The application of the template was demonstrated in three LBAs to support discovery biotherapeutic programs. The results demonstrated that the modular scripts provided the flexibility in adapting to various LBA formats and the significant time saving in script writing and scientist training. Data generated by the automated process were comparable to those by manual process while the bioanalytical productivity was significantly improved using the modular robotic scripts.
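
    A toy sketch of the modular idea: generic, reusable modules are assembled into an assay-specific run with minimal new code. The function names and print placeholders are hypothetical and do not represent the Tecan Freedom EVO scripting interface; a real deployment would translate these calls into liquid-handler worklists.

```python
# Hypothetical module library: each function stands for one generic script module.
def sample_dilution(samples, factor):
    print(f"Dilute {len(samples)} samples 1:{factor}")

def minimum_required_dilution(items, mrd):
    print(f"Apply MRD 1:{mrd} to {len(items)} wells")

def add_to_plate(items, plate):
    print(f"Transfer {len(items)} wells onto {plate}")

def add_reagent(reagent, plate):
    print(f"Add {reagent} to {plate}")

def run_assay(samples, standards, plate="assay plate 1"):
    """Assemble an LBA run from the generic modules in the required order."""
    sample_dilution(samples, factor=10)
    minimum_required_dilution(samples, mrd=20)
    minimum_required_dilution(standards, mrd=20)
    add_to_plate(standards + samples, plate)
    add_reagent("detection antibody", plate)

run_assay(samples=[f"S{i}" for i in range(40)],
          standards=[f"STD{i}" for i in range(8)])
```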

  15. Narrative writing: Effective ways and best practices

    PubMed Central

    Ledade, Samir D.; Jain, Shishir N.; Darji, Ankit A.; Gupta, Vinodkumar H.

    2017-01-01

    A narrative is a brief summary of specific events experienced by patients, during the course of a clinical trial. Narrative writing involves multiple activities such as generation of patient profiles, review of data sources, and identification of events for which narratives are required. A sponsor outsources narrative writing activities to leverage the expertise of service providers which in turn requires effective management of resources, cost, time, quality, and overall project management. Narratives are included as an appendix to the clinical study report and are submitted to the regulatory authorities as a part of dossier. Narratives aid in the evaluation of the safety profile of the investigational drug under study. To deliver high-quality narratives within the specified timeframe to the sponsor can be achieved by standardizing processes, increasing efficiency, optimizing working capacity, implementing automation, and reducing cost. This paper focuses on effective ways to design narrative writing process and suggested best practices, which enable timely delivery of high-quality narratives to fulfill the regulatory requirement. PMID:28447014

  16. Narrative writing: Effective ways and best practices.

    PubMed

    Ledade, Samir D; Jain, Shishir N; Darji, Ankit A; Gupta, Vinodkumar H

    2017-01-01

    A narrative is a brief summary of specific events experienced by patients, during the course of a clinical trial. Narrative writing involves multiple activities such as generation of patient profiles, review of data sources, and identification of events for which narratives are required. A sponsor outsources narrative writing activities to leverage the expertise of service providers which in turn requires effective management of resources, cost, time, quality, and overall project management. Narratives are included as an appendix to the clinical study report and are submitted to the regulatory authorities as a part of dossier. Narratives aid in the evaluation of the safety profile of the investigational drug under study. To deliver high-quality narratives within the specified timeframe to the sponsor can be achieved by standardizing processes, increasing efficiency, optimizing working capacity, implementing automation, and reducing cost. This paper focuses on effective ways to design narrative writing process and suggested best practices, which enable timely delivery of high-quality narratives to fulfill the regulatory requirement.

  17. Method for Reducing the Refresh Rate of Fiber Bragg Grating Sensors

    NASA Technical Reports Server (NTRS)

    Parker, Allen R., Jr. (Inventor)

    2014-01-01

    The invention provides a method of obtaining the FBG data in final form (transforming the raw data into frequency and location data) by taking the raw FBG sensor data and dividing the data into a plurality of segments over time. By transforming the raw data into a plurality of smaller segments, processing time is significantly decreased. Also, by defining the segments over time, only one processing step is required. By employing this method, the refresh rate of FBG sensor systems can be improved from about 1 scan per second to over 20 scans per second.
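
    A hedged sketch of the segment-then-transform idea: the raw record is divided into fixed-length time segments and each segment is transformed in a single step, so results refresh once per segment rather than once per full record. The FFT-based peak extraction below is illustrative only and is not the patented processing chain.

```python
import numpy as np

def process_in_segments(raw, seg_len, fs):
    """Divide a raw sensor record into time segments and transform each one
    independently, so frequency content is available once per segment."""
    n_seg = len(raw) // seg_len
    segments = raw[:n_seg * seg_len].reshape(n_seg, seg_len)
    spectra = np.abs(np.fft.rfft(segments, axis=1))        # one transform per segment
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    peaks = freqs[spectra[:, 1:].argmax(axis=1) + 1]       # dominant component, skipping DC
    return peaks

# Example: a 1 kHz record processed in 100-sample segments; shorter segments
# mean smaller transforms and a higher refresh rate for the reported values.
raw = np.sin(2 * np.pi * 120 * np.arange(2000) / 1000.0)
print(process_in_segments(raw, seg_len=100, fs=1000.0))
```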

  18. Automated matching software for clinical trials eligibility: measuring efficiency and flexibility.

    PubMed

    Penberthy, Lynne; Brown, Richard; Puma, Federico; Dahman, Bassam

    2010-05-01

    Clinical trials (CT) serve as the media that translate clinical research into standards of care. Low or slow recruitment leads to delays in the delivery of new therapies to the public. Determination of eligibility in all patients is one of the most important factors in assuring unbiased results from the clinical trials process, and represents the first step in addressing the issue of under-representation and equal access to clinical trials. This is a pilot project evaluating the efficiency, flexibility, and generalizability of an automated clinical trials eligibility screening tool across 5 different clinical trials and clinical trial scenarios. There was a substantial total savings during the study period in research staff time spent evaluating patients for eligibility, ranging from 165 h to 1329 h. There was a marked enhancement in efficiency with the automated system for all but one study in the pilot. The ratio of mean staff time required per eligible patient identified ranged from 0.8 to 19.4 for the manual versus the automated process. The results of this study demonstrate that automation offers an opportunity to reduce the burden of the manual processes required for CT eligibility screening and to assure that all patients have an opportunity to be evaluated for participation in clinical trials as appropriate. The automated process greatly reduces the time spent on eligibility screening compared with the traditional manual process by effectively transferring the load of the eligibility assessment process to the computer. Copyright (c) 2010 Elsevier Inc. All rights reserved.
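
    A minimal sketch of rule-based eligibility screening: each criterion is a predicate over coded patient data, and a patient is eligible only if every predicate passes. The criteria and field names are hypothetical and are not those of the pilot's tool.

```python
# Hypothetical, simplified eligibility rules for one protocol; a production
# system would evaluate coded data pulled from clinical source systems.
criteria = {
    "age":         lambda p: 18 <= p["age"] <= 75,
    "diagnosis":   lambda p: p["icd10"].startswith("C50"),   # e.g., breast cancer
    "ecog":        lambda p: p["ecog"] <= 2,
    "no_prior_tx": lambda p: not p["prior_chemo"],
}

def screen(patient):
    """Return eligibility and the list of criteria the patient fails."""
    failed = [name for name, rule in criteria.items() if not rule(patient)]
    return {"eligible": not failed, "failed_criteria": failed}

print(screen({"age": 54, "icd10": "C50.9", "ecog": 1, "prior_chemo": False}))
```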

  19. Analytical Modeling and Performance Prediction of Remanufactured Gearbox Components

    NASA Astrophysics Data System (ADS)

    Pulikollu, Raja V.; Bolander, Nathan; Vijayakar, Sandeep; Spies, Matthew D.

    Gearbox components operate in extreme environments, often leading to premature removal or overhaul. Though worn or damaged, these components can still function if the appropriate remanufacturing processes are deployed. Doing so saves a significant amount of resources (time, materials, energy, manpower) otherwise required to produce a replacement part. Unfortunately, current design and analysis approaches require extensive testing and evaluation to validate the effectiveness and safety of a component that has been used in the field and then processed outside of original OEM specification. Testing every possible combination of component, level of potential damage, and repair processing option would be expensive and time consuming, thus prohibiting broad deployment of remanufacturing processes across industry. However, such evaluation and validation can occur through Integrated Computational Materials Engineering (ICME) modeling and simulation. Sentient developed a microstructure-based component life prediction (CLP) tool to quantify and assist the remanufacturing of gearbox components. This was achieved by modeling the design-manufacturing-microstructure-property relationship. The CLP tool assists in the remanufacturing of high-value, high-demand rotorcraft, automotive and wind turbine gears and bearings. This paper summarizes the development of the CLP models and the validation efforts comparing simulation results with rotorcraft spiral bevel gear physical test data. CLP analyzes gear components and systems for safety, longevity, reliability and cost by predicting (1) new gearbox component performance and the optimal time-to-remanufacture, (2) qualification of used gearbox components for the remanufacturing process, and (3) the performance of remanufactured components.

  20. The SpaceCube Family of Hybrid On-Board Science Data Processors: An Update

    NASA Astrophysics Data System (ADS)

    Flatley, T.

    2012-12-01

    SpaceCube is an FPGA-based on-board hybrid science data processing system developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. The SpaceCube design strategy incorporates commercial rad-tolerant FPGA technology and couples it with an upset-mitigation software architecture to provide "order of magnitude" improvements in computing power over traditional rad-hard flight systems. Many of the missions proposed in the Earth Science Decadal Survey (ESDS) will require "next generation" on-board processing capabilities to meet their specified mission goals. Advanced laser altimeter, radar, lidar and hyper-spectral instruments are proposed for at least ten of the ESDS missions, and all of these instrument systems will require advanced on-board processing capabilities to facilitate the timely conversion of Earth Science data into Earth Science information. Both an "order of magnitude" increase in processing power and the ability to "reconfigure on the fly" are required to implement algorithms that detect and react to events, to produce data products on-board for applications such as direct downlink, quick look, and "first responder" real-time awareness, to enable "sensor web" multi-platform collaboration, and to perform on-board "lossless" data reduction by migrating typical ground-based processing functions on-board, thus reducing on-board storage and downlink requirements. This presentation will highlight a number of SpaceCube technology developments to date and describe current and future efforts, including the collaboration with the U.S. Department of Defense - Space Test Program (DoD/STP) on the STP-H4 ISS experiment pallet (launch June 2013) that will demonstrate SpaceCube 2.0 technology on-orbit.

  1. Can CH-53K 3D Technical Data Support the Provisioning Process

    DTIC Science & Technology

    2017-05-01

    contain the minimum required data characteristics and elements (Appendix A) and (2) are fully annotated, they can be converted to 3D PDF files that...were annotated as part of the original model development/creation process, the time to perform an annotation would be about 50 percent (about 1.5...files, after the fact. If the models had been annotated at the time they were created, we estimate that the cost to implement a 3D PDF solution for

  2. Discrete Film Cooling in a Rocket with Curved Walls

    DTIC Science & Technology

    2009-12-01

    insight to be gained by observing the process of effusion cooling in its most basic elements. In rocket applications, the first desired condition is...ηspan. Convergence was determined by doubling the number of cells, mostly in the region near the hole, until less than a 1% change was observed in the...method was required to determine the absolute start time for the transient process. To find the time error, start again with (TS − Ti)/(Taw − Ti) = 1 − exp

  3. Recent advances in phase shifted time averaging and stroboscopic interferometry

    NASA Astrophysics Data System (ADS)

    Styk, Adam; Józwik, Michał

    2016-08-01

    Classical Time Averaging and Stroboscopic Interferometry are widely used for MEMS/MOEMS dynamic behavior investigations. Unfortunately, both methods require extensive measurement and data processing strategies in order to evaluate the maximum vibration amplitude of the object under a given load. In this paper, modified data processing strategies for both techniques are introduced. These modifications allow fast and reliable calculation of the sought value without additional complication of the measurement systems. Both approaches are discussed and experimentally verified.

  4. Range and mission scheduling automation using combined AI and operations research techniques

    NASA Technical Reports Server (NTRS)

    Arbabi, Mansur; Pfeifer, Michael

    1987-01-01

    Ground-based systems for Satellite Command, Control, and Communications (C3) operations require a method for planning, scheduling, and assigning range resources such as antenna systems scattered around the world, communications systems, and personnel. The method must accommodate user priorities, last-minute changes, maintenance requirements, and exceptions from nominal requirements. Computer programs are described that solve 24-hour scheduling problems using heuristic algorithms and a real-time interactive scheduling process.
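
    The abstract does not detail the heuristics used; as a rough, hypothetical illustration of priority-driven range scheduling, the Python sketch below greedily serves higher-priority requests first and rejects conflicting ones for later interactive rescheduling. The `Request` class, the resource names, and the priority rule are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """A user request for a range resource (antenna, comms link, operator)."""
    name: str
    resource: str   # which antenna/comms system is needed
    start: int      # requested start time (minutes into the 24 h day)
    duration: int   # minutes required
    priority: int   # higher value = more important

def schedule_requests(requests, resources):
    """Greedy heuristic: serve higher-priority requests first and assign
    each one to its resource if the requested time slot is still free."""
    busy = {r: [] for r in resources}            # occupied intervals per resource
    accepted, rejected = [], []
    for req in sorted(requests, key=lambda r: -r.priority):
        interval = (req.start, req.start + req.duration)
        conflicts = any(not (interval[1] <= s or interval[0] >= e)
                        for s, e in busy.get(req.resource, []))
        if req.resource in busy and not conflicts:
            busy[req.resource].append(interval)
            accepted.append(req)
        else:
            rejected.append(req)                 # candidate for interactive rescheduling
    return accepted, rejected

# Example: two requests competing for the same antenna
reqs = [Request("sat-A pass", "antenna-1", 600, 30, priority=2),
        Request("sat-B pass", "antenna-1", 610, 30, priority=5)]
print(schedule_requests(reqs, ["antenna-1"]))
```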

  5. Using 2H and 18O in assessing evaporation and water residence time of lakes in EPA’s National Lakes Assessment.

    EPA Science Inventory

    Stable isotopes of water and organic material can be very useful in monitoring programs because stable isotopes integrate information about ecological processes and record this information. Most ecological processes of interest for water quality (i.e. denitrification) require si...

  6. 9 CFR 590.420 - Inspection.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... requirements of § 590.100(a) in the preparation of any articles for human food shall be deemed to be a plant... human food at the time it leaves the plant. Upon any such inspection, if any product or portion thereof... regulations, of the processing of egg products in each official plant processing egg products for commerce...

  7. 9 CFR 590.420 - Inspection.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... requirements of § 590.100(a) in the preparation of any articles for human food shall be deemed to be a plant... human food at the time it leaves the plant. Upon any such inspection, if any product or portion thereof... regulations, of the processing of egg products in each official plant processing egg products for commerce...

  8. Regulation of cold-induced sweetening in potatoes and markers for fast-track new variety development

    USDA-ARS?s Scientific Manuscript database

    Potato breeding is a tedious, time-consuming process. With the growing requirements of the potato processing industry for new potato varieties, there is a need for effective tools to speed up new variety development. The purpose of this study was to understand the enzymatic regulation of cold-induce...

  9. Machine-Aided Indexing. Technical Progress Report for Period January 1967-June 1969.

    ERIC Educational Resources Information Center

    Klingbiel, Paul H.

    Working toward the goal of an automatic indexing system that is truly competitive with human indexing in cost, time, and comprehensiveness, the Machine-Aided Indexing (MAI) process was developed at the Defense Documentation Center (DDC). This indexing process uses linguistic techniques but does not require complete syntactic analysis of sentences…

  10. Analysis of Alternatives (AoA) Process Improvement Study

    DTIC Science & Technology

    2016-12-01

    stakeholders, and mapped the process activities and durations. We tasked the SAG members with providing the information required on case studies and...are the expected time saves/cost/risk of any changes? (3) Utilization of case studies for both “good” and “challenged” AoAs to identify lessons...16 4 CASE STUDIES

  11. Grading Homework to Emphasize Problem-Solving Process Skills

    ERIC Educational Resources Information Center

    Harper, Kathleen A.

    2012-01-01

    This article describes a grading approach that encourages students to employ particular problem-solving skills. Some strengths of this method, called "process-based grading," are that it is easy to implement, requires minimal time to grade, and can be used in conjunction with either an online homework delivery system or paper-based homework.

  12. Promoting the School Learning Processes: Principals as Learning Boundary Spanners

    ERIC Educational Resources Information Center

    Benoliel, Pascale; Schechter, Chen

    2017-01-01

    Purpose: The ongoing challenge to sustain school learning and improvement requires schools to explore new ways, and at the same time exploit previous experience. The purpose of this paper is to attempt to expand the knowledge of mechanisms that can facilitate school learning processes by proposing boundary activities and learning mechanisms in…

  13. Design for Review - Applying Lessons Learned to Improve the FPGA Review Process

    NASA Technical Reports Server (NTRS)

    Figueiredo, Marco A.; Li, Kenneth E.

    2014-01-01

    Flight Field Programmable Gate Array (FPGA) designs are required to be independently reviewed. This paper provides recommendations to flight FPGA designers on properly preparing their designs for review, in order to facilitate the review process and reduce the impact of review time on the overall project schedule.

  14. The Sensitivity of Memory Consolidation and Reconsolidation to Inhibitors of Protein Synthesis and Kinases: Computational Analysis

    ERIC Educational Resources Information Center

    Zhang, Yili; Smolen, Paul; Baxter, Douglas A.; Byrne, John H.

    2010-01-01

    Memory consolidation and reconsolidation require kinase activation and protein synthesis. Blocking either process during or shortly after training or recall disrupts memory stabilization, which suggests the existence of a critical time window during which these processes are necessary. Using a computational model of kinase synthesis and…

  15. Engineering study for the functional design of a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Miller, J. S.; Vandever, W. H.; Stanten, S. F.; Avakian, A. E.; Kosmala, A. L.

    1972-01-01

    The results of a study to generate a functional system design of a multiprocessing computer system capable of satisfying the computational requirements of a space station are presented. These data management system requirements were specified to include: (1) real-time control, (2) data processing and storage, (3) data retrieval, and (4) remote terminal servicing.

  16. 77 FR 39125 - Defense Acquisition Regulations System; Defense Federal Acquisition Regulation Supplement; Only...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-29

    ... resolicitation period has been added to address the application to small business set-asides. The final rule... transactional process time in all cases where only a single offer is received in response to a competitive... requirement, if only one offer is received, has also been added for small business set-asides. c. Requirements...

  17. Examining the Role of Concentration, Vocabulary and Self-Concept in Listening and Reading Comprehension

    ERIC Educational Resources Information Center

    Wolfgramm, Christine; Suter, Nicole; Göksel, Eva

    2016-01-01

    Listening is regarded as a key requirement for successful communication and is fundamentally linked to other language skills. Unlike reading, it requires both hearing and processing information in real-time. We therefore propose that the ability to concentrate is a strong predictor of listening comprehension. Using structural equation modeling,…

  18. Processes involved in the development of latent fingerprints using the cyanoacrylate fuming method.

    PubMed

    Lewis, L A; Smithwick, R W; Devault, G L; Bolinger, B; Lewis, S A

    2001-03-01

    Chemical processes involved in the development of latent fingerprints using the cyanoacrylate fuming method have been studied. Two major types of latent prints have been investigated: clean and oily prints. Scanning electron microscopy (SEM) has been used as a tool for determining the morphology of the polymer developed separately on clean and oily prints after cyanoacrylate fuming. A correlation between the chemical composition of an aged latent fingerprint prior to development and the quality of the developed fingerprint has been observed in the morphology. The moisture in the print prior to fuming has been found to be more important for developing a useful latent print than the moisture in the air during fuming. In addition, the time required to develop a high-quality latent print has been found to be within 2 min; the cyanoacrylate polymerization process is extremely rapid. When heat is used to accelerate the fuming process, a period of typically 2 min is required to develop the print. The optimum development time depends upon the concentration of cyanoacrylate vapors within the enclosure.

  19. Deciphering the Genetic Programme Triggering Timely and Spatially-Regulated Chitin Deposition

    PubMed Central

    Rotstein, Bárbara; Casali, Andreu; Llimargas, Marta

    2015-01-01

    Organ and tissue formation requires a finely tuned temporal and spatial regulation of differentiation programmes. This is necessary to balance sufficient plasticity to undergo morphogenesis with the acquisition of the mature traits needed for physiological activity. Here we addressed this issue by analysing the deposition of the chitinous extracellular matrix of Drosophila, an essential element of the cuticle (skin) and respiratory system (tracheae) in this insect. Chitin deposition requires the activity of the chitin synthase Krotzkopf verkehrt (Kkv). Our data demonstrate that this process equally requires the activity of two other genes, namely expansion (exp) and rebuf (reb). We found that Exp and Reb have interchangeable functions, and in their absence no chitin is produced, in spite of the presence of Kkv. Conversely, when Kkv and Exp/Reb are co-expressed in the ectoderm, they promote chitin deposition, even in tissues normally devoid of this polysaccharide. Therefore, our results indicate that both functions are not only required but also sufficient to trigger chitin accumulation. We show that this mechanism is highly regulated in time and space, ensuring chitin accumulation in the correct tissues and developmental stages. Accordingly, we observed that unregulated chitin deposition disturbs morphogenesis, thus highlighting the need for tight regulation of this process. In summary, here we identify the genetic programme that triggers the timely and spatially regulated deposition of chitin and thus provide new insights into the extracellular matrix maturation required for physiological activity. PMID:25617778

  20. High-speed single-shot optical focusing through dynamic scattering media with full-phase wavefront shaping.

    PubMed

    Hemphill, Ashton S; Shen, Yuecheng; Liu, Yan; Wang, Lihong V

    2017-11-27

    In biological applications, optical focusing is limited by the diffusion of light, which prevents focusing at depths greater than ∼1 mm in soft tissue. Wavefront shaping extends the depth by compensating for phase distortions induced by scattering and thus allows for focusing light through biological tissue beyond the optical diffusion limit by using constructive interference. However, due to physiological motion, light scattering in tissue is deterministic only within a brief speckle correlation time. In in vivo tissue, this speckle correlation time is on the order of milliseconds, and so the wavefront must be optimized within this brief period. The speed of digital wavefront shaping has typically been limited by the relatively long time required to measure and display the optimal phase pattern. This limitation stems from the low speeds of cameras, data transfer and processing, and spatial light modulators. While binary-phase modulation requiring only two images for the phase measurement has recently been reported, most techniques require at least three frames for the full-phase measurement. Here, we present a full-phase digital optical phase conjugation method based on off-axis holography for single-shot optical focusing through scattering media. By using off-axis holography in conjunction with graphics processing unit based processing, we take advantage of the single-shot full-phase measurement while using parallel computation to quickly reconstruct the phase map. With this system, we can focus light through scattering media with a system latency of approximately 9 ms, on the order of the in vivo speckle correlation time.

  1. High-speed single-shot optical focusing through dynamic scattering media with full-phase wavefront shaping

    NASA Astrophysics Data System (ADS)

    Hemphill, Ashton S.; Shen, Yuecheng; Liu, Yan; Wang, Lihong V.

    2017-11-01

    In biological applications, optical focusing is limited by the diffusion of light, which prevents focusing at depths greater than ˜1 mm in soft tissue. Wavefront shaping extends the depth by compensating for phase distortions induced by scattering and thus allows for focusing light through biological tissue beyond the optical diffusion limit by using constructive interference. However, due to physiological motion, light scattering in tissue is deterministic only within a brief speckle correlation time. In in vivo tissue, this speckle correlation time is on the order of milliseconds, and so the wavefront must be optimized within this brief period. The speed of digital wavefront shaping has typically been limited by the relatively long time required to measure and display the optimal phase pattern. This limitation stems from the low speeds of cameras, data transfer and processing, and spatial light modulators. While binary-phase modulation requiring only two images for the phase measurement has recently been reported, most techniques require at least three frames for the full-phase measurement. Here, we present a full-phase digital optical phase conjugation method based on off-axis holography for single-shot optical focusing through scattering media. By using off-axis holography in conjunction with graphics processing unit based processing, we take advantage of the single-shot full-phase measurement while using parallel computation to quickly reconstruct the phase map. With this system, we can focus light through scattering media with a system latency of approximately 9 ms, on the order of the in vivo speckle correlation time.
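
    Both records above hinge on recovering a full phase map from a single off-axis hologram by isolating one interference side lobe in the Fourier domain. The NumPy sketch below illustrates that reconstruction step under simplified assumptions; the carrier offset, window size, and synthetic hologram are placeholders, and the actual system described performs this on a GPU to reach the ~9 ms latency.

```python
import numpy as np

def reconstruct_phase(hologram, carrier=(64, 64), window=32):
    """Recover a phase map from a single off-axis hologram by isolating
    one interference side lobe (+1 order) in the Fourier domain."""
    F = np.fft.fftshift(np.fft.fft2(hologram))
    cy, cx = np.array(F.shape) // 2
    ky, kx = cy + carrier[0], cx + carrier[1]        # centre of the +1 order
    crop = F[ky - window:ky + window, kx - window:kx + window]
    # Re-centre the cropped lobe and transform back to obtain the complex field
    field = np.fft.ifft2(np.fft.ifftshift(crop))
    return np.angle(field)                            # phase map in radians

# Synthetic example: smooth phase object plus a tilted reference beam
y, x = np.mgrid[0:256, 0:256]
obj_phase = np.pi * np.sin(2 * np.pi * x / 256)
hologram = 1 + np.cos(2 * np.pi * (64 * y + 64 * x) / 256 - obj_phase)
phase = reconstruct_phase(hologram)
print(phase.shape)    # (64, 64) downsampled phase map
```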

  2. Application of process monitoring to anomaly detection in nuclear material processing systems via system-centric event interpretation of data from multiple sensors of varying reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Humberto E.; Simpson, Michael F.; Lin, Wen-Chiao

    In this paper, we apply an advanced safeguards approach and associated methods for process monitoring to a hypothetical nuclear material processing system. The assessment regarding the state of the processing facility is conducted at a system-centric level formulated in a hybrid framework. This utilizes an architecture for integrating both time- and event-driven data and analysis for decision making. While the time-driven layers of the proposed architecture encompass more traditional process monitoring methods based on time-series data and analysis, the event-driven layers encompass operation monitoring methods based on discrete-event data and analysis. By integrating process- and operation-related information and methodologies within a unified framework, the task of anomaly detection is greatly improved, because decision making can benefit not only from known time-series relationships among measured signals but also from known event-sequence relationships among generated events. This available knowledge at both the time-series and discrete-event layers can then be effectively used to synthesize observation solutions that optimally balance sensor and data processing requirements. The application of the proposed approach is then implemented on an illustrative monitored system based on pyroprocessing, and results are discussed.
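
    The hybrid architecture itself is not given in code in the abstract; the following sketch is only a conceptual illustration, with invented signals and event rules, of how a time-driven check on continuous data can be fused with an event-driven check on discrete operation sequences.

```python
import numpy as np

def time_driven_alarm(signal, limit):
    """Time-driven layer: flag samples of a continuous process signal
    that exceed an expected operating limit."""
    return np.flatnonzero(np.abs(signal) > limit)

def event_driven_alarm(events, allowed_transitions):
    """Event-driven layer: flag consecutive event pairs that violate the
    known operating sequence (e.g. 'transfer' must follow 'weigh')."""
    return [(a, b) for a, b in zip(events, events[1:])
            if (a, b) not in allowed_transitions]

# Fuse both layers: declare an anomaly if either layer raises a flag
signal = np.array([0.1, 0.2, 5.0, 0.1])              # e.g. a mass-balance residual
events = ["weigh", "transfer", "weigh", "dissolve"]   # invented operation log
allowed = {("weigh", "transfer"), ("transfer", "weigh"), ("weigh", "dissolve")}
anomalous = (len(time_driven_alarm(signal, limit=1.0)) > 0
             or len(event_driven_alarm(events, allowed)) > 0)
print("anomaly detected:", anomalous)
```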

  3. Factors influencing timely initiation and completion of gestational diabetes mellitus screening and diagnosis - a qualitative study from Tamil Nadu, India.

    PubMed

    Nielsen, Karoline Kragelund; Rheinländer, Thilde; Kapur, Anil; Damm, Peter; Seshiah, Veerasamy; Bygbjerg, Ib C

    2017-08-01

    In 2007, universal screening for gestational diabetes mellitus (GDM) was introduced in Tamil Nadu, India. To identify factors hindering or facilitating timely initiation and completion of the GDM screening and diagnosis process, our study investigated how pregnant women in rural and urban Tamil Nadu access and navigate different GDM related health services. The study was carried out in two settings: an urban private diabetes centre and a rural government primary health centre. Observations of the process of screening and diagnosis at the health centres as well as semi-structured interviews with 30 pregnant women and nine health care providers were conducted. Data was analysed using qualitative content analysis. There were significant differences in the process of GDM screening and diagnosis in the urban and rural settings. Several factors hindering or facilitating timely initiation and completion of the process were identified. Timely attendance required awareness, motivation and opportunity to attend. Women had to attend the health centre at the right time and sometimes at the right gestational age to initiate the test, wait to complete the test and obtain the test report in time to initiate further action. All these steps and requirements were influenced by factors within and outside the health system such as getting right information from health care providers, clinic timings, characteristics of the test, availability of transport, social network and support, and social norms and cultural practices. Minimising and aligning complex stepwise processes of prenatal care and GDM screening delivery and attention to the factors influencing it are important for further improving and expanding GDM screening and related services, not only in Tamil Nadu but in other similar low and middle income settings. This study stresses the importance of guidelines and diagnostic criteria which are simple and feasible on the ground.

  4. Foodservice yield and fabrication times for beef as influenced by purchasing options and merchandising styles.

    PubMed

    Weatherly, B H; Griffin, D B; Johnson, H K; Walter, J P; De La Zerda, M J; Tipton, N C; Savell, J W

    2001-12-01

    Selected beef subprimals were obtained from fabrication lines of three foodservice purveyors to assist in the development of a software support program for the beef foodservice industry. Subprimals were fabricated into bone-in or boneless foodservice ready-to-cook portion-sized cuts and associated components by professional meat cutters. Each subprimal was cut to generate mean foodservice cutting yields and labor requirements, which were calculated from observed weights (kilograms) and processing times (seconds). Once fabrication was completed, data were analyzed to determine means and standard errors of percentage yields and processing times for each subprimal. Subprimals cut to only one end point were evaluated for mean foodservice yields and processing times, but no comparisons were made within subprimal. However, those traditionally cut into various end points were additionally compared by cutting style. Subprimals cut by a single cutting style included rib, roast-ready; ribeye roll, lip-on, bone-in; brisket, deckle-off, boneless; top (inside) round; and bottom sirloin butt, flap, boneless. Subprimals cut into multiple end points or styles included ribeye, lip-on; top sirloin, cap; tenderloin butt, defatted; shortloin, short-cut; strip loin, boneless; top sirloin butt, boneless; and tenderloin, full, side muscle on, defatted. Mean yields of portion cuts, and mean fabrication times required to manufacture these cuts differed (P < 0.05) by cutting specification of the final product. In general, as the target portion size of fabricated steaks decreased, the mean number of steaks derived from any given subprimal cut increased, causing total foodservice yield to decrease and total processing time to increase. Therefore, an inverse relationship tended to exist between processing times and foodservice yields. With a method of accurately evaluating various beef purchase options, such as traditional commodity subprimals, closely trimmed subprimals, and pre-cut portion steaks in terms of yield and labor cost, foodservice operators will be better equipped to decide what option is more viable for their operation.

  5. Serial and parallel attentive visual searches: evidence from cumulative distribution functions of response times.

    PubMed

    Sung, Kyongje

    2008-12-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.
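
    The diagnostic in this study rests on comparing cumulative distribution functions of response times across set sizes rather than means alone. The sketch below, using invented gamma-distributed response times, shows one way to compute and summarize empirical CDFs per set size; it is illustrative only and not the author's analysis.

```python
import numpy as np

def empirical_cdf(rts):
    """Return sorted response times and their cumulative proportions."""
    rts = np.sort(np.asarray(rts))
    return rts, np.arange(1, len(rts) + 1) / len(rts)

# Invented response-time samples (ms) for two display set sizes
rng = np.random.default_rng(0)
rt_small = rng.gamma(shape=8, scale=60, size=500)     # e.g. set size 4
rt_large = rng.gamma(shape=8, scale=75, size=500)     # e.g. set size 12

# Compare distributional summaries across set sizes (not just means)
for label, rts in [("set size 4", rt_small), ("set size 12", rt_large)]:
    x, p = empirical_cdf(rts)
    print(label, "median ~", round(float(np.median(x))), "ms,",
          "10th pct ~", round(float(np.percentile(x, 10))), "ms")
```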

  6. Interpreting Quantifier Scope Ambiguity: Evidence of Heuristic First, Algorithmic Second Processing

    PubMed Central

    Dwivedi, Veena D.

    2013-01-01

    The present work suggests that sentence processing requires both heuristic and algorithmic processing streams, where the heuristic processing strategy precedes the algorithmic phase. This conclusion is based on three self-paced reading experiments in which the processing of two-sentence discourses was investigated, where context sentences exhibited quantifier scope ambiguity. Experiment 1 demonstrates that such sentences are processed in a shallow manner. Experiment 2 uses the same stimuli as Experiment 1 but adds questions to ensure deeper processing. Results indicate that reading times are consistent with a lexical-pragmatic interpretation of number associated with context sentences, but responses to questions are consistent with the algorithmic computation of quantifier scope. Experiment 3 shows the same pattern of results as Experiment 2, despite using stimuli with different lexical-pragmatic biases. These effects suggest that language processing can be superficial, and that deeper processing, which is sensitive to structure, only occurs if required. Implications for recent studies of quantifier scope ambiguity are discussed. PMID:24278439

  7. Economy with the time delay of information flow—The stock market case

    NASA Astrophysics Data System (ADS)

    Miśkiewicz, Janusz

    2012-02-01

    Any decision process requires information about the past and present state of the system, but in an economy, acquiring data and processing it is an expensive and time-consuming task. Therefore, the state of the system is often measured over some legal interval, analysed after the end of well-defined time periods, and the results announced much later, before any strategic decision is envisaged. The various roles of time delay therefore have to be carefully examined. Here, a model of a stock market coupled with an economy is investigated to emphasise the role of the time delay span on the information flow. It is shown that the larger the time delay, the more important the collective behaviour of agents, since one observes time oscillations in the absolute log-return autocorrelations.
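
    As a small illustration of the quantity mentioned at the end of the abstract, the sketch below computes the autocorrelation of absolute log-returns at a few lags for an invented geometric random-walk price series; the delayed-information model itself is not reproduced here.

```python
import numpy as np

def abs_logreturn_autocorr(prices, lag):
    """Autocorrelation of absolute log-returns at a given positive lag."""
    r = np.abs(np.diff(np.log(prices)))
    r = r - r.mean()
    return float(np.dot(r[:-lag], r[lag:]) / np.dot(r, r))

# Invented price series for illustration only
rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000)))
print([round(abs_logreturn_autocorr(prices, lag), 3) for lag in (1, 5, 20)])
```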

  8. Traceability of Software Safety Requirements in Legacy Safety Critical Systems

    NASA Technical Reports Server (NTRS)

    Hill, Janice L.

    2007-01-01

    How can traceability of software safety requirements be created for legacy safety-critical systems? Requirements in safety standards are most often imposed during contract negotiations. On the other hand, there are instances where safety standards are levied on legacy safety-critical systems, some of which may be considered for reuse in new applications. Safety standards often specify that software development documentation include process-oriented and technical safety requirements, and also require that system and software safety analyses be performed to support the implementation of technical safety requirements. So what can be done if the requisite documents for establishing and maintaining safety requirements traceability are not available?

  9. Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Nikbay, Melike; Heeg, Jennifer

    2017-01-01

    This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel of the NATO Science and Technology Organization, Task Group AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and to represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework and implemented reduced-order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
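
    The teams' actual frameworks (an Euler solver with PCE/POD reduced-order models, and linear solution processes) are not reproduced here; as a generic illustration of sampling-based uncertainty propagation, the sketch below pushes assumed input distributions through an invented stand-in response function and summarizes the output statistics. The function, distributions, and parameters are all placeholders.

```python
import numpy as np

def flutter_speed(stiffness, damping, dyn_pressure):
    """Stand-in for an aeroelastic solve: an invented closed-form response
    used only to illustrate sampling-based uncertainty propagation."""
    return np.sqrt(stiffness) * (1 + damping) / (1 + 0.05 * dyn_pressure)

rng = np.random.default_rng(42)
n = 10_000
# Draw uncertain structural and aerodynamic inputs from assumed distributions
stiffness = rng.normal(1.0, 0.05, n)       # normalized modal stiffness
damping = rng.uniform(0.01, 0.03, n)       # structural damping ratio
dyn_pressure = rng.normal(1.0, 0.10, n)    # normalized dynamic pressure

samples = flutter_speed(stiffness, damping, dyn_pressure)
print(f"mean = {samples.mean():.3f}, std = {samples.std():.3f}, "
      f"5th pct = {np.percentile(samples, 5):.3f}")
```

    A surrogate such as a polynomial chaos expansion would replace the expensive solver in exactly this sampling loop, which is what makes the propagation affordable when each true function evaluation is costly.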

  10. Current State of the Art Historic Building Information Modelling

    NASA Astrophysics Data System (ADS)

    Dore, C.; Murphy, M.

    2017-08-01

    In an extensive review of existing literature a number of observations were made in relation to the current approaches for recording and modelling existing buildings and environments: Data collection and pre-processing techniques are becoming increasingly automated to allow for near real-time data capture and fast processing of this data for later modelling applications. Current BIM software is almost completely focused on new buildings and has very limited tools and pre-defined libraries for modelling existing and historic buildings. The development of reusable parametric library objects for existing and historic buildings supports modelling with high levels of detail while decreasing the modelling time. Mapping these parametric objects to survey data, however, is still a time-consuming task that requires further research. Promising developments have been made towards automatic object recognition and feature extraction from point clouds for as-built BIM. However, results are currently limited to simple and planar features. Further work is required for automatic accurate and reliable reconstruction of complex geometries from point cloud data. Procedural modelling can provide an automated solution for generating 3D geometries but lacks the detail and accuracy required for most as-built applications in AEC and heritage fields.

  11. Additive Manufacturing of Tooling for Refrigeration Cabinet Foaming Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Post, Brian K; Nuttall, David; Cukier, Michael

    The primary objective of this project was to leverage the Big Area Additive Manufacturing (BAAM) process and materials into a long-term, quick-change tooling concept to drastically reduce product lead and development timelines and costs. Current refrigeration foam molds are complicated to manufacture, involving casting several aluminum parts in an approximate shape, machining components of the molds, and post-fitting and shimming of the parts in an articulated fixture. The total process timeline can take over 6 months. The foaming process is slower than required for production; therefore, multiple fixtures, 10 to 27, are required per refrigerator model. Molds are particular to a specific product configuration, making mixed-model assembly challenging for sequencing, mold changes, or auto-changeover features. The initial goal was to create a tool leveraging the ORNL materials and additive process to build a tool in 4 to 6 weeks or less. A secondary goal was to create common fixture cores and provide lightweight fixture sections that could be revised in a very short time to increase equipment flexibility, reduce lead times, lower the barriers to first production trials, and reduce tooling costs.

  12. Reinventing The Design Process: Teams and Models

    NASA Technical Reports Server (NTRS)

    Wall, Stephen D.

    1999-01-01

    The future of space mission designing will be dramatically different from the past. Formerly, performance-driven paradigms emphasized data return with cost and schedule being secondary issues. Now and in the future, costs are capped and schedules fixed-these two variables must be treated as independent in the design process. Accordingly, JPL has redesigned its design process. At the conceptual level, design times have been reduced by properly defining the required design depth, improving the linkages between tools, and managing team dynamics. In implementation-phase design, system requirements will be held in crosscutting models, linked to subsystem design tools through a central database that captures the design and supplies needed configuration management and control. Mission goals will then be captured in timelining software that drives the models, testing their capability to execute the goals. Metrics are used to measure and control both processes and to ensure that design parameters converge through the design process within schedule constraints. This methodology manages margins controlled by acceptable risk levels. Thus, teams can evolve risk tolerance (and cost) as they would any engineering parameter. This new approach allows more design freedom for a longer time, which tends to encourage revolutionary and unexpected improvements in design.

  13. Multiple objects tracking with HOGs matching in circular windows

    NASA Astrophysics Data System (ADS)

    Miramontes-Jaramillo, Daniel; Kober, Vitaly; Díaz-Ramírez, Víctor H.

    2014-09-01

    In recent years, with the development of new technologies like smart TVs, Kinect, Google Glass, and Oculus Rift, tracking applications have become very important. When tracking uses a matching algorithm, a good prediction algorithm is required to reduce the search area for each tracked object and hence the processing time. In this work, we analyze the performance of different prediction-and-matching tracking algorithms for real-time tracking of multiple objects. The matching algorithm utilizes histograms of oriented gradients (HOGs). It carries out matching in circular windows and possesses rotation invariance and tolerance to viewpoint and scale changes. The proposed algorithm is implemented on a personal computer with a GPU, and its performance is analyzed in terms of processing time in real scenarios. Such an implementation takes advantage of current technologies and helps to process video sequences in real time while tracking several objects at the same time.
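
    As a rough illustration of the matching step described above, the NumPy sketch below builds a histogram of oriented gradients restricted to a circular window and compares two windows with a correlation score. The bin count, window radius, and toy frames are arbitrary choices, not the authors' parameters.

```python
import numpy as np

def circular_hog(image, center, radius, bins=9):
    """Histogram of oriented gradients restricted to a circular window."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # unsigned orientation
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    hist, _ = np.histogram(ang[mask], bins=bins, range=(0, np.pi),
                           weights=mag[mask])
    return hist / (hist.sum() + 1e-9)                     # normalized histogram

def match_score(h1, h2):
    """Correlation between two normalized HOGs (1.0 = identical)."""
    a, b = h1 - h1.mean(), h2 - h2.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Toy example: compare the same circular window in two consecutive frames
rng = np.random.default_rng(3)
frame1 = rng.random((120, 160))
frame2 = frame1 + 0.01 * rng.random((120, 160))           # slightly changed frame
h1 = circular_hog(frame1, center=(60, 80), radius=20)
h2 = circular_hog(frame2, center=(60, 80), radius=20)
print("match score:", round(match_score(h1, h2), 3))
```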

  14. Real-time biscuit tile image segmentation method based on edge detection.

    PubMed

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter

    2018-05-01

    In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal-change detection and contour tracing, with the main goal of separating tile pixels from the background in images captured on the production line. Usually, human operators visually inspect and classify produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements; an important step in this process is real-time segmentation of tile pixels. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. It outperforms 2D threshold-based methods, 1D edge detection methods, and contour-based methods, and is in use on the biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
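
    The exact BTS implementation is not given in the abstract; the OpenCV sketch below shows a generic edge-detection-plus-contour-tracing separation of tile pixels from background, which conveys the idea but is not the authors' GPU method. The Canny thresholds and the assumption that the tile is the largest contour are placeholders, as is the image path.

```python
import cv2
import numpy as np

def segment_tile(image_bgr):
    """Rough tile/background separation: detect edges, trace the outer
    contour, and fill it to obtain a binary tile-pixel mask."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(gray)
    if contours:
        largest = max(contours, key=cv2.contourArea)   # assume tile = largest blob
        cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
    return mask

# Usage (path is a placeholder):
# img = cv2.imread("tile_frame.png")
# tile_mask = segment_tile(img)
```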

  15. Efficient Time-Domain Imaging Processing for One-Stationary Bistatic Forward-Looking SAR Including Motion Errors

    PubMed Central

    Xie, Hongtu; Shi, Shaoying; Xiao, Hui; Xie, Chao; Wang, Feng; Fang, Qunle

    2016-01-01

    With the rapid development of one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge amount of remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) for OS-BFSAR imaging processing that accounts for motion errors is presented. This method can not only precisely handle large spatial variances, serious range-azimuth coupling, and motion errors, but can also greatly improve imaging efficiency compared with the direct time-domain algorithm (DTDA). In addition, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements considering motion errors for the polar grids to offer a near-optimum tradeoff between imaging precision and efficiency. First, the OS-BFSAR imaging geometry is built and the DTDA for OS-BFSAR imaging is provided. Second, the polar grids of the subimages are defined and the subaperture imaging in the ETDA is derived; the sampling requirements for the polar grids are derived from the point of view of bandwidth. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of efficiency improvement. PMID:27845757
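
    The paper's contribution is the efficient polar-grid subaperture algorithm; the simplified NumPy sketch below shows only the direct time-domain (backprojection-style) baseline it accelerates, for a stationary transmitter and a moving receiver, with invented geometry and signals and without phase compensation or motion-error handling.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def direct_time_domain_image(echoes, fast_time, tx_pos, rx_positions, grid_xy):
    """Simplified bistatic backprojection: for every image pixel, sum the
    echo samples at the transmitter-pixel-receiver delay over all pulses."""
    image = np.zeros(len(grid_xy), dtype=complex)
    for echo, rx in zip(echoes, rx_positions):            # one pulse per receiver position
        d_tx = np.linalg.norm(grid_xy - tx_pos, axis=1)   # stationary transmitter
        d_rx = np.linalg.norm(grid_xy - rx, axis=1)       # moving receiver
        delay = (d_tx + d_rx) / C
        # interpolate each pixel's delay into the recorded fast-time samples
        image += np.interp(delay, fast_time, echo.real) + \
                 1j * np.interp(delay, fast_time, echo.imag)
    return image

# Tiny synthetic setup: 3 pulses and a 2x2-pixel ground grid (metres)
fast_time = np.linspace(6e-6, 8e-6, 512)
echoes = [np.exp(2j * np.pi * 1e6 * fast_time) for _ in range(3)]
tx_pos = np.array([0.0, 0.0])
rx_positions = [np.array([100.0 * k, 50.0]) for k in range(3)]
grid_xy = np.array([[1000.0, 0.0], [1000.0, 10.0], [1010.0, 0.0], [1010.0, 10.0]])
print(np.abs(direct_time_domain_image(echoes, fast_time, tx_pos, rx_positions, grid_xy)))
```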

  16. Scheduling revisited workstations in integrated-circuit fabrication

    NASA Technical Reports Server (NTRS)

    Kline, Paul J.

    1992-01-01

    The cost of building new semiconductor wafer fabrication factories has grown rapidly, and a state-of-the-art fab may cost 250 million dollars or more. Obtaining an acceptable return on this investment requires high productivity from the fabrication facilities. This paper describes the Photo Dispatcher system, which was developed to make machine-loading recommendations for a set of key fab machines. Dispatching policies that generally perform well in job shops (e.g., Shortest Remaining Processing Time) perform poorly for workstations such as photolithography, which are visited several times by the same lot of silicon wafers. The Photo Dispatcher evaluates the history of workloads throughout the fab and identifies bottleneck areas. The scheduler then assigns priorities to lots depending on where they are headed after photolithography. These priorities are designed to avoid starving bottleneck workstations and to give preference to lots that are headed to areas where they can be processed with minimal waiting. Other factors considered by the scheduler to establish priorities are the nearness of a lot to the end of its process flow and the time that the lot has already been waiting in queue. Simulations that model the equipment and products in one of Texas Instruments' wafer fabs show the Photo Dispatcher can produce a 10 percent improvement in the time required to fabricate integrated circuits.
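
    The dispatching rules are described only qualitatively; the toy Python function below combines the factors named in the abstract (starvation of the downstream bottleneck, nearness to the end of the process flow, and queue waiting time) into a single priority score. The weights and inputs are invented for illustration and are not the Photo Dispatcher's actual rules.

```python
def lot_priority(downstream_utilization, steps_remaining, total_steps,
                 hours_waiting, w_bottleneck=2.0, w_finish=1.0, w_wait=0.5):
    """Toy dispatching score: feed starving bottlenecks, favour nearly
    finished lots, and avoid letting any lot wait too long.
    Higher score = process sooner."""
    starvation = max(0.0, 1.0 - downstream_utilization)   # idle bottleneck ahead
    nearness = 1.0 - steps_remaining / total_steps        # close to end of flow
    return (w_bottleneck * starvation + w_finish * nearness
            + w_wait * hours_waiting)

# Two lots waiting at a photolithography workstation
lot_a = lot_priority(downstream_utilization=0.4, steps_remaining=5,
                     total_steps=40, hours_waiting=2)
lot_b = lot_priority(downstream_utilization=0.95, steps_remaining=30,
                     total_steps=40, hours_waiting=6)
print(f"lot A: {lot_a:.2f}, lot B: {lot_b:.2f}")
```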

  17. Centrifugal contactor operations for UREX process flowsheet. An update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereira, Candido; Vandegrift, George F.

    2014-08-01

    The uranium extraction (UREX) process separates uranium, technetium, and a fraction of the iodine from the other components of irradiated fuel in nitric acid solution. In May 2012, the time, material, and footprint requirements for treatment of 260 L batches of a solution containing 130 g-U/L were evaluated for two commercial annular centrifugal contactors from CINC Industries. These calculated values were based on the expected volume and concentration of fuel arising from treatment of a single target solution vessel (TSV). The general conclusions of that report were that a CINC V-2 contactor would occupy a footprint of 3.2 m² (0.25 m x 15 m) if each stage required twice the nominal footprint of an individual stage, and that approximately 1,131 minutes, or nearly 19 hours, would be required to process all of the feed solution. A CINC V-5 would require approximately 9.9 m² (0.4 m x 25 m) of floor space but only 182 minutes, or about 3 hours, to process the spent target solution. Subsequent comparison with the Modular Caustic Side Solvent Extraction Unit (MCU) at the Savannah River Site (SRS) in October 2013 suggested that a more compact arrangement is feasible, and the linear dimension for the CINC V-5 may be reduced to about 8 m; a comparable reduction for the CINC V-2 yields a length of 5 m. That report also described an intermediate-scale (10 cm) contactor design developed by Argonne in the early 1980s that would better align with the SHINE operations as they stood in May 2012. In this report, we revisit the previous evaluation of contactor operations after discussions with CINC Industries and analysis of the SHINE process flow diagrams for the cleanup of the TSV, which were not available at the time of the first assessment.

  18. Some Findings Concerning Requirements in Agile Methodologies

    NASA Astrophysics Data System (ADS)

    Rodríguez, Pilar; Yagüe, Agustín; Alarcón, Pedro P.; Garbajosa, Juan

    Agile methods have appeared as an attractive alternative to conventional methodologies. These methods try to reduce time to market and, indirectly, the cost of the product through flexible development and deep customer involvement. The processes related to requirements have been extensively studied in the literature, in most cases in the frame of conventional methods. However, conclusions drawn for conventional methodologies are not necessarily valid for Agile; on some issues, conventional and Agile processes are radically different. As recent surveys report, inadequate project requirements are one of the most problematic issues in agile approaches, and better understanding of this is needed. This paper describes some findings concerning requirements activities in a project developed under an agile methodology. The project intended to evolve an existing product and, therefore, some background information was available. The major difficulties encountered were related to non-functional needs and management of requirements dependencies.

  19. [Japanese learners' processing time for reading English relative clauses analyzed in relation to their English listening proficiency].

    PubMed

    Oyama, Yoshinori

    2011-06-01

    The present study examined Japanese university students' processing time for English subject and object relative clauses in relation to their English listening proficiency. In Analysis 1, the relation between English listening proficiency and reading span test scores was analyzed. The results showed that the high and low listening comprehension groups' reading span test scores did not differ. Analysis 2 investigated English listening proficiency and processing time for sentences with subject and object relative clauses. The results showed that reading the relative clause ending and the main verb section of a sentence with an object relative clause (such as "attacked" and "admitted" in the sentence "The reporter that the senator attacked admitted the error") takes less time for learners with high English listening scores than for learners with low English listening scores. In Analysis 3, English listening proficiency and comprehension accuracy for sentences with subject and object relative clauses were examined. The results showed no significant difference in comprehension accuracy between the high and low listening comprehension groups. These results indicate that processing time for English relative clauses is related to the cognitive processes involved in listening comprehension, which requires immediate processing of syntactically complex audio information.

  20. Single-step affinity purification of enzyme biotherapeutics: a platform methodology for accelerated process development.

    PubMed

    Brower, Kevin P; Ryakala, Venkat K; Bird, Ryan; Godawat, Rahul; Riske, Frank J; Konstantinov, Konstantin; Warikoo, Veena; Gamble, Jean

    2014-01-01

    Downstream sample purification for quality attribute analysis is a significant bottleneck in process development for non-antibody biologics. Multi-step chromatography process train purifications are typically required prior to many critical analytical tests. This prerequisite leads to limited throughput, long lead times to obtain purified product, and significant resource requirements. In this work, immunoaffinity purification technology has been leveraged to achieve single-step affinity purification of two different enzyme biotherapeutics (Fabrazyme® [agalsidase beta] and Enzyme 2) with polyclonal and monoclonal antibodies, respectively, as ligands. Target molecules were rapidly isolated from cell culture harvest in sufficient purity to enable analysis of critical quality attributes (CQAs). Most importantly, this is the first study that demonstrates the application of predictive analytics techniques to predict critical quality attributes of a commercial biologic. The data obtained using the affinity columns were used to generate appropriate models to predict quality attributes that would be obtained after traditional multi-step purification trains. These models empower process development decision-making with drug substance-equivalent product quality information without generation of actual drug substance. Optimization was performed to ensure maximum target recovery and minimal target protein degradation. The methodologies developed for Fabrazyme were successfully reapplied for Enzyme 2, indicating platform opportunities. The impact of the technology is significant, including reductions in time and personnel requirements, rapid product purification, and substantially increased throughput. Applications are discussed, including upstream and downstream process development support to achieve the principles of Quality by Design (QbD) as well as integration with bioprocesses as a process analytical technology (PAT). © 2014 American Institute of Chemical Engineers.
