Sample records for ensures processive runs

  1. Strategies for Maximizing Successful Drug Substance Technology Transfer Using Engineering, Shake-Down, and Wet Test Runs.

    PubMed

    Abraham, Sushil; Bain, David; Bowers, John; Larivee, Victor; Leira, Francisco; Xie, Jasmina

    2015-01-01

    The technology transfer of biological products is a complex process requiring control of multiple unit operations and parameters to ensure product quality and process performance. To achieve product commercialization, the technology transfer sending unit must successfully transfer knowledge about both the product and the process to the receiving unit. A key strategy for maximizing successful scale-up and transfer efforts is the effective use of engineering and shake-down runs to confirm operational performance and product quality prior to embarking on good manufacturing practice runs such as process performance qualification runs. We discuss the key factors to consider when deciding whether to perform shake-down or engineering runs. We also present industry benchmarking results of how engineering runs are used in drug substance technology transfers, alongside the main themes and best practices that have emerged. Our goal is to provide companies with a framework for ensuring "right first time" technology transfers with effective deployment of resources within increasingly aggressive timeline constraints. © PDA, Inc. 2015.

  2. Running Records: Authentic Instruction in Early Childhood Education

    ERIC Educational Resources Information Center

    Shea, Mary

    2012-01-01

    The most effective way to understand what a child knows about the reading process is to take a running record. In "Running Records", Mary Shea demonstrates how teachers can use this powerful tool to design lessons that decrease reading difficulties, build on strengths, and stimulate motivation, ensuring that children develop self-sustaining…

  3. Implementation of an adaptive controller for the startup and steady-state running of a biomethanation process operated in the CSTR mode.

    PubMed

    Renard, P; Van Breusegem, V; Nguyen, M T; Naveau, H; Nyns, E J

    1991-10-20

    An adaptive control algorithm has been implemented on a biomethanation process to maintain propionate concentration, a stable variable, at a given low value, by steering the dilution rate. It was thereby expected to ensure the stability of the process during the startup and during steady-state running with an acceptable performance. The methane pilot reactor was operated in the completely mixed, once-through mode and computer-controlled during 161 days. The results yielded the real-life validation of the adaptive control algorithm, and documented the stability and acceptable performance expected.
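
    As a purely illustrative companion to this abstract, the sketch below shows the general shape of such a loop: the measured propionate concentration is compared against a low setpoint and the dilution rate is steered accordingly. It is a minimal proportional-adaptive sketch, not the authors' algorithm; the gain-adaptation rule and all parameter names and values are assumptions.

```python
# Minimal sketch (not the authors' algorithm): steer the dilution rate so that
# the measured propionate concentration tracks a low setpoint. Gains, limits,
# and the adaptation rule are illustrative assumptions only.

def adaptive_dilution_controller(setpoint, d_init=0.05, gain=0.02,
                                 adapt_rate=0.001, d_min=0.0, d_max=0.5):
    """Return a closure that maps a propionate measurement to a dilution rate."""
    state = {"d": d_init, "k": gain}

    def update(propionate_measured):
        error = propionate_measured - setpoint   # positive error => reactor overloaded
        state["k"] += adapt_rate * abs(error)    # crude gain adaptation (assumption)
        state["d"] -= state["k"] * error         # reduce feeding when propionate rises
        state["d"] = min(max(state["d"], d_min), d_max)
        return state["d"]

    return update

controller = adaptive_dilution_controller(setpoint=0.5)   # g/L, illustrative value
for measurement in [0.4, 0.6, 0.9, 0.7, 0.5]:
    print(round(controller(measurement), 4))
```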

  4. Towards Compensation Correctness in Interactive Systems

    NASA Astrophysics Data System (ADS)

    Vaz, Cátia; Ferreira, Carla

    One fundamental idea of service-oriented computing is that applications should be developed by composing already available services. Due to the long running nature of service interactions, a main challenge in service composition is ensuring correctness of failure recovery. In this paper, we use a process calculus suitable for modelling long running transactions with a recovery mechanism based on compensations. Within this setting, we discuss and formally state correctness criteria for compensable processes compositions, assuming that each process is correct with respect to failure recovery. Under our theory, we formally interpret self-healing compositions, that can detect and recover from failures, as correct compositions of compensable processes.
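
    The compensation idea can be pictured with a small saga-style sketch (an interpretation for this summary, not the paper's process calculus): each completed step installs a compensating action, and a failure triggers the installed compensations in reverse order so the composition ends in a consistent state.

```python
# Saga-style compensation sketch: run steps in order; on failure, execute the
# compensations of every completed step in reverse order, then re-raise.

def run_compensable(steps):
    """steps: list of (do, compensate) pairs of callables."""
    installed = []
    try:
        for do, compensate in steps:
            do()
            installed.append(compensate)
    except Exception:
        for compensate in reversed(installed):
            compensate()
        raise

def fail_shipping():
    raise RuntimeError("shipping failed")

try:
    run_compensable([
        (lambda: print("reserve stock"), lambda: print("release stock")),
        (lambda: print("charge card"),   lambda: print("refund card")),
        (fail_shipping,                  lambda: print("cancel shipment")),
    ])
except RuntimeError as err:
    print("composition failed and was compensated:", err)
```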

  5. 21 CFR 820.80 - Receiving, in-process, and finished device acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Receiving, in-process, and finished device acceptance. 820.80 Section 820.80 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... acceptance to ensure that each production run, lot, or batch of finished devices meets acceptance criteria...

  6. 21 CFR 820.80 - Receiving, in-process, and finished device acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Receiving, in-process, and finished device acceptance. 820.80 Section 820.80 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... acceptance to ensure that each production run, lot, or batch of finished devices meets acceptance criteria...

  7. Robust iterative learning control for multi-phase batch processes: an average dwell-time method with 2D convergence indexes

    NASA Astrophysics Data System (ADS)

    Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong

    2018-01-01

    In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme combining iterative learning control with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method integrated with the related switching conditions to give sufficient conditions ensuring stable running of the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control that ensures the steady-state tracking error converges rapidly. The application to an injection molding process displays the effectiveness and superiority of the proposed strategy.
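
    For readers unfamiliar with iterative learning control, the sketch below shows a generic first-order (P-type) batch-to-batch update, u_{k+1}(t) = u_k(t) + L e_k(t+1). It is not the paper's hybrid 2D-FM design; the plant model and learning gain are illustrative assumptions.

```python
import numpy as np

# Generic P-type iterative learning control sketch (not the paper's hybrid
# 2D-FM/LMI design). The first-order plant and the learning gain L are assumed.

def simulate_batch(u, a=0.8, b=0.5):
    """Toy plant y(t+1) = a*y(t) + b*u(t), y(0) = 0 (assumed model)."""
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = a * y[t] + b * u[t]
    return y

T = 20
y_ref = np.ones(T + 1)          # desired trajectory
u = np.zeros(T)                 # input profile, refined batch after batch
L = 1.0                         # learning gain (assumption)

for k in range(30):             # iterate over batches
    y = simulate_batch(u)
    e = y_ref - y
    u = u + L * e[1:]           # error at t+1 corrects the input applied at t

print("final max tracking error:",
      float(np.max(np.abs(y_ref - simulate_batch(u))[1:])))
```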

  8. Auditing of chromatographic data.

    PubMed

    Mabie, J T

    1998-01-01

    During a data audit, it is important to ensure that there is clear documentation and an audit trail. The Quality Assurance Unit should review all areas, including the laboratory, during the conduct of the sample analyses. The analytical methodology that is developed should be documented prior to sample analyses. This is an important document for the auditor, as it is the instrumental piece used by the laboratory personnel to maintain integrity throughout the process. It is expected that this document will give insight into the sample analysis, run controls, run sequencing, instrument parameters, and acceptance criteria for the samples. The sample analysis and all supporting documentation should be audited in conjunction with this written analytical method and any supporting Standard Operating Procedures to ensure the quality and integrity of the data.

  9. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.

    PubMed

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.

  10. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    PubMed Central

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946
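
    The dry-run idea generalizes beyond NEST: a single process executes every simulation step as if it were one rank of a large parallel job, with the communication step replaced by a stub, so memory and runtime can be profiled without an HPC allocation. The sketch below illustrates that pattern with assumed toy dynamics; it is not NEST code.

```python
# Generic "dry-run" sketch (not NEST's implementation): one process behaves as
# virtual process `vp` out of `num_procs`, but the exchange step is a stub.

class DryRunComm:
    """Stand-in for the MPI exchange: returns locally produced spikes unchanged."""
    def all_to_all(self, local_spikes):
        return local_spikes

def simulate(num_neurons, num_procs, vp, steps, comm):
    # round-robin distribution of neurons to virtual processes (assumption)
    local_neurons = [n for n in range(num_neurons) if n % num_procs == vp]
    for step in range(steps):
        local_spikes = [n for n in local_neurons if (n + step) % 10 == 0]  # toy dynamics
        delivered = comm.all_to_all(local_spikes)   # in a real run: MPI all-to-all
        # ... update local neurons with `delivered` spikes ...
    return len(local_neurons)

owned = simulate(num_neurons=100_000, num_procs=1024, vp=0, steps=100, comm=DryRunComm())
print("neurons owned by this virtual process:", owned)
```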

  11. Improvements of the ALICE HLT data transport framework for LHC Run 2

    NASA Astrophysics Data System (ADS)

    Rohr, David; Krzwicki, Mikolaj; Engel, Heiko; Lehrbach, Johannes; Lindenstruth, Volker; ALICE Collaboration

    2017-10-01

    The ALICE HLT uses a data transport framework based on the publisher-subscriber message principle, which transparently handles the communication between processing components over the network and between processing components on the same node via shared memory with a zero copy approach. We present an analysis of the performance in terms of maximum achievable data rates and event rates as well as processing capabilities during Run 1 and Run 2. Based on this analysis, we present new optimizations we have developed for ALICE in Run 2. These include support for asynchronous transport via Zero-MQ which enables loops in the reconstruction chain graph and which is used to ship QA histograms to DQM. We have added asynchronous processing capabilities in order to support long-running tasks besides the event-synchronous reconstruction tasks in normal HLT operation. These asynchronous components run in an isolated process such that the HLT as a whole is resilient even to fatal errors in these asynchronous components. In this way, we can ensure that new developments cannot break data taking. On top of that, we have tuned the processing chain to cope with the higher event and data rates expected from the new TPC readout electronics (RCU2) and we have improved the configuration procedure and the startup time in order to increase the time where ALICE can take physics data. We analyze the maximum achievable data processing rates taking into account processing capabilities of CPUs and GPUs, buffer sizes, network bandwidth, the incoming links from the detectors, and the outgoing links to data acquisition.
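
    The isolation strategy described here can be illustrated with a generic sketch (not the ALICE framework itself): a long-running task is hosted in a separate OS process connected by queues, so a fatal error in the asynchronous component cannot take down the event-synchronous chain.

```python
# Sketch of process isolation for an asynchronous component: a crash in the
# worker kills only the worker process, not the main processing chain.
import multiprocessing as mp

def asynchronous_task(inbox, outbox):
    """Long-running task; a fatal error here only terminates this process."""
    while True:
        item = inbox.get()
        if item is None:
            break
        outbox.put(("qa-histogram-entry", item * item))

if __name__ == "__main__":
    inbox, outbox = mp.Queue(), mp.Queue()
    worker = mp.Process(target=asynchronous_task, args=(inbox, outbox))
    worker.start()

    for event in range(5):      # the synchronous chain keeps publishing regardless
        inbox.put(event)
    inbox.put(None)             # shutdown signal

    results = [outbox.get() for _ in range(5)]
    worker.join(timeout=5)
    print("worker exit code:", worker.exitcode)
    print(results)
```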

  12. 78 FR 47217 - Proposed Supervisory Guidance on Implementing Dodd-Frank Act Company-Run Stress Tests for Banking...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-05

    ..., that are designed to ensure that its stress testing processes are effective in meeting the requirements... specific methodological practices. Consistent with this approach, this guidance sets general supervisory... use any specific methodological practices for their stress tests. Companies may use various practices...

  13. Test Operations Procedure (TOP) 06-2-301 Wind Testing

    DTIC Science & Technology

    2017-06-14

    critical to ensure that the test item is exposed to the required wind speeds. This may be an iterative process as the fan blade pitch, fan speed...fan speed is the variable that is adjusted to reach the required velocities. Calibration runs with a range of fan speeds are performed and a

  14. The CMS Tier0 goes cloud and grid for LHC Run 2

    DOE PAGES

    Hufnagel, Dirk

    2015-12-23

    In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. Furthermore, this contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.

  15. The CMS Tier0 goes Cloud and Grid for LHC Run 2

    NASA Astrophysics Data System (ADS)

    Hufnagel, Dirk

    2015-12-01

    In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. This contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.

  16. A Strategic Plan of Academic Management System as Preparation for EAC Accreditation Visit--From UKM Perspective

    ERIC Educational Resources Information Center

    Ab-Rahman, Mohammad Syuhaimi; Yusoff, Abdul Rahman Mohd; Abdul, Nasrul Amir; Hipni, Afiq

    2015-01-01

    Development of a robust platform is important to ensure that the engineering accreditation process can run smoothly, completely and the most important is to fulfill the criteria requirements. In case of Malaysia, the preparation for EAC (Engineering Accreditation Committee) assessment required a good strategic plan of academic management system…

  17. Physician recruitment success: how to acquire top physician talent.

    PubMed

    Rosman, Judy

    2011-01-01

    This article provides step-by-step instructions on how to complete the strategic planning needed to ensure success in physician recruitment efforts, outlines how to build a successful recruitment team, and provides helpful advice to avoid common recruiting mistakes that can sabotage the recruitment efforts of even the best practices. This article discusses the role of the in-house hospital recruiter in the recruitment process, how to evaluate independent search firms, how to make use of the physicians in your group to ensure success during a site visit, and how to ensure that your new hire will be able to successfully develop a practice. The article also discusses how to find and use benchmarking data to ensure that your compensation package is competitive, and provides advice on how to help your new physician hit the ground running.

  18. Virus elimination during the purification of monoclonal antibodies by column chromatography and additional steps.

    PubMed

    Roberts, Peter L

    2014-01-01

    The theoretical potential for virus transmission by monoclonal antibody based therapeutic products has led to the inclusion of appropriate virus reduction steps. In this study, virus elimination by the chromatographic steps used during the purification process for two (IgG-1 & -3) monoclonal antibodies (MAbs) has been investigated. Both the Protein G (>7 log) and ion-exchange (5 log) chromatography steps were very effective for eliminating both enveloped and non-enveloped viruses over the life-time of the chromatographic gel. However, the contribution made by the final gel filtration step was more limited, i.e., 3 log. Because these chromatographic columns were recycled between uses, the effectiveness of the column sanitization procedures (guanidinium chloride for Protein G or NaOH for ion-exchange) was tested. By evaluating standard column runs immediately after each virus-spiked run, it was possible to directly confirm that there was no cross contamination with virus between column runs (guanidinium chloride or NaOH). To further ensure the virus safety of the product, two specific virus elimination steps have also been included in the process. A solvent/detergent step based on 1% Triton X-100 rapidly inactivated a range of enveloped viruses, achieving >6 log inactivation within 1 min of a 60 min treatment time. Virus removal by the virus filtration step was also confirmed to be effective for those viruses of about 50 nm or greater. In conclusion, the combination of these multiple steps ensures a high margin of virus safety for this purification process. © 2014 American Institute of Chemical Engineers.

  19. ECO fill: automated fill modification to support late-stage design changes

    NASA Astrophysics Data System (ADS)

    Davis, Greg; Wilson, Jeff; Yu, J. J.; Chiu, Anderson; Chuang, Yao-Jen; Yang, Ricky

    2014-03-01

    One of the most critical factors in achieving a positive return for a design is ensuring the design not only meets performance specifications, but also produces sufficient yield to meet the market demand. The goal of design for manufacturability (DFM) technology is to enable designers to address manufacturing requirements during the design process. While new cell-based, DP-aware, and net-aware fill technologies have emerged to provide the designer with automated fill engines that support these new fill requirements, design changes that arrive late in the tapeout process (as engineering change orders, or ECOs) can have a disproportionate effect on tapeout schedules, due to the complexity of replacing fill. If not handled effectively, the impacts on file size, run time, and timing closure can significantly extend the tapeout process. In this paper, the authors examine changes to design flow methodology, supported by new fill technology, that enable efficient, fast, and accurate adjustments to metal fill late in the design process. We present an ECO fill methodology coupled with the support of advanced fill tools that can quickly locate the portion of the design affected by the change, remove and replace only the fill in that area, while maintaining the fill hierarchy. This new fill approach effectively reduces run time, contains fill file size, minimizes timing impact, and minimizes mask costs due to ECO-driven fill changes, all of which are critical factors to ensuring time-to-market schedules are maintained.

  20. Multi-canister overpack project -- verification and validation, MCNP 4A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldmann, L.H.

    This supporting document contains the software verification and validation (V and V) package used for Phase 2 design of the Spent Nuclear Fuel Multi-Canister Overpack. V and V packages for both ANSYS and MCNP are included. Description of Verification Run(s): This software requires that it be compiled specifically for the machine it is to be used on. Therefore to facilitate ease in the verification process the software automatically runs 25 sample problems to ensure proper installation and compilation. Once the runs are completed the software checks for verification by performing a file comparison on the new output file and the old output file. Any differences between any of the files will cause a verification error. Due to the manner in which the verification is completed a verification error does not necessarily indicate a problem. This indicates that a closer look at the output files is needed to determine the cause of the error.
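
    A minimal sketch of this verification-by-comparison step is shown below; the directory layout and file names are assumptions, not those of the MCNP installation package.

```python
# Sketch of verification by output-file comparison: run the bundled sample
# problems, then flag any divergence between new and reference outputs so they
# can be inspected by hand (a difference is not necessarily an error).
from pathlib import Path

def verify_outputs(new_dir="outputs_new", ref_dir="outputs_reference"):
    """Compare each reference output file against its freshly generated counterpart."""
    failures = []
    for ref in sorted(Path(ref_dir).glob("sample*.out")):
        new = Path(new_dir) / ref.name
        if not new.exists() or new.read_text() != ref.read_text():
            failures.append(ref.name)
    return failures

if __name__ == "__main__":
    differences = verify_outputs()
    if differences:
        for name in differences:
            print("verification difference in", name)
    else:
        print("all sample problems reproduced the reference output")
```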

  1. Mitigating risks related to facilities management.

    PubMed

    O'Neill, Daniel P; Scarborough, Sydney

    2013-07-01

    By looking at metrics focusing on the functionality, age, capital investment, transparency, and sustainability (FACTS) of their organizations' facilities, facilities management teams can build potential business cases to justify upgrading the facilities. A FACTS analysis can ensure that capital spent on facilities will produce a higher or more certain ROI than alternatives. A consistent process for managing spending helps to avoid unexpected spikes that cost the enterprise more in the long run.

  2. How to run a successful Journal

    PubMed Central

    Jawaid, Shaukat Ali; Jawaid, Masood

    2017-01-01

    Publishing and successfully running a good quality peer reviewed biomedical scientific journal is not an easy task. Some of the pre-requisites include a competent experienced editor supported by a team. Long term sustainability of a journal will depend on good quality manuscripts, active editorial board, good quality of reviewers, workable business model to ensure financial support, increased visibility which will ensure increased submissions, indexation in various important databases, online availability and easy to use website. This manuscript outlines the logistics and technical issues which need to be resolved before starting a new journal and ensuring sustainability of a good quality peer reviewed journal. PMID:29492089

  3. Robust Optimization Design for Turbine Blade-Tip Radial Running Clearance using Hierarchically Response Surface Method

    NASA Astrophysics Data System (ADS)

    Zhiying, Chen; Ping, Zhou

    2017-11-01

    To balance computational precision and efficiency in the robust optimization of a complex mechanical assembly relationship such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate a system-level approximation model relating the overall parameters to blade-tip clearance, and a set of samples of design parameters and objective response mean and/or standard deviation is then generated using this system approximation model and the design of experiment method. Finally, a new response surface approximation model is constructed from those samples and used in the robust optimization process. The analysis results demonstrate that the proposed method can dramatically reduce the computational cost while ensuring computational precision. The presented research offers an effective way to carry out robust optimization design of turbine blade-tip radial running clearance.

  4. ATLAS Metadata Infrastructure Evolution for Run 2 and Beyond

    NASA Astrophysics Data System (ADS)

    van Gemmeren, P.; Cranshaw, J.; Malon, D.; Vaniachine, A.

    2015-12-01

    ATLAS developed and employed for Run 1 of the Large Hadron Collider a sophisticated infrastructure for metadata handling in event processing jobs. This infrastructure profits from a rich feature set provided by the ATLAS execution control framework, including standardized interfaces and invocation mechanisms for tools and services, segregation of transient data stores with concomitant object lifetime management, and mechanisms for handling occurrences asynchronous to the control framework's state machine transitions. This metadata infrastructure is evolving and being extended for Run 2 to allow its use and reuse in downstream physics analyses, analyses that may or may not utilize the ATLAS control framework. At the same time, multiprocessing versions of the control framework and the requirements of future multithreaded frameworks are leading to redesign of components that use an incident-handling approach to asynchrony. The increased use of scatter-gather architectures, both local and distributed, requires further enhancement of metadata infrastructure in order to ensure semantic coherence and robust bookkeeping. This paper describes the evolution of ATLAS metadata infrastructure for Run 2 and beyond, including the transition to dual-use tools—tools that can operate inside or outside the ATLAS control framework—and the implications thereof. It further examines how the design of this infrastructure is changing to accommodate the requirements of future frameworks and emerging event processing architectures.

  5. 76 FR 63858 - Drawbridge Operation Regulation; Trent River, New Bern, NC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-14

    ... River Bridge Runs. This deviation allows the bridge to remain in the closed position to ensure safe..., Docket Operations, telephone 202-366-9826. SUPPLEMENTARY INFORMATION: The Neuse River Bridge Run... River, mile 0.0, at New Bern, NC. The route of the three Neuse River Bridge Run races cross the bridge...

  6. Maximizing Safety, Social Support, and Participation in Walking/Jogging/Running Classes

    ERIC Educational Resources Information Center

    Consolo, Kitty A.

    2007-01-01

    Physical education instructors who teach high school or college walking/jogging/running classes, or who include walking or running as a segment of a wellness class, face a particular challenge in trying to meet each student's individual fitness needs while ensuring safety. This article provides strategies for effectively meeting individual needs…

  7. Evolution of CMS workload management towards multicore job support

    NASA Astrophysics Data System (ADS)

    Pérez-Calero Yzquierdo, A.; Hernández, J. M.; Khan, F. A.; Letts, J.; Majewski, K.; Rodrigues, A. M.; McCrea, A.; Vaandering, E.

    2015-12-01

    The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of the LHC Run 2. High-pileup complex-collision events represent a challenge for the traditional sequential programming in terms of memory and processing time budget. The CMS data production and processing framework is introducing the parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting singlecore processing for other tasks difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible during 2015 for the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems in order to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.

  8. Evolution of CMS Workload Management Towards Multicore Job Support

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez-Calero Yzquierdo, A.; Hernández, J. M.; Khan, F. A.

    The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of the LHC Run 2. High-pileup complex-collision events represent a challenge for the traditional sequential programming in terms of memory and processing time budget. The CMS data production and processing framework is introducing the parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting singlecore processing for other tasks difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible during 2015 for the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems in order to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.
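
    The internal dynamic partitioning of a multicore pilot can be pictured with a toy first-fit sketch (an illustration for this summary, not the CMS or glideinWMS implementation): a pilot holding a fixed number of cores packs queued payloads of varying core counts into the free slots.

```python
# Toy sketch of a pilot with internal dynamic partitioning: first-fit packing
# of single- and multi-core payloads into an allocation of `total_cores`.

def schedule(total_cores, payload_queue):
    """payload_queue: list of (job_id, cores_requested). Returns (running, pending)."""
    free = total_cores
    running, pending = [], []
    for job_id, cores in payload_queue:
        if cores <= free:
            running.append((job_id, cores))
            free -= cores
        else:
            pending.append((job_id, cores))
    return running, pending

running, pending = schedule(8, [("reco-1", 4), ("analysis-1", 1),
                                ("reco-2", 4), ("analysis-2", 1)])
print("running:", running)   # reco-1 (4) + analysis-1 (1) + analysis-2 (1) fit in 8 cores
print("pending:", pending)   # reco-2 waits until cores free up
```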

  9. ICT and mobile health to improve clinical process delivery. a research project for therapy management process innovation.

    PubMed

    Locatelli, Paolo; Montefusco, Vittorio; Sini, Elena; Restifo, Nicola; Facchini, Roberta; Torresani, Michele

    2013-01-01

    The volume and the complexity of clinical and administrative information make Information and Communication Technologies (ICTs) essential for running and innovating healthcare. This paper describes a project aimed at designing, developing and implementing a set of organizational models, acknowledged procedures and ICT tools (Mobile & Wireless solutions and Automatic Identification and Data Capture technologies) to improve the support, safety, reliability and traceability of a specific therapy management process (stem cells). The value of the project lies in designing a solution based on mobile and identification technology in tight collaboration with the physicians and other actors involved in the process, to ensure usability and effectiveness in process management.

  10. InkTag: Secure Applications on an Untrusted Operating System

    PubMed Central

    Hofmann, Owen S.; Kim, Sangman; Dunn, Alan M.; Lee, Michael Z.; Witchel, Emmett

    2014-01-01

    InkTag is a virtualization-based architecture that gives strong safety guarantees to high-assurance processes even in the presence of a malicious operating system. InkTag advances the state of the art in untrusted operating systems in both the design of its hypervisor and in the ability to run useful applications without trusting the operating system. We introduce paraverification, a technique that simplifies the InkTag hypervisor by forcing the untrusted operating system to participate in its own verification. Attribute-based access control allows trusted applications to create decentralized access control policies. InkTag is also the first system of its kind to ensure consistency between secure data and metadata, ensuring recoverability in the face of system crashes. PMID:24429939

  11. InkTag: Secure Applications on an Untrusted Operating System.

    PubMed

    Hofmann, Owen S; Kim, Sangman; Dunn, Alan M; Lee, Michael Z; Witchel, Emmett

    2013-01-01

    InkTag is a virtualization-based architecture that gives strong safety guarantees to high-assurance processes even in the presence of a malicious operating system. InkTag advances the state of the art in untrusted operating systems in both the design of its hypervisor and in the ability to run useful applications without trusting the operating system. We introduce paraverification , a technique that simplifies the InkTag hypervisor by forcing the untrusted operating system to participate in its own verification. Attribute-based access control allows trusted applications to create decentralized access control policies. InkTag is also the first system of its kind to ensure consistency between secure data and metadata, ensuring recoverability in the face of system crashes.

  12. Patient Data Synchronization Process in a Continuity of Care Environment

    PubMed Central

    Haras, Consuela; Sauquet, Dominique; Ameline, Philippe; Jaulent, Marie-Christine; Degoulet, Patrice

    2005-01-01

    In a distributed patient record environment, we analyze the processes needed to ensure exchange and access to EHR data. We propose an adapted method and the tools for data synchronization. Our study takes into account the issues of user rights management for data access and of decreasing the amount of data exchanged over the network. We describe a XML-based synchronization model that is portable and independent of specific medical data models. The implemented platform consists of several servers, of local network clients, of workstations running user’s interfaces and of data exchange and synchronization tools. PMID:16779049

  13. An Authentication Protocol for Future Sensor Networks.

    PubMed

    Bilal, Muhammad; Kang, Shin-Gak

    2017-04-28

    Authentication is one of the essential security services in Wireless Sensor Networks (WSNs) for ensuring secure data sessions. Sensor node authentication ensures the confidentiality and validity of data collected by the sensor node, whereas user authentication guarantees that only legitimate users can access the sensor data. In a mobile WSN, sensor and user nodes move across the network and exchange data with multiple nodes, thus experiencing the authentication process multiple times. The integration of WSNs with Internet of Things (IoT) brings forth a new kind of WSN architecture along with stricter security requirements; for instance, a sensor node or a user node may need to establish multiple concurrent secure data sessions. With concurrent data sessions, the frequency of the re-authentication process increases in proportion to the number of concurrent connections. Moreover, to establish multiple data sessions, it is essential that a protocol participant have the capability of running multiple instances of the protocol run, which makes the security issue even more challenging. The currently available authentication protocols were designed for the autonomous WSN and do not account for the above requirements. Hence, ensuring a lightweight and efficient authentication protocol has become more crucial. In this paper, we present a novel, lightweight and efficient key exchange and authentication protocol suite called the Secure Mobile Sensor Network (SMSN) Authentication Protocol. In the SMSN a mobile node goes through an initial authentication procedure and receives a re-authentication ticket from the base station. Later a mobile node can use this re-authentication ticket when establishing multiple data exchange sessions and/or when moving across the network. This scheme reduces the communication and computational complexity of the authentication process. We proved the strength of our protocol with rigorous security analysis (including formal analysis using the BAN-logic) and simulated the SMSN and previously proposed schemes in an automated protocol verifier tool. Finally, we compared the computational complexity and communication cost against well-known authentication protocols.

  14. An Authentication Protocol for Future Sensor Networks

    PubMed Central

    Bilal, Muhammad; Kang, Shin-Gak

    2017-01-01

    Authentication is one of the essential security services in Wireless Sensor Networks (WSNs) for ensuring secure data sessions. Sensor node authentication ensures the confidentiality and validity of data collected by the sensor node, whereas user authentication guarantees that only legitimate users can access the sensor data. In a mobile WSN, sensor and user nodes move across the network and exchange data with multiple nodes, thus experiencing the authentication process multiple times. The integration of WSNs with Internet of Things (IoT) brings forth a new kind of WSN architecture along with stricter security requirements; for instance, a sensor node or a user node may need to establish multiple concurrent secure data sessions. With concurrent data sessions, the frequency of the re-authentication process increases in proportion to the number of concurrent connections. Moreover, to establish multiple data sessions, it is essential that a protocol participant have the capability of running multiple instances of the protocol run, which makes the security issue even more challenging. The currently available authentication protocols were designed for the autonomous WSN and do not account for the above requirements. Hence, ensuring a lightweight and efficient authentication protocol has become more crucial. In this paper, we present a novel, lightweight and efficient key exchange and authentication protocol suite called the Secure Mobile Sensor Network (SMSN) Authentication Protocol. In the SMSN a mobile node goes through an initial authentication procedure and receives a re-authentication ticket from the base station. Later a mobile node can use this re-authentication ticket when establishing multiple data exchange sessions and/or when moving across the network. This scheme reduces the communication and computational complexity of the authentication process. We proved the strength of our protocol with rigorous security analysis (including formal analysis using the BAN-logic) and simulated the SMSN and previously proposed schemes in an automated protocol verifier tool. Finally, we compared the computational complexity and communication cost against well-known authentication protocols. PMID:28452937
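
    The re-authentication ticket idea can be sketched generically (this is not the SMSN protocol itself): the base station issues an integrity-protected ticket after the initial authentication, and later session setups verify the ticket instead of repeating the full handshake. The key names and ticket format below are assumptions.

```python
# Generic ticket-based re-authentication sketch: the base station signs a ticket
# with an HMAC; subsequent sessions are accepted if the tag and expiry check out.
import hashlib
import hmac
import json
import time

BS_KEY = b"base-station-secret"     # known only to the base station (assumption)

def issue_ticket(node_id, lifetime_s=3600):
    body = json.dumps({"node": node_id, "exp": int(time.time()) + lifetime_s})
    tag = hmac.new(BS_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body, tag

def verify_ticket(body, tag):
    expected = hmac.new(BS_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    return json.loads(body)["exp"] > time.time()

body, tag = issue_ticket("sensor-17")
print("re-authentication accepted:", verify_ticket(body, tag))
```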

  15. Job Priorities on Peregrine | High-Performance Computing | NREL

    Science.gov Websites

    allocation when run with qos=high. Requesting a Node Reservation If you are doing work that requires real scheduler more efficiently plan resources for larger jobs. When projects reach their allocation limit, jobs associated with those projects will run at very low priority, which will ensure that these jobs run only when

  16. Third Party Services for Enabling Business-to-Business Interactions

    NASA Astrophysics Data System (ADS)

    Shrivastava, Santosh

    Business-to-business (B2B) interactions concerned with the fulfilment of a given business function (e.g., order processing) require business partners to exchange electronic business documents and to act on them. This activity can be viewed as the business partners taking part in the execution of a shared business process, where each partner is responsible for performing their part in the process. Naturally, business process executions at each partner must be coordinated at run-time to ensure that the partners are performing mutually consistent actions (e.g., the seller is not shipping a product when the corresponding order has been cancelled by the buyer). A number of factors combine to make the task of business process coordination surprisingly hard:

  17. Design and implementation of laser target simulator in hardware-in-the-loop simulation system based on LabWindows/CVI and RTX

    NASA Astrophysics Data System (ADS)

    Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong

    2016-11-01

    In order to satisfy the requirements of real-time performance and generality, a laser target simulator for a semi-physical simulation system based on the RTX+LabWindows/CVI platform is proposed in this paper. Compared with the upper-lower computer simulation platform architecture used in most current real-time systems, this system has better maintainability and portability. The system runs on the Windows platform, using the Windows RTX real-time extension subsystem combined with a reflective memory network to ensure real-time performance for tasks such as calculating the simulation model, transmitting the simulation data, and maintaining real-time communication. The real-time tasks of the simulation system run under the RTSS process. At the same time, LabWindows/CVI is used to build a graphical interface and to handle the non-real-time tasks of the simulation, such as man-machine interaction and the display and storage of the simulation data, which run under the Win32 process. Through the design of RTX shared memory and a task scheduling algorithm, data interaction between the RTSS real-time task process and the Win32 non-real-time task process is accomplished. The experimental results show that this system has strong real-time performance, high stability, and high simulation accuracy, together with good human-computer interaction.

  18. Software Quality Assurance and Verification for the MPACT Library Generation Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yuxuan; Williams, Mark L.; Wiarda, Dorothea

    This report fulfills the requirements for the Consortium for the Advanced Simulation of Light-Water Reactors (CASL) milestone L2:RTM.P14.02, "SQA and Verification for MPACT Library Generation," by documenting the current status of the software quality, verification, and acceptance testing of nuclear data libraries for MPACT. It provides a brief overview of the library generation process, from general-purpose evaluated nuclear data files (ENDF/B) to a problem-dependent cross section library for modeling of light-water reactors (LWRs). The software quality assurance (SQA) programs associated with each of the software packages used to generate the nuclear data libraries are discussed; specific tests within the SCALE/AMPX and VERA/XSTools repositories are described. The methods and associated tests to verify the quality of the library during the generation process are described in detail. The library generation process has been automated to (1) ensure that it can be run without user intervention and (2) ensure that the library can be reproduced. Finally, the acceptance testing process that will be performed by representatives from the Radiation Transport Methods (RTM) Focus Area prior to the production library's release is described in detail.

  19. KSC-2014-4149

    NASA Image and Video Library

    2014-09-25

    CAPE CANAVERAL, Fla. – Coupled Florida East Coast Railway, or FEC, locomotives No. 433 and No. 428 make the first run past the Orbiter Processing Facility and Thermal Protection System Facility in Launch Complex 39 at NASA’s Kennedy Space Center in Florida during the Rail Vibration Test for the Canaveral Port Authority. Seismic monitors are collecting data as the train passes by. The purpose of the test is to collect amplitude, frequency and vibration test data utilizing two Florida East Coast locomotives operating on KSC tracks to ensure that future railroad operations will not affect launch vehicle processing at the center. Buildings instrumented for the test include the Rotation Processing Surge Facility, Thermal Protection Systems Facility, Vehicle Assembly Building, Orbiter Processing Facility and Booster Fabrication Facility. Photo credit: NASA/Daniel Casper

  20. Preprocessing for Eddy Dissipation Rate and TKE Profile Generation

    NASA Technical Reports Server (NTRS)

    Zak, J. Allen; Rodgers, William G., Jr.; McKissick, Burnell T. (Technical Monitor)

    2001-01-01

    The Aircraft Vortex Spacing System (AVOSS), a set of algorithms to determine aircraft spacing according to wake vortex behavior prediction, requires turbulence profiles to appropriately determine arrival and departure aircraft spacing. The ambient atmospheric turbulence profile must always be produced, even if the result is an arbitrary (canned) profile. The original turbulence profile code was generated by North Carolina State University and used in a non-real-time environment in the past. All the input parameters could be carefully selected and screened prior to input. Since this code must run in real-time using actual measurements in the field as input, it became imperative to begin a data checking and screening process as part of the real-time implementation. The process described herein is a step towards ensuring that the best possible turbulence profile is always provided to AVOSS. Data fill-ins, constant profiles and arbitrary profiles are used only as a last resort, but are essential to ensure uninterrupted application of AVOSS.
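
    The screening-and-fallback logic described above might look roughly like the sketch below; the validity thresholds, gap-filling rule, and canned profile values are illustrative assumptions, not AVOSS parameters.

```python
# Sketch of input screening with a fallback profile: validate measured values,
# carry neighbouring values over small gaps, and fall back to a canned profile
# only as a last resort. All numbers are assumptions.

CANNED_PROFILE = [1.0e-4] * 20      # arbitrary fallback EDR profile (assumption)

def screen_profile(measured, lo=1e-8, hi=1e-1, max_bad_fraction=0.3):
    """Replace out-of-range or missing values; fall back if too many are bad."""
    cleaned, bad = [], 0
    for value in measured:
        if value is None or not (lo <= value <= hi):
            bad += 1
            cleaned.append(cleaned[-1] if cleaned else None)   # carry previous level
        else:
            cleaned.append(value)
    if bad / len(measured) > max_bad_fraction:
        return CANNED_PROFILE                                  # last resort
    first_valid = next((v for v in cleaned if v is not None), None)
    if first_valid is None:
        return CANNED_PROFILE
    return [v if v is not None else first_valid for v in cleaned]

profile = screen_profile([None, 2e-4, 5e-4, 9.0, 3e-4] + [2e-4] * 15)
print(profile[:5])    # leading gap and the out-of-range spike have been filled in
```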

  1. Development, Validation and Integration of the ATLAS Trigger System Software in Run 2

    NASA Astrophysics Data System (ADS)

    Keyes, Robert; ATLAS Collaboration

    2017-10-01

    The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware, and software, associated to various sub-detectors that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization and workflow of the ongoing trigger software development, validation, and deployment. The goal of this development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm during data-taking runs as desired. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high performance computing grid with high priority. Performance metrics ranging from low-level memory and CPU requirements, to distributions and efficiencies of high-level physics quantities are visualized and validated by a range of experts. This is a multifaceted critical task that ties together many aspects of the experimental effort and thus directly influences the overall performance of the ATLAS experiment.

  2. Commercial application of rainfall simulation

    NASA Astrophysics Data System (ADS)

    Loch, Rob J.

    2010-05-01

    Landloch Pty Ltd is a commercial consulting firm, providing advice on a range of land management issues to the mining and construction industries in Australia. As part of the company's day-to-day operations, rainfall simulation is used to assess material erodibility and to investigate a range of site attributes. (Landloch does carry out research projects, though such are not its core business.) When treated as an everyday working tool, several aspects of rainfall simulation practice are distinctively modified. Firstly, the equipment used is regularly maintained, and regularly upgraded with a primary focus on ease, safety, and efficiency of use and on reliability of function. As well, trained and experienced technical support is considered essential. Landloch's chief technician has over 10 years experience in running rainfall simulators at locations across Australia and in Africa and the Pacific. Secondly, the specific experimental conditions established for each set of rainfall simulator runs are carefully considered to ensure that they accurately represent the field conditions to which the data will be subsequently applied. Considerations here include: • wetting and drying cycles to ensure material consolidation and/or cementation if appropriate; • careful attention to water quality if dealing with clay soils or with amendments such as gypsum; • strong focus on ensuring that the erosion processes considered are those of greatest importance to the field situation of concern; and • detailed description of both material and plot properties, to increase the potential for data to be applicable to a wider range of projects and investigations. Other important company procedures include: • For each project, the scientist or engineer responsible for analysing and reporting rainfall simulator data is present during the running of all field plots, as it is essential that they be aware of any specific conditions that may have developed when the plots were subjected to rain; and • Regular calibration of all equipment. In general, typical errors when rainfall simulation is carried out by inexperienced researchers include: • Failure to accurately measure rainfall rates (the most common error); • Inappropriate initial conditions, including wetting treatments; • Use of inappropriately small plots - relating to our concern at the erosion processes considered be those of genuine field relevance; • Inappropriate rainfall kinetic energies; and • Failure to observe critical processes operating on the study plots, such as saturation excess or the presence of impeding layers at shallow depths. Landloch regularly uses erodibility data to design stable batter profiles for minesite waste dumps. Subsequent monitoring of designed dumps has confirmed that modelled erosion rates are consistent with those subsequently measured under field conditions.

  3. Enhanced methodology of focus control and monitoring on scanner tool

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Jen; Kim, Young Ki; Hao, Xueli; Gomez, Juan-Manuel; Tian, Ye; Kamalizadeh, Ferhad; Hanson, Justin K.

    2017-03-01

    As the demand of the technology node shrinks from 14nm to 7nm, the reliability of tool monitoring techniques in advanced semiconductor fabs to achieve high yield and quality becomes more critical. Tool health monitoring methods involve periodic sampling of moderately processed test wafers to detect for particles, defects, and tool stability in order to ensure proper tool health. For lithography TWINSCAN scanner tools, the requirements for overlay stability and focus control are very strict. Current scanner tool health monitoring methods include running BaseLiner to ensure proper tool stability on a periodic basis. The focus measurement on YIELDSTAR by real-time or library-based reconstruction of critical dimensions (CD) and side wall angle (SWA) has been demonstrated as an accurate metrology input to the control loop. The high accuracy and repeatability of the YIELDSTAR focus measurement provides a common reference of scanner setup and user process. In order to further improve the metrology and matching performance, Diffraction Based Focus (DBF) metrology enabling accurate, fast, and non-destructive focus acquisition, has been successfully utilized for focus monitoring/control of TWINSCAN NXT immersion scanners. The optimal DBF target was determined to have minimized dose crosstalk, dynamic precision, set-get residual, and lens aberration sensitivity. By exploiting this new measurement target design, 80% improvement in tool-to-tool matching, >16% improvement in run-to-run mean focus stability, and >32% improvement in focus uniformity have been demonstrated compared to the previous BaseLiner methodology. Matching <2.4 nm across multiple NXT immersion scanners has been achieved with the new methodology of set baseline reference. This baseline technique, with either conventional BaseLiner low numerical aperture (NA=1.20) mode or advanced illumination high NA mode (NA=1.35), has also been evaluated to have consistent performance. This enhanced methodology of focus control and monitoring on multiple illumination conditions, opens an avenue to significantly reduce Focus-Exposure Matrix (FEM) wafer exposure for new product/layer best focus (BF) setup.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorensen, Christian

    The effort to collect and process foam for the purpose of recycling performed by the Material Sustainability and Pollution Prevention (MSP2) team at Sandia National Laboratories is an incredible one, but in order to make it run more efficiently it needed some tweaking. This project started in June of 2015. We used the Value Stream Mapping process to allow us to look at the current state of the foam collection and processing operation. We then thought of all the possible ways the process could be improved. Soon after that we discussed which of the "dreams" were feasible. And finally, we assigned action items to members of the team so as to ensure that the improvements actually occur. These improvements will then, due to varying factors, continue to occur over the next couple years.

  5. Data management and database framework for the MICE experiment

    NASA Astrophysics Data System (ADS)

    Martyniak, J.; Nebrensky, J. J.; Rajaram, D.; MICE Collaboration

    2017-10-01

    The international Muon Ionization Cooling Experiment (MICE) currently operating at the Rutherford Appleton Laboratory in the UK, is designed to demonstrate the principle of muon ionization cooling for application to a future Neutrino Factory or Muon Collider. We present the status of the framework for the movement and curation of both raw and reconstructed data. A raw data-mover has been designed to safely upload data files onto permanent tape storage as soon as they have been written out. The process has been automated, and checks have been built in to ensure the integrity of data at every stage of the transfer. The data processing framework has been recently redesigned in order to provide fast turnaround of reconstructed data for analysis. The automated reconstruction is performed on a dedicated machine in the MICE control room and any reprocessing is done at Tier-2 Grid sites. In conjunction with this redesign, a new reconstructed-data-mover has been designed and implemented. We also review the implementation of a robust database system that has been designed for MICE. The processing of data, whether raw or Monte Carlo, requires accurate knowledge of the experimental conditions. MICE has several complex elements ranging from beamline magnets to particle identification detectors to superconducting magnets. A Configuration Database, which contains information about the experimental conditions (magnet currents, absorber material, detector calibrations, etc.) at any given time has been developed to ensure accurate and reproducible simulation and reconstruction. A fully replicated, hot-standby database system has been implemented with a firewall-protected read-write master running in the control room, and a read-only slave running at a different location. The actual database is hidden from end users by a Web Service layer, which provides platform and programming language-independent access to the data.
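
    The integrity checks built into such a raw data-mover can be illustrated with a generic checksum-verified transfer (paths and the copy step are assumptions, not the MICE tooling): checksum the file before upload, re-checksum the stored copy, and record the digest only when the two match.

```python
# Generic checksum-verified transfer sketch: verify the integrity of a raw data
# file at both ends of the move before declaring it safely archived.
import hashlib
import shutil
from pathlib import Path

def sha256(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def move_raw_file(src: Path, archive_dir: Path):
    before = sha256(src)
    dest = archive_dir / src.name
    shutil.copy2(src, dest)                 # real system: transfer to tape storage
    if sha256(dest) != before:
        dest.unlink()                       # corrupted transfer: retry later
        raise IOError(f"checksum mismatch for {src.name}")
    (archive_dir / (src.name + ".sha256")).write_text(before + "\n")
    return dest

# usage (illustrative paths):
# move_raw_file(Path("raw/run07469.tar"), Path("/mnt/tape_buffer"))
```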

  6. AthenaMT: upgrading the ATLAS software framework for the many-core world with multi-threading

    NASA Astrophysics Data System (ADS)

    Leggett, Charles; Baines, John; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; van Gemmeren, Peter; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; ATLAS Collaboration

    2017-10-01

    ATLAS’s current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognized for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. After concluding a rigorous requirements phase, where many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread safe Algorithms. In this paper we report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying handling of features such as event and time dependent data, asynchronous callbacks, metadata, integration with the online High Level Trigger for partial processing in certain regions of interest, concurrent I/O, as well as ensuring thread safety of core services. We also report on upgrading the framework to handle Algorithms that are fully re-entrant.
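
    The distinction between legacy and fully re-entrant algorithms can be illustrated with a small sketch (not the Gaudi/AthenaMT API): a re-entrant algorithm keeps no mutable member state, so a single instance can execute concurrently on many event contexts, whereas a legacy algorithm must be cloned or serialized.

```python
# Illustrative sketch of a re-entrant algorithm: all per-event state lives in the
# event context or local variables, never in the algorithm instance itself.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass(frozen=True)
class EventContext:
    event_number: int
    data: float

class ReentrantAlgorithm:
    def __init__(self, scale: float):
        self._scale = scale                  # configuration only, never mutated

    def execute(self, ctx: EventContext) -> float:
        # no writes to `self`, so concurrent calls on different events are safe
        return self._scale * ctx.data

alg = ReentrantAlgorithm(scale=2.0)
events = [EventContext(i, float(i)) for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(alg.execute, events))
print(results)
```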

  7. Standard operating procedures for clinical research departments.

    PubMed

    Kee, Ashley Nichole

    2011-01-01

    A set of standard operating procedures (SOPs) provides a clinical research department with clear roles, responsibilities, and processes to ensure compliance, accuracy, and timeliness of data. SOPs also serve as a standardized training program for new employees. A practice may have an employee who can assist in the development of SOPs. There are also consultants who specialize in working with a practice to develop and write practice-specific SOPs. Making SOPs a priority will save a practice time and money in the long run and make the research practice more attractive to corporate study sponsors.

  8. Portability scenarios for intelligent robotic control agent software

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2014-06-01

    Portability scenarios are critical in ensuring that a piece of AI control software will run effectively across the collection of craft that it is required to control. This paper presents scenarios for control software that is designed to control multiple craft with heterogeneous movement and functional characteristics. For each prospective target-craft type, its capabilities, mission function, location, communications capabilities and power profile are presented and performance characteristics are reviewed. This work will inform future decision making related to software capabilities, hardware control capabilities and processing requirements.

  9. A Rule-Based Modeling for the Description of Flexible and Self-healing Business Processes

    NASA Astrophysics Data System (ADS)

    Boukhebouze, Mohamed; Amghar, Youssef; Benharkat, Aïcha-Nabila; Maamar, Zakaria

    In this paper we discuss the importance of ensuring that business processes are robust and agile at the same time. To this end, we consider reviewing the way business processes are managed. For instance, we consider offering a flexible way to model processes so that changes in regulations are handled through some self-healing mechanisms. These changes may raise exceptions at run-time if not properly reflected in these processes. To this end we propose a new rule-based model that adopts ECA rules and is built upon formal tools. The business logic of a process can be summarized with a set of rules that implement an organization’s policies. Each business rule is formalized using our ECAPE formalism (Event-Condition-Action-Post-condition-Post-event). This formalism allows translating a process into a graph of rules that is analyzed in terms of reliability and flexibility.
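
    A minimal sketch of the ECAPE idea, under the simplifying assumption that a rule can be represented as plain callables and that rules are chained into a graph by matching one rule's post-event to another rule's triggering event (all rule names and state fields are invented for illustration):

        # Minimal sketch of a simplified ECAPE rule: Event, Condition, Action,
        # Post-condition and Post-event.  Rules are chained into a graph by matching
        # one rule's post-event to another rule's triggering event.
        from dataclasses import dataclass
        from typing import Callable, Dict, List, Optional

        @dataclass
        class EcapeRule:
            name: str
            event: str                         # triggering event
            condition: Callable[[dict], bool]  # guard evaluated on the process state
            action: Callable[[dict], None]     # effect on the process state
            post_condition: Callable[[dict], bool]
            post_event: str                    # event emitted after the action

        rules: List[EcapeRule] = [
            EcapeRule("receive_order", "order_received",
                      lambda s: s["stock"] >= s["quantity"],
                      lambda s: s.update(reserved=True),
                      lambda s: s["reserved"],
                      "stock_reserved"),
            EcapeRule("ship_order", "stock_reserved",
                      lambda s: s["reserved"],
                      lambda s: s.update(shipped=True),
                      lambda s: s["shipped"],
                      "order_shipped"),
        ]

        def rule_graph(rules: List[EcapeRule]) -> Dict[str, List[str]]:
            """Edges go from a rule to every rule whose event matches its post-event."""
            return {r.name: [t.name for t in rules if t.event == r.post_event] for r in rules}

        def fire(rule: EcapeRule, state: dict) -> Optional[str]:
            """Apply one rule to the state and return the post-event it emits."""
            if rule.condition(state):
                rule.action(state)
                assert rule.post_condition(state)
                return rule.post_event
            return None

        state = {"stock": 10, "quantity": 3, "reserved": False, "shipped": False}
        print(rule_graph(rules))                         # {'receive_order': ['ship_order'], 'ship_order': []}
        print(fire(rules[0], state), state["reserved"])  # stock_reserved True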

  10. Operating a wide-area remote observing system for the W. M. Keck Observatory

    NASA Astrophysics Data System (ADS)

    Wirth, Gregory D.; Kibrick, Robert I.; Goodrich, Robert W.; Lyke, James E.

    2008-07-01

    For over a decade, the W. M. Keck Observatory's two 10-meter telescopes have been operated remotely from its Waimea headquarters. Over the last 6 years, WMKO remote observing has expanded to allow teams at dedicated sites in California to observe either in collaboration with colleagues in Waimea or entirely from the U.S. mainland. Once an experimental effort, the Observatory's mainland observing capability is now fully operational, supported on all science instruments (except the interferometer) and regularly used by astronomers at eight mainland sites. Establishing a convenient and secure observing capability from those sites required careful planning to ensure that they are properly equipped and configured. It also entailed a significant investment in hardware and software, including both custom scripts to simplify launching the instrument interface at remote sites and automated routers employing ISDN backup lines to ensure continuation of observing during Internet outages. Observers often wait until shortly before their runs to request use of the mainland facilities. Scheduling these requests and ensuring proper system operation prior to observing requires close coordination between personnel at WMKO and the mainland sites. An established protocol for approving requests and carrying out pre-run checkout has proven useful in ensuring success. The Observatory anticipates enhancing and expanding its remote observing system. Future plans include deploying dedicated summit computers for running VNC server software, implementing a web-based tracking system for mainland-based observing requests, expanding the system to additional mainland sites, and converting to full-time VNC operation for all instruments.

  11. 78 FR 24037 - Airworthiness Directives; The Boeing Company Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-24

    ... and to detect a pump running in an empty fuel tank. We are issuing this AD to reduce the potential of... features to detect electrical faults, to detect a pump running in an empty fuel tank, and to ensure that a fuel pump's operation is not affected by certain conditions. Comments We gave the public the...

  12. State politics and the creation of health insurance exchanges.

    PubMed

    Jones, David K; Greer, Scott L

    2013-08-01

    Health insurance exchanges are a key component of the Affordable Care Act. Each exchange faces the challenge of minimizing friction with existing policies, coordinating churn between programs, and maximizing take-up. State-run exchanges would likely be better positioned to address these issues than a federally run exchange, yet only one third of states chose this path. Policymakers must ensure that their exchange-whether state or federally run-succeeds. Whether this happens will greatly depend on the political dynamics in each state.

  13. Measurer’s Handbook: U.S. Army Anthropometric Survey, 1987-1988

    DTIC Science & Technology

    1988-05-04

    clothing, equipment, and systems properly accommodate Army personnel who run the body-size gamut from small women to large men. ... will form the basis for ensuring that Army clothing, equipment, and systems properly accommodate Army personnel who run the body-size gamut from ... interesting men and women whose jobs in the Army run the gamut from armorers to pediatricians. Many will be interested in you and your job. Most of the

  14. Setting Standards for Medically-Based Running Analysis

    PubMed Central

    Vincent, Heather K.; Herman, Daniel C.; Lear-Barnes, Leslie; Barnes, Robert; Chen, Cong; Greenberg, Scott; Vincent, Kevin R.

    2015-01-01

    Setting standards for medically based running analyses is necessary to ensure that runners receive a high-quality service from practitioners. Medical and training history, physical and functional tests, and motion analysis of running at self-selected and faster speeds are key features of a comprehensive analysis. Self-reported history and movement symmetry are critical factors that require follow-up therapy or long-term management. Pain or injury is typically the result of a functional deficit above or below the site along the kinematic chain. PMID:25014394

  15. Dynamic modelling of an adsorption storage tank using a hybrid approach combining computational fluid dynamics and process simulation

    USGS Publications Warehouse

    Mota, J.P.B.; Esteves, I.A.A.C.; Rostam-Abadi, M.

    2004-01-01

    A computational fluid dynamics (CFD) software package has been coupled with the dynamic process simulator of an adsorption storage tank for methane fuelled vehicles. The two solvers run as independent processes and handle non-overlapping portions of the computational domain. The codes exchange data on the boundary interface of the two domains to ensure continuity of the solution and of its gradient. A software interface was developed to dynamically suspend and activate each process as necessary, and be responsible for data exchange and process synchronization. This hybrid computational tool has been successfully employed to accurately simulate the discharge of a new tank design and evaluate its performance. The case study presented here shows that CFD and process simulation are highly complementary computational tools, and that there are clear benefits to be gained from a close integration of the two. ?? 2004 Elsevier Ltd. All rights reserved.
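
    The coupling pattern described above can be sketched schematically as two solvers that own non-overlapping sub-domains and exchange the boundary state each step. The toy code below only illustrates that synchronization loop; the stand-in solver functions are placeholders, not CFD or process-simulation models:

        # Schematic sketch of the coupling strategy described above: two solvers own
        # non-overlapping sub-domains, advance one time step at a time, and exchange
        # the value at their shared boundary before the next step.  The "solvers" here
        # are trivial stand-ins, not CFD or process-simulation codes.
        def cfd_step(boundary_value, dt):
            """Toy stand-in for the CFD solver on the tank interior."""
            return boundary_value - 0.1 * boundary_value * dt

        def process_step(boundary_value, dt):
            """Toy stand-in for the dynamic process simulator on the rest of the domain."""
            return boundary_value + 0.05 * (1.0 - boundary_value) * dt

        def run_coupled(n_steps=10, dt=1.0):
            interface_value = 1.0                      # shared state on the boundary
            for step in range(n_steps):
                # activate one solver, suspend the other, then swap roles
                cfd_value = cfd_step(interface_value, dt)
                process_value = process_step(cfd_value, dt)
                # simple continuity condition: both codes agree on the boundary value
                interface_value = 0.5 * (cfd_value + process_value)
                print(f"step {step:2d}  interface value = {interface_value:.4f}")

        if __name__ == "__main__":
            run_coupled()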

  16. Understanding quantum measurement from the solution of dynamical models

    NASA Astrophysics Data System (ADS)

    Allahverdyan, Armen E.; Balian, Roger; Nieuwenhuizen, Theo M.

    2013-04-01

    The quantum measurement problem, to wit, understanding why a unique outcome is obtained in each individual experiment, is currently tackled by solving models. After an introduction we review the many dynamical models proposed over the years for elucidating quantum measurements. The approaches range from standard quantum theory, relying for instance on quantum statistical mechanics or on decoherence, to quantum-classical methods, to consistent histories and to modifications of the theory. Next, a flexible and rather realistic quantum model is introduced, describing the measurement of the z-component of a spin through interaction with a magnetic memory simulated by a Curie-Weiss magnet, including N≫1 spins weakly coupled to a phonon bath. Initially prepared in a metastable paramagnetic state, it may transit to its up or down ferromagnetic state, triggered by its coupling with the tested spin, so that its magnetization acts as a pointer. A detailed solution of the dynamical equations is worked out, exhibiting several time scales. Conditions on the parameters of the model are found, which ensure that the process satisfies all the features of ideal measurements. Various imperfections of the measurement are discussed, as well as attempts of incompatible measurements. The first steps consist in the solution of the Hamiltonian dynamics for the spin-apparatus density matrix Dˆ(t). Its off-diagonal blocks in a basis selected by the spin-pointer coupling, rapidly decay owing to the many degrees of freedom of the pointer. Recurrences are ruled out either by some randomness of that coupling, or by the interaction with the bath. On a longer time scale, the trend towards equilibrium of the magnet produces a final state Dˆ(t) that involves correlations between the system and the indications of the pointer, thus ensuring registration. Although Dˆ(t) has the form expected for ideal measurements, it only describes a large set of runs. Individual runs are approached by analyzing the final states associated with all possible subensembles of runs, within a specified version of the statistical interpretation. There the difficulty lies in a quantum ambiguity: There exist many incompatible decompositions of the density matrix Dˆ(t) into a sum of sub-matrices, so that one cannot infer from its sole determination the states that would describe small subsets of runs. This difficulty is overcome by dynamics due to suitable interactions within the apparatus, which produce a special combination of relaxation and decoherence associated with the broken invariance of the pointer. Any subset of runs thus reaches over a brief delay a stable state which satisfies the same hierarchic property as in classical probability theory; the reduction of the state for each individual run follows. Standard quantum statistical mechanics alone appears sufficient to explain the occurrence of a unique answer in each run and the emergence of classicality in a measurement process. Finally, pedagogical exercises are proposed and lessons for future works on models are suggested, while the statistical interpretation is promoted for teaching.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jordan, Amy B.; Boukhalfa, Hakim; Caporuscio, Florie Andre

    To gain confidence in the predictive capability of numerical models, experimental validation must be performed to ensure that parameters and processes are correctly simulated. The laboratory investigations presented herein aim to address knowledge gaps for heat-generating nuclear waste (HGNW) disposal in bedded salt that remain after examination of prior field and laboratory test data. Primarily, we are interested in better constraining the thermal, hydrological, and physicochemical behavior of brine, water vapor, and salt when moist salt is heated. The target of this work is to use run-of-mine (RoM) salt; however during FY2015 progress was made using high-purity, granular sodium chloride.

  18. Department of Education Educator Equity Initiative

    ERIC Educational Resources Information Center

    Lindsey, Kevin

    2014-01-01

    On July 7, 2014, the U.S. Department of Education announced plans to enforce a provision of the 2001 No Child Left Behind (NCLB) Act meant to ensure that every student is taught by a great teacher and attends a school run by great administrators. The provision requires states to develop and implement plans to ensure that no subgroup of students is…

  19. Healthcare4VideoStorm: Making Smart Decisions Based on Storm Metrics.

    PubMed

    Zhang, Weishan; Duan, Pengcheng; Chen, Xiufeng; Lu, Qinghua

    2016-04-23

    Storm-based stream processing is widely used for real-time large-scale distributed processing. Knowing the run-time status and ensuring performance is critical to providing expected dependability for some applications, e.g., continuous video processing for security surveillance. The granularity of existing scheduling strategies is too coarse to achieve good performance, and they mainly consider network resources, not computing resources, when scheduling. In this paper, we propose Healthcare4Storm, a framework that derives insights from Storm metrics to gain knowledge of an application's health status and arrive at smart scheduling decisions. It takes into account both network and computing resources and conducts scheduling at a fine-grained level using tuples instead of topologies. The comprehensive evaluation shows that the proposed framework has good performance and can improve the dependability of Storm-based applications.

  20. DSISoft—a MATLAB VSP data processing package

    NASA Astrophysics Data System (ADS)

    Beaty, K. S.; Perron, G.; Kay, I.; Adam, E.

    2002-05-01

    DSISoft is a public domain vertical seismic profile processing software package developed at the Geological Survey of Canada. DSISoft runs under MATLAB version 5.0 and above and hence is portable between computer operating systems supported by MATLAB (i.e. Unix, Windows, Macintosh, Linux). The package includes modules for reading and writing various standard seismic data formats, and for data editing, sorting, filtering, and other basic processing steps. The processing sequence can be scripted, allowing batch processing and easy documentation. A structured format has been developed to ensure future additions to the package are compatible with existing modules. Interactive modules have been created using MATLAB's graphical user interface builder for displaying seismic data, picking first break times, examining frequency spectra, doing f-k filtering, and plotting the trace header information. DSISoft's modular design facilitates the incorporation of new processing algorithms as they are developed. This paper gives an overview of the scope of the software and serves as a guide for the addition of new modules.
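
    The scripted processing sequence described above can be sketched as a list of modules sharing a common calling convention. The example below is written in Python rather than MATLAB and is not the DSISoft API; module names and the header convention are invented to illustrate the design:

        # Illustrative sketch (in Python rather than MATLAB, and not the DSISoft API)
        # of a scriptable processing sequence: every module takes (data, headers) and
        # returns (data, headers), so new modules remain compatible with existing ones
        # and a batch job can be expressed as an ordered list of steps.
        def remove_dc(data, headers):
            mean = sum(data) / len(data)
            return [x - mean for x in data], headers

        def scale(data, headers, factor=2.0):
            return [x * factor for x in data], headers

        def run_sequence(data, headers, sequence):
            for module, kwargs in sequence:
                data, headers = module(data, headers, **kwargs)
                headers.setdefault("history", []).append(module.__name__)  # document the flow
            return data, headers

        trace = [1.0, 2.0, 3.0, 4.0]
        headers = {"trace_id": 1}
        sequence = [(remove_dc, {}), (scale, {"factor": 0.5})]
        out, hdr = run_sequence(trace, headers, sequence)
        print(out, hdr["history"])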

  1. Just-in-time Data Analytics and Visualization of Climate Simulations using the Bellerophon Framework

    NASA Astrophysics Data System (ADS)

    Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.

    2015-12-01

    Climate model simulations are used to understand the evolution and variability of Earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete. Typically, the simulation results are not analyzed until the model runs have ended. During the course of the simulation, the output may be processed periodically to ensure that the model is performing as expected. However, most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time period needed for the completion of the simulation constrains the productivity of climate scientists. Our implementation of near real-time data visualization analytics capabilities allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can then use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible. The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software looks to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.
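
    The co-scheduling step described above amounts to watching for new simulation output and submitting a separate visualization job for each new file. A hedged sketch of that monitoring loop follows; the output directory, polling interval and submission command are placeholders, not the Bellerophon implementation:

        # Hedged sketch of the co-scheduling idea: poll the simulation output directory
        # and submit a visualization job for every file that has not been processed yet.
        # The directory, polling interval and submission command are hypothetical
        # placeholders, not the Bellerophon implementation.
        import subprocess
        import time
        from pathlib import Path

        OUTPUT_DIR = Path("/scratch/climate_run/output")    # hypothetical
        SUBMIT_CMD = ["sbatch", "render_visualization.sh"]  # hypothetical batch script

        def watch_and_submit(poll_seconds=60):
            processed = set()
            while True:
                for path in sorted(OUTPUT_DIR.glob("*.nc")):
                    if path not in processed:
                        # submit a separate, co-scheduled job so the model itself is unaffected
                        subprocess.run(SUBMIT_CMD + [str(path)], check=False)
                        processed.add(path)
                time.sleep(poll_seconds)

        if __name__ == "__main__":
            watch_and_submit()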

  2. [Validation of cold chain during distribution of parenteral nutrition].

    PubMed

    Tuan, Federico; Perone, Virginia; Verdini, Rocio; Pell, Maria Betina; Traverso, Maria Luz

    2015-09-01

    This study aims to demonstrate the suitability of the process used to package extemporaneous parenteral nutrition mixtures for distribution, with the objective of preserving the cold chain during transport until the mixture reaches the patient, which is necessary to ensure the stability, effectiveness and safety of these mixtures. Concurrent validation: design and implementation of a protocol for evaluating the packaging and distribution process for MNPE developed by a pharmaceutical laboratory, with tests run according to predefined acceptance criteria. The evaluation is performed twice, in summer and on routes that require longer transfer times. Conservation of temperature is evaluated by monitoring the internal temperature of each type of packaging, recorded by calibrated data-logger equipment. The different tests meet the established criteria. The collected data confirm maintenance of the cold chain for longer than the transfer time to the most distant points. This study establishes the suitability of the processes for maintaining the cold chain during transfer from the pharmaceutical laboratory to the patient. Since breaking the cold chain can cause changes in the compatibility and stability of parenteral nutrition and failures in nutritional support, this study contributes to patient safety, one of the relevant dimensions of quality of health care. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  3. Level-2 Milestone 3244: Deploy Dawn ID Machine for Initial Science Runs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, D

    2009-09-21

    This report documents the delivery, installation, integration, testing, and acceptance of the Dawn system, ASC L2 milestone 3244: Deploy Dawn ID Machine for Initial Science Runs, due September 30, 2009. The full text of the milestone is included in Attachment 1. The description of the milestone is: This milestone will be a result of work started three years ago with the planning for a multi-petaFLOPS UQ-focused platform (Sequoia) and will be satisfied when a smaller ID version of the final system is delivered, installed, integrated, tested, accepted, and deployed at LLNL for initial science runs in support of the SSP mission. The deliverable for this milestone will be an LA petascale computing system (named Dawn) usable for code development and scaling necessary to ensure effective use of a final Sequoia platform (expected in 2011-2012), and for urgent SSP program needs. Allocation and scheduling of Dawn as an LA system will likely be performed informally, similar to what has been used for BlueGene/L. However, provision will be made to allow for dedicated access times for application scaling studies across the entire Dawn resource. The milestone was completed on April 1, 2009, when science runs began on the Dawn system. The following sections describe the Dawn system architecture, current status, installation and integration time line, and testing and acceptance process. A project plan is included as Attachment 2. Attachment 3 is a letter certifying the handoff of the system to a nuclear weapons stockpile customer. Attachment 4 presents the results of science runs completed on the system.

  4. Pathways to designing and running an operational flood forecasting system: an adventure game!

    NASA Astrophysics Data System (ADS)

    Arnal, Louise; Pappenberger, Florian; Ramos, Maria-Helena; Cloke, Hannah; Crochemore, Louise; Giuliani, Matteo; Aalbers, Emma

    2017-04-01

    In the design and building of an operational flood forecasting system, a large number of decisions have to be taken. These include technical decisions related to the choice of the meteorological forecasts to be used as input to the hydrological model, the choice of the hydrological model itself (its structure and parameters), the selection of a data assimilation procedure to run in real-time, the use (or not) of a post-processor, and the computing environment to run the models and display the outputs. Additionally, a number of trans-disciplinary decisions are involved in the process, such as the way the needs of the users will be considered in the modelling setup and how the forecasts (and their quality) will be efficiently communicated to ensure usefulness and build confidence in the forecasting system. We propose to reflect on the numerous, alternative pathways to designing and running an operational flood forecasting system through an adventure game. In this game, the player is the protagonist of an interactive story driven by challenges, exploration and problem-solving. For this presentation, you will have a chance to play this game, acting as the leader of a forecasting team at an operational centre. Your role is to manage the actions of your team and make sequential decisions that impact the design and running of the system in preparation for and during a flood event, and that deal with the consequences of the forecasts issued. Your actions are evaluated by how much they cost you in time, money and credibility. Your aim is to take decisions that will ultimately lead to a good balance between time and money spent, while keeping your credibility high over the whole process. This game was designed to highlight the complexities behind decision-making in an operational forecasting and emergency response context, in terms of the variety of pathways that can be selected as well as the timescale, cost and timing of effective actions.

  5. TRIC: an automated alignment strategy for reproducible protein quantification in targeted proteomics.

    PubMed

    Röst, Hannes L; Liu, Yansheng; D'Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi

    2016-09-01

    Next-generation mass spectrometric (MS) techniques such as SWATH-MS have substantially increased the throughput and reproducibility of proteomic analysis, but ensuring consistent quantification of thousands of peptide analytes across multiple liquid chromatography-tandem MS (LC-MS/MS) runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we developed TRIC (http://proteomics.ethz.ch/tric/), a software tool that utilizes fragment-ion data to perform cross-run alignment, consistent peak-picking and quantification for high-throughput targeted proteomics. TRIC reduced the identification error compared to a state-of-the-art SWATH-MS analysis without alignment by more than threefold at constant recall while correcting for highly nonlinear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups. Thus, TRIC fills a gap in the pipeline for automated analysis of massively parallel targeted proteomics data sets.

  6. Use of software engineering techniques in the design of the ALEPH data acquisition system

    NASA Astrophysics Data System (ADS)

    Charity, T.; McClatchey, R.; Harvey, J.

    1987-08-01

    The SASD methodology is being used to provide a rigorous design framework for various components of the ALEPH data acquisition system. The Entity-Relationship data model is used to describe the layout and configuration of the control and acquisition systems and detector components. State Transition Diagrams are used to specify control applications such as run control and resource management and Data Flow Diagrams assist in decomposing software tasks and defining interfaces between processes. These techniques encourage rigorous software design leading to enhanced functionality and reliability. Improved documentation and communication ensures continuity over the system life-cycle and simplifies project management.

  7. Learning to walk before we run: what can medical education learn from the human body about integrated care.

    PubMed

    Manusov, Eron G; Marlowe, Daniel P; Teasley, Deborah J

    2013-04-01

    True integration requires a shift in all levels of medical and allied health education; one that emphasizes team learning, practicing, and evaluating from the beginning of each student's educational experience, whether that is as physician, nurse, psychologist, or any other health profession. Integration of healthcare services will not occur until medical education focuses, like the human body, on each system working inter-dependently and cohesively to maintain balance through continual change and adaptation. The human body develops and maintains homeostasis by a process of communication: true integrated care relies on learned interprofessionality and ensures shared responsibility and practice.

  8. Learning to walk before we run: what can medical education learn from the human body about integrated care

    PubMed Central

    Manusov, Eron G; Marlowe, Daniel P; Teasley, Deborah J

    2013-01-01

    True integration requires a shift in all levels of medical and allied health education; one that emphasizes team learning, practicing, and evaluating from the beginning of each student’s educational experience, whether that is as physician, nurse, psychologist, or any other health profession. Integration of healthcare services will not occur until medical education focuses, like the human body, on each system working inter-dependently and cohesively to maintain balance through continual change and adaptation. The human body develops and maintains homeostasis by a process of communication: true integrated care relies on learned interprofessionality and ensures shared responsibility and practice. PMID:23882167

  9. Effects of running with backpack loads during simulated gravitational transitions: Improvements in postural control

    NASA Astrophysics Data System (ADS)

    Brewer, Jeffrey David

    The National Aeronautics and Space Administration is planning for long-duration manned missions to the Moon and Mars. For feasible long-duration space travel, improvements in exercise countermeasures are necessary to maintain cardiovascular fitness, bone mass throughout the body and the ability to perform coordinated movements in a constant gravitational environment that is six orders of magnitude higher than the "near weightlessness" condition experienced during transit to and/or orbit of the Moon, Mars, and Earth. In such gravitational transitions feedback and feedforward postural control strategies must be recalibrated to ensure optimal locomotion performance. In order to investigate methods of improving postural control adaptation during these gravitational transitions, a treadmill based precision stepping task was developed to reveal changes in neuromuscular control of locomotion following both simulated partial gravity exposure and post-simulation exercise countermeasures designed to speed lower extremity impedance adjustment mechanisms. The exercise countermeasures included a short period of running with or without backpack loads immediately after partial gravity running. A novel suspension type partial gravity simulator incorporating spring balancers and a motor-driven treadmill was developed to facilitate body weight off loading and various gait patterns in both simulated partial and full gravitational environments. Studies have provided evidence that suggests: the environmental simulator constructed for this thesis effort does induce locomotor adaptations following partial gravity running; the precision stepping task may be a helpful test for illuminating these adaptations; and musculoskeletal loading in the form of running with or without backpack loads may improve the locomotor adaptation process.

  10. XML Translator for Interface Descriptions

    NASA Technical Reports Server (NTRS)

    Boroson, Elizabeth R.

    2009-01-01

    A computer program defines an XML schema for specifying the interface to a generic FPGA from the perspective of software that will interact with the device. This XML interface description is then translated into header files for C, Verilog, and VHDL. User interface definition input is checked via both the provided XML schema and the translator module to ensure consistency and accuracy. Currently, programming used on both sides of an interface is inconsistent. This makes it hard to find and fix errors. By using a common schema, both sides are forced to use the same structure by using the same framework and toolset. This makes for easy identification of problems, which leads to the ability to formulate a solution. The toolset contains constants that allow a programmer to use each register, and to access each field in the register. Once programming is complete, the translator is run as part of the make process, which ensures that whenever an interface is changed, all of the code that uses the header files describing it is recompiled.
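
    A minimal sketch of the translation idea for the C output only: parse a register description from XML and emit #define constants for each register offset and field. The XML element and attribute names are invented for illustration and do not reproduce the actual schema:

        # Minimal sketch: parse a register description from XML and emit C #define
        # constants for each register offset and field.  The XML element and attribute
        # names are invented for illustration; the real schema and generated headers differ.
        import xml.etree.ElementTree as ET

        INTERFACE_XML = """
        <interface name="FPGA_CTRL">
          <register name="STATUS" offset="0x00">
            <field name="READY" bit="0"/>
            <field name="ERROR" bit="1"/>
          </register>
          <register name="COMMAND" offset="0x04"/>
        </interface>
        """

        def emit_c_header(xml_text: str) -> str:
            root = ET.fromstring(xml_text)
            prefix = root.get("name")
            lines = [f"#ifndef {prefix}_H", f"#define {prefix}_H", ""]
            for reg in root.findall("register"):
                lines.append(f"#define {prefix}_{reg.get('name')}_OFFSET {reg.get('offset')}")
                for field in reg.findall("field"):
                    lines.append(
                        f"#define {prefix}_{reg.get('name')}_{field.get('name')}_BIT {field.get('bit')}"
                    )
            lines += ["", f"#endif /* {prefix}_H */", ""]
            return "\n".join(lines)

        print(emit_c_header(INTERFACE_XML))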

  11. Brexit will boost pay.

    PubMed

    Dorries, Nadine

    2016-07-13

    Now that we are leaving the European Union we will have more control of our health service. This will allow us to increase the resources available and ensure we have the medical staff needed to keep it running.

  12. The ALICE Glance Shift Accounting Management System (SAMS)

    NASA Astrophysics Data System (ADS)

    Martins Silva, H.; Abreu Da Silva, I.; Ronchetti, F.; Telesca, A.; Maidantchik, C.

    2015-12-01

    ALICE (A Large Ion Collider Experiment) is an experiment at the CERN LHC (Large Hadron Collider) studying the physics of strongly interacting matter and the quark-gluon plasma. The experiment operation requires a shift crew at the experimental site 24 hours a day, 7 days a week, composed of ALICE collaboration members. Shift duties are calculated for each institute according to their correlated members. In order to ensure the full coverage of the experiment operation as well as its good quality, the ALICE Shift Accounting Management System (SAMS) is used to manage the shift bookings as well as the needed training. ALICE SAMS is the result of a joint effort between the Federal University of Rio de Janeiro (UFRJ) and the ALICE Collaboration. The Glance technology, developed by the UFRJ and the ATLAS experiment, forms the basis of the system as an intermediate layer isolating the particularities of the databases. In this paper, we describe the ALICE SAMS development process and functionalities. The database has been modelled according to the collaboration needs and is fully integrated with the ALICE Collaboration repository to access member information and the corresponding roles and activities. Run, period and training coordinators can manage their subsystem operation and ensure efficient personnel management. Members of the ALICE collaboration can book shifts and on-call duties according to pre-defined rights. ALICE SAMS features a user profile containing all the statistics and user contact information, as well as an institute profile. Both the user and institute profiles are public (within the scope of the collaboration) and show the credit balance in real time. A shift calendar allows the Run Coordinator to plan data taking periods in terms of which subsystem shifts are enabled or disabled, as well as the responsible on-call people and slots. An overview display presents the shift crew present in the control room and allows the Run Coordination team to confirm the presence of both regular and trainee shift personnel, necessary for credit accounting.

  13. Synchronized and noise-robust audio recordings during realtime magnetic resonance imaging scans.

    PubMed

    Bresch, Erik; Nielsen, Jon; Nayak, Krishna; Narayanan, Shrikanth

    2006-10-01

    This letter describes a data acquisition setup for recording, and processing, running speech from a person in a magnetic resonance imaging (MRI) scanner. The main focus is on ensuring synchronicity between image and audio acquisition, and in obtaining good signal to noise ratio to facilitate further speech analysis and modeling. A field-programmable gate array based hardware design for synchronizing the scanner image acquisition to other external data such as audio is described. The audio setup itself features two fiber optical microphones and a noise-canceling filter. Two noise cancellation methods are described including a novel approach using a pulse sequence specific model of the gradient noise of the MRI scanner. The setup is useful for scientific speech production studies. Sample results of speech and singing data acquired and processed using the proposed method are given.

  14. Synchronized and noise-robust audio recordings during realtime magnetic resonance imaging scans (L)

    PubMed Central

    Bresch, Erik; Nielsen, Jon; Nayak, Krishna; Narayanan, Shrikanth

    2007-01-01

    This letter describes a data acquisition setup for recording, and processing, running speech from a person in a magnetic resonance imaging (MRI) scanner. The main focus is on ensuring synchronicity between image and audio acquisition, and in obtaining good signal to noise ratio to facilitate further speech analysis and modeling. A field-programmable gate array based hardware design for synchronizing the scanner image acquisition to other external data such as audio is described. The audio setup itself features two fiber optical microphones and a noise-canceling filter. Two noise cancellation methods are described including a novel approach using a pulse sequence specific model of the gradient noise of the MRI scanner. The setup is useful for scientific speech production studies. Sample results of speech and singing data acquired and processed using the proposed method are given. PMID:17069275

  15. EMPRESS: A European Project to Enhance Process Control Through Improved Temperature Measurement

    NASA Astrophysics Data System (ADS)

    Pearce, J. V.; Edler, F.; Elliott, C. J.; Rosso, L.; Sutton, G.; Andreu, A.; Machin, G.

    2017-08-01

    A new European project called EMPRESS, funded by the EURAMET program 'European Metrology Program for Innovation and Research', is described. The 3-year project, which started in the summer of 2015, is intended to substantially augment the efficiency of high-value manufacturing processes by improving temperature measurement techniques at the point of use. The project consortium has 18 partners and 5 external collaborators, from the metrology sector, high-value manufacturing, sensor manufacturing, and academia. Accurate control of temperature is key to ensuring process efficiency and product consistency and is often not achieved to the level required for modern processes. Enhanced efficiency of processes may take several forms including reduced product rejection/waste; improved energy efficiency; increased intervals between sensor recalibration/maintenance; and increased sensor reliability, i.e., reduced amount of operator intervention. Traceability of temperature measurements to the International Temperature Scale of 1990 (ITS-90) is a critical factor in establishing low measurement uncertainty and reproducible, consistent process control. Introducing such traceability in situ (i.e., within the industrial process) is a theme running through this project.

  16. Investigation on the Practicality of Developing Reduced Thermal Models

    NASA Technical Reports Server (NTRS)

    Lombardi, Giancarlo; Yang, Kan

    2015-01-01

    Throughout the spacecraft design and development process, detailed instrument thermal models are created to simulate their on-orbit behavior and to ensure that they do not exceed any thermal limits. These detailed models, while generating highly accurate predictions, can sometimes lead to long simulation run times, especially when integrated with a spacecraft observatory model. Therefore, reduced models containing less detail are typically produced in tandem with the detailed models so that results may be more readily available, albeit less accurate. In the current study, both reduced and detailed instrument models are integrated with their associated spacecraft bus models to examine the impact of instrument model reduction on run time and accuracy. Preexisting instrument bus thermal model pairs from several projects were used to determine trends between detailed and reduced thermal models; namely, the Mirror Optical Bench (MOB) on the Gravity and Extreme Magnetism Small Explorer (GEMS) spacecraft, Advanced Topography Laser Altimeter System (ATLAS) on the Ice, Cloud, and Elevation Satellite 2 (ICESat-2), and the Neutral Mass Spectrometer (NMS) on the Lunar Atmosphere and Dust Environment Explorer (LADEE). Hot and cold cases were run for each model to capture the behavior of the models at both thermal extremes. It was found that, although decreasing the number of nodes from a detailed to a reduced model did reduce the run time, the time savings were not large, nor was the relationship between the percentage of nodes reduced and the time saved linear. However, significant losses in accuracy were observed with greater model reduction. It was found that while reduced models are useful in decreasing run time, there exists a threshold of reduction beyond which the loss in accuracy outweighs the benefit of the reduced run time.

  17. The World Optical Depth Research and Calibration Center (WORCC) quality assurance and quality control of GAW-PFR AOD measurements

    NASA Astrophysics Data System (ADS)

    Kazadzis, Stelios; Kouremeti, Natalia; Nyeki, Stephan; Gröbner, Julian; Wehrli, Christoph

    2018-02-01

    The World Optical Depth Research Calibration Center (WORCC) is a section within the World Radiation Center at Physikalisches-Meteorologisches Observatorium (PMOD/WRC), Davos, Switzerland, established after the recommendations of the World Meteorological Organization for calibration of aerosol optical depth (AOD)-related Sun photometers. WORCC is mandated to develop new methods for instrument calibration, to initiate homogenization activities among different AOD networks and to run a network (GAW-PFR) of Sun photometers. In this work we describe the calibration hierarchy and methods used under WORCC and the basic procedures, tests and processing techniques in order to ensure the quality assurance and quality control of the AOD-retrieved data.

  18. Opportunities and pitfalls in clinical proof-of-concept: principles and examples.

    PubMed

    Chen, Chao

    2018-04-01

    Clinical proof-of-concept trials crucially inform major resource deployment decisions. This paper discusses several mechanisms for enhancing their rigour and efficiency. The importance of careful consideration when using a surrogate endpoint is illustrated; situational effectiveness of run-in patient enrichment is explored; a versatile tool is introduced to ensure a strong pharmacological underpinning; the benefits of dose-titration are revealed by simulation; and the importance of adequately scheduled observations is shown. The general process of model-based trial design and analysis is described and several examples demonstrate the value in historical data, simulation-guided design, model-based analysis and trial adaptation informed by interim analysis. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. State Politics and the Creation of Health Insurance Exchanges

    PubMed Central

    Greer, Scott L.

    2013-01-01

    Health insurance exchanges are a key component of the Affordable Care Act. Each exchange faces the challenge of minimizing friction with existing policies, coordinating churn between programs, and maximizing take-up. State-run exchanges would likely be better positioned to address these issues than a federally run exchange, yet only one third of states chose this path. Policymakers must ensure that their exchange—whether state or federally run—succeeds. Whether this happens will greatly depend on the political dynamics in each state. PMID:23763405

  20. Security-aware Virtual Machine Allocation in the Cloud: A Game Theoretic Approach

    DTIC Science & Technology

    2015-01-13

    predecessor, however, this paper used empirical evidence and actual data from running experiments on the Amazon EC2 cloud. They began by running all 5... is through effective VM allocation management of the cloud provider to ensure delivery of maximum security for all cloud users. The negative...

  1. Assessment and management of the performance risk of a pilot reclaimed water disinfection process.

    PubMed

    Zhou, Guangyu; Zhao, Xinhua; Zhang, Lei; Wu, Qing

    2013-10-01

    Chlorination disinfection has been widely used in reclaimed water treatment plants to ensure water quality. In order to assess the downstream quality risk of a running reclaimed water disinfection process, a set of dynamic equations was developed to simulate reactions in the disinfection process concerning variables of bacteria, chemical oxygen demand (COD), ammonia and monochloramine. The model was calibrated by the observations obtained from a pilot disinfection process which was designed to simulate the actual process in a reclaimed water treatment plant. A Monte Carlo algorithm was applied to calculate the predictive effluent quality distributions that were used in the established hierarchical assessment system for the downstream quality risk, and the key factors affecting the downstream quality risk were defined using the Regional Sensitivity Analysis method. The results showed that the seasonal upstream quality variation caused considerable downstream quality risk; the effluent ammonia was significantly influenced by its upstream concentration; the upstream COD was a key factor determining the process effluent risk of bacterial, COD and residual disinfectant indexes; and lower COD and ammonia concentrations in the influent would mean better downstream quality.
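
    The Monte Carlo step described above can be sketched generically: sample the uncertain influent quality, run each sample through a disinfection model, and estimate the probability of exceeding an effluent threshold. The model form, parameter values and threshold below are placeholders, not the paper's calibrated equations:

        # Generic Monte Carlo sketch of the risk-assessment step: sample the uncertain
        # influent quality, run each sample through a (placeholder) disinfection model,
        # and estimate the probability that the effluent exceeds a quality threshold.
        # The model form, parameters and threshold below are illustrative only.
        import random

        def toy_disinfection_model(influent_cod, influent_ammonia, chlorine_dose):
            """Placeholder effluent model: higher COD and ammonia consume more disinfectant."""
            demand = 0.05 * influent_cod + 0.3 * influent_ammonia
            residual = max(chlorine_dose - demand, 0.0)
            effluent_bacteria = 1000.0 * pow(10.0, -2.0 * residual)   # simple log-inactivation
            return effluent_bacteria

        def exceedance_probability(n_samples=10000, threshold=100.0, chlorine_dose=3.0):
            exceed = 0
            for _ in range(n_samples):
                cod = random.gauss(30.0, 8.0)        # uncertain upstream COD (mg/L)
                ammonia = random.gauss(2.0, 0.8)     # uncertain upstream ammonia (mg/L)
                if toy_disinfection_model(max(cod, 0.0), max(ammonia, 0.0), chlorine_dose) > threshold:
                    exceed += 1
            return exceed / n_samples

        if __name__ == "__main__":
            random.seed(1)
            print(f"P(effluent bacteria > threshold) = {exceedance_probability():.3f}")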

  2. A Secure and Robust Approach to Software Tamper Resistance

    NASA Astrophysics Data System (ADS)

    Ghosh, Sudeep; Hiser, Jason D.; Davidson, Jack W.

    Software tamper-resistance mechanisms have increasingly assumed significance as a technique to prevent unintended uses of software. Closely related to anti-tampering techniques are obfuscation techniques, which make code difficult to understand or analyze and, therefore, challenging to modify meaningfully. This paper describes a secure and robust approach to software tamper resistance and obfuscation using process-level virtualization. The proposed techniques involve novel uses of software checksumming guards and encryption to protect an application. In particular, a virtual machine (VM) is assembled with the application at software build time such that the application cannot run without the VM. The VM provides just-in-time decryption of the program and dynamism for the application's code. The application's code is used to protect the VM to ensure a level of circular protection. Finally, to prevent the attacker from obtaining an analyzable snapshot of the code, the VM periodically discards all decrypted code. We describe a prototype implementation of these techniques and evaluate the run-time performance of applications using our system. We also discuss how our system provides stronger protection against tampering attacks than previously described tamper-resistance approaches.
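
    As a toy illustration of the checksumming-guard concept alone (the paper's protection additionally relies on a process-level virtual machine and encryption), the sketch below records a hash of a protected function's bytecode and refuses to run the function if the hash no longer matches:

        # Toy illustration of a checksumming guard (the general idea only, not the
        # paper's mechanism): hash a protected function's bytecode at "build time"
        # and re-verify the hash before every call.
        import hashlib

        def protected_routine(x):
            return x * x + 1

        def checksum(func) -> str:
            return hashlib.sha256(func.__code__.co_code).hexdigest()

        EXPECTED = checksum(protected_routine)   # recorded at "build time"

        def guarded_call(func, *args):
            if checksum(func) != EXPECTED:
                raise RuntimeError("tamper detected: code checksum mismatch")
            return func(*args)

        print(guarded_call(protected_routine, 3))   # 10

        # Simulate tampering by swapping in different code, then calling the guard again.
        protected_routine.__code__ = (lambda x: 0).__code__
        try:
            guarded_call(protected_routine, 3)
        except RuntimeError as err:
            print(err)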

  3. A distributed control system for the lower-hybrid current drive system on the Tokamak de Varennes

    NASA Astrophysics Data System (ADS)

    Bagdoo, J.; Guay, J. M.; Chaudron, G.-A.; Decoste, R.; Demers, Y.; Hubbard, A.

    1990-08-01

    An rf current drive system with an output power of 1 MW at 3.7 GHz is under development for the Tokamak de Varennes. The control system is based on an Ethernet local-area network of programmable logic controllers as front end, personal computers as consoles, and CAMAC-based DSP processors. The DSP processors ensure the PID control of the phase and rf power of each klystron, and the fast protection of high-power rf hardware, all within a 40 μs loop. Slower control and protection, event sequencing and the run-time database are provided by the programmable logic controllers, which communicate, via the LAN, with the consoles. The latter run commercial process-control console software. The LAN protocol respects the first four layers of the ISO/OSI 802.3 standard. Synchronization with the tokamak control system is provided by commercially available CAMAC timing modules, which trigger shot-related events and reference waveform generators. A detailed description of each subsystem and a performance evaluation of the system will be presented.
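
    The fast loop described above performs a textbook discrete PID update. A minimal sketch follows; the gains, setpoint and toy plant response are placeholders, not the Tokamak de Varennes parameters:

        # Textbook discrete PID update of the kind run in the fast control loop
        # described above (gains, setpoint and plant model below are placeholders).
        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, setpoint, measurement):
                error = setpoint - measurement
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        # Toy first-order "klystron power" plant driven by the controller output.
        pid = PID(kp=0.8, ki=2.0, kd=1e-5, dt=40e-6)   # 40 microsecond loop period
        power = 0.0
        for _ in range(5):
            drive = pid.update(setpoint=1.0, measurement=power)
            power += 0.1 * (drive - power)             # placeholder plant response
            print(f"power = {power:.3f}")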

  4. North Atlantic Ocean OSSE system development: Nature Run evaluation and application to hurricane interaction with the Gulf Stream

    NASA Astrophysics Data System (ADS)

    Kourafalou, Vassiliki H.; Androulidakis, Yannis S.; Halliwell, George R.; Kang, HeeSook; Mehari, Michael M.; Le Hénaff, Matthieu; Atlas, Robert; Lumpkin, Rick

    2016-11-01

    A high resolution, free-running model has been developed for the hurricane region of the North Atlantic Ocean. The model is evaluated with a variety of observations to ensure that it adequately represents both the ocean climatology and variability over this region, with a focus on processes relevant to hurricane-ocean interactions. As such, it can be used as the "Nature Run" (NR) model within the framework of Observing System Simulation Experiments (OSSEs), designed specifically to improve the ocean component of coupled ocean-atmosphere hurricane forecast models. The OSSE methodology provides quantitative assessment of the impact of specific observations on the skill of forecast models and enables the comprehensive design of future observational platforms and the optimization of existing ones. Ocean OSSEs require a state-of-the-art, high-resolution free-running model simulation that represents the true ocean (the NR). This study concentrates on the development and data based evaluation of the NR model component, which leads to a reliable model simulation that has a dual purpose: (a) to provide the basis for future hurricane related OSSEs; (b) to explore process oriented studies of hurricane-ocean interactions. A specific example is presented, where the impact of Hurricane Bill (2009) on the eastward extension and transport of the Gulf Stream is analyzed. The hurricane induced cold wake is shown in both NR simulation and observations. Interaction of storm-forced currents with the Gulf Stream produced a temporary large reduction in eastward transport downstream from Cape Hatteras and had a marked influence on frontal displacement in the upper ocean. The kinetic energy due to ageostrophic currents showed a significant increase as the storm passed, and then decreased to pre-storm levels within 8 days after the hurricane advanced further north. This is a unique result of direct hurricane impact on a western boundary current, with possible implications on the ocean feedback on hurricane evolution.

  5. Cassini Archive Tracking System

    NASA Technical Reports Server (NTRS)

    Conner, Diane; Sayfi, Elias; Tinio, Adrian

    2006-01-01

    The Cassini Archive Tracking System (CATS) is a computer program that enables tracking of scientific data transfers from originators to the Planetary Data System (PDS) archives. Without CATS, there is no systematic means of locating products in the archive process or ensuring their completeness. By keeping a database of transfer communications and status, CATS enables the Cassini Project and the PDS to efficiently and accurately report on archive status. More importantly, problem areas are easily identified through customized reports that can be generated on the fly from any Web-enabled computer. A Web-browser interface and clearly defined authorization scheme provide safe distributed access to the system, where users can perform functions such as create customized reports, record a transfer, and respond to a transfer. CATS ensures that Cassini provides complete science archives to the PDS on schedule and that those archives are available to the science community by the PDS. The three-tier architecture is loosely coupled and designed for simple adaptation to multimission use. Written in the Java programming language, it is portable and can be run on any Java-enabled Web server.

  6. A Monotonic Degradation Assessment Index of Rolling Bearings Using Fuzzy Support Vector Data Description and Running Time

    PubMed Central

    Shen, Zhongjie; He, Zhengjia; Chen, Xuefeng; Sun, Chuang; Liu, Zhiwen

    2012-01-01

    Performance degradation assessment based on condition monitoring plays an important role in ensuring reliable operation of equipment, reducing production downtime and saving maintenance costs, yet performance degradation has strong fuzziness, and the dynamic information is random and fuzzy, making it a challenge how to assess the fuzzy bearing performance degradation. This study proposes a monotonic degradation assessment index of rolling bearings using fuzzy support vector data description (FSVDD) and running time. FSVDD constructs the fuzzy-monitoring coefficient ε̄ which is sensitive to the initial defect and stably increases as faults develop. Moreover, the parameter ε̄ describes the accelerating relationships between the damage development and running time. However, the index ε̄ with an oscillating trend disagrees with the irreversible damage development. The running time is introduced to form a monotonic index, namely damage severity index (DSI). DSI inherits all advantages of ε̄ and overcomes its disadvantage. A run-to-failure test is carried out to validate the performance of the proposed method. The results show that DSI reflects the growth of the damages with running time perfectly. PMID:23112591

  7. A monotonic degradation assessment index of rolling bearings using fuzzy support vector data description and running time.

    PubMed

    Shen, Zhongjie; He, Zhengjia; Chen, Xuefeng; Sun, Chuang; Liu, Zhiwen

    2012-01-01

    Performance degradation assessment based on condition monitoring plays an important role in ensuring reliable operation of equipment, reducing production downtime and saving maintenance costs, yet performance degradation has strong fuzziness, and the dynamic information is random and fuzzy, making it a challenge how to assess the fuzzy bearing performance degradation. This study proposes a monotonic degradation assessment index of rolling bearings using fuzzy support vector data description (FSVDD) and running time. FSVDD constructs the fuzzy-monitoring coefficient ε̄ which is sensitive to the initial defect and stably increases as faults develop. Moreover, the parameter ε̄ describes the accelerating relationships between the damage development and running time. However, the index ε̄ with an oscillating trend disagrees with the irreversible damage development. The running time is introduced to form a monotonic index, namely damage severity index (DSI). DSI inherits all advantages of ε̄ and overcomes its disadvantage. A run-to-failure test is carried out to validate the performance of the proposed method. The results show that DSI reflects the growth of the damages with running time perfectly.

  8. Efficient production of acetone-butanol-ethanol (ABE) from cassava by a fermentation-pervaporation coupled process.

    PubMed

    Li, Jing; Chen, Xiangrong; Qi, Benkun; Luo, Jianquan; Zhang, Yuming; Su, Yi; Wan, Yinhua

    2014-10-01

    Production of acetone-butanol-ethanol (ABE) from cassava was investigated with a fermentation-pervaporation (PV) coupled process. ABE products were removed in situ from the fermentation broth to alleviate solvent toxicity to Clostridium acetobutylicum DP217. Compared to the batch fermentation without PV, the glucose consumption rate and solvent productivity increased by 15% and 21%, respectively, in the batch fermentation-PV coupled process, while in the continuous fermentation-PV coupled process running for 304 h, the substrate consumption rate, solvent productivity and yield increased by 58%, 81% and 15%, reaching 2.02 g/Lh, 0.76 g/Lh and 0.38 g/g, respectively. Silicalite-1 filled polydimethylsiloxane (PDMS)/polyacrylonitrile (PAN) membrane modules ensured media recycle without significant fouling, steadily generating a highly concentrated ABE solution containing 201.8 g/L ABE with 122.4 g/L butanol. After phase separation, a final product containing 574.3 g/L ABE with 501.1 g/L butanol was obtained. Therefore, the fermentation-PV coupled process has the potential to decrease the cost of ABE production. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Gpufit: An open-source toolkit for GPU-accelerated curve fitting.

    PubMed

    Przybylski, Adrian; Thiel, Björn; Keller-Findeisen, Jan; Stock, Bernd; Bates, Mark

    2017-11-16

    We present a general purpose, open-source software library for estimation of non-linear parameters by the Levenberg-Marquardt algorithm. The software, Gpufit, runs on a Graphics Processing Unit (GPU) and executes computations in parallel, resulting in a significant gain in performance. We measured a speed increase of up to 42 times when comparing Gpufit with an identical CPU-based algorithm, with no loss of precision or accuracy. Gpufit is designed such that it is easily incorporated into existing applications or adapted for new ones. Multiple software interfaces, including to C, Python, and Matlab, ensure that Gpufit is accessible from most programming environments. The full source code is published as an open source software repository, making its function transparent to the user and facilitating future improvements and extensions. As a demonstration, we used Gpufit to accelerate an existing scientific image analysis package, yielding significantly improved processing times for super-resolution fluorescence microscopy datasets.
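
    As a hedged illustration of the kind of fit Gpufit accelerates, the sketch below performs a CPU-based Levenberg-Marquardt fit of a 1D Gaussian peak with SciPy. This mirrors the reference CPU computation mentioned above; it is not Gpufit's own interface, whose bindings are documented with the library:

        # Illustration of the kind of curve fit Gpufit accelerates: a CPU-based
        # Levenberg-Marquardt fit of a 1D Gaussian peak using SciPy.  This is a
        # reference-style CPU computation, not Gpufit's own interface.
        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian_1d(x, amplitude, center, width, offset):
            return amplitude * np.exp(-((x - center) ** 2) / (2.0 * width ** 2)) + offset

        rng = np.random.default_rng(0)
        x = np.linspace(-5.0, 5.0, 50)
        true_params = (10.0, 0.5, 1.2, 2.0)
        data = gaussian_1d(x, *true_params) + rng.normal(0.0, 0.3, x.size)

        # method="lm" selects the Levenberg-Marquardt algorithm, as used by Gpufit.
        popt, pcov = curve_fit(gaussian_1d, x, data, p0=(8.0, 0.0, 1.0, 1.0), method="lm")
        print("fitted parameters:", np.round(popt, 3))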

  10. Object schemas for grounding language in a responsive robot

    NASA Astrophysics Data System (ADS)

    Hsiao, Kai-Yuh; Tellex, Stefanie; Vosoughi, Soroush; Kubat, Rony; Roy, Deb

    2008-12-01

    An approach is introduced for physically grounded natural language interpretation by robots that reacts appropriately to unanticipated physical changes in the environment and dynamically assimilates new information pertinent to ongoing tasks. At the core of the approach is a model of object schemas that enables a robot to encode beliefs about physical objects in its environment using collections of coupled processes responsible for sensorimotor interaction. These interaction processes run concurrently in order to ensure responsiveness to the environment, while co-ordinating sensorimotor expectations, action planning and language use. The model has been implemented on a robot that manipulates objects on a tabletop in response to verbal input. The implementation responds to verbal requests such as 'Group the green block and the red apple', while adapting in real time to unexpected physical collisions and taking opportunistic advantage of any new information it may receive through perceptual and linguistic channels.

  11. Process Algebra Approach for Action Recognition in the Maritime Domain

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry

    2011-01-01

    The maritime environment poses a number of challenges for autonomous operation of surface boats. Among these challenges are the highly dynamic nature of the environment, the onboard sensing and reasoning requirements for obeying the navigational rules of the road, and the need for robust day/night hazard detection and avoidance. Development of full mission level autonomy entails addressing these challenges, coupled with inference of the tactical and strategic intent of possibly adversarial vehicles in the surrounding environment. This paper introduces PACIFIC (Process Algebra Capture of Intent From Information Content), an onboard system based on formal process algebras that is capable of extracting actions/activities from sensory inputs and reasoning within a mission context to ensure proper responses. PACIFIC is part of the Behavior Engine in CARACaS (Cognitive Architecture for Robotic Agent Command and Sensing), a system that is currently running on a number of U.S. Navy unmanned surface and underwater vehicles. Results from a series of experimental studies that demonstrate the effectiveness of the system are also presented.

  12. Flexible server-side processing of climate archives

    NASA Astrophysics Data System (ADS)

    Juckes, Martin; Stephens, Ag; Damasio da Costa, Eduardo

    2014-05-01

    The flexibility and interoperability of OGC Web Processing Services are combined with an extensive range of data processing operations supported by the Climate Data Operators (CDO) library to facilitate processing of the CMIP5 climate data archive. The challenges posed by this peta-scale archive allow us to test and develop systems which will help us to deal with approaching exa-scale challenges. The CEDA WPS package allows users to manipulate data in the archive and export the results without first downloading the data -- in some cases this can drastically reduce the data volumes which need to be transferred and greatly reduce the time needed for the scientists to get their results. Reductions in data transfer are achieved at the expense of an additional computational load imposed on the archive (or near-archive) infrastructure. This is managed with a load balancing system. Short jobs may be run in near real-time, longer jobs will be queued. When jobs are queued the user is provided with a web dashboard displaying job status. A clean split between the data manipulation software and the request management software is achieved by exploiting the extensive CDO library. This library has a long history of development to support the needs of the climate science community. Use of the library ensures that operations run on data by the system can be reproduced by users using the same operators installed on their own computers. Examples using the system deployed for the CMIP5 archive will be shown and issues which need to be addressed as archive volumes expand into the exa-scale will be discussed.
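
    Since the server-side operations are ordinary CDO operators, a user can reproduce a reduction locally with the same operator chain; a minimal sketch (with hypothetical file names) invoking the CDO command line from Python is shown below.

        # Reproduce a typical server-side reduction with local CDO operators.
        import subprocess

        def cdo(op_chain, infile, outfile):
            """Run a chained CDO operation, e.g. 'timmean -fldmean -selyear,1990/2000'."""
            subprocess.run(['cdo'] + op_chain.split() + [infile, outfile], check=True)

        # Field mean over the globe for 1990-2000, then the time mean of that series.
        cdo('timmean -fldmean -selyear,1990/2000',
            'tas_Amon_model_historical.nc',   # hypothetical CMIP5-style input file
            'tas_global_mean_1990_2000.nc')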

  13. Flexible server-side processing of climate archives

    NASA Astrophysics Data System (ADS)

    Juckes, M. N.; Stephens, A.; da Costa, E. D.

    2013-12-01

    The flexibility and interoperability of OGC Web Processing Services are combined with an extensive range of data processing operations supported by the Climate Data Operators (CDO) library to facilitate processing of the CMIP5 climate data archive. The challenges posed by this peta-scale archive allow us to test and develop systems which will help us to deal with approaching exa-scale challenges. The CEDA WPS package allows users to manipulate data in the archive and export the results without first downloading the data -- in some cases this can drastically reduce the data volumes which need to be transferred and greatly reduce the time needed for the scientists to get their results. Reductions in data transfer are achieved at the expense of an additional computational load imposed on the archive (or near-archive) infrastructure. This is managed with a load balancing system. Short jobs may be run in near real-time, longer jobs will be queued. When jobs are queued the user is provided with a web dashboard displaying job status. A clean split between the data manipulation software and the request management software is achieved by exploiting the extensive CDO library. This library has a long history of development to support the needs of the climate science community. Use of the library ensures that operations run on data by the system can be reproduced by users using the same operators installed on their own computers. Examples using the system deployed for the CMIP5 archive will be shown and issues which need to be addressed as archive volumes expand into the exa-scale will be discussed.

  14. Awareness and compliance with recommended running shoe guidelines among U.S. Army soldiers.

    PubMed

    Teyhen, Deydre S; Thomas, Rachelle M; Roberts, Candi C; Gray, Brian E; Robbins, Travis; McPoil, Thomas; Childs, John D; Molloy, Joseph M

    2010-11-01

    The purpose of this study was to determine awareness and compliance with recommended running shoe selection, sizing, and replacement guidelines among U.S. Army soldiers. Soldiers (n = 524) attending training at Fort Sam Houston, Texas completed self-report questionnaires and a foot assessment, which included measurement of foot size and arch height index. Researchers examined each soldier's running shoes for type, wear pattern, and general condition. Thirty-five percent of the soldiers wore shoes that were inappropriately sized; 56.5% wore shoes that were inappropriate for their foot type. Thirty-five percent of the soldiers had excessively worn shoes and 63% did not know recommended shoe replacement guidelines. Further efforts may be necessary to ensure that soldiers are aware of and compliant with recommended running shoe selection, sizing, and replacement guidelines. Future research is needed to determine whether adherence to these guidelines has a favorable effect on reducing risk of overuse injury.

  15. Effects of Surface Inclination on the Vertical Loading Rates and Landing Pattern during the First Attempt of Barefoot Running in Habitual Shod Runners.

    PubMed

    An, W; Rainbow, M J; Cheung, R T H

    2015-01-01

    Barefoot running has been proposed to reduce vertical loading rates, which are a risk factor for running injuries. Most previous studies evaluated runners on level surfaces. This study examined the effect of surface inclination on vertical loading rates and landing pattern during the first attempt of barefoot running among habitual shod runners. Twenty habitual shod runners were asked to run on a treadmill at 8.0 km/h at three inclination angles (0°; +10°; -10°) with and without their usual running shoes. Vertical average loading rate (VALR) and vertical instantaneous loading rate (VILR) were obtained by established methods. Landing pattern was determined using a high-speed camera. VALR and VILR in the shod condition were significantly higher (p < 0.001) in declined than in level or inclined treadmill running, but not in the barefoot condition (p > 0.382). There was no difference (p > 0.413) in landing pattern among the surface inclinations. Only one runner demonstrated a complete transition to non-heel strike landing in all slope conditions. Reducing the heel strike ratio in barefoot running did not ensure a decrease in loading rates (p > 0.15). Conversely, non-heel strike landing, regardless of footwear condition, resulted in a softer landing (p < 0.011).

  16. Effects of Surface Inclination on the Vertical Loading Rates and Landing Pattern during the First Attempt of Barefoot Running in Habitual Shod Runners

    PubMed Central

    An, W.; Rainbow, M. J.; Cheung, R. T. H.

    2015-01-01

    Barefoot running has been proposed to reduce vertical loading rates, which are a risk factor for running injuries. Most previous studies evaluated runners on level surfaces. This study examined the effect of surface inclination on vertical loading rates and landing pattern during the first attempt of barefoot running among habitual shod runners. Twenty habitual shod runners were asked to run on a treadmill at 8.0 km/h at three inclination angles (0°; +10°; −10°) with and without their usual running shoes. Vertical average loading rate (VALR) and vertical instantaneous loading rate (VILR) were obtained by established methods. Landing pattern was determined using a high-speed camera. VALR and VILR in the shod condition were significantly higher (p < 0.001) in declined than in level or inclined treadmill running, but not in the barefoot condition (p > 0.382). There was no difference (p > 0.413) in landing pattern among the surface inclinations. Only one runner demonstrated a complete transition to non-heel strike landing in all slope conditions. Reducing the heel strike ratio in barefoot running did not ensure a decrease in loading rates (p > 0.15). Conversely, non-heel strike landing, regardless of footwear condition, resulted in a softer landing (p < 0.011). PMID:26258133

  17. Towards Efficient Scientific Data Management Using Cloud Storage

    NASA Technical Reports Server (NTRS)

    He, Qiming

    2013-01-01

    A software prototype allows users to back up and restore data to/from both public and private cloud storage such as Amazon's S3 and NASA's Nebula. Unlike other off-the-shelf tools, this software ensures user data security in the cloud (through encryption), and minimizes users' operating costs by using space- and bandwidth-efficient compression and incremental backup. Parallel data processing utilities have also been developed by using massively scalable cloud computing in conjunction with cloud storage. One of the innovations in this software is using modified open source components to work with a private cloud like NASA Nebula. Another innovation is porting the complex backup-to-cloud software to embedded Linux, running on home networking devices, in order to benefit more users.
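
    A minimal sketch of the incremental, compressed, and encrypted backup idea described above is given below; it is an illustration only, not the prototype's actual design, and the chunk size, upload callback, and use of the cryptography package's Fernet primitive are assumptions.

        # Illustrative incremental backup: only chunks not seen before are
        # compressed, encrypted and uploaded; unchanged chunks are skipped.
        import hashlib
        import zlib
        from cryptography.fernet import Fernet

        CHUNK = 4 * 1024 * 1024  # 4 MiB chunks (hypothetical size)

        def backup(path, seen_hashes, fernet, upload):
            index = []                                     # chunk hashes, in file order
            with open(path, 'rb') as f:
                while True:
                    chunk = f.read(CHUNK)
                    if not chunk:
                        break
                    digest = hashlib.sha256(chunk).hexdigest()
                    index.append(digest)
                    if digest not in seen_hashes:          # incremental: new data only
                        blob = fernet.encrypt(zlib.compress(chunk))  # compress, then encrypt
                        upload(digest, blob)               # e.g. an S3/Nebula object put
                        seen_hashes.add(digest)
            return index                                   # enough to restore the file later

        # Usage (placeholders):
        # fernet = Fernet(Fernet.generate_key())
        # backup('/data/results.h5', set(), fernet, lambda key, blob: None)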

  18. IMPACT OF NOBLE METALS AND MERCURY ON HYDROGEN GENERATION DURING HIGH LEVEL WASTE PRETREATMENT AT THE SAVANNAH RIVER SITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, M; Edwards, T; Koopman, D

    2009-03-03

    The Defense Waste Processing Facility (DWPF) at the Savannah River Site vitrifies radioactive High Level Waste (HLW) for repository internment. The process consists of three major steps: waste pretreatment, vitrification, and canister decontamination/sealing. HLW consists of insoluble metal hydroxides (primarily iron, aluminum, calcium, magnesium, manganese, and uranium) and soluble sodium salts (carbonate, hydroxide, nitrite, nitrate, and sulfate). The pretreatment process in the Chemical Processing Cell (CPC) consists of two process tanks, the Sludge Receipt and Adjustment Tank (SRAT) and the Slurry Mix Evaporator (SME) as well as a melter feed tank. During SRAT processing, nitric and formic acids are added to the sludge to lower pH, destroy nitrite and carbonate ions, and reduce mercury and manganese. During the SME cycle, glass formers are added, and the batch is concentrated to the final solids target prior to vitrification. During these processes, hydrogen can be produced by catalytic decomposition of excess formic acid. The waste contains silver, palladium, rhodium, ruthenium, and mercury, but silver and palladium have been shown to be insignificant factors in catalytic hydrogen generation during the DWPF process. A full factorial experimental design was developed to ensure that the existence of statistically significant two-way interactions could be determined without confounding of the main effects with the two-way interaction effects. Rh ranged from 0.0026-0.013% and Ru ranged from 0.010-0.050% in the dried sludge solids, while initial Hg ranged from 0.5-2.5 wt%, as shown in Table 1. The nominal matrix design consisted of twelve SRAT cycles. Testing included: a three factor (Rh, Ru, and Hg) study at two levels per factor (eight runs), three duplicate midpoint runs, and one additional replicate run to assess reproducibility away from the midpoint. Midpoint testing was used to identify potential quadratic effects from the three factors. A single sludge simulant was used for all tests and was spiked with the required amount of noble metals immediately prior to performing the test. Acid addition was kept effectively constant except to compensate for variations in the starting mercury concentration. SME cycles were also performed during six of the tests.

  19. 49 CFR 383.113 - Required skills.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... inspected to ensure a safe operating condition of each part, including: (i) Engine compartment; (ii) Cab/engine start; (iii) Steering; (iv) Suspension; (v) Brakes; (vi) Wheels; (vii) Side of vehicle; (viii... they will activate in emergency situations; (iv) With the engine running, make sure that the system...

  20. 49 CFR 383.113 - Required skills.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... inspected to ensure a safe operating condition of each part, including: (i) Engine compartment; (ii) Cab/engine start; (iii) Steering; (iv) Suspension; (v) Brakes; (vi) Wheels; (vii) Side of vehicle; (viii... they will activate in emergency situations; (iv) With the engine running, make sure that the system...

  1. 49 CFR 383.113 - Required skills.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... inspected to ensure a safe operating condition of each part, including: (i) Engine compartment; (ii) Cab/engine start; (iii) Steering; (iv) Suspension; (v) Brakes; (vi) Wheels; (vii) Side of vehicle; (viii... they will activate in emergency situations; (iv) With the engine running, make sure that the system...

  2. Training People for the Job.

    ERIC Educational Resources Information Center

    Costa, Walter Pinto

    1983-01-01

    A large training program for the drinking water supply and sanitation sector has been running continuously for over 10 years in Brazil, helping to ensure a supply of manpower for the country's National Water Supply and Sanitation Plan. Highlights of this program's activities/projects are discussed. (Author/JN)

  3. Burn Rate Modification with Carborane Polymers

    DTIC Science & Technology

    2017-11-01

    test, ARDEC electrostatic discharge test, and DSC analysis of the small-scale runs were performed to ensure the products were safe to handle. Once...

  4. Smooth Sailing.

    ERIC Educational Resources Information Center

    Price, Beverley; Pincott, Maxine; Rebman, Ashley; Northcutt, Jen; Barsanti, Amy; Silkunas, Betty; Brighton, Susan K.; Reitz, David; Winkler, Maureen

    1999-01-01

    Presents discipline tips from several teachers to keep classrooms running smoothly all year. Some of the suggestions include the following: a bear-cave warning system, peer mediation, a motivational mystery, problem students acting as the teacher's assistant, a positive-behavior-reward chain, a hallway scavenger hunt (to ensure quiet passage…

  5. 49 CFR 383.113 - Required skills.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... inspected to ensure a safe operating condition of each part, including: (i) Engine compartment; (ii) Cab/engine start; (iii) Steering; (iv) Suspension; (v) Brakes; (vi) Wheels; (vii) Side of vehicle; (viii... they will activate in emergency situations; (iv) With the engine running, make sure that the system...

  6. Lightweight fuzzy processes in clinical computing.

    PubMed

    Hurdle, J F

    1997-09-01

    In spite of advances in computing hardware, many hospitals still have a hard time finding extra capacity in their production clinical information system to run artificial intelligence (AI) modules, for example: to support real-time drug-drug or drug-lab interactions; to track infection trends; to monitor compliance with case-specific clinical guidelines; or to monitor/control biomedical devices like an intelligent ventilator. Historically, adding AI functionality was not a major design concern when a typical clinical system was originally specified. AI technology is usually retrofitted 'on top of the old system' or 'run off line' in tandem with the old system to ensure that the routine workload still gets done (with as little impact from the AI side as possible). To compound the burden on system performance, most institutions have witnessed a long and increasing trend for intramural and extramural reporting (e.g. the collection of data for a quality-control report in microbiology, or a meta-analysis of a suite of coronary artery bypass graft techniques), and these place an ever-growing burden on the typical computer system's performance. We discuss a promising approach to adding extra AI processing power to a heavily-used system, based on the notion of 'lightweight fuzzy processing' (LFP), that is, fuzzy modules designed from the outset to impose a small computational load. A formal model for a useful subclass of fuzzy systems is defined below and is used as a framework for the automated generation of LFPs. By seeking to reduce the arithmetic complexity of the model (a hand-crafted process) and the data complexity of the model (an automated process), we show how LFPs can be generated for three sample datasets of clinical relevance.
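
    As a rough illustration of what a lightweight fuzzy module can look like, the sketch below evaluates a single drug-lab rule with triangular membership functions and cheap arithmetic; it is not the paper's formal model, and the clinical thresholds are hypothetical.

        # Minimal "lightweight" fuzzy check: triangular memberships, one rule.
        def tri(x, a, b, c):
            """Triangular membership with support [a, c] and peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def drug_lab_alert(creatinine_mg_dl, drug_dose_mg):
            # Fuzzify the two inputs (hypothetical thresholds).
            renal_impaired = tri(creatinine_mg_dl, 1.2, 2.5, 6.0)
            dose_high = tri(drug_dose_mg, 200.0, 400.0, 800.0)
            # Rule: IF renal function impaired AND dose high THEN alert.
            return min(renal_impaired, dose_high)          # rule strength in [0, 1]

        if drug_lab_alert(2.1, 450.0) > 0.5:
            print('flag for pharmacist review')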

  7. Can an inadequate cervical cytology sample in ThinPrep be converted to a satisfactory sample by processing it with a SurePath preparation?

    PubMed

    Sørbye, Sveinung Wergeland; Pedersen, Mette Kristin; Ekeberg, Bente; Williams, Merete E Johansen; Sauer, Torill; Chen, Ying

    2017-01-01

    The Norwegian Cervical Cancer Screening Program recommends screening every 3 years for women between 25 and 69 years of age. There is a large difference in the percentage of unsatisfactory samples between laboratories that use different brands of liquid-based cytology. We wished to examine whether inadequate ThinPrep samples could be made satisfactory by processing them with the SurePath protocol. A total of 187 inadequate ThinPrep specimens from the Department of Clinical Pathology at University Hospital of North Norway were sent to Akershus University Hospital for conversion to SurePath medium. Ninety-one (48.7%) were processed through the automated "gynecologic" application for cervix cytology samples, and 96 (51.3%) were processed with the "nongynecological" automatic program. Out of 187 samples that had been unsatisfactory by ThinPrep, 93 (49.7%) were satisfactory after being converted to SurePath. The rate of satisfactory cytology was 36.6% and 62.5% for samples run through the "gynecology" program and "nongynecology" program, respectively. Of the 93 samples that became satisfactory after conversion from ThinPrep to SurePath, 80 (86.0%) were screened as normal while 13 samples (14.0%) were given an abnormal diagnosis, which included 5 atypical squamous cells of undetermined significance, 5 low-grade squamous intraepithelial lesions, 2 atypical glandular cells not otherwise specified, and 1 atypical squamous cells cannot exclude high-grade squamous intraepithelial lesion. A total of 2.1% (4/187) of the women received a diagnosis of cervical intraepithelial neoplasia 2 or higher at later follow-up. Converting cytology samples from ThinPrep to SurePath processing can reduce the number of unsatisfactory samples. The samples should be run through the "nongynecology" program to ensure an adequate number of cells.

  8. BIOASPEN: System for technology development

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The public version of ASPEN was installed on the VAX 11/750 computer. To examine the idea of BIOASPEN, a test example (the manufacture of acetone, butanol, and ethanol through a biological route) was chosen for simulation. Previous reports on the BIOASPEN project revealed the limitations of ASPEN in modeling this process. To overcome some of the difficulties, modules were written for the acid and enzyme hydrolyzers, the fermentor, and a sterilizer. Information required for these modules was obtained from the literature whenever possible. Additional support modules necessary for interfacing with ASPEN were also written. Some of the ASPEN subroutines were themselves altered in order to ensure the correct running of the simulation program. After testing of these additions and changes was completed, the Acetone-Butanol-Ethanol (ABE) process was simulated. A release of ASPEN (which contained the Economic Subsystem) was obtained and installed. This subsystem was tested and numerous changes were made in the FORTRAN code. Capital investment and operating cost studies were performed on the ABE process. Some alternatives in certain steps of the ABE simulation were investigated in order to elucidate their effects on the overall economics of the process.

  9. Effects of body-mapping-designed clothing on heat stress and running performance in a hot environment.

    PubMed

    Jiao, Jiao; Li, Yi; Yao, Lei; Chen, Yajun; Guo, Yueping; Wong, Stephen H S; Ng, Frency S F; Hu, Junyan

    2017-10-01

    To investigate clothing-induced differences in human thermal response and running performance, eight male athletes participated in a repeated-measures study by wearing three sets of clothing (CloA, CloB, and CloC). CloA and CloB were body-mapping-designed with 11% and 7% greater heat dissipation capacity, respectively, than CloC, the commonly used running clothing. The experiments were conducted using steady-state running followed by an all-out performance run in a controlled hot environment. Participants' thermal responses such as core temperature (Tc), mean skin temperature (Tsk), heat storage (S), and the performance running time were measured. CloA resulted in a shorter performance time than CloC (323.1 ± 10.4 s vs. 353.6 ± 13.2 s, p = 0.01), and induced the lowest Tsk, smallest ΔTc, and smallest S in the resting and running phases. This study indicated that clothing made with different heat dissipation capacities affects athlete thermal responses and running performance in a hot environment. Practitioner Summary: A protocol that simulated the real situation in running competitions was used to investigate the effects of body-mapping-designed clothing on athletes' thermal responses and running performance. The findings confirmed the effects of optimised clothing with body-mapping design and advanced fabrics, and demonstrated the practical advantage of the developed clothing on exercise performance.

  10. Nurse-led clinics: 10 essential steps to setting up a service.

    PubMed

    Hatchett, Richard

    This article outlines 10 key steps for practitioners to consider when setting up and running a nurse-led clinic. It lays emphasis on careful planning, professional development and the need to audit and evaluate the service to ensure the clinic is measurably effective.

  11. Military Enlisted Aides: DOD’s Report Met Most Statutory Requirements, but Aide Allocation Could Be Improved

    DTIC Science & Technology

    2016-02-01

    purposes. Personal services performed solely for the benefit of family members or unofficial guests, including driving, shopping, running private...housing (Army). Maintaining accountability of, and ensuring care of, all government-owned furnishings, antiques, and memorabilia (Marine Corps

  12. Recovery of skeletal muscle mass after extensive injury: positive effects of increased contractile activity.

    PubMed

    Richard-Bulteau, Hélène; Serrurier, Bernard; Crassous, Brigitte; Banzet, Sébastien; Peinnequin, André; Bigard, Xavier; Koulmann, Nathalie

    2008-02-01

    The present study was designed to test the hypothesis that increasing physical activity by running exercise could favor the recovery of muscle mass after extensive injury and to determine the main molecular mechanisms involved. Left soleus muscles of female Wistar rats were degenerated by notexin injection before animals were assigned to either a sedentary group or an exercised group. Both regenerating and contralateral intact muscles from active and sedentary rats were removed 5, 7, 14, 21, 28 and 42 days after injury (n = 8 rats/group). Increasing contractile activity through running exercise during muscle regeneration ensured the full recovery of muscle mass and muscle cross-sectional area as soon as 21 days after injury, whereas muscle weight remained lower even 42 days postinjury in sedentary rats. Proliferating cell nuclear antigen and MyoD protein expression persisted longer in active rats than in sedentary rats. Myogenin protein expression was higher in active animals than in sedentary animals 21 days postinjury. The Akt-mammalian target of rapamycin (mTOR) pathway was activated early during the regeneration process, with further increases of mTOR phosphorylation and its downstream effectors, eukaryotic initiation factor-4E-binding protein-1 and p70(s6k), in active rats compared with sedentary rats (days 7-14). The exercise-induced increase in mTOR phosphorylation, independently of Akt, was associated with decreased levels of phosphorylated AMP-activated protein kinase. Taken together, these results provided evidence that increasing contractile activity during muscle regeneration ensured early and full recovery of muscle mass and suggested that these beneficial effects may be due to a longer proliferative step of myogenic cells and activation of mTOR signaling, independently of Akt, during the maturation step of muscle regeneration.

  13. Calibration process of highly parameterized semi-distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, calibration is a complex process that is not researched enough. Calibration is a procedure of determining the parameters of a model that are not known well enough. Input and output variables and mathematical model expressions are known, while only some parameters are unknown; these are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that leave the modeler little possibility to manage the process, and the results are not the best. We developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command line interface, and couple it with PEST, a parameter estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by an expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than if the procedure had been left to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena such as karstic, alluvial and forest areas; this step requires geological, meteorological, hydraulic and hydrological knowledge from the modeler. The second step is to set initial parameter values at their preferred values based on expert knowledge; in this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events, and each sub-catchment in the model has its own observation group. The third step is to set appropriate bounds on the parameters within their range of realistic values. The fourth step is to use singular value decomposition (SVD), which ensures that PEST maintains numerical stability regardless of how ill-posed the inverse problem is. The fifth step is to run PWTADJ1, which creates a new PEST control file in which weights are adjusted such that the contribution made to the total objective function by each observation group is the same; this prevents the information content of any group from being invisible to the inversion process. The sixth step is to add Tikhonov regularization to the PEST control file by running the ADDREG1 utility (Doherty, 2013). In adding regularization to the PEST control file, ADDREG1 automatically provides a prior information equation for each parameter in which the preferred value of that parameter is equated to its initial value. The last step is to run PEST. We run BeoPEST, a parallel version of PEST that can be run on multiple computers at the same time over TCP communications, which speeds up the calibration process. The case study, with results of calibration and validation of the model, will be presented.
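
    Steps five to seven lend themselves to scripting; the sketch below wraps the PEST utilities named above (PWTADJ1, ADDREG1, BeoPEST) with Python's subprocess module. The case name is hypothetical, and the exact command-line arguments and executable names are assumptions to be checked against the PEST documentation.

        # Sketch of the weight-adjustment, regularisation and parallel-run steps.
        import subprocess

        def run(cmd):
            print('running:', ' '.join(cmd))
            subprocess.run(cmd, check=True)

        case = 'hbv_catchment'   # hypothetical PEST case (control file case.pst)

        # Step 5: equalise each observation group's contribution to the objective function.
        run(['pwtadj1', f'{case}.pst', f'{case}_adj.pst'])

        # Step 6: add Tikhonov regularisation preferring the initial parameter values.
        run(['addreg1', f'{case}_adj.pst', f'{case}_reg.pst'])

        # Step 7: run the calibration in parallel with BeoPEST (master shown here;
        # workers on other machines would connect to master_host:4004).
        run(['beopest64', f'{case}_reg.pst', '/H', ':4004'])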

  14. Lower extremity injuries in runners. Advances in prediction.

    PubMed

    Macera, C A

    1992-01-01

    Recreational and competitive running is practised by many individuals to improve cardiorespiratory function and general well-being. The major negative aspect of running is the high rate of injuries to the lower extremities. Several well-designed population-based studies have found no major differences in injury rates between men and women; no increasing effect of age on injuries; a declining injury rate with more years of running experience; no substantial effect of weight or height; an uncertain effect of psychological factors; and a strong effect of previous injury on future injuries. Among the modifiable risk factors studied, weekly distance is the strongest predictor of future injuries. Other training characteristics (speed, frequency, surface, timing) have little or no effect on future injuries after accounting for distance run. More studies are needed to address the effects of appropriate stretching practices and abrupt change in training patterns. For recreational runners who have sustained injuries, especially within the past year, a reduction in running to below 32 km per week is recommended. For those about to begin a running programme, moderation is the best advice. For competitive runners, great care should be taken to ensure that prior injuries are sufficiently healed before attempting any racing event, particularly a marathon.

  15. Research on memory management in embedded systems

    NASA Astrophysics Data System (ADS)

    Huang, Xian-ying; Yang, Wu

    2005-12-01

    Memory is a scarce resource in embedded systems due to cost and size constraints. Applications in embedded systems therefore cannot use memory as freely as desktop applications do, yet data and code must still be stored in memory in order to run. The purpose of this paper is to save memory when developing embedded applications and to guarantee operation under limited memory conditions. Embedded systems often have small memories and are required to run for a long time, so one goal of this study is to construct an allocator that can allocate memory effectively, withstand long-running operation, and reduce memory fragmentation and exhaustion. Fragmentation and exhaustion are related to the memory allocation algorithm: static memory allocation cannot produce fragmentation, and in this paper we attempt to find an effective dynamic allocation algorithm that reduces fragmentation. Data is the critical part that ensures an application runs correctly, and it takes up a large amount of memory; the amount of data that can be stored in a given amount of memory depends on the selected data structure. Techniques for designing application data in mobile phones are also explained and discussed.
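
    To illustrate the kind of allocation strategy that avoids fragmentation over long runtimes, the sketch below shows a fixed-size pool allocator with a free list; it is written in Python only for readability (a real embedded allocator would manage a static buffer in C), and the block sizes are hypothetical.

        # Fixed-size pool: blocks of one size are recycled via a free list, so
        # repeated allocate/free cycles cannot fragment the pool.
        class FixedPool:
            def __init__(self, block_size, block_count):
                self.block_size = block_size
                self.free_list = list(range(block_count))   # indices of free blocks
                self.storage = bytearray(block_size * block_count)

            def alloc(self):
                if not self.free_list:
                    raise MemoryError('pool exhausted')     # fail fast, no fragmentation
                return self.free_list.pop()

            def free(self, block_index):
                self.free_list.append(block_index)          # constant-time recycle

            def view(self, block_index):
                start = block_index * self.block_size
                return memoryview(self.storage)[start:start + self.block_size]

        pool = FixedPool(block_size=64, block_count=128)     # hypothetical sizes
        i = pool.alloc()
        pool.view(i)[:5] = b'hello'
        pool.free(i)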

  16. Determination of vertical pressures on running wheels of freight trolleys of bridge type cranes

    NASA Astrophysics Data System (ADS)

    Goncharov, K. A.; Denisov, I. A.

    2018-03-01

    The problematic issues of the design of the bridge-type trolley crane, connected with ensuring uniform load distribution between the running wheels, are considered. The shortcomings of the existing methods of calculation of reference pressures are described. The results of the analytical calculation of the pressure of the support wheels are compared with the results of the numerical solution of this problem for various schemes of trolley supporting frames. Conclusions are given on the applicability of various methods for calculating vertical pressures, depending on the type of metal structures used in the trolley.

  17. No Such Thing as "Good Vibrations" in Science

    ERIC Educational Resources Information Center

    Lancaster, Franklin D.

    2011-01-01

    A facilities manager must ensure that a building runs as smoothly and successfully as possible. For college, university, and school managers dealing with laboratories and other spaces for scientific study and research, this means making sure that nothing disrupts experiments and other scientific endeavors. Such disruptions can wreak havoc,…

  18. Forensic examination of ink by high-performance thin layer chromatography--the United States Secret Service Digital Ink Library.

    PubMed

    Neumann, Cedric; Ramotowski, Robert; Genessay, Thibault

    2011-05-13

    Forensic examinations of ink have been performed since the beginning of the 20th century. Since the 1960s, the International Ink Library, maintained by the United States Secret Service, has supported those analyses. Until 2009, the search and identification of inks were essentially performed manually. This paper describes the results of a project designed to improve ink samples' analytical and search processes. The project focused on the development of improved standardization procedures to ensure the best possible reproducibility between analyses run on different HPTLC plates. The successful implementation of this new calibration method enabled the development of mathematical algorithms and of a software package to complement the existing ink library. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. Observations and actions to ensure equal treatment of all candidates by the European Research Council

    NASA Astrophysics Data System (ADS)

    Rydin, Claudia Alves de Jesus; Farina Busto, Luis; El Mjiyad, Nadia; Kota, Jhansi; Thelen, Lionel

    2017-04-01

    The European Research Council (ERC), Europe's premier funding agency for frontier research, views equality of opportunities as an important challenge. The ERC closely monitors gender figures for every call and has taken actions to tackle imbalances and potential unconscious biases. This talk focuses on the efforts made to understand and ensure equal treatment of all candidates, with a particular focus on gender balance and specific attention to the geosciences. Data and statistics collected in running highly competitive and internationally recognised funding schemes are presented. Recent initiatives to tackle geographical imbalances will also be presented.

  20. Is your ED a medical department or a business? Survey says...both.

    PubMed

    2009-07-01

    Taking a solid business-like approach to the management of your ED involves--but is certainly not limited to--getting a handle on revenues and expenses. Here are a few strategies some ED managers say help them run a tighter, and better, ship: Have a clinical audit specialist review charts, and have a clerical person "check the checker." Use a "cultural fit" interview with prospective staff members to ensure you're on the same page when it comes to service. Develop a charge structure with numerical values for clinical activities and services, to help ensure optimal reimbursement.

  1. A new Scheme for ATLAS Trigger Simulation using Legacy Code

    NASA Astrophysics Data System (ADS)

    Galster, Gorm; Stelzer, Joerg; Wiedenmann, Werner

    2014-06-01

    Analyses at the LHC which search for rare physics processes or determine with high precision Standard Model parameters require accurate simulations of the detector response and the event selection processes. The accurate determination of the trigger response is crucial for the determination of overall selection efficiencies and signal sensitivities. For the generation and the reconstruction of simulated event data, the most recent software releases are usually used to ensure the best agreement between simulated data and real data. For the simulation of the trigger selection process, however, ideally the same software release that was deployed when the real data were taken should be used. This potentially requires running software dating many years back. Having a strategy for running old software in a modern environment thus becomes essential when data simulated for past years start to present a sizable fraction of the total. We examined the requirements and possibilities for such a simulation scheme within the ATLAS software framework and successfully implemented a proof-of-concept simulation chain. One of the greatest challenges was the choice of a data format which promises long term compatibility with old and new software releases. Over the time periods envisaged, data format incompatibilities are also likely to emerge in databases and other external support services. Software availability may become an issue, when e.g. the support for the underlying operating system might stop. In this paper we present the encountered problems and developed solutions, and discuss proposals for future development. Some ideas reach beyond the retrospective trigger simulation scheme in ATLAS as they also touch more generally aspects of data preservation.

  2. Genetically improved ponderosa pine seedlings outgrow nursery-run seedlings with and without competition -- Early findings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, P.M.; Fiddler, G.O.; Kitzmiller, J.H.

    1994-04-01

    Three classes of ponderosa pine (Pinus ponderosa) seedlings (nursery-run, wind-pollinated, control-pollinated) were evaluated for stem height and diameter at the USDA Forest Service's Placerville Nursery and the Georgetown Range District in northern California. Pines in all three classes were grown with competing vegetation or maintained in a free-to-grow condition. Control-pollinated seedlings were statistically taller than nursery-run counterparts when outplanted, and after 1 and 2 growing seasons in the field with and without competition. They also had significantly larger diameters when outplanted and after 2 growing seasons in the field when free to grow. Wind-pollinated seedlings grew taller than nursery-run seedlings when free to grow. A large amount of competing vegetation [bearclover (Chamaebatia foliolosa)--29,490 plants per acre; herbaceous vegetation--11,500; hardwood sprouts--233; and whiteleaf manzanita (Arctostaphylos viscida) seedlings--100] ensures that future pine development will be tested rigorously.

  3. [Situation analysis of physical fitness among Chinese Han students in 2014].

    PubMed

    Song, Y; Lei, Y T; Hu, P J; Zhang, B; Ma, J

    2018-06-18

    To analyze the situation of physical fitness among Chinese Han students in 2014, so as to develop physical activity guidelines for the targeted students and to provide a basis for improving students' physical fitness. Subjects were from the 2014 Chinese National Surveys on Students' Constitution and Health (CNSSCH). In this survey, 212 401 Han students aged 7-18 years participated and completed the measurement of physical fitness. The qualified rates of the physical fitness indicators were evaluated based on the "National Students Constitutional Health Standards" (2014 revised edition). Logistic regression was used to assess the association between the indicators of pull-ups (boys) and endurance run (boys and girls) and influencing factors. In 2014, among the boys, the qualified rates of pull-ups and endurance run were 18.7% and 76.6%, respectively, while the qualified rate of endurance run was 80.6% among the girls. These two indicators were the weak items of physical fitness among Chinese Han students. There was regional difference in the qualified rates of physical fitness, with students in Zhejiang and Jiangsu provinces having higher qualified rates. Logistic regression showed that urban students (OR=0.67) and students with malnutrition (OR=0.76), overweight (OR=0.32) or obesity (OR=0.12) were less likely to be qualified for pull-ups, while students with more than 1 h of physical activity per day (OR=1.31) were more likely to be qualified for pull-ups. The influencing factors of endurance run showed a similar pattern; in addition, students with enough physical education (PE) were more likely to be qualified for endurance run, while students with "squeezed" or no PE classes were less likely to be qualified for endurance run. Pull-ups and endurance run have become the weak items of physical fitness among primary and secondary school students at the national and provincial levels. Besides ensuring physical exercise time and PE curriculum and class hours, as well as improving students' nutrition, we should also strengthen the rational design of physical exercise and ensure the balanced development of the various items so as to improve the overall development of students' physical fitness.

  4. Mini-Ckpts: Surviving OS Failures in Persistent Memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiala, David; Mueller, Frank; Ferreira, Kurt Brian

    Concern is growing in the high-performance computing (HPC) community on the reliability of future extreme-scale systems. Current efforts have focused on application fault-tolerance rather than the operating system (OS), despite the fact that recent studies have suggested that failures in OS memory are more likely. The OS is critical to a system's correct and efficient operation of the node and processes it governs -- and in HPC also for any other nodes a parallelized application runs on and communicates with: Any single node failure generally forces all processes of this application to terminate due to tight communication in HPC. Therefore, the OS itself must be capable of tolerating failures. In this work, we introduce mini-ckpts, a framework which enables application survival despite the occurrence of a fatal OS failure or crash. Mini-ckpts achieves this tolerance by ensuring that the critical data describing a process is preserved in persistent memory prior to the failure. Following the failure, the OS is rejuvenated via a warm reboot and the application continues execution effectively making the failure and restart transparent. The mini-ckpts rejuvenation and recovery process is measured to take between three to six seconds and has a failure-free overhead of between 3-5% for a number of key HPC workloads. In contrast to current fault-tolerance methods, this work ensures that the operating and runtime system can continue in the presence of faults. This is a much finer-grained and dynamic method of fault-tolerance than the current, coarse-grained, application-centric methods. Handling faults at this level has the potential to greatly reduce overheads and enables mitigation of additional fault scenarios.

  5. Automatised data quality monitoring of the LHCb Vertex Locator

    NASA Astrophysics Data System (ADS)

    Bel, L.; Crocombe, A. Ch.; Gersabeck, M.; Pearce, A.; Majewski, M.; Szumlak, T.

    2017-10-01

    The LHCb Vertex Locator (VELO) is a silicon strip semiconductor detector operating at just 8mm distance to the LHC beams. Its 172,000 strips are read at a frequency of 1.1 MHz and processed by off-detector FPGAs followed by a PC cluster that reduces the event rate to about 10 kHz. During the second run of the LHC, which lasts from 2015 until 2018, the detector performance will undergo continued change due to radiation damage effects. This necessitates a detailed monitoring of the data quality to avoid adverse effects on the physics analysis performance. The VELO monitoring infrastructure has been re-designed compared to the first run of the LHC when it was based on manual checks. The new system is based around an automatic analysis framework, which monitors the performance of new data as well as long-term trends and using dedicated algorithms flags issues whenever they arise. The new analysis framework then analyses the plots that are produced by these algorithms. One of its tasks is to perform custom comparisons between the newly processed data and that from reference runs. The most-likely scenario in which this analysis would identify an issue is the parameters of the readout electronics no longer being optimal and requiring retuning. The data of the monitoring plots can be reduced further, e.g. by evaluating averages, and these quantities are input to long-term trending. This is used to detect slow variation of quantities, which are not detectable by the comparison of two nearby runs. Such gradual change is what is expected due to radiation damage effects. It is essential to detect these changes early such that measures can be taken, e.g. adjustments of the operating voltage, to prevent any impact on the quality of high-level quantities and thus on physics analyses. The plots as well as the analysis results and trends are made available through graphical user interfaces (GUIs). These GUIs are dynamically configured by a single configuration that determines the choice and arrangement of plots and trends and ensures a common look and feel.
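
    The core monitoring pattern described above -- compare a new run's histogram to a reference run, and reduce it to a trended quantity -- can be illustrated with the short sketch below; it is not the actual VELO framework, and the comparison statistic and threshold are hypothetical choices.

        # Illustrative reference comparison and long-term trending of a histogram.
        import numpy as np

        def flag_against_reference(hist, ref, threshold=3.0):
            """Normalised chi-square per bin between a new run and a reference run."""
            hist, ref = np.asarray(hist, float), np.asarray(ref, float)
            scale = hist.sum() / ref.sum()                  # account for different statistics
            err2 = hist + (scale ** 2) * ref
            chi2 = np.sum((hist - scale * ref) ** 2 / np.where(err2 > 0, err2, 1.0))
            ndf = int(np.count_nonzero(err2)) or 1
            return chi2 / ndf > threshold                   # True -> raise a data-quality flag

        def update_trend(trend, run_number, hist):
            trend[run_number] = float(np.mean(hist))        # reduced quantity for slow-drift plots
            return trend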

  6. Data preservation at the Fermilab Tevatron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amerio, S.; Behari, S.; Boyd, J.

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. Lastly, these efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.

  7. The Experiences of State-Run Insurance Marketplaces That Use HealthCare.gov.

    PubMed

    Giovannelli, Justin; Lucia, Kevin

    2015-09-01

    States have flexibility in implementing the Affordable Care Act's health insurance marketplaces and may choose to become more (or less) involved in marketplace operations over time. Interest in new implementation approaches has increased as states seek to ensure the long-term financial stability of their exchanges and exercise local control over marketplace oversight. This brief explores the experiences of four states--Idaho, Nevada, New Mexico, and Oregon--that established their own exchanges but have operated them with support from the federal HealthCare.gov eligibility and enrollment platform. Drawing on discussions with policymakers, insurers, and brokers, we examine how these supported state-run marketplaces perform their key functions. We find that this model may offer states the ability to maximize their influence over their insurance markets, while limiting the financial risk of running an exchange.

  8. Data preservation at the Fermilab Tevatron

    NASA Astrophysics Data System (ADS)

    Amerio, S.; Behari, S.; Boyd, J.; Brochmann, M.; Culbertson, R.; Diesburg, M.; Freeman, J.; Garren, L.; Greenlee, H.; Herner, K.; Illingworth, R.; Jayatilaka, B.; Jonckheere, A.; Li, Q.; Naymola, S.; Oleynik, G.; Sakumoto, W.; Varnes, E.; Vellidis, C.; Watts, G.; White, S.

    2017-04-01

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. These efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.

  9. eBiometrics: an enhanced multi-biometrics authentication technique for real-time remote applications on mobile devices

    NASA Astrophysics Data System (ADS)

    Kuseler, Torben; Lami, Ihsan; Jassim, Sabah; Sellahewa, Harin

    2010-04-01

    The use of mobile communication devices with advanced sensors is growing rapidly. These sensors are enabling functions such as image capture, location applications, and biometric authentication such as fingerprint verification and face and handwritten signature recognition. Such ubiquitous devices are essential tools in today's global economic activities, enabling anywhere-anytime financial and business transactions. Cryptographic functions and biometric-based authentication can enhance the security and confidentiality of mobile transactions. Biometric template security techniques are key factors for successful real-time identity verification solutions, but are vulnerable to determined attacks by both fraudulent software and hardware. The EU-funded SecurePhone project has designed and implemented a multimodal biometric user authentication system on a prototype mobile communication device. However, various implementations of this project have resulted in long verification times or reduced accuracy and/or security. This paper proposes to use built-in self-test techniques to ensure no tampering has taken place in the verification process prior to performing the actual biometric authentication. These techniques utilise the user's personal identification number as a seed to generate a unique signature. This signature is then used to test the integrity of the verification process. This study also proposes the use of a combination of biometric modalities to provide application-specific authentication in a secure environment, thus achieving an optimum security level with effective processing time. That is, they ensure that the necessary authentication steps and algorithms running on the mobile device application processor cannot be undermined or modified by an impostor to get unauthorized access to the secure system.
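
    The built-in self-test idea described above -- a PIN-seeded signature used to check the verification code path before biometric matching runs -- can be sketched as below; the key-derivation parameters, file names, and function names are illustrative assumptions, not the SecurePhone implementation.

        # Illustrative self-test: derive a key from the PIN and a stored salt,
        # then verify an HMAC over the verification module before using it.
        import hashlib
        import hmac

        def derive_key(pin: str, salt: bytes) -> bytes:
            return hashlib.pbkdf2_hmac('sha256', pin.encode(), salt, 100_000)

        def module_untampered(pin: str, salt: bytes, module_path: str, expected_tag: bytes) -> bool:
            key = derive_key(pin, salt)
            with open(module_path, 'rb') as f:
                tag = hmac.new(key, f.read(), hashlib.sha256).digest()
            return hmac.compare_digest(tag, expected_tag)

        # Only if the self-test passes is the (separate) biometric verification invoked:
        # if module_untampered(user_pin, stored_salt, 'verify_face.bin', stored_tag):
        #     run_biometric_verification()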

  10. Algebra for Everyone.

    ERIC Educational Resources Information Center

    Edwards, Edgar L., Jr., Ed.

    The fundamentals of algebra and algebraic thinking should be a part of the background of all citizens in society. The vast increase in the use of technology requires that school mathematics ensure the teaching of algebraic thinking as well as its use at both the elementary and secondary school levels. Algebra is a universal theme that runs through…

  11. REFERENCE MANUAL FOR RASSMIT VERSION 2.1: SUB-SLAB DEPRESSURIZATION SYSTEM DESIGN PERFORMANCE SIMULATION PROGRAM

    EPA Science Inventory

    The report is a reference manual for RASSMIT Version 2.1, a computer program that was developed to simulate and aid in the design of sub-slab depressurization systems used for indoor radon mitigation. The program was designed to run on DOS-compatible personal computers to ensure ...

  12. How to Make Our Schools Healthy: Healthy Schools Program. Program Results Progress Report

    ERIC Educational Resources Information Center

    Brown, Michael H.

    2012-01-01

    The Healthy Schools Program provides technical assistance to help schools engage administrators, teachers, parents and vendors in increasing access to physical activity and healthier foods for students and staff. Current grants run to September 2013. The program addresses two policy priorities of the Childhood Obesity team: (1) Ensure that all…

  13. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele

    2001-01-01

    This viewgraph presentation provides information on support sources available for the automatic parallelization of computer program. CAPTools, a support tool developed at the University of Greenwich, transforms, with user guidance, existing sequential Fortran code into parallel message passing code. Comparison routines are then run for debugging purposes, in essence, ensuring that the code transformation was accurate.

  14. 40 CFR Table 4 to Subpart Kkkkk of... - Requirements for Performance Tests

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... block average pressure drop values for the three test runs, and determine and record the 3-hour block... limit for the limestone feeder setting Data from the limestone feeder during the performance test You must ensure that you maintain an adequate amount of limestone in the limestone hopper, storage bin...

  15. Ethical Parenting of Sexually Active Youth: Ensuring Safety While Enabling Development

    ERIC Educational Resources Information Center

    Bay-Cheng, Laina Y.

    2013-01-01

    The protection of children from harm is commonly accepted as the cardinal duty of parents. In the USA, where young people's sexuality is often regarded with anxiety, attempts to restrict adolescent sexual behaviour are seen as ethically justified and even required of "good" parents. Running counter to popular anxiety surrounding young…

  16. The Castle Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom Anderson; David Culler; James Demmel

    2000-02-16

    The goal of the Castle project was to provide a parallel programming environment that enables the construction of high performance applications that run portably across many platforms. The authors' approach was to design and implement a multilayered architecture, with higher levels building on lower ones to ensure portability, but with care taken not to introduce abstractions that sacrifice performance.

  17. 78 FR 78396 - Manlifts; Extension of the Office of Management and Budget's (OMB) Approval of Information...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-26

    ... for developing information regarding the causes and prevention of occupational injuries, illnesses...' risk of death or serious injury by ensuring that manlifts are in safe operating condition. Periodic...; and any ``skip'' on the up or down run when mounting a step (indicating worn gears). A certification...

  18. Installation and management of the SPS and LEP control system computers

    NASA Astrophysics Data System (ADS)

    Bland, Alastair

    1994-12-01

    Control of the CERN SPS and LEP accelerators and service equipment on the two CERN main sites is performed via workstations, file servers, Process Control Assemblies (PCAs) and Device Stub Controllers (DSCs). This paper describes the methods and tools that have been developed to manage the file servers, PCAs and DSCs since the LEP startup in 1989. There are five operational DECstation 5000s used as file servers and boot servers for the PCAs and DSCs. The PCAs consist of 90 SCO Xenix 386 PCs, 40 LynxOS 486 PCs and more than 40 older NORD 100s. The DSCs consist of 90 OS-968030 VME crates and 10 LynxOS 68030 VME crates. In addition there are over 100 development systems. The controls group is responsible for installing the computers, starting all the user processes and ensuring that the computers and the processes run correctly. The operators in the SPS/LEP control room and the Services control room have a Motif-based X window program which gives them, in real time, the state of all the computers and allows them to solve problems or reboot them.

  19. Auto-Generated Semantic Processing Services

    NASA Technical Reports Server (NTRS)

    Davis, Rodney; Hupf, Greg

    2009-01-01

    Auto-Generated Semantic Processing (AGSP) Services is a suite of software tools for automated generation of other computer programs, denoted cross-platform semantic adapters, that support interoperability of computer-based communication systems that utilize a variety of both new and legacy communication software running in a variety of operating-system/computer-hardware combinations. AGSP has numerous potential uses in military, space-exploration, and other government applications as well as in commercial telecommunications. The cross-platform semantic adapters take advantage of common features of computer-based communication systems to enforce semantics, messaging protocols, and standards of processing of streams of binary data to ensure integrity of data and consistency of meaning among interoperating systems. The auto-generation aspect of AGSP Services reduces development time and effort by emphasizing specification and minimizing implementation: In effect, the design, building, and debugging of software for effecting conversions among complex communication protocols, custom device mappings, and unique data-manipulation algorithms is replaced with metadata specifications that map to an abstract platform-independent communications model. AGSP Services is modular and has been shown to be easily integrable into new and legacy NASA flight and ground communication systems.

  20. The AIST Managed Cloud Environment

    NASA Astrophysics Data System (ADS)

    Cook, S.

    2016-12-01

    ESTO is currently in the process of developing and implementing the AIST Managed Cloud Environment (AMCE) to offer cloud computing services to ESTO-funded PIs to conduct their project research. AIST will provide projects access to a cloud computing framework that incorporates NASA security, technical, and financial standards, on which projects can freely store, run, and process data. Currently, many projects led by research groups outside of NASA lack awareness of the requirements or the resources to implement NASA standards into their research, which limits the likelihood of infusing the work into NASA applications. Offering this environment to PIs will allow them to conduct their project research using the many benefits of cloud computing. In addition to the well-known cost and time savings, cloud computing also provides scalability and flexibility. The AMCE will facilitate infusion and end user access by ensuring standardization and security. This approach will ultimately benefit ESTO, the science community, and the research, allowing the technology developments to have quicker and broader applications.

  1. Conceptual Design Optimization of an Augmented Stability Aircraft Incorporating Dynamic Response Performance Constraints

    NASA Technical Reports Server (NTRS)

    Welstead, Jason

    2014-01-01

    This research focused on incorporating stability and control into a multidisciplinary design optimization of a Boeing 737-class advanced concept called the D8.2b. A new method of evaluating aircraft handling performance using quantitative evaluation of the system response to disturbances, including perturbations, continuous turbulence, and discrete gusts, is presented. A multidisciplinary design optimization was performed using the D8.2b transport aircraft concept. The configuration was optimized for minimum fuel burn using a design range of 3,000 nautical miles. Optimization cases were run using fixed tail volume coefficients, static trim constraints, and static trim and dynamic response constraints. A Cessna 182T model was used to test the various dynamic analysis components, ensuring the analysis was behaving as expected. Results of the optimizations show that including stability and control in the design process drastically alters the optimal design, indicating that stability and control should be included in conceptual design to avoid system-level penalties later in the design process.

  2. Peer review of human studies run amok: a break in the fiduciary relation between scientists and the public.

    PubMed

    Feldstein Ewing, Sarah W; Saitz, Richard

    2015-02-01

    Peer review aims to ensure the quality and credibility of research reporting. Conducted by volunteer scientists who receive no guidance or direction, peer review varies widely from fast and facilitative to unclear and obstructive. Poor quality is an issue because most science research is publicly funded, and scientists must therefore make an effort to quickly disseminate their findings back to the public. An unfortunately not uncommon barrier in this process is ineffective peer review. Most scientists agree that, when done well, editors and reviewers drive and maintain the high standards of science. At the same time, ineffective peer review can cause great delay with no improvement in the final product. These delays and requests interfere with the path of communication between scientist and public, at great cost to editors, reviewers, authors and those who stand to benefit from application of the results of the studies. We offer a series of concrete recommendations to improve this process.

  3. Weldability of AA 5052 H32 aluminium alloy by TIG welding and FSW process - A comparative study

    NASA Astrophysics Data System (ADS)

    Shanavas, S.; Raja Dhas, J. Edwin

    2017-10-01

    Aluminium 5xxx series alloys are the strongest non-heat-treatable aluminium alloys. They find application in automotive components and body structures due to their good formability, good strength, high corrosion resistance, and weight savings. In the present work, the influence of Tungsten Inert Gas (TIG) welding parameters on the quality of welds on AA 5052 H32 aluminium alloy plates was analyzed, and the mechanical characterization of the joint so produced was compared with that of a Friction Stir (FS) welded joint. The selected input variable parameters are welding current and inert gas flow rate. Other parameters, such as welding speed and arc voltage, were kept constant throughout the study, based on the response from several trial runs. The quality of the weld is measured in terms of ultimate tensile strength. Double-side V-butt joints were fabricated with a double pass on one side to ensure maximum strength of the TIG welded joints. Macro- and microstructural examinations were conducted for both welding processes.

  4. Warehouses information system design and development

    NASA Astrophysics Data System (ADS)

    Darajatun, R. A.; Sukanta

    2017-12-01

    The materials and goods handling function is fundamental for companies to ensure the smooth running of their warehouses. Efficiency and organization within every aspect of the business are essential in order to gain a competitive advantage. The purpose of this research is the design and development of a Kanban-based inventory storage and delivery system. The application aims to make inventory stock checks more efficient and effective. Users can easily enter finished goods data from the production department, the warehouse, customers, and suppliers. The master data is designed to be as complete as possible so that the application can be used across a variety of warehouse logistics processes. The author uses the Java programming language to develop the application, which is used for building Java web applications, while the database used is MySQL. The system development methodology used is the Waterfall methodology, which has several stages: Analysis, System Design, Implementation, Integration, and Operation and Maintenance. Data were collected through observation, interviews, and a literature review.

  5. A new approach to process control using Instability Index

    NASA Astrophysics Data System (ADS)

    Weintraub, Jeffrey; Warrick, Scott

    2016-03-01

    The merits of a robust Statistical Process Control (SPC) methodology have long been established. In response to the numerous SPC rule combinations, processes, and the high cost of containment, the Instability Index (ISTAB) is presented as a tool for managing these complexities. ISTAB focuses limited resources on key issues and provides a window into the stability of manufacturing operations. ISTAB takes advantage of the statistical nature of processes by comparing the observed average run length (OARL) to the expected run length (ARL), resulting in a gap value called the ISTAB index. The ISTAB index has three characteristic behaviors that are indicative of defects in an SPC instance. Case 1: The observed average run length is excessively long relative to expectation. ISTAB > 0 indicates the possibility that the limits are too wide. Case 2: The observed average run length is consistent with expectation. ISTAB near zero indicates that the process is stable. Case 3: The observed average run length is inordinately short relative to expectation. ISTAB < 0 indicates that the limits are too tight, the process is unstable, or both. The probability distribution of run length is the basis for establishing an ARL. We demonstrate that the geometric distribution is a good approximation to run length across a wide variety of rule sets. Excessively long run lengths are associated with one kind of defect in an SPC instance; inordinately short run lengths are associated with another. A sampling distribution is introduced as a way to quantify excessively long and inordinately short observed run lengths. This paper provides detailed guidance for action limits on these run lengths. ISTAB as a statistical method of review facilitates automated instability detection. This paper proposes a management system based on ISTAB as an enhancement to more traditional SPC approaches.
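
    The abstract does not give a closed-form definition of the ISTAB index, so the sketch below is only an illustration of the idea it describes: compare the observed average run length (OARL) with the expected run length (ARL) implied by a geometric run-length distribution, and treat the normalized gap as the index. The function names and the normalization are assumptions, not the authors' formula.

        import numpy as np

        def expected_arl(p_signal=0.05):
            # Expected average run length when run lengths are geometric with
            # per-point signal probability p_signal (roughly a 5% false-signal rate).
            return 1.0 / p_signal

        def istab_index(observed_run_lengths, p_signal=0.05):
            # Normalized gap between observed and expected ARL; 0 means
            # "consistent with expectation" (hypothetical normalization).
            oarl = np.mean(observed_run_lengths)
            arl = expected_arl(p_signal)
            return (oarl - arl) / arl

        print(istab_index([45, 60, 52]))  # > 0: limits possibly too wide (Case 1)
        print(istab_index([19, 22, 20]))  # near 0: stable process (Case 2)
        print(istab_index([4, 6, 5]))     # < 0: limits too tight or unstable (Case 3)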

  6. The ALICE DAQ infoLogger

    NASA Astrophysics Data System (ADS)

    Chapeland, S.; Carena, F.; Carena, W.; Chibante Barroso, V.; Costa, F.; Dénes, E.; Divià, R.; Fuchs, U.; Grigore, A.; Ionita, C.; Delort, C.; Simonetti, G.; Soós, C.; Telesca, A.; Vande Vyvre, P.; Von Haller, B.; Alice Collaboration

    2014-04-01

    ALICE (A Large Ion Collider Experiment) is a heavy-ion experiment studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The ALICE DAQ (Data Acquisition System) is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches). The DAQ reads the data transferred from the detectors through 500 dedicated optical links at an aggregated and sustained rate of up to 10 Gigabytes per second and stores it at up to 2.5 Gigabytes per second. The infoLogger is the log system which centrally collects the messages issued by the thousands of processes running on the DAQ machines. It allows errors to be reported on the fly and keeps a trace of runtime execution for later investigation. More than 500,000 messages are stored every day in a MySQL database, in a structured table that keeps track of 16 indexing fields for each message (e.g. time, host, user, ...). The total amount of logs for 2012 exceeds 75 GB of data and 150 million rows. We present in this paper the architecture and implementation of this distributed logging system, consisting of a client programming API, local data collector processes, a central server, and interactive human interfaces. We review the operational experience during the 2012 run, in particular the actions taken to ensure shifters receive manageable and relevant content from the main log stream. Finally, we present the performance of this log system and future evolutions.

  7. EUV mask pilot line at Intel Corporation

    NASA Astrophysics Data System (ADS)

    Stivers, Alan R.; Yan, Pei-Yang; Zhang, Guojing; Liang, Ted; Shu, Emily Y.; Tejnil, Edita; Lieberman, Barry; Nagpal, Rajesh; Hsia, Kangmin; Penn, Michael; Lo, Fu-Chang

    2004-12-01

    The introduction of extreme ultraviolet (EUV) lithography into high volume manufacturing requires the development of a new mask technology. In support of this, Intel Corporation has established a pilot line devoted to encountering and eliminating barriers to manufacturability of EUV masks. It concentrates on EUV-specific process modules and makes use of the captive standard photomask fabrication capability of Intel Corporation. The goal of the pilot line is to accelerate EUV mask development to intersect the 32nm technology node. This requires EUV mask technology to be comparable to standard photomask technology by the beginning of the silicon wafer process development phase for that technology node. The pilot line embodies Intel's strategy to lead EUV mask development in the areas of the mask patterning process, mask fabrication tools, the starting material (blanks) and the understanding of process interdependencies. The patterning process includes all steps from blank defect inspection through final pattern inspection and repair. We have specified and ordered the EUV-specific tools and most will be installed in 2004. We have worked with International Sematech and others to provide for the next generation of EUV-specific mask tools. Our process of record is run repeatedly to ensure its robustness. This primes the supply chain and collects information needed for blank improvement.

  8. Take Russia to 'task' on bioweapons transparency.

    PubMed

    Zilinskas, Raymond A

    2012-06-06

    In the run-up to his reelection, Russian president Vladimir Putin outlined 28 tasks to be undertaken by his administration, including one that commanded the development of weapons based on “genetic principles.” Political pressure must be applied by governments and professional societies to ensure that there is not a modern reincarnation of the Soviet biological warfare program.

  9. Getting Early Childhood Educators Up and Running: Creating Strong Technology Curators, Facilitators, Guides, and Users. Policy Brief

    ERIC Educational Resources Information Center

    Daugherty, Lindsay; Dossani, Rafiq; Johnson, Erin-Elizabeth; Wright, Cameron

    2014-01-01

    Providers of early childhood education (ECE) are well positioned to help ensure that technology is used effectively in ECE settings. Indeed, the successful integration of technology into ECE depends on providers who have the ability to curate the most appropriate devices and content, "facilitate" effective patterns of use, guide families…

  10. Shifting the Emphasis from Prison to Education: How Indiana Saved over $40 Million

    ERIC Educational Resources Information Center

    Modisett, Jeff

    2004-01-01

    States are facing the worst financial crisis since the Great Depression. Yet, despite these deficiencies in funds, state governors must assure their constituents that dangerous criminals will still be arrested, adjudicated and imprisoned. Experience shows that the best way to ensure public safety efficiently over the long run is to spend less on…

  11. 76 FR 30180 - Notice of Issuance of Final Determination Concerning Pocket Projectors

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-24

    ..., and adhering by electrostatic means. The finished projector will undergo a series of tests in Taiwan: A pre-test, a run-in test, and a function test. The pre-test consists of: ensuring that the... the projector is turned on (developed in Taiwan), (2) test patterns that are projected on the screen...

  12. 78 FR 40391 - Special Local Regulations; Dinghy Poker Run, Middle River; Baltimore County, Essex, MD

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-05

    ... the rule is to ensure safety of life on navigable waters of the United States during the Dinghy Poker...: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard proposes to establish special... River. These special local regulations are necessary to provide for the safety of life on navigable...

  13. Data preservation at the Fermilab Tevatron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyd, J.; Herner, K.; Jayatilaka, B.

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.

  14. Benchmarks for target tracking

    NASA Astrophysics Data System (ADS)

    Dunham, Darin T.; West, Philip D.

    2011-09-01

    The term benchmark originates from the chiseled horizontal marks that surveyors made, into which an angle-iron could be placed to bracket ("bench") a leveling rod, thus ensuring that the leveling rod can be repositioned in exactly the same place in the future. A benchmark in computer terms is the result of running a computer program, or a set of programs, in order to assess the relative performance of an object by running a number of standard tests and trials against it. This paper will discuss the history of simulation benchmarks that are being used by multiple branches of the military and agencies of the US government. These benchmarks range from missile defense applications to chemical biological situations. Typically, a benchmark is used with Monte Carlo runs in order to tease out how algorithms deal with variability and the range of possible inputs. We will also describe problems that can be solved by a benchmark.

  15. Data preservation at the Fermilab Tevatron

    DOE PAGES

    Amerio, S.; Behari, S.; Boyd, J.; ...

    2017-01-22

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. Lastly, these efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.

  16. Data preservation at the Fermilab Tevatron

    DOE PAGES

    Boyd, J.; Herner, K.; Jayatilaka, B.; ...

    2015-12-23

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.

  17. Data preservation at the Fermilab Tevatron

    NASA Astrophysics Data System (ADS)

    Boyd, J.; Herner, K.; Jayatilaka, B.; Roser, R.; Sakumoto, W.

    2015-12-01

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. These efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.

  18. TRIC: an automated alignment strategy for reproducible protein quantification in targeted proteomics

    PubMed Central

    Röst, Hannes L.; Liu, Yansheng; D’Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C.; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi

    2016-01-01

    Large scale, quantitative proteomic studies have become essential for the analysis of clinical cohorts, large perturbation experiments and systems biology studies. While next-generation mass spectrometric techniques such as SWATH-MS have substantially increased throughput and reproducibility, ensuring consistent quantification of thousands of peptide analytes across multiple LC-MS/MS runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we have developed the TRIC software which utilizes fragment ion data to perform cross-run alignment, consistent peak-picking and quantification for high throughput targeted proteomics. TRIC uses a graph-based alignment strategy based on non-linear retention time correction to integrate peak elution information from all LC-MS/MS runs acquired in a study. When compared to state-of-the-art SWATH-MS data analysis, the algorithm was able to reduce the identification error by more than 3-fold at constant recall, while correcting for highly non-linear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem (iPS) cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups and substantially increased the quantitative completeness and biological information in the data, providing insights into protein dynamics of iPS cells. Overall, this study demonstrates the importance of consistent quantification in highly challenging experimental setups, and proposes an algorithm to automate this task, constituting the last missing piece in a pipeline for automated analysis of massively parallel targeted proteomics datasets. PMID:27479329
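
    TRIC's graph-based alignment is far more elaborate than anything shown here; the sketch below only illustrates the underlying idea of a non-linear (monotone, piecewise-linear) retention-time correction between two runs using shared anchor peptides. All variable names and numbers are invented for illustration and do not reflect the TRIC API.

        import numpy as np

        # Retention times (s) of shared anchor peptides in a reference run
        # and in a second run (invented values).
        rt_ref  = np.array([310.0, 620.0, 1150.0, 1800.0, 2400.0])
        rt_run2 = np.array([298.0, 655.0, 1210.0, 1790.0, 2500.0])

        def align_rt(rt_query, anchors_query, anchors_ref):
            # Map retention times from the query run onto the reference time
            # scale with a monotone piecewise-linear warp.
            order = np.argsort(anchors_query)
            return np.interp(rt_query, anchors_query[order], anchors_ref[order])

        # A peak group observed at 1500 s in run 2, expressed on the reference
        # scale; peak groups within a tolerance window can then be matched.
        print(align_rt(np.array([1500.0]), rt_run2, rt_ref))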

  19. Fundamental movement skills testing in children with cerebral palsy.

    PubMed

    Capio, Catherine M; Sit, Cindy H P; Abernethy, Bruce

    2011-01-01

    To examine the inter-rater reliability and comparative validity of product-oriented and process-oriented measures of fundamental movement skills among children with cerebral palsy (CP). In total, 30 children with CP aged 6 to 14 years (Mean = 9.83, SD = 2.5) and classified in Gross Motor Function Classification System (GMFCS) levels I-III performed tasks of catching, throwing, kicking, horizontal jumping and running. Process-oriented assessment was undertaken using a number of components of the Test of Gross Motor Development (TGMD-2), while product-oriented assessment included measures of time taken, distance covered and number of successful task completions. Cohen's kappa, Spearman's rank correlation coefficient and tests to compare correlated correlation coefficients were performed. Very good inter-rater reliability was found. Process-oriented measures for running and jumping had significant associations with GMFCS, as did seven product-oriented measures for catching, throwing, kicking, running and jumping. Product-oriented measures of catching, kicking and running had stronger associations with GMFCS than the corresponding process-oriented measures. Findings support the validity of process-oriented measures for running and jumping and of product-oriented measures of catching, throwing, kicking, running and jumping. However, product-oriented measures for catching, kicking and running appear to have stronger associations with functional abilities of children with CP, and are thus recommended for use in rehabilitation processes.

  20. Creating of Central Geospatial Database of the Slovak Republic and Procedures of its Revision

    NASA Astrophysics Data System (ADS)

    Miškolci, M.; Šafář, V.; Šrámková, R.

    2016-06-01

    The article describes the creation of an initial three-dimensional geodatabase, from planning and design through the determination of technological and manufacturing processes to the practical use of the Central Geospatial Database (CGD; the official name in Slovak is Centrálna Priestorová Databáza - CPD), and briefly describes the procedures for its revision. CGD ensures proper collection, processing, storing, transferring and displaying of digital geospatial information. CGD is used by the Ministry of Defense (MoD) for defense and crisis management tasks and by the Integrated Rescue System. For military personnel, CGD is run on the MoD intranet; for other users outside the MoD it is transformed into ZbGIS (Primary Geodatabase of the Slovak Republic) and run on a public web site. CGD is a global set of geospatial information: a vector computer model which completely covers the entire territory of Slovakia. The seamless CGD is created by digitizing the real world using photogrammetric stereoscopic methods and measurements of object properties. The basic vector model of CGD (from photogrammetric processing) is then taken into the field for inspection and additional collection of object properties across the whole mapping area. Finally, real-world objects are spatially modeled as entities of a three-dimensional database. CGD makes it possible to know the territory comprehensively in all three spatial dimensions. Every entity in CGD records its time of collection, which allows users to assess the timeliness of the information. CGD can be utilized for geographical analysis, geo-referencing, cartographic purposes and various special-purpose mapping, and has the ambition not only to cover the needs of the MoD but to become a reference model for the national geographic infrastructure.

  1. Can an inadequate cervical cytology sample in ThinPrep be converted to a satisfactory sample by processing it with a SurePath preparation?

    PubMed Central

    Sørbye, Sveinung Wergeland; Pedersen, Mette Kristin; Ekeberg, Bente; Williams, Merete E. Johansen; Sauer, Torill; Chen, Ying

    2017-01-01

    Background: The Norwegian Cervical Cancer Screening Program recommends screening every 3 years for women between 25 and 69 years of age. There is a large difference in the percentage of unsatisfactory samples between laboratories that use different brands of liquid-based cytology. We wished to examine whether inadequate ThinPrep samples could be made satisfactory by processing them with the SurePath protocol. Materials and Methods: A total of 187 inadequate ThinPrep specimens from the Department of Clinical Pathology at University Hospital of North Norway were sent to Akershus University Hospital for conversion to SurePath medium. Ninety-one (48.7%) were processed through the automated “gynecologic” application for cervix cytology samples, and 96 (51.3%) were processed with the “nongynecological” automatic program. Results: Out of 187 samples that had been unsatisfactory by ThinPrep, 93 (49.7%) were satisfactory after being converted to SurePath. The rate of satisfactory cytology was 36.6% and 62.5% for samples run through the “gynecology” program and “nongynecology” program, respectively. Of the 93 samples that became satisfactory after conversion from ThinPrep to SurePath, 80 (86.0%) were screened as normal while 13 samples (14.0%) were given an abnormal diagnosis, which included 5 atypical squamous cells of undetermined significance, 5 low-grade squamous intraepithelial lesions, 2 atypical glandular cells not otherwise specified, and 1 atypical squamous cells cannot exclude high-grade squamous intraepithelial lesion. A total of 2.1% (4/187) of the women received a diagnosis of cervical intraepithelial neoplasia 2 or higher at a later follow-up. Conclusions: Converting cytology samples from ThinPrep to SurePath processing can reduce the number of unsatisfactory samples. The samples should be run through the “nongynecology” program to ensure an adequate number of cells. PMID:28900466

  2. A time-domain digitally controlled oscillator composed of a free running ring oscillator and flying-adder

    NASA Astrophysics Data System (ADS)

    Wei, Liu; Wei, Li; Peng, Ren; Qinglong, Lin; Shengdong, Zhang; Yangyuan, Wang

    2009-09-01

    A time-domain digitally controlled oscillator (DCO) is proposed. The DCO is composed of a free-running ring oscillator (FRO) and a flying-adder (FA) with two integrated lap selectors. With a coiled cell array that allows uniform loading capacitances of the delay cells, the FRO produces 32 outputs with consistent tap spacing as reference clocks for the FA. The FA uses the outputs from the FRO to generate the output of the DCO according to the control number, resulting in a linear dependence of the output period, rather than the frequency, on the digital control word input. Thus the proposed DCO ensures good conversion linearity in the time domain and is suitable for time-domain all-digital phase-locked loop applications. The DCO was implemented in a standard 0.13 μm digital logic CMOS process. The measurement results show that the DCO has a linear and monotonic tuning curve with a gain variation of less than 10%, and a very low root-mean-square period jitter of 9.3 ps in the output clocks. The DCO works well at supply voltages ranging from 0.6 to 1.2 V, and consumes 4 mW of power with a 500 MHz frequency output at a 1.2 V supply voltage.

  3. Planning for distributed workflows: constraint-based coscheduling of computational jobs and data placement in distributed environments

    NASA Astrophysics Data System (ADS)

    Makatun, Dzmitry; Lauret, Jérôme; Rudová, Hana; Šumbera, Michal

    2015-05-01

    When running data-intensive applications on distributed computational resources, long I/O overheads may be observed as remotely stored data is accessed. Latency and bandwidth can become the major limiting factors for overall computation performance and can reduce the CPU/wall-time ratio through excessive I/O wait. Building on our previous research, we propose a constraint programming based planner that schedules computational jobs and data placements (transfers) in a distributed environment in order to optimize resource utilization and reduce the overall processing completion time. The optimization is achieved by ensuring that none of the resources (network links, data storages and CPUs) are oversaturated at any moment of time and either (a) that the data is pre-placed at the site where the job runs or (b) that the jobs are scheduled where the data is already present. Such an approach eliminates the idle CPU cycles occurring when the job is waiting for the I/O from a remote site and would have wide application in the community. Our planner was evaluated and simulated based on data extracted from log files of the batch and data management systems of the STAR experiment. The results of the evaluation and the estimated performance improvements are discussed in this paper.
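
    As a toy illustration of the constraint-based idea (not the authors' planner), the sketch below uses the OR-Tools CP-SAT solver to place jobs on sites so that CPU capacity is never oversaturated, while penalizing placements that require a data transfer. Job names, sites, capacities, and costs are all made up.

        from ortools.sat.python import cp_model

        jobs      = ["j1", "j2", "j3", "j4"]
        sites     = ["siteA", "siteB"]
        data_site = {"j1": "siteA", "j2": "siteA", "j3": "siteB", "j4": "siteB"}
        cpu_cap   = {"siteA": 2, "siteB": 2}   # CPU slots per site
        transfer_cost = 10                     # cost of staging a dataset remotely

        model = cp_model.CpModel()
        x = {(j, s): model.NewBoolVar(f"x_{j}_{s}") for j in jobs for s in sites}

        for j in jobs:                         # every job runs exactly once
            model.Add(sum(x[j, s] for s in sites) == 1)
        for s in sites:                        # never oversaturate a site
            model.Add(sum(x[j, s] for j in jobs) <= cpu_cap[s])

        # Pay a transfer cost whenever a job does not run where its data lives.
        model.Minimize(sum(transfer_cost * x[j, s]
                           for j in jobs for s in sites if s != data_site[j]))

        solver = cp_model.CpSolver()
        if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
            for j in jobs:
                print(j, "->", next(s for s in sites if solver.Value(x[j, s])))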

  4. The influence of training and mental skills preparation on injury incidence and performance in marathon runners.

    PubMed

    Hamstra-Wright, Karrie L; Coumbe-Lilley, John E; Kim, Hajwa; McFarland, Jose A; Huxel Bliven, Kellie C

    2013-10-01

    There has been a considerable increase in the number of participants running marathons over the past several years. The 26.2-mile race requires physical and mental stamina to successfully complete it. However, studies have not investigated how running and mental skills preparation influence injury and performance. The purpose of our study was to describe the training and mental skills preparation of a typical group of runners as they began a marathon training program, assess the influence of training and mental skills preparation on injury incidence, and examine how training and mental skills preparation influence marathon performance. Healthy adults (N = 1,957) participating in an 18-week training program for a fall 2011 marathon were recruited for the study. One hundred twenty-five runners enrolled and received 4 surveys: pretraining, 6 weeks, 12 weeks, posttraining. The pretraining survey asked training and mental skills preparation questions. The 6- and 12-week surveys asked about injury incidence. The posttraining survey asked about injury incidence and marathon performance. Tempo runs during training preparation had a significant positive relationship to injury incidence in the 6-week survey (ρ[93] = 0.26, p = 0.01). The runners who reported incorporating tempo and interval runs, running more miles per week, and running more days per week in their training preparation ran significantly faster than did those reporting less tempo and interval runs, miles per week, and days per week (p ≤ 0.05). Mental skills preparation did not influence injury incidence or marathon performance. To prevent injury, and maximize performance, while marathon training, it is important that coaches and runners ensure that a solid foundation of running fitness and experience exists, followed by gradually building volume, and then strategically incorporating runs of various speeds and distances.

  5. A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process

    NASA Astrophysics Data System (ADS)

    Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa

    2017-06-01

    High-density polyethylene (HDPE) pipes find versatile applicability for transportation of water, sewage and slurry from one place to another. Hence, these pipes must withstand tremendous pressure from the fluid they carry. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters, which results in less-than-optimal settings. Hence, there arises a need to determine optimal process control parameters for the pipe extrusion process that can ensure robust pipe quality and process reliability. In the proposed optimization strategy, a design of experiments (DoE) is conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of signal-to-noise ratio (S/N ratio) is applied, and ultimately optimum values of the process control parameters are obtained: a pushing zone temperature of 166 °C, a dimmer speed of 08 rpm, and a die head temperature of 192 °C. A confirmation experimental run was also conducted to verify the analysis, and the results proved consistent with the main experimental findings, with the withstanding pressure showing a significant improvement from 0.60 to 1.004 MPa.
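
    Since the withstanding pressure is to be maximized, the usual Taguchi choice is the larger-the-better signal-to-noise ratio, S/N = -10 log10((1/n) Σ 1/y_i²). The replicate values below are invented purely to show the arithmetic; they are not data from the paper.

        import math

        def sn_larger_the_better(replicates):
            # Taguchi larger-the-better S/N ratio in dB.
            n = len(replicates)
            return -10.0 * math.log10(sum(1.0 / y**2 for y in replicates) / n)

        # Hypothetical withstanding-pressure replicates (MPa) for two DoE trials.
        trial_a = [0.58, 0.62, 0.60]
        trial_b = [0.98, 1.00, 1.02]

        print(round(sn_larger_the_better(trial_a), 2))  # lower S/N
        print(round(sn_larger_the_better(trial_b), 2))  # higher S/N -> preferred levels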

  6. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    PubMed

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at two stages, that is, the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for the process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate approach among the various methods considered in this work for computing the sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that, at low defect rates, the confidence in detecting out-of-specification units decreases and must be compensated by an increase in sample size to enhance confidence in the estimation. Based on the level of knowledge acquired during PPQ and the level of knowledge further required to understand the process, the sample size for CPV was calculated using Bayesian statistics to achieve a reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
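
    A minimal sketch of the success run theorem in its common zero-failure form, n = ln(1 - C) / ln(R) rounded up. Taking the confidence level C as 95% (our assumption; the paper may state it differently) reproduces the 299, 59, and 29 sample sizes quoted above.

        import math

        def success_run_sample_size(reliability, confidence=0.95):
            # Smallest n such that observing n successes with zero failures
            # demonstrates the given reliability at the given confidence.
            return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

        for risk, reliability in [("high", 0.99), ("medium", 0.95), ("low", 0.90)]:
            print(risk, reliability, success_run_sample_size(reliability))
        # -> high 0.99 299, medium 0.95 59, low 0.90 29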

  7. Fifty Educational Markets: A Playbook of State Laws and Regulations Governing Private Schools. School Choice Issues in Depth

    ERIC Educational Resources Information Center

    Hammons, Christopher

    2008-01-01

    There is a widespread misperception that private schools avoid government oversight or are "unregulated." In fact, private schools are subject to a wide variety of laws and regulations that run the gamut from reasonable rules to ensure health and safety to unreasonable rules that interfere with school curricula, preventing schools from pursuing…

  8. Application of cluster technology in location-based service

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Wang, Xiaoman; Gong, Jianya

    2005-10-01

    This paper introduces the principle, algorithm and realization of Load Balancing Technology. It also presents a clustered design for a Location-Based Service (LBS) application and explains its functional characteristics and overall system structure, followed by some experimental comparisons showing that cluster technology can ensure the continuous running of an LBS together with load sharing and fault tolerance of the cluster.

  9. Three Generations of FPGA DAQ Development for the ATLAS Pixel Detector

    NASA Astrophysics Data System (ADS)

    Mayer, Joseph A., II

    The Large Hadron Collider (LHC) at the European Center for Nuclear Research (CERN) follows a schedule of long physics runs, followed by periods of inactivity known as Long Shutdowns (LS). During these LS phases, both the LHC and the experiments around its ring undergo maintenance and upgrades. For the LHC, these upgrades improve its ability to create data for physicists; the more data the LHC can create, the more opportunities there are for rare events of interest to appear. The experiments upgrade so they can record the data and ensure such events won't be missed. Currently the LHC is in Run 2, having completed the first of three LSs. This thesis focuses on the development of Field-Programmable Gate Array (FPGA)-based readout systems that span three major tasks of the ATLAS Pixel data acquisition (DAQ) system. The evolution of the Pixel DAQ's Readout Driver (ROD) card is presented, starting from improvements made to the new Insertable B-Layer (IBL) ROD design, which was part of the LS1 upgrade, to upgrading the old RODs from Run 1 to help them run more efficiently in Run 2. It also includes the research and development of FPGA-based DAQs and integrated circuit emulators for the ITk upgrade, which will occur during LS3 in 2025.

  10. Levels at gaging stations

    USGS Publications Warehouse

    Kenney, Terry A.

    2010-01-01

    Operational procedures at U.S. Geological Survey gaging stations include periodic leveling checks to ensure that gages are accurately set to the established gage datum. Differential leveling techniques are used to determine elevations for reference marks, reference points, all gages, and the water surface. The techniques presented in this manual provide guidance on instruments and methods that ensure gaging-station levels are run to both high precision and high accuracy. Levels are run at gaging stations whenever differences in gage readings are unresolved, when stations may have been damaged, or according to a predetermined frequency. Engineer's levels, both optical levels and electronic digital levels, are commonly used for gaging-station levels. Collimation tests should be run at least once a week for any week that levels are run, and the absolute value of the collimation error cannot exceed 0.003 foot/100 feet (ft). An acceptable set of gaging-station levels consists of a minimum of two foresights, each from a different instrument height, taken on at least two independent reference marks, all reference points, all gages, and the water surface. The initial instrument height is determined from another independent reference mark, known as the origin, or base reference mark. The absolute value of the closure error of a leveling circuit must be less than or equal to ft, where n is the total number of instrument setups, and may not exceed |0.015| ft regardless of the number of instrument setups. Closure error for a leveling circuit is distributed by instrument setup and adjusted elevations are determined. Side shots in a level circuit are assessed by examining the differences between the adjusted first and second elevations for each objective point in the circuit. The absolute value of these differences must be less than or equal to 0.005 ft. Final elevations for objective points are determined by averaging the valid adjusted first and second elevations. If final elevations indicate that the reference gage is off by |0.015| ft or more, it must be reset.
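
    The numeric acceptance checks stated above translate directly into code; the sketch below covers only the absolute limits quoted in this summary (collimation, the 0.015 ft closure cap, side-shot agreement, and the gage-reset criterion) and deliberately omits the setup-count-dependent closure formula, which is not reproduced here. Function names are ours.

        def collimation_ok(error_ft_per_100ft):
            # Collimation error must not exceed 0.003 ft per 100 ft.
            return abs(error_ft_per_100ft) <= 0.003

        def circuit_closure_ok(closure_ft):
            # Closure may never exceed 0.015 ft in absolute value
            # (the limit that scales with the number of setups is checked separately).
            return abs(closure_ft) <= 0.015

        def side_shot_ok(first_elev_ft, second_elev_ft):
            # Adjusted first and second elevations must agree within 0.005 ft.
            return abs(first_elev_ft - second_elev_ft) <= 0.005

        def reference_gage_needs_reset(gage_error_ft):
            # Reset the reference gage if it is off by 0.015 ft or more.
            return abs(gage_error_ft) >= 0.015

        print(collimation_ok(0.002), circuit_closure_ok(0.012),
              side_shot_ok(101.234, 101.237), reference_gage_needs_reset(0.016))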

  11. Honeybees Prefer to Steer on a Smooth Wall With Tetrapod Gaits

    PubMed Central

    Zhao, Jieliang; Zhu, Fei; Yan, Shaoze

    2018-01-01

    Abstract Insects are well equipped for walking on complex three-dimensional terrain, allowing them to overcome obstacles or catch prey. However, the gait transition for insects steering on a wall remains unexplored. Here, we find that honeybees adopted a tetrapod gait to change direction when climbing a wall. Contrary to the common tripod gait, honeybees propel their body forward by synchronously stepping with both middle legs and then both front legs. This process ensures that the angle of the central axis of the honeybee remains consistent with the crawling direction. Interestingly, when running in an alternating tripod gait, the central axis of the honeybee sways around the center of mass to maintain stability. Experimental results show that tripod, tetrapod, and random gaits achieve remarkably similar climbing speed and gait stability, whether climbing on a smooth wall or walking on smooth ground. PMID:29722862

  12. Method and computer product to increase accuracy of time-based software verification for sensor networks

    DOEpatents

    Foo Kune, Denis [Saint Paul, MN; Mahadevan, Karthikeyan [Mountain View, CA

    2011-01-25

    A recursive verification protocol that reduces the time variance due to network delays by putting the subject node at most one hop from the verifier node provides an efficient way to test wireless sensor nodes. Since the software signatures are time based, recursive testing will give a much cleaner signal for positive verification of the software running on any one node in the sensor network. In this protocol, the main verifier checks its neighbor, which in turn checks its neighbor, and so on until all nodes have been verified. This ensures minimum time delays for the software verification. Should a node fail the test, the software verification downstream is halted until an alternative path (one not including the failed node) is found. Using techniques well known in the art, testing a node twice, or not at all, can be avoided.
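
    A rough sketch of the hop-by-hop idea, not the patented protocol itself: each verified node verifies its unvisited neighbors in turn, no node is tested twice, and when a node fails, verification continues along an alternative path that avoids it. The graph, the verify callback, and all names are illustrative.

        from collections import deque

        def verify_network(graph, main_verifier, verify):
            # graph: adjacency dict; verify(node) -> bool is the time-based check.
            verified, failed = {main_verifier}, set()
            frontier = deque([main_verifier])
            while frontier:
                current = frontier.popleft()          # current node acts as verifier
                for neighbor in graph[current]:       # subject is one hop away
                    if neighbor in verified or neighbor in failed:
                        continue                      # never test a node twice
                    if verify(neighbor):
                        verified.add(neighbor)
                        frontier.append(neighbor)     # neighbor verifies onward
                    else:
                        failed.add(neighbor)          # route around the failed node
            return verified, failed

        graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
        print(verify_network(graph, "A", lambda node: node != "B"))
        # D is still verified through C even though B failed the test.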

  13. Massive parallelization of serial inference algorithms for a complex generalized linear model

    PubMed Central

    Suchard, Marc A.; Simpson, Shawn E.; Zorych, Ivan; Ryan, Patrick; Madigan, David

    2014-01-01

    Following a series of high-profile drug safety disasters in recent years, many countries are redoubling their efforts to ensure the safety of licensed medical products. Large-scale observational databases such as claims databases or electronic health record systems are attracting particular attention in this regard, but present significant methodological and computational concerns. In this paper we show how high-performance statistical computation, including graphics processing units, relatively inexpensive highly parallel computing devices, can enable complex methods in large databases. We focus on optimization and massive parallelization of cyclic coordinate descent approaches to fit a conditioned generalized linear model involving tens of millions of observations and thousands of predictors in a Bayesian context. We find orders-of-magnitude improvement in overall run-time. Coordinate descent approaches are ubiquitous in high-dimensional statistics and the algorithms we propose open up exciting new methodological possibilities with the potential to significantly improve drug safety. PMID:25328363
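
    The paper's massively parallel Bayesian implementation is far beyond this sketch; the code below only illustrates the serial cyclic coordinate descent pattern it builds on, applied to an L2-regularized linear model on synthetic data. It is our own toy example, not the authors' software.

        import numpy as np

        def cyclic_coordinate_descent(X, y, lam=1.0, n_sweeps=100):
            # Minimize ||y - X b||^2 + lam * ||b||^2 one coordinate at a time.
            n, p = X.shape
            beta = np.zeros(p)
            residual = y - X @ beta
            col_sq = (X ** 2).sum(axis=0)
            for _ in range(n_sweeps):
                for j in range(p):                    # cycle through coordinates
                    residual += X[:, j] * beta[j]     # remove j's contribution
                    beta[j] = X[:, j] @ residual / (col_sq[j] + lam)
                    residual -= X[:, j] * beta[j]     # add back the updated term
            return beta

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))
        y = X @ np.array([2.0, -1.0, 0.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=200)
        print(np.round(cyclic_coordinate_descent(X, y, lam=0.1), 2))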

  14. Pedagogical Techniques Employed by the Television Show "MythBusters"

    NASA Astrophysics Data System (ADS)

    Zavrel, Erik

    2016-11-01

    "MythBusters," the long-running though recently discontinued Discovery Channel science entertainment television program, has proven itself to be far more than just a highly rated show. While its focus is on entertainment, the show employs an array of pedagogical techniques to communicate scientific concepts to its audience. These techniques include: achieving active learning, avoiding jargon, employing repetition to ensure comprehension, using captivating demonstrations, cultivating an enthusiastic disposition, and increasing intrinsic motivation to learn. In this content analysis, episodes from the show's 10-year history were examined for these techniques. "MythBusters" represents an untapped source of pedagogical techniques, which science educators may consider availing themselves of in their tireless effort to better reach their students. Physics educators in particular may look to "MythBusters" for inspiration and guidance in how to incorporate these techniques into their own teaching and help their students in the learning process.

  15. Opto-box: Optical modules and mini-crate for ATLAS pixel and IBL detectors

    NASA Astrophysics Data System (ADS)

    Bertsche, David

    2016-11-01

    The opto-box is a custom mini-crate for housing optical modules which process and transfer optoelectronic data. Many novel solutions were developed for the custom design and manufacturing. The system tightly integrates electrical, mechanical, and thermal functionality into a small package of size 35 × 10 × 8 cm³. Special attention was given to ensure proper shielding, grounding, cooling, high reliability, and environmental tolerance. The custom modules, which incorporate Application Specific Integrated Circuits, were developed through a cycle of rigorous testing and redesign. In total, fourteen opto-boxes have been installed and loaded with modules on the ATLAS detector. They are currently in operation as part of the LHC Run 2 data read-out chain. This conference proceeding is in support of the poster presented at the International Conference on New Frontiers in Physics (ICNFP) 2015 [1].

  16. An Efficient Method for Verifying Gyrokinetic Microstability Codes

    NASA Astrophysics Data System (ADS)

    Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.

    2009-11-01

    Benchmarks for gyrokinetic microstability codes can be developed through successful "apples-to-apples" comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files and ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: At present, a Python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other Python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate Python routines could easily be consolidated.) An example of the process -- a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii -- will be presented.
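
    A schematic sketch of how such a driver might be organized: build a parameter set per case, translate it between code conventions, write input files, and (optionally) submit the runs. Every parameter name, mapping, and file layout below is hypothetical; neither the real GYRO nor GS2 input formats are reproduced.

        import subprocess
        from pathlib import Path

        # Hypothetical mapping between GYRO-style and GS2-style parameter names.
        GYRO_TO_GS2 = {"RADIUS": "rhoc", "SAFETY_FACTOR": "qinp", "SHEAR": "shat"}

        def translate(gyro_params):
            # Translate a GYRO-style parameter dict into a GS2-style dict.
            return {GYRO_TO_GS2[k]: v for k, v in gyro_params.items() if k in GYRO_TO_GS2}

        def write_input(path, params):
            # Write a simple "key = value" input file (illustrative format only).
            path.write_text("\n".join(f"{k} = {v}" for k, v in params.items()))

        def submit(workdir, executable):
            # Placeholder: real scripts would submit to a batch queue instead.
            subprocess.run([executable, "-i", str(workdir / "input")], check=True)

        cases = [{"RADIUS": 0.5, "SAFETY_FACTOR": 1.4, "SHEAR": 0.8},
                 {"RADIUS": 0.7, "SAFETY_FACTOR": 2.1, "SHEAR": 1.5}]

        for i, gyro_params in enumerate(cases):
            for code, params in [("gyro", gyro_params), ("gs2", translate(gyro_params))]:
                workdir = Path(f"run_{i:02d}_{code}")
                workdir.mkdir(parents=True, exist_ok=True)
                write_input(workdir / "input", params)
                # submit(workdir, code)  # enable once a real executable is available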

  17. Automated Gravimetric Calibration to Optimize the Accuracy and Precision of TECAN Freedom EVO Liquid Handler

    PubMed Central

    Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique

    2016-01-01

    High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems depends on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid class pipetting parameters for each solution was to split the process into three steps: (1) screening of predefined liquid classes, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The running of the appropriate pipetting scripts, data acquisition, and reporting, up to the creation of a new liquid class in EVOware, was fully automated. The calibration and confirmation of the robotic system was simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719
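
    A sketch of the gravimetric arithmetic behind step (2): weighed masses are converted to volumes via the liquid density, a linear correction curve is fitted, and the per-volume accuracy is reported. The numbers, density, and function names are illustrative only and have nothing to do with the EVOware liquid-class format.

        import numpy as np

        def gravimetric_volumes(masses_mg, density_mg_per_ul=0.998):
            # Convert weighed masses to delivered volumes (µL).
            return np.asarray(masses_mg) / density_mg_per_ul

        def fit_correction(target_ul, delivered_ul):
            # Fit delivered = a * target + b, then invert it so the instrument
            # can be told what to request in order to actually deliver the target.
            a, b = np.polyfit(target_ul, delivered_ul, 1)
            return lambda target: (target - b) / a

        targets   = np.array([10.0, 50.0, 200.0, 900.0])
        delivered = gravimetric_volumes([9.4, 48.1, 196.0, 886.0])  # replicate means

        correct = fit_correction(targets, delivered)
        accuracy_pct = 100.0 * (delivered - targets) / targets      # per-volume bias
        print(np.round(accuracy_pct, 2), round(correct(50.0), 1))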

  18. Methods, media and systems for managing a distributed application running in a plurality of digital processing devices

    DOEpatents

    Laadan, Oren; Nieh, Jason; Phung, Dan

    2012-10-02

    Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.

  19. Volumetric image interpretation in radiology: scroll behavior and cognitive processes.

    PubMed

    den Boer, Larissa; van der Schaaf, Marieke F; Vincken, Koen L; Mol, Chris P; Stuijfzand, Bobby G; van der Gijp, Anouk

    2018-05-16

    The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation are conducted on 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees who all completed four volumetric image cases. Two types of data were obtained, concerning scroll behaviors and think-aloud data. Types of scroll behavior concerned oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded by a framework of knowledge and skills in radiology comprising three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs did, whereas full runs coincided more often with perception than oscillations and half runs did. Interruptions were characterized by synthesis and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. They suggest that the types of scroll behavior are relevant to describing how radiologists interact with and manipulate volumetric images.

  20. Run charts revisited: a simulation study of run chart rules for detection of non-random variation in health care processes.

    PubMed

    Anhøj, Jacob; Olesen, Anne Vingaard

    2014-01-01

    A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
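
    A sketch of the shift and crossings tests in one commonly cited formulation of these rules: a shift signal when the longest run on one side of the median exceeds round(log2(n)) + 3, and a crossings signal when the number of median crossings falls below the lower 5th percentile of a binomial(n - 1, 0.5) distribution. Those exact cut-offs are assumptions and may differ in detail from the paper's.

        import math
        import numpy as np
        from scipy.stats import binom

        def run_chart_signals(values):
            # Return (shift_signal, crossings_signal) for a run chart.
            values = np.asarray(values, dtype=float)
            sides = np.sign(values - np.median(values))
            sides = sides[sides != 0]              # drop points on the median
            n = len(sides)

            longest, current = 1, 1                # longest run on one side
            for a, b in zip(sides[:-1], sides[1:]):
                current = current + 1 if a == b else 1
                longest = max(longest, current)
            shift = longest > round(math.log2(n)) + 3

            crossings = int(np.sum(sides[:-1] != sides[1:]))
            too_few_crossings = crossings < binom.ppf(0.05, n - 1, 0.5)
            return shift, too_few_crossings

        rng = np.random.default_rng(1)
        stable = rng.normal(size=24)
        shifted = np.concatenate([stable[:12], stable[12:] + 2.0])  # simulated shift
        print(run_chart_signals(stable), run_chart_signals(shifted))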

  1. Solvent Refined Coal (SRC) process. Research and development report No. 53, interim report No. 29, August-November, 1978. Volume VI. Process development unit studies. Part 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1980-01-01

    This report presents the results of seven SRC-II runs on Process Development Unit P99 feeding Pittsburgh Seam coal. Four of these runs (Runs 41-44) were made feeding coal from the Robinson Run Mine and three (Runs 45-47) were made feeding a second shipment of coal from the Powhatan No. 5 Mine. This work showed that both these coals are satisfactory feedstocks for the SRC-II process. Increasing dissolver outlet hydrogen partial pressure from approximately 1300 to about 1400 psia did not have a significant effect on yields from Robinson Run coal, but simultaneously increasing coal concentration in the feed slurry from 25 to 30 wt% and decreasing the percent recycle solids from 21% to 17% lowered distillate yields. With the Powhatan coal, a modest increase in the boiling temperature (approximately 35°F at the 10% point) of the process solvent had essentially no effect on product yields, while lowering the average dissolver temperature from 851°F to 842°F reduced gas yield.

  2. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    NASA Technical Reports Server (NTRS)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and Exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes to reduce the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increases the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.
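
    The slew-screening idea described above can be sketched very simply: recompute environmental fluxes only when the orientation has changed enough since the last computed point. The scalar "sun angle" measure, the 5-degree threshold, and all names below are illustrative assumptions, not values from the WFIRST analysis.

```python
def flux_recalculation_points(sun_angles_deg, threshold_deg=5.0):
    """Return indices of orientations at which environmental fluxes are
    recomputed; intermediate orientations reuse the previous flux solution,
    reducing the number of radiation calculation points."""
    keep = [0]                                  # always compute the first orientation
    last = sun_angles_deg[0]
    for i in range(1, len(sun_angles_deg)):
        if abs(sun_angles_deg[i] - last) > threshold_deg:
            keep.append(i)
            last = sun_angles_deg[i]
    return keep

# Example: only three of the six orientations trigger a flux recalculation.
print(flux_recalculation_points([0.0, 1.2, 2.0, 9.5, 10.1, 30.0]))  # [0, 3, 5]
```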

  3. Building an Electronic Handover Tool for Physicians Using a Collaborative Approach between Clinicians and the Development Team.

    PubMed

    Guilbeault, Peggy; Momtahan, Kathryn; Hudson, Jordan

    2015-01-01

    In an effort by The Ottawa Hospital (TOH) to become one of the top 10% performers in patient safety and quality of care, the hospital embarked on improving the communication process during handover between physicians by building an electronic handover tool. It is expected that this tool will decrease information loss during handover. The Information Systems (IS) department engaged a workgroup of physicians to become involved in defining requirements to build an electronic handover tool that suited their clinical handover needs. This group became ultimately responsible for defining the graphical user interface (GUI) and all functionality related to the tool. Prior to the pilot, the Information Systems team will run a usability testing session to ensure the application is user friendly and has met the goals and objectives of the workgroup. As a result, The Ottawa Hospital has developed a fully integrated electronic handover tool built on the Clinical Mobile Application (CMA) which allows clinicians to enter patient problems, notes and tasks available to all physicians to facilitate the handover process.

  4. Enabling Incremental Query Re-Optimization.

    PubMed

    Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau

    2016-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.

  5. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling

    NASA Astrophysics Data System (ADS)

    Núñez, M.; Robie, T.; Vlachos, D. G.

    2017-10-01

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
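
    A minimal sketch of the general rescaling idea follows. It is not the paper's exact criteria: the balance tolerance, the target speed ratio, and the data structures are assumptions chosen for illustration. The point is that a quasi-equilibrated fast reaction can have both directions slowed by a common factor, which relieves stiffness while preserving its equilibrium.

```python
def rescale_fast_reactions(rate_constants, event_counts, delta=100.0, tol=0.05):
    """Illustrative stiffness reduction for a lattice KMC model.

    rate_constants: {reaction: (k_forward, k_reverse)}
    event_counts:   {reaction: (n_forward, n_reverse)} from a sampling window

    A reversible reaction whose forward and reverse event counts are nearly
    balanced (quasi-equilibrated) and which fires far more often than the
    slowest active reaction has both rate constants divided by a common
    factor; delta and tol are assumed values, not the paper's."""
    totals = {r: nf + nr for r, (nf, nr) in event_counts.items() if nf + nr > 0}
    slowest = min(totals.values()) if totals else 1   # slowest active reaction
    rescaled = {}
    for rxn, (kf, kr) in rate_constants.items():
        nf, nr = event_counts.get(rxn, (0, 0))
        total = nf + nr
        balanced = total > 0 and abs(nf - nr) / total < tol
        too_fast = total > delta * slowest
        factor = total / (delta * slowest) if (balanced and too_fast) else 1.0
        rescaled[rxn] = (kf / factor, kr / factor)
    return rescaled
```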

  6. Enabling Incremental Query Re-Optimization

    PubMed Central

    Liu, Mengmeng; Ives, Zachary G.; Loo, Boon Thau

    2017-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations. PMID:28659658

  7. Powering the Nuclear Navy (U.S. Department of Energy)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Secretary Perry toured the USS Harry Truman with Admiral Caldwell. The Truman is powered by the Department of Energy's Nuclear Propulsion Program. These ships can run for 25 years with a single nuclear reactor. Secretary Perry was briefed on the importance of nuclear propulsion to the carrier's capabilities. The Naval Nuclear Propulsion Program provides power plants that ensure safety, reliability, and extended deployment capacity.

  8. [Assessment of financial performance improves the quality of healthcare provided by medical organizations].

    PubMed

    Afek, Arnon; Meilik, Ahuva; Rotstein, Zeev

    2009-01-01

    Today, medical organizations have to contend with a highly competitive environment, an atmosphere saturated with a multitude of innovative new technologies and ever-increasing costs. The ability of these organizations to survive and to develop and expand their services mandates adoption of management guidelines based on the world of finance and commerce, adapted to make them relevant to the world of medical service. In this article the authors present a management assessment process that ensures the management will effectively administer the organization's resources and meet the goals set by the organization. The system demands that hospital "centers of responsibility" be defined, a management information system be set up, activities be priced, a budget be defined and the expenses assessed. These processes make it possible to formulate a budget and assess any deviation between the budget and the actual running costs. An assessment of deviations will reveal any deviation in the most significant factor--efficiency. Medical organization managers, with the cooperation of the directors of the "centers of responsibility", can assess subunit activities and gain an understanding of the significance of management decisions and thus improve the quality of management and of the medical organization. The goal of this management system is not only to lower costs and to meet the financial goals that were set; it is a tool that ensures quality. Decreasing expenditure is important in this case, but is only secondary in importance and will be a result of reducing the costs incurred by services lacking in quality.

  9. Single frequency free-running low noise compact extended-cavity semiconductor laser at high power level

    NASA Astrophysics Data System (ADS)

    Garnache, Arnaud; Myara, Mikhaël; Laurain, A.; Bouchier, Aude; Perez, J. P.; Signoret, P.; Sagnes, I.; Romanini, D.

    2017-11-01

    We present a highly coherent semiconductor laser device formed by a ½-VCSEL structure and an external concave mirror in a millimetre-scale, high-finesse stable cavity. The quantum well structure is diode-pumped by a commercial single-mode GaAs laser diode system. This free-running, low-noise, tunable single-frequency laser exhibits >50 mW output power in a low-divergence circular TEM00 beam with a spectral linewidth below 1 kHz and a relative intensity noise close to the quantum limit. This approach ensures, with a compact design, homogeneous gain behaviour and a sufficiently long photon lifetime to reach the oscillation-relaxation-free class-A regime, with a cut-off frequency around 10 MHz.

  10. New operator assistance features in the CMS Run Control System

    NASA Astrophysics Data System (ADS)

    Andre, J.-M.; Behrens, U.; Branson, J.; Brummer, P.; Chaze, O.; Cittolin, S.; Contescu, C.; Craigs, B. G.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Doualot, N.; Erhan, S.; Fulcher, J. R.; Gigi, D.; Gładki, M.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Janulis, M.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; O'Dell, V.; Orsini, L.; Paus, C.; Petrova, P.; Pieri, M.; Racz, A.; Reis, T.; Sakulin, H.; Schwick, C.; Simelevicius, D.; Vougioukas, M.; Zejdl, P.

    2017-10-01

    During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.

  11. New Operator Assistance Features in the CMS Run Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andre, J.M.; et al.

    During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.

  12. Dawn Usage, Scheduling, and Governance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louis, S

    2009-11-02

    This document describes Dawn use, scheduling, and governance concerns. Users started running full-machine science runs in early April 2009 during the initial open shakedown period. Scheduling Dawn while in the Open Computing Facility (OCF) was controlled and coordinated via phone calls, emails, and a small number of controlled banks. With Dawn moving to the Secure Computing Facility (SCF) in fall of 2009, a more detailed scheduling and governance model is required. The three major objectives are: (1) Ensure Dawn resources are allocated on a program priority-driven basis; (2) Utilize Dawn resources on the job mixes for which they were intended; and (3) Minimize idle cycles through use of partitions, banks and proper job mix. The SCF workload for Dawn will be inherently different than Purple or BG/L, and therefore needs a different approach. Dawn's primary function is to permit adequate access for tri-lab code development in preparation for Sequoia, and in particular for weapons multi-physics codes in support of UQ. A second purpose is to provide time allocations for large-scale science runs and for UQ suite calculations to advance SSP program priorities. This proposed governance model will be the basis for initial time allocation of Dawn computing resources for the science and UQ workloads that merit priority on this class of resource, either because they cannot be reasonably attempted on any other resources due to size of problem, or because of the unavailability of sizable allocations on other ASC capability or capacity platforms. This proposed model intends to make the most effective use of Dawn possible, but without being overly constrained by more formal proposal processes such as those now used for Purple CCCs.

  13. Automatic Between-Pulse Analysis of DIII-D Experimental Data Performed Remotely on a Supercomputer at Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostuk, M.; Uram, T. D.; Evans, T.

    For the first time, an automatically triggered, between-pulse fusion science analysis code was run on-demand at a remotely located supercomputer at Argonne Leadership Computing Facility (ALCF, Lemont, IL) in support of in-process experiments being performed at DIII-D (San Diego, CA). This represents a new paradigm for combining geographically distant experimental and high performance computing (HPC) facilities to provide enhanced data analysis that is quickly available to researchers. Enhanced analysis improves the understanding of the current pulse, translating into more efficient use of experimental resources and higher-quality science. The analysis code used here, called SURFMN, calculates the magnetic structure of the plasma using Fourier transforms. Increasing the number of Fourier components provides a more accurate determination of the stochastic boundary layer near the plasma edge by better resolving magnetic islands, but requires 26 minutes to complete using local DIII-D resources, putting it well outside the useful time range for between-pulse analysis. These islands relate to confinement and edge localized mode (ELM) suppression, and may be controlled by adjusting coil currents for the next pulse. Argonne has ensured on-demand execution of SURFMN by providing a reserved queue, a specialized service that launches the code after receiving an automatic trigger, and with network access from the worker nodes for data transfer. Runs are executed on 252 cores of ALCF’s Cooley cluster and the data is available locally at DIII-D within three minutes of triggering. The original SURFMN design limits additional improvements with more cores, however our work shows a path forward where codes that benefit from thousands of processors can run between pulses.

  14. Automatic Between-Pulse Analysis of DIII-D Experimental Data Performed Remotely on a Supercomputer at Argonne Leadership Computing Facility

    DOE PAGES

    Kostuk, M.; Uram, T. D.; Evans, T.; ...

    2018-02-01

    For the first time, an automatically triggered, between-pulse fusion science analysis code was run on-demand at a remotely located supercomputer at Argonne Leadership Computing Facility (ALCF, Lemont, IL) in support of in-process experiments being performed at DIII-D (San Diego, CA). This represents a new paradigm for combining geographically distant experimental and high performance computing (HPC) facilities to provide enhanced data analysis that is quickly available to researchers. Enhanced analysis improves the understanding of the current pulse, translating into more efficient use of experimental resources and higher-quality science. The analysis code used here, called SURFMN, calculates the magnetic structure of the plasma using Fourier transforms. Increasing the number of Fourier components provides a more accurate determination of the stochastic boundary layer near the plasma edge by better resolving magnetic islands, but requires 26 minutes to complete using local DIII-D resources, putting it well outside the useful time range for between-pulse analysis. These islands relate to confinement and edge localized mode (ELM) suppression, and may be controlled by adjusting coil currents for the next pulse. Argonne has ensured on-demand execution of SURFMN by providing a reserved queue, a specialized service that launches the code after receiving an automatic trigger, and with network access from the worker nodes for data transfer. Runs are executed on 252 cores of ALCF’s Cooley cluster and the data is available locally at DIII-D within three minutes of triggering. The original SURFMN design limits additional improvements with more cores, however our work shows a path forward where codes that benefit from thousands of processors can run between pulses.

  15. Classification of mouth movements using 7 T fMRI.

    PubMed

    Bleichner, M G; Jansma, J M; Salari, E; Freudenburg, Z V; Raemaekers, M; Ramsey, N F

    2015-12-01

    A brain-computer interface (BCI) is an interface that uses signals from the brain to control a computer. BCIs will likely become important tools for severely paralyzed patients to restore interaction with the environment. The sensorimotor cortex is a promising target brain region for a BCI due to the detailed topography and minimal functional interference with other important brain processes. Previous studies have shown that attempted movements in paralyzed people generate neural activity that strongly resembles actual movements. Hence decodability for BCI applications can be studied in able-bodied volunteers with actual movements. In this study we tested whether mouth movements provide adequate signals in the sensorimotor cortex for a BCI. The study was executed using fMRI at 7 T to ensure relevance for BCI with cortical electrodes, as 7 T measurements have been shown to correlate well with electrocortical measurements. Twelve healthy volunteers executed four mouth movements (lip protrusion, tongue movement, teeth clenching, and the production of a larynx activating sound) while in the scanner. Subjects performed a training and a test run. Single trials were classified based on the Pearson correlation values between the activation patterns per trial type in the training run and single trials in the test run in a 'winner-takes-all' design. Single trial mouth movements could be classified with 90% accuracy. The classification was based on an area with a volume of about 0.5 cc, located on the sensorimotor cortex. If voxels were limited to the surface, which is accessible for electrode grids, classification accuracy was still very high (82%). Voxels located on the precentral cortex performed better (87%) than the postcentral cortex (72%). The high reliability of decoding mouth movements suggests that attempted mouth movements are a promising candidate for BCI in paralyzed people.
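
    The 'winner-takes-all' classification described above is easy to sketch: a trial is assigned to the movement type whose training-run template correlates most strongly with it. The function and variable names below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def classify_trial(templates, trial_pattern):
    """Winner-takes-all classification of a single-trial activation pattern.

    templates:     {movement_label: 1-D voxel vector from the training run}
    trial_pattern: 1-D voxel vector from one test-run trial
    Returns the winning label and the Pearson correlations per label."""
    def pearson(a, b):
        a = np.asarray(a, dtype=float) - np.mean(a)
        b = np.asarray(b, dtype=float) - np.mean(b)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {label: pearson(tpl, trial_pattern) for label, tpl in templates.items()}
    winner = max(scores, key=scores.get)
    return winner, scores
```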

  16. Thirteen-year trends in child and adolescent fundamental movement skills: 1997-2010.

    PubMed

    Hardy, Louise L; Barnett, Lisa; Espinel, Paola; Okely, Anthony D

    2013-10-01

    The objective of this study is to describe 13-yr trends in children's fundamental movement skill (FMS) competency. Secondary analysis of representative, cross-sectional, Australian school-based surveys was conducted in 1997, 2004, and 2010 (n = 13,752 children aged 9-15 yr). Five FMS (sprint run, vertical jump, catch, kick, and overarm throw) were assessed using process-oriented criteria at each survey and children's skills classified as competent or not competent. Covariates included sex, age, cardiorespiratory endurance (20-m shuttle run test), body mass index (kg·m⁻²), and socioeconomic status (residential postcode). At each survey, the children's FMS competency was low, with prevalence rarely above 50%. Between 1997 and 2004, there were significant increases in all students' competency in the sprint run, vertical jump, and catch. For boys, competency increased in the kick (primary) and the overarm throw (high school), but among high school girls, overarm throw competency decreased. Between 2004 and 2010, competency increased in the catch (all students), and in all girls, competency increased in the kick, whereas competency in the vertical jump decreased. Overall, students' FMS competency was low, especially in the kick and the overarm throw in girls. The observed increase in FMS competency in 2004 was attributed to changes in practice and policy to support the teaching of FMS in schools. In 2010, competency remained low, with improvements in only the catch (all) and kick (girls) and declines in vertical jump. Potentially, the current delivery of FMS programs requires stronger positioning within the school curriculum. Strategies to improve children's physical activity should consider ensuring that children are taught FMS to competency level, so that they enjoy being physically active.

  17. Commissioning and Operation of a Cryogenic Target at HIγS

    NASA Astrophysics Data System (ADS)

    Kendellen, David; Compton@HIγS Collaboration

    2017-01-01

    We have developed a cryogenic target for use at the High Intensity γ-ray Source (HIγS). The target system is able to liquefy helium-4 (LHe) at 4 K, hydrogen (LH2) at 20 K, or deuterium (LD2) at 23 K to fill a 0.3 L Kapton cell. Liquid temperatures and condenser pressures are recorded throughout each run in order to ensure that the target's areal density is known to 1%. A low-temperature valve enables cycling between full and empty modes in less than 15 minutes. The target is being utilized in a series of experiments which probe the electromagnetic polarizabilities of the nucleon by Compton scattering high-energy photons from the liquid and detecting them with the HIγS NaI Detector Array (HINDA). During a 50-hour-long commissioning run, the target held LHe at 3.17 K, followed by 600 hours of production running with LD2 at 23.9 K. The design of the target will be presented and its performance during these runs will be discussed. Work supported by US Department of Energy contracts DE-FG02-97ER41033, DE-FG02-06ER41422, and DE-SCOO0536

  18. Incineration of European non-nuclear radioactive waste in the USA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moloney, B. P.; Ferguson, D.; Stephenson, B.

    2013-07-01

    Incineration of dry low level radioactive waste from nuclear stations is a well established process achieving high volume reduction factors to minimise disposal costs and to stabilise residues for disposal. Incineration has also been applied successfully in many European Union member countries to wastes arising from use of radionuclides in medicine, nonnuclear research and industry. However, some nations have preferred to accumulate wastes over many years in decay stores to reduce the radioactive burden at point of processing. After decay and sorting the waste, they then require a safe, industrial scale and affordable processing solution for the large volumes accumulated. This paper reports the regulatory, logistical and technical issues encountered in a programme delivered for Eckert and Ziegler Nuclitec to incinerate safely 100 te of waste collected originally from German research, hospital and industrial centres, applying for the first time a 'burn and return' process model for European waste in the US. The EnergySolutions incinerators at Bear Creek, Oak Ridge, Tennessee, USA routinely incinerate waste arising from the non-nuclear user community. To address the requirement from Germany, EnergySolutions had to run a dedicated campaign to reduce cross-contamination with non-German radionuclides to the practical minimum. The waste itself had to be sampled in a carefully controlled programme to ensure the exacting standards of Bear Creek's license and US emissions laws were maintained. Innovation was required in packaging of the waste to minimise transportation costs, including sea freight. The incineration was inspected on behalf of the German regulator (the BfS) to ensure suitability for return to Germany and disposal. This first 'burn and return' programme has safely completed the incineration phase in February and the arising ash will be returned to Germany presently. The paper reports the main findings and lessons learned on this first of its kind project. (authors)

  19. In-situ sensing using mass spectrometry and its use for run-to-run control on a W-CVD cluster tool

    NASA Astrophysics Data System (ADS)

    Gougousi, T.; Sreenivasan, R.; Xu, Y.; Henn-Lecordier, L.; Rubloff, G. W.; Kidder, J. N.; Zafiriou, E.

    2001-01-01

    A 300 amu closed-ion-source RGA (Leybold-Inficon Transpector 2) sampling gases directly from the reactor of an ULVAC ERA-1000 cluster tool has been used for real time process monitoring of a W CVD process. The process involves H2 reduction of WF6 at a total pressure of 67 Pa (0.5 torr) to produce W films on Si wafers heated at temperatures around 350 °C. The normalized RGA signals for the H2 reagent depletion and the HF product generation were correlated with the W film weight as measured post-process with an electronic microbalance for the establishment of thin-film weight (thickness) metrology. The metrology uncertainty (about 7% for the HF product) was limited primarily by the very low conversion efficiency of the W CVD process (around 2-3%). The HF metrology was then used to drive a robust run-to-run control algorithm, with the deposition time selected as the manipulated (or controlled) variable. For that purpose, during a 10 wafer run, a systematic process drift was introduced as a -5 °C processing temperature change for each successive wafer, in an otherwise unchanged process recipe. Without adjustment of the deposition time the W film weight (thickness) would have declined by about 50% by the 10th wafer. With the aid of the process control algorithm, an adjusted deposition time was computed so as to maintain constant HF sensing signal, resulting in weight (thickness) control comparable to the accuracy of the thickness metrology. These results suggest that in-situ chemical sensing, and particularly mass spectrometry, provide the basis for wafer state metrology as needed to achieve run-to-run control. Furthermore, since the control accuracy was consistent with the metrology accuracy, we anticipate significant improvements for processes as used in manufacturing, where conversion rates are much higher (40-50%) and corresponding signals for metrology will be much larger.
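
    The run-to-run control step described above, where deposition time is adjusted so that the integrated HF signal stays on target, can be sketched as a simple proportional correction. This is only an illustration of the idea under assumed names and an assumed damping gain, not the authors' control algorithm.

```python
def next_deposition_time(prev_time_s, measured_hf, target_hf, gain=0.7):
    """One step of a simple run-to-run controller: if the integrated HF
    product signal for the last wafer came in below target, the next wafer's
    deposition time is stretched proportionally; 'gain' damps the correction
    to keep the loop robust against metrology noise."""
    correction = target_hf / measured_hf - 1.0
    return prev_time_s * (1.0 + gain * correction)

# Example: the last wafer delivered only 90% of the target HF dose, so the
# controller lengthens the next deposition by roughly 7.8% (gain = 0.7).
print(next_deposition_time(60.0, measured_hf=0.90, target_hf=1.00))
```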

  20. Grace: A cross-platform micromagnetic simulator on graphics processing units

    NASA Astrophysics Data System (ADS)

    Zhu, Ru

    2015-12-01

    A micromagnetic simulator running on graphics processing units (GPUs) is presented. Different from GPU implementations of other research groups, which are predominantly running on NVidia's CUDA platform, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware platform independent. It runs on GPUs from vendors including NVidia, AMD and Intel, and achieves significant performance boost as compared to previous central processing unit (CPU) simulators, up to two orders of magnitude. The simulator paved the way for running large-scale micromagnetic simulations on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics cards, and is freely available to download.

  1. 77 FR 60165 - Self-Regulatory Organizations; Fixed Income Clearing Corporation; Order Approving Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-02

    ... Time at Which the Mortgage-Backed Securities Division Runs Its Daily Morning Pass September 26, 2012. I... FICC proposes to move the time at which its Mortgage-Backed Securities Division (``MBSD'') runs its... processing passes. MBSD currently runs its first processing pass of the day (historically referred to as the...

  2. Running Memory for Clinical Handoffs: A Look at Active and Passive Processing.

    PubMed

    Anderson-Montoya, Brittany L; Scerbo, Mark W; Ramirez, Dana E; Hubbard, Thomas W

    2017-05-01

    The goal of the present study was to examine the effects of domain-relevant expertise on running memory and the ability to process handoffs of information. In addition, the role of active or passive processing was examined. Currently, there is little research that addresses how individuals with different levels of expertise process information in running memory when the information is needed to perform a real-world task. Three groups of participants differing in their level of clinical expertise (novice, intermediate, and expert) performed an abstract running memory span task and two tasks resembling real-world activities, a clinical handoff task and an air traffic control (ATC) handoff task. For all tasks, list length and the amount of information to be recalled were manipulated. Regarding processing strategy, all participants used passive processing for the running memory span and ATC tasks. The novices also used passive processing for the clinical task. The experts, however, appeared to use more active processing, and the intermediates fell in between. Overall, the results indicated that individuals with clinical expertise and a developed mental model rely more on active processing of incoming information for the clinical task while individuals with little or no knowledge rely on passive processing. The results have implications about how training should be developed to aid less experienced personnel identify what information should be included in a handoff and what should not.

  3. Improving overlay control through proper use of multilevel query APC

    NASA Astrophysics Data System (ADS)

    Conway, Timothy H.; Carlson, Alan; Crow, David A.

    2003-06-01

    Many state-of-the-art fabs are operating with increasingly diversified product mixes. For example, at Cypress Semiconductor, it is not unusual to be concurrently running multiple technologies and many devices within each technology. This diverse product mix significantly increases the difficulty of manually controlling overlay process corrections. As a result, automated run-to-run feedforward-feedback control has become a necessary and vital component of manufacturing. However, traditional run-to-run controllers rely on highly correlated historical events to forecast process corrections. For example, the historical process events typically are constrained to match the current event for exposure tool, device, process level and reticle ID. This narrowly defined process stream can result in insufficient data when applied to low-volume or new-release devices. The run-to-run controller implemented at Cypress utilizes a multi-level query (Level-N) correlation algorithm, where each subsequent level widens the search criteria for available historical data. The paper discusses how best to widen the search criteria and how to determine and apply a known bias to account for tool-to-tool and device-to-device differences. Specific applications include offloading lots from one tool to another when the first tool is down for preventive maintenance, utilizing related devices to determine a default feedback vector for new-release devices, and applying bias values to account for known reticle-to-reticle differences. In this study, we will show how historical data can be leveraged from related devices or tools to overcome the limitations of narrow process streams. In particular, this paper discusses how effectively handling narrow process streams allows Cypress to offload lots from a baseline tool to an alternate tool, as sketched in the example below.
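
    A minimal sketch of the Level-N widening step: start with the most specific match on the current lot's context and progressively relax the criteria until enough history is found. The key names, level ordering, and minimum-lot threshold are assumptions for illustration, not Cypress's actual rules.

```python
def level_n_history(history, event, min_lots=3):
    """Multi-level (Level-N) historical query.

    history: list of dicts describing past lots
    event:   dict describing the current lot
    Returns the level number that produced enough matches and the matches."""
    levels = [
        ("exposure_tool", "device", "layer", "reticle_id"),  # level 1: fully matched
        ("exposure_tool", "device", "layer"),                # level 2: any reticle
        ("exposure_tool", "layer"),                          # level 3: related devices
        ("layer",),                                          # level 4: any tool (bias applied separately)
    ]
    for n, keys in enumerate(levels, start=1):
        matches = [lot for lot in history if all(lot[k] == event[k] for k in keys)]
        if len(matches) >= min_lots:
            return n, matches
    return None, []
```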

  4. Virtual Platform for See Robustness Verification of Bootloader Embedded Software on Board Solar Orbiter's Energetic Particle Detector

    NASA Astrophysics Data System (ADS)

    Da Silva, A.; Sánchez Prieto, S.; Polo, O.; Parra Espada, P.

    2013-05-01

    Because of the tough robustness requirements in space software development, it is imperative to carry out verification tasks at a very early development stage to ensure that the implemented exception mechanisms work properly. All this should be done long before the real hardware is available. But even when real hardware is available, verifying software fault-tolerance mechanisms can be difficult, since faulty situations must be brought about systematically and artificially, which can be impossible on real hardware. To solve this problem the Alcala Space Research Group (SRG) has developed a LEON2 virtual platform (Leon2ViP) with fault injection capabilities. This way it is possible to run the exact same target binary software as runs on the physical system in a more controlled and deterministic environment, allowing stricter requirements verification. Leon2ViP enables unmanned and tightly focused fault injection campaigns, not possible otherwise, in order to expose and diagnose flaws in the software implementation early. Furthermore, the use of a virtual hardware-in-the-loop approach makes it possible to carry out preliminary integration tests with the spacecraft emulator or the sensors. The use of Leon2ViP has meant a significant improvement, in both time and cost, in the development and verification processes of the Instrument Control Unit boot software on board Solar Orbiter's Energetic Particle Detector.

  5. Waters Without Borders: Scarcity and the Future of State Interactions over Shared Water Resources

    DTIC Science & Technology

    2010-04-01

    urbanization, increasing per capita consumption (associated with globalization and economic development), pollution, and climate change will exacerbate...Standards of Living, and Pollution: Water is fundamental to ensuring an adequate food supply. Agricultural irrigation accounts for 70% of fresh water...Agricultural run-off is also a major source of pollution reducing the quality and availability of drinking water. Energy: Water is also needed for the

  6. 40 CFR Appendix A to Subpart Kk of... - Data Quality Objective and Lower Confidence Limit Approaches for Alternative Capture Efficiency...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Figure 1). This ensures that 95 percent of the time, when the DQO is met, the actual CE value will be ±5... while still being assured of correctly demonstrating compliance. It is designed to reduce “false... approach follows: 4.3 A source conducts an initial series of at least three runs. The owner or operator may...

  7. JPRS Report: Telecommunications.

    DTIC Science & Technology

    1988-03-04

    media, as well as other models in which not only government and business are involved, but also workers and community groups. But the study seemed to lean...management decisions are generally made by the RJR senior staff..." the report added. The RJR ownership model also results in the station’s being run on a...that have been raised. The model ensures development input without government control. It provides for professional management of the facility

  8. Experimental Performance of a Single-Mode Ytterbium-doped Fiber Ring Laser with Intracavity Modulator

    NASA Technical Reports Server (NTRS)

    Numata, Kenji; Camp, Jordan

    2012-01-01

    We have developed a linearly polarized Ytterbium-doped fiber ring laser with a single longitudinal mode output at 1064 nm. A fiber-coupled intracavity phase modulator ensured mode-hop-free operation and allowed fast frequency tuning. The fiber laser was locked with high stability to an iodine-stabilized laser, showing a frequency noise suppression of a factor of approximately 10^5 at 1 mHz.

  9. 77 FR 50198 - Self-Regulatory Organizations; The Fixed Income Clearing Corporation; Notice of Filing Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-20

    ... Time at Which the Mortgage-Backed Securities Division Runs Its Daily Morning Pass August 14, 2012... Division (``MBSD'') runs its first processing pass of the day from 2 p.m. to 4 p.m. Eastern Standard Time... MBSD intends to move the time at which it runs its first processing pass of the day (historically...

  10. The X-33 Range Operations Control Center

    NASA Technical Reports Server (NTRS)

    Shy, Karla S.; Norman, Cynthia L.

    1998-01-01

    This paper describes the capabilities and features of the X-33 Range Operations Center at NASA Dryden Flight Research Center. All the unprocessed data will be collected and transmitted over fiber optic lines to the Lockheed Operations Control Center for real-time flight monitoring of the X-33 vehicle. By using the existing capabilities of the Western Aeronautical Test Range, the Range Operations Center will provide the ability to monitor all down-range tracking sites for the Extended Test Range systems. In addition to radar tracking and aircraft telemetry data, the Telemetry and Radar Acquisition and Processing System is being enhanced to acquire vehicle command data, differential Global Positioning System corrections and telemetry receiver signal level status. The Telemetry and Radar Acquisition Processing System provides the flexibility to satisfy all X-33 data processing requirements quickly and efficiently. Additionally, the Telemetry and Radar Acquisition Processing System will run a real-time link margin analysis program. The results of this model will be compared in real-time with actual flight data. The hardware and software concepts presented in this paper describe a method of merging all types of data into a common database for real-time display in the Range Operations Center in support of the X-33 program. All types of data will be processed for real-time analysis and display of the range system status to ensure public safety.

  11. Improving the reliability of verbal communication between primary care physicians and pediatric hospitalists at hospital discharge.

    PubMed

    Mussman, Grant M; Vossmeyer, Michael T; Brady, Patrick W; Warrick, Denise M; Simmons, Jeffrey M; White, Christine M

    2015-09-01

    Timely and reliable verbal communication between hospitalists and primary care physicians (PCPs) is critical for prevention of medical adverse events but difficult in practice. Our aim was to increase the proportion of completed verbal handoffs from on-call residents or attendings to PCPs within 24 hours of patient discharge from a hospital medicine service to ≥90% within 18 months. A multidisciplinary team collaborated to redesign the process by which PCPs were contacted following patient discharge. Interventions focused on the key drivers of obtaining stakeholder buy-in, standardization of the communication process, including assigning primary responsibility for discharge communication to a single resident on each team and batching calls during times of maximum resident availability, reliable automated process initiation through leveraging the electronic health record (EHR), and transparency of data. A run chart assessed the impact of interventions over time. The percentage of calls initiated within 24 hours of discharge improved from 52% to 97%, and the percentage of calls completed improved to 93%. Results were sustained for 18 months. Standardization of the communication process through hospital telephone operators, use of the discharge order to ensure initiation of discharge communication, and batching of phone calls were associated with improvements in our measures. Reliable verbal discharge communication can be achieved through the use of a standardized discharge communication process coupled with the EHR. © 2015 Society of Hospital Medicine.

  12. Shadow: Running Tor in a Box for Accurate and Efficient Experimentation

    DTIC Science & Technology

    2011-09-23

    Modeling the speed of a target CPU is done by running an OpenSSL [31] speed test on a real CPU of that type. This provides us with the raw CPU processing...rate, but we are also interested in the processing speed of an application. By running application benchmarks on the same CPU as the OpenSSL speed test...simulation, saving CPU cycles on our simulation host machine. Shadow removes cryptographic processing by preloading the main OpenSSL [31] functions used

  13. Sustained Accelerated Idioventricular Rhythm in a Centrifuge-Simulated Suborbital Spaceflight.

    PubMed

    Suresh, Rahul; Blue, Rebecca S; Mathers, Charles; Castleberry, Tarah L; Vanderploeg, James M

    2017-08-01

    Hypergravitational exposures during human centrifugation are known to provoke dysrhythmias, including sinus dysrhythmias/tachycardias, premature atrial/ventricular contractions, and even atrial fibrillations or flutter patterns. However, events are generally short-lived and resolve rapidly after cessation of acceleration. This case report describes a prolonged ectopic ventricular rhythm in response to high G exposure. A previously healthy 30-yr-old man voluntarily participated in centrifuge trials as a part of a larger study, experiencing a total of 7 centrifuge runs over 48 h. Day 1 consisted of two +Gz runs (peak +3.5 Gz, run 2) and two +Gx runs (peak +6.0 Gx, run 4). Day 2 consisted of three runs approximating suborbital spaceflight profiles (combined +Gx and +Gz). Hemodynamic data collected included blood pressure, heart rate, and continuous three-lead electrocardiogram. Following the final acceleration exposure of the last Day 2 run (peak +4.5 Gx and +4.0 Gz combined, resultant +6.0 G), during a period of idle resting centrifuge activity (resultant vector +1.4 G), the subject demonstrated a marked change in his three-lead electrocardiogram from normal sinus rhythm to a wide-complex ectopic ventricular rhythm at a rate of 91-95 bpm, consistent with an accelerated idioventricular rhythm (AIVR). This rhythm was sustained for 2 min 24 s before reversion to normal sinus. The subject reported no adverse symptoms during this time. While prolonged, the dysrhythmia was asymptomatic and self-limited. AIVR is likely a physiological response to acceleration and can be managed conservatively. Vigilance is needed to ensure that AIVR is correctly distinguished from other, malignant rhythms to avoid inappropriate treatment and negative operational impacts.Suresh R, Blue RS, Mathers C, Castleberry TL, Vanderploeg JM. Sustained accelerated idioventricular rhythm in a centrifuge-simulated suborbital spaceflight. Aerosp Med Hum Perform. 2017; 88(8):789-793.

  14. 41 CFR 301-76.101 - Who is responsible for ensuring that all due process and legal requirements have been met?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ensuring that all due process and legal requirements have been met? 301-76.101 Section 301-76.101 Public Contracts and Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES... that all due process and legal requirements have been met? You are responsible for ensuring that all...

  15. Process Reengineering for Quality Improvement in ICU Based on Taylor's Management Theory.

    PubMed

    Tao, Ziqi

    2015-06-01

    Using methods including questionnaire-based surveys and control analysis, we analyzed the improvements in ICU rescue efficiency, service quality, and patient satisfaction at Xuzhou Central Hospital after the implementation of fine management, with an attempt to further introduce the concept of fine management and support brand building. Originating in Taylor's "Theory of Scientific Management" (1982), fine management uses programmed, standardized, digitalized, and informational approaches to ensure that each unit of an organization runs with great accuracy, high efficiency, strong coordination, and sustained duration (Wang et al., Fine Management, 2007). The nature of fine management is a process that breaks strategy and goals down and executes them. Strategic planning takes place at every part of the process. Fine management demonstrates that everybody has a role to play in the management process, every area must be examined through the management process, and everything has to be managed (Zhang et al., The Experience of Hospital Nursing Precise Management, 2006). In other words, this kind of management theory demands that all people be involved in the entire process (Liu and Chen, Med Inf, 2007). As public hospital reform becomes more widespread, it is imperative to "build a unified and efficient public hospital management system" and "improve the quality of medical services" (Guidelines on the Pilot Reform of Public Hospitals, 2010). The execution of fine management is important in optimizing the medical process, improving medical services and building a prestigious hospital brand.

  16. Uncertainty management, spatial and temporal reasoning, and validation of intelligent environmental decision support systems

    USGS Publications Warehouse

    Sànchez-Marrè, Miquel; Gilbert, Karina; Sojda, Rick S.; Steyer, Jean Philippe; Struss, Peter; Rodríguez-Roda, Ignasi; Voinov, A.A.; Jakeman, A.J.; Rizzoli, A.E.

    2006-01-01

    There are inherent open problems that arise when developing and running Intelligent Environmental Decision Support Systems (IEDSS), and several of them appear during daily operation. The uncertainty of the data being processed is intrinsic to the environmental system, which is monitored by several on-line sensors and off-line data sources. Thus, anomalous data values at the data-gathering level, or uncertain reasoning at later levels such as diagnosis, decision support or planning, can lead the environmental process into unsafe critical operation states. At the diagnosis, decision support or planning level, spatial reasoning, temporal reasoning, or both can influence the reasoning processes undertaken by the IEDSS. Most environmental systems must take into account the spatial relationships between the target environmental area and nearby environmental areas, and the temporal relationships between the current state and past states of the environmental system, in order to make accurate and reliable assertions for use in the diagnosis, decision support or planning process. Finally, a related issue is crucial: are the decisions proposed by the IEDSS really reliable and safe? Are we sure about the goodness and performance of the proposed solutions? How can we ensure a correct evaluation of the IEDSS? The main goal of this paper is to analyse these four issues, review possible approaches and techniques to cope with them, and study new trends for future research within the IEDSS field.

  17. Power Consumption Analysis of Operating Systems for Wireless Sensor Networks

    PubMed Central

    Lajara, Rafael; Pelegrí-Sebastiá, José; Perez Solano, Juan J.

    2010-01-01

    In this paper four wireless sensor network operating systems are compared in terms of power consumption. The analysis takes into account the most common operating systems—TinyOS v1.0, TinyOS v2.0, Mantis and Contiki—running on Tmote Sky and MICAz devices. With the objective of ensuring a fair evaluation, a benchmark composed of four applications has been developed, covering the most typical tasks that a Wireless Sensor Network performs. The results show the instant and average current consumption of the devices during the execution of these applications. The experimental measurements provide a good insight into the power mode in which the device components are running at every moment, and they can be used to compare the performance of different operating systems executing the same tasks. PMID:22219688

  18. Power consumption analysis of operating systems for wireless sensor networks.

    PubMed

    Lajara, Rafael; Pelegrí-Sebastiá, José; Perez Solano, Juan J

    2010-01-01

    In this paper four wireless sensor network operating systems are compared in terms of power consumption. The analysis takes into account the most common operating systems--TinyOS v1.0, TinyOS v2.0, Mantis and Contiki--running on Tmote Sky and MICAz devices. With the objective of ensuring a fair evaluation, a benchmark composed of four applications has been developed, covering the most typical tasks that a Wireless Sensor Network performs. The results show the instant and average current consumption of the devices during the execution of these applications. The experimental measurements provide a good insight into the power mode in which the device components are running at every moment, and they can be used to compare the performance of different operating systems executing the same tasks.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vandersall, K S; Tarver, C M; Garcia, F

    Shock initiation experiments on the HMX based explosives LX-10 (95% HMX, 5% Viton by weight) and LX-07 (90% HMX, 10% Viton by weight) were performed to obtain in-situ pressure gauge data, run-distance-to-detonation thresholds, and Ignition and Growth modeling parameters. A 101 mm diameter propellant driven gas gun was utilized to initiate the explosive samples with manganin piezoresistive pressure gauge packages placed between sample slices. The run-distance-to-detonation points on the Pop-plot for these experiments and prior experiments on another HMX based explosive, LX-04 (85% HMX, 15% Viton by weight), will be shown, discussed, and compared as a function of the binder content. This parameter set will provide additional information to ensure accurate code predictions for safety scenarios involving HMX explosives with different binder contents.

  20. A sub-ensemble theory of ideal quantum measurement processes

    NASA Astrophysics Data System (ADS)

    Allahverdyan, Armen E.; Balian, Roger; Nieuwenhuizen, Theo M.

    2017-01-01

    In order to elucidate the properties currently attributed to ideal measurements, one must explain how the concept of an individual event with a well-defined outcome may emerge from quantum theory, which deals with statistical ensembles, and how different runs issued from the same initial state may end up with different final states. This so-called "measurement problem" is tackled with two guidelines. On the one hand, the dynamics of the macroscopic apparatus A coupled to the tested system S is described mathematically within a standard quantum formalism, where "q-probabilities" remain devoid of interpretation. On the other hand, interpretative principles, aimed to be minimal, are introduced to account for the expected features of ideal measurements. Most of the five principles stated here, which relate the quantum formalism to physical reality, are straightforward and refer to macroscopic variables. The process can be identified with a relaxation of S + A to thermodynamic equilibrium, not only for a large ensemble E of runs but even for its sub-ensembles. The different mechanisms of quantum statistical dynamics that ensure these types of relaxation are exhibited, and the required properties of the Hamiltonian of S + A are indicated. The additional theoretical information provided by the study of sub-ensembles removes Schrödinger's quantum ambiguity of the final density operator for E, which hinders its direct interpretation, and brings out a commutative behaviour of the pointer observable at the final time. The latter property supports the introduction of a last interpretative principle, needed to switch from the statistical ensembles and sub-ensembles described by quantum theory to individual experimental events. It amounts to identifying some formal "q-probabilities" with ordinary frequencies, but only those which refer to the final indications of the pointer. The desired properties of ideal measurements, in particular the uniqueness of the result for each individual run of the ensemble and von Neumann's reduction, are thereby recovered with economic interpretations. The status of Born's rule involving both A and S is re-evaluated, and contextuality of quantum measurements is made obvious.

  1. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data and models is needed, allowing scientists to focus effort on improving models. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed, cross-platform computing environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing, similar to SETI@Home, has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
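
    Because each calibration run needs no inter-process communication and returns only a small summary statistic, the loop is embarrassingly parallel. A minimal sketch follows; the dummy model, random search strategy, and all names are illustrative assumptions, not the prototype described above.

```python
import random
from multiprocessing import Pool

def run_model(params):
    """Stand-in for a single calibration run: accept one parameter set and
    return a small summary statistic (here a dummy quadratic misfit)."""
    a, b = params
    return {"params": params, "rmse": (a - 1.3) ** 2 + (b - 0.7) ** 2}

def auto_calibrate(n_runs=10_000, workers=8, seed=42):
    """Map parameter sets to summary statistics with a process pool and keep
    the best-fitting set."""
    rng = random.Random(seed)
    samples = [(rng.uniform(0.0, 3.0), rng.uniform(0.0, 2.0)) for _ in range(n_runs)]
    with Pool(workers) as pool:
        results = pool.map(run_model, samples)
    return min(results, key=lambda r: r["rmse"])

if __name__ == "__main__":
    print(auto_calibrate(n_runs=1000, workers=4))
```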

  2. Diagnostic Value of Run Chart Analysis: Using Likelihood Ratios to Compare Run Chart Rules on Simulated Data Series

    PubMed Central

    Anhøj, Jacob

    2015-01-01

    Run charts are widely used in healthcare improvement, but there is little consensus on how to interpret them. The primary aim of this study was to evaluate and compare the diagnostic properties of different sets of run chart rules. A run chart is a line graph of a quality measure over time. The main purpose of the run chart is to detect process improvement or process degradation, which will turn up as non-random patterns in the distribution of data points around the median. Non-random variation may be identified by simple statistical tests including the presence of unusually long runs of data points on one side of the median or if the graph crosses the median unusually few times. However, there is no general agreement on what defines “unusually long” or “unusually few”. Other tests of questionable value are frequently used as well. Three sets of run chart rules (Anhoej, Perla, and Carey rules) have been published in peer reviewed healthcare journals, but these sets differ significantly in their sensitivity and specificity to non-random variation. In this study I investigate the diagnostic values expressed by likelihood ratios of three sets of run chart rules for detection of shifts in process performance using random data series. The study concludes that the Anhoej rules have good diagnostic properties and are superior to the Perla and the Carey rules. PMID:25799549
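
    The two signals the abstract refers to are easy to compute from a data series. The sketch below counts the longest run on one side of the median and the number of median crossings; the cut-off values that turn these counts into a signal of non-random variation are left open, because the Anhoej, Perla, and Carey rule sets define them differently.

    ```python
    # Minimal sketch of the two basic run chart tests described above:
    # longest run on one side of the median and number of median crossings.
    from statistics import median

    def run_chart_signals(values):
        med = median(values)
        sides = [1 if v > med else -1 for v in values if v != med]  # points on the median are skipped
        if len(sides) < 2:
            return len(sides), 0
        longest_run, current_run, crossings = 1, 1, 0
        for prev, curr in zip(sides, sides[1:]):
            if curr == prev:
                current_run += 1
                longest_run = max(longest_run, current_run)
            else:
                current_run = 1
                crossings += 1
        return longest_run, crossings

    data = [5, 7, 6, 8, 9, 9, 10, 11, 10, 12, 11, 13]   # illustrative measurements
    longest, crossings = run_chart_signals(data)
    # Whether these counts are "unusually long" or "unusually few" depends on the rule
    # set (Anhoej, Perla, or Carey) and on the number of useful observations.
    print(longest, crossings)
    ```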

  3. Analyses of requirements for computer control and data processing experiment subsystems. Volume 2: ATM experiment S-056 image data processing system software development

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The IDAPS (Image Data Processing System) is a user-oriented, computer-based language and control system which provides a framework or standard for implementing image data processing applications, simplifies set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.

  4. Effects of long-term voluntary exercise on learning and memory processes: dependency of the task and level of exercise.

    PubMed

    García-Capdevila, Sílvia; Portell-Cortés, Isabel; Torras-Garcia, Meritxell; Coll-Andreu, Margalida; Costa-Miserachs, David

    2009-09-14

    The effect of long-term voluntary exercise (running wheel) on anxiety-like behaviour (plus maze and open field) and on learning and memory processes (object recognition and two-way active avoidance) was examined in Wistar rats. Because major individual differences in running wheel behaviour were observed, the data were analysed considering the exercising animals both as a whole and grouped according to the time spent in the running wheel (low, high, and very-high running). Although some variables related to anxiety-like behaviour seem to reflect an anxiogenic-compatible effect, the complete set of variables could be interpreted as an enhancement of defensive and risk-assessment behaviours in exercised animals, without major differences depending on the exercise level. Effects on learning and memory processes were dependent on the task and the level of exercise. Two-way avoidance was not affected in either the acquisition or the retention session, whereas retention of the object recognition task was affected. In this latter task, an enhancement was observed in low-running subjects and an impairment in high- and very-high-running animals.

  5. Keeping waived tests simple.

    PubMed

    2004-01-01

    Laboratories performing waived testing must follow the manufacturer's instructions as well as good laboratory practices to ensure that test results are reliable. Four things to concentrate on to maximize the performance and reliability of waived tests are to: 1. Read and follow the information found in the package inserts. 2. Follow the manufacturer's recommendations for running quality control. 3. Train staff members to perform tests correctly. 4. Follow established policies and procedures for patient testing in the practice.

  6. Processing and Quality Monitoring for the ATLAS Tile Hadronic Calorimeter Data

    NASA Astrophysics Data System (ADS)

    Burghgrave, Blake; ATLAS Collaboration

    2017-10-01

    An overview is presented of Data Processing and Data Quality (DQ) Monitoring for the ATLAS Tile Hadronic Calorimeter. Calibration runs are monitored from a data quality perspective and used as a cross-check for physics runs. Data quality in physics runs is monitored extensively and continuously. Any problems are reported and immediately investigated. The DQ efficiency achieved was 99.6% in 2012 and 100% in 2015, after the detector maintenance in 2013-2014. Changes to detector status or calibrations are entered into the conditions database (DB) during a brief calibration loop between the end of a run and the beginning of bulk processing of data collected in it. Bulk processed data are reviewed and certified for the ATLAS Good Run List if no problem is detected. Experts maintain the tools used by DQ shifters and the calibration teams during normal operation, and prepare new conditions for data reprocessing and Monte Carlo (MC) production campaigns. Conditions data are stored in 3 databases: Online DB, Offline DB for data and a special DB for Monte Carlo. Database updates can be performed through a custom-made web interface.

  7. Commissioning and Operation of a Cryogenic Target at HI γS

    NASA Astrophysics Data System (ADS)

    Kendellen, David; Compton@HIγ Collaboration

    2016-09-01

    We have developed a cryogenic target for use at the High Intensity γ-ray Source (HI γS). The target system is able to liquefy helium-4 (LHe) at 4 K, hydrogen (LH2) at 20 K, or deuterium (LD2) at 23 K to fill a 0.3 L Kapton cell. Liquid temperatures and condenser pressures are recorded throughout each run in order to ensure that the target's areal density is known to 1%. A low-temperature valve enables cycling between full and empty modes in less than 15 minutes. The target is being utilized in a series of experiments which probe the electromagnetic polarizabilities of the nucleon by Compton scattering high-energy photons from the liquid and detecting them with the HI γS NaI Detector Array (HINDA). During a 50-hour-long commissioning run last fall, the target held LHe at 3.17 K, followed by a 300-hour-long production run this spring with LD2 at 23.9 K. The design of the target will be presented and its performance during these two runs will be discussed. Work supported by US Department of Energy Contracts DE-FG02-97ER41033, DE-FG02-06ER41422, and DE-SCOO0536.

  8. Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis

    NASA Technical Reports Server (NTRS)

    Hanson, J. M.; Beard, B. B.

    2010-01-01

    This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
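
    As a loose illustration of the basic Monte Carlo workflow the TP describes (sample the input distributions, run the simulation many times, then post-process the outputs against a requirement), here is a generic sketch; the toy "vehicle" model, the requirement threshold, and the distributions are assumptions for illustration, not values from the publication.

    ```python
    # Generic Monte Carlo requirement-verification sketch (toy model, assumed numbers).
    import random

    def simulate_flight(wind_bias, engine_isp_error):
        """Stand-in for a launch-vehicle simulation: returns injection-altitude error (km)."""
        return 1.5 * wind_bias + 40.0 * engine_isp_error + random.gauss(0.0, 0.2)

    N_RUNS = 10_000
    REQUIREMENT_KM = 2.0            # assumed allowable altitude error
    failures = 0
    for _ in range(N_RUNS):
        wind = random.gauss(0.0, 1.0)          # sampled input distributions
        isp = random.gauss(0.0, 0.01)
        if abs(simulate_flight(wind, isp)) > REQUIREMENT_KM:
            failures += 1

    # Post-processing: estimated probability of violating the requirement.
    print(f"estimated failure probability: {failures / N_RUNS:.4f}")
    ```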

  9. One-family walking technicolor in light of LHC Run II

    NASA Astrophysics Data System (ADS)

    Matsuzaki, Shinya

    2017-12-01

    The LHC Higgs can be identified as the technidilaton, a composite scalar arising as a pseudo Nambu-Goldstone boson for the spontaneous breaking of scale symmetry in walking technicolor. One interesting candidate for walking technicolor is QCD with a large number of fermion flavors, including the one-family model with eight fermion flavors. The smallness of the technidilaton mass can be ensured by the generic walking feature, Miransky scaling, and the presence of the “anti-Veneziano limit” characteristic of the large-flavor walking scenario. To tell the standard-model Higgs from the technidilaton, one needs to wait for the precise estimate of the Higgs couplings to the standard model particles, which is expected at the ongoing LHC Run II. In this talk the technidilaton phenomenology is summarized in comparison with the LHC Run-I data, with special emphasis placed on the presence of the anti-Veneziano limit supporting the lightness of the technidilaton. Besides the technidilaton, walking technicolor predicts a rich particle spectrum, such as technipions and technirho mesons, arising as composite particles formed by technifermions. The LHC phenomenology of those technihadrons and their discovery channels, which are smoking guns of walking technicolor accessible at the LHC Run II, is also discussed.

  10. Telerobotic Surgery: An Intelligent Systems Approach to Mitigate the Adverse Effects of Communication Delay. Chapter 4

    NASA Technical Reports Server (NTRS)

    Cardullo, Frank M.; Lewis, Harold W., III; Panfilov, Peter B.

    2007-01-01

    An extremely innovative approach has been presented, which is to have the surgeon operate through a simulator running in real time, augmented with an intelligent controller component, to improve the safety and efficiency of a remotely conducted operation. The use of a simulator enables the surgeon to operate in a virtual environment free from the impediments of telecommunication delay. The simulator functions as a predictor, and periodically the simulator state is corrected with truth data. Three major research areas must be explored in order to achieve the objectives: the simulator as predictor, image processing, and intelligent control. Each is equally necessary for the success of the project, and each involves a significant intelligent component. These are diverse, interdisciplinary areas of investigation, thereby requiring a highly coordinated effort by all the members of our team to ensure an integrated system. The following is a brief discussion of those areas. Simulator as a predictor: The delays encountered in remote robotic surgery will be greater than any encountered in human-machine systems analysis, with the possible exception of remote operations in space. Therefore, novel compensation techniques will be developed, including the development of the real-time simulator, which is at the heart of our approach. The simulator will present real-time, stereoscopic images and artificial haptic stimuli to the surgeon. Image processing: Because of the delay and the possibility of insufficient bandwidth, a high level of novel image processing is necessary. This image processing will include several innovative aspects, including image interpretation, video-to-graphical conversion, texture extraction, geometric processing, image compression, and image generation at the surgeon station. Intelligent control: Since the approach we propose is in a sense predictor based, albeit with a very sophisticated predictor, a controller which not only optimizes end-effector trajectory but also avoids error is essential. We propose to investigate two different approaches to the controller design: one employs an optimal controller based on modern control theory; the other involves soft computing techniques, i.e., fuzzy logic, neural networks, genetic algorithms, and hybrids of these.
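
    A generic sketch of the predict-then-correct idea mentioned above (the local simulator acts as predictor and is periodically corrected with delayed truth data) is given below; the one-dimensional state model, the blending factor, and the arrival pattern of truth packets are invented for illustration and are not the chapter's system.

    ```python
    # Generic predict-and-periodically-correct sketch (invented 1-D state, not the chapter's system).
    class DelayCompensatingPredictor:
        def __init__(self, position=0.0, velocity=0.0):
            self.position = position      # locally predicted (simulator) state
            self.velocity = velocity

        def predict(self, dt):
            """Advance the local simulation so the operator sees an undelayed state."""
            self.position += self.velocity * dt

        def correct(self, true_position, blend=0.5):
            """Blend in delayed truth data whenever it arrives from the remote site."""
            self.position += blend * (true_position - self.position)

    sim = DelayCompensatingPredictor(velocity=1.0)
    for step in range(10):
        sim.predict(dt=0.1)
        if step % 5 == 4:                 # truth packets arrive only occasionally
            sim.correct(true_position=0.1 * (step + 1) * 0.9)
    print(round(sim.position, 3))
    ```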

  11. Advances in industrial biopharmaceutical batch process monitoring: Machine-learning methods for small data problems.

    PubMed

    Tulsyan, Aditya; Garvin, Christopher; Ündey, Cenk

    2018-04-06

    Biopharmaceutical manufacturing comprises multiple distinct processing steps that require effective and efficient monitoring of many variables simultaneously in real-time. State-of-the-art real-time multivariate statistical batch process monitoring (BPM) platforms have been in use in recent years to ensure comprehensive monitoring is in place as a complementary tool for continued process verification to detect weak signals. This article addresses a longstanding, industry-wide problem in BPM, referred to as the "Low-N" problem, wherein a product has a limited production history. The current best industrial practice to address the Low-N problem is to switch from a multivariate to a univariate BPM until sufficient product history is available to build and deploy a multivariate BPM platform. Every batch run without a robust multivariate BPM platform poses a risk of not detecting potential weak signals developing in the process that might have an impact on process and product performance. In this article, we propose an approach to solve the Low-N problem by generating an arbitrarily large number of in silico batches through a combination of hardware exploitation and machine-learning methods. To the best of the authors' knowledge, this is the first article to provide a solution to the Low-N problem in biopharmaceutical manufacturing using machine-learning methods. Several industrial case studies from bulk drug substance manufacturing are presented to demonstrate the efficacy of the proposed approach for BPM under various Low-N scenarios. © 2018 Wiley Periodicals, Inc.
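
    The abstract does not detail its in silico batch generator, but the general idea of augmenting a small batch history with simulated batches can be sketched as follows; the Gaussian resampling and Hotelling-style limit used here are only stand-ins for the machine-learning methods the authors describe, and all numbers are invented.

    ```python
    # Stand-in sketch for a "Low-N" scenario: augment a few historical batches with
    # synthetic ones drawn from their estimated mean and covariance (not the paper's method).
    import numpy as np

    rng = np.random.default_rng(0)
    # Suppose only N = 4 historical batches, each summarized by 6 process variables.
    historical = rng.normal(loc=[7.0, 37.0, 50.0, 1.2, 0.8, 300.0],
                            scale=[0.1, 0.5, 2.0, 0.05, 0.02, 10.0],
                            size=(4, 6))

    mean = historical.mean(axis=0)
    cov = np.cov(historical, rowvar=False) + 1e-6 * np.eye(6)   # regularize for stability

    # Generate an arbitrarily large number of in silico batches.
    in_silico = rng.multivariate_normal(mean, cov, size=1000)

    # A multivariate monitoring limit (e.g., a Hotelling-type T^2 statistic) can then be
    # trained on the augmented set instead of falling back to univariate charts.
    inv_cov = np.linalg.inv(cov)
    t2 = np.einsum("ij,jk,ik->i", in_silico - mean, inv_cov, in_silico - mean)
    print("95th percentile T^2 limit:", np.percentile(t2, 95))
    ```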

  12. Design of two-column batch-to-batch recirculation to enhance performance in ion-exchange chromatography.

    PubMed

    Persson, Oliver; Andersson, Niklas; Nilsson, Bernt

    2018-01-05

    Preparative liquid chromatography is a separation technique widely used in the manufacturing of fine chemicals and pharmaceuticals. A major drawback of the traditional single-column batch chromatography step is the trade-off between product purity and process performance. Recirculation of impure product can be utilized to make the trade-off more favorable. The aim of the present study was to investigate the use of a two-column batch-to-batch recirculation process step to increase performance compared to single-column batch chromatography at a high purity requirement. The separation of a ternary protein mixture on ion-exchange chromatography columns was used to evaluate the proposed process. The investigation comprised modelling and simulation of the process step, experimental validation, and optimization of the simulated process. In the presented case, the yield increases from 45.4% to 93.6% and the productivity increases 3.4 times compared to the performance of a batch run for a nominal case. A rapid build-up of product concentration can be seen during the first cycles, before the process reaches a cyclic steady-state with reoccurring concentration profiles. The optimization of the simulation model predicts that the recirculated salt can be used as a flying start of the elution, which would enhance the process performance. The proposed process is more complex than a batch process, but may improve separation performance, especially while operating at cyclic steady-state. The recirculation of impure fractions reduces product losses and ensures separation of the product to a high degree of purity. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Dynamic Parameters Variability: Time Interval Interference on Ground Reaction Force During Running.

    PubMed

    Pennone, Juliana; Mezêncio, Bruno; Amadio, Alberto C; Serrão, Júlio C

    2016-04-01

    The aim of this study was to determine the effect of the time between measures on ground reaction force variability during running; 15 healthy men (age = 23.8 ± 3.7 years; weight = 72.8 ± 7.7 kg; height = 174.3 ± 8.4 cm) performed two trials of running for 45 minutes at 9 km/hr, separated by an interval of seven days. The ground reaction forces were recorded every 5 minutes. The coefficients of variation of indicative parameters of the ground reaction forces for each condition were compared. The coefficients of variation of the ground reaction force curves analyzed between intervals and sessions were 21.9% and 21.48%, respectively. There was no significant difference in the ground reaction force parameters Fy1, tFy1, TC1, Imp50, Fy2, and tFy2 between intervals and sessions. Although the ground reaction force variables present a natural variability, this variability in intervals and in sessions remained consistent, ensuring high reliability in repeated measures designs. © The Author(s) 2016.

  14. Data Analysis for the LISA Pathfinder Mission

    NASA Technical Reports Server (NTRS)

    Thorpe, James Ira

    2009-01-01

    The LTP (LISA Technology Package) is the core part of the Laser Interferometer Space Antenna (LISA) Pathfinder mission. The main goal of the mission is to study the sources of any disturbances that perturb the motion of the freely-falling test masses from their geodesic trajectories, as well as to test various technologies needed for LISA. The LTP experiment is designed as a sequence of experimental runs in which the performance of the instrument is studied and characterized under different operating conditions. In order to best optimize subsequent experimental runs, each run must be promptly analysed to ensure that the following ones make best use of the available knowledge of the instrument. In order to do this, all analyses must be designed and tested in advance of the mission and have sufficient built-in flexibility to account for unexpected results or behaviour. To support this activity, a robust and flexible data analysis software package is also required. This poster presents two of the main components that make up the data analysis effort: the data analysis software and the mock-data challenges used to validate analysis procedures and experiment designs.

  15. Data Driven Smart Proxy for CFD Application of Big Data Analytics & Machine Learning in Computational Fluid Dynamics, Report Two: Model Building at the Cell Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ansari, A.; Mohaghegh, S.; Shahnam, M.

    To ensure the usefulness of simulation technologies in practice, their credibility needs to be established with Uncertainty Quantification (UQ) methods. In this project, a smart proxy is introduced to significantly reduce the computational cost of conducting the large number of multiphase CFD simulations typically required for non-intrusive UQ analysis. Smart proxies for CFD models are developed using the pattern recognition capabilities of Artificial Intelligence (AI) and Data Mining (DM) technologies. Several CFD simulation runs with different inlet air velocities for a rectangular fluidized bed are used to create a smart CFD proxy that is capable of replicating the CFD results for the entire geometry and inlet velocity range. The smart CFD proxy is validated with blind CFD runs (CFD runs that have not played any role during the development of the smart CFD proxy). The developed and validated smart CFD proxy generates its results in seconds with reasonable error (less than 10%). Upon completion of this project, UQ studies that rely on hundreds or thousands of smart CFD proxy runs can be accomplished in minutes. The following figure demonstrates a validation example (blind CFD run) showing the results from the MFiX simulation and the smart CFD proxy for pressure distribution across a fluidized bed at a given time-step (the layer number corresponds to the vertical location in the bed).
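
    In the same spirit, a data-driven proxy can be sketched as a regression model trained on completed CFD runs and checked against blind runs; the random-forest regressor and the synthetic "CFD" data below are placeholders for the AI/data-mining machinery described in the report.

    ```python
    # Placeholder sketch of a "smart proxy": learn CFD outputs from run inputs, then
    # validate on blind runs that were never used for training.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    # Pretend training data: (inlet_velocity, cell_height) -> pressure at that cell.
    X_train = rng.uniform([0.5, 0.0], [3.0, 1.0], size=(500, 2))
    y_train = 100.0 - 60.0 * X_train[:, 1] + 5.0 * X_train[:, 0] + rng.normal(0, 0.5, 500)

    proxy = RandomForestRegressor(n_estimators=200, random_state=0)
    proxy.fit(X_train, y_train)

    # "Blind" runs: inputs the proxy never saw during training.
    X_blind = rng.uniform([0.5, 0.0], [3.0, 1.0], size=(100, 2))
    y_blind = 100.0 - 60.0 * X_blind[:, 1] + 5.0 * X_blind[:, 0]
    rel_err = np.abs(proxy.predict(X_blind) - y_blind) / np.abs(y_blind)
    print(f"mean relative error on blind runs: {rel_err.mean():.2%}")
    ```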

  16. AutoMap User’s Guide 2012

    DTIC Science & Technology

    2012-06-11

    accompanied by supporting details. All: the entire text. Example dairyFarm.txt: "Ted runs a dairy farm. He milks the cows, runs the office, and ..." ... dairy, farm, He, milks, the, cows, runs, the, office, and, cleans, the, barn. Property List: Number of Characters, 79; Number of Clauses, 4; Number of ... cows,runs,1; dairy,farm,1; farm,He,1; milks,the,1; office,and,1; runs,a,1; runs,the,1; the,barn,1; the,cows,1; the,office,1

  17. Monitoring of the data processing and simulated production at CMS with a web-based service: the Production Monitoring Platform (pMp)

    NASA Astrophysics Data System (ADS)

    Franzoni, G.; Norkus, A.; Pol, A. A.; Srimanobhas, N.; Walker, J.

    2017-10-01

    Physics analysis at the Compact Muon Solenoid requires both the production of simulated events and processing of the data collected by the experiment. Since the end of the LHC Run-I in 2012, CMS has produced over 20 billion simulated events, from 75 thousand processing requests organised in one hundred different campaigns. These campaigns emulate different configurations of collision events, the detector, and LHC running conditions. In the same time span, sixteen data processing campaigns have taken place to reconstruct different portions of the Run-I and Run-II data with ever-improving algorithms and calibrations. The scale and complexity of the event simulation and processing, and the requirement that multiple campaigns must proceed in parallel, demand that a comprehensive, frequently updated and easily accessible monitoring be made available. The monitoring must serve both the analysts, who want to know which datasets will become available and when, and the central production teams in charge of submitting, prioritizing, and running the requests across the distributed computing infrastructure. The Production Monitoring Platform (pMp) web-based service was developed in 2015 to address those needs. It aggregates information from multiple services used to define, organize, and run the processing requests. Information is updated hourly using a dedicated elastic database, and the monitoring provides multiple configurable views to assess the status of single datasets as well as entire production campaigns. This contribution will describe the pMp development, the evolution of its functionalities, and one and a half years of operational experience.

  18. Improving overlay manufacturing metrics through application of feedforward mask-bias

    NASA Astrophysics Data System (ADS)

    Joubert, Etienne; Pellegrini, Joseph C.; Misra, Manish; Sturtevant, John L.; Bernhard, John M.; Ong, Phu; Crawshaw, Nathan K.; Puchalski, Vern

    2003-06-01

    Traditional run-to-run controllers that rely on highly correlated historical events to forecast process corrections have been shown to provide substantial benefit over manual control in the case of a fab that is primarily manufacturing high-volume, frequently running parts (i.e., DRAM, MPU, and similar operations). However, a limitation of the traditional controller emerges when it is applied to a fab whose work in process (WIP) is composed primarily of short-running, high part count products (typical of foundries and ASIC fabs). This limitation exists because there is a strong likelihood that each reticle has a unique set of process corrections different from other reticles at the same process layer. Further limitations exist when it is realized that each reticle is loaded and aligned differently on multiple exposure tools. A structural change in how the run-to-run controller manages the frequent reticle changes associated with the high part count environment has allowed breakthrough performance to be achieved. This breakthrough was made possible by the realization that: (1) reticle-sourced errors are highly stable over long periods of time, thus allowing them to be deconvolved from the day-to-day tool and process drifts; and (2) reticle-sourced errors can be modeled as a feedforward disturbance rather than as discriminators in defining and dividing process streams. In this paper, we show how to deconvolve the static (reticle) and dynamic (day-to-day tool and process) components from the overall error vector to better forecast feedback for existing products, as well as how to compute or learn these values for new product introductions or new tool startups. Manufacturing data will be presented to support this discussion, with some real-world success stories.
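
    A run-to-run correction with a feedforward reticle term can be caricatured as below: the dynamic tool/process drift is estimated with an EWMA on the feedback residual, while a static per-reticle offset is learned once and applied feedforward. The additive error model, the weight, and the numbers are assumptions for illustration, not the controller described in the paper.

    ```python
    # Caricature of feedback (EWMA on shared dynamic drift) plus feedforward (static
    # per-reticle offset). The additive error model and all numbers are assumed.
    LAMBDA = 0.3                      # EWMA weight for the dynamic (tool/process) drift
    dynamic_estimate = 0.0            # shared across all reticles
    reticle_offsets = {}              # static offsets, learned once per reticle

    def correction_for(reticle_id):
        """Process correction applied to the next lot exposed with this reticle."""
        return dynamic_estimate + reticle_offsets.get(reticle_id, 0.0)

    def update(reticle_id, measured_error):
        """After metrology, split the uncorrected error into reticle and drift parts."""
        global dynamic_estimate
        if reticle_id not in reticle_offsets:
            # First lot of this reticle: attribute the residual to the reticle itself.
            reticle_offsets[reticle_id] = measured_error - dynamic_estimate
        else:
            residual = measured_error - reticle_offsets[reticle_id]
            dynamic_estimate = LAMBDA * residual + (1 - LAMBDA) * dynamic_estimate

    # Two short-running products sharing one exposure tool (uncorrected overlay errors, nm):
    for reticle, measured_error in [("A", 5.0), ("B", -3.0), ("A", 5.4), ("B", -2.8)]:
        update(reticle, measured_error)
        print(reticle, "correction for next lot:", round(correction_for(reticle), 2))
    ```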

  19. A three-level support method for smooth switching of the micro-grid operation model

    NASA Astrophysics Data System (ADS)

    Zong, Yuanyang; Gong, Dongliang; Zhang, Jianzhou; Liu, Bin; Wang, Yun

    2018-01-01

    Smooth switching of a micro-grid between the grid-connected operation mode and the off-grid operation mode is one of the key technologies to ensure it runs flexibly and efficiently. The basic control strategy and the switching principle of the micro-grid are analyzed in this paper. The reasons for the fluctuations of the voltage and the frequency in the switching process are analyzed from the viewpoints of power balance and control strategy, and the operation mode switching strategy has been improved accordingly. From the three aspects of the controller's current inner-loop reference signal tracking, voltage outer-loop control strategy optimization, and micro-grid energy balance management, a three-level strategy for smooth switching of the micro-grid operation mode is proposed. Finally, it is proved by simulation that the proposed control strategy can make the switching process smooth and stable, and that the fluctuation problem of the voltage and frequency is effectively mitigated.

  20. jTracker and Monte Carlo Comparison

    NASA Astrophysics Data System (ADS)

    Selensky, Lauren; SeaQuest/E906 Collaboration

    2015-10-01

    SeaQuest is designed to observe the characteristics and behavior of 'sea quarks' in a proton by reconstructing them from the subatomic particles produced in a collision. The 120 GeV beam from the Main Injector collides with a fixed target and then passes through a series of detectors which record information about the particles produced in the collision. However, this data becomes meaningful only after it has been processed, stored, analyzed, and interpreted. Several programs are involved in this process. jTracker (sqerp) reads wire or hodoscope hits and reconstructs the tracks of potential dimuon pairs from a run, and Geant4 Monte Carlo simulates dimuon production and background noise from the beam. During track reconstruction, an event must meet the criteria set by the tracker to be considered a viable dimuon pair; this ensures that relevant data is retained. As a check, a comparison between a new version of jTracker and Monte Carlo was made in order to see how accurately jTracker could reconstruct the events created by Monte Carlo. In this presentation, the results of this comparison and their potential effects on the programming will be shown. This work is supported by U.S. DOE MENP Grant DE-FG02-03ER41243.

  1. Testing the equivalence principle on a trampoline

    NASA Astrophysics Data System (ADS)

    Reasenberg, Robert D.; Phillips, James D.

    2001-07-01

    We are developing a Galilean test of the equivalence principle in which two pairs of test mass assemblies (TMA) are in free fall in a comoving vacuum chamber for about 0.9 s. The TMA are tossed upward, and the process repeats at 1.2 s intervals. Each TMA carries a solid quartz retroreflector and a payload mass of about one-third of the total TMA mass. The relative vertical motion of the TMA of each pair is monitored by a laser gauge working in an optical cavity formed by the retroreflectors. Single-toss precision of the relative acceleration of a single pair of TMA is 3.5×10⁻¹² g. The project goal of Δg/g = 10⁻¹³ can be reached in a single night's run, but repetition with altered configurations will be required to ensure the correction of systematic error to the nominal accuracy level. Because the measurements can be made quickly, we plan to study several pairs of materials.

  2. Localizer: fast, accurate, open-source, and modular software package for superresolution microscopy

    PubMed Central

    Duwé, Sam; Neely, Robert K.; Zhang, Jin

    2012-01-01

    We present Localizer, a freely available and open source software package that implements the computational data processing inherent to several types of superresolution fluorescence imaging, such as localization (PALM/STORM/GSDIM) and fluctuation imaging (SOFI/pcSOFI). Localizer delivers high accuracy and performance and comes with a fully featured and easy-to-use graphical user interface but is also designed to be integrated in higher-level analysis environments. Due to its modular design, Localizer can be readily extended with new algorithms as they become available, while maintaining the same interface and performance. We provide front-ends for running Localizer from Igor Pro, Matlab, or as a stand-alone program. We show that Localizer performs favorably when compared with two existing superresolution packages, and to our knowledge is the only freely available implementation of SOFI/pcSOFI microscopy. By dramatically improving the analysis performance and ensuring the easy addition of current and future enhancements, Localizer strongly improves the usability of superresolution imaging in a variety of biomedical studies. PMID:23208219

  3. The AMCE (AIST Managed Cloud Environment)

    NASA Astrophysics Data System (ADS)

    Cook, S.

    2017-12-01

    ESTO has developed and implemented the AIST Managed Cloud Environment (AMCE) to offer cloud computing services to SMD-funded PIs to conduct their project research. AIST will provide projects access to a cloud computing framework that incorporates NASA security, technical, and financial standards, on which project can freely store, run, and process data. Currently, many projects led by research groups outside of NASA do not have the awareness of requirements or the resources to implement NASA standards into their research, which limits the likelihood of infusing the work into NASA applications. Offering this environment to PIs allows them to conduct their project research using the many benefits of cloud computing. In addition to the well-known cost and time savings that it allows, it also provides scalability and flexibility. The AMCE facilitates infusion and end user access by ensuring standardization and security. This approach will ultimately benefit ESTO, the science community, and the research, allowing the technology developments to have quicker and broader applications.

  4. Image acquisition device of inspection robot based on adaptive rotation regulation of polarizer

    NASA Astrophysics Data System (ADS)

    Dong, Maoqi; Wang, Xingguang; Liang, Tao; Yang, Guoqing; Zhang, Chuangyou; Gao, Faqin

    2017-12-01

    An image processing device for an inspection robot with adaptive polarization adjustment is proposed. The device includes the inspection robot body, the image collecting mechanism, the polarizer, and the automatic polarizer actuating device. The image acquisition mechanism is arranged at the front of the inspection robot body for collecting equipment image data in the substation. The polarizer is fixed on the automatic actuating device and installed in front of the image acquisition mechanism, such that the optical axis of the camera passes perpendicularly through the polarizer and the polarizer rotates about the optical axis of the visible camera as its central axis. The simulation results show that the system solves the blurring problems caused by glare, reflection of light, and shadow, so that the robot can observe details of the running status of electrical equipment. Full coverage of the inspection robot's observation targets among the substation equipment is achieved, which ensures the safe operation of the substation equipment.

  5. ARIES: Acquisition of Requirements and Incremental Evolution of Specifications

    NASA Technical Reports Server (NTRS)

    Roberts, Nancy A.

    1993-01-01

    This paper describes a requirements/specification environment specifically designed for large-scale software systems. This environment is called ARIES (Acquisition of Requirements and Incremental Evolution of Specifications). ARIES provides assistance to requirements analysts for developing operational specifications of systems. This development begins with the acquisition of informal system requirements. The requirements are then formalized and gradually elaborated (transformed) into formal and complete specifications. ARIES provides guidance to the user in validating formal requirements by translating them into natural language representations and graphical diagrams. ARIES also provides ways of analyzing the specification to ensure that it is correct, e.g., testing the specification against a running simulation of the system to be built. Another important ARIES feature, especially when developing large systems, is the sharing and reuse of requirements knowledge. This leads to much less duplication of effort. ARIES combines all of its features in a single environment that makes the process of capturing a formal specification quicker and easier.

  6. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  7. Status of parallel Python-based implementation of UEDGE

    NASA Astrophysics Data System (ADS)

    Umansky, M. V.; Pankin, A. Y.; Rognlien, T. D.; Dimits, A. M.; Friedman, A.; Joseph, I.

    2017-10-01

    The tokamak edge transport code UEDGE has long used the code-development and run-time framework Basis. However, with support for Basis expected to terminate in the coming years, and with the advent of the modern numerical language Python, it has become desirable to move UEDGE to Python to ensure its long-term viability. Our new Python-based UEDGE implementation takes advantage of the portable build system developed for FACETS. The new implementation gives access to Python's graphical libraries and numerical packages for pre- and post-processing, and support for HDF5 simplifies exchanging data. The older serial version of UEDGE used the Newton-Krylov solver NKSOL for time-stepping. The renovated implementation uses backward Euler discretization with nonlinear solvers from PETSc, which has the promise to significantly improve the UEDGE parallel performance. We will report on an assessment of some of the extended UEDGE capabilities emerging in the new implementation, and will discuss future directions. Work performed for U.S. DOE by LLNL under contract DE-AC52-07NA27344.
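
    For readers unfamiliar with the time discretization mentioned above, a backward Euler step solves an implicit equation for the new state at each step, typically with a Newton iteration. The scalar ODE below is a generic illustration of that idea only; it is unrelated to the UEDGE equations or its PETSc solvers.

    ```python
    # Generic backward Euler step for dy/dt = f(y), solved with a Newton iteration.
    # The linear test ODE dy/dt = -5*y + 1 and the step size are illustrative only.
    def f(y):
        return -5.0 * y + 1.0

    def dfdy(y):
        return -5.0

    def backward_euler_step(y_n, h, tol=1e-12, max_iter=50):
        y = y_n                                   # initial Newton guess
        for _ in range(max_iter):
            g = y - y_n - h * f(y)                # residual of the implicit equation
            dg = 1.0 - h * dfdy(y)
            y_new = y - g / dg
            if abs(y_new - y) < tol:
                return y_new
            y = y_new
        return y

    y, h = 1.0, 0.1
    for _ in range(20):
        y = backward_euler_step(y, h)
    print(round(y, 6))                            # approaches the steady state y = 0.2
    ```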

  8. Measure Guideline: Optimizing the Configuration of Flexible Duct Junction Boxes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beach, R.; Burdick, A.

    2014-03-01

    This measure guideline offers additional recommendations to heating, ventilation, and air conditioning (HVAC) system designers for optimizing flexible duct, constant-volume HVAC systems using junction boxes within Air Conditioning Contractors of America (ACCA) Manual D guidance. IBACOS used computational fluid dynamics software to explore and develop guidance to better control the airflow effects of factors that may impact pressure losses within junction boxes among various design configurations. These recommendations can help to ensure that a system aligns more closely with the design and the occupants' comfort expectations. Specifically, the recommendations described herein show how to configure a rectangular box with four outlets, a triangular box with three outlets, metal wyes with two outlets, and multiple configurations for more than four outlets. Designers of HVAC systems, contractors who are fabricating junction boxes on site, and anyone using the ACCA Manual D process for sizing duct runs will find this measure guideline invaluable for more accurately minimizing pressure losses when using junction boxes with flexible ducts.

  9. Aircraft engine sensor fault diagnostics using an on-line OBEM update method.

    PubMed

    Liu, Xiaofeng; Xue, Naiyu; Yuan, Ye

    2017-01-01

    This paper proposed a method to update the on-line health reference baseline of the On-Board Engine Model (OBEM) to maintain the effectiveness of an in-flight aircraft sensor Fault Detection and Isolation (FDI) system, in which a Hybrid Kalman Filter (HKF) was incorporated. Generated from a rapid in-flight engine degradation, a large health condition mismatch between the engine and the OBEM can corrupt the performance of the FDI. Therefore, it is necessary to update the OBEM online when a rapid degradation occurs, but the FDI system will lose estimation accuracy if the estimation and update are running simultaneously. To solve this problem, the health reference baseline for a nonlinear OBEM was updated using the proposed channel controller method. Simulations based on the turbojet engine Linear-Parameter Varying (LPV) model demonstrated the effectiveness of the proposed FDI system in the presence of substantial degradation, and the channel controller can ensure that the update process finishes without interference from a single sensor fault.

  10. Aircraft engine sensor fault diagnostics using an on-line OBEM update method

    PubMed Central

    Liu, Xiaofeng; Xue, Naiyu; Yuan, Ye

    2017-01-01

    This paper proposed a method to update the on-line health reference baseline of the On-Board Engine Model (OBEM) to maintain the effectiveness of an in-flight aircraft sensor Fault Detection and Isolation (FDI) system, in which a Hybrid Kalman Filter (HKF) was incorporated. Generated from a rapid in-flight engine degradation, a large health condition mismatch between the engine and the OBEM can corrupt the performance of the FDI. Therefore, it is necessary to update the OBEM online when a rapid degradation occurs, but the FDI system will lose estimation accuracy if the estimation and update are running simultaneously. To solve this problem, the health reference baseline for a nonlinear OBEM was updated using the proposed channel controller method. Simulations based on the turbojet engine Linear-Parameter Varying (LPV) model demonstrated the effectiveness of the proposed FDI system in the presence of substantial degradation, and the channel controller can ensure that the update process finishes without interference from a single sensor fault. PMID:28182692

  11. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE PAGES

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...

    2015-07-14

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  12. Real-time acquisition and tracking system with multiple Kalman filters

    NASA Astrophysics Data System (ADS)

    Beard, Gary C.; McCarter, Timothy G.; Spodeck, Walter; Fletcher, James E.

    1994-07-01

    The design of a real-time, ground-based, infrared tracking system with proven field success in tracking boost vehicles through burnout is presented with emphasis on the software design. The system was originally developed to deliver relative angular positions during boost, and thrust termination time to a sensor fusion station in real-time. Autonomous target acquisition and angle-only tracking features were developed to ensure success under stressing conditions. A unique feature of the system is the incorporation of multiple copies of a Kalman filter tracking algorithm running in parallel in order to minimize run-time. The system is capable of updating the state vector for an object at measurement rates approaching 90 Hz. This paper will address the top-level software design, details of the algorithms employed, system performance history in the field, and possible future upgrades.
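
    As background for the parallel-Kalman-filter design described above, a single angle-only track update can be sketched as a standard predict/update cycle. The constant-velocity model, the noise values, and the 90 Hz rate below are illustrative assumptions, not the fielded system's parameters.

    ```python
    # Minimal constant-velocity Kalman filter for one angular channel (illustrative values).
    import numpy as np

    dt = 1.0 / 90.0                                  # ~90 Hz measurement rate
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition: [angle, angular rate]
    H = np.array([[1.0, 0.0]])                       # only the angle is measured
    Q = 1e-6 * np.eye(2)                             # process noise
    R = np.array([[1e-4]])                           # measurement noise

    x = np.array([[0.0], [0.0]])                     # initial state
    P = np.eye(2)                                    # initial covariance

    def kalman_step(x, P, z):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new angle measurement z
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    for k in range(90):                              # one second of simulated measurements
        z = np.array([[0.01 * k * dt + np.random.normal(0, 0.01)]])
        x, P = kalman_step(x, P, z)
    print(x.ravel())                                 # estimated [angle, angular rate]
    ```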

  13. The Design of Finite State Machine for Asynchronous Replication Protocol

    NASA Astrophysics Data System (ADS)

    Wang, Yanlong; Li, Zhanhuai; Lin, Wei; Hei, Minglei; Hao, Jianhua

    Data replication is a key way to design a disaster tolerance system and to achieve reliability and availability. It is difficult for a replication protocol to deal with diverse and complex environments, which means that data is often less well replicated than it ought to be. To reduce data loss and to optimize replication protocols, we (1) present a finite state machine, (2) run it to manage an asynchronous replication protocol, and (3) report a simple evaluation of the asynchronous replication protocol based on our state machine. We show that our state machine keeps the asynchronous replication protocol running in the proper state to the largest extent possible in the event of various failures. It can also help in building replication-based disaster tolerance systems that ensure business continuity.
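
    The paper's actual states and transitions are not listed in the abstract, so the sketch below is a generic event-driven state machine for an asynchronous replication link, with invented states (SYNCING, REPLICATING, DEGRADED) and events purely to illustrate the pattern.

    ```python
    # Generic finite state machine for an asynchronous replication link.
    # States and events are invented for illustration; they are not the paper's design.
    TRANSITIONS = {
        ("SYNCING", "initial_copy_done"): "REPLICATING",
        ("REPLICATING", "link_down"): "DEGRADED",
        ("REPLICATING", "write_backlog_overflow"): "DEGRADED",
        ("DEGRADED", "link_up"): "SYNCING",        # resynchronize before resuming
    }

    class ReplicationFSM:
        def __init__(self):
            self.state = "SYNCING"

        def handle(self, event):
            next_state = TRANSITIONS.get((self.state, event))
            if next_state is None:
                return self.state                   # ignore events that do not apply
            self.state = next_state
            return self.state

    fsm = ReplicationFSM()
    for event in ["initial_copy_done", "link_down", "link_up", "initial_copy_done"]:
        print(event, "->", fsm.handle(event))
    ```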

  14. Chandra X-ray Center Science Data Systems Regression Testing of CIAO

    NASA Astrophysics Data System (ADS)

    Lee, N. P.; Karovska, M.; Galle, E. C.; Bonaventura, N. R.

    2011-07-01

    The Chandra Interactive Analysis of Observations (CIAO) is a software system developed for the analysis of Chandra X-ray Observatory observations. An important component of a successful CIAO release is the repeated testing of the tools across various platforms to ensure consistent and scientifically valid results. We describe the procedures of the scientific regression testing of CIAO and the enhancements made to the testing system to increase the efficiency of run time and result validation.

  15. Influence of Mach Number and Dynamic Pressure on Cavity Tones and Freedrop Trajectories

    DTIC Science & Technology

    2014-03-27

    primary purpose is to ensure a steady flow of high pressure air from the compressors to the stagnation chamber. One side of the diaphragm is connected ... collected for 20 psi stagnation pressure due to insufficient run times, even at the increased compressor air pressure of 180 psi. Furthermore, the data from ...

  16. Urging the Government of Afghanistan, following a successful first round of the presidential election on April 5, 2014, to pursue a transparent, credible, and inclusive run-off presidential election on June 14, 2014, while ensuring the safety of voters, candidates, poll workers, and election observers.

    THOMAS, 113th Congress

    Rep. Grayson, Alan [D-FL-9]

    2014-05-28

    House - 06/09/2014: On motion to suspend the rules and agree to the resolution, as amended, agreed to by voice vote. Status: Agreed to in House.

  17. Laboratory Astrophysics Using a Microcalorimeter and Bragg Crystal Spectrometer on an Electron Beam Ion Trap

    NASA Technical Reports Server (NTRS)

    Brinton, John (Technical Monitor); Silver, Eric

    2005-01-01

    We completed modifications to the new microcalorimeter system dedicated for use on the EBIT at NIST, which included: 1) a redesign of the x-ray calibration source from a direct electron impact source to one that irradiates the microcalorimeter with fluorescent x-rays, so that the resulting calibration lines are free of bremsstrahlung background; and 2) significant improvements to the microcalorimeter electronic circuit to ensure long-term stability for lengthy experimental runs.

  18. Designing and Implementing an OVERFLOW Reader for ParaView and Comparing Performance Between Central Processing Units and Graphical Processing Units

    NASA Technical Reports Server (NTRS)

    Chawner, David M.; Gomez, Ray J.

    2010-01-01

    In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. There are many different tools that are used for running these simulations, and each one has its own pros and cons. Once these simulations are run, there needs to be software capable of visualizing the results in an appealing manner. Some of this software is open source, meaning that anyone can edit the source code to make modifications and distribute it to all other users in a future release. This is very useful, especially in this branch where many different tools are being used. File readers can be written to load any file format into a program, to ease the bridging from one tool to another. Programming such a reader requires knowledge of the file format that is being read as well as the equations necessary to obtain the derived values after loading. When running these CFD simulations, extremely large files are being loaded and values are being calculated. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) are usually used to render graphics for computers; however, in recent years, GPUs have been used for more generic applications because of the speed of these processors. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs are extended to run on GPUs, the amount of time they would require to complete would be much less. This would allow more simulations to be run in the same amount of time and possibly perform more complex computations.

  19. Development of in vitro models to demonstrate the ability of PecSys®, an in situ nasal gelling technology, to reduce nasal run-off and drip

    PubMed Central

    2013-01-01

    Many of the increasing number of intranasal products available for either local or systemic action can be considered sub-optimal, most notably where nasal drip or run-off give rise to discomfort/tolerability issues or reduced/variable efficacy. PecSys, an in situ gelling technology, contains low methoxy (LM) pectin which gels due to interaction with calcium ions present in nasal fluid. PecSys is designed to spray readily, only forming a gel on contact with the mucosal surface. The present study employed two in vitro models to confirm that gelling translates into a reduced potential for drip/run-off: (i) Using an inclined TLC plate treated with a simulated nasal electrolyte solution (SNES), mean drip length [±SD, n = 10] was consistently much shorter for PecSys (1.5 ± 0.4 cm) than non-gelling control (5.8 ± 1.6 cm); (ii) When PecSys was sprayed into a human nasal cavity cast model coated with a substrate containing a physiologically relevant concentration of calcium, PecSys solution was retained at the site of initial deposition with minimal redistribution, and no evidence of run-off/drip anteriorly or down the throat. In contrast, non-gelling control was significantly more mobile and consistently redistributed with run-off towards the throat. Conclusion: In both models PecSys significantly reduced the potential for run-off/drip, ensuring that more solution remained at the deposition site. In vivo, this enhancement of retention will provide optimum patient acceptability, modulate drug absorption and maximize the ability of drugs to be absorbed across the nasal mucosa and thus reduce variability in drug delivery. PMID:22803832

  20. Characteristics of process oils from HTI coal/plastics co-liquefaction runs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robbins, G.A.; Brandes, S.D.; Winschel, R.A.

    1995-12-31

    The objective of this project is to provide timely analytical support to DOE's liquefaction development effort. Specific objectives of the work reported here are presented. During a few operating periods of Run POC-2, HTI co-liquefied mixed plastics with coal, and tire rubber with coal. Although steady-state operation was not achieved during these brief test periods, the results indicated that a liquefaction plant could operate with these waste materials as feedstocks. CONSOL analyzed 65 process stream samples from coal-only and coal/waste portions of the run. Some results obtained from characterization of samples from the Run POC-2 coal/plastics operation are presented.

  1. Water depth effects on impact loading, kinematic and physiological variables during water treadmill running.

    PubMed

    Macdermid, Paul W; Wharton, Josh; Schill, Carina; Fink, Philip W

    2017-07-01

    The purpose of this study was to compare impact loading, kinematic and physiological responses to three different immersion depths (mid-shin, mid-thigh, and xiphoid process) while running at the same speed on a water-based treadmill. Participants (N=8) ran on a water treadmill at three depths for 3 min. Tri-axial accelerometers were used to identify running dynamics plus measures associated with impact loading rates, while heart rate data were logged to indicate physiological demand. Participants had greater peak impact accelerations (p<0.01), greater impact loading rates (p<0.0001), greater stride frequency (p<0.05), shorter stride length (p<0.01), and greater rate of acceleration development at toe-off (p<0.0001) for the mid-shin and mid-thigh compared to running immersed to the xiphoid process. Physiological effort determined by heart rate was also significantly less (p<0.0001) when running immersed to the xiphoid process. Water-immersed treadmill running above the waistline alters kinematics of gait, reduces variables associated with impact, while decreasing physiological demand compared to depths below the waistline. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Internal quality control: best practice.

    PubMed

    Kinns, Helen; Pitkin, Sarah; Housley, David; Freedman, Danielle B

    2013-12-01

    There is a wide variation in laboratory practice with regard to implementation and review of internal quality control (IQC). A poor approach can lead to a spectrum of scenarios from validation of incorrect patient results to over investigation of falsely rejected analytical runs. This article will provide a practical approach for the routine clinical biochemistry laboratory to introduce an efficient quality control system that will optimise error detection and reduce the rate of false rejection. Each stage of the IQC system is considered, from selection of IQC material to selection of IQC rules, and finally the appropriate action to follow when a rejection signal has been obtained. The main objective of IQC is to ensure day-to-day consistency of an analytical process and thus help to determine whether patient results are reliable enough to be released. The required quality and assay performance varies between analytes as does the definition of a clinically significant error. Unfortunately many laboratories currently decide what is clinically significant at the troubleshooting stage. Assay-specific IQC systems will reduce the number of inappropriate sample-run rejections compared with the blanket use of one IQC rule. In practice, only three or four different IQC rules are required for the whole of the routine biochemistry repertoire as assays are assigned into groups based on performance. The tools to categorise performance and assign IQC rules based on that performance are presented. Although significant investment of time and education is required prior to implementation, laboratories have shown that such systems achieve considerable reductions in cost and labour.
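
    To make the idea of assay-specific rule selection concrete, here is a generic sketch that applies two common control rules (a 1-3s rejection and a 2-2s rejection) to a day's QC results; the rule choices, target mean, and SD are illustrative assumptions, not recommendations from the article.

    ```python
    # Generic internal QC check: flag a run if a control exceeds mean ± 3 SD (1-3s)
    # or if two consecutive controls exceed the same mean ± 2 SD limit (2-2s).
    # Target mean/SD and the rule set are illustrative assumptions.
    def qc_evaluate(results, mean, sd):
        violations = []
        z = [(r - mean) / sd for r in results]
        for i, zi in enumerate(z):
            if abs(zi) > 3:
                violations.append((i, "1-3s"))
            if i > 0 and z[i - 1] > 2 and zi > 2:
                violations.append((i, "2-2s"))
            if i > 0 and z[i - 1] < -2 and zi < -2:
                violations.append((i, "2-2s"))
        return violations

    daily_controls = [5.1, 5.3, 4.9, 5.8, 6.0, 5.0]       # measured control values
    print(qc_evaluate(daily_controls, mean=5.0, sd=0.3))  # -> [(4, '1-3s'), (4, '2-2s')]
    ```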

  3. Block-Level Added Redundancy Explicit Authentication for Parallelized Encryption and Integrity Checking of Processor-Memory Transactions

    NASA Astrophysics Data System (ADS)

    Elbaz, Reouven; Torres, Lionel; Sassatelli, Gilles; Guillemin, Pierre; Bardouillet, Michel; Martinez, Albert

    The bus between the System on Chip (SoC) and the external memory is one of the weakest points of computer systems: an adversary can easily probe this bus in order to read private data (data confidentiality concern) or to inject data (data integrity concern). The conventional way to protect data against such attacks and to ensure data confidentiality and integrity is to implement two dedicated engines: one performing data encryption and another data authentication. This approach, while secure, prevents parallelizability of the underlying computations. In this paper, we introduce the concept of Block-Level Added Redundancy Explicit Authentication (BL-AREA) and we describe a Parallelized Encryption and Integrity Checking Engine (PE-ICE) based on this concept. BL-AREA and PE-ICE have been designed to provide an effective solution to ensure both security services while allowing for full parallelization on processor read and write operations and optimizing the hardware resources. Compared to standard encryption which ensures only confidentiality, we show that PE-ICE additionally guarantees code and data integrity for less than 4% of run-time performance overhead.

  4. Neural network-based run-to-run controller using exposure and resist thickness adjustment

    NASA Astrophysics Data System (ADS)

    Geary, Shane; Barry, Ronan

    2003-06-01

    This paper describes the development of a run-to-run control algorithm using a feedforward neural network, trained using the backpropagation training method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm - the exponentially weighted moving average (EWMA) and is shown to give superior prediction performance in simulations. The manufacturing implementation of the final neural network showed significantly improved process capability when compared to the case where no run-to-run control was utilised.
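
    For comparison, the EWMA baseline mentioned above can be written in a few lines: each new lot's metrology updates a weighted estimate that becomes the correction predicted for the next lot. The weight and the toy critical-dimension offsets below are illustrative only.

    ```python
    # EWMA run-to-run baseline: predict the next lot's offset from previous lot metrology.
    # The weight and critical-dimension numbers are illustrative only.
    def ewma_controller(measured_offsets, weight=0.3):
        estimate = 0.0
        predictions = []
        for offset in measured_offsets:
            predictions.append(estimate)                 # prediction used for this lot
            estimate = weight * offset + (1 - weight) * estimate
        return predictions

    # Measured CD offsets from target (nm) for successive lots:
    lots = [2.0, 1.8, 2.2, 1.5, 1.9]
    print([round(p, 2) for p in ewma_controller(lots)])  # -> [0.0, 0.6, 0.96, 1.33, 1.38]
    ```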

  5. An efficient way of layout processing based on calibre DRC and pattern matching for defects inspection application

    NASA Astrophysics Data System (ADS)

    Li, Helen; Lee, Robben; Lee, Tyzy; Xue, Teddy; Liu, Hermes; Wu, Hall; Wan, Qijian; Du, Chunshan; Hu, Xinyi; Liu, Zhengfang

    2018-03-01

    As technology advances, escalating layout design complexity and chip size make defect inspection more challenging than ever before. YE (Yield Enhancement) engineers are seeking an efficient strategy to ensure accuracy without sacrificing run time. A smart way is to set different resolutions for different pattern structures: for example, logic pattern areas get a higher scan resolution while dummy areas get a lower one, and SRAM areas may get yet another resolution. This can significantly reduce the scan processing time while accuracy does not suffer. Due to the limitations of the inspection equipment, the layout must be processed in order to output the Care Area marker in line with the requirements of the equipment; for instance, the marker shapes must be rectangles and the number of rectangles should be as small as possible. The challenge is how to select the different Care Areas by pattern structure, merge the areas efficiently, and then partition them into pieces of rectangular shape. This paper presents a solution based on Calibre DRC and Pattern Matching. Calibre equation-based DRC is a powerful layout processing engine, and Calibre Pattern Matching's automated visual capture capability enables designers to define these geometries as layout patterns and store them in libraries that can be re-used in multiple design layouts. Pattern Matching simplifies the description of very complex relationships between pattern shapes efficiently and accurately. Pattern Matching's true power is on display when it is integrated with a normal DRC deck. In this defect inspection application, we first run Calibre DRC to get the rule-based Care Area, then use Calibre Pattern Matching's automated pattern capture capability to capture the Care Area shapes that need a higher scan resolution, with a tunable pattern halo. In the pattern matching step, when the patterns are matched, a bounding box marker is output to identify the high-resolution area. The equation-based DRC and Pattern Matching effectively work together for the different scan phases.
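    A plain-Python illustration of the post-processing idea (not Calibre SVRF and not the authors' deck): matched-pattern bounding boxes are expanded by a halo and overlapping boxes are merged into a smaller set of rectangular Care Area markers; merging to bounding boxes is a simplification of the partitioning step described above.

    ```python
    # Plain-Python illustration (not Calibre SVRF): expand matched-pattern boxes
    # by a halo and merge overlapping ones into rectangular Care Area markers.
    def expand(box, halo):
        x1, y1, x2, y2 = box
        return (x1 - halo, y1 - halo, x2 + halo, y2 + halo)

    def overlaps(a, b):
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

    def merge(a, b):  # bounding box of two overlapping boxes
        return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

    def care_area_markers(pattern_boxes, halo=0.1):
        boxes = [expand(b, halo) for b in pattern_boxes]
        merged = True
        while merged:                       # iterate until no pair overlaps
            merged = False
            for i in range(len(boxes)):
                for j in range(i + 1, len(boxes)):
                    if overlaps(boxes[i], boxes[j]):
                        boxes[i] = merge(boxes[i], boxes[j])
                        del boxes[j]
                        merged = True
                        break
                if merged:
                    break
        return boxes

    print(care_area_markers([(0, 0, 1, 1), (1.05, 0, 2, 1), (5, 5, 6, 6)]))
    ```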

  6. Development of a distributed control system for TOTEM experiment using ASIO Boost C++ libraries

    NASA Astrophysics Data System (ADS)

    Cafagna, F.; Mercadante, A.; Minafra, N.; Quinto, M.; Radicioni, E.

    2014-06-01

    The main goals of the TOTEM Experiment at the LHC are the measurements of the elastic and total p-p cross sections and the studies of the diffractive dissociation processes. Those scientific objectives are achieved by using three tracking detectors symmetrically arranged around the interaction point called IP5. The control system is based on C++ software that allows the user, by means of a graphical interface, direct access to hardware and handling of device configuration. A first release of the software was designed as a monolithic block, with all functionalities merged together. Such an approach soon showed its limits, mainly poor reusability and maintainability of the source code, evident not only in the bug-fixing phase but also when one wants to extend functionalities or apply other modifications. This led to the decision to radically redesign the software, now based on the dialogue (message-passing) among separate building blocks. Thanks to the acquired extensibility, the software gained new features and is now a complete tool by which it is possible not only to configure different devices interfacing with a large subset of buses like I2C and VME, but also to perform data acquisition both for calibration and physics runs. Furthermore, the software lets the user set up a series of operations to be executed sequentially in order to handle complex procedures. To achieve maximum flexibility, the program units may be run either as a single process or as separate processes on different PCs which exchange messages over the network, thus allowing remote control of the system. Portability is ensured by the adoption of the ASIO (Asynchronous Input Output) library of Boost, a cross-platform suite of libraries which is a candidate to become part of the C++11 standard. We present the state of the art of this project and outline the future perspectives. In particular, we describe the system architecture and the message-passing scheme. We also report on the results obtained in a first complete test of the software, both as a single process and on two PCs.

  7. Tsunami Wave Run-up on a Vertical Wall in Tidal Environment

    NASA Astrophysics Data System (ADS)

    Didenkulova, Ira; Pelinovsky, Efim

    2018-04-01

    We solve analytically a nonlinear problem of shallow water theory for tsunami wave run-up on a vertical wall in a tidal environment. It is shown that the tide can be considered static during the tsunami wave run-up. In this approximation, it is possible to obtain the exact solution for the run-up height as a function of the incident wave height. This allows us to investigate the influence of the tide on the run-up characteristics.

  8. 32 CFR 651.4 - Responsibilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operational testers, producers, users, and disposers) into the decision-making process. (v) Initiate the... environmental perspective, and to ensure that these determinations are part of the Army decision process. (p... agency input into the decision-making process. (5) Ensure that NEPA analysis is prepared and staffed...

  9. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molina-Perez, J.; Bonacorsi, D.; Gutsche, O.

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS: the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, while the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel is contributing to detect and react in a timely manner to any unexpected error and hence ensure that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.

  10. Support and Maintenance of the International Monitoring System network

    NASA Astrophysics Data System (ADS)

    Pereira, Jose; Bazarragchaa, Sergelen; Kilgour, Owen; Pretorius, Jacques; Werzi, Robert; Beziat, Guillaume; Hamani, Wacel; Mohammad, Walid; Brely, Natalie

    2014-05-01

    The Monitoring Facilities Support Section of the Provisional Technical Secretariat (PTS) has as its main task to ensure optimal support and maintenance of an array of 321 monitoring stations and 16 radionuclide laboratories distributed worldwide. Raw seismic, infrasonic, hydroacoustic and radionuclide data from these facilities constitute the basic product delivered by the International Monitoring System (IMS). In the process of maintaining such a wide array of stations of different technologies, the Support Section contributes to ensuring station mission capability. Mission-capable data availability according to the IMS requirements should be at least 98% annually (no more than 7 days of downtime per year for waveform stations and 14 days for radionuclide stations) for continuously sending stations. In this presentation, we describe our interventions at stations to address equipment supportability and maintainability, as these are particularly large activities requiring the removal of a substantial part of the station equipment and the installation of new equipment. The objective is always to plan these activities while minimizing downtime and continuing to meet all IMS requirements, including those of data availability mentioned above. We postulate that these objectives are better achieved by planning and making use of preventive maintenance, as opposed to "run-to-failure" with associated corrective maintenance. We use two recently upgraded infrasound stations (IS39 Palau and IS52 BIOT) as a case study and establish a comparison between these results and several other stations where corrective maintenance was performed, to demonstrate our hypothesis.

  11. The development and validation of the Closed-set Mandarin Sentence (CMS) test.

    PubMed

    Tao, Duo-Duo; Fu, Qian-Jie; Galvin, John J; Yu, Ya-Feng

    2017-09-01

    Matrix-styled sentence tests offer a closed-set paradigm that may be useful when evaluating speech intelligibility. Ideally, sentence test materials should reflect the distribution of phonemes within the target language. We developed and validated the Closed-set Mandarin Sentence (CMS) test to assess Mandarin speech intelligibility in noise. CMS test materials were selected to be familiar words and to represent the natural distribution of vowels, consonants, and lexical tones found in Mandarin Chinese. Ten key words in each of five categories (Name, Verb, Number, Color, and Fruit) were produced by a native Mandarin talker, resulting in a total of 50 words that could be combined to produce 100,000 unique sentences. Normative data were collected in 10 normal-hearing, adult Mandarin-speaking Chinese listeners using a closed-set test paradigm. Two test runs were conducted for each subject, and 20 sentences per run were randomly generated while ensuring that each word was presented only twice in each run. First, the levels of the words in each category were adjusted to produce equal intelligibility in noise. Test-retest reliability for word-in-sentence recognition was excellent according to Cronbach's alpha (0.952). After the category level adjustments, speech reception thresholds (SRTs) for sentences in noise, defined as the signal-to-noise ratio (SNR) that produced 50% correct whole-sentence recognition, were adaptively measured by adjusting the SNR according to the correctness of the response. The mean SRT was -7.9 (SE=0.41) and -8.1 (SE=0.34) dB for runs 1 and 2, respectively. The mean standard deviation across runs was 0.93 dB, and paired t-tests showed no significant difference between runs 1 and 2 (p=0.74) despite random sentences being generated for each run and each subject. The results suggest that the CMS provides a large stimulus set with which to repeatedly and reliably measure Mandarin-speaking listeners' speech understanding in noise using a closed-set paradigm.
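    A generic sketch of an adaptive SNR track of the kind described (SNR lowered after a correct whole-sentence response, raised after an error, converging near the 50%-correct point); the step size, trial count, reversal-averaging rule and the simulated listener are all assumptions, not the CMS protocol.

    ```python
    # Generic adaptive SNR track for a closed-set sentence test (illustrative only):
    # SNR goes down after a correct whole-sentence response and up after an error,
    # converging on the 50%-correct point; step size and trial count are assumptions.
    import random

    def measure_srt(respond, start_snr=0.0, step=2.0, n_trials=20):
        snr, reversal_snrs, last_correct = start_snr, [], None
        for _ in range(n_trials):
            correct = respond(snr)                   # True if whole sentence correct
            if last_correct is not None and correct != last_correct:
                reversal_snrs.append(snr)            # record reversals
            snr += -step if correct else step
            last_correct = correct
        # SRT estimate: mean SNR over the recorded reversals
        return sum(reversal_snrs) / len(reversal_snrs) if reversal_snrs else snr

    # Hypothetical listener: chance of a correct sentence rises with SNR around -8 dB.
    listener = lambda snr: random.random() < 1 / (1 + 10 ** (-(snr + 8) / 2))
    print(f"estimated SRT: {measure_srt(listener):.1f} dB SNR")
    ```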

  12. 20 CFR 670.545 - How does Job Corps ensure that students receive due process in disciplinary actions?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... receive due process in disciplinary actions? 670.545 Section 670.545 Employees' Benefits EMPLOYMENT AND... process in disciplinary actions? The center operator must ensure that all students receive due process in disciplinary proceedings according to procedures developed by the Secretary. These procedures must include, at...

  13. Automated campaign system

    NASA Astrophysics Data System (ADS)

    Vondran, Gary; Chao, Hui; Lin, Xiaofan; Beyer, Dirk; Joshi, Parag; Atkins, Brian; Obrador, Pere

    2006-02-01

    Running a targeted campaign involves coordination and management across numerous organizations and complex process flows. Everything from market analytics on customer databases to acquiring content and images, composing the materials, meeting the sponsoring enterprise's brand standards, driving through production and fulfillment, and evaluating results is currently performed by experienced, highly trained staff. We present a solution that not only brings together technologies that automate each process, but also automates the entire flow so that a novice user could easily run a successful campaign from their desktop. This paper presents the technologies, structure, and process flows used to bring this system together. We highlight how the complexity of running a targeted campaign is hidden from the user through these technologies, all while providing the benefits of a professionally managed campaign.

  14. Improved molding process ensures plastic parts of higher tensile strength

    NASA Technical Reports Server (NTRS)

    Heier, W. C.

    1968-01-01

    A single molding process ensures that plastic parts (of a given mechanical design) produced from a conventional thermosetting molding compound will have maximum tensile strength. The process can also be used with other thermosetting compounds to produce parts with improved physical properties.

  15. Planetary Geomorphology

    NASA Technical Reports Server (NTRS)

    Malin, Michael C.

    1990-01-01

    One of the major problems in the series of ice runs was that the subsurface temperature probes did not function. AIC re-evaluated the design and, after testing several suitable sensors, installed 50 type T thermocouples, each 2 m long. In this design, each thermocouple was soldered to a rectangular copper foil spreader 0.3 cm wide by 2.8 cm long to ensure an accurate reading. The long rectangular shape was used because it had a large area for good thermal connection to the test material.

  16. Difficulties Encountered by Students during Cross-Cultural Studies Pertaining to the Ethnic Minority Education Model of Running Schools in "Other Places" and Countermeasures: Taking the Tibetan Classes and the Xinjiang Classes in the Interior Regions as Examples

    ERIC Educational Resources Information Center

    Yan, Qing; Song, Suizhou

    2010-01-01

    In 1984 and 2000, respectively, the Party Central Committee and the state council decided to establish the Tibet Class and the Xinjiang Class in some interior provinces and municipalities ("neidi") to promote economic development and social progress in the motherland's frontier regions as well as to ensure state security and consolidate…

  17. Business Case Analysis for the Versatile Depot Automated Test Station Used in the USAF Warner Robins Air Logistics Center Maintenance Depot

    DTIC Science & Technology

    2008-06-01

    executes the avionics test) can run on the new ATS thus creating the common ATS framework. The system will also enable numerous new functional...Enterprise-level architecture that reflects corporate DoD priorities and requirements for business systems, and provides a common framework to ensure that...entire Business Mission Area (BMA) of the DoD. The BEA also contains a set of integrated Department of Defense Architecture Framework (DoDAF

  18. A Language-Based Approach To Wireless Sensor Network Security

    DTIC Science & Technology

    2014-03-06

    [Figure 1: SpartanRPC Memory Overhead (L) and Impact on Messaging (R); Figure 2: Scalaness/nesT Compilation] ...language for developing real WSN applications. This language, called Scalaness/nesT, extends Scala with staging features for executing programs on hubs...particular note here is the fact that cross-stage type safety of Scalaness source code ensures that compiled bytecode can be deployed to, and run on

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passarge, M; Fix, M K; Manser, P

    Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies infield radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm) and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error source. J. V. Siebers receives funding support from Varian Medical Systems.
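    A schematic of the layered ("Swiss cheese") idea, not the authors' implementation: independent checks are run in sequence for each gantry-angle interval, and the first failing layer points to the error source. The check functions, metrics and tolerances below are placeholders.

    ```python
    # Sketch of the "Swiss cheese" idea: independent checks run in sequence for each
    # 2-degree EPID frame set; the first failing layer indicates the error source.
    # Check functions and tolerances below are placeholders, not the published ones.
    def run_layers(reference, measured, layers):
        for name, check in layers:
            ok, detail = check(reference, measured)
            if not ok:
                return f"REJECT at layer '{name}': {detail}"
        return "PASS"

    def field_mask_check(ref, meas):
        out_of_field = meas["out_of_field_signal"]
        return out_of_field < 0.01, f"out-of-field signal {out_of_field:.3f}"

    def output_check(ref, meas):
        ratio = meas["integral"] / ref["integral"]
        return 0.97 <= ratio <= 1.03, f"output ratio {ratio:.3f}"

    def gamma_check(ref, meas):
        pass_rate = meas["gamma_pass_rate"]            # e.g. from a 3%/3 mm analysis
        return pass_rate >= 0.95, f"gamma pass rate {pass_rate:.2%}"

    layers = [("masking", field_mask_check), ("output", output_check), ("gamma", gamma_check)]
    frame = {"out_of_field_signal": 0.004, "integral": 0.96, "gamma_pass_rate": 0.99}
    print(run_layers({"integral": 1.00}, frame, layers))   # fails the output layer
    ```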

  20. A novel process control method for a TT-300 E-Beam/X-Ray system

    NASA Astrophysics Data System (ADS)

    Mittendorfer, Josef; Gallnböck-Wagner, Bernhard

    2018-02-01

    This paper presents some aspects of the process control method for a TT-300 E-Beam/X-Ray system at Mediscan, Austria. The novelty of the approach is the seamless integration of routine monitoring dosimetry with process data. This makes it possible to calculate a parametric dose for each production unit and consequently to carry out fine-grained and holistic process performance monitoring. Process performance is documented in process control charts for the analysis of individual runs as well as for historic trending of runs of specific process categories over a specified time range.

  1. The dynamic relationship between structural change and CO2 emissions in Malaysia: a cointegrating approach.

    PubMed

    Ali, Wajahat; Abdullah, Azrai; Azam, Muhammad

    2017-05-01

    The current study investigates the dynamic relationship between structural changes, real GDP per capita, energy consumption, trade openness, population density, and carbon dioxide (CO2) emissions within the EKC framework over the period 1971-2013. The study used the autoregressive distributed lag (ARDL) approach to investigate the long-run relationship between the selected variables. The study also employed the dynamic ordinary least squares (DOLS) technique to obtain robust long-run estimates. Moreover, the causal relationship between the variables is explored using the VECM Granger causality test. Empirical results reveal a negative relationship between structural change and CO2 emissions in the long run. The results indicate a positive relationship between energy consumption, trade openness, and CO2 emissions. The study applied the turning point formula of Itkonen (2012) rather than the conventional turning point formula. The empirical estimates of the study do not support the presence of the EKC relationship between income and CO2 emissions. The Granger causality test indicates the presence of long-run bidirectional causality between energy consumption, structural change, and CO2 emissions. Economic growth, openness to trade, and population density unidirectionally cause CO2 emissions. These results suggest that the government should focus more on information-based services rather than energy-intensive manufacturing activities. The feedback relationship between energy consumption and CO2 emissions suggests that there is an urgent need to revisit energy-related policy reforms to ensure the installation of energy-efficient modern technologies.
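    A minimal dynamic OLS (DOLS) sketch of the long-run estimation step: the dependent variable is regressed on the regressors plus leads and lags of their first differences. The data are simulated placeholders, not the study's series; assumes numpy, pandas and statsmodels.

    ```python
    # Minimal dynamic OLS (DOLS) sketch for a long-run relationship: regress the
    # dependent variable on the regressors plus leads/lags of their first differences.
    # Data below are simulated placeholders, not the study's series.
    import numpy as np, pandas as pd, statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 43                                             # e.g. annual data 1971-2013
    energy = np.cumsum(rng.normal(0.02, 0.05, n))      # I(1) regressor
    co2 = 0.8 * energy + rng.normal(0, 0.03, n)        # cointegrated with it

    df = pd.DataFrame({"co2": co2, "energy": energy})
    for k in (-1, 0, 1):                               # one lead and one lag
        df[f"d_energy_{k}"] = df["energy"].diff().shift(k)
    df = df.dropna()

    X = sm.add_constant(df.drop(columns="co2"))
    res = sm.OLS(df["co2"], X).fit()
    print("long-run elasticity of CO2 w.r.t. energy:", round(res.params["energy"], 3))
    ```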

  2. Alignment and calibration of the MgF2 biplate compensator for applications in rotating-compensator multichannel ellipsometry.

    PubMed

    Lee, J; Rovira, P I; An, I; Collins, R W

    2001-08-01

    Biplate compensators made from MgF2 are being used increasingly in rotating-element single-channel and multichannel ellipsometers. For the measurement of accurate ellipsometric spectra, the compensator must be carefully (i) aligned internally to ensure that the fast axes of the two plates are perpendicular and (ii) calibrated to determine the phase retardance delta versus photon energy E. We present alignment and calibration procedures for multichannel ellipsometer configurations with special attention directed to the precision, accuracy, and reproducibility in the determination of delta (E). Run-to-run variations in external compensator alignment, i.e., alignment with respect to the incident beam, can lead to irreproducibilities in delta of approximately 0.2 degrees. Errors in the ellipsometric measurement of a sample can be minimized by calibrating with an external compensator alignment that matches as closely as possible that used in the measurement.

  3. PGCA: An algorithm to link protein groups created from MS/MS data

    PubMed Central

    Sasaki, Mayu; Hollander, Zsuzsanna; Smith, Derek; McManus, Bruce; McMaster, W. Robert; Ng, Raymond T.; Cohen Freue, Gabriela V.

    2017-01-01

    The quantitation of proteins using shotgun proteomics has gained popularity in the last decades, simplifying sample handling procedures, removing extensive protein separation steps and achieving a relatively high-throughput readout. The process starts with the digestion of the protein mixture into peptides, which are then separated by liquid chromatography and sequenced by tandem mass spectrometry (MS/MS). At the end of the workflow, recovering the identity of the proteins originally present in the sample is often a difficult and ambiguous process, because more than one protein identifier may match a set of peptides identified from the MS/MS spectra. To address this identification problem, many MS/MS data processing software tools combine all plausible protein identifiers matching a common set of peptides into a protein group. However, this solution introduces new challenges in studies with multiple experimental runs, which can be characterized by three main factors: i) protein groups' identifiers are local, i.e., they vary from run to run, ii) the composition of each group may change across runs, and iii) the supporting evidence of proteins within each group may also change across runs. Since in general there is no conclusive evidence about the absence of proteins in the groups, protein groups need to be linked across different runs in subsequent statistical analyses. We propose an algorithm, called the Protein Group Code Algorithm (PGCA), to link groups from multiple experimental runs by forming global protein groups from connected local groups. The algorithm is computationally inexpensive and enables the connection and analysis of lists of protein groups across runs, as needed in biomarker studies. We illustrate the identification problem and the stability of the PGCA mapping using 65 iTRAQ experimental runs. Further, we use two biomarker studies to show how PGCA enables the discovery of relevant candidate protein group markers with similar but non-identical compositions in different runs. PMID:28562641
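    A minimal sketch of the linking idea behind PGCA, not the published implementation: local groups that share a protein accession are connected with a union-find structure, and each connected component becomes one global protein group.

    ```python
    # Sketch of the linking idea behind PGCA: local protein groups from different
    # runs that share a protein accession are connected, and each connected
    # component becomes one global protein group. Not the published code.
    from collections import defaultdict

    def link_protein_groups(runs):
        """runs: list of runs, each a list of local groups (sets of accessions)."""
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]          # path halving
                x = parent[x]
            return x

        def union(a, b):
            parent[find(a)] = find(b)

        for run in runs:
            for group in run:
                first, *rest = sorted(group)
                for acc in rest:                       # connect accessions in a group
                    union(first, acc)

        global_groups = defaultdict(set)
        for run in runs:
            for group in run:
                root = find(next(iter(group)))
                global_groups[root] |= group
        return list(global_groups.values())

    runs = [[{"P1", "P2"}, {"P9"}], [{"P2", "P3"}, {"P9", "P10"}]]
    print(link_protein_groups(runs))   # two global groups: {P1,P2,P3} and {P9,P10}
    ```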

  4. Direct liquefaction proof-of-concept program. Topical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comolli, A.G.; Lee, L.K.; Pradhan, V.R.

    This report presents the results of work conducted under the DOE Proof-of-Concept Program in direct coal liquefaction at Hydrocarbon Technologies, Inc. in Lawrenceville, New Jersey, from February 1994 through April 1995. The work includes modifications to HRI's existing 3 ton per day Process Development Unit (PDU) and completion of the second PDU run (POC Run 2) under the Program. The 45-day POC Run 2 demonstrated scale-up of the Catalytic Two-Stage Liquefaction (CTSL Process) for a subbituminous Wyoming Black Thunder Mine coal to produce distillate liquid products at a rate of up to 4 barrels per ton of moisture-ash-free coal. The combined processing of organic hydrocarbon wastes, such as waste plastics and used tire rubber, with coal was also successfully demonstrated during the last nine days of operations of Run POC-02. Prior to the first PDU run (POC-01) in this program, a major effort was made to modify the PDU to improve reliability and to provide the flexibility to operate in several alternative modes. The Kerr McGee Rose-SR(SM) unit from Wilsonville, Alabama, was redesigned and installed next to the U.S. Filter installation to allow a comparison of the two solids removal systems. The 45-day CTSL Wyoming Black Thunder Mine coal demonstration run achieved several milestones in the effort to further reduce the cost of liquid fuels from coal. The primary objective of PDU Run POC-02 was to scale up the CTSL extinction recycle process for subbituminous coal to produce a total distillate product using an in-line fixed-bed hydrotreater. Of major concern was whether calcium-carbon deposits would occur in the system, as has happened in other low-rank coal conversion processes. An additional objective of major importance was to study the co-liquefaction of plastics with coal and waste tire rubber with coal.

  5. Microbial Characterization Space Solid Wastes Treated with a Heat Melt Compactor

    NASA Technical Reports Server (NTRS)

    Strayer, Richard F.; Hummerick, Mary E.; Richards, Jeffrey T.; McCoy LaShelle E.; Roberts, Michael S.; Wheeler, Raymond M.

    2012-01-01

    The ongoing purpose of the project was to characterize and determine the fate of microorganisms in space-generated solid wastes before and after processing by candidate solid waste processing technologies. For FY11, the candidate technology that was assessed was the Heat Melt Compactor (HMC). The scope included five HMC product disks produced at ARC from either simulated space-generated trash or from actual space trash, Volume F compartment wet waste, returned on STS-130. This project used conventional microbiological methods to detect and enumerate microorganisms in heat melt compaction (HMC) product disks as well as surface swab samples of the HMC hardware before and after operation. In addition, biological indicators were added to the STS trash prior to compaction in order to determine if these spore-forming bacteria could survive the HMC processing conditions, i.e., high temperature (160 C) over a long duration (3 hrs). To ensure that surface-dwelling microbes did not contaminate HMC product disk interiors, the disk surfaces were sanitized with 70% alcohol. Microbiological assays were run before and after sanitization and found that sanitization greatly reduced the number of identified isolates but did not totally eliminate them. To characterize the interior of the disks, ten 1.25 cm diameter core samples were aseptically obtained for each disk. These were run through the microbial characterization analyses. Low counts of bacteria, on the order of 5 to 50 per core, were found, indicating that the HMC operating conditions might not be sufficient for waste sterilization. However, the direct counts were 6 to 8 orders of magnitude greater, indicating that the vast majority of microbes present in the wastes were dead or non-cultivable. An additional indication that the HMC was sterilizing the wastes came from the commercial spore test strips added to the wastes prior to HMC operation. Nearly all could be recovered from the HMC disks post-operation and all showed negative growth when run through the manufacturer's protocol, meaning that the 10^6 or so spores impregnated into the strips were dead. Control test strips, i.e., those not exposed to the HMC conditions, were all strongly positive. One area of concern is that the identities of isolates from the cultivable counts included several human pathogens, namely Staphylococcus aureus. The project reported here provides microbial characterization support to the Waste Management Systems element of the Life Support and Habitation Systems program.

  6. Preparation of Effective Operating Manuals to Support Waste Management Plant Operator Training

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, S. R.

    2003-02-25

    Effective plant operating manuals used in a formal training program can make the difference between a successful operation and a failure. Once the plant process design and control strategies have been fixed, equipment has been ordered, and the plant is constructed, the only major variable affecting success is the capability of plant operating personnel. It is essential that the myriad details concerning plant operation are documented in comprehensive operating manuals suitable for training the non-technical personnel that will operate the plant. These manuals must cover the fundamental principles of each unit operation including how each operates, what process variables are important, and the impact of each variable on the overall process. In addition, operators must know the process control strategies, process interlocks, how to respond to alarms, each of the detailed procedures required to start up and optimize the plant, and every control loop, including when it is appropriate to take manual control. More than anything else, operating mistakes during the start-up phase can lead to substantial delays in achieving design processing rates as well as to problems with government authorities if environmental permit limits are exceeded. The only way to assure return on plant investment is to ensure plant operators have the knowledge to properly run the plant from the outset. A comprehensive set of operating manuals specifically targeted toward plant operators and supervisors, written by experienced operating personnel, is the only effective way to provide the necessary information for formal start-up training.

  7. DWPF Simulant CPC Studies For SB8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newell, J. D.

    2013-09-25

    Prior to processing a Sludge Batch (SB) in the Defense Waste Processing Facility (DWPF), flowsheet studies using simulants are performed. Typically, the flowsheet studies are conducted based on projected composition(s). The results from the flowsheet testing are used to 1) guide decisions during sludge batch preparation, 2) serve as a preliminary evaluation of potential processing issues, and 3) provide a basis to support the Shielded Cells qualification runs performed at the Savannah River National Laboratory (SRNL). SB8 was initially projected to be a combination of the Tank 40 heel (Sludge Batch 7b), Tank 13, Tank 12, and the Tank 51 heel. In order to accelerate preparation of SB8, the decision was made to delay the oxalate-rich material from Tank 12 to a future sludge batch. SB8 simulant studies without Tank 12 were reported in a separate report.1 The data presented in this report will be useful when processing future sludge batches containing Tank 12. The wash endpoint target for SB8 was set at a significantly higher sodium concentration to allow acceptable glass compositions at the targeted waste loading. Four non-coupled tests were conducted using simulant representing Tank 40 at 110-146% of the Koopman Minimum Acid requirement. Hydrogen was generated during high acid stoichiometry (146% acid) SRAT testing up to 31% of the DWPF hydrogen limit. SME hydrogen generation reached 48% of the DWPF limit for the high acid run. Two non-coupled tests were conducted using simulant representing Tank 51 at 110-146% of the Koopman Minimum Acid requirement. Hydrogen was generated during high acid stoichiometry SRAT testing up to 16% of the DWPF limit. SME hydrogen generation reached 49% of the DWPF SME limit for the high acid run. Simulant processing was successful using the previously established antifoam addition strategy. Foaming during formic acid addition was not observed in any of the runs. Nitrite was destroyed in all runs and no N2O was detected during SME processing. Mercury behavior was consistent with that seen in previous SRAT runs. Mercury was stripped below the DWPF limit of 0.8 wt% for all runs. Rheology yield stress fell within or below the design basis of 1-5 Pa. The low acid Tank 40 run (106% acid stoichiometry) had the highest yield stress at 3.78 Pa.

  8. A testing machine for dental air-turbine handpiece characteristics: free-running speed, stall torque, bearing resistance.

    PubMed

    Darvell, Brain W; Dyson, J E

    2005-01-01

    The measurement of performance characteristics of dental air turbine handpieces is of interest with respect to product comparisons, standards specifications and monitoring of bearing longevity in clinical service. Previously, however, bulky and expensive laboratory equipment was required. A portable test machine is described for determining three key characteristics of dental air-turbine handpieces: free-running speed, stall torque and bearing resistance. It relies on a special circuit design for performing a hardware integration of a force signal with respect to rotational position, independent of the rate at which the turbine is allowed to turn during both stall torque and bearing resistance measurements. Free-running speed without the introduction of any imbalance can be readily monitored. From the essentially linear relationship between torque and speed, dynamic torque and, hence, power can then be calculated. In order for these measurements to be performed routinely with the necessary precision of location on the test stage, a detailed procedure for ensuring proper gripping of the handpiece is described. The machine may be used to verify performance claims, to check compliance with standards should this be established as appropriate, to monitor deterioration with time and usage in the clinical environment, and for laboratory investigation of design development.

  9. SeaQuest/E906 Shift Alarm System

    NASA Astrophysics Data System (ADS)

    Kitts, Noah

    2014-09-01

    SeaQuest, Fermilab E906, is a fixed-target experiment that measures the Drell-Yan cross-section ratio of proton-proton to proton-deuterium collisions in order to extract the sea antiquark structure of the proton. SeaQuest will extend the measurements made by E866/NuSea with greater precision at higher Bjorken-x. The continuously running experiment is always being monitored. Those on shift must keep track of all of the detector readouts in order to make sure the experiment is running correctly. As the experiment is still in its early stages of running, an alarm system is being created to warn those on shift when, for example, a plot of a detector's performance deviates enough to need attention. This plan involves Python scripts that track live data. When the data show a problem within the experiment, a corresponding alarm ID is sent to the MySQL database, which then sets off an alarm. These alarms, which alert the person on shift through both audible and visual responses, are important for ensuring that issues do not go unnoticed, and help make sure the experiment is recording good data.
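    A schematic of the alarm flow described above: a script polls a live readout and, when it drifts out of tolerance, writes an alarm ID into a MySQL table that the alarm front end watches. The table name, column names, alarm ID and readout function are placeholders, not the experiment's actual schema; `db` stands for any DB-API connection (e.g. from mysql.connector).

    ```python
    # Schematic of the shift-alarm idea: a script polls a live detector readout and,
    # when it drifts outside tolerance, records an alarm ID in a MySQL table that the
    # alarm front end watches. Table/column names, the alarm ID and the readout
    # function are placeholders, not the experiment's actual schema.
    import time

    ALARM_HV_DRIFT = 101                                # hypothetical alarm ID

    def monitor(read_chamber_hv, db, nominal=1500.0, tol=25.0, period_s=10):
        cur = db.cursor()
        while True:                                     # daemon-style polling loop
            hv = read_chamber_hv()                      # live readout (placeholder)
            if abs(hv - nominal) > tol:
                cur.execute(
                    "INSERT INTO alarms (alarm_id, reading, ts) VALUES (%s, %s, NOW())",
                    (ALARM_HV_DRIFT, hv),
                )
                db.commit()                             # front end plays sound/flashes
            time.sleep(period_s)
    ```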

  10. Considerations on the Optimal and Efficient Processing of Information-Bearing Signals

    ERIC Educational Resources Information Center

    Harms, Herbert Andrew

    2013-01-01

    Noise is a fundamental hurdle that impedes the processing of information-bearing signals, specifically the extraction of salient information. Processing that is both optimal and efficient is desired; optimality ensures the extracted information has the highest fidelity allowed by the noise, while efficiency ensures limited resource usage. Optimal…

  11. Development of advanced Czochralski growth process to produce low-cost 150 kG silicon ingots from a single crucible for technology readiness

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The modified CG2000 crystal grower construction, installation, and machine check-out was completed. The process development check-out proceeded with several dry runs and one growth run. Several machine calibration and functional problems were discovered and corrected. Exhaust gas analysis system alternatives were evaluated and an integrated system was approved and ordered. Several growth runs on a development CG2000 RC grower show that completely automated neck, crown, and body growth can be achieved with only one operator input.

  12. On the development of a magnetoresistive sensor for blade tip timing and blade tip clearance measurement systems.

    PubMed

    Tomassini, R; Rossi, G; Brouckaert, J-F

    2016-10-01

    A simultaneous blade tip timing (BTT) and blade tip clearance (BTC) measurement system enables the determination of turbomachinery blade vibrations and ensures the monitoring of the existing running gaps between the blade tip and the casing. This contactless instrumentation presents several advantages compared to the well-known telemetry system with strain gauges, at the cost of a more complex data processing procedure. The probes used can be optical, capacitive, eddy current or microwave, each with its own dedicated electronics, and many different signal processing algorithms exist. Every company working in this field has developed its own processing method and sensor technology. Hence, when the same test is repeated with different instrumentation, the answer is often different. Moreover, it is rarely possible to achieve reliability for in-service measurements. Developments are focused on innovative instrumentation and a common standard. This paper focuses on the results achieved using a novel magnetoresistive sensor for simultaneous tip timing and tip clearance measurements. The sensor measurement principle is described. The sensitivity to gap variation is investigated. In terms of measurement of vibrations, experimental investigations were performed at the Air Force Institute of Technology (ITWL, Warsaw, Poland) in a real aeroengine and in the von Karman Institute (VKI) R2 compressor rig. The advantages and limitations of the magnetoresistive probe for turbomachinery testing are highlighted.

  13. On the development of a magnetoresistive sensor for blade tip timing and blade tip clearance measurement systems

    NASA Astrophysics Data System (ADS)

    Tomassini, R.; Rossi, G.; Brouckaert, J.-F.

    2016-10-01

    A simultaneous blade tip timing (BTT) and blade tip clearance (BTC) measurement system enables the determination of turbomachinery blade vibrations and ensures the monitoring of the existing running gaps between the blade tip and the casing. This contactless instrumentation presents several advantages compared to the well-known telemetry system with strain gauges, at the cost of a more complex data processing procedure. The probes used can be optical, capacitive, eddy current or microwave, each with its own dedicated electronics, and many different signal processing algorithms exist. Every company working in this field has developed its own processing method and sensor technology. Hence, when the same test is repeated with different instrumentation, the answer is often different. Moreover, it is rarely possible to achieve reliability for in-service measurements. Developments are focused on innovative instrumentation and a common standard. This paper focuses on the results achieved using a novel magnetoresistive sensor for simultaneous tip timing and tip clearance measurements. The sensor measurement principle is described. The sensitivity to gap variation is investigated. In terms of measurement of vibrations, experimental investigations were performed at the Air Force Institute of Technology (ITWL, Warsaw, Poland) in a real aeroengine and in the von Karman Institute (VKI) R2 compressor rig. The advantages and limitations of the magnetoresistive probe for turbomachinery testing are highlighted.

  14. Impact of warming climate and cultivar change on maize phenology in the last three decades in North China Plain

    NASA Astrophysics Data System (ADS)

    Xiao, Dengpan; Qi, Yongqing; Shen, Yanjun; Tao, Fulu; Moiwo, Juana P.; Liu, Jianfeng; Wang, Rede; Zhang, He; Liu, Fengshan

    2016-05-01

    As climate change could significantly influence crop phenology and subsequent crop yield, adaptation is a critical process for mitigating the vulnerability of crop growth and production to climate change. Thus, to ensure crop production and food security, there is a need for research on the natural (shifts in crop growth periods) and artificial (shifts in crop cultivars) modes of crop adaptation to climate change. In this study, field observations at 18 stations in the North China Plain (NCP) are used in combination with the Agricultural Production Systems Simulator (APSIM)-Maize model to analyze the trends in summer maize phenology in relation to climate change and cultivar shift in 1981-2008. Apparent warming in most of the investigated stations causes early flowering and maturity and consequently shortens the reproductive growth stage. However, the APSIM-Maize model run for four representative stations suggests that cultivar shift delays maturity and thereby prolongs the reproductive growth (flowering to maturity) stage by 2.4-3.7 days per decade (d 10a^-1). The study suggests a gradual adaptation of the maize production process to ongoing climate change in the NCP via shifts to cultivars with higher thermal requirements and the associated phenological changes. It is concluded that cultivation of maize cultivars with longer growth periods and higher thermal requirements could mitigate the negative effects of a warming climate on crop production and food security in the NCP study area and beyond.

  15. HiCAT Software Infrastructure: Safe hardware control with object oriented Python

    NASA Astrophysics Data System (ADS)

    Moriarty, Christopher; Brooks, Keira; Soummer, Remi

    2018-01-01

    High contrast imaging for Complex Aperture Telescopes (HiCAT) is a testbed designed to demonstrate coronagraphy and wavefront control for segmented on-axis space telescopes such as envisioned for LUVOIR. To limit air movements in the testbed room, software interfaces for several different hardware components were developed to completely automate operations. When developing software interfaces for many different pieces of hardware, unhandled errors are commonplace and can prevent the software from properly closing a hardware resource. Some fragile components (e.g. deformable mirrors) can be permanently damaged because of this. We present an object-oriented Python-based infrastructure to safely automate hardware control and optical experiments, specifically, conducting high-contrast imaging experiments while monitoring humidity and power status, with graceful shutdown processes even for unexpected errors. Python contains a construct called a “context manager” that allows you to define code to run when a resource is opened or closed. Context managers ensure that a resource is properly closed, even when unhandled errors occur. Harnessing the context manager design, we also use Python’s multiprocessing library to monitor humidity and power status without interrupting the experiment. Upon detecting a safety problem, the master process sends an event to the child process that triggers the context managers to gracefully close any open resources. This infrastructure allows us to queue up several experiments and safely operate the testbed without a human in the loop.
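    A toy illustration of the two mechanisms described, with made-up device and sensor names rather than HiCAT's actual API: a context manager guarantees the hardware handle is released even on unhandled errors, and a child process monitors a (placeholder) humidity reading and sets an event that the experiment checks for graceful shutdown.

    ```python
    # Illustration of the two ideas described above: a context manager that always
    # releases the hardware, and a child process that raises a safety event the main
    # experiment checks. Class and device names are illustrative, not HiCAT's API.
    import multiprocessing, time
    from contextlib import contextmanager

    @contextmanager
    def open_device(name):
        print(f"opening {name}")
        try:
            yield f"<handle:{name}>"          # stand-in for the real hardware handle
        finally:
            print(f"closing {name}")          # runs even on unhandled exceptions

    def read_humidity():
        return 30.0                           # placeholder sensor read (percent RH)

    def safety_monitor(stop_event, limit=50.0):
        while not stop_event.is_set():
            if read_humidity() > limit:       # safety problem detected
                stop_event.set()              # tell the experiment to shut down
            time.sleep(1)

    if __name__ == "__main__":
        stop = multiprocessing.Event()
        mon = multiprocessing.Process(target=safety_monitor, args=(stop,))
        mon.start()
        try:
            with open_device("deformable_mirror") as dm:
                for _ in range(3):
                    if stop.is_set():
                        break                 # graceful shutdown path
                    time.sleep(0.5)           # stand-in for taking an exposure
        finally:
            stop.set()
            mon.join()
    ```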

  16. Heavy tailed bacterial motor switching statistics define macroscopic transport properties during upstream contamination by E. coli

    NASA Astrophysics Data System (ADS)

    Figueroa-Morales, N.; Rivera, A.; Altshuler, E.; Darnige, T.; Douarche, C.; Soto, R.; Lindner, A.; Clément, E.

    The motility of E. coli bacteria is described as a run-and-tumble process. Changes of direction correspond to a switch in the flagellar motor rotation. The run time distribution is usually described as an exponential decay with a characteristic time close to 1 s. Remarkably, it has been demonstrated that the generic behaviour of the run time distribution is not exponential but a heavy-tailed power-law decay, which is at odds with the classical motility picture. We investigate the consequences of the motor statistics for macroscopic bacterial transport. During upstream contamination processes in very confined channels, we have identified very long contamination tongues. Using a stochastic model in which bacterial dwelling times on the surfaces are related to the run times, we are able to reproduce qualitatively and quantitatively the evolution of the contamination profiles when the power-law run time distribution is considered. However, the model fails to reproduce the qualitative dynamics when the classical exponential run-and-tumble distribution is considered. Moreover, we have corroborated the existence of a power-law run time distribution by means of 3D Lagrangian tracking. We then argue that the macroscopic transport of bacteria is essentially determined by the motor rotation statistics.
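    A 1-D toy comparison of the two run-time statistics discussed above, with purely illustrative parameters: exponential run times versus a heavy-tailed Pareto distribution of the same mean; the heavy tail typically produces a much broader spread of final positions.

    ```python
    # Toy 1-D run-and-tumble comparison: exponential run times versus a heavy-tailed
    # (Pareto) distribution with the same mean. Parameters are illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    SPEED, MEAN_RUN, N, T = 20.0, 1.0, 300, 200.0      # um/s, s, cells, total time (s)

    def position_spread(draw_run_time):
        positions = []
        for _ in range(N):
            x, t, d = 0.0, 0.0, rng.choice([-1.0, 1.0])
            while t < T:
                tau = min(draw_run_time(), T - t)      # truncate the last run at T
                x += d * SPEED * tau
                t += tau
                d = rng.choice([-1.0, 1.0])            # tumble: pick a new heading
            positions.append(x)
        return np.std(positions)

    exp_runs = lambda: rng.exponential(MEAN_RUN)
    alpha = 1.7                                        # heavy tail, finite mean
    pareto_runs = lambda: (rng.pareto(alpha) + 1) * MEAN_RUN * (alpha - 1) / alpha

    print("exponential runs: spread =", round(position_spread(exp_runs), 1), "um")
    print("power-law runs:   spread =", round(position_spread(pareto_runs), 1), "um")
    ```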

  17. NONMEMory: a run management tool for NONMEM.

    PubMed

    Wilkins, Justin J

    2005-06-01

    NONMEM is an extremely powerful tool for nonlinear mixed-effect modelling and simulation of pharmacokinetic and pharmacodynamic data. However, it is a console-based application whose output does not lend itself to rapid interpretation or efficient management. NONMEMory has been created to be a comprehensive project manager for NONMEM, providing detailed summary, comparison and overview of the runs comprising a given project, including the display of output data, simple post-run processing, fast diagnostic plots and run output management, complementary to other available modelling aids. Analysis time ought not to be spent on trivial tasks, and NONMEMory's role is to eliminate these as far as possible by increasing the efficiency of the modelling process. NONMEMory is freely available from http://www.uct.ac.za/depts/pha/nonmemory.php.

  18. Intelligence by mechanics.

    PubMed

    Blickhan, Reinhard; Seyfarth, Andre; Geyer, Hartmut; Grimmer, Sten; Wagner, Heiko; Günther, Michael

    2007-01-15

    Research on the biomechanics of animal and human locomotion provides insight into basic principles of locomotion and their implications for construction and control. Nearly elastic operation of the leg is necessary to reproduce the basic dynamics in walking and running. Elastic leg operation can be modelled with a spring-mass model. This model can be used as a template, with respect to both gaits, in the construction and control of legged machines. With respect to the segmented leg, the humanoid arrangement saves energy and ensures structural stability. With quasi-elastic operation, the leg inherits the property of self-stability, i.e. the ability to stabilize the system in the presence of disturbances without sensing the disturbance or its direct effects. Self-stability can be conserved in the presence of musculature with its crucial damping property. To ensure a secure foothold, visco-elastically suspended muscles serve as shock absorbers. Experiments with technically implemented leg models, which explore some of these principles, are promising.
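    A minimal planar spring-mass ("SLIP") stance-phase integration of the kind the template model implies; the parameters are only loosely human-like and the touchdown state is invented for illustration.

    ```python
    # Minimal planar spring-mass ("SLIP") stance-phase integration: the leg acts as a
    # linear spring between the fixed foot and the point-mass body. Parameters are
    # only loosely human-like and purely illustrative.
    import numpy as np

    M, G, K, L0 = 80.0, 9.81, 20000.0, 1.0      # kg, m/s^2, N/m, rest leg length (m)
    FOOT = np.array([0.0, 0.0])
    DT = 1e-4

    def stance(pos, vel):
        """Integrate from touchdown until the leg regains its rest length."""
        pos, vel = np.array(pos, float), np.array(vel, float)
        traj = [pos.copy()]
        while True:
            leg = pos - FOOT
            length = np.linalg.norm(leg)
            if length >= L0 and len(traj) > 1:
                return np.array(traj)            # take-off: leg back at rest length
            spring_force = K * (L0 - length) * leg / length
            acc = spring_force / M + np.array([0.0, -G])
            vel += acc * DT                      # semi-implicit Euler step
            pos += vel * DT
            traj.append(pos.copy())

    # touchdown with the leg inclined about 68 degrees to the ground, body moving forward
    td_pos = [L0 * np.cos(np.radians(112)), L0 * np.sin(np.radians(112))]
    path = stance(td_pos, vel=[5.0, -0.5])
    print(f"stance lasted {(len(path) - 1) * DT:.3f} s, take-off height {path[-1][1]:.2f} m")
    ```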

  19. Bridging the scales in atmospheric composition simulations using a nudging technique

    NASA Astrophysics Data System (ADS)

    D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco

    2010-05-01

    Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where we are interested in studying the impact of these sources. The description of all the processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5° European domain, run A) and fine (0.1° Central Mediterranean domain, run B) horizontal resolution are performed using the coarse resolution as boundary condition for the fine one. Then another coarse resolution run (run C) is performed, in which the high resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas for O3 and PM are computed. It is observed that although the variance of run B is markedly larger than that of run A, the variance of run C is smaller because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general, mean values of run C lie between those of run A and run B. A propagation of the signal outside the nudging region is observed, and is evaluated in terms of differences between coarse resolution (with and without nudging) and fine resolution simulations.
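    The nudging step itself is a standard Newtonian relaxation; a minimal sketch, with a placeholder grid, fields, relaxation time and sub-domain (not BOLCHEM's actual configuration), is:

    ```python
    # Generic Newtonian-relaxation ("nudging") step: inside the nudging region the
    # coarse-grid concentration is relaxed toward the remapped fine-grid field.
    # The relaxation time, grid and fields below are placeholders, not BOLCHEM values.
    import numpy as np

    def nudge(coarse, fine_remapped, mask, dt, tau):
        """c <- c + (dt/tau) * (c_fine - c) on masked cells only."""
        out = coarse.copy()
        out[mask] += (dt / tau) * (fine_remapped[mask] - coarse[mask])
        return out

    nx = ny = 10
    coarse = np.full((ny, nx), 40.0)                  # e.g. ozone, ppb
    fine = coarse + 15.0                              # fine run sees higher values
    region = np.zeros_like(coarse, dtype=bool)
    region[4:7, 4:7] = True                           # the nudged sub-domain

    c = coarse
    for _ in range(6):                                # six model time steps
        c = nudge(c, fine, region, dt=600.0, tau=3600.0)
    print(c[5, 5], c[0, 0])                           # nudged cell vs untouched cell
    ```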

  20. Comparing Effects of Feedstock and Run Conditions on Pyrolysis Products Produced at Pilot-Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunning, Timothy C; Gaston, Katherine R; Wilcox, Esther

    2018-01-19

    Fast pyrolysis is a promising pathway for mass production of liquid transportable biofuels. The Thermochemical Process Development Unit (TCPDU) pilot plant at NREL is conducting research to support the Bioenergy Technologies Office's 2017 goal of a $3 per gallon biofuel. In preparation for the down-selection of feedstock and run conditions, four different feedstocks were run at three different run conditions. The products produced were characterized extensively. Hot pyrolysis vapors and light gases were analyzed on a slipstream, and oil and char samples were characterized post-run.

  1. Simulation-Based Learning: The Learning-Forgetting-Relearning Process and Impact of Learning History

    ERIC Educational Resources Information Center

    Davidovitch, Lior; Parush, Avi; Shtub, Avy

    2008-01-01

    The results of empirical experiments evaluating the effectiveness and efficiency of the learning-forgetting-relearning process in a dynamic project management simulation environment are reported. Sixty-six graduate engineering students performed repetitive simulation-runs with a break period of several weeks between the runs. The students used a…

  2. 40 CFR 63.4371 - What definitions apply to this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... wiper blades. Thus, it includes any cleaning material used in the web coating and printing subcategory... process operation run at atmospheric pressure would be a different operating scenario from the same dyeing process operation run under pressure. Organic HAP content means the mass of organic HAP per mass of solids...

  3. 40 CFR 63.4371 - What definitions apply to this subpart?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... wiper blades. Thus, it includes any cleaning material used in the web coating and printing subcategory... process operation run at atmospheric pressure would be a different operating scenario from the same dyeing process operation run under pressure. Organic HAP content means the mass of organic HAP per mass of solids...

  4. 40 CFR 63.4371 - What definitions apply to this subpart?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... wiper blades. Thus, it includes any cleaning material used in the web coating and printing subcategory... process operation run at atmospheric pressure would be a different operating scenario from the same dyeing process operation run under pressure. Organic HAP content means the mass of organic HAP per mass of solids...

  5. 40 CFR 63.4371 - What definitions apply to this subpart?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... wiper blades. Thus, it includes any cleaning material used in the web coating and printing subcategory... process operation run at atmospheric pressure would be a different operating scenario from the same dyeing process operation run under pressure. Organic HAP content means the mass of organic HAP per mass of solids...

  6. 40 CFR 63.4371 - What definitions apply to this subpart?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... wiper blades. Thus, it includes any cleaning material used in the web coating and printing subcategory... process operation run at atmospheric pressure would be a different operating scenario from the same dyeing process operation run under pressure. Organic HAP content means the mass of organic HAP per mass of solids...

  7. Running Behavioral Experiments with Human Participants: A Practical Guide (Revised Version)

    DTIC Science & Technology

    2010-01-20

    R. (2000). Social research methods: Qualitative and quantitative approaches. Thousand Oaks, CA: Sage. This is a relatively large book. It covers...guide will be useful to anyone who is starting to run research studies, training people to run studies, or studying the experimental process. When...behavior. Running an experiment in exactly the same way regardless of who is conducting it or where (e.g., different research teams or laboratories) is

  8. Predictability of process resource usage - A measurement-based study on UNIX

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.; Iyer, Ravishankar K.

    1989-01-01

    A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts the CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual values. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82 percent of errors in CPU time prediction are less than 0.5 standard deviations of the process CPU time.
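    A toy version of the described scheme, with simple binning standing in for the off-line cluster analysis and invented CPU-time data: past executions are mapped to resource states, a per-program transition matrix is accumulated, and the next execution is predicted as the centroid of the most likely next state.

    ```python
    # Toy version of the described scheme: past executions of a program are mapped to
    # resource "states" (here simple CPU-time bins instead of a cluster analysis), a
    # transition matrix is built per program, and the next execution is predicted as
    # the centroid of the most likely next state. All numbers are hypothetical.
    from collections import defaultdict
    import numpy as np

    BINS = [0.0, 0.5, 2.0, 10.0, float("inf")]        # CPU-time regions (seconds)

    def state_of(cpu_s):
        return next(i for i in range(len(BINS) - 1) if BINS[i] <= cpu_s < BINS[i + 1])

    class Predictor:
        def __init__(self):
            self.trans = defaultdict(lambda: np.zeros((len(BINS) - 1,) * 2))
            self.sums = defaultdict(lambda: np.zeros(len(BINS) - 1))
            self.counts = defaultdict(lambda: np.zeros(len(BINS) - 1))
            self.last = {}

        def observe(self, prog, cpu_s):
            s = state_of(cpu_s)
            if prog in self.last:
                self.trans[prog][self.last[prog], s] += 1
            self.sums[prog][s] += cpu_s
            self.counts[prog][s] += 1
            self.last[prog] = s

        def predict(self, prog):
            s = self.last[prog]
            nxt = int(np.argmax(self.trans[prog][s]))   # most likely next region
            return self.sums[prog][nxt] / max(self.counts[prog][nxt], 1)

    p = Predictor()
    for cpu in [0.3, 0.4, 3.1, 0.2, 0.3, 3.4, 0.25]:    # past runs of "cc"
        p.observe("cc", cpu)
    print(f"predicted CPU time for next 'cc' run: {p.predict('cc'):.2f} s")
    ```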

  9. Predictability of process resource usage: A measurement-based study of UNIX

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.; Iyer, Ravishankar K.

    1987-01-01

    A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.

  10. Experimental evaluation of tool run-out in micro milling

    NASA Astrophysics Data System (ADS)

    Attanasio, Aldo; Ceretti, Elisabetta

    2018-05-01

    This paper deals with the micro milling cutting process, focusing on tool run-out measurement. Among the effects of the scale reduction from macro to micro (i.e., size effects), tool run-out plays an important role. This research aims at developing an easy and reliable method to measure tool run-out in micro milling based on experimental tests and an analytical model. From an Industry 4.0 perspective, this measuring strategy can be integrated into an adaptive system for controlling cutting forces, with the objective of improving production quality and process stability while reducing tool wear and machining costs. The proposed procedure estimates the tool run-out parameters from the tool diameter, the channel width, and the phase angle between the cutting edges. The cutting-edge phase measurement is based on force signal analysis. The developed procedure has been tested on data from micro milling experiments performed on a Ti6Al4V sample. The results showed that the procedure can be successfully used for tool run-out estimation.
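
    The core of the run-out estimate can be illustrated very simply once the channel width and tool diameter are known. The sketch below assumes an idealized two-flute geometry in which the slot is widened by twice the radial run-out offset, and it derives the cutting-edge phase angle from the spacing of force peaks; it is an illustration under these stated assumptions, not the paper's analytical model:

```python
# Simplified two-flute assumption: if the tool axis is offset from the spindle
# axis by r0, the outermost flute sweeps a larger radius, so the machined slot
# is wider than the nominal tool diameter by roughly 2 * r0.

def runout_offset_um(channel_width_um, tool_diameter_um):
    """Estimate the radial run-out offset r0 from slot width vs. tool diameter."""
    return (channel_width_um - tool_diameter_um) / 2.0

def edge_phase_deg(t_edge1_s, t_edge2_s, spindle_rpm):
    """Phase angle between two cutting-edge force peaks within one revolution."""
    period_s = 60.0 / spindle_rpm
    return 360.0 * ((t_edge2_s - t_edge1_s) % period_s) / period_s

print(runout_offset_um(channel_width_um=203.0, tool_diameter_um=200.0))  # ~1.5 um
print(edge_phase_deg(0.0, 0.00052, spindle_rpm=60000))                   # ~187 deg
```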

  11. Ensuring Effective Curriculum Approval Processes: A Guide for Local Senates

    ERIC Educational Resources Information Center

    Academic Senate for California Community Colleges, 2016

    2016-01-01

    Curriculum is the heart of the mission of every college. College curriculum approval processes have been established to ensure that rigorous, high quality curriculum is offered that meets the needs of students. While some concerns may exist regarding the effectiveness and efficiency of local curriculum processes, all participants in the process…

  12. Risk-based Methodology for Validation of Pharmaceutical Batch Processes.

    PubMed

    Wiles, Frederick

    2013-01-01

    In January 2011, the U.S. Food and Drug Administration published new process validation guidance for pharmaceutical processes. The new guidance debunks the long-held industry notion that three consecutive validation batches or runs are all that are required to demonstrate that a process is operating in a validated state. Instead, the new guidance now emphasizes that the level of monitoring and testing performed during process performance qualification (PPQ) studies must be sufficient to demonstrate statistical confidence both within and between batches. In some cases, three qualification runs may not be enough. Nearly two years after the guidance was first published, little has been written defining a statistical methodology for determining the number of samples and qualification runs required to satisfy Stage 2 requirements of the new guidance. This article proposes using a combination of risk assessment, control charting, and capability statistics to define the monitoring and testing scheme required to show that a pharmaceutical batch process is operating in a validated state. In this methodology, an assessment of process risk is performed through application of a process failure mode, effects, and criticality analysis (PFMECA). The output of PFMECA is used to select appropriate levels of statistical confidence and coverage which, in turn, are used in capability calculations to determine when significant Stage 2 (PPQ) milestones have been met. The achievement of Stage 2 milestones signals the release of batches for commercial distribution and the reduction of monitoring and testing to commercial production levels. Individuals, moving range, and range/sigma charts are used in conjunction with capability statistics to demonstrate that the commercial process is operating in a state of statistical control. The new process validation guidance published by the U.S. Food and Drug Administration in January of 2011 indicates that the number of process validation batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to augment this shortcoming. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired based on risk assessment and calculation of process capability.
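
    For illustration only, a minimal sketch of the kind of control-chart and capability calculations referred to above (individuals/moving-range limits and a Ppk index against assumed specification limits); it is not the paper's full PFMECA-driven methodology:

```python
import numpy as np

# Individuals / moving-range control limits plus a process-performance index,
# the sort of statistical evidence used to judge whether PPQ milestones are met.
# Data and specification limits below are invented for illustration.

def imr_limits(x):
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))                    # moving ranges of consecutive points
    sigma_est = mr.mean() / 1.128              # d2 constant for subgroup size 2
    center = x.mean()
    return center - 3 * sigma_est, center, center + 3 * sigma_est

def ppk(x, lsl, usl):
    x = np.asarray(x, dtype=float)
    s = x.std(ddof=1)
    return min(usl - x.mean(), x.mean() - lsl) / (3 * s)

assay = [99.1, 100.2, 99.8, 100.5, 99.6, 100.1, 99.9, 100.3]   # % label claim
print(imr_limits(assay))
print(ppk(assay, lsl=95.0, usl=105.0))
```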

  13. The use of a running wheel to measure activity in rodents: relationship to energy balance, general activity, and reward.

    PubMed

    Novak, Colleen M; Burghardt, Paul R; Levine, James A

    2012-03-01

    Running wheels are commonly employed to measure rodent physical activity in a variety of contexts, including studies of energy balance and obesity. There is no consensus on the nature of wheel-running activity or its underlying causes, however. Here, we will begin by systematically reviewing how running wheel availability affects physical activity and other aspects of energy balance in laboratory rodents. While wheel running and physical activity in the absence of a wheel commonly correlate in a general sense, in many specific aspects the two do not correspond. In fact, the presence of running wheels alters several aspects of energy balance, including body weight and composition, food intake, and energy expenditure of activity. We contend that wheel-running activity should be considered a behavior in and of itself, reflecting several underlying behavioral processes in addition to a rodent's general, spontaneous activity. These behavioral processes include defensive behavior, predatory aggression, and depression- and anxiety-like behaviors. As it relates to energy balance, wheel running engages several brain systems-including those related to the stress response, mood, and reward, and those responsive to growth factors-that influence energy balance indirectly. We contend that wheel-running behavior represents factors in addition to rodents' tendency to be physically active, engaging additional neural and physiological mechanisms which can then independently alter energy balance and behavior. Given the impact of wheel-running behavior on numerous overlapping systems that influence behavior and physiology, this review outlines the need for careful design and interpretation of studies that utilize running wheels as a means for exercise or as a measurement of general physical activity. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. The use of a running wheel to measure activity in rodents: Relationship to energy balance, general activity, and reward

    PubMed Central

    Levine, James A.

    2015-01-01

    Running wheels are commonly employed to measure rodent physical activity in a variety of contexts, including studies of energy balance and obesity. There is no consensus on the nature of wheel-running activity or its underlying causes, however. Here, we will begin by systematically reviewing how running wheel availability affects physical activity and other aspects of energy balance in laboratory rodents. While wheel running and physical activity in the absence of a wheel commonly correlate in a general sense, in many specific aspects the two do not correspond. In fact, the presence of running wheels alters several aspects of energy balance, including body weight and composition, food intake, and energy expenditure of activity. We contend that wheel-running activity should be considered a behavior in and of itself, reflecting several underlying behavioral processes in addition to a rodent's general, spontaneous activity. These behavioral processes include defensive behavior, predatory aggression, and depression- and anxiety-like behaviors. As it relates to energy balance, wheel running engages several brain systems—including those related to the stress response, mood, and reward, and those responsive to growth factors—that influence energy balance indirectly. We contend that wheel-running behavior represents factors in addition to rodents' tendency to be physically active, engaging additional neural and physiological mechanisms which can then independently alter energy balance and behavior. Given the impact of wheel-running behavior on numerous overlapping systems that influence behavior and physiology, this review outlines the need for careful design and interpretation of studies that utilize running wheels as a means for exercise or as a measurement of general physical activity. PMID:22230703

  15. Single-Run Single-Mask Inductively-Coupled-Plasma Reactive-Ion-Etching Process for Fabricating Suspended High-Aspect-Ratio Microstructures

    NASA Astrophysics Data System (ADS)

    Yang, Yao-Joe; Kuo, Wen-Cheng; Fan, Kuang-Chao

    2006-01-01

    In this work, we present a single-run single-mask (SRM) process for fabricating suspended high-aspect-ratio structures on standard silicon wafers using an inductively coupled plasma-reactive ion etching (ICP-RIE) etcher. This process eliminates extra fabrication steps which are required for structure release after trench etching. Released microstructures with 120 μm thickness are obtained by this process. The corresponding maximum aspect ratio of the trench is 28. The SRM process is an extended version of the standard process proposed by BOSCH GmbH (BOSCH process). The first step of the SRM process is a standard BOSCH process for trench etching, then a polymer layer is deposited on trench sidewalls as a protective layer for the subsequent structure-releasing step. The structure is released by dry isotropic etching after the polymer layer on the trench floor is removed. All the steps can be integrated into a single-run ICP process. Also, only one mask is required. Therefore, the process complexity and fabrication cost can be effectively reduced. Discussions on each SRM step and considerations for avoiding undesired etching of the silicon structures during the release process are also presented.

  16. Modified SPC for short run test and measurement process in multi-stations

    NASA Astrophysics Data System (ADS)

    Koh, C. K.; Chin, J. F.; Kamaruddin, S.

    2018-03-01

    Due to short production runs and the measurement error inherent in electronic test and measurement (T&M) processes, continuous quality monitoring through real-time statistical process control (SPC) is challenging. Industry practice allows the installation of a guard band using measurement uncertainty to reduce the width of the acceptance limit, as an indirect way to compensate for measurement errors. This paper presents a new SPC model combining a modified guard band and control charts (Z-bar chart and W chart) for short runs in T&M processes at multiple stations. The proposed model standardizes the observed value with the measurement target (T) and rationed measurement uncertainty (U). An S-factor (Sf) is introduced into the control limits to improve the sensitivity in detecting small shifts. The model was embedded in an automated quality control system and verified with a case study in real industry.
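
    A minimal sketch of the standardization idea follows; the parameter values and the simple flagging rule are assumptions for illustration, not the paper's exact model. Observations are scaled by the measurement target T and the rationed measurement uncertainty U, and an S-factor below 1 tightens the control limits:

```python
import numpy as np

# Standardize observations from different stations/products onto a common scale
# and flag points that exceed tightened three-sigma limits.

def standardize(x, target, uncertainty):
    return (np.asarray(x, dtype=float) - target) / uncertainty

def out_of_control(z, s_factor=0.8):
    """Flag standardized points outside +/- 3 * s_factor (s_factor < 1 tightens limits)."""
    return np.abs(z) > 3.0 * s_factor

x = [10.02, 9.97, 10.05, 10.30, 9.99]          # measured values (illustrative)
z = standardize(x, target=10.0, uncertainty=0.05)
print(out_of_control(z))                        # only the 10.30 reading is flagged
```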

  17. [Engineering a bone free flap for maxillofacial reconstruction: technical restrictions].

    PubMed

    Raoul, G; Myon, L; Chai, F; Blanchemain, N; Ferri, J

    2011-09-01

    Vascularisation is key to success in bone tissue engineering. Creating a functional vascular network is an important concern in ensuring the vitality of regenerated tissues. Many strategies have been developed to achieve this goal. One of these is cell culture in a perfusion bioreactor chamber. These new technical requirements came along with improved media and chamber receptacles: bioreactors (chapter 2). Some bone tissue engineering processes already have clinical applications, but for volumes limited by the lack of vascularisation. Resorbable or non-resorbable membranes are an example; used alone or in association with bone grafts, they protect the graft during the revascularization process. Potentiated osseous regeneration uses molecular or cellular adjuvants (BMPs and autologous stem cells) to improve osseous healing. Significant improvements have been made: integration of specific sequences that may guide and enhance cell differentiation in the scaffold, and nano- or micro-patterned cell-containing scaffolds. Finally, some authors consider the patient's body an ideal bioreactor for inducing vascularisation in large volumes of grafted tissue. "Endocultivation", i.e., cell culture inside the human body, has been proven feasible and safe. The properties of the regenerated bone in the long run remain to be assessed. The objective remains the engineering of an "in vitro" osseous free flap without morbidity. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  18. Genetic evolutionary taboo search for optimal marker placement in infrared patient setup

    NASA Astrophysics Data System (ADS)

    Riboldi, M.; Baroni, G.; Spadea, M. F.; Tagaste, B.; Garibaldi, C.; Cambria, R.; Orecchia, R.; Pedotti, A.

    2007-09-01

    In infrared patient setup adequate selection of the external fiducial configuration is required for compensating inner target displacements (target registration error, TRE). Genetic algorithms (GA) and taboo search (TS) were applied in a newly designed approach to optimal marker placement: the genetic evolutionary taboo search (GETS) algorithm. In the GETS paradigm, multiple solutions are simultaneously tested in a stochastic evolutionary scheme, where taboo-based decision making and adaptive memory guide the optimization process. The GETS algorithm was tested on a group of ten prostate patients, to be compared to standard optimization and to randomly selected configurations. The changes in the optimal marker configuration, when TRE is minimized for OARs, were specifically examined. Optimal GETS configurations ensured a 26.5% mean decrease in the TRE value, versus 19.4% for conventional quasi-Newton optimization. Common features in GETS marker configurations were highlighted in the dataset of ten patients, even when multiple runs of the stochastic algorithm were performed. Including OARs in TRE minimization did not considerably affect the spatial distribution of GETS marker configurations. In conclusion, the GETS algorithm proved to be highly effective in solving the optimal marker placement problem. Further work is needed to embed site-specific deformation models in the optimization process.

  19. Extremely accurate sequential verification of RELAP5-3D

    DOE PAGES

    Mesina, George L.; Aumiller, David L.; Buschman, Francis X.

    2015-11-19

    Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.

  20. Simultaneous determination of 15 marker constituents in various radix Astragali preparations by solid-phase extraction and high-performance liquid chromatography.

    PubMed

    Qi, Lian-Wen; Yu, Qing-Tao; Yi, Ling; Ren, Mei-Ting; Wen, Xiao-Dong; Wang, Yu-Xia; Li, Ping

    2008-01-01

    An improved quality control method was developed to simultaneously determine 15 major constituents (eight flavonoids and seven saponins) in various radix Astragali preparations, using SPE for pretreatment of samples, HPLC with diode-array and evaporative light scattering detectors (DAD-ELSD) for quantification in one run, and HPLC-ESI-TOF/MS for definite identification of compounds in preparations. Optimum separations were obtained with a ZORBAX C(18) column, using a gradient elution with 0.3% aqueous formic acid and ACN. This established method was fully validated with respect to linearity, precision, repeatability, and accuracy, and was successfully applied to quantify the 15 compounds in 19 commercial samples, including three dosage forms (oral solution, injection, and concentrated granule) and processed products of radix Astragali. The results demonstrated that many factors might result in significant differences in the quality of the final preparations, including crude drugs, pretreatment processes, manufacturing procedures, storage conditions, etc. The developed method thus provides a reasonable and powerful means of ensuring the efficacy, safety, and batch-to-batch uniformity of radix Astragali products by standardizing each procedure, and is proposed for quality control in the clinical use and modernization of herbal preparations.

  1. Entropy-based gene ranking without selection bias for the predictive classification of microarray data.

    PubMed

    Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe

    2003-11-06

    We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
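
    A minimal sketch of the entropy-guided elimination idea is shown below, using scikit-learn's LinearSVC for the SVM weights; the chunk-size heuristic is an assumption for illustration, not the authors' exact rule:

```python
import numpy as np
from sklearn.svm import LinearSVC

def weight_entropy(w, bins=20):
    """Shannon entropy (bits) of the histogram of absolute SVM weights."""
    hist, _ = np.histogram(np.abs(w), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def erfe_ranking(X, y, max_chunk_frac=0.3, min_features=10, bins=20):
    """Rank features: train an SVM, drop a chunk of weak genes sized by weight entropy."""
    remaining = list(range(X.shape[1]))
    ranking = []
    while len(remaining) > min_features:
        w = LinearSVC(dual=False).fit(X[:, remaining], y).coef_.ravel()
        h = weight_entropy(w, bins) / np.log2(bins)      # normalized to [0, 1]
        # Heuristic: a flat (high-entropy) weight distribution means many
        # similarly uninteresting genes, so a larger chunk can be discarded.
        n_drop = max(1, int(max_chunk_frac * h * len(remaining)))
        order = np.argsort(np.abs(w))                    # weakest genes first
        drop = [remaining[i] for i in order[:n_drop]]
        ranking = drop[::-1] + ranking                   # earlier drops rank lower
        remaining = [g for g in remaining if g not in drop]
    return remaining + ranking                           # most important first

rng = np.random.default_rng(0)
X, y = rng.random((40, 100)), rng.integers(0, 2, 40)     # toy data
print(erfe_ranking(X, y)[:5])
```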

  2. Multi-response optimization of Artemia hatching process using split-split-plot design based response surface methodology

    PubMed Central

    Arun, V. V.; Saharan, Neelam; Ramasubramanian, V.; Babitha Rani, A. M.; Salin, K. R.; Sontakke, Ravindra; Haridas, Harsha; Pazhayamadom, Deepak George

    2017-01-01

    A novel method, BBD-SSPD, is proposed by combining the Box-Behnken Design (BBD) and the Split-Split Plot Design (SSPD), which ensures a minimum number of experimental runs, leading to economical utilization in multi-factorial experiments. The brine shrimp Artemia was tested to study the combined effects of photoperiod, temperature and salinity, each at three levels, on the hatching percentage and hatching time of their cysts. The BBD was employed to select 13 treatment combinations out of the 27 possible combinations, which were grouped in an SSPD arrangement. Multiple responses were optimized simultaneously using Derringer’s desirability function. Photoperiod and temperature, as well as the temperature-salinity interaction, were found to significantly affect the hatching percentage of Artemia, while the hatching time was significantly influenced by photoperiod and temperature, and their interaction. The optimum conditions were a 23 h photoperiod, 29 °C temperature and 28 ppt salinity, resulting in 96.8% hatching in 18.94 h. To verify the results obtained from the BBD-SSPD experiment, the experiment was repeated with the same setup, and the verification results were found to be similar to those of the original experiment. It is expected that this method would be suitable for optimizing the hatching process of animal eggs. PMID:28091611
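
    For readers unfamiliar with Derringer's desirability function, the sketch below shows how two responses (hatching percentage to be maximized, hatching time to be minimized) can be combined into one overall desirability; the bounds and targets are assumed for illustration and are not taken from the study:

```python
import numpy as np

# Individual desirabilities are scaled to [0, 1] and combined by geometric mean.

def d_larger_is_better(y, low, target, r=1.0):
    return np.clip((y - low) / (target - low), 0.0, 1.0) ** r

def d_smaller_is_better(y, target, high, r=1.0):
    return np.clip((high - y) / (high - target), 0.0, 1.0) ** r

def overall_desirability(hatch_pct, hatch_time_h):
    d1 = d_larger_is_better(hatch_pct, low=50.0, target=100.0)     # assumed bounds
    d2 = d_smaller_is_better(hatch_time_h, target=15.0, high=30.0) # assumed bounds
    return (d1 * d2) ** 0.5                 # geometric mean of two desirabilities

print(overall_desirability(96.8, 18.94))    # ~0.83 under these assumed bounds
```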

  3. Statistical mixture design and multivariate analysis of inkjet printed a-WO3/TiO2/WOX electrochromic films.

    PubMed

    Wojcik, Pawel Jerzy; Pereira, Luís; Martins, Rodrigo; Fortunato, Elvira

    2014-01-13

    An efficient mathematical strategy in the field of solution processed electrochromic (EC) films is outlined as a combination of an experimental work, modeling, and information extraction from massive computational data via statistical software. Design of Experiment (DOE) was used for statistical multivariate analysis and prediction of mixtures through a multiple regression model, as well as the optimization of a five-component sol-gel precursor subjected to complex constraints. This approach significantly reduces the number of experiments to be realized, from 162 in the full factorial (L=3) and 72 in the extreme vertices (D=2) approach down to only 30 runs, while still maintaining a high accuracy of the analysis. By carrying out a finite number of experiments, the empirical modeling in this study shows reasonably good prediction ability in terms of the overall EC performance. An optimized ink formulation was employed in a prototype of a passive EC matrix fabricated in order to test and trial this optically active material system together with a solid-state electrolyte for the prospective application in EC displays. Coupling of DOE with chromogenic material formulation shows the potential to maximize the capabilities of these systems and ensures increased productivity in many potential solution-processed electrochemical applications.

  4. Accelerating large-scale protein structure alignments with graphics processing units

    PubMed Central

    2012-01-01

    Background Large-scale protein structure alignment, an indispensable tool to structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign could take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using massive parallel computing power of GPU. PMID:22357132

  5. SU-F-T-507: Modeling Cerenkov Emissions From Medical Linear Accelerators: A Monte Carlo Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrock, Z; Oldham, M; Adamson, J

    2016-06-15

    Purpose: Cerenkov emissions are a natural byproduct of MV radiotherapy but are typically ignored as inconsequential. However, Cerenkov photons may be useful for activation of drugs such as psoralen. Here, we investigate Cerenkov radiation from common radiotherapy beams using Monte Carlo simulations. Methods: GAMOS, a GEANT4-based framework for Monte Carlo simulations, was used to model 6 and 18MV photon beams from a Varian medical linac. Simulations were run to track Cerenkov production from these beams when irradiating a 50cm radius sphere of water. Electron contamination was neglected. 2 million primary photon histories were run for each energy, and values scored included integral dose and total track length of Cerenkov photons between 100 and 400 nm wavelength. By lowering process energy thresholds, simulations included low energy Bremsstrahlung photons to ensure comprehensive evaluation of UV production in the medium. Results: For the same number of primary photons, UV Cerenkov production for 18MV was greater than 6MV by a factor of 3.72 as determined by total track length. The total integral dose was a factor of 2.31 greater for the 18MV beam. Bremsstrahlung photons were a negligibly small component of photons in the wavelength range of interest, comprising 0.02% of such photons. Conclusion: Cerenkov emissions in water are 1.6x greater for 18MV than 6MV for the same integral dose. Future work will expand the analysis to include optical properties of tissues, and to investigate strategies to maximize Cerenkov emission per unit dose for MV radiotherapy.

  6. Re-examining the roles of surface heat flux and latent heat release in a "hurricane-like" polar low over the Barents Sea

    NASA Astrophysics Data System (ADS)

    Kolstad, Erik W.; Bracegirdle, Thomas J.; Zahn, Matthias

    2016-07-01

    Polar lows are intense mesoscale cyclones that occur at high latitudes in both hemispheres during winter. Their sometimes evidently convective nature, fueled by strong surface fluxes and with cloud-free centers, has led to some polar lows being referred to as "arctic hurricanes." Idealized studies have shown that intensification by hurricane development mechanisms is theoretically possible in polar winter atmospheres, but the lack of observations and realistic simulations of actual polar lows has made it difficult to ascertain if this occurs in reality. Here the roles of surface heat fluxes and latent heat release in the development of a Barents Sea polar low, which in its cloud structures showed some similarities to hurricanes, are studied with an ensemble of sensitivity experiments, where latent heating and/or surface fluxes of sensible and latent heat were switched off before the polar low peaked in intensity. To ensure that the polar lows in the sensitivity runs did not track too far away from the actual environmental conditions, a technique known as spectral nudging was applied. This was shown to be crucial for enabling comparisons between the different model runs. The results presented here show that (1) no intensification occurred during the mature, postbaroclinic stage of the simulated polar low; (2) surface heat fluxes, i.e., air-sea interaction, were crucial processes both in order to attain the polar low's peak intensity during the baroclinic stage and to maintain its strength in the mature stage; and (3) latent heat release played a less important role than surface fluxes in both stages.

  7. A Tool for Conditions Tag Management in ATLAS

    NASA Astrophysics Data System (ADS)

    Sharmazanashvili, A.; Batiashvili, G.; Gvaberidze, G.; Shekriladze, L.; Formica, A.; Atlas Collaboration

    2014-06-01

    ATLAS Conditions data include about 2 TB in a relational database and 400 GB of files referenced from the database. Conditions data is entered and retrieved using COOL, the API for accessing data in the LCG Conditions Database infrastructure. It is managed using an ATLAS-customized Python-based tool set. Conditions data are required for every reconstruction and simulation job, so access to them is crucial for all aspects of ATLAS data taking and analysis, as well as by preceding tasks to derive optimal corrections to reconstruction. Optimized sets of conditions for processing are accomplished using strict version control on those conditions: a process which assigns COOL Tags to sets of conditions, and then unifies those conditions over data-taking intervals into a COOL Global Tag. This Global Tag identifies the set of conditions used to process data so that the underlying conditions can be uniquely identified with 100% reproducibility should the processing be executed again. Understanding shifts in the underlying conditions from one tag to another and ensuring interval completeness for all detectors for a set of runs to be processed is a complex task, requiring tools beyond the above-mentioned Python utilities. Therefore, a JavaScript/PHP-based utility called the Conditions Tag Browser (CTB) has been developed. CTB gives detector and conditions experts the possibility to navigate through the different databases and COOL folders; explore the content of given tags and the differences between them, as well as their extent in time; visualize the content of channels associated with leaf tags. This report describes the structure and PHP/JavaScript classes of functions of the CTB.

  8. Understanding overlay signatures using machine learning on non-lithography context information

    NASA Astrophysics Data System (ADS)

    Overcast, Marshall; Mellegaard, Corey; Daniel, David; Habets, Boris; Erley, Georg; Guhlemann, Steffen; Thrun, Xaver; Buhl, Stefan; Tottewitz, Steven

    2018-03-01

    Overlay errors between two layers can be caused by non-lithography processes. While these errors can be compensated by the run-to-run system, such process and tool signatures are not always stable. In order to monitor the impact of non-lithography context on overlay at regular intervals, a systematic approach is needed. Using various machine learning techniques, significant context parameters that relate to deviating overlay signatures are automatically identified. Once the most influential context parameters are found, a run-to-run simulation is performed to see how much improvement can be obtained. The resulting analysis shows good potential for reducing the influence of hidden context parameters on overlay performance. Non-lithographic contexts are significant contributors, and their automatic detection and classification will enable the overlay roadmap, given the corresponding control capabilities.

  9. Biomanufacturing process analytical technology (PAT) application for downstream processing: Using dissolved oxygen as an indicator of product quality for a protein refolding reaction.

    PubMed

    Pizarro, Shelly A; Dinges, Rachel; Adams, Rachel; Sanchez, Ailen; Winter, Charles

    2009-10-01

    Process analytical technology (PAT) is an initiative from the US FDA combining analytical and statistical tools to improve manufacturing operations and ensure regulatory compliance. This work describes the use of a continuous monitoring system for a protein refolding reaction to provide consistency in product quality and process performance across batches. A small-scale bioreactor (3 L) is used to understand the impact of aeration on refolding recombinant human vascular endothelial growth factor (rhVEGF) in a reducing environment. A reverse-phase HPLC assay is used to assess product quality. The goal in understanding the oxygen needs of the reaction and its impact on quality is to make a product that is efficiently refolded to its native and active form with minimum oxidative degradation from batch to batch. Because this refolding process is heavily dependent on oxygen, the % dissolved oxygen (DO) profile is explored as a PAT tool to regulate process performance at commercial manufacturing scale. A dynamic gassing-out approach using constant mass transfer (kLa) is used for scale-up of the aeration parameters to manufacturing scale tanks (2,000 L, 15,000 L). The resulting DO profiles of the refolding reaction show similar trends across scales and these are analyzed using rpHPLC. The desired product quality attributes are then achieved through alternating air and nitrogen sparging triggered by changes in the monitored DO profile. This approach mitigates the impact of differences in equipment or feedstock components between runs, and is directly in line with the key goal of PAT to "actively manage process variability using a knowledge-based approach." (c) 2009 Wiley Periodicals, Inc.
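
    The dynamic gassing-out approach mentioned above fits an exponential re-aeration curve to the DO signal to estimate kLa; a minimal sketch with illustrative numbers (not the paper's data) follows:

```python
import numpy as np

# Dynamic gassing-out model: after switching from nitrogen to air sparging,
# C(t) = C_sat * (1 - exp(-kLa * t)), so ln(1 - C/C_sat) is linear in t with
# slope -kLa.

def estimate_kla(t_s, do_pct, do_sat_pct=100.0):
    t = np.asarray(t_s, dtype=float)
    c = np.asarray(do_pct, dtype=float)
    y = np.log(1.0 - c / do_sat_pct)
    slope, _ = np.polyfit(t, y, 1)
    return -slope                              # kLa in 1/s

t = [0, 30, 60, 90, 120]                       # seconds after the gas switch
do = [0.0, 26.0, 45.0, 59.0, 70.0]             # % of saturation (illustrative)
print(estimate_kla(t, do))                     # ~0.01 1/s
```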

  10. Integrating end-to-end threads of control into object-oriented analysis and design

    NASA Technical Reports Server (NTRS)

    Mccandlish, Janet E.; Macdonald, James R.; Graves, Sara J.

    1993-01-01

    Current object-oriented analysis and design methodologies fall short in their use of mechanisms for identifying threads of control for the system being developed. The scenarios which typically describe a system are more global than looking at the individual objects and representing their behavior. Unlike conventional methodologies that use data flow and process-dependency diagrams, object-oriented methodologies do not provide a model for representing these global threads end-to-end. Tracing through threads of control is key to ensuring that a system is complete and timing constraints are addressed. The existence of multiple threads of control in a system necessitates a partitioning of the system into processes. This paper describes the application and representation of end-to-end threads of control to the object-oriented analysis and design process using object-oriented constructs. The issue of representation is viewed as a grouping problem, that is, how to group classes/objects at a higher level of abstraction so that the system may be viewed as a whole with both classes/objects and their associated dynamic behavior. Existing object-oriented development methodology techniques are extended by adding design-level constructs termed logical composite classes and process composite classes. Logical composite classes are design-level classes which group classes/objects both logically and by thread of control information. Process composite classes further refine the logical composite class groupings by using process partitioning criteria to produce optimum concurrent execution results. The goal of these design-level constructs is to ultimately provide the basis for a mechanism that can support the creation of process composite classes in an automated way. Using an automated mechanism makes it easier to partition a system into concurrently executing elements that can be run in parallel on multiple processors.

  11. ENKI - An Open Source environmental modelling platfom

    NASA Astrophysics Data System (ADS)

    Kolberg, S.; Bruland, O.

    2012-04-01

    The ENKI software framework for implementing spatio-temporal models is now released under the LGPL license. Originally developed for evaluation and comparison of distributed hydrological model compositions, ENKI can be used for simulating any time-evolving process over a spatial domain. The core approach is to connect a set of user specified subroutines into a complete simulation model, and provide all administrative services needed to calibrate and run that model. This includes functionality for geographical region setup, all file I/O, calibration and uncertainty estimation etc. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines and various model compositions in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational water resource management. ENKI uses a plug-in structure to invoke separately compiled subroutines, separately built as dynamic-link libraries (dlls). The source code of an ENKI routine is highly compact, with a narrow framework-routine interface allowing the main program to recognise the number, types, and names of the routine's variables. The framework then exposes these variables to the user within the proper context, ensuring that distributed maps coincide spatially, time series exist for input variables, states are initialised, GIS data sets exist for static map data, manually or automatically calibrated values for parameters etc. By using function calls and memory data structures to invoke routines and facilitate information flow, ENKI provides good performance. For a typical distributed hydrological model setup in a spatial domain of 25000 grid cells, 3-4 time steps simulated per second should be expected. Future adaptation to parallel processing may further increase this speed. New modifications to ENKI include a full separation of API and user interface, making it possible to run ENKI from GIS programs and other software environments. ENKI currently compiles under Windows and Visual Studio only, but ambitions exist to remove the platform and compiler dependencies.

  12. Effects of Real-Time NASA Vegetation Data on Model Forecasts of Severe Weather

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Bell, Jordan R.; LaFontaine, Frank J.; Peters-Lidard, Christa D.

    2012-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed a Greenness Vegetation Fraction (GVF) dataset, which is updated daily using swaths of Normalized Difference Vegetation Index data from the Moderate Resolution Imaging Spectroradiometer (MODIS) data aboard the NASA-EOS Aqua and Terra satellites. NASA SPoRT started generating daily real-time GVF composites at 1-km resolution over the Continental United States beginning 1 June 2010. A companion poster presentation (Bell et al.) primarily focuses on impact results in an offline configuration of the Noah land surface model (LSM) for the 2010 warm season, comparing the SPoRT/MODIS GVF dataset to the current operational monthly climatology GVF available within the National Centers for Environmental Prediction (NCEP) and Weather Research and Forecasting (WRF) models. This paper/presentation primarily focuses on individual case studies of severe weather events to determine the impacts and possible improvements by using the real-time, high-resolution SPoRT-MODIS GVFs in place of the coarser-resolution NCEP climatological GVFs in model simulations. The NASA-Unified WRF (NU-WRF) modeling system is employed to conduct the sensitivity simulations of individual events. The NU-WRF is an integrated modeling system based on the Advanced Research WRF dynamical core that is designed to represents aerosol, cloud, precipitation, and land processes at satellite-resolved scales in a coupled simulation environment. For this experiment, the coupling between the NASA Land Information System (LIS) and the WRF model is utilized to measure the impacts of the daily SPoRT/MODIS versus the monthly NCEP climatology GVFs. First, a spin-up run of the LIS is integrated for two years using the Noah LSM to ensure that the land surface fields reach an equilibrium state on the 4-km grid mesh used. Next, the spin-up LIS is run in two separate modes beginning on 1 June 2010, one continuing with the climatology GVFs while the other uses the daily SPoRT/MODIS GVFs. Finally, snapshots of the LIS land surface fields are used to initialize two different simulations of the NU-WRF, one running with climatology LIS and GVFs, and the other running with experimental LIS and NASA/SPoRT GVFs. In this paper/presentation, case study results will be highlighted in regions with significant differences in GVF between the NCEP climatology and SPoRT product during severe weather episodes.

  13. Water and processes of degradation in the Martian landscape

    NASA Technical Reports Server (NTRS)

    Milton, D. J.

    1973-01-01

    It is shown that erosion has been active on Mars so that many of the surface landforms are products of degradation. Unlike earth, erosion has not been a universal process, but one areally restricted and intermittently active so that a landscape is the product of one or two cycles of erosion and large areas of essentially undisturbed primitive terrain; running water has been the principal agent of degradation. Many features on Mars are most easily explained by assuming running surface water at some time in the past; for a few features, running water is the only possible explanation.

  14. Dual Optical Comb LWIR Source and Sensor

    DTIC Science & Technology

    2017-10-12

    Figure 39: Locking loop only controls one parameter, whereas there are two free-running parameters to control. ... Figure 65: optical frequency, along with a 12-point running average (black) equivalent to a 4 cm-1 resolution. ... and processed on a single epitaxial substrate. Each OFC will be electrically driven and free-running (requiring no optical locking mechanisms).

  15. A Functional Approach to Reducing Runaway Behavior and Stabilizing Placements for Adolescents in Foster Care

    ERIC Educational Resources Information Center

    Clark, Hewitt B.; Crosland, Kimberly A.; Geller, David; Cripe, Michael; Kenney, Terresa; Neff, Bryon; Dunlap, Glen

    2008-01-01

    Teenagers' running from foster placement is a significant problem in the field of child protection. This article describes a functional, behavior analytic approach to reducing running away through assessing the motivations for running, involving the youth in the assessment process, and implementing interventions to enhance the reinforcing value of…

  16. Searching and Filtering Tweets: CSIRO at the TREC 2012 Microblog Track

    DTIC Science & Technology

    2012-11-01

    stages. We first evaluate the effect of tweet corpus pre-processing in vanilla runs (no query expansion), and then assess the effect of query expansion. ... Effect of a vanilla run on D4 index (both real-time and non-real-time), and query expansion methods based on the submitted runs for two sets of queries

  17. Internet-Based Solutions for a Secure and Efficient Seismic Network

    NASA Astrophysics Data System (ADS)

    Bhadha, R.; Black, M.; Bruton, C.; Hauksson, E.; Stubailo, I.; Watkins, M.; Alvarez, M.; Thomas, V.

    2017-12-01

    The Southern California Seismic Network (SCSN), operated by Caltech and USGS, leverages modern Internet-based computing technologies to provide timely earthquake early warning for damage reduction, event notification, ShakeMap, and other data products. Here we present recent and ongoing innovations in telemetry, security, cloud computing, virtualization, and data analysis that have allowed us to develop a network that runs securely and efficiently. Earthquake early warning systems must process seismic data within seconds of being recorded, and SCSN maintains a robust and resilient network of more than 350 digital strong motion and broadband seismic stations to achieve this goal. We have continued to improve the path diversity and fault tolerance within our network, and have also developed new tools for latency monitoring and archiving. Cyberattacks are in the news almost daily, and with most of our seismic data streams running over the Internet, it is only a matter of time before SCSN is targeted. To ensure system integrity and availability across our network, we have implemented strong security, including encryption and Virtual Private Networks (VPNs). SCSN operates its own data center at Caltech, but we have also installed real-time servers on Amazon Web Services (AWS), to provide an additional level of redundancy, and eventually to allow full off-site operations continuity for our network. Our AWS systems receive data from Caltech-based import servers and directly from field locations, and are able to process the seismic data, calculate earthquake locations and magnitudes, and distribute earthquake alerts, directly from the cloud. We have also begun a virtualization project at our Caltech data center, allowing us to serve data from Virtual Machines (VMs), making efficient use of high-performance hardware and increasing flexibility and scalability of our data processing systems. Finally, we have developed new monitoring of station average noise levels at most stations. Noise monitoring is effective at identifying anthropogenic noise sources and malfunctioning acquisition equipment. We have built a dynamic display of results with sorting and mapping capabilities that allow us to quickly identify problematic sites and areas with elevated noise.

  18. Maintenance Cognitive Stimulation Therapy (CST) in practice: study protocol for a randomized controlled trial

    PubMed Central

    2012-01-01

    Background Cognitive Stimulation Therapy (CST) is a psychosocial evidence-based group intervention for people with dementia recommended by the UK NICE guidelines. In clinical trials, CST has been shown to improve cognition and quality of life, but little is known about the best way of ensuring implementation of CST in practice settings. A recent pilot study found that a third of people who attend CST training go on to run CST in practice, but staff identified a lack of support as a key reason for the lack of implementation. Methods/design There are three projects in this study: The first is a pragmatic multi-centre, randomised controlled trial (RCT) of staff training, comparing CST training and outreach support with CST training only; the second, the monitoring and outreach trial, is a phase IV trial that evaluates implementation of CST in practice by staff members who have previously had the CST manual or attended training. Centres will be randomised to receive outreach support. The primary outcome measure for both of these trials is the number of CST sessions run for people with dementia. Secondary outcomes include the number of attenders at sessions, job satisfaction, dementia knowledge and attitudes, competency, barriers to change, approach to learning and a controllability of beliefs and the level of adherence. Focus groups will assess staff members’ perceptions of running CST groups and receiving outreach support. The third study involves monitoring centres running groups in their usual practice and looking at basic outcomes of cognition and quality of life for the person with dementia. Discussion These studies assess the effects of outreach support on putting CST into practice and running groups effectively in a variety of care settings with people with dementia; evaluate the effectiveness of CST in standard clinical practice; and identify key factors promoting or impeding the successful running of groups. Trial registration Clinical trial ISRCTN28793457. PMID:22735077

  19. Software testing

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.

    2016-01-01

    Now more than ever, scientific results are dependent on sophisticated software and analysis. Why should we trust code written by others? How do you ensure your own code produces sensible results? How do you make sure it continues to do so as you update, modify, and add functionality? Software testing is an integral part of code validation and writing tests should be a requirement for any software project. I will talk about Python-based tools that make managing and running tests much easier and explore some statistics for projects hosted on GitHub that contain tests.
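
    As a concrete example of the kind of Python tooling referred to here, a minimal pytest module might look like this (the function under test is hypothetical); saving it as test_normalize.py and running `pytest` collects and executes every `test_*` function automatically:

```python
import math
import pytest

def normalize(values):
    """Scale a sequence of numbers so they sum to 1 (the code under test)."""
    total = sum(values)
    if total == 0:
        raise ValueError("cannot normalize an all-zero sequence")
    return [v / total for v in values]

def test_normalize_sums_to_one():
    assert math.isclose(sum(normalize([1.0, 2.0, 7.0])), 1.0)

def test_normalize_rejects_zeros():
    with pytest.raises(ValueError):
        normalize([0.0, 0.0])
```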

  20. A cryogenic target for Compton scattering experiments at HIγS

    DOE PAGES

    Kendellen, D. P.; Ahmed, M. W.; Baird, E.; ...

    2016-10-06

    We have developed a cryogenic target for use at the High Intensity γ-ray Source (HIγS). The target system is able to liquefy 4He at 4 K, hydrogen at 20 K, and deuterium at 23 K to fill a 0.3 L Kapton cell. Liquid temperatures and condenser pressures are recorded throughout each run in order to ensure that the target's areal density is known to ~1%. The target is being utilized in a series of experiments which probe the electromagnetic polarizabilities of the nucleon.

  1. Financial Analysis of a Selected Company

    NASA Astrophysics Data System (ADS)

    Baran, Dušan; Pastýr, Andrej; Baranová, Daniela

    2016-06-01

    The success of every business enterprise is directly related to the competencies of its management. The enterprise can, as a result, develop different approaches to the new, complex, and changing conditions for success in the market. During difficult periods, managers therefore try to change their management approach to ensure the long-term and stable running of the enterprise. They are forced to continuously retain and acquire customers and suppliers. By implementing these measures they have the opportunity to achieve a competitive advantage over other business enterprises.

  2. Near real time determination of the magnetopause and bow shock shape and position

    NASA Astrophysics Data System (ADS)

    Kartalev, M. D.; Keremidarska, V. I.; Grigorov, K. G.; Romanov, D. K.

    2002-03-01

    We present a web-based, near-real-time (once every 90 minutes) automated run of our 3D magnetosheath gasdynamic numerical model (http://geospace.nat.bg). The determination of the shape and position of the bow shock and the magnetopause is part of the solution. The model utilizes the realistic semi-empirical Tsyganenko magnetosphere model T96-01 to ensure pressure balance at the magnetopause. In this realization, we use real-time ACE data averaged over a 6-minute interval.

  3. Financial advantages. Preventative measures ensure the health of your accounts receivable.

    PubMed

    Duda, Michelle

    2009-11-01

    Running a dental practice is no small task, from staying on the leading edge of new medical developments and products, to monitoring ever-changing dental insurance plans, to simply overseeing the fundamental day-to-day operations. But there is one area of your practice that can be streamlined to significantly improve your cash flow, minimize delinquencies, and optimize fiscal operations. Your accounts receivable and collections can be economically and efficiently managed by a savvy combination of internal efforts and the partnership of a third-party resource.

  4. New train of thought in troubleshooting of modern automobiles

    NASA Astrophysics Data System (ADS)

    Chen, Zhaojun

    2018-03-01

    With the rapid development of the social economy in our country, cars have become more and more widespread. To ensure the normal running of a car, proper maintenance and safeguard measures are very important. To effectively enhance the quality of vehicle maintenance, technicians must be able to accurately determine a vehicle fault in the shortest possible time. This article focuses on new approaches to modern vehicle troubleshooting, presenting related research and analysis in the hope of providing a valuable reference for maintenance staff.

  5. BioTapestry now provides a web application and improved drawing and layout tools

    PubMed Central

    Paquette, Suzanne M.; Leinonen, Kalle; Longabaugh, William J.R.

    2016-01-01

    Gene regulatory networks (GRNs) control embryonic development, and to understand this process in depth, researchers need to have a detailed understanding of both the network architecture and its dynamic evolution over time and space. Interactive visualization tools better enable researchers to conceptualize, understand, and share GRN models. BioTapestry is an established application designed to fill this role, and recent enhancements released in Versions 6 and 7 have targeted two major facets of the program. First, we introduced significant improvements for network drawing and automatic layout that have now made it much easier for the user to create larger, more organized network drawings. Second, we revised the program architecture so it could continue to support the current Java desktop Editor program, while introducing a new BioTapestry GRN Viewer that runs as a JavaScript web application in a browser. We have deployed a number of GRN models using this new web application. These improvements will ensure that BioTapestry remains viable as a research tool in the face of the continuing evolution of web technologies, and as our understanding of GRN models grows. PMID:27134726

  6. BioTapestry now provides a web application and improved drawing and layout tools.

    PubMed

    Paquette, Suzanne M; Leinonen, Kalle; Longabaugh, William J R

    2016-01-01

    Gene regulatory networks (GRNs) control embryonic development, and to understand this process in depth, researchers need to have a detailed understanding of both the network architecture and its dynamic evolution over time and space. Interactive visualization tools better enable researchers to conceptualize, understand, and share GRN models. BioTapestry is an established application designed to fill this role, and recent enhancements released in Versions 6 and 7 have targeted two major facets of the program. First, we introduced significant improvements for network drawing and automatic layout that have now made it much easier for the user to create larger, more organized network drawings. Second, we revised the program architecture so it could continue to support the current Java desktop Editor program, while introducing a new BioTapestry GRN Viewer that runs as a JavaScript web application in a browser. We have deployed a number of GRN models using this new web application. These improvements will ensure that BioTapestry remains viable as a research tool in the face of the continuing evolution of web technologies, and as our understanding of GRN models grows.

  7. Parasites, proteomes and systems: has Descartes' clock run out of time?

    PubMed

    Wastling, J M; Armstrong, S D; Krishna, R; Xia, D

    2012-08-01

    Systems biology aims to integrate multiple biological data types such as genomics, transcriptomics and proteomics across different levels of structure and scale; it represents an emerging paradigm in the scientific process which challenges the reductionism that has dominated biomedical research for hundreds of years. Systems biology will nevertheless only be successful if the technologies on which it is based are able to deliver the required type and quality of data. In this review we discuss how well positioned is proteomics to deliver the data necessary to support meaningful systems modelling in parasite biology. We summarise the current state of identification proteomics in parasites, but argue that a new generation of quantitative proteomics data is now needed to underpin effective systems modelling. We discuss the challenges faced to acquire more complete knowledge of protein post-translational modifications, protein turnover and protein-protein interactions in parasites. Finally we highlight the central role of proteome-informatics in ensuring that proteomics data is readily accessible to the user-community and can be translated and integrated with other relevant data types.

  8. Parasites, proteomes and systems: has Descartes’ clock run out of time?

    PubMed Central

    WASTLING, J. M.; ARMSTRONG, S. D.; KRISHNA, R.; XIA, D.

    2012-01-01

    SUMMARY Systems biology aims to integrate multiple biological data types such as genomics, transcriptomics and proteomics across different levels of structure and scale; it represents an emerging paradigm in the scientific process which challenges the reductionism that has dominated biomedical research for hundreds of years. Systems biology will nevertheless only be successful if the technologies on which it is based are able to deliver the required type and quality of data. In this review we discuss how well positioned is proteomics to deliver the data necessary to support meaningful systems modelling in parasite biology. We summarise the current state of identification proteomics in parasites, but argue that a new generation of quantitative proteomics data is now needed to underpin effective systems modelling. We discuss the challenges faced to acquire more complete knowledge of protein post-translational modifications, protein turnover and protein-protein interactions in parasites. Finally we highlight the central role of proteome-informatics in ensuring that proteomics data is readily accessible to the user-community and can be translated and integrated with other relevant data types. PMID:22828391

  9. The probability estimation of the electronic lesson implementation taking into account software reliability

    NASA Astrophysics Data System (ADS)

    Gurov, V. V.

    2017-01-01

    Software tools for educational purposes, such as e-lessons and computer-based testing systems, have a number of distinctive features from the standpoint of reliability. Chief among them are the need to ensure a sufficiently high probability of faultless operation for a specified time, and the impossibility of rapid recovery by replacing the tool with a similar running program during class. The article considers the peculiarities of evaluating software reliability in contrast to assessments of hardware reliability, and states the basic reliability requirements for software used to conduct practical and laboratory classes in the form of computer-based training programs. A mathematical tool based on Markov chains is presented that determines the degree of debugging of a training program for use in the educational process from the interaction graph of its software modules.
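
    The abstract does not give the authors' actual model, so the following is only a rough Python sketch of one way a module-interaction graph could be treated as an absorbing Markov chain to estimate the probability of a fault-free run; the module names, transition probabilities and failure rates are invented.

        # Illustrative only (not the paper's model): treat the e-lesson's module-interaction
        # graph as an absorbing Markov chain and compute the probability that a run reaches
        # the "finish" state before the "failure" state. All numbers are invented.
        import numpy as np

        modules = ["menu", "theory", "test"]          # transient states (hypothetical)

        # Q[i, j]: probability of moving from module i to module j in one step.
        Q = np.array([
            [0.0, 0.6, 0.3],
            [0.1, 0.0, 0.8],
            [0.2, 0.1, 0.0],
        ])
        # R[i, k]: probability of moving from module i straight to an absorbing state,
        # columns ordered as [finish, failure]; each row of Q and R together sums to 1.
        R = np.array([
            [0.08, 0.02],
            [0.05, 0.05],
            [0.67, 0.03],
        ])

        # Fundamental matrix of the absorbing chain: expected visits to each transient state.
        N = np.linalg.inv(np.eye(len(modules)) - Q)
        B = N @ R   # B[i, k]: probability of ending in absorbing state k when starting in module i

        start = modules.index("menu")
        print(f"P(lesson completes without failure | start at menu) = {B[start, 0]:.3f}")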

  10. Space-Shuttle Emulator Software

    NASA Technical Reports Server (NTRS)

    Arnold, Scott; Askew, Bill; Barry, Matthew R.; Leigh, Agnes; Mermelstein, Scott; Owens, James; Payne, Dan; Pemble, Jim; Sollinger, John; Thompson, Hiram

    2007-01-01

    A package of software has been developed to execute a raw binary image of the space shuttle flight software for simulation of the computational effects of operation of space shuttle avionics. This software can be run on inexpensive computer workstations. Heretofore, it was necessary to use real flight computers to perform such tests and simulations. The package includes a program that emulates the space shuttle orbiter general-purpose computer [consisting of a central processing unit (CPU), input/output processor (IOP), master sequence controller, and bus-control elements]; an emulator of the orbiter display electronics unit and models of the associated cathode-ray tubes, keyboards, and switch controls; computational models of the data-bus network; computational models of the multiplexer-demultiplexer components; an emulation of the pulse-code modulation master unit; an emulation of the payload data interleaver; a model of the master timing unit; a model of the mass memory unit; and a software component that ensures compatibility of telemetry and command services between the simulated space shuttle avionics and a mission control center. The software package is portable to several host platforms.

  11. Inventory Control System by Using Vendor Managed Inventory (VMI)

    NASA Astrophysics Data System (ADS)

    Sabila, Alzena Dona; Mustafid; Suryono

    2018-02-01

    An inventory control system plays a strategic role in a business's inventory operations. Conventional inventory management often leads to stock-outs and excess goods at the retail level. This study aims to build an inventory control system that can keep goods availability stable at the retail level. Implementing the Vendor Managed Inventory (VMI) method in the inventory control system gives the supplier transparent access to sales and inventory data at the retailer level. Inventory control is performed by calculating the safety stock and reorder point of goods from the sales data received by the system. Rule-based reasoning is built into the system to facilitate monitoring of inventory status information, thereby supporting timely inventory updates. SMS technology is also used as an easy-to-use medium for collecting sales data in real time. The results of this study indicate that inventory control using VMI ensures the availability of goods of ± 70% and can reduce the accumulation of goods by ± 30% at the retail level.
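
    The abstract does not give the formulas the system applies to the sales data; as an illustration only, the classical safety-stock and reorder-point calculation that a VMI system of this kind could use is sketched below in Python. The demand figures, lead time and service level are invented.

        # Illustrative only: classical safety-stock / reorder-point calculation of the kind
        # a VMI system could apply to sales data reported by retailers. Numbers are invented.
        import math
        from statistics import mean, stdev

        daily_sales = [42, 38, 55, 47, 60, 41, 52, 49, 45, 58]   # units/day (hypothetical)
        lead_time_days = 3                                        # supplier lead time (hypothetical)
        z = 1.65                                                  # z-score for roughly a 95% service level

        avg_demand = mean(daily_sales)
        demand_sd = stdev(daily_sales)

        # Safety stock buffers demand variability over the replenishment lead time.
        safety_stock = z * demand_sd * math.sqrt(lead_time_days)
        # Reorder point: expected demand during the lead time plus the safety buffer.
        reorder_point = avg_demand * lead_time_days + safety_stock

        print(f"safety stock  = {safety_stock:.0f} units (approx.)")
        print(f"reorder point = {reorder_point:.0f} units (approx.)")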

  12. VELoCiRaPTORS.

    NASA Astrophysics Data System (ADS)

    Lundgren, J.; Esham, B.; Padalino, S. J.; Sangster, T. C.; Glebov, V.

    2007-11-01

    The Venting and Exhausting of Low Level Air Contaminants in the Rapid Pneumatic Transport of Radioactive Samples (VELoCiRaPTORS) system is constructed to transport radioactive materials quickly and safely at the NIF. A radioactive sample will be placed inside a carrier that is transported via an airflow system produced by controlled differential pressure. Midway through the transportation process, the carrier will be stopped and vented by a powered exhaust blower which will remove radioactive gases within the transport carrier. A Geiger counter will monitor the activity of the exhaust gas to ensure that it is below acceptable levels. If the radiation level is sufficient, the carrier will pass through the remainder of the system, pneumatically braking at the counting station. The complete design will run manually or automatically with control software. Tests were performed using an inactive carrier to determine possible transportation problems. The system underwent many consecutive trials without failure. VELoCiRaPTORS is a prototype of a system that could be installed at both the Laboratory for Laser Energetics at the University of Rochester and the National Ignition Facility at LLNL.

  13. Parallel Key Frame Extraction for Surveillance Video Service in a Smart City.

    PubMed

    Zheng, Ran; Yao, Chuanwei; Jin, Hai; Zhu, Lei; Zhang, Qin; Deng, Wei

    2015-01-01

    Surveillance video service (SVS) is one of the most important services provided in a smart city, and efficient surveillance video analysis techniques are essential for making full use of it. Key frame extraction is a simple yet effective technique to achieve this goal. In surveillance video applications, key frames are typically used to summarize important video content, so it is essential to extract them accurately and efficiently. A novel approach is proposed to extract key frames from traffic surveillance videos on GPUs (graphics processing units) to ensure high efficiency and accuracy. For determining key frames, motion is a particularly salient feature for presenting actions or events, especially in surveillance videos. The motion feature is extracted on the GPU to reduce running time; it is then smoothed to reduce noise, and the frames with local maxima of motion information are selected as the final key frames. The experimental results show that this approach extracts key frames more accurately and efficiently than several other methods.
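
    A CPU-only sketch of the selection logic described above is shown below; the paper's GPU-accelerated motion feature is replaced here by a simple mean absolute frame difference, and the smoothing window is an arbitrary choice.

        # CPU-only sketch of the key-frame selection logic (the GPU motion feature is
        # approximated by a mean absolute frame difference). `frames` is assumed to be a
        # NumPy array of grayscale frames with shape (T, H, W).
        import numpy as np

        def extract_key_frames(frames: np.ndarray, smooth_win: int = 5) -> list[int]:
            # Motion proxy: mean absolute difference between consecutive frames.
            motion = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2))
            # Smooth with a moving average to suppress noise.
            kernel = np.ones(smooth_win) / smooth_win
            smoothed = np.convolve(motion, kernel, mode="same")
            # Key frames: indices where the smoothed motion curve has a local maximum.
            return [i + 1 for i in range(1, len(smoothed) - 1)
                    if smoothed[i] > smoothed[i - 1] and smoothed[i] > smoothed[i + 1]]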

  14. The continued development of the Spallation Neutron Source external antenna H- ion sourcea)

    NASA Astrophysics Data System (ADS)

    Welton, R. F.; Carmichael, J.; Desai, N. J.; Fuga, R.; Goulding, R. H.; Han, B.; Kang, Y.; Lee, S. W.; Murray, S. N.; Pennisi, T.; Potter, K. G.; Santana, M.; Stockli, M. P.

    2010-02-01

    The U.S. Spallation Neutron Source (SNS) is an accelerator-based, pulsed neutron-scattering facility, currently in the process of ramping up neutron production. In order to ensure that the SNS will meet its operational commitments as well as provide for future facility upgrades with high reliability, we are developing a rf-driven, H- ion source based on a water-cooled, ceramic aluminum nitride (AlN) plasma chamber. To date, early versions of this source have delivered up to 42 mA to the SNS front end and unanalyzed beam currents up to ˜100 mA (60 Hz, 1 ms) to the ion source test stand. This source was operated on the SNS accelerator from February to April 2009 and produced ˜35 mA (beam current required by the ramp up plan) with availability of ˜97%. During this run several ion source failures identified reliability issues, which must be addressed before the source re-enters production: plasma ignition, antenna lifetime, magnet cooling, and cooling jacket integrity. This report discusses these issues, details proposed engineering solutions, and notes progress to date.

  15. Towards an Australian ensemble streamflow forecasting system for flood prediction and water management

    NASA Astrophysics Data System (ADS)

    Bennett, J.; David, R. E.; Wang, Q.; Li, M.; Shrestha, D. L.

    2016-12-01

    Flood forecasting in Australia has historically relied on deterministic models run only when floods are imminent, with considerable forecaster input and interpretation. These now co-exist with a continually available 7-day streamflow forecasting service (also deterministic) aimed at operational water management applications such as environmental flow releases. The 7-day service is not optimised for flood prediction. We describe progress on developing an ensemble streamflow forecasting system that is suitable for both flood prediction and water management applications. Precipitation uncertainty is handled by post-processing Numerical Weather Prediction (NWP) output with a Bayesian rainfall post-processor (RPP), which corrects biases, downscales the NWP output, and produces reliable ensemble spread. The ensemble precipitation forecasts are used to force a semi-distributed conceptual rainfall-runoff model. Because precipitation uncertainty alone is insufficient to reliably describe streamflow forecast uncertainty, particularly at shorter lead times, we characterise hydrological prediction uncertainty separately with a 4-stage error model. The error model relies on data transformation to ensure residuals are homoscedastic and symmetrically distributed; to keep streamflow forecasts accurate and reliable, the residuals are modelled using a mixture-Gaussian distribution with distinct parameters for the rising and falling limbs of the forecast hydrograph. In a case study of the Murray River in south-eastern Australia, we show that ensemble predictions of floods generally have lower errors than deterministic forecasting methods. We also discuss some of the challenges in operationalising short-term ensemble streamflow forecasts in Australia, including meeting the need for accurate predictions across all flow ranges and comparing forecasts generated by event and continuous hydrological models.
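
    The 4-stage error model itself is not spelled out in the abstract. Purely to illustrate the general idea (transform the flows so that residuals become roughly homoscedastic, then fit separate Gaussian components to rising- and falling-limb residuals), a minimal Python sketch under those assumptions follows; the log1p transformation and all names are placeholders, not the operational code.

        # Illustration only: transform flows so residuals are roughly homoscedastic, then
        # fit separate Gaussian residual models for the rising and falling limbs of the
        # hydrograph. A simplification of the 4-stage error model described above.
        import numpy as np

        def fit_limb_residual_model(forecast: np.ndarray, observed: np.ndarray) -> dict:
            # log1p stands in for whatever transformation is used operationally.
            resid = np.log1p(observed) - np.log1p(forecast)
            rising = np.diff(forecast, prepend=forecast[0]) >= 0   # rising vs falling limb
            params = {}
            # Assumes both limbs are represented in the record.
            for name, mask in (("rising", rising), ("falling", ~rising)):
                params[name] = (resid[mask].mean(), resid[mask].std(ddof=1))
            return params   # {'rising': (mu, sigma), 'falling': (mu, sigma)}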

  16. Fate of virginiamycin through the fuel ethanol production process.

    PubMed

    Bischoff, Kenneth M; Zhang, Yanhong; Rich, Joseph O

    2016-05-01

    Antibiotics are frequently used to prevent and treat bacterial contamination of commercial fuel ethanol fermentations, but there is concern that antibiotic residues may persist in the distillers grains coproducts. A study to evaluate the fate of virginiamycin during the ethanol production process was conducted in the pilot plant facilities at the National Corn to Ethanol Research Center, Edwardsville, IL. Three 15,000-liter fermentor runs were performed: one with no antibiotic (F1), one dosed with 2 parts per million (ppm) of a commercial virginiamycin product (F2), and one dosed at 20 ppm of virginiamycin product (F3). Fermentor samples, distillers dried grains with solubles (DDGS), and process intermediates (whole stillage, thin stillage, syrup, and wet cake) were collected from each run and analyzed for virginiamycin M and virginiamycin S using a liquid chromatography-mass spectrometry method. Virginiamycin M was detected in all process intermediates of the F3 run. On a dry-weight basis, virginiamycin M concentrations decreased approximately 97 %, from 41 μg/g in the fermentor to 1.4 μg/g in the DDGS. Using a disc plate bioassay, antibiotic activity was detected in DDGS from both the F2 and F3 runs, with values of 0.69 μg virginiamycin equivalent/g sample and 8.9 μg/g, respectively. No antibiotic activity (<0.6 μg/g) was detected in any of the F1 samples or in the fermentor and process intermediate samples from the F2 run. These results demonstrate that low concentrations of biologically active antibiotic may persist in distillers grains coproducts produced from fermentations treated with virginiamycin.

  17. Cram

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamblin, T.

    2014-08-29

    Large-scale systems like Sequoia allow running small numbers of very large (1M+ process) jobs, but their resource managers and schedulers do not allow large numbers of small (4, 8, 16, etc.) process jobs to run efficiently. Cram is a tool that allows users to launch many small MPI jobs within one large partition, and to overcome the limitations of current resource management software for large ensembles of jobs.

  18. IBIS integrated biological imaging system: electron micrograph image-processing software running on Unix workstations.

    PubMed

    Flifla, M J; Garreau, M; Rolland, J P; Coatrieux, J L; Thomas, D

    1992-12-01

    'IBIS' is a set of computer programs concerned with the processing of electron micrographs, with particular emphasis on the requirements for structural analyses of biological macromolecules. The software is written in FORTRAN 77 and runs on Unix workstations. A description of the various functions and the implementation mode is given. Some examples illustrate the user interface.

  19. Internationalization of Higher Education in China: Chinese-Foreign Cooperation in Running Schools and the Introduction of High-Quality Foreign Educational Resources

    ERIC Educational Resources Information Center

    Tan, Zhen

    2009-01-01

    With the acceleration of the internationalization process of higher education in China, the Chinese-foreign cooperation in running schools (CFCRS) has been developing at an expeditious pace nowadays. It positively enhances the internationalization process of Chinese higher education and greatly contributes to providing the society with talents.…

  20. DIORAMA Communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galassi, Mark C.

    Diorama is written as a collection of modules that can run in separate threads or in separate processes. This defines a clear interface between the modules and also allows concurrent processing of different parts of the pipeline. The pipeline is determined by a description in a scenario file [Norman and Tornga, 2012, Tornga and Norman, 2014]. The scenario manager parses the XML scenario and sets up the sequence of modules which will generate an event, propagate the signal to a set of sensors, and then run processing modules on the results provided by those sensor simulations. During a run a variety of “observer” and “processor” modules can be invoked to do interim analysis of results. Observers do not modify the simulation results, while processors may affect the final result. At the end of a run results are collated and final reports are put out. A detailed description of the scenario file and how it puts together a simulation are given in [Tornga and Norman, 2014]. The processing pipeline and how to program it with the Diorama API is described in Tornga et al. [2015] and Tornga and Wakeford [2015]. In this report I describe the communications infrastructure that is used.

  1. File Usage Analysis and Resource Usage Prediction: a Measurement-Based Study. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.-S.

    1987-01-01

    A probabilistic scheme was developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts the CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual values; the coefficient of correlation between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small: some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
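
    As a rough illustration of the flavour of such a scheme (not the thesis' exact formulation), the sketch below labels past executions with resource regions, builds a first-order transition model over those labels, and predicts the expected CPU time of the next run; the region centroids and execution history are invented.

        # Rough sketch only: past executions of a program are labelled with resource regions
        # (cluster IDs); a first-order transition model over those labels predicts the
        # expected CPU time of the next execution as a weighted average of region centroids.
        from collections import Counter

        region_cpu_centroid = {0: 0.4, 1: 2.5, 2: 11.0}   # seconds of CPU per region (hypothetical)
        past_regions = [0, 0, 1, 0, 2, 1, 1, 0, 1]         # regions of past runs, in order

        def predict_next_cpu(history: list[int], centroids: dict[int, float]) -> float:
            last = history[-1]
            # Count transitions observed out of the most recent region.
            transitions = Counter(b for a, b in zip(history, history[1:]) if a == last)
            total = sum(transitions.values())
            if total == 0:                      # no transitions seen from this region yet
                return centroids[last]
            return sum(centroids[r] * n / total for r, n in transitions.items())

        print(f"predicted CPU time of next run: {predict_next_cpu(past_regions, region_cpu_centroid):.2f} s")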

  2. A Machine Learning Method for Power Prediction on the Mobile Devices.

    PubMed

    Chen, Da-Ren; Chen, You-Shyang; Chen, Lin-Chih; Hsu, Ming-Yang; Chiang, Kai-Feng

    2015-10-01

    Energy profiling and estimation have been popular areas of research in multicore mobile architectures. While short sequences of system calls have been recognized by machine learning as pattern descriptions for anomaly detection, the power consumption of running processes with respect to system-call patterns is not well studied. In this paper, we propose a fuzzy neural network (FNN) for training and analyzing process execution behaviour with respect to series of system calls, their parameters and their power consumption. On the basis of the patterns of a series of system calls, we develop a power estimation daemon (PED) to analyze and predict the energy consumption of the running process. In the initial stage, PED categorizes sequences of system calls into functional groups and predicts their energy consumption with the FNN. In the operational stage, PED is applied to identify the predefined sequences of system calls invoked by running processes and estimates their energy consumption.

  3. The CMIP5 Model Documentation Questionnaire: Development of a Metadata Retrieval System for the METAFOR Common Information Model

    NASA Astrophysics Data System (ADS)

    Pascoe, Charlotte; Lawrence, Bryan; Moine, Marie-Pierre; Ford, Rupert; Devine, Gerry

    2010-05-01

    The EU METAFOR Project (http://metaforclimate.eu) has created a web-based model documentation questionnaire to collect metadata from the modelling groups that are running simulations in support of the Coupled Model Intercomparison Project - 5 (CMIP5). The CMIP5 model documentation questionnaire will retrieve information about the details of the models used, how the simulations were carried out, how the simulations conformed to the CMIP5 experiment requirements and details of the hardware used to perform the simulations. The metadata collected by the CMIP5 questionnaire will allow CMIP5 data to be compared in a scientifically meaningful way. This paper describes the life-cycle of the CMIP5 questionnaire development, which starts with relatively unstructured input from domain specialists and ends with formal XML documents that comply with the METAFOR Common Information Model (CIM). Each development step is associated with a specific tool. (1) Mind maps are used to capture information requirements from domain experts and build a controlled vocabulary, (2) a Python parser processes the XML files generated by the mind maps, (3) Django (Python) is used to generate the dynamic structure and content of the web-based questionnaire from the processed XML and the METAFOR CIM, (4) Python parsers ensure that information entered into the CMIP5 questionnaire is output as CIM-compliant XML, (5) CIM-compliant output allows automatic information capture tools to harvest questionnaire content into databases such as the Earth System Grid (ESG) metadata catalogue. This paper will focus on how Django (Python) and XML input files are used to generate the structure and content of the CMIP5 questionnaire. It will also address how the choice of development tools listed above provided a framework that enabled working scientists (who would ordinarily never interact with UML and XML) to be part of the iterative development process and ensure that the CMIP5 model documentation questionnaire reflects what scientists want to know about the models. Keywords: metadata, CMIP5, automatic information capture, tool development
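
    Step (2) of the pipeline, turning the mind-map XML export into a controlled vocabulary, might look roughly like the sketch below; the element and attribute names are hypothetical and do not reflect the actual METAFOR schema.

        # Hypothetical sketch of pipeline step (2): pulling controlled-vocabulary terms out of
        # a mind-map XML export. Element/attribute names are invented, not the real METAFOR
        # schema; the real parser also carries CIM typing information.
        import xml.etree.ElementTree as ET

        def load_vocabulary(path: str) -> dict[str, list[str]]:
            tree = ET.parse(path)
            vocab: dict[str, list[str]] = {}
            # Assume each <component name="..."> node owns <parameter name="..."> children.
            for component in tree.getroot().iter("component"):
                cname = component.get("name", "unnamed")
                vocab[cname] = [p.get("name") for p in component.iter("parameter") if p.get("name")]
            return vocab

        # vocab = load_vocabulary("atmosphere_mindmap.xml")   # e.g. {'Radiation': ['scheme', ...], ...}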

  4. Verification Testing: Meet User Needs Figure of Merit

    NASA Technical Reports Server (NTRS)

    Kelly, Bryan W.; Welch, Bryan W.

    2017-01-01

    Verification is the process through which Modeling and Simulation (M&S) software goes to ensure that it has been rigorously tested and debugged for its intended use. Validation confirms that the software accurately models and represents the real-world system. Credibility assesses the development and testing effort the software has gone through, as well as how accurate and reliable the test results are. Together, these three components form Verification, Validation, and Credibility (VV&C), the process by which all NASA modeling software is to be tested to ensure that it is ready for implementation. NASA created this process following the CAIB (Columbia Accident Investigation Board) report, which sought to understand why the Columbia space shuttle failed during reentry. The report's conclusion was that the accident was fully avoidable; among other issues, the data necessary to make an informed decision was not available, and the result was the complete loss of the shuttle and crew. To mitigate this problem, NASA issued its Standard for Models and Simulations, currently in version NASA-STD-7009A, detailing recommendations, requirements and rationale for the different components of VV&C, with the intention that people receiving M&S software would clearly understand, and have data from, the past development effort. This in turn allows people who have not worked with the M&S software before to move forward with greater confidence and efficiency in their work. This particular project performs Verification on several MATLAB (Registered Trademark) (The MathWorks, Inc.) scripts that will later be implemented behind a website interface. It documents the limits of operation, the units and significance, and the expected datatype and format of the inputs and outputs of each script, to prevent the code from attempting incorrect or impossible calculations. Additionally, the project reviews the code generally and notes inconsistencies, redundancies, and other aspects that may become problematic or slow down the code's run time. Scripts lacking documentation will also be commented and cataloged.
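
    As a small illustration of the kind of checks described (limits of operation, units, expected datatype and format of inputs), a Python-style sketch follows; the actual project targets MATLAB scripts, and the parameter names and limits here are invented.

        # Hypothetical illustration of the input checks described above (limits of operation,
        # expected datatype/format). Parameter names and limits are invented; the actual
        # project verifies MATLAB scripts rather than Python functions.
        def check_inputs(frequency_hz: float, elevation_deg: float) -> None:
            if not isinstance(frequency_hz, (int, float)):
                raise TypeError("frequency_hz must be numeric (Hz)")
            if not 1e6 <= frequency_hz <= 40e9:          # documented limit of operation (assumed)
                raise ValueError("frequency_hz outside the supported 1 MHz - 40 GHz range")
            if not isinstance(elevation_deg, (int, float)):
                raise TypeError("elevation_deg must be numeric (degrees)")
            if not 0.0 <= elevation_deg <= 90.0:
                raise ValueError("elevation_deg must lie between 0 and 90 degrees")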

  5. Enrichment rescues contextual discrimination deficit associated with immediate shock.

    PubMed

    Clemenson, Gregory D; Lee, Star W; Deng, Wei; Barrera, Vanessa R; Iwamoto, Kei S; Fanselow, Michael S; Gage, Fred H

    2015-03-01

    Adult animals continue to modify their behavior throughout life, a process that is highly influenced by past experiences. To shape behavior, specific mechanisms of neural plasticity to learn, remember, and recall information are required. One of the most robust examples of adult plasticity in the brain occurs in the dentate gyrus (DG) of the hippocampus, through the process of adult neurogenesis. Adult neurogenesis is strongly upregulated by external factors such as voluntary wheel running (RUN) and environmental enrichment (EE); however, the functional differences between these two factors remain unclear. Although both manipulations result in increased neurogenesis, RUN dramatically increases the proliferation of newborn cells and EE promotes their survival. We hypothesize that the method by which these newborn neurons are induced influences their functional role. Furthermore, we examine how EE-induced neurons may be primed to encode and recognize features of novel environments due to their previous enrichment experience. Here, we gave mice a challenging contextual fear-conditioning (FC) procedure to tease out the behavioral differences between RUN-induced neurogenesis and EE-induced neurogenesis. Despite the robust increases in neurogenesis seen in the RUN mice, we found that only EE mice were able to discriminate between similar contexts in this task, indicating that EE mice might use a different cognitive strategy when processing contextual information. Furthermore, we showed that this improvement was dependent on EE-induced neurogenesis, suggesting a fundamental functional difference between RUN-induced neurogenesis and EE-induced neurogenesis. © 2014 Wiley Periodicals, Inc.

  6. Enrichment Rescues Contextual Discrimination Deficit Associated With Immediate Shock

    PubMed Central

    Clemenson, Gregory D.; Lee, Star W.; Deng, Wei; Barrera, Vanessa R.; Iwamoto, Kei S.; Fanselow, Michael S.; Gage, Fred H.

    2015-01-01

    Adult animals continue to modify their behavior throughout life, a process that is highly influenced by past experiences. To shape behavior, specific mechanisms of neural plasticity to learn, remember, and recall information are required. One of the most robust examples of adult plasticity in the brain occurs in the dentate gyrus (DG) of the hippocampus, through the process of adult neurogenesis. Adult neurogenesis is strongly upregulated by external factors such as voluntary wheel running (RUN) and environmental enrichment (EE); however, the functional differences between these two factors remain unclear. Although both manipulations result in increased neurogenesis, RUN dramatically increases the proliferation of newborn cells and EE promotes their survival. We hypothesize that the method by which these newborn neurons are induced influences their functional role. Furthermore, we examine how EE-induced neurons may be primed to encode and recognize features of novel environments due to their previous enrichment experience. Here, we gave mice a challenging contextual fear-conditioning (FC) procedure to tease out the behavioral differences between RUN-induced neurogenesis and EE-induced neurogenesis. Despite the robust increases in neurogenesis seen in the RUN mice, we found that only EE mice were able to discriminate between similar contexts in this task, indicating that EE mice might use a different cognitive strategy when processing contextual information. Furthermore, we showed that this improvement was dependent on EE-induced neurogenesis, suggesting a fundamental functional difference between RUN-induced neurogenesis and EE-induced neurogenesis. PMID:25330953

  7. Percentiles of the run-length distribution of the Exponentially Weighted Moving Average (EWMA) median chart

    NASA Astrophysics Data System (ADS)

    Tan, K. L.; Chong, Z. L.; Khoo, M. B. C.; Teoh, W. L.; Teh, S. Y.

    2017-09-01

    Quality control is crucial in a wide variety of fields, as it can help to satisfy customers' needs and requirements by enhancing and improving products and services to a superior quality level. The EWMA median chart was proposed as a useful alternative to the EWMA X̄ chart because the median-type chart is robust against contamination, outliers or small deviations from the normality assumption compared to the traditional X̄-type chart. To provide a complete understanding of the run-length distribution, its percentiles should be investigated rather than depending solely on the average run length (ARL) performance measure. Interpretation based on the ARL alone can be misleading, because the skewness and shape of the run-length distribution change with the magnitude of the process mean shift, varying from almost symmetric when the mean shift is large to highly right-skewed when the process is in-control (IC) or only slightly out-of-control (OOC). Before the percentiles of the run-length distribution are computed, optimal parameters of the EWMA median chart are obtained by minimizing the OOC ARL while retaining the IC ARL at a desired value.
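
    The run-length percentiles can be read off the run-length distribution; the brute-force Monte Carlo sketch below illustrates that idea for an EWMA chart of subgroup medians. The chart parameters are arbitrary illustrative values, not the optimal parameters derived in the paper.

        # Monte Carlo sketch of run-length percentiles for an EWMA chart of subgroup medians.
        # Parameters (lambda, L, subgroup size, shift) are illustrative only.
        import numpy as np

        rng = np.random.default_rng(1)
        lam, L, n = 0.1, 2.7, 5                    # smoothing constant, limit width, subgroup size

        # Standard error of the subgroup median under in-control N(0, 1) data (estimated).
        sigma_med = np.median(rng.standard_normal((200_000, n)), axis=1).std()
        h = L * sigma_med * np.sqrt(lam / (2 - lam))   # asymptotic EWMA control limit

        def run_length(shift: float = 0.0) -> int:
            z, t = 0.0, 0
            while True:
                t += 1
                m = np.median(rng.standard_normal(n) + shift)
                z = lam * m + (1 - lam) * z
                if abs(z) > h:                     # first out-of-control signal
                    return t

        rl = np.array([run_length(shift=0.5) for _ in range(2_000)])
        print("ARL:", rl.mean(), " 5th/50th/95th percentiles:", np.percentile(rl, [5, 50, 95]))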

  8. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA, the Production and Distributed Analysis workload management system, was developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. PanDA has recently been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available to bioinformaticians worldwide. Analysing genome sequencing data with the popular PALEOMIX software pipeline can take a month even on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to a distributed computing environment powered by PanDA. To run the pipeline we split the input files into chunks, which are processed separately on different nodes as independent PALEOMIX inputs, and finally merge the output files; this is very similar to how ATLAS processes and simulates its data. We dramatically decreased the total wall time thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce the payload execution time for mammoth DNA samples from weeks to days.
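
    The split-process-merge step described above is generic; a minimal sketch of the splitting and merging is given below. Whatever actually runs PALEOMIX on the worker nodes via PanDA sits between the two functions and is not shown; file names and the chunk size are placeholders.

        # Generic sketch of the split-process-merge pattern described above; not a PanDA or
        # PALEOMIX interface. Chunk size and file names are placeholders.
        from pathlib import Path

        def split_into_chunks(src: Path, out_dir: Path, lines_per_chunk: int = 4_000_000) -> list[Path]:
            out_dir.mkdir(parents=True, exist_ok=True)
            chunks, buf, idx = [], [], 0
            with src.open() as fh:
                for line in fh:
                    buf.append(line)
                    if len(buf) >= lines_per_chunk:       # multiple of 4 keeps FASTQ records intact
                        chunks.append(_flush(buf, out_dir, idx))
                        buf, idx = [], idx + 1
            if buf:
                chunks.append(_flush(buf, out_dir, idx))
            return chunks

        def _flush(buf: list[str], out_dir: Path, idx: int) -> Path:
            p = out_dir / f"chunk_{idx:04d}.txt"
            p.write_text("".join(buf))
            return p

        def merge_outputs(results: list[Path], merged: Path) -> None:
            # Concatenate per-chunk outputs into the final result file.
            with merged.open("w") as out:
                for r in results:
                    out.write(r.read_text())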

  9. A Longitudinal Analysis of the Influence of a Peer Run Warm Line Phone Service on Psychiatric Recovery.

    PubMed

    Dalgin, Rebecca Spirito; Dalgin, M Halim; Metzger, Scott J

    2018-05-01

    This article focuses on the impact of a peer run warm line as part of the psychiatric recovery process. It utilized data including the Recovery Assessment Scale, community integration measures and crisis service usage. Longitudinal statistical analysis was completed on 48 sets of data from 2011, 2012, and 2013. Although no statistically significant differences were observed for the RAS score, community integration data showed increases in visits to primary care doctors, leisure/recreation activities and socialization with others. This study highlights the complexity of psychiatric recovery and that nonclinical peer services like peer run warm lines may be critical to the process.

  10. DOE Centers of Excellence Performance Portability Meeting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neely, J. R.

    2016-04-21

    Performance portability is a phrase often used, but not well understood. The DOE is deploying systems at all of the major facilities across ASCR and ASC that are forcing application developers to confront head-on the challenges of running applications across these diverse systems. With GPU-based systems at the OLCF and LLNL, and Phi-based systems landing at NERSC, ACES (LANL/SNL), and the ALCF – the issue of performance portability is confronting the DOE mission like never before. A new best practice in the DOE is to include “Centers of Excellence” with each major procurement, with a goal of focusing efforts on preparing key applications to be ready for the systems coming to each site, and engaging the vendors directly in a “shared fate” approach to ensuring success. While each COE is necessarily focused on a particular deployment, applications almost invariably must be able to run effectively across the entire DOE HPC ecosystem. This tension between optimizing performance for a particular platform, while still being able to run with acceptable performance wherever the resources are available, is the crux of the challenge we call “performance portability”. This meeting was an opportunity to bring application developers, software providers, and vendors together to discuss this challenge and begin to chart a path forward.

  11. Neural control of enhanced filtering demands in a combined Flanker and Garner conflict task.

    PubMed

    Berron, David; Frühholz, Sascha; Herrmann, Manfred

    2015-01-01

    Several studies demonstrated that visual filtering mechanisms might underlie both conflict resolution of the Flanker conflict and the control of the Garner effect. However, it remains unclear whether the mechanisms involved in the processing of both effects depend on similar filter mechanisms, such that especially the Garner effect is able to modulate filtering needs in the Flanker conflict. In the present experiment twenty-four subjects participated in a combined Garner and Flanker task during two runs of functional magnetic resonance imaging (fMRI) recordings. Behavioral data showed a significant Flanker but no Garner effect. A run-wise analysis, however, revealed a Flanker effect in the Garner filtering condition in the first experimental run, while we found a Flanker effect in the Garner baseline condition in the second experimental run. The fMRI data revealed a fronto-parietal network involved in the processing of both types of effects. Flanker interference was associated with activity in the inferior frontal gyrus, the anterior cingulate cortex, the precuneus as well as the inferior (IPL) and superior parietal lobule (SPL). Garner interference was associated with activation in middle frontal and middle temporal gyrus, the lingual gyrus as well as the IPL and SPL. Interaction analyses between the Garner and the Flanker effect additionally revealed differences between the two experimental runs. In the first experimental run, activity specifically related to the interaction of effects was found in frontal and parietal regions, while in the second run we found activity in the hippocampus, the parahippocampal cortex and the basal ganglia. This shift in activity for the interaction effects might be associated with a task-related learning process to control filtering demands. Especially perceptual learning mechanisms might play a crucial role in the present Flanker and Garner task design and, therefore, increased performance in the second experimental run could be the reason for the lack of behavioral Garner interference on the level of the whole experiment.

  12. Advance Planning Briefing for Industry: Information Dominance for the Full Spectrum Force.

    DTIC Science & Technology

    1997-05-29

    Electronic Order Processing is projected. The procurement will be a FFP ID/IQ award. BRIEFER: LTC Mary Fuller, Product Manager, Army Small Computer Program... will be a Best Value evaluation with a minimum of 2 awards. The procurement is planned to run for 2 years for ordering, Electronic Order Processing is... procurement is planned to run for three years for ordering, Electronic Order Processing is projected. The procurement will be a FFP ID/IQ award.

  13. Bulk Extractor 1.4 User’s Manual

    DTIC Science & Technology

    2013-08-01

    optimistically decompresses data in ZIP, GZIP, RAR, and Microsoft's Hibernation files. This has proven useful, for example, in recovering email...command line. Java 7 or above must be installed on the machine for the Bulk Extractor Viewer to run. Instructions on running bulk_extractor from the... Hibernation File Fragments (decompressed and processed, not carved) Subsection 4.6 winprefetch Windows Prefetch files, file fragments (processed

  14. Role of memory errors in quantum repeaters

    NASA Astrophysics Data System (ADS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.

    2007-03-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.

  15. Durham extremely large telescope adaptive optics simulation platform.

    PubMed

    Basden, Alastair; Butterley, Timothy; Myers, Richard; Wilson, Richard

    2007-03-01

    Adaptive optics systems are essential on all large telescopes for which image quality is important. These are complex systems with many design parameters requiring optimization before good performance can be achieved. The simulation of adaptive optics systems is therefore necessary to categorize the expected performance. We describe an adaptive optics simulation platform, developed at Durham University, which can be used to simulate adaptive optics systems on the largest proposed future extremely large telescopes as well as on current systems. This platform is modular, object oriented, and has the benefit of hardware application acceleration that can be used to improve the simulation performance, essential for ensuring that the run time of a given simulation is acceptable. The simulation platform described here can be highly parallelized using parallelization techniques suited for adaptive optics simulation, while still offering the user complete control while the simulation is running. The results from the simulation of a ground layer adaptive optics system are provided as an example to demonstrate the flexibility of this simulation platform.

  16. Real-time dual-comb spectroscopy with a free-running bidirectionally mode-locked fiber laser

    NASA Astrophysics Data System (ADS)

    Mehravar, S.; Norwood, R. A.; Peyghambarian, N.; Kieu, K.

    2016-06-01

    The dual-comb technique has enabled exciting applications in high-resolution spectroscopy, precision distance measurement, and 3D imaging, offering major advantages over traditional methods. For example, dual-comb spectroscopy provides an orders-of-magnitude improvement in acquisition speed over standard Fourier-transform spectroscopy while still preserving high resolution. Wider adoption of the technique has, however, been hindered by the need for complex and expensive ultrafast laser systems. Here, we present a simple and robust dual-comb system that employs a free-running bidirectionally mode-locked fiber laser operating at telecommunication wavelengths. Two femtosecond frequency combs (with a small difference in repetition rates) are generated from a single laser cavity to ensure mutual coherence and common-noise cancellation. As a result, we have achieved real-time absorption spectroscopy measurements, with accurate frequency referencing and a relatively high signal-to-noise ratio, without the need for complex servo locking.

  17. X-Antenna: A graphical interface for antenna analysis codes

    NASA Technical Reports Server (NTRS)

    Goldstein, B. L.; Newman, E. H.; Shamansky, H. T.

    1995-01-01

    This report serves as the user's manual for the X-Antenna code. X-Antenna is intended to simplify the analysis of antennas by giving the user graphical interfaces in which to enter all relevant antenna and analysis code data. Essentially, X-Antenna creates a Motif interface to the user's antenna analysis codes. A command-file allows new antennas and codes to be added to the application. The menu system and graphical interface screens are created dynamically to conform to the data in the command-file. Antenna data can be saved and retrieved from disk. X-Antenna checks all antenna and code values to ensure they are of the correct type, writes an output file, and runs the appropriate antenna analysis code. Volumetric pattern data may be viewed in 3D space with an external viewer run directly from the application. Currently, X-Antenna includes analysis codes for thin wire antennas (dipoles, loops, and helices), rectangular microstrip antennas, and thin slot antennas.

  18. On the design and development of a miniature ceramic gimbal bearing

    NASA Technical Reports Server (NTRS)

    Hanson, Robert A.; Odwyer, Barry; Gordon, Keith M.; Jarvis, Edward W.

    1990-01-01

    A review is made of a program to develop ceramic gimbal bearings for a miniaturized missile guidance system requiring nonmagnetic properties and higher load capacity than possible with conventional AISI 440C stainless steel bearings. A new gimbal design concept is described which utilizes the compressive strength and nonmagnetic properties of silicon nitride (Si3N4) ceramics for the gimbal bearing. Considerable manufacturing development has occurred in the last 5 years, making ceramic bearings a viable option in the gimbal design phase. A preliminary study into the feasibility of the proposed design is summarized. Finite element analysis of the brittle ceramic bearing components under thermal stress and high acceleration loading was conducted to ensure the components will not fail catastrophically in service. Finite element analysis was also used to optimize the adhesive joint design. Bearing torque tests run at various axial loads indicate that the average running torque of ceramic bearings varies with load similarly to that of conventional steel bearings.

  19. Development of advanced Czochralski growth process to produce low cost 150 kg silicon ingots from a single crucible for technology readiness

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The process development continued, with a total of nine crystal growth runs. One of these was a 150 kg run of 5 crystals of approximately 30 kg each. Several machine and process problems were corrected and the 150 kg run was as successful as previous long runs on CG2000 RC's. The accelerated recharge and growth will be attempted when the development program resumes at full capacity in FY '82. The automation controls (Automatic Grower Light Computer System) were integrated to the seed dip temperature, shoulder, and diameter sensors on the CG2000 RC development grower. Test growths included four crystals, which were grown by the computer/sensor system from seed dip through tail off. This system will be integrated on the Mod CG2000 grower during the next quarter. The analytical task included the completion and preliminary testing of the gas chromatograph portion of the Furnace Atmosphere Analysis System. The system can detect CO concentrations and will be expanded to oxygen and water analysis in FY '82.

  20. Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP)

    NASA Astrophysics Data System (ADS)

    Calafiura, Paolo; Leggett, Charles; Seuster, Rolf; Tsulaia, Vakhtang; Van Gemmeren, Peter

    2015-12-01

    AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write mechanisms, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service) which allows the running of AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP in the diversity of ATLAS event processing workloads on various computing resources: Grid, opportunistic resources and HPC.
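
    AthenaMP itself lives inside the Athena framework; the fork/copy-on-write pattern it relies on can, however, be illustrated in a few lines of plain Python: a large read-only structure is built once in the parent, and forked workers read it without duplicating the physical memory pages. The data and worker function below are invented stand-ins, not AthenaMP code.

        # Illustration of the fork/copy-on-write pattern AthenaMP relies on (not AthenaMP code):
        # build a large read-only structure once, then fork workers that only read it, so the
        # physical pages stay shared until a worker writes to them.
        import multiprocessing as mp

        mp_ctx = mp.get_context("fork")            # fork start method (Linux/Unix only)

        # Large read-only data, e.g. detector geometry/conditions, loaded once in the parent.
        conditions = list(range(1_000_000))

        def process_event(event_id: int) -> int:
            # Worker reads the shared structure without copying it.
            return conditions[event_id % len(conditions)] * 2

        if __name__ == "__main__":
            with mp_ctx.Pool(processes=4) as pool:
                results = pool.map(process_event, range(100))   # shared event queue analogue
            print(f"processed {len(results)} events in 4 forked workers")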

  1. The Informatics Challenges Facing Biobanks: A Perspective from a United Kingdom Biobanking Network

    PubMed Central

    Groves, Martin; Jordan, Lee B.; Stobart, Hilary; Purdie, Colin A.; Thompson, Alastair M

    2015-01-01

    The challenges facing biobanks are changing from simple collections of materials to quality-assured fit-for-purpose clinically annotated samples. As a result, informatics awareness and capabilities of a biobank are now intrinsically related to quality. A biobank may be considered a data repository, in the form of raw data (the unprocessed samples), data surrounding the samples (processing and storage conditions), supplementary data (such as clinical annotations), and an increasing ethical requirement for biobanks to have a mechanism for researchers to return their data. The informatics capabilities of a biobank are no longer simply knowing sample locations; instead the capabilities will become a distinguishing factor in the ability of a biobank to provide appropriate samples. There is an increasing requirement for biobanking systems (whether in-house or commercially sourced) to ensure the informatics systems stay apace with the changes being experienced by the biobanking community. In turn, there is a requirement for the biobanks to have a clear informatics policy and directive that is embedded into the wider decision making process. As an example, the Breast Cancer Campaign Tissue Bank in the UK was a collaboration between four individual and diverse biobanks in the UK, and an informatics platform has been developed to address the challenges of running a distributed network. From developing such a system there are key observations about what can or cannot be achieved by informatics in isolation. This article will highlight some of the lessons learned during this development process. PMID:26418270

  2. Development of image processing method to detect noise in geostationary imagery

    NASA Astrophysics Data System (ADS)

    Khlopenkov, Konstantin V.; Doelling, David R.

    2016-10-01

    The Clouds and the Earth's Radiant Energy System (CERES) has incorporated imagery from 16 individual geostationary (GEO) satellites across five contiguous domains since March 2000. In order to derive broadband fluxes uniform across satellite platforms it is important to ensure a good quality of the input raw count data. GEO data obtained by older GOES imagers (such as MTSAT-1, Meteosat-5, Meteosat-7, GMS-5, and GOES-9) are known to frequently contain various types of noise caused by transmission errors, sync errors, stray light contamination, and others. This work presents an image processing methodology designed to detect most kinds of noise and corrupt data in all bands of raw imagery from modern and historic GEO satellites. The algorithm is based on a set of different approaches to detect abnormal image patterns, including inter-line and inter-pixel differences within a scanline, correlation between scanlines, analysis of spatial variance, and also a 2D Fourier analysis of the image spatial frequencies. In spite of computational complexity, the described method is highly optimized for performance to facilitate volume processing of multi-year data and runs in fully automated mode. Reliability of this noise detection technique has been assessed by human supervision for each GEO dataset obtained during selected time periods in 2005 and 2006. This assessment has demonstrated the overall detection accuracy of over 99.5% and the false alarm rate of under 0.3%. The described noise detection routine is currently used in volume processing of historical GEO imagery for subsequent production of global gridded data products and for cross-platform calibration.
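
    Two of the simpler tests mentioned (inter-line differences and correlation between adjacent scanlines) can be sketched with NumPy as below; the thresholds are invented, and the operational algorithm combines several further tests, including spatial variance and a 2-D Fourier analysis.

        # Sketch of two of the simpler tests described above: abnormal line-to-line differences
        # and loss of correlation between adjacent scanlines. Thresholds are invented.
        import numpy as np

        def flag_bad_scanlines(img: np.ndarray, diff_thresh: float = 25.0,
                               corr_thresh: float = 0.2) -> np.ndarray:
            img = img.astype(np.float32)
            bad = np.zeros(img.shape[0], dtype=bool)
            for i in range(1, img.shape[0]):
                line, prev = img[i], img[i - 1]
                # Test 1: mean absolute difference from the previous scanline.
                if np.abs(line - prev).mean() > diff_thresh:
                    bad[i] = True
                # Test 2: adjacent scanlines in real imagery are strongly correlated.
                if np.std(line) > 0 and np.std(prev) > 0:
                    if np.corrcoef(line, prev)[0, 1] < corr_thresh:
                        bad[i] = True
            return bad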

  3. Effects of cell culture conditions on antibody N-linked glycosylation--what affects high mannose 5 glycoform.

    PubMed

    Pacis, Efren; Yu, Marcella; Autsen, Jennifer; Bayer, Robert; Li, Feng

    2011-10-01

    The glycosylation profile of therapeutic antibodies is routinely analyzed throughout development to monitor the impact of process parameters and to ensure consistency, efficacy, and safety for clinical and commercial batches of therapeutic products. In this study, unusually high levels of the mannose-5 (Man5) glycoform were observed during the early development of a therapeutic antibody produced from a Chinese hamster ovary (CHO) cell line, model cell line A. Follow-up studies indicated that the antibody Man5 level was increased throughout the course of cell culture production as a result of increasing cell culture medium osmolality levels and extending culture duration. With model cell line A, Man5 glycosylation increased more than twofold, from 12% to 28%, in the fed-batch process through a combination of high basal and feed media osmolality and increased run duration. The osmolality and culture duration effects were also observed for four other CHO antibody-producing cell lines by adding NaCl to both basal and feed media and extending the culture duration of the cell culture process. Moreover, reduction of the Man5 level in model cell line A was achieved by supplementing MnCl2 at appropriate concentrations. To further understand the role of glycosyltransferases in Man5 levels, N-acetylglucosaminyltransferase I (GnT-I) mRNA levels at different osmolality conditions were measured. It has been hypothesized that specific enzyme activity in the glycosylation pathway could have been altered in this fed-batch process. Copyright © 2011 Wiley Periodicals, Inc.

  4. Software framework for the upcoming MMT Observatory primary mirror re-aluminization

    NASA Astrophysics Data System (ADS)

    Gibson, J. Duane; Clark, Dusty; Porter, Dallan

    2014-07-01

    Details of the software framework for the upcoming in-situ re-aluminization of the 6.5m MMT Observatory (MMTO) primary mirror are presented. This framework includes: 1) a centralized key-value store and data structure server for data exchange between software modules, 2) a newly developed hardware-software interface for faster data sampling and better hardware control, 3) automated control algorithms that are based upon empirical testing, modeling, and simulation of the aluminization process, 4) re-engineered graphical user interfaces (GUI's) that use state-of-the-art web technologies, and 5) redundant relational databases for data logging. Redesign of the software framework has several objectives: 1) automated process control to provide more consistent and uniform mirror coatings, 2) optional manual control of the aluminization process, 3) modular design to allow flexibility in process control and software implementation, 4) faster data sampling and logging rates to better characterize the approximately 100-second aluminization event, and 5) synchronized "real-time" web application GUI's to provide all users with exactly the same data. The framework has been implemented as four modules interconnected by a data store/server. The four modules are integrated into two Linux system services that start automatically at boot-time and remain running at all times. Performance of the software framework is assessed through extensive testing within 2.0 meter and smaller coating chambers at the Sunnyside Test Facility. The redesigned software framework helps ensure that a better performing and longer lasting coating will be achieved during the re-aluminization of the MMTO primary mirror.
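
    The abstract does not name the store; assuming a Redis-style key-value/data-structure server accessed through the redis-py client, the exchange between a data-acquisition module and the other modules might look roughly like the sketch below. The key and channel names are invented, not the framework's actual schema.

        # Assumed, illustrative only: data exchange between two modules via a Redis-style
        # key-value / data-structure server (redis-py client, local Redis instance assumed).
        # Key and channel names are invented.
        import json
        import time
        import redis

        r = redis.Redis(host="localhost", port=6379, decode_responses=True)

        def publish_sample(pressure_torr: float, filament_current_a: float) -> None:
            sample = {"t": time.time(), "pressure": pressure_torr, "current": filament_current_a}
            r.set("alum:latest_sample", json.dumps(sample))     # latest value for the GUIs
            r.publish("alum:samples", json.dumps(sample))       # stream for the logging module

        def read_latest_sample() -> dict:
            raw = r.get("alum:latest_sample")
            return json.loads(raw) if raw else {}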

  5. Running SINDA '85/FLUINT interactive on the VAX

    NASA Technical Reports Server (NTRS)

    Simmonds, Boris

    1992-01-01

    Computer software used as an engineering tool is typically run in three modes: Batch, Demand, and Interactive. The first two are the most popular in the SINDA world. The third is less popular, probably because users cannot access the command procedure files for running SINDA '85, or because they are unfamiliar with the SINDA '85 execution process (pre-processor, processor, compilation, linking, execution, and all of the file assignment, creation, deletion and de-assignment). Interactive is the mode that makes thermal analysis with SINDA '85 a real-time design tool. This paper explains a command procedure (the minimum modifications required in an existing demand command procedure) sufficient to run SINDA '85 on the VAX in interactive mode. To exercise the procedure, a sample problem is presented that exemplifies the mode as well as additional programming capabilities available in SINDA '85. Following the same guidelines, the process can be extended to other computer platforms on which SINDA '85 resides.

  6. Inland Waterway Environmental Safety

    NASA Astrophysics Data System (ADS)

    Reshnyak, Valery; Sokolov, Sergey; Nyrkov, Anatoliy; Budnik, Vlad

    2018-05-01

    The article presents the results of developing the main components of environmental safety for vessels operating on inland waterways. These include the selection of a strategy to ensure the environmental safety of vessels, the selection and justification of a suite of environmental protection equipment, and the activities needed to operate vessels with that equipment taken into account. Measures to ensure environmental safety are developed on the basis of principles aimed at ensuring the environmental safety of vessels. They include the development of strategies for the use of environmental protection equipment, determined by the conditions for the treatment of sewage and oily bilge water as well as by the technical characteristics of the vessels, and the introduction of out-of-vessel processing of ship pollution into the technology of vessel operation. This must take into account the operating conditions of vessels on different sections of the waterways. An algorithm of actions aimed at ensuring the ecological safety of operated vessels is proposed.

  7. Running as Interoceptive Exposure for Decreasing Anxiety Sensitivity: Replication and Extension.

    PubMed

    Sabourin, Brigitte C; Stewart, Sherry H; Watt, Margo C; Krigolson, Olav E

    2015-01-01

    A brief, group cognitive behavioural therapy with running as the interoceptive exposure (IE; exposure to physiological sensations) component was effective in decreasing anxiety sensitivity (AS; fear of arousal sensations) levels in female undergraduates (Watt et al., Anxiety and Substance Use Disorders: The Vicious Cycle of Comorbidity, 201-219, 2008). Additionally, repeated exposure to running resulted in decreases in cognitive (i.e., catastrophic thoughts) and affective (i.e., feelings of anxiety) reactions to running over time for high AS, but not low AS, participants (Sabourin et al., "Physical exercise as interoceptive exposure within a brief cognitive-behavioral treatment for anxiety-sensitive women", Journal of Cognitive Psychotherapy, 22:302-320, 2008). A follow-up study including the above-mentioned intervention with an expanded IE component also resulted in decreases in AS levels (Sabourin et al., under review). The goals of the present process study were (1) to replicate the original process study, with the expanded IE component, and (2) to determine whether decreases in cognitive, affective, and/or somatic (physiological sensations) reactions to running would be related to decreases in AS. Eighteen high AS and 10 low AS participants completed 20 IE running trials following the 3-day group intervention. As predicted, high AS participants, but not low AS participants, experienced decreases in cognitive, affective, and somatic reactions to running over time. Furthermore, decreases in cognitive and affective, but not in somatic, reactions to running were related to decreases in AS levels. These results suggest that the therapeutic effects of repeated exposure to running in decreasing sensitivity to anxiety-related sensations are not related to decreasing the experience of somatic sensations themselves. Rather, they are related to altering the cognitive and affective reactions to these sensations.

  8. Electron beam processing of fresh produce - A critical review

    NASA Astrophysics Data System (ADS)

    Pillai, Suresh D.; Shayanfar, Shima

    2018-02-01

    To meet the increasing global demand for fresh produce, robust processing methods that ensure both the safety and quality of fresh produce are needed. Since fresh produce cannot withstand thermal processing conditions, most of the common safety interventions used for other foods are ineffective. Electron beam (eBeam) processing is a non-thermal technology that can be used to extend the shelf life and ensure the microbiological safety of fresh produce. Studies have documented the application of eBeam to ensure both the safety and quality of fresh produce; however, several areas remain unexplored and need further research. This is a critical review of the current literature on the application of eBeam technology to fresh produce.

  9. 50 CFR 648.162 - Bluefish specifications.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... following measures to ensure that the ACL specified by the process outlined in § 648.160(a) will not be... necessary to ensure that the ACL will not be exceeded. The MAFMC shall review these recommendations and... September 1 measures necessary to ensure that the applicable ACL will not be exceeded. The MAFMC's...

  10. 50 CFR 648.162 - Bluefish specifications.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... following measures to ensure that the ACL specified by the process outlined in § 648.160(a) will not be... necessary to ensure that the ACL will not be exceeded. The MAFMC shall review these recommendations and... September 1 measures necessary to ensure that the applicable ACL will not be exceeded. The MAFMC's...

  11. 50 CFR 648.162 - Bluefish specifications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... following measures to ensure that the ACL specified by the process outlined in § 648.160(a) will not be... necessary to ensure that the ACL will not be exceeded. The MAFMC shall review these recommendations and... September 1 measures necessary to ensure that the applicable ACL will not be exceeded. The MAFMC's...

  12. The SISMA Project: A pre-operative seismic hazard monitoring system.

    NASA Astrophysics Data System (ADS)

    Massimiliano Chersich, M. C.; Amodio, A. A. Angelo; Francia, A. F. Andrea; Sparpaglione, C. S. Claudio

    2009-04-01

    Galileian Plus is currently leading the development, in collaboration with several Italian universities, of the SISMA (Seismic Information System for Monitoring and Alert) Pilot Project financed by the Italian Space Agency. The system is devoted to continuous monitoring of seismic risk and is intended to support the Italian Civil Protection decision-making process. Completion of the Pilot Project is planned for the beginning of 2010. The main scientific paradigm of SISMA is an innovative deterministic approach integrating geophysical models, geodesy and active tectonics. This paper gives a general overview of the project along with its progress status, with particular focus on the architectural design details and the software implementation choices. SISMA is built on top of a software infrastructure developed by Galileian Plus to integrate the scientific programs devoted to updating the seismic risk maps. The main characteristics of the system can be summarized as follows: automatic download of input data; integration of scientific programs; definition and scheduling of chains of processes; monitoring and control of the system through a graphical user interface (GUI); compatibility of the products with ESRI ArcGIS, by means of post-processing conversion. a) Automatic download of input data: SISMA needs input data such as GNSS observations, an updated seismic catalogue, SAR satellite orbits, etc., that are periodically updated and made available from remote servers through FTP and HTTP. This task is accomplished by a dedicated, user-configurable component. b) Integration of scientific programs: SISMA integrates many scientific programs written in different languages (Fortran, C, C++, Perl and Bash) and running on different operating systems. These design requirements led to the development of a distributed, platform-independent system able to run any terminal-based program following a few simple predefined rules. c) Definition and scheduling of chains of processes: processes are bound to each other, in the sense that the output of process "A" should be passed as input to process "B". In this case process "B" must run automatically as soon as the required input is ready. In SISMA this is handled with the "data-driven" activation concept, which allows specifying that a process should be started as soon as the needed input datum has been made available in the archive. Moreover, SISMA may run processes on a "time-driven" basis: the SISMA infrastructure provides a configurable scheduler allowing the user to define the start time and the periodicity of such processes. d) Monitoring and control: the operator of the system needs to monitor and control every process running in the system. Through its GUI, the SISMA infrastructure allows the user to view log messages of running and past processes, stop running processes, monitor process executions, and monitor resource status (available RAM, network reachability, and available disk space) for every machine in the system. e) Compatibility with ESRI Shapefiles: nearly all SISMA data have some geographic information, and it is useful to integrate them into a Geographic Information System (GIS). Processor outputs are georeferenced, but they are generated as ASCII files in a proprietary format and thus cannot be loaded directly into a GIS. The infrastructure provides a simple framework for adding filters that read the data in the proprietary format and convert them to the ESRI Shapefile format.
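
    The "data-driven" and "time-driven" activation concepts described above can be illustrated with a minimal sketch. This is not the SISMA implementation (which chains Fortran/C/C++/Perl/Bash programs across machines); it is a hypothetical single-machine Python illustration of the pattern, with placeholder file and function names: a consumer process starts as soon as its input datum appears in the archive, alongside a periodic time-driven job.

```python
# Minimal sketch of "data-driven" and "time-driven" process activation,
# loosely modeled on the chaining described above. All names are hypothetical.
import sched
import time
from pathlib import Path

ARCHIVE = Path("archive")          # hypothetical data archive
scheduler = sched.scheduler(time.time, time.sleep)

def process_a():
    """Producer: writes its output datum into the archive."""
    (ARCHIVE / "gnss_solution.dat").write_text("...")

def process_b(datum: Path):
    """Consumer: started automatically once its input datum is available."""
    print(f"process B consuming {datum}")

def watch_for(datum_name: str, action, poll_s: float = 1.0):
    """Data-driven activation: poll the archive and fire `action` on arrival."""
    datum = ARCHIVE / datum_name
    if datum.exists():
        action(datum)
    else:
        scheduler.enter(poll_s, 1, watch_for, (datum_name, action, poll_s))

def periodic(action, period_s: float):
    """Time-driven activation: re-enqueue `action` with a fixed period."""
    action()
    scheduler.enter(period_s, 2, periodic, (action, period_s))

if __name__ == "__main__":
    ARCHIVE.mkdir(exist_ok=True)
    scheduler.enter(0, 1, watch_for, ("gnss_solution.dat", process_b))
    scheduler.enter(2, 2, periodic, (process_a, 10.0))  # e.g. a periodic download
    scheduler.run()                                     # runs until interrupted
```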

  13. Forward conditioning with wheel running causes place aversion in rats.

    PubMed

    Masaki, Takahisa; Nakajima, Sadahiko

    2008-09-01

    Backward pairings of a distinctive chamber as a conditioned stimulus and wheel running as an unconditioned stimulus (i.e., running-then-chamber) can produce a conditioned place preference in rats. The present study explored whether a forward conditioning procedure with these stimuli (i.e., chamber-then-running) would yield place preference or aversion. Confinement of a rat in one of two distinctive chambers was followed by a 20- or 60-min running opportunity, but confinement in the other was not. After four repetitions of this treatment (i.e., differential conditioning), a choice preference test was given in which the rat had free access to both chambers. This choice test showed that the rats given 60-min running opportunities spent less time in the running-paired chamber than in the unpaired chamber. Namely, a 60-min running opportunity after confinement in a distinctive chamber caused conditioned aversion to that chamber after four paired trials. This result was discussed with regard to the opponent-process theory of motivation.

  14. A simple field method to identify foot strike pattern during running.

    PubMed

    Giandolini, Marlène; Poupard, Thibaut; Gimenez, Philippe; Horvais, Nicolas; Millet, Guillaume Y; Morin, Jean-Benoît; Samozino, Pierre

    2014-05-07

    Identifying foot strike patterns in running is an important issue for sport clinicians, coaches and the footwear industry. Current methods allow the monitoring of either many steps in laboratory conditions or only a few steps in the field. Because measuring running biomechanics during actual practice is critical, our purpose is to validate a method aiming at identifying foot strike patterns during continuous field measurements. Based on heel and metatarsal accelerations, this method requires two uniaxial accelerometers. The time between heel and metatarsal acceleration peaks (THM) was compared to the foot strike angle in the sagittal plane (αfoot) obtained by 2D video analysis for various conditions of speed, slope, footwear, foot strike and state of fatigue. Acceleration and kinematic measurements were performed at 1000Hz and 120Hz, respectively, during 2-min treadmill running bouts. Significant correlations were observed between THM and αfoot for 14 out of 15 conditions. The overall correlation coefficient was r=0.916 (P<0.0001, n=288). The THM method is thus highly reliable for a wide range of speeds and slopes, and for all types of foot strike except extreme forefoot strike, during which the heel rarely or never strikes the ground, and for different footwear and states of fatigue. We proposed a classification based on THM: FFS<-5.49ms
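
    For illustration only, the core computation can be sketched as follows: given heel and metatarsal acceleration signals sampled at 1000 Hz, detect the impact peaks and take their time difference per step. The peak-detection thresholds and variable names below are assumptions, not the parameters of the published method.

```python
# Sketch: estimate heel-to-metatarsal peak time difference (THM) per step from
# two uniaxial accelerometer signals. Thresholds and names are illustrative.
import numpy as np
from scipy.signal import find_peaks

FS = 1000.0  # sampling rate (Hz), as in the study

def thm_per_step(heel_acc: np.ndarray, meta_acc: np.ndarray) -> np.ndarray:
    """Return THM in ms for each detected step: t(heel peak) - t(metatarsal peak)."""
    # Impact-peak detection; height/distance are assumed tuning parameters.
    heel_peaks, _ = find_peaks(heel_acc, height=5.0, distance=int(0.4 * FS))
    meta_peaks, _ = find_peaks(meta_acc, height=5.0, distance=int(0.4 * FS))
    if heel_peaks.size == 0 or meta_peaks.size == 0:
        return np.array([])
    thm = []
    for hp in heel_peaks:
        mp = meta_peaks[np.argmin(np.abs(meta_peaks - hp))]  # nearest metatarsal peak
        thm.append((hp - mp) / FS * 1000.0)  # negative values -> forefoot-like strike
    return np.array(thm)
```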

  15. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    NASA Astrophysics Data System (ADS)

    Adam, C.; Barberis, D.; Crépé-Renaudin, S.; De, K.; Fassi, F.; Stradling, A.; Svatos, M.; Vartapetian, A.; Wolters, H.

    2017-10-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run 2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts’ workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run 1, this task was accomplished by a member of the expert team called the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run 2. The CRC position was proposed to cover some of the AMOD's former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help with the training of future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates communication between the ADC expert team and the other ADC shifters. These include the Distributed Analysis Support Team (DAST), which is the first point of contact for addressing all distributed analysis questions, and the ATLAS Distributed Computing Shifters (ADCoS), which check and report problems in central services, sites, Tier-0 export, data transfers and production tasks. Finally, the CRC looks at the level of ADC activities on a weekly or monthly timescale to ensure that ADC resources are used efficiently.

  16. Medicare Postacute Care Payment Reforms Have Potential to Improve Efficiency, but May Need Changes to Cut Costs

    PubMed Central

    Grabowski, David C.; Huckfeldt, Peter J.; Sood, Neeraj; Escarce, José J; Newhouse, Joseph P.

    2012-01-01

    The Affordable Care Act mandates changes in payment policies for Medicare postacute care services intended to contain spending in the long run and help ensure the program’s financial sustainability. In addition to reducing annual payment increases to providers under the existing prospective payment systems, the act calls for demonstration projects of bundled payment, accountable care organizations, and other strategies to promote care coordination and reduce spending. Experience with the adoption of Medicare prospective payment systems in postacute care settings approximately a decade ago suggests that current reforms could, but need not necessarily, produce such undesirable effects as decreased access for less profitable patients, poorer patient outcomes, and only short-lived curbs on spending. Policy makers will need to be vigilant in monitoring the impact of the Affordable Care Act reforms and be prepared to amend policies as necessary to ensure that the reforms exert persistent controls on spending without compromising the delivery of patient-appropriate postacute services. PMID:22949442

  17. Synthesis of water suitable as the MEPC.174(58) G8 influent water for testing ballast water management systems.

    PubMed

    D'Agostino, Fabio; Del Core, Marianna; Cappello, Simone; Mazzola, Salvatore; Sprovieri, Mario

    2015-10-01

    Here, we describe the methodologies adopted to ensure that natural seawater, used as "influent water" for the land test, complies with the requirement that should be fulfilled to show the efficacy of the new ballast water treatment system (BWTS). The new BWTS was located on the coast of SW Sicily (Italy), and the sampled seawater showed that bacteria and plankton were two orders of magnitude lower than requested. Integrated approaches for preparation of massive cultures of bacteria (Alcanivorax borkumensis and Marinobacter hydrocarbonoclasticus), algae (Tetraselmis suecica), rotifers (Brachionus plicatilis), and crustaceans (Artemia salina) suitable to ensure that 200 m(3) of water fulfilled the international guidelines of MEPC.174(58)G8 are here described. These methodologies allowed us to prepare the "influent water" in good agreement with guidelines and without specific problems arising from natural conditions (seasons, weather, etc.) which significantly affect the concentrations of organisms at sea. This approach also offered the chance to reliably run land tests once every two weeks.

  18. A Formal Model of Partitioning for Integrated Modular Avionics

    NASA Technical Reports Server (NTRS)

    DiVito, Ben L.

    1998-01-01

    The aviation industry is gradually moving toward the use of integrated modular avionics (IMA) for civilian transport aircraft. An important concern for IMA is ensuring that applications are safely partitioned so they cannot interfere with one another. We have investigated the problem of ensuring safe partitioning and logical non-interference among separate applications running on a shared Avionics Computer Resource (ACR). This research was performed in the context of ongoing standardization efforts, in particular, the work of RTCA committee SC-182, and the recently completed ARINC 653 application executive (APEX) interface standard. We have developed a formal model of partitioning suitable for evaluating the design of an ACR. The model draws from the mathematical modeling techniques developed by the computer security community. This report presents a formulation of partitioning requirements expressed first using conventional mathematical notation, then formalized using the language of SRI's Prototype Verification System (PVS). The approach is demonstrated on three candidate designs, each an abstraction of features found in real systems.
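
    As a loose illustration of the kind of property such a model captures (and not the report's PVS formalization), the sketch below checks a simple spatial-partitioning invariant over a trace of memory accesses: no application running in one partition ever touches an address assigned to another. Names and address ranges are hypothetical.

```python
# Illustrative sketch of a spatial partitioning invariant: every memory access
# by a partition must fall inside the address range assigned to that partition.
from typing import NamedTuple

class Partition(NamedTuple):
    name: str
    base: int
    size: int

class Access(NamedTuple):
    partition: str
    address: int

def partitioning_holds(partitions: list[Partition], trace: list[Access]) -> bool:
    """True iff no access in the trace strays outside its partition's range."""
    ranges = {p.name: range(p.base, p.base + p.size) for p in partitions}
    return all(a.address in ranges[a.partition] for a in trace)

# Example: partition B straying into A's region violates the invariant.
parts = [Partition("A", 0x1000, 0x1000), Partition("B", 0x2000, 0x1000)]
trace = [Access("A", 0x1100), Access("B", 0x1200)]
assert not partitioning_holds(parts, trace)
```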

  19. The Use of a UNIX-Based Workstation in the Information Systems Laboratory

    DTIC Science & Technology

    1989-03-01

    system. The conclusions of the research and the resulting recommendations are presented in Chapter III. These recommendations include how to manage...required to run the program on a new system, these should not be significant changes. 2. Processing Environment The UNIX processing environment is...interactive with multi-tasking and multi-user capabilities. Multi-tasking refers to the fact that many programs can be run concurrently. This capability

  20. Friction Mapping as a Tool for Measuring the Elastohydrodynamic Contact Running-in Process

    DTIC Science & Technology

    2015-10-01

    ARL-TR-7501, US Army Research Laboratory, October 2015. Friction Mapping as a Tool for Measuring the Elastohydrodynamic Contact Running-in Process, by Stephen Berkebile. Final report; dates covered: 1 January–30 June 2015.

  1. Power Analysis of an Enterprise Wireless Communication Architecture

    DTIC Science & Technology

    2017-09-01

    easily plug a satellite-based communication module into the enterprise processor when needed. Once plugged-in, it automatically runs the corresponding...reduce the SWaP by using a singular processing/computing module to run user applications and to implement waveform algorithms. This approach would...GPP) technology improved enough to allow a wide variety of waveforms to run in the GPP; thus giving rise to the SDR (Brannon 2004). Today’s

  2. Barefoot running: does it prevent injuries?

    PubMed

    Murphy, Kelly; Curry, Emily J; Matzkin, Elizabeth G

    2013-11-01

    Endurance running has evolved over the course of millions of years and it is now one of the most popular sports today. However, the risk of stress injury in distance runners is high because of the repetitive ground impact forces exerted. These injuries are not only detrimental to the runner, but also place a burden on the medical community. Preventative measures are essential to decrease the risk of injury within the sport. Common running injuries include patellofemoral pain syndrome, tibial stress fractures, plantar fasciitis, and Achilles tendonitis. Barefoot running, as opposed to shod running (with shoes), has recently received significant attention in both the media and the market place for the potential to promote the healing process, increase performance, and decrease injury rates. However, there is controversy over the use of barefoot running to decrease the overall risk of injury secondary to individual differences in lower extremity alignment, gait patterns, and running biomechanics. While barefoot running may benefit certain types of individuals, differences in running stance and individual biomechanics may actually increase injury risk when transitioning to barefoot running. The purpose of this article is to review the currently available clinical evidence on barefoot running and its effectiveness for preventing injury in the runner. Based on a review of current literature, barefoot running is not a substantiated preventative running measure to reduce injury rates in runners. However, barefoot running utility should be assessed on an athlete-specific basis to determine whether barefoot running will be beneficial.

  3. The Robust Running Ape: Unraveling the Deep Underpinnings of Coordinated Human Running Proficiency

    PubMed Central

    Kiely, John

    2017-01-01

    In comparison to other mammals, humans are not especially strong, swift or supple. Nevertheless, despite these apparent physical limitations, we are among Nature's most superbly well-adapted endurance runners. Paradoxically, however, notwithstanding this evolutionary-bestowed proficiency, running-related injuries, and Overuse syndromes in particular, are widely pervasive. The term ‘coordination’ is similarly ubiquitous within contemporary coaching, conditioning, and rehabilitation cultures. Various theoretical models of coordination exist within the academic literature. However, the specific neural and biological underpinnings of ‘running coordination,’ and the nature of their integration, remain poorly elaborated. Conventionally, running is considered a mundane, readily mastered coordination skill. This illusion of coordinative simplicity, however, is founded upon a platform of immense neural and biological complexities. This extensive complexity presents extreme organizational difficulties yet, simultaneously, provides a multiplicity of viable pathways through which the computational and mechanical burden of running can be proficiently dispersed amongst expanded networks of conditioned neural and peripheral tissue collaborators. Learning to adequately harness this available complexity, however, is a painstakingly slowly emerging, practice-driven process, greatly facilitated by innate evolutionary organizing principles serving to constrain otherwise overwhelming complexity to manageable proportions. As we accumulate running experiences, persistent plastic remodeling customizes networked neural connectivity and biological tissue properties to best fit our unique neural and architectural idiosyncrasies, and personal histories: thus neural and peripheral tissue plasticity embeds coordination habits. When, however, coordinative processes are compromised—under the integrated influence of fatigue and/or accumulative cycles of injury, overuse, misuse, and disuse—this spectrum of available ‘choice’ dysfunctionally contracts, and our capacity to safely disperse the mechanical ‘stress’ of running progressively diminishes. Now the running work burden falls increasingly on reduced populations of collaborating components. Accordingly, our capacity to effectively manage, dissipate and accommodate running-imposed stress diminishes, and vulnerability to Overuse syndromes escalates. Awareness of the deep underpinnings of running coordination enhances conceptual clarity, thereby informing training and rehabilitation insights designed to offset the legacy of excessive or progressively accumulating exposure to running-imposed mechanical stress. PMID:28659838

  4. Validation of CFD/Heat Transfer Software for Turbine Blade Analysis

    NASA Technical Reports Server (NTRS)

    Kiefer, Walter D.

    2004-01-01

    I am an intern in the Turbine Branch of the Turbomachinery and Propulsion Systems Division. The division is primarily concerned with experimental and computational methods of calculating heat transfer effects of turbine blades during operation in jet engines and land-based power systems. These include modeling flow in internal cooling passages and film cooling, as well as calculating heat flux and peak temperatures to ensure safe and efficient operation. The branch is research-oriented, emphasizing the development of tools that may be used by gas turbine designers in industry. The branch has been developing a computational fluid dynamics (CFD) and heat transfer code called GlennHT to achieve the computational end of this analysis. The code was originally written in FORTRAN 77 and run on Silicon Graphics machines. However, the code has been rewritten and compiled in FORTRAN 90 to take advantage of more modern computer memory systems. In addition, the branch has made a switch in system architectures from SGIs to Linux PCs. The newly modified code therefore needs to be tested and validated. This is the primary goal of my internship. To validate the GlennHT code, it must be run using benchmark fluid mechanics and heat transfer test cases, for which there are either analytical solutions or widely accepted experimental data. From the solutions generated by the code, comparisons can be made to the correct solutions to establish the accuracy of the code. To design and create these test cases, there are many steps and programs that must be used. Before a test case can be run, pre-processing steps must be accomplished. These include generating a grid to describe the geometry, using a software package called GridPro. Also various files required by the GlennHT code must be created including a boundary condition file, a file for multi-processor computing, and a file to describe problem and algorithm parameters. A good deal of this internship will be to become familiar with these programs and the structure of the GlennHT code. Additional information is included in the original extended abstract.
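
    The validation step described here, comparing code output against a benchmark with a known solution, is commonly reduced to an error norm. The sketch below is a generic illustration of that idea, not part of GlennHT or its tool chain; the analytical plane Poiseuille profile is just one example of a benchmark with a closed-form solution, and the sampled "code output" is synthetic.

```python
# Generic validation sketch: compare a computed velocity profile against an
# analytical benchmark (plane Poiseuille flow) via a relative L2 error norm.
import numpy as np

def poiseuille_u(y: np.ndarray, u_max: float, h: float) -> np.ndarray:
    """Analytical laminar channel-flow profile, u(y) = u_max * (1 - (y/h)^2)."""
    return u_max * (1.0 - (y / h) ** 2)

def relative_l2_error(computed: np.ndarray, exact: np.ndarray) -> float:
    return float(np.linalg.norm(computed - exact) / np.linalg.norm(exact))

# Hypothetical "code output" sampled on the same grid as the exact solution.
y = np.linspace(-1.0, 1.0, 65)
exact = poiseuille_u(y, u_max=1.0, h=1.0)
computed = exact + 1e-3 * np.random.default_rng(0).standard_normal(y.size)
print(f"relative L2 error: {relative_l2_error(computed, exact):.2e}")
```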

  5. MHD Simulation of Magnetic Nozzle Plasma with the NIMROD Code: Applications to the VASIMR Advanced Space Propulsion Concept

    NASA Astrophysics Data System (ADS)

    Tarditi, Alfonso G.; Shebalin, John V.

    2002-11-01

    A simulation study with the NIMROD code [1] is being carried out to investigate the efficiency of the thrust generation process and the properties of the plasma detachment in a magnetic nozzle. In the simulation, hot plasma is injected in the magnetic nozzle, modeled as a 2D, axi-symmetric domain. NIMROD has two-fluid, 3D capabilities but the present runs are being conducted within the MHD, 2D approximation. As the plasma travels through the magnetic field, part of its thermal energy is converted into longitudinal kinetic energy, along the axis of the nozzle. The plasma eventually detaches from the magnetic field at a certain distance from the nozzle throat where the kinetic energy becomes larger than the magnetic energy. Preliminary NIMROD 2D runs have been benchmarked with a particle trajectory code, showing satisfactory results [2]. Further testing is here reported with the emphasis on the analysis of the diffusion rate across the field lines and of the overall nozzle efficiency. These simulation runs are specifically designed for obtaining comparisons with laboratory measurements of the VASIMR experiment, by looking at the evolution of the radial plasma density and temperature profiles in the nozzle. VASIMR (Variable Specific Impulse Magnetoplasma Rocket, [3]) is an advanced space propulsion concept currently under experimental development at the Advanced Space Propulsion Laboratory, NASA Johnson Space Center. A plasma (typically ionized Hydrogen or Helium) is generated by a RF (Helicon) discharge and heated by an Ion Cyclotron Resonance Heating antenna. The heated plasma is then guided into a magnetic nozzle to convert the thermal plasma energy into effective thrust. The VASIMR system has no electrodes and a solenoidal magnetic field produced by an asymmetric mirror configuration ensures magnetic insulation of the plasma from the material surfaces. By powering the plasma source and the heating antenna at different levels it is possible to vary the thrust-to-specific-impulse ratio smoothly while maintaining maximum power utilization. [1] http://www.nimrodteam.org [2] A. V. Ilin et al., Proc. 40th AIAA Aerospace Sciences Meeting, Reno, NV, Jan. 2002 [3] F. R. Chang-Diaz, Scientific American, p. 90, Nov. 2000
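
    The detachment condition mentioned above, kinetic energy exceeding magnetic energy, can be written as a local criterion on energy densities, ρv²/2 > B²/(2μ₀). The sketch below evaluates that criterion along the nozzle axis for assumed density, velocity and field profiles; it illustrates the criterion only and is not a NIMROD calculation.

```python
# Sketch: locate the detachment point where kinetic energy density exceeds
# magnetic energy density, 0.5*rho*v^2 > B^2/(2*mu0). Profiles are assumed.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

z = np.linspace(0.0, 2.0, 400)              # axial distance from throat, m
rho = 1e-8 * np.exp(-z / 0.5)               # assumed mass density profile, kg/m^3
v = 2.0e4 * (1.0 + z)                       # assumed axial velocity profile, m/s
B = 0.5 * np.exp(-z / 0.3)                  # assumed magnetic field profile, T

kinetic = 0.5 * rho * v**2                  # kinetic energy density, J/m^3
magnetic = B**2 / (2.0 * MU0)               # magnetic energy density, J/m^3

detached = np.nonzero(kinetic > magnetic)[0]
if detached.size:
    print(f"detachment at z ~ {z[detached[0]]:.2f} m")
else:
    print("no detachment within the modeled region")
```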

  6. Sourdough microbial community dynamics: An analysis during French organic bread-making processes.

    PubMed

    Lhomme, Emilie; Urien, Charlotte; Legrand, Judith; Dousset, Xavier; Onno, Bernard; Sicard, Delphine

    2016-02-01

    Natural sourdoughs are commonly used in bread-making processes, especially for organic bread. Despite its role in bread flavor and dough rise, the stability of the sourdough microbial community during and between bread-making processes is debated. We investigated the dynamics of lactic acid bacteria (LAB) and yeast communities in traditional organic sourdoughs of five French bakeries during the bread-making process and several months apart using classical and molecular microbiology techniques. Sourdoughs were sampled at four steps of the bread-making process with repetition. The analysis of microbial density over 68 sourdough/dough samples revealed that both LAB and yeast counts changed along the bread-making process and between bread-making runs. The species composition was less variable. A total of six LAB and nine yeast species were identified from 520 and 1675 isolates, respectively. The dominant LAB species was Lactobacillus sanfranciscensis, found for all bakeries and each bread-making run. The dominant yeast species changed only once between bread-making processes but differed between bakeries. They mostly belonged to the Kazachstania clade. Overall, this study highlights the change of population density within the bread-making process and between bread-making runs and the relative stability of the sourdough species community during the bread-making process. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Deployment of IPv6-only CPU resources at WLCG sites

    NASA Astrophysics Data System (ADS)

    Babik, M.; Chudoba, J.; Dewhurst, A.; Finnern, T.; Froy, T.; Grigoras, C.; Hafeez, K.; Hoeft, B.; Idiculla, T.; Kelsey, D. P.; López Muñoz, F.; Martelli, E.; Nandakumar, R.; Ohrenberg, K.; Prelz, F.; Rand, D.; Sciabà, A.; Tigerstedt, U.; Traynor, D.

    2017-10-01

    The fraction of Internet traffic carried over IPv6 continues to grow rapidly. IPv6 support from network hardware vendors and carriers is pervasive and becoming mature. A network infrastructure upgrade often offers sites an excellent window of opportunity to configure and enable IPv6. There is a significant overhead when setting up and maintaining dual-stack machines, so where possible sites would like to upgrade their services directly to IPv6 only. In doing so, they are also expediting the transition process towards its desired completion. While the LHC experiments accept there is a need to move to IPv6, it is currently not directly affecting their work. Sites are unwilling to upgrade if they will be unable to run LHC experiment workflows. This has resulted in a very slow uptake of IPv6 from WLCG sites. For several years the HEPiX IPv6 Working Group has been testing a range of WLCG services to ensure they are IPv6 compliant. Several sites are now running many of their services as dual-stack. The working group, driven by the requirements of the LHC VOs to be able to use IPv6-only opportunistic resources, continues to encourage wider deployment of dual-stack services to make the use of such IPv6-only clients viable. This paper presents the working group’s plan and progress so far to allow sites to deploy IPv6-only CPU resources. This includes making experiment central services dual-stack as well as a number of storage services. The monitoring, accounting and information services that are used by jobs also need to be upgraded. Finally the VO testing that has taken place on hosts connected via IPv6-only is reported.
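
    A trivial check of the kind an IPv6-only worker might perform is sketched below: resolve a service endpoint via AAAA records only and attempt a TCP connection over IPv6. The hostname is a placeholder and this is an illustration, not the working group's actual test tooling.

```python
# Sketch: verify that a service is reachable from an IPv6-only client by
# resolving AAAA records and opening a TCP connection over IPv6.
import socket

def reachable_over_ipv6(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no AAAA record / resolution failure
    for family, socktype, proto, _canon, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
                return True
        except OSError:
            continue  # try the next resolved address
    return False

print(reachable_over_ipv6("se01.example.org", 443))  # placeholder endpoint
```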

  8. RELAP5-3D Resolution of Known Restart/Backup Issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mesina, George L.; Anderson, Nolan A.

    2014-12-01

    The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To continue at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code specifications against coding only when originally written, then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that the performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a “verification file” that records double precision sums of key variables. Coverage is provided by a test suite of input decks that exercise code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.
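
    The "verification file" idea, double-precision sums of key variables compared between consecutive code versions, can be sketched generically as below. Variable names and tolerances are hypothetical; this shows the regression-comparison pattern, not INL's actual testing system.

```python
# Sketch of sequential-verification style regression testing: record double
# precision sums of key solution variables and compare them between runs.
import json
import math

def verification_record(solution: dict) -> dict:
    """Reduce each key variable (a list of floats) to a double-precision sum."""
    return {name: math.fsum(values) for name, values in solution.items()}

def compare_records(old: dict, new: dict, rel_tol: float = 1e-12) -> list:
    """Return the names of variables whose checksums changed beyond rel_tol."""
    return [name for name in old
            if not math.isclose(old[name], new[name], rel_tol=rel_tol)]

# Hypothetical usage: baseline version vs. candidate update on one test case.
baseline = verification_record({"pressure": [1.0e5, 1.001e5], "voidf": [0.0, 0.12]})
candidate = verification_record({"pressure": [1.0e5, 1.001e5], "voidf": [0.0, 0.12001]})
print("regression in:", compare_records(baseline, candidate))  # -> ['voidf']
with open("baseline.json", "w") as f:   # persist checksums for the next version
    json.dump(baseline, f)
```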

  9. Integration of EGA secure data access into Galaxy.

    PubMed

    Hoogstrate, Youri; Zhang, Chao; Senf, Alexander; Bijlard, Jochem; Hiltemann, Saskia; van Enckevort, David; Repo, Susanna; Heringa, Jaap; Jenster, Guido; J A Fijneman, Remond; Boiten, Jan-Willem; A Meijer, Gerrit; Stubbs, Andrew; Rambla, Jordi; Spalding, Dylan; Abeln, Sanne

    2016-01-01

    High-throughput molecular profiling techniques are routinely generating vast amounts of data for translational medicine studies. Secure access controlled systems are needed to manage, store, transfer and distribute these data due to their personally identifiable nature. The European Genome-phenome Archive (EGA) was created to facilitate access to and management of the long-term archival of bio-molecular data. Each data provider is responsible for ensuring a Data Access Committee is in place to grant access to data stored in the EGA. Moreover, the transfer of data during upload and download is encrypted. ELIXIR, a European research infrastructure for life-science data, initiated a project (2016 Human Data Implementation Study) to understand and document the ELIXIR requirements for secure management of controlled-access data. As part of this project, a full ecosystem was designed to connect archived raw experimental molecular profiling data with interpreted data and the computational workflows, using the CTMM Translational Research IT (CTMM-TraIT) infrastructure http://www.ctmm-trait.nl as an example. Here we present the first outcomes of this project, a framework to enable the download of EGA data to a Galaxy server in a secure way. Galaxy provides an intuitive user interface for molecular biologists and bioinformaticians to run and design data analysis workflows. More specifically, we developed a tool, ega_download_streamer, that can download data securely from EGA into a Galaxy server, which can subsequently be further processed. This tool will allow a user within the browser to run an entire analysis containing sensitive data from EGA, and to make this analysis available for other researchers in a reproducible manner, as shown with a proof of concept study. The tool ega_download_streamer is available in the Galaxy tool shed: https://toolshed.g2.bx.psu.edu/view/yhoogstrate/ega_download_streamer.

  10. Integration of EGA secure data access into Galaxy

    PubMed Central

    Hoogstrate, Youri; Zhang, Chao; Senf, Alexander; Bijlard, Jochem; Hiltemann, Saskia; van Enckevort, David; Repo, Susanna; Heringa, Jaap; Jenster, Guido; Fijneman, Remond J.A.; Boiten, Jan-Willem; A. Meijer, Gerrit; Stubbs, Andrew; Rambla, Jordi; Spalding, Dylan; Abeln, Sanne

    2016-01-01

    High-throughput molecular profiling techniques are routinely generating vast amounts of data for translational medicine studies. Secure access controlled systems are needed to manage, store, transfer and distribute these data due to their personally identifiable nature. The European Genome-phenome Archive (EGA) was created to facilitate access to and management of the long-term archival of bio-molecular data. Each data provider is responsible for ensuring a Data Access Committee is in place to grant access to data stored in the EGA. Moreover, the transfer of data during upload and download is encrypted. ELIXIR, a European research infrastructure for life-science data, initiated a project (2016 Human Data Implementation Study) to understand and document the ELIXIR requirements for secure management of controlled-access data. As part of this project, a full ecosystem was designed to connect archived raw experimental molecular profiling data with interpreted data and the computational workflows, using the CTMM Translational Research IT (CTMM-TraIT) infrastructure http://www.ctmm-trait.nl as an example. Here we present the first outcomes of this project, a framework to enable the download of EGA data to a Galaxy server in a secure way. Galaxy provides an intuitive user interface for molecular biologists and bioinformaticians to run and design data analysis workflows. More specifically, we developed a tool, ega_download_streamer, that can download data securely from EGA into a Galaxy server, which can subsequently be further processed. This tool will allow a user within the browser to run an entire analysis containing sensitive data from EGA, and to make this analysis available for other researchers in a reproducible manner, as shown with a proof of concept study. The tool ega_download_streamer is available in the Galaxy tool shed: https://toolshed.g2.bx.psu.edu/view/yhoogstrate/ega_download_streamer. PMID:28232859

  11. Silicon solar cell process development, fabrication and analysis

    NASA Technical Reports Server (NTRS)

    Yoo, H. I.; Iles, P. A.; Leung, D. C.

    1981-01-01

    Solar cells were fabricated from EFG ribbons, dendritic webs, cast ingots produced by the heat exchanger method (HEM), and cast ingots produced by the ubiquitous crystallization process (UCP). Baseline and other process variations were applied to fabricate solar cells. EFG ribbons grown in a carbon-containing gas atmosphere showed significant improvement in silicon quality. Baseline solar cells from dendritic webs of various runs indicated that the quality of the webs under investigation was not as good as conventional CZ silicon, showing an average minority carrier diffusion length of about 60 um versus 120 um for CZ wafers. Detailed evaluation of large cast ingots produced by HEM showed ingot reproducibility problems from run to run and uniformity problems of sheet quality within an ingot. Initial evaluation of wafers prepared from the cast polycrystalline ingots by UCP suggested that the quality of wafers from this process is considerably lower than that of conventional CZ wafers. Overall performance was relatively uniform, except for a few cells which showed shunting problems caused by inclusions.

  12. Design of Simple Landslide Monitoring System

    NASA Astrophysics Data System (ADS)

    Meng, Qingjia; Cai, Lingling

    2018-01-01

    The simple landslide monitoring system is mainly designed for slopes, collapse bodies and surface cracks. In harsh environments, the dynamic displacement data of the disaster body are transmitted to the terminal acquisition system in real time. The main controller of the system is the PIC32MX795F512. To realize a low-power design, the system is woken up by a clock chip and the switching power supply is turned on at set times, so that the wireless transmission module runs only during these intervals; this minimizes battery consumption and allows the system to work stably over the long term.

  13. Cleaning Insertions and Collimation Challenges

    NASA Astrophysics Data System (ADS)

    Redaelli, S.; Appleby, R. B.; Bertarelli, A.; Bruce, R.; Jowett, J. M.; Lechner, A.; Losito, R.

    High-performance collimation systems are essential for efficiently operating modern hadron machines with large beam intensities. In particular, at the LHC the collimation system ensures clean disposal of beam halos in the superconducting environment. The HL-LHC study poses various demanding requirements for beam collimation. In this paper we review the present collimation system and its performance during LHC Run 1 in 2010-2013. Various collimation solutions under study to address the HL-LHC requirements are then reviewed, identifying the main upgrade baseline and pointing out advanced collimation concepts for further enhancement of the performance.

  14. Tethering sockets and wrenches

    NASA Technical Reports Server (NTRS)

    Johnson, E. P.

    1990-01-01

    The tethering of sockets and wrenches was accomplished to improve the safety of working over motor segments. To accomplish the tethering of the sockets to the ratchets, a special design was implemented in which a groove was machined into each socket. Each socket was then fitted with a snap ring that can spin around the machined groove. The snap ring is tethered to the handle of the ratchet. All open end wrenches are also tethered to the ratchet or to the operator, depending upon the type. Tests were run to ensure that the modified tools meet torque requirements. The design was subsequently approved by Space Safety.

  15. A distributed infrastructure for publishing VO services: an implementation

    NASA Astrophysics Data System (ADS)

    Cepparo, Francesco; Scagnetto, Ivan; Molinaro, Marco; Smareglia, Riccardo

    2016-07-01

    This contribution describes both the design and the implementation details of a new solution for publishing VO services, highlighting its maintainable, distributed, modular and scalable architecture. Indeed, the new publisher is multithreaded and multiprocess. Multiple instances of the modules can run on different machines to ensure high performance and high availability, both for the service interface modules and for the back-end data access modules. The system uses message passing to let its components communicate through an AMQP message broker that can itself be distributed to provide better scalability and availability.
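
    The message-passing architecture can be sketched with a generic AMQP client. The snippet below, using the pika library against a placeholder broker and queue, shows a back-end data-access worker consuming requests from a shared queue so that multiple instances on different machines can serve it; it illustrates the pattern rather than the publisher's actual code.

```python
# Sketch: a back-end data-access worker consuming requests from a shared AMQP
# queue. Multiple such workers, on different hosts, can serve the same queue.
import pika

BROKER_HOST = "amqp-broker.example.org"   # placeholder broker address
QUEUE = "vo.data_access"                  # hypothetical queue name

def handle_request(channel, method, properties, body):
    print(f"serving request: {body!r}")
    # ... perform the data access and publish the reply elsewhere ...
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host=BROKER_HOST))
channel = connection.channel()
channel.queue_declare(queue=QUEUE, durable=True)
channel.basic_qos(prefetch_count=1)       # spread work evenly across instances
channel.basic_consume(queue=QUEUE, on_message_callback=handle_request)
channel.start_consuming()                 # blocks; run one worker per host
```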

  16. Sexual orientation of adolescent girls.

    PubMed

    Frankowski, Barbara L

    2002-12-01

    It is important for healthcare providers to have a clear understanding of sexual orientation and other components of sexual identity (genetic gender, anatomic gender, gender identity, gender role, and sexual behavior). Knowledge of how a lesbian identity is formed will aid providers in guiding these girls through adolescence. Societal stigma often forces isolation that leads to many risky behaviors that affect health (alcohol and drug use; risky sexual behaviors; truancy and dropping out; running away and homelessness; and depression and suicide). Health providers need to ensure a safe and understanding environment for these girls, to enhance their physical, emotional, and social development to healthy adulthood.

  17. Quality and Efficiency Improvement Tools for Every Radiologist.

    PubMed

    Kudla, Alexei U; Brook, Olga R

    2018-06-01

    In an era of value-based medicine, data-driven quality improvement is more important than ever to ensure safe and efficient imaging services. Familiarity with high-value tools enables all radiologists to successfully engage in quality and efficiency improvement. In this article, we review the model for improvement, strategies for measurement, and common practical tools with real-life examples that include Run chart, Control chart (Shewhart chart), Fishbone (Cause-and-Effect or Ishikawa) diagram, Pareto chart, 5 Whys, and Root Cause Analysis. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  18. [The fight of the communist authorities with the Catholic Church in the health service in Poland (1945-1970)].

    PubMed

    Jastrzebowski, Zbigniew

    2005-01-01

    The article shows the process of nationalization of Polish hospitals run by religious congregations and the elimination of priests from medical care. The process lasted until the resignation of Władysław Gomułka from the post of first secretary of the Polish United Workers' Party. Two periods of the process can be distinguished: 1948-1953, when the congregation hospitals were nationalized, and 1960-1970, when the presence of the Catholic Church in the health service was limited to an indispensable minimum.

  19. Continuing and developing the engagement with Mediterranean stakeholders in the CLIM-RUN project

    NASA Astrophysics Data System (ADS)

    Goodess, Clare

    2013-04-01

    The CLIM-RUN case studies provide a real-world and Mediterranean context for bringing together experts on the demand and supply side of climate services. They are essential to the CLIM-RUN objective of using iterative and bottom-up (i.e., stakeholder led) approaches for optimizing the two-way information transfer between climate experts and stakeholders - and focus on specific locations and sectors (such as tourism and renewable energy). Stakeholder involvement has been critical from the start of the project in March 2011, with an early series of targeted workshops used to define the framework for each case study as well as the needs of stakeholders. Following these workshops, the user needs were translated into specific requirements from climate observations and models and areas identified where additional modelling and analysis are required. The first set of new products and tools produced by the CLIM-RUN modelling and observational experts are presented in a series of short briefing notes. A second round of CLIM-RUN stakeholder workshops will be held for each of the case studies in Spring 2013 as an essential part of the fourth CLIM-RUN key stage: Consolidation and collective review/assessment. During these workshops the process of interaction between CLIM-RUN scientists and case-study stakeholders will be reviewed, as well as the utility of the products and information developed in CLIM-RUN. Review questions will include: How far have we got? How successful have we been? What are the remaining problems/gaps? How to sustain and extend the interactions? The process of planning for and running these second workshops will be outlined and emerging outcomes presented, focusing on common messages which are relevant for development of the CLIM-RUN protocol for providing improved climate services to stakeholders together with the identification of best practices and policy recommendations for climate services development.

  20. Stream Restoration to Manage Nutrients in Degraded Watersheds

    EPA Science Inventory

    Historic land-use change can reduce water quality by impairing the ability of stream ecosystems to efficiently process nutrients such as nitrogen. Study results of two streams (Minebank Run and Big Spring Run) affected by urbanization, quarrying, agriculture, and impoundments in...

  1. The Role of Independent V&V in Upstream Software Development Processes

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve

    1996-01-01

    This paper describes the role of Verification and Validation (V&V) during the requirements and high level design processes, and in particular the role of Independent V&V (IV&V). The job of IV&V during these phases is to ensure that the requirements are complete, consistent and valid, and to ensure that the high level design meets the requirements. This contrasts with the role of Quality Assurance (QA), which ensures that appropriate standards and process models are defined and applied. This paper describes the current state of practice for IV&V, concentrating on the process model used in NASA projects. We describe a case study, showing the processes by which problem reporting and tracking takes place, and how IV&V feeds into decision making by the development team. We then describe the problems faced in implementing IV&V. We conclude that despite a well defined process model, and tools to support it, IV&V is still beset by communication and coordination problems.

  2. Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking.

    PubMed

    Yeoh, Ivan L; Reinhall, Per G; Berg, Martin C; Chizeck, Howard J; Seibel, Eric J

    2017-09-01

    A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward with exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided of a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.
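
    The essence of such a scheme is an iterative, run-indexed update of the input parameterization from a sparse per-run measurement. The sketch below shows a generic run-to-run update driven by a scalar measurement; the plant, gain and parameter names are placeholders and this is not the authors' controller.

```python
# Generic run-to-run update sketch: after each repetition, nudge the input
# parameter to reduce a scalar error measure computed from sparse measurements.
def run_to_run(plant, u0: float, target: float, gain: float = 0.5, n_runs: int = 20):
    """plant(u) -> scalar measurement (e.g., an energy measure) for one run."""
    u = u0
    for k in range(n_runs):
        y = plant(u)                  # one repetition of the process
        error = target - y
        u = u + gain * error          # integral action in the iteration domain
        print(f"run {k:2d}: u = {u:.4f}, measurement = {y:.4f}")
    return u

# Toy plant with a slowly drifting response (e.g., a temperature dependence).
state = {"drift": 0.0}
def toy_plant(u: float) -> float:
    state["drift"] += 0.005
    return 0.8 * u + state["drift"]

run_to_run(toy_plant, u0=0.0, target=1.0)
```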

  3. Within-Subject Correlation Analysis to Detect Functional Areas Associated With Response Inhibition.

    PubMed

    Yamasaki, Tomoko; Ogawa, Akitoshi; Osada, Takahiro; Jimura, Koji; Konishi, Seiki

    2018-01-01

    Functional areas in fMRI studies are often detected by brain-behavior correlation, calculating across-subject correlation between the behavioral index and the brain activity related to a function of interest. Within-subject correlation analysis is also employed at the single-subject level, which utilizes cognitive fluctuations in a shorter time period by correlating the behavioral index with the brain activity across trials. In the present study, the within-subject analysis was applied to the stop-signal task, a standard task to probe response inhibition, where efficiency of response inhibition can be evaluated by the stop-signal reaction time (SSRT). Since the SSRT is estimated, by definition, not on a trial basis but from pooled trials, the correlation across runs was calculated between the SSRT and the brain activity related to response inhibition. The within-subject correlation revealed negative correlations in the anterior cingulate cortex and the cerebellum. Moreover, the dissociation pattern was observed in the within-subject analysis when earlier vs. later parts of the runs were analyzed: negative correlation was dominant in earlier runs, whereas positive correlation was dominant in later runs. Regions of interest analyses revealed that the negative correlation in the anterior cingulate cortex, but not in the cerebellum, was dominant in earlier runs, suggesting multiple mechanisms associated with inhibitory processes that fluctuate on a run-by-run basis. These results indicate that the within-subject analysis complements the across-subject analysis by highlighting different aspects of cognitive/affective processes related to response inhibition.
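
    Conceptually, the run-wise analysis reduces to correlating, within one subject, a per-run behavioral estimate (the SSRT) with a per-run activity estimate for a region of interest, and contrasting earlier with later runs. A minimal sketch of that computation follows; the arrays are placeholder data and no claim is made about the authors' exact pipeline.

```python
# Sketch: within-subject correlation across runs between per-run SSRT estimates
# and per-run inhibition-related activity for one region of interest.
import numpy as np
from scipy.stats import pearsonr

# Placeholder data: one subject, 10 runs.
ssrt_per_run = np.array([231, 225, 240, 218, 250, 236, 229, 244, 222, 238], float)
roi_beta_per_run = np.array([0.9, 1.1, 0.7, 1.3, 0.5, 0.8, 1.0, 0.6, 1.2, 0.7])

r, p = pearsonr(ssrt_per_run, roi_beta_per_run)
print(f"within-subject run-wise correlation: r = {r:.2f}, p = {p:.3f}")

# Earlier vs. later runs can be contrasted by splitting along the run axis.
r_early, _ = pearsonr(ssrt_per_run[:5], roi_beta_per_run[:5])
r_late, _ = pearsonr(ssrt_per_run[5:], roi_beta_per_run[5:])
print(f"early runs: r = {r_early:.2f}; late runs: r = {r_late:.2f}")
```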

  4. Coastal and Submesoscale Process Studies for ASIRI

    DTIC Science & Technology

    2017-01-30

    of upper ocean processes and air-sea interaction in the Bay of Bengal. This, in the long run, would contribute toward improving the intra-seasonal Monsoonal forecast in...different times of year, and to understand its relationship with air-sea fluxes of heat and moisture in the Bay of Bengal. 2. To determine what

  5. Accelerating Molecular Dynamic Simulation on Graphics Processing Units

    PubMed Central

    Friedrichs, Mark S.; Eastman, Peter; Vaidyanathan, Vishal; Houston, Mike; Legrand, Scott; Beberg, Adam L.; Ensign, Daniel L.; Bruns, Christopher M.; Pande, Vijay S.

    2009-01-01

    We describe a complete implementation of all-atom protein molecular dynamics running entirely on a graphics processing unit (GPU), including all standard force field terms, integration, constraints, and implicit solvent. We discuss the design of our algorithms and important optimizations needed to fully take advantage of a GPU. We evaluate its performance, and show that it can be more than 700 times faster than a conventional implementation running on a single CPU core. PMID:19191337

  6. Laser Doppler velocimeter system simulation for sensing aircraft wake vortices. Part 2: Processing and analysis of LDV data (for runs 1023 and 2023)

    NASA Technical Reports Server (NTRS)

    Meng, J. C. S.; Thomson, J. A. L.

    1975-01-01

    A data analysis program constructed to assess LDV system performance, to validate the simulation model, and to test various vortex location algorithms is presented. Real or simulated Doppler spectra versus range and elevation are used, and the spatial distributions of various spectral moments or other spectral characteristics are calculated and displayed. Each of the real or simulated scans can be processed by one of three different procedures: simple frequency or wavenumber filtering, matched filtering, and deconvolution filtering. The final output is displayed as contour plots in an x-y coordinate system, as well as in the form of vortex tracks deduced from the maxima of the processed data. A detailed analysis of run number 1023 and run number 2023 is presented to demonstrate the data analysis procedure. Vortex tracks and system range resolutions are compared with theoretical predictions.
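
    Of the three procedures listed, matched filtering is the simplest to illustrate: cross-correlate each spectrum with a template of the expected signature and take the peak of the correlation. The sketch below is a generic one-dimensional illustration with a synthetic template and synthetic noise, not the original processing code.

```python
# Generic matched-filter sketch: correlate a noisy signal with a known
# template and locate the best-matching position.
import numpy as np

rng = np.random.default_rng(1)
n = 512
template = np.exp(-0.5 * (np.arange(-25, 26) / 6.0) ** 2)   # assumed signature

signal = 0.2 * rng.standard_normal(n)                       # background noise
true_pos = 300
signal[true_pos - 25:true_pos + 26] += template              # embed the signature

corr = np.correlate(signal, template, mode="same")           # matched filter
estimate = int(np.argmax(corr))
print(f"true position {true_pos}, matched-filter estimate {estimate}")
```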

  7. The direction of cloud computing for Malaysian education sector in 21st century

    NASA Astrophysics Data System (ADS)

    Jaafar, Jazurainifariza; Rahman, M. Nordin A.; Kadir, M. Fadzil A.; Shamsudin, Syadiah Nor; Saany, Syarilla Iryani A.

    2017-08-01

    In the 21st century, technology has turned the learning environment into a new way of education, making learning systems more effective and systematic. Nowadays, education institutions face many challenges in ensuring that the teaching and learning process runs smoothly and manageably. Some of the challenges in current education management are the lack of integrated systems, high maintenance costs, difficulty of configuration and deployment, and the complexity of storage provision. Digital learning is an instructional practice that uses technology to make the learning experience more effective and the education process more systematic and attractive. Digital learning can be considered one of the prominent applications implemented under a cloud computing environment. Cloud computing is a type of network resource that provides on-demand services where users can access applications from any location and at any time. It also promises to minimize maintenance costs and provides flexible data storage capacity. The aim of this article is to review the definition and types of cloud computing for improving digital learning management as required in 21st century education. The analysis of the digital learning context focuses on primary schools in Malaysia. Types of cloud applications and services in the education sector are also discussed. Finally, a gap analysis and directions for cloud computing in the education sector to face 21st century challenges are suggested.

  8. Health and safety issues pertaining to genetically modified foods.

    PubMed

    Goodyear-Smith, F

    2001-08-01

    Genetic modification involves the insertion of genes from other organisms (within or between species) into host cells to select for desirable qualities. Potential benefits of GM foods include increased nutritional value; reduced allergenicity; pest and disease-resistance; and enhanced processing value. Possible detrimental outcomes include producing foods with novel toxins, allergens or reduced nutritional value, and development of antibiotic resistance or herbicide-resistant weeds. Benefits to individuals or populations need to be weighed against adverse health and environmental risks, and may differ between developing and Westernised countries. Whether testing and monitoring should exceed requirements for conventional foods is under debate. While not necessarily scientifically justifiable, consumer concerns have resulted in Australian and New Zealand requirements to label foods containing GM-produced proteins. Dissatisfied consumer advocacy groups are calling for all foods involving GM technology to be labelled, irrelevant of whether the final product contains novel protein. Goals to improve the quantity, quality and safety of foods are laudable; however, the primary aim of the bio-food industry is financial gain. GM foods may be as safe as conventional foods but public distrust runs high. It is important that discussion is informed by science and that claims of both benefits and risks are evidence-based, to ensure that the process is driven neither by the vested interest of the bio-technical multinational companies on the one hand, nor ill-informed public fears on the other.

  9. Functional Fault Modeling of a Cryogenic System for Real-Time Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Lewis, Mark; Oostdyk, Rebecca; Perotti, Jose

    2009-01-01

    When setting out to model and/or simulate a complex mechanical or electrical system, a modeler is faced with a vast array of tools, software, equations, algorithms and techniques that may individually or in concert aid in the development of the model. Mature requirements and a well understood purpose for the model may considerably shrink the field of possible tools and algorithms that will suit the modeling solution. Is the model intended to be used in an offline fashion or in real-time? On what platform does it need to execute? How long will the model be allowed to run before it outputs the desired parameters? What resolution is desired? Do the parameters need to be qualitative or quantitative? Is it more important to capture the physics or the function of the system in the model? Does the model need to produce simulated data? All these questions and more will drive the selection of the appropriate tools and algorithms, but the modeler must be diligent to bear in mind the final application throughout the modeling process to ensure the model meets its requirements without needless iterations of the design. The purpose of this paper is to describe the considerations and techniques used in the process of creating a functional fault model of a liquid hydrogen (LH2) system that will be used in a real-time environment to automatically detect and isolate failures.

  10. A Monte Carlo risk assessment model for acrylamide formation in French fries.

    PubMed

    Cummins, Enda; Butler, Francis; Gormley, Ronan; Brunton, Nigel

    2009-10-01

    The objective of this study is to estimate the likely human exposure to the group 2a carcinogen, acrylamide, from French fries by Irish consumers by developing a quantitative risk assessment model using Monte Carlo simulation techniques. Various stages in the French-fry-making process were modeled from initial potato harvest, storage, and processing procedures. The model was developed in Microsoft Excel with the @Risk add-on package. The model was run for 10,000 iterations using Latin hypercube sampling. The simulated mean acrylamide level in French fries was calculated to be 317 microg/kg. It was found that females are exposed to smaller levels of acrylamide than males (mean exposure of 0.20 microg/kg bw/day and 0.27 microg/kg bw/day, respectively). Although the carcinogenic potency of acrylamide is not well known, the simulated probability of exceeding the average chronic human dietary intake of 1 microg/kg bw/day (as suggested by WHO) was 0.054 and 0.029 for males and females, respectively. A sensitivity analysis highlighted the importance of the selection of appropriate cultivars with known low reducing sugar levels for French fry production. Strict control of cooking conditions (correlation coefficient of 0.42 and 0.35 for frying time and temperature, respectively) and blanching procedures (correlation coefficient -0.25) were also found to be important in ensuring minimal acrylamide formation.
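
    A heavily reduced illustration of the simulation structure, not the published model: draw Latin hypercube samples of a few assumed inputs (reducing sugar level, frying temperature and time, consumption), propagate them through an assumed dose relation, and estimate the probability of exceeding 1 microgram/kg bw/day. All distributions and coefficients below are placeholders.

```python
# Reduced Monte Carlo sketch (not the published model): Latin hypercube
# sampling of assumed inputs, propagation to exposure, exceedance probability.
import numpy as np
from scipy.stats import qmc, norm

N = 10_000
sampler = qmc.LatinHypercube(d=4, seed=0)
u = sampler.random(N)                      # uniform LHS samples in [0, 1)^4

# Placeholder input distributions (illustrative only).
sugar = np.clip(norm.ppf(u[:, 0], loc=0.3, scale=0.1), 0.05, None)   # % fresh weight
fry_temp = 160 + 30 * u[:, 1]                                        # degC
fry_time = 2 + 4 * u[:, 2]                                           # minutes
intake = np.clip(norm.ppf(u[:, 3], loc=1.5, scale=0.6), 0.1, None)   # g fries/kg bw/day

# Assumed response surface for acrylamide in fries (microgram/kg product).
acrylamide = 400 * sugar * (fry_temp / 170) ** 4 * (fry_time / 3)

exposure = acrylamide * intake / 1000      # microgram/kg bw/day
print(f"mean exposure: {exposure.mean():.3f} ug/kg bw/day")
print(f"P(exposure > 1 ug/kg bw/day): {(exposure > 1).mean():.3f}")
```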

  11. Barreloid Borders and Neuronal Activity Shape Panglial Gap Junction-Coupled Networks in the Mouse Thalamus.

    PubMed

    Claus, Lena; Philippot, Camille; Griemsmann, Stephanie; Timmermann, Aline; Jabs, Ronald; Henneberger, Christian; Kettenmann, Helmut; Steinhäuser, Christian

    2018-01-01

    The ventral posterior nucleus of the thalamus plays an important role in somatosensory information processing. It contains elongated cellular domains called barreloids, which are the structural basis for the somatotopic organization of vibrissae representation. So far, the organization of glial networks in these barreloid structures and its modulation by neuronal activity has not been studied. We have developed a method to visualize thalamic barreloid fields in acute slices. Combining electrophysiology, immunohistochemistry, and electroporation in transgenic mice with cell type-specific fluorescence labeling, we provide the first structure-function analyses of barreloidal glial gap junction networks. We observed coupled networks, which comprised both astrocytes and oligodendrocytes. The spread of tracers or a fluorescent glucose derivative through these networks was dependent on neuronal activity and limited by the barreloid borders, which were formed by uncoupled or weakly coupled oligodendrocytes. Neuronal somata were distributed homogeneously across barreloid fields with their processes running in parallel to the barreloid borders. Many astrocytes and oligodendrocytes were not part of the panglial networks. Thus, oligodendrocytes are the cellular elements limiting the communicating panglial network to a single barreloid, which might be important to ensure proper metabolic support to active neurons located within a particular vibrissae signaling pathway. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. Distributed run of a one-dimensional model in a regional application using SOAP-based web services

    NASA Astrophysics Data System (ADS)

    Smiatek, Gerhard

    This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
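
    The Perl/SOAP implementation itself is not reproduced here; the fragment below only sketches the same distribution pattern in Python, where a pool of threads dispatches independent one-dimensional model runs to a fixed set of worker hosts. The host list and the call_model_on_host() placeholder are hypothetical stand-ins for the remote SOAP web-service invocation.

    ```python
    # Sketch of the work-distribution pattern only; the original system is Perl/SOAP.
    # call_model_on_host() stands in for the remote web-service call and is hypothetical.
    from concurrent.futures import ThreadPoolExecutor

    HOSTS = ["node1", "node2", "node3"]  # assumed worker hosts exposing the model service

    def call_model_on_host(host, grid_cell):
        """Placeholder for invoking the 1-D model web service on a remote host."""
        # A real client would call the service's model-run operation here.
        return {"cell": grid_cell, "host": host, "emission": 0.0}

    def run_region(grid_cells):
        # Cells are assigned to hosts round-robin; a thread pool issues the remote
        # calls concurrently so all hosts stay busy, mirroring the multi-thread
        # dispatcher described in the article.
        with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
            futures = [pool.submit(call_model_on_host, HOSTS[i % len(HOSTS)], c)
                       for i, c in enumerate(grid_cells)]
            return [f.result() for f in futures]

    print(len(run_region(range(100))))
    ```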

  13. Shared address collectives using counter mechanisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blocksome, Michael; Dozsa, Gabor; Gooding, Thomas M

    A shared address space on a compute node stores data received from a network and data to transmit to the network. The shared address space includes an application buffer that can be directly operated upon by a plurality of processes, for instance, running on different cores on the compute node. A shared counter is used for one or more of signaling arrival of the data across the plurality of processes running on the compute node, signaling completion of an operation performed by one or more of the plurality of processes, obtaining reservation slots by one or more of the plurality of processes, or combinations thereof.
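
    As a rough analogue of the mechanism described above, the sketch below uses Python's multiprocessing shared memory to emulate a shared application buffer plus arrival and completion counters across several processes; it illustrates the signaling pattern only, not the compute-node implementation in the record.

    ```python
    # Minimal analogue of shared arrival/completion counters, using Python's
    # multiprocessing shared memory instead of a compute node's shared address space.
    from multiprocessing import Process, Value, Array

    def worker(rank, arrived, done, buf):
        with arrived.get_lock():
            arrived.value += 1        # signal arrival of this process's data
        buf[rank] = rank * 10         # operate directly on the shared "application buffer"
        with done.get_lock():
            done.value += 1           # signal completion of the operation

    if __name__ == "__main__":
        nproc = 4
        arrived = Value("i", 0)       # shared counters
        done = Value("i", 0)
        buf = Array("i", nproc)       # shared application buffer
        procs = [Process(target=worker, args=(r, arrived, done, buf)) for r in range(nproc)]
        for p in procs: p.start()
        for p in procs: p.join()
        print(arrived.value, done.value, list(buf))
    ```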

  14. Radar Unix: a complete package for GPR data processing

    NASA Astrophysics Data System (ADS)

    Grandjean, Gilles; Durand, Herve

    1999-03-01

    A complete package for ground penetrating radar data interpretation, including data processing, forward modeling and consultation of a case history database, is presented. Running on a Unix operating system, its architecture consists of a graphical user interface generating batch files that are transmitted to a library of processing routines. This design allows better software maintenance and gives the user the possibility to run processing or modeling batch files independently and deferred in time. A case history database is available; it consists of a hypertext document that can be consulted using a standard HTML browser. All the software specifications are presented through a realistic example.

  15. Streaming data analytics via message passing with application to graph algorithms

    DOE PAGES

    Plimpton, Steven J.; Shead, Tim

    2014-05-06

    The need to process streaming data, which arrives continuously at high-volume in real-time, arises in a variety of contexts including data produced by experiments, collections of environmental or network sensors, and running simulations. Streaming data can also be formulated as queries or transactions which operate on a large dynamic data store, e.g. a distributed database. We describe a lightweight, portable framework named PHISH which enables a set of independent processes to compute on a stream of data in a distributed-memory parallel manner. Datums are routed between processes in patterns defined by the application. PHISH can run on top of either message-passing via MPI or sockets via ZMQ. The former means streaming computations can be run on any parallel machine which supports MPI; the latter allows them to run on a heterogeneous, geographically dispersed network of machines. We illustrate how PHISH can support streaming MapReduce operations, and describe streaming versions of three algorithms for large, sparse graph analytics: triangle enumeration, subgraph isomorphism matching, and connected component finding. Lastly, we also provide benchmark timings for MPI versus socket performance of several kernel operations useful in streaming algorithms.
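
    PHISH's own API is not reproduced here; the sketch below only illustrates the underlying pattern of routing datums between independent workers over sockets, using pyzmq push/pull sockets and a toy per-vertex degree count as the streaming graph computation.

    ```python
    # Not the PHISH API: a minimal illustration of streaming datums between
    # independent workers over ZeroMQ push/pull sockets (pyzmq), one of the
    # transports PHISH supports.
    import threading, zmq

    ctx = zmq.Context.instance()
    push = ctx.socket(zmq.PUSH)
    push.bind("inproc://stream")                      # bind before workers connect

    def worker():
        pull = ctx.socket(zmq.PULL)
        pull.connect("inproc://stream")
        degree = {}
        while True:
            msg = pull.recv_json()
            if msg["edge"] is None:                   # end-of-stream marker
                break
            u, v = msg["edge"]
            degree[u] = degree.get(u, 0) + 1          # incremental per-vertex degree count
        pull.close()
        print("vertices seen:", len(degree))

    t = threading.Thread(target=worker)
    t.start()
    for i in range(100):
        push.send_json({"edge": (i, (i * 7) % 13)})   # stream of edge datums
    push.send_json({"edge": None})
    t.join()
    push.close()
    ```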

  16. AgMIP 1.5°C Assessment: Mitigation and Adaptation at Coordinated Global and Regional Scales

    NASA Astrophysics Data System (ADS)

    Rosenzweig, C.

    2016-12-01

    The AgMIP 1.5°C Coordinated Global and Regional Integrated Assessments of Climate Change and Food Security (AgMIP 1.5 CGRA) is linking site-based crop and livestock models with similar models run on global grids, and then links these biophysical components with economics models and nutrition metrics at regional and global scales. The AgMIP 1.5 CGRA assessment brings together experts in climate, crop, livestock, economics, nutrition, and food security to define the 1.5°C Protocols and guide the process throughout the assessment. Scenarios are designed to consistently combine elements of intertwined storylines of future society including socioeconomic development (Shared Socioeconomic Pathways), greenhouse gas concentrations (Representative Concentration Pathways), and specific pathways of agricultural sector development (Representative Agricultural Pathways). Shared Climate Policy Assumptions will be extended to provide additional agricultural detail on mitigation and adaptation strategies. The multi-model, multi-disciplinary, multi-scale integrated assessment framework is using scenarios of economic development, adaptation, mitigation, food policy, and food security. These coordinated assessments are grounded in the expertise of AgMIP partners around the world, leading to more consistent results and messages for stakeholders, policymakers, and the scientific community. The early inclusion of nutrition and food security experts has helped to ensure that assessment outputs include important metrics upon which investment and policy decisions may be based. The CGRA builds upon existing AgMIP research groups (e.g., the AgMIP Wheat Team and the AgMIP Global Gridded Crop Modeling Initiative; GGCMI) and regional programs (e.g., AgMIP Regional Teams in Sub-Saharan Africa and South Asia), with new protocols for cross-scale and cross-disciplinary linkages to ensure the propagation of expert judgment and consistent assumptions.

  17. Reproducible Bioconductor workflows using browser-based interactive notebooks and containers.

    PubMed

    Almugbel, Reem; Hung, Ling-Hong; Hu, Jiaming; Almutairy, Abeer; Ortogero, Nicole; Tamta, Yashaswi; Yeung, Ka Yee

    2018-01-01

    Bioinformatics publications typically include complex software workflows that are difficult to describe in a manuscript. We describe and demonstrate the use of interactive software notebooks to document and distribute bioinformatics research. We provide a user-friendly tool, BiocImageBuilder, that allows users to easily distribute their bioinformatics protocols through interactive notebooks uploaded to either a GitHub repository or a private server. We present four different interactive Jupyter notebooks using R and Bioconductor workflows to infer differential gene expression, analyze cross-platform datasets, process RNA-seq data and KinomeScan data. These interactive notebooks are available on GitHub. The analytical results can be viewed in a browser. Most importantly, the software contents can be executed and modified. This is accomplished using Binder, which runs the notebook inside software containers, thus avoiding the need to install any software and ensuring reproducibility. All the notebooks were produced using custom files generated by BiocImageBuilder. BiocImageBuilder facilitates the publication of workflows with a point-and-click user interface. We demonstrate that interactive notebooks can be used to disseminate a wide range of bioinformatics analyses. The use of software containers to mirror the original software environment ensures reproducibility of results. Parameters and code can be dynamically modified, allowing for robust verification of published results and encouraging rapid adoption of new methods. Given the increasing complexity of bioinformatics workflows, we anticipate that these interactive software notebooks will become as necessary for documenting software methods as traditional laboratory notebooks have been for documenting bench protocols, and as ubiquitous. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    DiCostanzo, D; Ayan, A; Woollard, J

    Purpose: To automate the daily verification of each patient’s treatment by utilizing the trajectory log files (TLs) written by the Varian TrueBeam linear accelerator while reducing the number of false positives, such as jaw and gantry positioning errors, that are displayed in the Treatment History tab of Varian’s Chart QA module. Methods: Small deviations in treatment parameters are difficult to detect in weekly chart checks, but may be significant in reducing delivery errors, and would be critical if detected daily. Software was developed in house to read TLs. Multiple functions were implemented within the software that allow it to operate via a GUI to analyze TLs, or as a script to run on a regular basis. In order to determine tolerance levels for the scripted analysis, 15,241 TLs from seven TrueBeams were analyzed. The maximum error of each axis for each TL was written to a CSV file and statistically analyzed to determine the tolerance for each axis accessible in the TLs to flag for manual review. The software/scripts developed were tested by varying the tolerance values to ensure veracity. After tolerances were determined, multiple weeks of manual chart checks were performed simultaneously with the automated analysis to ensure validity. Results: The tolerance values for the major axes were determined to be 0.025 degrees for the collimator, 1.0 degree for the gantry, 0.002 cm for the y-jaws, 0.01 cm for the x-jaws, and 0.5 MU for the MU. The automated verification of treatment parameters has been in clinical use for 4 months. During that time, no errors in machine delivery of the patient treatments were found. Conclusion: The process detailed here is a viable and effective alternative to manually checking treatment parameters during weekly chart checks.
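
    A minimal version of the scripted tolerance check could look like the sketch below, which screens a CSV of per-axis maximum errors against the tolerances quoted above; the column names and file layout are assumptions, since the in-house software is not public.

    ```python
    # Sketch of a scripted tolerance check over per-axis maximum errors; the
    # in-house reader is not public, so the CSV column names below are assumed.
    import csv

    TOLERANCES = {                 # values quoted in the abstract
        "collimator_deg": 0.025,
        "gantry_deg": 1.0,
        "y_jaw_cm": 0.002,
        "x_jaw_cm": 0.01,
        "mu": 0.5,
    }

    def flag_trajectory_logs(csv_path):
        """Return rows whose per-axis maximum error exceeds its tolerance."""
        flagged = []
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                errors = {axis: abs(float(row[axis]))
                          for axis in TOLERANCES if axis in row}
                failures = {a: e for a, e in errors.items() if e > TOLERANCES[a]}
                if failures:
                    flagged.append((row.get("log_file", "?"), failures))
        return flagged

    # Example: flag_trajectory_logs("daily_max_errors.csv")
    ```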

  19. Balance disorders caused by running and jumping occurring in young basketball players.

    PubMed

    Struzik, Artur; Zawadzki, Jerzy; Pietraszewski, Bogdan

    2015-01-01

    Body balance, as one of the coordination abilities, is a desirable variable for basketball players, given the need for efficient responses in constantly changing situations on a basketball court. The aim of this study was to check whether physical activity in the form of running and jumping influences variables characterizing the process of keeping body balance of a basketball player in the standing position. The research was conducted on 11 young basketball players. The measurements were taken with a Kistler force plate. Apart from commonly registered COP displacements, an additional variable describing the process of keeping body balance by a basketball player was ankle joint stiffness, on the basis of which an "Index of Balance-Stiffness" (IB-S) was created. Statistically significant differences were obtained for the maximum COP displacements and ankle joint stiffness between measurements of balance in the standing position before and after the employed movement tasks, whereas there were no statistically significant differences for the aforementioned variables between measurements taken after running and after jumping. The results indicate that the employed movement activities brought about significant changes in the process of keeping balance in the standing position, and that after the run these changes remain at a level similar to that observed after the series of jumps. The authors attempted to establish a stiffness-based index that makes it possible to assess each basketball player individually in the process of keeping balance.

  20. Advanced overlay: sampling and modeling for optimized run-to-run control

    NASA Astrophysics Data System (ADS)

    Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.

    2016-03-01

    In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges by new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process or metrology induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, that is, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to estimate stability and ultimately high volume manufacturing tests to monitor OPO by densely measured OVL data.
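
    The bias-variance trade-off described above can be illustrated with a toy experiment: fitting polynomial models of increasing order to noisy synthetic overlay signatures and measuring squared bias and variance across simulated wafers. The sketch below is purely illustrative and is not the modeling algorithm used in the paper.

    ```python
    # Toy illustration of the bias-variance trade-off in model order selection,
    # on synthetic 1-D "overlay" signatures; not the proprietary modeling algorithm.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 40)                      # sample locations across a wafer
    true_signature = 5 * x - 3 * x**3               # underlying overlay signature (nm)

    def fit_stats(order, n_wafers=200, noise=1.0):
        preds = []
        for _ in range(n_wafers):
            y = true_signature + rng.normal(0, noise, x.size)   # wafer-to-wafer noise
            coef = np.polyfit(x, y, order)
            preds.append(np.polyval(coef, x))
        preds = np.array(preds)
        bias2 = np.mean((preds.mean(axis=0) - true_signature) ** 2)
        variance = np.mean(preds.var(axis=0))
        return bias2, variance

    for order in (1, 3, 9):
        b2, var = fit_stats(order)
        print(f"order {order}: bias^2 = {b2:.3f}, variance = {var:.3f}")
    ```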

  1. Nitrogen conservation in simulated food waste aerobic composting process with different Mg and P salt mixtures.

    PubMed

    Li, Yu; Su, Bensheng; Liu, Jianlin; Du, Xianyuan; Huang, Guohe

    2011-07-01

    To assess the effects of three types of Mg and P salt mixtures (potassium phosphate [K3PO4]/magnesium sulfate [MgSO4], dipotassium hydrogen phosphate [K2HPO4]/MgSO4, and potassium dihydrogen phosphate [KH2PO4]/MgSO4) on the conservation of N and the biodegradation of organic materials in an aerobic food waste composting process, batch experiments were undertaken in four reactors (each with an effective volume of 30 L). The synthetic food waste was composed of potatoes, rice, carrots, leaves, meat, soybeans, and seed soil, and the C:N ratio was 17:1. Runs R1-R3 were conducted with the addition of K3PO4/MgSO4, K2HPO4/MgSO4, and KH2PO4/MgSO4 mixtures, respectively; run R0 was a blank performed without the addition of Mg and P salts. After composting for 25 days, the degrees of degradation of the organic materials in runs R0-R3 were 53.87, 62.58, 59.14, and 49.13%, respectively. X-ray diffraction indicated that struvite crystals were formed in runs R1-R3 but not in run R0; the gaseous ammonia nitrogen (NH3-N) losses in runs R0-R3 were 21.2, 32.8, 12.6, and 3.5% of the initial total N, respectively. Of the tested Mg/P salt mixtures, the K2HPO4/MgSO4 system provided the best combination of conservation of N and biodegradation of organic materials in this food waste composting process.

  2. Development of Advanced Czochralski Growth Process to produce low cost 150 KG silicon ingots from a single crucible for technology readiness

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The modified CG2000 crystal grower construction, installation, and machine check-out was completed. The process development check-out proceeded with several dry runs and one growth run. Several machine calibrations and functional problems were discovered and corrected. Several exhaust gas analysis system alternatives were evaluated and an integrated system approved and ordered. A contract presentation was made at the Project Integration Meeting at JPL, including cost-projections using contract projected throughput and machine parameters. Several growth runs on a development CG200 RC grower show that complete neck, crown, and body automated growth can be achieved with only one operator input. Work continued for melt level, melt temperature, and diameter sensor development.

  3. Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors

    NASA Technical Reports Server (NTRS)

    Boussalis, Dhemetrios; Bayard, David S.

    2013-01-01

    G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run. This is in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, masscons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to run on any engineer's desktop computer.
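
    The core idea, propagating the covariance of the design through linearized dynamics instead of simulating thousands of trajectories, can be shown with a deliberately small example. The two-state sketch below stands in for G-CAT's much larger square-root filter; the transition and noise matrices are assumptions.

    ```python
    # Minimal sketch of covariance propagation through linear(ized) dynamics, the
    # core idea behind covariance analysis tools; a 2-state example, not the
    # 120-plus-state filter described in the record.
    import numpy as np

    dt = 1.0
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])               # position/velocity transition (assumed)
    Q = np.diag([0.0, 1e-4])                 # assumed process noise (acceleration disturbance)

    P = np.diag([1.0, 0.01])                 # initial knowledge covariance (assumed)
    for _ in range(100):                     # propagate 100 steps
        P = F @ P @ F.T + Q

    # 3-sigma position error envelope comes directly from the covariance,
    # without running any Monte Carlo trajectories:
    print("3-sigma position error:", 3 * np.sqrt(P[0, 0]))
    ```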

  4. Purple L1 Milestone Review Panel TotalView Debugger Functionality and Performance for ASC Purple

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, M

    2006-12-12

    ASC code teams require a robust software debugging tool to help developers quickly find bugs in their codes and get their codes running. Development debugging commonly runs up to 512 processes. Production jobs run up to full ASC Purple scale, and at times require introspection while running. Developers want a debugger that runs on all their development and production platforms and that works with all compilers and runtimes used with ASC codes. The TotalView Multiprocess Debugger made by Etnus was specified for ASC Purple to address this needed capability. The ASC Purple environment builds on the environment seen by TotalView on ASCI White. The debugger must now operate with the Power5 CPU, Federation switch, AIX 5.3 operating system including large pages, IBM compilers 7 and 9, POE 4.2 parallel environment, and rs6000 SLURM resource manager. Users require robust, basic debugger functionality with acceptable performance at development debugging scale. A TotalView installation must be provided at the beginning of the early user access period that meets these requirements. A functional enhancement, fast conditional data watchpoints, and a scalability enhancement, capability up to 8192 processes, are to be demonstrated.

  5. CERN openlab: Engaging industry for innovation in the LHC Run 3-4 R&D programme

    NASA Astrophysics Data System (ADS)

    Girone, M.; Purcell, A.; Di Meglio, A.; Rademakers, F.; Gunne, K.; Pachou, M.; Pavlou, S.

    2017-10-01

    LHC Run 3 and Run 4 represent an unprecedented challenge for HEP computing in terms of both data volume and complexity. New approaches are needed for how data is collected and filtered, processed, moved, stored and analysed if these challenges are to be met with a realistic budget. To develop innovative techniques we are fostering relationships with industry leaders. CERN openlab is a unique resource for public-private partnership between CERN and leading Information and Communication Technology (ICT) companies. Its mission is to accelerate the development of cutting-edge solutions to be used by the worldwide HEP community. In 2015, CERN openlab started its phase V with a strong focus on tackling the upcoming LHC challenges. Several R&D programs are ongoing in the areas of data acquisition, networks and connectivity, data storage architectures, computing provisioning, computing platforms and code optimisation and data analytics. This paper gives an overview of the various innovative technologies that are currently being explored by CERN openlab V and discusses the long-term strategies that are pursued by the LHC communities with the help of industry in closing the technological gap in processing and storage needs expected in Run 3 and Run 4.

  6. Recursive least squares estimation and its application to shallow trench isolation

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Qin, S. Joe; Bode, Christopher A.; Purdy, Matthew A.

    2003-06-01

    In recent years, run-to-run (R2R) control technology has received tremendous interest in semiconductor manufacturing. One class of widely used run-to-run controllers is based on exponentially weighted moving average (EWMA) statistics to estimate process deviations. Using an EWMA filter to smooth the control action on a linear process has been shown to provide good results in a number of applications. However, for a process with severe drift, the EWMA controller is insufficient even when large weights are used. This problem becomes more severe when there is measurement delay, which is almost inevitable in the semiconductor industry. In order to control drifting processes, a predictor-corrector controller (PCC) and a double-EWMA controller have been developed. Chen and Guo (2001) show that both the PCC and the double-EWMA controller are in effect Integral-double-Integral (I-II) controllers, which are able to control drifting processes. However, since the offset is often within the noise of the process, the second integrator can actually cause jittering. In addition, tuning the second filter is not as intuitive as tuning a single EWMA filter. In this work, we look at an alternative approach, recursive least squares (RLS), to estimate and control the drifting process. EWMA and double-EWMA are shown to be the least squares estimates for a locally constant mean model and a locally constant linear trend model, respectively. Recursive least squares with an exponential forgetting factor is then applied to a shallow trench isolation etch process to predict the future etch rate. The etch process, which is a critical process in flash memory manufacturing, is known to suffer from significant etch rate drift due to chamber seasoning. In order to handle the metrology delay, we propose a new time update scheme. RLS with the new time update method gives very good results: the estimation error variance is smaller than that from EWMA, and the mean square error decreases by more than 10% compared to EWMA.
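
    A minimal comparison of the two estimators on a synthetic drifting etch rate is sketched below; it uses a standard RLS recursion with a forgetting factor and omits the paper's metrology-delay handling and control law, so the numbers are illustrative only.

    ```python
    # EWMA vs. recursive least squares with exponential forgetting on a synthetic
    # drifting etch rate; illustrative only (the paper's delay handling is omitted).
    import numpy as np

    rng = np.random.default_rng(3)
    T = 200
    true_rate = 100 + 0.2 * np.arange(T)            # drifting etch rate (nm/min, assumed)
    y = true_rate + rng.normal(0, 1.0, T)           # noisy run-to-run measurements

    # EWMA estimate (locally constant mean model)
    lam, ewma = 0.3, np.zeros(T)
    ewma[0] = y[0]
    for k in range(1, T):
        ewma[k] = lam * y[k] + (1 - lam) * ewma[k - 1]

    # RLS with forgetting factor on a local linear trend model y = a + b*k
    theta = np.zeros(2)                              # [a, b]
    P = np.eye(2) * 1e3
    ff = 0.95                                        # forgetting factor
    rls = np.zeros(T)
    for k in range(T):
        phi = np.array([1.0, float(k)])
        gain = P @ phi / (ff + phi @ P @ phi)
        theta = theta + gain * (y[k] - phi @ theta)
        P = (P - np.outer(gain, phi) @ P) / ff
        rls[k] = phi @ theta

    print("EWMA MSE:", np.mean((ewma - true_rate) ** 2))
    print("RLS  MSE:", np.mean((rls - true_rate) ** 2))
    ```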

  7. Internal Quality Control Practices in Coagulation Laboratories: recommendations based on a patterns-of-practice survey.

    PubMed

    McFarlane, A; Aslan, B; Raby, A; Moffat, K A; Selby, R; Padmore, R

    2015-12-01

    Internal quality control (IQC) procedures are crucial for ensuring accurate patient test results. The IQMH Centre for Proficiency Testing conducted a web-based survey to gather information on the current IQC practices in coagulation testing. A questionnaire was distributed to 174 Ontario laboratories licensed to perform prothrombin time (PT) and activated partial thromboplastin time (APTT). All laboratories reported using two levels of commercial QC (CQC); 12% incorporate pooled patient plasma into their IQC program; >68% run CQC at the beginning of each shift; 56% following maintenance, with reagent changes, during a shift, or with every repeat sample; 6% only run CQC at the beginning of the day and 25% when the instruments have been idle for a defined period of time. IQC run frequency was determined by manufacturer recommendations (71%) but also influenced by the stability of test (27%), clinical impact of an incorrect test result (25%), and sample's batch number (10%). IQC was monitored using preset limits based on standard deviation (66%), precision goals (46%), or allowable performance limits (36%). 95% use multirules. Failure actions include repeating the IQC (90%) and reporting patient results; if repeat passes, 42% perform repeat analysis of all patient samples from last acceptable IQC. Variability exists in coagulation IQC practices among Ontario clinical laboratories. The recommendations presented here would be useful in encouraging standardized IQC practices. © 2015 John Wiley & Sons Ltd.
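
    As an illustration of the multirule monitoring mentioned in the survey, the sketch below screens a series of control results against Westgard-style rules computed from a target mean and SD; the specific rule set and the example PT control values are assumptions, not survey findings.

    ```python
    # Westgard-style multirule screening of commercial QC results; the particular
    # rule set and limits below are assumptions for illustration.
    def qc_violations(values, mean, sd):
        z = [(v - mean) / sd for v in values]
        flags = []
        for i, zi in enumerate(z):
            if abs(zi) > 3:
                flags.append((i, "1-3s"))                 # one point beyond 3 SD
            if i >= 1 and z[i] > 2 and z[i - 1] > 2:
                flags.append((i, "2-2s"))                 # two consecutive beyond +2 SD
            if i >= 1 and z[i] < -2 and z[i - 1] < -2:
                flags.append((i, "2-2s"))                 # two consecutive beyond -2 SD
            if i >= 3 and all(zj > 1 for zj in z[i - 3:i + 1]):
                flags.append((i, "4-1s"))                 # four consecutive beyond +1 SD
        return flags

    # Example: a PT control with an assumed target mean of 12.0 s and SD of 0.4 s;
    # this series trips the 2-2s pair and the 1-3s point.
    print(qc_violations([12.1, 12.9, 12.95, 11.2, 13.3], mean=12.0, sd=0.4))
    ```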

  8. Exhaustive Exercise-induced Oxidative Stress Alteration of Erythrocyte Oxygen Release Capacity.

    PubMed

    Xiong, Yanlian; Xiong, Yanlei; Wang, Yueming; Zhao, Yajin; Li, Yaojin; Ren, Yang; Wang, Ruofeng; Zhao, Mingzi; Hao, Yitong; Liu, Haibei; Wang, Xiang

    2018-05-24

    The aim of the present study is to explore the effect of exhaustive running exercise (ERE) on the oxygen release capacity of rat erythrocytes. Rats were divided into sedentary control (C), moderate running exercise (MRE), and exhaustive running exercise (ERE) groups. The thermodynamic and kinetic properties of the erythrocyte oxygen release process were tested in the different groups. We also determined the degree of band-3 oxidation and phosphorylation, anion transport activity, and carbonic anhydrase isoform II (CAII) activity. Biochemical studies suggested that exhaustive running significantly increased the oxidative injury parameters TBARS and methaemoglobin. Furthermore, exhaustive running significantly decreased anion transport activity and CAII activity. Thermodynamic analysis indicated that erythrocyte oxygen release ability also significantly increased due to the elevated 2,3-DPG level after exhaustive running. Kinetic analysis indicated that exhaustive running resulted in a significantly decreased T50 value. We present evidence that exhaustive running markedly impacted the thermodynamic and kinetic properties of RBC oxygen release. In addition, changes in 2,3-DPG levels and band-3 oxidation and phosphorylation could be the driving force for the exhaustive running-induced alterations in these properties.

  9. A Novel Technique for Running the NASA Legacy Code LAPIN Synchronously With Simulations Developed Using Simulink

    NASA Technical Reports Server (NTRS)

    Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.

    2012-01-01

    This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was coded using the FORTRAN 77 (The Portland Group, Lake Oswego, OR) programming language to run in a command shell similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation that runs on a modern graphical operating system. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.

  10. A Survey of Collective Intelligence

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Tumer, Kagan

    1999-01-01

    This chapter presents the science of "COllective INtelligence" (COIN). A COIN is a large multi-agent system where: i) the agents each run reinforcement learning (RL) algorithms; ii) there is little to no centralized communication or control; iii) there is a provided world utility function that rates the possible histories of the full system. The conventional approach to designing large distributed systems to optimize a world utility does not use agents running RL algorithms. Rather, that approach begins with explicit modeling of the overall system's dynamics, followed by detailed hand-tuning of the interactions between the components to ensure that they "cooperate" as far as the world utility is concerned. This approach is labor-intensive, often results in highly non-robust systems, and usually results in design techniques that have limited applicability. In contrast, with COINs we wish to solve the system design problems implicitly, via the 'adaptive' character of the RL algorithms of each of the agents. This COIN approach introduces an entirely new, profound design problem: Assuming the RL algorithms are able to achieve high rewards, what reward functions for the individual agents will, when pursued by those agents, result in high world utility? In other words, what reward functions will best ensure that we do not have phenomena like the tragedy of the commons, or Braess's paradox? Although still very young, the science of COINs has already resulted in successes in artificial domains, in particular in packet-routing, the leader-follower problem, and in variants of Arthur's "El Farol bar problem". It is expected that as it matures not only will COIN science expand greatly the range of tasks addressable by human engineers, but it will also provide much insight into already established scientific fields, such as economics, game theory, or population biology.

  11. Level 1 Processing of MODIS Direct Broadcast Data From Terra

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Smith, Peter; Shotland, Larry; El-Ghazawi, Tarek; Zhu, Ming

    2000-01-01

    In February 2000, an effort was begun to adapt the Moderate Resolution Imaging Spectroradiometer (MODIS) Level 1 production software to process direct broadcast data. Three Level 1 algorithms have been adapted and packaged for release: Level 1A converts raw (Level 0) data into Hierarchical Data Format (HDF), unpacking packets into scans; Geolocation computes geographic information for the data points in the Level 1A; and the Level 1B computes geolocated, calibrated radiances from the Level 1A and Geolocation products. One useful aspect of adapting the production software is the ability to incorporate enhancements contributed by the MODIS Science Team. We have therefore tried to limit changes to the software. However, in order to process the data immediately on receipt, we have taken advantage of a branch in the geolocation software that reads orbit and altitude information from the packets themselves, rather than the external ancillary files used in standard production. We have also verified that the algorithms can be run with smaller time increments (2.5 minutes) than the five-minute increments used in production. To make the code easier to build and run, we have simplified directories and build scripts. Also, dependencies on a commercial numerics library have been replaced by public domain software. A version of the adapted code has been released for Silicon Graphics machines running IRIX. Perhaps owing to its origin in production, the software is rather CPU-intensive. Consequently, a port to Linux is underway, followed by a version to run on PC clusters, with an eventual goal of running in near-real-time (i.e., process a ten-minute pass in ten minutes).

  12. Parameterization of a numerical 2-D debris flow model with entrainment: a case study of the Faucon catchment, Southern French Alps

    NASA Astrophysics Data System (ADS)

    Hussin, H. Y.; Luna, B. Quan; van Westen, C. J.; Christen, M.; Malet, J.-P.; van Asch, Th. W. J.

    2012-10-01

    The occurrence of debris flows has been recorded for more than a century in the European Alps; they pose a risk to settlements and other human infrastructure and have led to deaths, building damage and traffic disruptions. One of the difficulties in the quantitative hazard assessment of debris flows is estimating the run-out behavior, which includes the run-out distance and the related hazard intensities like the height and velocity of a debris flow. In addition, as observed in the French Alps, the entrainment of material during the run-out can increase the volume to 10-50 times the initially mobilized mass triggered at the source area. The entrainment process is evidently an important factor that can further determine the magnitude and intensity of debris flows. Research on numerical modeling of debris flow entrainment is still ongoing and involves some difficulties. This is partly due to our lack of knowledge of the actual process of the uptake and incorporation of material and partly due to the effect of entrainment on the final behavior of a debris flow. Therefore, it is important to model the effects of this key erosional process on the formation of run-outs and related intensities. In this study we analyzed a debris flow with high entrainment rates that occurred in 2003 at the Faucon catchment in the Barcelonnette Basin (Southern French Alps). The historic event was back-analyzed using the Voellmy rheology and an entrainment model embedded in the RAMMS 2-D numerical modeling software. A sensitivity analysis of the rheological and entrainment parameters was carried out and the effects of modeling with entrainment on the debris flow run-out, height and velocity were assessed.

  13. Catalysts and process developments for two-stage liquefaction. Fourth quarterly technical progress report, July 1, 1991--September 30, 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cronauer, D.C.; Swanson, A.J.; Sajkowski, D.J.

    Research under way in this project centers upon developing and evaluating catalysts and process improvements for coal liquefaction in the two-stage close-coupled catalytic process. As documented in the previous quarterly report there was little advantage for presoaking Black Thunder coal or Martin Lake lignite in a hydrogen-donor solvent, such as tetralin, at temperatures up to 600°F prior to liquefaction at higher temperatures. The amount of decarboxylation that occurred during the presoaking of Black Thunder coal or Martin Lake lignite in tetralin in the temperature range of 400 to 600°F was also relatively small. As indicated by both CO2 release and the change in oxygen-containing coal functionality, the level of decarboxylation in coal-derived solvent seems to correlate with the depth of coal dissolution. The feedstock liquefaction studies for the three feedstocks (Black Thunder subbituminous coal, Martin Lake lignite, and Illinois No. 6 coal) have been completed, and their results were compared in this report. Both Black Thunder coal and Martin Lake lignite gave lighter products than Illinois No. 6 coal at similar process conditions. Severe catalyst deactivation in the first stage was also observed with the Martin Lake lignite run. The first stage catalyst testing program was started. After a successful reference run with Illinois No. 6 coal, a high temperature run with AMOCAT™ 1C was completed. In addition, a run was made with Illinois No. 6 coal using an oil-soluble catalyst, Molyvan L, in the first stage and AMOCAT™ 1C in the second stage, where preliminary run results look promising.

  14. Dimensional modeling: beyond data processing constraints.

    PubMed

    Bunardzic, A

    1995-01-01

    The focus of information processing requirements is shifting from the on-line transaction processing (OLTP) issues to the on-line analytical processing (OLAP) issues. While the former serves to ensure the feasibility of the real-time on-line transaction processing (which has already exceeded a level of up to 1,000 transactions per second under normal conditions), the latter aims at enabling more sophisticated analytical manipulation of data. The OLTP requirements, or how to efficiently get data into the system, have been solved by applying the Relational theory in the form of Entity-Relation model. There is presently no theory related to OLAP that would resolve the analytical processing requirements as efficiently as Relational theory provided for the transaction processing. The "relational dogma" also provides the mathematical foundation for the Centralized Data Processing paradigm in which mission-critical information is incorporated as 'one and only one instance' of data, thus ensuring data integrity. In such surroundings, the information that supports business analysis and decision support activities is obtained by running predefined reports and queries that are provided by the IS department. In today's intensified competitive climate, businesses are finding that this traditional approach is not good enough. The only way to stay on top of things, and to survive and prosper, is to decentralize the IS services. The newly emerging Distributed Data Processing, with its increased emphasis on empowering the end user, does not seem to find enough merit in the relational database model to justify relying upon it. Relational theory proved too rigid and complex to accommodate the analytical processing needs. In order to satisfy the OLAP requirements, or how to efficiently get the data out of the system, different models, metaphors, and theories have been devised. All of them are pointing to the need for simplifying the highly non-intuitive mathematical constraints found in the relational databases normalized to their 3rd normal form. Object-oriented approach insists on the importance of the common sense component of the data processing activities. But, particularly interesting, is the approach that advocates the necessity of 'flattening' the structure of the business models as we know them today. This discipline is called Dimensional Modeling and it enables users to form multidimensional views of the relevant facts which are stored in a 'flat' (non-structured), easy-to-comprehend and easy-to-access database. When using dimensional modeling, we relax many of the axioms inherent in a relational model. We focus on the knowledge of the relevant facts which are reflecting the business operations and are the real basis for the decision support and business analysis. At the core of the dimensional modeling are fact tables that contain the non-discrete, additive data. To determine the level of aggregation of these facts, we use granularity tables that specify the resolution, or the level/detail, that the user is allowed to entertain. The third component is dimension tables that embody the knowledge of the constraints to be used to form the views.
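
    A minimal star-schema sketch may help make the fact/dimension/granularity vocabulary above concrete; the pandas example below joins one additive fact table to two flat dimension tables and rolls the facts up to a coarser grain. All table and column names are illustrative assumptions.

    ```python
    # Minimal star-schema sketch with pandas: one additive fact table joined to
    # "flat" dimension tables; names and values are illustrative only.
    import pandas as pd

    sales_fact = pd.DataFrame({            # additive, non-discrete facts at daily grain
        "date_key": [1, 1, 2, 2],
        "product_key": [10, 11, 10, 11],
        "units_sold": [5, 3, 7, 1],
        "revenue": [50.0, 90.0, 70.0, 30.0],
    })
    date_dim = pd.DataFrame({"date_key": [1, 2], "month": ["Jan", "Jan"], "quarter": ["Q1", "Q1"]})
    product_dim = pd.DataFrame({"product_key": [10, 11], "category": ["widgets", "gadgets"]})

    # A multidimensional "view": roll the facts up by month and product category.
    view = (sales_fact
            .merge(date_dim, on="date_key")
            .merge(product_dim, on="product_key")
            .groupby(["month", "category"], as_index=False)[["units_sold", "revenue"]]
            .sum())
    print(view)
    ```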

  15. [A new strategy for Chinese medicine processing technologies: coupled with individuation processed and cybernetics].

    PubMed

    Zhang, Ding-kun; Yang, Ming; Han, Xue; Lin, Jun-zhi; Wang, Jia-bo; Xiao, Xiao-he

    2015-08-01

    The stable and controllable quality of decoction pieces is an important factor in ensuring the efficacy of clinical medicine. Considering that the existing standardized processing mode cannot effectively eliminate the variability in the quality of raw ingredients, nor ensure stability between different batches, we first propose a new strategy for Chinese medicine processing technologies that couples individualized processing with cybernetics. To illustrate this idea, a case study on different grades of aconite is provided. We hope this strategy can better serve clinical medicine and promote the inheritance and innovation of Chinese medicine processing skills and theories.

  16. Process and domain specificity in regions engaged for face processing: an fMRI study of perceptual differentiation.

    PubMed

    Collins, Heather R; Zhu, Xun; Bhatt, Ramesh S; Clark, Jonathan D; Joseph, Jane E

    2012-12-01

    The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. This study parametrically varied demands on featural, first-order configural, or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing), or reflected generalized perceptual differentiation (i.e., differentiation that crosses category and processing type boundaries). ROIs were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories.

  17. Process- and Domain-Specificity in Regions Engaged for Face Processing: An fMRI Study of Perceptual Differentiation

    PubMed Central

    Collins, Heather R.; Zhu, Xun; Bhatt, Ramesh S.; Clark, Jonathan D.; Joseph, Jane E.

    2015-01-01

    The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. The present study parametrically varied demands on featural, first-order configural or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing) or reflected generalized perceptual differentiation (i.e. differentiation that crosses category and processing type boundaries). Regions of interest were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process-specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex, and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain-specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories. PMID:22849402

  18. Nonhydrostatic and surfbeat model predictions of extreme wave run-up in fringing reef environments

    USGS Publications Warehouse

    Lashley, Christopher H.; Roelvink, Dano; van Dongeren, Ap R.; Buckley, Mark L.; Lowe, Ryan J.

    2018-01-01

    The accurate prediction of extreme wave run-up is important for effective coastal engineering design and coastal hazard management. While run-up processes on open sandy coasts have been reasonably well-studied, very few studies have focused on understanding and predicting wave run-up at coral reef-fronted coastlines. This paper applies the short-wave resolving, Nonhydrostatic (XB-NH) and short-wave averaged, Surfbeat (XB-SB) modes of the XBeach numerical model to validate run-up using data from two 1D (alongshore uniform) fringing-reef profiles without roughness elements, with two objectives: i) to provide insight into the physical processes governing run-up in such environments; and ii) to evaluate the performance of both modes in accurately predicting run-up over a wide range of conditions. XBeach was calibrated by optimizing the maximum wave steepness parameter (maxbrsteep) in XB-NH and the dissipation coefficient (alpha) in XB-SB using the first dataset; and then applied to the second dataset for validation. XB-NH and XB-SB predictions of extreme wave run-up (Rmax and R2%) and its components, infragravity- and sea-swell band swash (SIG and SSS) and shoreline setup (<η>), were compared to observations. XB-NH more accurately simulated wave transformation but under-predicted shoreline setup due to its exclusion of parameterized wave-roller dynamics. XB-SB under-predicted sea-swell band swash but overestimated shoreline setup due to an over-prediction of wave heights on the reef flat. Run-up (swash) spectra were dominated by infragravity motions, allowing the short-wave (but not wave group) averaged model (XB-SB) to perform comparably well to its more complete, short-wave resolving (XB-NH) counterpart. Despite their respective limitations, both modes were able to accurately predict Rmax and R2%.

  19. Pressure intelligent control strategy of Waste heat recovery system of converter vapors

    NASA Astrophysics Data System (ADS)

    Feng, Xugang; Wu, Zhiwei; Zhang, Jiayan; Qian, Hong

    2013-01-01

    The converter gas evaporative cooling system is mainly used to absorb heat from the high-temperature exhaust gas produced by the oxygen-blowing reaction. The steam pressure control system of converter vaporization cooling is a nonlinear, time-varying, lagging, closely coupled multivariable control object. Based on an analysis of the operating characteristics of the converter evaporative cooling system, and of the pipe pressure variations and disturbance factors that occur during a production run, we improve on the conventional PID control scheme. During the oxygen-blowing process, intelligent control is applied by using a fuzzy-PID cascade control method and adjusting the lance, which optimizes the boiler steam pressure control. Simulation results show that the design not only keeps the drum steam pressure within a safe range, enabling efficient conversion of waste heat, but also allows the converter flue gas to be cooled from about 1800 to about 800 through the piping, cooling and dust removal stages. The converter gas evaporative cooling system therefore achieves the intended reduction of the converter gas temperature and enhances the coal gas recovery ratio.
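
    The sketch below illustrates the general shape of a cascade loop with a crude fuzzy-style gain adjustment on the outer (pressure) controller; the plant models, gains and membership thresholds are assumptions for demonstration and are not the design or tuning reported in the paper.

    ```python
    # Toy cascade control sketch: an outer drum-pressure PI controller sets the inner
    # loop's setpoint, and a crude fuzzy-style rule scales the outer gain with error
    # size. Plant dynamics and all gains are assumed, not the paper's design.
    class PI:
        def __init__(self, kp, ki, dt):
            self.kp, self.ki, self.dt = kp, ki, dt
            self.i = 0.0
        def step(self, err, gain_scale=1.0):
            self.i += err * self.dt
            return gain_scale * (self.kp * err + self.ki * self.i)

    def fuzzy_gain(err):
        """Crude rule base: push harder when the pressure error is 'big'."""
        e = abs(err)
        return 0.6 if e < 0.05 else (1.0 if e < 0.2 else 1.4)

    dt = 1.0
    pressure, flow = 0.6, 0.0            # assumed initial drum pressure (MPa) and steam flow
    outer = PI(kp=0.4, ki=0.1, dt=dt)    # pressure (master) loop
    inner = PI(kp=1.0, ki=0.3, dt=dt)    # lance/valve (slave) loop
    setpoint = 1.0                       # MPa

    for _ in range(200):
        p_err = setpoint - pressure
        flow_sp = outer.step(p_err, fuzzy_gain(p_err))   # master output = slave setpoint
        u = inner.step(flow_sp - flow)                   # slave drives the actuator
        flow += dt * 0.6 * (u - flow)                    # assumed actuator/flow dynamics
        pressure += dt * (0.5 * flow - 0.25 * pressure)  # assumed drum pressure dynamics
    print(round(pressure, 3))            # should settle near the 1.0 MPa setpoint
    ```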

  20. Electro-aerodynamic field aided needleless electrospinning.

    PubMed

    Yan, Guilong; Niu, Haitao; Zhou, Hua; Wang, Hongxia; Shao, Hao; Zhao, Xueting; Lin, Tong

    2018-06-08

    Auxiliary fields have been used to enhance the performance of needle electrospinning. However, much less has been reported on how auxiliary fields affect needleless electrospinning. Herein, we report a novel needleless electrospinning technique that consists of an aerodynamic field and a second electric field. The second electric field is generated by setting two grounded inductive electrodes near the spinneret. The two auxiliary fields have to be applied simultaneously to ensure working of the electrospinning process. A synergistic effect was observed between inductive electrode and airflow. The aerodynamic-electric auxiliary field was found to significantly increase fiber production rate (4.5 g h -1 ), by 350% in comparison to the setup without auxiliary field (1.0 g h -1 ), whereas it had little effect on fiber diameter. The auxiliary fields allow running needleless electrospinning at an applied voltage equivalent to that in needle electrospinning (e.g. 10-30 kV). The finite element analyses of electric field and airflow field verify that the inductive electrodes increase electric field strength near the spinneret, and the airflow assists in fiber deposition. This novel needleless electrospinning may be useful for development of high-efficiency, low energy-consumption nanofiber production systems.

  1. How to securely replicate services (preliminary version)

    NASA Technical Reports Server (NTRS)

    Reiter, Michael; Birman, Kenneth

    1992-01-01

    A method is presented for constructing replicated services that retain their availability and integrity despite several servers and clients being corrupted by an intruder, in addition to others failing benignly. More precisely, a service is replicated by 'n' servers in such a way that a correct client will accept a correct server's response if, for some prespecified parameter, k, at least k servers are correct and fewer than k servers are corrupt. The issue of maintaining causality among client requests is also addressed. A security breach resulting from an intruder's ability to effect a violation of causality in the sequence of requests processed by the service is illustrated. An approach to counter this problem is proposed that requires that fewer than k servers are corrupt and, to ensure liveness, that k is less than or equal to n - 2t, where t is the assumed maximum total number of both corruptions and benign failures suffered by servers in any system run. An important and novel feature of these schemes is that the client need not be able to identify or authenticate even a single server. Instead, the client is required only to possess at most two public keys for the service.
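
    The client-side acceptance rule can be sketched very compactly: collect replies and accept a value once k distinct servers agree on it. The fragment below shows only that rule and omits the paper's cryptographic machinery and causality handling.

    ```python
    # Minimal sketch of the client-side acceptance rule only: accept a reply once k
    # distinct servers return the same response. Cryptography and causality handling
    # from the paper are omitted.
    from collections import Counter

    def accept_response(replies, k):
        """replies: mapping server_id -> response value; return accepted value or None."""
        value, votes = Counter(replies.values()).most_common(1)[0]
        return value if votes >= k else None

    # n = 5 servers, with acceptance threshold k = 3:
    replies = {"s1": "balance=42", "s2": "balance=42", "s3": "balance=999",  # corrupt server
               "s4": "balance=42", "s5": "balance=42"}
    print(accept_response(replies, k=3))   # prints "balance=42"
    ```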

  2. Electro-aerodynamic field aided needleless electrospinning

    NASA Astrophysics Data System (ADS)

    Yan, Guilong; Niu, Haitao; Zhou, Hua; Wang, Hongxia; Shao, Hao; Zhao, Xueting; Lin, Tong

    2018-06-01

    Auxiliary fields have been used to enhance the performance of needle electrospinning. However, much less has been reported on how auxiliary fields affect needleless electrospinning. Herein, we report a novel needleless electrospinning technique that consists of an aerodynamic field and a second electric field. The second electric field is generated by setting two grounded inductive electrodes near the spinneret. The two auxiliary fields have to be applied simultaneously to ensure working of the electrospinning process. A synergistic effect was observed between inductive electrode and airflow. The aerodynamic-electric auxiliary field was found to significantly increase fiber production rate (4.5 g h‑1), by 350% in comparison to the setup without auxiliary field (1.0 g h‑1), whereas it had little effect on fiber diameter. The auxiliary fields allow running needleless electrospinning at an applied voltage equivalent to that in needle electrospinning (e.g. 10–30 kV). The finite element analyses of electric field and airflow field verify that the inductive electrodes increase electric field strength near the spinneret, and the airflow assists in fiber deposition. This novel needleless electrospinning may be useful for development of high-efficiency, low energy-consumption nanofiber production systems.

  3. Combined state and parameter identification of nonlinear structural dynamical systems based on Rao-Blackwellization and Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Abhinav, S.; Manohar, C. S.

    2018-03-01

    The problem of combined state and parameter estimation in nonlinear state space models, based on Bayesian filtering methods, is considered. A novel approach, which combines Rao-Blackwellized particle filters for state estimation with Markov chain Monte Carlo (MCMC) simulations for parameter identification, is proposed. In order to ensure successful performance of the MCMC samplers in situations involving a large amount of dynamic measurement data and (or) low measurement noise, the study employs a modified measurement model combined with an importance sampling based correction. The parameters of the process noise covariance matrix are also included as quantities to be identified. The study employs the Rao-Blackwellization step at two stages: first, in the state estimation problem within the particle filtering step, and second, in the evaluation of the ratio of likelihoods in the MCMC run. The satisfactory performance of the proposed method is illustrated on three dynamical systems: (a) a computational model of a nonlinear beam-moving oscillator system, (b) a laboratory scale beam traversed by a loaded trolley, and (c) an earthquake shake table study on a bending-torsion coupled nonlinear frame subjected to uniaxial support motion.
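
    The particle-filtering ingredient can be illustrated with a plain bootstrap filter on a scalar nonlinear model, as sketched below; this is not the paper's Rao-Blackwellized scheme or its MCMC parameter sampler, and the model and noise levels are assumptions.

    ```python
    # Not the paper's Rao-Blackwellized scheme: a minimal bootstrap particle filter
    # on a scalar nonlinear state-space model, showing the state-estimation ingredient.
    import numpy as np

    rng = np.random.default_rng(7)
    T, N = 100, 1000                     # time steps, particles
    q, r = 0.3, 0.5                      # assumed process / measurement noise std

    def f(x, k):                         # assumed nonlinear transition
        return 0.5 * x + 2.5 * x / (1.0 + x ** 2) + 0.8 * np.cos(1.2 * k)

    # Simulate ground truth and measurements
    x_true = np.zeros(T)
    y = np.zeros(T)
    for k in range(1, T):
        x_true[k] = f(x_true[k - 1], k) + rng.normal(0, q)
        y[k] = x_true[k] + rng.normal(0, r)

    # Bootstrap particle filter: propagate, weight by likelihood, resample
    particles = rng.normal(0, 1, N)
    estimates = np.zeros(T)
    for k in range(1, T):
        particles = f(particles, k) + rng.normal(0, q, N)       # propagate
        w = np.exp(-0.5 * ((y[k] - particles) / r) ** 2)         # likelihood weights
        w /= w.sum()
        estimates[k] = np.sum(w * particles)                     # posterior mean estimate
        particles = rng.choice(particles, size=N, p=w)           # multinomial resampling

    print("RMSE:", np.sqrt(np.mean((estimates[1:] - x_true[1:]) ** 2)))
    ```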

  4. Utilities and manufacturers: Pioneering partnerships and their lessons for the 21st century

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartsch, C.; DeVaul, D.

    1994-12-31

    Manufacturers who, in partnership with utilities, improved their production process through energy efficiency and waste minimization strategies are discussed. Frequently these investments changed the corporate culture and resulted in a commitment to continuous improvement that may ensure the industrialists adapt to a rapidly evolving marketplace. The Northeast-Midwest Institute's work to record these case studies developed out of the observation that older manufacturing facilities too often are run until no longer competitive, then closed, and new plants are built somewhere else - increasingly overseas. Unemployment, poverty, and cycles of economic and social deterioration too often follow if a new economic base cannot be created. At the same time, inefficient industrial plants tend to emit large quantities of waste materials; industry produces more than 600 million tons of hazardous wastes and approximately 13 billion tons of solid wastes each year. To help identify how to avoid such pitfalls, the Institute sought out manufacturers who modernized successfully. Case studies are presented that show that utilities often are instrumental in catalyzing change in their industrial partners. In fact, much can be gained from utilities and industries working together. Many manufacturers need technical and financial assistance to maintain peak productivity.

  5. Detection of Olea europaea subsp. cuspidata and Juniperus procera in the dry Afromontane forest of northern Ethiopia using subpixel analysis of Landsat imagery

    NASA Astrophysics Data System (ADS)

    Hishe, Hadgu; Giday, Kidane; Neka, Mulugeta; Soromessa, Teshome; Van Orshoven, Jos; Muys, Bart

    2015-01-01

    Comprehensive and less costly forest inventory approaches are required to monitor the spatiotemporal dynamics of key species in forest ecosystems. Subpixel analysis using the ERDAS Imagine subpixel classification procedure was tested to extract Olea europaea subsp. cuspidata and Juniperus procera canopies from Landsat 7 Enhanced Thematic Mapper Plus imagery. Control points with various canopy area fractions of the target species were collected to develop signatures for each of the species. With these signatures, the Imagine subpixel classification procedure was run for each species independently. The subpixel process enabled the detection of O. europaea subsp. cuspidata and J. procera trees in pure and mixed pixels. A total of 100 pixels per species were field verified. An overall accuracy of 85% was achieved for O. europaea subsp. cuspidata and 89% for J. procera. A high overall accuracy in detecting the species in a natural forest was achieved, which encourages using the algorithm for future species monitoring activities. We recommend that the algorithm be validated in similar environments to better establish its capability and ensure its wider usage.

  6. A Real-World Community Health Worker Care Coordination Model for High-Risk Children.

    PubMed

    Martin, Molly A; Perry-Bell, Kenita; Minier, Mark; Glassgow, Anne Elizabeth; Van Voorhees, Benjamin W

    2018-04-01

    Health care systems across the United States are considering community health worker (CHW) services for high-risk patients, despite limited data on how to build and sustain effective CHW programs. We describe the process of providing CHW services to 5,289 at-risk patients within a state-run health system. The program includes 30 CHWs, six care coordinators, the Director of Care Coordination, the Medical Director, a registered nurse, mental health specialists, and legal specialists. CHWs are organized into geographic and specialized teams. All CHWs receive basic training that includes oral and mental health; some receive additional disease-specific training. CHWs develop individualized care coordination plans with patients. The implementation of these plans involves delivery of a wide range of social service and coordination support. The number of CHW contacts is determined by patient risk. CHWs spend about 60% of their time in an office setting. To deliver the program optimally, we had to develop multiple CHW job categories that allow for CHW specialization. We created new technology systems to manage operations. Field issues resulted in program changes to improve service delivery and ensure safety. Our experience serves as a model for how to integrate CHWs into clinical and community systems.

  7. Semicontinuous Production of Lactic Acid From Cheese Whey Using Integrated Membrane Reactor

    NASA Astrophysics Data System (ADS)

    Li, Yebo; Shahbazi, Abolghasem; Coulibaly, Sekou; Mims, Michele M.

    Semicontinuous production of lactic acid from cheese whey using free cells of Bifidobacterium longum, with and without nanofiltration, was studied. For the semicontinuous fermentation without membrane separation, the lactic acid productivity of the second and third runs was much lower than that of the first run. The semicontinuous fermentation with nanoseparation was run for 72 h, with lactic acid harvested every 24 h using a nanofiltration membrane unit. The cells and unutilized lactose were kept in the reactor and mixed with newly added cheese whey in the subsequent runs. A slight increase in lactic acid productivity was observed in the second and third runs of the semicontinuous fermentation with nanofiltration. It can be concluded that nanoseparation could improve the lactic acid productivity of the semicontinuous fermentation process.

  8. Improved performance in NASTRAN (R)

    NASA Technical Reports Server (NTRS)

    Chan, Gordon C.

    1989-01-01

    Three areas of improvement in COSMIC/NASTRAN, 1989 release, were incorporated recently that make the analysis program run faster on large problems. Actual log files and timings on a few test samples run on IBM, CDC, VAX, and CRAY computers were compiled. The speed improvement is proportional to the problem size and the number of continuation cards. Vectorizing certain operations in BANDIT makes BANDIT run twice as fast in some large problems using structural elements with many node points. BANDIT is a built-in NASTRAN processor that optimizes the structural matrix bandwidth. The VAX matrix packing routine BLDPK was modified so that it now packs a column of a matrix 3 to 9 times faster. The denser and bigger the matrix, the greater the speed improvement. This improvement makes a host of routines and modules that involve matrix operations run significantly faster, and saves disk space for dense matrices. A UNIX version, converted from 1988 COSMIC/NASTRAN, was tested successfully on a Silicon Graphics computer using the UNIX V Operating System with Berkeley 4.3 Extensions. The Utility Modules INPUTT5 and OUTPUT5 were expanded to handle table data as well as matrices. Both INPUTT5 and OUTPUT5 are general input/output modules that read and write FORTRAN files with or without format. More informative user messages are echoed from the PARAMR, PARAMD, and SCALAR modules to ensure that proper data values and data types are handled. Two new Utility Modules, GINOFILE and DATABASE, were written for the 1989 release. Seven rigid elements were added to COSMIC/NASTRAN: CRROD, CRBAR, CRTRPLT, CRBE1, CRBE2, CRBE3, and CRSPLINE.

  9. Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.

    2012-09-01

    Computer simulations are important in current cosmological research. Those simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs dedicated software to be visualized, as generic visualization tools work on Cartesian grid data. This is why our team has also developed the PYMSES software. It relies on the Python scripting language to ensure modular and easy access for exploring these specific data. In order to take advantage of the high-performance computer that runs the RAMSES simulation, it also uses MPI and multiprocessing to run parallel code. We present the PYMSES software in more detail, with performance benchmarks. PYMSES currently has two visualization techniques that work directly on the AMR. The first is a splatting technique, and the second is a custom ray-tracing technique. Both have their own advantages and drawbacks. We also compared two parallel programming approaches: the Python multiprocessing library versus MPI runs. The load-balancing strategy has to be defined carefully in order to achieve a good speed-up in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.
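    The domain-decomposed parallelism described here can be sketched generically with Python's multiprocessing module. This is not PYMSES's actual API: the per-domain splatting function below is a hypothetical stand-in that accumulates random cell contributions, and the one-task-per-domain granularity is what lets the pool rebalance unevenly sized octree domains.

```python
from multiprocessing import Pool

import numpy as np

def splat_domain(domain_id):
    """Hypothetical work unit: project one AMR domain onto a 2-D image plane."""
    rng = np.random.default_rng(domain_id)
    image = np.zeros((256, 256))
    ix = rng.integers(0, 256, size=10_000)                 # pretend cell positions
    iy = rng.integers(0, 256, size=10_000)
    np.add.at(image, (ix, iy), rng.random(10_000))         # accumulate contributions
    return image

if __name__ == "__main__":
    # More tasks than workers plus chunksize=1 gives dynamic load balancing,
    # which matters because real octree domains have very uneven cell counts.
    with Pool(processes=8) as pool:
        partial_images = pool.map(splat_domain, range(64), chunksize=1)
    final_image = np.sum(partial_images, axis=0)
    print(final_image.shape)
```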

  10. Metabolic Factors Limiting Performance in Marathon Runners

    PubMed Central

    Rapoport, Benjamin I.

    2010-01-01

    Each year in the past three decades has seen hundreds of thousands of runners register to run a major marathon. Of those who attempt to race over the marathon distance of 26 miles and 385 yards (42.195 kilometers), more than two-fifths experience severe and performance-limiting depletion of physiologic carbohydrate reserves (a phenomenon known as ‘hitting the wall’), and thousands drop out before reaching the finish lines (approximately 1–2% of those who start). Analyses of endurance physiology have often either used coarse approximations to suggest that human glycogen reserves are insufficient to fuel a marathon (making ‘hitting the wall’ seem inevitable), or implied that maximal glycogen loading is required in order to complete a marathon without ‘hitting the wall.’ The present computational study demonstrates that the energetic constraints on endurance runners are more subtle, and depend on several physiologic variables including the muscle mass distribution, liver and muscle glycogen densities, and running speed (exercise intensity as a fraction of aerobic capacity) of individual runners, in personalized but nevertheless quantifiable and predictable ways. The analytic approach presented here is used to estimate the distance at which runners will exhaust their glycogen stores as a function of running intensity. In so doing it also provides a basis for guidelines ensuring the safety and optimizing the performance of endurance runners, both by setting personally appropriate paces and by prescribing midrace fueling requirements for avoiding ‘the wall.’ The present analysis also sheds physiologically principled light on important standards in marathon running that until now have remained empirically defined: The qualifying times for the Boston Marathon. PMID:20975938
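    The kind of back-of-the-envelope estimate the paper refines can be sketched as below. The constants are illustrative textbook-level assumptions rather than the paper's fitted values: roughly 1 kcal per kg per km of gross running cost, 4 kcal per gram of carbohydrate, assumed liver and leg-muscle glycogen stores, and a crude linear rise of the carbohydrate share of energy with intensity.

```python
def glycogen_limited_distance(mass_kg=70.0, leg_muscle_frac=0.21,
                              muscle_glycogen_g_per_kg=20.0, liver_glycogen_g=100.0,
                              intensity=0.8):
    """Illustrative estimate of the distance (km) at which carbohydrate runs out."""
    carb_fraction = min(1.0, 0.1 + 1.0 * intensity)        # crude intensity dependence
    stores_g = liver_glycogen_g + mass_kg * leg_muscle_frac * muscle_glycogen_g_per_kg
    carb_kcal = 4.0 * stores_g                             # usable carbohydrate energy
    carb_kcal_per_km = 1.0 * mass_kg * carb_fraction       # carbohydrate burned per km
    return carb_kcal / carb_kcal_per_km

for u in (0.6, 0.7, 0.8, 0.9):
    print(f"intensity {u:.0%}: glycogen lasts ~{glycogen_limited_distance(intensity=u):.0f} km")
```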

  11. Phalangeal joints kinematics during ostrich (Struthio camelus) locomotion

    PubMed Central

    Ji, Qiaoli; Luo, Gang; Xue, Shuliang; Ma, Songsong; Li, Jianqiao

    2017-01-01

    The ostrich is a highly cursorial bipedal land animal with a permanently elevated metatarsophalangeal joint supported by only two toes. Although locomotor kinematics in walking and running ostriches have been examined, these studies have been largely limited to joints above the metatarsophalangeal joint. In this study, kinematic data of all major toe joints were collected from gaits with double support (slow walking) to running during the stance period, in a semi-natural setup with two selected cooperative ostriches. Statistical analyses were conducted to investigate the effect of locomotor gait on toe joint kinematics. The MTP3 and MTP4 joints exhibit the largest range of motion, whereas the first phalangeal joint of the 4th toe shows the largest motion variability. The interphalangeal joints of the 3rd and 4th toes present very similar motion patterns over the stance phases of slow walking and running. However, the motion patterns of the MTP3 and MTP4 joints and the vertical displacement of the metatarsophalangeal joint are significantly different during running and slow walking. Because of the biomechanical requirements, ostriches are likely to select the inverted pendulum gait at low speeds and the bouncing gait at high speeds to improve movement performance and energy economy. Interestingly, the motions of the MTP3 and MTP4 joints are highly synchronized from slow to fast locomotion. This strongly suggests that the 3rd and 4th toes really work as an “integrated system”, with the 3rd toe as the main load-bearing element and the 4th toe as a complementary load-sharing element whose primary role is to ensure the lateral stability of the permanently elevated metatarsophalangeal joint. PMID:28097064

  12. RNA-Sequencing Reveals Unique Transcriptional Signatures of Running and Running-Independent Environmental Enrichment in the Adult Mouse Dentate Gyrus.

    PubMed

    Grégoire, Catherine-Alexandra; Tobin, Stephanie; Goldenstein, Brianna L; Samarut, Éric; Leclerc, Andréanne; Aumont, Anne; Drapeau, Pierre; Fulton, Stephanie; Fernandes, Karl J L

    2018-01-01

    Environmental enrichment (EE) is a powerful stimulus of brain plasticity and is among the most accessible treatment options for brain disease. In rodents, EE is modeled using multi-factorial environments that include running, social interactions, and/or complex surroundings. Here, we show that running and running-independent EE differentially affect the hippocampal dentate gyrus (DG), a brain region critical for learning and memory. Outbred male CD1 mice housed individually with a voluntary running disk showed improved spatial memory in the radial arm maze compared to individually- or socially-housed mice with a locked disk. We therefore used RNA sequencing to perform an unbiased interrogation of DG gene expression in mice exposed to either a voluntary running disk (RUN), a locked disk (LD), or a locked disk plus social enrichment and tunnels [i.e., a running-independent complex environment (CE)]. RNA sequencing revealed that RUN and CE mice showed distinct, non-overlapping patterns of transcriptomic changes versus the LD control. Bio-informatics uncovered that the RUN and CE environments modulate separate transcriptional networks, biological processes, cellular compartments and molecular pathways, with RUN preferentially regulating synaptic and growth-related pathways and CE altering extracellular matrix-related functions. Within the RUN group, high-distance runners also showed selective stress pathway alterations that correlated with a drastic decline in overall transcriptional changes, suggesting that excess running causes a stress-induced suppression of running's genetic effects. Our findings reveal stimulus-dependent transcriptional signatures of EE on the DG, and provide a resource for generating unbiased, data-driven hypotheses for novel mediators of EE-induced cognitive changes.

  13. Seven Processes that Enable NASA Software Engineering Technologies

    NASA Technical Reports Server (NTRS)

    Housch, Helen; Godfrey, Sally

    2011-01-01

    This slide presentation reviews seven processes that NASA uses to ensure that software is developed, acquired, and maintained as specified in the NPR 7150.2A requirement. The requirement is to ensure that all software is appraised for the Capability Maturity Model Integration (CMMI). The enumerated processes are: (7) Product Integration, (6) Configuration Management, (5) Verification, (4) Software Assurance, (3) Measurement and Analysis, (2) Requirements Management, and (1) Planning & Monitoring. Each process is described, along with the group(s) responsible for it.

  14. Rapid tooling for functional prototyping of metal mold processes. CRADA final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zacharia, T.; Ludtka, G.M.; Bjerke, M.A.

    1997-12-01

    The overall scope of this endeavor was to develop an integrated computer system, running on a network of heterogeneous computers, that would allow the rapid development of tool designs, and then use process models to determine whether the initial tooling would have characteristics which produce the prototype parts. The major thrust of this program for ORNL was the definition of the requirements for the development of the integrated die design system with the functional purpose to link part design, tool design, and component fabrication through a seamless software environment. The principal product would be a system control program that would coordinate the various application programs and implement the data transfer so that any networked workstation would be useable. The overall system control architecture was to be required to easily facilitate any changes, upgrades, or replacements of the model from either the manufacturing end or the design criteria standpoint. The initial design of such a program is described in the section labeled "Control Program Design". A critical aspect of this research was the design of the system flow chart showing the exact system components and the data to be transferred. All of the major system components would have been configured to ensure data file compatibility and transferability across the Internet. The intent was to use commercially available packages to model the various manufacturing processes for creating the die and die inserts, in addition to modeling the processes for which these parts were to be used. In order to meet all of these requirements, investigative research was conducted to determine the system flow features and software components within the various organizations contributing to this project. This research is summarized.

  15. Laser-based gluing of diamond-tipped saw blades

    NASA Astrophysics Data System (ADS)

    Hennigs, Christian; Lahdo, Rabi; Springer, André; Kaierle, Stefan; Hustedt, Michael; Brand, Helmut; Wloka, Richard; Zobel, Frank; Dültgen, Peter

    2016-03-01

    To process natural stone such as marble or granite, saw blades equipped with wear-resistant diamond grinding segments are used, typically joined to the blade by brazing. In case of damage or wear, they must be exchanged. Due to the large energy input during thermal loosening and subsequent brazing, the repair causes extended heat-affected zones with serious microstructure changes, resulting in shape distortions and disadvantageous stress distributions. Consequently, axial run-out deviations and cutting losses increase. In this work, a new near-infrared laser-based process chain is presented to overcome the deficits of conventional brazing-based repair of diamond-tipped steel saw blades. Thus, additional tensioning and straightening steps can be avoided. The process chain starts with thermal debonding of the worn grinding segments, using a continuous-wave laser to heat the segments gently and to exceed the adhesive's decomposition temperature. Afterwards, short-pulsed laser radiation removes remaining adhesive from the blade in order to achieve clean joining surfaces. The third step is roughening and activation of the joining surfaces, again using short-pulsed laser radiation. Finally, the grinding segments are glued onto the blade with a defined adhesive layer, using continuous-wave laser radiation. Here, the adhesive is heated to its curing temperature by irradiating the respective grinding segment, ensuring minimal thermal influence on the blade. For demonstration, a prototype unit was constructed to perform the different steps of the process chain on-site at the saw-blade user's facilities. This unit was used to re-equip a saw blade with a complete set of grinding segments. This saw blade was used successfully to cut different materials, amongst others granite.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Looney, J.H.; Im, C.J.

    The following report presents the technical progress achieved during the first quarter. The completion of this contract entails engineering evaluation in conjunction with basic laboratory research to determine overall process improvements, associated cost savings, and the effect of these savings on product price as they relate to the UCC Physical Beneficiation Process for coal-water slurry manufacture. The technical effort for this quarter concentrated on two basic areas of concern related to the above-mentioned process. First, an engineering evaluation was carried out to examine the critical areas of improvement in the existing UCC Research Corporation single-stage cleaning circuit (coarse coal, heavy media washer). When the plant was run for a low-ash coal product, at a specific gravity near 1.30, it was found that substantial product contamination resulted from magnetite carry-over in the clean coal product. Reducing the magnetite contamination would entail the application of more spray water to the clean coal drain and rinse screen, and the refinement of the existing dilute media handling system to accept the increased quantity of rinse water. It was also determined that a basic mechanical overhaul is needed on the washbox to ensure dependable operation during the future production of low-ash coal. The various cost elements involved with this renovation were determined by UCC personnel in the operational division. The second area of investigation was concerned with the laboratory evaluation of three separate source coals obtained from United Coal Company (UCC) and nearby mines to determine probable cleanability when using each seam of coal as a feed in the existing beneficiation process. Washability analyses were performed on each sample utilizing a specific gravity range from 1.25 to 1.50. 4 figures, 3 tables.

  17. PanDA for COMPASS at JINR

    NASA Astrophysics Data System (ADS)

    Petrosyan, A. Sh.

    2016-09-01

    PanDA (Production and Distributed Analysis System) is a workload management system widely used for data processing at experiments on the Large Hadron Collider and elsewhere. COMPASS is a high-energy physics experiment at the Super Proton Synchrotron. Data processing for COMPASS runs locally at CERN on lxbatch, with the data stored in CASTOR. In 2014, the idea arose of running COMPASS production through PanDA. Such a transformation of the experiment's data processing will allow the COMPASS community to use not only CERN resources but also Grid resources worldwide. During the spring and summer of 2015, installation, validation, and migration work was performed at JINR. Details and results of this process are presented in this paper.

  18. Controlling Laboratory Processes From A Personal Computer

    NASA Technical Reports Server (NTRS)

    Will, H.; Mackin, M. A.

    1991-01-01

    Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without an operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.

  19. On the water lapping of felines and the water running of lizards

    PubMed Central

    Aristoff, Jeffrey M; Stocker, Roman; Reis, Pedro M

    2011-01-01

    We consider two biological phenomena taking place at the air-water interface: the water lapping of felines and the water running of lizards. Although seemingly disparate motions, we show that they are intimately linked by their underlying hydrodynamics and belong to a broader class of processes called Froude mechanisms. We describe how both felines and lizards exploit inertia to defeat gravity, and discuss water lapping and water running in the broader context of water exit and water entry, respectively. PMID:21655444

  20. Return-to-Duty Toolkit: Assessments and Tasks for Determining Military Functional Performance Following Neurosensory Injury

    DTIC Science & Technology

    2017-09-29

    Excerpts: the warfighter is to engage in aerobic activity, such as running in place or push-ups, until 65–85% of the target heart rate is reached (the target heart rate is 220 minus age); options for activity include but are not limited to running in place. Listed assessments include Pursuit Tracking, Running Memory CPT, Simple Reaction Time, Sleep Scale, Spatial Processing (Sequential and Simultaneous), and Manikin.

  1. Fabrication and Characterization of the US Army Research Laboratory Surface Enhanced Raman Scattering (SERS) Substrates

    DTIC Science & Technology

    2017-12-04

    Excerpts: By running current through an EBL-fabricated gap array, it has been shown to be possible to impact atomic positions within a... Spectra were collected and the instrument was run using Wire 2.0 software operating on a dedicated computer. ...accomplished using the Unaxis VLR 700 Etch PM3-Dielectric etch; for this step it is important to first run the process on a dummy wafer to...

  2. A numerical study on combustion process in a small compression ignition engine run dual-fuel mode (diesel-biogas)

    NASA Astrophysics Data System (ADS)

    Ambarita, H.; Widodo, T. I.; Nasution, D. M.

    2017-01-01

    In order to reduce the fossil fuel consumption of compression ignition (CI) engines, which are commonly used in transportation and heavy machinery, such engines can be operated in dual-fuel mode (diesel-biogas). However, the literature shows that the thermal efficiency is lower due to an incomplete combustion process. In order to increase the efficiency, the combustion process in the combustion chamber needs to be explored. Here, a commercial CFD code is used to explore the combustion process of a small CI engine run in dual-fuel mode (diesel-biogas). The turbulent governing equations are solved using the finite volume method. A simulation of the compression and expansion strokes at an engine speed of 1000 rpm and a load of 2500 W has been carried out. The pressure and temperature distributions and streamlines are plotted. The simulation results show that at an engine power of 732.27 W the thermal efficiency is 9.05%. The experimental and simulation results show good agreement. The method developed in this study can be used to investigate the combustion process of CI engines run in dual-fuel mode.

  3. Nuclear shell model code CRUNCHER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resler, D.A.; Grimes, S.M.

    1988-05-01

    A new nuclear shell model code CRUNCHER, patterned after the code VLADIMIR, has been developed. While CRUNCHER and VLADIMIR employ the techniques of an uncoupled basis and the Lanczos process, improvements in the new code allow it to handle much larger problems than the previous code and to perform them more efficiently. Tests involving a moderately sized calculation indicate that CRUNCHER running on a SUN 3/260 workstation requires approximately one-half the central processing unit (CPU) time required by VLADIMIR running on a CRAY-1 supercomputer.
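    For readers unfamiliar with the Lanczos process mentioned above, a bare-bones version is sketched below; it uses a dense, made-up symmetric matrix and no re-orthogonalization, whereas production shell-model codes work with sparse matrix-vector products in an uncoupled many-body basis.

```python
import numpy as np

def lanczos(A, m=60, seed=0):
    """Plain Lanczos iteration: build an m x m tridiagonal matrix whose extreme
    eigenvalues approximate those of the large symmetric matrix A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    v = rng.normal(size=n)
    v /= np.linalg.norm(v)
    v_prev, beta = np.zeros(n), 0.0
    alphas, betas = [], []
    for _ in range(m):
        w = A @ v - beta * v_prev                          # three-term recurrence
        alpha = v @ w
        w -= alpha * v
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:                                   # invariant subspace found
            break
        v_prev, v = v, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)

# Compare the lowest Lanczos eigenvalue with a full diagonalization.
rng = np.random.default_rng(1)
H = rng.normal(size=(500, 500))
H = (H + H.T) / 2                                          # made-up symmetric "Hamiltonian"
print(lanczos(H)[0], np.linalg.eigvalsh(H)[0])
```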

  4. SUPPORT Tools for evidence-informed health Policymaking (STP) 1: What is evidence-informed policymaking?

    PubMed Central

    2009-01-01

    This article is part of a series written for people responsible for making decisions about health policies and programmes and for those who support these decision makers. In this article, we discuss the following three questions: What is evidence? What is the role of research evidence in informing health policy decisions? What is evidence-informed policymaking? Evidence-informed health policymaking is an approach to policy decisions that aims to ensure that decision making is well-informed by the best available research evidence. It is characterised by the systematic and transparent access to, and appraisal of, evidence as an input into the policymaking process. The overall process of policymaking is not assumed to be systematic and transparent. However, within the overall process of policymaking, systematic processes are used to ensure that relevant research is identified, appraised and used appropriately. These processes are transparent in order to ensure that others can examine what research evidence was used to inform policy decisions, as well as the judgements made about the evidence and its implications. Evidence-informed policymaking helps policymakers gain an understanding of these processes. PMID:20018099

  5. Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models.

    PubMed

    Liu, Zhiguang; Zhou, Liuyang; Leung, Howard; Shum, Hubert P H

    2016-11-01

    Depth-sensor-based 3D human motion estimation hardware such as Kinect has made interactive applications more popular recently. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and self-occluding actions performed by the user. In this paper, we propose a new real-time probabilistic framework to enhance the accuracy of live captured postures that belong to one of the action classes in the database. We adopt the Gaussian Process model as a prior to leverage the position data obtained from Kinect and a marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the accurate parts of the observed posture, we embed a set of joint reliability measurements into the optimization framework. A major drawback of Gaussian Processes is their cubic learning complexity when dealing with a large database, due to the inversion of the covariance matrix. To solve the problem, we propose a new method based on a local mixture of Gaussian Processes, in which Gaussian Processes are defined in local regions of the state space. Due to the significantly decreased sample size in each local Gaussian Process, the learning time is greatly reduced. At the same time, the prediction speed is enhanced as the weighted mean prediction for a given sample is determined by the nearby local models only. Our system also allows incrementally updating a specific local Gaussian Process in real time, which enhances the likelihood of adapting to run-time postures that are different from those in the database. Experimental results demonstrate that our system can generate high-quality postures even under severe self-occlusion, which is beneficial for real-time applications such as motion-based gaming and sport training.
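    A minimal sketch of the local-mixture idea follows, assuming k-means partitioning of the input space, an RBF kernel, and inverse-distance weighting of the nearest local models; the paper's actual partitioning, kernel, and weighting scheme may differ, and the toy pose data are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy data: map a 2-D "observed" feature to a 1-D "true" coordinate.
X = rng.uniform(-3, 3, size=(2000, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=2000)

# Partition the state space; each region gets its own small GP, so training cost
# drops from O(N^3) for one global GP to roughly K * O((N/K)^3).
K = 10
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X)
local_gps = [GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
             .fit(X[km.labels_ == k], y[km.labels_ == k]) for k in range(K)]

def predict(x_query, n_nearby=3):
    """Weighted mean of the few local models whose centers are nearest x_query."""
    d = np.linalg.norm(km.cluster_centers_ - x_query, axis=1)
    nearest = np.argsort(d)[:n_nearby]
    w = 1.0 / (d[nearest] + 1e-9)
    preds = np.array([local_gps[k].predict(x_query[None, :])[0] for k in nearest])
    return float(np.sum(w * preds) / np.sum(w))

print(predict(np.array([1.0, -0.5])), np.sin(1.0) - 0.25)  # prediction vs. noise-free value
```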

  6. The use of designed experiments in the process development of continuous propellant mixing

    NASA Technical Reports Server (NTRS)

    Campbell, J. A.; Clemons, K. T.; Wong, M. K.

    1993-01-01

    A continuous mix pilot plant was constructed at Aerojet Propulsion Division in Sacramento, California, to develop a robust propellant mixing process for the full-scale plant that was to be built at the NASA Advanced Solid Rocket Motor facility at Yellow Creek, Mississippi. The plant was used to conduct dozens of subsystem and full-system mixing tests for evaluation of equipment, processing methods, and control schemes for later use at the production plant. As a culmination of this work, a series of designed experiments was conducted using an eight-run Taguchi analysis with four factors at two levels each to determine the primary effect of processing parameters on propellant ballistic and mechanical properties. The factors examined in these runs were the propellant production rate (454 kg/hr (1000 lb/hr) and 622 kg/hr (1371 lb/hr)), the product temperature out of the mixer (49 deg C (120 deg F) and 63 deg C (145 deg F)), the mixer screw speed (75 and 90 rpm), and the deaerator excess capacity (20 and 80 percent). Measured response variables included the uncured and cured density, Crawford Bomb liquid strand burning rates, and selected mechanical properties. The experiment revealed that several of the response variables displayed significant changes from run to run, with the product temperature being the single most important factor. After concluding this experiment, a twenty-six hour confirmation run was conducted to verify the conclusions reached in the designed experiment. The extended run produced over 12,250 kg (27,000 lb) of propellant meeting all of the pre-run targeted properties, including density (1.803 g/cc (0.065 lb/in(exp 3)) with a 0.12 percent coefficient of variation (CV) at 25 deg C (77 deg F)), liquid strand burn rate (0.889 cm/s (0.350 in/s) with a 0.69 percent CV at 4210 kPa (610 psig), 15.6 deg C (60 deg F)), nominal maximum stress (828 kPa (120 psig) with a 2.84 percent CV, S&E at 25 deg C (77 deg F), 5.08 cm/min (2 in/min)), strain at nominal maximum (47.4 percent with a 3.96 percent CV), and initial tangent modulus (5349 kPa (775 psig) with a 7.26 percent CV).
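    The main-effects arithmetic behind such an eight-run, four-factor, two-level screening design is sketched below. The coded design is a generic 2^(4-1) fraction used here in place of the study's exact Taguchi array, and the response values are made-up placeholders, not the measured propellant properties.

```python
from itertools import product

import numpy as np

# Four two-level factors in eight runs, with the fourth column confounded as D = A*B*C.
design = np.array([(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)])
factors = ["production rate", "product temperature", "screw speed", "deaerator capacity"]

# Placeholder responses standing in for a measured property such as cured density (g/cc).
response = np.array([1.801, 1.803, 1.799, 1.805, 1.802, 1.806, 1.798, 1.807])

# Main effect of a factor = mean response at its high level minus mean at its low level.
for j, name in enumerate(factors):
    effect = response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
    print(f"main effect of {name:20s}: {effect:+.4f}")
```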

  7. Calibration of hydrological model with programme PEST

    NASA Astrophysics Data System (ADS)

    Brilly, Mitja; Vidmar, Andrej; Kryžanowski, Andrej; Bezak, Nejc; Šraj, Mojca

    2016-04-01

    PEST is a tool based on minimization of an objective function related to the root mean square error between the model output and the measurements. We use the "singular value decomposition" section of the PEST control file and the Tikhonov regularization method for successful estimation of model parameters. PEST can fail if the inverse problem is ill-posed, but SVD ensures that PEST maintains numerical stability. The choice of the initial guess for the parameter values is an important issue in PEST and needs expert knowledge. The flexible nature of the PEST software and its ability to be applied to whole catchments at once allowed the calibration to perform extremely well across a high number of sub-catchments. The parallel computing version of PEST, called BeoPEST, was used successfully to speed up the calibration process. BeoPEST employs smart slaves and point-to-point communication to transfer data between the master and slave computers. The HBV-light model is a simple multi-tank-type model for simulating precipitation-runoff. It is a conceptual balance model of catchment hydrology which simulates discharge using rainfall, temperature, and estimates of potential evaporation. The HBV-light-CLI version allows the user to run HBV-light from the command line. Input and results files are in XML form, which makes it easy to connect the model with other applications, such as pre- and post-processing utilities and PEST itself. The procedure was applied to a hydrological model of the Savinja catchment (1852 km2), which consists of twenty-one sub-catchments. The data are processed on an hourly basis.
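    The objective-function minimization that PEST automates can be illustrated with a toy calibration; a one-parameter linear-reservoir model and scipy's bounded L-BFGS-B optimizer stand in for HBV-light and PEST, and the rainfall series, noise level, and true parameter value are all invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def linear_reservoir(params, rain):
    """Toy one-bucket runoff model (not HBV-light): storage S drains as k*S."""
    k, = params
    storage, q = 0.0, []
    for p in rain:
        storage += p
        outflow = k * storage
        storage -= outflow
        q.append(outflow)
    return np.array(q)

rain = rng.gamma(shape=0.3, scale=5.0, size=365)            # synthetic daily rainfall
q_obs = linear_reservoir([0.25], rain) + 0.1 * rng.normal(size=365)

def rmse(params):
    """Objective of the kind PEST minimizes: model-vs-measurement RMSE."""
    return np.sqrt(np.mean((linear_reservoir(params, rain) - q_obs) ** 2))

result = minimize(rmse, x0=[0.05], bounds=[(0.001, 1.0)], method="L-BFGS-B")
print("calibrated recession coefficient:", round(float(result.x[0]), 3),
      "RMSE:", round(float(result.fun), 3))
```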

  8. Biosensors for EVA: Muscle Oxygen and pH During Walking, Running and Simulated Reduced Gravity

    NASA Technical Reports Server (NTRS)

    Lee, S. M. C.; Ellerby, G.; Scott, P.; Stroud, L.; Norcross, J.; Pesholov, B.; Zou, F.; Gernhardt, M.; Soller, B.

    2009-01-01

    During lunar excursions in the EVA suit, real-time measurement of metabolic rate is required to manage consumables and guide activities to ensure safe return to the base. Metabolic rate, or oxygen consumption (VO2), is normally measured from pulmonary parameters but cannot be determined with standard techniques in the oxygen-rich environment of a spacesuit. Our group developed novel near infrared spectroscopic (NIRS) methods to calculate muscle oxygen saturation (SmO2), hematocrit, and pH, and we recently demonstrated that we can use our NIRS sensor to measure VO2 on the leg during cycling. Our NSBRI-funded project is looking to extend this methodology to examine activities which more appropriately represent EVA activities, such as walking and running and to better understand factors that determine the metabolic cost of exercise in both normal and lunar gravity. Our 4 year project specifically addresses risk: ExMC 4.18: Lack of adequate biomedical monitoring capability for Constellation EVA Suits and EPSP risk: Risk of compromised EVA performance and crew health due to inadequate EVA suit systems.

  9. “People Knew They Could Come Here to Get Help”: An Ethnographic Study of Assisted Injection Practices at a Peer-Run ‘Unsanctioned’ Supervised Drug Consumption Room in a Canadian Setting

    PubMed Central

    McNeil, Ryan; Small, Will; Lampkin, Hugh; Shannon, Kate; Kerr, Thomas

    2013-01-01

    People who require help injecting are disproportionately vulnerable to drug-related harm, including HIV transmission. North America's only sanctioned supervised injection facility (SIF) operates in Vancouver, Canada, under an exemption to federal drug laws, which imposes operating regulations prohibiting assisted injections. In response, the Vancouver Area Network of Drug Users (VANDU) launched a peer-run unsanctioned SIF in which trained peer volunteers provide assisted injections to increase the coverage of supervised injection services and minimize drug-related harm. We undertook qualitative interviews (n=23) and ethnographic observation (50 hours) to explore how this facility shaped assisted injection practices. Findings indicated that VANDU reshaped the social, structural, and spatial contexts of assisted injection practices in a manner that minimized HIV and other health risks, while allowing people who require help injecting to escape drug scene violence. Findings underscore the need for changes to regulatory frameworks governing SIFs to ensure that they accommodate people who require help injecting. PMID:23797831

  10. An 8-Week Ketogenic Low Carbohydrate, High Fat Diet Enhanced Exhaustive Exercise Capacity in Mice.

    PubMed

    Ma, Sihui; Huang, Qingyi; Yada, Koichi; Liu, Chunhong; Suzuki, Katsuhiko

    2018-05-25

    Current fueling tactics for endurance exercise encourage athletes to ingest a high carbohydrate diet. However, athletes are not generally encouraged to use fat, the largest energy reserve in the human body. A low carbohydrate, high fat ketogenic diet (KD) is a nutritional approach ensuring that the body utilizes lipids. Although KD has been associated with weight loss, enhanced fat utilization in muscle, and other beneficial effects, there is currently no clear proof whether it could lead to a performance advantage. To evaluate the effects of KD on endurance exercise capacity, we studied the performance of mice subjected to a running model after consuming KD for eight weeks. Weight dropped dramatically in KD-fed mice, even though they ate more calories. KD-fed mice showed enhanced running time without aggravated muscle injury. Blood biochemistry and correlation analysis indicated that the potential mechanism is likely to be a keto-adaptation-enhanced capacity to transport and metabolize fat. KD also showed a potential preventive effect on organ injury caused by acute exercise, although it failed to exert protection from muscle injury. Ultimately, KD may contribute to prolonged exercise capacity.

  11. Real-time analysis system for gas turbine ground test acoustic measurements.

    PubMed

    Johnston, Robert T

    2003-10-01

    This paper provides an overview of a data system upgrade to the Pratt and Whitney facility designed for making acoustic measurements on aircraft gas turbine engines. A data system upgrade was undertaken because the return on investment was determined to be extremely high; that is, the savings on the first test series recovered the cost of the hardware. The commercial system selected for this application utilizes 48 input channels, which allows either 1/3 octave and/or narrow-band analyses to be performed in real time. A high-speed disk drive allows raw data from all 48 channels to be stored simultaneously while the analyses are being performed. Results of tests to ensure compliance of the new system with regulations and with existing systems are presented. Test times were reduced from 5 h to 1 h of engine run time per engine configuration by the introduction of this new system. Conservative cost reduction estimates for future acoustic testing are 75% on items related to engine run time and 50% on items related to the overall length of the test.

  12. Microform Publishing: Salvation for Short-Run Periodicals?

    ERIC Educational Resources Information Center

    Bovee, Warren G.

    Micropublishing, a new technology, has provided small-circulation periodicals, which have little advertising revenues, with an alternative to escalating costs of traditional paper publication. The process of micropublishing which is most serviceable for short-run periodicals involves the use of microfiche--a small piece of film which can contain…

  13. Advanced planning for ISS payload ground processing

    NASA Astrophysics Data System (ADS)

    Page, Kimberly A.

    2000-01-01

    Ground processing at John F. Kennedy Space Center (KSC) is the concluding phase of the payload/flight hardware development process and is the final opportunity to ensure safe and successful achievement of mission objectives. Planning for the ground processing of on-orbit flight hardware elements and payloads for the International Space Station is a responsibility taken seriously at KSC. Realizing that entering this operational environment can be an enormous undertaking for a payload customer, KSC continually works to improve the process by instituting new and improved services for the payload developer/owner, applying state-of-the-art technologies to the advanced planning process, and incorporating lessons learned from payload ground processing planning to ensure complete customer satisfaction. This paper will present an overview of the KSC advanced planning activities for ISS hardware/payload ground processing. It will focus on when and how KSC begins to interact with the payload developer/owner, how that interaction changes (and grows) throughout the planning process, and how KSC ensures that advanced planning is successfully implemented at the launch site. It will also briefly consider the type of advanced planning conducted by the launch site that is transparent to the payload user but essential to the successful processing of the payload (i.e., resource allocation, executing documentation, etc.).

  14. Optimization of Primary Drying in Lyophilization during Early Phase Drug Development using a Definitive Screening Design with Formulation and Process Factors.

    PubMed

    Goldman, Johnathan M; More, Haresh T; Yee, Olga; Borgeson, Elizabeth; Remy, Brenda; Rowe, Jasmine; Sadineni, Vikram

    2018-06-08

    Development of optimal drug product lyophilization cycles is typically accomplished via multiple engineering runs to determine appropriate process parameters. These runs require significant time and product investments, which are especially costly during early phase development when the drug product formulation and lyophilization process are often defined simultaneously. Even small changes in the formulation may require a new set of engineering runs to define lyophilization process parameters. In order to overcome these development difficulties, an eight factor definitive screening design (DSD), including both formulation and process parameters, was executed on a fully human monoclonal antibody (mAb) drug product. The DSD enables evaluation of several interdependent factors to define critical parameters that affect primary drying time and product temperature. From these parameters, a lyophilization development model is defined where near optimal process parameters can be derived for many different drug product formulations. This concept is demonstrated on a mAb drug product where statistically predicted cycle responses agree well with those measured experimentally. This design of experiments (DoE) approach for early phase lyophilization cycle development offers a workflow that significantly decreases the development time of clinically and potentially commercially viable lyophilization cycles for a platform formulation that still has variable range of compositions. Copyright © 2018. Published by Elsevier Inc.
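    The predictive use of such a design can be sketched as a simple least-squares fit. The coded factor settings and drying times below are fabricated for illustration and are not the study's data; a real eight-factor DSD has more runs, a center level on every factor in each run, and permits selected quadratic terms to be estimated.

```python
import numpy as np

# Hypothetical coded settings (-1/0/+1) for three screened factors (e.g. shelf
# temperature, chamber pressure, protein concentration) and made-up primary
# drying times in hours.
X = np.array([[-1, -1, 0], [1, -1, -1], [-1, 1, -1], [1, 1, 1], [0, 0, 0],
              [-1, 0, 1], [1, 0, -1], [0, -1, 1], [0, 1, -1]])
drying_time = np.array([62.0, 41.0, 55.0, 38.0, 48.0, 57.0, 43.0, 50.0, 47.0])

# Least-squares fit of an intercept plus main effects.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, drying_time, rcond=None)
print("intercept and main effects:", np.round(coef, 2))

# Predict the response at a new, untested combination of coded settings.
new_run = np.array([1.0, 1, 0, 1])                          # [intercept, x1, x2, x3]
print("predicted primary drying time (h):", round(float(new_run @ coef), 1))
```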

  15. Auto-biometric for M-mode echocardiography

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Park, Jinhyong; Zhou, S. Kevin

    2010-03-01

    In this paper we present a system for fast and accurate detection of anatomical structures (calipers) in M-mode images. The task is challenging because of dramatic variations in their appearance. We propose to solve the problem in a progressive manner, which ensures both robustness and efficiency. The system first obtains a rough caliper localization using the intensity profile image, and then runs a constrained search for accurate caliper positions. Markov Random Field (MRF) and warping image detectors are used to jointly consider appearance information and the geometric relationship between calipers. Extensive experiments show that our system achieves more accurate results and uses less time in comparison with previously reported work.

  16. Neurovascular patterning cues and implications for central and peripheral neurological disease

    PubMed Central

    Gamboa, Nicholas T.; Taussky, Philipp; Park, Min S.; Couldwell, William T.; Mahan, Mark A.; Kalani, M. Yashar S.

    2017-01-01

    The highly branched nervous and vascular systems run along parallel trajectories throughout the human body. This stereotyped pattern of branching shared by the nervous and vascular systems stems from a common reliance on specific cues critical to both neurogenesis and angiogenesis. Continually emerging evidence supports the notion of later-evolving vascular networks co-opting neural molecular mechanisms to ensure close proximity and adequate delivery of oxygen and nutrients to nervous tissue. As our understanding of these biologic pathways and their phenotypic manifestations continues to advance, identification of where pathways go awry will provide critical insight into central and peripheral nervous system pathology. PMID:28966815

  17. Commending the Government of Afghanistan for certifying the results of the national election held on April 5, 2014, and urging the Government of Afghanistan to continue to pursue a "transparent, credible, and inclusive" run-off presidential election on June 14, 2014, while ensuring the safety of voters and candidates.

    THOMAS, 113th Congress

    Rep. Grayson, Alan [D-FL-9]

    2014-05-21

    House - 05/21/2014 Referred to the Committee on Foreign Affairs, and in addition to the Committee on Armed Services, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned. Status: Introduced.

  18. Simple Backdoors on RSA Modulus by Using RSA Vulnerability

    NASA Astrophysics Data System (ADS)

    Sun, Hung-Min; Wu, Mu-En; Yang, Cheng-Ta

    This investigation proposes two methods for embedding backdoors in the RSA modulus N=pq rather than in the public exponent e. This strategy not only permits manufacturers to embed backdoors in an RSA system, but also allows users to choose any desired public exponent, such as e = 2^16 + 1, to ensure efficient encryption. This work utilizes a lattice attack and an exhaustive attack to embed backdoors in the two proposed methods, called RSASBLT and RSASBES, respectively. Both approaches involve straightforward steps, making their running time roughly the same as normal RSA key-generation time, implying that no one can detect the backdoor by observing timing differences.

  19. The arrangement of deformation monitoring project and analysis of monitoring data of a hydropower engineering safety monitoring system

    NASA Astrophysics Data System (ADS)

    Wang, Wanshun; Chen, Zhuo; Li, Xiuwen

    2018-03-01

    Safety monitoring is very important in the operation and management of water resources and hydropower projects. It is an important means of understanding the dam's operating status, ensuring dam safety, safeguarding people's lives and property, and making full use of the engineering benefits. This paper introduces the arrangement of an engineering safety monitoring system based on the example of a water resource control project. The monitoring results of each monitoring item are analyzed in detail to show the operating status of the monitoring system and to provide a useful reference for similar projects.

  20. Sq Currents and Neutral Winds

    NASA Astrophysics Data System (ADS)

    Yamazaki, Y.

    2015-12-01

    The relationship between ionospheric dynamo currents and neutral winds is examined using the Thermosphere Ionosphere Mesosphere Electrodynamics General Circulation Model (TIME-GCM). The simulation is run for May and June 2009 with variable neutral winds but with constant solar and magnetospheric energy inputs, which ensures that day-to-day changes in the solar quiet (Sq) current system arise only from lower atmospheric forcing. The intensity and focus position of the simulated Sq current system exhibit large day-to-day variability, as is also seen in ground magnetometer data. We show how the day-to-day variation of the Sq current system relates to variable winds at various altitudes, latitudes, and longitudes.
